problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_16510 | rasdani/github-patches | git_diff | cupy__cupy-5771 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[RFC] Drop Python 3.6 support in v10
We are now discussing to drop support for Python 3.6 in CuPy v10. Feel free to leave a comment here if you have any concerns.
Background:
* CUDA Python is unlikely to provide a wheel for Python 3.6, although it can be built from the source without any issue. CUDA Python currently requires [`-std=c++14`](https://github.com/NVIDIA/cuda-python/blob/427c597959e6fe1409195a30d42fc4a1886bc89a/setup.py#L38) so recent versions of gcc, which is not in RHEL/CentOS 7 by default, is needed. We want to avoid requiring CuPy wheel users to manually install non-default GCC.
* NumPy dropped Python 3.6 support in June 2020: https://numpy.org/neps/nep-0029-deprecation_policy.html
* Python 3.6 support become EOL in December 2021.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import glob
4 import os
5 from setuptools import setup, find_packages
6 import sys
7
8 source_root = os.path.abspath(os.path.dirname(__file__))
9 sys.path.append(os.path.join(source_root, 'install'))
10
11 import cupy_builder # NOQA
12 from cupy_builder import cupy_setup_build # NOQA
13
14 ctx = cupy_builder.Context(source_root)
15 cupy_builder.initialize(ctx)
16 if not cupy_builder.preflight_check(ctx):
17 sys.exit(1)
18
19
20 # TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)
21 setup_requires = [
22 'Cython>=0.29.22,<3',
23 'fastrlock>=0.5',
24 ]
25 install_requires = [
26 'numpy>=1.17,<1.24', # see #4773
27 'fastrlock>=0.5',
28 ]
29 extras_require = {
30 'all': [
31 'scipy>=1.4,<1.10', # see #4773
32 'Cython>=0.29.22,<3',
33 'optuna>=2.0',
34 ],
35 'stylecheck': [
36 'autopep8==1.5.5',
37 'flake8==3.8.4',
38 'pbr==5.5.1',
39 'pycodestyle==2.6.0',
40 ],
41 'test': [
42 # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.
43 'pytest>=6.2',
44 ],
45 # TODO(kmaehashi): Remove 'jenkins' requirements.
46 'jenkins': [
47 'pytest>=6.2',
48 'pytest-timeout',
49 'pytest-cov',
50 'coveralls',
51 'codecov',
52 'coverage<5', # Otherwise, Python must be built with sqlite
53 ],
54 }
55 tests_require = extras_require['test']
56
57
58 # List of files that needs to be in the distribution (sdist/wheel).
59 # Notes:
60 # - Files only needed in sdist should be added to `MANIFEST.in`.
61 # - The following glob (`**`) ignores items starting with `.`.
62 cupy_package_data = [
63 'cupy/cuda/cupy_thrust.cu',
64 'cupy/cuda/cupy_cub.cu',
65 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback
66 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback
67 'cupy/cuda/cupy_cufft.h', # for cuFFT callback
68 'cupy/cuda/cufft.pxd', # for cuFFT callback
69 'cupy/cuda/cufft.pyx', # for cuFFT callback
70 'cupy/random/cupy_distributions.cu',
71 'cupy/random/cupy_distributions.cuh',
72 ] + [
73 x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)
74 if os.path.isfile(x)
75 ]
76
77 package_data = {
78 'cupy': [
79 os.path.relpath(x, 'cupy') for x in cupy_package_data
80 ],
81 }
82
83 package_data['cupy'] += cupy_setup_build.prepare_wheel_libs(ctx)
84
85 ext_modules = cupy_setup_build.get_ext_modules(False, ctx)
86 build_ext = cupy_setup_build.custom_build_ext
87
88 # Get __version__ variable
89 with open(os.path.join(source_root, 'cupy', '_version.py')) as f:
90 exec(f.read())
91
92 long_description = None
93 if ctx.long_description_path is not None:
94 with open(ctx.long_description_path) as f:
95 long_description = f.read()
96
97
98 CLASSIFIERS = """\
99 Development Status :: 5 - Production/Stable
100 Intended Audience :: Science/Research
101 Intended Audience :: Developers
102 License :: OSI Approved :: MIT License
103 Programming Language :: Python
104 Programming Language :: Python :: 3
105 Programming Language :: Python :: 3.6
106 Programming Language :: Python :: 3.7
107 Programming Language :: Python :: 3.8
108 Programming Language :: Python :: 3.9
109 Programming Language :: Python :: 3 :: Only
110 Programming Language :: Cython
111 Topic :: Software Development
112 Topic :: Scientific/Engineering
113 Operating System :: POSIX
114 Operating System :: Microsoft :: Windows
115 """
116
117
118 setup(
119 name=ctx.package_name,
120 version=__version__, # NOQA
121 description='CuPy: NumPy & SciPy for GPU',
122 long_description=long_description,
123 author='Seiya Tokui',
124 author_email='[email protected]',
125 maintainer='CuPy Developers',
126 url='https://cupy.dev/',
127 license='MIT License',
128 project_urls={
129 "Bug Tracker": "https://github.com/cupy/cupy/issues",
130 "Documentation": "https://docs.cupy.dev/",
131 "Source Code": "https://github.com/cupy/cupy",
132 },
133 classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f],
134 packages=find_packages(exclude=['install', 'tests']),
135 package_data=package_data,
136 zip_safe=False,
137 python_requires='>=3.6.0',
138 setup_requires=setup_requires,
139 install_requires=install_requires,
140 tests_require=tests_require,
141 extras_require=extras_require,
142 ext_modules=ext_modules,
143 cmdclass={'build_ext': build_ext},
144 )
145
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -102,7 +102,6 @@
License :: OSI Approved :: MIT License
Programming Language :: Python
Programming Language :: Python :: 3
-Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
@@ -134,7 +133,7 @@
packages=find_packages(exclude=['install', 'tests']),
package_data=package_data,
zip_safe=False,
- python_requires='>=3.6.0',
+ python_requires='>=3.7',
setup_requires=setup_requires,
install_requires=install_requires,
tests_require=tests_require,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -102,7 +102,6 @@\n License :: OSI Approved :: MIT License\n Programming Language :: Python\n Programming Language :: Python :: 3\n-Programming Language :: Python :: 3.6\n Programming Language :: Python :: 3.7\n Programming Language :: Python :: 3.8\n Programming Language :: Python :: 3.9\n@@ -134,7 +133,7 @@\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n- python_requires='>=3.6.0',\n+ python_requires='>=3.7',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n", "issue": "[RFC] Drop Python 3.6 support in v10\nWe are now discussing to drop support for Python 3.6 in CuPy v10. Feel free to leave a comment here if you have any concerns.\r\n\r\nBackground:\r\n* CUDA Python is unlikely to provide a wheel for Python 3.6, although it can be built from the source without any issue. CUDA Python currently requires [`-std=c++14`](https://github.com/NVIDIA/cuda-python/blob/427c597959e6fe1409195a30d42fc4a1886bc89a/setup.py#L38) so recent versions of gcc, which is not in RHEL/CentOS 7 by default, is needed. We want to avoid requiring CuPy wheel users to manually install non-default GCC.\r\n* NumPy dropped Python 3.6 support in June 2020: https://numpy.org/neps/nep-0029-deprecation_policy.html\r\n* Python 3.6 support become EOL in December 2021.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport glob\nimport os\nfrom setuptools import setup, find_packages\nimport sys\n\nsource_root = os.path.abspath(os.path.dirname(__file__))\nsys.path.append(os.path.join(source_root, 'install'))\n\nimport cupy_builder # NOQA\nfrom cupy_builder import cupy_setup_build # NOQA\n\nctx = cupy_builder.Context(source_root)\ncupy_builder.initialize(ctx)\nif not cupy_builder.preflight_check(ctx):\n sys.exit(1)\n\n\n# TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)\nsetup_requires = [\n 'Cython>=0.29.22,<3',\n 'fastrlock>=0.5',\n]\ninstall_requires = [\n 'numpy>=1.17,<1.24', # see #4773\n 'fastrlock>=0.5',\n]\nextras_require = {\n 'all': [\n 'scipy>=1.4,<1.10', # see #4773\n 'Cython>=0.29.22,<3',\n 'optuna>=2.0',\n ],\n 'stylecheck': [\n 'autopep8==1.5.5',\n 'flake8==3.8.4',\n 'pbr==5.5.1',\n 'pycodestyle==2.6.0',\n ],\n 'test': [\n # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.\n 'pytest>=6.2',\n ],\n # TODO(kmaehashi): Remove 'jenkins' requirements.\n 'jenkins': [\n 'pytest>=6.2',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n 'coverage<5', # Otherwise, Python must be built with sqlite\n ],\n}\ntests_require = extras_require['test']\n\n\n# List of files that needs to be in the distribution (sdist/wheel).\n# Notes:\n# - Files only needed in sdist should be added to `MANIFEST.in`.\n# - The following glob (`**`) ignores items starting with `.`.\ncupy_package_data = [\n 'cupy/cuda/cupy_thrust.cu',\n 'cupy/cuda/cupy_cub.cu',\n 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback\n 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback\n 'cupy/cuda/cupy_cufft.h', # for cuFFT callback\n 'cupy/cuda/cufft.pxd', # for cuFFT callback\n 'cupy/cuda/cufft.pyx', # for cuFFT callback\n 'cupy/random/cupy_distributions.cu',\n 'cupy/random/cupy_distributions.cuh',\n] + [\n x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)\n if os.path.isfile(x)\n]\n\npackage_data = {\n 'cupy': [\n os.path.relpath(x, 'cupy') for x in cupy_package_data\n ],\n}\n\npackage_data['cupy'] += 
cupy_setup_build.prepare_wheel_libs(ctx)\n\next_modules = cupy_setup_build.get_ext_modules(False, ctx)\nbuild_ext = cupy_setup_build.custom_build_ext\n\n# Get __version__ variable\nwith open(os.path.join(source_root, 'cupy', '_version.py')) as f:\n exec(f.read())\n\nlong_description = None\nif ctx.long_description_path is not None:\n with open(ctx.long_description_path) as f:\n long_description = f.read()\n\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.6\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3.8\nProgramming Language :: Python :: 3.9\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: POSIX\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n\nsetup(\n name=ctx.package_name,\n version=__version__, # NOQA\n description='CuPy: NumPy & SciPy for GPU',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n maintainer='CuPy Developers',\n url='https://cupy.dev/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs.cupy.dev/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.6.0',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext},\n)\n", "path": "setup.py"}]} | 2,279 | 177 |
gh_patches_debug_56202 | rasdani/github-patches | git_diff | svthalia__concrexit-3558 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Separate promotion permissions in eventadmin inline from the main promotion perms
### What?
Currently, people need add/change_promotionrequest permission to make promotionrequests for their events. But with this permission they also get full access to all other promotionrequests. So we should make the inline in the eventadmin bypass that check. For all I care, anyone who can change an event can make a promorquest from the eventadmin (by virtue of their 'change_event' permission and being an organizer of the event), without having the add/change_promotionrequest permission, and thus without seeing the main Promotion Requests changelist page.
### Why?
<!-- A clear and concise motivation why we should consider implementing this. -->
Least privilege principle: many people should be allowed to request promotion for their own events, but don't need to be able to edit unrelated requests. And this way we can have promocie be able to bypass the requirements in #3529, without normal organizers being able to do the same.
### How?
Override has_xxx_permission() on the inline class. Read the inlinemodeladmin docs for guidance.
</issue>
<code>
[start of website/events/admin/inlines.py]
1 from django.contrib import admin
2
3 from events import models
4 from pizzas.models import FoodEvent
5 from promotion.models import PromotionRequest
6
7 from .forms import RegistrationInformationFieldForm
8
9
10 class RegistrationInformationFieldInline(admin.TabularInline):
11 """The inline for registration information fields in the Event admin."""
12
13 form = RegistrationInformationFieldForm
14 extra = 0
15 model = models.RegistrationInformationField
16 ordering = ("_order",)
17
18 radio_fields = {"type": admin.VERTICAL}
19
20 def get_formset(self, request, obj=None, **kwargs):
21 formset = super().get_formset(request, obj, **kwargs)
22 if obj is not None:
23 count = obj.registrationinformationfield_set.count()
24 formset.form.declared_fields["order"].initial = count
25 return formset
26
27
28 class PizzaEventInline(admin.StackedInline):
29 """The inline for pizza events in the Event admin."""
30
31 model = FoodEvent
32 extra = 0
33 max_num = 1
34
35
36 class PromotionRequestInline(admin.StackedInline):
37 model = PromotionRequest
38 readonly_fields = (
39 "assigned_to",
40 "status",
41 "drive_folder",
42 )
43 extra = 0
44
[end of website/events/admin/inlines.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/events/admin/inlines.py b/website/events/admin/inlines.py
--- a/website/events/admin/inlines.py
+++ b/website/events/admin/inlines.py
@@ -39,5 +39,19 @@
"assigned_to",
"status",
"drive_folder",
+ "status_updated",
)
+
+ def has_add_permission(self, request, obj=None):
+ return True
+
+ def has_view_permission(self, request, obj=None):
+ return True
+
+ def has_change_permission(self, request, obj=None):
+ return True
+
+ def has_delete_permission(self, request, obj=None):
+ return True
+
extra = 0
| {"golden_diff": "diff --git a/website/events/admin/inlines.py b/website/events/admin/inlines.py\n--- a/website/events/admin/inlines.py\n+++ b/website/events/admin/inlines.py\n@@ -39,5 +39,19 @@\n \"assigned_to\",\n \"status\",\n \"drive_folder\",\n+ \"status_updated\",\n )\n+\n+ def has_add_permission(self, request, obj=None):\n+ return True\n+\n+ def has_view_permission(self, request, obj=None):\n+ return True\n+\n+ def has_change_permission(self, request, obj=None):\n+ return True\n+\n+ def has_delete_permission(self, request, obj=None):\n+ return True\n+\n extra = 0\n", "issue": "Separate promotion permissions in eventadmin inline from the main promotion perms\n### What?\r\nCurrently, people need add/change_promotionrequest permission to make promotionrequests for their events. But with this permission they also get full access to all other promotionrequests. So we should make the inline in the eventadmin bypass that check. For all I care, anyone who can change an event can make a promorquest from the eventadmin (by virtue of their 'change_event' permission and being an organizer of the event), without having the add/change_promotionrequest permission, and thus without seeing the main Promotion Requests changelist page.\r\n\r\n### Why?\r\n<!-- A clear and concise motivation why we should consider implementing this. -->\r\nLeast privilege principle: many people should be allowed to request promotion for their own events, but don't need to be able to edit unrelated requests. And this way we can have promocie be able to bypass the requirements in #3529, without normal organizers being able to do the same.\r\n\r\n### How?\r\nOverride has_xxx_permission() on the inline class. Read the inlinemodeladmin docs for guidance.\r\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom events import models\nfrom pizzas.models import FoodEvent\nfrom promotion.models import PromotionRequest\n\nfrom .forms import RegistrationInformationFieldForm\n\n\nclass RegistrationInformationFieldInline(admin.TabularInline):\n \"\"\"The inline for registration information fields in the Event admin.\"\"\"\n\n form = RegistrationInformationFieldForm\n extra = 0\n model = models.RegistrationInformationField\n ordering = (\"_order\",)\n\n radio_fields = {\"type\": admin.VERTICAL}\n\n def get_formset(self, request, obj=None, **kwargs):\n formset = super().get_formset(request, obj, **kwargs)\n if obj is not None:\n count = obj.registrationinformationfield_set.count()\n formset.form.declared_fields[\"order\"].initial = count\n return formset\n\n\nclass PizzaEventInline(admin.StackedInline):\n \"\"\"The inline for pizza events in the Event admin.\"\"\"\n\n model = FoodEvent\n extra = 0\n max_num = 1\n\n\nclass PromotionRequestInline(admin.StackedInline):\n model = PromotionRequest\n readonly_fields = (\n \"assigned_to\",\n \"status\",\n \"drive_folder\",\n )\n extra = 0\n", "path": "website/events/admin/inlines.py"}]} | 1,104 | 158 |
gh_patches_debug_32880 | rasdani/github-patches | git_diff | learningequality__kolibri-7214 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Learner activity records partially not visible after upgrade to 0.14b3
# Observed Behaviour
Upgraded from 0.13.3 to 0.14b3. Learner activity records were partially not visible in Class Home -> Class activity and Reports. Downgraded back to 0.13.3 and they are all visible again.
# Expected behavior
All learner activity should be visible as prior to upgrade
# User-facing consequences
Confusion and fear of data loss.
# Errors and logs
None on screen
# Steps to reproduce
Upgrade from 0.13.3 to 0.14.0-b3 and check learner activity records.
# Context
Ubuntu 18.04.3
Package : 0.14.beta3 .deb
</issue>
<code>
[start of kolibri/core/query.py]
1 from django.db import connection
2 from django.db.models import Aggregate
3 from django.db.models import CharField
4 from django.db.models import IntegerField
5 from django.db.models import Subquery
6
7 try:
8 from django.contrib.postgres.aggregates import ArrayAgg
9
10 class NotNullArrayAgg(ArrayAgg):
11 def convert_value(self, value, expression, connection, context):
12 if not value:
13 return []
14 return filter(lambda x: x is not None, value)
15
16
17 except ImportError:
18 NotNullArrayAgg = None
19
20
21 class SQCount(Subquery):
22 # Include ALIAS at the end to support Postgres
23 template = "(SELECT COUNT(%(field)s) FROM (%(subquery)s) AS %(field)s__sum)"
24 output_field = IntegerField()
25
26
27 class SQSum(Subquery):
28 # Include ALIAS at the end to support Postgres
29 template = "(SELECT SUM(%(field)s) FROM (%(subquery)s) AS %(field)s__sum)"
30 output_field = IntegerField()
31
32
33 class GroupConcatSubquery(Subquery):
34 template = "(SELECT GROUP_CONCAT(%(field)s) FROM (%(subquery)s) AS %(field)s__sum)"
35 output_field = CharField()
36
37 def as_postgresql(self, compiler, connection):
38 self.template = (
39 "(SELECT STRING_AGG(%(field)s, ',') FROM (%(subquery)s) AS %(field)s__sum)"
40 )
41 return super(GroupConcatSubquery, self).as_sql(compiler, connection)
42
43
44 class GroupConcat(Aggregate):
45 template = "GROUP_CONCAT(%(field)s)"
46 output_field = CharField()
47
48 def __init__(self, *args, **kwargs):
49 self.result_field = kwargs.pop("result_field", None)
50 super(GroupConcat, self).__init__(*args, **kwargs)
51
52 def convert_value(self, value, expression, connection, context):
53 if not value:
54 return []
55 results = value.split(",")
56 if self.result_field is not None:
57 return map(self.result_field.to_python, results)
58 return results
59
60
61 def get_source_field(model, field_path):
62 # Get the source field from the model so that we can properly coerce values
63 # this is necessary when we are using GroupConcat to return non-string fields.
64 paths = field_path.split("__")
65 while len(paths) > 1:
66 model = model._meta.get_field(paths.pop(0)).related_model
67 return model._meta.get_field(paths[0])
68
69
70 def annotate_array_aggregate(queryset, **kwargs):
71 if connection.vendor == "postgresql" and NotNullArrayAgg is not None:
72 return queryset.annotate(
73 **{target: NotNullArrayAgg(source) for target, source in kwargs.items()}
74 )
75 model = queryset.model
76 # Call values on "pk" to insert a GROUP BY to ensure the GROUP CONCAT
77 # is called by row and not across the entire queryset.
78 return queryset.values("pk").annotate(
79 **{
80 target: GroupConcat(source, result_field=get_source_field(model, source))
81 for target, source in kwargs.items()
82 }
83 )
84
[end of kolibri/core/query.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/core/query.py b/kolibri/core/query.py
--- a/kolibri/core/query.py
+++ b/kolibri/core/query.py
@@ -8,10 +8,17 @@
from django.contrib.postgres.aggregates import ArrayAgg
class NotNullArrayAgg(ArrayAgg):
+ def __init__(self, *args, **kwargs):
+ self.result_field = kwargs.pop("result_field", None)
+ super(NotNullArrayAgg, self).__init__(*args, **kwargs)
+
def convert_value(self, value, expression, connection, context):
if not value:
return []
- return filter(lambda x: x is not None, value)
+ results = list(filter(lambda x: x is not None, value))
+ if self.result_field is not None:
+ return list(map(self.result_field.to_python, results))
+ return results
except ImportError:
@@ -54,7 +61,7 @@
return []
results = value.split(",")
if self.result_field is not None:
- return map(self.result_field.to_python, results)
+ return list(map(self.result_field.to_python, results))
return results
@@ -68,11 +75,16 @@
def annotate_array_aggregate(queryset, **kwargs):
+ model = queryset.model
if connection.vendor == "postgresql" and NotNullArrayAgg is not None:
return queryset.annotate(
- **{target: NotNullArrayAgg(source) for target, source in kwargs.items()}
+ **{
+ target: NotNullArrayAgg(
+ source, result_field=get_source_field(model, source)
+ )
+ for target, source in kwargs.items()
+ }
)
- model = queryset.model
# Call values on "pk" to insert a GROUP BY to ensure the GROUP CONCAT
# is called by row and not across the entire queryset.
return queryset.values("pk").annotate(
| {"golden_diff": "diff --git a/kolibri/core/query.py b/kolibri/core/query.py\n--- a/kolibri/core/query.py\n+++ b/kolibri/core/query.py\n@@ -8,10 +8,17 @@\n from django.contrib.postgres.aggregates import ArrayAgg\n \n class NotNullArrayAgg(ArrayAgg):\n+ def __init__(self, *args, **kwargs):\n+ self.result_field = kwargs.pop(\"result_field\", None)\n+ super(NotNullArrayAgg, self).__init__(*args, **kwargs)\n+\n def convert_value(self, value, expression, connection, context):\n if not value:\n return []\n- return filter(lambda x: x is not None, value)\n+ results = list(filter(lambda x: x is not None, value))\n+ if self.result_field is not None:\n+ return list(map(self.result_field.to_python, results))\n+ return results\n \n \n except ImportError:\n@@ -54,7 +61,7 @@\n return []\n results = value.split(\",\")\n if self.result_field is not None:\n- return map(self.result_field.to_python, results)\n+ return list(map(self.result_field.to_python, results))\n return results\n \n \n@@ -68,11 +75,16 @@\n \n \n def annotate_array_aggregate(queryset, **kwargs):\n+ model = queryset.model\n if connection.vendor == \"postgresql\" and NotNullArrayAgg is not None:\n return queryset.annotate(\n- **{target: NotNullArrayAgg(source) for target, source in kwargs.items()}\n+ **{\n+ target: NotNullArrayAgg(\n+ source, result_field=get_source_field(model, source)\n+ )\n+ for target, source in kwargs.items()\n+ }\n )\n- model = queryset.model\n # Call values on \"pk\" to insert a GROUP BY to ensure the GROUP CONCAT\n # is called by row and not across the entire queryset.\n return queryset.values(\"pk\").annotate(\n", "issue": "Learner activity records partially not visible after upgrade to 0.14b3\n# Observed Behaviour\n\nUpgraded from 0.13.3 to 0.14b3. Learner activity records were partially not visible in Class Home -> Class activity and Reports. 
Downgraded back to 0.13.3 and they are all visible again.\n\n# Expected behavior\n\nAll learner activity should be visible as prior to upgrade\n\n\n# User-facing consequences\n\n Confusion and fear of data loss.\n\n# Errors and logs\n\nNone on screen\n\n# Steps to reproduce\n\nUpgrade from 0.13.3 to 0.14.0-b3 and check learner activity records.\n\n\n\n# Context\nUbuntu 18.04.3\nPackage : 0.14.beta3 .deb\n\n", "before_files": [{"content": "from django.db import connection\nfrom django.db.models import Aggregate\nfrom django.db.models import CharField\nfrom django.db.models import IntegerField\nfrom django.db.models import Subquery\n\ntry:\n from django.contrib.postgres.aggregates import ArrayAgg\n\n class NotNullArrayAgg(ArrayAgg):\n def convert_value(self, value, expression, connection, context):\n if not value:\n return []\n return filter(lambda x: x is not None, value)\n\n\nexcept ImportError:\n NotNullArrayAgg = None\n\n\nclass SQCount(Subquery):\n # Include ALIAS at the end to support Postgres\n template = \"(SELECT COUNT(%(field)s) FROM (%(subquery)s) AS %(field)s__sum)\"\n output_field = IntegerField()\n\n\nclass SQSum(Subquery):\n # Include ALIAS at the end to support Postgres\n template = \"(SELECT SUM(%(field)s) FROM (%(subquery)s) AS %(field)s__sum)\"\n output_field = IntegerField()\n\n\nclass GroupConcatSubquery(Subquery):\n template = \"(SELECT GROUP_CONCAT(%(field)s) FROM (%(subquery)s) AS %(field)s__sum)\"\n output_field = CharField()\n\n def as_postgresql(self, compiler, connection):\n self.template = (\n \"(SELECT STRING_AGG(%(field)s, ',') FROM (%(subquery)s) AS %(field)s__sum)\"\n )\n return super(GroupConcatSubquery, self).as_sql(compiler, connection)\n\n\nclass GroupConcat(Aggregate):\n template = \"GROUP_CONCAT(%(field)s)\"\n output_field = CharField()\n\n def __init__(self, *args, **kwargs):\n self.result_field = kwargs.pop(\"result_field\", None)\n super(GroupConcat, self).__init__(*args, **kwargs)\n\n def convert_value(self, value, expression, connection, context):\n if not value:\n return []\n results = value.split(\",\")\n if self.result_field is not None:\n return map(self.result_field.to_python, results)\n return results\n\n\ndef get_source_field(model, field_path):\n # Get the source field from the model so that we can properly coerce values\n # this is necessary when we are using GroupConcat to return non-string fields.\n paths = field_path.split(\"__\")\n while len(paths) > 1:\n model = model._meta.get_field(paths.pop(0)).related_model\n return model._meta.get_field(paths[0])\n\n\ndef annotate_array_aggregate(queryset, **kwargs):\n if connection.vendor == \"postgresql\" and NotNullArrayAgg is not None:\n return queryset.annotate(\n **{target: NotNullArrayAgg(source) for target, source in kwargs.items()}\n )\n model = queryset.model\n # Call values on \"pk\" to insert a GROUP BY to ensure the GROUP CONCAT\n # is called by row and not across the entire queryset.\n return queryset.values(\"pk\").annotate(\n **{\n target: GroupConcat(source, result_field=get_source_field(model, source))\n for target, source in kwargs.items()\n }\n )\n", "path": "kolibri/core/query.py"}]} | 1,540 | 434 |
gh_patches_debug_9104 | rasdani/github-patches | git_diff | mesonbuild__meson-3943 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default value of UserFeatureOption is `enabled` instead of `auto`
According to `meson --help`:
```
--auto-features {enabled,disabled,auto}
Override value of all 'auto' features (default: auto).
```
However, in reality the default value is `enabled`. We should fix it to be `auto`.
</issue>
<code>
[start of mesonbuild/optinterpreter.py]
1 # Copyright 2013-2014 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os, re
16 import functools
17
18 from . import mparser
19 from . import coredata
20 from . import mesonlib
21 from . import compilers
22
23 forbidden_option_names = coredata.get_builtin_options()
24 forbidden_prefixes = [lang + '_' for lang in compilers.all_languages] + ['b_', 'backend_']
25
26 def is_invalid_name(name):
27 if name in forbidden_option_names:
28 return True
29 pref = name.split('_')[0] + '_'
30 if pref in forbidden_prefixes:
31 return True
32 return False
33
34 class OptionException(mesonlib.MesonException):
35 pass
36
37
38 def permitted_kwargs(permitted):
39 """Function that validates kwargs for options."""
40 def _wraps(func):
41 @functools.wraps(func)
42 def _inner(name, description, kwargs):
43 bad = [a for a in kwargs.keys() if a not in permitted]
44 if bad:
45 raise OptionException('Invalid kwargs for option "{}": "{}"'.format(
46 name, ' '.join(bad)))
47 return func(name, description, kwargs)
48 return _inner
49 return _wraps
50
51
52 optname_regex = re.compile('[^a-zA-Z0-9_-]')
53
54 @permitted_kwargs({'value', 'yield'})
55 def StringParser(name, description, kwargs):
56 return coredata.UserStringOption(name,
57 description,
58 kwargs.get('value', ''),
59 kwargs.get('choices', []),
60 kwargs.get('yield', coredata.default_yielding))
61
62 @permitted_kwargs({'value', 'yield'})
63 def BooleanParser(name, description, kwargs):
64 return coredata.UserBooleanOption(name, description,
65 kwargs.get('value', True),
66 kwargs.get('yield', coredata.default_yielding))
67
68 @permitted_kwargs({'value', 'yield', 'choices'})
69 def ComboParser(name, description, kwargs):
70 if 'choices' not in kwargs:
71 raise OptionException('Combo option missing "choices" keyword.')
72 choices = kwargs['choices']
73 if not isinstance(choices, list):
74 raise OptionException('Combo choices must be an array.')
75 for i in choices:
76 if not isinstance(i, str):
77 raise OptionException('Combo choice elements must be strings.')
78 return coredata.UserComboOption(name,
79 description,
80 choices,
81 kwargs.get('value', choices[0]),
82 kwargs.get('yield', coredata.default_yielding),)
83
84
85 @permitted_kwargs({'value', 'min', 'max', 'yield'})
86 def IntegerParser(name, description, kwargs):
87 if 'value' not in kwargs:
88 raise OptionException('Integer option must contain value argument.')
89 return coredata.UserIntegerOption(name,
90 description,
91 kwargs.get('min', None),
92 kwargs.get('max', None),
93 kwargs['value'],
94 kwargs.get('yield', coredata.default_yielding))
95
96 # FIXME: Cannot use FeatureNew while parsing options because we parse it before
97 # reading options in project(). See func_project() in interpreter.py
98 #@FeatureNew('array type option()', '0.44.0')
99 @permitted_kwargs({'value', 'yield', 'choices'})
100 def string_array_parser(name, description, kwargs):
101 if 'choices' in kwargs:
102 choices = kwargs['choices']
103 if not isinstance(choices, list):
104 raise OptionException('Array choices must be an array.')
105 for i in choices:
106 if not isinstance(i, str):
107 raise OptionException('Array choice elements must be strings.')
108 value = kwargs.get('value', choices)
109 else:
110 choices = None
111 value = kwargs.get('value', [])
112 if not isinstance(value, list):
113 raise OptionException('Array choices must be passed as an array.')
114 return coredata.UserArrayOption(name,
115 description,
116 value,
117 choices=choices,
118 yielding=kwargs.get('yield', coredata.default_yielding))
119
120 @permitted_kwargs({'value', 'yield'})
121 def FeatureParser(name, description, kwargs):
122 return coredata.UserFeatureOption(name,
123 description,
124 kwargs.get('value', 'enabled'),
125 yielding=kwargs.get('yield', coredata.default_yielding))
126
127 option_types = {'string': StringParser,
128 'boolean': BooleanParser,
129 'combo': ComboParser,
130 'integer': IntegerParser,
131 'array': string_array_parser,
132 'feature': FeatureParser,
133 }
134
135 class OptionInterpreter:
136 def __init__(self, subproject):
137 self.options = {}
138 self.subproject = subproject
139
140 def process(self, option_file):
141 try:
142 with open(option_file, 'r', encoding='utf8') as f:
143 ast = mparser.Parser(f.read(), '').parse()
144 except mesonlib.MesonException as me:
145 me.file = option_file
146 raise me
147 if not isinstance(ast, mparser.CodeBlockNode):
148 e = OptionException('Option file is malformed.')
149 e.lineno = ast.lineno()
150 raise e
151 for cur in ast.lines:
152 try:
153 self.evaluate_statement(cur)
154 except Exception as e:
155 e.lineno = cur.lineno
156 e.colno = cur.colno
157 e.file = os.path.join('meson_options.txt')
158 raise e
159
160 def reduce_single(self, arg):
161 if isinstance(arg, str):
162 return arg
163 elif isinstance(arg, (mparser.StringNode, mparser.BooleanNode,
164 mparser.NumberNode)):
165 return arg.value
166 elif isinstance(arg, mparser.ArrayNode):
167 return [self.reduce_single(curarg) for curarg in arg.args.arguments]
168 else:
169 raise OptionException('Arguments may only be string, int, bool, or array of those.')
170
171 def reduce_arguments(self, args):
172 assert(isinstance(args, mparser.ArgumentNode))
173 if args.incorrect_order():
174 raise OptionException('All keyword arguments must be after positional arguments.')
175 reduced_pos = [self.reduce_single(arg) for arg in args.arguments]
176 reduced_kw = {}
177 for key in args.kwargs.keys():
178 if not isinstance(key, str):
179 raise OptionException('Keyword argument name is not a string.')
180 a = args.kwargs[key]
181 reduced_kw[key] = self.reduce_single(a)
182 return reduced_pos, reduced_kw
183
184 def evaluate_statement(self, node):
185 if not isinstance(node, mparser.FunctionNode):
186 raise OptionException('Option file may only contain option definitions')
187 func_name = node.func_name
188 if func_name != 'option':
189 raise OptionException('Only calls to option() are allowed in option files.')
190 (posargs, kwargs) = self.reduce_arguments(node.args)
191
192 # FIXME: Cannot use FeatureNew while parsing options because we parse
193 # it before reading options in project(). See func_project() in
194 # interpreter.py
195 #if 'yield' in kwargs:
196 # FeatureNew('option yield', '0.45.0').use(self.subproject)
197
198 if 'type' not in kwargs:
199 raise OptionException('Option call missing mandatory "type" keyword argument')
200 opt_type = kwargs.pop('type')
201 if opt_type not in option_types:
202 raise OptionException('Unknown type %s.' % opt_type)
203 if len(posargs) != 1:
204 raise OptionException('Option() must have one (and only one) positional argument')
205 opt_name = posargs[0]
206 if not isinstance(opt_name, str):
207 raise OptionException('Positional argument must be a string.')
208 if optname_regex.search(opt_name) is not None:
209 raise OptionException('Option names can only contain letters, numbers or dashes.')
210 if is_invalid_name(opt_name):
211 raise OptionException('Option name %s is reserved.' % opt_name)
212 if self.subproject != '':
213 opt_name = self.subproject + ':' + opt_name
214 opt = option_types[opt_type](opt_name, kwargs.pop('description', ''), kwargs)
215 if opt.description == '':
216 opt.description = opt_name
217 self.options[opt_name] = opt
218
[end of mesonbuild/optinterpreter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesonbuild/optinterpreter.py b/mesonbuild/optinterpreter.py
--- a/mesonbuild/optinterpreter.py
+++ b/mesonbuild/optinterpreter.py
@@ -121,7 +121,7 @@
def FeatureParser(name, description, kwargs):
return coredata.UserFeatureOption(name,
description,
- kwargs.get('value', 'enabled'),
+ kwargs.get('value', 'auto'),
yielding=kwargs.get('yield', coredata.default_yielding))
option_types = {'string': StringParser,
| {"golden_diff": "diff --git a/mesonbuild/optinterpreter.py b/mesonbuild/optinterpreter.py\n--- a/mesonbuild/optinterpreter.py\n+++ b/mesonbuild/optinterpreter.py\n@@ -121,7 +121,7 @@\n def FeatureParser(name, description, kwargs):\n return coredata.UserFeatureOption(name,\n description,\n- kwargs.get('value', 'enabled'),\n+ kwargs.get('value', 'auto'),\n yielding=kwargs.get('yield', coredata.default_yielding))\n \n option_types = {'string': StringParser,\n", "issue": "Default value of UserFeatureOption is `enabled` instead of `auto`\nAccording to `meson --help`:\r\n\r\n```\r\n --auto-features {enabled,disabled,auto}\r\n Override value of all 'auto' features (default: auto).\r\n```\r\n\r\nHowever, in reality the default value is `enabled`. We should fix it to be `auto`.\n", "before_files": [{"content": "# Copyright 2013-2014 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os, re\nimport functools\n\nfrom . import mparser\nfrom . import coredata\nfrom . import mesonlib\nfrom . import compilers\n\nforbidden_option_names = coredata.get_builtin_options()\nforbidden_prefixes = [lang + '_' for lang in compilers.all_languages] + ['b_', 'backend_']\n\ndef is_invalid_name(name):\n if name in forbidden_option_names:\n return True\n pref = name.split('_')[0] + '_'\n if pref in forbidden_prefixes:\n return True\n return False\n\nclass OptionException(mesonlib.MesonException):\n pass\n\n\ndef permitted_kwargs(permitted):\n \"\"\"Function that validates kwargs for options.\"\"\"\n def _wraps(func):\n @functools.wraps(func)\n def _inner(name, description, kwargs):\n bad = [a for a in kwargs.keys() if a not in permitted]\n if bad:\n raise OptionException('Invalid kwargs for option \"{}\": \"{}\"'.format(\n name, ' '.join(bad)))\n return func(name, description, kwargs)\n return _inner\n return _wraps\n\n\noptname_regex = re.compile('[^a-zA-Z0-9_-]')\n\n@permitted_kwargs({'value', 'yield'})\ndef StringParser(name, description, kwargs):\n return coredata.UserStringOption(name,\n description,\n kwargs.get('value', ''),\n kwargs.get('choices', []),\n kwargs.get('yield', coredata.default_yielding))\n\n@permitted_kwargs({'value', 'yield'})\ndef BooleanParser(name, description, kwargs):\n return coredata.UserBooleanOption(name, description,\n kwargs.get('value', True),\n kwargs.get('yield', coredata.default_yielding))\n\n@permitted_kwargs({'value', 'yield', 'choices'})\ndef ComboParser(name, description, kwargs):\n if 'choices' not in kwargs:\n raise OptionException('Combo option missing \"choices\" keyword.')\n choices = kwargs['choices']\n if not isinstance(choices, list):\n raise OptionException('Combo choices must be an array.')\n for i in choices:\n if not isinstance(i, str):\n raise OptionException('Combo choice elements must be strings.')\n return coredata.UserComboOption(name,\n description,\n choices,\n kwargs.get('value', choices[0]),\n kwargs.get('yield', coredata.default_yielding),)\n\n\n@permitted_kwargs({'value', 'min', 'max', 'yield'})\ndef IntegerParser(name, 
description, kwargs):\n if 'value' not in kwargs:\n raise OptionException('Integer option must contain value argument.')\n return coredata.UserIntegerOption(name,\n description,\n kwargs.get('min', None),\n kwargs.get('max', None),\n kwargs['value'],\n kwargs.get('yield', coredata.default_yielding))\n\n# FIXME: Cannot use FeatureNew while parsing options because we parse it before\n# reading options in project(). See func_project() in interpreter.py\n#@FeatureNew('array type option()', '0.44.0')\n@permitted_kwargs({'value', 'yield', 'choices'})\ndef string_array_parser(name, description, kwargs):\n if 'choices' in kwargs:\n choices = kwargs['choices']\n if not isinstance(choices, list):\n raise OptionException('Array choices must be an array.')\n for i in choices:\n if not isinstance(i, str):\n raise OptionException('Array choice elements must be strings.')\n value = kwargs.get('value', choices)\n else:\n choices = None\n value = kwargs.get('value', [])\n if not isinstance(value, list):\n raise OptionException('Array choices must be passed as an array.')\n return coredata.UserArrayOption(name,\n description,\n value,\n choices=choices,\n yielding=kwargs.get('yield', coredata.default_yielding))\n\n@permitted_kwargs({'value', 'yield'})\ndef FeatureParser(name, description, kwargs):\n return coredata.UserFeatureOption(name,\n description,\n kwargs.get('value', 'enabled'),\n yielding=kwargs.get('yield', coredata.default_yielding))\n\noption_types = {'string': StringParser,\n 'boolean': BooleanParser,\n 'combo': ComboParser,\n 'integer': IntegerParser,\n 'array': string_array_parser,\n 'feature': FeatureParser,\n }\n\nclass OptionInterpreter:\n def __init__(self, subproject):\n self.options = {}\n self.subproject = subproject\n\n def process(self, option_file):\n try:\n with open(option_file, 'r', encoding='utf8') as f:\n ast = mparser.Parser(f.read(), '').parse()\n except mesonlib.MesonException as me:\n me.file = option_file\n raise me\n if not isinstance(ast, mparser.CodeBlockNode):\n e = OptionException('Option file is malformed.')\n e.lineno = ast.lineno()\n raise e\n for cur in ast.lines:\n try:\n self.evaluate_statement(cur)\n except Exception as e:\n e.lineno = cur.lineno\n e.colno = cur.colno\n e.file = os.path.join('meson_options.txt')\n raise e\n\n def reduce_single(self, arg):\n if isinstance(arg, str):\n return arg\n elif isinstance(arg, (mparser.StringNode, mparser.BooleanNode,\n mparser.NumberNode)):\n return arg.value\n elif isinstance(arg, mparser.ArrayNode):\n return [self.reduce_single(curarg) for curarg in arg.args.arguments]\n else:\n raise OptionException('Arguments may only be string, int, bool, or array of those.')\n\n def reduce_arguments(self, args):\n assert(isinstance(args, mparser.ArgumentNode))\n if args.incorrect_order():\n raise OptionException('All keyword arguments must be after positional arguments.')\n reduced_pos = [self.reduce_single(arg) for arg in args.arguments]\n reduced_kw = {}\n for key in args.kwargs.keys():\n if not isinstance(key, str):\n raise OptionException('Keyword argument name is not a string.')\n a = args.kwargs[key]\n reduced_kw[key] = self.reduce_single(a)\n return reduced_pos, reduced_kw\n\n def evaluate_statement(self, node):\n if not isinstance(node, mparser.FunctionNode):\n raise OptionException('Option file may only contain option definitions')\n func_name = node.func_name\n if func_name != 'option':\n raise OptionException('Only calls to option() are allowed in option files.')\n (posargs, kwargs) = 
self.reduce_arguments(node.args)\n\n # FIXME: Cannot use FeatureNew while parsing options because we parse\n # it before reading options in project(). See func_project() in\n # interpreter.py\n #if 'yield' in kwargs:\n # FeatureNew('option yield', '0.45.0').use(self.subproject)\n\n if 'type' not in kwargs:\n raise OptionException('Option call missing mandatory \"type\" keyword argument')\n opt_type = kwargs.pop('type')\n if opt_type not in option_types:\n raise OptionException('Unknown type %s.' % opt_type)\n if len(posargs) != 1:\n raise OptionException('Option() must have one (and only one) positional argument')\n opt_name = posargs[0]\n if not isinstance(opt_name, str):\n raise OptionException('Positional argument must be a string.')\n if optname_regex.search(opt_name) is not None:\n raise OptionException('Option names can only contain letters, numbers or dashes.')\n if is_invalid_name(opt_name):\n raise OptionException('Option name %s is reserved.' % opt_name)\n if self.subproject != '':\n opt_name = self.subproject + ':' + opt_name\n opt = option_types[opt_type](opt_name, kwargs.pop('description', ''), kwargs)\n if opt.description == '':\n opt.description = opt_name\n self.options[opt_name] = opt\n", "path": "mesonbuild/optinterpreter.py"}]} | 2,979 | 115 |
gh_patches_debug_38923 | rasdani/github-patches | git_diff | goauthentik__authentik-8858 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False version status in admin dashboard for AirGapped environments
**Describe the bug**
In an AirGapped environment with `AUTHENTIK_DISABLE_UPDATE_CHECK=true`, or when the version check has not yet been performed, the version tile on the admin dashboard will always state its `Up-to-date!` which may not actually be the case.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy a fresh Authentik installation on an older version (e.g. `2023.10.7`) and ensure `AUTHENTIK_DISABLE_UPDATE_CHECK=true` is set.
2. Go to the admin dashboard
3. Observe the version tile stating proudly that its up-to-date.
**Expected behavior**
Not incorrectly stating its up-to-date as that would give a false sense of security.
**Screenshots**

**Logs**
N/a
**Version and Deployment (please complete the following information):**
- authentik version: 2024.2.0
- Deployment: docker-compose
**Additional context**
N/a
</issue>
<code>
[start of authentik/admin/api/version.py]
1 """authentik administration overview"""
2
3 from django.core.cache import cache
4 from drf_spectacular.utils import extend_schema
5 from packaging.version import parse
6 from rest_framework.fields import SerializerMethodField
7 from rest_framework.permissions import IsAuthenticated
8 from rest_framework.request import Request
9 from rest_framework.response import Response
10 from rest_framework.views import APIView
11
12 from authentik import __version__, get_build_hash
13 from authentik.admin.tasks import VERSION_CACHE_KEY, update_latest_version
14 from authentik.core.api.utils import PassiveSerializer
15
16
17 class VersionSerializer(PassiveSerializer):
18 """Get running and latest version."""
19
20 version_current = SerializerMethodField()
21 version_latest = SerializerMethodField()
22 build_hash = SerializerMethodField()
23 outdated = SerializerMethodField()
24
25 def get_build_hash(self, _) -> str:
26 """Get build hash, if version is not latest or released"""
27 return get_build_hash()
28
29 def get_version_current(self, _) -> str:
30 """Get current version"""
31 return __version__
32
33 def get_version_latest(self, _) -> str:
34 """Get latest version from cache"""
35 version_in_cache = cache.get(VERSION_CACHE_KEY)
36 if not version_in_cache: # pragma: no cover
37 update_latest_version.delay()
38 return __version__
39 return version_in_cache
40
41 def get_outdated(self, instance) -> bool:
42 """Check if we're running the latest version"""
43 return parse(self.get_version_current(instance)) < parse(self.get_version_latest(instance))
44
45
46 class VersionView(APIView):
47 """Get running and latest version."""
48
49 permission_classes = [IsAuthenticated]
50 pagination_class = None
51 filter_backends = []
52
53 @extend_schema(responses={200: VersionSerializer(many=False)})
54 def get(self, request: Request) -> Response:
55 """Get running and latest version."""
56 return Response(VersionSerializer(True).data)
57
[end of authentik/admin/api/version.py]
[start of authentik/admin/tasks.py]
1 """authentik admin tasks"""
2
3 import re
4
5 from django.core.cache import cache
6 from django.core.validators import URLValidator
7 from django.db import DatabaseError, InternalError, ProgrammingError
8 from packaging.version import parse
9 from requests import RequestException
10 from structlog.stdlib import get_logger
11
12 from authentik import __version__, get_build_hash
13 from authentik.admin.apps import PROM_INFO
14 from authentik.events.models import Event, EventAction, Notification
15 from authentik.events.system_tasks import SystemTask, TaskStatus, prefill_task
16 from authentik.lib.config import CONFIG
17 from authentik.lib.utils.http import get_http_session
18 from authentik.root.celery import CELERY_APP
19
20 LOGGER = get_logger()
21 VERSION_CACHE_KEY = "authentik_latest_version"
22 VERSION_CACHE_TIMEOUT = 8 * 60 * 60 # 8 hours
23 # Chop of the first ^ because we want to search the entire string
24 URL_FINDER = URLValidator.regex.pattern[1:]
25 LOCAL_VERSION = parse(__version__)
26
27
28 def _set_prom_info():
29 """Set prometheus info for version"""
30 PROM_INFO.info(
31 {
32 "version": __version__,
33 "latest": cache.get(VERSION_CACHE_KEY, ""),
34 "build_hash": get_build_hash(),
35 }
36 )
37
38
39 @CELERY_APP.task(
40 throws=(DatabaseError, ProgrammingError, InternalError),
41 )
42 def clear_update_notifications():
43 """Clear update notifications on startup if the notification was for the version
44 we're running now."""
45 for notification in Notification.objects.filter(event__action=EventAction.UPDATE_AVAILABLE):
46 if "new_version" not in notification.event.context:
47 continue
48 notification_version = notification.event.context["new_version"]
49 if LOCAL_VERSION >= parse(notification_version):
50 notification.delete()
51
52
53 @CELERY_APP.task(bind=True, base=SystemTask)
54 @prefill_task
55 def update_latest_version(self: SystemTask):
56 """Update latest version info"""
57 if CONFIG.get_bool("disable_update_check"):
58 cache.set(VERSION_CACHE_KEY, "0.0.0", VERSION_CACHE_TIMEOUT)
59 self.set_status(TaskStatus.WARNING, "Version check disabled.")
60 return
61 try:
62 response = get_http_session().get(
63 "https://version.goauthentik.io/version.json",
64 )
65 response.raise_for_status()
66 data = response.json()
67 upstream_version = data.get("stable", {}).get("version")
68 cache.set(VERSION_CACHE_KEY, upstream_version, VERSION_CACHE_TIMEOUT)
69 self.set_status(TaskStatus.SUCCESSFUL, "Successfully updated latest Version")
70 _set_prom_info()
71 # Check if upstream version is newer than what we're running,
72 # and if no event exists yet, create one.
73 if LOCAL_VERSION < parse(upstream_version):
74 # Event has already been created, don't create duplicate
75 if Event.objects.filter(
76 action=EventAction.UPDATE_AVAILABLE,
77 context__new_version=upstream_version,
78 ).exists():
79 return
80 event_dict = {"new_version": upstream_version}
81 if match := re.search(URL_FINDER, data.get("stable", {}).get("changelog", "")):
82 event_dict["message"] = f"Changelog: {match.group()}"
83 Event.new(EventAction.UPDATE_AVAILABLE, **event_dict).save()
84 except (RequestException, IndexError) as exc:
85 cache.set(VERSION_CACHE_KEY, "0.0.0", VERSION_CACHE_TIMEOUT)
86 self.set_error(exc)
87
88
89 _set_prom_info()
90
[end of authentik/admin/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/admin/api/version.py b/authentik/admin/api/version.py
--- a/authentik/admin/api/version.py
+++ b/authentik/admin/api/version.py
@@ -10,7 +10,7 @@
from rest_framework.views import APIView
from authentik import __version__, get_build_hash
-from authentik.admin.tasks import VERSION_CACHE_KEY, update_latest_version
+from authentik.admin.tasks import VERSION_CACHE_KEY, VERSION_NULL, update_latest_version
from authentik.core.api.utils import PassiveSerializer
@@ -19,6 +19,7 @@
version_current = SerializerMethodField()
version_latest = SerializerMethodField()
+ version_latest_valid = SerializerMethodField()
build_hash = SerializerMethodField()
outdated = SerializerMethodField()
@@ -38,6 +39,10 @@
return __version__
return version_in_cache
+ def get_version_latest_valid(self, _) -> str:
+ """Check if latest version is valid"""
+ return cache.get(VERSION_CACHE_KEY) != VERSION_NULL
+
def get_outdated(self, instance) -> bool:
"""Check if we're running the latest version"""
return parse(self.get_version_current(instance)) < parse(self.get_version_latest(instance))
diff --git a/authentik/admin/tasks.py b/authentik/admin/tasks.py
--- a/authentik/admin/tasks.py
+++ b/authentik/admin/tasks.py
@@ -18,6 +18,7 @@
from authentik.root.celery import CELERY_APP
LOGGER = get_logger()
+VERSION_NULL = "0.0.0"
VERSION_CACHE_KEY = "authentik_latest_version"
VERSION_CACHE_TIMEOUT = 8 * 60 * 60 # 8 hours
# Chop of the first ^ because we want to search the entire string
@@ -55,7 +56,7 @@
def update_latest_version(self: SystemTask):
"""Update latest version info"""
if CONFIG.get_bool("disable_update_check"):
- cache.set(VERSION_CACHE_KEY, "0.0.0", VERSION_CACHE_TIMEOUT)
+ cache.set(VERSION_CACHE_KEY, VERSION_NULL, VERSION_CACHE_TIMEOUT)
self.set_status(TaskStatus.WARNING, "Version check disabled.")
return
try:
@@ -82,7 +83,7 @@
event_dict["message"] = f"Changelog: {match.group()}"
Event.new(EventAction.UPDATE_AVAILABLE, **event_dict).save()
except (RequestException, IndexError) as exc:
- cache.set(VERSION_CACHE_KEY, "0.0.0", VERSION_CACHE_TIMEOUT)
+ cache.set(VERSION_CACHE_KEY, VERSION_NULL, VERSION_CACHE_TIMEOUT)
self.set_error(exc)
| {"golden_diff": "diff --git a/authentik/admin/api/version.py b/authentik/admin/api/version.py\n--- a/authentik/admin/api/version.py\n+++ b/authentik/admin/api/version.py\n@@ -10,7 +10,7 @@\n from rest_framework.views import APIView\n \n from authentik import __version__, get_build_hash\n-from authentik.admin.tasks import VERSION_CACHE_KEY, update_latest_version\n+from authentik.admin.tasks import VERSION_CACHE_KEY, VERSION_NULL, update_latest_version\n from authentik.core.api.utils import PassiveSerializer\n \n \n@@ -19,6 +19,7 @@\n \n version_current = SerializerMethodField()\n version_latest = SerializerMethodField()\n+ version_latest_valid = SerializerMethodField()\n build_hash = SerializerMethodField()\n outdated = SerializerMethodField()\n \n@@ -38,6 +39,10 @@\n return __version__\n return version_in_cache\n \n+ def get_version_latest_valid(self, _) -> str:\n+ \"\"\"Check if latest version is valid\"\"\"\n+ return cache.get(VERSION_CACHE_KEY) != VERSION_NULL\n+\n def get_outdated(self, instance) -> bool:\n \"\"\"Check if we're running the latest version\"\"\"\n return parse(self.get_version_current(instance)) < parse(self.get_version_latest(instance))\ndiff --git a/authentik/admin/tasks.py b/authentik/admin/tasks.py\n--- a/authentik/admin/tasks.py\n+++ b/authentik/admin/tasks.py\n@@ -18,6 +18,7 @@\n from authentik.root.celery import CELERY_APP\n \n LOGGER = get_logger()\n+VERSION_NULL = \"0.0.0\"\n VERSION_CACHE_KEY = \"authentik_latest_version\"\n VERSION_CACHE_TIMEOUT = 8 * 60 * 60 # 8 hours\n # Chop of the first ^ because we want to search the entire string\n@@ -55,7 +56,7 @@\n def update_latest_version(self: SystemTask):\n \"\"\"Update latest version info\"\"\"\n if CONFIG.get_bool(\"disable_update_check\"):\n- cache.set(VERSION_CACHE_KEY, \"0.0.0\", VERSION_CACHE_TIMEOUT)\n+ cache.set(VERSION_CACHE_KEY, VERSION_NULL, VERSION_CACHE_TIMEOUT)\n self.set_status(TaskStatus.WARNING, \"Version check disabled.\")\n return\n try:\n@@ -82,7 +83,7 @@\n event_dict[\"message\"] = f\"Changelog: {match.group()}\"\n Event.new(EventAction.UPDATE_AVAILABLE, **event_dict).save()\n except (RequestException, IndexError) as exc:\n- cache.set(VERSION_CACHE_KEY, \"0.0.0\", VERSION_CACHE_TIMEOUT)\n+ cache.set(VERSION_CACHE_KEY, VERSION_NULL, VERSION_CACHE_TIMEOUT)\n self.set_error(exc)\n", "issue": "False version status in admin dashboard for AirGapped environments\n**Describe the bug**\r\nIn an AirGapped environment with `AUTHENTIK_DISABLE_UPDATE_CHECK=true`, or when the version check has not yet been performed, the version tile on the admin dashboard will always state its `Up-to-date!` which may not actually be the case.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Deploy a fresh Authentik installation on an older version (e.g. `2023.10.7`) and ensure `AUTHENTIK_DISABLE_UPDATE_CHECK=true` is set.\r\n2. Go to the admin dashboard\r\n3. 
Observe the version tile stating proudly that its up-to-date.\r\n\r\n**Expected behavior**\r\nNot incorrectly stating its up-to-date as that would give a false sense of security.\r\n\r\n**Screenshots**\r\n\r\n\r\n**Logs**\r\nN/a\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2024.2.0\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nN/a\r\n\n", "before_files": [{"content": "\"\"\"authentik administration overview\"\"\"\n\nfrom django.core.cache import cache\nfrom drf_spectacular.utils import extend_schema\nfrom packaging.version import parse\nfrom rest_framework.fields import SerializerMethodField\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom authentik import __version__, get_build_hash\nfrom authentik.admin.tasks import VERSION_CACHE_KEY, update_latest_version\nfrom authentik.core.api.utils import PassiveSerializer\n\n\nclass VersionSerializer(PassiveSerializer):\n \"\"\"Get running and latest version.\"\"\"\n\n version_current = SerializerMethodField()\n version_latest = SerializerMethodField()\n build_hash = SerializerMethodField()\n outdated = SerializerMethodField()\n\n def get_build_hash(self, _) -> str:\n \"\"\"Get build hash, if version is not latest or released\"\"\"\n return get_build_hash()\n\n def get_version_current(self, _) -> str:\n \"\"\"Get current version\"\"\"\n return __version__\n\n def get_version_latest(self, _) -> str:\n \"\"\"Get latest version from cache\"\"\"\n version_in_cache = cache.get(VERSION_CACHE_KEY)\n if not version_in_cache: # pragma: no cover\n update_latest_version.delay()\n return __version__\n return version_in_cache\n\n def get_outdated(self, instance) -> bool:\n \"\"\"Check if we're running the latest version\"\"\"\n return parse(self.get_version_current(instance)) < parse(self.get_version_latest(instance))\n\n\nclass VersionView(APIView):\n \"\"\"Get running and latest version.\"\"\"\n\n permission_classes = [IsAuthenticated]\n pagination_class = None\n filter_backends = []\n\n @extend_schema(responses={200: VersionSerializer(many=False)})\n def get(self, request: Request) -> Response:\n \"\"\"Get running and latest version.\"\"\"\n return Response(VersionSerializer(True).data)\n", "path": "authentik/admin/api/version.py"}, {"content": "\"\"\"authentik admin tasks\"\"\"\n\nimport re\n\nfrom django.core.cache import cache\nfrom django.core.validators import URLValidator\nfrom django.db import DatabaseError, InternalError, ProgrammingError\nfrom packaging.version import parse\nfrom requests import RequestException\nfrom structlog.stdlib import get_logger\n\nfrom authentik import __version__, get_build_hash\nfrom authentik.admin.apps import PROM_INFO\nfrom authentik.events.models import Event, EventAction, Notification\nfrom authentik.events.system_tasks import SystemTask, TaskStatus, prefill_task\nfrom authentik.lib.config import CONFIG\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger()\nVERSION_CACHE_KEY = \"authentik_latest_version\"\nVERSION_CACHE_TIMEOUT = 8 * 60 * 60 # 8 hours\n# Chop of the first ^ because we want to search the entire string\nURL_FINDER = URLValidator.regex.pattern[1:]\nLOCAL_VERSION = parse(__version__)\n\n\ndef _set_prom_info():\n \"\"\"Set prometheus info for version\"\"\"\n PROM_INFO.info(\n {\n \"version\": __version__,\n 
\"latest\": cache.get(VERSION_CACHE_KEY, \"\"),\n \"build_hash\": get_build_hash(),\n }\n )\n\n\n@CELERY_APP.task(\n throws=(DatabaseError, ProgrammingError, InternalError),\n)\ndef clear_update_notifications():\n \"\"\"Clear update notifications on startup if the notification was for the version\n we're running now.\"\"\"\n for notification in Notification.objects.filter(event__action=EventAction.UPDATE_AVAILABLE):\n if \"new_version\" not in notification.event.context:\n continue\n notification_version = notification.event.context[\"new_version\"]\n if LOCAL_VERSION >= parse(notification_version):\n notification.delete()\n\n\n@CELERY_APP.task(bind=True, base=SystemTask)\n@prefill_task\ndef update_latest_version(self: SystemTask):\n \"\"\"Update latest version info\"\"\"\n if CONFIG.get_bool(\"disable_update_check\"):\n cache.set(VERSION_CACHE_KEY, \"0.0.0\", VERSION_CACHE_TIMEOUT)\n self.set_status(TaskStatus.WARNING, \"Version check disabled.\")\n return\n try:\n response = get_http_session().get(\n \"https://version.goauthentik.io/version.json\",\n )\n response.raise_for_status()\n data = response.json()\n upstream_version = data.get(\"stable\", {}).get(\"version\")\n cache.set(VERSION_CACHE_KEY, upstream_version, VERSION_CACHE_TIMEOUT)\n self.set_status(TaskStatus.SUCCESSFUL, \"Successfully updated latest Version\")\n _set_prom_info()\n # Check if upstream version is newer than what we're running,\n # and if no event exists yet, create one.\n if LOCAL_VERSION < parse(upstream_version):\n # Event has already been created, don't create duplicate\n if Event.objects.filter(\n action=EventAction.UPDATE_AVAILABLE,\n context__new_version=upstream_version,\n ).exists():\n return\n event_dict = {\"new_version\": upstream_version}\n if match := re.search(URL_FINDER, data.get(\"stable\", {}).get(\"changelog\", \"\")):\n event_dict[\"message\"] = f\"Changelog: {match.group()}\"\n Event.new(EventAction.UPDATE_AVAILABLE, **event_dict).save()\n except (RequestException, IndexError) as exc:\n cache.set(VERSION_CACHE_KEY, \"0.0.0\", VERSION_CACHE_TIMEOUT)\n self.set_error(exc)\n\n\n_set_prom_info()\n", "path": "authentik/admin/tasks.py"}]} | 2,255 | 587 |
gh_patches_debug_5058 | rasdani/github-patches | git_diff | google__jax-7572 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
script "examples/advi.py" throws "ValueError" exception
Run:
```python3 jax/examples/advi.py```
Output:
```
Optimizing variational parameters...
Iteration 0 lower bound 0.4957694113254547
Traceback (most recent call last):
File "jax/examples/advi.py", line 138, in <module>
callback(params, t)
File "jax/examples/advi.py", line 98, in callback
X, Y, Z = mesh_eval(target_dist, x_limits, y_limits, 1)
File "jax/examples/advi.py", line 67, in mesh_eval
return _mesh_eval(func, x_limits, y_limits, params, num_ticks)
ValueError: Non-hashable static arguments are not supported. An error occured while trying to hash an object of type <class 'list'>, [-2, 2]. The error was:
TypeError: unhashable type: 'list'
```
</issue>
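The traceback above comes from marking a positional argument as static while passing a Python `list` for it: `jit` uses static arguments as part of its compilation-cache key, so they must be hashable. The sketch below illustrates only that constraint (the function name and values are made up, not taken from `advi.py`); passing a tuple works, while passing a list raises the same `ValueError`.

```python
# Minimal illustration (hypothetical function, not from the repository):
# static arguments become cache keys for jit, so they must be hashable.
from functools import partial

import jax.numpy as jnp
from jax import jit


@partial(jit, static_argnums=(1,))
def scaled_grid(scale, limits):
    # `limits` is static, so it is an ordinary Python value here
    return scale * jnp.linspace(*limits, num=5)


scaled_grid(2.0, (-2, 2))    # fine: a tuple is hashable
# scaled_grid(2.0, [-2, 2])  # would raise the ValueError shown in the issue
```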
<code>
[start of examples/advi.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Automatic differentiation variational inference in Numpy and JAX.
16
17 This demo fits a Gaussian approximation to an intractable, unnormalized
18 density, by differentiating through a Monte Carlo estimate of the
19 variational evidence lower bound (ELBO)."""
20
21
22 from functools import partial
23 import matplotlib.pyplot as plt
24
25 from jax import jit, grad, vmap
26 from jax import random
27 from jax.experimental import optimizers
28 import jax.numpy as jnp
29 import jax.scipy.stats.norm as norm
30
31
32 # ========= Functions to define the evidence lower bound. =========
33
34 def diag_gaussian_sample(rng, mean, log_std):
35 # Take a single sample from a diagonal multivariate Gaussian.
36 return mean + jnp.exp(log_std) * random.normal(rng, mean.shape)
37
38 def diag_gaussian_logpdf(x, mean, log_std):
39 # Evaluate a single point on a diagonal multivariate Gaussian.
40 return jnp.sum(vmap(norm.logpdf)(x, mean, jnp.exp(log_std)))
41
42 def elbo(logprob, rng, mean, log_std):
43 # Single-sample Monte Carlo estimate of the variational lower bound.
44 sample = diag_gaussian_sample(rng, mean, log_std)
45 return logprob(sample) - diag_gaussian_logpdf(sample, mean, log_std)
46
47 def batch_elbo(logprob, rng, params, num_samples):
48 # Average over a batch of random samples.
49 rngs = random.split(rng, num_samples)
50 vectorized_elbo = vmap(partial(elbo, logprob), in_axes=(0, None, None))
51 return jnp.mean(vectorized_elbo(rngs, *params))
52
53
54 # ========= Helper function for plotting. =========
55
56 @partial(jit, static_argnums=(0, 1, 2, 4))
57 def _mesh_eval(func, x_limits, y_limits, params, num_ticks):
58 # Evaluate func on a 2D grid defined by x_limits and y_limits.
59 x = jnp.linspace(*x_limits, num=num_ticks)
60 y = jnp.linspace(*y_limits, num=num_ticks)
61 X, Y = jnp.meshgrid(x, y)
62 xy_vec = jnp.stack([X.ravel(), Y.ravel()]).T
63 zs = vmap(func, in_axes=(0, None))(xy_vec, params)
64 return X, Y, zs.reshape(X.shape)
65
66 def mesh_eval(func, x_limits, y_limits, params, num_ticks=101):
67 return _mesh_eval(func, x_limits, y_limits, params, num_ticks)
68
69 # ========= Define an intractable unnormalized density =========
70
71 def funnel_log_density(params):
72 return norm.logpdf(params[0], 0, jnp.exp(params[1])) + \
73 norm.logpdf(params[1], 0, 1.35)
74
75
76 if __name__ == "__main__":
77 num_samples = 40
78
79 @jit
80 def objective(params, t):
81 rng = random.PRNGKey(t)
82 return -batch_elbo(funnel_log_density, rng, params, num_samples)
83
84 # Set up figure.
85 fig = plt.figure(figsize=(8,8), facecolor='white')
86 ax = fig.add_subplot(111, frameon=False)
87 plt.ion()
88 plt.show(block=False)
89 x_limits = [-2, 2]
90 y_limits = [-4, 2]
91 target_dist = lambda x, _: jnp.exp(funnel_log_density(x))
92 approx_dist = lambda x, params: jnp.exp(diag_gaussian_logpdf(x, *params))
93
94 def callback(params, t):
95 print("Iteration {} lower bound {}".format(t, objective(params, t)))
96
97 plt.cla()
98 X, Y, Z = mesh_eval(target_dist, x_limits, y_limits, 1)
99 ax.contour(X, Y, Z, cmap='summer')
100 X, Y, Z = mesh_eval(approx_dist, x_limits, y_limits, params)
101 ax.contour(X, Y, Z, cmap='winter')
102 ax.set_xlim(x_limits)
103 ax.set_ylim(y_limits)
104 ax.set_yticks([])
105 ax.set_xticks([])
106
107 # Plot random samples from variational distribution.
108 # Here we clone the rng used in computing the objective
109 # so that we can show exactly the same samples.
110 rngs = random.split(random.PRNGKey(t), num_samples)
111 samples = vmap(diag_gaussian_sample, in_axes=(0, None, None))(rngs, *params)
112 ax.plot(samples[:, 0], samples[:, 1], 'b.')
113
114 plt.draw()
115 plt.pause(1.0/60.0)
116
117
118 # Set up optimizer.
119 D = 2
120 init_mean = jnp.zeros(D)
121 init_std = jnp.zeros(D)
122 init_params = (init_mean, init_std)
123 opt_init, opt_update, get_params = optimizers.momentum(step_size=0.1, mass=0.9)
124 opt_state = opt_init(init_params)
125
126 @jit
127 def update(i, opt_state):
128 params = get_params(opt_state)
129 gradient = grad(objective)(params, i)
130 return opt_update(i, gradient, opt_state)
131
132
133 # Main loop.
134 print("Optimizing variational parameters...")
135 for t in range(100):
136 opt_state = update(t, opt_state)
137 params = get_params(opt_state)
138 callback(params, t)
139 plt.show(block=True)
140
[end of examples/advi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/advi.py b/examples/advi.py
--- a/examples/advi.py
+++ b/examples/advi.py
@@ -86,8 +86,8 @@
ax = fig.add_subplot(111, frameon=False)
plt.ion()
plt.show(block=False)
- x_limits = [-2, 2]
- y_limits = [-4, 2]
+ x_limits = (-2, 2)
+ y_limits = (-4, 2)
target_dist = lambda x, _: jnp.exp(funnel_log_density(x))
approx_dist = lambda x, params: jnp.exp(diag_gaussian_logpdf(x, *params))
| {"golden_diff": "diff --git a/examples/advi.py b/examples/advi.py\n--- a/examples/advi.py\n+++ b/examples/advi.py\n@@ -86,8 +86,8 @@\n ax = fig.add_subplot(111, frameon=False)\n plt.ion()\n plt.show(block=False)\n- x_limits = [-2, 2]\n- y_limits = [-4, 2]\n+ x_limits = (-2, 2)\n+ y_limits = (-4, 2)\n target_dist = lambda x, _: jnp.exp(funnel_log_density(x))\n approx_dist = lambda x, params: jnp.exp(diag_gaussian_logpdf(x, *params))\n", "issue": "script \"examples/advi.py\" throws \"ValueError\" exception\nRun:\r\n ```python3 jax/examples/advi.py```\r\nOutput:\r\n```\r\nOptimizing variational parameters...\r\nIteration 0 lower bound 0.4957694113254547\r\nTraceback (most recent call last):\r\n File \"jax/examples/advi.py\", line 138, in <module>\r\n callback(params, t)\r\n File \"jax/examples/advi.py\", line 98, in callback\r\n X, Y, Z = mesh_eval(target_dist, x_limits, y_limits, 1)\r\n File \"jax/examples/advi.py\", line 67, in mesh_eval\r\n return _mesh_eval(func, x_limits, y_limits, params, num_ticks)\r\nValueError: Non-hashable static arguments are not supported. An error occured while trying to hash an object of type <class 'list'>, [-2, 2]. The error was:\r\nTypeError: unhashable type: 'list'\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Automatic differentiation variational inference in Numpy and JAX.\n\nThis demo fits a Gaussian approximation to an intractable, unnormalized\ndensity, by differentiating through a Monte Carlo estimate of the\nvariational evidence lower bound (ELBO).\"\"\"\n\n\nfrom functools import partial\nimport matplotlib.pyplot as plt\n\nfrom jax import jit, grad, vmap\nfrom jax import random\nfrom jax.experimental import optimizers\nimport jax.numpy as jnp\nimport jax.scipy.stats.norm as norm\n\n\n# ========= Functions to define the evidence lower bound. =========\n\ndef diag_gaussian_sample(rng, mean, log_std):\n # Take a single sample from a diagonal multivariate Gaussian.\n return mean + jnp.exp(log_std) * random.normal(rng, mean.shape)\n\ndef diag_gaussian_logpdf(x, mean, log_std):\n # Evaluate a single point on a diagonal multivariate Gaussian.\n return jnp.sum(vmap(norm.logpdf)(x, mean, jnp.exp(log_std)))\n\ndef elbo(logprob, rng, mean, log_std):\n # Single-sample Monte Carlo estimate of the variational lower bound.\n sample = diag_gaussian_sample(rng, mean, log_std)\n return logprob(sample) - diag_gaussian_logpdf(sample, mean, log_std)\n\ndef batch_elbo(logprob, rng, params, num_samples):\n # Average over a batch of random samples.\n rngs = random.split(rng, num_samples)\n vectorized_elbo = vmap(partial(elbo, logprob), in_axes=(0, None, None))\n return jnp.mean(vectorized_elbo(rngs, *params))\n\n\n# ========= Helper function for plotting. 
=========\n\n@partial(jit, static_argnums=(0, 1, 2, 4))\ndef _mesh_eval(func, x_limits, y_limits, params, num_ticks):\n # Evaluate func on a 2D grid defined by x_limits and y_limits.\n x = jnp.linspace(*x_limits, num=num_ticks)\n y = jnp.linspace(*y_limits, num=num_ticks)\n X, Y = jnp.meshgrid(x, y)\n xy_vec = jnp.stack([X.ravel(), Y.ravel()]).T\n zs = vmap(func, in_axes=(0, None))(xy_vec, params)\n return X, Y, zs.reshape(X.shape)\n\ndef mesh_eval(func, x_limits, y_limits, params, num_ticks=101):\n return _mesh_eval(func, x_limits, y_limits, params, num_ticks)\n\n# ========= Define an intractable unnormalized density =========\n\ndef funnel_log_density(params):\n return norm.logpdf(params[0], 0, jnp.exp(params[1])) + \\\n norm.logpdf(params[1], 0, 1.35)\n\n\nif __name__ == \"__main__\":\n num_samples = 40\n\n @jit\n def objective(params, t):\n rng = random.PRNGKey(t)\n return -batch_elbo(funnel_log_density, rng, params, num_samples)\n\n # Set up figure.\n fig = plt.figure(figsize=(8,8), facecolor='white')\n ax = fig.add_subplot(111, frameon=False)\n plt.ion()\n plt.show(block=False)\n x_limits = [-2, 2]\n y_limits = [-4, 2]\n target_dist = lambda x, _: jnp.exp(funnel_log_density(x))\n approx_dist = lambda x, params: jnp.exp(diag_gaussian_logpdf(x, *params))\n\n def callback(params, t):\n print(\"Iteration {} lower bound {}\".format(t, objective(params, t)))\n\n plt.cla()\n X, Y, Z = mesh_eval(target_dist, x_limits, y_limits, 1)\n ax.contour(X, Y, Z, cmap='summer')\n X, Y, Z = mesh_eval(approx_dist, x_limits, y_limits, params)\n ax.contour(X, Y, Z, cmap='winter')\n ax.set_xlim(x_limits)\n ax.set_ylim(y_limits)\n ax.set_yticks([])\n ax.set_xticks([])\n\n # Plot random samples from variational distribution.\n # Here we clone the rng used in computing the objective\n # so that we can show exactly the same samples.\n rngs = random.split(random.PRNGKey(t), num_samples)\n samples = vmap(diag_gaussian_sample, in_axes=(0, None, None))(rngs, *params)\n ax.plot(samples[:, 0], samples[:, 1], 'b.')\n\n plt.draw()\n plt.pause(1.0/60.0)\n\n\n # Set up optimizer.\n D = 2\n init_mean = jnp.zeros(D)\n init_std = jnp.zeros(D)\n init_params = (init_mean, init_std)\n opt_init, opt_update, get_params = optimizers.momentum(step_size=0.1, mass=0.9)\n opt_state = opt_init(init_params)\n\n @jit\n def update(i, opt_state):\n params = get_params(opt_state)\n gradient = grad(objective)(params, i)\n return opt_update(i, gradient, opt_state)\n\n\n # Main loop.\n print(\"Optimizing variational parameters...\")\n for t in range(100):\n opt_state = update(t, opt_state)\n params = get_params(opt_state)\n callback(params, t)\n plt.show(block=True)\n", "path": "examples/advi.py"}]} | 2,389 | 150 |
gh_patches_debug_32419 | rasdani/github-patches | git_diff | xonsh__xonsh-5388 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature: Search substring in env variable completer
```xsh
$TRA<Tab>
# Show all variables with `*TRA*`
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
</issue>
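As a rough illustration of the requested behaviour (a hypothetical helper, not xonsh's actual completer API), case-insensitive substring matching over variable names could look like the sketch below; the real completer would still need to build `RichCompletion` objects and compute `lprefix` as in the code that follows.

```python
# Hypothetical sketch of substring matching for $TRA<Tab>; names here are
# illustrative and not xonsh's real API.
import os


def matching_env_vars(prefix: str) -> list[str]:
    key = prefix.lstrip("$")
    return sorted(k for k in os.environ if key.lower() in k.lower())


# matching_env_vars("$TRA") returns every variable whose name contains "TRA",
# not only the names that start with it.
```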
<code>
[start of xonsh/completers/environment.py]
1 from xonsh.built_ins import XSH
2 from xonsh.completers.tools import (
3 RichCompletion,
4 contextual_completer,
5 get_filter_function,
6 non_exclusive_completer,
7 )
8 from xonsh.parsers.completion_context import CompletionContext
9
10
11 @contextual_completer
12 @non_exclusive_completer
13 def complete_environment_vars(context: CompletionContext):
14 """Completes environment variables."""
15 if context.command:
16 prefix = context.command.prefix
17 elif context.python:
18 prefix = context.python.prefix
19 else:
20 return None
21
22 dollar_location = prefix.rfind("$")
23 if dollar_location == -1:
24 return None
25
26 key = prefix[dollar_location + 1 :]
27 lprefix = len(key) + 1
28 if context.command is not None and context.command.is_after_closing_quote:
29 lprefix += 1
30 filter_func = get_filter_function()
31 env = XSH.env
32
33 return (
34 RichCompletion(
35 "$" + k,
36 display=f"${k} [{type(v).__name__}]",
37 description=env.get_docs(k).doc,
38 )
39 for k, v in env.items()
40 if filter_func(k, key)
41 ), lprefix
42
[end of xonsh/completers/environment.py]
[start of xonsh/completer.py]
1 """A (tab-)completer for xonsh."""
2
3 import collections.abc as cabc
4 import sys
5 import typing as tp
6
7 from xonsh.built_ins import XSH
8 from xonsh.completers.tools import (
9 Completion,
10 RichCompletion,
11 apply_lprefix,
12 get_filter_function,
13 is_contextual_completer,
14 is_exclusive_completer,
15 )
16 from xonsh.parsers.completion_context import CompletionContext, CompletionContextParser
17 from xonsh.tools import print_exception
18
19
20 class Completer:
21 """This provides a list of optional completions for the xonsh shell."""
22
23 def __init__(self):
24 self.context_parser = CompletionContextParser()
25
26 def parse(
27 self, text: str, cursor_index: "None|int" = None, ctx=None
28 ) -> "CompletionContext":
29 """Parse the given text
30
31 Parameters
32 ----------
33 text
34 multi-line text
35 cursor_index
36 position of the cursor. If not given, then it is considered to be at the end.
37 ctx
38 Execution context
39 """
40 cursor_index = len(text) if cursor_index is None else cursor_index
41 return self.context_parser.parse(text, cursor_index, ctx)
42
43 def complete_line(self, text: str):
44 """Handy wrapper to build command-completion-context when cursor is at the end.
45
46 Notes
47 -----
48 suffix is not supported; text after last space is parsed as prefix.
49 """
50 ctx = self.parse(text)
51 cmd_ctx = ctx.command
52 if not cmd_ctx:
53 raise RuntimeError("Only Command context is empty")
54 prefix = cmd_ctx.prefix
55
56 line = text
57 begidx = text.rfind(prefix)
58 endidx = begidx + len(prefix)
59
60 return self.complete(
61 prefix,
62 line,
63 begidx,
64 endidx,
65 cursor_index=len(line),
66 multiline_text=line,
67 completion_context=ctx,
68 )
69
70 def complete(
71 self,
72 prefix,
73 line,
74 begidx,
75 endidx,
76 ctx=None,
77 multiline_text=None,
78 cursor_index=None,
79 completion_context=None,
80 ):
81 """Complete the string, given a possible execution context.
82
83 Parameters
84 ----------
85 prefix : str
86 The string to match
87 line : str
88 The line that prefix appears on.
89 begidx : int
90 The index in line that prefix starts on.
91 endidx : int
92 The index in line that prefix ends on.
93 ctx : dict, optional
94 Names in the current execution context.
95 multiline_text : str
96 The complete multiline text. Needed to get completion context.
97 cursor_index : int
98 The current cursor's index in the multiline text.
99 May be ``len(multiline_text)`` for cursor at the end.
100 Needed to get completion context.
101
102 Returns
103 -------
104 rtn : list of str
105 Possible completions of prefix, sorted alphabetically.
106 lprefix : int
107 Length of the prefix to be replaced in the completion.
108 """
109
110 if (
111 (multiline_text is not None)
112 and (cursor_index is not None)
113 and (completion_context is None)
114 ):
115 completion_context: tp.Optional[CompletionContext] = self.parse(
116 multiline_text,
117 cursor_index,
118 ctx,
119 )
120
121 ctx = ctx or {}
122 return self.complete_from_context(
123 completion_context,
124 (prefix, line, begidx, endidx, ctx),
125 )
126
127 @staticmethod
128 def _format_completion(
129 completion,
130 completion_context,
131 completing_contextual_command: bool,
132 lprefix: int,
133 custom_lprefix: bool,
134 ) -> tuple[Completion, int]:
135 if (
136 completing_contextual_command
137 and completion_context.command.is_after_closing_quote
138 ):
139 """
140 The cursor is appending to a closed string literal, i.e. cursor at the end of ``ls "/usr/"``.
141 1. The closing quote will be appended to all completions.
142 I.e the completion ``/usr/bin`` will turn into ``/usr/bin"``
143 To prevent this behavior, a completer can return a ``RichCompletion`` with ``append_closing_quote=False``.
144 2. If not specified, lprefix will cover the closing prefix.
145 I.e for ``ls "/usr/"``, the default lprefix will be 6 to include the closing quote.
146 To prevent this behavior, a completer can return a different lprefix or specify it inside ``RichCompletion``.
147 """
148 closing_quote = completion_context.command.closing_quote
149 if not custom_lprefix:
150 lprefix += len(closing_quote)
151 if closing_quote:
152 if isinstance(completion, RichCompletion):
153 if completion.append_closing_quote:
154 completion = completion.replace(
155 value=completion.value + closing_quote
156 )
157 else:
158 completion = completion + closing_quote
159
160 completion = list(apply_lprefix([completion], lprefix))[0]
161
162 if (
163 isinstance(completion, RichCompletion)
164 and completion.append_space
165 and not completion.value.endswith(" ")
166 ):
167 # append spaces AFTER appending closing quote
168 completion = completion.replace(value=completion.value + " ")
169
170 return completion, lprefix
171
172 @staticmethod
173 def generate_completions(
174 completion_context, old_completer_args, trace: bool
175 ) -> tp.Iterator[tuple[Completion, int]]:
176 filter_func = get_filter_function()
177
178 for name, func in XSH.completers.items():
179 try:
180 if is_contextual_completer(func):
181 if completion_context is None:
182 continue
183 out = func(completion_context)
184 else:
185 if old_completer_args is None:
186 continue
187 out = func(*old_completer_args)
188 except StopIteration:
189 # completer requested to stop collecting completions
190 break
191 except Exception as e:
192 name = func.__name__ if hasattr(func, "__name__") else str(func)
193 print_exception(
194 f"Completer {name} raises exception when gets "
195 f"old_args={old_completer_args[:-1]} / completion_context={completion_context!r}:\n"
196 f"{type(e)} - {e}"
197 )
198 continue
199
200 completing_contextual_command = (
201 is_contextual_completer(func)
202 and completion_context is not None
203 and completion_context.command is not None
204 )
205
206 # -- set comp-defaults --
207
208 # the default is that the completer function filters out as necessary
209 # we can change that once fuzzy/substring matches are added
210 is_filtered = True
211 custom_lprefix = False
212 prefix = ""
213 if completing_contextual_command:
214 prefix = completion_context.command.prefix
215 elif old_completer_args is not None:
216 prefix = old_completer_args[0]
217 lprefix = len(prefix)
218
219 if isinstance(out, cabc.Sequence):
220 # update comp-defaults from
221 res, lprefix_filtered = out
222 if isinstance(lprefix_filtered, bool):
223 is_filtered = lprefix_filtered
224 else:
225 lprefix = lprefix_filtered
226 custom_lprefix = True
227 else:
228 res = out
229
230 if res is None:
231 continue
232
233 items = []
234 for comp in res:
235 if (not is_filtered) and (not filter_func(comp, prefix)):
236 continue
237 comp = Completer._format_completion(
238 comp,
239 completion_context,
240 completing_contextual_command,
241 lprefix or 0,
242 custom_lprefix,
243 )
244 items.append(comp)
245 yield comp
246
247 if not items: # empty completion
248 continue
249
250 if trace:
251 print(
252 f"TRACE COMPLETIONS: Got {len(items)} results"
253 f" from {'' if is_exclusive_completer(func) else 'non-'}exclusive completer '{name}':"
254 )
255 sys.displayhook(items)
256
257 if is_exclusive_completer(func):
258 # we got completions for an exclusive completer
259 break
260
261 def complete_from_context(self, completion_context, old_completer_args=None):
262 trace = XSH.env.get("XONSH_TRACE_COMPLETIONS")
263 if trace:
264 print("\nTRACE COMPLETIONS: Getting completions with context:")
265 sys.displayhook(completion_context)
266 lprefix = 0
267
268 # using dict to keep order py3.6+
269 completions = {}
270
271 query_limit = XSH.env.get("COMPLETION_QUERY_LIMIT")
272
273 for comp in self.generate_completions(
274 completion_context,
275 old_completer_args,
276 trace,
277 ):
278 completion, lprefix = comp
279 completions[completion] = None
280 if query_limit and len(completions) >= query_limit:
281 if trace:
282 print(
283 "TRACE COMPLETIONS: Stopped after $COMPLETION_QUERY_LIMIT reached."
284 )
285 break
286
287 def sortkey(s):
288 # todo: should sort with prefix > substring > fuzzy
289 return s.lstrip(''''"''').lower()
290
291 # the last completer's lprefix is returned. other lprefix values are inside the RichCompletions.
292 return tuple(sorted(completions, key=sortkey)), lprefix
293
[end of xonsh/completer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/completer.py b/xonsh/completer.py
--- a/xonsh/completer.py
+++ b/xonsh/completer.py
@@ -284,9 +284,18 @@
)
break
- def sortkey(s):
- # todo: should sort with prefix > substring > fuzzy
- return s.lstrip(''''"''').lower()
+ prefix = None
+ if completion_context:
+ prefix = completion_context.command.prefix
+ if prefix.startswith("$"):
+ prefix = prefix[1:]
+
+ def sortkey(s):
+ """Sort values by prefix position and then alphabetically."""
+ return (s.lower().find(prefix.lower()), s.lower())
+ else:
+ # Fallback sort.
+ sortkey = lambda s: s.lstrip(''''"''').lower()
# the last completer's lprefix is returned. other lprefix values are inside the RichCompletions.
return tuple(sorted(completions, key=sortkey)), lprefix
diff --git a/xonsh/completers/environment.py b/xonsh/completers/environment.py
--- a/xonsh/completers/environment.py
+++ b/xonsh/completers/environment.py
@@ -2,7 +2,6 @@
from xonsh.completers.tools import (
RichCompletion,
contextual_completer,
- get_filter_function,
non_exclusive_completer,
)
from xonsh.parsers.completion_context import CompletionContext
@@ -27,15 +26,14 @@
lprefix = len(key) + 1
if context.command is not None and context.command.is_after_closing_quote:
lprefix += 1
- filter_func = get_filter_function()
env = XSH.env
+ vars = [k for k, v in env.items() if key.lower() in k.lower()]
return (
RichCompletion(
"$" + k,
- display=f"${k} [{type(v).__name__}]",
+ display=f"${k} [{type(env[k]).__name__}]",
description=env.get_docs(k).doc,
)
- for k, v in env.items()
- if filter_func(k, key)
+ for k in vars
), lprefix
| {"golden_diff": "diff --git a/xonsh/completer.py b/xonsh/completer.py\n--- a/xonsh/completer.py\n+++ b/xonsh/completer.py\n@@ -284,9 +284,18 @@\n )\n break\n \n- def sortkey(s):\n- # todo: should sort with prefix > substring > fuzzy\n- return s.lstrip(''''\"''').lower()\n+ prefix = None\n+ if completion_context:\n+ prefix = completion_context.command.prefix\n+ if prefix.startswith(\"$\"):\n+ prefix = prefix[1:]\n+\n+ def sortkey(s):\n+ \"\"\"Sort values by prefix position and then alphabetically.\"\"\"\n+ return (s.lower().find(prefix.lower()), s.lower())\n+ else:\n+ # Fallback sort.\n+ sortkey = lambda s: s.lstrip(''''\"''').lower()\n \n # the last completer's lprefix is returned. other lprefix values are inside the RichCompletions.\n return tuple(sorted(completions, key=sortkey)), lprefix\ndiff --git a/xonsh/completers/environment.py b/xonsh/completers/environment.py\n--- a/xonsh/completers/environment.py\n+++ b/xonsh/completers/environment.py\n@@ -2,7 +2,6 @@\n from xonsh.completers.tools import (\n RichCompletion,\n contextual_completer,\n- get_filter_function,\n non_exclusive_completer,\n )\n from xonsh.parsers.completion_context import CompletionContext\n@@ -27,15 +26,14 @@\n lprefix = len(key) + 1\n if context.command is not None and context.command.is_after_closing_quote:\n lprefix += 1\n- filter_func = get_filter_function()\n env = XSH.env\n \n+ vars = [k for k, v in env.items() if key.lower() in k.lower()]\n return (\n RichCompletion(\n \"$\" + k,\n- display=f\"${k} [{type(v).__name__}]\",\n+ display=f\"${k} [{type(env[k]).__name__}]\",\n description=env.get_docs(k).doc,\n )\n- for k, v in env.items()\n- if filter_func(k, key)\n+ for k in vars\n ), lprefix\n", "issue": "Feature: Search substring in env variable completer\n```xsh\r\n$TRA<Tab>\r\n# Show all variables with `*TRA*`\r\n```\r\n\r\n## For community\r\n\u2b07\ufe0f **Please click the \ud83d\udc4d reaction instead of leaving a `+1` or \ud83d\udc4d comment**\r\n\n", "before_files": [{"content": "from xonsh.built_ins import XSH\nfrom xonsh.completers.tools import (\n RichCompletion,\n contextual_completer,\n get_filter_function,\n non_exclusive_completer,\n)\nfrom xonsh.parsers.completion_context import CompletionContext\n\n\n@contextual_completer\n@non_exclusive_completer\ndef complete_environment_vars(context: CompletionContext):\n \"\"\"Completes environment variables.\"\"\"\n if context.command:\n prefix = context.command.prefix\n elif context.python:\n prefix = context.python.prefix\n else:\n return None\n\n dollar_location = prefix.rfind(\"$\")\n if dollar_location == -1:\n return None\n\n key = prefix[dollar_location + 1 :]\n lprefix = len(key) + 1\n if context.command is not None and context.command.is_after_closing_quote:\n lprefix += 1\n filter_func = get_filter_function()\n env = XSH.env\n\n return (\n RichCompletion(\n \"$\" + k,\n display=f\"${k} [{type(v).__name__}]\",\n description=env.get_docs(k).doc,\n )\n for k, v in env.items()\n if filter_func(k, key)\n ), lprefix\n", "path": "xonsh/completers/environment.py"}, {"content": "\"\"\"A (tab-)completer for xonsh.\"\"\"\n\nimport collections.abc as cabc\nimport sys\nimport typing as tp\n\nfrom xonsh.built_ins import XSH\nfrom xonsh.completers.tools import (\n Completion,\n RichCompletion,\n apply_lprefix,\n get_filter_function,\n is_contextual_completer,\n is_exclusive_completer,\n)\nfrom xonsh.parsers.completion_context import CompletionContext, CompletionContextParser\nfrom xonsh.tools import print_exception\n\n\nclass Completer:\n \"\"\"This provides a 
list of optional completions for the xonsh shell.\"\"\"\n\n def __init__(self):\n self.context_parser = CompletionContextParser()\n\n def parse(\n self, text: str, cursor_index: \"None|int\" = None, ctx=None\n ) -> \"CompletionContext\":\n \"\"\"Parse the given text\n\n Parameters\n ----------\n text\n multi-line text\n cursor_index\n position of the cursor. If not given, then it is considered to be at the end.\n ctx\n Execution context\n \"\"\"\n cursor_index = len(text) if cursor_index is None else cursor_index\n return self.context_parser.parse(text, cursor_index, ctx)\n\n def complete_line(self, text: str):\n \"\"\"Handy wrapper to build command-completion-context when cursor is at the end.\n\n Notes\n -----\n suffix is not supported; text after last space is parsed as prefix.\n \"\"\"\n ctx = self.parse(text)\n cmd_ctx = ctx.command\n if not cmd_ctx:\n raise RuntimeError(\"Only Command context is empty\")\n prefix = cmd_ctx.prefix\n\n line = text\n begidx = text.rfind(prefix)\n endidx = begidx + len(prefix)\n\n return self.complete(\n prefix,\n line,\n begidx,\n endidx,\n cursor_index=len(line),\n multiline_text=line,\n completion_context=ctx,\n )\n\n def complete(\n self,\n prefix,\n line,\n begidx,\n endidx,\n ctx=None,\n multiline_text=None,\n cursor_index=None,\n completion_context=None,\n ):\n \"\"\"Complete the string, given a possible execution context.\n\n Parameters\n ----------\n prefix : str\n The string to match\n line : str\n The line that prefix appears on.\n begidx : int\n The index in line that prefix starts on.\n endidx : int\n The index in line that prefix ends on.\n ctx : dict, optional\n Names in the current execution context.\n multiline_text : str\n The complete multiline text. Needed to get completion context.\n cursor_index : int\n The current cursor's index in the multiline text.\n May be ``len(multiline_text)`` for cursor at the end.\n Needed to get completion context.\n\n Returns\n -------\n rtn : list of str\n Possible completions of prefix, sorted alphabetically.\n lprefix : int\n Length of the prefix to be replaced in the completion.\n \"\"\"\n\n if (\n (multiline_text is not None)\n and (cursor_index is not None)\n and (completion_context is None)\n ):\n completion_context: tp.Optional[CompletionContext] = self.parse(\n multiline_text,\n cursor_index,\n ctx,\n )\n\n ctx = ctx or {}\n return self.complete_from_context(\n completion_context,\n (prefix, line, begidx, endidx, ctx),\n )\n\n @staticmethod\n def _format_completion(\n completion,\n completion_context,\n completing_contextual_command: bool,\n lprefix: int,\n custom_lprefix: bool,\n ) -> tuple[Completion, int]:\n if (\n completing_contextual_command\n and completion_context.command.is_after_closing_quote\n ):\n \"\"\"\n The cursor is appending to a closed string literal, i.e. cursor at the end of ``ls \"/usr/\"``.\n 1. The closing quote will be appended to all completions.\n I.e the completion ``/usr/bin`` will turn into ``/usr/bin\"``\n To prevent this behavior, a completer can return a ``RichCompletion`` with ``append_closing_quote=False``.\n 2. 
If not specified, lprefix will cover the closing prefix.\n I.e for ``ls \"/usr/\"``, the default lprefix will be 6 to include the closing quote.\n To prevent this behavior, a completer can return a different lprefix or specify it inside ``RichCompletion``.\n \"\"\"\n closing_quote = completion_context.command.closing_quote\n if not custom_lprefix:\n lprefix += len(closing_quote)\n if closing_quote:\n if isinstance(completion, RichCompletion):\n if completion.append_closing_quote:\n completion = completion.replace(\n value=completion.value + closing_quote\n )\n else:\n completion = completion + closing_quote\n\n completion = list(apply_lprefix([completion], lprefix))[0]\n\n if (\n isinstance(completion, RichCompletion)\n and completion.append_space\n and not completion.value.endswith(\" \")\n ):\n # append spaces AFTER appending closing quote\n completion = completion.replace(value=completion.value + \" \")\n\n return completion, lprefix\n\n @staticmethod\n def generate_completions(\n completion_context, old_completer_args, trace: bool\n ) -> tp.Iterator[tuple[Completion, int]]:\n filter_func = get_filter_function()\n\n for name, func in XSH.completers.items():\n try:\n if is_contextual_completer(func):\n if completion_context is None:\n continue\n out = func(completion_context)\n else:\n if old_completer_args is None:\n continue\n out = func(*old_completer_args)\n except StopIteration:\n # completer requested to stop collecting completions\n break\n except Exception as e:\n name = func.__name__ if hasattr(func, \"__name__\") else str(func)\n print_exception(\n f\"Completer {name} raises exception when gets \"\n f\"old_args={old_completer_args[:-1]} / completion_context={completion_context!r}:\\n\"\n f\"{type(e)} - {e}\"\n )\n continue\n\n completing_contextual_command = (\n is_contextual_completer(func)\n and completion_context is not None\n and completion_context.command is not None\n )\n\n # -- set comp-defaults --\n\n # the default is that the completer function filters out as necessary\n # we can change that once fuzzy/substring matches are added\n is_filtered = True\n custom_lprefix = False\n prefix = \"\"\n if completing_contextual_command:\n prefix = completion_context.command.prefix\n elif old_completer_args is not None:\n prefix = old_completer_args[0]\n lprefix = len(prefix)\n\n if isinstance(out, cabc.Sequence):\n # update comp-defaults from\n res, lprefix_filtered = out\n if isinstance(lprefix_filtered, bool):\n is_filtered = lprefix_filtered\n else:\n lprefix = lprefix_filtered\n custom_lprefix = True\n else:\n res = out\n\n if res is None:\n continue\n\n items = []\n for comp in res:\n if (not is_filtered) and (not filter_func(comp, prefix)):\n continue\n comp = Completer._format_completion(\n comp,\n completion_context,\n completing_contextual_command,\n lprefix or 0,\n custom_lprefix,\n )\n items.append(comp)\n yield comp\n\n if not items: # empty completion\n continue\n\n if trace:\n print(\n f\"TRACE COMPLETIONS: Got {len(items)} results\"\n f\" from {'' if is_exclusive_completer(func) else 'non-'}exclusive completer '{name}':\"\n )\n sys.displayhook(items)\n\n if is_exclusive_completer(func):\n # we got completions for an exclusive completer\n break\n\n def complete_from_context(self, completion_context, old_completer_args=None):\n trace = XSH.env.get(\"XONSH_TRACE_COMPLETIONS\")\n if trace:\n print(\"\\nTRACE COMPLETIONS: Getting completions with context:\")\n sys.displayhook(completion_context)\n lprefix = 0\n\n # using dict to keep order py3.6+\n completions = {}\n\n 
query_limit = XSH.env.get(\"COMPLETION_QUERY_LIMIT\")\n\n for comp in self.generate_completions(\n completion_context,\n old_completer_args,\n trace,\n ):\n completion, lprefix = comp\n completions[completion] = None\n if query_limit and len(completions) >= query_limit:\n if trace:\n print(\n \"TRACE COMPLETIONS: Stopped after $COMPLETION_QUERY_LIMIT reached.\"\n )\n break\n\n def sortkey(s):\n # todo: should sort with prefix > substring > fuzzy\n return s.lstrip(''''\"''').lower()\n\n # the last completer's lprefix is returned. other lprefix values are inside the RichCompletions.\n return tuple(sorted(completions, key=sortkey)), lprefix\n", "path": "xonsh/completer.py"}]} | 3,727 | 508 |
gh_patches_debug_7473 | rasdani/github-patches | git_diff | praw-dev__praw-1327 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PRAW installed by pip is missing the `images` directory and its contents
## Issue Description
PRAW's video submit method uses a placeholder image as a video thumbnail when the user doesn't provide a thumbnail. Here's the relevant code:
https://github.com/praw-dev/praw/blob/54f8b3f998008b81988aac057d33fe38d5ac7739/praw/models/reddit/subreddit.py#L511-L514
That image is at [`praw/images/PRAW logo.png`](https://github.com/praw-dev/praw/blob/master/praw/images/PRAW%20logo.png). Unfortunately, the current release on PyPI is missing the file and the entire `images` directory, so the `submit_video` method fails when another thumbnail isn't provided.
It isn't just the wheel on PyPI that is missing the image. The source tarball is as well.
I suspect that a change to [`MANIFEST.in`](https://github.com/praw-dev/praw/blob/master/MANIFEST.in) might solve this problem. Or maybe not, as [this Stack Overflow answer](https://stackoverflow.com/a/25964691/8033766) suggests that `MANIFEST.in` is just for Python 2.6 and earlier.
Adding an `__init__.py` to the `images` folder would probably [make `find_packages()` in `setup.py`](https://setuptools.readthedocs.io/en/latest/setuptools.html#using-find-packages) notice the folder, but this would be a misuse of an `__init__.py`, since that folder is not a Python package.
[This page](https://docs.python.org/3.3/distutils/setupscript.html#installing-additional-files) suggests using the `data_files` argument in `setup.py`, but I tried that and couldn't get it to work ([branch here](https://github.com/jarhill0/praw/tree/image-upload-fix), [PyPI testing push here](https://test.pypi.org/project/praw/)).
</issue>
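One common approach for files that live inside the package is `package_data`; `MANIFEST.in` mainly controls what goes into the sdist, and `data_files` installs outside the package directory, which matches the trouble described above. The sketch below is generic and assumes an `example/images/` layout rather than PRAW's real `setup.py`.

```python
# Hedged sketch, not PRAW's setup.py: ship image files that live inside the
# package by listing them in package_data (layout assumed: example/images/*.png).
from setuptools import find_packages, setup

setup(
    name="example",
    version="0.1.0",
    packages=find_packages(),
    package_data={"example": ["images/*.png"]},
)
```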
<code>
[start of setup.py]
1 """praw setup.py"""
2
3 import re
4 from codecs import open
5 from os import path
6 from setuptools import find_packages, setup
7
8
9 PACKAGE_NAME = "praw"
10 HERE = path.abspath(path.dirname(__file__))
11 with open(path.join(HERE, "README.rst"), encoding="utf-8") as fp:
12 README = fp.read()
13 with open(path.join(HERE, PACKAGE_NAME, "const.py"), encoding="utf-8") as fp:
14 VERSION = re.search('__version__ = "([^"]+)"', fp.read()).group(1)
15
16 extras = {
17 "ci": ["coveralls"],
18 "dev": ["pre-commit"],
19 "lint": ["black", "flake8", "pydocstyle", "sphinx", "sphinx_rtd_theme"],
20 "test": [
21 "betamax >=0.8, <0.9",
22 "betamax-matchers >=0.3.0, <0.5",
23 "betamax-serializers >=0.2, <0.3",
24 "mock >=0.8",
25 "pytest >=2.7.3",
26 ],
27 }
28 extras["dev"] += extras["lint"] + extras["test"]
29
30 setup(
31 name=PACKAGE_NAME,
32 author="Bryce Boe",
33 author_email="[email protected]",
34 python_requires=">=3.5",
35 classifiers=[
36 "Development Status :: 5 - Production/Stable",
37 "Environment :: Console",
38 "Intended Audience :: Developers",
39 "License :: OSI Approved :: BSD License",
40 "Natural Language :: English",
41 "Operating System :: OS Independent",
42 "Programming Language :: Python",
43 "Programming Language :: Python :: 3",
44 "Programming Language :: Python :: 3.5",
45 "Programming Language :: Python :: 3.6",
46 "Programming Language :: Python :: 3.7",
47 "Programming Language :: Python :: 3.8",
48 "Topic :: Utilities",
49 ],
50 description=(
51 "PRAW, an acronym for `Python Reddit API Wrapper`, is a "
52 "python package that allows for simple access to "
53 "reddit's API."
54 ),
55 extras_require=extras,
56 install_requires=[
57 "prawcore >=1.0.1, <2.0",
58 "update_checker >=0.16",
59 "websocket-client >=0.54.0",
60 ],
61 keywords="reddit api wrapper",
62 license="Simplified BSD License",
63 long_description=README,
64 package_data={"": ["LICENSE.txt"], PACKAGE_NAME: ["*.ini"]},
65 packages=find_packages(exclude=["tests", "tests.*", "tools", "tools.*"]),
66 url="https://praw.readthedocs.org/",
67 version=VERSION,
68 )
69
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,7 +61,10 @@
keywords="reddit api wrapper",
license="Simplified BSD License",
long_description=README,
- package_data={"": ["LICENSE.txt"], PACKAGE_NAME: ["*.ini"]},
+ package_data={
+ "": ["LICENSE.txt"],
+ PACKAGE_NAME: ["*.ini", "images/*.jpg"],
+ },
packages=find_packages(exclude=["tests", "tests.*", "tools", "tools.*"]),
url="https://praw.readthedocs.org/",
version=VERSION,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,7 +61,10 @@\n keywords=\"reddit api wrapper\",\n license=\"Simplified BSD License\",\n long_description=README,\n- package_data={\"\": [\"LICENSE.txt\"], PACKAGE_NAME: [\"*.ini\"]},\n+ package_data={\n+ \"\": [\"LICENSE.txt\"],\n+ PACKAGE_NAME: [\"*.ini\", \"images/*.jpg\"],\n+ },\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tools\", \"tools.*\"]),\n url=\"https://praw.readthedocs.org/\",\n version=VERSION,\n", "issue": "PRAW installed by pip is missing the `images` directory and its contents\n## Issue Description\r\n\r\nPRAW's video submit method uses a placeholder image as a video thumbnail when the user doesn't provide a thumbnail. Here's the relevant code:\r\n\r\nhttps://github.com/praw-dev/praw/blob/54f8b3f998008b81988aac057d33fe38d5ac7739/praw/models/reddit/subreddit.py#L511-L514\r\n\r\nThat image is at [`praw/images/PRAW logo.png`](https://github.com/praw-dev/praw/blob/master/praw/images/PRAW%20logo.png). Unfortunately the current release on PyPI is missing the file and the entire `images` directory, so the `submit_video` method fails when another thumbnail isn't provided.\r\n\r\nIt isn't just the wheel on PyPI that is missing the image. The source tarball is as well.\r\n\r\nI suspect that a change to [`MANIFEST.in`](https://github.com/praw-dev/praw/blob/master/MANIFEST.in) might solve this problem. Or maybe not, as [this Stack Overflow answer](https://stackoverflow.com/a/25964691/8033766) suggests that `MANIFEST.in` is just for Python 2.6 and earlier.\r\n\r\nAdding an `__init__.py` to the `images` folder would probably [make `find_packages()` in `setup.py`](https://setuptools.readthedocs.io/en/latest/setuptools.html#using-find-packages) notice the folder, but this would be a mis-use of an `__init__.py` since that folder is not a Python package.\r\n\r\n[This page](https://docs.python.org/3.3/distutils/setupscript.html#installing-additional-files) suggests using the `data_files` argument in `setup.py`, but I tried that and couldn't get it to work ([branch here](https://github.com/jarhill0/praw/tree/image-upload-fix), [PyPI testing push here](https://test.pypi.org/project/praw/)).\n", "before_files": [{"content": "\"\"\"praw setup.py\"\"\"\n\nimport re\nfrom codecs import open\nfrom os import path\nfrom setuptools import find_packages, setup\n\n\nPACKAGE_NAME = \"praw\"\nHERE = path.abspath(path.dirname(__file__))\nwith open(path.join(HERE, \"README.rst\"), encoding=\"utf-8\") as fp:\n README = fp.read()\nwith open(path.join(HERE, PACKAGE_NAME, \"const.py\"), encoding=\"utf-8\") as fp:\n VERSION = re.search('__version__ = \"([^\"]+)\"', fp.read()).group(1)\n\nextras = {\n \"ci\": [\"coveralls\"],\n \"dev\": [\"pre-commit\"],\n \"lint\": [\"black\", \"flake8\", \"pydocstyle\", \"sphinx\", \"sphinx_rtd_theme\"],\n \"test\": [\n \"betamax >=0.8, <0.9\",\n \"betamax-matchers >=0.3.0, <0.5\",\n \"betamax-serializers >=0.2, <0.3\",\n \"mock >=0.8\",\n \"pytest >=2.7.3\",\n ],\n}\nextras[\"dev\"] += extras[\"lint\"] + extras[\"test\"]\n\nsetup(\n name=PACKAGE_NAME,\n author=\"Bryce Boe\",\n author_email=\"[email protected]\",\n python_requires=\">=3.5\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 
3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Utilities\",\n ],\n description=(\n \"PRAW, an acronym for `Python Reddit API Wrapper`, is a \"\n \"python package that allows for simple access to \"\n \"reddit's API.\"\n ),\n extras_require=extras,\n install_requires=[\n \"prawcore >=1.0.1, <2.0\",\n \"update_checker >=0.16\",\n \"websocket-client >=0.54.0\",\n ],\n keywords=\"reddit api wrapper\",\n license=\"Simplified BSD License\",\n long_description=README,\n package_data={\"\": [\"LICENSE.txt\"], PACKAGE_NAME: [\"*.ini\"]},\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tools\", \"tools.*\"]),\n url=\"https://praw.readthedocs.org/\",\n version=VERSION,\n)\n", "path": "setup.py"}]} | 1,693 | 139 |
gh_patches_debug_28202 | rasdani/github-patches | git_diff | open-mmlab__mmpose-493 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Seed in sampler
https://github.com/open-mmlab/mmdetection/pull/4665
</issue>
<code>
[start of mmpose/datasets/samplers/distributed_sampler.py]
1 import torch
2 from torch.utils.data import DistributedSampler as _DistributedSampler
3
4
5 class DistributedSampler(_DistributedSampler):
6 """DistributedSampler inheriting from
7 `torch.utils.data.DistributedSampler`.
8
9 In pytorch of lower versions, there is no `shuffle` argument. This child
10 class will port one to DistributedSampler.
11 """
12
13 def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
14 super().__init__(dataset, num_replicas=num_replicas, rank=rank)
15 self.shuffle = shuffle
16
17 def __iter__(self):
18 """Deterministically shuffle based on epoch."""
19 if self.shuffle:
20 g = torch.Generator()
21 g.manual_seed(self.epoch)
22 indices = torch.randperm(len(self.dataset), generator=g).tolist()
23 else:
24 indices = torch.arange(len(self.dataset)).tolist()
25
26 # add extra samples to make it evenly divisible
27 indices += indices[:(self.total_size - len(indices))]
28 assert len(indices) == self.total_size
29
30 # subsample
31 indices = indices[self.rank:self.total_size:self.num_replicas]
32 assert len(indices) == self.num_samples
33 return iter(indices)
34
[end of mmpose/datasets/samplers/distributed_sampler.py]
[start of mmpose/datasets/builder.py]
1 import platform
2 import random
3 from functools import partial
4
5 import numpy as np
6 from mmcv.parallel import collate
7 from mmcv.runner import get_dist_info
8 from mmcv.utils import build_from_cfg
9 from mmcv.utils.parrots_wrapper import _get_dataloader
10
11 from .dataset_wrappers import RepeatDataset
12 from .registry import DATASETS
13 from .samplers import DistributedSampler
14
15 if platform.system() != 'Windows':
16 # https://github.com/pytorch/pytorch/issues/973
17 import resource
18 rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
19 hard_limit = rlimit[1]
20 soft_limit = min(4096, hard_limit)
21 resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
22
23
24 def build_dataset(cfg, default_args=None):
25 """Build a dataset from config dict.
26
27 Args:
28 cfg (dict): Config dict. It should at least contain the key "type".
29 default_args (dict, optional): Default initialization arguments.
30 Default: None.
31
32 Returns:
33 Dataset: The constructed dataset.
34 """
35 if cfg['type'] == 'RepeatDataset':
36 dataset = RepeatDataset(
37 build_dataset(cfg['dataset'], default_args), cfg['times'])
38 else:
39 dataset = build_from_cfg(cfg, DATASETS, default_args)
40 return dataset
41
42
43 def build_dataloader(dataset,
44 samples_per_gpu,
45 workers_per_gpu,
46 num_gpus=1,
47 dist=True,
48 shuffle=True,
49 seed=None,
50 drop_last=True,
51 pin_memory=True,
52 **kwargs):
53 """Build PyTorch DataLoader.
54
55 In distributed training, each GPU/process has a dataloader.
56 In non-distributed training, there is only one dataloader for all GPUs.
57
58 Args:
59 dataset (Dataset): A PyTorch dataset.
60 samples_per_gpu (int): Number of training samples on each GPU, i.e.,
61 batch size of each GPU.
62 workers_per_gpu (int): How many subprocesses to use for data loading
63 for each GPU.
64 num_gpus (int): Number of GPUs. Only used in non-distributed training.
65 dist (bool): Distributed training/test or not. Default: True.
66 shuffle (bool): Whether to shuffle the data at every epoch.
67 Default: True.
68 drop_last (bool): Whether to drop the last incomplete batch in epoch.
69 Default: True
70 pin_memory (bool): Whether to use pin_memory in DataLoader.
71 Default: True
72 kwargs: any keyword argument to be used to initialize DataLoader
73
74 Returns:
75 DataLoader: A PyTorch dataloader.
76 """
77 rank, world_size = get_dist_info()
78 if dist:
79 sampler = DistributedSampler(
80 dataset, world_size, rank, shuffle=shuffle)
81 shuffle = False
82 batch_size = samples_per_gpu
83 num_workers = workers_per_gpu
84 else:
85 sampler = None
86 batch_size = num_gpus * samples_per_gpu
87 num_workers = num_gpus * workers_per_gpu
88
89 init_fn = partial(
90 worker_init_fn, num_workers=num_workers, rank=rank,
91 seed=seed) if seed is not None else None
92
93 _, DataLoader = _get_dataloader()
94 data_loader = DataLoader(
95 dataset,
96 batch_size=batch_size,
97 sampler=sampler,
98 num_workers=num_workers,
99 collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),
100 pin_memory=pin_memory,
101 shuffle=shuffle,
102 worker_init_fn=init_fn,
103 drop_last=drop_last,
104 **kwargs)
105
106 return data_loader
107
108
109 def worker_init_fn(worker_id, num_workers, rank, seed):
110 """Init the random seed for various workers."""
111 # The seed of each worker equals to
112 # num_worker * rank + worker_id + user_seed
113 worker_seed = num_workers * rank + worker_id + seed
114 np.random.seed(worker_seed)
115 random.seed(worker_seed)
116
[end of mmpose/datasets/builder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmpose/datasets/builder.py b/mmpose/datasets/builder.py
--- a/mmpose/datasets/builder.py
+++ b/mmpose/datasets/builder.py
@@ -77,7 +77,7 @@
rank, world_size = get_dist_info()
if dist:
sampler = DistributedSampler(
- dataset, world_size, rank, shuffle=shuffle)
+ dataset, world_size, rank, shuffle=shuffle, seed=seed)
shuffle = False
batch_size = samples_per_gpu
num_workers = workers_per_gpu
diff --git a/mmpose/datasets/samplers/distributed_sampler.py b/mmpose/datasets/samplers/distributed_sampler.py
--- a/mmpose/datasets/samplers/distributed_sampler.py
+++ b/mmpose/datasets/samplers/distributed_sampler.py
@@ -10,15 +10,22 @@
class will port one to DistributedSampler.
"""
- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank)
- self.shuffle = shuffle
+ def __init__(self,
+ dataset,
+ num_replicas=None,
+ rank=None,
+ shuffle=True,
+ seed=0):
+ super().__init__(
+ dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
+ # for the compatibility from PyTorch 1.3+
+ self.seed = seed if seed is not None else 0
def __iter__(self):
"""Deterministically shuffle based on epoch."""
if self.shuffle:
g = torch.Generator()
- g.manual_seed(self.epoch)
+ g.manual_seed(self.epoch + self.seed)
indices = torch.randperm(len(self.dataset), generator=g).tolist()
else:
indices = torch.arange(len(self.dataset)).tolist()
| {"golden_diff": "diff --git a/mmpose/datasets/builder.py b/mmpose/datasets/builder.py\n--- a/mmpose/datasets/builder.py\n+++ b/mmpose/datasets/builder.py\n@@ -77,7 +77,7 @@\n rank, world_size = get_dist_info()\n if dist:\n sampler = DistributedSampler(\n- dataset, world_size, rank, shuffle=shuffle)\n+ dataset, world_size, rank, shuffle=shuffle, seed=seed)\n shuffle = False\n batch_size = samples_per_gpu\n num_workers = workers_per_gpu\ndiff --git a/mmpose/datasets/samplers/distributed_sampler.py b/mmpose/datasets/samplers/distributed_sampler.py\n--- a/mmpose/datasets/samplers/distributed_sampler.py\n+++ b/mmpose/datasets/samplers/distributed_sampler.py\n@@ -10,15 +10,22 @@\n class will port one to DistributedSampler.\n \"\"\"\n \n- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n- super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n- self.shuffle = shuffle\n+ def __init__(self,\n+ dataset,\n+ num_replicas=None,\n+ rank=None,\n+ shuffle=True,\n+ seed=0):\n+ super().__init__(\n+ dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)\n+ # for the compatibility from PyTorch 1.3+\n+ self.seed = seed if seed is not None else 0\n \n def __iter__(self):\n \"\"\"Deterministically shuffle based on epoch.\"\"\"\n if self.shuffle:\n g = torch.Generator()\n- g.manual_seed(self.epoch)\n+ g.manual_seed(self.epoch + self.seed)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n", "issue": "Seed in sampler\nhttps://github.com/open-mmlab/mmdetection/pull/4665\n", "before_files": [{"content": "import torch\nfrom torch.utils.data import DistributedSampler as _DistributedSampler\n\n\nclass DistributedSampler(_DistributedSampler):\n \"\"\"DistributedSampler inheriting from\n `torch.utils.data.DistributedSampler`.\n\n In pytorch of lower versions, there is no `shuffle` argument. 
This child\n class will port one to DistributedSampler.\n \"\"\"\n\n def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.shuffle = shuffle\n\n def __iter__(self):\n \"\"\"Deterministically shuffle based on epoch.\"\"\"\n if self.shuffle:\n g = torch.Generator()\n g.manual_seed(self.epoch)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n\n # add extra samples to make it evenly divisible\n indices += indices[:(self.total_size - len(indices))]\n assert len(indices) == self.total_size\n\n # subsample\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n return iter(indices)\n", "path": "mmpose/datasets/samplers/distributed_sampler.py"}, {"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom mmcv.utils import build_from_cfg\nfrom mmcv.utils.parrots_wrapper import _get_dataloader\n\nfrom .dataset_wrappers import RepeatDataset\nfrom .registry import DATASETS\nfrom .samplers import DistributedSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n hard_limit = rlimit[1]\n soft_limit = min(4096, hard_limit)\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\n\n\ndef build_dataset(cfg, default_args=None):\n \"\"\"Build a dataset from config dict.\n\n Args:\n cfg (dict): Config dict. It should at least contain the key \"type\".\n default_args (dict, optional): Default initialization arguments.\n Default: None.\n\n Returns:\n Dataset: The constructed dataset.\n \"\"\"\n if cfg['type'] == 'RepeatDataset':\n dataset = RepeatDataset(\n build_dataset(cfg['dataset'], default_args), cfg['times'])\n else:\n dataset = build_from_cfg(cfg, DATASETS, default_args)\n return dataset\n\n\ndef build_dataloader(dataset,\n samples_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n drop_last=True,\n pin_memory=True,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (Dataset): A PyTorch dataset.\n samples_per_gpu (int): Number of training samples on each GPU, i.e.,\n batch size of each GPU.\n workers_per_gpu (int): How many subprocesses to use for data loading\n for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed training.\n dist (bool): Distributed training/test or not. 
Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n drop_last (bool): Whether to drop the last incomplete batch in epoch.\n Default: True\n pin_memory (bool): Whether to use pin_memory in DataLoader.\n Default: True\n kwargs: any keyword argument to be used to initialize DataLoader\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n if dist:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=shuffle)\n shuffle = False\n batch_size = samples_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = None\n batch_size = num_gpus * samples_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n init_fn = partial(\n worker_init_fn, num_workers=num_workers, rank=rank,\n seed=seed) if seed is not None else None\n\n _, DataLoader = _get_dataloader()\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),\n pin_memory=pin_memory,\n shuffle=shuffle,\n worker_init_fn=init_fn,\n drop_last=drop_last,\n **kwargs)\n\n return data_loader\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n \"\"\"Init the random seed for various workers.\"\"\"\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n", "path": "mmpose/datasets/builder.py"}]} | 2,002 | 426 |
gh_patches_debug_11774 | rasdani/github-patches | git_diff | fossasia__open-event-server-6770 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Task send_event_fee_notification failing
```
raised unexpected: AttributeError("'Ticket' object has no attribute 'ticket_id'")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/flask_celeryext/app.py", line 101, in __call__
res = Task.__call__(self, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/celery/app/trace.py", line 650, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 161, in _inner
reraise(*exc_info)
File "/usr/local/lib/python3.7/site-packages/sentry_sdk/_compat.py", line 57, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 156, in _inner
return f(*args, **kwargs)
File "/data/app/app/api/helpers/scheduled_jobs.py", line 90, in send_event_fee_notification
ticket = safe_query(db, Ticket, 'id', order_ticket.ticket_id, 'ticket_id')
AttributeError: 'Ticket' object has no attribute 'ticket_id'
```
</issue>
<code>
[start of app/api/helpers/scheduled_jobs.py]
1 import datetime
2
3 import pytz
4 from dateutil.relativedelta import relativedelta
5 from flask import render_template
6 from flask_celeryext import RequestContextTask
7 from app.instance import celery
8
9 from app.api.helpers.db import safe_query, save_to_db
10 from app.api.helpers.mail import send_email_after_event, send_email_for_monthly_fee_payment, \
11 send_followup_email_for_monthly_fee_payment
12 from app.api.helpers.notification import send_notif_monthly_fee_payment, send_followup_notif_monthly_fee_payment, \
13 send_notif_after_event
14 from app.api.helpers.query import get_upcoming_events, get_user_event_roles_by_role_name
15 from app.api.helpers.utilities import monthdelta
16 from app.api.helpers.files import create_save_pdf
17 from app.api.helpers.storage import UPLOAD_PATHS
18 from app.models import db
19 from app.models.event import Event
20 from app.models.event_invoice import EventInvoice
21 from app.models.order import Order
22 from app.models.speaker import Speaker
23 from app.models.session import Session
24 from app.models.ticket import Ticket
25 from app.models.ticket_fee import TicketFees, get_fee
26 from app.models.ticket_holder import TicketHolder
27
28 from app.settings import get_settings
29
30
31 @celery.task(base=RequestContextTask, name='send.after.event.mail')
32 def send_after_event_mail():
33 from app.instance import current_app as app
34 with app.app_context():
35 events = Event.query.filter_by(state='published', deleted_at=None).all()
36 for event in events:
37 organizers = get_user_event_roles_by_role_name(event.id, 'organizer')
38 speakers = Speaker.query.filter_by(event_id=event.id, deleted_at=None).all()
39 owner = get_user_event_roles_by_role_name(event.id, 'owner').first()
40 current_time = datetime.datetime.now(pytz.timezone(event.timezone))
41 time_difference = current_time - event.ends_at
42 time_difference_minutes = (time_difference.days * 24 * 60) + \
43 (time_difference.seconds / 60)
44 frontend_url = get_settings()['frontend_url']
45 if current_time > event.ends_at and time_difference_minutes < 1440:
46 for speaker in speakers:
47 if not speaker.is_email_overridden:
48 send_email_after_event(speaker.user.email, event.name, frontend_url)
49 send_notif_after_event(speaker.user, event.name)
50 for organizer in organizers:
51 send_email_after_event(organizer.user.email, event.name, frontend_url)
52 send_notif_after_event(organizer.user, event.name)
53 if owner:
54 send_email_after_event(owner.user.email, event.name, frontend_url)
55 send_notif_after_event(owner.user, event.name)
56
57
58 @celery.task(base=RequestContextTask, name='change.session.state.on.event.completion')
59 def change_session_state_on_event_completion():
60 from app.instance import current_app as app
61 with app.app_context():
62 sessions_to_be_changed = Session.query.join(Event).filter(Session.state == 'pending')\
63 .filter(Event.ends_at < datetime.datetime.now())
64 for session in sessions_to_be_changed:
65 session.state = 'rejected'
66 save_to_db(session, 'Changed {} session state to rejected'.format(session.title))
67
68
69 @celery.task(base=RequestContextTask, name='send.event.fee.notification')
70 def send_event_fee_notification():
71 from app.instance import current_app as app
72 with app.app_context():
73 events = Event.query.filter_by(deleted_at=None, state='published').all()
74 for event in events:
75 latest_invoice = EventInvoice.query.filter_by(
76 event_id=event.id).order_by(EventInvoice.created_at.desc()).first()
77
78 if latest_invoice:
79 orders = Order.query \
80 .filter_by(event_id=event.id) \
81 .filter_by(status='completed') \
82 .filter(Order.completed_at > latest_invoice.created_at).all()
83 else:
84 orders = Order.query.filter_by(
85 event_id=event.id).filter_by(status='completed').all()
86
87 fee_total = 0
88 for order in orders:
89 for order_ticket in order.tickets:
90 ticket = safe_query(db, Ticket, 'id', order_ticket.ticket_id, 'ticket_id')
91 if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:
92 fee = ticket.price * (get_fee(event.payment_country, order.event.payment_currency) / 100.0)
93 fee_total += fee
94
95 if fee_total > 0:
96 owner = get_user_event_roles_by_role_name(event.id, 'owner').first()
97 new_invoice = EventInvoice(
98 amount=fee_total, event_id=event.id, user_id=owner.user.id)
99
100 if event.discount_code_id and event.discount_code:
101 r = relativedelta(datetime.datetime.utcnow(), event.created_at)
102 if r <= event.discount_code.valid_till:
103 new_invoice.amount = fee_total - \
104 (fee_total * (event.discount_code.value / 100.0))
105 new_invoice.discount_code_id = event.discount_code_id
106
107 save_to_db(new_invoice)
108 prev_month = monthdelta(new_invoice.created_at, 1).strftime(
109 "%b %Y") # Displayed as Aug 2016
110 app_name = get_settings()['app_name']
111 frontend_url = get_settings()['frontend_url']
112 link = '{}/invoices/{}'.format(frontend_url, new_invoice.identifier)
113 send_email_for_monthly_fee_payment(new_invoice.user.email,
114 event.name,
115 prev_month,
116 new_invoice.amount,
117 app_name,
118 link)
119 send_notif_monthly_fee_payment(new_invoice.user,
120 event.name,
121 prev_month,
122 new_invoice.amount,
123 app_name,
124 link,
125 new_invoice.event_id)
126
127
128 @celery.task(base=RequestContextTask, name='send.event.fee.notification.followup')
129 def send_event_fee_notification_followup():
130 from app.instance import current_app as app
131 with app.app_context():
132 incomplete_invoices = EventInvoice.query.filter(EventInvoice.status != 'paid').all()
133 for incomplete_invoice in incomplete_invoices:
134 if incomplete_invoice.amount > 0:
135 prev_month = monthdelta(incomplete_invoice.created_at, 1).strftime(
136 "%b %Y") # Displayed as Aug 2016
137 app_name = get_settings()['app_name']
138 frontend_url = get_settings()['frontend_url']
139 link = '{}/event-invoice/{}/review'.format(frontend_url,
140 incomplete_invoice.identifier)
141 send_followup_email_for_monthly_fee_payment(incomplete_invoice.user.email,
142 incomplete_invoice.event.name,
143 prev_month,
144 incomplete_invoice.amount,
145 app_name,
146 link)
147 send_followup_notif_monthly_fee_payment(incomplete_invoice.user,
148 incomplete_invoice.event.name,
149 prev_month,
150 incomplete_invoice.amount,
151 app_name,
152 link,
153 incomplete_invoice.event.id)
154
155
156 @celery.task(base=RequestContextTask, name='expire.pending.tickets')
157 def expire_pending_tickets():
158 from app.instance import current_app as app
159 with app.app_context():
160 db.session.query(Order).filter(Order.status == 'pending',
161 (Order.created_at + datetime.timedelta(minutes=30)) <= datetime.datetime.now()).\
162 update({'status': 'expired'})
163 db.session.commit()
164
165
166 @celery.task(base=RequestContextTask, name='delete.ticket.holders.no.order.id')
167 def delete_ticket_holders_no_order_id():
168 from app.instance import current_app as app
169 with app.app_context():
170 order_expiry_time = get_settings()['order_expiry_time']
171 TicketHolder.query.filter(TicketHolder.order_id == None, TicketHolder.deleted_at.is_(None),
172 TicketHolder.created_at + datetime.timedelta(minutes=order_expiry_time)
173 < datetime.datetime.utcnow()).delete(synchronize_session=False)
174 db.session.commit()
175
176
177 @celery.task(base=RequestContextTask, name='event.invoices.mark.due')
178 def event_invoices_mark_due():
179 from app.instance import current_app as app
180 with app.app_context():
181 db.session.query(EventInvoice).filter(
182 EventInvoice.status == 'upcoming',
183 Event.id == EventInvoice.event_id,
184 Event.ends_at >= datetime.datetime.now(),
185 (EventInvoice.created_at + datetime.timedelta(days=30) <= datetime.datetime.now())
186 ).update({EventInvoice.status: 'due'}, synchronize_session=False)
187
188
189 @celery.task(base=RequestContextTask, name='send.monthly.event.invoice')
190 def send_monthly_event_invoice():
191 from app.instance import current_app as app
192 with app.app_context():
193 events = Event.query.filter_by(deleted_at=None, state='published').all()
194 for event in events:
195 # calculate net & gross revenues
196 user = event.owner
197 admin_info = get_settings()
198 currency = event.payment_currency
199 ticket_fee_object = db.session.query(TicketFees).filter_by(currency=currency).one()
200 ticket_fee_percentage = ticket_fee_object.service_fee
201 ticket_fee_maximum = ticket_fee_object.maximum_fee
202 orders = Order.query.filter_by(event=event).all()
203 gross_revenue = event.calc_monthly_revenue()
204 ticket_fees = event.tickets_sold * (ticket_fee_percentage / 100)
205 if ticket_fees > ticket_fee_maximum:
206 ticket_fees = ticket_fee_maximum
207 net_revenue = gross_revenue - ticket_fees
208 payment_details = {
209 'tickets_sold': event.tickets_sold,
210 'gross_revenue': gross_revenue,
211 'net_revenue': net_revenue,
212 'amount_payable': ticket_fees
213 }
214 # save invoice as pdf
215 pdf = create_save_pdf(render_template('pdf/event_invoice.html', orders=orders, user=user,
216 admin_info=admin_info, currency=currency, event=event,
217 ticket_fee_object=ticket_fee_object, payment_details=payment_details,
218 net_revenue=net_revenue), UPLOAD_PATHS['pdf']['event_invoice'],
219 dir_path='/static/uploads/pdf/event_invoices/', identifier=event.identifier)
220 # save event_invoice info to DB
221
222 event_invoice = EventInvoice(amount=net_revenue, invoice_pdf_url=pdf, event_id=event.id)
223 save_to_db(event_invoice)
224
225
226 @celery.on_after_configure.connect
227 def setup_scheduled_task(sender, **kwargs):
228 from celery.schedules import crontab
229 sender.add_periodic_task(crontab(hour='*/5', minute=30), send_after_event_mail)
230 sender.add_periodic_task(crontab(day_of_week='0-6'), send_event_fee_notification)
231 sender.add_periodic_task(crontab(minute=0, hour=0, day_of_month=1), send_event_fee_notification_followup)
232 sender.add_periodic_task(crontab(hour='*/5', minute=30), change_session_state_on_event_completion)
233 sender.add_periodic_task(crontab(minute='*/45'), expire_pending_tickets)
234 sender.add_periodic_task(crontab(minute=0, hour=0, day_of_month=1), send_monthly_event_invoice)
235 sender.add_periodic_task(crontab(minute=0, hour='*/5'), event_invoices_mark_due)
236 sender.add_periodic_task(crontab(minute='*/5'), delete_ticket_holders_no_order_id)
237
[end of app/api/helpers/scheduled_jobs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/helpers/scheduled_jobs.py b/app/api/helpers/scheduled_jobs.py
--- a/app/api/helpers/scheduled_jobs.py
+++ b/app/api/helpers/scheduled_jobs.py
@@ -86,8 +86,7 @@
fee_total = 0
for order in orders:
- for order_ticket in order.tickets:
- ticket = safe_query(db, Ticket, 'id', order_ticket.ticket_id, 'ticket_id')
+ for ticket in order.tickets:
if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:
fee = ticket.price * (get_fee(event.payment_country, order.event.payment_currency) / 100.0)
fee_total += fee
| {"golden_diff": "diff --git a/app/api/helpers/scheduled_jobs.py b/app/api/helpers/scheduled_jobs.py\n--- a/app/api/helpers/scheduled_jobs.py\n+++ b/app/api/helpers/scheduled_jobs.py\n@@ -86,8 +86,7 @@\n \n fee_total = 0\n for order in orders:\n- for order_ticket in order.tickets:\n- ticket = safe_query(db, Ticket, 'id', order_ticket.ticket_id, 'ticket_id')\n+ for ticket in order.tickets:\n if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:\n fee = ticket.price * (get_fee(event.payment_country, order.event.payment_currency) / 100.0)\n fee_total += fee\n", "issue": "Task send_event_fee_notification failing\n```\r\nraised unexpected: AttributeError(\"'Ticket' object has no attribute 'ticket_id'\")\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/celery/app/trace.py\", line 385, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/flask_celeryext/app.py\", line 101, in __call__\r\n res = Task.__call__(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/celery/app/trace.py\", line 650, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py\", line 161, in _inner\r\n reraise(*exc_info)\r\n File \"/usr/local/lib/python3.7/site-packages/sentry_sdk/_compat.py\", line 57, in reraise\r\n raise value\r\n File \"/usr/local/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py\", line 156, in _inner\r\n return f(*args, **kwargs)\r\n File \"/data/app/app/api/helpers/scheduled_jobs.py\", line 90, in send_event_fee_notification\r\n ticket = safe_query(db, Ticket, 'id', order_ticket.ticket_id, 'ticket_id')\r\nAttributeError: 'Ticket' object has no attribute 'ticket_id'\r\n\r\n```\n", "before_files": [{"content": "import datetime\n\nimport pytz\nfrom dateutil.relativedelta import relativedelta\nfrom flask import render_template\nfrom flask_celeryext import RequestContextTask\nfrom app.instance import celery\n\nfrom app.api.helpers.db import safe_query, save_to_db\nfrom app.api.helpers.mail import send_email_after_event, send_email_for_monthly_fee_payment, \\\n send_followup_email_for_monthly_fee_payment\nfrom app.api.helpers.notification import send_notif_monthly_fee_payment, send_followup_notif_monthly_fee_payment, \\\n send_notif_after_event\nfrom app.api.helpers.query import get_upcoming_events, get_user_event_roles_by_role_name\nfrom app.api.helpers.utilities import monthdelta\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.event_invoice import EventInvoice\nfrom app.models.order import Order\nfrom app.models.speaker import Speaker\nfrom app.models.session import Session\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_fee import TicketFees, get_fee\nfrom app.models.ticket_holder import TicketHolder\n\nfrom app.settings import get_settings\n\n\[email protected](base=RequestContextTask, name='send.after.event.mail')\ndef send_after_event_mail():\n from app.instance import current_app as app\n with app.app_context():\n events = Event.query.filter_by(state='published', deleted_at=None).all()\n for event in events:\n organizers = get_user_event_roles_by_role_name(event.id, 'organizer')\n speakers = Speaker.query.filter_by(event_id=event.id, deleted_at=None).all()\n owner = get_user_event_roles_by_role_name(event.id, 'owner').first()\n 
current_time = datetime.datetime.now(pytz.timezone(event.timezone))\n time_difference = current_time - event.ends_at\n time_difference_minutes = (time_difference.days * 24 * 60) + \\\n (time_difference.seconds / 60)\n frontend_url = get_settings()['frontend_url']\n if current_time > event.ends_at and time_difference_minutes < 1440:\n for speaker in speakers:\n if not speaker.is_email_overridden:\n send_email_after_event(speaker.user.email, event.name, frontend_url)\n send_notif_after_event(speaker.user, event.name)\n for organizer in organizers:\n send_email_after_event(organizer.user.email, event.name, frontend_url)\n send_notif_after_event(organizer.user, event.name)\n if owner:\n send_email_after_event(owner.user.email, event.name, frontend_url)\n send_notif_after_event(owner.user, event.name)\n\n\[email protected](base=RequestContextTask, name='change.session.state.on.event.completion')\ndef change_session_state_on_event_completion():\n from app.instance import current_app as app\n with app.app_context():\n sessions_to_be_changed = Session.query.join(Event).filter(Session.state == 'pending')\\\n .filter(Event.ends_at < datetime.datetime.now())\n for session in sessions_to_be_changed:\n session.state = 'rejected'\n save_to_db(session, 'Changed {} session state to rejected'.format(session.title))\n\n\[email protected](base=RequestContextTask, name='send.event.fee.notification')\ndef send_event_fee_notification():\n from app.instance import current_app as app\n with app.app_context():\n events = Event.query.filter_by(deleted_at=None, state='published').all()\n for event in events:\n latest_invoice = EventInvoice.query.filter_by(\n event_id=event.id).order_by(EventInvoice.created_at.desc()).first()\n\n if latest_invoice:\n orders = Order.query \\\n .filter_by(event_id=event.id) \\\n .filter_by(status='completed') \\\n .filter(Order.completed_at > latest_invoice.created_at).all()\n else:\n orders = Order.query.filter_by(\n event_id=event.id).filter_by(status='completed').all()\n\n fee_total = 0\n for order in orders:\n for order_ticket in order.tickets:\n ticket = safe_query(db, Ticket, 'id', order_ticket.ticket_id, 'ticket_id')\n if order.paid_via != 'free' and order.amount > 0 and ticket.price > 0:\n fee = ticket.price * (get_fee(event.payment_country, order.event.payment_currency) / 100.0)\n fee_total += fee\n\n if fee_total > 0:\n owner = get_user_event_roles_by_role_name(event.id, 'owner').first()\n new_invoice = EventInvoice(\n amount=fee_total, event_id=event.id, user_id=owner.user.id)\n\n if event.discount_code_id and event.discount_code:\n r = relativedelta(datetime.datetime.utcnow(), event.created_at)\n if r <= event.discount_code.valid_till:\n new_invoice.amount = fee_total - \\\n (fee_total * (event.discount_code.value / 100.0))\n new_invoice.discount_code_id = event.discount_code_id\n\n save_to_db(new_invoice)\n prev_month = monthdelta(new_invoice.created_at, 1).strftime(\n \"%b %Y\") # Displayed as Aug 2016\n app_name = get_settings()['app_name']\n frontend_url = get_settings()['frontend_url']\n link = '{}/invoices/{}'.format(frontend_url, new_invoice.identifier)\n send_email_for_monthly_fee_payment(new_invoice.user.email,\n event.name,\n prev_month,\n new_invoice.amount,\n app_name,\n link)\n send_notif_monthly_fee_payment(new_invoice.user,\n event.name,\n prev_month,\n new_invoice.amount,\n app_name,\n link,\n new_invoice.event_id)\n\n\[email protected](base=RequestContextTask, name='send.event.fee.notification.followup')\ndef send_event_fee_notification_followup():\n from 
app.instance import current_app as app\n with app.app_context():\n incomplete_invoices = EventInvoice.query.filter(EventInvoice.status != 'paid').all()\n for incomplete_invoice in incomplete_invoices:\n if incomplete_invoice.amount > 0:\n prev_month = monthdelta(incomplete_invoice.created_at, 1).strftime(\n \"%b %Y\") # Displayed as Aug 2016\n app_name = get_settings()['app_name']\n frontend_url = get_settings()['frontend_url']\n link = '{}/event-invoice/{}/review'.format(frontend_url,\n incomplete_invoice.identifier)\n send_followup_email_for_monthly_fee_payment(incomplete_invoice.user.email,\n incomplete_invoice.event.name,\n prev_month,\n incomplete_invoice.amount,\n app_name,\n link)\n send_followup_notif_monthly_fee_payment(incomplete_invoice.user,\n incomplete_invoice.event.name,\n prev_month,\n incomplete_invoice.amount,\n app_name,\n link,\n incomplete_invoice.event.id)\n\n\[email protected](base=RequestContextTask, name='expire.pending.tickets')\ndef expire_pending_tickets():\n from app.instance import current_app as app\n with app.app_context():\n db.session.query(Order).filter(Order.status == 'pending',\n (Order.created_at + datetime.timedelta(minutes=30)) <= datetime.datetime.now()).\\\n update({'status': 'expired'})\n db.session.commit()\n\n\[email protected](base=RequestContextTask, name='delete.ticket.holders.no.order.id')\ndef delete_ticket_holders_no_order_id():\n from app.instance import current_app as app\n with app.app_context():\n order_expiry_time = get_settings()['order_expiry_time']\n TicketHolder.query.filter(TicketHolder.order_id == None, TicketHolder.deleted_at.is_(None),\n TicketHolder.created_at + datetime.timedelta(minutes=order_expiry_time)\n < datetime.datetime.utcnow()).delete(synchronize_session=False)\n db.session.commit()\n\n\[email protected](base=RequestContextTask, name='event.invoices.mark.due')\ndef event_invoices_mark_due():\n from app.instance import current_app as app\n with app.app_context():\n db.session.query(EventInvoice).filter(\n EventInvoice.status == 'upcoming',\n Event.id == EventInvoice.event_id,\n Event.ends_at >= datetime.datetime.now(),\n (EventInvoice.created_at + datetime.timedelta(days=30) <= datetime.datetime.now())\n ).update({EventInvoice.status: 'due'}, synchronize_session=False)\n\n\[email protected](base=RequestContextTask, name='send.monthly.event.invoice')\ndef send_monthly_event_invoice():\n from app.instance import current_app as app\n with app.app_context():\n events = Event.query.filter_by(deleted_at=None, state='published').all()\n for event in events:\n # calculate net & gross revenues\n user = event.owner\n admin_info = get_settings()\n currency = event.payment_currency\n ticket_fee_object = db.session.query(TicketFees).filter_by(currency=currency).one()\n ticket_fee_percentage = ticket_fee_object.service_fee\n ticket_fee_maximum = ticket_fee_object.maximum_fee\n orders = Order.query.filter_by(event=event).all()\n gross_revenue = event.calc_monthly_revenue()\n ticket_fees = event.tickets_sold * (ticket_fee_percentage / 100)\n if ticket_fees > ticket_fee_maximum:\n ticket_fees = ticket_fee_maximum\n net_revenue = gross_revenue - ticket_fees\n payment_details = {\n 'tickets_sold': event.tickets_sold,\n 'gross_revenue': gross_revenue,\n 'net_revenue': net_revenue,\n 'amount_payable': ticket_fees\n }\n # save invoice as pdf\n pdf = create_save_pdf(render_template('pdf/event_invoice.html', orders=orders, user=user,\n admin_info=admin_info, currency=currency, event=event,\n ticket_fee_object=ticket_fee_object, 
payment_details=payment_details,\n net_revenue=net_revenue), UPLOAD_PATHS['pdf']['event_invoice'],\n dir_path='/static/uploads/pdf/event_invoices/', identifier=event.identifier)\n # save event_invoice info to DB\n\n event_invoice = EventInvoice(amount=net_revenue, invoice_pdf_url=pdf, event_id=event.id)\n save_to_db(event_invoice)\n\n\[email protected]_after_configure.connect\ndef setup_scheduled_task(sender, **kwargs):\n from celery.schedules import crontab\n sender.add_periodic_task(crontab(hour='*/5', minute=30), send_after_event_mail)\n sender.add_periodic_task(crontab(day_of_week='0-6'), send_event_fee_notification)\n sender.add_periodic_task(crontab(minute=0, hour=0, day_of_month=1), send_event_fee_notification_followup)\n sender.add_periodic_task(crontab(hour='*/5', minute=30), change_session_state_on_event_completion)\n sender.add_periodic_task(crontab(minute='*/45'), expire_pending_tickets)\n sender.add_periodic_task(crontab(minute=0, hour=0, day_of_month=1), send_monthly_event_invoice)\n sender.add_periodic_task(crontab(minute=0, hour='*/5'), event_invoices_mark_due)\n sender.add_periodic_task(crontab(minute='*/5'), delete_ticket_holders_no_order_id)\n", "path": "app/api/helpers/scheduled_jobs.py"}]} | 3,861 | 162 |
gh_patches_debug_11059 | rasdani/github-patches | git_diff | pyca__cryptography-4133 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Figure out how to fix docs build on rtd
It appears to be running into https://github.com/sphinx-doc/sphinx/issues/3976
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10 import platform
11 import subprocess
12 import sys
13 from distutils.command.build import build
14
15 import pkg_resources
16
17 import setuptools
18 from setuptools import find_packages, setup
19 from setuptools.command.install import install
20 from setuptools.command.test import test
21
22
23 if (
24 pkg_resources.parse_version(setuptools.__version__) <
25 pkg_resources.parse_version("18.5")
26 ):
27 raise RuntimeError(
28 "cryptography requires setuptools 18.5 or newer, please upgrade to a "
29 "newer version of setuptools"
30 )
31
32 base_dir = os.path.dirname(__file__)
33 src_dir = os.path.join(base_dir, "src")
34
35 # When executing the setup.py, we need to be able to import ourselves, this
36 # means that we need to add the src/ directory to the sys.path.
37 sys.path.insert(0, src_dir)
38
39 about = {}
40 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
41 exec(f.read(), about)
42
43
44 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
45
46 setup_requirements = []
47
48 if platform.python_implementation() == "PyPy":
49 if sys.pypy_version_info < (5, 3):
50 raise RuntimeError(
51 "cryptography 1.9 is not compatible with PyPy < 5.3. Please "
52 "upgrade PyPy to use this library."
53 )
54 else:
55 setup_requirements.append("cffi>=1.7,!=1.11.3")
56
57 test_requirements = [
58 "pytest>=3.2.1,!=3.3.0",
59 "pretend",
60 "iso8601",
61 "pytz",
62 "hypothesis>=1.11.4",
63 ]
64
65
66 # If there's no vectors locally that probably means we are in a tarball and
67 # need to go and get the matching vectors package from PyPi
68 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
69 test_requirements.append(VECTORS_DEPENDENCY)
70
71
72 class PyTest(test):
73 def finalize_options(self):
74 test.finalize_options(self)
75 self.test_args = []
76 self.test_suite = True
77
78 # This means there's a vectors/ folder with the package in here.
79 # cd into it, install the vectors package and then refresh sys.path
80 if VECTORS_DEPENDENCY not in test_requirements:
81 subprocess.check_call(
82 [sys.executable, "setup.py", "install"], cwd="vectors"
83 )
84 pkg_resources.get_distribution("cryptography_vectors").activate()
85
86 def run_tests(self):
87 # Import here because in module scope the eggs are not loaded.
88 import pytest
89 test_args = [os.path.join(base_dir, "tests")]
90 errno = pytest.main(test_args)
91 sys.exit(errno)
92
93
94 def keywords_with_side_effects(argv):
95 """
96 Get a dictionary with setup keywords that (can) have side effects.
97
98 :param argv: A list of strings with command line arguments.
99 :returns: A dictionary with keyword arguments for the ``setup()`` function.
100
101 This setup.py script uses the setuptools 'setup_requires' feature because
102 this is required by the cffi package to compile extension modules. The
103 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
104 build process as a result of setup.py invocations that don't need the cffi
105 module to be built (setup.py serves the dual purpose of exposing package
106 metadata).
107
108 All of the options listed by ``python setup.py --help`` that print
109 information should be recognized here. The commands ``clean``,
110 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
111 Any combination of these options and commands is also supported.
112
113 This function was originally based on the `setup.py script`_ of SciPy (see
114 also the discussion in `pip issue #25`_).
115
116 .. _pip issue #25: https://github.com/pypa/pip/issues/25
117 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
118 """
119 no_setup_requires_arguments = (
120 '-h', '--help',
121 '-n', '--dry-run',
122 '-q', '--quiet',
123 '-v', '--verbose',
124 '-V', '--version',
125 '--author',
126 '--author-email',
127 '--classifiers',
128 '--contact',
129 '--contact-email',
130 '--description',
131 '--egg-base',
132 '--fullname',
133 '--help-commands',
134 '--keywords',
135 '--licence',
136 '--license',
137 '--long-description',
138 '--maintainer',
139 '--maintainer-email',
140 '--name',
141 '--no-user-cfg',
142 '--obsoletes',
143 '--platforms',
144 '--provides',
145 '--requires',
146 '--url',
147 'clean',
148 'egg_info',
149 'register',
150 'sdist',
151 'upload',
152 )
153
154 def is_short_option(argument):
155 """Check whether a command line argument is a short option."""
156 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
157
158 def expand_short_options(argument):
159 """Expand combined short options into canonical short options."""
160 return ('-' + char for char in argument[1:])
161
162 def argument_without_setup_requirements(argv, i):
163 """Check whether a command line argument needs setup requirements."""
164 if argv[i] in no_setup_requires_arguments:
165 # Simple case: An argument which is either an option or a command
166 # which doesn't need setup requirements.
167 return True
168 elif (is_short_option(argv[i]) and
169 all(option in no_setup_requires_arguments
170 for option in expand_short_options(argv[i]))):
171 # Not so simple case: Combined short options none of which need
172 # setup requirements.
173 return True
174 elif argv[i - 1:i] == ['--egg-base']:
175 # Tricky case: --egg-info takes an argument which should not make
176 # us use setup_requires (defeating the purpose of this code).
177 return True
178 else:
179 return False
180
181 if all(argument_without_setup_requirements(argv, i)
182 for i in range(1, len(argv))):
183 return {
184 "cmdclass": {
185 "build": DummyBuild,
186 "install": DummyInstall,
187 "test": DummyPyTest,
188 }
189 }
190 else:
191 cffi_modules = [
192 "src/_cffi_src/build_openssl.py:ffi",
193 "src/_cffi_src/build_constant_time.py:ffi",
194 "src/_cffi_src/build_padding.py:ffi",
195 ]
196
197 return {
198 "setup_requires": setup_requirements,
199 "cmdclass": {
200 "test": PyTest,
201 },
202 "cffi_modules": cffi_modules
203 }
204
205
206 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
207 "while command line arguments implied a side effect "
208 "free command or option.")
209
210
211 class DummyBuild(build):
212 """
213 This class makes it very obvious when ``keywords_with_side_effects()`` has
214 incorrectly interpreted the command line arguments to ``setup.py build`` as
215 one of the 'side effect free' commands or options.
216 """
217
218 def run(self):
219 raise RuntimeError(setup_requires_error)
220
221
222 class DummyInstall(install):
223 """
224 This class makes it very obvious when ``keywords_with_side_effects()`` has
225 incorrectly interpreted the command line arguments to ``setup.py install``
226 as one of the 'side effect free' commands or options.
227 """
228
229 def run(self):
230 raise RuntimeError(setup_requires_error)
231
232
233 class DummyPyTest(test):
234 """
235 This class makes it very obvious when ``keywords_with_side_effects()`` has
236 incorrectly interpreted the command line arguments to ``setup.py test`` as
237 one of the 'side effect free' commands or options.
238 """
239
240 def run_tests(self):
241 raise RuntimeError(setup_requires_error)
242
243
244 with open(os.path.join(base_dir, "README.rst")) as f:
245 long_description = f.read()
246
247
248 setup(
249 name=about["__title__"],
250 version=about["__version__"],
251
252 description=about["__summary__"],
253 long_description=long_description,
254 license=about["__license__"],
255 url=about["__uri__"],
256
257 author=about["__author__"],
258 author_email=about["__email__"],
259
260 classifiers=[
261 "Intended Audience :: Developers",
262 "License :: OSI Approved :: Apache Software License",
263 "License :: OSI Approved :: BSD License",
264 "Natural Language :: English",
265 "Operating System :: MacOS :: MacOS X",
266 "Operating System :: POSIX",
267 "Operating System :: POSIX :: BSD",
268 "Operating System :: POSIX :: Linux",
269 "Operating System :: Microsoft :: Windows",
270 "Programming Language :: Python",
271 "Programming Language :: Python :: 2",
272 "Programming Language :: Python :: 2.7",
273 "Programming Language :: Python :: 3",
274 "Programming Language :: Python :: 3.4",
275 "Programming Language :: Python :: 3.5",
276 "Programming Language :: Python :: 3.6",
277 "Programming Language :: Python :: Implementation :: CPython",
278 "Programming Language :: Python :: Implementation :: PyPy",
279 "Topic :: Security :: Cryptography",
280 ],
281
282 package_dir={"": "src"},
283 packages=find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"]),
284 include_package_data=True,
285
286 python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
287
288 install_requires=[
289 "idna >= 2.1",
290 "asn1crypto >= 0.21.0",
291 "six >= 1.4.1",
292 ],
293 tests_require=test_requirements,
294 extras_require={
295 ":python_version < '3'": ["enum34", "ipaddress"],
296 ":platform_python_implementation != 'PyPy'": ["cffi >= 1.7"],
297
298 "test": test_requirements,
299 "docstest": [
300 "doc8",
301 "pyenchant >= 1.6.11",
302 "readme_renderer >= 16.0",
303 "sphinx >= 1.6.5",
304 "sphinx_rtd_theme",
305 "sphinxcontrib-spelling >= 4.0.1",
306 ],
307 "pep8test": [
308 "flake8",
309 "flake8-import-order",
310 "pep8-naming",
311 ],
312 },
313
314 # for cffi
315 zip_safe=False,
316 ext_package="cryptography.hazmat.bindings",
317 **keywords_with_side_effects(sys.argv)
318 )
319
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -296,12 +296,14 @@
":platform_python_implementation != 'PyPy'": ["cffi >= 1.7"],
"test": test_requirements,
+ "docs": [
+ "sphinx >= 1.6.5",
+ "sphinx_rtd_theme",
+ ],
"docstest": [
"doc8",
"pyenchant >= 1.6.11",
"readme_renderer >= 16.0",
- "sphinx >= 1.6.5",
- "sphinx_rtd_theme",
"sphinxcontrib-spelling >= 4.0.1",
],
"pep8test": [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -296,12 +296,14 @@\n \":platform_python_implementation != 'PyPy'\": [\"cffi >= 1.7\"],\n \n \"test\": test_requirements,\n+ \"docs\": [\n+ \"sphinx >= 1.6.5\",\n+ \"sphinx_rtd_theme\",\n+ ],\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n- \"sphinx >= 1.6.5\",\n- \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n", "issue": "Figure out how to fix docs build on rtd\nIt appears to be running into https://github.com/sphinx-doc/sphinx/issues/3976\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nimport setuptools\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nif (\n pkg_resources.parse_version(setuptools.__version__) <\n pkg_resources.parse_version(\"18.5\")\n):\n raise RuntimeError(\n \"cryptography requires setuptools 18.5 or newer, please upgrade to a \"\n \"newer version of setuptools\"\n )\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nsetup_requirements = []\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (5, 3):\n raise RuntimeError(\n \"cryptography 1.9 is not compatible with PyPy < 5.3. 
Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n setup_requirements.append(\"cffi>=1.7,!=1.11.3\")\n\ntest_requirements = [\n \"pytest>=3.2.1,!=3.3.0\",\n \"pretend\",\n \"iso8601\",\n \"pytz\",\n \"hypothesis>=1.11.4\",\n]\n\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. 
_setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n 
version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n\n install_requires=[\n \"idna >= 2.1\",\n \"asn1crypto >= 0.21.0\",\n \"six >= 1.4.1\",\n ],\n tests_require=test_requirements,\n extras_require={\n \":python_version < '3'\": [\"enum34\", \"ipaddress\"],\n \":platform_python_implementation != 'PyPy'\": [\"cffi >= 1.7\"],\n\n \"test\": test_requirements,\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n \"sphinx >= 1.6.5\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n },\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}]} | 3,797 | 181 |
gh_patches_debug_33771 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4750 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_300 fails if there are more rules than one
**Describe the issue**
CKV_AWS_300 fails if more than one rule is defined in `aws_s3_bucket_lifecycle_configuration`.
**Examples**
```
resource "aws_s3_bucket_lifecycle_configuration" "bucket" {
bucket = aws_s3_bucket.bucket.bucket
rule {
id = "id-1"
status = "Enabled"
abort_incomplete_multipart_upload {
days_after_initiation = 1
}
}
rule {
id = "id-2"
status = "Enabled"
noncurrent_version_expiration {
noncurrent_days = 1
}
}
}
```
**Version (please complete the following information):**
- Checkov Version 2.3.111
Fails with:
```
Check: CKV_AWS_300: "Ensure S3 lifecycle configuration sets period for aborting failed uploads"
FAILED for resource: aws_s3_bucket_lifecycle_configuration.bucket
File: s3.tf:1-1
122 | resource "aws_s3_bucket_lifecycle_configuration" "bucket" {
123 | bucket = aws_s3_bucket.bucket.bucket
124 |
125 | rule {
126 | id = "id-1"
127 |
128 | abort_incomplete_multipart_upload {
129 | days_after_initiation = 1
130 | }
131 |
132 | status = "Enabled"
133 | }
134 |
135 | rule {
136 | id = "id-2"
137 | status = "Enabled"
138 |
139 | noncurrent_version_expiration {
140 | noncurrent_days = 1
141 | }
142 | }
143 | }
```
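
Reading the check implementation included below suggests why this happens: the rule loop returns `FAILED` for the first rule that lacks `abort_incomplete_multipart_upload`, even when another rule in the same resource defines it. A minimal, self-contained Python distillation of that logic (mine, not checkov code; the dict shapes only loosely mimic checkov's parsed HCL):

```python
rules = [
    {"id": ["id-1"], "abort_incomplete_multipart_upload": [{"days_after_initiation": [1]}]},
    {"id": ["id-2"], "noncurrent_version_expiration": [{"noncurrent_days": [1]}]},
]

def current_logic(rules):
    # mirrors the existing loop: fail on the first rule without the block
    for rule in rules:
        if not rule.get("abort_incomplete_multipart_upload"):
            return "FAILED"
    return "PASSED"

def reporter_expectation(rules):
    # pass as long as at least one rule aborts incomplete uploads
    return "PASSED" if any(r.get("abort_incomplete_multipart_upload") for r in rules) else "FAILED"

print(current_logic(rules))          # FAILED -> the reported false positive
print(reporter_expectation(rules))   # PASSED
```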
</issue>
<code>
[start of checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py]
1 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
2 from checkov.common.models.enums import CheckCategories, CheckResult
3
4
5 class S3AbortIncompleteUploads(BaseResourceCheck):
6 def __init__(self):
7 """
8 If you don't set this value in a lifecycle configuration you'll end up paying for s3
9 resources you never could use
10 """
11 name = "Ensure S3 lifecycle configuration sets period for aborting failed uploads"
12 id = "CKV_AWS_300"
13 supported_resources = ('aws_s3_bucket_lifecycle_configuration',)
14 categories = (CheckCategories.GENERAL_SECURITY,)
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def scan_resource_conf(self, conf):
18 self.evaluated_keys = ["rule"]
19 rules = conf.get("rule")
20 if rules and isinstance(rules, list):
21 for idx_rule, rule in enumerate(rules):
22 if not rule.get("abort_incomplete_multipart_upload"):
23 self.evaluated_keys = [f"rule/[{idx_rule}]/"]
24 return CheckResult.FAILED
25 return CheckResult.PASSED
26 return CheckResult.FAILED
27
28
29 check = S3AbortIncompleteUploads()
30
[end of checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py
--- a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py
+++ b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py
@@ -1,28 +1,36 @@
+from __future__ import annotations
+
+from typing import Any
+
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckCategories, CheckResult
class S3AbortIncompleteUploads(BaseResourceCheck):
- def __init__(self):
+ def __init__(self) -> None:
"""
If you don't set this value in a lifecycle configuration you'll end up paying for s3
resources you never could use
"""
name = "Ensure S3 lifecycle configuration sets period for aborting failed uploads"
id = "CKV_AWS_300"
- supported_resources = ('aws_s3_bucket_lifecycle_configuration',)
+ supported_resources = ("aws_s3_bucket_lifecycle_configuration",)
categories = (CheckCategories.GENERAL_SECURITY,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def scan_resource_conf(self, conf):
+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
self.evaluated_keys = ["rule"]
rules = conf.get("rule")
if rules and isinstance(rules, list):
for idx_rule, rule in enumerate(rules):
- if not rule.get("abort_incomplete_multipart_upload"):
- self.evaluated_keys = [f"rule/[{idx_rule}]/"]
- return CheckResult.FAILED
- return CheckResult.PASSED
+ if (
+ rule.get("abort_incomplete_multipart_upload")
+ and rule.get("status") == ["Enabled"]
+ and not rule.get("filter")
+ ):
+ self.evaluated_keys = [f"rule/[{idx_rule}]/abort_incomplete_multipart_upload"]
+ return CheckResult.PASSED
+
return CheckResult.FAILED
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py\n--- a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py\n+++ b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py\n@@ -1,28 +1,36 @@\n+from __future__ import annotations\n+\n+from typing import Any\n+\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n from checkov.common.models.enums import CheckCategories, CheckResult\n \n \n class S3AbortIncompleteUploads(BaseResourceCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n \"\"\"\n If you don't set this value in a lifecycle configuration you'll end up paying for s3\n resources you never could use\n \"\"\"\n name = \"Ensure S3 lifecycle configuration sets period for aborting failed uploads\"\n id = \"CKV_AWS_300\"\n- supported_resources = ('aws_s3_bucket_lifecycle_configuration',)\n+ supported_resources = (\"aws_s3_bucket_lifecycle_configuration\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def scan_resource_conf(self, conf):\n+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n self.evaluated_keys = [\"rule\"]\n rules = conf.get(\"rule\")\n if rules and isinstance(rules, list):\n for idx_rule, rule in enumerate(rules):\n- if not rule.get(\"abort_incomplete_multipart_upload\"):\n- self.evaluated_keys = [f\"rule/[{idx_rule}]/\"]\n- return CheckResult.FAILED\n- return CheckResult.PASSED\n+ if (\n+ rule.get(\"abort_incomplete_multipart_upload\")\n+ and rule.get(\"status\") == [\"Enabled\"]\n+ and not rule.get(\"filter\")\n+ ):\n+ self.evaluated_keys = [f\"rule/[{idx_rule}]/abort_incomplete_multipart_upload\"]\n+ return CheckResult.PASSED\n+\n return CheckResult.FAILED\n", "issue": "CKV_AWS_300 fails if there are more rules than one\n**Describe the issue**\r\nCKV_AWS_300 fails if there's more rules than one defined in `aws_s3_bucket_lifecycle_configuration`\r\n\r\n**Examples**\r\n```\r\nresource \"aws_s3_bucket_lifecycle_configuration\" \"bucket\" {\r\n bucket = aws_s3_bucket.bucket.bucket\r\n\r\n rule {\r\n id = \"id-1\"\r\n status = \"Enabled\"\r\n\r\n abort_incomplete_multipart_upload {\r\n days_after_initiation = 1\r\n }\r\n }\r\n\r\n rule {\r\n id = \"id-2\"\r\n status = \"Enabled\"\r\n\r\n noncurrent_version_expiration {\r\n noncurrent_days = 1\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.3.111\r\n\r\nFails with:\r\n\r\n```\r\nCheck: CKV_AWS_300: \"Ensure S3 lifecycle configuration sets period for aborting failed uploads\"\r\n\tFAILED for resource: aws_s3_bucket_lifecycle_configuration.bucket\r\n\tFile: s3.tf:1-1\r\n\r\n\t\t122 | resource \"aws_s3_bucket_lifecycle_configuration\" \"bucket\" {\r\n\t\t123 | bucket = aws_s3_bucket.bucket.bucket\r\n\t\t124 | \r\n\t\t125 | rule {\r\n\t\t126 | id = \"id-1\"\r\n\t\t127 | \r\n\t\t128 | abort_incomplete_multipart_upload {\r\n\t\t129 | days_after_initiation = 1\r\n\t\t130 | }\r\n\t\t131 | \r\n\t\t132 | status = \"Enabled\"\r\n\t\t133 | }\r\n\t\t134 | \r\n\t\t135 | rule {\r\n\t\t136 | id = \"id-2\"\r\n\t\t137 | status = \"Enabled\"\r\n\t\t138 | \r\n\t\t139 | noncurrent_version_expiration {\r\n\t\t140 | noncurrent_days = 1\r\n\t\t141 | }\r\n\t\t142 | }\r\n\t\t143 | }\r\n```\n", "before_files": [{"content": "from 
checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass S3AbortIncompleteUploads(BaseResourceCheck):\n def __init__(self):\n \"\"\"\n If you don't set this value in a lifecycle configuration you'll end up paying for s3\n resources you never could use\n \"\"\"\n name = \"Ensure S3 lifecycle configuration sets period for aborting failed uploads\"\n id = \"CKV_AWS_300\"\n supported_resources = ('aws_s3_bucket_lifecycle_configuration',)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n self.evaluated_keys = [\"rule\"]\n rules = conf.get(\"rule\")\n if rules and isinstance(rules, list):\n for idx_rule, rule in enumerate(rules):\n if not rule.get(\"abort_incomplete_multipart_upload\"):\n self.evaluated_keys = [f\"rule/[{idx_rule}]/\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = S3AbortIncompleteUploads()\n", "path": "checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py"}]} | 1,347 | 482 |
gh_patches_debug_13279 | rasdani/github-patches | git_diff | pypa__pip-3007 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip 7.0.1: you should use "--trusted-host". Hey, no such option "--trusted-host"!
```
$ cat req.txt
--extra-index-url http://pip.mycompany.com/simple
mylib
myanotherlib
$ pip install -r req.txt
Collecting mylib (from -r req.txt (line 2))
.../urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
The repository located at pip.mycompany.com is not a trusted or secure host and is being ignored. If this repository is available via HTTPS it is recommended to use HTTPS instead, otherwise you may silence this warning and allow it anyways with '--trusted-host pip.mycompany.com'.
Could not find a version that satisfies the requirement mylib (from -r req.txt (line 2)) (from versions: )
No matching distribution found for mylib (from -r req.txt (line 2))
$ cat req1.txt
--extra-index-url http://pip.mycompany.com/simple
--trusted-host pip.mycompany.com
mylib
myanotherlib
$ pip install -r req1.txt
Usage: pip [options]
pip: error: no such option: --trusted-host
$
```
I know that I can run `pip install -r file.txt --trusted-host=mycompany.com`, but I configure my servers with Chef rather than running pip directly.
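
For context (my illustration, not part of the original report): pip's requirements-file parser only registers a whitelist of options (`SUPPORTED_OPTIONS` in the code below), and `--trusted-host` is not on that list, so optparse rejects it. A rough, self-contained sketch of that failure mode:

```python
import optparse

# hypothetical, trimmed whitelist standing in for pip's SUPPORTED_OPTIONS
SUPPORTED = ["--index-url", "--extra-index-url", "--no-index", "--find-links"]

parser = optparse.OptionParser(add_help_option=False)
for opt in SUPPORTED:
    parser.add_option(opt, action="append")

# exits with "error: no such option: --trusted-host", matching the report
parser.parse_args(["--trusted-host", "pip.mycompany.com"])
```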
</issue>
<code>
[start of pip/req/req_file.py]
1 """
2 Requirements file parsing
3 """
4
5 from __future__ import absolute_import
6
7 import os
8 import re
9 import shlex
10 import optparse
11 import warnings
12
13 from pip._vendor.six.moves.urllib import parse as urllib_parse
14 from pip._vendor.six.moves import filterfalse
15
16 import pip
17 from pip.download import get_file_content
18 from pip.req.req_install import InstallRequirement
19 from pip.exceptions import (RequirementsFileParseError)
20 from pip.utils.deprecation import RemovedInPip10Warning
21 from pip import cmdoptions
22
23 __all__ = ['parse_requirements']
24
25 SCHEME_RE = re.compile(r'^(http|https|file):', re.I)
26 COMMENT_RE = re.compile(r'(^|\s)+#.*$')
27
28 SUPPORTED_OPTIONS = [
29 cmdoptions.constraints,
30 cmdoptions.editable,
31 cmdoptions.requirements,
32 cmdoptions.no_index,
33 cmdoptions.index_url,
34 cmdoptions.find_links,
35 cmdoptions.extra_index_url,
36 cmdoptions.allow_external,
37 cmdoptions.allow_all_external,
38 cmdoptions.no_allow_external,
39 cmdoptions.allow_unsafe,
40 cmdoptions.no_allow_unsafe,
41 cmdoptions.use_wheel,
42 cmdoptions.no_use_wheel,
43 cmdoptions.always_unzip,
44 cmdoptions.no_binary,
45 cmdoptions.only_binary,
46 ]
47
48 # options to be passed to requirements
49 SUPPORTED_OPTIONS_REQ = [
50 cmdoptions.install_options,
51 cmdoptions.global_options
52 ]
53
54 # the 'dest' string values
55 SUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]
56
57
58 def parse_requirements(filename, finder=None, comes_from=None, options=None,
59 session=None, constraint=False, wheel_cache=None):
60 """Parse a requirements file and yield InstallRequirement instances.
61
62 :param filename: Path or url of requirements file.
63 :param finder: Instance of pip.index.PackageFinder.
64 :param comes_from: Origin description of requirements.
65 :param options: Global options.
66 :param session: Instance of pip.download.PipSession.
67 :param constraint: If true, parsing a constraint file rather than
68 requirements file.
69 :param wheel_cache: Instance of pip.wheel.WheelCache
70 """
71 if session is None:
72 raise TypeError(
73 "parse_requirements() missing 1 required keyword argument: "
74 "'session'"
75 )
76
77 _, content = get_file_content(
78 filename, comes_from=comes_from, session=session
79 )
80
81 lines = content.splitlines()
82 lines = ignore_comments(lines)
83 lines = join_lines(lines)
84 lines = skip_regex(lines, options)
85
86 for line_number, line in enumerate(lines, 1):
87 req_iter = process_line(line, filename, line_number, finder,
88 comes_from, options, session, wheel_cache,
89 constraint=constraint)
90 for req in req_iter:
91 yield req
92
93
94 def process_line(line, filename, line_number, finder=None, comes_from=None,
95 options=None, session=None, wheel_cache=None,
96 constraint=False):
97 """Process a single requirements line; This can result in creating/yielding
98 requirements, or updating the finder.
99
100 For lines that contain requirements, the only options that have an effect
101 are from SUPPORTED_OPTIONS_REQ, and they are scoped to the
102 requirement. Other options from SUPPORTED_OPTIONS may be present, but are
103 ignored.
104
105 For lines that do not contain requirements, the only options that have an
106 effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may
107 be present, but are ignored. These lines may contain multiple options
108 (although our docs imply only one is supported), and all our parsed and
109 affect the finder.
110
111 :param constraint: If True, parsing a constraints file.
112 """
113 parser = build_parser()
114 defaults = parser.get_default_values()
115 defaults.index_url = None
116 if finder:
117 # `finder.format_control` will be updated during parsing
118 defaults.format_control = finder.format_control
119 args_str, options_str = break_args_options(line)
120 opts, _ = parser.parse_args(shlex.split(options_str), defaults)
121
122 # preserve for the nested code path
123 line_comes_from = '%s %s (line %s)' % (
124 '-c' if constraint else '-r', filename, line_number)
125
126 # yield a line requirement
127 if args_str:
128 isolated = options.isolated_mode if options else False
129 if options:
130 cmdoptions.check_install_build_global(options, opts)
131 # get the options that apply to requirements
132 req_options = {}
133 for dest in SUPPORTED_OPTIONS_REQ_DEST:
134 if dest in opts.__dict__ and opts.__dict__[dest]:
135 req_options[dest] = opts.__dict__[dest]
136 yield InstallRequirement.from_line(
137 args_str, line_comes_from, constraint=constraint,
138 isolated=isolated, options=req_options, wheel_cache=wheel_cache
139 )
140
141 # yield an editable requirement
142 elif opts.editables:
143 isolated = options.isolated_mode if options else False
144 default_vcs = options.default_vcs if options else None
145 yield InstallRequirement.from_editable(
146 opts.editables[0], comes_from=line_comes_from,
147 constraint=constraint, default_vcs=default_vcs, isolated=isolated,
148 wheel_cache=wheel_cache
149 )
150
151 # parse a nested requirements file
152 elif opts.requirements or opts.constraints:
153 if opts.requirements:
154 req_path = opts.requirements[0]
155 nested_constraint = False
156 else:
157 req_path = opts.constraints[0]
158 nested_constraint = True
159 # original file is over http
160 if SCHEME_RE.search(filename):
161 # do a url join so relative paths work
162 req_path = urllib_parse.urljoin(filename, req_path)
163 # original file and nested file are paths
164 elif not SCHEME_RE.search(req_path):
165 # do a join so relative paths work
166 req_dir = os.path.dirname(filename)
167 req_path = os.path.join(os.path.dirname(filename), req_path)
168 # TODO: Why not use `comes_from='-r {} (line {})'` here as well?
169 parser = parse_requirements(
170 req_path, finder, comes_from, options, session,
171 constraint=nested_constraint, wheel_cache=wheel_cache
172 )
173 for req in parser:
174 yield req
175
176 # set finder options
177 elif finder:
178 if opts.allow_external:
179 warnings.warn(
180 "--allow-external has been deprecated and will be removed in "
181 "the future. Due to changes in the repository protocol, it no "
182 "longer has any effect.",
183 RemovedInPip10Warning,
184 )
185
186 if opts.allow_all_external:
187 warnings.warn(
188 "--allow-all-external has been deprecated and will be removed "
189 "in the future. Due to changes in the repository protocol, it "
190 "no longer has any effect.",
191 RemovedInPip10Warning,
192 )
193
194 if opts.allow_unverified:
195 warnings.warn(
196 "--allow-unverified has been deprecated and will be removed "
197 "in the future. Due to changes in the repository protocol, it "
198 "no longer has any effect.",
199 RemovedInPip10Warning,
200 )
201
202 if opts.index_url:
203 finder.index_urls = [opts.index_url]
204 if opts.use_wheel is False:
205 finder.use_wheel = False
206 pip.index.fmt_ctl_no_use_wheel(finder.format_control)
207 if opts.no_index is True:
208 finder.index_urls = []
209 if opts.extra_index_urls:
210 finder.index_urls.extend(opts.extra_index_urls)
211 if opts.find_links:
212 # FIXME: it would be nice to keep track of the source
213 # of the find_links: support a find-links local path
214 # relative to a requirements file.
215 value = opts.find_links[0]
216 req_dir = os.path.dirname(os.path.abspath(filename))
217 relative_to_reqs_file = os.path.join(req_dir, value)
218 if os.path.exists(relative_to_reqs_file):
219 value = relative_to_reqs_file
220 finder.find_links.append(value)
221
222
223 def break_args_options(line):
224 """Break up the line into an args and options string. We only want to shlex
225 (and then optparse) the options, not the args. args can contain markers
226 which are corrupted by shlex.
227 """
228 tokens = line.split(' ')
229 args = []
230 options = tokens[:]
231 for token in tokens:
232 if token.startswith('-') or token.startswith('--'):
233 break
234 else:
235 args.append(token)
236 options.pop(0)
237 return ' '.join(args), ' '.join(options)
238
239
240 def build_parser():
241 """
242 Return a parser for parsing requirement lines
243 """
244 parser = optparse.OptionParser(add_help_option=False)
245
246 option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ
247 for option_factory in option_factories:
248 option = option_factory()
249 parser.add_option(option)
250
251 # By default optparse sys.exits on parsing errors. We want to wrap
252 # that in our own exception.
253 def parser_exit(self, msg):
254 raise RequirementsFileParseError(msg)
255 parser.exit = parser_exit
256
257 return parser
258
259
260 def join_lines(iterator):
261 """
262 Joins a line ending in '\' with the previous line.
263 """
264 lines = []
265 for line in iterator:
266 if not line.endswith('\\'):
267 if lines:
268 lines.append(line)
269 yield ''.join(lines)
270 lines = []
271 else:
272 yield line
273 else:
274 lines.append(line.strip('\\'))
275
276 # TODO: handle space after '\'.
277 # TODO: handle '\' on last line.
278
279
280 def ignore_comments(iterator):
281 """
282 Strips and filters empty or commented lines.
283 """
284 for line in iterator:
285 line = COMMENT_RE.sub('', line)
286 line = line.strip()
287 if line:
288 yield line
289
290
291 def skip_regex(lines, options):
292 """
293 Optionally exclude lines that match '--skip-requirements-regex'
294 """
295 skip_regex = options.skip_requirements_regex if options else None
296 if skip_regex:
297 lines = filterfalse(re.compile(skip_regex).search, lines)
298 return lines
299
[end of pip/req/req_file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pip/req/req_file.py b/pip/req/req_file.py
--- a/pip/req/req_file.py
+++ b/pip/req/req_file.py
@@ -43,6 +43,7 @@
cmdoptions.always_unzip,
cmdoptions.no_binary,
cmdoptions.only_binary,
+ cmdoptions.trusted_host,
]
# options to be passed to requirements
@@ -218,6 +219,9 @@
if os.path.exists(relative_to_reqs_file):
value = relative_to_reqs_file
finder.find_links.append(value)
+ if opts.trusted_hosts:
+ finder.secure_origins.extend(
+ ("*", host, "*") for host in opts.trusted_hosts)
def break_args_options(line):
| {"golden_diff": "diff --git a/pip/req/req_file.py b/pip/req/req_file.py\n--- a/pip/req/req_file.py\n+++ b/pip/req/req_file.py\n@@ -43,6 +43,7 @@\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n+ cmdoptions.trusted_host,\n ]\n \n # options to be passed to requirements\n@@ -218,6 +219,9 @@\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n+ if opts.trusted_hosts:\n+ finder.secure_origins.extend(\n+ (\"*\", host, \"*\") for host in opts.trusted_hosts)\n \n \n def break_args_options(line):\n", "issue": "pip 7.0.1: you should use \"--trusted-host\". Hey, no such option \"--trusted-host\"!\n```\n$ cat req.txt \n--extra-index-url http://pip.mycompany.com/simple \nmylib\nmyanotherlib\n\n$ pip install -r req.txt\nCollecting mylib (from -r req.txt (line 2))\n.../urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.\n InsecurePlatformWarning\n The repository located at pip.mycompany.com is not a trusted or secure host and is being ignored. If this repository is available via HTTPS it is recommended to use HTTPS instead, otherwise you may silence this warning and allow it anyways with '--trusted-host pip.mycompany.com'.\n Could not find a version that satisfies the requirement mylib (from -r req.txt (line 2)) (from versions: )\nNo matching distribution found for mylib (from -r req.txt (line 2))\n\n$ cat req1.txt \n--extra-index-url http://pip.mycompany.com/simple \n--trusted-host pip.mycompany.com\nmylib\nmyanotherlib\n\n$ pip install -r req1.txt \nUsage: pip [options]\n\npip: error: no such option: --trusted-host\n$ \n```\n\nI know that i can run `pip install -r file.txt --trusted-host=mycompany.com` but I configure my servers with Chef, not running pip directly.\n\n", "before_files": [{"content": "\"\"\"\nRequirements file parsing\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport re\nimport shlex\nimport optparse\nimport warnings\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves import filterfalse\n\nimport pip\nfrom pip.download import get_file_content\nfrom pip.req.req_install import InstallRequirement\nfrom pip.exceptions import (RequirementsFileParseError)\nfrom pip.utils.deprecation import RemovedInPip10Warning\nfrom pip import cmdoptions\n\n__all__ = ['parse_requirements']\n\nSCHEME_RE = re.compile(r'^(http|https|file):', re.I)\nCOMMENT_RE = re.compile(r'(^|\\s)+#.*$')\n\nSUPPORTED_OPTIONS = [\n cmdoptions.constraints,\n cmdoptions.editable,\n cmdoptions.requirements,\n cmdoptions.no_index,\n cmdoptions.index_url,\n cmdoptions.find_links,\n cmdoptions.extra_index_url,\n cmdoptions.allow_external,\n cmdoptions.allow_all_external,\n cmdoptions.no_allow_external,\n cmdoptions.allow_unsafe,\n cmdoptions.no_allow_unsafe,\n cmdoptions.use_wheel,\n cmdoptions.no_use_wheel,\n cmdoptions.always_unzip,\n cmdoptions.no_binary,\n cmdoptions.only_binary,\n]\n\n# options to be passed to requirements\nSUPPORTED_OPTIONS_REQ = [\n cmdoptions.install_options,\n cmdoptions.global_options\n]\n\n# the 'dest' string values\nSUPPORTED_OPTIONS_REQ_DEST = [o().dest for o in SUPPORTED_OPTIONS_REQ]\n\n\ndef parse_requirements(filename, finder=None, comes_from=None, options=None,\n session=None, constraint=False, 
wheel_cache=None):\n \"\"\"Parse a requirements file and yield InstallRequirement instances.\n\n :param filename: Path or url of requirements file.\n :param finder: Instance of pip.index.PackageFinder.\n :param comes_from: Origin description of requirements.\n :param options: Global options.\n :param session: Instance of pip.download.PipSession.\n :param constraint: If true, parsing a constraint file rather than\n requirements file.\n :param wheel_cache: Instance of pip.wheel.WheelCache\n \"\"\"\n if session is None:\n raise TypeError(\n \"parse_requirements() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n _, content = get_file_content(\n filename, comes_from=comes_from, session=session\n )\n\n lines = content.splitlines()\n lines = ignore_comments(lines)\n lines = join_lines(lines)\n lines = skip_regex(lines, options)\n\n for line_number, line in enumerate(lines, 1):\n req_iter = process_line(line, filename, line_number, finder,\n comes_from, options, session, wheel_cache,\n constraint=constraint)\n for req in req_iter:\n yield req\n\n\ndef process_line(line, filename, line_number, finder=None, comes_from=None,\n options=None, session=None, wheel_cache=None,\n constraint=False):\n \"\"\"Process a single requirements line; This can result in creating/yielding\n requirements, or updating the finder.\n\n For lines that contain requirements, the only options that have an effect\n are from SUPPORTED_OPTIONS_REQ, and they are scoped to the\n requirement. Other options from SUPPORTED_OPTIONS may be present, but are\n ignored.\n\n For lines that do not contain requirements, the only options that have an\n effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may\n be present, but are ignored. These lines may contain multiple options\n (although our docs imply only one is supported), and all our parsed and\n affect the finder.\n\n :param constraint: If True, parsing a constraints file.\n \"\"\"\n parser = build_parser()\n defaults = parser.get_default_values()\n defaults.index_url = None\n if finder:\n # `finder.format_control` will be updated during parsing\n defaults.format_control = finder.format_control\n args_str, options_str = break_args_options(line)\n opts, _ = parser.parse_args(shlex.split(options_str), defaults)\n\n # preserve for the nested code path\n line_comes_from = '%s %s (line %s)' % (\n '-c' if constraint else '-r', filename, line_number)\n\n # yield a line requirement\n if args_str:\n isolated = options.isolated_mode if options else False\n if options:\n cmdoptions.check_install_build_global(options, opts)\n # get the options that apply to requirements\n req_options = {}\n for dest in SUPPORTED_OPTIONS_REQ_DEST:\n if dest in opts.__dict__ and opts.__dict__[dest]:\n req_options[dest] = opts.__dict__[dest]\n yield InstallRequirement.from_line(\n args_str, line_comes_from, constraint=constraint,\n isolated=isolated, options=req_options, wheel_cache=wheel_cache\n )\n\n # yield an editable requirement\n elif opts.editables:\n isolated = options.isolated_mode if options else False\n default_vcs = options.default_vcs if options else None\n yield InstallRequirement.from_editable(\n opts.editables[0], comes_from=line_comes_from,\n constraint=constraint, default_vcs=default_vcs, isolated=isolated,\n wheel_cache=wheel_cache\n )\n\n # parse a nested requirements file\n elif opts.requirements or opts.constraints:\n if opts.requirements:\n req_path = opts.requirements[0]\n nested_constraint = False\n else:\n req_path = opts.constraints[0]\n nested_constraint = 
True\n # original file is over http\n if SCHEME_RE.search(filename):\n # do a url join so relative paths work\n req_path = urllib_parse.urljoin(filename, req_path)\n # original file and nested file are paths\n elif not SCHEME_RE.search(req_path):\n # do a join so relative paths work\n req_dir = os.path.dirname(filename)\n req_path = os.path.join(os.path.dirname(filename), req_path)\n # TODO: Why not use `comes_from='-r {} (line {})'` here as well?\n parser = parse_requirements(\n req_path, finder, comes_from, options, session,\n constraint=nested_constraint, wheel_cache=wheel_cache\n )\n for req in parser:\n yield req\n\n # set finder options\n elif finder:\n if opts.allow_external:\n warnings.warn(\n \"--allow-external has been deprecated and will be removed in \"\n \"the future. Due to changes in the repository protocol, it no \"\n \"longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.allow_all_external:\n warnings.warn(\n \"--allow-all-external has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.allow_unverified:\n warnings.warn(\n \"--allow-unverified has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if opts.index_url:\n finder.index_urls = [opts.index_url]\n if opts.use_wheel is False:\n finder.use_wheel = False\n pip.index.fmt_ctl_no_use_wheel(finder.format_control)\n if opts.no_index is True:\n finder.index_urls = []\n if opts.extra_index_urls:\n finder.index_urls.extend(opts.extra_index_urls)\n if opts.find_links:\n # FIXME: it would be nice to keep track of the source\n # of the find_links: support a find-links local path\n # relative to a requirements file.\n value = opts.find_links[0]\n req_dir = os.path.dirname(os.path.abspath(filename))\n relative_to_reqs_file = os.path.join(req_dir, value)\n if os.path.exists(relative_to_reqs_file):\n value = relative_to_reqs_file\n finder.find_links.append(value)\n\n\ndef break_args_options(line):\n \"\"\"Break up the line into an args and options string. We only want to shlex\n (and then optparse) the options, not the args. args can contain markers\n which are corrupted by shlex.\n \"\"\"\n tokens = line.split(' ')\n args = []\n options = tokens[:]\n for token in tokens:\n if token.startswith('-') or token.startswith('--'):\n break\n else:\n args.append(token)\n options.pop(0)\n return ' '.join(args), ' '.join(options)\n\n\ndef build_parser():\n \"\"\"\n Return a parser for parsing requirement lines\n \"\"\"\n parser = optparse.OptionParser(add_help_option=False)\n\n option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ\n for option_factory in option_factories:\n option = option_factory()\n parser.add_option(option)\n\n # By default optparse sys.exits on parsing errors. 
We want to wrap\n # that in our own exception.\n def parser_exit(self, msg):\n raise RequirementsFileParseError(msg)\n parser.exit = parser_exit\n\n return parser\n\n\ndef join_lines(iterator):\n \"\"\"\n Joins a line ending in '\\' with the previous line.\n \"\"\"\n lines = []\n for line in iterator:\n if not line.endswith('\\\\'):\n if lines:\n lines.append(line)\n yield ''.join(lines)\n lines = []\n else:\n yield line\n else:\n lines.append(line.strip('\\\\'))\n\n # TODO: handle space after '\\'.\n # TODO: handle '\\' on last line.\n\n\ndef ignore_comments(iterator):\n \"\"\"\n Strips and filters empty or commented lines.\n \"\"\"\n for line in iterator:\n line = COMMENT_RE.sub('', line)\n line = line.strip()\n if line:\n yield line\n\n\ndef skip_regex(lines, options):\n \"\"\"\n Optionally exclude lines that match '--skip-requirements-regex'\n \"\"\"\n skip_regex = options.skip_requirements_regex if options else None\n if skip_regex:\n lines = filterfalse(re.compile(skip_regex).search, lines)\n return lines\n", "path": "pip/req/req_file.py"}]} | 3,853 | 175 |
gh_patches_debug_22629 | rasdani/github-patches | git_diff | yt-project__yt-3613 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: spurious log warning when saving a plot to png format
### Bug report
**Bug summary**
**Code for reproduction**
```python
import yt
yt.funcs.mylog.setLevel("warning")
ds = yt.load_sample("IsolatedGalaxy")
p = yt.SlicePlot(ds, "z", "density")
p.save("/tmp/test.png")
```
**Actual outcome**
```
yt : [WARNING ] 2021-10-20 11:50:44,393 Received two valid image formats '.png' (from `filename`) and 'png' (from `suffix`). The former is ignored.
```
**Expected outcome**
No log warning
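
Not part of the original report, but the comparison in `validate_image_name` (see `yt/visualization/_commons.py` below) looks like the trigger: `os.path.splitext` keeps the leading dot, so the `'.png'` taken from the filename is compared against the normalized `'png'` suffix and the two never match. A minimal sketch of that comparison:

```python
import os

filename, suffix = "/tmp/test.png", "png"   # suffix as normalized by the library
_, psuffix = os.path.splitext(filename)     # psuffix == ".png" (dot retained)
print(suffix != psuffix)                    # True -> the warning fires although
                                            # both describe the same format
```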
</issue>
<code>
[start of yt/visualization/_commons.py]
1 import os
2 import sys
3 from typing import Optional, Type
4
5 import matplotlib
6 from packaging.version import Version
7
8 from yt.utilities.logger import ytLogger as mylog
9
10 from ._mpl_imports import (
11 FigureCanvasAgg,
12 FigureCanvasBase,
13 FigureCanvasPdf,
14 FigureCanvasPS,
15 FigureCanvasSVG,
16 )
17
18 MPL_VERSION = Version(matplotlib.__version__)
19
20 DEFAULT_FONT_PROPERTIES = {
21 "family": "stixgeneral",
22 "size": 18,
23 }
24
25 if MPL_VERSION >= Version("3.4"):
26 DEFAULT_FONT_PROPERTIES["math_fontfamily"] = "cm"
27
28 SUPPORTED_FORMATS = frozenset(FigureCanvasBase.get_supported_filetypes().keys())
29 SUPPORTED_CANVAS_CLASSES = frozenset(
30 (FigureCanvasAgg, FigureCanvasPdf, FigureCanvasPS, FigureCanvasSVG)
31 )
32
33
34 def get_canvas_class(suffix: str) -> Type[FigureCanvasBase]:
35 s = normalize_extension_string(suffix)
36 if s not in SUPPORTED_FORMATS:
37 raise ValueError(f"Unsupported file format '{suffix}'.")
38 for cls in SUPPORTED_CANVAS_CLASSES:
39 if s in cls.get_supported_filetypes():
40 return cls
41 raise RuntimeError(
42 "Something went terribly wrong. "
43 f"File extension '{suffix}' is supposed to be supported "
44 "but no compatible backend was found."
45 )
46
47
48 def normalize_extension_string(s: str) -> str:
49 if sys.version_info < (3, 9):
50 if s.startswith("."):
51 return s[1:]
52 return s
53 else:
54 return s.removeprefix(".")
55
56
57 def validate_image_name(filename, suffix: Optional[str] = None) -> str:
58 """
59 Build a valid image filename with a specified extension (default to png).
60 The suffix parameter is ignored if the input filename has a valid extension already.
61 Otherwise, suffix is appended to the filename, replacing any existing extension.
62 """
63 name, psuffix = os.path.splitext(filename)
64 if normalize_extension_string(psuffix) in SUPPORTED_FORMATS:
65 if suffix is not None:
66 suffix = normalize_extension_string(suffix)
67 if suffix in SUPPORTED_FORMATS and suffix != psuffix:
68 mylog.warning(
69 "Received two valid image formats '%s' (from `filename`) "
70 "and '%s' (from `suffix`). The former is ignored.",
71 psuffix,
72 suffix,
73 )
74 return f"{name}.{suffix}"
75 return str(filename)
76
77 if suffix is None:
78 suffix = ".png"
79
80 suffix = normalize_extension_string(suffix)
81
82 if suffix not in SUPPORTED_FORMATS:
83 raise ValueError(f"Unsupported file format '{suffix}'.")
84
85 return f"{filename}.{suffix}"
86
87
88 def get_canvas(figure, filename):
89
90 name, suffix = os.path.splitext(filename)
91
92 if not suffix:
93 raise ValueError(
94 f"Can not determine canvas class from filename '{filename}' "
95 f"without an extension."
96 )
97 return get_canvas_class(suffix)(figure)
98
[end of yt/visualization/_commons.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yt/visualization/_commons.py b/yt/visualization/_commons.py
--- a/yt/visualization/_commons.py
+++ b/yt/visualization/_commons.py
@@ -61,9 +61,12 @@
Otherwise, suffix is appended to the filename, replacing any existing extension.
"""
name, psuffix = os.path.splitext(filename)
- if normalize_extension_string(psuffix) in SUPPORTED_FORMATS:
- if suffix is not None:
- suffix = normalize_extension_string(suffix)
+ psuffix = normalize_extension_string(psuffix)
+
+ if suffix is not None:
+ suffix = normalize_extension_string(suffix)
+
+ if psuffix in SUPPORTED_FORMATS:
if suffix in SUPPORTED_FORMATS and suffix != psuffix:
mylog.warning(
"Received two valid image formats '%s' (from `filename`) "
@@ -75,9 +78,7 @@
return str(filename)
if suffix is None:
- suffix = ".png"
-
- suffix = normalize_extension_string(suffix)
+ suffix = "png"
if suffix not in SUPPORTED_FORMATS:
raise ValueError(f"Unsupported file format '{suffix}'.")
| {"golden_diff": "diff --git a/yt/visualization/_commons.py b/yt/visualization/_commons.py\n--- a/yt/visualization/_commons.py\n+++ b/yt/visualization/_commons.py\n@@ -61,9 +61,12 @@\n Otherwise, suffix is appended to the filename, replacing any existing extension.\n \"\"\"\n name, psuffix = os.path.splitext(filename)\n- if normalize_extension_string(psuffix) in SUPPORTED_FORMATS:\n- if suffix is not None:\n- suffix = normalize_extension_string(suffix)\n+ psuffix = normalize_extension_string(psuffix)\n+\n+ if suffix is not None:\n+ suffix = normalize_extension_string(suffix)\n+\n+ if psuffix in SUPPORTED_FORMATS:\n if suffix in SUPPORTED_FORMATS and suffix != psuffix:\n mylog.warning(\n \"Received two valid image formats '%s' (from `filename`) \"\n@@ -75,9 +78,7 @@\n return str(filename)\n \n if suffix is None:\n- suffix = \".png\"\n-\n- suffix = normalize_extension_string(suffix)\n+ suffix = \"png\"\n \n if suffix not in SUPPORTED_FORMATS:\n raise ValueError(f\"Unsupported file format '{suffix}'.\")\n", "issue": "BUG: spurious log warning when saving a plot to png format\n### Bug report\r\n\r\n**Bug summary**\r\n\r\n**Code for reproduction**\r\n\r\n```python\r\nimport yt\r\n\r\nyt.funcs.mylog.setLevel(\"warning\")\r\n\r\nds = yt.load_sample(\"IsolatedGalaxy\")\r\np = yt.SlicePlot(ds, \"z\", \"density\")\r\np.save(\"/tmp/test.png\")\r\n```\r\n\r\n\r\n**Actual outcome**\r\n\r\n```\r\nyt : [WARNING ] 2021-10-20 11:50:44,393 Received two valid image formats '.png' (from `filename`) and 'png' (from `suffix`). The former is ignored.\r\n```\r\n\r\n**Expected outcome**\r\n\r\nNo log warning\n", "before_files": [{"content": "import os\nimport sys\nfrom typing import Optional, Type\n\nimport matplotlib\nfrom packaging.version import Version\n\nfrom yt.utilities.logger import ytLogger as mylog\n\nfrom ._mpl_imports import (\n FigureCanvasAgg,\n FigureCanvasBase,\n FigureCanvasPdf,\n FigureCanvasPS,\n FigureCanvasSVG,\n)\n\nMPL_VERSION = Version(matplotlib.__version__)\n\nDEFAULT_FONT_PROPERTIES = {\n \"family\": \"stixgeneral\",\n \"size\": 18,\n}\n\nif MPL_VERSION >= Version(\"3.4\"):\n DEFAULT_FONT_PROPERTIES[\"math_fontfamily\"] = \"cm\"\n\nSUPPORTED_FORMATS = frozenset(FigureCanvasBase.get_supported_filetypes().keys())\nSUPPORTED_CANVAS_CLASSES = frozenset(\n (FigureCanvasAgg, FigureCanvasPdf, FigureCanvasPS, FigureCanvasSVG)\n)\n\n\ndef get_canvas_class(suffix: str) -> Type[FigureCanvasBase]:\n s = normalize_extension_string(suffix)\n if s not in SUPPORTED_FORMATS:\n raise ValueError(f\"Unsupported file format '{suffix}'.\")\n for cls in SUPPORTED_CANVAS_CLASSES:\n if s in cls.get_supported_filetypes():\n return cls\n raise RuntimeError(\n \"Something went terribly wrong. 
\"\n f\"File extension '{suffix}' is supposed to be supported \"\n \"but no compatible backend was found.\"\n )\n\n\ndef normalize_extension_string(s: str) -> str:\n if sys.version_info < (3, 9):\n if s.startswith(\".\"):\n return s[1:]\n return s\n else:\n return s.removeprefix(\".\")\n\n\ndef validate_image_name(filename, suffix: Optional[str] = None) -> str:\n \"\"\"\n Build a valid image filename with a specified extension (default to png).\n The suffix parameter is ignored if the input filename has a valid extension already.\n Otherwise, suffix is appended to the filename, replacing any existing extension.\n \"\"\"\n name, psuffix = os.path.splitext(filename)\n if normalize_extension_string(psuffix) in SUPPORTED_FORMATS:\n if suffix is not None:\n suffix = normalize_extension_string(suffix)\n if suffix in SUPPORTED_FORMATS and suffix != psuffix:\n mylog.warning(\n \"Received two valid image formats '%s' (from `filename`) \"\n \"and '%s' (from `suffix`). The former is ignored.\",\n psuffix,\n suffix,\n )\n return f\"{name}.{suffix}\"\n return str(filename)\n\n if suffix is None:\n suffix = \".png\"\n\n suffix = normalize_extension_string(suffix)\n\n if suffix not in SUPPORTED_FORMATS:\n raise ValueError(f\"Unsupported file format '{suffix}'.\")\n\n return f\"{filename}.{suffix}\"\n\n\ndef get_canvas(figure, filename):\n\n name, suffix = os.path.splitext(filename)\n\n if not suffix:\n raise ValueError(\n f\"Can not determine canvas class from filename '{filename}' \"\n f\"without an extension.\"\n )\n return get_canvas_class(suffix)(figure)\n", "path": "yt/visualization/_commons.py"}]} | 1,512 | 260 |
gh_patches_debug_38815 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[WSGI] Replace span name callback with request and response hooks
WSGI instrumentation accepts a span name callback which should be replaced with more generic request/response callbacks (hooks).
Details: https://github.com/open-telemetry/opentelemetry-python-contrib/issues/408
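
A hypothetical usage sketch of what such hooks could look like for this middleware (the keyword arguments and hook signatures are assumptions for illustration, not a documented API):

```python
from flask import Flask
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware

def request_hook(span, environ):
    # called once per request with the server span and the WSGI environ
    if span and span.is_recording():
        span.set_attribute("custom.path_info", environ.get("PATH_INFO", ""))

def response_hook(span, environ, status, response_headers):
    # called from start_response with the status line and the header list
    if span and span.is_recording():
        span.set_attribute("custom.status_line", status)

app = Flask(__name__)
app.wsgi_app = OpenTelemetryMiddleware(
    app.wsgi_app, request_hook=request_hook, response_hook=response_hook
)
```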
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This library provides a WSGI middleware that can be used on any WSGI framework
16 (such as Django / Flask) to track requests timing through OpenTelemetry.
17
18 Usage (Flask)
19 -------------
20
21 .. code-block:: python
22
23 from flask import Flask
24 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
25
26 app = Flask(__name__)
27 app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)
28
29 @app.route("/")
30 def hello():
31 return "Hello!"
32
33 if __name__ == "__main__":
34 app.run(debug=True)
35
36
37 Usage (Django)
38 --------------
39
40 Modify the application's ``wsgi.py`` file as shown below.
41
42 .. code-block:: python
43
44 import os
45 from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
46 from django.core.wsgi import get_wsgi_application
47
48 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')
49
50 application = get_wsgi_application()
51 application = OpenTelemetryMiddleware(application)
52
53 API
54 ---
55 """
56
57 import functools
58 import typing
59 import wsgiref.util as wsgiref_util
60
61 from opentelemetry import context, trace
62 from opentelemetry.instrumentation.utils import http_status_to_status_code
63 from opentelemetry.instrumentation.wsgi.version import __version__
64 from opentelemetry.propagate import extract
65 from opentelemetry.propagators.textmap import Getter
66 from opentelemetry.trace.status import Status, StatusCode
67
68 _HTTP_VERSION_PREFIX = "HTTP/"
69 _CARRIER_KEY_PREFIX = "HTTP_"
70 _CARRIER_KEY_PREFIX_LEN = len(_CARRIER_KEY_PREFIX)
71
72
73 class WSGIGetter(Getter):
74 def get(
75 self, carrier: dict, key: str
76 ) -> typing.Optional[typing.List[str]]:
77 """Getter implementation to retrieve a HTTP header value from the
78 PEP3333-conforming WSGI environ
79
80 Args:
81 carrier: WSGI environ object
82 key: header name in environ object
83 Returns:
84 A list with a single string with the header value if it exists,
85 else None.
86 """
87 environ_key = "HTTP_" + key.upper().replace("-", "_")
88 value = carrier.get(environ_key)
89 if value is not None:
90 return [value]
91 return None
92
93 def keys(self, carrier):
94 return [
95 key[_CARRIER_KEY_PREFIX_LEN:].lower().replace("_", "-")
96 for key in carrier
97 if key.startswith(_CARRIER_KEY_PREFIX)
98 ]
99
100
101 wsgi_getter = WSGIGetter()
102
103
104 def setifnotnone(dic, key, value):
105 if value is not None:
106 dic[key] = value
107
108
109 def collect_request_attributes(environ):
110 """Collects HTTP request attributes from the PEP3333-conforming
111 WSGI environ and returns a dictionary to be used as span creation attributes."""
112
113 result = {
114 "http.method": environ.get("REQUEST_METHOD"),
115 "http.server_name": environ.get("SERVER_NAME"),
116 "http.scheme": environ.get("wsgi.url_scheme"),
117 }
118
119 host_port = environ.get("SERVER_PORT")
120 if host_port is not None:
121 result.update({"net.host.port": int(host_port)})
122
123 setifnotnone(result, "http.host", environ.get("HTTP_HOST"))
124 target = environ.get("RAW_URI")
125 if target is None: # Note: `"" or None is None`
126 target = environ.get("REQUEST_URI")
127 if target is not None:
128 result["http.target"] = target
129 else:
130 result["http.url"] = wsgiref_util.request_uri(environ)
131
132 remote_addr = environ.get("REMOTE_ADDR")
133 if remote_addr:
134 result["net.peer.ip"] = remote_addr
135 remote_host = environ.get("REMOTE_HOST")
136 if remote_host and remote_host != remote_addr:
137 result["net.peer.name"] = remote_host
138
139 user_agent = environ.get("HTTP_USER_AGENT")
140 if user_agent is not None and len(user_agent) > 0:
141 result["http.user_agent"] = user_agent
142
143 setifnotnone(result, "net.peer.port", environ.get("REMOTE_PORT"))
144 flavor = environ.get("SERVER_PROTOCOL", "")
145 if flavor.upper().startswith(_HTTP_VERSION_PREFIX):
146 flavor = flavor[len(_HTTP_VERSION_PREFIX) :]
147 if flavor:
148 result["http.flavor"] = flavor
149
150 return result
151
152
153 def add_response_attributes(
154 span, start_response_status, response_headers
155 ): # pylint: disable=unused-argument
156 """Adds HTTP response attributes to span using the arguments
157 passed to a PEP3333-conforming start_response callable."""
158 if not span.is_recording():
159 return
160 status_code, _ = start_response_status.split(" ", 1)
161
162 try:
163 status_code = int(status_code)
164 except ValueError:
165 span.set_status(
166 Status(
167 StatusCode.ERROR,
168 "Non-integer HTTP status: " + repr(status_code),
169 )
170 )
171 else:
172 span.set_attribute("http.status_code", status_code)
173 span.set_status(Status(http_status_to_status_code(status_code)))
174
175
176 def get_default_span_name(environ):
177 """Default implementation for name_callback, returns HTTP {METHOD_NAME}."""
178 return "HTTP {}".format(environ.get("REQUEST_METHOD", "")).strip()
179
180
181 class OpenTelemetryMiddleware:
182 """The WSGI application middleware.
183
184 This class is a PEP 3333 conforming WSGI middleware that starts and
185 annotates spans for any requests it is invoked with.
186
187 Args:
188 wsgi: The WSGI application callable to forward requests to.
189 name_callback: Callback which calculates a generic span name for an
190 incoming HTTP request based on the PEP3333 WSGI environ.
191 Optional: Defaults to get_default_span_name.
192 """
193
194 def __init__(self, wsgi, name_callback=get_default_span_name):
195 self.wsgi = wsgi
196 self.tracer = trace.get_tracer(__name__, __version__)
197 self.name_callback = name_callback
198
199 @staticmethod
200 def _create_start_response(span, start_response):
201 @functools.wraps(start_response)
202 def _start_response(status, response_headers, *args, **kwargs):
203 add_response_attributes(span, status, response_headers)
204 return start_response(status, response_headers, *args, **kwargs)
205
206 return _start_response
207
208 def __call__(self, environ, start_response):
209 """The WSGI application
210
211 Args:
212 environ: A WSGI environment.
213 start_response: The WSGI start_response callable.
214 """
215
216 token = context.attach(extract(environ, getter=wsgi_getter))
217 span_name = self.name_callback(environ)
218
219 span = self.tracer.start_span(
220 span_name,
221 kind=trace.SpanKind.SERVER,
222 attributes=collect_request_attributes(environ),
223 )
224
225 try:
226 with trace.use_span(span):
227 start_response = self._create_start_response(
228 span, start_response
229 )
230 iterable = self.wsgi(environ, start_response)
231 return _end_span_after_iterating(
232 iterable, span, self.tracer, token
233 )
234 except Exception as ex:
235 if span.is_recording():
236 span.set_status(Status(StatusCode.ERROR, str(ex)))
237 span.end()
238 context.detach(token)
239 raise
240
241
242 # Put this in a subfunction to not delay the call to the wrapped
243 # WSGI application (instrumentation should change the application
244 # behavior as little as possible).
245 def _end_span_after_iterating(iterable, span, tracer, token):
246 try:
247 with trace.use_span(span):
248 for yielded in iterable:
249 yield yielded
250 finally:
251 close = getattr(iterable, "close", None)
252 if close:
253 close()
254 span.end()
255 context.detach(token)
256
[end of instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py
@@ -186,21 +186,26 @@
Args:
wsgi: The WSGI application callable to forward requests to.
- name_callback: Callback which calculates a generic span name for an
- incoming HTTP request based on the PEP3333 WSGI environ.
- Optional: Defaults to get_default_span_name.
+ request_hook: Optional callback which is called with the server span and WSGI
+ environ object for every incoming request.
+ response_hook: Optional callback which is called with the server span,
+ WSGI environ, status_code and response_headers for every
+ incoming request.
"""
- def __init__(self, wsgi, name_callback=get_default_span_name):
+ def __init__(self, wsgi, request_hook=None, response_hook=None):
self.wsgi = wsgi
self.tracer = trace.get_tracer(__name__, __version__)
- self.name_callback = name_callback
+ self.request_hook = request_hook
+ self.response_hook = response_hook
@staticmethod
- def _create_start_response(span, start_response):
+ def _create_start_response(span, start_response, response_hook):
@functools.wraps(start_response)
def _start_response(status, response_headers, *args, **kwargs):
add_response_attributes(span, status, response_headers)
+ if response_hook:
+ response_hook(status, response_headers)
return start_response(status, response_headers, *args, **kwargs)
return _start_response
@@ -214,18 +219,24 @@
"""
token = context.attach(extract(environ, getter=wsgi_getter))
- span_name = self.name_callback(environ)
span = self.tracer.start_span(
- span_name,
+ get_default_span_name(environ),
kind=trace.SpanKind.SERVER,
attributes=collect_request_attributes(environ),
)
+ if self.request_hook:
+ self.request_hook(span, environ)
+
+ response_hook = self.response_hook
+ if response_hook:
+ response_hook = functools.partial(response_hook, span, environ)
+
try:
with trace.use_span(span):
start_response = self._create_start_response(
- span, start_response
+ span, start_response, response_hook
)
iterable = self.wsgi(environ, start_response)
return _end_span_after_iterating(
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py\n@@ -186,21 +186,26 @@\n \n Args:\n wsgi: The WSGI application callable to forward requests to.\n- name_callback: Callback which calculates a generic span name for an\n- incoming HTTP request based on the PEP3333 WSGI environ.\n- Optional: Defaults to get_default_span_name.\n+ request_hook: Optional callback which is called with the server span and WSGI\n+ environ object for every incoming request.\n+ response_hook: Optional callback which is called with the server span,\n+ WSGI environ, status_code and response_headers for every\n+ incoming request.\n \"\"\"\n \n- def __init__(self, wsgi, name_callback=get_default_span_name):\n+ def __init__(self, wsgi, request_hook=None, response_hook=None):\n self.wsgi = wsgi\n self.tracer = trace.get_tracer(__name__, __version__)\n- self.name_callback = name_callback\n+ self.request_hook = request_hook\n+ self.response_hook = response_hook\n \n @staticmethod\n- def _create_start_response(span, start_response):\n+ def _create_start_response(span, start_response, response_hook):\n @functools.wraps(start_response)\n def _start_response(status, response_headers, *args, **kwargs):\n add_response_attributes(span, status, response_headers)\n+ if response_hook:\n+ response_hook(status, response_headers)\n return start_response(status, response_headers, *args, **kwargs)\n \n return _start_response\n@@ -214,18 +219,24 @@\n \"\"\"\n \n token = context.attach(extract(environ, getter=wsgi_getter))\n- span_name = self.name_callback(environ)\n \n span = self.tracer.start_span(\n- span_name,\n+ get_default_span_name(environ),\n kind=trace.SpanKind.SERVER,\n attributes=collect_request_attributes(environ),\n )\n \n+ if self.request_hook:\n+ self.request_hook(span, environ)\n+\n+ response_hook = self.response_hook\n+ if response_hook:\n+ response_hook = functools.partial(response_hook, span, environ)\n+\n try:\n with trace.use_span(span):\n start_response = self._create_start_response(\n- span, start_response\n+ span, start_response, response_hook\n )\n iterable = self.wsgi(environ, start_response)\n return _end_span_after_iterating(\n", "issue": "[WSGI] Replace span name callback with request and response hooks \nWSGI instrumentation accepts a span name callback which should be replaced with more generic request/response callbacks (hooks). 
\r\n\r\nDetails: https://github.com/open-telemetry/opentelemetry-python-contrib/issues/408\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis library provides a WSGI middleware that can be used on any WSGI framework\n(such as Django / Flask) to track requests timing through OpenTelemetry.\n\nUsage (Flask)\n-------------\n\n.. code-block:: python\n\n from flask import Flask\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n\n app = Flask(__name__)\n app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)\n\n @app.route(\"/\")\n def hello():\n return \"Hello!\"\n\n if __name__ == \"__main__\":\n app.run(debug=True)\n\n\nUsage (Django)\n--------------\n\nModify the application's ``wsgi.py`` file as shown below.\n\n.. code-block:: python\n\n import os\n from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware\n from django.core.wsgi import get_wsgi_application\n\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'application.settings')\n\n application = get_wsgi_application()\n application = OpenTelemetryMiddleware(application)\n\nAPI\n---\n\"\"\"\n\nimport functools\nimport typing\nimport wsgiref.util as wsgiref_util\n\nfrom opentelemetry import context, trace\nfrom opentelemetry.instrumentation.utils import http_status_to_status_code\nfrom opentelemetry.instrumentation.wsgi.version import __version__\nfrom opentelemetry.propagate import extract\nfrom opentelemetry.propagators.textmap import Getter\nfrom opentelemetry.trace.status import Status, StatusCode\n\n_HTTP_VERSION_PREFIX = \"HTTP/\"\n_CARRIER_KEY_PREFIX = \"HTTP_\"\n_CARRIER_KEY_PREFIX_LEN = len(_CARRIER_KEY_PREFIX)\n\n\nclass WSGIGetter(Getter):\n def get(\n self, carrier: dict, key: str\n ) -> typing.Optional[typing.List[str]]:\n \"\"\"Getter implementation to retrieve a HTTP header value from the\n PEP3333-conforming WSGI environ\n\n Args:\n carrier: WSGI environ object\n key: header name in environ object\n Returns:\n A list with a single string with the header value if it exists,\n else None.\n \"\"\"\n environ_key = \"HTTP_\" + key.upper().replace(\"-\", \"_\")\n value = carrier.get(environ_key)\n if value is not None:\n return [value]\n return None\n\n def keys(self, carrier):\n return [\n key[_CARRIER_KEY_PREFIX_LEN:].lower().replace(\"_\", \"-\")\n for key in carrier\n if key.startswith(_CARRIER_KEY_PREFIX)\n ]\n\n\nwsgi_getter = WSGIGetter()\n\n\ndef setifnotnone(dic, key, value):\n if value is not None:\n dic[key] = value\n\n\ndef collect_request_attributes(environ):\n \"\"\"Collects HTTP request attributes from the PEP3333-conforming\n WSGI environ and returns a dictionary to be used as span creation attributes.\"\"\"\n\n result = {\n \"http.method\": environ.get(\"REQUEST_METHOD\"),\n \"http.server_name\": environ.get(\"SERVER_NAME\"),\n \"http.scheme\": environ.get(\"wsgi.url_scheme\"),\n }\n\n host_port = environ.get(\"SERVER_PORT\")\n if host_port is not None:\n result.update({\"net.host.port\": 
int(host_port)})\n\n setifnotnone(result, \"http.host\", environ.get(\"HTTP_HOST\"))\n target = environ.get(\"RAW_URI\")\n if target is None: # Note: `\"\" or None is None`\n target = environ.get(\"REQUEST_URI\")\n if target is not None:\n result[\"http.target\"] = target\n else:\n result[\"http.url\"] = wsgiref_util.request_uri(environ)\n\n remote_addr = environ.get(\"REMOTE_ADDR\")\n if remote_addr:\n result[\"net.peer.ip\"] = remote_addr\n remote_host = environ.get(\"REMOTE_HOST\")\n if remote_host and remote_host != remote_addr:\n result[\"net.peer.name\"] = remote_host\n\n user_agent = environ.get(\"HTTP_USER_AGENT\")\n if user_agent is not None and len(user_agent) > 0:\n result[\"http.user_agent\"] = user_agent\n\n setifnotnone(result, \"net.peer.port\", environ.get(\"REMOTE_PORT\"))\n flavor = environ.get(\"SERVER_PROTOCOL\", \"\")\n if flavor.upper().startswith(_HTTP_VERSION_PREFIX):\n flavor = flavor[len(_HTTP_VERSION_PREFIX) :]\n if flavor:\n result[\"http.flavor\"] = flavor\n\n return result\n\n\ndef add_response_attributes(\n span, start_response_status, response_headers\n): # pylint: disable=unused-argument\n \"\"\"Adds HTTP response attributes to span using the arguments\n passed to a PEP3333-conforming start_response callable.\"\"\"\n if not span.is_recording():\n return\n status_code, _ = start_response_status.split(\" \", 1)\n\n try:\n status_code = int(status_code)\n except ValueError:\n span.set_status(\n Status(\n StatusCode.ERROR,\n \"Non-integer HTTP status: \" + repr(status_code),\n )\n )\n else:\n span.set_attribute(\"http.status_code\", status_code)\n span.set_status(Status(http_status_to_status_code(status_code)))\n\n\ndef get_default_span_name(environ):\n \"\"\"Default implementation for name_callback, returns HTTP {METHOD_NAME}.\"\"\"\n return \"HTTP {}\".format(environ.get(\"REQUEST_METHOD\", \"\")).strip()\n\n\nclass OpenTelemetryMiddleware:\n \"\"\"The WSGI application middleware.\n\n This class is a PEP 3333 conforming WSGI middleware that starts and\n annotates spans for any requests it is invoked with.\n\n Args:\n wsgi: The WSGI application callable to forward requests to.\n name_callback: Callback which calculates a generic span name for an\n incoming HTTP request based on the PEP3333 WSGI environ.\n Optional: Defaults to get_default_span_name.\n \"\"\"\n\n def __init__(self, wsgi, name_callback=get_default_span_name):\n self.wsgi = wsgi\n self.tracer = trace.get_tracer(__name__, __version__)\n self.name_callback = name_callback\n\n @staticmethod\n def _create_start_response(span, start_response):\n @functools.wraps(start_response)\n def _start_response(status, response_headers, *args, **kwargs):\n add_response_attributes(span, status, response_headers)\n return start_response(status, response_headers, *args, **kwargs)\n\n return _start_response\n\n def __call__(self, environ, start_response):\n \"\"\"The WSGI application\n\n Args:\n environ: A WSGI environment.\n start_response: The WSGI start_response callable.\n \"\"\"\n\n token = context.attach(extract(environ, getter=wsgi_getter))\n span_name = self.name_callback(environ)\n\n span = self.tracer.start_span(\n span_name,\n kind=trace.SpanKind.SERVER,\n attributes=collect_request_attributes(environ),\n )\n\n try:\n with trace.use_span(span):\n start_response = self._create_start_response(\n span, start_response\n )\n iterable = self.wsgi(environ, start_response)\n return _end_span_after_iterating(\n iterable, span, self.tracer, token\n )\n except Exception as ex:\n if span.is_recording():\n 
span.set_status(Status(StatusCode.ERROR, str(ex)))\n span.end()\n context.detach(token)\n raise\n\n\n# Put this in a subfunction to not delay the call to the wrapped\n# WSGI application (instrumentation should change the application\n# behavior as little as possible).\ndef _end_span_after_iterating(iterable, span, tracer, token):\n try:\n with trace.use_span(span):\n for yielded in iterable:\n yield yielded\n finally:\n close = getattr(iterable, \"close\", None)\n if close:\n close()\n span.end()\n context.detach(token)\n", "path": "instrumentation/opentelemetry-instrumentation-wsgi/src/opentelemetry/instrumentation/wsgi/__init__.py"}]} | 3,120 | 643 |
gh_patches_debug_5604 | rasdani/github-patches | git_diff | bokeh__bokeh-9682 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOCUMENTATION] typo in texas.py
Superfluous parenthesis on [line 34 of texas.py](https://github.com/bokeh/bokeh/blob/aa60b9c9d554fbd349a21da37d616c5b8eda8c09/examples/plotting/file/texas.py#L34). It shows up in the hover tooltip.
Not sure whether you want a PR for something so small? So far I haven't found any other corrections that could be added in.
</issue>
<code>
[start of examples/plotting/file/texas.py]
1 from bokeh.io import show
2 from bokeh.models import LogColorMapper
3 from bokeh.palettes import Viridis6 as palette
4 from bokeh.plotting import figure
5 from bokeh.sampledata.unemployment import data as unemployment
6 from bokeh.sampledata.us_counties import data as counties
7
8 palette = tuple(reversed(palette))
9
10 counties = {
11 code: county for code, county in counties.items() if county["state"] == "tx"
12 }
13
14 county_xs = [county["lons"] for county in counties.values()]
15 county_ys = [county["lats"] for county in counties.values()]
16
17 county_names = [county['name'] for county in counties.values()]
18 county_rates = [unemployment[county_id] for county_id in counties]
19 color_mapper = LogColorMapper(palette=palette)
20
21 data=dict(
22 x=county_xs,
23 y=county_ys,
24 name=county_names,
25 rate=county_rates,
26 )
27
28 TOOLS = "pan,wheel_zoom,reset,hover,save"
29
30 p = figure(
31 title="Texas Unemployment, 2009", tools=TOOLS,
32 x_axis_location=None, y_axis_location=None,
33 tooltips=[
34 ("Name", "@name"), ("Unemployment rate)", "@rate%"), ("(Long, Lat)", "($x, $y)")
35 ])
36 p.grid.grid_line_color = None
37 p.hover.point_policy = "follow_mouse"
38
39 p.patches('x', 'y', source=data,
40 fill_color={'field': 'rate', 'transform': color_mapper},
41 fill_alpha=0.7, line_color="white", line_width=0.5)
42
43 show(p)
44
[end of examples/plotting/file/texas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/plotting/file/texas.py b/examples/plotting/file/texas.py
--- a/examples/plotting/file/texas.py
+++ b/examples/plotting/file/texas.py
@@ -31,7 +31,7 @@
title="Texas Unemployment, 2009", tools=TOOLS,
x_axis_location=None, y_axis_location=None,
tooltips=[
- ("Name", "@name"), ("Unemployment rate)", "@rate%"), ("(Long, Lat)", "($x, $y)")
+ ("Name", "@name"), ("Unemployment rate", "@rate%"), ("(Long, Lat)", "($x, $y)")
])
p.grid.grid_line_color = None
p.hover.point_policy = "follow_mouse"
| {"golden_diff": "diff --git a/examples/plotting/file/texas.py b/examples/plotting/file/texas.py\n--- a/examples/plotting/file/texas.py\n+++ b/examples/plotting/file/texas.py\n@@ -31,7 +31,7 @@\n title=\"Texas Unemployment, 2009\", tools=TOOLS,\n x_axis_location=None, y_axis_location=None,\n tooltips=[\n- (\"Name\", \"@name\"), (\"Unemployment rate)\", \"@rate%\"), (\"(Long, Lat)\", \"($x, $y)\")\n+ (\"Name\", \"@name\"), (\"Unemployment rate\", \"@rate%\"), (\"(Long, Lat)\", \"($x, $y)\")\n ])\n p.grid.grid_line_color = None\n p.hover.point_policy = \"follow_mouse\"\n", "issue": "[DOCUMENTATION] typo in texas.py\nSuperfluous parenthesis on [line 34 of texas.py](https://github.com/bokeh/bokeh/blob/aa60b9c9d554fbd349a21da37d616c5b8eda8c09/examples/plotting/file/texas.py#L34). Shows up in the hover tool tip.\r\n\r\nNot sure whether you want a PR for something so small? So far I haven't found any other corrections that could be added in.\n", "before_files": [{"content": "from bokeh.io import show\nfrom bokeh.models import LogColorMapper\nfrom bokeh.palettes import Viridis6 as palette\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.unemployment import data as unemployment\nfrom bokeh.sampledata.us_counties import data as counties\n\npalette = tuple(reversed(palette))\n\ncounties = {\n code: county for code, county in counties.items() if county[\"state\"] == \"tx\"\n}\n\ncounty_xs = [county[\"lons\"] for county in counties.values()]\ncounty_ys = [county[\"lats\"] for county in counties.values()]\n\ncounty_names = [county['name'] for county in counties.values()]\ncounty_rates = [unemployment[county_id] for county_id in counties]\ncolor_mapper = LogColorMapper(palette=palette)\n\ndata=dict(\n x=county_xs,\n y=county_ys,\n name=county_names,\n rate=county_rates,\n)\n\nTOOLS = \"pan,wheel_zoom,reset,hover,save\"\n\np = figure(\n title=\"Texas Unemployment, 2009\", tools=TOOLS,\n x_axis_location=None, y_axis_location=None,\n tooltips=[\n (\"Name\", \"@name\"), (\"Unemployment rate)\", \"@rate%\"), (\"(Long, Lat)\", \"($x, $y)\")\n ])\np.grid.grid_line_color = None\np.hover.point_policy = \"follow_mouse\"\n\np.patches('x', 'y', source=data,\n fill_color={'field': 'rate', 'transform': color_mapper},\n fill_alpha=0.7, line_color=\"white\", line_width=0.5)\n\nshow(p)\n", "path": "examples/plotting/file/texas.py"}]} | 1,090 | 166 |
gh_patches_debug_50247 | rasdani/github-patches | git_diff | sopel-irc__sopel-2154 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `.clearpronouns` command
### The problem
Users might set their pronouns by mistake or just to test the functionality, and then they are stuck.
### The solution
Add an "unsetpronouns" that deletes pronoun information for the nick.
Something like this might work.
```python
@plugin.command('unsetpronouns')
def unset_pronouns(bot, trigger):
    bot.db.delete_nick_value(trigger.nick, 'pronouns')
```
</issue>
<code>
[start of sopel/modules/pronouns.py]
1 """
2 pronouns.py - Sopel Pronouns Plugin
3 Copyright © 2016, Elsie Powell
4 Licensed under the Eiffel Forum License 2.
5
6 https://sopel.chat
7 """
8 from __future__ import generator_stop
9
10 from sopel import plugin
11
12
13 # Copied from pronoun.is, leaving a *lot* out. If
14 # https://github.com/witch-house/pronoun.is/pull/96 gets merged, using that
15 # would be a lot easier.
16 # If ambiguous, the earlier one will be used.
17 KNOWN_SETS = {
18 "ze/hir": "ze/hir/hir/hirs/hirself",
19 "ze/zir": "ze/zir/zir/zirs/zirself",
20 "they/.../themselves": "they/them/their/theirs/themselves",
21 "they/.../themself": "they/them/their/theirs/themself",
22 "she/her": "she/her/her/hers/herself",
23 "he/him": "he/him/his/his/himself",
24 "xey/xem": "xey/xem/xyr/xyrs/xemself",
25 "sie/hir": "sie/hir/hir/hirs/hirself",
26 "it/it": "it/it/its/its/itself",
27 "ey/em": "ey/em/eir/eirs/eirself",
28 }
29
30
31 @plugin.command('pronouns')
32 @plugin.example('.pronouns Embolalia')
33 def pronouns(bot, trigger):
34 """Show the pronouns for a given user, defaulting to the current user if left blank."""
35 if not trigger.group(3):
36 pronouns = bot.db.get_nick_value(trigger.nick, 'pronouns')
37 if pronouns:
38 say_pronouns(bot, trigger.nick, pronouns)
39 else:
40 bot.reply("I don't know your pronouns! You can set them with "
41 "{}setpronouns".format(bot.config.core.help_prefix))
42 else:
43 pronouns = bot.db.get_nick_value(trigger.group(3), 'pronouns')
44 if pronouns:
45 say_pronouns(bot, trigger.group(3), pronouns)
46 elif trigger.group(3) == bot.nick:
47 # You can stuff an entry into the database manually for your bot's
48 # gender, but like… it's a bot.
49 bot.say(
50 "I am a bot. Beep boop. My pronouns are it/it/its/its/itself. "
51 "See https://pronoun.is/it for examples."
52 )
53 else:
54 bot.reply("I don't know {}'s pronouns. They can set them with "
55 "{}setpronouns".format(trigger.group(3),
56 bot.config.core.help_prefix))
57
58
59 def say_pronouns(bot, nick, pronouns):
60 for short, set_ in KNOWN_SETS.items():
61 if pronouns == set_:
62 break
63 short = pronouns
64
65 bot.say("{}'s pronouns are {}. See https://pronoun.is/{} for "
66 "examples.".format(nick, pronouns, short))
67
68
69 @plugin.command('setpronouns')
70 @plugin.example('.setpronouns fae/faer/faer/faers/faerself')
71 @plugin.example('.setpronouns they/them/theirs')
72 @plugin.example('.setpronouns they/them')
73 def set_pronouns(bot, trigger):
74 """Set your pronouns."""
75 pronouns = trigger.group(2)
76 if not pronouns:
77 bot.reply('What pronouns do you use?')
78 return
79
80 disambig = ''
81 requested_pronoun_split = pronouns.split("/")
82 if len(requested_pronoun_split) < 5:
83 matching = []
84 for known_pronoun_set in KNOWN_SETS.values():
85 known_pronoun_split = known_pronoun_set.split("/")
86 if known_pronoun_set.startswith(pronouns + "/") or (
87 len(requested_pronoun_split) == 3
88 and (
89 (
90 # "they/.../themself"
91 requested_pronoun_split[1] == "..."
92 and requested_pronoun_split[0] == known_pronoun_split[0]
93 and requested_pronoun_split[2] == known_pronoun_split[4]
94 )
95 or (
96 # "they/them/theirs"
97 requested_pronoun_split[0:2] == known_pronoun_split[0:2]
98 and requested_pronoun_split[2] == known_pronoun_split[3]
99 )
100 )
101 ):
102 matching.append(known_pronoun_set)
103
104 if len(matching) == 0:
105 bot.reply(
106 "I'm sorry, I don't know those pronouns. "
107 "You can give me a set I don't know by formatting it "
108 "subject/object/possessive-determiner/possessive-pronoun/"
109 "reflexive, as in: they/them/their/theirs/themselves"
110 )
111 return
112
113 pronouns = matching[0]
114 if len(matching) > 1:
115 disambig = " Or, if you meant one of these, please tell me: {}".format(
116 ", ".join(matching[1:])
117 )
118
119 bot.db.set_nick_value(trigger.nick, 'pronouns', pronouns)
120 bot.reply(
121 "Thanks for telling me! I'll remember you use {}.{}".format(pronouns, disambig)
122 )
123
[end of sopel/modules/pronouns.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sopel/modules/pronouns.py b/sopel/modules/pronouns.py
--- a/sopel/modules/pronouns.py
+++ b/sopel/modules/pronouns.py
@@ -120,3 +120,10 @@
bot.reply(
"Thanks for telling me! I'll remember you use {}.{}".format(pronouns, disambig)
)
+
+
[email protected]('clearpronouns')
+def unset_pronouns(bot, trigger):
+ """Clear pronouns for the given user."""
+ bot.db.delete_nick_value(trigger.nick, 'pronouns')
+ bot.reply("Okay, I'll forget your pronouns.")
| {"golden_diff": "diff --git a/sopel/modules/pronouns.py b/sopel/modules/pronouns.py\n--- a/sopel/modules/pronouns.py\n+++ b/sopel/modules/pronouns.py\n@@ -120,3 +120,10 @@\n bot.reply(\n \"Thanks for telling me! I'll remember you use {}.{}\".format(pronouns, disambig)\n )\n+\n+\[email protected]('clearpronouns')\n+def unset_pronouns(bot, trigger):\n+ \"\"\"Clear pronouns for the given user.\"\"\"\n+ bot.db.delete_nick_value(trigger.nick, 'pronouns')\n+ bot.reply(\"Okay, I'll forget your pronouns.\")\n", "issue": "Add `.clearpronouns` command\n### The problem\r\nUsers might set their pronouns by mistake or just to test the functionality and then they are stuck.\r\n\r\n### The solution\r\n\r\nAdd an \"unsetpronouns\" that deletes pronoun information for the nick. \r\nSomething like this might work. \r\n\r\n```python\r\[email protected]('unsetpronouns')\r\ndef unset_pronouns(bot, trigger):\r\n bot.db.delete_nick_value(trigger.nick, 'pronouns')\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\npronouns.py - Sopel Pronouns Plugin\nCopyright \u00a9 2016, Elsie Powell\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import generator_stop\n\nfrom sopel import plugin\n\n\n# Copied from pronoun.is, leaving a *lot* out. If\n# https://github.com/witch-house/pronoun.is/pull/96 gets merged, using that\n# would be a lot easier.\n# If ambiguous, the earlier one will be used.\nKNOWN_SETS = {\n \"ze/hir\": \"ze/hir/hir/hirs/hirself\",\n \"ze/zir\": \"ze/zir/zir/zirs/zirself\",\n \"they/.../themselves\": \"they/them/their/theirs/themselves\",\n \"they/.../themself\": \"they/them/their/theirs/themself\",\n \"she/her\": \"she/her/her/hers/herself\",\n \"he/him\": \"he/him/his/his/himself\",\n \"xey/xem\": \"xey/xem/xyr/xyrs/xemself\",\n \"sie/hir\": \"sie/hir/hir/hirs/hirself\",\n \"it/it\": \"it/it/its/its/itself\",\n \"ey/em\": \"ey/em/eir/eirs/eirself\",\n}\n\n\[email protected]('pronouns')\[email protected]('.pronouns Embolalia')\ndef pronouns(bot, trigger):\n \"\"\"Show the pronouns for a given user, defaulting to the current user if left blank.\"\"\"\n if not trigger.group(3):\n pronouns = bot.db.get_nick_value(trigger.nick, 'pronouns')\n if pronouns:\n say_pronouns(bot, trigger.nick, pronouns)\n else:\n bot.reply(\"I don't know your pronouns! You can set them with \"\n \"{}setpronouns\".format(bot.config.core.help_prefix))\n else:\n pronouns = bot.db.get_nick_value(trigger.group(3), 'pronouns')\n if pronouns:\n say_pronouns(bot, trigger.group(3), pronouns)\n elif trigger.group(3) == bot.nick:\n # You can stuff an entry into the database manually for your bot's\n # gender, but like\u2026 it's a bot.\n bot.say(\n \"I am a bot. Beep boop. My pronouns are it/it/its/its/itself. \"\n \"See https://pronoun.is/it for examples.\"\n )\n else:\n bot.reply(\"I don't know {}'s pronouns. They can set them with \"\n \"{}setpronouns\".format(trigger.group(3),\n bot.config.core.help_prefix))\n\n\ndef say_pronouns(bot, nick, pronouns):\n for short, set_ in KNOWN_SETS.items():\n if pronouns == set_:\n break\n short = pronouns\n\n bot.say(\"{}'s pronouns are {}. 
See https://pronoun.is/{} for \"\n \"examples.\".format(nick, pronouns, short))\n\n\[email protected]('setpronouns')\[email protected]('.setpronouns fae/faer/faer/faers/faerself')\[email protected]('.setpronouns they/them/theirs')\[email protected]('.setpronouns they/them')\ndef set_pronouns(bot, trigger):\n \"\"\"Set your pronouns.\"\"\"\n pronouns = trigger.group(2)\n if not pronouns:\n bot.reply('What pronouns do you use?')\n return\n\n disambig = ''\n requested_pronoun_split = pronouns.split(\"/\")\n if len(requested_pronoun_split) < 5:\n matching = []\n for known_pronoun_set in KNOWN_SETS.values():\n known_pronoun_split = known_pronoun_set.split(\"/\")\n if known_pronoun_set.startswith(pronouns + \"/\") or (\n len(requested_pronoun_split) == 3\n and (\n (\n # \"they/.../themself\"\n requested_pronoun_split[1] == \"...\"\n and requested_pronoun_split[0] == known_pronoun_split[0]\n and requested_pronoun_split[2] == known_pronoun_split[4]\n )\n or (\n # \"they/them/theirs\"\n requested_pronoun_split[0:2] == known_pronoun_split[0:2]\n and requested_pronoun_split[2] == known_pronoun_split[3]\n )\n )\n ):\n matching.append(known_pronoun_set)\n\n if len(matching) == 0:\n bot.reply(\n \"I'm sorry, I don't know those pronouns. \"\n \"You can give me a set I don't know by formatting it \"\n \"subject/object/possessive-determiner/possessive-pronoun/\"\n \"reflexive, as in: they/them/their/theirs/themselves\"\n )\n return\n\n pronouns = matching[0]\n if len(matching) > 1:\n disambig = \" Or, if you meant one of these, please tell me: {}\".format(\n \", \".join(matching[1:])\n )\n\n bot.db.set_nick_value(trigger.nick, 'pronouns', pronouns)\n bot.reply(\n \"Thanks for telling me! I'll remember you use {}.{}\".format(pronouns, disambig)\n )\n", "path": "sopel/modules/pronouns.py"}]} | 2,094 | 148 |
gh_patches_debug_20340 | rasdani/github-patches | git_diff | carpentries__amy-1458 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include badge & badge date in member view
In the member view [such as this](https://amy.software-carpentry.org/fiscal/membership/121/), we see a list of instructor training seats with the columns Event, Person, and Task.
Can we also have two more columns in this view: Badge (SWC, DC, and/or LC) and Date Badged?
Task appears to be a concatenation of Event, Person, and Role. If Role is always going to be Learner, we can remove this column to make room for the new columns suggested above.
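For what it's worth, a rough sketch of the query change this might need is below. The related names (`person__award_set`, the `Award` model and its `badge` relation) are guesses based on how other views prefetch data, not a confirmed design:
```python
from django.db.models import Prefetch

from workshops.models import Award, Membership, Task

# Hypothetical queryset for MembershipDetails: pull each learner task together
# with that person's awards, so the template can add Badge / Date Badged
# columns without issuing extra per-row queries.
queryset = Membership.objects.select_related('organization').prefetch_related(
    Prefetch(
        'task_set',
        queryset=Task.objects.select_related('event', 'person').prefetch_related(
            Prefetch(
                'person__award_set',  # assumed reverse name from Person to Award
                queryset=Award.objects.select_related('badge'),
            )
        ),
    )
)
```
The template could then iterate over `task.person.award_set.all()` to render each badge name and its award date in the two new columns.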
</issue>
<code>
[start of amy/fiscal/views.py]
1 from django.contrib.auth.mixins import (
2 PermissionRequiredMixin,
3 )
4 from django.db.models import (
5 F,
6 Q,
7 Count,
8 Prefetch,
9 )
10 from django.db.models.functions import Now
11 from django.urls import reverse, reverse_lazy
12
13 from fiscal.filters import (
14 OrganizationFilter,
15 MembershipFilter,
16 )
17 from fiscal.forms import (
18 OrganizationForm,
19 OrganizationCreateForm,
20 MembershipForm,
21 MembershipCreateForm,
22 SponsorshipForm,
23 )
24 from workshops.base_views import (
25 AMYCreateView,
26 AMYUpdateView,
27 AMYDeleteView,
28 AMYListView,
29 RedirectSupportMixin,
30 PrepopulationSupportMixin,
31 AMYDetailView,
32 )
33 from workshops.models import (
34 Organization,
35 Membership,
36 Sponsorship,
37 )
38 from workshops.util import (
39 OnlyForAdminsMixin,
40 )
41
42
43 # ------------------------------------------------------------
44 # Organization related views
45 # ------------------------------------------------------------
46
47 class AllOrganizations(OnlyForAdminsMixin, AMYListView):
48 context_object_name = 'all_organizations'
49 template_name = 'fiscal/all_organizations.html'
50 filter_class = OrganizationFilter
51 queryset = Organization.objects.prefetch_related(Prefetch(
52 'membership_set',
53 to_attr='current_memberships',
54 queryset=Membership.objects.filter(
55 agreement_start__lte=Now(),
56 agreement_end__gte=Now(),
57 )
58 ))
59 title = 'All Organizations'
60
61
62 class OrganizationDetails(OnlyForAdminsMixin, AMYDetailView):
63 queryset = Organization.objects.all()
64 context_object_name = 'organization'
65 template_name = 'fiscal/organization.html'
66 slug_field = 'domain'
67 slug_url_kwarg = 'org_domain'
68
69 def get_context_data(self, **kwargs):
70 context = super().get_context_data(**kwargs)
71 context['title'] = 'Organization {0}'.format(self.object)
72 return context
73
74
75 class OrganizationCreate(OnlyForAdminsMixin, PermissionRequiredMixin,
76 AMYCreateView):
77 permission_required = 'workshops.add_organization'
78 model = Organization
79 form_class = OrganizationCreateForm
80
81
82 class OrganizationUpdate(OnlyForAdminsMixin, PermissionRequiredMixin,
83 AMYUpdateView):
84 permission_required = 'workshops.change_organization'
85 model = Organization
86 form_class = OrganizationForm
87 slug_field = 'domain'
88 slug_url_kwarg = 'org_domain'
89 template_name = 'generic_form_with_comments.html'
90
91
92 class OrganizationDelete(OnlyForAdminsMixin, PermissionRequiredMixin,
93 AMYDeleteView):
94 model = Organization
95 slug_field = 'domain'
96 slug_url_kwarg = 'org_domain'
97 permission_required = 'workshops.delete_organization'
98 success_url = reverse_lazy('all_organizations')
99
100
101 # ------------------------------------------------------------
102 # Membership related views
103 # ------------------------------------------------------------
104
105 class AllMemberships(OnlyForAdminsMixin, AMYListView):
106 context_object_name = 'all_memberships'
107 template_name = 'fiscal/all_memberships.html'
108 filter_class = MembershipFilter
109 queryset = Membership.objects.annotate(
110 instructor_training_seats_total=(
111 F('seats_instructor_training') +
112 F('additional_instructor_training_seats')
113 ),
114 # for future reference, in case someone would want to implement
115 # this annotation
116 # instructor_training_seats_utilized=(
117 # Count('task', filter=Q(task__role__name='learner'))
118 # ),
119 instructor_training_seats_remaining=(
120 F('seats_instructor_training') +
121 F('additional_instructor_training_seats') -
122 Count('task', filter=Q(task__role__name='learner'))
123 ),
124 )
125 title = 'All Memberships'
126
127
128 class MembershipDetails(OnlyForAdminsMixin, AMYDetailView):
129 queryset = (
130 Membership.objects
131 .select_related('organization')
132 .prefetch_related('task_set')
133 )
134 context_object_name = 'membership'
135 template_name = 'fiscal/membership.html'
136 pk_url_kwarg = 'membership_id'
137
138 def get_context_data(self, **kwargs):
139 context = super().get_context_data(**kwargs)
140 context['title'] = '{0}'.format(self.object)
141 return context
142
143
144 class MembershipCreate(OnlyForAdminsMixin, PermissionRequiredMixin,
145 PrepopulationSupportMixin, AMYCreateView):
146 permission_required = [
147 'workshops.add_membership',
148 'workshops.change_organization',
149 ]
150 model = Membership
151 form_class = MembershipCreateForm
152 populate_fields = ['organization']
153
154
155 class MembershipUpdate(OnlyForAdminsMixin, PermissionRequiredMixin,
156 RedirectSupportMixin, AMYUpdateView):
157 permission_required = 'workshops.change_membership'
158 model = Membership
159 form_class = MembershipForm
160 pk_url_kwarg = 'membership_id'
161 template_name = 'generic_form_with_comments.html'
162
163
164 class MembershipDelete(OnlyForAdminsMixin, PermissionRequiredMixin,
165 AMYDeleteView):
166 model = Membership
167 permission_required = 'workshops.delete_membership'
168 pk_url_kwarg = 'membership_id'
169
170 def get_success_url(self):
171 return reverse('organization_details', args=[
172 self.get_object().organization.domain])
173
174
175 # ------------------------------------------------------------
176 # Sponsorship related views
177 # ------------------------------------------------------------
178
179 class SponsorshipCreate(OnlyForAdminsMixin, PermissionRequiredMixin,
180 AMYCreateView):
181 model = Sponsorship
182 permission_required = 'workshops.add_sponsorship'
183 form_class = SponsorshipForm
184
185 def get_success_url(self):
186 return reverse('event_edit', args=[self.object.event.slug]) + \
187 '#sponsors'
188
189
190 class SponsorshipDelete(OnlyForAdminsMixin, PermissionRequiredMixin,
191 AMYDeleteView):
192 model = Sponsorship
193 permission_required = 'workshops.delete_sponsorship'
194
195 def get_success_url(self):
196 return reverse('event_edit', args=[self.get_object().event.slug]) + \
197 '#sponsors'
198
[end of amy/fiscal/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/amy/fiscal/views.py b/amy/fiscal/views.py
--- a/amy/fiscal/views.py
+++ b/amy/fiscal/views.py
@@ -34,6 +34,8 @@
Organization,
Membership,
Sponsorship,
+ Task,
+ Award,
)
from workshops.util import (
OnlyForAdminsMixin,
@@ -126,11 +128,16 @@
class MembershipDetails(OnlyForAdminsMixin, AMYDetailView):
- queryset = (
- Membership.objects
- .select_related('organization')
- .prefetch_related('task_set')
- )
+ prefetch_awards = Prefetch('person__award_set',
+ queryset=Award.objects.select_related('badge'))
+ queryset = Membership.objects.select_related('organization') \
+ .prefetch_related(
+ Prefetch(
+ 'task_set',
+ queryset=Task.objects.select_related('event', 'person')
+ .prefetch_related(prefetch_awards)
+ )
+ )
context_object_name = 'membership'
template_name = 'fiscal/membership.html'
pk_url_kwarg = 'membership_id'
| {"golden_diff": "diff --git a/amy/fiscal/views.py b/amy/fiscal/views.py\n--- a/amy/fiscal/views.py\n+++ b/amy/fiscal/views.py\n@@ -34,6 +34,8 @@\n Organization,\n Membership,\n Sponsorship,\n+ Task,\n+ Award,\n )\n from workshops.util import (\n OnlyForAdminsMixin,\n@@ -126,11 +128,16 @@\n \n \n class MembershipDetails(OnlyForAdminsMixin, AMYDetailView):\n- queryset = (\n- Membership.objects\n- .select_related('organization')\n- .prefetch_related('task_set')\n- )\n+ prefetch_awards = Prefetch('person__award_set',\n+ queryset=Award.objects.select_related('badge'))\n+ queryset = Membership.objects.select_related('organization') \\\n+ .prefetch_related(\n+ Prefetch(\n+ 'task_set',\n+ queryset=Task.objects.select_related('event', 'person')\n+ .prefetch_related(prefetch_awards)\n+ )\n+ )\n context_object_name = 'membership'\n template_name = 'fiscal/membership.html'\n pk_url_kwarg = 'membership_id'\n", "issue": "Include badge & badge date in member view \nIn the member view [such as this](https://amy.software-carpentry.org/fiscal/membership/121/), we see a list of instructor training seats with the columns Event, Person, and Task.\r\n\r\nCan we also have two more columns in this view: Badge (SWC, DC, and/or LC) and Date Badged?\r\n\r\nTask appears to be a concatenation of Event, Person, and Role. If Role is always going to be Learner, we can remove this column to make room for the new columns suggested above.\n", "before_files": [{"content": "from django.contrib.auth.mixins import (\n PermissionRequiredMixin,\n)\nfrom django.db.models import (\n F,\n Q,\n Count,\n Prefetch,\n)\nfrom django.db.models.functions import Now\nfrom django.urls import reverse, reverse_lazy\n\nfrom fiscal.filters import (\n OrganizationFilter,\n MembershipFilter,\n)\nfrom fiscal.forms import (\n OrganizationForm,\n OrganizationCreateForm,\n MembershipForm,\n MembershipCreateForm,\n SponsorshipForm,\n)\nfrom workshops.base_views import (\n AMYCreateView,\n AMYUpdateView,\n AMYDeleteView,\n AMYListView,\n RedirectSupportMixin,\n PrepopulationSupportMixin,\n AMYDetailView,\n)\nfrom workshops.models import (\n Organization,\n Membership,\n Sponsorship,\n)\nfrom workshops.util import (\n OnlyForAdminsMixin,\n)\n\n\n# ------------------------------------------------------------\n# Organization related views\n# ------------------------------------------------------------\n\nclass AllOrganizations(OnlyForAdminsMixin, AMYListView):\n context_object_name = 'all_organizations'\n template_name = 'fiscal/all_organizations.html'\n filter_class = OrganizationFilter\n queryset = Organization.objects.prefetch_related(Prefetch(\n 'membership_set',\n to_attr='current_memberships',\n queryset=Membership.objects.filter(\n agreement_start__lte=Now(),\n agreement_end__gte=Now(),\n )\n ))\n title = 'All Organizations'\n\n\nclass OrganizationDetails(OnlyForAdminsMixin, AMYDetailView):\n queryset = Organization.objects.all()\n context_object_name = 'organization'\n template_name = 'fiscal/organization.html'\n slug_field = 'domain'\n slug_url_kwarg = 'org_domain'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['title'] = 'Organization {0}'.format(self.object)\n return context\n\n\nclass OrganizationCreate(OnlyForAdminsMixin, PermissionRequiredMixin,\n AMYCreateView):\n permission_required = 'workshops.add_organization'\n model = Organization\n form_class = OrganizationCreateForm\n\n\nclass OrganizationUpdate(OnlyForAdminsMixin, PermissionRequiredMixin,\n AMYUpdateView):\n permission_required = 
'workshops.change_organization'\n model = Organization\n form_class = OrganizationForm\n slug_field = 'domain'\n slug_url_kwarg = 'org_domain'\n template_name = 'generic_form_with_comments.html'\n\n\nclass OrganizationDelete(OnlyForAdminsMixin, PermissionRequiredMixin,\n AMYDeleteView):\n model = Organization\n slug_field = 'domain'\n slug_url_kwarg = 'org_domain'\n permission_required = 'workshops.delete_organization'\n success_url = reverse_lazy('all_organizations')\n\n\n# ------------------------------------------------------------\n# Membership related views\n# ------------------------------------------------------------\n\nclass AllMemberships(OnlyForAdminsMixin, AMYListView):\n context_object_name = 'all_memberships'\n template_name = 'fiscal/all_memberships.html'\n filter_class = MembershipFilter\n queryset = Membership.objects.annotate(\n instructor_training_seats_total=(\n F('seats_instructor_training') +\n F('additional_instructor_training_seats')\n ),\n # for future reference, in case someone would want to implement\n # this annotation\n # instructor_training_seats_utilized=(\n # Count('task', filter=Q(task__role__name='learner'))\n # ),\n instructor_training_seats_remaining=(\n F('seats_instructor_training') +\n F('additional_instructor_training_seats') -\n Count('task', filter=Q(task__role__name='learner'))\n ),\n )\n title = 'All Memberships'\n\n\nclass MembershipDetails(OnlyForAdminsMixin, AMYDetailView):\n queryset = (\n Membership.objects\n .select_related('organization')\n .prefetch_related('task_set')\n )\n context_object_name = 'membership'\n template_name = 'fiscal/membership.html'\n pk_url_kwarg = 'membership_id'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['title'] = '{0}'.format(self.object)\n return context\n\n\nclass MembershipCreate(OnlyForAdminsMixin, PermissionRequiredMixin,\n PrepopulationSupportMixin, AMYCreateView):\n permission_required = [\n 'workshops.add_membership',\n 'workshops.change_organization',\n ]\n model = Membership\n form_class = MembershipCreateForm\n populate_fields = ['organization']\n\n\nclass MembershipUpdate(OnlyForAdminsMixin, PermissionRequiredMixin,\n RedirectSupportMixin, AMYUpdateView):\n permission_required = 'workshops.change_membership'\n model = Membership\n form_class = MembershipForm\n pk_url_kwarg = 'membership_id'\n template_name = 'generic_form_with_comments.html'\n\n\nclass MembershipDelete(OnlyForAdminsMixin, PermissionRequiredMixin,\n AMYDeleteView):\n model = Membership\n permission_required = 'workshops.delete_membership'\n pk_url_kwarg = 'membership_id'\n\n def get_success_url(self):\n return reverse('organization_details', args=[\n self.get_object().organization.domain])\n\n\n# ------------------------------------------------------------\n# Sponsorship related views\n# ------------------------------------------------------------\n\nclass SponsorshipCreate(OnlyForAdminsMixin, PermissionRequiredMixin,\n AMYCreateView):\n model = Sponsorship\n permission_required = 'workshops.add_sponsorship'\n form_class = SponsorshipForm\n\n def get_success_url(self):\n return reverse('event_edit', args=[self.object.event.slug]) + \\\n '#sponsors'\n\n\nclass SponsorshipDelete(OnlyForAdminsMixin, PermissionRequiredMixin,\n AMYDeleteView):\n model = Sponsorship\n permission_required = 'workshops.delete_sponsorship'\n\n def get_success_url(self):\n return reverse('event_edit', args=[self.get_object().event.slug]) + \\\n '#sponsors'\n", "path": "amy/fiscal/views.py"}]} | 2,387 | 257 |
gh_patches_debug_41079 | rasdani/github-patches | git_diff | carpentries__amy-637 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: add filtering by workshop type in published events
We need to grab DC-only or SWC-only published events. Probably there's no need to return the type of the event in the structure, just filtering.
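A minimal sketch of what this could look like inside `PublishedEvents.get_queryset` (the repeatable `tag` query parameter and the `SWC`/`DC` tag values are assumptions for illustration; `Tag` would need to be imported from `workshops.models`):
```python
from workshops.models import Tag

def get_queryset(self):
    queryset = Event.objects.published_events()
    # e.g. ?tag=SWC or ?tag=SWC&tag=DC (parameter name is hypothetical)
    tag_names = self.request.query_params.getlist('tag')
    if tag_names:
        for tag in Tag.objects.filter(name__in=tag_names):
            # chaining filters keeps only events that carry every requested tag
            queryset = queryset.filter(tags=tag)
    return queryset
```
That keeps the response structure unchanged and only narrows the queryset, which matches the "just filtering" idea above.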
</issue>
<code>
[start of api/serializers.py]
1 from rest_framework import serializers
2
3 from workshops.models import Badge, Airport, Person, Event, TodoItem
4
5
6 class PersonUsernameSerializer(serializers.ModelSerializer):
7 name = serializers.CharField(source='get_full_name')
8 user = serializers.CharField(source='username')
9
10 class Meta:
11 model = Person
12 fields = ('name', 'user', )
13
14
15 class PersonNameEmailSerializer(serializers.ModelSerializer):
16 name = serializers.CharField(source='get_full_name')
17
18 class Meta:
19 model = Person
20 fields = ('name', 'email')
21
22
23 class ExportBadgesSerializer(serializers.ModelSerializer):
24 persons = PersonUsernameSerializer(many=True, source='person_set')
25
26 class Meta:
27 model = Badge
28 fields = ('name', 'persons')
29
30
31 class ExportInstructorLocationsSerializer(serializers.ModelSerializer):
32 name = serializers.CharField(source='fullname')
33 instructors = PersonUsernameSerializer(many=True, source='person_set')
34
35 class Meta:
36 model = Airport
37 fields = ('name', 'latitude', 'longitude', 'instructors', 'country')
38
39
40 class EventSerializer(serializers.ModelSerializer):
41 humandate = serializers.SerializerMethodField()
42 country = serializers.CharField()
43 start = serializers.DateField(format=None)
44 end = serializers.DateField(format=None)
45 url = serializers.URLField(source='website_url')
46 eventbrite_id = serializers.CharField(source='reg_key')
47
48 def get_humandate(self, obj):
49 """Render start and end dates as human-readable short date."""
50 return EventSerializer.human_readable_date(obj.start, obj.end)
51
52 @staticmethod
53 def human_readable_date(date1, date2):
54 """Render start and end dates as human-readable short date."""
55 if date1 and not date2:
56 return '{:%b %d, %Y}-???'.format(date1)
57 elif date2 and not date1:
58 return '???-{:%b %d, %Y}'.format(date2)
59 elif not date2 and not date1:
60 return '???-???'
61
62 if date1.year == date2.year:
63 if date1.month == date2.month:
64 return '{:%b %d}-{:%d, %Y}'.format(date1, date2)
65 else:
66 return '{:%b %d}-{:%b %d, %Y}'.format(date1, date2)
67 else:
68 return '{:%b %d, %Y}-{:%b %d, %Y}'.format(date1, date2)
69
70 class Meta:
71 model = Event
72 fields = (
73 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',
74 'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',
75 )
76
77
78 class TodoSerializer(serializers.ModelSerializer):
79 content = serializers.SerializerMethodField()
80 start = serializers.DateField(format=None, source='due')
81
82 class Meta:
83 model = TodoItem
84 fields = (
85 'content', 'start',
86 )
87
88 def get_content(self, obj):
89 """Return HTML containing interesting information for admins. This
90 will be displayed on labels in the timeline."""
91
92 return '<a href="{url}">{event}</a><br><small>{todo}</small>'.format(
93 url=obj.event.get_absolute_url(),
94 event=obj.event.get_ident(),
95 todo=obj.title,
96 )
97
[end of api/serializers.py]
[start of api/views.py]
1 import datetime
2
3 from rest_framework.generics import ListAPIView
4 from rest_framework.metadata import SimpleMetadata
5 from rest_framework.permissions import (
6 IsAuthenticatedOrReadOnly, IsAuthenticated
7 )
8 from rest_framework.response import Response
9 from rest_framework.reverse import reverse
10 from rest_framework.views import APIView
11
12 from workshops.models import Badge, Airport, Event, TodoItem
13 from workshops.util import get_members, default_membership_cutoff
14
15 from .serializers import (
16 PersonNameEmailSerializer,
17 ExportBadgesSerializer,
18 ExportInstructorLocationsSerializer,
19 EventSerializer,
20 TodoSerializer,
21 )
22
23
24 class QueryMetadata(SimpleMetadata):
25 """Additionally include info about query parameters."""
26
27 def determine_metadata(self, request, view):
28 data = super().determine_metadata(request, view)
29
30 try:
31 data['query_params'] = view.get_query_params_description()
32 except AttributeError:
33 pass
34
35 return data
36
37
38 class ApiRoot(APIView):
39 def get(self, request, format=None):
40 return Response({
41 'export-badges': reverse('api:export-badges', request=request,
42 format=format),
43 'export-instructors': reverse('api:export-instructors',
44 request=request, format=format),
45 'export-members': reverse('api:export-members', request=request,
46 format=format),
47 'events-published': reverse('api:events-published',
48 request=request, format=format),
49 'user-todos': reverse('api:user-todos',
50 request=request, format=format),
51 })
52
53
54 class ExportBadgesView(ListAPIView):
55 """List all badges and people who have them."""
56 permission_classes = (IsAuthenticatedOrReadOnly, )
57 paginator = None # disable pagination
58
59 queryset = Badge.objects.prefetch_related('person_set')
60 serializer_class = ExportBadgesSerializer
61
62
63 class ExportInstructorLocationsView(ListAPIView):
64 """List all airports and instructors located near them."""
65 permission_classes = (IsAuthenticatedOrReadOnly, )
66 paginator = None # disable pagination
67
68 queryset = Airport.objects.exclude(person=None) \
69 .prefetch_related('person_set')
70 serializer_class = ExportInstructorLocationsSerializer
71
72
73 class ExportMembersView(ListAPIView):
74 """Show everyone who qualifies as an SCF member."""
75 permission_classes = (IsAuthenticatedOrReadOnly, )
76 paginator = None # disable pagination
77
78 serializer_class = PersonNameEmailSerializer
79
80 def get_queryset(self):
81 earliest_default, latest_default = default_membership_cutoff()
82
83 earliest = self.request.query_params.get('earliest', None)
84 if earliest is not None:
85 try:
86 earliest = datetime.datetime.strptime(earliest, '%Y-%m-%d') \
87 .date()
88 except ValueError:
89 earliest = earliest_default
90 else:
91 earliest = earliest_default
92
93 latest = self.request.query_params.get('latest', None)
94 if latest is not None:
95 try:
96 latest = datetime.datetime.strptime(latest, '%Y-%m-%d').date()
97 except ValueError:
98 latest = latest_default
99 else:
100 latest = latest_default
101
102 return get_members(earliest, latest)
103
104 def get_query_params_description(self):
105 return {
106 'earliest': 'Date of earliest workshop someone taught at.'
107 ' Defaults to -2*365 days from current date.',
108 'latest': 'Date of latest workshop someone taught at.'
109 ' Defaults to current date.',
110 }
111
112
113 class PublishedEvents(ListAPIView):
114 """List published events."""
115
116 # only events that have both a starting date and a URL
117 permission_classes = (IsAuthenticatedOrReadOnly, )
118 paginator = None # disable pagination
119
120 serializer_class = EventSerializer
121
122 metadata_class = QueryMetadata
123
124 def get_queryset(self):
125 """Optionally restrict the returned event set to events hosted by
126 specific host or administered by specific admin."""
127 queryset = Event.objects.published_events()
128
129 administrator = self.request.query_params.get('administrator', None)
130 if administrator is not None:
131 queryset = queryset.filter(administrator__pk=administrator)
132
133 host = self.request.query_params.get('host', None)
134 if host is not None:
135 queryset = queryset.filter(host__pk=host)
136
137 return queryset
138
139 def get_query_params_description(self):
140 return {
141 'administrator': 'ID of the organization responsible for admin '
142 'work on events.',
143 'host': 'ID of the organization hosting the event.',
144 }
145
146
147 class UserTodoItems(ListAPIView):
148 permission_classes = (IsAuthenticated, )
149 paginator = None
150 serializer_class = TodoSerializer
151
152 def get_queryset(self):
153 """Return current TODOs for currently logged in user."""
154 return TodoItem.objects.user(self.request.user) \
155 .incomplete() \
156 .select_related('event')
157
[end of api/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/serializers.py b/api/serializers.py
--- a/api/serializers.py
+++ b/api/serializers.py
@@ -1,6 +1,6 @@
from rest_framework import serializers
-from workshops.models import Badge, Airport, Person, Event, TodoItem
+from workshops.models import Badge, Airport, Person, Event, TodoItem, Tag
class PersonUsernameSerializer(serializers.ModelSerializer):
@@ -37,6 +37,12 @@
fields = ('name', 'latitude', 'longitude', 'instructors', 'country')
+class TagSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = Tag
+ fields = ('name', )
+
+
class EventSerializer(serializers.ModelSerializer):
humandate = serializers.SerializerMethodField()
country = serializers.CharField()
@@ -44,6 +50,7 @@
end = serializers.DateField(format=None)
url = serializers.URLField(source='website_url')
eventbrite_id = serializers.CharField(source='reg_key')
+ tags = TagSerializer(many=True)
def get_humandate(self, obj):
"""Render start and end dates as human-readable short date."""
@@ -72,6 +79,7 @@
fields = (
'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',
'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',
+ 'tags',
)
diff --git a/api/views.py b/api/views.py
--- a/api/views.py
+++ b/api/views.py
@@ -1,5 +1,6 @@
import datetime
+from django.db.models import Q
from rest_framework.generics import ListAPIView
from rest_framework.metadata import SimpleMetadata
from rest_framework.permissions import (
@@ -9,7 +10,7 @@
from rest_framework.reverse import reverse
from rest_framework.views import APIView
-from workshops.models import Badge, Airport, Event, TodoItem
+from workshops.models import Badge, Airport, Event, TodoItem, Tag
from workshops.util import get_members, default_membership_cutoff
from .serializers import (
@@ -134,6 +135,12 @@
if host is not None:
queryset = queryset.filter(host__pk=host)
+ tags = self.request.query_params.getlist('tag', None)
+ if tags:
+ tags = Tag.objects.filter(name__in=tags)
+ for tag in tags:
+ queryset = queryset.filter(tags=tag)
+
return queryset
def get_query_params_description(self):
@@ -141,6 +148,8 @@
'administrator': 'ID of the organization responsible for admin '
'work on events.',
'host': 'ID of the organization hosting the event.',
+ 'tag': "Events' tag(s). You can use this parameter multiple "
+ "times.",
}
| {"golden_diff": "diff --git a/api/serializers.py b/api/serializers.py\n--- a/api/serializers.py\n+++ b/api/serializers.py\n@@ -1,6 +1,6 @@\n from rest_framework import serializers\n \n-from workshops.models import Badge, Airport, Person, Event, TodoItem\n+from workshops.models import Badge, Airport, Person, Event, TodoItem, Tag\n \n \n class PersonUsernameSerializer(serializers.ModelSerializer):\n@@ -37,6 +37,12 @@\n fields = ('name', 'latitude', 'longitude', 'instructors', 'country')\n \n \n+class TagSerializer(serializers.ModelSerializer):\n+ class Meta:\n+ model = Tag\n+ fields = ('name', )\n+\n+\n class EventSerializer(serializers.ModelSerializer):\n humandate = serializers.SerializerMethodField()\n country = serializers.CharField()\n@@ -44,6 +50,7 @@\n end = serializers.DateField(format=None)\n url = serializers.URLField(source='website_url')\n eventbrite_id = serializers.CharField(source='reg_key')\n+ tags = TagSerializer(many=True)\n \n def get_humandate(self, obj):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n@@ -72,6 +79,7 @@\n fields = (\n 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',\n 'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',\n+ 'tags',\n )\n \n \ndiff --git a/api/views.py b/api/views.py\n--- a/api/views.py\n+++ b/api/views.py\n@@ -1,5 +1,6 @@\n import datetime\n \n+from django.db.models import Q\n from rest_framework.generics import ListAPIView\n from rest_framework.metadata import SimpleMetadata\n from rest_framework.permissions import (\n@@ -9,7 +10,7 @@\n from rest_framework.reverse import reverse\n from rest_framework.views import APIView\n \n-from workshops.models import Badge, Airport, Event, TodoItem\n+from workshops.models import Badge, Airport, Event, TodoItem, Tag\n from workshops.util import get_members, default_membership_cutoff\n \n from .serializers import (\n@@ -134,6 +135,12 @@\n if host is not None:\n queryset = queryset.filter(host__pk=host)\n \n+ tags = self.request.query_params.getlist('tag', None)\n+ if tags:\n+ tags = Tag.objects.filter(name__in=tags)\n+ for tag in tags:\n+ queryset = queryset.filter(tags=tag)\n+\n return queryset\n \n def get_query_params_description(self):\n@@ -141,6 +148,8 @@\n 'administrator': 'ID of the organization responsible for admin '\n 'work on events.',\n 'host': 'ID of the organization hosting the event.',\n+ 'tag': \"Events' tag(s). You can use this parameter multiple \"\n+ \"times.\",\n }\n", "issue": "API: add filtering by workshop type in published events\nWe need to grab DC-only, or SWC-only published events. 
Probably there's no need to return type of the event in the structure, just filtering.\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom workshops.models import Badge, Airport, Person, Event, TodoItem\n\n\nclass PersonUsernameSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='get_full_name')\n user = serializers.CharField(source='username')\n\n class Meta:\n model = Person\n fields = ('name', 'user', )\n\n\nclass PersonNameEmailSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='get_full_name')\n\n class Meta:\n model = Person\n fields = ('name', 'email')\n\n\nclass ExportBadgesSerializer(serializers.ModelSerializer):\n persons = PersonUsernameSerializer(many=True, source='person_set')\n\n class Meta:\n model = Badge\n fields = ('name', 'persons')\n\n\nclass ExportInstructorLocationsSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source='fullname')\n instructors = PersonUsernameSerializer(many=True, source='person_set')\n\n class Meta:\n model = Airport\n fields = ('name', 'latitude', 'longitude', 'instructors', 'country')\n\n\nclass EventSerializer(serializers.ModelSerializer):\n humandate = serializers.SerializerMethodField()\n country = serializers.CharField()\n start = serializers.DateField(format=None)\n end = serializers.DateField(format=None)\n url = serializers.URLField(source='website_url')\n eventbrite_id = serializers.CharField(source='reg_key')\n\n def get_humandate(self, obj):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n return EventSerializer.human_readable_date(obj.start, obj.end)\n\n @staticmethod\n def human_readable_date(date1, date2):\n \"\"\"Render start and end dates as human-readable short date.\"\"\"\n if date1 and not date2:\n return '{:%b %d, %Y}-???'.format(date1)\n elif date2 and not date1:\n return '???-{:%b %d, %Y}'.format(date2)\n elif not date2 and not date1:\n return '???-???'\n\n if date1.year == date2.year:\n if date1.month == date2.month:\n return '{:%b %d}-{:%d, %Y}'.format(date1, date2)\n else:\n return '{:%b %d}-{:%b %d, %Y}'.format(date1, date2)\n else:\n return '{:%b %d, %Y}-{:%b %d, %Y}'.format(date1, date2)\n\n class Meta:\n model = Event\n fields = (\n 'slug', 'start', 'end', 'url', 'humandate', 'contact', 'country',\n 'venue', 'address', 'latitude', 'longitude', 'eventbrite_id',\n )\n\n\nclass TodoSerializer(serializers.ModelSerializer):\n content = serializers.SerializerMethodField()\n start = serializers.DateField(format=None, source='due')\n\n class Meta:\n model = TodoItem\n fields = (\n 'content', 'start',\n )\n\n def get_content(self, obj):\n \"\"\"Return HTML containing interesting information for admins. 
This\n will be displayed on labels in the timeline.\"\"\"\n\n return '<a href=\"{url}\">{event}</a><br><small>{todo}</small>'.format(\n url=obj.event.get_absolute_url(),\n event=obj.event.get_ident(),\n todo=obj.title,\n )\n", "path": "api/serializers.py"}, {"content": "import datetime\n\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.metadata import SimpleMetadata\nfrom rest_framework.permissions import (\n IsAuthenticatedOrReadOnly, IsAuthenticated\n)\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\nfrom rest_framework.views import APIView\n\nfrom workshops.models import Badge, Airport, Event, TodoItem\nfrom workshops.util import get_members, default_membership_cutoff\n\nfrom .serializers import (\n PersonNameEmailSerializer,\n ExportBadgesSerializer,\n ExportInstructorLocationsSerializer,\n EventSerializer,\n TodoSerializer,\n)\n\n\nclass QueryMetadata(SimpleMetadata):\n \"\"\"Additionally include info about query parameters.\"\"\"\n\n def determine_metadata(self, request, view):\n data = super().determine_metadata(request, view)\n\n try:\n data['query_params'] = view.get_query_params_description()\n except AttributeError:\n pass\n\n return data\n\n\nclass ApiRoot(APIView):\n def get(self, request, format=None):\n return Response({\n 'export-badges': reverse('api:export-badges', request=request,\n format=format),\n 'export-instructors': reverse('api:export-instructors',\n request=request, format=format),\n 'export-members': reverse('api:export-members', request=request,\n format=format),\n 'events-published': reverse('api:events-published',\n request=request, format=format),\n 'user-todos': reverse('api:user-todos',\n request=request, format=format),\n })\n\n\nclass ExportBadgesView(ListAPIView):\n \"\"\"List all badges and people who have them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Badge.objects.prefetch_related('person_set')\n serializer_class = ExportBadgesSerializer\n\n\nclass ExportInstructorLocationsView(ListAPIView):\n \"\"\"List all airports and instructors located near them.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n queryset = Airport.objects.exclude(person=None) \\\n .prefetch_related('person_set')\n serializer_class = ExportInstructorLocationsSerializer\n\n\nclass ExportMembersView(ListAPIView):\n \"\"\"Show everyone who qualifies as an SCF member.\"\"\"\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n serializer_class = PersonNameEmailSerializer\n\n def get_queryset(self):\n earliest_default, latest_default = default_membership_cutoff()\n\n earliest = self.request.query_params.get('earliest', None)\n if earliest is not None:\n try:\n earliest = datetime.datetime.strptime(earliest, '%Y-%m-%d') \\\n .date()\n except ValueError:\n earliest = earliest_default\n else:\n earliest = earliest_default\n\n latest = self.request.query_params.get('latest', None)\n if latest is not None:\n try:\n latest = datetime.datetime.strptime(latest, '%Y-%m-%d').date()\n except ValueError:\n latest = latest_default\n else:\n latest = latest_default\n\n return get_members(earliest, latest)\n\n def get_query_params_description(self):\n return {\n 'earliest': 'Date of earliest workshop someone taught at.'\n ' Defaults to -2*365 days from current date.',\n 'latest': 'Date of latest workshop someone taught at.'\n ' Defaults to current date.',\n }\n\n\nclass 
PublishedEvents(ListAPIView):\n \"\"\"List published events.\"\"\"\n\n # only events that have both a starting date and a URL\n permission_classes = (IsAuthenticatedOrReadOnly, )\n paginator = None # disable pagination\n\n serializer_class = EventSerializer\n\n metadata_class = QueryMetadata\n\n def get_queryset(self):\n \"\"\"Optionally restrict the returned event set to events hosted by\n specific host or administered by specific admin.\"\"\"\n queryset = Event.objects.published_events()\n\n administrator = self.request.query_params.get('administrator', None)\n if administrator is not None:\n queryset = queryset.filter(administrator__pk=administrator)\n\n host = self.request.query_params.get('host', None)\n if host is not None:\n queryset = queryset.filter(host__pk=host)\n\n return queryset\n\n def get_query_params_description(self):\n return {\n 'administrator': 'ID of the organization responsible for admin '\n 'work on events.',\n 'host': 'ID of the organization hosting the event.',\n }\n\n\nclass UserTodoItems(ListAPIView):\n permission_classes = (IsAuthenticated, )\n paginator = None\n serializer_class = TodoSerializer\n\n def get_queryset(self):\n \"\"\"Return current TODOs for currently logged in user.\"\"\"\n return TodoItem.objects.user(self.request.user) \\\n .incomplete() \\\n .select_related('event')\n", "path": "api/views.py"}]} | 2,861 | 626 |
gh_patches_debug_29591 | rasdani/github-patches | git_diff | cupy__cupy-1374 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
matrix power is missing from cupy.linalg.
There is a corresponding `numpy.linalg.matrix_power` in NumPy. Is there a reason why this has not made it into CuPy?
If it helps, I am willing to take a shot at this one, though I might need a bit of hand-holding as I've not much experience with Cython code.
</issue>
<code>
[start of cupy/linalg/__init__.py]
1 # Functions from the following NumPy document
2 # https://docs.scipy.org/doc/numpy/reference/routines.linalg.html
3
4 # "NOQA" to suppress flake8 warning
5 from cupy.linalg import decomposition # NOQA
6 from cupy.linalg import eigenvalue # NOQA
7 from cupy.linalg import einsum # NOQA
8 from cupy.linalg import norms # NOQA
9 from cupy.linalg.norms import det # NOQA
10 from cupy.linalg.norms import matrix_rank # NOQA
11 from cupy.linalg.norms import norm # NOQA
12 from cupy.linalg.norms import slogdet # NOQA
13 from cupy.linalg import product # NOQA
14 from cupy.linalg import solve # NOQA
15
16 from cupy.linalg.decomposition import cholesky # NOQA
17 from cupy.linalg.decomposition import qr # NOQA
18 from cupy.linalg.decomposition import svd # NOQA
19
20 from cupy.linalg.eigenvalue import eigh # NOQA
21 from cupy.linalg.eigenvalue import eigvalsh # NOQA
22
23 from cupy.linalg.solve import inv # NOQA
24 from cupy.linalg.solve import pinv # NOQA
25 from cupy.linalg.solve import solve # NOQA
26 from cupy.linalg.solve import tensorinv # NOQA
27 from cupy.linalg.solve import tensorsolve # NOQA
28
[end of cupy/linalg/__init__.py]
[start of cupy/linalg/product.py]
1 import collections
2
3 import numpy
4 import six
5
6 import cupy
7 from cupy import core
8 from cupy import internal
9
10
11 matmul = core.matmul
12
13
14 def dot(a, b, out=None):
15 """Returns a dot product of two arrays.
16
17 For arrays with more than one axis, it computes the dot product along the
18 last axis of ``a`` and the second-to-last axis of ``b``. This is just a
19 matrix product if the both arrays are 2-D. For 1-D arrays, it uses their
20 unique axis as an axis to take dot product over.
21
22 Args:
23 a (cupy.ndarray): The left argument.
24 b (cupy.ndarray): The right argument.
25 out (cupy.ndarray): Output array.
26
27 Returns:
28 cupy.ndarray: The dot product of ``a`` and ``b``.
29
30 .. seealso:: :func:`numpy.dot`
31
32 """
33 # TODO(okuta): check type
34 return a.dot(b, out)
35
36
37 def vdot(a, b):
38 """Returns the dot product of two vectors.
39
40 The input arrays are flattened into 1-D vectors and then it performs inner
41 product of these vectors.
42
43 Args:
44 a (cupy.ndarray): The first argument.
45 b (cupy.ndarray): The second argument.
46
47 Returns:
48 cupy.ndarray: Zero-dimensional array of the dot product result.
49
50 .. seealso:: :func:`numpy.vdot`
51
52 """
53 if a.size != b.size:
54 raise ValueError('Axis dimension mismatch')
55 if a.dtype.kind == 'c':
56 a = a.conj()
57
58 return core.tensordot_core(a, b, None, 1, 1, a.size, ())
59
60
61 def inner(a, b):
62 """Returns the inner product of two arrays.
63
64 It uses the last axis of each argument to take sum product.
65
66 Args:
67 a (cupy.ndarray): The first argument.
68 b (cupy.ndarray): The second argument.
69
70 Returns:
71 cupy.ndarray: The inner product of ``a`` and ``b``.
72
73 .. seealso:: :func:`numpy.inner`
74
75 """
76 a_ndim = a.ndim
77 b_ndim = b.ndim
78 if a_ndim == 0 or b_ndim == 0:
79 return cupy.multiply(a, b)
80
81 a_axis = a_ndim - 1
82 b_axis = b_ndim - 1
83
84 if a.shape[-1] != b.shape[-1]:
85 raise ValueError('Axis dimension mismatch')
86
87 if a_axis:
88 a = cupy.rollaxis(a, a_axis, 0)
89 if b_axis:
90 b = cupy.rollaxis(b, b_axis, 0)
91
92 ret_shape = a.shape[1:] + b.shape[1:]
93
94 k = a.shape[0]
95 n = a.size // k
96 m = b.size // k
97
98 return core.tensordot_core(a, b, None, n, m, k, ret_shape)
99
100
101 def outer(a, b, out=None):
102 """Returns the outer product of two vectors.
103
104 The input arrays are flattened into 1-D vectors and then it performs outer
105 product of these vectors.
106
107 Args:
108 a (cupy.ndarray): The first argument.
109 b (cupy.ndarray): The second argument.
110 out (cupy.ndarray): Output array.
111
112 Returns:
113 cupy.ndarray: 2-D array of the outer product of ``a`` and ``b``.
114
115 .. seealso:: :func:`numpy.outer`
116
117 """
118 n = a.size
119 m = b.size
120 ret_shape = (n, m)
121
122 if out is None:
123 return core.tensordot_core(a, b, None, n, m, 1, ret_shape)
124
125 if out.size != n * m:
126 raise ValueError('Output array has an invalid size')
127 if out.flags.c_contiguous:
128 return core.tensordot_core(a, b, out, n, m, 1, ret_shape)
129 else:
130 out[:] = core.tensordot_core(a, b, None, n, m, 1, ret_shape)
131 return out
132
133
134 def tensordot(a, b, axes=2):
135 """Returns the tensor dot product of two arrays along specified axes.
136
137 This is equivalent to compute dot product along the specified axes which
138 are treated as one axis by reshaping.
139
140 Args:
141 a (cupy.ndarray): The first argument.
142 b (cupy.ndarray): The second argument.
143 axes:
144 - If it is an integer, then ``axes`` axes at the last of ``a`` and
145 the first of ``b`` are used.
146 - If it is a pair of sequences of integers, then these two
147 sequences specify the list of axes for ``a`` and ``b``. The
148 corresponding axes are paired for sum-product.
149
150 Returns:
151 cupy.ndarray: The tensor dot product of ``a`` and ``b`` along the
152 axes specified by ``axes``.
153
154 .. seealso:: :func:`numpy.tensordot`
155
156 """
157 a_ndim = a.ndim
158 b_ndim = b.ndim
159 if a_ndim == 0 or b_ndim == 0:
160 if axes != 0 and axes != ((), ()):
161 raise ValueError('An input is zero-dim while axes has dimensions')
162 return cupy.multiply(a, b)
163
164 if isinstance(axes, collections.Sequence):
165 if len(axes) != 2:
166 raise ValueError('Axes must consist of two arrays.')
167 a_axes, b_axes = axes
168 if numpy.isscalar(a_axes):
169 a_axes = a_axes,
170 if numpy.isscalar(b_axes):
171 b_axes = b_axes,
172 else:
173 a_axes = tuple(six.moves.range(a_ndim - axes, a_ndim))
174 b_axes = tuple(six.moves.range(axes))
175
176 sum_ndim = len(a_axes)
177 if sum_ndim != len(b_axes):
178 raise ValueError('Axes length mismatch')
179
180 for a_axis, b_axis in zip(a_axes, b_axes):
181 if a.shape[a_axis] != b.shape[b_axis]:
182 raise ValueError('Axis dimension mismatch')
183
184 # Make the axes non-negative
185 a = _move_axes_to_head(a, [axis % a_ndim for axis in a_axes])
186 b = _move_axes_to_head(b, [axis % b_ndim for axis in b_axes])
187
188 ret_shape = a.shape[sum_ndim:] + b.shape[sum_ndim:]
189
190 k = internal.prod(a.shape[:sum_ndim])
191 n = a.size // k
192 m = b.size // k
193
194 return core.tensordot_core(a, b, None, n, m, k, ret_shape)
195
196
197 # TODO(okuta): Implement matrix_power
198
199
200 def kron(a, b):
201 """Returns the kronecker product of two arrays.
202
203 Args:
204 a (~cupy.ndarray): The first argument.
205 b (~cupy.ndarray): The second argument.
206
207 Returns:
208 ~cupy.ndarray: Output array.
209
210 .. seealso:: :func:`numpy.kron`
211
212 """
213 a_ndim = a.ndim
214 b_ndim = b.ndim
215 if a_ndim == 0 or b_ndim == 0:
216 return cupy.multiply(a, b)
217
218 ndim = b_ndim
219 a_shape = a.shape
220 b_shape = b.shape
221 if a_ndim != b_ndim:
222 if b_ndim > a_ndim:
223 a_shape = (1,) * (b_ndim - a_ndim) + a_shape
224 else:
225 b_shape = (1,) * (a_ndim - b_ndim) + b_shape
226 ndim = a_ndim
227
228 axis = ndim - 1
229 out = core.tensordot_core(a, b, None, a.size, b.size, 1, a_shape + b_shape)
230 for _ in six.moves.range(ndim):
231 out = core.concatenate_method(out, axis=axis)
232
233 return out
234
235
236 def _move_axes_to_head(a, axes):
237 # This function moves the axes of ``s`` to the head of the shape.
238 for idx, axis in enumerate(axes):
239 if idx != axis:
240 break
241 else:
242 return a
243
244 return a.transpose(
245 axes + [i for i in six.moves.range(a.ndim) if i not in axes])
246
[end of cupy/linalg/product.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/linalg/__init__.py b/cupy/linalg/__init__.py
--- a/cupy/linalg/__init__.py
+++ b/cupy/linalg/__init__.py
@@ -25,3 +25,5 @@
from cupy.linalg.solve import solve # NOQA
from cupy.linalg.solve import tensorinv # NOQA
from cupy.linalg.solve import tensorsolve # NOQA
+
+from cupy.linalg.product import matrix_power # NOQA
diff --git a/cupy/linalg/product.py b/cupy/linalg/product.py
--- a/cupy/linalg/product.py
+++ b/cupy/linalg/product.py
@@ -7,6 +7,7 @@
from cupy import core
from cupy import internal
+from cupy.linalg.solve import inv
matmul = core.matmul
@@ -194,7 +195,49 @@
return core.tensordot_core(a, b, None, n, m, k, ret_shape)
-# TODO(okuta): Implement matrix_power
+def matrix_power(M, n):
+ """Raise a square matrix to the (integer) power `n`.
+
+ Args:
+ M (~cupy.ndarray): Matrix to raise by power n.
+ n (~int): Power to raise matrix to.
+
+ Returns:
+ ~cupy.ndarray: Output array.
+
+ .. note:: M must be of dtype `float32` or `float64`.
+
+ ..seealso:: :func:`numpy.linalg.matrix_power`
+ """
+ if M.ndim != 2 or M.shape[0] != M.shape[1]:
+ raise ValueError("input must be a square array")
+ if not isinstance(n, six.integer_types):
+ raise TypeError("exponent must be an integer")
+
+ if n == 0:
+ return cupy.identity(M.shape[0], dtype=M.dtype)
+ elif n < 0:
+ M = inv(M)
+ n *= -1
+
+ # short-cuts
+ if n <= 3:
+ if n == 1:
+ return M
+ elif n == 2:
+ return cupy.matmul(M, M)
+ else:
+ return cupy.matmul(cupy.matmul(M, M), M)
+
+ # binary decomposition to reduce the number of Matrix
+ # multiplications for n > 3.
+ result, Z = None, None
+ for b in cupy.binary_repr(n)[::-1]:
+ Z = M if Z is None else cupy.matmul(Z, Z)
+ if b == '1':
+ result = Z if result is None else cupy.matmul(result, Z)
+
+ return result
def kron(a, b):
| {"golden_diff": "diff --git a/cupy/linalg/__init__.py b/cupy/linalg/__init__.py\n--- a/cupy/linalg/__init__.py\n+++ b/cupy/linalg/__init__.py\n@@ -25,3 +25,5 @@\n from cupy.linalg.solve import solve # NOQA\n from cupy.linalg.solve import tensorinv # NOQA\n from cupy.linalg.solve import tensorsolve # NOQA\n+\n+from cupy.linalg.product import matrix_power # NOQA\ndiff --git a/cupy/linalg/product.py b/cupy/linalg/product.py\n--- a/cupy/linalg/product.py\n+++ b/cupy/linalg/product.py\n@@ -7,6 +7,7 @@\n from cupy import core\n from cupy import internal\n \n+from cupy.linalg.solve import inv\n \n matmul = core.matmul\n \n@@ -194,7 +195,49 @@\n return core.tensordot_core(a, b, None, n, m, k, ret_shape)\n \n \n-# TODO(okuta): Implement matrix_power\n+def matrix_power(M, n):\n+ \"\"\"Raise a square matrix to the (integer) power `n`.\n+\n+ Args:\n+ M (~cupy.ndarray): Matrix to raise by power n.\n+ n (~int): Power to raise matrix to.\n+\n+ Returns:\n+ ~cupy.ndarray: Output array.\n+\n+ .. note:: M must be of dtype `float32` or `float64`.\n+\n+ ..seealso:: :func:`numpy.linalg.matrix_power`\n+ \"\"\"\n+ if M.ndim != 2 or M.shape[0] != M.shape[1]:\n+ raise ValueError(\"input must be a square array\")\n+ if not isinstance(n, six.integer_types):\n+ raise TypeError(\"exponent must be an integer\")\n+\n+ if n == 0:\n+ return cupy.identity(M.shape[0], dtype=M.dtype)\n+ elif n < 0:\n+ M = inv(M)\n+ n *= -1\n+\n+ # short-cuts\n+ if n <= 3:\n+ if n == 1:\n+ return M\n+ elif n == 2:\n+ return cupy.matmul(M, M)\n+ else:\n+ return cupy.matmul(cupy.matmul(M, M), M)\n+\n+ # binary decomposition to reduce the number of Matrix\n+ # multiplications for n > 3.\n+ result, Z = None, None\n+ for b in cupy.binary_repr(n)[::-1]:\n+ Z = M if Z is None else cupy.matmul(Z, Z)\n+ if b == '1':\n+ result = Z if result is None else cupy.matmul(result, Z)\n+\n+ return result\n \n \n def kron(a, b):\n", "issue": "matrix power is missing from cupy.linalg.\nThere is a corresponding `numpy.linalg.matrix_power` in NumPy. 
Is there a reason why this has not made it into CuPy?\r\n\r\nIf it helps, I am willing to take a shot at this one, though I might need a bit of hand-holding as I've not much experience with Cython code.\n", "before_files": [{"content": "# Functions from the following NumPy document\n# https://docs.scipy.org/doc/numpy/reference/routines.linalg.html\n\n# \"NOQA\" to suppress flake8 warning\nfrom cupy.linalg import decomposition # NOQA\nfrom cupy.linalg import eigenvalue # NOQA\nfrom cupy.linalg import einsum # NOQA\nfrom cupy.linalg import norms # NOQA\nfrom cupy.linalg.norms import det # NOQA\nfrom cupy.linalg.norms import matrix_rank # NOQA\nfrom cupy.linalg.norms import norm # NOQA\nfrom cupy.linalg.norms import slogdet # NOQA\nfrom cupy.linalg import product # NOQA\nfrom cupy.linalg import solve # NOQA\n\nfrom cupy.linalg.decomposition import cholesky # NOQA\nfrom cupy.linalg.decomposition import qr # NOQA\nfrom cupy.linalg.decomposition import svd # NOQA\n\nfrom cupy.linalg.eigenvalue import eigh # NOQA\nfrom cupy.linalg.eigenvalue import eigvalsh # NOQA\n\nfrom cupy.linalg.solve import inv # NOQA\nfrom cupy.linalg.solve import pinv # NOQA\nfrom cupy.linalg.solve import solve # NOQA\nfrom cupy.linalg.solve import tensorinv # NOQA\nfrom cupy.linalg.solve import tensorsolve # NOQA\n", "path": "cupy/linalg/__init__.py"}, {"content": "import collections\n\nimport numpy\nimport six\n\nimport cupy\nfrom cupy import core\nfrom cupy import internal\n\n\nmatmul = core.matmul\n\n\ndef dot(a, b, out=None):\n \"\"\"Returns a dot product of two arrays.\n\n For arrays with more than one axis, it computes the dot product along the\n last axis of ``a`` and the second-to-last axis of ``b``. This is just a\n matrix product if the both arrays are 2-D. For 1-D arrays, it uses their\n unique axis as an axis to take dot product over.\n\n Args:\n a (cupy.ndarray): The left argument.\n b (cupy.ndarray): The right argument.\n out (cupy.ndarray): Output array.\n\n Returns:\n cupy.ndarray: The dot product of ``a`` and ``b``.\n\n .. seealso:: :func:`numpy.dot`\n\n \"\"\"\n # TODO(okuta): check type\n return a.dot(b, out)\n\n\ndef vdot(a, b):\n \"\"\"Returns the dot product of two vectors.\n\n The input arrays are flattened into 1-D vectors and then it performs inner\n product of these vectors.\n\n Args:\n a (cupy.ndarray): The first argument.\n b (cupy.ndarray): The second argument.\n\n Returns:\n cupy.ndarray: Zero-dimensional array of the dot product result.\n\n .. seealso:: :func:`numpy.vdot`\n\n \"\"\"\n if a.size != b.size:\n raise ValueError('Axis dimension mismatch')\n if a.dtype.kind == 'c':\n a = a.conj()\n\n return core.tensordot_core(a, b, None, 1, 1, a.size, ())\n\n\ndef inner(a, b):\n \"\"\"Returns the inner product of two arrays.\n\n It uses the last axis of each argument to take sum product.\n\n Args:\n a (cupy.ndarray): The first argument.\n b (cupy.ndarray): The second argument.\n\n Returns:\n cupy.ndarray: The inner product of ``a`` and ``b``.\n\n .. 
seealso:: :func:`numpy.inner`\n\n \"\"\"\n a_ndim = a.ndim\n b_ndim = b.ndim\n if a_ndim == 0 or b_ndim == 0:\n return cupy.multiply(a, b)\n\n a_axis = a_ndim - 1\n b_axis = b_ndim - 1\n\n if a.shape[-1] != b.shape[-1]:\n raise ValueError('Axis dimension mismatch')\n\n if a_axis:\n a = cupy.rollaxis(a, a_axis, 0)\n if b_axis:\n b = cupy.rollaxis(b, b_axis, 0)\n\n ret_shape = a.shape[1:] + b.shape[1:]\n\n k = a.shape[0]\n n = a.size // k\n m = b.size // k\n\n return core.tensordot_core(a, b, None, n, m, k, ret_shape)\n\n\ndef outer(a, b, out=None):\n \"\"\"Returns the outer product of two vectors.\n\n The input arrays are flattened into 1-D vectors and then it performs outer\n product of these vectors.\n\n Args:\n a (cupy.ndarray): The first argument.\n b (cupy.ndarray): The second argument.\n out (cupy.ndarray): Output array.\n\n Returns:\n cupy.ndarray: 2-D array of the outer product of ``a`` and ``b``.\n\n .. seealso:: :func:`numpy.outer`\n\n \"\"\"\n n = a.size\n m = b.size\n ret_shape = (n, m)\n\n if out is None:\n return core.tensordot_core(a, b, None, n, m, 1, ret_shape)\n\n if out.size != n * m:\n raise ValueError('Output array has an invalid size')\n if out.flags.c_contiguous:\n return core.tensordot_core(a, b, out, n, m, 1, ret_shape)\n else:\n out[:] = core.tensordot_core(a, b, None, n, m, 1, ret_shape)\n return out\n\n\ndef tensordot(a, b, axes=2):\n \"\"\"Returns the tensor dot product of two arrays along specified axes.\n\n This is equivalent to compute dot product along the specified axes which\n are treated as one axis by reshaping.\n\n Args:\n a (cupy.ndarray): The first argument.\n b (cupy.ndarray): The second argument.\n axes:\n - If it is an integer, then ``axes`` axes at the last of ``a`` and\n the first of ``b`` are used.\n - If it is a pair of sequences of integers, then these two\n sequences specify the list of axes for ``a`` and ``b``. The\n corresponding axes are paired for sum-product.\n\n Returns:\n cupy.ndarray: The tensor dot product of ``a`` and ``b`` along the\n axes specified by ``axes``.\n\n .. seealso:: :func:`numpy.tensordot`\n\n \"\"\"\n a_ndim = a.ndim\n b_ndim = b.ndim\n if a_ndim == 0 or b_ndim == 0:\n if axes != 0 and axes != ((), ()):\n raise ValueError('An input is zero-dim while axes has dimensions')\n return cupy.multiply(a, b)\n\n if isinstance(axes, collections.Sequence):\n if len(axes) != 2:\n raise ValueError('Axes must consist of two arrays.')\n a_axes, b_axes = axes\n if numpy.isscalar(a_axes):\n a_axes = a_axes,\n if numpy.isscalar(b_axes):\n b_axes = b_axes,\n else:\n a_axes = tuple(six.moves.range(a_ndim - axes, a_ndim))\n b_axes = tuple(six.moves.range(axes))\n\n sum_ndim = len(a_axes)\n if sum_ndim != len(b_axes):\n raise ValueError('Axes length mismatch')\n\n for a_axis, b_axis in zip(a_axes, b_axes):\n if a.shape[a_axis] != b.shape[b_axis]:\n raise ValueError('Axis dimension mismatch')\n\n # Make the axes non-negative\n a = _move_axes_to_head(a, [axis % a_ndim for axis in a_axes])\n b = _move_axes_to_head(b, [axis % b_ndim for axis in b_axes])\n\n ret_shape = a.shape[sum_ndim:] + b.shape[sum_ndim:]\n\n k = internal.prod(a.shape[:sum_ndim])\n n = a.size // k\n m = b.size // k\n\n return core.tensordot_core(a, b, None, n, m, k, ret_shape)\n\n\n# TODO(okuta): Implement matrix_power\n\n\ndef kron(a, b):\n \"\"\"Returns the kronecker product of two arrays.\n\n Args:\n a (~cupy.ndarray): The first argument.\n b (~cupy.ndarray): The second argument.\n\n Returns:\n ~cupy.ndarray: Output array.\n\n .. 
seealso:: :func:`numpy.kron`\n\n \"\"\"\n a_ndim = a.ndim\n b_ndim = b.ndim\n if a_ndim == 0 or b_ndim == 0:\n return cupy.multiply(a, b)\n\n ndim = b_ndim\n a_shape = a.shape\n b_shape = b.shape\n if a_ndim != b_ndim:\n if b_ndim > a_ndim:\n a_shape = (1,) * (b_ndim - a_ndim) + a_shape\n else:\n b_shape = (1,) * (a_ndim - b_ndim) + b_shape\n ndim = a_ndim\n\n axis = ndim - 1\n out = core.tensordot_core(a, b, None, a.size, b.size, 1, a_shape + b_shape)\n for _ in six.moves.range(ndim):\n out = core.concatenate_method(out, axis=axis)\n\n return out\n\n\ndef _move_axes_to_head(a, axes):\n # This function moves the axes of ``s`` to the head of the shape.\n for idx, axis in enumerate(axes):\n if idx != axis:\n break\n else:\n return a\n\n return a.transpose(\n axes + [i for i in six.moves.range(a.ndim) if i not in axes])\n", "path": "cupy/linalg/product.py"}]} | 3,482 | 611 |
gh_patches_debug_5488 | rasdani/github-patches | git_diff | iterative__dvc-7908 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
config: option for `--jobs` value
Several DVC commands have a `--jobs` option that has default values (e.g, https://dvc.org/doc/commands-reference/gc).
Afaik there is no way to change the default value. Having the option to change it through `dvc config` would be useful.
Can you consider adding it?
Thanks
</issue>
<code>
[start of dvc/fs/__init__.py]
1 from urllib.parse import urlparse
2
3 # pylint: disable=unused-import
4 from dvc_objects.fs import utils # noqa: F401
5 from dvc_objects.fs import ( # noqa: F401
6 FS_MAP,
7 AzureFileSystem,
8 GDriveFileSystem,
9 GSFileSystem,
10 HDFSFileSystem,
11 HTTPFileSystem,
12 HTTPSFileSystem,
13 LocalFileSystem,
14 MemoryFileSystem,
15 OSSFileSystem,
16 S3FileSystem,
17 Schemes,
18 SSHFileSystem,
19 WebDAVFileSystem,
20 WebDAVSFileSystem,
21 WebHDFSFileSystem,
22 generic,
23 get_fs_cls,
24 system,
25 )
26 from dvc_objects.fs.base import AnyFSPath, FileSystem # noqa: F401
27 from dvc_objects.fs.errors import ( # noqa: F401
28 AuthError,
29 ConfigError,
30 RemoteMissingDepsError,
31 )
32 from dvc_objects.fs.implementations.azure import AzureAuthError # noqa: F401
33 from dvc_objects.fs.implementations.gdrive import GDriveAuthError # noqa: F401
34 from dvc_objects.fs.implementations.local import localfs # noqa: F401
35 from dvc_objects.fs.implementations.ssh import ( # noqa: F401
36 DEFAULT_PORT as DEFAULT_SSH_PORT,
37 )
38 from dvc_objects.fs.path import Path # noqa: F401
39
40 from .data import DataFileSystem # noqa: F401
41 from .dvc import DvcFileSystem # noqa: F401
42 from .git import GitFileSystem # noqa: F401
43
44 # pylint: enable=unused-import
45
46
47 def get_fs_config(repo, config, **kwargs):
48 name = kwargs.get("name")
49 if name:
50 try:
51 remote_conf = config["remote"][name.lower()]
52 except KeyError:
53 from dvc.config import RemoteNotFoundError
54
55 raise RemoteNotFoundError(f"remote '{name}' doesn't exist")
56 else:
57 remote_conf = kwargs
58 return _resolve_remote_refs(repo, config, remote_conf)
59
60
61 def _resolve_remote_refs(repo, config, remote_conf):
62 # Support for cross referenced remotes.
63 # This will merge the settings, shadowing base ref with remote_conf.
64 # For example, having:
65 #
66 # dvc remote add server ssh://localhost
67 # dvc remote modify server user root
68 # dvc remote modify server ask_password true
69 #
70 # dvc remote add images remote://server/tmp/pictures
71 # dvc remote modify images user alice
72 # dvc remote modify images ask_password false
73 # dvc remote modify images password asdf1234
74 #
75 # Results on a config dictionary like:
76 #
77 # {
78 # "url": "ssh://localhost/tmp/pictures",
79 # "user": "alice",
80 # "password": "asdf1234",
81 # "ask_password": False,
82 # }
83 parsed = urlparse(remote_conf["url"])
84 if parsed.scheme != "remote":
85 return remote_conf
86
87 base = get_fs_config(repo, config, name=parsed.netloc)
88 cls, _, _ = get_cloud_fs(repo, **base)
89 relpath = parsed.path.lstrip("/").replace("/", cls.sep)
90 url = cls.sep.join((base["url"], relpath))
91 return {**base, **remote_conf, "url": url}
92
93
94 def get_cloud_fs(repo, **kwargs):
95 from dvc.config import ConfigError as RepoConfigError
96 from dvc.config_schema import SCHEMA, Invalid
97
98 repo_config = repo.config if repo else {}
99 core_config = repo_config.get("core", {})
100
101 remote_conf = get_fs_config(repo, repo_config, **kwargs)
102 try:
103 remote_conf = SCHEMA["remote"][str](remote_conf)
104 except Invalid as exc:
105 raise RepoConfigError(str(exc)) from None
106
107 if "jobs" not in remote_conf:
108 jobs = core_config.get("jobs")
109 if jobs:
110 remote_conf["jobs"] = jobs
111
112 if "checksum_jobs" not in remote_conf:
113 checksum_jobs = core_config.get("checksum_jobs")
114 if checksum_jobs:
115 remote_conf["checksum_jobs"] = checksum_jobs
116
117 cls = get_fs_cls(remote_conf)
118
119 if cls == GDriveFileSystem and repo:
120 remote_conf["gdrive_credentials_tmp_dir"] = repo.tmp_dir
121
122 url = remote_conf.pop("url")
123 if issubclass(cls, WebDAVFileSystem):
124 # For WebDAVFileSystem, provided url is the base path itself, so it
125 # should be treated as being a root path.
126 fs_path = cls.root_marker
127 else:
128 fs_path = cls._strip_protocol(url) # pylint:disable=protected-access
129
130 extras = cls._get_kwargs_from_urls(url) # pylint:disable=protected-access
131 conf = {**extras, **remote_conf} # remote config takes priority
132 return cls, conf, fs_path
133
[end of dvc/fs/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/fs/__init__.py b/dvc/fs/__init__.py
--- a/dvc/fs/__init__.py
+++ b/dvc/fs/__init__.py
@@ -104,11 +104,6 @@
except Invalid as exc:
raise RepoConfigError(str(exc)) from None
- if "jobs" not in remote_conf:
- jobs = core_config.get("jobs")
- if jobs:
- remote_conf["jobs"] = jobs
-
if "checksum_jobs" not in remote_conf:
checksum_jobs = core_config.get("checksum_jobs")
if checksum_jobs:
| {"golden_diff": "diff --git a/dvc/fs/__init__.py b/dvc/fs/__init__.py\n--- a/dvc/fs/__init__.py\n+++ b/dvc/fs/__init__.py\n@@ -104,11 +104,6 @@\n except Invalid as exc:\n raise RepoConfigError(str(exc)) from None\n \n- if \"jobs\" not in remote_conf:\n- jobs = core_config.get(\"jobs\")\n- if jobs:\n- remote_conf[\"jobs\"] = jobs\n-\n if \"checksum_jobs\" not in remote_conf:\n checksum_jobs = core_config.get(\"checksum_jobs\")\n if checksum_jobs:\n", "issue": "config: option for `--jobs` value\nSeveral DVC commands have a `--jobs` option that has default values (e.g, https://dvc.org/doc/commands-reference/gc).\r\n\r\nAfaik there is no way to change the default value. Having the option to change it through `dvc config` would be useful.\r\n\r\nCan you consider adding it?\r\n\r\nThanks\n", "before_files": [{"content": "from urllib.parse import urlparse\n\n# pylint: disable=unused-import\nfrom dvc_objects.fs import utils # noqa: F401\nfrom dvc_objects.fs import ( # noqa: F401\n FS_MAP,\n AzureFileSystem,\n GDriveFileSystem,\n GSFileSystem,\n HDFSFileSystem,\n HTTPFileSystem,\n HTTPSFileSystem,\n LocalFileSystem,\n MemoryFileSystem,\n OSSFileSystem,\n S3FileSystem,\n Schemes,\n SSHFileSystem,\n WebDAVFileSystem,\n WebDAVSFileSystem,\n WebHDFSFileSystem,\n generic,\n get_fs_cls,\n system,\n)\nfrom dvc_objects.fs.base import AnyFSPath, FileSystem # noqa: F401\nfrom dvc_objects.fs.errors import ( # noqa: F401\n AuthError,\n ConfigError,\n RemoteMissingDepsError,\n)\nfrom dvc_objects.fs.implementations.azure import AzureAuthError # noqa: F401\nfrom dvc_objects.fs.implementations.gdrive import GDriveAuthError # noqa: F401\nfrom dvc_objects.fs.implementations.local import localfs # noqa: F401\nfrom dvc_objects.fs.implementations.ssh import ( # noqa: F401\n DEFAULT_PORT as DEFAULT_SSH_PORT,\n)\nfrom dvc_objects.fs.path import Path # noqa: F401\n\nfrom .data import DataFileSystem # noqa: F401\nfrom .dvc import DvcFileSystem # noqa: F401\nfrom .git import GitFileSystem # noqa: F401\n\n# pylint: enable=unused-import\n\n\ndef get_fs_config(repo, config, **kwargs):\n name = kwargs.get(\"name\")\n if name:\n try:\n remote_conf = config[\"remote\"][name.lower()]\n except KeyError:\n from dvc.config import RemoteNotFoundError\n\n raise RemoteNotFoundError(f\"remote '{name}' doesn't exist\")\n else:\n remote_conf = kwargs\n return _resolve_remote_refs(repo, config, remote_conf)\n\n\ndef _resolve_remote_refs(repo, config, remote_conf):\n # Support for cross referenced remotes.\n # This will merge the settings, shadowing base ref with remote_conf.\n # For example, having:\n #\n # dvc remote add server ssh://localhost\n # dvc remote modify server user root\n # dvc remote modify server ask_password true\n #\n # dvc remote add images remote://server/tmp/pictures\n # dvc remote modify images user alice\n # dvc remote modify images ask_password false\n # dvc remote modify images password asdf1234\n #\n # Results on a config dictionary like:\n #\n # {\n # \"url\": \"ssh://localhost/tmp/pictures\",\n # \"user\": \"alice\",\n # \"password\": \"asdf1234\",\n # \"ask_password\": False,\n # }\n parsed = urlparse(remote_conf[\"url\"])\n if parsed.scheme != \"remote\":\n return remote_conf\n\n base = get_fs_config(repo, config, name=parsed.netloc)\n cls, _, _ = get_cloud_fs(repo, **base)\n relpath = parsed.path.lstrip(\"/\").replace(\"/\", cls.sep)\n url = cls.sep.join((base[\"url\"], relpath))\n return {**base, **remote_conf, \"url\": url}\n\n\ndef get_cloud_fs(repo, **kwargs):\n from dvc.config import ConfigError as 
RepoConfigError\n from dvc.config_schema import SCHEMA, Invalid\n\n repo_config = repo.config if repo else {}\n core_config = repo_config.get(\"core\", {})\n\n remote_conf = get_fs_config(repo, repo_config, **kwargs)\n try:\n remote_conf = SCHEMA[\"remote\"][str](remote_conf)\n except Invalid as exc:\n raise RepoConfigError(str(exc)) from None\n\n if \"jobs\" not in remote_conf:\n jobs = core_config.get(\"jobs\")\n if jobs:\n remote_conf[\"jobs\"] = jobs\n\n if \"checksum_jobs\" not in remote_conf:\n checksum_jobs = core_config.get(\"checksum_jobs\")\n if checksum_jobs:\n remote_conf[\"checksum_jobs\"] = checksum_jobs\n\n cls = get_fs_cls(remote_conf)\n\n if cls == GDriveFileSystem and repo:\n remote_conf[\"gdrive_credentials_tmp_dir\"] = repo.tmp_dir\n\n url = remote_conf.pop(\"url\")\n if issubclass(cls, WebDAVFileSystem):\n # For WebDAVFileSystem, provided url is the base path itself, so it\n # should be treated as being a root path.\n fs_path = cls.root_marker\n else:\n fs_path = cls._strip_protocol(url) # pylint:disable=protected-access\n\n extras = cls._get_kwargs_from_urls(url) # pylint:disable=protected-access\n conf = {**extras, **remote_conf} # remote config takes priority\n return cls, conf, fs_path\n", "path": "dvc/fs/__init__.py"}]} | 2,002 | 136 |
gh_patches_debug_34627 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2364 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update I1022 to only suggest sub if all values can be in the sub
### CloudFormation Lint Version
0.64.1
### What operating system are you using?
All
### Describe the bug
Original feedback provided by @iann0036. Translated to an issue for tracking.
```yaml
Fn::Join:
- ""
- - Fn::Select:
- 0
- Fn::Split:
- "/"
- !Ref MySubnet1CIDR
- !Ref MySubnetsCIDRSize
```
```
I1022: Prefer using Fn::Sub over Fn::Join with an empty delimiter
```
### Expected behavior
Currently the way to make this comply would be
```yaml
Fn::Sub:
- ${CIDR}${MySubnetsCIDRSize}
- CIDR:
Fn::Select:
- 0
- Fn::Split:
- "/"
- !Ref MySubnet1CIDR
```
which may not be as optimal
### Reproduction template
```yaml
Fn::Join:
- ""
- - Fn::Select:
- 0
- Fn::Split:
- "/"
- !Ref MySubnet1CIDR
- !Ref MySubnetsCIDRSize
```
</issue>
<code>
[start of src/cfnlint/rules/functions/SubNotJoin.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.rules import CloudFormationLintRule, RuleMatch
6
7
8 class SubNotJoin(CloudFormationLintRule):
9 """Check if Join is being used with no join characters"""
10 id = 'I1022'
11 shortdesc = 'Use Sub instead of Join'
12 description = 'Prefer a sub instead of Join when using a join delimiter that is empty'
13 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
14 tags = ['functions', 'sub', 'join']
15
16 def match(self, cfn):
17 matches = []
18
19 join_objs = cfn.search_deep_keys('Fn::Join')
20
21 for join_obj in join_objs:
22 if isinstance(join_obj[-1], list):
23 join_operator = join_obj[-1][0]
24 if isinstance(join_operator, str):
25 if join_operator == '':
26 matches.append(RuleMatch(
27 join_obj[0:-1], 'Prefer using Fn::Sub over Fn::Join with an empty delimiter'))
28 return matches
29
[end of src/cfnlint/rules/functions/SubNotJoin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/functions/SubNotJoin.py b/src/cfnlint/rules/functions/SubNotJoin.py
--- a/src/cfnlint/rules/functions/SubNotJoin.py
+++ b/src/cfnlint/rules/functions/SubNotJoin.py
@@ -7,12 +7,34 @@
class SubNotJoin(CloudFormationLintRule):
"""Check if Join is being used with no join characters"""
+
id = 'I1022'
shortdesc = 'Use Sub instead of Join'
- description = 'Prefer a sub instead of Join when using a join delimiter that is empty'
+ description = (
+ 'Prefer a sub instead of Join when using a join delimiter that is empty'
+ )
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
tags = ['functions', 'sub', 'join']
+ def _check_element(self, element):
+ if isinstance(element, dict):
+ if len(element) == 1:
+ for key, value in element.items():
+ if key in ['Fn::Sub']:
+ if not isinstance(value, str):
+ return False
+ elif key not in ['Ref', 'Fn::GetAtt']:
+ return False
+
+ return True
+
+ def _check_elements(self, elements):
+ for element in elements:
+ if not self._check_element(element):
+ return False
+
+ return True
+
def match(self, cfn):
matches = []
@@ -21,8 +43,15 @@
for join_obj in join_objs:
if isinstance(join_obj[-1], list):
join_operator = join_obj[-1][0]
+ join_elements = join_obj[-1][1]
if isinstance(join_operator, str):
if join_operator == '':
- matches.append(RuleMatch(
- join_obj[0:-1], 'Prefer using Fn::Sub over Fn::Join with an empty delimiter'))
+ if isinstance(join_elements, list):
+ if self._check_elements(join_elements):
+ matches.append(
+ RuleMatch(
+ join_obj[0:-1],
+ 'Prefer using Fn::Sub over Fn::Join with an empty delimiter',
+ )
+ )
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNotJoin.py b/src/cfnlint/rules/functions/SubNotJoin.py\n--- a/src/cfnlint/rules/functions/SubNotJoin.py\n+++ b/src/cfnlint/rules/functions/SubNotJoin.py\n@@ -7,12 +7,34 @@\n \n class SubNotJoin(CloudFormationLintRule):\n \"\"\"Check if Join is being used with no join characters\"\"\"\n+\n id = 'I1022'\n shortdesc = 'Use Sub instead of Join'\n- description = 'Prefer a sub instead of Join when using a join delimiter that is empty'\n+ description = (\n+ 'Prefer a sub instead of Join when using a join delimiter that is empty'\n+ )\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub', 'join']\n \n+ def _check_element(self, element):\n+ if isinstance(element, dict):\n+ if len(element) == 1:\n+ for key, value in element.items():\n+ if key in ['Fn::Sub']:\n+ if not isinstance(value, str):\n+ return False\n+ elif key not in ['Ref', 'Fn::GetAtt']:\n+ return False\n+\n+ return True\n+\n+ def _check_elements(self, elements):\n+ for element in elements:\n+ if not self._check_element(element):\n+ return False\n+\n+ return True\n+\n def match(self, cfn):\n matches = []\n \n@@ -21,8 +43,15 @@\n for join_obj in join_objs:\n if isinstance(join_obj[-1], list):\n join_operator = join_obj[-1][0]\n+ join_elements = join_obj[-1][1]\n if isinstance(join_operator, str):\n if join_operator == '':\n- matches.append(RuleMatch(\n- join_obj[0:-1], 'Prefer using Fn::Sub over Fn::Join with an empty delimiter'))\n+ if isinstance(join_elements, list):\n+ if self._check_elements(join_elements):\n+ matches.append(\n+ RuleMatch(\n+ join_obj[0:-1],\n+ 'Prefer using Fn::Sub over Fn::Join with an empty delimiter',\n+ )\n+ )\n return matches\n", "issue": "Update I1022 to only suggest sub if all values can be in the sub\n### CloudFormation Lint Version\n\n0.64.1\n\n### What operating system are you using?\n\nAll\n\n### Describe the bug\n\nOriginal feedback provided by @iann0036. Translated to an issue for tracking.\r\n\r\n```yaml\r\nFn::Join:\r\n - \"\"\r\n - - Fn::Select:\r\n - 0\r\n - Fn::Split:\r\n - \"/\"\r\n - !Ref MySubnet1CIDR\r\n - !Ref MySubnetsCIDRSize\r\n```\r\n\r\n```\r\nI1022: Prefer using Fn::Sub over Fn::Join with an empty delimiter\r\n```\n\n### Expected behavior\n\nCurrently the way to make this comply would be\r\n\r\n```yaml\r\nFn::Sub:\r\n - ${CIDR}${MySubnetsCIDRSize}\r\n - CIDR:\r\n Fn::Select:\r\n - 0\r\n - Fn::Split:\r\n - \"/\"\r\n - !Ref MySubnet1CIDR\r\n```\r\n\r\nwhich may not be as optimal \n\n### Reproduction template\n\n```yaml\r\nFn::Join:\r\n - \"\"\r\n - - Fn::Select:\r\n - 0\r\n - Fn::Split:\r\n - \"/\"\r\n - !Ref MySubnet1CIDR\r\n - !Ref MySubnetsCIDRSize\r\n ````\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass SubNotJoin(CloudFormationLintRule):\n \"\"\"Check if Join is being used with no join characters\"\"\"\n id = 'I1022'\n shortdesc = 'Use Sub instead of Join'\n description = 'Prefer a sub instead of Join when using a join delimiter that is empty'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub', 'join']\n\n def match(self, cfn):\n matches = []\n\n join_objs = cfn.search_deep_keys('Fn::Join')\n\n for join_obj in join_objs:\n if isinstance(join_obj[-1], list):\n join_operator = join_obj[-1][0]\n if isinstance(join_operator, str):\n if join_operator == '':\n matches.append(RuleMatch(\n join_obj[0:-1], 'Prefer using Fn::Sub over Fn::Join with an empty delimiter'))\n return matches\n", "path": "src/cfnlint/rules/functions/SubNotJoin.py"}]} | 1,132 | 504 |
gh_patches_debug_27776 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1296 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move first_login logic so that it occurs on all logins
### Issue description
Currently, we have a function (first_login in User.py) that is triggered when it is the user's first time logging into our system. In this function, we are checking: 1) are they part of a transition domain, and if so, make sure there is a domain invitation for them and a domain information object; 2) for all domain invitations they have, we add a user domain role to their account so they can access the domains.
The problem is that, when it comes time for the transition, if anyone logs into our system BEFORE we run the migration scripts, they will already have a User row and account on our system and will not trigger the checks above. This is a bit of a race condition that is easily avoided by just having the first_login logic triggered on every login.
### Acceptance criteria
- [ ] rename first_login and all uses of it (as it is no longer used just at first login)
- [ ] move the formerly named first_login to be called for all users.
- [ ] add or update existing unit tests
### Additional context
To accomplish the second AC above I suggest moving the highlighted line (user.first_login()) in the backend.py authenticate function to right above the return on line 59 so it is called on each user. See picture below:

### Links to other issues
_No response_
</issue>
<code>
[start of src/djangooidc/backends.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import logging
5
6 from django.conf import settings
7 from django.contrib.auth import get_user_model
8 from django.contrib.auth.backends import ModelBackend
9 from django.utils import timezone
10
11 logger = logging.getLogger(__name__)
12
13
14 class OpenIdConnectBackend(ModelBackend):
15 """
16 This backend checks a previously performed OIDC authentication.
17 If it is OK and the user already exists in the database, it is returned.
18 If it is OK and user does not exist in the database, it is created and
19 returned unless setting OIDC_CREATE_UNKNOWN_USER is False.
20 In all other cases, None is returned.
21 """
22
23 def authenticate(self, request, **kwargs):
24 logger.debug("kwargs %s" % kwargs)
25 user = None
26 if not kwargs or "sub" not in kwargs.keys():
27 return user
28
29 UserModel = get_user_model()
30 username = self.clean_username(kwargs["sub"])
31
32 # Some OP may actually choose to withhold some information, so we must
33 # test if it is present
34 openid_data = {"last_login": timezone.now()}
35 openid_data["first_name"] = kwargs.get("given_name", "")
36 openid_data["last_name"] = kwargs.get("family_name", "")
37 openid_data["email"] = kwargs.get("email", "")
38 openid_data["phone"] = kwargs.get("phone", "")
39
40 # Note that this could be accomplished in one try-except clause, but
41 # instead we use get_or_create when creating unknown users since it has
42 # built-in safeguards for multiple threads.
43 if getattr(settings, "OIDC_CREATE_UNKNOWN_USER", True):
44 args = {
45 UserModel.USERNAME_FIELD: username,
46 # defaults _will_ be updated, these are not fallbacks
47 "defaults": openid_data,
48 }
49 user, created = UserModel.objects.update_or_create(**args)
50 if created:
51 user = self.configure_user(user, **kwargs)
52 # run a newly created user's callback for a first-time login
53 user.first_login()
54 else:
55 try:
56 user = UserModel.objects.get_by_natural_key(username)
57 except UserModel.DoesNotExist:
58 return None
59 return user
60
61 def clean_username(self, username):
62 """
63 Performs any cleaning on the "username" prior to using it to get or
64 create the user object. Returns the cleaned username.
65 """
66 return username
67
68 def configure_user(self, user, **kwargs):
69 """
70 Configures a user after creation and returns the updated user.
71 """
72 user.set_unusable_password()
73 return user
74
[end of src/djangooidc/backends.py]
[start of src/registrar/models/user.py]
1 import logging
2
3 from django.contrib.auth.models import AbstractUser
4 from django.db import models
5
6 from .domain_invitation import DomainInvitation
7 from .transition_domain import TransitionDomain
8 from .domain_information import DomainInformation
9 from .domain import Domain
10
11 from phonenumber_field.modelfields import PhoneNumberField # type: ignore
12
13
14 logger = logging.getLogger(__name__)
15
16
17 class User(AbstractUser):
18 """
19 A custom user model that performs identically to the default user model
20 but can be customized later.
21 """
22
23 # #### Constants for choice fields ####
24 RESTRICTED = "restricted"
25 STATUS_CHOICES = ((RESTRICTED, RESTRICTED),)
26
27 status = models.CharField(
28 max_length=10,
29 choices=STATUS_CHOICES,
30 default=None, # Set the default value to None
31 null=True, # Allow the field to be null
32 blank=True, # Allow the field to be blank
33 )
34
35 domains = models.ManyToManyField(
36 "registrar.Domain",
37 through="registrar.UserDomainRole",
38 related_name="users",
39 )
40
41 phone = PhoneNumberField(
42 null=True,
43 blank=True,
44 help_text="Phone",
45 db_index=True,
46 )
47
48 def __str__(self):
49 # this info is pulled from Login.gov
50 if self.first_name or self.last_name:
51 return f"{self.first_name or ''} {self.last_name or ''} {self.email or ''}"
52 elif self.email:
53 return self.email
54 else:
55 return self.username
56
57 def restrict_user(self):
58 self.status = self.RESTRICTED
59 self.save()
60
61 def unrestrict_user(self):
62 self.status = None
63 self.save()
64
65 def is_restricted(self):
66 return self.status == self.RESTRICTED
67
68 def check_domain_invitations_on_login(self):
69 """When a user first arrives on the site, we need to retrieve any domain
70 invitations that match their email address."""
71 for invitation in DomainInvitation.objects.filter(
72 email=self.email, status=DomainInvitation.INVITED
73 ):
74 try:
75 invitation.retrieve()
76 invitation.save()
77 except RuntimeError:
78 # retrieving should not fail because of a missing user, but
79 # if it does fail, log the error so a new user can continue
80 # logging in
81 logger.warn(
82 "Failed to retrieve invitation %s", invitation, exc_info=True
83 )
84
85 def create_domain_and_invite(self, transition_domain: TransitionDomain):
86 transition_domain_name = transition_domain.domain_name
87 transition_domain_status = transition_domain.status
88 transition_domain_email = transition_domain.username
89
90 # type safety check. name should never be none
91 if transition_domain_name is not None:
92 new_domain = Domain(
93 name=transition_domain_name, state=transition_domain_status
94 )
95 new_domain.save()
96 # check that a domain invitation doesn't already
97 # exist for this e-mail / Domain pair
98 domain_email_already_in_domain_invites = DomainInvitation.objects.filter(
99 email=transition_domain_email.lower(), domain=new_domain
100 ).exists()
101 if not domain_email_already_in_domain_invites:
102 # Create new domain invitation
103 new_domain_invitation = DomainInvitation(
104 email=transition_domain_email.lower(), domain=new_domain
105 )
106 new_domain_invitation.save()
107
108 def check_transition_domains_on_login(self):
109 """When a user first arrives on the site, we need to check
110 if they are logging in with the same e-mail as a
111 transition domain and update our database accordingly."""
112
113 for transition_domain in TransitionDomain.objects.filter(username=self.email):
114 # Looks like the user logged in with the same e-mail as
115 # one or more corresponding transition domains.
116 # Create corresponding DomainInformation objects.
117
118 # NOTE: adding an ADMIN user role for this user
119 # for each domain should already be done
120 # in the invitation.retrieve() method.
121 # However, if the migration scripts for transition
122 # domain objects were not executed correctly,
123 # there could be transition domains without
124 # any corresponding Domain & DomainInvitation objects,
125 # which means the invitation.retrieve() method might
126 # not execute.
127 # Check that there is a corresponding domain object
128 # for this transition domain. If not, we have an error
129 # with our data and migrations need to be run again.
130
131 # Get the domain that corresponds with this transition domain
132 domain_exists = Domain.objects.filter(
133 name=transition_domain.domain_name
134 ).exists()
135 if not domain_exists:
136 logger.warn(
137 """There are transition domains without
138 corresponding domain objects!
139 Please run migration scripts for transition domains
140 (See data_migration.md)"""
141 )
142 # No need to throw an exception...just create a domain
143 # and domain invite, then proceed as normal
144 self.create_domain_and_invite(transition_domain)
145
146 domain = Domain.objects.get(name=transition_domain.domain_name)
147
148 # Create a domain information object, if one doesn't
149 # already exist
150 domain_info_exists = DomainInformation.objects.filter(
151 domain=domain
152 ).exists()
153 if not domain_info_exists:
154 new_domain_info = DomainInformation(creator=self, domain=domain)
155 new_domain_info.save()
156
157 def first_login(self):
158 """Callback when the user is authenticated for the very first time.
159
160 When a user first arrives on the site, we need to retrieve any domain
161 invitations that match their email address.
162
163 We also need to check if they are logging in with the same e-mail
164 as a transition domain and update our domainInfo objects accordingly.
165 """
166
167 # PART 1: TRANSITION DOMAINS
168 #
169 # NOTE: THIS MUST RUN FIRST
170 # (If we have an issue where transition domains were
171 # not fully converted into Domain and DomainInvitation
172 # objects, this method will fill in the gaps.
173 # This will ensure the Domain Invitations method
174 # runs correctly (no missing invites))
175 self.check_transition_domains_on_login()
176
177 # PART 2: DOMAIN INVITATIONS
178 self.check_domain_invitations_on_login()
179
180 class Meta:
181 permissions = [
182 ("analyst_access_permission", "Analyst Access Permission"),
183 ("full_access_permission", "Full Access Permission"),
184 ]
185
[end of src/registrar/models/user.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/djangooidc/backends.py b/src/djangooidc/backends.py
--- a/src/djangooidc/backends.py
+++ b/src/djangooidc/backends.py
@@ -49,13 +49,13 @@
user, created = UserModel.objects.update_or_create(**args)
if created:
user = self.configure_user(user, **kwargs)
- # run a newly created user's callback for a first-time login
- user.first_login()
else:
try:
user = UserModel.objects.get_by_natural_key(username)
except UserModel.DoesNotExist:
return None
+ # run this callback for a each login
+ user.on_each_login()
return user
def clean_username(self, username):
diff --git a/src/registrar/models/user.py b/src/registrar/models/user.py
--- a/src/registrar/models/user.py
+++ b/src/registrar/models/user.py
@@ -154,10 +154,10 @@
new_domain_info = DomainInformation(creator=self, domain=domain)
new_domain_info.save()
- def first_login(self):
- """Callback when the user is authenticated for the very first time.
+ def on_each_login(self):
+ """Callback each time the user is authenticated.
- When a user first arrives on the site, we need to retrieve any domain
+ When a user arrives on the site each time, we need to retrieve any domain
invitations that match their email address.
We also need to check if they are logging in with the same e-mail
| {"golden_diff": "diff --git a/src/djangooidc/backends.py b/src/djangooidc/backends.py\n--- a/src/djangooidc/backends.py\n+++ b/src/djangooidc/backends.py\n@@ -49,13 +49,13 @@\n user, created = UserModel.objects.update_or_create(**args)\n if created:\n user = self.configure_user(user, **kwargs)\n- # run a newly created user's callback for a first-time login\n- user.first_login()\n else:\n try:\n user = UserModel.objects.get_by_natural_key(username)\n except UserModel.DoesNotExist:\n return None\n+ # run this callback for a each login\n+ user.on_each_login()\n return user\n \n def clean_username(self, username):\ndiff --git a/src/registrar/models/user.py b/src/registrar/models/user.py\n--- a/src/registrar/models/user.py\n+++ b/src/registrar/models/user.py\n@@ -154,10 +154,10 @@\n new_domain_info = DomainInformation(creator=self, domain=domain)\n new_domain_info.save()\n \n- def first_login(self):\n- \"\"\"Callback when the user is authenticated for the very first time.\n+ def on_each_login(self):\n+ \"\"\"Callback each time the user is authenticated.\n \n- When a user first arrives on the site, we need to retrieve any domain\n+ When a user arrives on the site each time, we need to retrieve any domain\n invitations that match their email address.\n \n We also need to check if they are logging in with the same e-mail\n", "issue": "Move first_login logic so that it occurs on all logins\n### Issue description\n\nCurrently, we have a function (first_login in User.py) that is triggered when it is the users first time logging into our system. In this function, we are checking 1.) are they part of a transition domain, if so make sure there is a domain inviation for them and a domain information object, 2) for all domain invitations they have, we add a user domain role to their account so they can access the domains.\r\n\r\nThe problem is when it comes time for transition, if anyone logs into our system BEFORE we run the migration scripts then they will already have a User row and account on our system and they will not trigger the checks above. This is a bit of a race condition that is easily avoided by just having the first_login logic triggered on every login.\n\n### Acceptance criteria\n\n- [ ] rename first_login and all uses of it (as it is no longer used just at first login\r\n- [ ] move the formerly named first_login to be called for all users. \r\n- [ ] add or update existing unit tests \n\n### Additional context\n\nTo accomplish the second AC above I suggest moving the highlight lint (user.first_login()) in the backend.py Authenticate function to right above the return on line 59 so it is called on each user. 
See picture below:\r\n\r\n\n\n### Links to other issues\n\n_No response_\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.backends import ModelBackend\nfrom django.utils import timezone\n\nlogger = logging.getLogger(__name__)\n\n\nclass OpenIdConnectBackend(ModelBackend):\n \"\"\"\n This backend checks a previously performed OIDC authentication.\n If it is OK and the user already exists in the database, it is returned.\n If it is OK and user does not exist in the database, it is created and\n returned unless setting OIDC_CREATE_UNKNOWN_USER is False.\n In all other cases, None is returned.\n \"\"\"\n\n def authenticate(self, request, **kwargs):\n logger.debug(\"kwargs %s\" % kwargs)\n user = None\n if not kwargs or \"sub\" not in kwargs.keys():\n return user\n\n UserModel = get_user_model()\n username = self.clean_username(kwargs[\"sub\"])\n\n # Some OP may actually choose to withhold some information, so we must\n # test if it is present\n openid_data = {\"last_login\": timezone.now()}\n openid_data[\"first_name\"] = kwargs.get(\"given_name\", \"\")\n openid_data[\"last_name\"] = kwargs.get(\"family_name\", \"\")\n openid_data[\"email\"] = kwargs.get(\"email\", \"\")\n openid_data[\"phone\"] = kwargs.get(\"phone\", \"\")\n\n # Note that this could be accomplished in one try-except clause, but\n # instead we use get_or_create when creating unknown users since it has\n # built-in safeguards for multiple threads.\n if getattr(settings, \"OIDC_CREATE_UNKNOWN_USER\", True):\n args = {\n UserModel.USERNAME_FIELD: username,\n # defaults _will_ be updated, these are not fallbacks\n \"defaults\": openid_data,\n }\n user, created = UserModel.objects.update_or_create(**args)\n if created:\n user = self.configure_user(user, **kwargs)\n # run a newly created user's callback for a first-time login\n user.first_login()\n else:\n try:\n user = UserModel.objects.get_by_natural_key(username)\n except UserModel.DoesNotExist:\n return None\n return user\n\n def clean_username(self, username):\n \"\"\"\n Performs any cleaning on the \"username\" prior to using it to get or\n create the user object. 
Returns the cleaned username.\n \"\"\"\n return username\n\n def configure_user(self, user, **kwargs):\n \"\"\"\n Configures a user after creation and returns the updated user.\n \"\"\"\n user.set_unusable_password()\n return user\n", "path": "src/djangooidc/backends.py"}, {"content": "import logging\n\nfrom django.contrib.auth.models import AbstractUser\nfrom django.db import models\n\nfrom .domain_invitation import DomainInvitation\nfrom .transition_domain import TransitionDomain\nfrom .domain_information import DomainInformation\nfrom .domain import Domain\n\nfrom phonenumber_field.modelfields import PhoneNumberField # type: ignore\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass User(AbstractUser):\n \"\"\"\n A custom user model that performs identically to the default user model\n but can be customized later.\n \"\"\"\n\n # #### Constants for choice fields ####\n RESTRICTED = \"restricted\"\n STATUS_CHOICES = ((RESTRICTED, RESTRICTED),)\n\n status = models.CharField(\n max_length=10,\n choices=STATUS_CHOICES,\n default=None, # Set the default value to None\n null=True, # Allow the field to be null\n blank=True, # Allow the field to be blank\n )\n\n domains = models.ManyToManyField(\n \"registrar.Domain\",\n through=\"registrar.UserDomainRole\",\n related_name=\"users\",\n )\n\n phone = PhoneNumberField(\n null=True,\n blank=True,\n help_text=\"Phone\",\n db_index=True,\n )\n\n def __str__(self):\n # this info is pulled from Login.gov\n if self.first_name or self.last_name:\n return f\"{self.first_name or ''} {self.last_name or ''} {self.email or ''}\"\n elif self.email:\n return self.email\n else:\n return self.username\n\n def restrict_user(self):\n self.status = self.RESTRICTED\n self.save()\n\n def unrestrict_user(self):\n self.status = None\n self.save()\n\n def is_restricted(self):\n return self.status == self.RESTRICTED\n\n def check_domain_invitations_on_login(self):\n \"\"\"When a user first arrives on the site, we need to retrieve any domain\n invitations that match their email address.\"\"\"\n for invitation in DomainInvitation.objects.filter(\n email=self.email, status=DomainInvitation.INVITED\n ):\n try:\n invitation.retrieve()\n invitation.save()\n except RuntimeError:\n # retrieving should not fail because of a missing user, but\n # if it does fail, log the error so a new user can continue\n # logging in\n logger.warn(\n \"Failed to retrieve invitation %s\", invitation, exc_info=True\n )\n\n def create_domain_and_invite(self, transition_domain: TransitionDomain):\n transition_domain_name = transition_domain.domain_name\n transition_domain_status = transition_domain.status\n transition_domain_email = transition_domain.username\n\n # type safety check. 
name should never be none\n if transition_domain_name is not None:\n new_domain = Domain(\n name=transition_domain_name, state=transition_domain_status\n )\n new_domain.save()\n # check that a domain invitation doesn't already\n # exist for this e-mail / Domain pair\n domain_email_already_in_domain_invites = DomainInvitation.objects.filter(\n email=transition_domain_email.lower(), domain=new_domain\n ).exists()\n if not domain_email_already_in_domain_invites:\n # Create new domain invitation\n new_domain_invitation = DomainInvitation(\n email=transition_domain_email.lower(), domain=new_domain\n )\n new_domain_invitation.save()\n\n def check_transition_domains_on_login(self):\n \"\"\"When a user first arrives on the site, we need to check\n if they are logging in with the same e-mail as a\n transition domain and update our database accordingly.\"\"\"\n\n for transition_domain in TransitionDomain.objects.filter(username=self.email):\n # Looks like the user logged in with the same e-mail as\n # one or more corresponding transition domains.\n # Create corresponding DomainInformation objects.\n\n # NOTE: adding an ADMIN user role for this user\n # for each domain should already be done\n # in the invitation.retrieve() method.\n # However, if the migration scripts for transition\n # domain objects were not executed correctly,\n # there could be transition domains without\n # any corresponding Domain & DomainInvitation objects,\n # which means the invitation.retrieve() method might\n # not execute.\n # Check that there is a corresponding domain object\n # for this transition domain. If not, we have an error\n # with our data and migrations need to be run again.\n\n # Get the domain that corresponds with this transition domain\n domain_exists = Domain.objects.filter(\n name=transition_domain.domain_name\n ).exists()\n if not domain_exists:\n logger.warn(\n \"\"\"There are transition domains without\n corresponding domain objects!\n Please run migration scripts for transition domains\n (See data_migration.md)\"\"\"\n )\n # No need to throw an exception...just create a domain\n # and domain invite, then proceed as normal\n self.create_domain_and_invite(transition_domain)\n\n domain = Domain.objects.get(name=transition_domain.domain_name)\n\n # Create a domain information object, if one doesn't\n # already exist\n domain_info_exists = DomainInformation.objects.filter(\n domain=domain\n ).exists()\n if not domain_info_exists:\n new_domain_info = DomainInformation(creator=self, domain=domain)\n new_domain_info.save()\n\n def first_login(self):\n \"\"\"Callback when the user is authenticated for the very first time.\n\n When a user first arrives on the site, we need to retrieve any domain\n invitations that match their email address.\n\n We also need to check if they are logging in with the same e-mail\n as a transition domain and update our domainInfo objects accordingly.\n \"\"\"\n\n # PART 1: TRANSITION DOMAINS\n #\n # NOTE: THIS MUST RUN FIRST\n # (If we have an issue where transition domains were\n # not fully converted into Domain and DomainInvitation\n # objects, this method will fill in the gaps.\n # This will ensure the Domain Invitations method\n # runs correctly (no missing invites))\n self.check_transition_domains_on_login()\n\n # PART 2: DOMAIN INVITATIONS\n self.check_domain_invitations_on_login()\n\n class Meta:\n permissions = [\n (\"analyst_access_permission\", \"Analyst Access Permission\"),\n (\"full_access_permission\", \"Full Access Permission\"),\n ]\n", "path": 
"src/registrar/models/user.py"}]} | 3,405 | 344 |
gh_patches_debug_37850 | rasdani/github-patches | git_diff | vispy__vispy-2523 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade to Cython 3 for all builds
Cython 3.0 is out now. There are a lot of changes, including many changes to the defaults. I've learned quite a few gotchas from my other Cython-based projects, so hopefully the upgrade will be easy for us since we only have a few Cython things in vispy. Just making an issue so that if someone else wants to tackle it before me, they can feel free.
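For reference, a minimal sketch of the kind of `setup.py` change the upgrade could involve — the directives below are just the ones I expect to matter, not a confirmed list:

```python
# Sketch only: be explicit about directives whose defaults shifted
# between Cython 0.29.x and 3.0, instead of relying on the new defaults.
from Cython.Build import cythonize

ext_modules = cythonize(
    extensions,  # the existing _sdf_cpu extension list
    compiler_directives={
        "language_level": "3",  # Cython 3 defaults to "3str"; stay explicit
    },
)
```

plus bumping the build requirement to `Cython>=3.0` wherever we pin it.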
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4 """Vispy setup script.
5
6 Steps to do a new release:
7
8 Preparations:
9 * Test on Windows, Linux, Mac
10 * Make release notes
11 * Update API documentation and other docs that need updating.
12
13 Define the version and release:
14 * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z' -m "Version X.Y.Z"`
15 * push tag to github
16 * verify that azure pipelines complete
17 * verify that `.tar.gz` sdist and binary wheels are available on PyPI
18
19 Announcing:
20 * It can be worth waiting a day for eager users to report critical bugs
21 * Announce in scipy-user, vispy mailing list, twitter (@vispyproject)
22
23 """
24
25 import os
26 from os import path as op
27 from distutils import log
28 from setuptools import setup, find_packages, Extension
29
30 import numpy as np
31 from Cython.Build import cythonize
32
33 log.set_verbosity(log.DEBUG)
34 log.info('setup.py entered')
35 log.info('$PATH=%s' % os.environ['PATH'])
36
37 name = 'vispy'
38 description = 'Interactive visualization in Python'
39
40 # Special commands for building jupyter notebook extension
41 here = os.path.dirname(os.path.abspath(__file__))
42 node_root = os.path.join(here, 'js')
43 is_repo = os.path.exists(os.path.join(here, '.git'))
44
45 npm_path = os.pathsep.join([
46 os.path.join(node_root, 'node_modules', '.bin'),
47 os.environ.get('PATH', os.defpath),
48 ])
49
50
51 def set_builtin(name, value):
52 if isinstance(__builtins__, dict):
53 __builtins__[name] = value
54 else:
55 setattr(__builtins__, name, value)
56
57
58 extensions = [Extension('vispy.visuals.text._sdf_cpu',
59 [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],
60 include_dirs=[np.get_include()]),
61 ]
62
63 readme = open('README.rst', 'r').read()
64 setup(
65 name=name,
66 use_scm_version={
67 'write_to': 'vispy/version.py',
68 # uses setuptools_scm.version.get_local_dirty_tag (+dirty or empty string)
69 'local_scheme': 'dirty-tag',
70 },
71 author='Vispy contributors',
72 author_email='[email protected]',
73 license='(new) BSD',
74 url='http://vispy.org',
75 download_url='https://pypi.python.org/pypi/vispy',
76 keywords=[
77 'visualization',
78 'OpenGl',
79 'ES',
80 'medical',
81 'imaging',
82 '3D',
83 'plotting',
84 'numpy',
85 'bigdata',
86 'ipython',
87 'jupyter',
88 'widgets',
89 ],
90 description=description,
91 long_description=readme,
92 long_description_content_type='text/x-rst',
93 platforms='any',
94 provides=['vispy'],
95 python_requires='>=3.6',
96 install_requires=['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging'],
97 setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive', 'packaging'],
98 extras_require={
99 'ipython-static': ['ipython'],
100 'pyglet': ['pyglet>=1.2'],
101 'pyqt5': ['pyqt5'],
102 'pyqt6': ['pyqt6'],
103 'pyside': ['PySide'],
104 'pyside2': ['PySide2'],
105 'pyside6': ['PySide6'],
106 'sdl2': ['PySDL2'],
107 'wx': ['wxPython'],
108 'tk': ['pyopengltk'],
109 'doc': ['pydata-sphinx-theme', 'numpydoc', 'sphinxcontrib-apidoc',
110 'sphinx-gallery', 'myst-parser', 'pillow', 'pytest',
111 'pyopengl'],
112 'io': ['meshio', 'Pillow'],
113 },
114 packages=find_packages(exclude=['make']),
115 ext_modules=cythonize(extensions, language_level=3),
116 package_dir={'vispy': 'vispy'},
117 data_files=[],
118 include_package_data=True,
119 package_data={
120 'vispy': [op.join('io', '_data', '*'),
121 op.join('app', 'tests', 'qt-designer.ui'),
122 op.join('util', 'fonts', 'data', '*.ttf'),
123 ],
124
125 'vispy.glsl': ['*.vert', '*.frag', "*.glsl"],
126 'vispy.glsl.antialias': ['*.vert', '*.frag', "*.glsl"],
127 'vispy.glsl.arrowheads': ['*.vert', '*.frag', "*.glsl"],
128 'vispy.glsl.arrows': ['*.vert', '*.frag', "*.glsl"],
129 'vispy.glsl.collections': ['*.vert', '*.frag', "*.glsl"],
130 'vispy.glsl.colormaps': ['*.vert', '*.frag', "*.glsl"],
131 'vispy.glsl.lines': ['*.vert', '*.frag', "*.glsl"],
132 'vispy.glsl.markers': ['*.vert', '*.frag', "*.glsl"],
133 'vispy.glsl.math': ['*.vert', '*.frag', "*.glsl"],
134 'vispy.glsl.misc': ['*.vert', '*.frag', "*.glsl"],
135 'vispy.glsl.transforms': ['*.vert', '*.frag', "*.glsl"],
136
137 },
138 zip_safe=False,
139 classifiers=[
140 'Development Status :: 3 - Alpha',
141 'Intended Audience :: Science/Research',
142 'Intended Audience :: Education',
143 'Intended Audience :: Developers',
144 'Topic :: Scientific/Engineering :: Visualization',
145 'License :: OSI Approved :: BSD License',
146 'Operating System :: MacOS :: MacOS X',
147 'Operating System :: Microsoft :: Windows',
148 'Operating System :: POSIX',
149 'Programming Language :: Python',
150 'Programming Language :: Python :: 3.6',
151 'Programming Language :: Python :: 3.7',
152 'Programming Language :: Python :: 3.8',
153 'Framework :: IPython'
154 ],
155 )
156
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,15 +24,11 @@
import os
from os import path as op
-from distutils import log
-from setuptools import setup, find_packages, Extension
+from setuptools import setup, find_packages
import numpy as np
from Cython.Build import cythonize
-
-log.set_verbosity(log.DEBUG)
-log.info('setup.py entered')
-log.info('$PATH=%s' % os.environ['PATH'])
+from Cython.Distutils import Extension
name = 'vispy'
description = 'Interactive visualization in Python'
@@ -56,8 +52,11 @@
extensions = [Extension('vispy.visuals.text._sdf_cpu',
- [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],
- include_dirs=[np.get_include()]),
+ sources=[op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],
+ include_dirs=[np.get_include()],
+ cython_directives={"language_level": "3"},
+ define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
+ ),
]
readme = open('README.rst', 'r').read()
@@ -70,7 +69,7 @@
},
author='Vispy contributors',
author_email='[email protected]',
- license='(new) BSD',
+ license='BSD-3-Clause',
url='http://vispy.org',
download_url='https://pypi.python.org/pypi/vispy',
keywords=[
@@ -92,9 +91,8 @@
long_description_content_type='text/x-rst',
platforms='any',
provides=['vispy'],
- python_requires='>=3.6',
+ python_requires='>=3.8',
install_requires=['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging'],
- setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive', 'packaging'],
extras_require={
'ipython-static': ['ipython'],
'pyglet': ['pyglet>=1.2'],
@@ -147,9 +145,10 @@
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python',
- 'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
+ 'Programming Language :: Python :: 3.10',
+ 'Programming Language :: Python :: 3.11',
'Framework :: IPython'
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,15 +24,11 @@\n \n import os\n from os import path as op\n-from distutils import log\n-from setuptools import setup, find_packages, Extension\n+from setuptools import setup, find_packages\n \n import numpy as np\n from Cython.Build import cythonize\n-\n-log.set_verbosity(log.DEBUG)\n-log.info('setup.py entered')\n-log.info('$PATH=%s' % os.environ['PATH'])\n+from Cython.Distutils import Extension\n \n name = 'vispy'\n description = 'Interactive visualization in Python'\n@@ -56,8 +52,11 @@\n \n \n extensions = [Extension('vispy.visuals.text._sdf_cpu',\n- [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],\n- include_dirs=[np.get_include()]),\n+ sources=[op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],\n+ include_dirs=[np.get_include()],\n+ cython_directives={\"language_level\": \"3\"},\n+ define_macros=[(\"NPY_NO_DEPRECATED_API\", \"NPY_1_7_API_VERSION\")],\n+ ),\n ]\n \n readme = open('README.rst', 'r').read()\n@@ -70,7 +69,7 @@\n },\n author='Vispy contributors',\n author_email='[email protected]',\n- license='(new) BSD',\n+ license='BSD-3-Clause',\n url='http://vispy.org',\n download_url='https://pypi.python.org/pypi/vispy',\n keywords=[\n@@ -92,9 +91,8 @@\n long_description_content_type='text/x-rst',\n platforms='any',\n provides=['vispy'],\n- python_requires='>=3.6',\n+ python_requires='>=3.8',\n install_requires=['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging'],\n- setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive', 'packaging'],\n extras_require={\n 'ipython-static': ['ipython'],\n 'pyglet': ['pyglet>=1.2'],\n@@ -147,9 +145,10 @@\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n- 'Programming Language :: Python :: 3.6',\n- 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n+ 'Programming Language :: Python :: 3.9',\n+ 'Programming Language :: Python :: 3.10',\n+ 'Programming Language :: Python :: 3.11',\n 'Framework :: IPython'\n ],\n )\n", "issue": "Upgrade to Cython 3 for all builds\nCython 3.0 is out now. There are a lot of changes including many changes to the defaults. I've learned quite a few gotchas from my other cython-based projects that hopefully upgrade will be easy for us since we only have a few Cython things in vispy. Just making an issue so if someone else wants to tackle it before me feel free.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\"\"\"Vispy setup script.\n\nSteps to do a new release:\n\nPreparations:\n * Test on Windows, Linux, Mac\n * Make release notes\n * Update API documentation and other docs that need updating.\n\nDefine the version and release:\n * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z' -m \"Version X.Y.Z\"`\n * push tag to github\n * verify that azure pipelines complete\n * verify that `.tar.gz` sdist and binary wheels are available on PyPI\n\nAnnouncing:\n * It can be worth waiting a day for eager users to report critical bugs\n * Announce in scipy-user, vispy mailing list, twitter (@vispyproject)\n\n\"\"\"\n\nimport os\nfrom os import path as op\nfrom distutils import log\nfrom setuptools import setup, find_packages, Extension\n\nimport numpy as np\nfrom Cython.Build import cythonize\n\nlog.set_verbosity(log.DEBUG)\nlog.info('setup.py entered')\nlog.info('$PATH=%s' % os.environ['PATH'])\n\nname = 'vispy'\ndescription = 'Interactive visualization in Python'\n\n# Special commands for building jupyter notebook extension\nhere = os.path.dirname(os.path.abspath(__file__))\nnode_root = os.path.join(here, 'js')\nis_repo = os.path.exists(os.path.join(here, '.git'))\n\nnpm_path = os.pathsep.join([\n os.path.join(node_root, 'node_modules', '.bin'),\n os.environ.get('PATH', os.defpath),\n])\n\n\ndef set_builtin(name, value):\n if isinstance(__builtins__, dict):\n __builtins__[name] = value\n else:\n setattr(__builtins__, name, value)\n\n\nextensions = [Extension('vispy.visuals.text._sdf_cpu',\n [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],\n include_dirs=[np.get_include()]),\n ]\n\nreadme = open('README.rst', 'r').read()\nsetup(\n name=name,\n use_scm_version={\n 'write_to': 'vispy/version.py',\n # uses setuptools_scm.version.get_local_dirty_tag (+dirty or empty string)\n 'local_scheme': 'dirty-tag',\n },\n author='Vispy contributors',\n author_email='[email protected]',\n license='(new) BSD',\n url='http://vispy.org',\n download_url='https://pypi.python.org/pypi/vispy',\n keywords=[\n 'visualization',\n 'OpenGl',\n 'ES',\n 'medical',\n 'imaging',\n '3D',\n 'plotting',\n 'numpy',\n 'bigdata',\n 'ipython',\n 'jupyter',\n 'widgets',\n ],\n description=description,\n long_description=readme,\n long_description_content_type='text/x-rst',\n platforms='any',\n provides=['vispy'],\n python_requires='>=3.6',\n install_requires=['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging'],\n setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive', 'packaging'],\n extras_require={\n 'ipython-static': ['ipython'],\n 'pyglet': ['pyglet>=1.2'],\n 'pyqt5': ['pyqt5'],\n 'pyqt6': ['pyqt6'],\n 'pyside': ['PySide'],\n 'pyside2': ['PySide2'],\n 'pyside6': ['PySide6'],\n 'sdl2': ['PySDL2'],\n 'wx': ['wxPython'],\n 'tk': ['pyopengltk'],\n 'doc': ['pydata-sphinx-theme', 'numpydoc', 'sphinxcontrib-apidoc',\n 'sphinx-gallery', 'myst-parser', 'pillow', 'pytest',\n 'pyopengl'],\n 'io': ['meshio', 'Pillow'],\n },\n packages=find_packages(exclude=['make']),\n ext_modules=cythonize(extensions, language_level=3),\n package_dir={'vispy': 'vispy'},\n data_files=[],\n include_package_data=True,\n package_data={\n 'vispy': [op.join('io', '_data', '*'),\n op.join('app', 'tests', 'qt-designer.ui'),\n op.join('util', 'fonts', 'data', '*.ttf'),\n ],\n\n 'vispy.glsl': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.antialias': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.arrowheads': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.arrows': ['*.vert', '*.frag', 
\"*.glsl\"],\n 'vispy.glsl.collections': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.colormaps': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.lines': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.markers': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.math': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.misc': ['*.vert', '*.frag', \"*.glsl\"],\n 'vispy.glsl.transforms': ['*.vert', '*.frag', \"*.glsl\"],\n\n },\n zip_safe=False,\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Education',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Framework :: IPython'\n ],\n)\n", "path": "setup.py"}]} | 2,345 | 631 |
gh_patches_debug_12979 | rasdani/github-patches | git_diff | beetbox__beets-4960 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sidebar scrolling in docs not working and general adjustment ideas
I was too lazy for a PR, but I quickfixed the sidebar in Alabaster. It did not stay "in place" like we had it before. More adjustments might come in a proper PR.
**_~Well, the fix does not work in master but it does work on my local docs build!~_**
Update: Actually the sidebar scrolling did work; there was probably a caching issue in my local browser, so that is [x] DONE
Anyway, we should take care of docs build and customization. Some ideas:
- Explicitly state Alabaster as the theme to use (see the conf.py sketch after this list).
- Set the pic we have for the beetbox organization as the logo in the docs. (Or do we have a logo, or does someone want to design one?)
- Default to "latest" version of the docs
- ...I might add some more ideas here...
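For illustration, the first two ideas could translate into something like this in `docs/conf.py` — the logo filename is a placeholder, since no image has been settled on yet, and it assumes the file would live under the docs static directory:

```python
# Illustrative sketch, not a decided configuration.
html_theme = "alabaster"  # state the theme explicitly instead of relying on the default
html_theme_options = {
    "fixed_sidebar": True,   # keep the sidebar in place while scrolling
    "logo": "beetbox.png",   # placeholder: the beetbox organization picture
    "logo_name": True,       # show the project name under the logo
}
```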
Might be related: https://github.com/beetbox/beets/issues/4912
_Originally posted by @JOJ0 in https://github.com/beetbox/beets/issues/4644#issuecomment-1728881928_
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is part of beets.
4 # Copyright 2016, Adrian Sampson.
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining
7 # a copy of this software and associated documentation files (the
8 # "Software"), to deal in the Software without restriction, including
9 # without limitation the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the Software, and to
11 # permit persons to whom the Software is furnished to do so, subject to
12 # the following conditions:
13 #
14 # The above copyright notice and this permission notice shall be
15 # included in all copies or substantial portions of the Software.
16
17
18 import os
19 import shutil
20 import subprocess
21 import sys
22
23 from setuptools import setup
24
25
26 def _read(fn):
27 path = os.path.join(os.path.dirname(__file__), fn)
28 return open(path).read()
29
30
31 def build_manpages():
32 # Go into the docs directory and build the manpage.
33 docdir = os.path.join(os.path.dirname(__file__), "docs")
34 curdir = os.getcwd()
35 os.chdir(docdir)
36 try:
37 subprocess.check_call(["make", "man"])
38 except OSError:
39 print("Could not build manpages (make man failed)!", file=sys.stderr)
40 return
41 finally:
42 os.chdir(curdir)
43
44 # Copy resulting manpages.
45 mandir = os.path.join(os.path.dirname(__file__), "man")
46 if os.path.exists(mandir):
47 shutil.rmtree(mandir)
48 shutil.copytree(os.path.join(docdir, "_build", "man"), mandir)
49
50
51 # Build manpages if we're making a source distribution tarball.
52 if "sdist" in sys.argv:
53 build_manpages()
54
55
56 setup(
57 name="beets",
58 version="1.6.1",
59 description="music tagger and library organizer",
60 author="Adrian Sampson",
61 author_email="[email protected]",
62 url="https://beets.io/",
63 license="MIT",
64 platforms="ALL",
65 long_description=_read("README.rst"),
66 test_suite="test.testall.suite",
67 zip_safe=False,
68 include_package_data=True, # Install plugin resources.
69 packages=[
70 "beets",
71 "beets.ui",
72 "beets.autotag",
73 "beets.util",
74 "beets.dbcore",
75 "beetsplug",
76 "beetsplug.bpd",
77 "beetsplug.web",
78 "beetsplug.lastgenre",
79 "beetsplug.metasync",
80 ],
81 entry_points={
82 "console_scripts": [
83 "beet = beets.ui:main",
84 ],
85 },
86 install_requires=[
87 "unidecode>=1.3.6",
88 "musicbrainzngs>=0.4",
89 "pyyaml",
90 "mediafile>=0.12.0",
91 "confuse>=1.5.0",
92 "munkres>=1.0.0",
93 "jellyfish",
94 "typing_extensions",
95 ]
96 + (
97 # Support for ANSI console colors on Windows.
98 ["colorama"]
99 if (sys.platform == "win32")
100 else []
101 ),
102 extras_require={
103 "test": [
104 "beautifulsoup4",
105 "coverage",
106 "flask",
107 "mock",
108 "pylast",
109 "pytest",
110 "python-mpd2",
111 "pyxdg",
112 "responses>=0.3.0",
113 "requests_oauthlib",
114 "reflink",
115 "rarfile",
116 "python3-discogs-client>=2.3.15",
117 "py7zr",
118 ],
119 "lint": [
120 "flake8",
121 "flake8-docstrings",
122 "pep8-naming",
123 ],
124 "mypy": [
125 "mypy",
126 "types-Pillow",
127 "types-urllib3",
128 "types-beautifulsoup4",
129 "types-PyYAML",
130 "types-requests",
131 "types-Flask-Cors",
132 ],
133 "docs": [
134 "sphinx",
135 "sphinx_rtd_theme",
136 ],
137 # Plugin (optional) dependencies:
138 "absubmit": ["requests"],
139 "fetchart": ["requests", "Pillow", "beautifulsoup4"],
140 "embedart": ["Pillow"],
141 "embyupdate": ["requests"],
142 "chroma": ["pyacoustid"],
143 "discogs": ["python3-discogs-client>=2.3.15"],
144 "beatport": ["requests-oauthlib>=0.6.1"],
145 "kodiupdate": ["requests"],
146 "lastgenre": ["pylast"],
147 "lastimport": ["pylast"],
148 "lyrics": ["requests", "beautifulsoup4", "langdetect"],
149 "mpdstats": ["python-mpd2>=0.4.2"],
150 "plexupdate": ["requests"],
151 "web": ["flask", "flask-cors"],
152 "import": ["rarfile", "py7zr"],
153 "thumbnails": ["pyxdg", "Pillow"],
154 "metasync": ["dbus-python"],
155 "sonosupdate": ["soco"],
156 "scrub": ["mutagen>=1.33"],
157 "bpd": ["PyGObject"],
158 "replaygain": ["PyGObject"],
159 "reflink": ["reflink"],
160 },
161 # Non-Python/non-PyPI plugin dependencies:
162 # chroma: chromaprint or fpcalc
163 # convert: ffmpeg
164 # badfiles: mp3val and flac
165 # bpd: python-gi and GStreamer 1.0+
166 # embedart: ImageMagick
167 # absubmit: extractor binary from https://acousticbrainz.org/download
168 # keyfinder: KeyFinder
169 # replaygain: python-gi and GStreamer 1.0+
170 # or mp3gain/aacgain
171 # or Python Audio Tools
172 # or ffmpeg
173 # ipfs: go-ipfs
174 classifiers=[
175 "Topic :: Multimedia :: Sound/Audio",
176 "Topic :: Multimedia :: Sound/Audio :: Players :: MP3",
177 "License :: OSI Approved :: MIT License",
178 "Environment :: Console",
179 "Environment :: Web Environment",
180 "Programming Language :: Python",
181 "Programming Language :: Python :: 3",
182 "Programming Language :: Python :: 3.7",
183 "Programming Language :: Python :: 3.8",
184 "Programming Language :: Python :: 3.9",
185 "Programming Language :: Python :: 3.10",
186 "Programming Language :: Python :: Implementation :: CPython",
187 ],
188 )
189
[end of setup.py]
[start of docs/conf.py]
1 AUTHOR = "Adrian Sampson"
2
3 # General configuration
4
5 extensions = ["sphinx.ext.autodoc", "sphinx.ext.extlinks"]
6
7 exclude_patterns = ["_build"]
8 source_suffix = ".rst"
9 master_doc = "index"
10
11 project = "beets"
12 copyright = "2016, Adrian Sampson"
13
14 version = "1.6"
15 release = "1.6.1"
16
17 pygments_style = "sphinx"
18
19 # External links to the bug tracker and other sites.
20 extlinks = {
21 "bug": ("https://github.com/beetbox/beets/issues/%s", "#%s"),
22 "user": ("https://github.com/%s", "%s"),
23 "pypi": ("https://pypi.org/project/%s/", "%s"),
24 "stdlib": ("https://docs.python.org/3/library/%s.html", "%s"),
25 }
26
27 linkcheck_ignore = [
28 r"https://github.com/beetbox/beets/issues/",
29 r"https://github.com/[^/]+$", # ignore user pages
30 r".*localhost.*",
31 r"https?://127\.0\.0\.1",
32 r"https://www.musixmatch.com/", # blocks requests
33 r"https://genius.com/", # blocks requests
34 ]
35
36 # Options for HTML output
37 htmlhelp_basename = "beetsdoc"
38
39 # Options for LaTeX output
40 latex_documents = [
41 ("index", "beets.tex", "beets Documentation", AUTHOR, "manual"),
42 ]
43
44 # Options for manual page output
45 man_pages = [
46 (
47 "reference/cli",
48 "beet",
49 "music tagger and library organizer",
50 [AUTHOR],
51 1,
52 ),
53 (
54 "reference/config",
55 "beetsconfig",
56 "beets configuration file",
57 [AUTHOR],
58 5,
59 ),
60 ]
61
62 # Options for Alabaster theme
63 html_theme_options = {"fixed_sidebar": True}
64
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,5 +59,16 @@
),
]
-# Options for Alabaster theme
-html_theme_options = {"fixed_sidebar": True}
+# Options for pydata theme
+html_theme = 'pydata_sphinx_theme'
+html_theme_options = {
+ 'collapse_navigation': True,
+ "logo": {
+ "text": "beets",
+ },
+ "pygment_light_style": "bw",
+}
+html_title = "beets"
+html_logo = "_static/beets_logo.png"
+html_static_path = ['_static']
+html_css_files = ['beets.css']
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -132,7 +132,7 @@
],
"docs": [
"sphinx",
- "sphinx_rtd_theme",
+ "pydata_sphinx_theme",
],
# Plugin (optional) dependencies:
"absubmit": ["requests"],
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -59,5 +59,16 @@\n ),\n ]\n \n-# Options for Alabaster theme\n-html_theme_options = {\"fixed_sidebar\": True}\n+# Options for pydata theme\n+html_theme = 'pydata_sphinx_theme'\n+html_theme_options = {\n+ 'collapse_navigation': True,\n+ \"logo\": {\n+ \"text\": \"beets\",\n+ },\n+ \"pygment_light_style\": \"bw\",\n+}\n+html_title = \"beets\"\n+html_logo = \"_static/beets_logo.png\"\n+html_static_path = ['_static']\n+html_css_files = ['beets.css']\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -132,7 +132,7 @@\n ],\n \"docs\": [\n \"sphinx\",\n- \"sphinx_rtd_theme\",\n+ \"pydata_sphinx_theme\",\n ],\n # Plugin (optional) dependencies:\n \"absubmit\": [\"requests\"],\n", "issue": "Sidebar scrolling in docs not working and general adjustment ideas\nI was too lazy for a PR but I a quickfixed the sidebar in Alabaster. It did not stay \"in place\" like we had it before. More adjustments might come in proper PR.\r\n\r\n**_~Well, the fix does not work in master but it does work on my local docs build!~_**\r\n\r\nUpdate: Actually the sidebar scrolling did work, probably there was some caching issue with my local browser, so that is [x] DONE\r\n\r\nAnyway, we should take care of docs build and customization. Some ideas:\r\n\r\n- Explicitely state Alabaster as the theme to use.\r\n- Set the pic we have for the beetbox organization as the logo in the docs. (Or do we have a logo or does someone want to design one?\r\n- Default to \"latest\" version of the docs\r\n- ...I might add some more ideas here...\r\n\r\nMight be related: https://github.com/beetbox/beets/issues/4912\r\n\r\n_Originally posted by @JOJ0 in https://github.com/beetbox/beets/issues/4644#issuecomment-1728881928_\r\n \n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is part of beets.\n# Copyright 2016, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup\n\n\ndef _read(fn):\n path = os.path.join(os.path.dirname(__file__), fn)\n return open(path).read()\n\n\ndef build_manpages():\n # Go into the docs directory and build the manpage.\n docdir = os.path.join(os.path.dirname(__file__), \"docs\")\n curdir = os.getcwd()\n os.chdir(docdir)\n try:\n subprocess.check_call([\"make\", \"man\"])\n except OSError:\n print(\"Could not build manpages (make man failed)!\", file=sys.stderr)\n return\n finally:\n os.chdir(curdir)\n\n # Copy resulting manpages.\n mandir = os.path.join(os.path.dirname(__file__), \"man\")\n if os.path.exists(mandir):\n shutil.rmtree(mandir)\n shutil.copytree(os.path.join(docdir, \"_build\", \"man\"), mandir)\n\n\n# Build manpages if we're making a source distribution tarball.\nif \"sdist\" in sys.argv:\n build_manpages()\n\n\nsetup(\n name=\"beets\",\n version=\"1.6.1\",\n description=\"music tagger and library organizer\",\n author=\"Adrian 
Sampson\",\n author_email=\"[email protected]\",\n url=\"https://beets.io/\",\n license=\"MIT\",\n platforms=\"ALL\",\n long_description=_read(\"README.rst\"),\n test_suite=\"test.testall.suite\",\n zip_safe=False,\n include_package_data=True, # Install plugin resources.\n packages=[\n \"beets\",\n \"beets.ui\",\n \"beets.autotag\",\n \"beets.util\",\n \"beets.dbcore\",\n \"beetsplug\",\n \"beetsplug.bpd\",\n \"beetsplug.web\",\n \"beetsplug.lastgenre\",\n \"beetsplug.metasync\",\n ],\n entry_points={\n \"console_scripts\": [\n \"beet = beets.ui:main\",\n ],\n },\n install_requires=[\n \"unidecode>=1.3.6\",\n \"musicbrainzngs>=0.4\",\n \"pyyaml\",\n \"mediafile>=0.12.0\",\n \"confuse>=1.5.0\",\n \"munkres>=1.0.0\",\n \"jellyfish\",\n \"typing_extensions\",\n ]\n + (\n # Support for ANSI console colors on Windows.\n [\"colorama\"]\n if (sys.platform == \"win32\")\n else []\n ),\n extras_require={\n \"test\": [\n \"beautifulsoup4\",\n \"coverage\",\n \"flask\",\n \"mock\",\n \"pylast\",\n \"pytest\",\n \"python-mpd2\",\n \"pyxdg\",\n \"responses>=0.3.0\",\n \"requests_oauthlib\",\n \"reflink\",\n \"rarfile\",\n \"python3-discogs-client>=2.3.15\",\n \"py7zr\",\n ],\n \"lint\": [\n \"flake8\",\n \"flake8-docstrings\",\n \"pep8-naming\",\n ],\n \"mypy\": [\n \"mypy\",\n \"types-Pillow\",\n \"types-urllib3\",\n \"types-beautifulsoup4\",\n \"types-PyYAML\",\n \"types-requests\",\n \"types-Flask-Cors\",\n ],\n \"docs\": [\n \"sphinx\",\n \"sphinx_rtd_theme\",\n ],\n # Plugin (optional) dependencies:\n \"absubmit\": [\"requests\"],\n \"fetchart\": [\"requests\", \"Pillow\", \"beautifulsoup4\"],\n \"embedart\": [\"Pillow\"],\n \"embyupdate\": [\"requests\"],\n \"chroma\": [\"pyacoustid\"],\n \"discogs\": [\"python3-discogs-client>=2.3.15\"],\n \"beatport\": [\"requests-oauthlib>=0.6.1\"],\n \"kodiupdate\": [\"requests\"],\n \"lastgenre\": [\"pylast\"],\n \"lastimport\": [\"pylast\"],\n \"lyrics\": [\"requests\", \"beautifulsoup4\", \"langdetect\"],\n \"mpdstats\": [\"python-mpd2>=0.4.2\"],\n \"plexupdate\": [\"requests\"],\n \"web\": [\"flask\", \"flask-cors\"],\n \"import\": [\"rarfile\", \"py7zr\"],\n \"thumbnails\": [\"pyxdg\", \"Pillow\"],\n \"metasync\": [\"dbus-python\"],\n \"sonosupdate\": [\"soco\"],\n \"scrub\": [\"mutagen>=1.33\"],\n \"bpd\": [\"PyGObject\"],\n \"replaygain\": [\"PyGObject\"],\n \"reflink\": [\"reflink\"],\n },\n # Non-Python/non-PyPI plugin dependencies:\n # chroma: chromaprint or fpcalc\n # convert: ffmpeg\n # badfiles: mp3val and flac\n # bpd: python-gi and GStreamer 1.0+\n # embedart: ImageMagick\n # absubmit: extractor binary from https://acousticbrainz.org/download\n # keyfinder: KeyFinder\n # replaygain: python-gi and GStreamer 1.0+\n # or mp3gain/aacgain\n # or Python Audio Tools\n # or ffmpeg\n # ipfs: go-ipfs\n classifiers=[\n \"Topic :: Multimedia :: Sound/Audio\",\n \"Topic :: Multimedia :: Sound/Audio :: Players :: MP3\",\n \"License :: OSI Approved :: MIT License\",\n \"Environment :: Console\",\n \"Environment :: Web Environment\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n ],\n)\n", "path": "setup.py"}, {"content": "AUTHOR = \"Adrian Sampson\"\n\n# General configuration\n\nextensions = [\"sphinx.ext.autodoc\", \"sphinx.ext.extlinks\"]\n\nexclude_patterns = 
[\"_build\"]\nsource_suffix = \".rst\"\nmaster_doc = \"index\"\n\nproject = \"beets\"\ncopyright = \"2016, Adrian Sampson\"\n\nversion = \"1.6\"\nrelease = \"1.6.1\"\n\npygments_style = \"sphinx\"\n\n# External links to the bug tracker and other sites.\nextlinks = {\n \"bug\": (\"https://github.com/beetbox/beets/issues/%s\", \"#%s\"),\n \"user\": (\"https://github.com/%s\", \"%s\"),\n \"pypi\": (\"https://pypi.org/project/%s/\", \"%s\"),\n \"stdlib\": (\"https://docs.python.org/3/library/%s.html\", \"%s\"),\n}\n\nlinkcheck_ignore = [\n r\"https://github.com/beetbox/beets/issues/\",\n r\"https://github.com/[^/]+$\", # ignore user pages\n r\".*localhost.*\",\n r\"https?://127\\.0\\.0\\.1\",\n r\"https://www.musixmatch.com/\", # blocks requests\n r\"https://genius.com/\", # blocks requests\n]\n\n# Options for HTML output\nhtmlhelp_basename = \"beetsdoc\"\n\n# Options for LaTeX output\nlatex_documents = [\n (\"index\", \"beets.tex\", \"beets Documentation\", AUTHOR, \"manual\"),\n]\n\n# Options for manual page output\nman_pages = [\n (\n \"reference/cli\",\n \"beet\",\n \"music tagger and library organizer\",\n [AUTHOR],\n 1,\n ),\n (\n \"reference/config\",\n \"beetsconfig\",\n \"beets configuration file\",\n [AUTHOR],\n 5,\n ),\n]\n\n# Options for Alabaster theme\nhtml_theme_options = {\"fixed_sidebar\": True}\n", "path": "docs/conf.py"}]} | 3,254 | 243 |
gh_patches_debug_63371 | rasdani/github-patches | git_diff | mkdocs__mkdocs-190 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make syntax highlighting optional
It would be nice to have an option to prevent the prettify class from being added to the `<pre>` tag. Personally, I prefer using another highlighter that doesn't rely on extra classes.
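For illustration, one way this could look in `mkdocs/build.py` — the `prettify_class` parameter is hypothetical, not an existing MkDocs option:

```python
# Hypothetical sketch: make the class added to <pre> tags configurable.
def post_process_html(html_content, nav=None, prettify_class="prettyprint well"):
    # ... the existing anchor/src rewriting would stay as-is ...
    if prettify_class:
        html_content = html_content.replace(
            '<pre>', '<pre class="%s">' % prettify_class
        )
    return html_content
```

Setting `prettify_class` to an empty value (e.g. via a new config key) would leave `<pre>` untouched for people who use a different highlighter.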
</issue>
<code>
[start of mkdocs/build.py]
1 # coding: utf-8
2 from __future__ import print_function
3
4 from mkdocs import nav, toc, utils
5 from mkdocs.compat import urljoin, urlparse, urlunparse, PY2
6 import jinja2
7 import markdown
8 import os
9 import re
10 import json
11
12
13 class PathToURL(object):
14 def __init__(self, template, nav=None):
15 self.template = template
16 self.nav = nav
17
18 def __call__(self, match):
19 url = match.groups()[0]
20 scheme, netloc, path, query, query, fragment = urlparse(url)
21
22 if scheme or netloc:
23 # Ignore URLs unless they are a relative link to a markdown file.
24 return self.template % url
25
26 if self.nav and not utils.is_markdown_file(path):
27 path = utils.create_media_urls(self.nav, [path])[0]
28 elif self.nav:
29 # If the site navigation has been provided, then validate
30 # the internal hyperlink, making sure the target actually exists.
31 target_file = self.nav.file_context.make_absolute(path)
32 if target_file not in self.nav.source_files:
33 source_file = self.nav.file_context.current_file
34 msg = (
35 'The page "%s" contained a hyperlink to "%s" which '
36 'is not listed in the "pages" configuration.'
37 )
38 assert False, msg % (source_file, target_file)
39 path = utils.get_url_path(target_file, self.nav.use_directory_urls)
40 path = self.nav.url_context.make_relative(path)
41 else:
42 path = utils.get_url_path(path).lstrip('/')
43
44 # Convert the .md hyperlink to a relative hyperlink to the HTML page.
45 url = urlunparse((scheme, netloc, path, query, query, fragment))
46 return self.template % url
47
48
49 def convert_markdown(markdown_source, extensions=()):
50 """
51 Convert the Markdown source file to HTML content, and additionally
52 return the parsed table of contents, and a dictionary of any metadata
53 that was specified in the Markdown file.
54
55 `extensions` is an optional sequence of Python Markdown extensions to add
56 to the default set.
57 """
58
59 # Prepend a table of contents marker for the TOC extension
60 markdown_source = toc.pre_process(markdown_source)
61
62 # Generate the HTML from the markdown source
63 md = markdown.Markdown(
64 extensions=['meta', 'toc', 'tables', 'fenced_code'] + list(extensions)
65 )
66 html_content = md.convert(markdown_source)
67 meta = md.Meta
68
69 # Strip out the generated table of contents
70 (html_content, toc_html) = toc.post_process(html_content)
71
72 # Post process the generated table of contents into a data structure
73 table_of_contents = toc.TableOfContents(toc_html)
74
75 return (html_content, table_of_contents, meta)
76
77
78 def post_process_html(html_content, nav=None):
79
80 anchor_sub = PathToURL('a href="%s"', nav)
81 html_content = re.sub(r'a href="([^"]*)"', anchor_sub, html_content)
82
83 img_sub = PathToURL('src="%s"', nav)
84 html_content = re.sub(r'src="([^"]*)"', img_sub, html_content)
85
86 html_content = html_content.replace('<pre>', '<pre class="prettyprint well">')
87
88 return html_content
89
90
91 def get_context(page, content, nav, toc, meta, config):
92 site_name = config['site_name']
93
94 if page.is_homepage or page.title is None:
95 page_title = site_name
96 else:
97 page_title = page.title + ' - ' + site_name
98
99 if page.is_homepage:
100 page_description = config['site_description']
101 else:
102 page_description = None
103
104 if config['site_url']:
105 base = config['site_url']
106 if not base.endswith('/'):
107 base += '/'
108 canonical_url = urljoin(base, page.abs_url.lstrip('/'))
109 else:
110 canonical_url = None
111
112 if config['site_favicon']:
113 site_favicon = nav.url_context.make_relative('/' + config['site_favicon'])
114 else:
115 site_favicon = None
116
117 extra_javascript = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])
118
119 extra_css = utils.create_media_urls(nav=nav, url_list=config['extra_css'])
120
121 return {
122 'site_name': site_name,
123 'site_author': config['site_author'],
124 'favicon': site_favicon,
125
126 'page_title': page_title,
127 'page_description': page_description,
128
129 'content': content,
130 'toc': toc,
131 'nav': nav,
132 'meta': meta,
133
134 'base_url': nav.url_context.make_relative('/'),
135 'homepage_url': nav.homepage.url,
136 'canonical_url': canonical_url,
137
138 'current_page': page,
139 'previous_page': page.previous_page,
140 'next_page': page.next_page,
141
142 # Note that there's intentionally repetition here. Rather than simply
143 # provide the config dictionary we instead pass everything explicitly.
144 #
145 # This helps ensure that we can throughly document the context that
146 # gets passed to themes.
147 'repo_url': config['repo_url'],
148 'repo_name': config['repo_name'],
149
150 'extra_css': extra_css,
151 'extra_javascript': extra_javascript,
152
153 'include_nav': config['include_nav'],
154 'include_next_prev': config['include_next_prev'],
155 'include_search': config['include_search'],
156
157 'copyright': config['copyright'],
158 'google-analytics': config['google-analytics']
159 }
160
161
162 def build_pages(config, dump_json=False):
163 """
164 Builds all the pages and writes them into the build directory.
165 """
166 site_navigation = nav.SiteNavigation(config['pages'], config['use_directory_urls'])
167 loader = jinja2.FileSystemLoader(config['theme_dir'])
168 env = jinja2.Environment(loader=loader)
169
170 for page in site_navigation.walk_pages():
171 # Read the input file
172 input_path = os.path.join(config['docs_dir'], page.input_path)
173 input_content = open(input_path, 'r').read()
174 if PY2:
175 input_content = input_content.decode('utf-8')
176
177 # Process the markdown text
178 html_content, table_of_contents, meta = convert_markdown(
179 input_content, extensions=config['markdown_extensions']
180 )
181 html_content = post_process_html(html_content, site_navigation)
182
183 context = get_context(
184 page, html_content, site_navigation,
185 table_of_contents, meta, config
186 )
187
188 # Allow 'template:' override in md source files.
189 if 'template' in meta:
190 template = env.get_template(meta['template'][0])
191 else:
192 template = env.get_template('base.html')
193
194 # Render the template.
195 output_content = template.render(context)
196
197 # Write the output file.
198 output_path = os.path.join(config['site_dir'], page.output_path)
199 if dump_json:
200 json_context = {
201 'content': context['content'],
202 'title': context['current_page'].title,
203 'url': context['current_page'].abs_url,
204 'language': 'en',
205 }
206 utils.write_file(json.dumps(json_context, indent=4).encode('utf-8'), output_path.replace('.html', '.json'))
207 else:
208 utils.write_file(output_content.encode('utf-8'), output_path)
209
210
211 def build(config, live_server=False, dump_json=False, clean_site_dir=False):
212 """
213 Perform a full site build.
214 """
215 if clean_site_dir:
216 print("Cleaning site directory")
217 utils.clean_directory(config['site_dir'])
218 if not live_server:
219 print("Building documentation to directory: %s" % config['site_dir'])
220 if not clean_site_dir and site_directory_contains_stale_files(config['site_dir']):
221 print("Directory %s contains stale files. Use --clean to remove them." % config['site_dir'])
222
223 if dump_json:
224 build_pages(config, dump_json=True)
225 else:
226 # Reversed as we want to take the media files from the builtin theme
227 # and then from the custom theme_dir so the custom versions take take
228 # precedence.
229 for theme_dir in reversed(config['theme_dir']):
230 utils.copy_media_files(theme_dir, config['site_dir'])
231 utils.copy_media_files(config['docs_dir'], config['site_dir'])
232 build_pages(config)
233
234
235 def site_directory_contains_stale_files(site_directory):
236 """
237 Check if the site directory contains stale files from a previous build.
238 Right now the check returns true if the directory is not empty.
239 A more sophisticated approach should be found to trigger only if there are
240 files that won't be overwritten anyway.
241 """
242 if os.path.exists(site_directory):
243 if os.listdir(site_directory):
244 return True
245 return False
246
[end of mkdocs/build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/build.py b/mkdocs/build.py
--- a/mkdocs/build.py
+++ b/mkdocs/build.py
@@ -83,8 +83,6 @@
img_sub = PathToURL('src="%s"', nav)
html_content = re.sub(r'src="([^"]*)"', img_sub, html_content)
- html_content = html_content.replace('<pre>', '<pre class="prettyprint well">')
-
return html_content
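
For users who still want the old styling after this change, the dropped substitution can be re-applied in their own post-processing step. A minimal sketch, not part of the patch itself:

```python
def add_prettyprint_class(html_content):
    # Re-apply the class that post_process_html no longer forces onto <pre>.
    return html_content.replace('<pre>', '<pre class="prettyprint well">')
```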
| {"golden_diff": "diff --git a/mkdocs/build.py b/mkdocs/build.py\n--- a/mkdocs/build.py\n+++ b/mkdocs/build.py\n@@ -83,8 +83,6 @@\n img_sub = PathToURL('src=\"%s\"', nav)\n html_content = re.sub(r'src=\"([^\"]*)\"', img_sub, html_content)\n \n- html_content = html_content.replace('<pre>', '<pre class=\"prettyprint well\">')\n-\n return html_content\n", "issue": "Make syntax highlighting optional\nIt would be nice to have an option to disable the prettify class from being added to the pre-tag. Personally, I prefer using another highlighter that doesn't rely on extra classes.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\n\nfrom mkdocs import nav, toc, utils\nfrom mkdocs.compat import urljoin, urlparse, urlunparse, PY2\nimport jinja2\nimport markdown\nimport os\nimport re\nimport json\n\n\nclass PathToURL(object):\n def __init__(self, template, nav=None):\n self.template = template\n self.nav = nav\n\n def __call__(self, match):\n url = match.groups()[0]\n scheme, netloc, path, query, query, fragment = urlparse(url)\n\n if scheme or netloc:\n # Ignore URLs unless they are a relative link to a markdown file.\n return self.template % url\n\n if self.nav and not utils.is_markdown_file(path):\n path = utils.create_media_urls(self.nav, [path])[0]\n elif self.nav:\n # If the site navigation has been provided, then validate\n # the internal hyperlink, making sure the target actually exists.\n target_file = self.nav.file_context.make_absolute(path)\n if target_file not in self.nav.source_files:\n source_file = self.nav.file_context.current_file\n msg = (\n 'The page \"%s\" contained a hyperlink to \"%s\" which '\n 'is not listed in the \"pages\" configuration.'\n )\n assert False, msg % (source_file, target_file)\n path = utils.get_url_path(target_file, self.nav.use_directory_urls)\n path = self.nav.url_context.make_relative(path)\n else:\n path = utils.get_url_path(path).lstrip('/')\n\n # Convert the .md hyperlink to a relative hyperlink to the HTML page.\n url = urlunparse((scheme, netloc, path, query, query, fragment))\n return self.template % url\n\n\ndef convert_markdown(markdown_source, extensions=()):\n \"\"\"\n Convert the Markdown source file to HTML content, and additionally\n return the parsed table of contents, and a dictionary of any metadata\n that was specified in the Markdown file.\n\n `extensions` is an optional sequence of Python Markdown extensions to add\n to the default set.\n \"\"\"\n\n # Prepend a table of contents marker for the TOC extension\n markdown_source = toc.pre_process(markdown_source)\n\n # Generate the HTML from the markdown source\n md = markdown.Markdown(\n extensions=['meta', 'toc', 'tables', 'fenced_code'] + list(extensions)\n )\n html_content = md.convert(markdown_source)\n meta = md.Meta\n\n # Strip out the generated table of contents\n (html_content, toc_html) = toc.post_process(html_content)\n\n # Post process the generated table of contents into a data structure\n table_of_contents = toc.TableOfContents(toc_html)\n\n return (html_content, table_of_contents, meta)\n\n\ndef post_process_html(html_content, nav=None):\n\n anchor_sub = PathToURL('a href=\"%s\"', nav)\n html_content = re.sub(r'a href=\"([^\"]*)\"', anchor_sub, html_content)\n\n img_sub = PathToURL('src=\"%s\"', nav)\n html_content = re.sub(r'src=\"([^\"]*)\"', img_sub, html_content)\n\n html_content = html_content.replace('<pre>', '<pre class=\"prettyprint well\">')\n\n return html_content\n\n\ndef get_context(page, content, nav, toc, meta, 
config):\n site_name = config['site_name']\n\n if page.is_homepage or page.title is None:\n page_title = site_name\n else:\n page_title = page.title + ' - ' + site_name\n\n if page.is_homepage:\n page_description = config['site_description']\n else:\n page_description = None\n\n if config['site_url']:\n base = config['site_url']\n if not base.endswith('/'):\n base += '/'\n canonical_url = urljoin(base, page.abs_url.lstrip('/'))\n else:\n canonical_url = None\n\n if config['site_favicon']:\n site_favicon = nav.url_context.make_relative('/' + config['site_favicon'])\n else:\n site_favicon = None\n\n extra_javascript = utils.create_media_urls(nav=nav, url_list=config['extra_javascript'])\n\n extra_css = utils.create_media_urls(nav=nav, url_list=config['extra_css'])\n\n return {\n 'site_name': site_name,\n 'site_author': config['site_author'],\n 'favicon': site_favicon,\n\n 'page_title': page_title,\n 'page_description': page_description,\n\n 'content': content,\n 'toc': toc,\n 'nav': nav,\n 'meta': meta,\n\n 'base_url': nav.url_context.make_relative('/'),\n 'homepage_url': nav.homepage.url,\n 'canonical_url': canonical_url,\n\n 'current_page': page,\n 'previous_page': page.previous_page,\n 'next_page': page.next_page,\n\n # Note that there's intentionally repetition here. Rather than simply\n # provide the config dictionary we instead pass everything explicitly.\n #\n # This helps ensure that we can throughly document the context that\n # gets passed to themes.\n 'repo_url': config['repo_url'],\n 'repo_name': config['repo_name'],\n\n 'extra_css': extra_css,\n 'extra_javascript': extra_javascript,\n\n 'include_nav': config['include_nav'],\n 'include_next_prev': config['include_next_prev'],\n 'include_search': config['include_search'],\n\n 'copyright': config['copyright'],\n 'google-analytics': config['google-analytics']\n }\n\n\ndef build_pages(config, dump_json=False):\n \"\"\"\n Builds all the pages and writes them into the build directory.\n \"\"\"\n site_navigation = nav.SiteNavigation(config['pages'], config['use_directory_urls'])\n loader = jinja2.FileSystemLoader(config['theme_dir'])\n env = jinja2.Environment(loader=loader)\n\n for page in site_navigation.walk_pages():\n # Read the input file\n input_path = os.path.join(config['docs_dir'], page.input_path)\n input_content = open(input_path, 'r').read()\n if PY2:\n input_content = input_content.decode('utf-8')\n\n # Process the markdown text\n html_content, table_of_contents, meta = convert_markdown(\n input_content, extensions=config['markdown_extensions']\n )\n html_content = post_process_html(html_content, site_navigation)\n\n context = get_context(\n page, html_content, site_navigation,\n table_of_contents, meta, config\n )\n\n # Allow 'template:' override in md source files.\n if 'template' in meta:\n template = env.get_template(meta['template'][0])\n else:\n template = env.get_template('base.html')\n\n # Render the template.\n output_content = template.render(context)\n\n # Write the output file.\n output_path = os.path.join(config['site_dir'], page.output_path)\n if dump_json:\n json_context = {\n 'content': context['content'],\n 'title': context['current_page'].title,\n 'url': context['current_page'].abs_url,\n 'language': 'en',\n }\n utils.write_file(json.dumps(json_context, indent=4).encode('utf-8'), output_path.replace('.html', '.json'))\n else:\n utils.write_file(output_content.encode('utf-8'), output_path)\n\n\ndef build(config, live_server=False, dump_json=False, clean_site_dir=False):\n \"\"\"\n Perform a full site 
build.\n \"\"\"\n if clean_site_dir:\n print(\"Cleaning site directory\")\n utils.clean_directory(config['site_dir'])\n if not live_server:\n print(\"Building documentation to directory: %s\" % config['site_dir'])\n if not clean_site_dir and site_directory_contains_stale_files(config['site_dir']):\n print(\"Directory %s contains stale files. Use --clean to remove them.\" % config['site_dir'])\n\n if dump_json:\n build_pages(config, dump_json=True)\n else:\n # Reversed as we want to take the media files from the builtin theme\n # and then from the custom theme_dir so the custom versions take take\n # precedence.\n for theme_dir in reversed(config['theme_dir']):\n utils.copy_media_files(theme_dir, config['site_dir'])\n utils.copy_media_files(config['docs_dir'], config['site_dir'])\n build_pages(config)\n\n\ndef site_directory_contains_stale_files(site_directory):\n \"\"\"\n Check if the site directory contains stale files from a previous build.\n Right now the check returns true if the directory is not empty.\n A more sophisticated approach should be found to trigger only if there are\n files that won't be overwritten anyway.\n \"\"\"\n if os.path.exists(site_directory):\n if os.listdir(site_directory):\n return True\n return False\n", "path": "mkdocs/build.py"}]} | 3,113 | 105 |
gh_patches_debug_1475 | rasdani/github-patches | git_diff | graspologic-org__graspologic-654 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Possible issue with direct import
```
import graspologic
dir(graspologic)
```
returns
```
['__builtins__',
'__cached__',
'__doc__',
'__file__',
'__loader__',
'__name__',
'__package__',
'__path__',
'__spec__',
'__version',
'__version__',
'graspologic',
'layouts',
'models',
'partition',
'plot',
'preprocessing',
'subgraph',
'version']
```
and is missing lots of modules (align, cluster, datasets, embed, inference, match, nominate, pipeline, utils).
Is this intentional?
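
Until this is fixed, a likely workaround — assuming the cause is simply that these submodules are not imported eagerly in `graspologic/__init__.py` — is to import them explicitly:

```python
import graspologic
import graspologic.nominate  # explicit import binds the submodule to the package

print("nominate" in dir(graspologic))  # True once the submodule is imported
```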
</issue>
<code>
[start of graspologic/__init__.py]
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 import graspologic.align
5 import graspologic.cluster
6 import graspologic.datasets
7 import graspologic.embed
8 import graspologic.inference
9 import graspologic.layouts
10 import graspologic.models
11 import graspologic.partition
12 import graspologic.preprocessing
13 import graspologic.plot
14 import graspologic.simulations
15 import graspologic.subgraph
16 import graspologic.utils
17
18 from graspologic.version import __version
19
20 __version__ = __version()
21
[end of graspologic/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/graspologic/__init__.py b/graspologic/__init__.py
--- a/graspologic/__init__.py
+++ b/graspologic/__init__.py
@@ -8,6 +8,7 @@
import graspologic.inference
import graspologic.layouts
import graspologic.models
+import graspologic.nominate
import graspologic.partition
import graspologic.preprocessing
import graspologic.plot
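
A quick sanity check after applying this patch (sketch only):

```python
import graspologic

# The explicit import added above makes the submodule an attribute of the
# package, so it now shows up in dir(graspologic).
assert "nominate" in dir(graspologic)
```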
| {"golden_diff": "diff --git a/graspologic/__init__.py b/graspologic/__init__.py\n--- a/graspologic/__init__.py\n+++ b/graspologic/__init__.py\n@@ -8,6 +8,7 @@\n import graspologic.inference\n import graspologic.layouts\n import graspologic.models\n+import graspologic.nominate\n import graspologic.partition\n import graspologic.preprocessing\n import graspologic.plot\n", "issue": "[BUG] Possible issue with direct import\n```\r\nimport graspologic\r\ndir(graspologic)\r\n```\r\nreturns \r\n\r\n```\r\n['__builtins__',\r\n '__cached__',\r\n '__doc__',\r\n '__file__',\r\n '__loader__',\r\n '__name__',\r\n '__package__',\r\n '__path__',\r\n '__spec__',\r\n '__version',\r\n '__version__',\r\n 'graspologic',\r\n 'layouts',\r\n 'models',\r\n 'partition',\r\n 'plot',\r\n 'preprocessing',\r\n 'subgraph',\r\n 'version']\r\n```\r\n\r\nand is missing lots of modules (align, cluster, datasets, embed, inference, match, nominate, pipeline, utils).\r\nIs this intentional?\n[BUG] Possible issue with direct import\n```\r\nimport graspologic\r\ndir(graspologic)\r\n```\r\nreturns \r\n\r\n```\r\n['__builtins__',\r\n '__cached__',\r\n '__doc__',\r\n '__file__',\r\n '__loader__',\r\n '__name__',\r\n '__package__',\r\n '__path__',\r\n '__spec__',\r\n '__version',\r\n '__version__',\r\n 'graspologic',\r\n 'layouts',\r\n 'models',\r\n 'partition',\r\n 'plot',\r\n 'preprocessing',\r\n 'subgraph',\r\n 'version']\r\n```\r\n\r\nand is missing lots of modules (align, cluster, datasets, embed, inference, match, nominate, pipeline, utils).\r\nIs this intentional?\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport graspologic.align\nimport graspologic.cluster\nimport graspologic.datasets\nimport graspologic.embed\nimport graspologic.inference\nimport graspologic.layouts\nimport graspologic.models\nimport graspologic.partition\nimport graspologic.preprocessing\nimport graspologic.plot\nimport graspologic.simulations\nimport graspologic.subgraph\nimport graspologic.utils\n\nfrom graspologic.version import __version\n\n__version__ = __version()\n", "path": "graspologic/__init__.py"}]} | 931 | 88 |
gh_patches_debug_37333 | rasdani/github-patches | git_diff | sublimelsp__LSP-2024 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide an optional response handler in LspExecuteCommand
**Is your feature request related to a problem? Please describe.**
I would like to extend `LspExecuteCommand` in order to send a `workspace/executeCommand` request, but `LspExecuteCommand` simply [logs](https://github.com/sublimelsp/LSP/blob/acfd6406ba4680a0e537dc87a72aa5b410a154e7/plugin/execute_command.py#L47) the response. In my case I have to open a file URI that is in the response.
**Describe the solution you'd like**
`LspExecuteCommand` should provide an optional response handler. When the handler is missing, simply log the response, as is the case now; otherwise, delegate the response handling to it.
**Describe alternatives you've considered**
The current alternative is to copy `LspExecuteCommand` logic.
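
To make the desired shape concrete, here is a rough sketch of the kind of subclass this would enable. The `on_response` hook is purely hypothetical — it does not exist in the current code — and the `"uri"` key is an assumption about one particular server's reply:

```python
class OpenFileFromResponseCommand(LspExecuteCommand):
    """Sketch only: act on the workspace/executeCommand result."""

    def on_response(self, response):
        # Hypothetical hook, called with the raw server response instead of
        # the wrapper merely logging it.
        uri = response.get("uri") if isinstance(response, dict) else None
        if uri:
            # e.g. convert the URI to a path and open it in the active window
            pass
```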
</issue>
<code>
[start of boot.py]
1 import os
2 import sublime
3 import sublime_plugin
4
5 # Please keep this list sorted (Edit -> Sort Lines)
6 from .plugin.code_actions import LspCodeActionsCommand
7 from .plugin.code_lens import LspCodeLensCommand
8 from .plugin.completion import LspResolveDocsCommand
9 from .plugin.completion import LspSelectCompletionItemCommand
10 from .plugin.configuration import LspDisableLanguageServerGloballyCommand
11 from .plugin.configuration import LspDisableLanguageServerInProjectCommand
12 from .plugin.configuration import LspEnableLanguageServerGloballyCommand
13 from .plugin.configuration import LspEnableLanguageServerInProjectCommand
14 from .plugin.core.collections import DottedDict
15 from .plugin.core.css import load as load_css
16 from .plugin.core.logging import exception_log
17 from .plugin.core.open import opening_files
18 from .plugin.core.panels import destroy_output_panels
19 from .plugin.core.panels import LspClearPanelCommand
20 from .plugin.core.panels import LspUpdatePanelCommand
21 from .plugin.core.panels import LspUpdateServerPanelCommand
22 from .plugin.core.panels import WindowPanelListener
23 from .plugin.core.protocol import Location
24 from .plugin.core.registry import LspRecheckSessionsCommand
25 from .plugin.core.registry import LspRestartServerCommand
26 from .plugin.core.registry import windows
27 from .plugin.core.sessions import AbstractPlugin
28 from .plugin.core.sessions import register_plugin
29 from .plugin.core.settings import client_configs
30 from .plugin.core.settings import load_settings
31 from .plugin.core.settings import unload_settings
32 from .plugin.core.signature_help import LspSignatureHelpNavigateCommand
33 from .plugin.core.signature_help import LspSignatureHelpShowCommand
34 from .plugin.core.transports import kill_all_subprocesses
35 from .plugin.core.typing import Any, Optional, List, Type, Dict
36 from .plugin.core.views import get_uri_and_position_from_location
37 from .plugin.core.views import LspRunTextCommandHelperCommand
38 from .plugin.document_link import LspOpenLinkCommand
39 from .plugin.documents import DocumentSyncListener
40 from .plugin.documents import TextChangeListener
41 from .plugin.edit import LspApplyDocumentEditCommand
42 from .plugin.execute_command import LspExecuteCommand
43 from .plugin.formatting import LspFormatDocumentCommand
44 from .plugin.formatting import LspFormatDocumentRangeCommand
45 from .plugin.goto import LspSymbolDeclarationCommand
46 from .plugin.goto import LspSymbolDefinitionCommand
47 from .plugin.goto import LspSymbolImplementationCommand
48 from .plugin.goto import LspSymbolTypeDefinitionCommand
49 from .plugin.goto_diagnostic import LspGotoDiagnosticCommand
50 from .plugin.hover import LspHoverCommand
51 from .plugin.inlay_hint import LspInlayHintClickCommand
52 from .plugin.panels import LspShowDiagnosticsPanelCommand
53 from .plugin.panels import LspToggleServerPanelCommand
54 from .plugin.references import LspSymbolReferencesCommand
55 from .plugin.rename import LspSymbolRenameCommand
56 from .plugin.save_command import LspSaveAllCommand
57 from .plugin.save_command import LspSaveCommand
58 from .plugin.selection_range import LspExpandSelectionCommand
59 from .plugin.semantic_highlighting import LspShowScopeNameCommand
60 from .plugin.symbols import LspDocumentSymbolsCommand
61 from .plugin.symbols import LspSelectionAddCommand
62 from .plugin.symbols import LspSelectionClearCommand
63 from .plugin.symbols import LspSelectionSetCommand
64 from .plugin.symbols import LspWorkspaceSymbolsCommand
65 from .plugin.tooling import LspCopyToClipboardFromBase64Command
66 from .plugin.tooling import LspDumpBufferCapabilities
67 from .plugin.tooling import LspDumpWindowConfigs
68 from .plugin.tooling import LspParseVscodePackageJson
69 from .plugin.tooling import LspTroubleshootServerCommand
70
71
72 def _get_final_subclasses(derived: List[Type], results: List[Type]) -> None:
73 for d in derived:
74 d_subclasses = d.__subclasses__()
75 if len(d_subclasses) > 0:
76 _get_final_subclasses(d_subclasses, results)
77 else:
78 results.append(d)
79
80
81 def _register_all_plugins() -> None:
82 plugin_classes = [] # type: List[Type[AbstractPlugin]]
83 _get_final_subclasses(AbstractPlugin.__subclasses__(), plugin_classes)
84 for plugin_class in plugin_classes:
85 try:
86 if not plugin_class.name():
87 continue
88 except NotImplementedError:
89 continue
90 register_plugin(plugin_class, notify_listener=False)
91
92
93 def _unregister_all_plugins() -> None:
94 from LSP.plugin.core.sessions import _plugins
95 _plugins.clear()
96 client_configs.external.clear()
97 client_configs.all.clear()
98
99
100 def plugin_loaded() -> None:
101 load_settings()
102 load_css()
103 _register_all_plugins()
104 client_configs.update_configs()
105 for window in sublime.windows():
106 windows.lookup(window)
107
108
109 def plugin_unloaded() -> None:
110 _unregister_all_plugins()
111 for window in sublime.windows():
112 destroy_output_panels(window) # references and diagnostics panels
113 try:
114 windows.lookup(window).plugin_unloaded()
115 windows.discard(window)
116 except Exception as ex:
117 exception_log("failed to unload window", ex)
118 unload_settings()
119
120
121 class Listener(sublime_plugin.EventListener):
122
123 def on_exit(self) -> None:
124 kill_all_subprocesses()
125
126 def on_load_project_async(self, w: sublime.Window) -> None:
127 windows.lookup(w).on_load_project_async()
128
129 def on_post_save_project_async(self, w: sublime.Window) -> None:
130 windows.lookup(w).on_post_save_project_async()
131
132 def on_new_window_async(self, w: sublime.Window) -> None:
133 sublime.set_timeout(lambda: windows.lookup(w))
134
135 def on_pre_close_window(self, w: sublime.Window) -> None:
136 windows.discard(w)
137
138 # Note: EventListener.on_post_move_async does not fire when a tab is moved out of the current window in such a way
139 # that a new window is created: https://github.com/sublimehq/sublime_text/issues/4630
140 # Hence, as a workaround we use on_pre_move, which still works in that case.
141 def on_pre_move(self, view: sublime.View) -> None:
142 listeners = sublime_plugin.view_event_listeners.get(view.id())
143 if not isinstance(listeners, list):
144 return
145 for listener in listeners:
146 if isinstance(listener, DocumentSyncListener):
147 # we need a small delay here, so that the DocumentSyncListener will recognize a possible new window
148 sublime.set_timeout_async(listener.on_post_move_window_async, timeout_ms=1)
149 return
150
151 def on_load(self, view: sublime.View) -> None:
152 file_name = view.file_name()
153 if not file_name:
154 return
155 for fn in opening_files.keys():
156 if fn == file_name or os.path.samefile(fn, file_name):
157 # Remove it from the pending opening files, and resolve the promise.
158 opening_files.pop(fn)[1](view)
159 break
160
161 def on_pre_close(self, view: sublime.View) -> None:
162 file_name = view.file_name()
163 if not file_name:
164 return
165 for fn in opening_files.keys():
166 if fn == file_name or os.path.samefile(fn, file_name):
167 tup = opening_files.pop(fn, None)
168 if tup:
169 # The view got closed before it finished loading. This can happen.
170 tup[1](None)
171 break
172
173 def on_post_window_command(self, window: sublime.Window, command_name: str, args: Optional[Dict[str, Any]]) -> None:
174 if command_name in ("next_result", "prev_result"):
175 view = window.active_view()
176 if view:
177 view.run_command("lsp_hover", {"only_diagnostics": True})
178
179
180 class LspOpenLocationCommand(sublime_plugin.TextCommand):
181 """
182 A command to be used by third-party ST packages that need to open an URI with some abstract scheme.
183 """
184
185 def run(
186 self,
187 _: sublime.Edit,
188 location: Location,
189 session_name: Optional[str] = None,
190 flags: int = 0,
191 group: int = -1
192 ) -> None:
193 sublime.set_timeout_async(lambda: self._run_async(location, session_name, flags, group))
194
195 def _run_async(self, location: Location, session_name: Optional[str], flags: int = 0, group: int = -1) -> None:
196 window = self.view.window()
197 if not window:
198 return
199 windows.lookup(window).open_location_async(location, session_name, self.view, flags, group).then(
200 lambda success: self._handle_continuation(location, success))
201
202 def _handle_continuation(self, location: Location, success: bool) -> None:
203 if not success:
204 uri, _ = get_uri_and_position_from_location(location)
205 message = "Failed to open {}".format(uri)
206 sublime.status_message(message)
207
[end of boot.py]
[start of plugin/execute_command.py]
1 import sublime
2 from .core.protocol import Error
3 from .core.protocol import ExecuteCommandParams
4 from .core.registry import LspTextCommand
5 from .core.registry import windows
6 from .core.typing import List, Optional, Any
7 from .core.views import first_selection_region
8 from .core.views import uri_from_view, offset_to_point, region_to_range, text_document_identifier
9
10
11 class LspExecuteCommand(LspTextCommand):
12
13 def run(self,
14 edit: sublime.Edit,
15 command_name: Optional[str] = None,
16 command_args: Optional[List[Any]] = None,
17 session_name: Optional[str] = None,
18 event: Optional[dict] = None) -> None:
19 # Handle VSCode-specific command for triggering AC/sighelp
20 if command_name == "editor.action.triggerSuggest":
21 # Triggered from set_timeout as suggestions popup doesn't trigger otherwise.
22 return sublime.set_timeout(lambda: self.view.run_command("auto_complete"))
23 if command_name == "editor.action.triggerParameterHints":
24
25 def run_async() -> None:
26 listener = windows.listener_for_view(self.view)
27 if listener:
28 listener.do_signature_help_async(manual=False)
29
30 return sublime.set_timeout_async(run_async)
31 session = self.session_by_name(session_name if session_name else self.session_name)
32 if session and command_name:
33 params = {"command": command_name} # type: ExecuteCommandParams
34 if command_args:
35 params["arguments"] = self._expand_variables(command_args)
36
37 def handle_response(response: Any) -> None:
38 assert command_name
39 if isinstance(response, Error):
40 sublime.message_dialog("command {} failed. Reason: {}".format(command_name, str(response)))
41 return
42 msg = "command {} completed".format(command_name)
43 if response:
44 msg += "with response: {}".format(response)
45 window = self.view.window()
46 if window:
47 window.status_message(msg)
48
49 session.execute_command(params, progress=True).then(handle_response)
50
51 def _expand_variables(self, command_args: List[Any]) -> List[Any]:
52 view = self.view # type: sublime.View
53 region = first_selection_region(view)
54 for i, arg in enumerate(command_args):
55 if arg in ["$document_id", "${document_id}"]:
56 command_args[i] = text_document_identifier(view)
57 elif arg in ["$file_uri", "${file_uri}"]:
58 command_args[i] = uri_from_view(view)
59 elif region is not None:
60 if arg in ["$selection", "${selection}"]:
61 command_args[i] = view.substr(region)
62 elif arg in ["$offset", "${offset}"]:
63 command_args[i] = region.b
64 elif arg in ["$selection_begin", "${selection_begin}"]:
65 command_args[i] = region.begin()
66 elif arg in ["$selection_end", "${selection_end}"]:
67 command_args[i] = region.end()
68 elif arg in ["$position", "${position}"]:
69 command_args[i] = offset_to_point(view, region.b).to_lsp()
70 elif arg in ["$range", "${range}"]:
71 command_args[i] = region_to_range(view, region).to_lsp()
72 window = view.window()
73 window_variables = window.extract_variables() if window else {}
74 return sublime.expand_variables(command_args, window_variables)
75
[end of plugin/execute_command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/boot.py b/boot.py
--- a/boot.py
+++ b/boot.py
@@ -20,6 +20,7 @@
from .plugin.core.panels import LspUpdatePanelCommand
from .plugin.core.panels import LspUpdateServerPanelCommand
from .plugin.core.panels import WindowPanelListener
+from .plugin.core.protocol import Error
from .plugin.core.protocol import Location
from .plugin.core.registry import LspRecheckSessionsCommand
from .plugin.core.registry import LspRestartServerCommand
diff --git a/plugin/execute_command.py b/plugin/execute_command.py
--- a/plugin/execute_command.py
+++ b/plugin/execute_command.py
@@ -9,6 +9,9 @@
class LspExecuteCommand(LspTextCommand):
+ """
+ Helper command for triggering workspace/executeCommand requests.
+ """
def run(self,
edit: sublime.Edit,
@@ -37,17 +40,33 @@
def handle_response(response: Any) -> None:
assert command_name
if isinstance(response, Error):
- sublime.message_dialog("command {} failed. Reason: {}".format(command_name, str(response)))
+ self.handle_error_async(response, command_name)
return
- msg = "command {} completed".format(command_name)
- if response:
- msg += "with response: {}".format(response)
- window = self.view.window()
- if window:
- window.status_message(msg)
+ self.handle_success_async(response, command_name)
session.execute_command(params, progress=True).then(handle_response)
+ def handle_success_async(self, result: Any, command_name: str) -> None:
+ """
+ Override this method to handle successful response to workspace/executeCommand.
+
+ :param result: The result returned from the server.
+ :param command_name: The name of the command that was executed.
+ """
+ msg = "command {} completed".format(command_name)
+ window = self.view.window()
+ if window:
+ window.status_message(msg)
+
+ def handle_error_async(self, error: Error, command_name: str) -> None:
+ """
+ Override this method to handle failed response to workspace/executeCommand.
+
+ :param error: The Error object.
+ :param command_name: The name of the command that was executed.
+ """
+ sublime.message_dialog("command {} failed. Reason: {}".format(command_name, str(error)))
+
def _expand_variables(self, command_args: List[Any]) -> List[Any]:
view = self.view # type: sublime.View
region = first_selection_region(view)
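
With these overridable `handle_success_async` / `handle_error_async` methods, a downstream package can consume the result without copying the command logic. A minimal sketch — the subclass, and the assumption that the server answers with an LSP Location-like payload, are illustrative only:

```python
class MyExecuteAndOpenCommand(LspExecuteCommand):
    """Sketch: open a Location returned by workspace/executeCommand."""

    def handle_success_async(self, result, command_name):
        # Assumes this particular server answers with a Location-like dict;
        # LspOpenLocationCommand (see boot.py above) can then open it.
        if isinstance(result, dict) and "uri" in result:
            self.view.run_command("lsp_open_location", {"location": result})
        else:
            super().handle_success_async(result, command_name)
```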
| {"golden_diff": "diff --git a/boot.py b/boot.py\n--- a/boot.py\n+++ b/boot.py\n@@ -20,6 +20,7 @@\n from .plugin.core.panels import LspUpdatePanelCommand\n from .plugin.core.panels import LspUpdateServerPanelCommand\n from .plugin.core.panels import WindowPanelListener\n+from .plugin.core.protocol import Error\n from .plugin.core.protocol import Location\n from .plugin.core.registry import LspRecheckSessionsCommand\n from .plugin.core.registry import LspRestartServerCommand\ndiff --git a/plugin/execute_command.py b/plugin/execute_command.py\n--- a/plugin/execute_command.py\n+++ b/plugin/execute_command.py\n@@ -9,6 +9,9 @@\n \n \n class LspExecuteCommand(LspTextCommand):\n+ \"\"\"\n+ Helper command for triggering workspace/executeCommand requests.\n+ \"\"\"\n \n def run(self,\n edit: sublime.Edit,\n@@ -37,17 +40,33 @@\n def handle_response(response: Any) -> None:\n assert command_name\n if isinstance(response, Error):\n- sublime.message_dialog(\"command {} failed. Reason: {}\".format(command_name, str(response)))\n+ self.handle_error_async(response, command_name)\n return\n- msg = \"command {} completed\".format(command_name)\n- if response:\n- msg += \"with response: {}\".format(response)\n- window = self.view.window()\n- if window:\n- window.status_message(msg)\n+ self.handle_success_async(response, command_name)\n \n session.execute_command(params, progress=True).then(handle_response)\n \n+ def handle_success_async(self, result: Any, command_name: str) -> None:\n+ \"\"\"\n+ Override this method to handle successful response to workspace/executeCommand.\n+\n+ :param result: The result returned from the server.\n+ :param command_name: The name of the command that was executed.\n+ \"\"\"\n+ msg = \"command {} completed\".format(command_name)\n+ window = self.view.window()\n+ if window:\n+ window.status_message(msg)\n+\n+ def handle_error_async(self, error: Error, command_name: str) -> None:\n+ \"\"\"\n+ Override this method to handle failed response to workspace/executeCommand.\n+\n+ :param error: The Error object.\n+ :param command_name: The name of the command that was executed.\n+ \"\"\"\n+ sublime.message_dialog(\"command {} failed. Reason: {}\".format(command_name, str(error)))\n+\n def _expand_variables(self, command_args: List[Any]) -> List[Any]:\n view = self.view # type: sublime.View\n region = first_selection_region(view)\n", "issue": "Provide an optional response handler in LspExecuteCommand\n**Is your feature request related to a problem? Please describe.**\r\nI would like to extends `LspExecuteCommand` in order to send a `workspace/executeCommand` but `LspExecuteCommand` simply [logs](https://github.com/sublimelsp/LSP/blob/acfd6406ba4680a0e537dc87a72aa5b410a154e7/plugin/execute_command.py#L47) the response. In my case I have to open a file URI that is in the reponse.\r\n\r\n**Describe the solution you'd like**\r\n`LspExecuteCommand` should provide an optional response handler. When missing simply log the response as the case now. Otherwise delegate the response handling. 
\r\n\r\n**Describe alternatives you've considered**\r\nThe current alternative is to copy `LspExecuteCommand` logic.\r\n\n", "before_files": [{"content": "import os\nimport sublime\nimport sublime_plugin\n\n# Please keep this list sorted (Edit -> Sort Lines)\nfrom .plugin.code_actions import LspCodeActionsCommand\nfrom .plugin.code_lens import LspCodeLensCommand\nfrom .plugin.completion import LspResolveDocsCommand\nfrom .plugin.completion import LspSelectCompletionItemCommand\nfrom .plugin.configuration import LspDisableLanguageServerGloballyCommand\nfrom .plugin.configuration import LspDisableLanguageServerInProjectCommand\nfrom .plugin.configuration import LspEnableLanguageServerGloballyCommand\nfrom .plugin.configuration import LspEnableLanguageServerInProjectCommand\nfrom .plugin.core.collections import DottedDict\nfrom .plugin.core.css import load as load_css\nfrom .plugin.core.logging import exception_log\nfrom .plugin.core.open import opening_files\nfrom .plugin.core.panels import destroy_output_panels\nfrom .plugin.core.panels import LspClearPanelCommand\nfrom .plugin.core.panels import LspUpdatePanelCommand\nfrom .plugin.core.panels import LspUpdateServerPanelCommand\nfrom .plugin.core.panels import WindowPanelListener\nfrom .plugin.core.protocol import Location\nfrom .plugin.core.registry import LspRecheckSessionsCommand\nfrom .plugin.core.registry import LspRestartServerCommand\nfrom .plugin.core.registry import windows\nfrom .plugin.core.sessions import AbstractPlugin\nfrom .plugin.core.sessions import register_plugin\nfrom .plugin.core.settings import client_configs\nfrom .plugin.core.settings import load_settings\nfrom .plugin.core.settings import unload_settings\nfrom .plugin.core.signature_help import LspSignatureHelpNavigateCommand\nfrom .plugin.core.signature_help import LspSignatureHelpShowCommand\nfrom .plugin.core.transports import kill_all_subprocesses\nfrom .plugin.core.typing import Any, Optional, List, Type, Dict\nfrom .plugin.core.views import get_uri_and_position_from_location\nfrom .plugin.core.views import LspRunTextCommandHelperCommand\nfrom .plugin.document_link import LspOpenLinkCommand\nfrom .plugin.documents import DocumentSyncListener\nfrom .plugin.documents import TextChangeListener\nfrom .plugin.edit import LspApplyDocumentEditCommand\nfrom .plugin.execute_command import LspExecuteCommand\nfrom .plugin.formatting import LspFormatDocumentCommand\nfrom .plugin.formatting import LspFormatDocumentRangeCommand\nfrom .plugin.goto import LspSymbolDeclarationCommand\nfrom .plugin.goto import LspSymbolDefinitionCommand\nfrom .plugin.goto import LspSymbolImplementationCommand\nfrom .plugin.goto import LspSymbolTypeDefinitionCommand\nfrom .plugin.goto_diagnostic import LspGotoDiagnosticCommand\nfrom .plugin.hover import LspHoverCommand\nfrom .plugin.inlay_hint import LspInlayHintClickCommand\nfrom .plugin.panels import LspShowDiagnosticsPanelCommand\nfrom .plugin.panels import LspToggleServerPanelCommand\nfrom .plugin.references import LspSymbolReferencesCommand\nfrom .plugin.rename import LspSymbolRenameCommand\nfrom .plugin.save_command import LspSaveAllCommand\nfrom .plugin.save_command import LspSaveCommand\nfrom .plugin.selection_range import LspExpandSelectionCommand\nfrom .plugin.semantic_highlighting import LspShowScopeNameCommand\nfrom .plugin.symbols import LspDocumentSymbolsCommand\nfrom .plugin.symbols import LspSelectionAddCommand\nfrom .plugin.symbols import LspSelectionClearCommand\nfrom .plugin.symbols import LspSelectionSetCommand\nfrom 
.plugin.symbols import LspWorkspaceSymbolsCommand\nfrom .plugin.tooling import LspCopyToClipboardFromBase64Command\nfrom .plugin.tooling import LspDumpBufferCapabilities\nfrom .plugin.tooling import LspDumpWindowConfigs\nfrom .plugin.tooling import LspParseVscodePackageJson\nfrom .plugin.tooling import LspTroubleshootServerCommand\n\n\ndef _get_final_subclasses(derived: List[Type], results: List[Type]) -> None:\n for d in derived:\n d_subclasses = d.__subclasses__()\n if len(d_subclasses) > 0:\n _get_final_subclasses(d_subclasses, results)\n else:\n results.append(d)\n\n\ndef _register_all_plugins() -> None:\n plugin_classes = [] # type: List[Type[AbstractPlugin]]\n _get_final_subclasses(AbstractPlugin.__subclasses__(), plugin_classes)\n for plugin_class in plugin_classes:\n try:\n if not plugin_class.name():\n continue\n except NotImplementedError:\n continue\n register_plugin(plugin_class, notify_listener=False)\n\n\ndef _unregister_all_plugins() -> None:\n from LSP.plugin.core.sessions import _plugins\n _plugins.clear()\n client_configs.external.clear()\n client_configs.all.clear()\n\n\ndef plugin_loaded() -> None:\n load_settings()\n load_css()\n _register_all_plugins()\n client_configs.update_configs()\n for window in sublime.windows():\n windows.lookup(window)\n\n\ndef plugin_unloaded() -> None:\n _unregister_all_plugins()\n for window in sublime.windows():\n destroy_output_panels(window) # references and diagnostics panels\n try:\n windows.lookup(window).plugin_unloaded()\n windows.discard(window)\n except Exception as ex:\n exception_log(\"failed to unload window\", ex)\n unload_settings()\n\n\nclass Listener(sublime_plugin.EventListener):\n\n def on_exit(self) -> None:\n kill_all_subprocesses()\n\n def on_load_project_async(self, w: sublime.Window) -> None:\n windows.lookup(w).on_load_project_async()\n\n def on_post_save_project_async(self, w: sublime.Window) -> None:\n windows.lookup(w).on_post_save_project_async()\n\n def on_new_window_async(self, w: sublime.Window) -> None:\n sublime.set_timeout(lambda: windows.lookup(w))\n\n def on_pre_close_window(self, w: sublime.Window) -> None:\n windows.discard(w)\n\n # Note: EventListener.on_post_move_async does not fire when a tab is moved out of the current window in such a way\n # that a new window is created: https://github.com/sublimehq/sublime_text/issues/4630\n # Hence, as a workaround we use on_pre_move, which still works in that case.\n def on_pre_move(self, view: sublime.View) -> None:\n listeners = sublime_plugin.view_event_listeners.get(view.id())\n if not isinstance(listeners, list):\n return\n for listener in listeners:\n if isinstance(listener, DocumentSyncListener):\n # we need a small delay here, so that the DocumentSyncListener will recognize a possible new window\n sublime.set_timeout_async(listener.on_post_move_window_async, timeout_ms=1)\n return\n\n def on_load(self, view: sublime.View) -> None:\n file_name = view.file_name()\n if not file_name:\n return\n for fn in opening_files.keys():\n if fn == file_name or os.path.samefile(fn, file_name):\n # Remove it from the pending opening files, and resolve the promise.\n opening_files.pop(fn)[1](view)\n break\n\n def on_pre_close(self, view: sublime.View) -> None:\n file_name = view.file_name()\n if not file_name:\n return\n for fn in opening_files.keys():\n if fn == file_name or os.path.samefile(fn, file_name):\n tup = opening_files.pop(fn, None)\n if tup:\n # The view got closed before it finished loading. 
This can happen.\n tup[1](None)\n break\n\n def on_post_window_command(self, window: sublime.Window, command_name: str, args: Optional[Dict[str, Any]]) -> None:\n if command_name in (\"next_result\", \"prev_result\"):\n view = window.active_view()\n if view:\n view.run_command(\"lsp_hover\", {\"only_diagnostics\": True})\n\n\nclass LspOpenLocationCommand(sublime_plugin.TextCommand):\n \"\"\"\n A command to be used by third-party ST packages that need to open an URI with some abstract scheme.\n \"\"\"\n\n def run(\n self,\n _: sublime.Edit,\n location: Location,\n session_name: Optional[str] = None,\n flags: int = 0,\n group: int = -1\n ) -> None:\n sublime.set_timeout_async(lambda: self._run_async(location, session_name, flags, group))\n\n def _run_async(self, location: Location, session_name: Optional[str], flags: int = 0, group: int = -1) -> None:\n window = self.view.window()\n if not window:\n return\n windows.lookup(window).open_location_async(location, session_name, self.view, flags, group).then(\n lambda success: self._handle_continuation(location, success))\n\n def _handle_continuation(self, location: Location, success: bool) -> None:\n if not success:\n uri, _ = get_uri_and_position_from_location(location)\n message = \"Failed to open {}\".format(uri)\n sublime.status_message(message)\n", "path": "boot.py"}, {"content": "import sublime\nfrom .core.protocol import Error\nfrom .core.protocol import ExecuteCommandParams\nfrom .core.registry import LspTextCommand\nfrom .core.registry import windows\nfrom .core.typing import List, Optional, Any\nfrom .core.views import first_selection_region\nfrom .core.views import uri_from_view, offset_to_point, region_to_range, text_document_identifier\n\n\nclass LspExecuteCommand(LspTextCommand):\n\n def run(self,\n edit: sublime.Edit,\n command_name: Optional[str] = None,\n command_args: Optional[List[Any]] = None,\n session_name: Optional[str] = None,\n event: Optional[dict] = None) -> None:\n # Handle VSCode-specific command for triggering AC/sighelp\n if command_name == \"editor.action.triggerSuggest\":\n # Triggered from set_timeout as suggestions popup doesn't trigger otherwise.\n return sublime.set_timeout(lambda: self.view.run_command(\"auto_complete\"))\n if command_name == \"editor.action.triggerParameterHints\":\n\n def run_async() -> None:\n listener = windows.listener_for_view(self.view)\n if listener:\n listener.do_signature_help_async(manual=False)\n\n return sublime.set_timeout_async(run_async)\n session = self.session_by_name(session_name if session_name else self.session_name)\n if session and command_name:\n params = {\"command\": command_name} # type: ExecuteCommandParams\n if command_args:\n params[\"arguments\"] = self._expand_variables(command_args)\n\n def handle_response(response: Any) -> None:\n assert command_name\n if isinstance(response, Error):\n sublime.message_dialog(\"command {} failed. 
Reason: {}\".format(command_name, str(response)))\n return\n msg = \"command {} completed\".format(command_name)\n if response:\n msg += \"with response: {}\".format(response)\n window = self.view.window()\n if window:\n window.status_message(msg)\n\n session.execute_command(params, progress=True).then(handle_response)\n\n def _expand_variables(self, command_args: List[Any]) -> List[Any]:\n view = self.view # type: sublime.View\n region = first_selection_region(view)\n for i, arg in enumerate(command_args):\n if arg in [\"$document_id\", \"${document_id}\"]:\n command_args[i] = text_document_identifier(view)\n elif arg in [\"$file_uri\", \"${file_uri}\"]:\n command_args[i] = uri_from_view(view)\n elif region is not None:\n if arg in [\"$selection\", \"${selection}\"]:\n command_args[i] = view.substr(region)\n elif arg in [\"$offset\", \"${offset}\"]:\n command_args[i] = region.b\n elif arg in [\"$selection_begin\", \"${selection_begin}\"]:\n command_args[i] = region.begin()\n elif arg in [\"$selection_end\", \"${selection_end}\"]:\n command_args[i] = region.end()\n elif arg in [\"$position\", \"${position}\"]:\n command_args[i] = offset_to_point(view, region.b).to_lsp()\n elif arg in [\"$range\", \"${range}\"]:\n command_args[i] = region_to_range(view, region).to_lsp()\n window = view.window()\n window_variables = window.extract_variables() if window else {}\n return sublime.expand_variables(command_args, window_variables)\n", "path": "plugin/execute_command.py"}]} | 4,004 | 573 |
gh_patches_debug_30097 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-547 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation for compiler_gym.wrappers is incomplete
## 📚 Documentation
The module docstring for `compiler_gym/wrappers/__init__.py` simply reads:
> The `compiler_gym.wrappers` module provides.
👎
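
For context, the kind of example such a docstring could carry (a sketch only — it mirrors the wrappers this module exports, and the exact constructor arguments may differ):

```python
import compiler_gym
from compiler_gym.wrappers import CycleOverBenchmarks, TimeLimit

# Wrap an environment in a modular way, as the docstring should illustrate.
env = compiler_gym.make("llvm-v0")
env = TimeLimit(env, 10)
env = CycleOverBenchmarks(env, benchmarks=["benchmark://cbench-v1/crc32"])
```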
</issue>
<code>
[start of compiler_gym/wrappers/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 """The :code:`compiler_gym.wrappers` module provides.
6 """
7 from compiler_gym.wrappers.commandline import (
8 CommandlineWithTerminalAction,
9 ConstrainedCommandline,
10 )
11 from compiler_gym.wrappers.core import (
12 ActionWrapper,
13 CompilerEnvWrapper,
14 ObservationWrapper,
15 RewardWrapper,
16 )
17 from compiler_gym.wrappers.datasets import (
18 CycleOverBenchmarks,
19 CycleOverBenchmarksIterator,
20 IterateOverBenchmarks,
21 RandomOrderBenchmarks,
22 )
23 from compiler_gym.wrappers.llvm import RuntimePointEstimateReward
24 from compiler_gym.wrappers.time_limit import TimeLimit
25
26 __all__ = [
27 "ActionWrapper",
28 "CommandlineWithTerminalAction",
29 "CompilerEnvWrapper",
30 "ConstrainedCommandline",
31 "CycleOverBenchmarks",
32 "CycleOverBenchmarksIterator",
33 "IterateOverBenchmarks",
34 "ObservationWrapper",
35 "RandomOrderBenchmarks",
36 "RewardWrapper",
37 "RuntimePointEstimateReward",
38 "TimeLimit",
39 ]
40
[end of compiler_gym/wrappers/__init__.py]
[start of compiler_gym/wrappers/datasets.py]
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 from itertools import cycle
6 from typing import Callable, Iterable, Optional, Union
7
8 import numpy as np
9
10 from compiler_gym.datasets import Benchmark
11 from compiler_gym.envs import CompilerEnv
12 from compiler_gym.util.parallelization import thread_safe_tee
13 from compiler_gym.wrappers.core import CompilerEnvWrapper
14
15 BenchmarkLike = Union[str, Benchmark]
16
17
18 class IterateOverBenchmarks(CompilerEnvWrapper):
19 """Iterate over a (possibly infinite) sequence of benchmarks on each call to
20 reset(). Will raise :code:`StopIteration` on :meth:`reset()
21 <compiler_gym.envs.CompilerEnv.reset>` once the iterator is exhausted. Use
22 :class:`CycleOverBenchmarks` or :class:`RandomOrderBenchmarks` for wrappers
23 which will loop over the benchmarks.
24 """
25
26 def __init__(
27 self,
28 env: CompilerEnv,
29 benchmarks: Iterable[BenchmarkLike],
30 fork_shares_iterator: bool = False,
31 ):
32 """Constructor.
33
34 :param env: The environment to wrap.
35
36 :param benchmarks: An iterable sequence of benchmarks.
37
38 :param fork_shares_iterator: If :code:`True`, the :code:`benchmarks`
39             iterator will be shared by a forked environment created by
40 :meth:`env.fork() <compiler_gym.envs.CompilerEnv.fork>`. This means
41 that calling :meth:`env.reset()
42 <compiler_gym.envs.CompilerEnv.reset>` with one environment will
43 advance the iterator in the other. If :code:`False`, forked
44 environments will use :code:`itertools.tee()` to create a copy of
45 the iterator so that each iterator may advance independently.
46 However, this requires shared buffers between the environments which
47 can lead to memory overheads if :meth:`env.reset()
48 <compiler_gym.envs.CompilerEnv.reset>` is called many times more in
49 one environment than the other.
50 """
51 super().__init__(env)
52 self.benchmarks = iter(benchmarks)
53 self.fork_shares_iterator = fork_shares_iterator
54
55 def reset(self, benchmark: Optional[BenchmarkLike] = None, **kwargs):
56 if benchmark is not None:
57 raise TypeError("Benchmark passed to IterateOverBenchmarks.reset()")
58 benchmark: BenchmarkLike = next(self.benchmarks)
59 return self.env.reset(benchmark=benchmark)
60
61 def fork(self) -> "IterateOverBenchmarks":
62 if self.fork_shares_iterator:
63 other_benchmarks_iterator = self.benchmarks
64 else:
65 self.benchmarks, other_benchmarks_iterator = thread_safe_tee(
66 self.benchmarks
67 )
68 return IterateOverBenchmarks(
69 env=self.env.fork(),
70 benchmarks=other_benchmarks_iterator,
71 fork_shares_iterator=self.fork_shares_iterator,
72 )
73
74
75 class CycleOverBenchmarks(IterateOverBenchmarks):
76 """Cycle through a list of benchmarks on each call to :meth:`reset()
77 <compiler_gym.envs.CompilerEnv.reset>`. Same as
78 :class:`IterateOverBenchmarks` except the list of benchmarks repeats once
79 exhausted.
80 """
81
82 def __init__(
83 self,
84 env: CompilerEnv,
85 benchmarks: Iterable[BenchmarkLike],
86 fork_shares_iterator: bool = False,
87 ):
88 """Constructor.
89
90 :param env: The environment to wrap.
91
92 :param benchmarks: An iterable sequence of benchmarks.
93
94 :param fork_shares_iterator: If :code:`True`, the :code:`benchmarks`
95 iterator will be shared by a forked environment created by
96 :meth:`env.fork() <compiler_gym.envs.CompilerEnv.fork>`. This means
97 that calling :meth:`env.reset()
98 <compiler_gym.envs.CompilerEnv.reset>` with one environment will
99 advance the iterator in the other. If :code:`False`, forked
100 environments will use :code:`itertools.tee()` to create a copy of
101 the iterator so that each iterator may advance independently.
102 However, this requires shared buffers between the environments which
103 can lead to memory overheads if :meth:`env.reset()
104 <compiler_gym.envs.CompilerEnv.reset>` is called many times more in
105 one environment than the other.
106 """
107 super().__init__(
108 env, benchmarks=cycle(benchmarks), fork_shares_iterator=fork_shares_iterator
109 )
110
111
112 class CycleOverBenchmarksIterator(CompilerEnvWrapper):
113 """Same as :class:`CycleOverBenchmarks
114 <compiler_gym.wrappers.CycleOverBenchmarks>` except that the user generates
115 the iterator.
116 """
117
118 def __init__(
119 self,
120 env: CompilerEnv,
121 make_benchmark_iterator: Callable[[], Iterable[BenchmarkLike]],
122 ):
123 """Constructor.
124
125 :param env: The environment to wrap.
126
127 :param make_benchmark_iterator: A callback that returns an iterator over
128 a sequence of benchmarks. Once the iterator is exhausted, this
129 callback is called to produce a new iterator.
130 """
131 super().__init__(env)
132 self.make_benchmark_iterator = make_benchmark_iterator
133 self.benchmarks = iter(self.make_benchmark_iterator())
134
135 def reset(self, benchmark: Optional[BenchmarkLike] = None, **kwargs):
136 if benchmark is not None:
137             raise TypeError("Benchmark passed to IterateOverBenchmarks.reset()")
138 try:
139 benchmark: BenchmarkLike = next(self.benchmarks)
140 except StopIteration:
141 self.benchmarks = iter(self.make_benchmark_iterator())
142 benchmark: BenchmarkLike = next(self.benchmarks)
143
144 return self.env.reset(benchmark=benchmark)
145
146 def fork(self) -> "CycleOverBenchmarksIterator":
147 return CycleOverBenchmarksIterator(
148 env=self.env.fork(),
149 make_benchmark_iterator=self.make_benchmark_iterator,
150 )
151
152
153 class RandomOrderBenchmarks(IterateOverBenchmarks):
154 """Select randomly from a list of benchmarks on each call to :meth:`reset()
155 <compiler_gym.envs.CompilerEnv.reset>`.
156
157 .. note::
158
159 Uniform random selection is provided by evaluating the input benchmarks
160 iterator into a list and sampling randomly from the list. This will not
161 work for random iteration over infinite or very large iterables of
162 benchmarks.
163 """
164
165 def __init__(
166 self,
167 env: CompilerEnv,
168 benchmarks: Iterable[BenchmarkLike],
169 rng: Optional[np.random.Generator] = None,
170 ):
171 """Constructor.
172
173 :param env: The environment to wrap.
174
175 :param benchmarks: An iterable sequence of benchmarks. The entirety of
176 this input iterator is evaluated during construction.
177
178 :param rng: A random number generator to use for random benchmark
179 selection.
180 """
181 self._all_benchmarks = list(benchmarks)
182 rng = rng or np.random.default_rng()
183 super().__init__(
184 env,
185 benchmarks=(rng.choice(self._all_benchmarks) for _ in iter(int, 1)),
186 fork_shares_iterator=True,
187 )
188
189 def fork(self) -> "IterateOverBenchmarks":
190 """Fork the random order benchmark wrapper.
191
192 Note that RNG state is not copied to forked environments.
193 """
194 return IterateOverBenchmarks(
195 env=self.env.fork(), benchmarks=self._all_benchmarks
196 )
197
[end of compiler_gym/wrappers/datasets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/compiler_gym/wrappers/__init__.py b/compiler_gym/wrappers/__init__.py
--- a/compiler_gym/wrappers/__init__.py
+++ b/compiler_gym/wrappers/__init__.py
@@ -2,7 +2,30 @@
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
-"""The :code:`compiler_gym.wrappers` module provides.
+"""The :code:`compiler_gym.wrappers` module provides a set of classes that can
+be used to transform an environment in a modular way.
+
+For example:
+
+ >>> env = compiler_gym.make("llvm-v0")
+ >>> env = TimeLimit(env, n=10)
+ >>> env = CycleOverBenchmarks(
+ ... env,
+ ... benchmarks=[
+ ... "benchmark://cbench-v1/crc32",
+ ... "benchmark://cbench-v1/qsort",
+ ... ],
+ ... )
+
+.. warning::
+
+ CompilerGym environments are incompatible with the `OpenAI Gym wrappers
+ <https://github.com/openai/gym/tree/master/gym/wrappers>`_. This is because
+ CompilerGym extends the environment API with additional arguments and
+ methods. You must use the wrappers from this module when wrapping
+ CompilerGym environments. We provide a set of base wrappers that are
+ equivalent to those in OpenAI Gym that you can use to write your own
+ wrappers.
"""
from compiler_gym.wrappers.commandline import (
CommandlineWithTerminalAction,
diff --git a/compiler_gym/wrappers/datasets.py b/compiler_gym/wrappers/datasets.py
--- a/compiler_gym/wrappers/datasets.py
+++ b/compiler_gym/wrappers/datasets.py
@@ -157,9 +157,11 @@
.. note::
Uniform random selection is provided by evaluating the input benchmarks
- iterator into a list and sampling randomly from the list. This will not
- work for random iteration over infinite or very large iterables of
- benchmarks.
+ iterator into a list and sampling randomly from the list. For very large
+ and infinite iterables of benchmarks you must use the
+ :class:`IterateOverBenchmarks
+ <compiler_gym.wrappers.IterateOverBenchmarks>` wrapper with your own
+ random sampling iterator.
"""
def __init__(
| {"golden_diff": "diff --git a/compiler_gym/wrappers/__init__.py b/compiler_gym/wrappers/__init__.py\n--- a/compiler_gym/wrappers/__init__.py\n+++ b/compiler_gym/wrappers/__init__.py\n@@ -2,7 +2,30 @@\n #\n # This source code is licensed under the MIT license found in the\n # LICENSE file in the root directory of this source tree.\n-\"\"\"The :code:`compiler_gym.wrappers` module provides.\n+\"\"\"The :code:`compiler_gym.wrappers` module provides a set of classes that can\n+be used to transform an environment in a modular way.\n+\n+For example:\n+\n+ >>> env = compiler_gym.make(\"llvm-v0\")\n+ >>> env = TimeLimit(env, n=10)\n+ >>> env = CycleOverBenchmarks(\n+ ... env,\n+ ... benchmarks=[\n+ ... \"benchmark://cbench-v1/crc32\",\n+ ... \"benchmark://cbench-v1/qsort\",\n+ ... ],\n+ ... )\n+\n+.. warning::\n+\n+ CompilerGym environments are incompatible with the `OpenAI Gym wrappers\n+ <https://github.com/openai/gym/tree/master/gym/wrappers>`_. This is because\n+ CompilerGym extends the environment API with additional arguments and\n+ methods. You must use the wrappers from this module when wrapping\n+ CompilerGym environments. We provide a set of base wrappers that are\n+ equivalent to those in OpenAI Gym that you can use to write your own\n+ wrappers.\n \"\"\"\n from compiler_gym.wrappers.commandline import (\n CommandlineWithTerminalAction,\ndiff --git a/compiler_gym/wrappers/datasets.py b/compiler_gym/wrappers/datasets.py\n--- a/compiler_gym/wrappers/datasets.py\n+++ b/compiler_gym/wrappers/datasets.py\n@@ -157,9 +157,11 @@\n .. note::\n \n Uniform random selection is provided by evaluating the input benchmarks\n- iterator into a list and sampling randomly from the list. This will not\n- work for random iteration over infinite or very large iterables of\n- benchmarks.\n+ iterator into a list and sampling randomly from the list. For very large\n+ and infinite iterables of benchmarks you must use the\n+ :class:`IterateOverBenchmarks\n+ <compiler_gym.wrappers.IterateOverBenchmarks>` wrapper with your own\n+ random sampling iterator.\n \"\"\"\n \n def __init__(\n", "issue": "Documentation for compiler_gym.wrappers is incomplete\n## \ud83d\udcda Documentation\r\n\r\nThe module docstring for `compiler_gym/wrappers/__init__.py` simply reads:\r\n\r\n> The `compiler_gym.wrappers` module provides.\r\n\r\n\ud83d\udc4e \n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. 
and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"The :code:`compiler_gym.wrappers` module provides.\n\"\"\"\nfrom compiler_gym.wrappers.commandline import (\n CommandlineWithTerminalAction,\n ConstrainedCommandline,\n)\nfrom compiler_gym.wrappers.core import (\n ActionWrapper,\n CompilerEnvWrapper,\n ObservationWrapper,\n RewardWrapper,\n)\nfrom compiler_gym.wrappers.datasets import (\n CycleOverBenchmarks,\n CycleOverBenchmarksIterator,\n IterateOverBenchmarks,\n RandomOrderBenchmarks,\n)\nfrom compiler_gym.wrappers.llvm import RuntimePointEstimateReward\nfrom compiler_gym.wrappers.time_limit import TimeLimit\n\n__all__ = [\n \"ActionWrapper\",\n \"CommandlineWithTerminalAction\",\n \"CompilerEnvWrapper\",\n \"ConstrainedCommandline\",\n \"CycleOverBenchmarks\",\n \"CycleOverBenchmarksIterator\",\n \"IterateOverBenchmarks\",\n \"ObservationWrapper\",\n \"RandomOrderBenchmarks\",\n \"RewardWrapper\",\n \"RuntimePointEstimateReward\",\n \"TimeLimit\",\n]\n", "path": "compiler_gym/wrappers/__init__.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom itertools import cycle\nfrom typing import Callable, Iterable, Optional, Union\n\nimport numpy as np\n\nfrom compiler_gym.datasets import Benchmark\nfrom compiler_gym.envs import CompilerEnv\nfrom compiler_gym.util.parallelization import thread_safe_tee\nfrom compiler_gym.wrappers.core import CompilerEnvWrapper\n\nBenchmarkLike = Union[str, Benchmark]\n\n\nclass IterateOverBenchmarks(CompilerEnvWrapper):\n \"\"\"Iterate over a (possibly infinite) sequence of benchmarks on each call to\n reset(). Will raise :code:`StopIteration` on :meth:`reset()\n <compiler_gym.envs.CompilerEnv.reset>` once the iterator is exhausted. Use\n :class:`CycleOverBenchmarks` or :class:`RandomOrderBenchmarks` for wrappers\n which will loop over the benchmarks.\n \"\"\"\n\n def __init__(\n self,\n env: CompilerEnv,\n benchmarks: Iterable[BenchmarkLike],\n fork_shares_iterator: bool = False,\n ):\n \"\"\"Constructor.\n\n :param env: The environment to wrap.\n\n :param benchmarks: An iterable sequence of benchmarks.\n\n :param fork_shares_iterator: If :code:`True`, the :code:`benchmarks`\n iterator will bet shared by a forked environment created by\n :meth:`env.fork() <compiler_gym.envs.CompilerEnv.fork>`. This means\n that calling :meth:`env.reset()\n <compiler_gym.envs.CompilerEnv.reset>` with one environment will\n advance the iterator in the other. 
If :code:`False`, forked\n environments will use :code:`itertools.tee()` to create a copy of\n the iterator so that each iterator may advance independently.\n However, this requires shared buffers between the environments which\n can lead to memory overheads if :meth:`env.reset()\n <compiler_gym.envs.CompilerEnv.reset>` is called many times more in\n one environment than the other.\n \"\"\"\n super().__init__(env)\n self.benchmarks = iter(benchmarks)\n self.fork_shares_iterator = fork_shares_iterator\n\n def reset(self, benchmark: Optional[BenchmarkLike] = None, **kwargs):\n if benchmark is not None:\n raise TypeError(\"Benchmark passed to IterateOverBenchmarks.reset()\")\n benchmark: BenchmarkLike = next(self.benchmarks)\n return self.env.reset(benchmark=benchmark)\n\n def fork(self) -> \"IterateOverBenchmarks\":\n if self.fork_shares_iterator:\n other_benchmarks_iterator = self.benchmarks\n else:\n self.benchmarks, other_benchmarks_iterator = thread_safe_tee(\n self.benchmarks\n )\n return IterateOverBenchmarks(\n env=self.env.fork(),\n benchmarks=other_benchmarks_iterator,\n fork_shares_iterator=self.fork_shares_iterator,\n )\n\n\nclass CycleOverBenchmarks(IterateOverBenchmarks):\n \"\"\"Cycle through a list of benchmarks on each call to :meth:`reset()\n <compiler_gym.envs.CompilerEnv.reset>`. Same as\n :class:`IterateOverBenchmarks` except the list of benchmarks repeats once\n exhausted.\n \"\"\"\n\n def __init__(\n self,\n env: CompilerEnv,\n benchmarks: Iterable[BenchmarkLike],\n fork_shares_iterator: bool = False,\n ):\n \"\"\"Constructor.\n\n :param env: The environment to wrap.\n\n :param benchmarks: An iterable sequence of benchmarks.\n\n :param fork_shares_iterator: If :code:`True`, the :code:`benchmarks`\n iterator will be shared by a forked environment created by\n :meth:`env.fork() <compiler_gym.envs.CompilerEnv.fork>`. This means\n that calling :meth:`env.reset()\n <compiler_gym.envs.CompilerEnv.reset>` with one environment will\n advance the iterator in the other. If :code:`False`, forked\n environments will use :code:`itertools.tee()` to create a copy of\n the iterator so that each iterator may advance independently.\n However, this requires shared buffers between the environments which\n can lead to memory overheads if :meth:`env.reset()\n <compiler_gym.envs.CompilerEnv.reset>` is called many times more in\n one environment than the other.\n \"\"\"\n super().__init__(\n env, benchmarks=cycle(benchmarks), fork_shares_iterator=fork_shares_iterator\n )\n\n\nclass CycleOverBenchmarksIterator(CompilerEnvWrapper):\n \"\"\"Same as :class:`CycleOverBenchmarks\n <compiler_gym.wrappers.CycleOverBenchmarks>` except that the user generates\n the iterator.\n \"\"\"\n\n def __init__(\n self,\n env: CompilerEnv,\n make_benchmark_iterator: Callable[[], Iterable[BenchmarkLike]],\n ):\n \"\"\"Constructor.\n\n :param env: The environment to wrap.\n\n :param make_benchmark_iterator: A callback that returns an iterator over\n a sequence of benchmarks. 
Once the iterator is exhausted, this\n callback is called to produce a new iterator.\n \"\"\"\n super().__init__(env)\n self.make_benchmark_iterator = make_benchmark_iterator\n self.benchmarks = iter(self.make_benchmark_iterator())\n\n def reset(self, benchmark: Optional[BenchmarkLike] = None, **kwargs):\n if benchmark is not None:\n raise TypeError(\"Benchmark passed toIterateOverBenchmarks.reset()\")\n try:\n benchmark: BenchmarkLike = next(self.benchmarks)\n except StopIteration:\n self.benchmarks = iter(self.make_benchmark_iterator())\n benchmark: BenchmarkLike = next(self.benchmarks)\n\n return self.env.reset(benchmark=benchmark)\n\n def fork(self) -> \"CycleOverBenchmarksIterator\":\n return CycleOverBenchmarksIterator(\n env=self.env.fork(),\n make_benchmark_iterator=self.make_benchmark_iterator,\n )\n\n\nclass RandomOrderBenchmarks(IterateOverBenchmarks):\n \"\"\"Select randomly from a list of benchmarks on each call to :meth:`reset()\n <compiler_gym.envs.CompilerEnv.reset>`.\n\n .. note::\n\n Uniform random selection is provided by evaluating the input benchmarks\n iterator into a list and sampling randomly from the list. This will not\n work for random iteration over infinite or very large iterables of\n benchmarks.\n \"\"\"\n\n def __init__(\n self,\n env: CompilerEnv,\n benchmarks: Iterable[BenchmarkLike],\n rng: Optional[np.random.Generator] = None,\n ):\n \"\"\"Constructor.\n\n :param env: The environment to wrap.\n\n :param benchmarks: An iterable sequence of benchmarks. The entirety of\n this input iterator is evaluated during construction.\n\n :param rng: A random number generator to use for random benchmark\n selection.\n \"\"\"\n self._all_benchmarks = list(benchmarks)\n rng = rng or np.random.default_rng()\n super().__init__(\n env,\n benchmarks=(rng.choice(self._all_benchmarks) for _ in iter(int, 1)),\n fork_shares_iterator=True,\n )\n\n def fork(self) -> \"IterateOverBenchmarks\":\n \"\"\"Fork the random order benchmark wrapper.\n\n Note that RNG state is not copied to forked environments.\n \"\"\"\n return IterateOverBenchmarks(\n env=self.env.fork(), benchmarks=self._all_benchmarks\n )\n", "path": "compiler_gym/wrappers/datasets.py"}]} | 3,073 | 558 |
gh_patches_debug_29334 | rasdani/github-patches | git_diff | svthalia__concrexit-1676 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sales order payments are not always saved
### Describe the bug
When paying for a Thalia pay order (via the sales payment view, so the QR code flow), the payment is not always stored back to the order. The payment is created properly, but after payment, the foreign key to the payment in the order is not saved.
### How to reproduce
I am not sure exactly when this happens; at least it happens for the current shift 2 on the current staging environment. It might be because the shift has already ended.
### Expected behaviour
Store the payment properly
### Additional context
Might be related to https://github.com/svthalia/concrexit/blob/6d0866022afb7fdf3edab34709d4d99e28039d59/website/sales/models/order.py#L123
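One way that guard could misfire, purely as a guess: the order total is recomputed there as a `float`, while `payment.amount` is presumably a `Decimal`, and a float/Decimal comparison can report a mismatch even when the two amounts look identical, after which `save()` just returns without ever writing the payment foreign key. A minimal sketch with made-up amounts:
```python
from decimal import Decimal

# Item totals are stored as Decimals, but save() recomputes their sum as a float
# before comparing it with the Decimal payment amount.
item_totals = [Decimal("2.50"), Decimal("3.90")]
recomputed = float(sum(item_totals))      # 6.4 as a binary float
payment_amount = Decimal("6.40")

# The float cannot represent 6.40 exactly, so the guard sees a "mismatch"
# and save() bails out silently, dropping the payment reference.
print(recomputed != payment_amount)       # True
```
Raising an error there instead of returning silently would at least make the failure visible.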
</issue>
<code>
[start of website/sales/models/order.py]
1 from decimal import Decimal
2
3 from django.conf import settings
4 from django.core.exceptions import ValidationError
5 from django.core.validators import MinValueValidator
6 from django.db import models
7 from django.db.models import (
8 Sum,
9 Value,
10 F,
11 DecimalField,
12 Q,
13 IntegerField,
14 BooleanField,
15 Count,
16 )
17 from django.db.models.functions import Coalesce
18 from django.urls import reverse
19 from django.utils import timezone
20 from django.utils.translation import gettext_lazy as _
21 from queryable_properties.managers import QueryablePropertiesManager
22 from queryable_properties.properties import AnnotationProperty
23
24 from members.models import uuid, Member
25 from payments.models import Payable, Payment
26 from sales.models.product import ProductListItem
27 from sales.models.shift import Shift
28
29
30 def default_order_shift():
31 return Shift.objects.filter(active=True).first()
32
33
34 class Order(models.Model, Payable):
35
36 objects = QueryablePropertiesManager()
37
38 class Meta:
39 verbose_name = _("order")
40 verbose_name_plural = _("orders")
41 permissions = [
42 ("custom_prices", _("Can use custom prices and discounts in orders")),
43 ]
44 ordering = ["created_at"]
45
46 id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
47
48 created_at = models.DateTimeField(
49 verbose_name=_("created at"), default=timezone.now
50 )
51
52 shift = models.ForeignKey(
53 Shift,
54 verbose_name=_("shift"),
55 related_name="orders",
56 default=default_order_shift,
57 null=False,
58 blank=False,
59 on_delete=models.PROTECT,
60 )
61
62 items = models.ManyToManyField(
63 ProductListItem, through="OrderItem", verbose_name=_("items"),
64 )
65
66 payment = models.OneToOneField(
67 Payment,
68 verbose_name=_("payment"),
69 related_name="sales_order",
70 on_delete=models.SET_NULL,
71 blank=True,
72 null=True,
73 )
74
75 discount = models.DecimalField(
76 verbose_name=_("discount"),
77 max_digits=6,
78 decimal_places=2,
79 null=True,
80 blank=True,
81 validators=[MinValueValidator(Decimal("0.00"))],
82 )
83
84 payer = models.ForeignKey(
85 Member,
86 models.SET_NULL,
87 verbose_name=_("payer"),
88 related_name="sales_order",
89 blank=True,
90 null=True,
91 )
92
93 age_restricted = AnnotationProperty(
94 Count(
95 "order_items__pk",
96 filter=Q(order_items__product__product__age_restricted=True),
97 output_field=BooleanField(),
98 )
99 )
100
101 subtotal = AnnotationProperty(
102 Coalesce(Sum("order_items__total"), Value(0.00), output_field=DecimalField())
103 )
104
105 total_amount = AnnotationProperty(
106 Coalesce(Sum("order_items__total"), Value(0.00), output_field=DecimalField())
107 - Coalesce(F("discount"), Value(0.00), output_field=DecimalField())
108 )
109
110 num_items = AnnotationProperty(
111 Coalesce(Sum("order_items__amount"), Value(0), output_field=IntegerField())
112 )
113
114 def save(
115 self, force_insert=False, force_update=False, using=None, update_fields=None
116 ):
117 if self.shift.locked:
118 return
119 if self.shift.start > timezone.now():
120 return
121 if (
122 self.payment
123 and float(sum(self.order_items.values_list("total", flat=True)))
124 - (self.discount or 0)
125 != self.payment.amount
126 ):
127 return
128 if self.payment and not self.payer:
129 self.payer = self.payment.paid_by
130
131 return super(Order, self).save(force_insert, force_update, using, update_fields)
132
133 def clean(self):
134 super().clean()
135 errors = {}
136
137 if self.shift.start > timezone.now():
138 errors.update({"shift": _("The shift hasn't started yet.")})
139
140 if self.shift.locked:
141 errors.update({"shift": _("The shift this order belongs to is locked.")})
142
143 if self.discount and self.discount > self.total_amount:
144 errors.update(
145 {"discount": _("Discount cannot be higher than total amount.")}
146 )
147
148 if errors:
149 raise ValidationError(errors)
150
151 @property
152 def payment_amount(self):
153 return self.total_amount
154
155 @property
156 def payment_topic(self):
157 return f"Sales at {self.shift}"
158
159 @property
160 def order_description(self):
161 return ", ".join(str(x) for x in self.order_items.all())
162
163 @property
164 def payment_notes(self):
165 return (
166 f"{self.order_description}. Ordered at {self.created_at.time()} ({self.id})"
167 )
168
169 @property
170 def payment_payer(self):
171 return self.payer
172
173 @property
174 def accept_payment_from_any_user(self):
175 return True
176
177 @property
178 def payment_url(self):
179 return (
180 settings.BASE_URL + reverse("sales:order-pay", kwargs={"pk": self.pk})
181 if not self.payment
182 and (self.payment_amount is not None and self.payment_amount != 0)
183 else None
184 )
185
186 def __str__(self):
187 return f"Order {self.id} ({self.shift})"
188
189
190 class OrderItem(models.Model):
191 class Meta:
192 verbose_name = "item"
193 verbose_name_plural = "items"
194 ordering = ["pk"]
195 indexes = [
196 models.Index(fields=["order"]),
197 ]
198
199 product = models.ForeignKey(
200 ProductListItem,
201 verbose_name=_("product"),
202 null=False,
203 blank=False,
204 on_delete=models.PROTECT,
205 )
206 order = models.ForeignKey(
207 Order,
208 verbose_name=_("order"),
209 related_name="order_items",
210 null=False,
211 blank=False,
212 on_delete=models.CASCADE,
213 )
214 total = models.DecimalField(
215 verbose_name=_("total"),
216 max_digits=6,
217 decimal_places=2,
218 null=False,
219 blank=True,
220 validators=[MinValueValidator(Decimal("0.00"))],
221 help_text="Only when overriding the default",
222 )
223 amount = models.PositiveSmallIntegerField(
224 verbose_name=_("amount"), null=False, blank=False
225 )
226
227 def save(
228 self, force_insert=False, force_update=False, using=None, update_fields=None
229 ):
230 if self.order.shift.locked:
231 return
232 if self.order.payment:
233 return
234
235 if not self.total:
236 self.total = self.product.price * self.amount
237
238 return super(OrderItem, self).save(
239 force_insert, force_update, using, update_fields
240 )
241
242 def clean(self):
243 super().clean()
244 errors = {}
245
246 if self.order.shift.locked:
247 errors.update({"order": _("The shift is locked.")})
248
249 if self.product not in self.order.shift.product_list.product_items.all():
250 errors.update({"product": _("This product is not available.")})
251
252 if errors:
253 raise ValidationError(errors)
254
255 def __str__(self):
256 return f"{self.amount}x {self.product.product.name}"
257
[end of website/sales/models/order.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/sales/models/order.py b/website/sales/models/order.py
--- a/website/sales/models/order.py
+++ b/website/sales/models/order.py
@@ -115,16 +115,17 @@
self, force_insert=False, force_update=False, using=None, update_fields=None
):
if self.shift.locked:
- return
+ raise ValueError("The shift this order belongs to is locked.")
if self.shift.start > timezone.now():
- return
+ raise ValueError("The shift hasn't started yet.")
if (
self.payment
- and float(sum(self.order_items.values_list("total", flat=True)))
- - (self.discount or 0)
- != self.payment.amount
+ and self.subtotal - Decimal(self.discount or 0) != self.payment.amount
):
- return
+ # We cannot use self.total_amount as it is a requires a database query and hence will not use any updated values
+ raise ValueError(
+ "The payment amount does not match the order total amount."
+ )
if self.payment and not self.payer:
self.payer = self.payment.paid_by
@@ -228,9 +229,9 @@
self, force_insert=False, force_update=False, using=None, update_fields=None
):
if self.order.shift.locked:
- return
+ raise ValueError("The shift this order belongs to is locked.")
if self.order.payment:
- return
+ raise ValueError("This order has already been paid for.")
if not self.total:
self.total = self.product.price * self.amount
| {"golden_diff": "diff --git a/website/sales/models/order.py b/website/sales/models/order.py\n--- a/website/sales/models/order.py\n+++ b/website/sales/models/order.py\n@@ -115,16 +115,17 @@\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.shift.locked:\n- return\n+ raise ValueError(\"The shift this order belongs to is locked.\")\n if self.shift.start > timezone.now():\n- return\n+ raise ValueError(\"The shift hasn't started yet.\")\n if (\n self.payment\n- and float(sum(self.order_items.values_list(\"total\", flat=True)))\n- - (self.discount or 0)\n- != self.payment.amount\n+ and self.subtotal - Decimal(self.discount or 0) != self.payment.amount\n ):\n- return\n+ # We cannot use self.total_amount as it is a requires a database query and hence will not use any updated values\n+ raise ValueError(\n+ \"The payment amount does not match the order total amount.\"\n+ )\n if self.payment and not self.payer:\n self.payer = self.payment.paid_by\n \n@@ -228,9 +229,9 @@\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.order.shift.locked:\n- return\n+ raise ValueError(\"The shift this order belongs to is locked.\")\n if self.order.payment:\n- return\n+ raise ValueError(\"This order has already been paid for.\")\n \n if not self.total:\n self.total = self.product.price * self.amount\n", "issue": "Sales order payments are not always saved\n### Describe the bug\r\nWhen paying for a Thalia pay order (via the sales payment view, so the QR code flow), the payment is not always stored back to the order. The payment is created properly, but after payment, the foreign key to the payment in the order is not saved. \r\n\r\n### How to reproduce\r\nI am not sure exactly when this happens, at least it happens for the current shift 2 on the current staging environment. 
It might be because the shift has already been ended.\r\n\r\n### Expected behaviour\r\nStore the payment properly\r\n\r\n### Additional context\r\nMight be related to https://github.com/svthalia/concrexit/blob/6d0866022afb7fdf3edab34709d4d99e28039d59/website/sales/models/order.py#L123\n", "before_files": [{"content": "from decimal import Decimal\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import MinValueValidator\nfrom django.db import models\nfrom django.db.models import (\n Sum,\n Value,\n F,\n DecimalField,\n Q,\n IntegerField,\n BooleanField,\n Count,\n)\nfrom django.db.models.functions import Coalesce\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AnnotationProperty\n\nfrom members.models import uuid, Member\nfrom payments.models import Payable, Payment\nfrom sales.models.product import ProductListItem\nfrom sales.models.shift import Shift\n\n\ndef default_order_shift():\n return Shift.objects.filter(active=True).first()\n\n\nclass Order(models.Model, Payable):\n\n objects = QueryablePropertiesManager()\n\n class Meta:\n verbose_name = _(\"order\")\n verbose_name_plural = _(\"orders\")\n permissions = [\n (\"custom_prices\", _(\"Can use custom prices and discounts in orders\")),\n ]\n ordering = [\"created_at\"]\n\n id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)\n\n created_at = models.DateTimeField(\n verbose_name=_(\"created at\"), default=timezone.now\n )\n\n shift = models.ForeignKey(\n Shift,\n verbose_name=_(\"shift\"),\n related_name=\"orders\",\n default=default_order_shift,\n null=False,\n blank=False,\n on_delete=models.PROTECT,\n )\n\n items = models.ManyToManyField(\n ProductListItem, through=\"OrderItem\", verbose_name=_(\"items\"),\n )\n\n payment = models.OneToOneField(\n Payment,\n verbose_name=_(\"payment\"),\n related_name=\"sales_order\",\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n )\n\n discount = models.DecimalField(\n verbose_name=_(\"discount\"),\n max_digits=6,\n decimal_places=2,\n null=True,\n blank=True,\n validators=[MinValueValidator(Decimal(\"0.00\"))],\n )\n\n payer = models.ForeignKey(\n Member,\n models.SET_NULL,\n verbose_name=_(\"payer\"),\n related_name=\"sales_order\",\n blank=True,\n null=True,\n )\n\n age_restricted = AnnotationProperty(\n Count(\n \"order_items__pk\",\n filter=Q(order_items__product__product__age_restricted=True),\n output_field=BooleanField(),\n )\n )\n\n subtotal = AnnotationProperty(\n Coalesce(Sum(\"order_items__total\"), Value(0.00), output_field=DecimalField())\n )\n\n total_amount = AnnotationProperty(\n Coalesce(Sum(\"order_items__total\"), Value(0.00), output_field=DecimalField())\n - Coalesce(F(\"discount\"), Value(0.00), output_field=DecimalField())\n )\n\n num_items = AnnotationProperty(\n Coalesce(Sum(\"order_items__amount\"), Value(0), output_field=IntegerField())\n )\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.shift.locked:\n return\n if self.shift.start > timezone.now():\n return\n if (\n self.payment\n and float(sum(self.order_items.values_list(\"total\", flat=True)))\n - (self.discount or 0)\n != self.payment.amount\n ):\n return\n if self.payment and not self.payer:\n self.payer = self.payment.paid_by\n\n return super(Order, 
self).save(force_insert, force_update, using, update_fields)\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.shift.start > timezone.now():\n errors.update({\"shift\": _(\"The shift hasn't started yet.\")})\n\n if self.shift.locked:\n errors.update({\"shift\": _(\"The shift this order belongs to is locked.\")})\n\n if self.discount and self.discount > self.total_amount:\n errors.update(\n {\"discount\": _(\"Discount cannot be higher than total amount.\")}\n )\n\n if errors:\n raise ValidationError(errors)\n\n @property\n def payment_amount(self):\n return self.total_amount\n\n @property\n def payment_topic(self):\n return f\"Sales at {self.shift}\"\n\n @property\n def order_description(self):\n return \", \".join(str(x) for x in self.order_items.all())\n\n @property\n def payment_notes(self):\n return (\n f\"{self.order_description}. Ordered at {self.created_at.time()} ({self.id})\"\n )\n\n @property\n def payment_payer(self):\n return self.payer\n\n @property\n def accept_payment_from_any_user(self):\n return True\n\n @property\n def payment_url(self):\n return (\n settings.BASE_URL + reverse(\"sales:order-pay\", kwargs={\"pk\": self.pk})\n if not self.payment\n and (self.payment_amount is not None and self.payment_amount != 0)\n else None\n )\n\n def __str__(self):\n return f\"Order {self.id} ({self.shift})\"\n\n\nclass OrderItem(models.Model):\n class Meta:\n verbose_name = \"item\"\n verbose_name_plural = \"items\"\n ordering = [\"pk\"]\n indexes = [\n models.Index(fields=[\"order\"]),\n ]\n\n product = models.ForeignKey(\n ProductListItem,\n verbose_name=_(\"product\"),\n null=False,\n blank=False,\n on_delete=models.PROTECT,\n )\n order = models.ForeignKey(\n Order,\n verbose_name=_(\"order\"),\n related_name=\"order_items\",\n null=False,\n blank=False,\n on_delete=models.CASCADE,\n )\n total = models.DecimalField(\n verbose_name=_(\"total\"),\n max_digits=6,\n decimal_places=2,\n null=False,\n blank=True,\n validators=[MinValueValidator(Decimal(\"0.00\"))],\n help_text=\"Only when overriding the default\",\n )\n amount = models.PositiveSmallIntegerField(\n verbose_name=_(\"amount\"), null=False, blank=False\n )\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.order.shift.locked:\n return\n if self.order.payment:\n return\n\n if not self.total:\n self.total = self.product.price * self.amount\n\n return super(OrderItem, self).save(\n force_insert, force_update, using, update_fields\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.order.shift.locked:\n errors.update({\"order\": _(\"The shift is locked.\")})\n\n if self.product not in self.order.shift.product_list.product_items.all():\n errors.update({\"product\": _(\"This product is not available.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def __str__(self):\n return f\"{self.amount}x {self.product.product.name}\"\n", "path": "website/sales/models/order.py"}]} | 2,870 | 361 |
gh_patches_debug_13790 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3620 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider wafflehouse is broken
During the global build at 2021-06-02-14-42-40, spider **wafflehouse** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/wafflehouse.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/wafflehouse.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/wafflehouse.geojson))
</issue>
<code>
[start of locations/spiders/wafflehouse.py]
1 # -*- coding: utf-8 -*-
2 import json
3
4 import scrapy
5
6 from locations.hours import OpeningHours
7 from locations.items import GeojsonPointItem
8
9
10 class WaffleHouseSpider(scrapy.Spider):
11 name = "wafflehouse"
12 item_attributes = {"brand": "Waffle House", "brand_wikidata": "Q1701206"}
13 allowed_domains = ["wafflehouse.com"]
14 start_urls = [
15 "https://wafflehouse.locally.com/stores/conversion_data?has_data=true&company_id=117995&store_mode=&style=&color=&upc=&category=&inline=1&show_links_in_list=&parent_domain=&map_center_lat=39.8&map_center_lng=-98.6&map_distance_diag=3000&sort_by=proximity&no_variants=0&only_retailer_id=&dealers_company_id=&only_store_id=false&uses_alt_coords=false&zoom_level=4&lang=en-us&forced_coords=1"
16 ]
17
18 def parse(self, response):
19 for row in response.json()["markers"]:
20 url = "https://locations.wafflehouse.com/" + row["slug"]
21 yield scrapy.Request(url, callback=self.parse_store)
22
23 def parse_store(self, response):
24 data = json.loads(
25 response.xpath('//head/script[@type="application/ld+json"]/text()').get()
26 )
27
28 hours = OpeningHours()
29 specs = data.get("openingHoursSpecification", [])
30 if any({"validFrom", "validThrough"} <= spec.keys() for spec in specs):
31 # Giving opening hours for specific dates, abandon the whole proposal
32 pass
33 else:
34 for spec in specs:
35 for day in spec["dayOfWeek"]:
36 hours.add_range(
37 day[:2].capitalize(), spec["opens"], spec["closes"], "%I%p"
38 )
39
40 properties = {
41 "ref": data["@id"],
42 "lat": data["geo"]["latitude"],
43 "lon": data["geo"]["longitude"],
44 "website": response.url,
45 "name": data["name"],
46 "phone": data["telephone"],
47 "addr_full": data["address"]["streetAddress"],
48 "city": data["address"]["addressLocality"],
49 "state": data["address"]["addressRegion"],
50 "postcode": data["address"]["postalCode"],
51 "opening_hours": hours.as_opening_hours(),
52 }
53 yield GeojsonPointItem(**properties)
54
[end of locations/spiders/wafflehouse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/wafflehouse.py b/locations/spiders/wafflehouse.py
--- a/locations/spiders/wafflehouse.py
+++ b/locations/spiders/wafflehouse.py
@@ -44,10 +44,11 @@
"website": response.url,
"name": data["name"],
"phone": data["telephone"],
- "addr_full": data["address"]["streetAddress"],
+ "street_address": data["address"]["streetAddress"],
"city": data["address"]["addressLocality"],
"state": data["address"]["addressRegion"],
"postcode": data["address"]["postalCode"],
"opening_hours": hours.as_opening_hours(),
}
+
yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/wafflehouse.py b/locations/spiders/wafflehouse.py\n--- a/locations/spiders/wafflehouse.py\n+++ b/locations/spiders/wafflehouse.py\n@@ -44,10 +44,11 @@\n \"website\": response.url,\n \"name\": data[\"name\"],\n \"phone\": data[\"telephone\"],\n- \"addr_full\": data[\"address\"][\"streetAddress\"],\n+ \"street_address\": data[\"address\"][\"streetAddress\"],\n \"city\": data[\"address\"][\"addressLocality\"],\n \"state\": data[\"address\"][\"addressRegion\"],\n \"postcode\": data[\"address\"][\"postalCode\"],\n \"opening_hours\": hours.as_opening_hours(),\n }\n+\n yield GeojsonPointItem(**properties)\n", "issue": "Spider wafflehouse is broken\nDuring the global build at 2021-06-02-14-42-40, spider **wafflehouse** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/wafflehouse.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/wafflehouse.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/wafflehouse.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\n\nimport scrapy\n\nfrom locations.hours import OpeningHours\nfrom locations.items import GeojsonPointItem\n\n\nclass WaffleHouseSpider(scrapy.Spider):\n name = \"wafflehouse\"\n item_attributes = {\"brand\": \"Waffle House\", \"brand_wikidata\": \"Q1701206\"}\n allowed_domains = [\"wafflehouse.com\"]\n start_urls = [\n \"https://wafflehouse.locally.com/stores/conversion_data?has_data=true&company_id=117995&store_mode=&style=&color=&upc=&category=&inline=1&show_links_in_list=&parent_domain=&map_center_lat=39.8&map_center_lng=-98.6&map_distance_diag=3000&sort_by=proximity&no_variants=0&only_retailer_id=&dealers_company_id=&only_store_id=false&uses_alt_coords=false&zoom_level=4&lang=en-us&forced_coords=1\"\n ]\n\n def parse(self, response):\n for row in response.json()[\"markers\"]:\n url = \"https://locations.wafflehouse.com/\" + row[\"slug\"]\n yield scrapy.Request(url, callback=self.parse_store)\n\n def parse_store(self, response):\n data = json.loads(\n response.xpath('//head/script[@type=\"application/ld+json\"]/text()').get()\n )\n\n hours = OpeningHours()\n specs = data.get(\"openingHoursSpecification\", [])\n if any({\"validFrom\", \"validThrough\"} <= spec.keys() for spec in specs):\n # Giving opening hours for specific dates, abandon the whole proposal\n pass\n else:\n for spec in specs:\n for day in spec[\"dayOfWeek\"]:\n hours.add_range(\n day[:2].capitalize(), spec[\"opens\"], spec[\"closes\"], \"%I%p\"\n )\n\n properties = {\n \"ref\": data[\"@id\"],\n \"lat\": data[\"geo\"][\"latitude\"],\n \"lon\": data[\"geo\"][\"longitude\"],\n \"website\": response.url,\n \"name\": data[\"name\"],\n \"phone\": data[\"telephone\"],\n \"addr_full\": data[\"address\"][\"streetAddress\"],\n \"city\": data[\"address\"][\"addressLocality\"],\n \"state\": data[\"address\"][\"addressRegion\"],\n \"postcode\": data[\"address\"][\"postalCode\"],\n \"opening_hours\": hours.as_opening_hours(),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/wafflehouse.py"}]} | 1,354 | 164 |
gh_patches_debug_31352 | rasdani/github-patches | git_diff | Theano__Theano-4512 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
d3viz doesn't work for some graphs
Sometimes d3viz doesn't work for me: an empty graph is displayed, and there is an error message in the JS console. I tried to reduce it to a minimal example:
``` py
import os
os.environ['THEANO_FLAGS'] = "device=gpu0,floatX=float32" #,optimizer=fast_compile"
import numpy as np
from lasagne.updates import adam
from theano import tensor as T, shared, function
import theano.d3viz as d3v
def show_d3(g):
d3v.d3viz(g, 'example.html')
from IPython.display import IFrame
return IFrame('example.html', width=800, height=500)
x = T.fvector()
W = shared(np.zeros((10, 5), dtype=np.float32))
b = shared(np.zeros((10,), dtype=np.float32))
y_true = T.fvector()
y = T.nnet.sigmoid(T.dot(x, W) + b)
cost = T.sqrt(((y - y_true)**2).sum())
updates = adam(cost, [W]) # no b!
f_cost = function([x, y_true], cost, updates=updates)
show_d3(f_cost)
```
(IPython notebook: https://gist.github.com/kmike/13b0fb747dccd4f2f1e44789a9cb832c).
This is brittle: if `adam` is replaced with any other training method from Lasagne (or if a simple SGD is implemented manually) the chart works; if `T.sqrt` is removed from the `cost` expression the chart works; if `T.nnet.sigmoid` is removed from the `y` expression the chart works; if `b` is added to the adam updates the chart works; if `optimizer=fast_compile` is added to THEANO_FLAGS the chart works.
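A guess at the culprit, based only on how the HTML gets assembled: the DOT source is dropped into the page after a regex quote-escape and newline stripping, which does not guarantee a valid JavaScript string for every graph, so particular node labels could break the embedded script. A sketch of one safer way to embed it:
```python
import json

def safe_json(obj):
    # json.dumps yields a valid JavaScript string literal for arbitrary text;
    # escaping '<' keeps the payload from terminating the surrounding <script> tag.
    return json.dumps(obj).replace('<', '\\u003c')

# e.g. substitute safe_json(dot_graph) for the '%% DOT_GRAPH %%' placeholder
# instead of an escaped, newline-stripped string.
```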
</issue>
<code>
[start of theano/d3viz/d3viz.py]
1 """Dynamic visualization of Theano graphs.
2
3 Author: Christof Angermueller <[email protected]>
4 """
5 from __future__ import absolute_import, print_function, division
6
7 import os
8 import shutil
9 import re
10 import six
11 from six import iteritems
12
13 from theano.d3viz.formatting import PyDotFormatter
14
15 __path__ = os.path.dirname(os.path.realpath(__file__))
16
17
18 def replace_patterns(x, replace):
19 """Replace `replace` in string `x`.
20
21 Parameters
22 ----------
23 s : str
24 String on which function is applied
25 replace : dict
26 `key`, `value` pairs where key is a regular expression and `value` a
27 string by which `key` is replaced
28 """
29 for from_, to in iteritems(replace):
30 x = x.replace(str(from_), str(to))
31 return x
32
33
34 def escape_quotes(s):
35 """Escape quotes in string.
36
37 Parameters
38 ----------
39 s : str
40 String on which function is applied
41 """
42 s = re.sub(r'''(['"])''', r'\\\1', s)
43 return s
44
45
46 def d3viz(fct, outfile, copy_deps=True, *args, **kwargs):
47 """Create HTML file with dynamic visualizing of a Theano function graph.
48
49 In the HTML file, the whole graph or single nodes can be moved by drag and
50 drop. Zooming is possible via the mouse wheel. Detailed information about
51 nodes and edges are displayed via mouse-over events. Node labels can be
52 edited by selecting Edit from the context menu.
53
54 Input nodes are colored in green, output nodes in blue. Apply nodes are
55 ellipses, and colored depending on the type of operation they perform. Red
56 ellipses are transfers from/to the GPU (ops with names GpuFromHost,
57 HostFromGpu).
58
59 Edges are black by default. If a node returns a view of an
60 input, the input edge will be blue. If it returns a destroyed input, the
61 edge will be red.
62
63 Parameters
64 ----------
65 fct : theano.compile.function_module.Function
66 A compiled Theano function, variable, apply or a list of variables.
67 outfile : str
68 Path to output HTML file.
69 copy_deps : bool, optional
70 Copy javascript and CSS dependencies to output directory.
71
72 Notes
73 -----
74 This function accepts extra parameters which will be forwarded to
75 :class:`theano.d3viz.formatting.PyDotFormatter`.
76
77 """
78
79 # Create DOT graph
80 formatter = PyDotFormatter(*args, **kwargs)
81 graph = formatter(fct)
82 dot_graph_raw = graph.create_dot()
83 if not six.PY2:
84 dot_graph_raw = dot_graph_raw.decode('utf8')
85 dot_graph = escape_quotes(dot_graph_raw).replace('\n', '').replace('\r', '')
86
87 # Create output directory if not existing
88 outdir = os.path.dirname(outfile)
89 if not outdir == '' and not os.path.exists(outdir):
90 os.makedirs(outdir)
91
92 # Read template HTML file
93 template_file = os.path.join(__path__, 'html', 'template.html')
94 with open(template_file) as f:
95 template = f.read()
96
97 # Copy dependencies to output directory
98 src_deps = __path__
99 if copy_deps:
100 dst_deps = 'd3viz'
101 for d in ['js', 'css']:
102 dep = os.path.join(outdir, dst_deps, d)
103 if not os.path.exists(dep):
104 shutil.copytree(os.path.join(src_deps, d), dep)
105 else:
106 dst_deps = src_deps
107
108 # Replace patterns in template
109 replace = {
110 '%% JS_DIR %%': os.path.join(dst_deps, 'js'),
111 '%% CSS_DIR %%': os.path.join(dst_deps, 'css'),
112 '%% DOT_GRAPH %%': dot_graph,
113 }
114 html = replace_patterns(template, replace)
115
116 # Write HTML file
117 with open(outfile, 'w') as f:
118 f.write(html)
119
120
121 def d3write(fct, path, *args, **kwargs):
122 """Convert Theano graph to pydot graph and write to dot file.
123
124 Parameters
125 ----------
126 fct : theano.compile.function_module.Function
127 A compiled Theano function, variable, apply or a list of variables.
128 path: str
129 Path to output file
130
131 Notes
132 -----
133 This function accepts extra parameters which will be forwarded to
134 :class:`theano.d3viz.formatting.PyDotFormatter`.
135
136 """
137
138 formatter = PyDotFormatter(*args, **kwargs)
139 graph = formatter(fct)
140 graph.write_dot(path)
141
[end of theano/d3viz/d3viz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/theano/d3viz/d3viz.py b/theano/d3viz/d3viz.py
--- a/theano/d3viz/d3viz.py
+++ b/theano/d3viz/d3viz.py
@@ -5,8 +5,8 @@
from __future__ import absolute_import, print_function, division
import os
+import json
import shutil
-import re
import six
from six import iteritems
@@ -31,16 +31,15 @@
return x
-def escape_quotes(s):
- """Escape quotes in string.
+def safe_json(obj):
+ """Encode `obj` to JSON so that it can be embedded safely inside HTML.
Parameters
----------
- s : str
- String on which function is applied
+ obj : object
+ object to serialize
"""
- s = re.sub(r'''(['"])''', r'\\\1', s)
- return s
+ return json.dumps(obj).replace('<', '\\u003c')
def d3viz(fct, outfile, copy_deps=True, *args, **kwargs):
@@ -79,10 +78,9 @@
# Create DOT graph
formatter = PyDotFormatter(*args, **kwargs)
graph = formatter(fct)
- dot_graph_raw = graph.create_dot()
+ dot_graph = graph.create_dot()
if not six.PY2:
- dot_graph_raw = dot_graph_raw.decode('utf8')
- dot_graph = escape_quotes(dot_graph_raw).replace('\n', '').replace('\r', '')
+ dot_graph = dot_graph.decode('utf8')
# Create output directory if not existing
outdir = os.path.dirname(outfile)
@@ -109,7 +107,7 @@
replace = {
'%% JS_DIR %%': os.path.join(dst_deps, 'js'),
'%% CSS_DIR %%': os.path.join(dst_deps, 'css'),
- '%% DOT_GRAPH %%': dot_graph,
+ '%% DOT_GRAPH %%': safe_json(dot_graph),
}
html = replace_patterns(template, replace)
| {"golden_diff": "diff --git a/theano/d3viz/d3viz.py b/theano/d3viz/d3viz.py\n--- a/theano/d3viz/d3viz.py\n+++ b/theano/d3viz/d3viz.py\n@@ -5,8 +5,8 @@\n from __future__ import absolute_import, print_function, division\n \n import os\n+import json\n import shutil\n-import re\n import six\n from six import iteritems\n \n@@ -31,16 +31,15 @@\n return x\n \n \n-def escape_quotes(s):\n- \"\"\"Escape quotes in string.\n+def safe_json(obj):\n+ \"\"\"Encode `obj` to JSON so that it can be embedded safely inside HTML.\n \n Parameters\n ----------\n- s : str\n- String on which function is applied\n+ obj : object\n+ object to serialize\n \"\"\"\n- s = re.sub(r'''(['\"])''', r'\\\\\\1', s)\n- return s\n+ return json.dumps(obj).replace('<', '\\\\u003c')\n \n \n def d3viz(fct, outfile, copy_deps=True, *args, **kwargs):\n@@ -79,10 +78,9 @@\n # Create DOT graph\n formatter = PyDotFormatter(*args, **kwargs)\n graph = formatter(fct)\n- dot_graph_raw = graph.create_dot()\n+ dot_graph = graph.create_dot()\n if not six.PY2:\n- dot_graph_raw = dot_graph_raw.decode('utf8')\n- dot_graph = escape_quotes(dot_graph_raw).replace('\\n', '').replace('\\r', '')\n+ dot_graph = dot_graph.decode('utf8')\n \n # Create output directory if not existing\n outdir = os.path.dirname(outfile)\n@@ -109,7 +107,7 @@\n replace = {\n '%% JS_DIR %%': os.path.join(dst_deps, 'js'),\n '%% CSS_DIR %%': os.path.join(dst_deps, 'css'),\n- '%% DOT_GRAPH %%': dot_graph,\n+ '%% DOT_GRAPH %%': safe_json(dot_graph),\n }\n html = replace_patterns(template, replace)\n", "issue": "d3viz doesn't work for some graphs\nSometimes d3viz doesn't work for me - empty graph is displayed, and there is an error message in JS console. I tried to reduce it to a minimal example\n\n``` py\nimport os \nos.environ['THEANO_FLAGS'] = \"device=gpu0,floatX=float32\" #,optimizer=fast_compile\"\n\nimport numpy as np\nfrom lasagne.updates import adam\nfrom theano import tensor as T, shared, function\nimport theano.d3viz as d3v\n\ndef show_d3(g):\n d3v.d3viz(g, 'example.html')\n from IPython.display import IFrame\n return IFrame('example.html', width=800, height=500)\n\nx = T.fvector()\nW = shared(np.zeros((10, 5), dtype=np.float32))\nb = shared(np.zeros((10,), dtype=np.float32))\ny_true = T.fvector()\n\ny = T.nnet.sigmoid(T.dot(x, W) + b)\ncost = T.sqrt(((y - y_true)**2).sum())\n\nupdates = adam(cost, [W]) # no b!\nf_cost = function([x, y_true], cost, updates=updates)\nshow_d3(f_cost)\n```\n\n(IPython notebook: https://gist.github.com/kmike/13b0fb747dccd4f2f1e44789a9cb832c).\nThis is brittle: if `adam` is replaced with any other training method from Lasagne (or if a simple SGD is implemented manually) chart works; if `T.sqrt` is removed from `cost` expression chart works; if `T.nnet sigmoid` is removed from `y` expression chart works; if `b` is added to adam updates chart works; if `optimizer=fast_compile` is added to THEANO_FLAGS chart works.\n\n", "before_files": [{"content": "\"\"\"Dynamic visualization of Theano graphs.\n\nAuthor: Christof Angermueller <[email protected]>\n\"\"\"\nfrom __future__ import absolute_import, print_function, division\n\nimport os\nimport shutil\nimport re\nimport six\nfrom six import iteritems\n\nfrom theano.d3viz.formatting import PyDotFormatter\n\n__path__ = os.path.dirname(os.path.realpath(__file__))\n\n\ndef replace_patterns(x, replace):\n \"\"\"Replace `replace` in string `x`.\n\n Parameters\n ----------\n s : str\n String on which function is applied\n replace : dict\n `key`, `value` pairs where key is a regular expression and `value` 
a\n string by which `key` is replaced\n \"\"\"\n for from_, to in iteritems(replace):\n x = x.replace(str(from_), str(to))\n return x\n\n\ndef escape_quotes(s):\n \"\"\"Escape quotes in string.\n\n Parameters\n ----------\n s : str\n String on which function is applied\n \"\"\"\n s = re.sub(r'''(['\"])''', r'\\\\\\1', s)\n return s\n\n\ndef d3viz(fct, outfile, copy_deps=True, *args, **kwargs):\n \"\"\"Create HTML file with dynamic visualizing of a Theano function graph.\n\n In the HTML file, the whole graph or single nodes can be moved by drag and\n drop. Zooming is possible via the mouse wheel. Detailed information about\n nodes and edges are displayed via mouse-over events. Node labels can be\n edited by selecting Edit from the context menu.\n\n Input nodes are colored in green, output nodes in blue. Apply nodes are\n ellipses, and colored depending on the type of operation they perform. Red\n ellipses are transfers from/to the GPU (ops with names GpuFromHost,\n HostFromGpu).\n\n Edges are black by default. If a node returns a view of an\n input, the input edge will be blue. If it returns a destroyed input, the\n edge will be red.\n\n Parameters\n ----------\n fct : theano.compile.function_module.Function\n A compiled Theano function, variable, apply or a list of variables.\n outfile : str\n Path to output HTML file.\n copy_deps : bool, optional\n Copy javascript and CSS dependencies to output directory.\n\n Notes\n -----\n This function accepts extra parameters which will be forwarded to\n :class:`theano.d3viz.formatting.PyDotFormatter`.\n\n \"\"\"\n\n # Create DOT graph\n formatter = PyDotFormatter(*args, **kwargs)\n graph = formatter(fct)\n dot_graph_raw = graph.create_dot()\n if not six.PY2:\n dot_graph_raw = dot_graph_raw.decode('utf8')\n dot_graph = escape_quotes(dot_graph_raw).replace('\\n', '').replace('\\r', '')\n\n # Create output directory if not existing\n outdir = os.path.dirname(outfile)\n if not outdir == '' and not os.path.exists(outdir):\n os.makedirs(outdir)\n\n # Read template HTML file\n template_file = os.path.join(__path__, 'html', 'template.html')\n with open(template_file) as f:\n template = f.read()\n\n # Copy dependencies to output directory\n src_deps = __path__\n if copy_deps:\n dst_deps = 'd3viz'\n for d in ['js', 'css']:\n dep = os.path.join(outdir, dst_deps, d)\n if not os.path.exists(dep):\n shutil.copytree(os.path.join(src_deps, d), dep)\n else:\n dst_deps = src_deps\n\n # Replace patterns in template\n replace = {\n '%% JS_DIR %%': os.path.join(dst_deps, 'js'),\n '%% CSS_DIR %%': os.path.join(dst_deps, 'css'),\n '%% DOT_GRAPH %%': dot_graph,\n }\n html = replace_patterns(template, replace)\n\n # Write HTML file\n with open(outfile, 'w') as f:\n f.write(html)\n\n\ndef d3write(fct, path, *args, **kwargs):\n \"\"\"Convert Theano graph to pydot graph and write to dot file.\n\n Parameters\n ----------\n fct : theano.compile.function_module.Function\n A compiled Theano function, variable, apply or a list of variables.\n path: str\n Path to output file\n\n Notes\n -----\n This function accepts extra parameters which will be forwarded to\n :class:`theano.d3viz.formatting.PyDotFormatter`.\n\n \"\"\"\n\n formatter = PyDotFormatter(*args, **kwargs)\n graph = formatter(fct)\n graph.write_dot(path)\n", "path": "theano/d3viz/d3viz.py"}]} | 2,275 | 462 |
gh_patches_debug_9769 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1168 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Choice options are redundant using cookiecutter with latest click package
* Cookiecutter version: 1.6.0 -- installed via `pip install`
* Template project url: n/a
* Python version: 3.7
* Operating System: Windows 10
### Description:
If cookiecutter.json has the following:
```json
{
"my_choice": ["a","b"]
}
```
Then running cookiecutter gives the following prompt:
```
Select my_choice:
1 - a
2 - b
Choose from 1, 2 (1, 2) [1]:
```
Note how the choices are listed twice in the last line. This is because the [Click API](https://click.palletsprojects.com/en/7.x/api/) has been updated to 7.0 and now automatically shows the choices to the user in parentheses, so repeating them in the prompt text is redundant.
### Solution
Either the call to `click.prompt` should pass `show_choices=False`, or the prompt text should stop listing the choices itself and let the Click API display them instead.
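For illustration, a self-contained sketch of the first option; `show_choices` only exists from click 7.0 onwards, so a `click>=7.0` requirement (or a version check) is assumed:
```python
import click

value = click.prompt(
    "Select my_choice:\n1 - a\n2 - b\nChoose from 1, 2",
    type=click.Choice(["1", "2"]),
    default="1",
    show_choices=False,  # added in click 7.0; suppresses the "(1, 2)" suffix
)
```
With `show_choices=False` the last line renders as `Choose from 1, 2 [1]:` instead of `Choose from 1, 2 (1, 2) [1]:`.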
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import io
6 import sys
7
8 from setuptools import setup
9
10 version = "1.6.0"
11
12 if sys.argv[-1] == 'publish':
13 os.system('python setup.py sdist upload')
14 os.system('python setup.py bdist_wheel upload')
15 sys.exit()
16
17 if sys.argv[-1] == 'tag':
18 os.system("git tag -a %s -m 'version %s'" % (version, version))
19 os.system("git push --tags")
20 sys.exit()
21
22 with io.open('README.rst', 'r', encoding='utf-8') as readme_file:
23 readme = readme_file.read()
24
25 requirements = [
26 'future>=0.15.2',
27 'binaryornot>=0.2.0',
28 'jinja2>=2.7',
29 'click>=5.0',
30 'whichcraft>=0.4.0',
31 'poyo>=0.1.0',
32 'jinja2-time>=0.1.0',
33 'requests>=2.18.0',
34 ]
35
36 if sys.argv[-1] == 'readme':
37 print(readme)
38 sys.exit()
39
40
41 setup(
42 name='cookiecutter',
43 version=version,
44 description=('A command-line utility that creates projects from project '
45 'templates, e.g. creating a Python package project from a '
46 'Python package project template.'),
47 long_description=readme,
48 author='Audrey Roy',
49 author_email='[email protected]',
50 url='https://github.com/cookiecutter/cookiecutter',
51 packages=[
52 'cookiecutter',
53 ],
54 package_dir={'cookiecutter': 'cookiecutter'},
55 entry_points={
56 'console_scripts': [
57 'cookiecutter = cookiecutter.__main__:main',
58 ]
59 },
60 include_package_data=True,
61 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
62 install_requires=requirements,
63 license='BSD',
64 zip_safe=False,
65 classifiers=[
66 'Development Status :: 5 - Production/Stable',
67 'Environment :: Console',
68 'Intended Audience :: Developers',
69 'Natural Language :: English',
70 'License :: OSI Approved :: BSD License',
71 'Programming Language :: Python',
72 'Programming Language :: Python :: 2',
73 'Programming Language :: Python :: 2.7',
74 'Programming Language :: Python :: 3',
75 'Programming Language :: Python :: 3.5',
76 'Programming Language :: Python :: 3.6',
77 'Programming Language :: Python :: 3.7',
78 'Programming Language :: Python :: Implementation :: CPython',
79 'Programming Language :: Python :: Implementation :: PyPy',
80 'Topic :: Software Development',
81 ],
82 keywords=(
83 'cookiecutter, Python, projects, project templates, Jinja2, '
84 'skeleton, scaffolding, project directory, setup.py, package, '
85 'packaging'
86 ),
87 )
88
[end of setup.py]
[start of cookiecutter/prompt.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 cookiecutter.prompt
5 ---------------------
6
7 Functions for prompting the user for project info.
8 """
9
10 from collections import OrderedDict
11 import json
12
13 import click
14 from past.builtins import basestring
15
16 from future.utils import iteritems
17
18 from jinja2.exceptions import UndefinedError
19
20 from .exceptions import UndefinedVariableInTemplate
21 from .environment import StrictEnvironment
22
23
24 def read_user_variable(var_name, default_value):
25 """Prompt the user for the given variable and return the entered value
26 or the given default.
27
28 :param str var_name: Variable of the context to query the user
29 :param default_value: Value that will be returned if no input happens
30 """
31 # Please see http://click.pocoo.org/4/api/#click.prompt
32 return click.prompt(var_name, default=default_value)
33
34
35 def read_user_yes_no(question, default_value):
36 """Prompt the user to reply with 'yes' or 'no' (or equivalent values).
37
38 Note:
39 Possible choices are 'true', '1', 'yes', 'y' or 'false', '0', 'no', 'n'
40
41 :param str question: Question to the user
42 :param default_value: Value that will be returned if no input happens
43 """
44 # Please see http://click.pocoo.org/4/api/#click.prompt
45 return click.prompt(
46 question,
47 default=default_value,
48 type=click.BOOL
49 )
50
51
52 def read_repo_password(question):
53 """Prompt the user to enter a password
54
55 :param str question: Question to the user
56 """
57 # Please see http://click.pocoo.org/4/api/#click.prompt
58 return click.prompt(question, hide_input=True)
59
60
61 def read_user_choice(var_name, options):
62 """Prompt the user to choose from several options for the given variable.
63
64 The first item will be returned if no input happens.
65
66 :param str var_name: Variable as specified in the context
67 :param list options: Sequence of options that are available to select from
68 :return: Exactly one item of ``options`` that has been chosen by the user
69 """
70 # Please see http://click.pocoo.org/4/api/#click.prompt
71 if not isinstance(options, list):
72 raise TypeError
73
74 if not options:
75 raise ValueError
76
77 choice_map = OrderedDict(
78 (u'{}'.format(i), value) for i, value in enumerate(options, 1)
79 )
80 choices = choice_map.keys()
81 default = u'1'
82
83 choice_lines = [u'{} - {}'.format(*c) for c in choice_map.items()]
84 prompt = u'\n'.join((
85 u'Select {}:'.format(var_name),
86 u'\n'.join(choice_lines),
87 u'Choose from {}'.format(u', '.join(choices))
88 ))
89
90 user_choice = click.prompt(
91 prompt, type=click.Choice(choices), default=default
92 )
93 return choice_map[user_choice]
94
95
96 def process_json(user_value):
97 try:
98 user_dict = json.loads(
99 user_value,
100 object_pairs_hook=OrderedDict,
101 )
102 except Exception:
103 # Leave it up to click to ask the user again
104 raise click.UsageError('Unable to decode to JSON.')
105
106 if not isinstance(user_dict, dict):
107 # Leave it up to click to ask the user again
108 raise click.UsageError('Requires JSON dict.')
109
110 return user_dict
111
112
113 def read_user_dict(var_name, default_value):
114 """Prompt the user to provide a dictionary of data.
115
116 :param str var_name: Variable as specified in the context
117 :param default_value: Value that will be returned if no input is provided
118 :return: A Python dictionary to use in the context.
119 """
120 # Please see http://click.pocoo.org/4/api/#click.prompt
121 if not isinstance(default_value, dict):
122 raise TypeError
123
124 default_display = 'default'
125
126 user_value = click.prompt(
127 var_name,
128 default=default_display,
129 type=click.STRING,
130 value_proc=process_json,
131 )
132
133 if user_value == default_display:
134 # Return the given default w/o any processing
135 return default_value
136 return user_value
137
138
139 def render_variable(env, raw, cookiecutter_dict):
140 """Inside the prompting taken from the cookiecutter.json file, this renders
141 the next variable. For example, if a project_name is "Peanut Butter
142 Cookie", the repo_name could be be rendered with:
143
144 `{{ cookiecutter.project_name.replace(" ", "_") }}`.
145
146 This is then presented to the user as the default.
147
148 :param Environment env: A Jinja2 Environment object.
149 :param str raw: The next value to be prompted for by the user.
150 :param dict cookiecutter_dict: The current context as it's gradually
151 being populated with variables.
152 :return: The rendered value for the default variable.
153 """
154 if raw is None:
155 return None
156 elif isinstance(raw, dict):
157 return {
158 render_variable(env, k, cookiecutter_dict):
159 render_variable(env, v, cookiecutter_dict)
160 for k, v in raw.items()
161 }
162 elif isinstance(raw, list):
163 return [
164 render_variable(env, v, cookiecutter_dict)
165 for v in raw
166 ]
167 elif not isinstance(raw, basestring):
168 raw = str(raw)
169
170 template = env.from_string(raw)
171
172 rendered_template = template.render(cookiecutter=cookiecutter_dict)
173 return rendered_template
174
175
176 def prompt_choice_for_config(cookiecutter_dict, env, key, options, no_input):
177 """Prompt the user which option to choose from the given. Each of the
178 possible choices is rendered beforehand.
179 """
180 rendered_options = [
181 render_variable(env, raw, cookiecutter_dict) for raw in options
182 ]
183
184 if no_input:
185 return rendered_options[0]
186 return read_user_choice(key, rendered_options)
187
188
189 def prompt_for_config(context, no_input=False):
190 """
191 Prompts the user to enter new config, using context as a source for the
192 field names and sample values.
193
194 :param no_input: Prompt the user at command line for manual configuration?
195 """
196 cookiecutter_dict = OrderedDict([])
197 env = StrictEnvironment(context=context)
198
199 # First pass: Handle simple and raw variables, plus choices.
200 # These must be done first because the dictionaries keys and
201 # values might refer to them.
202 for key, raw in iteritems(context[u'cookiecutter']):
203 if key.startswith(u'_'):
204 cookiecutter_dict[key] = raw
205 continue
206
207 try:
208 if isinstance(raw, list):
209 # We are dealing with a choice variable
210 val = prompt_choice_for_config(
211 cookiecutter_dict, env, key, raw, no_input
212 )
213 cookiecutter_dict[key] = val
214 elif not isinstance(raw, dict):
215 # We are dealing with a regular variable
216 val = render_variable(env, raw, cookiecutter_dict)
217
218 if not no_input:
219 val = read_user_variable(key, val)
220
221 cookiecutter_dict[key] = val
222 except UndefinedError as err:
223 msg = "Unable to render variable '{}'".format(key)
224 raise UndefinedVariableInTemplate(msg, err, context)
225
226 # Second pass; handle the dictionaries.
227 for key, raw in iteritems(context[u'cookiecutter']):
228
229 try:
230 if isinstance(raw, dict):
231 # We are dealing with a dict variable
232 val = render_variable(env, raw, cookiecutter_dict)
233
234 if not no_input:
235 val = read_user_dict(key, val)
236
237 cookiecutter_dict[key] = val
238 except UndefinedError as err:
239 msg = "Unable to render variable '{}'".format(key)
240 raise UndefinedVariableInTemplate(msg, err, context)
241
242 return cookiecutter_dict
243
[end of cookiecutter/prompt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -88,7 +88,7 @@
))
user_choice = click.prompt(
- prompt, type=click.Choice(choices), default=default
+ prompt, type=click.Choice(choices), default=default, show_choices=False
)
return choice_map[user_choice]
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
'future>=0.15.2',
'binaryornot>=0.2.0',
'jinja2>=2.7',
- 'click>=5.0',
+ 'click>=7.0',
'whichcraft>=0.4.0',
'poyo>=0.1.0',
'jinja2-time>=0.1.0',
| {"golden_diff": "diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -88,7 +88,7 @@\n ))\n \n user_choice = click.prompt(\n- prompt, type=click.Choice(choices), default=default\n+ prompt, type=click.Choice(choices), default=default, show_choices=False\n )\n return choice_map[user_choice]\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n- 'click>=5.0',\n+ 'click>=7.0',\n 'whichcraft>=0.4.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n", "issue": "Choice options are redundant using cookiecutter with latest click package\n* Cookiecutter version: 1.6.0 -- installed via `pip install`\r\n* Template project url: n/a\r\n* Python version: 3.7\r\n* Operating System: Windows 10\r\n\r\n### Description:\r\n\r\nIf cookiecutter.json has the following:\r\n```json\r\n{\r\n \"my_choice\": [\"a\",\"b\"]\r\n}\r\n```\r\nThen running cookiecutter gives the following prompt:\r\n```\r\nSelect my_choice:\r\n1 - a\r\n2 - b\r\nChoose from 1, 2 (1, 2) [1]:\r\n```\r\n\r\nNote how the choices are repeated twice in the last line. This is because the [Click API](https://click.palletsprojects.com/en/7.x/api/) has been updated to 7.0 and automatically shows the choices to the user in parentheses. This is redundant. \r\n\r\n### Solution\r\nThe text passed to the `click.prompt` function should be changed to set `show_choices = False` or it should be changed to not show the choices and let the Click API do so instead.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.6.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.rst', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'future>=0.15.2',\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=5.0',\n 'whichcraft>=0.4.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n 'requests>=2.18.0',\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development',\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom collections import OrderedDict\nimport json\n\nimport click\nfrom past.builtins import basestring\n\nfrom future.utils import iteritems\n\nfrom jinja2.exceptions import UndefinedError\n\nfrom .exceptions import UndefinedVariableInTemplate\nfrom .environment import StrictEnvironment\n\n\ndef read_user_variable(var_name, default_value):\n \"\"\"Prompt the user for the given variable and return the entered value\n or the given default.\n\n :param str var_name: Variable of the context to query the user\n :param default_value: Value that will be returned if no input happens\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n return click.prompt(var_name, default=default_value)\n\n\ndef read_user_yes_no(question, default_value):\n \"\"\"Prompt the user to reply with 'yes' or 'no' (or equivalent values).\n\n Note:\n Possible choices are 'true', '1', 'yes', 'y' or 'false', '0', 'no', 'n'\n\n :param str question: Question to the user\n :param default_value: Value that will be returned if no input happens\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n return click.prompt(\n question,\n default=default_value,\n type=click.BOOL\n )\n\n\ndef read_repo_password(question):\n \"\"\"Prompt the user to enter a password\n\n :param str question: Question to the user\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n return click.prompt(question, hide_input=True)\n\n\ndef read_user_choice(var_name, options):\n \"\"\"Prompt the user to choose from several options for the given variable.\n\n The first item will be returned if no input happens.\n\n :param str var_name: Variable as specified in the context\n :param list options: Sequence of options that are available to select from\n :return: Exactly one item of ``options`` that has been chosen by the user\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n if not isinstance(options, list):\n raise TypeError\n\n if not options:\n raise 
ValueError\n\n choice_map = OrderedDict(\n (u'{}'.format(i), value) for i, value in enumerate(options, 1)\n )\n choices = choice_map.keys()\n default = u'1'\n\n choice_lines = [u'{} - {}'.format(*c) for c in choice_map.items()]\n prompt = u'\\n'.join((\n u'Select {}:'.format(var_name),\n u'\\n'.join(choice_lines),\n u'Choose from {}'.format(u', '.join(choices))\n ))\n\n user_choice = click.prompt(\n prompt, type=click.Choice(choices), default=default\n )\n return choice_map[user_choice]\n\n\ndef process_json(user_value):\n try:\n user_dict = json.loads(\n user_value,\n object_pairs_hook=OrderedDict,\n )\n except Exception:\n # Leave it up to click to ask the user again\n raise click.UsageError('Unable to decode to JSON.')\n\n if not isinstance(user_dict, dict):\n # Leave it up to click to ask the user again\n raise click.UsageError('Requires JSON dict.')\n\n return user_dict\n\n\ndef read_user_dict(var_name, default_value):\n \"\"\"Prompt the user to provide a dictionary of data.\n\n :param str var_name: Variable as specified in the context\n :param default_value: Value that will be returned if no input is provided\n :return: A Python dictionary to use in the context.\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n if not isinstance(default_value, dict):\n raise TypeError\n\n default_display = 'default'\n\n user_value = click.prompt(\n var_name,\n default=default_display,\n type=click.STRING,\n value_proc=process_json,\n )\n\n if user_value == default_display:\n # Return the given default w/o any processing\n return default_value\n return user_value\n\n\ndef render_variable(env, raw, cookiecutter_dict):\n \"\"\"Inside the prompting taken from the cookiecutter.json file, this renders\n the next variable. For example, if a project_name is \"Peanut Butter\n Cookie\", the repo_name could be be rendered with:\n\n `{{ cookiecutter.project_name.replace(\" \", \"_\") }}`.\n\n This is then presented to the user as the default.\n\n :param Environment env: A Jinja2 Environment object.\n :param str raw: The next value to be prompted for by the user.\n :param dict cookiecutter_dict: The current context as it's gradually\n being populated with variables.\n :return: The rendered value for the default variable.\n \"\"\"\n if raw is None:\n return None\n elif isinstance(raw, dict):\n return {\n render_variable(env, k, cookiecutter_dict):\n render_variable(env, v, cookiecutter_dict)\n for k, v in raw.items()\n }\n elif isinstance(raw, list):\n return [\n render_variable(env, v, cookiecutter_dict)\n for v in raw\n ]\n elif not isinstance(raw, basestring):\n raw = str(raw)\n\n template = env.from_string(raw)\n\n rendered_template = template.render(cookiecutter=cookiecutter_dict)\n return rendered_template\n\n\ndef prompt_choice_for_config(cookiecutter_dict, env, key, options, no_input):\n \"\"\"Prompt the user which option to choose from the given. 
Each of the\n possible choices is rendered beforehand.\n \"\"\"\n rendered_options = [\n render_variable(env, raw, cookiecutter_dict) for raw in options\n ]\n\n if no_input:\n return rendered_options[0]\n return read_user_choice(key, rendered_options)\n\n\ndef prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n\n :param no_input: Prompt the user at command line for manual configuration?\n \"\"\"\n cookiecutter_dict = OrderedDict([])\n env = StrictEnvironment(context=context)\n\n # First pass: Handle simple and raw variables, plus choices.\n # These must be done first because the dictionaries keys and\n # values might refer to them.\n for key, raw in iteritems(context[u'cookiecutter']):\n if key.startswith(u'_'):\n cookiecutter_dict[key] = raw\n continue\n\n try:\n if isinstance(raw, list):\n # We are dealing with a choice variable\n val = prompt_choice_for_config(\n cookiecutter_dict, env, key, raw, no_input\n )\n cookiecutter_dict[key] = val\n elif not isinstance(raw, dict):\n # We are dealing with a regular variable\n val = render_variable(env, raw, cookiecutter_dict)\n\n if not no_input:\n val = read_user_variable(key, val)\n\n cookiecutter_dict[key] = val\n except UndefinedError as err:\n msg = \"Unable to render variable '{}'\".format(key)\n raise UndefinedVariableInTemplate(msg, err, context)\n\n # Second pass; handle the dictionaries.\n for key, raw in iteritems(context[u'cookiecutter']):\n\n try:\n if isinstance(raw, dict):\n # We are dealing with a dict variable\n val = render_variable(env, raw, cookiecutter_dict)\n\n if not no_input:\n val = read_user_dict(key, val)\n\n cookiecutter_dict[key] = val\n except UndefinedError as err:\n msg = \"Unable to render variable '{}'\".format(key)\n raise UndefinedVariableInTemplate(msg, err, context)\n\n return cookiecutter_dict\n", "path": "cookiecutter/prompt.py"}]} | 3,958 | 222 |
gh_patches_debug_3837 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4338 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
[BUG]: wrong KL_approx, the order of the two distributions is handled incorrectly
compute_approx_kl for `NaiveExperienceMaker` may be incorrect.
As mentioned in [Approximating KL Divergence](http://joschu.net/blog/kl-approx.html),
$$ KL[q,p] = \mathbb{E}_{x\sim q}[\log\frac{q(x)}{p(x)}] $$
let
$$ r = \frac{p(x)}{q(x)} $$
note that, x is sample from distribution q.
Then
$$ KL_{approx}[q,p] = \mathbb{E}_{x\sim q}[-\log(r) + (r-1) ] $$
---
In the paper [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155), the objective for the actor is (i.e. the reward of the experience, ignoring loss_ptx):
<img width="756" alt="image" src="https://user-images.githubusercontent.com/22851737/236610916-2f068c34-1508-438f-bd16-6fe6ed491e8c.png">
<img width="795" alt="image" src="https://user-images.githubusercontent.com/22851737/236611424-17600f6e-7aca-4bdf-95bd-2ce4035bcd3a.png">
So when computing the KL, samples are drawn from the actor model, i.e. $\pi^{RL}_\phi$, instead of from $\pi^{SFT}$.
The KL term in the objective should therefore be $KL[\pi^{RL}, \pi^{SFT}] = KL[q,p]$, and the $r$ in KL_approx should be $\frac{\pi^{SFT}(x)}{\pi^{RL}_\phi(x)}$.
---
However, in `coati.models.utils.compute_approx_kl` we currently have
``` python
log_ratio = log_probs - log_probs_base
```
and log_probs and log_probs_base correspond to actor_model and sft_model respectively.
This should be modified to
```python
log_ratio = log_probs_base - log_probs
```
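For illustration, here is a minimal sketch of the corrected helper (hypothetical code: it mirrors `compute_approx_kl` but inlines a simplified masked mean):
```python
import torch
from typing import Optional

def compute_approx_kl_fixed(log_probs: torch.Tensor,
                            log_probs_base: torch.Tensor,
                            action_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    # x is sampled from the actor policy, so r = pi_SFT(x) / pi_RL(x)
    # and log r = log_probs_base - log_probs
    log_ratio = log_probs_base - log_probs
    # Schulman's k3 estimator: (r - 1) - log r, which is always non-negative
    approx_kl = (log_ratio.exp() - 1) - log_ratio
    if action_mask is not None:
        return (approx_kl * action_mask).sum(dim=1) / (action_mask.sum(dim=1) + 1e-8)
    return approx_kl.mean(dim=1)
```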
</issue>
<code>
[start of applications/Chat/coati/models/utils.py]
1 from typing import Optional, Union
2
3 import loralib as lora
4 import torch
5 import torch.nn as nn
6 import torch.nn.functional as F
7
8
9 def compute_approx_kl(log_probs: torch.Tensor,
10 log_probs_base: torch.Tensor,
11 action_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
12 """
13 Compute the approximate KL divergence between two distributions.
14 Schulman blog: http://joschu.net/blog/kl-approx.html
15
16 Args:
17 log_probs: Log probabilities of the new distribution.
18 log_probs_base: Log probabilities of the base distribution.
19 action_mask: Mask for actions.
20 """
21
22 log_ratio = log_probs - log_probs_base
23 approx_kl = (log_ratio.exp() - 1) - log_ratio
24 if action_mask is not None:
25 approx_kl = masked_mean(approx_kl, action_mask, dim=1)
26 return approx_kl
27 approx_kl = approx_kl.mean(dim=1)
28 return approx_kl
29
30
31 def compute_reward(r: Union[torch.Tensor, float],
32 kl_coef: float,
33 log_probs: torch.Tensor,
34 log_probs_base: torch.Tensor,
35 action_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
36 if kl_coef <= 0.0:
37 return r
38 kl = compute_approx_kl(log_probs, log_probs_base, action_mask=action_mask)
39 reward = r - kl_coef * kl
40 return reward
41
42
43 def log_probs_from_logits(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
44 log_probs = F.log_softmax(logits, dim=-1)
45 log_probs_labels = log_probs.gather(dim=-1, index=labels.unsqueeze(-1))
46 return log_probs_labels.squeeze(-1)
47
48
49 def calc_action_log_probs(output: torch.Tensor, sequences: torch.LongTensor, num_actions: int) -> torch.Tensor:
50 """Calculate action log probs.
51
52 Args:
53 output (torch.Tensor): Output tensor of Actor.forward.
54 sequences (torch.LongTensor): Input sequences.
55 num_actions (int): Number of actions.
56
57 Returns:
58 torch.Tensor: Action log probs.
59 """
60 logits = output['logits']
61 log_probs = log_probs_from_logits(logits[:, :-1, :], sequences[:, 1:])
62 return log_probs[:, -num_actions:]
63
64
65 def masked_mean(tensor: torch.Tensor, mask: torch.Tensor, dim: int = 1) -> torch.Tensor:
66 tensor = tensor * mask
67 tensor = tensor.sum(dim=dim)
68 mask_sum = mask.sum(dim=dim)
69 mean = tensor / (mask_sum + 1e-8)
70 return mean
71
72
73 def masked_normalize(tensor: torch.Tensor, mask: torch.Tensor, dim: int = 1, eps: float = 1e-8) -> torch.Tensor:
74 tensor = tensor * mask
75 mean = masked_mean(tensor, mask, dim=dim)
76 mean_centered = tensor - mean
77 var = masked_mean(mean_centered**2, mask, dim=dim)
78 return mean_centered * var.clamp(min=eps).rsqrt()
79
80
81 def normalize(tensor: torch.Tensor, dim: int = 0, eps: float = 1e-8) -> torch.Tensor:
82 mean = tensor.mean(dim)
83 mean_centered = tensor - mean
84 var = (mean_centered**2).mean(dim)
85 norm = mean_centered * var.clamp(min=eps).rsqrt()
86 return norm
87
88
89 def convert_to_lora(model: nn.Module,
90 input_size: int,
91 output_size: int,
92 lora_rank: int = 16,
93 lora_alpha: int = 1,
94 lora_dropout: float = 0.,
95 fan_in_fan_out: bool = False,
96 merge_weights: bool = True):
97 if lora_rank > min(input_size, output_size):
98 raise ValueError(f"LoRA rank {lora_rank} must be less or equal than {min(input_size, output_size)}")
99
100 for name, module in model.named_modules():
101 if isinstance(module, nn.Linear):
102 module._modules[name] = lora.Linear(input_size,
103 output_size,
104 r=lora_rank,
105 lora_alpha=lora_alpha,
106 lora_dropout=lora_dropout,
107 fan_in_fan_out=fan_in_fan_out,
108 merge_weights=merge_weights)
109
[end of applications/Chat/coati/models/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/applications/Chat/coati/models/utils.py b/applications/Chat/coati/models/utils.py
--- a/applications/Chat/coati/models/utils.py
+++ b/applications/Chat/coati/models/utils.py
@@ -19,7 +19,7 @@
action_mask: Mask for actions.
"""
- log_ratio = log_probs - log_probs_base
+ log_ratio = log_probs_base - log_probs
approx_kl = (log_ratio.exp() - 1) - log_ratio
if action_mask is not None:
approx_kl = masked_mean(approx_kl, action_mask, dim=1)
| {"golden_diff": "diff --git a/applications/Chat/coati/models/utils.py b/applications/Chat/coati/models/utils.py\n--- a/applications/Chat/coati/models/utils.py\n+++ b/applications/Chat/coati/models/utils.py\n@@ -19,7 +19,7 @@\n action_mask: Mask for actions.\n \"\"\"\n \n- log_ratio = log_probs - log_probs_base\n+ log_ratio = log_probs_base - log_probs\n approx_kl = (log_ratio.exp() - 1) - log_ratio\n if action_mask is not None:\n approx_kl = masked_mean(approx_kl, action_mask, dim=1)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[BUG]: wrong in KL_approx , the order of two distribution is handled incorrectly\ncompute_approx_kl for `NaiveExperienceMaker` maybe incorrect. \r\n\r\nAs motion in [Approximating KL Divergence](http://joschu.net/blog/kl-approx.html)\r\n\r\n \r\n$$ KL[q,p] = \\mathbb{E}_{x\\sim q}[\\log\\frac{q(x)}{p(x)}] $$\r\n\r\nlet \r\n\r\n$$ r = \\frac{p(x)}{q(x)} $$\r\n\r\nnote that, x is sample from distribution q. \r\n\r\nThen \r\n\r\n$$ KL_{approx}[q,p] = \\mathbb{E}_{x\\sim q}[-\\log(r) + (r-1) ] $$\r\n\r\n---\r\n \r\nIn paper [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155), object for actor , (e.i. reward of experience , ignore loss_ptx) \r\n\r\n<img width=\"756\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22851737/236610916-2f068c34-1508-438f-bd16-6fe6ed491e8c.png\">\r\n\r\n\r\n<img width=\"795\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22851737/236611424-17600f6e-7aca-4bdf-95bd-2ce4035bcd3a.png\">\r\n\r\nSo for computing KL, samples are sampled from actor model e.i $\\pi^{RL}_\\phi$, instead of $\\pi^{SFT}$\r\n\r\n KL in the object should be $KL[\\pi^{RL}, \\pi^{SFT}] =KL[q,p]$ , and $r$ of KL_approx should be $\\frac{\\pi^{SFT}(x)}{\\pi^{RL}_\\phi(x)}$\r\n\r\n--- \r\n\r\nWhile on the `coati.models.utils.compute_approx_kl`\r\n\r\n``` python \r\n log_ratio = log_probs - log_probs_base\r\n```\r\n\r\nand log_probs and log_probs_base correspond to actor_model and sft_model respectively.\r\nThis should be modify to \r\n\r\n```python \r\n log_ratio = log_probs_base - log_probs \r\n```\r\n\r\n\n", "before_files": [{"content": "from typing import Optional, Union\n\nimport loralib as lora\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef compute_approx_kl(log_probs: torch.Tensor,\n log_probs_base: torch.Tensor,\n action_mask: Optional[torch.Tensor] = None) -> torch.Tensor:\n \"\"\"\n Compute the approximate KL divergence between two distributions.\n Schulman blog: http://joschu.net/blog/kl-approx.html\n\n Args:\n log_probs: Log probabilities of the new distribution.\n log_probs_base: Log probabilities of the base distribution.\n action_mask: Mask for actions.\n \"\"\"\n\n log_ratio = log_probs - log_probs_base\n approx_kl = (log_ratio.exp() - 1) - log_ratio\n if action_mask is not None:\n approx_kl = masked_mean(approx_kl, action_mask, dim=1)\n return approx_kl\n approx_kl = approx_kl.mean(dim=1)\n return approx_kl\n\n\ndef compute_reward(r: Union[torch.Tensor, float],\n kl_coef: float,\n log_probs: torch.Tensor,\n log_probs_base: torch.Tensor,\n action_mask: Optional[torch.Tensor] = None) -> torch.Tensor:\n if kl_coef <= 0.0:\n return r\n kl = compute_approx_kl(log_probs, log_probs_base, action_mask=action_mask)\n reward = r - kl_coef * kl\n return reward\n\n\ndef log_probs_from_logits(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:\n log_probs = F.log_softmax(logits, dim=-1)\n 
log_probs_labels = log_probs.gather(dim=-1, index=labels.unsqueeze(-1))\n return log_probs_labels.squeeze(-1)\n\n\ndef calc_action_log_probs(output: torch.Tensor, sequences: torch.LongTensor, num_actions: int) -> torch.Tensor:\n \"\"\"Calculate action log probs.\n\n Args:\n output (torch.Tensor): Output tensor of Actor.forward.\n sequences (torch.LongTensor): Input sequences.\n num_actions (int): Number of actions.\n\n Returns:\n torch.Tensor: Action log probs.\n \"\"\"\n logits = output['logits']\n log_probs = log_probs_from_logits(logits[:, :-1, :], sequences[:, 1:])\n return log_probs[:, -num_actions:]\n\n\ndef masked_mean(tensor: torch.Tensor, mask: torch.Tensor, dim: int = 1) -> torch.Tensor:\n tensor = tensor * mask\n tensor = tensor.sum(dim=dim)\n mask_sum = mask.sum(dim=dim)\n mean = tensor / (mask_sum + 1e-8)\n return mean\n\n\ndef masked_normalize(tensor: torch.Tensor, mask: torch.Tensor, dim: int = 1, eps: float = 1e-8) -> torch.Tensor:\n tensor = tensor * mask\n mean = masked_mean(tensor, mask, dim=dim)\n mean_centered = tensor - mean\n var = masked_mean(mean_centered**2, mask, dim=dim)\n return mean_centered * var.clamp(min=eps).rsqrt()\n\n\ndef normalize(tensor: torch.Tensor, dim: int = 0, eps: float = 1e-8) -> torch.Tensor:\n mean = tensor.mean(dim)\n mean_centered = tensor - mean\n var = (mean_centered**2).mean(dim)\n norm = mean_centered * var.clamp(min=eps).rsqrt()\n return norm\n\n\ndef convert_to_lora(model: nn.Module,\n input_size: int,\n output_size: int,\n lora_rank: int = 16,\n lora_alpha: int = 1,\n lora_dropout: float = 0.,\n fan_in_fan_out: bool = False,\n merge_weights: bool = True):\n if lora_rank > min(input_size, output_size):\n raise ValueError(f\"LoRA rank {lora_rank} must be less or equal than {min(input_size, output_size)}\")\n\n for name, module in model.named_modules():\n if isinstance(module, nn.Linear):\n module._modules[name] = lora.Linear(input_size,\n output_size,\n r=lora_rank,\n lora_alpha=lora_alpha,\n lora_dropout=lora_dropout,\n fan_in_fan_out=fan_in_fan_out,\n merge_weights=merge_weights)\n", "path": "applications/Chat/coati/models/utils.py"}]} | 2,233 | 140 |
gh_patches_debug_27561 | rasdani/github-patches | git_diff | huggingface__transformers-11746 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384)
the command to reproduce:
cd huggingface-transformers/examples/pytorch/question-answering
python -m torch.distributed.launch --nproc_per_node=8 ./run_qa.py \
--model_name_or_path roberta-large \
--dataset_name squad \
--do_train --do_eval \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 256 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir test_result2/$trials --overwrite_output_dir \
--logging_dir test_result2/$trials/tensorboard --logging_first_step --logging_steps 50 \
--fp16
I tried adding "--max_eval_samples 10240", which avoids the error, but the evaluation result is then quite low (exact_match = 4.9414, f1 = 8.9784). When I ran with 1 GPU, the above command succeeded (exact_match = 88.5336, f1 = 94.3266).
the full error is "File "./transformers/src/transformers/trainer_pt_utils.py", line 410, in _nested_set_tensors
i * slice_len : (i + 1) * slice_len
i * slice_len : (i + 1) * slice_len
ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384)"
</issue>
<code>
[start of examples/pytorch/question-answering/trainer_qa.py]
1 # coding=utf-8
2 # Copyright 2020 The HuggingFace Team All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """
16 A subclass of `Trainer` specific to Question-Answering tasks
17 """
18
19 from transformers import Trainer, is_torch_tpu_available
20 from transformers.trainer_utils import PredictionOutput
21
22
23 if is_torch_tpu_available():
24 import torch_xla.core.xla_model as xm
25 import torch_xla.debug.metrics as met
26
27
28 class QuestionAnsweringTrainer(Trainer):
29 def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs):
30 super().__init__(*args, **kwargs)
31 self.eval_examples = eval_examples
32 self.post_process_function = post_process_function
33
34 def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None):
35 eval_dataset = self.eval_dataset if eval_dataset is None else eval_dataset
36 eval_dataloader = self.get_eval_dataloader(eval_dataset)
37 eval_examples = self.eval_examples if eval_examples is None else eval_examples
38
39 # Temporarily disable metric computation, we will do it in the loop here.
40 compute_metrics = self.compute_metrics
41 self.compute_metrics = None
42 try:
43 output = self.prediction_loop(
44 eval_dataloader,
45 description="Evaluation",
46 # No point gathering the predictions if there are no metrics, otherwise we defer to
47 # self.args.prediction_loss_only
48 prediction_loss_only=True if compute_metrics is None else None,
49 ignore_keys=ignore_keys,
50 )
51 finally:
52 self.compute_metrics = compute_metrics
53
54 if self.post_process_function is not None and self.compute_metrics is not None:
55 eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions)
56 metrics = self.compute_metrics(eval_preds)
57
58 self.log(metrics)
59 else:
60 metrics = {}
61
62 if self.args.tpu_metrics_debug or self.args.debug:
63 # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)
64 xm.master_print(met.metrics_report())
65
66 self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, metrics)
67 return metrics
68
69 def predict(self, predict_dataset, predict_examples, ignore_keys=None):
70 predict_dataloader = self.get_test_dataloader(predict_dataset)
71
72 # Temporarily disable metric computation, we will do it in the loop here.
73 compute_metrics = self.compute_metrics
74 self.compute_metrics = None
75 try:
76 output = self.prediction_loop(
77 predict_dataloader,
78 description="Prediction",
79 # No point gathering the predictions if there are no metrics, otherwise we defer to
80 # self.args.prediction_loss_only
81 prediction_loss_only=True if compute_metrics is None else None,
82 ignore_keys=ignore_keys,
83 )
84 finally:
85 self.compute_metrics = compute_metrics
86
87 if self.post_process_function is None or self.compute_metrics is None:
88 return output
89
90 predictions = self.post_process_function(predict_examples, predict_dataset, output.predictions, "predict")
91 metrics = self.compute_metrics(predictions)
92
93 return PredictionOutput(predictions=predictions.predictions, label_ids=predictions.label_ids, metrics=metrics)
94
[end of examples/pytorch/question-answering/trainer_qa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/pytorch/question-answering/trainer_qa.py b/examples/pytorch/question-answering/trainer_qa.py
--- a/examples/pytorch/question-answering/trainer_qa.py
+++ b/examples/pytorch/question-answering/trainer_qa.py
@@ -39,8 +39,9 @@
# Temporarily disable metric computation, we will do it in the loop here.
compute_metrics = self.compute_metrics
self.compute_metrics = None
+ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
try:
- output = self.prediction_loop(
+ output = eval_loop(
eval_dataloader,
description="Evaluation",
# No point gathering the predictions if there are no metrics, otherwise we defer to
@@ -72,8 +73,9 @@
# Temporarily disable metric computation, we will do it in the loop here.
compute_metrics = self.compute_metrics
self.compute_metrics = None
+ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
try:
- output = self.prediction_loop(
+ output = eval_loop(
predict_dataloader,
description="Prediction",
# No point gathering the predictions if there are no metrics, otherwise we defer to
| {"golden_diff": "diff --git a/examples/pytorch/question-answering/trainer_qa.py b/examples/pytorch/question-answering/trainer_qa.py\n--- a/examples/pytorch/question-answering/trainer_qa.py\n+++ b/examples/pytorch/question-answering/trainer_qa.py\n@@ -39,8 +39,9 @@\n # Temporarily disable metric computation, we will do it in the loop here.\n compute_metrics = self.compute_metrics\n self.compute_metrics = None\n+ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop\n try:\n- output = self.prediction_loop(\n+ output = eval_loop(\n eval_dataloader,\n description=\"Evaluation\",\n # No point gathering the predictions if there are no metrics, otherwise we defer to\n@@ -72,8 +73,9 @@\n # Temporarily disable metric computation, we will do it in the loop here.\n compute_metrics = self.compute_metrics\n self.compute_metrics = None\n+ eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop\n try:\n- output = self.prediction_loop(\n+ output = eval_loop(\n predict_dataloader,\n description=\"Prediction\",\n # No point gathering the predictions if there are no metrics, otherwise we defer to\n", "issue": "ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384)\nthe command to reproduce:\r\ncd huggingface-transformers/examples/pytorch/question-answering\r\npython -m torch.distributed.launch --nproc_per_node=8 ./run_qa.py \\\r\n\t\t --model_name_or_path roberta-large \\\r\n\t\t --dataset_name squad \\\r\n\t\t --do_train --do_eval \\\r\n\t\t --per_device_train_batch_size 16 \\\r\n\t\t --per_device_eval_batch_size 256 \\\r\n\t\t --learning_rate 3e-5 \\\r\n\t\t --num_train_epochs 2 \\\r\n\t\t --max_seq_length 384 \\\r\n\t\t --doc_stride 128 \\\r\n\t\t --output_dir test_result2/$trials --overwrite_output_dir \\\r\n\t\t --logging_dir test_result2/$trials/tensorboard --logging_first_step --logging_steps 50 \\\r\n --fp16\r\n\r\n\r\n\r\ni tried add \"--max_eval_samples 10240\", this will fix the error, while the AUC result is quite low(exact_match = 4.9414, f1 = 8.9784). 
and when i ran with 1gpu, the above command can succeed(exact_match = 88.5336, f1 = 94.3266)\r\n\r\n\r\nthe full error is \"File \"./transformers/src/transformers/trainer_pt_utils.py\", line 410, in _nested_set_tensors\r\n i * slice_len : (i + 1) * slice_len\r\n i * slice_len : (i + 1) * slice_len\r\nValueError: could not broadcast input array from shape (2816,384) into shape (2698,384)\"\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2020 The HuggingFace Team All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nA subclass of `Trainer` specific to Question-Answering tasks\n\"\"\"\n\nfrom transformers import Trainer, is_torch_tpu_available\nfrom transformers.trainer_utils import PredictionOutput\n\n\nif is_torch_tpu_available():\n import torch_xla.core.xla_model as xm\n import torch_xla.debug.metrics as met\n\n\nclass QuestionAnsweringTrainer(Trainer):\n def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.eval_examples = eval_examples\n self.post_process_function = post_process_function\n\n def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None):\n eval_dataset = self.eval_dataset if eval_dataset is None else eval_dataset\n eval_dataloader = self.get_eval_dataloader(eval_dataset)\n eval_examples = self.eval_examples if eval_examples is None else eval_examples\n\n # Temporarily disable metric computation, we will do it in the loop here.\n compute_metrics = self.compute_metrics\n self.compute_metrics = None\n try:\n output = self.prediction_loop(\n eval_dataloader,\n description=\"Evaluation\",\n # No point gathering the predictions if there are no metrics, otherwise we defer to\n # self.args.prediction_loss_only\n prediction_loss_only=True if compute_metrics is None else None,\n ignore_keys=ignore_keys,\n )\n finally:\n self.compute_metrics = compute_metrics\n\n if self.post_process_function is not None and self.compute_metrics is not None:\n eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions)\n metrics = self.compute_metrics(eval_preds)\n\n self.log(metrics)\n else:\n metrics = {}\n\n if self.args.tpu_metrics_debug or self.args.debug:\n # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)\n xm.master_print(met.metrics_report())\n\n self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, metrics)\n return metrics\n\n def predict(self, predict_dataset, predict_examples, ignore_keys=None):\n predict_dataloader = self.get_test_dataloader(predict_dataset)\n\n # Temporarily disable metric computation, we will do it in the loop here.\n compute_metrics = self.compute_metrics\n self.compute_metrics = None\n try:\n output = self.prediction_loop(\n predict_dataloader,\n description=\"Prediction\",\n # No point gathering the predictions if there are no metrics, otherwise we defer to\n # self.args.prediction_loss_only\n prediction_loss_only=True if compute_metrics is 
None else None,\n ignore_keys=ignore_keys,\n )\n finally:\n self.compute_metrics = compute_metrics\n\n if self.post_process_function is None or self.compute_metrics is None:\n return output\n\n predictions = self.post_process_function(predict_examples, predict_dataset, output.predictions, \"predict\")\n metrics = self.compute_metrics(predictions)\n\n return PredictionOutput(predictions=predictions.predictions, label_ids=predictions.label_ids, metrics=metrics)\n", "path": "examples/pytorch/question-answering/trainer_qa.py"}]} | 1,919 | 290 |
gh_patches_debug_16666 | rasdani/github-patches | git_diff | SeldonIO__MLServer-613 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Compress requests / responses
Leverage `gzip` to compress requests / responses.
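A minimal sketch of one possible direction is shown below, using the `GZipMiddleware` that ships with FastAPI/Starlette; the `minimum_size` threshold is an assumption, and transparently decompressing gzip-encoded *request* bodies would still need additional middleware:
```python
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware

app = FastAPI()
# Compress responses above ~1 KB for clients that send "Accept-Encoding: gzip"
app.add_middleware(GZipMiddleware, minimum_size=1000)
```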
</issue>
<code>
[start of mlserver/rest/app.py]
1 from typing import Callable
2 from fastapi import FastAPI
3 from fastapi.responses import Response as FastAPIResponse
4 from fastapi.routing import APIRoute as FastAPIRoute
5 from fastapi.middleware.cors import CORSMiddleware
6 from starlette_exporter import PrometheusMiddleware
7
8 from .endpoints import Endpoints, ModelRepositoryEndpoints
9 from .requests import Request
10 from .responses import Response
11 from .errors import _EXCEPTION_HANDLERS
12
13 from ..settings import Settings
14 from ..handlers import DataPlane, ModelRepositoryHandlers
15
16
17 class APIRoute(FastAPIRoute):
18 """
19 Custom route to use our own Request handler.
20 """
21
22 def get_route_handler(self) -> Callable:
23 original_route_handler = super().get_route_handler()
24
25 async def custom_route_handler(request: Request) -> FastAPIResponse:
26 request = Request(request.scope, request.receive)
27 return await original_route_handler(request)
28
29 return custom_route_handler
30
31
32 def create_app(
33 settings: Settings,
34 data_plane: DataPlane,
35 model_repository_handlers: ModelRepositoryHandlers,
36 ) -> FastAPI:
37 endpoints = Endpoints(data_plane)
38 model_repository_endpoints = ModelRepositoryEndpoints(model_repository_handlers)
39
40 routes = [
41 # Model ready
42 APIRoute(
43 "/v2/models/{model_name}/ready",
44 endpoints.model_ready,
45 ),
46 APIRoute(
47 "/v2/models/{model_name}/versions/{model_version}/ready",
48 endpoints.model_ready,
49 ),
50 # Model infer
51 APIRoute(
52 "/v2/models/{model_name}/infer",
53 endpoints.infer,
54 methods=["POST"],
55 ),
56 APIRoute(
57 "/v2/models/{model_name}/versions/{model_version}/infer",
58 endpoints.infer,
59 methods=["POST"],
60 ),
61 # Model metadata
62 APIRoute(
63 "/v2/models/{model_name}",
64 endpoints.model_metadata,
65 ),
66 APIRoute(
67 "/v2/models/{model_name}/versions/{model_version}",
68 endpoints.model_metadata,
69 ),
70 # Liveness and readiness
71 APIRoute("/v2/health/live", endpoints.live),
72 APIRoute("/v2/health/ready", endpoints.ready),
73 # Server metadata
74 APIRoute(
75 "/v2",
76 endpoints.metadata,
77 ),
78 ]
79
80 routes += [
81 # Model Repository API
82 APIRoute(
83 "/v2/repository/index",
84 model_repository_endpoints.index,
85 methods=["POST"],
86 ),
87 APIRoute(
88 "/v2/repository/models/{model_name}/load",
89 model_repository_endpoints.load,
90 methods=["POST"],
91 ),
92 APIRoute(
93 "/v2/repository/models/{model_name}/unload",
94 model_repository_endpoints.unload,
95 methods=["POST"],
96 ),
97 ]
98
99 app = FastAPI(
100 debug=settings.debug,
101 routes=routes, # type: ignore
102 default_response_class=Response,
103 exception_handlers=_EXCEPTION_HANDLERS, # type: ignore
104 )
105
106 if settings.cors_settings is not None:
107 app.add_middleware(
108 CORSMiddleware,
109 allow_origins=settings.cors_settings.allow_origins,
110 allow_origin_regex=settings.cors_settings.allow_origin_regex,
111 allow_credentials=settings.cors_settings.allow_credentials,
112 allow_methods=settings.cors_settings.allow_methods,
113 allow_headers=settings.cors_settings.allow_headers,
114 max_age=settings.cors_settings.max_age,
115 )
116
117 if settings.metrics_endpoint:
118 app.add_middleware(
119 PrometheusMiddleware,
120 app_name="mlserver",
121 prefix="rest_server",
122 # TODO: Should we also exclude model's health endpoints?
123 skip_paths=[
124 settings.metrics_endpoint,
125 "/v2/health/live",
126 "/v2/health/ready",
127 ],
128 )
129
130 return app
131
[end of mlserver/rest/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlserver/rest/app.py b/mlserver/rest/app.py
--- a/mlserver/rest/app.py
+++ b/mlserver/rest/app.py
@@ -3,6 +3,7 @@
from fastapi.responses import Response as FastAPIResponse
from fastapi.routing import APIRoute as FastAPIRoute
from fastapi.middleware.cors import CORSMiddleware
+from fastapi.middleware.gzip import GZipMiddleware
from starlette_exporter import PrometheusMiddleware
from .endpoints import Endpoints, ModelRepositoryEndpoints
@@ -103,6 +104,7 @@
exception_handlers=_EXCEPTION_HANDLERS, # type: ignore
)
+ app.add_middleware(GZipMiddleware)
if settings.cors_settings is not None:
app.add_middleware(
CORSMiddleware,
| {"golden_diff": "diff --git a/mlserver/rest/app.py b/mlserver/rest/app.py\n--- a/mlserver/rest/app.py\n+++ b/mlserver/rest/app.py\n@@ -3,6 +3,7 @@\n from fastapi.responses import Response as FastAPIResponse\n from fastapi.routing import APIRoute as FastAPIRoute\n from fastapi.middleware.cors import CORSMiddleware\n+from fastapi.middleware.gzip import GZipMiddleware\n from starlette_exporter import PrometheusMiddleware\n \n from .endpoints import Endpoints, ModelRepositoryEndpoints\n@@ -103,6 +104,7 @@\n exception_handlers=_EXCEPTION_HANDLERS, # type: ignore\n )\n \n+ app.add_middleware(GZipMiddleware)\n if settings.cors_settings is not None:\n app.add_middleware(\n CORSMiddleware,\n", "issue": "Compress requests / responses\nLeverage `gzip` to compress requests / responses.\n", "before_files": [{"content": "from typing import Callable\nfrom fastapi import FastAPI\nfrom fastapi.responses import Response as FastAPIResponse\nfrom fastapi.routing import APIRoute as FastAPIRoute\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom starlette_exporter import PrometheusMiddleware\n\nfrom .endpoints import Endpoints, ModelRepositoryEndpoints\nfrom .requests import Request\nfrom .responses import Response\nfrom .errors import _EXCEPTION_HANDLERS\n\nfrom ..settings import Settings\nfrom ..handlers import DataPlane, ModelRepositoryHandlers\n\n\nclass APIRoute(FastAPIRoute):\n \"\"\"\n Custom route to use our own Request handler.\n \"\"\"\n\n def get_route_handler(self) -> Callable:\n original_route_handler = super().get_route_handler()\n\n async def custom_route_handler(request: Request) -> FastAPIResponse:\n request = Request(request.scope, request.receive)\n return await original_route_handler(request)\n\n return custom_route_handler\n\n\ndef create_app(\n settings: Settings,\n data_plane: DataPlane,\n model_repository_handlers: ModelRepositoryHandlers,\n) -> FastAPI:\n endpoints = Endpoints(data_plane)\n model_repository_endpoints = ModelRepositoryEndpoints(model_repository_handlers)\n\n routes = [\n # Model ready\n APIRoute(\n \"/v2/models/{model_name}/ready\",\n endpoints.model_ready,\n ),\n APIRoute(\n \"/v2/models/{model_name}/versions/{model_version}/ready\",\n endpoints.model_ready,\n ),\n # Model infer\n APIRoute(\n \"/v2/models/{model_name}/infer\",\n endpoints.infer,\n methods=[\"POST\"],\n ),\n APIRoute(\n \"/v2/models/{model_name}/versions/{model_version}/infer\",\n endpoints.infer,\n methods=[\"POST\"],\n ),\n # Model metadata\n APIRoute(\n \"/v2/models/{model_name}\",\n endpoints.model_metadata,\n ),\n APIRoute(\n \"/v2/models/{model_name}/versions/{model_version}\",\n endpoints.model_metadata,\n ),\n # Liveness and readiness\n APIRoute(\"/v2/health/live\", endpoints.live),\n APIRoute(\"/v2/health/ready\", endpoints.ready),\n # Server metadata\n APIRoute(\n \"/v2\",\n endpoints.metadata,\n ),\n ]\n\n routes += [\n # Model Repository API\n APIRoute(\n \"/v2/repository/index\",\n model_repository_endpoints.index,\n methods=[\"POST\"],\n ),\n APIRoute(\n \"/v2/repository/models/{model_name}/load\",\n model_repository_endpoints.load,\n methods=[\"POST\"],\n ),\n APIRoute(\n \"/v2/repository/models/{model_name}/unload\",\n model_repository_endpoints.unload,\n methods=[\"POST\"],\n ),\n ]\n\n app = FastAPI(\n debug=settings.debug,\n routes=routes, # type: ignore\n default_response_class=Response,\n exception_handlers=_EXCEPTION_HANDLERS, # type: ignore\n )\n\n if settings.cors_settings is not None:\n app.add_middleware(\n CORSMiddleware,\n 
allow_origins=settings.cors_settings.allow_origins,\n allow_origin_regex=settings.cors_settings.allow_origin_regex,\n allow_credentials=settings.cors_settings.allow_credentials,\n allow_methods=settings.cors_settings.allow_methods,\n allow_headers=settings.cors_settings.allow_headers,\n max_age=settings.cors_settings.max_age,\n )\n\n if settings.metrics_endpoint:\n app.add_middleware(\n PrometheusMiddleware,\n app_name=\"mlserver\",\n prefix=\"rest_server\",\n # TODO: Should we also exclude model's health endpoints?\n skip_paths=[\n settings.metrics_endpoint,\n \"/v2/health/live\",\n \"/v2/health/ready\",\n ],\n )\n\n return app\n", "path": "mlserver/rest/app.py"}]} | 1,629 | 173 |
gh_patches_debug_58116 | rasdani/github-patches | git_diff | mindee__doctr-929 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix encode_string function
### Bug description
Currently there is no check that the individual characters of the input string are actually present in the given vocabulary.
We need a check for this :)
TODO's:
- [ ] add that check to the function and raise a meaningful exception
- [ ] improve the corresponding test
discussion:
#926
### Code snippet to reproduce the bug
```python
from doctr.datasets.utils import encode_string
from doctr.datasets import VOCABS
x = encode_string(input_string='abcDÄÜ', vocab=VOCABS['english']) # Ä and Ü does not exist in vocab
# raises ValueError: substring not found
```
### Error traceback
```
Traceback (most recent call last):
File "/home/felix/Desktop/doctr/test.py", line 7, in <module>
x = encode_string(input_string='abcDÄÜ', vocab=VOCABS['english']) # Ä and Ü does not exist in vocab
File "/home/felix/Desktop/doctr/doctr/datasets/utils.py", line 75, in encode_string
return list(map(vocab.index, input_string)) # type: ignore[arg-type]
ValueError: substring not found
```
### Environment
not needed :)
### Deep Learning backend
same
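One possible shape of the fix (a hedged sketch, not necessarily the final implementation) is to catch the failed lookup and re-raise with the offending characters:
```python
from typing import List

def encode_string(input_string: str, vocab: str) -> List[int]:
    """Encode a string as vocab indices, failing loudly on unknown characters."""
    try:
        return list(map(vocab.index, input_string))
    except ValueError:
        missing = {char for char in input_string if char not in vocab}
        raise ValueError(
            f"some characters cannot be found in 'vocab': {sorted(missing)}"
        )
```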
</issue>
<code>
[start of doctr/datasets/utils.py]
1 # Copyright (C) 2021-2022, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import string
7 import unicodedata
8 from collections.abc import Sequence
9 from functools import partial
10 from pathlib import Path
11 from typing import Any, Dict, List, Optional
12 from typing import Sequence as SequenceType
13 from typing import Tuple, TypeVar, Union
14
15 import numpy as np
16 from PIL import Image
17
18 from doctr.io.image import get_img_shape
19 from doctr.utils.geometry import convert_to_relative_coords, extract_crops, extract_rcrops
20
21 from .vocabs import VOCABS
22
23 __all__ = ['translate', 'encode_string', 'decode_sequence', 'encode_sequences']
24
25 ImageTensor = TypeVar('ImageTensor')
26
27
28 def translate(
29 input_string: str,
30 vocab_name: str,
31 unknown_char: str = '■',
32 ) -> str:
33 """Translate a string input in a given vocabulary
34
35 Args:
36 input_string: input string to translate
37 vocab_name: vocabulary to use (french, latin, ...)
38 unknown_char: unknown character for non-translatable characters
39
40 Returns:
41 A string translated in a given vocab"""
42
43 if VOCABS.get(vocab_name) is None:
44 raise KeyError("output vocabulary must be in vocabs dictionnary")
45
46 translated = ''
47 for char in input_string:
48 if char not in VOCABS[vocab_name]:
49 # we need to translate char into a vocab char
50 if char in string.whitespace:
51 # remove whitespaces
52 continue
53 # normalize character if it is not in vocab
54 char = unicodedata.normalize('NFD', char).encode('ascii', 'ignore').decode('ascii')
55 if char == '' or char not in VOCABS[vocab_name]:
56 # if normalization fails or char still not in vocab, return unknown character)
57 char = unknown_char
58 translated += char
59 return translated
60
61
62 def encode_string(
63 input_string: str,
64 vocab: str,
65 ) -> List[int]:
66 """Given a predefined mapping, encode the string to a sequence of numbers
67
68 Args:
69 input_string: string to encode
70 vocab: vocabulary (string), the encoding is given by the indexing of the character sequence
71
72 Returns:
73 A list encoding the input_string"""
74
75 return list(map(vocab.index, input_string)) # type: ignore[arg-type]
76
77
78 def decode_sequence(
79 input_seq: Union[np.array, SequenceType[int]],
80 mapping: str,
81 ) -> str:
82 """Given a predefined mapping, decode the sequence of numbers to a string
83
84 Args:
85 input_seq: array to decode
86 mapping: vocabulary (string), the encoding is given by the indexing of the character sequence
87
88 Returns:
89 A string, decoded from input_seq
90 """
91
92 if not isinstance(input_seq, (Sequence, np.ndarray)):
93 raise TypeError("Invalid sequence type")
94 if isinstance(input_seq, np.ndarray) and (input_seq.dtype != np.int_ or input_seq.max() >= len(mapping)):
95 raise AssertionError("Input must be an array of int, with max less than mapping size")
96
97 return ''.join(map(mapping.__getitem__, input_seq))
98
99
100 def encode_sequences(
101 sequences: List[str],
102 vocab: str,
103 target_size: Optional[int] = None,
104 eos: int = -1,
105 sos: Optional[int] = None,
106 pad: Optional[int] = None,
107 dynamic_seq_length: bool = False,
108 **kwargs: Any,
109 ) -> np.ndarray:
110 """Encode character sequences using a given vocab as mapping
111
112 Args:
113 sequences: the list of character sequences of size N
114 vocab: the ordered vocab to use for encoding
115 target_size: maximum length of the encoded data
116 eos: encoding of End Of String
117 sos: optional encoding of Start Of String
118 pad: optional encoding for padding. In case of padding, all sequences are followed by 1 EOS then PAD
119 dynamic_seq_length: if `target_size` is specified, uses it as upper bound and enables dynamic sequence size
120
121 Returns:
122 the padded encoded data as a tensor
123 """
124
125 if 0 <= eos < len(vocab):
126 raise ValueError("argument 'eos' needs to be outside of vocab possible indices")
127
128 if not isinstance(target_size, int) or dynamic_seq_length:
129 # Maximum string length + EOS
130 max_length = max(len(w) for w in sequences) + 1
131 if isinstance(sos, int):
132 max_length += 1
133 if isinstance(pad, int):
134 max_length += 1
135 target_size = max_length if not isinstance(target_size, int) else min(max_length, target_size)
136
137 # Pad all sequences
138 if isinstance(pad, int): # pad with padding symbol
139 if 0 <= pad < len(vocab):
140 raise ValueError("argument 'pad' needs to be outside of vocab possible indices")
141 # In that case, add EOS at the end of the word before padding
142 default_symbol = pad
143 else: # pad with eos symbol
144 default_symbol = eos
145 encoded_data = np.full([len(sequences), target_size], default_symbol, dtype=np.int32)
146
147 # Encode the strings
148 for idx, seq in enumerate(map(partial(encode_string, vocab=vocab), sequences)):
149 if isinstance(pad, int): # add eos at the end of the sequence
150 seq.append(eos)
151 encoded_data[idx, :min(len(seq), target_size)] = seq[:min(len(seq), target_size)]
152
153 if isinstance(sos, int): # place sos symbol at the beginning of each sequence
154 if 0 <= sos < len(vocab):
155 raise ValueError("argument 'sos' needs to be outside of vocab possible indices")
156 encoded_data = np.roll(encoded_data, 1)
157 encoded_data[:, 0] = sos
158
159 return encoded_data
160
161
162 def convert_target_to_relative(img: ImageTensor, target: Dict[str, Any]) -> Tuple[ImageTensor, Dict[str, Any]]:
163
164 target['boxes'] = convert_to_relative_coords(target['boxes'], get_img_shape(img))
165 return img, target
166
167
168 def crop_bboxes_from_image(img_path: Union[str, Path], geoms: np.ndarray) -> List[np.ndarray]:
169 """Crop a set of bounding boxes from an image
170 Args:
171 img_path: path to the image
172 geoms: a array of polygons of shape (N, 4, 2) or of straight boxes of shape (N, 4)
173 Returns:
174 a list of cropped images
175 """
176 img = np.array(Image.open(img_path))
177 # Polygon
178 if geoms.ndim == 3 and geoms.shape[1:] == (4, 2):
179 return extract_rcrops(img, geoms.astype(dtype=int))
180 if geoms.ndim == 2 and geoms.shape[1] == 4:
181 return extract_crops(img, geoms.astype(dtype=int))
182 raise ValueError("Invalid geometry format")
183
[end of doctr/datasets/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doctr/datasets/utils.py b/doctr/datasets/utils.py
--- a/doctr/datasets/utils.py
+++ b/doctr/datasets/utils.py
@@ -72,7 +72,10 @@
Returns:
A list encoding the input_string"""
- return list(map(vocab.index, input_string)) # type: ignore[arg-type]
+ try:
+ return list(map(vocab.index, input_string)) # type: ignore[arg-type]
+ except ValueError:
+ raise ValueError("some characters cannot be found in 'vocab'")
def decode_sequence(
| {"golden_diff": "diff --git a/doctr/datasets/utils.py b/doctr/datasets/utils.py\n--- a/doctr/datasets/utils.py\n+++ b/doctr/datasets/utils.py\n@@ -72,7 +72,10 @@\n Returns:\n A list encoding the input_string\"\"\"\n \n- return list(map(vocab.index, input_string)) # type: ignore[arg-type]\n+ try:\n+ return list(map(vocab.index, input_string)) # type: ignore[arg-type]\n+ except ValueError:\n+ raise ValueError(\"some characters cannot be found in 'vocab'\")\n \n \n def decode_sequence(\n", "issue": "Fix encode_string function\n### Bug description\n\nCurrently there is no check if the single characters are also available in the given vocabulary.\r\nWe need a check for this :) \r\n\r\nTODO's:\r\n\r\n- [ ] check that in the function and throw a meaningful exception\r\n- [ ] improve the corresponding test \r\n\r\ndiscussion:\r\n#926 \n\n### Code snippet to reproduce the bug\n\n```python\r\nfrom doctr.datasets.utils import encode_string\r\nfrom doctr.datasets import VOCABS\r\n\r\nx = encode_string(input_string='abcD\u00c4\u00dc', vocab=VOCABS['english']) # \u00c4 and \u00dc does not exist in vocab\r\n# raises ValueError: substring not found\r\n```\n\n### Error traceback\n\n```\r\nTraceback (most recent call last):\r\n File \"/home/felix/Desktop/doctr/test.py\", line 7, in <module>\r\n x = encode_string(input_string='abcD\u00c4\u00dc', vocab=VOCABS['english']) # \u00c4 and \u00dc does not exist in vocab\r\n File \"/home/felix/Desktop/doctr/doctr/datasets/utils.py\", line 75, in encode_string\r\n return list(map(vocab.index, input_string)) # type: ignore[arg-type]\r\nValueError: substring not found\r\n```\n\n### Environment\n\nnot need :)\n\n### Deep Learning backend\n\nsame\n", "before_files": [{"content": "# Copyright (C) 2021-2022, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport string\nimport unicodedata\nfrom collections.abc import Sequence\nfrom functools import partial\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional\nfrom typing import Sequence as SequenceType\nfrom typing import Tuple, TypeVar, Union\n\nimport numpy as np\nfrom PIL import Image\n\nfrom doctr.io.image import get_img_shape\nfrom doctr.utils.geometry import convert_to_relative_coords, extract_crops, extract_rcrops\n\nfrom .vocabs import VOCABS\n\n__all__ = ['translate', 'encode_string', 'decode_sequence', 'encode_sequences']\n\nImageTensor = TypeVar('ImageTensor')\n\n\ndef translate(\n input_string: str,\n vocab_name: str,\n unknown_char: str = '\u25a0',\n) -> str:\n \"\"\"Translate a string input in a given vocabulary\n\n Args:\n input_string: input string to translate\n vocab_name: vocabulary to use (french, latin, ...)\n unknown_char: unknown character for non-translatable characters\n\n Returns:\n A string translated in a given vocab\"\"\"\n\n if VOCABS.get(vocab_name) is None:\n raise KeyError(\"output vocabulary must be in vocabs dictionnary\")\n\n translated = ''\n for char in input_string:\n if char not in VOCABS[vocab_name]:\n # we need to translate char into a vocab char\n if char in string.whitespace:\n # remove whitespaces\n continue\n # normalize character if it is not in vocab\n char = unicodedata.normalize('NFD', char).encode('ascii', 'ignore').decode('ascii')\n if char == '' or char not in VOCABS[vocab_name]:\n # if normalization fails or char still not in vocab, return unknown character)\n char = unknown_char\n translated += char\n return 
translated\n\n\ndef encode_string(\n input_string: str,\n vocab: str,\n) -> List[int]:\n \"\"\"Given a predefined mapping, encode the string to a sequence of numbers\n\n Args:\n input_string: string to encode\n vocab: vocabulary (string), the encoding is given by the indexing of the character sequence\n\n Returns:\n A list encoding the input_string\"\"\"\n\n return list(map(vocab.index, input_string)) # type: ignore[arg-type]\n\n\ndef decode_sequence(\n input_seq: Union[np.array, SequenceType[int]],\n mapping: str,\n) -> str:\n \"\"\"Given a predefined mapping, decode the sequence of numbers to a string\n\n Args:\n input_seq: array to decode\n mapping: vocabulary (string), the encoding is given by the indexing of the character sequence\n\n Returns:\n A string, decoded from input_seq\n \"\"\"\n\n if not isinstance(input_seq, (Sequence, np.ndarray)):\n raise TypeError(\"Invalid sequence type\")\n if isinstance(input_seq, np.ndarray) and (input_seq.dtype != np.int_ or input_seq.max() >= len(mapping)):\n raise AssertionError(\"Input must be an array of int, with max less than mapping size\")\n\n return ''.join(map(mapping.__getitem__, input_seq))\n\n\ndef encode_sequences(\n sequences: List[str],\n vocab: str,\n target_size: Optional[int] = None,\n eos: int = -1,\n sos: Optional[int] = None,\n pad: Optional[int] = None,\n dynamic_seq_length: bool = False,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Encode character sequences using a given vocab as mapping\n\n Args:\n sequences: the list of character sequences of size N\n vocab: the ordered vocab to use for encoding\n target_size: maximum length of the encoded data\n eos: encoding of End Of String\n sos: optional encoding of Start Of String\n pad: optional encoding for padding. In case of padding, all sequences are followed by 1 EOS then PAD\n dynamic_seq_length: if `target_size` is specified, uses it as upper bound and enables dynamic sequence size\n\n Returns:\n the padded encoded data as a tensor\n \"\"\"\n\n if 0 <= eos < len(vocab):\n raise ValueError(\"argument 'eos' needs to be outside of vocab possible indices\")\n\n if not isinstance(target_size, int) or dynamic_seq_length:\n # Maximum string length + EOS\n max_length = max(len(w) for w in sequences) + 1\n if isinstance(sos, int):\n max_length += 1\n if isinstance(pad, int):\n max_length += 1\n target_size = max_length if not isinstance(target_size, int) else min(max_length, target_size)\n\n # Pad all sequences\n if isinstance(pad, int): # pad with padding symbol\n if 0 <= pad < len(vocab):\n raise ValueError(\"argument 'pad' needs to be outside of vocab possible indices\")\n # In that case, add EOS at the end of the word before padding\n default_symbol = pad\n else: # pad with eos symbol\n default_symbol = eos\n encoded_data = np.full([len(sequences), target_size], default_symbol, dtype=np.int32)\n\n # Encode the strings\n for idx, seq in enumerate(map(partial(encode_string, vocab=vocab), sequences)):\n if isinstance(pad, int): # add eos at the end of the sequence\n seq.append(eos)\n encoded_data[idx, :min(len(seq), target_size)] = seq[:min(len(seq), target_size)]\n\n if isinstance(sos, int): # place sos symbol at the beginning of each sequence\n if 0 <= sos < len(vocab):\n raise ValueError(\"argument 'sos' needs to be outside of vocab possible indices\")\n encoded_data = np.roll(encoded_data, 1)\n encoded_data[:, 0] = sos\n\n return encoded_data\n\n\ndef convert_target_to_relative(img: ImageTensor, target: Dict[str, Any]) -> Tuple[ImageTensor, Dict[str, Any]]:\n\n target['boxes'] = 
convert_to_relative_coords(target['boxes'], get_img_shape(img))\n return img, target\n\n\ndef crop_bboxes_from_image(img_path: Union[str, Path], geoms: np.ndarray) -> List[np.ndarray]:\n \"\"\"Crop a set of bounding boxes from an image\n Args:\n img_path: path to the image\n geoms: a array of polygons of shape (N, 4, 2) or of straight boxes of shape (N, 4)\n Returns:\n a list of cropped images\n \"\"\"\n img = np.array(Image.open(img_path))\n # Polygon\n if geoms.ndim == 3 and geoms.shape[1:] == (4, 2):\n return extract_rcrops(img, geoms.astype(dtype=int))\n if geoms.ndim == 2 and geoms.shape[1] == 4:\n return extract_crops(img, geoms.astype(dtype=int))\n raise ValueError(\"Invalid geometry format\")\n", "path": "doctr/datasets/utils.py"}]} | 2,786 | 129 |
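Editorial note on the record above: the golden diff simply wraps the `vocab.index` lookup so that unknown characters raise a descriptive error. For quick reference, here is the patched helper as a standalone snippet (outside the doctr package), with a small usage example reproducing the failure case from the issue.

```python
from typing import List


def encode_string(input_string: str, vocab: str) -> List[int]:
    """Map each character to its index in ``vocab``; unknown characters raise a clear error."""
    try:
        return list(map(vocab.index, input_string))
    except ValueError:
        raise ValueError("some characters cannot be found in 'vocab'")


print(encode_string("abc", "abcdefD"))   # [0, 1, 2]
try:
    encode_string("abcDÄÜ", "abcdefD")   # Ä and Ü are missing from the vocab
except ValueError as exc:
    print(exc)                           # some characters cannot be found in 'vocab'
```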
gh_patches_debug_16254 | rasdani/github-patches | git_diff | pyodide__pyodide-123 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Full build path is included in package `.js` files
As @rth pointed out in #121, the full build path to the `.data` file is included in the `.js` file for each package. This is *really* a problem, since it doesn't prevent the packages from being deployed anywhere, but it is leaking information we probably don't want to and makes the builds less reproducible.
</issue>
<code>
[start of tools/buildpkg.py]
1 #!/usr/bin/env python3
2
3 """
4 Builds a Pyodide package.
5 """
6
7 import argparse
8 import hashlib
9 import os
10 from pathlib import Path
11 import shutil
12 import subprocess
13
14
15 import common
16
17
18 ROOTDIR = Path(__file__).parent.resolve()
19
20
21 def check_checksum(path, pkg):
22 """
23 Checks that a tarball matches the checksum in the package metadata.
24 """
25 checksum_keys = {'md5', 'sha256'}.intersection(pkg['source'])
26 if not checksum_keys:
27 return
28 elif len(checksum_keys) != 1:
29 raise ValueError('Only one checksum should be included in a package '
30 'setup; found {}.'.format(checksum_keys))
31 checksum_algorithm = checksum_keys.pop()
32 checksum = pkg['source'][checksum_algorithm]
33 CHUNK_SIZE = 1 << 16
34 h = getattr(hashlib, checksum_algorithm)()
35 with open(path, 'rb') as fd:
36 while True:
37 chunk = fd.read(CHUNK_SIZE)
38 h.update(chunk)
39 if len(chunk) < CHUNK_SIZE:
40 break
41 if h.hexdigest() != checksum:
42 raise ValueError("Invalid {} checksum".format(checksum_algorithm))
43
44
45 def download_and_extract(buildpath, packagedir, pkg, args):
46 tarballpath = buildpath / Path(pkg['source']['url']).name
47 if not tarballpath.is_file():
48 subprocess.run([
49 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']
50 ], check=True)
51 check_checksum(tarballpath, pkg)
52 srcpath = buildpath / packagedir
53 if not srcpath.is_dir():
54 shutil.unpack_archive(str(tarballpath), str(buildpath))
55 return srcpath
56
57
58 def patch(path, srcpath, pkg, args):
59 if (srcpath / '.patched').is_file():
60 return
61
62 # Apply all of the patches
63 orig_dir = Path.cwd()
64 pkgdir = path.parent.resolve()
65 os.chdir(srcpath)
66 try:
67 for patch in pkg['source'].get('patches', []):
68 subprocess.run([
69 'patch', '-p1', '--binary', '-i', pkgdir / patch
70 ], check=True)
71 finally:
72 os.chdir(orig_dir)
73
74 # Add any extra files
75 for src, dst in pkg['source'].get('extras', []):
76 shutil.copyfile(pkgdir / src, srcpath / dst)
77
78 with open(srcpath / '.patched', 'wb') as fd:
79 fd.write(b'\n')
80
81
82 def get_libdir(srcpath, args):
83 # Get the name of the build/lib.XXX directory that distutils wrote its
84 # output to
85 slug = subprocess.check_output([
86 str(Path(args.host) / 'bin' / 'python3'),
87 '-c',
88 'import sysconfig, sys; '
89 'print("{}-{}.{}".format('
90 'sysconfig.get_platform(), '
91 'sys.version_info[0], '
92 'sys.version_info[1]))']).decode('ascii').strip()
93 purelib = srcpath / 'build' / 'lib'
94 if purelib.is_dir():
95 libdir = purelib
96 else:
97 libdir = srcpath / 'build' / ('lib.' + slug)
98 return libdir
99
100
101 def compile(path, srcpath, pkg, args):
102 if (srcpath / '.built').is_file():
103 return
104
105 orig_dir = Path.cwd()
106 os.chdir(srcpath)
107 try:
108 subprocess.run([
109 str(Path(args.host) / 'bin' / 'python3'),
110 str(ROOTDIR / 'pywasmcross'),
111 '--cflags',
112 args.cflags + ' ' +
113 pkg.get('build', {}).get('cflags', ''),
114 '--ldflags',
115 args.ldflags + ' ' +
116 pkg.get('build', {}).get('ldflags', ''),
117 '--host', args.host,
118 '--target', args.target], check=True)
119 finally:
120 os.chdir(orig_dir)
121
122 post = pkg.get('build', {}).get('post')
123 if post is not None:
124 libdir = get_libdir(srcpath, args)
125 pkgdir = path.parent.resolve()
126 env = {
127 'BUILD': libdir,
128 'PKGDIR': pkgdir
129 }
130 subprocess.run([
131 'bash', '-c', post], env=env, check=True)
132
133 with open(srcpath / '.built', 'wb') as fd:
134 fd.write(b'\n')
135
136
137 def package_files(buildpath, srcpath, pkg, args):
138 if (buildpath / '.pacakaged').is_file():
139 return
140
141 name = pkg['package']['name']
142 libdir = get_libdir(srcpath, args)
143 subprocess.run([
144 'python',
145 Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',
146 buildpath / (name + '.data'),
147 '--preload',
148 '{}@/lib/python3.6/site-packages'.format(libdir),
149 '--js-output={}'.format(buildpath / (name + '.js')),
150 '--export-name=pyodide',
151 '--exclude', '*.wasm.pre',
152 '--exclude', '__pycache__',
153 '--use-preload-plugins'], check=True)
154 subprocess.run([
155 'uglifyjs',
156 buildpath / (name + '.js'),
157 '-o',
158 buildpath / (name + '.js')], check=True)
159
160 with open(buildpath / '.packaged', 'wb') as fd:
161 fd.write(b'\n')
162
163
164 def build_package(path, args):
165 pkg = common.parse_package(path)
166 packagedir = pkg['package']['name'] + '-' + pkg['package']['version']
167 dirpath = path.parent
168 orig_path = Path.cwd()
169 os.chdir(dirpath)
170 try:
171 buildpath = dirpath / 'build'
172 if not buildpath.is_dir():
173 os.makedirs(buildpath)
174 srcpath = download_and_extract(buildpath, packagedir, pkg, args)
175 patch(path, srcpath, pkg, args)
176 compile(path, srcpath, pkg, args)
177 package_files(buildpath, srcpath, pkg, args)
178 finally:
179 os.chdir(orig_path)
180
181
182 def parse_args():
183 parser = argparse.ArgumentParser('Build a pyodide package.')
184 parser.add_argument(
185 'package', type=str, nargs=1,
186 help="Path to meta.yaml package description")
187 parser.add_argument(
188 '--cflags', type=str, nargs='?', default=common.DEFAULTCFLAGS,
189 help='Extra compiling flags')
190 parser.add_argument(
191 '--ldflags', type=str, nargs='?', default=common.DEFAULTLDFLAGS,
192 help='Extra linking flags')
193 parser.add_argument(
194 '--host', type=str, nargs='?', default=common.HOSTPYTHON,
195 help='The path to the host Python installation')
196 parser.add_argument(
197 '--target', type=str, nargs='?', default=common.TARGETPYTHON,
198 help='The path to the target Python installation')
199 return parser.parse_args()
200
201
202 def main(args):
203 path = Path(args.package[0]).resolve()
204 build_package(path, args)
205
206
207 if __name__ == '__main__':
208 args = parse_args()
209 main(args)
210
[end of tools/buildpkg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/buildpkg.py b/tools/buildpkg.py
--- a/tools/buildpkg.py
+++ b/tools/buildpkg.py
@@ -143,14 +143,15 @@
subprocess.run([
'python',
Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',
- buildpath / (name + '.data'),
+ name + '.data',
'--preload',
'{}@/lib/python3.6/site-packages'.format(libdir),
- '--js-output={}'.format(buildpath / (name + '.js')),
+ '--js-output={}'.format(name + '.js'),
'--export-name=pyodide',
'--exclude', '*.wasm.pre',
'--exclude', '__pycache__',
- '--use-preload-plugins'], check=True)
+ '--use-preload-plugins'],
+ cwd=buildpath, check=True)
subprocess.run([
'uglifyjs',
buildpath / (name + '.js'),
| {"golden_diff": "diff --git a/tools/buildpkg.py b/tools/buildpkg.py\n--- a/tools/buildpkg.py\n+++ b/tools/buildpkg.py\n@@ -143,14 +143,15 @@\n subprocess.run([\n 'python',\n Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',\n- buildpath / (name + '.data'),\n+ name + '.data',\n '--preload',\n '{}@/lib/python3.6/site-packages'.format(libdir),\n- '--js-output={}'.format(buildpath / (name + '.js')),\n+ '--js-output={}'.format(name + '.js'),\n '--export-name=pyodide',\n '--exclude', '*.wasm.pre',\n '--exclude', '__pycache__',\n- '--use-preload-plugins'], check=True)\n+ '--use-preload-plugins'],\n+ cwd=buildpath, check=True)\n subprocess.run([\n 'uglifyjs',\n buildpath / (name + '.js'),\n", "issue": "Full build path is included in package `.js` files\nAs @rth pointed out in #121, the full build path to the `.data` file is included in the `.js` file for each package. This is *really* a problem, since it doesn't prevent the packages from being deployed anywhere, but it is leaking information we probably don't want to and makes the builds less reproducible.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nBuilds a Pyodide package.\n\"\"\"\n\nimport argparse\nimport hashlib\nimport os\nfrom pathlib import Path\nimport shutil\nimport subprocess\n\n\nimport common\n\n\nROOTDIR = Path(__file__).parent.resolve()\n\n\ndef check_checksum(path, pkg):\n \"\"\"\n Checks that a tarball matches the checksum in the package metadata.\n \"\"\"\n checksum_keys = {'md5', 'sha256'}.intersection(pkg['source'])\n if not checksum_keys:\n return\n elif len(checksum_keys) != 1:\n raise ValueError('Only one checksum should be included in a package '\n 'setup; found {}.'.format(checksum_keys))\n checksum_algorithm = checksum_keys.pop()\n checksum = pkg['source'][checksum_algorithm]\n CHUNK_SIZE = 1 << 16\n h = getattr(hashlib, checksum_algorithm)()\n with open(path, 'rb') as fd:\n while True:\n chunk = fd.read(CHUNK_SIZE)\n h.update(chunk)\n if len(chunk) < CHUNK_SIZE:\n break\n if h.hexdigest() != checksum:\n raise ValueError(\"Invalid {} checksum\".format(checksum_algorithm))\n\n\ndef download_and_extract(buildpath, packagedir, pkg, args):\n tarballpath = buildpath / Path(pkg['source']['url']).name\n if not tarballpath.is_file():\n subprocess.run([\n 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']\n ], check=True)\n check_checksum(tarballpath, pkg)\n srcpath = buildpath / packagedir\n if not srcpath.is_dir():\n shutil.unpack_archive(str(tarballpath), str(buildpath))\n return srcpath\n\n\ndef patch(path, srcpath, pkg, args):\n if (srcpath / '.patched').is_file():\n return\n\n # Apply all of the patches\n orig_dir = Path.cwd()\n pkgdir = path.parent.resolve()\n os.chdir(srcpath)\n try:\n for patch in pkg['source'].get('patches', []):\n subprocess.run([\n 'patch', '-p1', '--binary', '-i', pkgdir / patch\n ], check=True)\n finally:\n os.chdir(orig_dir)\n\n # Add any extra files\n for src, dst in pkg['source'].get('extras', []):\n shutil.copyfile(pkgdir / src, srcpath / dst)\n\n with open(srcpath / '.patched', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef get_libdir(srcpath, args):\n # Get the name of the build/lib.XXX directory that distutils wrote its\n # output to\n slug = subprocess.check_output([\n str(Path(args.host) / 'bin' / 'python3'),\n '-c',\n 'import sysconfig, sys; '\n 'print(\"{}-{}.{}\".format('\n 'sysconfig.get_platform(), '\n 'sys.version_info[0], '\n 'sys.version_info[1]))']).decode('ascii').strip()\n purelib = srcpath / 'build' / 'lib'\n if purelib.is_dir():\n libdir 
= purelib\n else:\n libdir = srcpath / 'build' / ('lib.' + slug)\n return libdir\n\n\ndef compile(path, srcpath, pkg, args):\n if (srcpath / '.built').is_file():\n return\n\n orig_dir = Path.cwd()\n os.chdir(srcpath)\n try:\n subprocess.run([\n str(Path(args.host) / 'bin' / 'python3'),\n str(ROOTDIR / 'pywasmcross'),\n '--cflags',\n args.cflags + ' ' +\n pkg.get('build', {}).get('cflags', ''),\n '--ldflags',\n args.ldflags + ' ' +\n pkg.get('build', {}).get('ldflags', ''),\n '--host', args.host,\n '--target', args.target], check=True)\n finally:\n os.chdir(orig_dir)\n\n post = pkg.get('build', {}).get('post')\n if post is not None:\n libdir = get_libdir(srcpath, args)\n pkgdir = path.parent.resolve()\n env = {\n 'BUILD': libdir,\n 'PKGDIR': pkgdir\n }\n subprocess.run([\n 'bash', '-c', post], env=env, check=True)\n\n with open(srcpath / '.built', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef package_files(buildpath, srcpath, pkg, args):\n if (buildpath / '.pacakaged').is_file():\n return\n\n name = pkg['package']['name']\n libdir = get_libdir(srcpath, args)\n subprocess.run([\n 'python',\n Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',\n buildpath / (name + '.data'),\n '--preload',\n '{}@/lib/python3.6/site-packages'.format(libdir),\n '--js-output={}'.format(buildpath / (name + '.js')),\n '--export-name=pyodide',\n '--exclude', '*.wasm.pre',\n '--exclude', '__pycache__',\n '--use-preload-plugins'], check=True)\n subprocess.run([\n 'uglifyjs',\n buildpath / (name + '.js'),\n '-o',\n buildpath / (name + '.js')], check=True)\n\n with open(buildpath / '.packaged', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef build_package(path, args):\n pkg = common.parse_package(path)\n packagedir = pkg['package']['name'] + '-' + pkg['package']['version']\n dirpath = path.parent\n orig_path = Path.cwd()\n os.chdir(dirpath)\n try:\n buildpath = dirpath / 'build'\n if not buildpath.is_dir():\n os.makedirs(buildpath)\n srcpath = download_and_extract(buildpath, packagedir, pkg, args)\n patch(path, srcpath, pkg, args)\n compile(path, srcpath, pkg, args)\n package_files(buildpath, srcpath, pkg, args)\n finally:\n os.chdir(orig_path)\n\n\ndef parse_args():\n parser = argparse.ArgumentParser('Build a pyodide package.')\n parser.add_argument(\n 'package', type=str, nargs=1,\n help=\"Path to meta.yaml package description\")\n parser.add_argument(\n '--cflags', type=str, nargs='?', default=common.DEFAULTCFLAGS,\n help='Extra compiling flags')\n parser.add_argument(\n '--ldflags', type=str, nargs='?', default=common.DEFAULTLDFLAGS,\n help='Extra linking flags')\n parser.add_argument(\n '--host', type=str, nargs='?', default=common.HOSTPYTHON,\n help='The path to the host Python installation')\n parser.add_argument(\n '--target', type=str, nargs='?', default=common.TARGETPYTHON,\n help='The path to the target Python installation')\n return parser.parse_args()\n\n\ndef main(args):\n path = Path(args.package[0]).resolve()\n build_package(path, args)\n\n\nif __name__ == '__main__':\n args = parse_args()\n main(args)\n", "path": "tools/buildpkg.py"}]} | 2,703 | 215 |
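Editorial note on the record above: the golden diff stops the absolute build directory from leaking into the generated `.js` by passing file names relative to `buildpath` and setting `cwd=` on the subprocess, instead of embedding full paths in the command line. A generic sketch of that pattern follows; the `file_packager.py` invocation and its flags are simplified placeholders, not the exact Emscripten command.

```python
import subprocess
from pathlib import Path


def package(buildpath: Path, name: str) -> None:
    # Only relative names appear on the command line, so only relative names can
    # end up baked into the generated output; the working directory supplies the rest.
    subprocess.run(
        ["python", "file_packager.py", f"{name}.data", f"--js-output={name}.js"],
        cwd=buildpath,
        check=True,
    )
```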
gh_patches_debug_4204 | rasdani/github-patches | git_diff | statsmodels__statsmodels-9082 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in CanCorr documentation and docstring
CanCorr's documentation and docstring say that CanCorr has attributes x_cancoeff and y_cancoeff. However, they should say x_cancoef and y_cancoef. Should I submit a PR?
</issue>
<code>
[start of statsmodels/multivariate/cancorr.py]
1 # -*- coding: utf-8 -*-
2
3 """Canonical correlation analysis
4
5 author: Yichuan Liu
6 """
7 import numpy as np
8 from numpy.linalg import svd
9 import scipy
10 import pandas as pd
11
12 from statsmodels.base.model import Model
13 from statsmodels.iolib import summary2
14 from .multivariate_ols import multivariate_stats
15
16
17 class CanCorr(Model):
18 """
19 Canonical correlation analysis using singular value decomposition
20
21 For matrices exog=x and endog=y, find projections x_cancoef and y_cancoef
22 such that:
23
24 x1 = x * x_cancoef, x1' * x1 is identity matrix
25 y1 = y * y_cancoef, y1' * y1 is identity matrix
26
27 and the correlation between x1 and y1 is maximized.
28
29 Attributes
30 ----------
31 endog : ndarray
32 See Parameters.
33 exog : ndarray
34 See Parameters.
35 cancorr : ndarray
36 The canonical correlation values
37 y_cancoeff : ndarray
38 The canonical coefficients for endog
39 x_cancoeff : ndarray
40 The canonical coefficients for exog
41
42 References
43 ----------
44 .. [*] http://numerical.recipes/whp/notes/CanonCorrBySVD.pdf
45 .. [*] http://www.csun.edu/~ata20315/psy524/docs/Psy524%20Lecture%208%20CC.pdf
46 .. [*] http://www.mathematica-journal.com/2014/06/canonical-correlation-analysis/
47 """ # noqa:E501
48 def __init__(self, endog, exog, tolerance=1e-8, missing='none', hasconst=None, **kwargs):
49 super(CanCorr, self).__init__(endog, exog, missing=missing,
50 hasconst=hasconst, **kwargs)
51 self._fit(tolerance)
52
53 def _fit(self, tolerance=1e-8):
54 """Fit the model
55
56 A ValueError is raised if there are singular values smaller than the
57 tolerance. The treatment of singular arrays might change in future.
58
59 Parameters
60 ----------
61 tolerance : float
62 eigenvalue tolerance, values smaller than which is considered 0
63 """
64 nobs, k_yvar = self.endog.shape
65 nobs, k_xvar = self.exog.shape
66 k = np.min([k_yvar, k_xvar])
67
68 x = np.array(self.exog)
69 x = x - x.mean(0)
70 y = np.array(self.endog)
71 y = y - y.mean(0)
72
73 ux, sx, vx = svd(x, 0)
74 # vx_ds = vx.T divided by sx
75 vx_ds = vx.T
76 mask = sx > tolerance
77 if mask.sum() < len(mask):
78 raise ValueError('exog is collinear.')
79 vx_ds[:, mask] /= sx[mask]
80 uy, sy, vy = svd(y, 0)
81 # vy_ds = vy.T divided by sy
82 vy_ds = vy.T
83 mask = sy > tolerance
84 if mask.sum() < len(mask):
85 raise ValueError('endog is collinear.')
86 vy_ds[:, mask] /= sy[mask]
87 u, s, v = svd(ux.T.dot(uy), 0)
88
89 # Correct any roundoff
90 self.cancorr = np.array([max(0, min(s[i], 1)) for i in range(len(s))])
91
92 self.x_cancoef = vx_ds.dot(u[:, :k])
93 self.y_cancoef = vy_ds.dot(v.T[:, :k])
94
95 def corr_test(self):
96 """Approximate F test
97 Perform multivariate statistical tests of the hypothesis that
98 there is no canonical correlation between endog and exog.
99 For each canonical correlation, testing its significance based on
100 Wilks' lambda.
101
102 Returns
103 -------
104 CanCorrTestResults instance
105 """
106 nobs, k_yvar = self.endog.shape
107 nobs, k_xvar = self.exog.shape
108 eigenvals = np.power(self.cancorr, 2)
109 stats = pd.DataFrame(columns=['Canonical Correlation', "Wilks' lambda",
110 'Num DF','Den DF', 'F Value','Pr > F'],
111 index=list(range(len(eigenvals) - 1, -1, -1)))
112 prod = 1
113 for i in range(len(eigenvals) - 1, -1, -1):
114 prod *= 1 - eigenvals[i]
115 p = k_yvar - i
116 q = k_xvar - i
117 r = (nobs - k_yvar - 1) - (p - q + 1) / 2
118 u = (p * q - 2) / 4
119 df1 = p * q
120 if p ** 2 + q ** 2 - 5 > 0:
121 t = np.sqrt(((p * q) ** 2 - 4) / (p ** 2 + q ** 2 - 5))
122 else:
123 t = 1
124 df2 = r * t - 2 * u
125 lmd = np.power(prod, 1 / t)
126 F = (1 - lmd) / lmd * df2 / df1
127 stats.loc[i, 'Canonical Correlation'] = self.cancorr[i]
128 stats.loc[i, "Wilks' lambda"] = prod
129 stats.loc[i, 'Num DF'] = df1
130 stats.loc[i, 'Den DF'] = df2
131 stats.loc[i, 'F Value'] = F
132 pval = scipy.stats.f.sf(F, df1, df2)
133 stats.loc[i, 'Pr > F'] = pval
134 '''
135 # Wilk's Chi square test of each canonical correlation
136 df = (p - i + 1) * (q - i + 1)
137 chi2 = a * np.log(prod)
138 pval = stats.chi2.sf(chi2, df)
139 stats.loc[i, 'Canonical correlation'] = self.cancorr[i]
140 stats.loc[i, 'Chi-square'] = chi2
141 stats.loc[i, 'DF'] = df
142 stats.loc[i, 'Pr > ChiSq'] = pval
143 '''
144 ind = stats.index.values[::-1]
145 stats = stats.loc[ind, :]
146
147 # Multivariate tests (remember x has mean removed)
148 stats_mv = multivariate_stats(eigenvals,
149 k_yvar, k_xvar, nobs - k_xvar - 1)
150 return CanCorrTestResults(stats, stats_mv)
151
152
153 class CanCorrTestResults:
154 """
155 Canonical correlation results class
156
157 Attributes
158 ----------
159 stats : DataFrame
160 Contain statistical tests results for each canonical correlation
161 stats_mv : DataFrame
162 Contain the multivariate statistical tests results
163 """
164 def __init__(self, stats, stats_mv):
165 self.stats = stats
166 self.stats_mv = stats_mv
167
168 def __str__(self):
169 return self.summary().__str__()
170
171 def summary(self):
172 summ = summary2.Summary()
173 summ.add_title('Cancorr results')
174 summ.add_df(self.stats)
175 summ.add_dict({'': ''})
176 summ.add_dict({'Multivariate Statistics and F Approximations': ''})
177 summ.add_df(self.stats_mv)
178 return summ
179
[end of statsmodels/multivariate/cancorr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/statsmodels/multivariate/cancorr.py b/statsmodels/multivariate/cancorr.py
--- a/statsmodels/multivariate/cancorr.py
+++ b/statsmodels/multivariate/cancorr.py
@@ -34,9 +34,9 @@
See Parameters.
cancorr : ndarray
The canonical correlation values
- y_cancoeff : ndarray
+ y_cancoef : ndarray
The canonical coefficients for endog
- x_cancoeff : ndarray
+ x_cancoef : ndarray
The canonical coefficients for exog
References
| {"golden_diff": "diff --git a/statsmodels/multivariate/cancorr.py b/statsmodels/multivariate/cancorr.py\n--- a/statsmodels/multivariate/cancorr.py\n+++ b/statsmodels/multivariate/cancorr.py\n@@ -34,9 +34,9 @@\n See Parameters.\n cancorr : ndarray\n The canonical correlation values\n- y_cancoeff : ndarray\n+ y_cancoef : ndarray\n The canonical coefficients for endog\n- x_cancoeff : ndarray\n+ x_cancoef : ndarray\n The canonical coefficients for exog\n \n References\n", "issue": "Typo in CanCorr documentation and docstring\nCanCorr's documentation and docstring say that CanCorr has attributes x_cancoeff and y_cancoeff. However, they should say x_cancoef and y_cancoef. Should I submit a PR?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Canonical correlation analysis\n\nauthor: Yichuan Liu\n\"\"\"\nimport numpy as np\nfrom numpy.linalg import svd\nimport scipy\nimport pandas as pd\n\nfrom statsmodels.base.model import Model\nfrom statsmodels.iolib import summary2\nfrom .multivariate_ols import multivariate_stats\n\n\nclass CanCorr(Model):\n \"\"\"\n Canonical correlation analysis using singular value decomposition\n\n For matrices exog=x and endog=y, find projections x_cancoef and y_cancoef\n such that:\n\n x1 = x * x_cancoef, x1' * x1 is identity matrix\n y1 = y * y_cancoef, y1' * y1 is identity matrix\n\n and the correlation between x1 and y1 is maximized.\n\n Attributes\n ----------\n endog : ndarray\n See Parameters.\n exog : ndarray\n See Parameters.\n cancorr : ndarray\n The canonical correlation values\n y_cancoeff : ndarray\n The canonical coefficients for endog\n x_cancoeff : ndarray\n The canonical coefficients for exog\n\n References\n ----------\n .. [*] http://numerical.recipes/whp/notes/CanonCorrBySVD.pdf\n .. [*] http://www.csun.edu/~ata20315/psy524/docs/Psy524%20Lecture%208%20CC.pdf\n .. [*] http://www.mathematica-journal.com/2014/06/canonical-correlation-analysis/\n \"\"\" # noqa:E501\n def __init__(self, endog, exog, tolerance=1e-8, missing='none', hasconst=None, **kwargs):\n super(CanCorr, self).__init__(endog, exog, missing=missing,\n hasconst=hasconst, **kwargs)\n self._fit(tolerance)\n\n def _fit(self, tolerance=1e-8):\n \"\"\"Fit the model\n\n A ValueError is raised if there are singular values smaller than the\n tolerance. 
The treatment of singular arrays might change in future.\n\n Parameters\n ----------\n tolerance : float\n eigenvalue tolerance, values smaller than which is considered 0\n \"\"\"\n nobs, k_yvar = self.endog.shape\n nobs, k_xvar = self.exog.shape\n k = np.min([k_yvar, k_xvar])\n\n x = np.array(self.exog)\n x = x - x.mean(0)\n y = np.array(self.endog)\n y = y - y.mean(0)\n\n ux, sx, vx = svd(x, 0)\n # vx_ds = vx.T divided by sx\n vx_ds = vx.T\n mask = sx > tolerance\n if mask.sum() < len(mask):\n raise ValueError('exog is collinear.')\n vx_ds[:, mask] /= sx[mask]\n uy, sy, vy = svd(y, 0)\n # vy_ds = vy.T divided by sy\n vy_ds = vy.T\n mask = sy > tolerance\n if mask.sum() < len(mask):\n raise ValueError('endog is collinear.')\n vy_ds[:, mask] /= sy[mask]\n u, s, v = svd(ux.T.dot(uy), 0)\n\n # Correct any roundoff\n self.cancorr = np.array([max(0, min(s[i], 1)) for i in range(len(s))])\n\n self.x_cancoef = vx_ds.dot(u[:, :k])\n self.y_cancoef = vy_ds.dot(v.T[:, :k])\n\n def corr_test(self):\n \"\"\"Approximate F test\n Perform multivariate statistical tests of the hypothesis that\n there is no canonical correlation between endog and exog.\n For each canonical correlation, testing its significance based on\n Wilks' lambda.\n\n Returns\n -------\n CanCorrTestResults instance\n \"\"\"\n nobs, k_yvar = self.endog.shape\n nobs, k_xvar = self.exog.shape\n eigenvals = np.power(self.cancorr, 2)\n stats = pd.DataFrame(columns=['Canonical Correlation', \"Wilks' lambda\",\n 'Num DF','Den DF', 'F Value','Pr > F'],\n index=list(range(len(eigenvals) - 1, -1, -1)))\n prod = 1\n for i in range(len(eigenvals) - 1, -1, -1):\n prod *= 1 - eigenvals[i]\n p = k_yvar - i\n q = k_xvar - i\n r = (nobs - k_yvar - 1) - (p - q + 1) / 2\n u = (p * q - 2) / 4\n df1 = p * q\n if p ** 2 + q ** 2 - 5 > 0:\n t = np.sqrt(((p * q) ** 2 - 4) / (p ** 2 + q ** 2 - 5))\n else:\n t = 1\n df2 = r * t - 2 * u\n lmd = np.power(prod, 1 / t)\n F = (1 - lmd) / lmd * df2 / df1\n stats.loc[i, 'Canonical Correlation'] = self.cancorr[i]\n stats.loc[i, \"Wilks' lambda\"] = prod\n stats.loc[i, 'Num DF'] = df1\n stats.loc[i, 'Den DF'] = df2\n stats.loc[i, 'F Value'] = F\n pval = scipy.stats.f.sf(F, df1, df2)\n stats.loc[i, 'Pr > F'] = pval\n '''\n # Wilk's Chi square test of each canonical correlation\n df = (p - i + 1) * (q - i + 1)\n chi2 = a * np.log(prod)\n pval = stats.chi2.sf(chi2, df)\n stats.loc[i, 'Canonical correlation'] = self.cancorr[i]\n stats.loc[i, 'Chi-square'] = chi2\n stats.loc[i, 'DF'] = df\n stats.loc[i, 'Pr > ChiSq'] = pval\n '''\n ind = stats.index.values[::-1]\n stats = stats.loc[ind, :]\n\n # Multivariate tests (remember x has mean removed)\n stats_mv = multivariate_stats(eigenvals,\n k_yvar, k_xvar, nobs - k_xvar - 1)\n return CanCorrTestResults(stats, stats_mv)\n\n\nclass CanCorrTestResults:\n \"\"\"\n Canonical correlation results class\n\n Attributes\n ----------\n stats : DataFrame\n Contain statistical tests results for each canonical correlation\n stats_mv : DataFrame\n Contain the multivariate statistical tests results\n \"\"\"\n def __init__(self, stats, stats_mv):\n self.stats = stats\n self.stats_mv = stats_mv\n\n def __str__(self):\n return self.summary().__str__()\n\n def summary(self):\n summ = summary2.Summary()\n summ.add_title('Cancorr results')\n summ.add_df(self.stats)\n summ.add_dict({'': ''})\n summ.add_dict({'Multivariate Statistics and F Approximations': ''})\n summ.add_df(self.stats_mv)\n return summ\n", "path": "statsmodels/multivariate/cancorr.py"}]} | 2,657 | 131 |
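Editorial note on the record above: since the fix is documentation-only, the quickest way to confirm the correct attribute names is to fit a small model and inspect them. A sketch with random data follows; it assumes statsmodels and NumPy are installed, and the shapes shown follow from three exogenous and two endogenous columns.

```python
import numpy as np
from statsmodels.multivariate.cancorr import CanCorr

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))                                        # exog
y = x @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(100, 2))    # endog

model = CanCorr(y, x)            # fitting happens inside __init__
print(model.cancorr)             # canonical correlation values
print(model.x_cancoef.shape)     # (3, 2): coefficients for exog
print(model.y_cancoef.shape)     # (2, 2): coefficients for endog
```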
gh_patches_debug_26969 | rasdani/github-patches | git_diff | conda__conda-707 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add ability to keep retrying with a lock error
The yum installer (IIRC) has a nice feature that it will keep trying every 10 seconds or so if there is a lock error. This could be useful for conda.
</issue>
<code>
[start of conda/lock.py]
1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6
7 """
8 Tools for working with locks
9
10 A lock is just an empty directory. We use directories because this lets us use
11 the race condition-proof os.makedirs.
12
13 For now, there is one global lock for all of conda, because some things happen
14 globally (such as downloading packages).
15
16 We don't raise an error if the lock is named with the current PID
17 """
18
19 import os
20 from os.path import join
21 import glob
22
23
24 LOCKFN = '.conda_lock'
25
26
27 class Locked(object):
28 """
29 Context manager to handle locks.
30 """
31 def __init__(self, path):
32 self.path = path
33 self.end = "-" + str(os.getpid())
34 self.lock_path = join(self.path, LOCKFN + self.end)
35 self.pattern = join(self.path, LOCKFN + '-*')
36 self.remove = True
37
38 def __enter__(self):
39 files = glob.glob(self.pattern)
40 if files and not files[0].endswith(self.end):
41 # Keep the string "LOCKERROR" in this string so that external
42 # programs can look for it.
43 raise RuntimeError("""\
44 LOCKERROR: It looks like conda is already doing something.
45 The lock %s was found. Wait for it to finish before continuing.
46 If you are sure that conda is not running, remove it and try again.
47 You can also use: $ conda clean --lock""" % self.lock_path)
48
49 if not files:
50 try:
51 os.makedirs(self.lock_path)
52 except OSError:
53 pass
54 else: # PID lock already here --- someone else will remove it.
55 self.remove = False
56
57 def __exit__(self, exc_type, exc_value, traceback):
58 if self.remove:
59 for path in self.lock_path, self.path:
60 try:
61 os.rmdir(path)
62 except OSError:
63 pass
64
[end of conda/lock.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda/lock.py b/conda/lock.py
--- a/conda/lock.py
+++ b/conda/lock.py
@@ -19,7 +19,7 @@
import os
from os.path import join
import glob
-
+from time import sleep
LOCKFN = '.conda_lock'
@@ -36,15 +36,28 @@
self.remove = True
def __enter__(self):
- files = glob.glob(self.pattern)
- if files and not files[0].endswith(self.end):
- # Keep the string "LOCKERROR" in this string so that external
- # programs can look for it.
- raise RuntimeError("""\
-LOCKERROR: It looks like conda is already doing something.
-The lock %s was found. Wait for it to finish before continuing.
-If you are sure that conda is not running, remove it and try again.
-You can also use: $ conda clean --lock""" % self.lock_path)
+ retries = 10
+ # Keep the string "LOCKERROR" in this string so that external
+ # programs can look for it.
+ lockstr = ("""\
+ LOCKERROR: It looks like conda is already doing something.
+ The lock %s was found. Wait for it to finish before continuing.
+ If you are sure that conda is not running, remove it and try again.
+ You can also use: $ conda clean --lock""" % self.lock_path)
+ sleeptime = 1
+ while retries:
+ files = glob.glob(self.pattern)
+ if files and not files[0].endswith(self.end):
+ print(lockstr)
+ print("Sleeping for %s seconds" % sleeptime)
+ sleep(sleeptime)
+ sleeptime *= 2
+ retries -= 1
+ else:
+ break
+ else:
+ print("Exceeded max retries, giving up")
+ raise RuntimeError(lockstr)
if not files:
try:
| {"golden_diff": "diff --git a/conda/lock.py b/conda/lock.py\n--- a/conda/lock.py\n+++ b/conda/lock.py\n@@ -19,7 +19,7 @@\n import os\n from os.path import join\n import glob\n-\n+from time import sleep\n \n LOCKFN = '.conda_lock'\n \n@@ -36,15 +36,28 @@\n self.remove = True\n \n def __enter__(self):\n- files = glob.glob(self.pattern)\n- if files and not files[0].endswith(self.end):\n- # Keep the string \"LOCKERROR\" in this string so that external\n- # programs can look for it.\n- raise RuntimeError(\"\"\"\\\n-LOCKERROR: It looks like conda is already doing something.\n-The lock %s was found. Wait for it to finish before continuing.\n-If you are sure that conda is not running, remove it and try again.\n-You can also use: $ conda clean --lock\"\"\" % self.lock_path)\n+ retries = 10\n+ # Keep the string \"LOCKERROR\" in this string so that external\n+ # programs can look for it.\n+ lockstr = (\"\"\"\\\n+ LOCKERROR: It looks like conda is already doing something.\n+ The lock %s was found. Wait for it to finish before continuing.\n+ If you are sure that conda is not running, remove it and try again.\n+ You can also use: $ conda clean --lock\"\"\" % self.lock_path)\n+ sleeptime = 1\n+ while retries:\n+ files = glob.glob(self.pattern)\n+ if files and not files[0].endswith(self.end):\n+ print(lockstr)\n+ print(\"Sleeping for %s seconds\" % sleeptime)\n+ sleep(sleeptime)\n+ sleeptime *= 2\n+ retries -= 1\n+ else:\n+ break\n+ else:\n+ print(\"Exceeded max retries, giving up\")\n+ raise RuntimeError(lockstr)\n \n if not files:\n try:\n", "issue": "Add ability to keep retrying with a lock error\nThe yum installer (IIRC) has a nice feature that it will keep trying every 10 seconds or so if there is a lock error. This could be useful for conda. \n\n", "before_files": [{"content": "# (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\n\"\"\"\nTools for working with locks\n\nA lock is just an empty directory. We use directories because this lets us use\nthe race condition-proof os.makedirs.\n\nFor now, there is one global lock for all of conda, because some things happen\nglobally (such as downloading packages).\n\nWe don't raise an error if the lock is named with the current PID\n\"\"\"\n\nimport os\nfrom os.path import join\nimport glob\n\n\nLOCKFN = '.conda_lock'\n\n\nclass Locked(object):\n \"\"\"\n Context manager to handle locks.\n \"\"\"\n def __init__(self, path):\n self.path = path\n self.end = \"-\" + str(os.getpid())\n self.lock_path = join(self.path, LOCKFN + self.end)\n self.pattern = join(self.path, LOCKFN + '-*')\n self.remove = True\n\n def __enter__(self):\n files = glob.glob(self.pattern)\n if files and not files[0].endswith(self.end):\n # Keep the string \"LOCKERROR\" in this string so that external\n # programs can look for it.\n raise RuntimeError(\"\"\"\\\nLOCKERROR: It looks like conda is already doing something.\nThe lock %s was found. 
Wait for it to finish before continuing.\nIf you are sure that conda is not running, remove it and try again.\nYou can also use: $ conda clean --lock\"\"\" % self.lock_path)\n\n if not files:\n try:\n os.makedirs(self.lock_path)\n except OSError:\n pass\n else: # PID lock already here --- someone else will remove it.\n self.remove = False\n\n def __exit__(self, exc_type, exc_value, traceback):\n if self.remove:\n for path in self.lock_path, self.path:\n try:\n os.rmdir(path)\n except OSError:\n pass\n", "path": "conda/lock.py"}]} | 1,161 | 451 |
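Editorial note on the record above: the retry loop added by the golden diff can be read in isolation as poll for the lock, sleep with exponential backoff, and give up after a fixed number of attempts. A simplified standalone version is sketched below; it drops conda's additional check that the lock was created by the current PID.

```python
import glob
from time import sleep


def wait_for_lock(pattern: str, retries: int = 10) -> None:
    """Block until no lock directory matches ``pattern``, backing off exponentially."""
    sleeptime = 1
    while retries:
        if not glob.glob(pattern):
            return                      # lock is free
        print("Lock found, sleeping for %s seconds" % sleeptime)
        sleep(sleeptime)
        sleeptime *= 2
        retries -= 1
    raise RuntimeError("Exceeded max retries, giving up")
```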
gh_patches_debug_58136 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4730 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No "moderation tasks" filter in participatory budget (one phase)
**URL:** https://meinberlin-dev.liqd.net/projekte/module/burgerhaushalt/?mode=list (list view)
or https://meinberlin-dev.liqd.net/dashboard/projects/burgerhaushalt-spandau/basic/ (dashboard)
**user:** Moderator, Admin
**expected behaviour:** When using participatory budget with one phase I want to be able to set up moderation tasks for the discussion of ideas and want to filter ideas with a filter "open moderation tasks"
**behaviour:** There is no filter "moderation tasks" in the list view of ideas in participatory budget (one phase), nor is there the possibility to create moderation tasks in the dashboard of the project
**important screensize:** no
**device & browser:** Mac/Windows Chrome, Edge Firefox, Iphone, Samsung Galaxy 20
</issue>
<code>
[start of meinberlin/apps/moderationtasks/dashboard.py]
1 from django.utils.translation import gettext_lazy as _
2
3 from adhocracy4.dashboard import ModuleFormSetComponent
4 from adhocracy4.dashboard import components
5
6 from . import forms
7
8
9 class ModerationTasksComponent(ModuleFormSetComponent):
10 identifier = 'moderation_tasks'
11 weight = 15
12 label = _('Moderation Tasks')
13
14 form_title = _('Edit moderation tasks')
15 form_class = forms.ModerationTasksFormSet
16 form_template_name = \
17 'meinberlin_moderationtasks/moderation_tasks_form.html'
18
19 def is_effective(self, module):
20 return module.blueprint_type in ['PB1', 'PB2', 'PB3']
21
22
23 components.register_module(ModerationTasksComponent())
24
[end of meinberlin/apps/moderationtasks/dashboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/moderationtasks/dashboard.py b/meinberlin/apps/moderationtasks/dashboard.py
--- a/meinberlin/apps/moderationtasks/dashboard.py
+++ b/meinberlin/apps/moderationtasks/dashboard.py
@@ -17,7 +17,7 @@
'meinberlin_moderationtasks/moderation_tasks_form.html'
def is_effective(self, module):
- return module.blueprint_type in ['PB1', 'PB2', 'PB3']
+ return module.blueprint_type in ['PB', 'PB2', 'PB3']
components.register_module(ModerationTasksComponent())
| {"golden_diff": "diff --git a/meinberlin/apps/moderationtasks/dashboard.py b/meinberlin/apps/moderationtasks/dashboard.py\n--- a/meinberlin/apps/moderationtasks/dashboard.py\n+++ b/meinberlin/apps/moderationtasks/dashboard.py\n@@ -17,7 +17,7 @@\n 'meinberlin_moderationtasks/moderation_tasks_form.html'\n \n def is_effective(self, module):\n- return module.blueprint_type in ['PB1', 'PB2', 'PB3']\n+ return module.blueprint_type in ['PB', 'PB2', 'PB3']\n \n \n components.register_module(ModerationTasksComponent())\n", "issue": "No \"moderation tasks\" filter in participatory budget (one phase)\n**URL:** https://meinberlin-dev.liqd.net/projekte/module/burgerhaushalt/?mode=list (list view)\r\nor https://meinberlin-dev.liqd.net/dashboard/projects/burgerhaushalt-spandau/basic/ (dashboard)\r\n**user:** Moderator, Admin\r\n**expected behaviour:** When using participatory budget with one phase i want to be able to set up moderation tasks for the discussion of ideas and want to filter ideas with an filter \"open moderationtasks\"\r\n**behaviour:** There is no filter \"moderation tasks\" in the list view of ideas in participatory budget (one phase) nor is there the possibility to create moderation tasks in the dashboard of the project\r\n**important screensize:** no\r\n**device & browser:** Mac/Windows Chrome, Edge Firefox, Iphone, Samsung Galaxy 20\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import ModuleFormSetComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import forms\n\n\nclass ModerationTasksComponent(ModuleFormSetComponent):\n identifier = 'moderation_tasks'\n weight = 15\n label = _('Moderation Tasks')\n\n form_title = _('Edit moderation tasks')\n form_class = forms.ModerationTasksFormSet\n form_template_name = \\\n 'meinberlin_moderationtasks/moderation_tasks_form.html'\n\n def is_effective(self, module):\n return module.blueprint_type in ['PB1', 'PB2', 'PB3']\n\n\ncomponents.register_module(ModerationTasksComponent())\n", "path": "meinberlin/apps/moderationtasks/dashboard.py"}]} | 929 | 139 |
gh_patches_debug_61599 | rasdani/github-patches | git_diff | beetbox__beets-3159 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BadFiles plugin crashes beets with latest git master
### Problem
If the `badfiles` plugin is activated, beets crashes when starting an import task.
Running this command in verbose (`-vv`) mode:
```sh
$ beet -vv import --write /data/music
user configuration: /home/jan/.config/beets/config.yaml
data directory: /home/jan/.config/beets
plugin paths:
Sending event: pluginload
artresizer: method is (2, (7, 0, 8))
lyrics: Disabling google source: no API key configured.
library database: /home/jan/beets.db
library directory: /data/music
Sending event: library_opened
Traceback (most recent call last):
File "/home/jan/.local/bin/beet", line 11, in <module>
load_entry_point('beets', 'console_scripts', 'beet')()
File "/data/jan/Projects/beets/beets/ui/__init__.py", line 1266, in main
_raw_main(args)
File "/data/jan/Projects/beets/beets/ui/__init__.py", line 1253, in _raw_main
subcommand.func(lib, suboptions, subargs)
File "/data/jan/Projects/beets/beets/ui/commands.py", line 955, in import_func
import_files(lib, paths, query)
File "/data/jan/Projects/beets/beets/ui/commands.py", line 925, in import_files
session.run()
File "/data/jan/Projects/beets/beets/importer.py", line 316, in run
for stage_func in plugins.early_import_stages():
File "/data/jan/Projects/beets/beets/plugins.py", line 426, in early_import_stages
stages += plugin.get_early_import_stages()
File "/data/jan/Projects/beets/beets/plugins.py", line 112, in get_early_import_stages
return self._set_stage_log_level(self.early_import_stages)
AttributeError: 'BadFiles' object has no attribute 'early_import_stages'
```
### Setup
* OS: Arch Linux
* Python version: 3.7.2
* beets version: be118b92
* Turning off plugins made problem go away (yes/no): Yes (Disabling the `badfiles` plugin suffices)
My configuration (output of `beet config`) is: https://gist.github.com/Holzhaus/500b790c06fe2250ac9182bd8a6760da
</issue>
<code>
[start of beetsplug/badfiles.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, François-Xavier Thomas.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Use command-line tools to check for audio file corruption.
17 """
18
19 from __future__ import division, absolute_import, print_function
20
21 from subprocess import check_output, CalledProcessError, list2cmdline, STDOUT
22
23 import shlex
24 import os
25 import errno
26 import sys
27 import six
28 from beets.plugins import BeetsPlugin
29 from beets.ui import Subcommand
30 from beets.util import displayable_path, confit, par_map
31 from beets import ui
32
33
34 class CheckerCommandException(Exception):
35 """Raised when running a checker failed.
36
37 Attributes:
38 checker: Checker command name.
39 path: Path to the file being validated.
40 errno: Error number from the checker execution error.
41 msg: Message from the checker execution error.
42 """
43
44 def __init__(self, cmd, oserror):
45 self.checker = cmd[0]
46 self.path = cmd[-1]
47 self.errno = oserror.errno
48 self.msg = str(oserror)
49
50
51 class BadFiles(BeetsPlugin):
52 def __init__(self):
53 self.verbose = False
54
55 def run_command(self, cmd):
56 self._log.debug(u"running command: {}",
57 displayable_path(list2cmdline(cmd)))
58 try:
59 output = check_output(cmd, stderr=STDOUT)
60 errors = 0
61 status = 0
62 except CalledProcessError as e:
63 output = e.output
64 errors = 1
65 status = e.returncode
66 except OSError as e:
67 raise CheckerCommandException(cmd, e)
68 output = output.decode(sys.getfilesystemencoding())
69 return status, errors, [line for line in output.split("\n") if line]
70
71 def check_mp3val(self, path):
72 status, errors, output = self.run_command(["mp3val", path])
73 if status == 0:
74 output = [line for line in output if line.startswith("WARNING:")]
75 errors = len(output)
76 return status, errors, output
77
78 def check_flac(self, path):
79 return self.run_command(["flac", "-wst", path])
80
81 def check_custom(self, command):
82 def checker(path):
83 cmd = shlex.split(command)
84 cmd.append(path)
85 return self.run_command(cmd)
86 return checker
87
88 def get_checker(self, ext):
89 ext = ext.lower()
90 try:
91 command = self.config['commands'].get(dict).get(ext)
92 except confit.NotFoundError:
93 command = None
94 if command:
95 return self.check_custom(command)
96 if ext == "mp3":
97 return self.check_mp3val
98 if ext == "flac":
99 return self.check_flac
100
101 def check_item(self, item):
102 # First, check whether the path exists. If not, the user
103 # should probably run `beet update` to cleanup your library.
104 dpath = displayable_path(item.path)
105 self._log.debug(u"checking path: {}", dpath)
106 if not os.path.exists(item.path):
107 ui.print_(u"{}: file does not exist".format(
108 ui.colorize('text_error', dpath)))
109
110 # Run the checker against the file if one is found
111 ext = os.path.splitext(item.path)[1][1:].decode('utf8', 'ignore')
112 checker = self.get_checker(ext)
113 if not checker:
114 self._log.error(u"no checker specified in the config for {}",
115 ext)
116 return
117 path = item.path
118 if not isinstance(path, six.text_type):
119 path = item.path.decode(sys.getfilesystemencoding())
120 try:
121 status, errors, output = checker(path)
122 except CheckerCommandException as e:
123 if e.errno == errno.ENOENT:
124 self._log.error(
125 u"command not found: {} when validating file: {}",
126 e.checker,
127 e.path
128 )
129 else:
130 self._log.error(u"error invoking {}: {}", e.checker, e.msg)
131 return
132 if status > 0:
133 ui.print_(u"{}: checker exited with status {}"
134 .format(ui.colorize('text_error', dpath), status))
135 for line in output:
136 ui.print_(u" {}".format(displayable_path(line)))
137 elif errors > 0:
138 ui.print_(u"{}: checker found {} errors or warnings"
139 .format(ui.colorize('text_warning', dpath), errors))
140 for line in output:
141 ui.print_(u" {}".format(displayable_path(line)))
142 elif self.verbose:
143 ui.print_(u"{}: ok".format(ui.colorize('text_success', dpath)))
144
145 def command(self, lib, opts, args):
146 # Get items from arguments
147 items = lib.items(ui.decargs(args))
148 self.verbose = opts.verbose
149 par_map(self.check_item, items)
150
151 def commands(self):
152 bad_command = Subcommand('bad',
153 help=u'check for corrupt or missing files')
154 bad_command.parser.add_option(
155 u'-v', u'--verbose',
156 action='store_true', default=False, dest='verbose',
157 help=u'view results for both the bad and uncorrupted files'
158 )
159 bad_command.func = self.command
160 return [bad_command]
161
[end of beetsplug/badfiles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beetsplug/badfiles.py b/beetsplug/badfiles.py
--- a/beetsplug/badfiles.py
+++ b/beetsplug/badfiles.py
@@ -50,6 +50,7 @@
class BadFiles(BeetsPlugin):
def __init__(self):
+ super(BadFiles, self).__init__()
self.verbose = False
def run_command(self, cmd):
| {"golden_diff": "diff --git a/beetsplug/badfiles.py b/beetsplug/badfiles.py\n--- a/beetsplug/badfiles.py\n+++ b/beetsplug/badfiles.py\n@@ -50,6 +50,7 @@\n \n class BadFiles(BeetsPlugin):\n def __init__(self):\n+ super(BadFiles, self).__init__()\n self.verbose = False\n \n def run_command(self, cmd):\n", "issue": "BadFiles plugin crashes beets with latest git master\n### Problem\r\n\r\nIf the `badfiles` plugin is activated, beets crashes when starting an import task.\r\n\r\nRunning this command in verbose (`-vv`) mode:\r\n\r\n```sh\r\n$ beet -vv import --write /data/music\r\nuser configuration: /home/jan/.config/beets/config.yaml\r\ndata directory: /home/jan/.config/beets\r\nplugin paths:\r\nSending event: pluginload\r\nartresizer: method is (2, (7, 0, 8))\r\nlyrics: Disabling google source: no API key configured.\r\nlibrary database: /home/jan/beets.db\r\nlibrary directory: /data/music\r\nSending event: library_opened\r\nTraceback (most recent call last):\r\n File \"/home/jan/.local/bin/beet\", line 11, in <module>\r\n load_entry_point('beets', 'console_scripts', 'beet')()\r\n File \"/data/jan/Projects/beets/beets/ui/__init__.py\", line 1266, in main\r\n _raw_main(args)\r\n File \"/data/jan/Projects/beets/beets/ui/__init__.py\", line 1253, in _raw_main\r\n subcommand.func(lib, suboptions, subargs)\r\n File \"/data/jan/Projects/beets/beets/ui/commands.py\", line 955, in import_func\r\n import_files(lib, paths, query)\r\n File \"/data/jan/Projects/beets/beets/ui/commands.py\", line 925, in import_files\r\n session.run()\r\n File \"/data/jan/Projects/beets/beets/importer.py\", line 316, in run\r\n for stage_func in plugins.early_import_stages():\r\n File \"/data/jan/Projects/beets/beets/plugins.py\", line 426, in early_import_stages\r\n stages += plugin.get_early_import_stages()\r\n File \"/data/jan/Projects/beets/beets/plugins.py\", line 112, in get_early_import_stages\r\n return self._set_stage_log_level(self.early_import_stages)\r\nAttributeError: 'BadFiles' object has no attribute 'early_import_stages'\r\n```\r\n\r\n### Setup\r\n\r\n* OS: Arch Linux\r\n* Python version: 3.7.2\r\n* beets version: be118b92\r\n* Turning off plugins made problem go away (yes/no): Yes (Disabling the `badfiles` plugin suffices)\r\n\r\nMy configuration (output of `beet config`) is: https://gist.github.com/Holzhaus/500b790c06fe2250ac9182bd8a6760da\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Fran\u00e7ois-Xavier Thomas.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Use command-line tools to check for audio file corruption.\n\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nfrom subprocess import check_output, CalledProcessError, list2cmdline, STDOUT\n\nimport shlex\nimport os\nimport errno\nimport sys\nimport six\nfrom beets.plugins import BeetsPlugin\nfrom beets.ui import Subcommand\nfrom beets.util import displayable_path, confit, par_map\nfrom beets import ui\n\n\nclass 
CheckerCommandException(Exception):\n \"\"\"Raised when running a checker failed.\n\n Attributes:\n checker: Checker command name.\n path: Path to the file being validated.\n errno: Error number from the checker execution error.\n msg: Message from the checker execution error.\n \"\"\"\n\n def __init__(self, cmd, oserror):\n self.checker = cmd[0]\n self.path = cmd[-1]\n self.errno = oserror.errno\n self.msg = str(oserror)\n\n\nclass BadFiles(BeetsPlugin):\n def __init__(self):\n self.verbose = False\n\n def run_command(self, cmd):\n self._log.debug(u\"running command: {}\",\n displayable_path(list2cmdline(cmd)))\n try:\n output = check_output(cmd, stderr=STDOUT)\n errors = 0\n status = 0\n except CalledProcessError as e:\n output = e.output\n errors = 1\n status = e.returncode\n except OSError as e:\n raise CheckerCommandException(cmd, e)\n output = output.decode(sys.getfilesystemencoding())\n return status, errors, [line for line in output.split(\"\\n\") if line]\n\n def check_mp3val(self, path):\n status, errors, output = self.run_command([\"mp3val\", path])\n if status == 0:\n output = [line for line in output if line.startswith(\"WARNING:\")]\n errors = len(output)\n return status, errors, output\n\n def check_flac(self, path):\n return self.run_command([\"flac\", \"-wst\", path])\n\n def check_custom(self, command):\n def checker(path):\n cmd = shlex.split(command)\n cmd.append(path)\n return self.run_command(cmd)\n return checker\n\n def get_checker(self, ext):\n ext = ext.lower()\n try:\n command = self.config['commands'].get(dict).get(ext)\n except confit.NotFoundError:\n command = None\n if command:\n return self.check_custom(command)\n if ext == \"mp3\":\n return self.check_mp3val\n if ext == \"flac\":\n return self.check_flac\n\n def check_item(self, item):\n # First, check whether the path exists. 
If not, the user\n # should probably run `beet update` to cleanup your library.\n dpath = displayable_path(item.path)\n self._log.debug(u\"checking path: {}\", dpath)\n if not os.path.exists(item.path):\n ui.print_(u\"{}: file does not exist\".format(\n ui.colorize('text_error', dpath)))\n\n # Run the checker against the file if one is found\n ext = os.path.splitext(item.path)[1][1:].decode('utf8', 'ignore')\n checker = self.get_checker(ext)\n if not checker:\n self._log.error(u\"no checker specified in the config for {}\",\n ext)\n return\n path = item.path\n if not isinstance(path, six.text_type):\n path = item.path.decode(sys.getfilesystemencoding())\n try:\n status, errors, output = checker(path)\n except CheckerCommandException as e:\n if e.errno == errno.ENOENT:\n self._log.error(\n u\"command not found: {} when validating file: {}\",\n e.checker,\n e.path\n )\n else:\n self._log.error(u\"error invoking {}: {}\", e.checker, e.msg)\n return\n if status > 0:\n ui.print_(u\"{}: checker exited with status {}\"\n .format(ui.colorize('text_error', dpath), status))\n for line in output:\n ui.print_(u\" {}\".format(displayable_path(line)))\n elif errors > 0:\n ui.print_(u\"{}: checker found {} errors or warnings\"\n .format(ui.colorize('text_warning', dpath), errors))\n for line in output:\n ui.print_(u\" {}\".format(displayable_path(line)))\n elif self.verbose:\n ui.print_(u\"{}: ok\".format(ui.colorize('text_success', dpath)))\n\n def command(self, lib, opts, args):\n # Get items from arguments\n items = lib.items(ui.decargs(args))\n self.verbose = opts.verbose\n par_map(self.check_item, items)\n\n def commands(self):\n bad_command = Subcommand('bad',\n help=u'check for corrupt or missing files')\n bad_command.parser.add_option(\n u'-v', u'--verbose',\n action='store_true', default=False, dest='verbose',\n help=u'view results for both the bad and uncorrupted files'\n )\n bad_command.func = self.command\n return [bad_command]\n", "path": "beetsplug/badfiles.py"}]} | 2,779 | 92 |
gh_patches_debug_30055 | rasdani/github-patches | git_diff | pytorch__torchdynamo-193 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make torchdynamo not import third party package in `skipfiles.py`
@xuzhao9 in https://github.com/facebookresearch/torchdynamo/issues/107#issuecomment-1095681515 found that the following line makes alexnet 18% slower:
https://github.com/jansel/torchdynamo/blob/bf90b8cdbacf35944fa8c12185b1823dc5cb90bb/torchdynamo/skipfiles.py#L123
It seems importing: "networkx", "omegaconf", "onnx", "pandas", and "sklearn" cause performance issues.
TorchDynamo is only importing these modules to find the filename, which is also a bit wasteful. We should rewrite `skipfiles.py` to use [find_spec](https://docs.python.org/3/library/importlib.html#importlib.abc.PathEntryFinder.find_spec) instead, so we don't need to import unused packages.
Also, I think we can cut down the list of modules in skipfiles dramatically. Most of those were added when TorchDynamo didn't automatically skip backends and supported much less of python, so likely many (most?) can be removed.
</issue>
<code>
[start of torchdynamo/skipfiles.py]
1 import abc
2 import collections
3 import contextlib
4 import copy
5 import copyreg
6 import dataclasses
7 import enum
8 import functools
9 import importlib
10 import inspect
11 import linecache
12 import logging
13 import multiprocessing
14 import operator
15 import os
16 import posixpath
17 import random
18 import re
19 import selectors
20 import signal
21 import tempfile
22 import threading
23 import tokenize
24 import traceback
25 import types
26 import typing
27 import unittest
28 import weakref
29
30 import _collections_abc
31 import _weakrefset
32 import torch
33
34
35 def _module_dir(m: types.ModuleType):
36 return re.sub(r"__init__.py$", "", m.__file__)
37
38
39 SKIP_DIRS = [
40 # torch.*
41 _module_dir(torch),
42 # torchdynamo.*
43 os.path.dirname(__file__) + "/",
44 "<frozen importlib",
45 "<__array_function__ internals>",
46 ] + [
47 # skip some standard libs
48 _module_dir(m)
49 for m in (
50 abc,
51 collections,
52 contextlib,
53 copy,
54 copyreg,
55 dataclasses,
56 enum,
57 functools,
58 importlib,
59 inspect,
60 linecache,
61 logging,
62 multiprocessing,
63 operator,
64 os,
65 posixpath,
66 random,
67 re,
68 selectors,
69 signal,
70 tempfile,
71 threading,
72 tokenize,
73 traceback,
74 types,
75 typing,
76 unittest,
77 weakref,
78 _collections_abc,
79 _weakrefset,
80 )
81 ]
82 SKIP_DIRS_RE = None # set in add() below
83 FILENAME_ALLOWLIST = {
84 torch.nn.Sequential.__init__.__code__.co_filename,
85 }
86
87
88 def add(module: types.ModuleType):
89 assert isinstance(module, types.ModuleType)
90 global SKIP_DIRS_RE
91 name = module.__file__
92 if name is None:
93 return
94 SKIP_DIRS.append(_module_dir(module))
95 SKIP_DIRS_RE = re.compile(f"^({'|'.join(map(re.escape, SKIP_DIRS))})")
96
97
98 def check(filename, allow_torch=False):
99 """Should skip this file?"""
100 if filename is None:
101 return True
102 if filename in FILENAME_ALLOWLIST:
103 return False
104 if allow_torch and is_torch(filename):
105 return False
106 return bool(SKIP_DIRS_RE.match(filename))
107
108
109 # skip common third party libs
110 for _name in (
111 "functorch",
112 "intel_extension_for_pytorch",
113 "networkx",
114 "numpy",
115 "omegaconf",
116 "onnx",
117 "onnxruntime",
118 "onnx_tf",
119 "pandas",
120 "sklearn",
121 "tabulate",
122 "tensorflow",
123 "tensorrt",
124 "torch2trt",
125 "tqdm",
126 "tree",
127 "tvm",
128 "fx2trt_oss",
129 ):
130 try:
131 add(importlib.import_module(_name))
132 except (ImportError, TypeError):
133 pass
134
135
136 def is_torch_inline_allowed(filename):
137 return filename.startswith(_module_dir(torch.nn)) or filename.startswith(
138 _module_dir(torch.distributions)
139 )
140
141
142 def is_torch(filename):
143 return filename.startswith(_module_dir(torch))
144
[end of torchdynamo/skipfiles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchdynamo/skipfiles.py b/torchdynamo/skipfiles.py
--- a/torchdynamo/skipfiles.py
+++ b/torchdynamo/skipfiles.py
@@ -32,8 +32,12 @@
import torch
+def _strip_init_py(s):
+ return re.sub(r"__init__.py$", "", s)
+
+
def _module_dir(m: types.ModuleType):
- return re.sub(r"__init__.py$", "", m.__file__)
+ return _strip_init_py(m.__file__)
SKIP_DIRS = [
@@ -79,22 +83,32 @@
_weakrefset,
)
]
-SKIP_DIRS_RE = None # set in add() below
FILENAME_ALLOWLIST = {
torch.nn.Sequential.__init__.__code__.co_filename,
}
+SKIP_DIRS_RE = None
-def add(module: types.ModuleType):
- assert isinstance(module, types.ModuleType)
+def _recompile_re():
global SKIP_DIRS_RE
- name = module.__file__
- if name is None:
- return
- SKIP_DIRS.append(_module_dir(module))
SKIP_DIRS_RE = re.compile(f"^({'|'.join(map(re.escape, SKIP_DIRS))})")
+def add(import_name: str):
+ if isinstance(import_name, types.ModuleType):
+ return add(import_name.__name__)
+ assert isinstance(import_name, str)
+ module_spec = importlib.util.find_spec(import_name)
+ if not module_spec:
+ return
+ origin = module_spec.origin
+ if origin is None:
+ return
+ global SKIP_DIRS_RE
+ SKIP_DIRS.append(_strip_init_py(origin))
+ _recompile_re()
+
+
def check(filename, allow_torch=False):
"""Should skip this file?"""
if filename is None:
@@ -127,10 +141,9 @@
"tvm",
"fx2trt_oss",
):
- try:
- add(importlib.import_module(_name))
- except (ImportError, TypeError):
- pass
+ add(_name)
+
+_recompile_re()
def is_torch_inline_allowed(filename):
| {"golden_diff": "diff --git a/torchdynamo/skipfiles.py b/torchdynamo/skipfiles.py\n--- a/torchdynamo/skipfiles.py\n+++ b/torchdynamo/skipfiles.py\n@@ -32,8 +32,12 @@\n import torch\n \n \n+def _strip_init_py(s):\n+ return re.sub(r\"__init__.py$\", \"\", s)\n+\n+\n def _module_dir(m: types.ModuleType):\n- return re.sub(r\"__init__.py$\", \"\", m.__file__)\n+ return _strip_init_py(m.__file__)\n \n \n SKIP_DIRS = [\n@@ -79,22 +83,32 @@\n _weakrefset,\n )\n ]\n-SKIP_DIRS_RE = None # set in add() below\n FILENAME_ALLOWLIST = {\n torch.nn.Sequential.__init__.__code__.co_filename,\n }\n+SKIP_DIRS_RE = None\n \n \n-def add(module: types.ModuleType):\n- assert isinstance(module, types.ModuleType)\n+def _recompile_re():\n global SKIP_DIRS_RE\n- name = module.__file__\n- if name is None:\n- return\n- SKIP_DIRS.append(_module_dir(module))\n SKIP_DIRS_RE = re.compile(f\"^({'|'.join(map(re.escape, SKIP_DIRS))})\")\n \n \n+def add(import_name: str):\n+ if isinstance(import_name, types.ModuleType):\n+ return add(import_name.__name__)\n+ assert isinstance(import_name, str)\n+ module_spec = importlib.util.find_spec(import_name)\n+ if not module_spec:\n+ return\n+ origin = module_spec.origin\n+ if origin is None:\n+ return\n+ global SKIP_DIRS_RE\n+ SKIP_DIRS.append(_strip_init_py(origin))\n+ _recompile_re()\n+\n+\n def check(filename, allow_torch=False):\n \"\"\"Should skip this file?\"\"\"\n if filename is None:\n@@ -127,10 +141,9 @@\n \"tvm\",\n \"fx2trt_oss\",\n ):\n- try:\n- add(importlib.import_module(_name))\n- except (ImportError, TypeError):\n- pass\n+ add(_name)\n+\n+_recompile_re()\n \n \n def is_torch_inline_allowed(filename):\n", "issue": "Make torchdynamo not import third party package in `skipfiles.py`\n@xuzhao9 in https://github.com/facebookresearch/torchdynamo/issues/107#issuecomment-1095681515 found that the following line makes alexnet 18% slower: \r\n\r\nhttps://github.com/jansel/torchdynamo/blob/bf90b8cdbacf35944fa8c12185b1823dc5cb90bb/torchdynamo/skipfiles.py#L123\r\n\r\nIt seems importing: \"networkx\", \"omegaconf\", \"onnx\", \"pandas\", and \"sklearn\" cause performance issues.\r\n\r\nTorchDynamo is only importing these modules to find the filename, which is also a bit wasteful. We should rewrite `skipfiles.py` to use [find_spec](https://docs.python.org/3/library/importlib.html#importlib.abc.PathEntryFinder.find_spec) instead, so we don't need to import unused packages.\r\n\r\nAlso, I think we can cut down the list of modules in skipfiles dramatically. Most of those were added when TorchDynamo didn't automatically skip backends and supported much less of python, so likely many (most?) 
can be removed.\r\n\n", "before_files": [{"content": "import abc\nimport collections\nimport contextlib\nimport copy\nimport copyreg\nimport dataclasses\nimport enum\nimport functools\nimport importlib\nimport inspect\nimport linecache\nimport logging\nimport multiprocessing\nimport operator\nimport os\nimport posixpath\nimport random\nimport re\nimport selectors\nimport signal\nimport tempfile\nimport threading\nimport tokenize\nimport traceback\nimport types\nimport typing\nimport unittest\nimport weakref\n\nimport _collections_abc\nimport _weakrefset\nimport torch\n\n\ndef _module_dir(m: types.ModuleType):\n return re.sub(r\"__init__.py$\", \"\", m.__file__)\n\n\nSKIP_DIRS = [\n # torch.*\n _module_dir(torch),\n # torchdynamo.*\n os.path.dirname(__file__) + \"/\",\n \"<frozen importlib\",\n \"<__array_function__ internals>\",\n] + [\n # skip some standard libs\n _module_dir(m)\n for m in (\n abc,\n collections,\n contextlib,\n copy,\n copyreg,\n dataclasses,\n enum,\n functools,\n importlib,\n inspect,\n linecache,\n logging,\n multiprocessing,\n operator,\n os,\n posixpath,\n random,\n re,\n selectors,\n signal,\n tempfile,\n threading,\n tokenize,\n traceback,\n types,\n typing,\n unittest,\n weakref,\n _collections_abc,\n _weakrefset,\n )\n]\nSKIP_DIRS_RE = None # set in add() below\nFILENAME_ALLOWLIST = {\n torch.nn.Sequential.__init__.__code__.co_filename,\n}\n\n\ndef add(module: types.ModuleType):\n assert isinstance(module, types.ModuleType)\n global SKIP_DIRS_RE\n name = module.__file__\n if name is None:\n return\n SKIP_DIRS.append(_module_dir(module))\n SKIP_DIRS_RE = re.compile(f\"^({'|'.join(map(re.escape, SKIP_DIRS))})\")\n\n\ndef check(filename, allow_torch=False):\n \"\"\"Should skip this file?\"\"\"\n if filename is None:\n return True\n if filename in FILENAME_ALLOWLIST:\n return False\n if allow_torch and is_torch(filename):\n return False\n return bool(SKIP_DIRS_RE.match(filename))\n\n\n# skip common third party libs\nfor _name in (\n \"functorch\",\n \"intel_extension_for_pytorch\",\n \"networkx\",\n \"numpy\",\n \"omegaconf\",\n \"onnx\",\n \"onnxruntime\",\n \"onnx_tf\",\n \"pandas\",\n \"sklearn\",\n \"tabulate\",\n \"tensorflow\",\n \"tensorrt\",\n \"torch2trt\",\n \"tqdm\",\n \"tree\",\n \"tvm\",\n \"fx2trt_oss\",\n):\n try:\n add(importlib.import_module(_name))\n except (ImportError, TypeError):\n pass\n\n\ndef is_torch_inline_allowed(filename):\n return filename.startswith(_module_dir(torch.nn)) or filename.startswith(\n _module_dir(torch.distributions)\n )\n\n\ndef is_torch(filename):\n return filename.startswith(_module_dir(torch))\n", "path": "torchdynamo/skipfiles.py"}]} | 1,809 | 485 |
gh_patches_debug_12000 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-3099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to use Unions with Generics
## Describe the Bug
Not sure if it is a bug or something non supported but when passing a union to a generic type strawberry is unable to initialize the schema.
The following example would work using a single strawberry type with Connection, but it fails when using an union
```python
from typing import Generic, TypeVar, Union
import strawberry
T = TypeVar("T")
@strawberry.type
class Edge(Generic[T]):
cursor: str
node: T
@strawberry.type
class Connection(Generic[T]):
edges: list["Edge[T]"]
@strawberry.type
class Entity1:
id: int
@strawberry.type
class Entity2:
id: int
Entities = Union[Entity1, Entity2]
@strawberry.type
class Query:
@strawberry.field
def entities(self) -> Connection[Entities]:
return Connection(
edges=[
Edge(
cursor="1",
node=Entity1(id=1),
),
Edge(
cursor="2",
node=Entity2(id=2),
),
],
)
schema = strawberry.Schema(Query)
print(schema.execute_sync("{ entities { __typename } }"))
```
error
```
raise cls(f"{self.name} fields cannot be resolved. {error}") from error
TypeError: Query fields cannot be resolved.
```
## System Information
- Operating system:
- Strawberry version (if applicable): 0.208.1
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry/issues/3098">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of strawberry/schema/name_converter.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, List, Optional, Union, cast
4 from typing_extensions import Protocol
5
6 from strawberry.custom_scalar import ScalarDefinition
7 from strawberry.directive import StrawberryDirective
8 from strawberry.enum import EnumDefinition, EnumValue
9 from strawberry.lazy_type import LazyType
10 from strawberry.schema_directive import StrawberrySchemaDirective
11 from strawberry.type import (
12 StrawberryList,
13 StrawberryOptional,
14 has_object_definition,
15 )
16 from strawberry.types.types import StrawberryObjectDefinition
17 from strawberry.union import StrawberryUnion
18 from strawberry.utils.str_converters import capitalize_first, to_camel_case
19 from strawberry.utils.typing import eval_type
20
21 if TYPE_CHECKING:
22 from strawberry.arguments import StrawberryArgument
23 from strawberry.field import StrawberryField
24 from strawberry.type import StrawberryType
25
26
27 class HasGraphQLName(Protocol):
28 python_name: str
29 graphql_name: Optional[str]
30
31
32 class NameConverter:
33 def __init__(self, auto_camel_case: bool = True) -> None:
34 self.auto_camel_case = auto_camel_case
35
36 def apply_naming_config(self, name: str) -> str:
37 if self.auto_camel_case:
38 name = to_camel_case(name)
39
40 return name
41
42 def from_type(
43 self,
44 type_: Union[StrawberryType, StrawberryDirective],
45 ) -> str:
46 if isinstance(type_, (StrawberryDirective, StrawberrySchemaDirective)):
47 return self.from_directive(type_)
48 if isinstance(type_, EnumDefinition): # TODO: Replace with StrawberryEnum
49 return self.from_enum(type_)
50 elif isinstance(type_, StrawberryObjectDefinition):
51 if type_.is_input:
52 return self.from_input_object(type_)
53 if type_.is_interface:
54 return self.from_interface(type_)
55 return self.from_object(type_)
56 elif isinstance(type_, StrawberryUnion):
57 return self.from_union(type_)
58 elif isinstance(type_, ScalarDefinition): # TODO: Replace with StrawberryScalar
59 return self.from_scalar(type_)
60 else:
61 return str(type_)
62
63 def from_argument(self, argument: StrawberryArgument) -> str:
64 return self.get_graphql_name(argument)
65
66 def from_object(self, object_type: StrawberryObjectDefinition) -> str:
67 # if concrete_of is not generic, than this is a subclass of an already
68 # especialized type.
69 if object_type.concrete_of and object_type.concrete_of.is_generic:
70 return self.from_generic(
71 object_type, list(object_type.type_var_map.values())
72 )
73
74 return object_type.name
75
76 def from_input_object(self, input_type: StrawberryObjectDefinition) -> str:
77 return self.from_object(input_type)
78
79 def from_interface(self, interface: StrawberryObjectDefinition) -> str:
80 return self.from_object(interface)
81
82 def from_enum(self, enum: EnumDefinition) -> str:
83 return enum.name
84
85 def from_enum_value(self, enum: EnumDefinition, enum_value: EnumValue) -> str:
86 return enum_value.name
87
88 def from_directive(
89 self, directive: Union[StrawberryDirective, StrawberrySchemaDirective]
90 ) -> str:
91 name = self.get_graphql_name(directive)
92
93 if self.auto_camel_case:
94 # we don't want the first letter to be uppercase for directives
95 return name[0].lower() + name[1:]
96
97 return name
98
99 def from_scalar(self, scalar: ScalarDefinition) -> str:
100 return scalar.name
101
102 def from_field(self, field: StrawberryField) -> str:
103 return self.get_graphql_name(field)
104
105 def from_union(self, union: StrawberryUnion) -> str:
106 if union.graphql_name is not None:
107 return union.graphql_name
108
109 name = ""
110
111 for type_ in union.types:
112 if isinstance(type_, LazyType):
113 type_ = cast("StrawberryType", type_.resolve_type()) # noqa: PLW2901
114
115 if has_object_definition(type_):
116 type_name = self.from_type(type_.__strawberry_definition__)
117 else:
118 # This should only be hit when generating names for type-related
119 # exceptions
120 type_name = self.from_type(type_)
121
122 name += type_name
123
124 return name
125
126 def from_generic(
127 self,
128 generic_type: StrawberryObjectDefinition,
129 types: List[Union[StrawberryType, type]],
130 ) -> str:
131 generic_type_name = generic_type.name
132
133 names: List[str] = []
134
135 for type_ in types:
136 name = self.get_from_type(type_)
137 names.append(name)
138
139 return "".join(names) + generic_type_name
140
141 def get_from_type(self, type_: Union[StrawberryType, type]) -> str:
142 type_ = eval_type(type_)
143
144 if isinstance(type_, LazyType):
145 name = type_.type_name
146 elif isinstance(type_, EnumDefinition):
147 name = type_.name
148 elif isinstance(type_, StrawberryUnion):
149 # TODO: test Generics with unnamed unions
150 assert type_.graphql_name
151
152 name = type_.graphql_name
153 elif isinstance(type_, StrawberryList):
154 name = self.get_from_type(type_.of_type) + "List"
155 elif isinstance(type_, StrawberryOptional):
156 name = self.get_from_type(type_.of_type) + "Optional"
157 elif hasattr(type_, "_scalar_definition"):
158 strawberry_type = type_._scalar_definition
159
160 name = strawberry_type.name
161 elif has_object_definition(type_):
162 strawberry_type = type_.__strawberry_definition__
163
164 if (
165 strawberry_type.is_generic
166 and not strawberry_type.is_specialized_generic
167 ):
168 types = type_.__args__ # type: ignore
169 name = self.from_generic(strawberry_type, types)
170 elif (
171 strawberry_type.concrete_of
172 and not strawberry_type.is_specialized_generic
173 ):
174 types = list(strawberry_type.type_var_map.values())
175 name = self.from_generic(strawberry_type, types)
176 else:
177 name = strawberry_type.name
178 else:
179 name = type_.__name__
180
181 return capitalize_first(name)
182
183 def get_graphql_name(self, obj: HasGraphQLName) -> str:
184 if obj.graphql_name is not None:
185 return obj.graphql_name
186
187 assert obj.python_name
188
189 return self.apply_naming_config(obj.python_name)
190
[end of strawberry/schema/name_converter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/schema/name_converter.py b/strawberry/schema/name_converter.py
--- a/strawberry/schema/name_converter.py
+++ b/strawberry/schema/name_converter.py
@@ -146,10 +146,7 @@
elif isinstance(type_, EnumDefinition):
name = type_.name
elif isinstance(type_, StrawberryUnion):
- # TODO: test Generics with unnamed unions
- assert type_.graphql_name
-
- name = type_.graphql_name
+ name = type_.graphql_name if type_.graphql_name else self.from_union(type_)
elif isinstance(type_, StrawberryList):
name = self.get_from_type(type_.of_type) + "List"
elif isinstance(type_, StrawberryOptional):
| {"golden_diff": "diff --git a/strawberry/schema/name_converter.py b/strawberry/schema/name_converter.py\n--- a/strawberry/schema/name_converter.py\n+++ b/strawberry/schema/name_converter.py\n@@ -146,10 +146,7 @@\n elif isinstance(type_, EnumDefinition):\n name = type_.name\n elif isinstance(type_, StrawberryUnion):\n- # TODO: test Generics with unnamed unions\n- assert type_.graphql_name\n-\n- name = type_.graphql_name\n+ name = type_.graphql_name if type_.graphql_name else self.from_union(type_)\n elif isinstance(type_, StrawberryList):\n name = self.get_from_type(type_.of_type) + \"List\"\n elif isinstance(type_, StrawberryOptional):\n", "issue": "Unable to use Unions with Generics\n## Describe the Bug\r\n\r\nNot sure if it is a bug or something non supported but when passing a union to a generic type strawberry is unable to initialize the schema.\r\n\r\nThe following example would work using a single strawberry type with Connection, but it fails when using an union \r\n\r\n\r\n```python\r\nfrom typing import Generic, TypeVar, Union\r\nimport strawberry\r\n\r\nT = TypeVar(\"T\")\r\n\r\n\r\[email protected]\r\nclass Edge(Generic[T]):\r\n cursor: str\r\n node: T\r\n\r\n\r\[email protected]\r\nclass Connection(Generic[T]):\r\n edges: list[\"Edge[T]\"]\r\n\r\n\r\[email protected]\r\nclass Entity1:\r\n id: int\r\n\r\n\r\[email protected]\r\nclass Entity2:\r\n id: int\r\n\r\n\r\nEntities = Union[Entity1, Entity2]\r\n\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n def entities(self) -> Connection[Entities]:\r\n return Connection(\r\n edges=[\r\n Edge(\r\n cursor=\"1\",\r\n node=Entity1(id=1),\r\n ),\r\n Edge(\r\n cursor=\"2\",\r\n node=Entity2(id=2),\r\n ),\r\n ],\r\n )\r\n\r\n\r\nschema = strawberry.Schema(Query)\r\nprint(schema.execute_sync(\"{ entities { __typename } }\"))\r\n\r\n```\r\nerror\r\n```\r\nraise cls(f\"{self.name} fields cannot be resolved. 
{error}\") from error\r\nTypeError: Query fields cannot be resolved.\r\n```\r\n\r\n\r\n## System Information\r\n\r\n - Operating system:\r\n - Strawberry version (if applicable): 0.208.1\r\n\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/3098\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/3098/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, List, Optional, Union, cast\nfrom typing_extensions import Protocol\n\nfrom strawberry.custom_scalar import ScalarDefinition\nfrom strawberry.directive import StrawberryDirective\nfrom strawberry.enum import EnumDefinition, EnumValue\nfrom strawberry.lazy_type import LazyType\nfrom strawberry.schema_directive import StrawberrySchemaDirective\nfrom strawberry.type import (\n StrawberryList,\n StrawberryOptional,\n has_object_definition,\n)\nfrom strawberry.types.types import StrawberryObjectDefinition\nfrom strawberry.union import StrawberryUnion\nfrom strawberry.utils.str_converters import capitalize_first, to_camel_case\nfrom strawberry.utils.typing import eval_type\n\nif TYPE_CHECKING:\n from strawberry.arguments import StrawberryArgument\n from strawberry.field import StrawberryField\n from strawberry.type import StrawberryType\n\n\nclass HasGraphQLName(Protocol):\n python_name: str\n graphql_name: Optional[str]\n\n\nclass NameConverter:\n def __init__(self, auto_camel_case: bool = True) -> None:\n self.auto_camel_case = auto_camel_case\n\n def apply_naming_config(self, name: str) -> str:\n if self.auto_camel_case:\n name = to_camel_case(name)\n\n return name\n\n def from_type(\n self,\n type_: Union[StrawberryType, StrawberryDirective],\n ) -> str:\n if isinstance(type_, (StrawberryDirective, StrawberrySchemaDirective)):\n return self.from_directive(type_)\n if isinstance(type_, EnumDefinition): # TODO: Replace with StrawberryEnum\n return self.from_enum(type_)\n elif isinstance(type_, StrawberryObjectDefinition):\n if type_.is_input:\n return self.from_input_object(type_)\n if type_.is_interface:\n return self.from_interface(type_)\n return self.from_object(type_)\n elif isinstance(type_, StrawberryUnion):\n return self.from_union(type_)\n elif isinstance(type_, ScalarDefinition): # TODO: Replace with StrawberryScalar\n return self.from_scalar(type_)\n else:\n return str(type_)\n\n def from_argument(self, argument: StrawberryArgument) -> str:\n return self.get_graphql_name(argument)\n\n def from_object(self, object_type: StrawberryObjectDefinition) -> str:\n # if concrete_of is not generic, than this is a subclass of an already\n # especialized type.\n if object_type.concrete_of and object_type.concrete_of.is_generic:\n return self.from_generic(\n object_type, list(object_type.type_var_map.values())\n )\n\n return object_type.name\n\n def from_input_object(self, input_type: StrawberryObjectDefinition) -> str:\n return self.from_object(input_type)\n\n def from_interface(self, interface: 
StrawberryObjectDefinition) -> str:\n return self.from_object(interface)\n\n def from_enum(self, enum: EnumDefinition) -> str:\n return enum.name\n\n def from_enum_value(self, enum: EnumDefinition, enum_value: EnumValue) -> str:\n return enum_value.name\n\n def from_directive(\n self, directive: Union[StrawberryDirective, StrawberrySchemaDirective]\n ) -> str:\n name = self.get_graphql_name(directive)\n\n if self.auto_camel_case:\n # we don't want the first letter to be uppercase for directives\n return name[0].lower() + name[1:]\n\n return name\n\n def from_scalar(self, scalar: ScalarDefinition) -> str:\n return scalar.name\n\n def from_field(self, field: StrawberryField) -> str:\n return self.get_graphql_name(field)\n\n def from_union(self, union: StrawberryUnion) -> str:\n if union.graphql_name is not None:\n return union.graphql_name\n\n name = \"\"\n\n for type_ in union.types:\n if isinstance(type_, LazyType):\n type_ = cast(\"StrawberryType\", type_.resolve_type()) # noqa: PLW2901\n\n if has_object_definition(type_):\n type_name = self.from_type(type_.__strawberry_definition__)\n else:\n # This should only be hit when generating names for type-related\n # exceptions\n type_name = self.from_type(type_)\n\n name += type_name\n\n return name\n\n def from_generic(\n self,\n generic_type: StrawberryObjectDefinition,\n types: List[Union[StrawberryType, type]],\n ) -> str:\n generic_type_name = generic_type.name\n\n names: List[str] = []\n\n for type_ in types:\n name = self.get_from_type(type_)\n names.append(name)\n\n return \"\".join(names) + generic_type_name\n\n def get_from_type(self, type_: Union[StrawberryType, type]) -> str:\n type_ = eval_type(type_)\n\n if isinstance(type_, LazyType):\n name = type_.type_name\n elif isinstance(type_, EnumDefinition):\n name = type_.name\n elif isinstance(type_, StrawberryUnion):\n # TODO: test Generics with unnamed unions\n assert type_.graphql_name\n\n name = type_.graphql_name\n elif isinstance(type_, StrawberryList):\n name = self.get_from_type(type_.of_type) + \"List\"\n elif isinstance(type_, StrawberryOptional):\n name = self.get_from_type(type_.of_type) + \"Optional\"\n elif hasattr(type_, \"_scalar_definition\"):\n strawberry_type = type_._scalar_definition\n\n name = strawberry_type.name\n elif has_object_definition(type_):\n strawberry_type = type_.__strawberry_definition__\n\n if (\n strawberry_type.is_generic\n and not strawberry_type.is_specialized_generic\n ):\n types = type_.__args__ # type: ignore\n name = self.from_generic(strawberry_type, types)\n elif (\n strawberry_type.concrete_of\n and not strawberry_type.is_specialized_generic\n ):\n types = list(strawberry_type.type_var_map.values())\n name = self.from_generic(strawberry_type, types)\n else:\n name = strawberry_type.name\n else:\n name = type_.__name__\n\n return capitalize_first(name)\n\n def get_graphql_name(self, obj: HasGraphQLName) -> str:\n if obj.graphql_name is not None:\n return obj.graphql_name\n\n assert obj.python_name\n\n return self.apply_naming_config(obj.python_name)\n", "path": "strawberry/schema/name_converter.py"}]} | 2,895 | 163 |
gh_patches_debug_16313 | rasdani/github-patches | git_diff | searxng__searxng-1380 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
move donation page to docs.searxng.org and link to it from instances
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
```
3a75d3c1ccbf979a483b4e5b209a59db9876ba33; searxng master
```
**How did you install SearXNG?**
with docker
**What happened?**
There is a donation page on the instance.
**Expected behavior**
The donation page should be in the official documentation and linked to by instances instead of having it on the instance. Also: There should be more information about who is receiving the money and what it is used for.
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->
**Additional context**
Suggestion in matrix, since donation page on instances could mean legal trouble.
</issue>
<code>
[start of searx/infopage/__init__.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 # pyright: basic
4 """Render SearXNG instance documentation.
5
6 Usage in a Flask app route:
7
8 .. code:: python
9
10 from searx import infopage
11
12 _INFO_PAGES = infopage.InfoPageSet(infopage.MistletoePage)
13
14 @app.route('/info/<pagename>', methods=['GET'])
15 def info(pagename):
16
17 locale = request.preferences.get_value('locale')
18 page = _INFO_PAGES.get_page(pagename, locale)
19
20 """
21
22 __all__ = ['InfoPage', 'InfoPageSet']
23
24 import os
25 import os.path
26 import logging
27 import typing
28
29 import urllib.parse
30 import jinja2
31 from flask.helpers import url_for
32 from markdown_it import MarkdownIt
33
34 from .. import get_setting
35 from ..compat import cached_property
36 from ..version import GIT_URL
37 from ..locales import LOCALE_NAMES
38
39
40 logger = logging.getLogger('searx.infopage')
41 _INFO_FOLDER = os.path.abspath(os.path.dirname(__file__))
42
43
44 class InfoPage:
45 """A page of the :py:obj:`online documentation <InfoPageSet>`."""
46
47 def __init__(self, fname):
48 self.fname = fname
49
50 @cached_property
51 def raw_content(self):
52 """Raw content of the page (without any jinja rendering)"""
53 with open(self.fname, 'r', encoding='utf-8') as f:
54 return f.read()
55
56 @cached_property
57 def content(self):
58 """Content of the page (rendered in a Jinja conntext)"""
59 ctx = self.get_ctx()
60 template = jinja2.Environment().from_string(self.raw_content)
61 return template.render(**ctx)
62
63 @cached_property
64 def title(self):
65 """Title of the content (without any markup)"""
66 t = ""
67 for l in self.raw_content.split('\n'):
68 if l.startswith('# '):
69 t = l.strip('# ')
70 return t
71
72 @cached_property
73 def html(self):
74 """Render Markdown (CommonMark_) to HTML by using markdown-it-py_.
75
76 .. _CommonMark: https://commonmark.org/
77 .. _markdown-it-py: https://github.com/executablebooks/markdown-it-py
78
79 """
80 return (
81 MarkdownIt("commonmark", {"typographer": True}).enable(["replacements", "smartquotes"]).render(self.content)
82 )
83
84 def get_ctx(self):
85 """Jinja context to render :py:obj:`InfoPage.content`"""
86
87 def _md_link(name, url):
88 url = url_for(url, _external=True)
89 return "[%s](%s)" % (name, url)
90
91 def _md_search(query):
92 url = '%s?q=%s' % (url_for('search', _external=True), urllib.parse.quote(query))
93 return '[%s](%s)' % (query, url)
94
95 ctx = {}
96 ctx['GIT_URL'] = GIT_URL
97 ctx['get_setting'] = get_setting
98 ctx['link'] = _md_link
99 ctx['search'] = _md_search
100
101 return ctx
102
103 def __repr__(self):
104 return f'<{self.__class__.__name__} fname={self.fname!r}>'
105
106
107 class InfoPageSet: # pylint: disable=too-few-public-methods
108 """Cached rendering of the online documentation a SearXNG instance has.
109
110 :param page_class: render online documentation by :py:obj:`InfoPage` parser.
111 :type page_class: :py:obj:`InfoPage`
112
113 :param info_folder: information directory
114 :type info_folder: str
115 """
116
117 def __init__(
118 self, page_class: typing.Optional[typing.Type[InfoPage]] = None, info_folder: typing.Optional[str] = None
119 ):
120 self.page_class = page_class or InfoPage
121 self.folder: str = info_folder or _INFO_FOLDER
122 """location of the Markdwon files"""
123
124 self.CACHE: typing.Dict[tuple, typing.Optional[InfoPage]] = {}
125
126 self.locale_default: str = 'en'
127 """default language"""
128
129 self.locales: typing.List[str] = [
130 locale.replace('_', '-') for locale in os.listdir(_INFO_FOLDER) if locale.replace('_', '-') in LOCALE_NAMES
131 ]
132 """list of supported languages (aka locales)"""
133
134 self.toc: typing.List[str] = [
135 'search-syntax',
136 'about',
137 'donate',
138 ]
139 """list of articles in the online documentation"""
140
141 def get_page(self, pagename: str, locale: typing.Optional[str] = None):
142 """Return ``pagename`` instance of :py:obj:`InfoPage`
143
144 :param pagename: name of the page, a value from :py:obj:`InfoPageSet.toc`
145 :type pagename: str
146
147 :param locale: language of the page, e.g. ``en``, ``zh_Hans_CN``
148 (default: :py:obj:`InfoPageSet.i18n_origin`)
149 :type locale: str
150
151 """
152 locale = locale or self.locale_default
153
154 if pagename not in self.toc:
155 return None
156 if locale not in self.locales:
157 return None
158
159 cache_key = (pagename, locale)
160 page = self.CACHE.get(cache_key)
161
162 if page is not None:
163 return page
164
165 # not yet instantiated
166
167 fname = os.path.join(self.folder, locale.replace('-', '_'), pagename) + '.md'
168 if not os.path.exists(fname):
169 logger.info('file %s does not exists', fname)
170 self.CACHE[cache_key] = None
171 return None
172
173 page = self.page_class(fname)
174 self.CACHE[cache_key] = page
175 return page
176
177 def iter_pages(self, locale: typing.Optional[str] = None, fallback_to_default=False):
178 """Iterate over all pages of the TOC"""
179 locale = locale or self.locale_default
180 for page_name in self.toc:
181 page_locale = locale
182 page = self.get_page(page_name, locale)
183 if fallback_to_default and page is None:
184 page_locale = self.locale_default
185 page = self.get_page(page_name, self.locale_default)
186 yield page_name, page_locale, page
187
[end of searx/infopage/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/infopage/__init__.py b/searx/infopage/__init__.py
--- a/searx/infopage/__init__.py
+++ b/searx/infopage/__init__.py
@@ -157,10 +157,9 @@
return None
cache_key = (pagename, locale)
- page = self.CACHE.get(cache_key)
- if page is not None:
- return page
+ if cache_key in self.CACHE:
+ return self.CACHE[cache_key]
# not yet instantiated
@@ -183,4 +182,6 @@
if fallback_to_default and page is None:
page_locale = self.locale_default
page = self.get_page(page_name, self.locale_default)
- yield page_name, page_locale, page
+ if page is not None:
+ # page is None if the page was deleted by the administrator
+ yield page_name, page_locale, page
| {"golden_diff": "diff --git a/searx/infopage/__init__.py b/searx/infopage/__init__.py\n--- a/searx/infopage/__init__.py\n+++ b/searx/infopage/__init__.py\n@@ -157,10 +157,9 @@\n return None\n \n cache_key = (pagename, locale)\n- page = self.CACHE.get(cache_key)\n \n- if page is not None:\n- return page\n+ if cache_key in self.CACHE:\n+ return self.CACHE[cache_key]\n \n # not yet instantiated\n \n@@ -183,4 +182,6 @@\n if fallback_to_default and page is None:\n page_locale = self.locale_default\n page = self.get_page(page_name, self.locale_default)\n- yield page_name, page_locale, page\n+ if page is not None:\n+ # page is None if the page was deleted by the administrator\n+ yield page_name, page_locale, page\n", "issue": "move donation page to docs.searxng.org and link to it from instances\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n```\r\n3a75d3c1ccbf979a483b4e5b209a59db9876ba33; searxng master\r\n``` \r\n\r\n**How did you install SearXNG?**\r\n\r\nwith docker\r\n\r\n**What happened?**\r\nThere is a donation page on the instance.\r\n\r\n**Expected behavior**\r\nThe donation page should be in the official documentation and linked to by instances instead of having it on the instance. Also: There should be more information about who is receiving the money and what it is used for.\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\nSuggestion in matrix, since donation page on instances could mean legal trouble.\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n# pyright: basic\n\"\"\"Render SearXNG instance documentation.\n\nUsage in a Flask app route:\n\n.. code:: python\n\n from searx import infopage\n\n _INFO_PAGES = infopage.InfoPageSet(infopage.MistletoePage)\n\n @app.route('/info/<pagename>', methods=['GET'])\n def info(pagename):\n\n locale = request.preferences.get_value('locale')\n page = _INFO_PAGES.get_page(pagename, locale)\n\n\"\"\"\n\n__all__ = ['InfoPage', 'InfoPageSet']\n\nimport os\nimport os.path\nimport logging\nimport typing\n\nimport urllib.parse\nimport jinja2\nfrom flask.helpers import url_for\nfrom markdown_it import MarkdownIt\n\nfrom .. import get_setting\nfrom ..compat import cached_property\nfrom ..version import GIT_URL\nfrom ..locales import LOCALE_NAMES\n\n\nlogger = logging.getLogger('searx.infopage')\n_INFO_FOLDER = os.path.abspath(os.path.dirname(__file__))\n\n\nclass InfoPage:\n \"\"\"A page of the :py:obj:`online documentation <InfoPageSet>`.\"\"\"\n\n def __init__(self, fname):\n self.fname = fname\n\n @cached_property\n def raw_content(self):\n \"\"\"Raw content of the page (without any jinja rendering)\"\"\"\n with open(self.fname, 'r', encoding='utf-8') as f:\n return f.read()\n\n @cached_property\n def content(self):\n \"\"\"Content of the page (rendered in a Jinja conntext)\"\"\"\n ctx = self.get_ctx()\n template = jinja2.Environment().from_string(self.raw_content)\n return template.render(**ctx)\n\n @cached_property\n def title(self):\n \"\"\"Title of the content (without any markup)\"\"\"\n t = \"\"\n for l in self.raw_content.split('\\n'):\n if l.startswith('# '):\n t = l.strip('# ')\n return t\n\n @cached_property\n def html(self):\n \"\"\"Render Markdown (CommonMark_) to HTML by using markdown-it-py_.\n\n .. _CommonMark: https://commonmark.org/\n .. 
_markdown-it-py: https://github.com/executablebooks/markdown-it-py\n\n \"\"\"\n return (\n MarkdownIt(\"commonmark\", {\"typographer\": True}).enable([\"replacements\", \"smartquotes\"]).render(self.content)\n )\n\n def get_ctx(self):\n \"\"\"Jinja context to render :py:obj:`InfoPage.content`\"\"\"\n\n def _md_link(name, url):\n url = url_for(url, _external=True)\n return \"[%s](%s)\" % (name, url)\n\n def _md_search(query):\n url = '%s?q=%s' % (url_for('search', _external=True), urllib.parse.quote(query))\n return '[%s](%s)' % (query, url)\n\n ctx = {}\n ctx['GIT_URL'] = GIT_URL\n ctx['get_setting'] = get_setting\n ctx['link'] = _md_link\n ctx['search'] = _md_search\n\n return ctx\n\n def __repr__(self):\n return f'<{self.__class__.__name__} fname={self.fname!r}>'\n\n\nclass InfoPageSet: # pylint: disable=too-few-public-methods\n \"\"\"Cached rendering of the online documentation a SearXNG instance has.\n\n :param page_class: render online documentation by :py:obj:`InfoPage` parser.\n :type page_class: :py:obj:`InfoPage`\n\n :param info_folder: information directory\n :type info_folder: str\n \"\"\"\n\n def __init__(\n self, page_class: typing.Optional[typing.Type[InfoPage]] = None, info_folder: typing.Optional[str] = None\n ):\n self.page_class = page_class or InfoPage\n self.folder: str = info_folder or _INFO_FOLDER\n \"\"\"location of the Markdwon files\"\"\"\n\n self.CACHE: typing.Dict[tuple, typing.Optional[InfoPage]] = {}\n\n self.locale_default: str = 'en'\n \"\"\"default language\"\"\"\n\n self.locales: typing.List[str] = [\n locale.replace('_', '-') for locale in os.listdir(_INFO_FOLDER) if locale.replace('_', '-') in LOCALE_NAMES\n ]\n \"\"\"list of supported languages (aka locales)\"\"\"\n\n self.toc: typing.List[str] = [\n 'search-syntax',\n 'about',\n 'donate',\n ]\n \"\"\"list of articles in the online documentation\"\"\"\n\n def get_page(self, pagename: str, locale: typing.Optional[str] = None):\n \"\"\"Return ``pagename`` instance of :py:obj:`InfoPage`\n\n :param pagename: name of the page, a value from :py:obj:`InfoPageSet.toc`\n :type pagename: str\n\n :param locale: language of the page, e.g. ``en``, ``zh_Hans_CN``\n (default: :py:obj:`InfoPageSet.i18n_origin`)\n :type locale: str\n\n \"\"\"\n locale = locale or self.locale_default\n\n if pagename not in self.toc:\n return None\n if locale not in self.locales:\n return None\n\n cache_key = (pagename, locale)\n page = self.CACHE.get(cache_key)\n\n if page is not None:\n return page\n\n # not yet instantiated\n\n fname = os.path.join(self.folder, locale.replace('-', '_'), pagename) + '.md'\n if not os.path.exists(fname):\n logger.info('file %s does not exists', fname)\n self.CACHE[cache_key] = None\n return None\n\n page = self.page_class(fname)\n self.CACHE[cache_key] = page\n return page\n\n def iter_pages(self, locale: typing.Optional[str] = None, fallback_to_default=False):\n \"\"\"Iterate over all pages of the TOC\"\"\"\n locale = locale or self.locale_default\n for page_name in self.toc:\n page_locale = locale\n page = self.get_page(page_name, locale)\n if fallback_to_default and page is None:\n page_locale = self.locale_default\n page = self.get_page(page_name, self.locale_default)\n yield page_name, page_locale, page\n", "path": "searx/infopage/__init__.py"}]} | 2,632 | 227 |
gh_patches_debug_34820 | rasdani/github-patches | git_diff | falconry__falcon-57 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test: Add Unicode chars to logging test
</issue>
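(Editor's note, illustration only and not part of the original issue: the point of exercising `log_error` with non-ASCII text is that, under Python 2, writing a `unicode` message containing non-ASCII characters straight to a byte-oriented `wsgi.errors` stream can fail unless the message is encoded first. Below is a minimal, hedged sketch of that pattern; the helper name is made up and is not part of falcon.)

```python
# Sketch: encode unicode text to UTF-8 before writing to a bytes stream on
# Python 2; on Python 3 the message is already text and is written as-is.
import io
import six

def write_log_line(stream, message):
    if not six.PY3 and isinstance(message, six.text_type):
        message = message.encode('utf-8')
    stream.write(message)

errors = io.StringIO() if six.PY3 else io.BytesIO()
write_log_line(errors, u'error with unicode: \u00e9\u00e8\u00ea')  # should not raise
```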
<code>
[start of falcon/request.py]
1 """Defines the Request class.
2
3 Copyright 2013 by Rackspace Hosting, Inc.
4
5 Licensed under the Apache License, Version 2.0 (the "License");
6 you may not use this file except in compliance with the License.
7 You may obtain a copy of the License at
8
9 http://www.apache.org/licenses/LICENSE-2.0
10
11 Unless required by applicable law or agreed to in writing, software
12 distributed under the License is distributed on an "AS IS" BASIS,
13 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 See the License for the specific language governing permissions and
15 limitations under the License.
16
17 """
18
19 import sys
20 from datetime import datetime
21
22 from falcon.request_helpers import *
23 from falcon.exceptions import *
24 import six
25
26
27 class Request(object):
28 """Represents a client's HTTP request"""
29
30 __slots__ = (
31 'app',
32 'body',
33 '_headers',
34 'method',
35 '_params',
36 'path',
37 'protocol',
38 'query_string',
39 '_wsgierrors'
40 )
41
42 def __init__(self, env):
43 """Initialize attributes based on a WSGI environment dict
44
45 Note: Request is not meant to be instantiated directory by responders.
46
47 Args:
48 env: A WSGI environment dict passed in from the server. See also
49 the PEP-333 spec.
50
51 """
52
53 self.app = env['SCRIPT_NAME']
54 self.body = env['wsgi.input']
55 self.method = env['REQUEST_METHOD']
56 self.path = env['PATH_INFO'] or '/'
57 self.protocol = env['wsgi.url_scheme']
58 self.query_string = query_string = env['QUERY_STRING']
59 self._params = parse_query_string(query_string)
60 self._headers = parse_headers(env)
61 self._wsgierrors = env['wsgi.errors']
62
63 def log_error(self, message):
64 """Log an error to wsgi.error
65
66 Prepends timestamp and request info to message, and writes the result
67 out to the WSGI server's error stream (wsgi.error).
68
69 Args:
70 message: A string describing the problem. If a byte-string and
71 running under Python 2, the string is assumed to be encoded
72 as UTF-8.
73
74 """
75 u = six.text_type
76 log_line = (
77 u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\n').
78 format(datetime.now(), self.method, self.path, self.query_string,
79 message)
80 )
81
82 self._wsgierrors.write(log_line)
83
84 def client_accepts_json(self):
85 """Return True if the Accept header indicates JSON support"""
86
87 accept = self.get_header('Accept')
88 if accept is not None:
89 return ('application/json' in accept) or ('*/*' in accept)
90
91 return False
92
93 def get_header(self, name, default=None, required=False):
94 """Return a header value as a string
95
96 Args:
97 name: Header name, case-insensitive (e.g., 'Content-Type')
98 default: Value to return in case the header is not
99 found (default None)
100 required: Set to True to raise HttpBadRequest instead
101 of returning gracefully when the header is not found
102 (default False)
103
104 """
105
106 # Use try..except to optimize for the header existing in most cases
107 try:
108 # Don't take the time to cache beforehand, using HTTP naming.
109 # This will be faster, assuming that most headers are looked
110 # up only once, and not all headers will be requested.
111 return self._headers[name.upper().replace('-', '_')]
112 except KeyError:
113 if not required:
114 return default
115
116 raise HTTPBadRequest('Missing header',
117 'The "' + name + '" header is required.')
118
119 def get_param(self, name, default=None, required=False):
120 """Return the value of a query string parameter as a string
121
122 Args:
123 name: Parameter name, case-sensitive (e.g., 'sort')
124 default: Value to return in case the parameter is not found in the
125 query string (default None)
126 required: Set to True to raise HTTPBadRequest instead of returning
127 gracefully when the parameter is not found (default False)
128
129 Returns:
130 The value of the param as a byte string, or the default value if
131 param is not found and is not required.
132
133 Raises
134 HTTPBadRequest: The param was not found in the request, but was
135 required.
136
137 """
138
139 # PERF: Use if..in since it is a good all-around performer; we don't
140 # know how likely params are to be specified by clients.
141 if name in self._params:
142 return self._params[name]
143
144 if not required:
145 return default
146
147 raise HTTPBadRequest('Missing query parameter',
148 'The "' + name + '" query parameter is required.')
149
150 def get_param_as_int(self, name, default=None, required=False):
151 """Return the value of a query string parameter as an int
152
153 Args:
154 name: Parameter name, case-sensitive (e.g., 'limit')
155 default: Value to return in case the parameter is not found in the
156 query string, or it is not an integer (default None)
157 required: Set to True to raise HTTPBadRequest instead of returning
158 gracefully when the parameter is not found or is not an
159 integer (default False)
160
161 Returns:
162 The value of the param if it is found and can be converted to an
163 integer. Otherwise, returns the default value unless required is
164 True.
165
166 Raises
167 HTTPBadRequest: The param was not found in the request, but was
168 required.
169
170 """
171
172 # PERF: Use if..in since it is a good all-around performer; we don't
173 # know how likely params are to be specified by clients.
174 if name in self._params:
175 val = self._params[name]
176 try:
177 return int(val)
178 except ValueError:
179 pass
180
181 if not required:
182 return default
183
184 raise HTTPBadRequest('Missing query parameter',
185 'The "' + name + '" query parameter is required.')
186
[end of falcon/request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/falcon/request.py b/falcon/request.py
--- a/falcon/request.py
+++ b/falcon/request.py
@@ -16,12 +16,15 @@
"""
-import sys
from datetime import datetime
+import six
+
from falcon.request_helpers import *
from falcon.exceptions import *
-import six
+
+DEFAULT_ERROR_LOG_FORMAT = ('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'
+ ' {1} {2}?{3} => {4}\n')
class Request(object):
@@ -50,21 +53,23 @@
"""
- self.app = env['SCRIPT_NAME']
+ self._wsgierrors = env['wsgi.errors']
self.body = env['wsgi.input']
+
+ self.protocol = env['wsgi.url_scheme']
+ self.app = env['SCRIPT_NAME']
self.method = env['REQUEST_METHOD']
self.path = env['PATH_INFO'] or '/'
- self.protocol = env['wsgi.url_scheme']
self.query_string = query_string = env['QUERY_STRING']
+
self._params = parse_query_string(query_string)
self._headers = parse_headers(env)
- self._wsgierrors = env['wsgi.errors']
def log_error(self, message):
"""Log an error to wsgi.error
- Prepends timestamp and request info to message, and writes the result
- out to the WSGI server's error stream (wsgi.error).
+ Prepends timestamp and request info to message, and writes the
+ result out to the WSGI server's error stream (wsgi.error).
Args:
message: A string describing the problem. If a byte-string and
@@ -72,11 +77,13 @@
as UTF-8.
"""
- u = six.text_type
+ if not six.PY3 and isinstance(message, unicode):
+ message = message.encode('utf-8')
+
log_line = (
- u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\n').
- format(datetime.now(), self.method, self.path, self.query_string,
- message)
+ DEFAULT_ERROR_LOG_FORMAT.
+ format(datetime.now(), self.method, self.path,
+ self.query_string, message)
)
self._wsgierrors.write(log_line)
| {"golden_diff": "diff --git a/falcon/request.py b/falcon/request.py\n--- a/falcon/request.py\n+++ b/falcon/request.py\n@@ -16,12 +16,15 @@\n \n \"\"\"\n \n-import sys\n from datetime import datetime\n \n+import six\n+\n from falcon.request_helpers import *\n from falcon.exceptions import *\n-import six\n+\n+DEFAULT_ERROR_LOG_FORMAT = ('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n+ ' {1} {2}?{3} => {4}\\n')\n \n \n class Request(object):\n@@ -50,21 +53,23 @@\n \n \"\"\"\n \n- self.app = env['SCRIPT_NAME']\n+ self._wsgierrors = env['wsgi.errors']\n self.body = env['wsgi.input']\n+\n+ self.protocol = env['wsgi.url_scheme']\n+ self.app = env['SCRIPT_NAME']\n self.method = env['REQUEST_METHOD']\n self.path = env['PATH_INFO'] or '/'\n- self.protocol = env['wsgi.url_scheme']\n self.query_string = query_string = env['QUERY_STRING']\n+\n self._params = parse_query_string(query_string)\n self._headers = parse_headers(env)\n- self._wsgierrors = env['wsgi.errors']\n \n def log_error(self, message):\n \"\"\"Log an error to wsgi.error\n \n- Prepends timestamp and request info to message, and writes the result\n- out to the WSGI server's error stream (wsgi.error).\n+ Prepends timestamp and request info to message, and writes the\n+ result out to the WSGI server's error stream (wsgi.error).\n \n Args:\n message: A string describing the problem. If a byte-string and\n@@ -72,11 +77,13 @@\n as UTF-8.\n \n \"\"\"\n- u = six.text_type\n+ if not six.PY3 and isinstance(message, unicode):\n+ message = message.encode('utf-8')\n+\n log_line = (\n- u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\\n').\n- format(datetime.now(), self.method, self.path, self.query_string,\n- message)\n+ DEFAULT_ERROR_LOG_FORMAT.\n+ format(datetime.now(), self.method, self.path,\n+ self.query_string, message)\n )\n \n self._wsgierrors.write(log_line)\n", "issue": "test: Add Unicode chars to logging test\n\n", "before_files": [{"content": "\"\"\"Defines the Request class.\n\nCopyright 2013 by Rackspace Hosting, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\"\"\"\n\nimport sys\nfrom datetime import datetime\n\nfrom falcon.request_helpers import *\nfrom falcon.exceptions import *\nimport six\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request\"\"\"\n\n __slots__ = (\n 'app',\n 'body',\n '_headers',\n 'method',\n '_params',\n 'path',\n 'protocol',\n 'query_string',\n '_wsgierrors'\n )\n\n def __init__(self, env):\n \"\"\"Initialize attributes based on a WSGI environment dict\n\n Note: Request is not meant to be instantiated directory by responders.\n\n Args:\n env: A WSGI environment dict passed in from the server. 
See also\n the PEP-333 spec.\n\n \"\"\"\n\n self.app = env['SCRIPT_NAME']\n self.body = env['wsgi.input']\n self.method = env['REQUEST_METHOD']\n self.path = env['PATH_INFO'] or '/'\n self.protocol = env['wsgi.url_scheme']\n self.query_string = query_string = env['QUERY_STRING']\n self._params = parse_query_string(query_string)\n self._headers = parse_headers(env)\n self._wsgierrors = env['wsgi.errors']\n\n def log_error(self, message):\n \"\"\"Log an error to wsgi.error\n\n Prepends timestamp and request info to message, and writes the result\n out to the WSGI server's error stream (wsgi.error).\n\n Args:\n message: A string describing the problem. If a byte-string and\n running under Python 2, the string is assumed to be encoded\n as UTF-8.\n\n \"\"\"\n u = six.text_type\n log_line = (\n u('{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR] {1} {2}?{3} => {4}\\n').\n format(datetime.now(), self.method, self.path, self.query_string,\n message)\n )\n\n self._wsgierrors.write(log_line)\n\n def client_accepts_json(self):\n \"\"\"Return True if the Accept header indicates JSON support\"\"\"\n\n accept = self.get_header('Accept')\n if accept is not None:\n return ('application/json' in accept) or ('*/*' in accept)\n\n return False\n\n def get_header(self, name, default=None, required=False):\n \"\"\"Return a header value as a string\n\n Args:\n name: Header name, case-insensitive (e.g., 'Content-Type')\n default: Value to return in case the header is not\n found (default None)\n required: Set to True to raise HttpBadRequest instead\n of returning gracefully when the header is not found\n (default False)\n\n \"\"\"\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self._headers[name.upper().replace('-', '_')]\n except KeyError:\n if not required:\n return default\n\n raise HTTPBadRequest('Missing header',\n 'The \"' + name + '\" header is required.')\n\n def get_param(self, name, default=None, required=False):\n \"\"\"Return the value of a query string parameter as a string\n\n Args:\n name: Parameter name, case-sensitive (e.g., 'sort')\n default: Value to return in case the parameter is not found in the\n query string (default None)\n required: Set to True to raise HTTPBadRequest instead of returning\n gracefully when the parameter is not found (default False)\n\n Returns:\n The value of the param as a byte string, or the default value if\n param is not found and is not required.\n\n Raises\n HTTPBadRequest: The param was not found in the request, but was\n required.\n\n \"\"\"\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in self._params:\n return self._params[name]\n\n if not required:\n return default\n\n raise HTTPBadRequest('Missing query parameter',\n 'The \"' + name + '\" query parameter is required.')\n\n def get_param_as_int(self, name, default=None, required=False):\n \"\"\"Return the value of a query string parameter as an int\n\n Args:\n name: Parameter name, case-sensitive (e.g., 'limit')\n default: Value to return in case the parameter is not found in the\n query string, or it is not an integer (default None)\n required: Set to True to raise HTTPBadRequest instead of returning\n gracefully when the parameter is not found or is not an\n integer (default False)\n\n Returns:\n The 
value of the param if it is found and can be converted to an\n integer. Otherwise, returns the default value unless required is\n True.\n\n Raises\n HTTPBadRequest: The param was not found in the request, but was\n required.\n\n \"\"\"\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in self._params:\n val = self._params[name]\n try:\n return int(val)\n except ValueError:\n pass\n\n if not required:\n return default\n\n raise HTTPBadRequest('Missing query parameter',\n 'The \"' + name + '\" query parameter is required.')\n", "path": "falcon/request.py"}]} | 2,342 | 550 |
gh_patches_debug_1202 | rasdani/github-patches | git_diff | openvinotoolkit__datumaro-125 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The inference result passed from the OpenVINO launcher to the interpreter is not what the interpreter expects.
I tried running a model with OpenVINO's mobilenet-v2-pytorch model
(using mobilenet-v2-pytorch.xml and mobilenet-v2-pytorch.bin):
`datum model run -p proj -m model-0`
However, only the name of the output layer (e.g. the string 'prob') arrives in the interpreter's `outputs` parameter. Please check the return value of OpenvinoLauncher.infer:
`results = self._net.infer(inputs)` (line 178, openvino_launcher.py)
Debugging shows the results are normal up to the line above, but it seems that only the name of the result layer is returned when the value is passed on to the interpreter.
</issue>
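(Editor's note, illustration only and not part of the original report: OpenVINO's `infer()` returns a dict that maps output layer names to arrays, so iterating over the dict yields its keys, i.e. the layer names. A minimal sketch with a dummy result dict:)

```python
import numpy as np

# self._net.infer(...) returns {output_layer_name: ndarray}, e.g. for a
# classifier something like {'prob': array of shape (1, 1000)}.
results = {'prob': np.zeros((1, 1000), dtype=np.float32)}

next(iter(results))           # -> 'prob': just the layer name string
next(iter(results.values()))  # -> the actual output ndarray
```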
<code>
[start of datumaro/plugins/openvino_launcher.py]
1
2 # Copyright (C) 2019-2020 Intel Corporation
3 #
4 # SPDX-License-Identifier: MIT
5
6 # pylint: disable=exec-used
7
8 import cv2
9 import logging as log
10 import numpy as np
11 import os.path as osp
12 import shutil
13
14 from openvino.inference_engine import IECore
15
16 from datumaro.components.cli_plugin import CliPlugin
17 from datumaro.components.launcher import Launcher
18
19
20 class _OpenvinoImporter(CliPlugin):
21 @staticmethod
22 def _parse_output_layers(s):
23 return [s.strip() for s in s.split(',')]
24
25 @classmethod
26 def build_cmdline_parser(cls, **kwargs):
27 parser = super().build_cmdline_parser(**kwargs)
28 parser.add_argument('-d', '--description', required=True,
29 help="Path to the model description file (.xml)")
30 parser.add_argument('-w', '--weights', required=True,
31 help="Path to the model weights file (.bin)")
32 parser.add_argument('-i', '--interpreter', required=True,
33 help="Path to the network output interprter script (.py)")
34 parser.add_argument('--device', default='CPU',
35 help="Target device (default: %(default)s)")
36 parser.add_argument('--output-layers', type=cls._parse_output_layers,
37 help="A comma-separated list of extra output layers")
38 return parser
39
40 @staticmethod
41 def copy_model(model_dir, model):
42 shutil.copy(model['description'],
43 osp.join(model_dir, osp.basename(model['description'])))
44 model['description'] = osp.basename(model['description'])
45
46 shutil.copy(model['weights'],
47 osp.join(model_dir, osp.basename(model['weights'])))
48 model['weights'] = osp.basename(model['weights'])
49
50 shutil.copy(model['interpreter'],
51 osp.join(model_dir, osp.basename(model['interpreter'])))
52 model['interpreter'] = osp.basename(model['interpreter'])
53
54
55 class InterpreterScript:
56 def __init__(self, path):
57 with open(path, 'r') as f:
58 script = f.read()
59
60 context = {}
61 exec(script, context, context)
62
63 process_outputs = context.get('process_outputs')
64 if not callable(process_outputs):
65 raise Exception("Can't find 'process_outputs' function in "
66 "the interpreter script")
67 self.__dict__['process_outputs'] = process_outputs
68
69 get_categories = context.get('get_categories')
70 assert get_categories is None or callable(get_categories)
71 if get_categories:
72 self.__dict__['get_categories'] = get_categories
73
74 @staticmethod
75 def get_categories():
76 return None
77
78 @staticmethod
79 def process_outputs(inputs, outputs):
80 raise NotImplementedError(
81 "Function should be implemented in the interpreter script")
82
83
84 class OpenvinoLauncher(Launcher):
85 cli_plugin = _OpenvinoImporter
86
87 def __init__(self, description, weights, interpreter,
88 device=None, model_dir=None, output_layers=None):
89 if not model_dir:
90 model_dir = ''
91 if not osp.isfile(description):
92 description = osp.join(model_dir, description)
93 if not osp.isfile(description):
94 raise Exception('Failed to open model description file "%s"' % \
95 (description))
96
97 if not osp.isfile(weights):
98 weights = osp.join(model_dir, weights)
99 if not osp.isfile(weights):
100 raise Exception('Failed to open model weights file "%s"' % \
101 (weights))
102
103 if not osp.isfile(interpreter):
104 interpreter = osp.join(model_dir, interpreter)
105 if not osp.isfile(interpreter):
106 raise Exception('Failed to open model interpreter script file "%s"' % \
107 (interpreter))
108
109 self._interpreter = InterpreterScript(interpreter)
110
111 self._device = device or 'CPU'
112 self._output_blobs = output_layers
113
114 self._ie = IECore()
115 self._network = self._ie.read_network(description, weights)
116 self._check_model_support(self._network, self._device)
117 self._load_executable_net()
118
119 def _check_model_support(self, net, device):
120 not_supported_layers = set(name
121 for name, dev in self._ie.query_network(net, device).items()
122 if not dev)
123 if len(not_supported_layers) != 0:
124 log.error("The following layers are not supported " \
125 "by the plugin for device '%s': %s." % \
126 (device, ', '.join(not_supported_layers)))
127 raise NotImplementedError(
128 "Some layers are not supported on the device")
129
130 def _load_executable_net(self, batch_size=1):
131 network = self._network
132
133 if self._output_blobs:
134 network.add_outputs(self._output_blobs)
135
136 iter_inputs = iter(network.input_info)
137 self._input_blob = next(iter_inputs)
138
139 # NOTE: handling for the inclusion of `image_info` in OpenVino2019
140 self._require_image_info = 'image_info' in network.input_info
141 if self._input_blob == 'image_info':
142 self._input_blob = next(iter_inputs)
143
144 self._input_layout = network.input_info[self._input_blob].input_data.shape
145 self._input_layout[0] = batch_size
146 network.reshape({self._input_blob: self._input_layout})
147 self._batch_size = batch_size
148
149 self._net = self._ie.load_network(network=network, num_requests=1,
150 device_name=self._device)
151
152 def infer(self, inputs):
153 assert len(inputs.shape) == 4, \
154 "Expected an input image in (N, H, W, C) format, got %s" % \
155 (inputs.shape, )
156
157 if inputs.shape[3] == 1: # A batch of single-channel images
158 inputs = np.repeat(inputs, 3, axis=3)
159
160 assert inputs.shape[3] == 3, \
161 "Expected BGR input, got %s" % (inputs.shape, )
162
163 n, c, h, w = self._input_layout
164 if inputs.shape[1:3] != (h, w):
165 resized_inputs = np.empty((n, h, w, c), dtype=inputs.dtype)
166 for inp, resized_input in zip(inputs, resized_inputs):
167 cv2.resize(inp, (w, h), resized_input)
168 inputs = resized_inputs
169 inputs = inputs.transpose((0, 3, 1, 2)) # NHWC to NCHW
170 inputs = {self._input_blob: inputs}
171 if self._require_image_info:
172 info = np.zeros([1, 3])
173 info[0, 0] = h
174 info[0, 1] = w
175 info[0, 2] = 1.0 # scale
176 inputs['image_info'] = info
177
178 results = self._net.infer(inputs)
179 if len(results) == 1:
180 return next(iter(results))
181 else:
182 return results
183
184 def launch(self, inputs):
185 batch_size = len(inputs)
186 if self._batch_size < batch_size:
187 self._load_executable_net(batch_size)
188
189 outputs = self.infer(inputs)
190 results = self.process_outputs(inputs, outputs)
191 return results
192
193 def categories(self):
194 return self._interpreter.get_categories()
195
196 def process_outputs(self, inputs, outputs):
197 return self._interpreter.process_outputs(inputs, outputs)
198
199
[end of datumaro/plugins/openvino_launcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datumaro/plugins/openvino_launcher.py b/datumaro/plugins/openvino_launcher.py
--- a/datumaro/plugins/openvino_launcher.py
+++ b/datumaro/plugins/openvino_launcher.py
@@ -177,7 +177,7 @@
results = self._net.infer(inputs)
if len(results) == 1:
- return next(iter(results))
+ return next(iter(results.values()))
else:
return results
| {"golden_diff": "diff --git a/datumaro/plugins/openvino_launcher.py b/datumaro/plugins/openvino_launcher.py\n--- a/datumaro/plugins/openvino_launcher.py\n+++ b/datumaro/plugins/openvino_launcher.py\n@@ -177,7 +177,7 @@\n \n results = self._net.infer(inputs)\n if len(results) == 1:\n- return next(iter(results))\n+ return next(iter(results.values()))\n else:\n return results\n", "issue": "infer result passed from openvino launcher to interpreter is not appropriate.\nI tried model run using openvino's mobileenet-v2-pytorch model.\r\n(using mobilenet-v2-pytorch.xml, mobilenet-v2-pytorch.bin)\r\n\r\n`datum model run -p proj -m model-0`\r\n\r\nHowever, only the name of the layer (ex. 'prob' string) comes into the input parameters(outputs) of the interpreter. Please check the return result of OpenvinoLauncher.infer\r\n\r\n`results = self._net.infer(inputs)` line 178, openvino_launcher.py\r\nDebugging results are normal up to the code above, but it seems that only the name of the result layer is returned when returning and passing to interpreter.\n", "before_files": [{"content": "\n# Copyright (C) 2019-2020 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\n# pylint: disable=exec-used\n\nimport cv2\nimport logging as log\nimport numpy as np\nimport os.path as osp\nimport shutil\n\nfrom openvino.inference_engine import IECore\n\nfrom datumaro.components.cli_plugin import CliPlugin\nfrom datumaro.components.launcher import Launcher\n\n\nclass _OpenvinoImporter(CliPlugin):\n @staticmethod\n def _parse_output_layers(s):\n return [s.strip() for s in s.split(',')]\n\n @classmethod\n def build_cmdline_parser(cls, **kwargs):\n parser = super().build_cmdline_parser(**kwargs)\n parser.add_argument('-d', '--description', required=True,\n help=\"Path to the model description file (.xml)\")\n parser.add_argument('-w', '--weights', required=True,\n help=\"Path to the model weights file (.bin)\")\n parser.add_argument('-i', '--interpreter', required=True,\n help=\"Path to the network output interprter script (.py)\")\n parser.add_argument('--device', default='CPU',\n help=\"Target device (default: %(default)s)\")\n parser.add_argument('--output-layers', type=cls._parse_output_layers,\n help=\"A comma-separated list of extra output layers\")\n return parser\n\n @staticmethod\n def copy_model(model_dir, model):\n shutil.copy(model['description'],\n osp.join(model_dir, osp.basename(model['description'])))\n model['description'] = osp.basename(model['description'])\n\n shutil.copy(model['weights'],\n osp.join(model_dir, osp.basename(model['weights'])))\n model['weights'] = osp.basename(model['weights'])\n\n shutil.copy(model['interpreter'],\n osp.join(model_dir, osp.basename(model['interpreter'])))\n model['interpreter'] = osp.basename(model['interpreter'])\n\n\nclass InterpreterScript:\n def __init__(self, path):\n with open(path, 'r') as f:\n script = f.read()\n\n context = {}\n exec(script, context, context)\n\n process_outputs = context.get('process_outputs')\n if not callable(process_outputs):\n raise Exception(\"Can't find 'process_outputs' function in \"\n \"the interpreter script\")\n self.__dict__['process_outputs'] = process_outputs\n\n get_categories = context.get('get_categories')\n assert get_categories is None or callable(get_categories)\n if get_categories:\n self.__dict__['get_categories'] = get_categories\n\n @staticmethod\n def get_categories():\n return None\n\n @staticmethod\n def process_outputs(inputs, outputs):\n raise NotImplementedError(\n \"Function should be implemented in the 
interpreter script\")\n\n\nclass OpenvinoLauncher(Launcher):\n cli_plugin = _OpenvinoImporter\n\n def __init__(self, description, weights, interpreter,\n device=None, model_dir=None, output_layers=None):\n if not model_dir:\n model_dir = ''\n if not osp.isfile(description):\n description = osp.join(model_dir, description)\n if not osp.isfile(description):\n raise Exception('Failed to open model description file \"%s\"' % \\\n (description))\n\n if not osp.isfile(weights):\n weights = osp.join(model_dir, weights)\n if not osp.isfile(weights):\n raise Exception('Failed to open model weights file \"%s\"' % \\\n (weights))\n\n if not osp.isfile(interpreter):\n interpreter = osp.join(model_dir, interpreter)\n if not osp.isfile(interpreter):\n raise Exception('Failed to open model interpreter script file \"%s\"' % \\\n (interpreter))\n\n self._interpreter = InterpreterScript(interpreter)\n\n self._device = device or 'CPU'\n self._output_blobs = output_layers\n\n self._ie = IECore()\n self._network = self._ie.read_network(description, weights)\n self._check_model_support(self._network, self._device)\n self._load_executable_net()\n\n def _check_model_support(self, net, device):\n not_supported_layers = set(name\n for name, dev in self._ie.query_network(net, device).items()\n if not dev)\n if len(not_supported_layers) != 0:\n log.error(\"The following layers are not supported \" \\\n \"by the plugin for device '%s': %s.\" % \\\n (device, ', '.join(not_supported_layers)))\n raise NotImplementedError(\n \"Some layers are not supported on the device\")\n\n def _load_executable_net(self, batch_size=1):\n network = self._network\n\n if self._output_blobs:\n network.add_outputs(self._output_blobs)\n\n iter_inputs = iter(network.input_info)\n self._input_blob = next(iter_inputs)\n\n # NOTE: handling for the inclusion of `image_info` in OpenVino2019\n self._require_image_info = 'image_info' in network.input_info\n if self._input_blob == 'image_info':\n self._input_blob = next(iter_inputs)\n\n self._input_layout = network.input_info[self._input_blob].input_data.shape\n self._input_layout[0] = batch_size\n network.reshape({self._input_blob: self._input_layout})\n self._batch_size = batch_size\n\n self._net = self._ie.load_network(network=network, num_requests=1,\n device_name=self._device)\n\n def infer(self, inputs):\n assert len(inputs.shape) == 4, \\\n \"Expected an input image in (N, H, W, C) format, got %s\" % \\\n (inputs.shape, )\n\n if inputs.shape[3] == 1: # A batch of single-channel images\n inputs = np.repeat(inputs, 3, axis=3)\n\n assert inputs.shape[3] == 3, \\\n \"Expected BGR input, got %s\" % (inputs.shape, )\n\n n, c, h, w = self._input_layout\n if inputs.shape[1:3] != (h, w):\n resized_inputs = np.empty((n, h, w, c), dtype=inputs.dtype)\n for inp, resized_input in zip(inputs, resized_inputs):\n cv2.resize(inp, (w, h), resized_input)\n inputs = resized_inputs\n inputs = inputs.transpose((0, 3, 1, 2)) # NHWC to NCHW\n inputs = {self._input_blob: inputs}\n if self._require_image_info:\n info = np.zeros([1, 3])\n info[0, 0] = h\n info[0, 1] = w\n info[0, 2] = 1.0 # scale\n inputs['image_info'] = info\n\n results = self._net.infer(inputs)\n if len(results) == 1:\n return next(iter(results))\n else:\n return results\n\n def launch(self, inputs):\n batch_size = len(inputs)\n if self._batch_size < batch_size:\n self._load_executable_net(batch_size)\n\n outputs = self.infer(inputs)\n results = self.process_outputs(inputs, outputs)\n return results\n\n def categories(self):\n return 
self._interpreter.get_categories()\n\n def process_outputs(self, inputs, outputs):\n return self._interpreter.process_outputs(inputs, outputs)\n\n", "path": "datumaro/plugins/openvino_launcher.py"}]} | 2,775 | 104 |
gh_patches_debug_19923 | rasdani/github-patches | git_diff | dotkom__onlineweb4-3010 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Vis påmeldte" shows list of both attending and waiting list
**Describe the bug**
"Vis påmeldte" on arrangements shows both attending people and people on the waiting list. This can lead to misunderstandings because people on the waiting list might think they're attending the arrangement.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to any arrangement
2. Click on 'Vis påmeldte'
3. Scroll down to bottom of list
4. See error
**Expected behavior**
I expect the "vis påmeldte" modal to show only attending people.
</issue>
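(Editor's note, illustration only and not part of the original issue: the modal appears to be backed by the `public_attendees` action in the view code below, which unions the attending queryset with the waiting-list queryset before serializing. One possible way to keep the two groups apart is sketched here as a fragment of the DRF viewset; it is not runnable on its own, and the `public-on-waitlist` route name is only a suggestion.)

```python
@action(detail=True, methods=["GET"], url_path="public-attendees")
def public_attendees(self, request, pk=None):
    attendance_event = self.get_object()
    # Only people with a confirmed spot; the waiting list is no longer mixed in.
    serializer = self.get_serializer(attendance_event.attending_attendees_qs, many=True)
    return Response(data=serializer.data, status=status.HTTP_200_OK)

@action(detail=True, methods=["GET"], url_path="public-on-waitlist")
def public_on_waitlist(self, request, pk=None):
    attendance_event = self.get_object()
    serializer = self.get_serializer(attendance_event.waitlist_qs, many=True)
    return Response(data=serializer.data, status=status.HTTP_200_OK)
```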
<code>
[start of apps/events/api/views.py]
1 from django.shortcuts import get_object_or_404
2 from guardian.shortcuts import get_objects_for_user
3 from rest_framework import mixins, permissions, status, viewsets
4 from rest_framework.decorators import action
5 from rest_framework.exceptions import NotFound
6 from rest_framework.response import Response
7
8 from apps.payment.serializers import PaymentReadOnlySerializer
9 from apps.profiles.models import Privacy
10
11 from ..constants import AttendStatus
12 from ..filters import (
13 EventFilter,
14 ExtrasFilter,
15 FieldOfStudyRuleFilter,
16 GradeRuleFilter,
17 RuleBundleFilter,
18 UserGroupRuleFilter,
19 )
20 from ..models import (
21 AttendanceEvent,
22 Attendee,
23 Event,
24 Extras,
25 FieldOfStudyRule,
26 GradeRule,
27 RuleBundle,
28 UserGroupRule,
29 )
30 from ..utils import handle_attend_event_payment
31 from .permissions import (
32 ChangeAttendeePermission,
33 RegisterPermission,
34 UnregisterPermission,
35 )
36 from .register_attendance_serializer import RegisterAttendanceSerializer
37 from .serializers import (
38 AttendanceEventSerializer,
39 AttendeeAdministrateSerializer,
40 AttendeeSerializer,
41 AttendeeUpdateSerializer,
42 EventSerializer,
43 ExtrasSerializer,
44 FieldOfStudyRuleSerializer,
45 GradeRuleSerializer,
46 PublicAttendeeSerializer,
47 RegisterSerializer,
48 RuleBundleSerializer,
49 UserGroupRuleSerializer,
50 )
51
52
53 class EventViewSet(viewsets.ModelViewSet):
54 serializer_class = EventSerializer
55 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
56 filterset_class = EventFilter
57 ordering_fields = (
58 "event_start",
59 "event_end",
60 "id",
61 "closest",
62 "has_passed",
63 )
64 ordering = ("has_passed", "closest", "id")
65
66 def get_queryset(self):
67 user = self.request.user
68 return Event.by_nearest_active_event.get_queryset_for_user(user)
69
70
71 class AttendanceEventViewSet(viewsets.ModelViewSet):
72 serializer_class = AttendanceEventSerializer
73 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
74 queryset = AttendanceEvent.objects.all()
75
76 def get_queryset(self):
77 user = self.request.user
78 events = Event.by_registration.get_queryset_for_user(user)
79 return super().get_queryset().filter(event__in=events)
80
81 @action(
82 detail=True,
83 methods=["POST"],
84 permission_classes=(permissions.IsAuthenticated, RegisterPermission),
85 serializer_class=RegisterSerializer,
86 )
87 def register(self, request, pk=None):
88 user = request.user
89 privacy: Privacy = user.privacy
90 attendance_event: AttendanceEvent = self.get_object()
91 # Check if the recaptcha and other request data is valid
92 register_serializer = self.get_serializer(data=request.data)
93 register_serializer.is_valid(raise_exception=True)
94 data = register_serializer.validated_data
95 # Set the values to the users default settings if sent data is empty
96 # intentionally uses that bool(None) == False
97 attending_visibility = (
98 specific
99 if (specific := data.get("show_as_attending_event")) is not None
100 else bool(privacy.visible_as_attending_events)
101 )
102 allow_pictures = (
103 specific
104 if (specific := data.get("allow_pictures")) is not None
105 else bool(privacy.allow_pictures)
106 )
107
108 attendee = Attendee.objects.create(
109 event=attendance_event,
110 user=user,
111 show_as_attending_event=attending_visibility,
112 allow_pictures=allow_pictures,
113 note=data.get("note"),
114 )
115
116 if attendance_event.payment():
117 handle_attend_event_payment(attendance_event.event, user)
118
119 attendee_serializer = AttendeeSerializer(attendee)
120 return Response(data=attendee_serializer.data, status=status.HTTP_201_CREATED)
121
122 @action(
123 detail=True,
124 methods=["DELETE"],
125 permission_classes=(permissions.IsAuthenticated, UnregisterPermission),
126 )
127 def unregister(self, request, pk=None):
128 user = request.user
129 attendance_event: AttendanceEvent = self.get_object()
130 attendee = Attendee.objects.get(event=attendance_event, user=user)
131 # Attendees un-attend with themselves as the admin user.
132 attendee.unattend(user)
133
134 return Response(status=status.HTTP_204_NO_CONTENT)
135
136 @action(
137 detail=True,
138 methods=["GET"],
139 permission_classes=(permissions.IsAuthenticated,),
140 serializer_class=PublicAttendeeSerializer,
141 url_path="public-attendees",
142 )
143 def public_attendees(self, request, pk=None):
144 attendance_event: AttendanceEvent = self.get_object()
145 attendees = (
146 attendance_event.attending_attendees_qs | attendance_event.waitlist_qs
147 )
148 attendees = attendees.order_by("-show_as_attending_event", "timestamp")
149 serializer = self.get_serializer(attendees, many=True)
150
151 return Response(data=serializer.data, status=status.HTTP_200_OK)
152
153 @action(
154 detail=True,
155 methods=["GET"],
156 permission_classes=(permissions.IsAuthenticated,),
157 serializer_class=AttendeeSerializer,
158 )
159 def attendee(self, request, pk=None):
160 user = request.user
161 attendance_event: AttendanceEvent = self.get_object()
162 attendee = get_object_or_404(Attendee, event=attendance_event, user=user)
163 serializer = self.get_serializer(attendee)
164 return Response(data=serializer.data, status=status.HTTP_200_OK)
165
166 @action(
167 detail=True,
168 methods=["GET"],
169 permission_classes=(permissions.IsAuthenticated,),
170 serializer_class=ExtrasSerializer,
171 )
172 def extras(self, request, pk=None):
173 attendance_event: AttendanceEvent = self.get_object()
174 serializer = self.get_serializer(attendance_event.extras, many=True)
175 return Response(data=serializer.data, status=status.HTTP_200_OK)
176
177 @action(
178 detail=True,
179 methods=["GET"],
180 permission_classes=(permissions.IsAuthenticated,),
181 serializer_class=PaymentReadOnlySerializer,
182 )
183 def payment(self, request, pk=None):
184 attendance_event: AttendanceEvent = self.get_object()
185 payment = attendance_event.get_payment()
186 if not payment:
187 raise NotFound
188 serializer = self.get_serializer(payment)
189 return Response(data=serializer.data, status=status.HTTP_200_OK)
190
191
192 class AttendeeViewSet(
193 viewsets.GenericViewSet, mixins.ListModelMixin, mixins.RetrieveModelMixin
194 ):
195 serializer_class = AttendeeSerializer
196 filterset_fields = (
197 "event",
198 "attended",
199 "user",
200 "show_as_attending_event",
201 "allow_pictures",
202 "extras",
203 )
204
205 @staticmethod
206 def _get_allowed_attendees(user):
207 """
208 A user is allowed to see attendees for their own user, and for events they are organizing.
209 """
210 if user.is_anonymous:
211 return Attendee.objects.none()
212
213 attendees = get_objects_for_user(
214 user, "events.change_attendee", accept_global_perms=False
215 )
216 attendees |= Attendee.objects.filter(user=user)
217 return attendees.distinct()
218
219 def get_queryset(self):
220 return self._get_allowed_attendees(self.request.user)
221
222 @action(
223 detail=True,
224 methods=["PATCH", "PUT"],
225 permission_classes=(ChangeAttendeePermission,),
226 serializer_class=AttendeeUpdateSerializer,
227 )
228 def change(self, request, pk=None):
229 attendee: Attendee = self.get_object()
230 partial = request.method == "PATCH"
231 serializer = self.get_serializer(attendee, data=request.data, partial=partial)
232 serializer.is_valid(raise_exception=True)
233 serializer.save()
234 return Response(data=serializer.data, status=status.HTTP_200_OK)
235
236 @action(
237 detail=True,
238 methods=["PATCH", "PUT"],
239 permission_classes=(permissions.DjangoObjectPermissions,),
240 serializer_class=AttendeeAdministrateSerializer,
241 )
242 def administrate(self, request, pk=None):
243 attendee: Attendee = self.get_object()
244 partial = request.method == "PATCH"
245 serializer = self.get_serializer(attendee, data=request.data, partial=partial)
246 serializer.is_valid(raise_exception=True)
247 serializer.save()
248 return Response(data=serializer.data, status=status.HTTP_200_OK)
249
250 @action(
251 detail=False,
252 methods=["POST"],
253 serializer_class=RegisterAttendanceSerializer,
254 url_path="register-attendance",
255 )
256 def register_attendance(self, request, pk=None):
257 """
258 Register that a user has physically attended an event.
259 """
260 serializer = self.get_serializer(data=request.data)
261
262 serializer.is_valid(raise_exception=True)
263
264 attendee = serializer.get_attendee(request.data)
265 attendee.attended = True
266 attendee.save()
267
268 return Response(
269 {
270 "detail": {
271 "message": f"{attendee.user} er registrert som deltaker. Velkommen!",
272 "attend_status": AttendStatus.REGISTER_SUCCESS,
273 "attendee": attendee.id,
274 }
275 },
276 status=status.HTTP_200_OK,
277 )
278
279
280 class ExtrasViewSet(viewsets.ModelViewSet):
281 serializer_class = ExtrasSerializer
282 queryset = Extras.objects.all()
283 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
284 filterset_class = ExtrasFilter
285
286
287 class RuleBundleViewSet(viewsets.ModelViewSet):
288 serializer_class = RuleBundleSerializer
289 queryset = RuleBundle.objects.all()
290 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
291 filterset_class = RuleBundleFilter
292
293
294 class FieldOfStudyRuleViewSet(viewsets.ModelViewSet):
295 serializer_class = FieldOfStudyRuleSerializer
296 queryset = FieldOfStudyRule.objects.all()
297 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
298 filterset_class = FieldOfStudyRuleFilter
299
300
301 class GradeRuleViewSet(viewsets.ModelViewSet):
302 serializer_class = GradeRuleSerializer
303 queryset = GradeRule.objects.all()
304 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
305 filterset_class = GradeRuleFilter
306
307
308 class UserGroupRuleViewSet(viewsets.ModelViewSet):
309 serializer_class = UserGroupRuleSerializer
310 queryset = UserGroupRule.objects.all()
311 permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)
312 filterset_class = UserGroupRuleFilter
313
[end of apps/events/api/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/events/api/views.py b/apps/events/api/views.py
--- a/apps/events/api/views.py
+++ b/apps/events/api/views.py
@@ -142,10 +142,21 @@
)
def public_attendees(self, request, pk=None):
attendance_event: AttendanceEvent = self.get_object()
- attendees = (
- attendance_event.attending_attendees_qs | attendance_event.waitlist_qs
- )
- attendees = attendees.order_by("-show_as_attending_event", "timestamp")
+ attendees = attendance_event.attending_attendees_qs
+ serializer = self.get_serializer(attendees, many=True)
+
+ return Response(data=serializer.data, status=status.HTTP_200_OK)
+
+ @action(
+ detail=True,
+ methods=["GET"],
+ permission_classes=(permissions.IsAuthenticated,),
+ serializer_class=PublicAttendeeSerializer,
+ url_path="public-on-waitlist",
+ )
+ def public_on_waitlist(self, request, pk=None):
+ attendance_event: AttendanceEvent = self.get_object()
+ attendees = attendance_event.waitlist_qs
serializer = self.get_serializer(attendees, many=True)
return Response(data=serializer.data, status=status.HTTP_200_OK)
| {"golden_diff": "diff --git a/apps/events/api/views.py b/apps/events/api/views.py\n--- a/apps/events/api/views.py\n+++ b/apps/events/api/views.py\n@@ -142,10 +142,21 @@\n )\n def public_attendees(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n- attendees = (\n- attendance_event.attending_attendees_qs | attendance_event.waitlist_qs\n- )\n- attendees = attendees.order_by(\"-show_as_attending_event\", \"timestamp\")\n+ attendees = attendance_event.attending_attendees_qs\n+ serializer = self.get_serializer(attendees, many=True)\n+\n+ return Response(data=serializer.data, status=status.HTTP_200_OK)\n+\n+ @action(\n+ detail=True,\n+ methods=[\"GET\"],\n+ permission_classes=(permissions.IsAuthenticated,),\n+ serializer_class=PublicAttendeeSerializer,\n+ url_path=\"public-on-waitlist\",\n+ )\n+ def public_on_waitlist(self, request, pk=None):\n+ attendance_event: AttendanceEvent = self.get_object()\n+ attendees = attendance_event.waitlist_qs\n serializer = self.get_serializer(attendees, many=True)\n \n return Response(data=serializer.data, status=status.HTTP_200_OK)\n", "issue": "\"Vis p\u00e5meldte\" shows list of both attending and waiting list\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n\"Vis p\u00e5meldte\" on arrangements shows both attending people and people on the waiting list. This can lead to misunderstandings because people on the waiting list might think they're attending the arrangement.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to any arrangement\r\n2. Click on 'Vis p\u00e5meldte'\r\n3. Scroll down to bottom of list\r\n4. See error\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\nI expect the \"vis p\u00e5meldte\" modal to show only attending people.\n", "before_files": [{"content": "from django.shortcuts import get_object_or_404\nfrom guardian.shortcuts import get_objects_for_user\nfrom rest_framework import mixins, permissions, status, viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.response import Response\n\nfrom apps.payment.serializers import PaymentReadOnlySerializer\nfrom apps.profiles.models import Privacy\n\nfrom ..constants import AttendStatus\nfrom ..filters import (\n EventFilter,\n ExtrasFilter,\n FieldOfStudyRuleFilter,\n GradeRuleFilter,\n RuleBundleFilter,\n UserGroupRuleFilter,\n)\nfrom ..models import (\n AttendanceEvent,\n Attendee,\n Event,\n Extras,\n FieldOfStudyRule,\n GradeRule,\n RuleBundle,\n UserGroupRule,\n)\nfrom ..utils import handle_attend_event_payment\nfrom .permissions import (\n ChangeAttendeePermission,\n RegisterPermission,\n UnregisterPermission,\n)\nfrom .register_attendance_serializer import RegisterAttendanceSerializer\nfrom .serializers import (\n AttendanceEventSerializer,\n AttendeeAdministrateSerializer,\n AttendeeSerializer,\n AttendeeUpdateSerializer,\n EventSerializer,\n ExtrasSerializer,\n FieldOfStudyRuleSerializer,\n GradeRuleSerializer,\n PublicAttendeeSerializer,\n RegisterSerializer,\n RuleBundleSerializer,\n UserGroupRuleSerializer,\n)\n\n\nclass EventViewSet(viewsets.ModelViewSet):\n serializer_class = EventSerializer\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = EventFilter\n ordering_fields = (\n \"event_start\",\n \"event_end\",\n \"id\",\n \"closest\",\n \"has_passed\",\n )\n ordering = (\"has_passed\", \"closest\", \"id\")\n\n def 
get_queryset(self):\n user = self.request.user\n return Event.by_nearest_active_event.get_queryset_for_user(user)\n\n\nclass AttendanceEventViewSet(viewsets.ModelViewSet):\n serializer_class = AttendanceEventSerializer\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n queryset = AttendanceEvent.objects.all()\n\n def get_queryset(self):\n user = self.request.user\n events = Event.by_registration.get_queryset_for_user(user)\n return super().get_queryset().filter(event__in=events)\n\n @action(\n detail=True,\n methods=[\"POST\"],\n permission_classes=(permissions.IsAuthenticated, RegisterPermission),\n serializer_class=RegisterSerializer,\n )\n def register(self, request, pk=None):\n user = request.user\n privacy: Privacy = user.privacy\n attendance_event: AttendanceEvent = self.get_object()\n # Check if the recaptcha and other request data is valid\n register_serializer = self.get_serializer(data=request.data)\n register_serializer.is_valid(raise_exception=True)\n data = register_serializer.validated_data\n # Set the values to the users default settings if sent data is empty\n # intentionally uses that bool(None) == False\n attending_visibility = (\n specific\n if (specific := data.get(\"show_as_attending_event\")) is not None\n else bool(privacy.visible_as_attending_events)\n )\n allow_pictures = (\n specific\n if (specific := data.get(\"allow_pictures\")) is not None\n else bool(privacy.allow_pictures)\n )\n\n attendee = Attendee.objects.create(\n event=attendance_event,\n user=user,\n show_as_attending_event=attending_visibility,\n allow_pictures=allow_pictures,\n note=data.get(\"note\"),\n )\n\n if attendance_event.payment():\n handle_attend_event_payment(attendance_event.event, user)\n\n attendee_serializer = AttendeeSerializer(attendee)\n return Response(data=attendee_serializer.data, status=status.HTTP_201_CREATED)\n\n @action(\n detail=True,\n methods=[\"DELETE\"],\n permission_classes=(permissions.IsAuthenticated, UnregisterPermission),\n )\n def unregister(self, request, pk=None):\n user = request.user\n attendance_event: AttendanceEvent = self.get_object()\n attendee = Attendee.objects.get(event=attendance_event, user=user)\n # Attendees un-attend with themselves as the admin user.\n attendee.unattend(user)\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PublicAttendeeSerializer,\n url_path=\"public-attendees\",\n )\n def public_attendees(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n attendees = (\n attendance_event.attending_attendees_qs | attendance_event.waitlist_qs\n )\n attendees = attendees.order_by(\"-show_as_attending_event\", \"timestamp\")\n serializer = self.get_serializer(attendees, many=True)\n\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=AttendeeSerializer,\n )\n def attendee(self, request, pk=None):\n user = request.user\n attendance_event: AttendanceEvent = self.get_object()\n attendee = get_object_or_404(Attendee, event=attendance_event, user=user)\n serializer = self.get_serializer(attendee)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=ExtrasSerializer,\n )\n def extras(self, request, 
pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n serializer = self.get_serializer(attendance_event.extras, many=True)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"GET\"],\n permission_classes=(permissions.IsAuthenticated,),\n serializer_class=PaymentReadOnlySerializer,\n )\n def payment(self, request, pk=None):\n attendance_event: AttendanceEvent = self.get_object()\n payment = attendance_event.get_payment()\n if not payment:\n raise NotFound\n serializer = self.get_serializer(payment)\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n\nclass AttendeeViewSet(\n viewsets.GenericViewSet, mixins.ListModelMixin, mixins.RetrieveModelMixin\n):\n serializer_class = AttendeeSerializer\n filterset_fields = (\n \"event\",\n \"attended\",\n \"user\",\n \"show_as_attending_event\",\n \"allow_pictures\",\n \"extras\",\n )\n\n @staticmethod\n def _get_allowed_attendees(user):\n \"\"\"\n A user is allowed to see attendees for their own user, and for events they are organizing.\n \"\"\"\n if user.is_anonymous:\n return Attendee.objects.none()\n\n attendees = get_objects_for_user(\n user, \"events.change_attendee\", accept_global_perms=False\n )\n attendees |= Attendee.objects.filter(user=user)\n return attendees.distinct()\n\n def get_queryset(self):\n return self._get_allowed_attendees(self.request.user)\n\n @action(\n detail=True,\n methods=[\"PATCH\", \"PUT\"],\n permission_classes=(ChangeAttendeePermission,),\n serializer_class=AttendeeUpdateSerializer,\n )\n def change(self, request, pk=None):\n attendee: Attendee = self.get_object()\n partial = request.method == \"PATCH\"\n serializer = self.get_serializer(attendee, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=True,\n methods=[\"PATCH\", \"PUT\"],\n permission_classes=(permissions.DjangoObjectPermissions,),\n serializer_class=AttendeeAdministrateSerializer,\n )\n def administrate(self, request, pk=None):\n attendee: Attendee = self.get_object()\n partial = request.method == \"PATCH\"\n serializer = self.get_serializer(attendee, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n @action(\n detail=False,\n methods=[\"POST\"],\n serializer_class=RegisterAttendanceSerializer,\n url_path=\"register-attendance\",\n )\n def register_attendance(self, request, pk=None):\n \"\"\"\n Register that a user has physically attended an event.\n \"\"\"\n serializer = self.get_serializer(data=request.data)\n\n serializer.is_valid(raise_exception=True)\n\n attendee = serializer.get_attendee(request.data)\n attendee.attended = True\n attendee.save()\n\n return Response(\n {\n \"detail\": {\n \"message\": f\"{attendee.user} er registrert som deltaker. 
Velkommen!\",\n \"attend_status\": AttendStatus.REGISTER_SUCCESS,\n \"attendee\": attendee.id,\n }\n },\n status=status.HTTP_200_OK,\n )\n\n\nclass ExtrasViewSet(viewsets.ModelViewSet):\n serializer_class = ExtrasSerializer\n queryset = Extras.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = ExtrasFilter\n\n\nclass RuleBundleViewSet(viewsets.ModelViewSet):\n serializer_class = RuleBundleSerializer\n queryset = RuleBundle.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = RuleBundleFilter\n\n\nclass FieldOfStudyRuleViewSet(viewsets.ModelViewSet):\n serializer_class = FieldOfStudyRuleSerializer\n queryset = FieldOfStudyRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = FieldOfStudyRuleFilter\n\n\nclass GradeRuleViewSet(viewsets.ModelViewSet):\n serializer_class = GradeRuleSerializer\n queryset = GradeRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = GradeRuleFilter\n\n\nclass UserGroupRuleViewSet(viewsets.ModelViewSet):\n serializer_class = UserGroupRuleSerializer\n queryset = UserGroupRule.objects.all()\n permission_classes = (permissions.DjangoModelPermissionsOrAnonReadOnly,)\n filterset_class = UserGroupRuleFilter\n", "path": "apps/events/api/views.py"}]} | 3,664 | 279 |
gh_patches_debug_30785 | rasdani/github-patches | git_diff | Parsl__parsl-1119 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement time limits for python apps
Requested by @lgray.
</issue>
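For orientation, here is a rough sketch of the behaviour being requested, written against the `python_app` decorator shown in the code below. The local-threads config import and the exact timeout exception are assumptions for illustration only; the patch at the end of this entry settles the actual mechanism.

```
import parsl
from parsl.configs.local_threads import config
from parsl.app.app import python_app

parsl.load(config)

# The docstrings below already document `@python_app(walltime=120)`;
# the request is for that limit to actually interrupt a long-running app.
@python_app(walltime=5)
def sleeper(seconds):
    import time
    time.sleep(seconds)
    return seconds

fut = sleeper(60)
try:
    # With the requested feature, this would raise once the 5 s walltime
    # is exceeded (e.g. something like parsl.app.errors.AppTimeout).
    fut.result()
except Exception as exc:
    print("app exceeded its walltime:", repr(exc))
```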
<code>
[start of parsl/app/app.py]
1 """Definitions for the @App decorator and the App classes.
2
3 The App class encapsulates a generic leaf task that can be executed asynchronously.
4 """
5 import logging
6 from abc import ABCMeta, abstractmethod
7 from inspect import getsource
8 from hashlib import md5
9 from inspect import signature
10
11 from parsl.app.errors import InvalidAppTypeError
12
13 logger = logging.getLogger(__name__)
14
15
16 class AppBase(metaclass=ABCMeta):
17 """This is the base class that defines the two external facing functions that an App must define.
18
19 The __init__ () which is called when the interpreter sees the definition of the decorated
20 function, and the __call__ () which is invoked when a decorated function is called by the user.
21
22 """
23
24 def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False):
25 """Construct the App object.
26
27 Args:
28 - func (function): Takes the function to be made into an App
29
30 Kwargs:
31 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for
32 managing this app. This can be omitted only
33 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.
34 - walltime (int) : Walltime in seconds for the app execution.
35 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.
36 - cache (Bool) : Enable caching of this app ?
37
38 Returns:
39 - App object.
40
41 """
42 self.__name__ = func.__name__
43 self.func = func
44 self.data_flow_kernel = data_flow_kernel
45 self.status = 'created'
46 self.executors = executors
47 self.cache = cache
48 if not (isinstance(executors, list) or isinstance(executors, str)):
49 logger.error("App {} specifies invalid executor option, expects string or list".format(
50 func.__name__))
51
52 if cache is True:
53 try:
54 self.fn_source = getsource(func)
55 except OSError:
56 logger.debug("Unable to get source code for AppCaching. Recommend creating module")
57 self.fn_source = func.__name__
58
59 self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest()
60 else:
61 self.func_hash = func.__name__
62
63 params = signature(func).parameters
64
65 self.kwargs = {}
66 if 'stdout' in params:
67 self.kwargs['stdout'] = params['stdout'].default
68 if 'stderr' in params:
69 self.kwargs['stderr'] = params['stderr'].default
70 self.outputs = params['outputs'].default if 'outputs' in params else []
71 self.inputs = params['inputs'].default if 'inputs' in params else []
72
73 @abstractmethod
74 def __call__(self, *args, **kwargs):
75 pass
76
77
78 def App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
79 """The App decorator function.
80
81 Args:
82 - apptype (string) : Apptype can be bash|python
83
84 Kwargs:
85 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for
86 managing this app. This can be omitted only
87 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.
88 - walltime (int) : Walltime for app in seconds,
89 default=60
90 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.
91 - cache (Bool) : Enable caching of the app call
92 default=False
93
94 Returns:
95 A PythonApp or BashApp object, which when called runs the apps through the executor.
96 """
97
98 from parsl.app.python import PythonApp
99 from parsl.app.bash import BashApp
100
101 logger.warning("The 'App' decorator will be deprecated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.")
102
103 if apptype == 'python':
104 app_class = PythonApp
105 elif apptype == 'bash':
106 app_class = BashApp
107 else:
108 raise InvalidAppTypeError("Invalid apptype requested {}; must be 'python' or 'bash'".format(apptype))
109
110 def wrapper(f):
111 return app_class(f,
112 data_flow_kernel=data_flow_kernel,
113 walltime=walltime,
114 cache=cache,
115 executors=executors)
116 return wrapper
117
118
119 def python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
120 """Decorator function for making python apps.
121
122 Parameters
123 ----------
124 function : function
125 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
126 for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the
127 decorator is used alone, function will be the actual function being decorated, whereas if it
128 is called with arguments, function will be None. Default is None.
129 data_flow_kernel : DataFlowKernel
130 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
131 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
132 walltime : int
133 Walltime for app in seconds. Default is 60.
134 executors : string or list
135 Labels of the executors that this app can execute over. Default is 'all'.
136 cache : bool
137 Enable caching of the app call. Default is False.
138 """
139 from parsl.app.python import PythonApp
140
141 def decorator(func):
142 def wrapper(f):
143 return PythonApp(f,
144 data_flow_kernel=data_flow_kernel,
145 walltime=walltime,
146 cache=cache,
147 executors=executors)
148 return wrapper(func)
149 if function is not None:
150 return decorator(function)
151 return decorator
152
153
154 def bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
155 """Decorator function for making bash apps.
156
157 Parameters
158 ----------
159 function : function
160 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
161 for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the
162 decorator is used alone, function will be the actual function being decorated, whereas if it
163 is called with arguments, function will be None. Default is None.
164 data_flow_kernel : DataFlowKernel
165 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
166 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
167 walltime : int
168 Walltime for app in seconds. Default is 60.
169 executors : string or list
170 Labels of the executors that this app can execute over. Default is 'all'.
171 cache : bool
172 Enable caching of the app call. Default is False.
173 """
174 from parsl.app.bash import BashApp
175
176 def decorator(func):
177 def wrapper(f):
178 return BashApp(f,
179 data_flow_kernel=data_flow_kernel,
180 walltime=walltime,
181 cache=cache,
182 executors=executors)
183 return wrapper(func)
184 if function is not None:
185 return decorator(function)
186 return decorator
187
[end of parsl/app/app.py]
[start of parsl/app/python.py]
1 import logging
2
3 import tblib.pickling_support
4 tblib.pickling_support.install()
5
6 from parsl.app.futures import DataFuture
7 from parsl.app.app import AppBase
8 from parsl.app.errors import wrap_error
9 from parsl.dataflow.dflow import DataFlowKernelLoader
10
11
12 logger = logging.getLogger(__name__)
13
14
15 class PythonApp(AppBase):
16 """Extends AppBase to cover the Python App."""
17
18 def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
19 super().__init__(
20 wrap_error(func),
21 data_flow_kernel=data_flow_kernel,
22 walltime=walltime,
23 executors=executors,
24 cache=cache
25 )
26
27 def __call__(self, *args, **kwargs):
28 """This is where the call to a python app is handled.
29
30 Args:
31 - Arbitrary
32 Kwargs:
33 - Arbitrary
34
35 Returns:
36 If outputs=[...] was a kwarg then:
37 App_fut, [Data_Futures...]
38 else:
39 App_fut
40
41 """
42
43 if self.data_flow_kernel is None:
44 dfk = DataFlowKernelLoader.dfk()
45 else:
46 dfk = self.data_flow_kernel
47
48 app_fut = dfk.submit(self.func, *args,
49 executors=self.executors,
50 fn_hash=self.func_hash,
51 cache=self.cache,
52 **kwargs)
53
54 # logger.debug("App[{}] assigned Task[{}]".format(self.func.__name__,
55 # app_fut.tid))
56 out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)
57 for o in kwargs.get('outputs', [])]
58 app_fut._outputs = out_futs
59
60 return app_fut
61
[end of parsl/app/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsl/app/app.py b/parsl/app/app.py
--- a/parsl/app/app.py
+++ b/parsl/app/app.py
@@ -67,6 +67,8 @@
self.kwargs['stdout'] = params['stdout'].default
if 'stderr' in params:
self.kwargs['stderr'] = params['stderr'].default
+ if 'walltime' in params:
+ self.kwargs['walltime'] = params['walltime'].default
self.outputs = params['outputs'].default if 'outputs' in params else []
self.inputs = params['inputs'].default if 'inputs' in params else []
diff --git a/parsl/app/python.py b/parsl/app/python.py
--- a/parsl/app/python.py
+++ b/parsl/app/python.py
@@ -12,6 +12,27 @@
logger = logging.getLogger(__name__)
+def timeout(f, seconds):
+ def wrapper(*args, **kwargs):
+ import threading
+ import ctypes
+ import parsl.app.errors
+
+ def inject_exception(thread):
+ ctypes.pythonapi.PyThreadState_SetAsyncExc(
+ ctypes.c_long(thread),
+ ctypes.py_object(parsl.app.errors.AppTimeout)
+ )
+
+ thread = threading.current_thread().ident
+ timer = threading.Timer(seconds, inject_exception, args=[thread])
+ timer.start()
+ result = f(*args, **kwargs)
+ timer.cancel()
+ return result
+ return wrapper
+
+
class PythonApp(AppBase):
"""Extends AppBase to cover the Python App."""
@@ -45,6 +66,9 @@
else:
dfk = self.data_flow_kernel
+ walltime = self.kwargs.get('walltime')
+ if walltime is not None:
+ self.func = timeout(self.func, walltime)
app_fut = dfk.submit(self.func, *args,
executors=self.executors,
fn_hash=self.func_hash,
| {"golden_diff": "diff --git a/parsl/app/app.py b/parsl/app/app.py\n--- a/parsl/app/app.py\n+++ b/parsl/app/app.py\n@@ -67,6 +67,8 @@\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n+ if 'walltime' in params:\n+ self.kwargs['walltime'] = params['walltime'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n \ndiff --git a/parsl/app/python.py b/parsl/app/python.py\n--- a/parsl/app/python.py\n+++ b/parsl/app/python.py\n@@ -12,6 +12,27 @@\n logger = logging.getLogger(__name__)\n \n \n+def timeout(f, seconds):\n+ def wrapper(*args, **kwargs):\n+ import threading\n+ import ctypes\n+ import parsl.app.errors\n+\n+ def inject_exception(thread):\n+ ctypes.pythonapi.PyThreadState_SetAsyncExc(\n+ ctypes.c_long(thread),\n+ ctypes.py_object(parsl.app.errors.AppTimeout)\n+ )\n+\n+ thread = threading.current_thread().ident\n+ timer = threading.Timer(seconds, inject_exception, args=[thread])\n+ timer.start()\n+ result = f(*args, **kwargs)\n+ timer.cancel()\n+ return result\n+ return wrapper\n+\n+\n class PythonApp(AppBase):\n \"\"\"Extends AppBase to cover the Python App.\"\"\"\n \n@@ -45,6 +66,9 @@\n else:\n dfk = self.data_flow_kernel\n \n+ walltime = self.kwargs.get('walltime')\n+ if walltime is not None:\n+ self.func = timeout(self.func, walltime)\n app_fut = dfk.submit(self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n", "issue": "Implement time limits for python apps\nRequested by @lgray.\n", "before_files": [{"content": "\"\"\"Definitions for the @App decorator and the App classes.\n\nThe App class encapsulates a generic leaf task that can be executed asynchronously.\n\"\"\"\nimport logging\nfrom abc import ABCMeta, abstractmethod\nfrom inspect import getsource\nfrom hashlib import md5\nfrom inspect import signature\n\nfrom parsl.app.errors import InvalidAppTypeError\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppBase(metaclass=ABCMeta):\n \"\"\"This is the base class that defines the two external facing functions that an App must define.\n\n The __init__ () which is called when the interpreter sees the definition of the decorated\n function, and the __call__ () which is invoked when a decorated function is called by the user.\n\n \"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, executors='all', cache=False):\n \"\"\"Construct the App object.\n\n Args:\n - func (function): Takes the function to be made into an App\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime in seconds for the app execution.\n - executors (str|list) : Labels of the executors that this app can execute over. 
Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n\n Returns:\n - App object.\n\n \"\"\"\n self.__name__ = func.__name__\n self.func = func\n self.data_flow_kernel = data_flow_kernel\n self.status = 'created'\n self.executors = executors\n self.cache = cache\n if not (isinstance(executors, list) or isinstance(executors, str)):\n logger.error(\"App {} specifies invalid executor option, expects string or list\".format(\n func.__name__))\n\n if cache is True:\n try:\n self.fn_source = getsource(func)\n except OSError:\n logger.debug(\"Unable to get source code for AppCaching. Recommend creating module\")\n self.fn_source = func.__name__\n\n self.func_hash = md5(self.fn_source.encode('utf-8')).hexdigest()\n else:\n self.func_hash = func.__name__\n\n params = signature(func).parameters\n\n self.kwargs = {}\n if 'stdout' in params:\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n pass\n\n\ndef App(apptype, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"The App decorator function.\n\n Args:\n - apptype (string) : Apptype can be bash|python\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime for app in seconds,\n default=60\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of the app call\n default=False\n\n Returns:\n A PythonApp or BashApp object, which when called runs the apps through the executor.\n \"\"\"\n\n from parsl.app.python import PythonApp\n from parsl.app.bash import BashApp\n\n logger.warning(\"The 'App' decorator will be deprecated in Parsl 0.8. Please use 'python_app' or 'bash_app' instead.\")\n\n if apptype == 'python':\n app_class = PythonApp\n elif apptype == 'bash':\n app_class = BashApp\n else:\n raise InvalidAppTypeError(\"Invalid apptype requested {}; must be 'python' or 'bash'\".format(apptype))\n\n def wrapper(f):\n return app_class(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper\n\n\ndef python_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. 
Default is False.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\ndef bash_app(function=None, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n \"\"\"Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n from parsl.app.bash import BashApp\n\n def decorator(func):\n def wrapper(f):\n return BashApp(f,\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n cache=cache,\n executors=executors)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n", "path": "parsl/app/app.py"}, {"content": "import logging\n\nimport tblib.pickling_support\ntblib.pickling_support.install()\n\nfrom parsl.app.futures import DataFuture\nfrom parsl.app.app import AppBase\nfrom parsl.app.errors import wrap_error\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PythonApp(AppBase):\n \"\"\"Extends AppBase to cover the Python App.\"\"\"\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n super().__init__(\n wrap_error(func),\n data_flow_kernel=data_flow_kernel,\n walltime=walltime,\n executors=executors,\n cache=cache\n )\n\n def __call__(self, *args, **kwargs):\n \"\"\"This is where the call to a python app is handled.\n\n Args:\n - Arbitrary\n Kwargs:\n - Arbitrary\n\n Returns:\n If outputs=[...] was a kwarg then:\n App_fut, [Data_Futures...]\n else:\n App_fut\n\n \"\"\"\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n app_fut = dfk.submit(self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n cache=self.cache,\n **kwargs)\n\n # logger.debug(\"App[{}] assigned Task[{}]\".format(self.func.__name__,\n # app_fut.tid))\n out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)\n for o in kwargs.get('outputs', [])]\n app_fut._outputs = out_futs\n\n return app_fut\n", "path": "parsl/app/python.py"}]} | 3,165 | 442 |
gh_patches_debug_31 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1456 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AWS::AutoScaling::AutoScalingGroup MaxInstanceLifetime Validation
*cfn-lint version: 0.29.2*
When using the parameter `MaxInstanceLifetime` for `AWS::AutoScaling::AutoScalingGroup` we are hit with the following lint error:
```
$ cfn-lint templates/proj/rgs/rgs_autoscale_stretch_elb.yml
E3002 Invalid Property Resources/autoscalegroup/Properties/MaxInstanceLifetime
templates/proj/rgs/rgs_autoscale_stretch_elb.yml:194:7
```
The template which leads to the error:
```
[...]
autoscalegroup:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
AvailabilityZones: !Ref AvailabilityZones
Cooldown: '300'
HealthCheckGracePeriod: !Ref GracePeriod
HealthCheckType: ELB
MaxSize: !Ref MaxSize
MinSize: !Ref MinSize
MaxInstanceLifetime: !Ref MaxInstanceLifetime
VPCZoneIdentifier: !Ref EC2SubnetIDs
TargetGroupARNs:
- !Ref elbtargetgroup
LaunchConfigurationName: !Ref launchconfiguration
Tags: [...]
PropagateAtLaunch: true
TerminationPolicies:
- Default
[..]
```
It seems the parameter is currently not supported by cfn-lint; it would be great to see support for it added.
</issue>
<code>
[start of src/cfnlint/version.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5
6 __version__ = '0.29.3'
7
[end of src/cfnlint/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/version.py b/src/cfnlint/version.py
--- a/src/cfnlint/version.py
+++ b/src/cfnlint/version.py
@@ -3,4 +3,4 @@
SPDX-License-Identifier: MIT-0
"""
-__version__ = '0.29.3'
+__version__ = '0.29.4'
| {"golden_diff": "diff --git a/src/cfnlint/version.py b/src/cfnlint/version.py\n--- a/src/cfnlint/version.py\n+++ b/src/cfnlint/version.py\n@@ -3,4 +3,4 @@\n SPDX-License-Identifier: MIT-0\n \"\"\"\n \n-__version__ = '0.29.3'\n+__version__ = '0.29.4'\n", "issue": "AWS::AutoScaling::AutoScalingGroup MaxInstanceLifetime Validation\n*cfn-lint version: 0.29.2*\r\n\r\n*Description of issue.*\r\n\r\nWhen using the parameter `MaxInstanceLifetime` for `AWS::AutoScaling::AutoScalingGroup` we are hit with the following lint error:\r\n\r\n```\r\n$ cfn-lint templates/proj/rgs/rgs_autoscale_stretch_elb.yml\r\nE3002 Invalid Property Resources/autoscalegroup/Properties/MaxInstanceLifetime\r\ntemplates/proj/rgs/rgs_autoscale_stretch_elb.yml:194:7\r\n```\r\n\r\nThe template which leads to the error:\r\n\r\n```\r\n[...]\r\n\r\n autoscalegroup:\r\n Type: AWS::AutoScaling::AutoScalingGroup\r\n Properties:\r\n AvailabilityZones: !Ref AvailabilityZones\r\n Cooldown: '300'\r\n HealthCheckGracePeriod: !Ref GracePeriod\r\n HealthCheckType: ELB\r\n MaxSize: !Ref MaxSize\r\n MinSize: !Ref MinSize\r\n MaxInstanceLifetime: !Ref MaxInstanceLifetime\r\n VPCZoneIdentifier: !Ref EC2SubnetIDs\r\n TargetGroupARNs:\r\n - !Ref elbtargetgroup\r\n LaunchConfigurationName: !Ref launchconfiguration\r\n Tags: [...]\r\n PropagateAtLaunch: true\r\n TerminationPolicies:\r\n - Default\r\n\r\n[..]\r\n```\r\n\r\nIt seems the parameter is currently not supported by cfn-lint, would be cool to see support for it.\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\n__version__ = '0.29.3'\n", "path": "src/cfnlint/version.py"}]} | 911 | 82 |
gh_patches_debug_55 | rasdani/github-patches | git_diff | emissary-ingress__emissary-23 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Users need statsd support
Ambassador needs to be able to send stats off to statsd, whatever statsd the user wants to use.
</issue>
<code>
[start of ambassador/VERSION.py]
1 # Don't change this line without also changing .bumpversion.cfg
2 Version = "0.5.0"
3
[end of ambassador/VERSION.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ambassador/VERSION.py b/ambassador/VERSION.py
--- a/ambassador/VERSION.py
+++ b/ambassador/VERSION.py
@@ -1,2 +1,2 @@
# Don't change this line without also changing .bumpversion.cfg
-Version = "0.5.0"
+Version = "0.5.1"
| {"golden_diff": "diff --git a/ambassador/VERSION.py b/ambassador/VERSION.py\n--- a/ambassador/VERSION.py\n+++ b/ambassador/VERSION.py\n@@ -1,2 +1,2 @@\n # Don't change this line without also changing .bumpversion.cfg\n-Version = \"0.5.0\"\n+Version = \"0.5.1\"\n", "issue": "Users need statsd support\nAmbassador needs to be able to send stats off to statsd, whatever statsd the user wants to use.\n", "before_files": [{"content": "# Don't change this line without also changing .bumpversion.cfg\nVersion = \"0.5.0\"\n", "path": "ambassador/VERSION.py"}]} | 589 | 80 |
gh_patches_debug_38263 | rasdani/github-patches | git_diff | microsoft__MLOS-573 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't expose all params as shell environment variables by default
_Originally posted by @bpkroth in https://github.com/microsoft/MLOS/pull/557#discussion_r1374921396_
</issue>
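To make the behaviour change concrete, here is an illustrative config fragment for the `ScriptEnv` class shown below, written as a plain Python dict since the class accepts a free-format dictionary. The parameter names `vmSize` and `idle` are invented for the example; the point is that only explicitly listed parameters should reach the script's shell environment.

```
# Illustrative only: an environment config that opts exactly two parameters in.
script_env_config = {
    "run": ["./run_workload.sh"],
    "const_args": {
        "vmSize": "Standard_B2s",        # hypothetical parameter
        "idle": "halt",                  # hypothetical parameter
        "secretToken": "not-for-the-shell",
    },
    # With the change discussed here, nothing is exported unless named:
    "shell_env_params": ["vmSize"],
    "shell_env_params_rename": {"IDLE_MODE": "idle"},  # {to: from}, per the docstring
}
# Expected shell exports: vmSize=Standard_B2s and IDLE_MODE=halt, and nothing else.
```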
<code>
[start of mlos_bench/mlos_bench/environments/script_env.py]
1 #
2 # Copyright (c) Microsoft Corporation.
3 # Licensed under the MIT License.
4 #
5 """
6 Base scriptable benchmark environment.
7 """
8
9 import abc
10 import logging
11 import re
12 from typing import Dict, Iterable, Optional
13
14 from mlos_bench.environments.base_environment import Environment
15 from mlos_bench.services.base_service import Service
16 from mlos_bench.tunables.tunable import TunableValue
17 from mlos_bench.tunables.tunable_groups import TunableGroups
18
19 from mlos_bench.util import try_parse_val
20
21 _LOG = logging.getLogger(__name__)
22
23
24 class ScriptEnv(Environment, metaclass=abc.ABCMeta):
25 """
26 Base Environment that runs scripts for setup/run/teardown.
27 """
28
29 _RE_INVALID = re.compile(r"[^a-zA-Z0-9_]")
30
31 def __init__(self,
32 *,
33 name: str,
34 config: dict,
35 global_config: Optional[dict] = None,
36 tunables: Optional[TunableGroups] = None,
37 service: Optional[Service] = None):
38 """
39 Create a new environment for script execution.
40
41 Parameters
42 ----------
43 name: str
44 Human-readable name of the environment.
45 config : dict
46 Free-format dictionary that contains the benchmark environment
47 configuration. Each config must have at least the `tunable_params`
48 and the `const_args` sections. It must also have at least one of
49 the following parameters: {`setup`, `run`, `teardown`}.
50 Additional parameters:
51 * `shell_env_params` - an array of parameters to pass to the script
52 as shell environment variables, and
53 * `shell_env_params_rename` - a dictionary of {to: from} mappings
54 of the script parameters. If not specified, replace all
55 non-alphanumeric characters with underscores.
56 If neither `shell_env_params` nor `shell_env_params_rename` are specified,
57 pass *all* parameters to the script.
58 global_config : dict
59 Free-format dictionary of global parameters (e.g., security credentials)
60 to be mixed in into the "const_args" section of the local config.
61 tunables : TunableGroups
62 A collection of tunable parameters for *all* environments.
63 service: Service
64 An optional service object (e.g., providing methods to
65 deploy or reboot a VM, etc.).
66 """
67 super().__init__(name=name, config=config, global_config=global_config,
68 tunables=tunables, service=service)
69
70 self._script_setup = self.config.get("setup")
71 self._script_run = self.config.get("run")
72 self._script_teardown = self.config.get("teardown")
73
74 self._shell_env_params: Optional[Iterable[str]] = self.config.get("shell_env_params")
75 self._shell_env_params_rename: Dict[str, str] = self.config.get("shell_env_params_rename", {})
76
77 results_stdout_pattern = self.config.get("results_stdout_pattern")
78 self._results_stdout_pattern: Optional[re.Pattern[str]] = \
79 re.compile(results_stdout_pattern) if results_stdout_pattern else None
80
81 def _get_env_params(self) -> Dict[str, str]:
82 """
83 Get the *shell* environment parameters to be passed to the script.
84
85 Returns
86 -------
87 env_params : Dict[str, str]
88 Parameters to pass as *shell* environment variables into the script.
89 This is usually a subset of `_params` with some possible conversions.
90 """
91 rename: Dict[str, str] # {to: from} mapping of the script parameters.
92 if self._shell_env_params is None:
93 if self._shell_env_params_rename:
94 # Only rename specified - use it.
95 rename = self._shell_env_params_rename.copy()
96 else:
97 # FIXME: We should not be exposing all params by default.
98 # Neither `shell_env_params` nor rename are specified - use all params.
99 rename = {self._RE_INVALID.sub("_", key): key for key in self._params}
100 else:
101 # Use `shell_env_params` and rename if specified.
102 rename = {self._RE_INVALID.sub("_", key): key for key in self._shell_env_params}
103 rename.update(self._shell_env_params_rename)
104
105 return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}
106
107 def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:
108 """
109 Extract the results from the stdout of the script.
110
111 Parameters
112 ----------
113 stdout : str
114 The stdout of the script.
115
116 Returns
117 -------
118 results : Dict[str, TunableValue]
119 A dictionary of results extracted from the stdout.
120 """
121 if not self._results_stdout_pattern:
122 return {}
123 _LOG.debug("Extract regex: '%s' from: '%s'", self._results_stdout_pattern, stdout)
124 return {key: try_parse_val(val) for (key, val) in self._results_stdout_pattern.findall(stdout)}
125
[end of mlos_bench/mlos_bench/environments/script_env.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlos_bench/mlos_bench/environments/script_env.py b/mlos_bench/mlos_bench/environments/script_env.py
--- a/mlos_bench/mlos_bench/environments/script_env.py
+++ b/mlos_bench/mlos_bench/environments/script_env.py
@@ -54,7 +54,7 @@
of the script parameters. If not specified, replace all
non-alphanumeric characters with underscores.
If neither `shell_env_params` nor `shell_env_params_rename` are specified,
- pass *all* parameters to the script.
+ *no* additional shell parameters will be passed to the script.
global_config : dict
Free-format dictionary of global parameters (e.g., security credentials)
to be mixed in into the "const_args" section of the local config.
@@ -71,7 +71,7 @@
self._script_run = self.config.get("run")
self._script_teardown = self.config.get("teardown")
- self._shell_env_params: Optional[Iterable[str]] = self.config.get("shell_env_params")
+ self._shell_env_params: Iterable[str] = self.config.get("shell_env_params", [])
self._shell_env_params_rename: Dict[str, str] = self.config.get("shell_env_params_rename", {})
results_stdout_pattern = self.config.get("results_stdout_pattern")
@@ -88,20 +88,8 @@
Parameters to pass as *shell* environment variables into the script.
This is usually a subset of `_params` with some possible conversions.
"""
- rename: Dict[str, str] # {to: from} mapping of the script parameters.
- if self._shell_env_params is None:
- if self._shell_env_params_rename:
- # Only rename specified - use it.
- rename = self._shell_env_params_rename.copy()
- else:
- # FIXME: We should not be exposing all params by default.
- # Neither `shell_env_params` nor rename are specified - use all params.
- rename = {self._RE_INVALID.sub("_", key): key for key in self._params}
- else:
- # Use `shell_env_params` and rename if specified.
- rename = {self._RE_INVALID.sub("_", key): key for key in self._shell_env_params}
- rename.update(self._shell_env_params_rename)
-
+ rename = {self._RE_INVALID.sub("_", key): key for key in self._shell_env_params}
+ rename.update(self._shell_env_params_rename)
return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}
def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:
| {"golden_diff": "diff --git a/mlos_bench/mlos_bench/environments/script_env.py b/mlos_bench/mlos_bench/environments/script_env.py\n--- a/mlos_bench/mlos_bench/environments/script_env.py\n+++ b/mlos_bench/mlos_bench/environments/script_env.py\n@@ -54,7 +54,7 @@\n of the script parameters. If not specified, replace all\n non-alphanumeric characters with underscores.\n If neither `shell_env_params` nor `shell_env_params_rename` are specified,\n- pass *all* parameters to the script.\n+ *no* additional shell parameters will be passed to the script.\n global_config : dict\n Free-format dictionary of global parameters (e.g., security credentials)\n to be mixed in into the \"const_args\" section of the local config.\n@@ -71,7 +71,7 @@\n self._script_run = self.config.get(\"run\")\n self._script_teardown = self.config.get(\"teardown\")\n \n- self._shell_env_params: Optional[Iterable[str]] = self.config.get(\"shell_env_params\")\n+ self._shell_env_params: Iterable[str] = self.config.get(\"shell_env_params\", [])\n self._shell_env_params_rename: Dict[str, str] = self.config.get(\"shell_env_params_rename\", {})\n \n results_stdout_pattern = self.config.get(\"results_stdout_pattern\")\n@@ -88,20 +88,8 @@\n Parameters to pass as *shell* environment variables into the script.\n This is usually a subset of `_params` with some possible conversions.\n \"\"\"\n- rename: Dict[str, str] # {to: from} mapping of the script parameters.\n- if self._shell_env_params is None:\n- if self._shell_env_params_rename:\n- # Only rename specified - use it.\n- rename = self._shell_env_params_rename.copy()\n- else:\n- # FIXME: We should not be exposing all params by default.\n- # Neither `shell_env_params` nor rename are specified - use all params.\n- rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._params}\n- else:\n- # Use `shell_env_params` and rename if specified.\n- rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n- rename.update(self._shell_env_params_rename)\n-\n+ rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n+ rename.update(self._shell_env_params_rename)\n return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}\n \n def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:\n", "issue": "Don't expose all params as shell environment variables by default\n_Originally posted by @bpkroth in https://github.com/microsoft/MLOS/pull/557#discussion_r1374921396_\r\n \n", "before_files": [{"content": "#\n# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT License.\n#\n\"\"\"\nBase scriptable benchmark environment.\n\"\"\"\n\nimport abc\nimport logging\nimport re\nfrom typing import Dict, Iterable, Optional\n\nfrom mlos_bench.environments.base_environment import Environment\nfrom mlos_bench.services.base_service import Service\nfrom mlos_bench.tunables.tunable import TunableValue\nfrom mlos_bench.tunables.tunable_groups import TunableGroups\n\nfrom mlos_bench.util import try_parse_val\n\n_LOG = logging.getLogger(__name__)\n\n\nclass ScriptEnv(Environment, metaclass=abc.ABCMeta):\n \"\"\"\n Base Environment that runs scripts for setup/run/teardown.\n \"\"\"\n\n _RE_INVALID = re.compile(r\"[^a-zA-Z0-9_]\")\n\n def __init__(self,\n *,\n name: str,\n config: dict,\n global_config: Optional[dict] = None,\n tunables: Optional[TunableGroups] = None,\n service: Optional[Service] = None):\n \"\"\"\n Create a new environment for script execution.\n\n Parameters\n ----------\n name: str\n 
Human-readable name of the environment.\n config : dict\n Free-format dictionary that contains the benchmark environment\n configuration. Each config must have at least the `tunable_params`\n and the `const_args` sections. It must also have at least one of\n the following parameters: {`setup`, `run`, `teardown`}.\n Additional parameters:\n * `shell_env_params` - an array of parameters to pass to the script\n as shell environment variables, and\n * `shell_env_params_rename` - a dictionary of {to: from} mappings\n of the script parameters. If not specified, replace all\n non-alphanumeric characters with underscores.\n If neither `shell_env_params` nor `shell_env_params_rename` are specified,\n pass *all* parameters to the script.\n global_config : dict\n Free-format dictionary of global parameters (e.g., security credentials)\n to be mixed in into the \"const_args\" section of the local config.\n tunables : TunableGroups\n A collection of tunable parameters for *all* environments.\n service: Service\n An optional service object (e.g., providing methods to\n deploy or reboot a VM, etc.).\n \"\"\"\n super().__init__(name=name, config=config, global_config=global_config,\n tunables=tunables, service=service)\n\n self._script_setup = self.config.get(\"setup\")\n self._script_run = self.config.get(\"run\")\n self._script_teardown = self.config.get(\"teardown\")\n\n self._shell_env_params: Optional[Iterable[str]] = self.config.get(\"shell_env_params\")\n self._shell_env_params_rename: Dict[str, str] = self.config.get(\"shell_env_params_rename\", {})\n\n results_stdout_pattern = self.config.get(\"results_stdout_pattern\")\n self._results_stdout_pattern: Optional[re.Pattern[str]] = \\\n re.compile(results_stdout_pattern) if results_stdout_pattern else None\n\n def _get_env_params(self) -> Dict[str, str]:\n \"\"\"\n Get the *shell* environment parameters to be passed to the script.\n\n Returns\n -------\n env_params : Dict[str, str]\n Parameters to pass as *shell* environment variables into the script.\n This is usually a subset of `_params` with some possible conversions.\n \"\"\"\n rename: Dict[str, str] # {to: from} mapping of the script parameters.\n if self._shell_env_params is None:\n if self._shell_env_params_rename:\n # Only rename specified - use it.\n rename = self._shell_env_params_rename.copy()\n else:\n # FIXME: We should not be exposing all params by default.\n # Neither `shell_env_params` nor rename are specified - use all params.\n rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._params}\n else:\n # Use `shell_env_params` and rename if specified.\n rename = {self._RE_INVALID.sub(\"_\", key): key for key in self._shell_env_params}\n rename.update(self._shell_env_params_rename)\n\n return {key_sub: str(self._params[key]) for (key_sub, key) in rename.items()}\n\n def _extract_stdout_results(self, stdout: str) -> Dict[str, TunableValue]:\n \"\"\"\n Extract the results from the stdout of the script.\n\n Parameters\n ----------\n stdout : str\n The stdout of the script.\n\n Returns\n -------\n results : Dict[str, TunableValue]\n A dictionary of results extracted from the stdout.\n \"\"\"\n if not self._results_stdout_pattern:\n return {}\n _LOG.debug(\"Extract regex: '%s' from: '%s'\", self._results_stdout_pattern, stdout)\n return {key: try_parse_val(val) for (key, val) in self._results_stdout_pattern.findall(stdout)}\n", "path": "mlos_bench/mlos_bench/environments/script_env.py"}]} | 1,937 | 599 |
gh_patches_debug_4170 | rasdani/github-patches | git_diff | google__flax-1423 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
flax.core.FrozenDict copy broken when the new dictionary contains some names
### Problem you have encountered:
Adding a dictionary that contains a `'cls'` key fails:

### What you expected to happen:
Expected the value of the `'cls'` key to be updated.
### Logs, error messages, etc:
### Steps to reproduce:
```
flax.core.FrozenDict({}).copy({'cls': 'abc'})
```
One way to workaround this is to manually create concatenated FrozenDict instead of using `copy`.
```
flax.core.FrozenDict({**flax.core.FrozenDict({'def': '123', 'cls': 22}), **{'cls': 'abc'}})
```
</issue>
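The patch at the end of this entry resolves this by building the merged mapping first and handing the constructor a single positional dict, so no user-supplied key is ever forwarded as a Python keyword argument. A small usage sketch of that approach, essentially the workaround above wrapped as a helper:

```
from flax.core.frozen_dict import FrozenDict, unfreeze

def copy_with(frozen, add_or_replace):
    # Merge into one plain dict, then freeze; mirrors the patched copy().
    return type(frozen)({**frozen, **unfreeze(add_or_replace)})

base = FrozenDict({'def': '123', 'cls': 22})
print(copy_with(base, {'cls': 'abc'}))
# FrozenDict({'def': '123', 'cls': 'abc'})
```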
<code>
[start of flax/core/frozen_dict.py]
1 # Copyright 2021 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Frozen Dictionary."""
16
17 from typing import Any, TypeVar, Mapping, Dict, Tuple
18
19 from flax import serialization
20 import jax
21
22
23 K = TypeVar('K')
24 V = TypeVar('V')
25
26
27 def _indent(x, num_spaces):
28 indent_str = ' ' * num_spaces
29 lines = x.split('\n')
30 assert lines[-1] == ''
31 # skip the final line because it's empty and should not be indented.
32 return '\n'.join(indent_str + line for line in lines[:-1]) + '\n'
33
34
35 @jax.tree_util.register_pytree_node_class
36 class FrozenDict(Mapping[K, V]):
37 """An immutable variant of the Python dict."""
38 __slots__ = ('_dict', '_hash')
39
40 def __init__(self, *args, __unsafe_skip_copy__=False, **kwargs):
41 # make sure the dict is as
42 xs = dict(*args, **kwargs)
43 if __unsafe_skip_copy__:
44 self._dict = xs
45 else:
46 self._dict = _prepare_freeze(xs)
47
48 self._hash = None
49
50 def __getitem__(self, key):
51 v = self._dict[key]
52 if isinstance(v, dict):
53 return FrozenDict(v)
54 return v
55
56 def __setitem__(self, key, value):
57 raise ValueError('FrozenDict is immutable.')
58
59 def __contains__(self, key):
60 return key in self._dict
61
62 def __iter__(self):
63 return iter(self._dict)
64
65 def __len__(self):
66 return len(self._dict)
67
68 def __repr__(self):
69 return self.pretty_repr()
70
71 def __reduce__(self):
72 return FrozenDict, (self.unfreeze(),)
73
74 def pretty_repr(self, num_spaces=4):
75 """Returns an indented representation of the nested dictionary."""
76 def pretty_dict(x):
77 if not isinstance(x, dict):
78 return repr(x)
79 rep = ''
80 for key, val in x.items():
81 rep += f'{key}: {pretty_dict(val)},\n'
82 if rep:
83 return '{\n' + _indent(rep, num_spaces) + '}'
84 else:
85 return '{}'
86 return f'FrozenDict({pretty_dict(self._dict)})'
87
88 def __hash__(self):
89 if self._hash is None:
90 h = 0
91 for key, value in self.items():
92 h ^= hash((key, value))
93 self._hash = h
94 return self._hash
95
96 def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':
97 """Create a new FrozenDict with additional or replaced entries."""
98 return type(self)(self, **unfreeze(add_or_replace))
99
100 def items(self):
101 for key in self._dict:
102 yield (key, self[key])
103
104 def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:
105 """Create a new FrozenDict where one entry is removed.
106
107 Example::
108
109 state, params = variables.pop('params')
110
111 Args:
112 key: the key to remove from the dict
113 Returns:
114 A pair with the new FrozenDict and the removed value.
115 """
116 value = self[key]
117 new_dict = dict(self._dict)
118 new_dict.pop(key)
119 new_self = type(self)(new_dict)
120 return new_self, value
121
122 def unfreeze(self) -> Dict[K, V]:
123 """Unfreeze this FrozenDict.
124
125 Returns:
126 An unfrozen version of this FrozenDict instance.
127 """
128 return unfreeze(self)
129
130 def tree_flatten(self) -> Tuple[Tuple[Dict[Any, Any]], Tuple[()]]:
131 """Flattens this FrozenDict.
132
133 Returns:
134 A flattened version of this FrozenDict instance.
135 """
136 return (self._dict,), ()
137
138 @classmethod
139 def tree_unflatten(cls, _, data):
140 # data is already deep copied due to tree map mechanism
141 # we can skip the deep copy in the constructor
142 return cls(*data, __unsafe_skip_copy__=True)
143
144
145 def _prepare_freeze(xs: Any) -> Any:
146 """Deep copy unfrozen dicts to make the dictionary FrozenDict safe."""
147 if isinstance(xs, FrozenDict):
148 # we can safely ref share the internal state of a FrozenDict
149 # because it is immutable.
150 return xs._dict # pylint: disable=protected-access
151 if not isinstance(xs, dict):
152 # return a leaf as is.
153 return xs
154 # recursively copy dictionary to avoid ref sharing
155 return {key: _prepare_freeze(val) for key, val in xs.items()}
156
157
158 def freeze(xs: Mapping[Any, Any]) -> FrozenDict[Any, Any]:
159 """Freeze a nested dict.
160
161 Makes a nested `dict` immutable by transforming it into `FrozenDict`.
162 """
163 return FrozenDict(xs)
164
165
166 def unfreeze(x: FrozenDict[Any, Any]) -> Dict[Any, Any]:
167 """Unfreeze a FrozenDict.
168
169 Makes a mutable copy of a `FrozenDict` mutable by transforming
170 it into (nested) dict.
171 """
172 if isinstance(x, FrozenDict):
173 # deep copy internal state of a FrozenDict
174 # the dict branch would also work here but
175 # it is much less performant because jax.tree_map
176 # uses an optimized C implementation.
177 return jax.tree_map(lambda y: y, x._dict)
178 elif isinstance(x, dict):
179 ys = {}
180 for key, value in x.items():
181 ys[key] = unfreeze(value)
182 return ys
183 else:
184 return x
185
186
187 def _frozen_dict_state_dict(xs):
188 return {key: serialization.to_state_dict(value) for key, value in xs.items()}
189
190
191 def _restore_frozen_dict(xs, states):
192 return FrozenDict(
193 {key: serialization.from_state_dict(value, states[key])
194 for key, value in xs.items()})
195
196
197 serialization.register_serialization_state(
198 FrozenDict,
199 _frozen_dict_state_dict,
200 _restore_frozen_dict)
201
[end of flax/core/frozen_dict.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py
--- a/flax/core/frozen_dict.py
+++ b/flax/core/frozen_dict.py
@@ -95,7 +95,7 @@
def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':
"""Create a new FrozenDict with additional or replaced entries."""
- return type(self)(self, **unfreeze(add_or_replace))
+ return type(self)({**self, **unfreeze(add_or_replace)})
def items(self):
for key in self._dict:
| {"golden_diff": "diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py\n--- a/flax/core/frozen_dict.py\n+++ b/flax/core/frozen_dict.py\n@@ -95,7 +95,7 @@\n \n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n- return type(self)(self, **unfreeze(add_or_replace))\n+ return type(self)({**self, **unfreeze(add_or_replace)})\n \n def items(self):\n for key in self._dict:\n", "issue": "flax.core.FrozenDict copy broken when the new dictionary contains some names\nProvide as much information as possible. At least, this should include a description of your issue and steps to reproduce the problem. If possible also provide a summary of what steps or workarounds you have already tried.\r\n\r\n### Problem you have encountered:\r\nAdding a dictionary which contains 'cls' key fails, \r\n\r\n\r\n### What you expected to happen:\r\nexpected to update the value of 'cls' key. \r\n\r\n### Logs, error messages, etc:\r\n\r\n\r\n\r\n### Steps to reproduce:\r\n\r\n```\r\nflax.core.FrozenDict({}).copy({'cls': 'abc'})\r\n```\r\n\r\nOne way to workaround this is to manually create concatenated FrozenDict instead of using `copy`.\r\n```\r\nflax.core.FrozenDict({**flax.core.FrozenDict({'def': '123', 'cls': 22}), **{'cls': 'abc'}})\r\n```\n", "before_files": [{"content": "# Copyright 2021 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Frozen Dictionary.\"\"\"\n\nfrom typing import Any, TypeVar, Mapping, Dict, Tuple\n\nfrom flax import serialization\nimport jax\n\n\nK = TypeVar('K')\nV = TypeVar('V')\n\n\ndef _indent(x, num_spaces):\n indent_str = ' ' * num_spaces\n lines = x.split('\\n')\n assert lines[-1] == ''\n # skip the final line because it's empty and should not be indented.\n return '\\n'.join(indent_str + line for line in lines[:-1]) + '\\n'\n\n\[email protected]_util.register_pytree_node_class\nclass FrozenDict(Mapping[K, V]):\n \"\"\"An immutable variant of the Python dict.\"\"\"\n __slots__ = ('_dict', '_hash')\n\n def __init__(self, *args, __unsafe_skip_copy__=False, **kwargs):\n # make sure the dict is as\n xs = dict(*args, **kwargs)\n if __unsafe_skip_copy__:\n self._dict = xs\n else:\n self._dict = _prepare_freeze(xs)\n\n self._hash = None\n\n def __getitem__(self, key):\n v = self._dict[key]\n if isinstance(v, dict):\n return FrozenDict(v)\n return v\n\n def __setitem__(self, key, value):\n raise ValueError('FrozenDict is immutable.')\n\n def __contains__(self, key):\n return key in self._dict\n\n def __iter__(self):\n return iter(self._dict)\n\n def __len__(self):\n return len(self._dict)\n\n def __repr__(self):\n return self.pretty_repr()\n\n def __reduce__(self):\n return FrozenDict, (self.unfreeze(),)\n\n def pretty_repr(self, num_spaces=4):\n \"\"\"Returns an indented representation of the nested dictionary.\"\"\"\n def pretty_dict(x):\n if not isinstance(x, dict):\n return repr(x)\n rep = ''\n for key, val in x.items():\n rep += f'{key}: 
{pretty_dict(val)},\\n'\n if rep:\n return '{\\n' + _indent(rep, num_spaces) + '}'\n else:\n return '{}'\n return f'FrozenDict({pretty_dict(self._dict)})'\n\n def __hash__(self):\n if self._hash is None:\n h = 0\n for key, value in self.items():\n h ^= hash((key, value))\n self._hash = h\n return self._hash\n\n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n return type(self)(self, **unfreeze(add_or_replace))\n\n def items(self):\n for key in self._dict:\n yield (key, self[key])\n\n def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:\n \"\"\"Create a new FrozenDict where one entry is removed.\n\n Example::\n\n state, params = variables.pop('params')\n\n Args:\n key: the key to remove from the dict\n Returns:\n A pair with the new FrozenDict and the removed value.\n \"\"\"\n value = self[key]\n new_dict = dict(self._dict)\n new_dict.pop(key)\n new_self = type(self)(new_dict)\n return new_self, value\n\n def unfreeze(self) -> Dict[K, V]:\n \"\"\"Unfreeze this FrozenDict.\n\n Returns:\n An unfrozen version of this FrozenDict instance.\n \"\"\"\n return unfreeze(self)\n\n def tree_flatten(self) -> Tuple[Tuple[Dict[Any, Any]], Tuple[()]]:\n \"\"\"Flattens this FrozenDict.\n\n Returns:\n A flattened version of this FrozenDict instance.\n \"\"\"\n return (self._dict,), ()\n\n @classmethod\n def tree_unflatten(cls, _, data):\n # data is already deep copied due to tree map mechanism\n # we can skip the deep copy in the constructor\n return cls(*data, __unsafe_skip_copy__=True)\n\n\ndef _prepare_freeze(xs: Any) -> Any:\n \"\"\"Deep copy unfrozen dicts to make the dictionary FrozenDict safe.\"\"\"\n if isinstance(xs, FrozenDict):\n # we can safely ref share the internal state of a FrozenDict\n # because it is immutable.\n return xs._dict # pylint: disable=protected-access\n if not isinstance(xs, dict):\n # return a leaf as is.\n return xs\n # recursively copy dictionary to avoid ref sharing\n return {key: _prepare_freeze(val) for key, val in xs.items()}\n\n\ndef freeze(xs: Mapping[Any, Any]) -> FrozenDict[Any, Any]:\n \"\"\"Freeze a nested dict.\n\n Makes a nested `dict` immutable by transforming it into `FrozenDict`.\n \"\"\"\n return FrozenDict(xs)\n\n\ndef unfreeze(x: FrozenDict[Any, Any]) -> Dict[Any, Any]:\n \"\"\"Unfreeze a FrozenDict.\n\n Makes a mutable copy of a `FrozenDict` mutable by transforming\n it into (nested) dict.\n \"\"\"\n if isinstance(x, FrozenDict):\n # deep copy internal state of a FrozenDict\n # the dict branch would also work here but\n # it is much less performant because jax.tree_map\n # uses an optimized C implementation.\n return jax.tree_map(lambda y: y, x._dict)\n elif isinstance(x, dict):\n ys = {}\n for key, value in x.items():\n ys[key] = unfreeze(value)\n return ys\n else:\n return x\n\n\ndef _frozen_dict_state_dict(xs):\n return {key: serialization.to_state_dict(value) for key, value in xs.items()}\n\n\ndef _restore_frozen_dict(xs, states):\n return FrozenDict(\n {key: serialization.from_state_dict(value, states[key])\n for key, value in xs.items()})\n\n\nserialization.register_serialization_state(\n FrozenDict,\n _frozen_dict_state_dict,\n _restore_frozen_dict)\n", "path": "flax/core/frozen_dict.py"}]} | 2,738 | 135 |
gh_patches_debug_7557 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-123 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModuleNotFoundError: No module named 'plasmapy.classes' on plasmapy import
On importing freshly installed plasmapy into a new environment:
(plasmapy) [~]$ python
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import plasmapy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dominik/.anaconda3/envs/plasmapy/lib/python3.6/site-packages/plasmapy/__init__.py", line 8, in <module>
from .classes import Plasma
ModuleNotFoundError: No module named 'plasmapy.classes'
The goal of this one is being able to import plasmapy. At all.
The issue likely lies in `plasmapy/__init__.py`.
To quote @cadair 's words of encouragement on this bugfixing journey, *packaging is a special kind of hell*.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3
4 # Package metadata
5 metadata = {}
6 with open('plasmapy/_metadata.py', 'r') as metadata_file:
7 exec(metadata_file.read(), metadata)
8
9 # Requirements
10 with open('requirements/base.txt', 'r') as req_file:
11 requirements = req_file.read().splitlines()
12
13 setup(name=metadata['name'],
14 version=metadata['version'],
15 description="Python package for plasma physics",
16 requires=requirements,
17 install_requires=requirements,
18 provides=[metadata['name']],
19 author=metadata['author'],
20 author_email="[email protected]", # until we get an email address
21 license="BSD",
22 url="https://github.com/PlasmaPy/PlasmaPy", # until we make a webpage
23 long_description=metadata['description'],
24 keywords=['plasma', 'plasma physics', 'science'],
25 classifiers=[
26 'Intended Audience :: Science/Research',
27 'License :: OSI Approved :: BSD License',
28 'Operating System :: OS Independent',
29 'Programming Language :: Python :: 3 :: Only',
30 'Programming Language :: Python :: 3.6',
31 'Topic :: Scientific/Engineering :: Physics',
32 'Topic :: Scientific/Engineering :: Astronomy',
33 'Development Status :: 2 - Pre-Alpha',
34 ],
35 packages=["plasmapy"],
36 zip_safe=False,
37 use_2to3=False,
38 python_requires='>=3.6',
39 )
40
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-from setuptools import setup
+from setuptools import setup, find_packages
# Package metadata
@@ -32,7 +32,7 @@
'Topic :: Scientific/Engineering :: Astronomy',
'Development Status :: 2 - Pre-Alpha',
],
- packages=["plasmapy"],
+ packages=find_packages(),
zip_safe=False,
use_2to3=False,
python_requires='>=3.6',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,4 @@\n-from setuptools import setup\n+from setuptools import setup, find_packages\n \n \n # Package metadata\n@@ -32,7 +32,7 @@\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Development Status :: 2 - Pre-Alpha',\n ],\n- packages=[\"plasmapy\"],\n+ packages=find_packages(),\n zip_safe=False,\n use_2to3=False,\n python_requires='>=3.6',\n", "issue": "ModuleNotFoundError: No module named 'plasmapy.classes' on plasmapy import\nOn importing freshly installed plasmapy into a new environment:\r\n\r\n (plasmapy) [~]$ python\r\n Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32) \r\n [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\r\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n >>> import plasmapy\r\n Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dominik/.anaconda3/envs/plasmapy/lib/python3.6/site-packages/plasmapy/__init__.py\", line 8, in <module>\r\n from .classes import Plasma\r\n ModuleNotFoundError: No module named 'plasmapy.classes'\r\n\r\nThe goal of this one is being able to import plasmapy. At all.\r\n\r\nThe issue likely lies in `plasmapy/__init__.py`. \r\n\r\nTo quote @cadair 's words of encouragement on this bugfixing journey, *packaging is a special kind of hell*. \n", "before_files": [{"content": "from setuptools import setup\n\n\n# Package metadata\nmetadata = {}\nwith open('plasmapy/_metadata.py', 'r') as metadata_file:\n exec(metadata_file.read(), metadata)\n\n# Requirements\nwith open('requirements/base.txt', 'r') as req_file:\n requirements = req_file.read().splitlines()\n\nsetup(name=metadata['name'],\n version=metadata['version'],\n description=\"Python package for plasma physics\",\n requires=requirements,\n install_requires=requirements,\n provides=[metadata['name']],\n author=metadata['author'],\n author_email=\"[email protected]\", # until we get an email address\n license=\"BSD\",\n url=\"https://github.com/PlasmaPy/PlasmaPy\", # until we make a webpage\n long_description=metadata['description'],\n keywords=['plasma', 'plasma physics', 'science'],\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Development Status :: 2 - Pre-Alpha',\n ],\n packages=[\"plasmapy\"],\n zip_safe=False,\n use_2to3=False,\n python_requires='>=3.6',\n )\n", "path": "setup.py"}]} | 1,201 | 122 |
gh_patches_debug_39325 | rasdani/github-patches | git_diff | cowrie__cowrie-763 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement regular expressions in userdb.txt
The file that contains the combinations of usernames and passwords that Cowrie accepts from attackers (`data/userdb.txt`) currently handles 3 special characters: `#`, which means a comment until the end of the line; `!`, which means negation; and `*`, which means "anything" (in either the username or the password field).
Would it be possible to allow any regular expression instead of the special characters '!' and '*'?
I've seen attackers use variations of the password "honeypot" to determine that they are dealing with a honeypot and refuse to conduct their usual attack. Examples include "Honeypot321" (309 times), "honeypot" (6 times), and "nologinissahoneypotlmao" (once) over a 17-month period.
I could, of course, explicitly block just these 3 passwords, but I'd like to disallow any password with the word "honeypot" (case-insensitive) in it.
</issue>
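For concreteness, the rule style that the golden diff below ends up supporting treats a `/.../`-delimited value (optionally `/.../i` for case-insensitive) as a regular expression and anything else as a literal or `*` wildcard. The sketch that follows only illustrates that matching idea; the helper names and the sample `userdb.txt` line are illustrative, not taken from the Cowrie codebase:

```python
import re

def compile_rule(rule):
    # '/pattern/' or '/pattern/i' becomes a compiled regex; anything else stays a literal
    m = re.match(r'/(.+)/(i)?$', rule)
    if m:
        return re.compile(m.group(1), re.IGNORECASE if m.group(2) else 0)
    return rule

def rule_matches(rule, value):
    if isinstance(rule, str):
        return rule in ('*', value)        # literal match or wildcard
    return bool(rule.search(value))        # compiled regex match

# A hypothetical line such as  root:x:!/honeypot/i  would then deny any password containing "honeypot"
print(rule_matches(compile_rule('/honeypot/i'), 'Honeypot321'))   # True, so the '!' rule rejects it
```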
<code>
[start of cowrie/core/auth.py]
1 # Copyright (c) 2009-2014 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 """
5 This module contains ...
6 """
7
8 from __future__ import division, absolute_import
9
10 import json
11 from os import path
12 from random import randint
13
14 from twisted.python import log
15
16 from cowrie.core.config import CONFIG
17
18 class UserDB(object):
19 """
20 By Walter de Jong <[email protected]>
21 """
22
23 def __init__(self):
24 self.userdb = []
25 self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')
26 self.load()
27
28
29 def load(self):
30 """
31 load the user db
32 """
33
34 with open(self.userdb_file, 'rb') as f:
35 while True:
36 rawline = f.readline()
37 if not rawline:
38 break
39
40 line = rawline.strip()
41 if not line:
42 continue
43
44 if line.startswith(b'#'):
45 continue
46
47 (login, uid, passwd) = line.split(b':', 2)
48
49 self.userdb.append((login, passwd))
50
51
52 def save(self):
53 """
54 save the user db
55 """
56
57 # Note: this is subject to races between cowrie instances, but hey ...
58 with open(self.userdb_file, 'w') as f:
59 for (login, passwd) in self.userdb:
60 f.write('%s:x:%s\n' % (login, passwd))
61
62
63 def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):
64 """
65 check entered username/password against database
66 note that it allows multiple passwords for a single username
67 it also knows wildcard '*' for any username or password
68 prepend password with ! to explicitly deny it. Denials must come before wildcards
69 """
70 for (login, passwd) in self.userdb:
71 # Explicitly fail on !password
72 if login == thelogin and passwd == b'!' + thepasswd:
73 return False
74 if login in (thelogin, b'*') and passwd in (thepasswd, b'*'):
75 return True
76 return False
77
78
79 def user_password_exists(self, thelogin, thepasswd):
80 """
81 """
82 for (login, passwd) in self.userdb:
83 if login == thelogin and passwd == thepasswd:
84 return True
85 return False
86
87
88 def adduser(self, login, passwd):
89 """
90 """
91 if self.user_password_exists(login, passwd):
92 return
93 self.userdb.append((login, passwd))
94 self.save()
95
96
97
98 class AuthRandom(object):
99 """
100 Alternative class that defines the checklogin() method.
101 Users will be authenticated after a random number of attempts.
102 """
103
104 def __init__(self):
105 # Default values
106 self.mintry, self.maxtry, self.maxcache = 2, 5, 10
107
108 # Are there auth_class parameters?
109 if CONFIG.has_option('honeypot', 'auth_class_parameters'):
110 parameters = CONFIG.get('honeypot', 'auth_class_parameters')
111 parlist = parameters.split(',')
112 if len(parlist) == 3:
113 self.mintry = int(parlist[0])
114 self.maxtry = int(parlist[1])
115 self.maxcache = int(parlist[2])
116
117 if self.maxtry < self.mintry:
118 self.maxtry = self.mintry + 1
119 log.msg('maxtry < mintry, adjusting maxtry to: %d' % (self.maxtry,))
120 self.uservar = {}
121 self.uservar_file = '%s/uservar.json' % CONFIG.get('honeypot', 'data_path')
122 self.loadvars()
123
124
125 def loadvars(self):
126 """
127 Load user vars from json file
128 """
129 if path.isfile(self.uservar_file):
130 with open(self.uservar_file, 'rb') as fp:
131 try:
132 self.uservar = json.load(fp)
133 except:
134 self.uservar = {}
135
136
137 def savevars(self):
138 """
139 Save the user vars to json file
140 """
141 data = self.uservar
142 # Note: this is subject to races between cowrie logins
143 with open(self.uservar_file, 'wb') as fp:
144 json.dump(data, fp)
145
146
147 def checklogin(self, thelogin, thepasswd, src_ip):
148 """
149 Every new source IP will have to try a random number of times between
150 'mintry' and 'maxtry' before succeeding to login.
151 All username/password combinations must be different.
152 The successful login combination is stored with the IP address.
153 Successful username/passwords pairs are also cached for 'maxcache' times.
154 This is to allow access for returns from different IP addresses.
155 Variables are saved in 'uservar.json' in the data directory.
156 """
157
158 auth = False
159 userpass = thelogin + ':' + thepasswd
160
161 if not 'cache' in self.uservar:
162 self.uservar['cache'] = []
163 cache = self.uservar['cache']
164
165 # Check if it is the first visit from src_ip
166 if src_ip not in self.uservar:
167 self.uservar[src_ip] = {}
168 ipinfo = self.uservar[src_ip]
169 ipinfo['try'] = 0
170 if userpass in cache:
171 log.msg('first time for %s, found cached: %s' % (src_ip, userpass))
172 ipinfo['max'] = 1
173 ipinfo['user'] = thelogin
174 ipinfo['pw'] = thepasswd
175 auth = True
176 self.savevars()
177 return auth
178 else:
179 ipinfo['max'] = randint(self.mintry, self.maxtry)
180 log.msg('first time for %s, need: %d' % (src_ip, ipinfo['max']))
181
182 ipinfo = self.uservar[src_ip]
183
184 # Fill in missing variables
185 if not 'max' in ipinfo:
186 ipinfo['max'] = randint(self.mintry, self.maxtry)
187 if not 'try' in ipinfo:
188 ipinfo['try'] = 0
189 if not 'tried' in ipinfo:
190 ipinfo['tried'] = []
191
192 # Don't count repeated username/password combinations
193 if userpass in ipinfo['tried']:
194 log.msg('already tried this combination')
195 self.savevars()
196 return auth
197
198 ipinfo['try'] += 1
199 attempts = ipinfo['try']
200 need = ipinfo['max']
201 log.msg('login attempt: %d' % (attempts,))
202
203 # Check if enough login attempts are tried
204 if attempts < need:
205 self.uservar[src_ip]['tried'].append(userpass)
206 elif attempts == need:
207 ipinfo['user'] = thelogin
208 ipinfo['pw'] = thepasswd
209 cache.append(userpass)
210 if len(cache) > self.maxcache:
211 cache.pop(0)
212 auth = True
213 # Returning after successful login
214 elif attempts > need:
215 if not 'user' in ipinfo or not 'pw' in ipinfo:
216 log.msg('return, but username or password not set!!!')
217 ipinfo['tried'].append(userpass)
218 ipinfo['try'] = 1
219 else:
220 log.msg('login return, expect: [%s/%s]' % (ipinfo['user'], ipinfo['pw']))
221 if thelogin == ipinfo['user'] and thepasswd == ipinfo['pw']:
222 auth = True
223 self.savevars()
224 return auth
225
226
[end of cowrie/core/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cowrie/core/auth.py b/cowrie/core/auth.py
--- a/cowrie/core/auth.py
+++ b/cowrie/core/auth.py
@@ -7,6 +7,7 @@
from __future__ import division, absolute_import
+import re
import json
from os import path
from random import randint
@@ -21,7 +22,7 @@
"""
def __init__(self):
- self.userdb = []
+ self.userdb = {}
self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')
self.load()
@@ -44,55 +45,50 @@
if line.startswith(b'#'):
continue
- (login, uid, passwd) = line.split(b':', 2)
+ login, passwd = re.split(br':\w+:', line, 1)
+ self.adduser(login, passwd)
- self.userdb.append((login, passwd))
+ def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):
+ for credentials, policy in self.userdb.items():
+ login, passwd = credentials
- def save(self):
- """
- save the user db
- """
+ if self.match_rule(login, thelogin):
+ if self.match_rule(passwd, thepasswd):
+ return policy
- # Note: this is subject to races between cowrie instances, but hey ...
- with open(self.userdb_file, 'w') as f:
- for (login, passwd) in self.userdb:
- f.write('%s:x:%s\n' % (login, passwd))
+ return False
- def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):
- """
- check entered username/password against database
- note that it allows multiple passwords for a single username
- it also knows wildcard '*' for any username or password
- prepend password with ! to explicitly deny it. Denials must come before wildcards
- """
- for (login, passwd) in self.userdb:
- # Explicitly fail on !password
- if login == thelogin and passwd == b'!' + thepasswd:
- return False
- if login in (thelogin, b'*') and passwd in (thepasswd, b'*'):
- return True
- return False
+ def match_rule(self, rule, input):
+ if type(rule) is bytes:
+ return rule in [b'*', input]
+ else:
+ return bool(rule.search(input))
- def user_password_exists(self, thelogin, thepasswd):
+ def re_or_str(self, rule):
"""
+ Convert a /.../ type rule to a regex, otherwise return the string as-is
"""
- for (login, passwd) in self.userdb:
- if login == thelogin and passwd == thepasswd:
- return True
- return False
+ res = re.match(br'/(.+)/(i)?$', rule)
+ if res:
+ return re.compile(res.group(1), re.IGNORECASE if res.group(2) else 0)
+
+ return rule
def adduser(self, login, passwd):
- """
- """
- if self.user_password_exists(login, passwd):
- return
- self.userdb.append((login, passwd))
- self.save()
+ login = self.re_or_str(login)
+
+ if passwd.startswith(b'!'):
+ policy = False
+ passwd = passwd[1:]
+ else:
+ policy = True
+ passwd = self.re_or_str(passwd)
+ self.userdb[(login, passwd)] = policy
class AuthRandom(object):
| {"golden_diff": "diff --git a/cowrie/core/auth.py b/cowrie/core/auth.py\n--- a/cowrie/core/auth.py\n+++ b/cowrie/core/auth.py\n@@ -7,6 +7,7 @@\n \n from __future__ import division, absolute_import\n \n+import re\n import json\n from os import path\n from random import randint\n@@ -21,7 +22,7 @@\n \"\"\"\n \n def __init__(self):\n- self.userdb = []\n+ self.userdb = {}\n self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')\n self.load()\n \n@@ -44,55 +45,50 @@\n if line.startswith(b'#'):\n continue\n \n- (login, uid, passwd) = line.split(b':', 2)\n+ login, passwd = re.split(br':\\w+:', line, 1)\n+ self.adduser(login, passwd)\n \n- self.userdb.append((login, passwd))\n \n+ def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n+ for credentials, policy in self.userdb.items():\n+ login, passwd = credentials\n \n- def save(self):\n- \"\"\"\n- save the user db\n- \"\"\"\n+ if self.match_rule(login, thelogin):\n+ if self.match_rule(passwd, thepasswd):\n+ return policy\n \n- # Note: this is subject to races between cowrie instances, but hey ...\n- with open(self.userdb_file, 'w') as f:\n- for (login, passwd) in self.userdb:\n- f.write('%s:x:%s\\n' % (login, passwd))\n+ return False\n \n \n- def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n- \"\"\"\n- check entered username/password against database\n- note that it allows multiple passwords for a single username\n- it also knows wildcard '*' for any username or password\n- prepend password with ! to explicitly deny it. Denials must come before wildcards\n- \"\"\"\n- for (login, passwd) in self.userdb:\n- # Explicitly fail on !password\n- if login == thelogin and passwd == b'!' + thepasswd:\n- return False\n- if login in (thelogin, b'*') and passwd in (thepasswd, b'*'):\n- return True\n- return False\n+ def match_rule(self, rule, input):\n+ if type(rule) is bytes:\n+ return rule in [b'*', input]\n+ else:\n+ return bool(rule.search(input))\n \n \n- def user_password_exists(self, thelogin, thepasswd):\n+ def re_or_str(self, rule):\n \"\"\"\n+ Convert a /.../ type rule to a regex, otherwise return the string as-is\n \"\"\"\n- for (login, passwd) in self.userdb:\n- if login == thelogin and passwd == thepasswd:\n- return True\n- return False\n+ res = re.match(br'/(.+)/(i)?$', rule)\n+ if res:\n+ return re.compile(res.group(1), re.IGNORECASE if res.group(2) else 0)\n+\n+ return rule\n \n \n def adduser(self, login, passwd):\n- \"\"\"\n- \"\"\"\n- if self.user_password_exists(login, passwd):\n- return\n- self.userdb.append((login, passwd))\n- self.save()\n+ login = self.re_or_str(login)\n+\n+ if passwd.startswith(b'!'):\n+ policy = False\n+ passwd = passwd[1:]\n+ else:\n+ policy = True\n \n+ passwd = self.re_or_str(passwd)\n+ self.userdb[(login, passwd)] = policy\n \n \n class AuthRandom(object):\n", "issue": "Implement regular expressions in userdb.txt\nThe file that contains the combinations of usernames and passwords that Cowrie accepts from the attackers (`data/userdb.txt`) currently handles 3 special characters - `#`, which means a comment till the end of the line, `!`, which means negation, and `*`, which means \"anything\" (in either the username or the password field).\r\n\r\nWould it be possible to allow any regular expression instead of the special characters '!' and '*'?\r\n\r\nI've seen attackers use variations of the password \"honeypot\" to determine that they are dealing with a honeypot and refuse to conduct their usual attack. 
Examples include \"Honeypot321\" (309 times), \"honeypot\" (6 times), and \"nologinissahoneypotlmao\" (once) over a 17-month period.\r\n\r\nI could, of course, explicitly block just these 3 passwords, but I'd like to disallow any password with the word \"honeypot\" (case-insensitive) in it.\n", "before_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\n\"\"\"\nThis module contains ...\n\"\"\"\n\nfrom __future__ import division, absolute_import\n\nimport json\nfrom os import path\nfrom random import randint\n\nfrom twisted.python import log\n\nfrom cowrie.core.config import CONFIG\n\nclass UserDB(object):\n \"\"\"\n By Walter de Jong <[email protected]>\n \"\"\"\n\n def __init__(self):\n self.userdb = []\n self.userdb_file = '%s/userdb.txt' % CONFIG.get('honeypot', 'data_path')\n self.load()\n\n\n def load(self):\n \"\"\"\n load the user db\n \"\"\"\n\n with open(self.userdb_file, 'rb') as f:\n while True:\n rawline = f.readline()\n if not rawline:\n break\n\n line = rawline.strip()\n if not line:\n continue\n\n if line.startswith(b'#'):\n continue\n\n (login, uid, passwd) = line.split(b':', 2)\n\n self.userdb.append((login, passwd))\n\n\n def save(self):\n \"\"\"\n save the user db\n \"\"\"\n\n # Note: this is subject to races between cowrie instances, but hey ...\n with open(self.userdb_file, 'w') as f:\n for (login, passwd) in self.userdb:\n f.write('%s:x:%s\\n' % (login, passwd))\n\n\n def checklogin(self, thelogin, thepasswd, src_ip='0.0.0.0'):\n \"\"\"\n check entered username/password against database\n note that it allows multiple passwords for a single username\n it also knows wildcard '*' for any username or password\n prepend password with ! to explicitly deny it. Denials must come before wildcards\n \"\"\"\n for (login, passwd) in self.userdb:\n # Explicitly fail on !password\n if login == thelogin and passwd == b'!' 
+ thepasswd:\n return False\n if login in (thelogin, b'*') and passwd in (thepasswd, b'*'):\n return True\n return False\n\n\n def user_password_exists(self, thelogin, thepasswd):\n \"\"\"\n \"\"\"\n for (login, passwd) in self.userdb:\n if login == thelogin and passwd == thepasswd:\n return True\n return False\n\n\n def adduser(self, login, passwd):\n \"\"\"\n \"\"\"\n if self.user_password_exists(login, passwd):\n return\n self.userdb.append((login, passwd))\n self.save()\n\n\n\nclass AuthRandom(object):\n \"\"\"\n Alternative class that defines the checklogin() method.\n Users will be authenticated after a random number of attempts.\n \"\"\"\n\n def __init__(self):\n # Default values\n self.mintry, self.maxtry, self.maxcache = 2, 5, 10\n\n # Are there auth_class parameters?\n if CONFIG.has_option('honeypot', 'auth_class_parameters'):\n parameters = CONFIG.get('honeypot', 'auth_class_parameters')\n parlist = parameters.split(',')\n if len(parlist) == 3:\n self.mintry = int(parlist[0])\n self.maxtry = int(parlist[1])\n self.maxcache = int(parlist[2])\n\n if self.maxtry < self.mintry:\n self.maxtry = self.mintry + 1\n log.msg('maxtry < mintry, adjusting maxtry to: %d' % (self.maxtry,))\n self.uservar = {}\n self.uservar_file = '%s/uservar.json' % CONFIG.get('honeypot', 'data_path')\n self.loadvars()\n\n\n def loadvars(self):\n \"\"\"\n Load user vars from json file\n \"\"\"\n if path.isfile(self.uservar_file):\n with open(self.uservar_file, 'rb') as fp:\n try:\n self.uservar = json.load(fp)\n except:\n self.uservar = {}\n\n\n def savevars(self):\n \"\"\"\n Save the user vars to json file\n \"\"\"\n data = self.uservar\n # Note: this is subject to races between cowrie logins\n with open(self.uservar_file, 'wb') as fp:\n json.dump(data, fp)\n\n\n def checklogin(self, thelogin, thepasswd, src_ip):\n \"\"\"\n Every new source IP will have to try a random number of times between\n 'mintry' and 'maxtry' before succeeding to login.\n All username/password combinations must be different.\n The successful login combination is stored with the IP address.\n Successful username/passwords pairs are also cached for 'maxcache' times.\n This is to allow access for returns from different IP addresses.\n Variables are saved in 'uservar.json' in the data directory.\n \"\"\"\n\n auth = False\n userpass = thelogin + ':' + thepasswd\n\n if not 'cache' in self.uservar:\n self.uservar['cache'] = []\n cache = self.uservar['cache']\n\n # Check if it is the first visit from src_ip\n if src_ip not in self.uservar:\n self.uservar[src_ip] = {}\n ipinfo = self.uservar[src_ip]\n ipinfo['try'] = 0\n if userpass in cache:\n log.msg('first time for %s, found cached: %s' % (src_ip, userpass))\n ipinfo['max'] = 1\n ipinfo['user'] = thelogin\n ipinfo['pw'] = thepasswd\n auth = True\n self.savevars()\n return auth\n else:\n ipinfo['max'] = randint(self.mintry, self.maxtry)\n log.msg('first time for %s, need: %d' % (src_ip, ipinfo['max']))\n\n ipinfo = self.uservar[src_ip]\n\n # Fill in missing variables\n if not 'max' in ipinfo:\n ipinfo['max'] = randint(self.mintry, self.maxtry)\n if not 'try' in ipinfo:\n ipinfo['try'] = 0\n if not 'tried' in ipinfo:\n ipinfo['tried'] = []\n\n # Don't count repeated username/password combinations\n if userpass in ipinfo['tried']:\n log.msg('already tried this combination')\n self.savevars()\n return auth\n\n ipinfo['try'] += 1\n attempts = ipinfo['try']\n need = ipinfo['max']\n log.msg('login attempt: %d' % (attempts,))\n\n # Check if enough login attempts are tried\n if attempts < need:\n 
self.uservar[src_ip]['tried'].append(userpass)\n elif attempts == need:\n ipinfo['user'] = thelogin\n ipinfo['pw'] = thepasswd\n cache.append(userpass)\n if len(cache) > self.maxcache:\n cache.pop(0)\n auth = True\n # Returning after successful login\n elif attempts > need:\n if not 'user' in ipinfo or not 'pw' in ipinfo:\n log.msg('return, but username or password not set!!!')\n ipinfo['tried'].append(userpass)\n ipinfo['try'] = 1\n else:\n log.msg('login return, expect: [%s/%s]' % (ipinfo['user'], ipinfo['pw']))\n if thelogin == ipinfo['user'] and thepasswd == ipinfo['pw']:\n auth = True\n self.savevars()\n return auth\n\n", "path": "cowrie/core/auth.py"}]} | 3,012 | 846 |
gh_patches_debug_14227 | rasdani/github-patches | git_diff | castorini__pyserini-1626 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error for SPLADE on-the-fly encoding with pytorch
command used:
```bash
python -m pyserini.search.lucene --threads 12 --batch-size 128 \
--index msmarco-v1-passage-splade-pp-ed \
--topics msmarco-passage-dev-subset \
--encoder naver/splade-cocondenser-ensembledistil \
--output run.msmarco-v1-passage.splade-pp-ed-pytorch.dev.txt \
--hits 1000 --impact
```
error message:
> ...
> File "/home/arthur/workplace/pyserini/pyserini/encode/_splade.py", line 28, in encode
> raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)
> NameError: name 'batch_token_ids' is not defined
</issue>
<code>
[start of pyserini/encode/_splade.py]
1 import torch
2 from transformers import AutoModelForMaskedLM, AutoTokenizer
3 import numpy as np
4
5 from pyserini.encode import QueryEncoder
6
7
8 class SpladeQueryEncoder(QueryEncoder):
9 def __init__(self, model_name_or_path, tokenizer_name=None, device='cpu'):
10 self.device = device
11 self.model = AutoModelForMaskedLM.from_pretrained(model_name_or_path)
12 self.model.to(self.device)
13 self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name or model_name_or_path)
14 self.reverse_voc = {v: k for k, v in self.tokenizer.vocab.items()}
15 self.weight_range = 5
16 self.quant_range = 256
17
18 def encode(self, text, max_length=256, **kwargs):
19 inputs = self.tokenizer([text], max_length=max_length, padding='longest',
20 truncation=True, add_special_tokens=True,
21 return_tensors='pt').to(self.device)
22 input_ids = inputs['input_ids']
23 input_attention = inputs['attention_mask']
24 batch_logits = self.model(input_ids)['logits']
25 batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))
26 * input_attention.unsqueeze(-1), dim=1)
27 batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()
28 raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)
29 return self._get_encoded_query_token_wight_dicts(raw_weights)[0]
30
31 def _output_to_weight_dicts(self, batch_aggregated_logits):
32 to_return = []
33 for aggregated_logits in batch_aggregated_logits:
34 col = np.nonzero(aggregated_logits)[0]
35 weights = aggregated_logits[col]
36 d = {self.reverse_voc[k]: float(v) for k, v in zip(list(col), list(weights))}
37 to_return.append(d)
38 return to_return
39
40 def _get_encoded_query_token_wight_dicts(self, tok_weights):
41 to_return = []
42 for _tok_weight in tok_weights:
43 _weights = {}
44 for token, weight in _tok_weight.items():
45 weight_quanted = round(weight / self.weight_range * self.quant_range)
46 _weights[token] = weight_quanted
47 to_return.append(_weights)
48 return to_return
49
[end of pyserini/encode/_splade.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyserini/encode/_splade.py b/pyserini/encode/_splade.py
--- a/pyserini/encode/_splade.py
+++ b/pyserini/encode/_splade.py
@@ -25,7 +25,7 @@
batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))
* input_attention.unsqueeze(-1), dim=1)
batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()
- raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)
+ raw_weights = self._output_to_weight_dicts(batch_aggregated_logits)
return self._get_encoded_query_token_wight_dicts(raw_weights)[0]
def _output_to_weight_dicts(self, batch_aggregated_logits):
| {"golden_diff": "diff --git a/pyserini/encode/_splade.py b/pyserini/encode/_splade.py\n--- a/pyserini/encode/_splade.py\n+++ b/pyserini/encode/_splade.py\n@@ -25,7 +25,7 @@\n batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))\n * input_attention.unsqueeze(-1), dim=1)\n batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()\n- raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)\n+ raw_weights = self._output_to_weight_dicts(batch_aggregated_logits)\n return self._get_encoded_query_token_wight_dicts(raw_weights)[0]\n \n def _output_to_weight_dicts(self, batch_aggregated_logits):\n", "issue": "Error for SPLADE on-the-fly encoding with pytorch \ncommand used:\r\n```bash\r\npython -m pyserini.search.lucene --threads 12 --batch-size 128 \\\r\n --index msmarco-v1-passage-splade-pp-ed \\\r\n --topics msmarco-passage-dev-subset \\\r\n --encoder naver/splade-cocondenser-ensembledistil \\\r\n --output run.msmarco-v1-passage.splade-pp-ed-pytorch.dev.txt \\\r\n --hits 1000 --impact\r\n```\r\n\r\nerror message:\r\n> ...\r\n> File \"/home/arthur/workplace/pyserini/pyserini/encode/_splade.py\", line 28, in encode\r\n> raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)\r\n> NameError: name 'batch_token_ids' is not defined\r\n\n", "before_files": [{"content": "import torch\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer\nimport numpy as np\n\nfrom pyserini.encode import QueryEncoder\n\n\nclass SpladeQueryEncoder(QueryEncoder):\n def __init__(self, model_name_or_path, tokenizer_name=None, device='cpu'):\n self.device = device\n self.model = AutoModelForMaskedLM.from_pretrained(model_name_or_path)\n self.model.to(self.device)\n self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name or model_name_or_path)\n self.reverse_voc = {v: k for k, v in self.tokenizer.vocab.items()}\n self.weight_range = 5\n self.quant_range = 256\n\n def encode(self, text, max_length=256, **kwargs):\n inputs = self.tokenizer([text], max_length=max_length, padding='longest',\n truncation=True, add_special_tokens=True,\n return_tensors='pt').to(self.device)\n input_ids = inputs['input_ids']\n input_attention = inputs['attention_mask']\n batch_logits = self.model(input_ids)['logits']\n batch_aggregated_logits, _ = torch.max(torch.log(1 + torch.relu(batch_logits))\n * input_attention.unsqueeze(-1), dim=1)\n batch_aggregated_logits = batch_aggregated_logits.cpu().detach().numpy()\n raw_weights = self._output_to_weight_dicts(batch_token_ids, batch_weights)\n return self._get_encoded_query_token_wight_dicts(raw_weights)[0]\n\n def _output_to_weight_dicts(self, batch_aggregated_logits):\n to_return = []\n for aggregated_logits in batch_aggregated_logits:\n col = np.nonzero(aggregated_logits)[0]\n weights = aggregated_logits[col]\n d = {self.reverse_voc[k]: float(v) for k, v in zip(list(col), list(weights))}\n to_return.append(d)\n return to_return\n\n def _get_encoded_query_token_wight_dicts(self, tok_weights):\n to_return = []\n for _tok_weight in tok_weights:\n _weights = {}\n for token, weight in _tok_weight.items():\n weight_quanted = round(weight / self.weight_range * self.quant_range)\n _weights[token] = weight_quanted\n to_return.append(_weights)\n return to_return\n", "path": "pyserini/encode/_splade.py"}]} | 1,310 | 173 |
gh_patches_debug_9661 | rasdani/github-patches | git_diff | psychopy__psychopy-2339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setting "Custom code" in StaticComponent doesn't seem to have any effect
The generated script doesn't contain any traces of the `Custom code` entered in the `StaticComponent`'s properties dialog.
`psychopy:master`
</issue>
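The patch for this record (shown further down) appends the contents of the `Custom code` field to the string that `writeParamUpdates` writes for the static period. A stand-alone rendering of that string, using a made-up component name and custom-code text, shows what should now appear in the generated experiment script:

```python
# Illustrative values only: 'ISI' and the preload line stand in for whatever the user entered
params = {'name': 'ISI', 'code': "preloadedSound = sound.Sound(nextSoundFile)"}

code = "# component updates done\n"
if params['code']:
    code += ("# Adding custom code for {name}\n"
             "{code}\n".format(name=params['name'], code=params['code']))
print(code)
# -> # component updates done
#    # Adding custom code for ISI
#    preloadedSound = sound.Sound(nextSoundFile)
```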
<code>
[start of psychopy/experiment/components/static/__init__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 Part of the PsychoPy library
6 Copyright (C) 2018 Jonathan Peirce
7 Distributed under the terms of the GNU General Public License (GPL).
8 """
9
10 from __future__ import absolute_import, print_function
11
12 from builtins import str
13 from os import path
14 from psychopy.experiment.components import BaseComponent, Param, _translate
15
16 __author__ = 'Jon Peirce'
17
18 # the absolute path to the folder containing this path
19 thisFolder = path.abspath(path.dirname(__file__))
20 iconFile = path.join(thisFolder, 'static.png')
21 tooltip = _translate('Static: Static screen period (e.g. an ISI). '
22 'Useful for pre-loading stimuli.')
23 _localized = {'Custom code': _translate('Custom code')}
24
25
26 class StaticComponent(BaseComponent):
27 """A Static Component, allowing frame rendering to pause.
28
29 E.g., pause while disk is accessed for loading an image
30 """
31 # override the categories property below
32 # an attribute of the class, determines the section in the components panel
33 categories = ['Custom']
34
35 def __init__(self, exp, parentName, name='ISI',
36 startType='time (s)', startVal=0.0,
37 stopType='duration (s)', stopVal=0.5,
38 startEstim='', durationEstim=''):
39 BaseComponent.__init__(self, exp, parentName, name=name)
40 self.updatesList = [] # a list of dicts {compParams, fieldName}
41 self.type = 'Static'
42 self.url = "http://www.psychopy.org/builder/components/static.html"
43 hnt = _translate(
44 "Custom code to be run during the static period (after updates)")
45 self.params['code'] = Param("", valType='code',
46 hint=hnt,
47 label=_localized['Custom code'])
48 self.order = ['name'] # make name come first (others don't matter)
49
50 hnt = _translate("How do you want to define your start point?")
51 self.params['startType'] = Param(startType, valType='str',
52 allowedVals=['time (s)', 'frame N'],
53 hint=hnt)
54 hnt = _translate("How do you want to define your end point?")
55 _allow = ['duration (s)', 'duration (frames)', 'time (s)', 'frame N']
56 self.params['stopType'] = Param(stopType, valType='str',
57 allowedVals=_allow, # copy not needed
58 hint=hnt)
59 hnt = _translate("When does the component start?")
60 self.params['startVal'] = Param(startVal, valType='code',
61 allowedTypes=[],
62 hint=hnt)
63 hnt = _translate("When does the component end? (blank is endless)")
64 self.params['stopVal'] = Param(stopVal, valType='code',
65 allowedTypes=[],
66 updates='constant', allowedUpdates=[],
67 hint=hnt)
68 hnt = _translate("(Optional) expected start (s), purely for "
69 "representing in the timeline")
70 self.params['startEstim'] = Param(startEstim, valType='code',
71 allowedTypes=[],
72 hint=hnt)
73 hnt = _translate("(Optional) expected duration (s), purely for "
74 "representing in the timeline")
75 self.params['durationEstim'] = Param(durationEstim, valType='code',
76 allowedTypes=[],
77 hint=hnt)
78
79 def addComponentUpdate(self, routine, compName, fieldName):
80 self.updatesList.append({'compName': compName,
81 'fieldName': fieldName,
82 'routine': routine})
83
84 def remComponentUpdate(self, routine, compName, fieldName):
85 # have to do this in a loop rather than a simple remove
86 target = {'compName': compName, 'fieldName': fieldName,
87 'routine': routine}
88 for item in self.updatesList:
89 if item == target:
90 self.updatesList.remove(item)
91
92 def writeInitCode(self, buff):
93 code = ("%(name)s = clock.StaticPeriod(win=win, "
94 "screenHz=expInfo['frameRate'], name='%(name)s')\n")
95 buff.writeIndented(code % self.params)
96
97 def writeFrameCode(self, buff):
98 self.writeStartTestCode(buff)
99 # to get out of the if statement
100 buff.setIndentLevel(-1, relative=True)
101 self.writeStopTestCode(buff)
102
103 def writeStartTestCode(self, buff):
104 """This will be executed as the final component in the routine
105 """
106 buff.writeIndented("# *%s* period\n" % (self.params['name']))
107 BaseComponent.writeStartTestCode(self, buff)
108
109 if self.params['stopType'].val == 'time (s)':
110 durationSecsStr = "%(stopVal)s-t" % (self.params)
111 elif self.params['stopType'].val == 'duration (s)':
112 durationSecsStr = "%(stopVal)s" % (self.params)
113 elif self.params['stopType'].val == 'duration (frames)':
114 durationSecsStr = "%(stopVal)s*frameDur" % (self.params)
115 elif self.params['stopType'].val == 'frame N':
116 durationSecsStr = "(%(stopVal)s-frameN)*frameDur" % (self.params)
117 else:
118 msg = ("Couldn't deduce end point for startType=%(startType)s, "
119 "stopType=%(stopType)s")
120 raise Exception(msg % self.params)
121 vals = (self.params['name'], durationSecsStr)
122 buff.writeIndented("%s.start(%s)\n" % vals)
123
124 def writeStopTestCode(self, buff):
125 """Test whether we need to stop
126 """
127 code = ("elif %(name)s.status == STARTED: # one frame should "
128 "pass before updating params and completing\n")
129 buff.writeIndented(code % self.params)
130 buff.setIndentLevel(+1, relative=True) # entered an if statement
131 self.writeParamUpdates(buff)
132 code = "%(name)s.complete() # finish the static period\n"
133 buff.writeIndented(code % self.params)
134 # to get out of the if statement
135 buff.setIndentLevel(-1, relative=True)
136
137 # pass # the clock.StaticPeriod class handles its own stopping
138
139 def writeParamUpdates(self, buff, updateType=None, paramNames=None):
140 """Write updates. Unlike most components, which us this method
141 to update themselves, the Static Component uses this to update
142 *other* components
143 """
144 if updateType == 'set every repeat':
145 return # the static component doesn't need to change itself
146 if len(self.updatesList):
147 code = "# updating other components during *%s*\n"
148 buff.writeIndented(code % self.params['name'])
149 for update in self.updatesList:
150 # update = {'compName':compName,'fieldName':fieldName,
151 # 'routine':routine}
152 compName = update['compName']
153 fieldName = update['fieldName']
154 routine = self.exp.routines[update['routine']]
155 if hasattr(compName, 'params'):
156 prms = compName.params # it's already a compon so get params
157 else:
158 # it's a name so get compon and then get params
159 prms = self.exp.getComponentFromName(str(compName)).params
160 self.writeParamUpdate(buff, compName=compName,
161 paramName=fieldName,
162 val=prms[fieldName],
163 updateType=prms[fieldName].updates,
164 params=prms)
165 code = "# component updates done\n"
166 buff.writeIndented(code)
167
[end of psychopy/experiment/components/static/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/psychopy/experiment/components/static/__init__.py b/psychopy/experiment/components/static/__init__.py
--- a/psychopy/experiment/components/static/__init__.py
+++ b/psychopy/experiment/components/static/__init__.py
@@ -163,4 +163,11 @@
updateType=prms[fieldName].updates,
params=prms)
code = "# component updates done\n"
- buff.writeIndented(code)
+
+ # Write custom code
+ if self.params['code']:
+ code += ("# Adding custom code for {name}\n"
+ "{code}\n".format(name=self.params['name'],
+ code=self.params['code']))
+
+ buff.writeIndentedLines(code)
| {"golden_diff": "diff --git a/psychopy/experiment/components/static/__init__.py b/psychopy/experiment/components/static/__init__.py\n--- a/psychopy/experiment/components/static/__init__.py\n+++ b/psychopy/experiment/components/static/__init__.py\n@@ -163,4 +163,11 @@\n updateType=prms[fieldName].updates,\n params=prms)\n code = \"# component updates done\\n\"\n- buff.writeIndented(code)\n+\n+ # Write custom code\n+ if self.params['code']:\n+ code += (\"# Adding custom code for {name}\\n\"\n+ \"{code}\\n\".format(name=self.params['name'],\n+ code=self.params['code']))\n+\n+ buff.writeIndentedLines(code)\n", "issue": "Setting \"Custom code\" in StaticComponent doesn't seem to have any effect\nThe generated script doesn't contain any traces of the `Custom code` entered in the `StaticComponent`'s properties dialog.\r\n\r\n`psychopy:master`\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nPart of the PsychoPy library\nCopyright (C) 2018 Jonathan Peirce\nDistributed under the terms of the GNU General Public License (GPL).\n\"\"\"\n\nfrom __future__ import absolute_import, print_function\n\nfrom builtins import str\nfrom os import path\nfrom psychopy.experiment.components import BaseComponent, Param, _translate\n\n__author__ = 'Jon Peirce'\n\n# the absolute path to the folder containing this path\nthisFolder = path.abspath(path.dirname(__file__))\niconFile = path.join(thisFolder, 'static.png')\ntooltip = _translate('Static: Static screen period (e.g. an ISI). '\n 'Useful for pre-loading stimuli.')\n_localized = {'Custom code': _translate('Custom code')}\n\n\nclass StaticComponent(BaseComponent):\n \"\"\"A Static Component, allowing frame rendering to pause.\n\n E.g., pause while disk is accessed for loading an image\n \"\"\"\n # override the categories property below\n # an attribute of the class, determines the section in the components panel\n categories = ['Custom']\n\n def __init__(self, exp, parentName, name='ISI',\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=0.5,\n startEstim='', durationEstim=''):\n BaseComponent.__init__(self, exp, parentName, name=name)\n self.updatesList = [] # a list of dicts {compParams, fieldName}\n self.type = 'Static'\n self.url = \"http://www.psychopy.org/builder/components/static.html\"\n hnt = _translate(\n \"Custom code to be run during the static period (after updates)\")\n self.params['code'] = Param(\"\", valType='code',\n hint=hnt,\n label=_localized['Custom code'])\n self.order = ['name'] # make name come first (others don't matter)\n\n hnt = _translate(\"How do you want to define your start point?\")\n self.params['startType'] = Param(startType, valType='str',\n allowedVals=['time (s)', 'frame N'],\n hint=hnt)\n hnt = _translate(\"How do you want to define your end point?\")\n _allow = ['duration (s)', 'duration (frames)', 'time (s)', 'frame N']\n self.params['stopType'] = Param(stopType, valType='str',\n allowedVals=_allow, # copy not needed\n hint=hnt)\n hnt = _translate(\"When does the component start?\")\n self.params['startVal'] = Param(startVal, valType='code',\n allowedTypes=[],\n hint=hnt)\n hnt = _translate(\"When does the component end? 
(blank is endless)\")\n self.params['stopVal'] = Param(stopVal, valType='code',\n allowedTypes=[],\n updates='constant', allowedUpdates=[],\n hint=hnt)\n hnt = _translate(\"(Optional) expected start (s), purely for \"\n \"representing in the timeline\")\n self.params['startEstim'] = Param(startEstim, valType='code',\n allowedTypes=[],\n hint=hnt)\n hnt = _translate(\"(Optional) expected duration (s), purely for \"\n \"representing in the timeline\")\n self.params['durationEstim'] = Param(durationEstim, valType='code',\n allowedTypes=[],\n hint=hnt)\n\n def addComponentUpdate(self, routine, compName, fieldName):\n self.updatesList.append({'compName': compName,\n 'fieldName': fieldName,\n 'routine': routine})\n\n def remComponentUpdate(self, routine, compName, fieldName):\n # have to do this in a loop rather than a simple remove\n target = {'compName': compName, 'fieldName': fieldName,\n 'routine': routine}\n for item in self.updatesList:\n if item == target:\n self.updatesList.remove(item)\n\n def writeInitCode(self, buff):\n code = (\"%(name)s = clock.StaticPeriod(win=win, \"\n \"screenHz=expInfo['frameRate'], name='%(name)s')\\n\")\n buff.writeIndented(code % self.params)\n\n def writeFrameCode(self, buff):\n self.writeStartTestCode(buff)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n self.writeStopTestCode(buff)\n\n def writeStartTestCode(self, buff):\n \"\"\"This will be executed as the final component in the routine\n \"\"\"\n buff.writeIndented(\"# *%s* period\\n\" % (self.params['name']))\n BaseComponent.writeStartTestCode(self, buff)\n\n if self.params['stopType'].val == 'time (s)':\n durationSecsStr = \"%(stopVal)s-t\" % (self.params)\n elif self.params['stopType'].val == 'duration (s)':\n durationSecsStr = \"%(stopVal)s\" % (self.params)\n elif self.params['stopType'].val == 'duration (frames)':\n durationSecsStr = \"%(stopVal)s*frameDur\" % (self.params)\n elif self.params['stopType'].val == 'frame N':\n durationSecsStr = \"(%(stopVal)s-frameN)*frameDur\" % (self.params)\n else:\n msg = (\"Couldn't deduce end point for startType=%(startType)s, \"\n \"stopType=%(stopType)s\")\n raise Exception(msg % self.params)\n vals = (self.params['name'], durationSecsStr)\n buff.writeIndented(\"%s.start(%s)\\n\" % vals)\n\n def writeStopTestCode(self, buff):\n \"\"\"Test whether we need to stop\n \"\"\"\n code = (\"elif %(name)s.status == STARTED: # one frame should \"\n \"pass before updating params and completing\\n\")\n buff.writeIndented(code % self.params)\n buff.setIndentLevel(+1, relative=True) # entered an if statement\n self.writeParamUpdates(buff)\n code = \"%(name)s.complete() # finish the static period\\n\"\n buff.writeIndented(code % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n\n # pass # the clock.StaticPeriod class handles its own stopping\n\n def writeParamUpdates(self, buff, updateType=None, paramNames=None):\n \"\"\"Write updates. 
Unlike most components, which us this method\n to update themselves, the Static Component uses this to update\n *other* components\n \"\"\"\n if updateType == 'set every repeat':\n return # the static component doesn't need to change itself\n if len(self.updatesList):\n code = \"# updating other components during *%s*\\n\"\n buff.writeIndented(code % self.params['name'])\n for update in self.updatesList:\n # update = {'compName':compName,'fieldName':fieldName,\n # 'routine':routine}\n compName = update['compName']\n fieldName = update['fieldName']\n routine = self.exp.routines[update['routine']]\n if hasattr(compName, 'params'):\n prms = compName.params # it's already a compon so get params\n else:\n # it's a name so get compon and then get params\n prms = self.exp.getComponentFromName(str(compName)).params\n self.writeParamUpdate(buff, compName=compName,\n paramName=fieldName,\n val=prms[fieldName],\n updateType=prms[fieldName].updates,\n params=prms)\n code = \"# component updates done\\n\"\n buff.writeIndented(code)\n", "path": "psychopy/experiment/components/static/__init__.py"}]} | 2,662 | 166 |
gh_patches_debug_9108 | rasdani/github-patches | git_diff | Kinto__kinto-726 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The /permissions endpoint is broken
To reproduce, just access https://kinto-ota.dev.mozaws.net/v1/permissions
```
File "/home/ubuntu/venvs/kinto/local/lib/python2.7/site-packages/kinto/core/permission/re
dis.py", line 103, in get_accessible_objects
_, object_id, permission = key.decode('utf-8').split(':')
ValueError: too many values to unpack
```
</issue>
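The unpacking fails because Kinto permission names can themselves contain a colon (for example `record:create`), so a key of the form `permission:<object_id>:<permission>` yields more than three fields when split on every colon. Limiting the split, as the fix below does with `split(':', 2)`, keeps the permission name intact; the key used here is only a plausible example:

```python
key = "permission:/buckets/blog/collections/articles:record:create"

print(key.split(":"))
# ['permission', '/buckets/blog/collections/articles', 'record', 'create']  -> 4 values, unpacking into 3 raises ValueError

print(key.split(":", 2))
# ['permission', '/buckets/blog/collections/articles', 'record:create']     -> exactly the 3 expected fields
```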
<code>
[start of kinto/core/permission/redis.py]
1 from __future__ import absolute_import
2
3 from collections import defaultdict
4
5 from kinto.core.permission import PermissionBase
6 from kinto.core.storage.redis import create_from_config, wrap_redis_error
7
8
9 class Permission(PermissionBase):
10 """Permission backend implementation using Redis.
11
12 Enable in configuration::
13
14 kinto.permission_backend = kinto.core.permission.redis
15
16 *(Optional)* Instance location URI can be customized::
17
18 kinto.permission_url = redis://localhost:6379/2
19
20 A threaded connection pool is enabled by default::
21
22 kinto.permission_pool_size = 50
23
24 :noindex:
25 """
26
27 def __init__(self, client, *args, **kwargs):
28 super(Permission, self).__init__(*args, **kwargs)
29 self._client = client
30
31 @property
32 def settings(self):
33 return dict(self._client.connection_pool.connection_kwargs)
34
35 def initialize_schema(self):
36 # Nothing to do.
37 pass
38
39 def _decode_set(self, results):
40 return set([r.decode('utf-8') for r in results])
41
42 @wrap_redis_error
43 def flush(self):
44 self._client.flushdb()
45
46 @wrap_redis_error
47 def add_user_principal(self, user_id, principal):
48 user_key = 'user:%s' % user_id
49 self._client.sadd(user_key, principal)
50
51 @wrap_redis_error
52 def remove_user_principal(self, user_id, principal):
53 user_key = 'user:%s' % user_id
54 self._client.srem(user_key, principal)
55 if self._client.scard(user_key) == 0:
56 self._client.delete(user_key)
57
58 def remove_principal(self, principal):
59 with self._client.pipeline() as pipe:
60 user_keys = self._client.scan_iter(match='user:*')
61 for user_key in user_keys:
62 pipe.srem(user_key, principal)
63 pipe.execute()
64
65 @wrap_redis_error
66 def get_user_principals(self, user_id):
67 user_key = 'user:%s' % user_id
68 return self._decode_set(self._client.smembers(user_key))
69
70 @wrap_redis_error
71 def add_principal_to_ace(self, object_id, permission, principal):
72 permission_key = 'permission:%s:%s' % (object_id, permission)
73 self._client.sadd(permission_key, principal)
74
75 @wrap_redis_error
76 def remove_principal_from_ace(self, object_id, permission, principal):
77 permission_key = 'permission:%s:%s' % (object_id, permission)
78 self._client.srem(permission_key, principal)
79 if self._client.scard(permission_key) == 0:
80 self._client.delete(permission_key)
81
82 @wrap_redis_error
83 def get_object_permission_principals(self, object_id, permission):
84 permission_key = 'permission:%s:%s' % (object_id, permission)
85 members = self._client.smembers(permission_key)
86 return self._decode_set(members)
87
88 @wrap_redis_error
89 def get_accessible_objects(self, principals, bound_permissions=None):
90 principals = set(principals)
91
92 if bound_permissions:
93 keys = ['permission:%s:%s' % op for op in bound_permissions]
94 else:
95 keys = ['permission:*']
96
97 perms_by_id = dict()
98 for key_pattern in keys:
99 matched = self._client.scan_iter(match=key_pattern)
100 for key in matched:
101 authorized = self._decode_set(self._client.smembers(key))
102 if len(authorized & principals) > 0:
103 _, object_id, permission = key.decode('utf-8').split(':')
104 perms_by_id.setdefault(object_id, set()).add(permission)
105
106 return perms_by_id
107
108 @wrap_redis_error
109 def get_authorized_principals(self, bound_permissions):
110 keys = ['permission:%s:%s' % (o, p) for (o, p) in bound_permissions]
111 if keys:
112 return self._decode_set(self._client.sunion(*list(keys)))
113 return set()
114
115 @wrap_redis_error
116 def get_objects_permissions(self, objects_ids, permissions=None):
117 objects_perms = []
118 for object_id in objects_ids:
119 if permissions is not None:
120 keys = ['permission:%s:%s' % (object_id, permission)
121 for permission in permissions]
122 else:
123 keys = [key.decode('utf-8') for key in self._client.scan_iter(
124 match='permission:%s:*' % object_id)]
125
126 with self._client.pipeline() as pipe:
127 for permission_key in keys:
128 pipe.smembers(permission_key)
129
130 results = pipe.execute()
131
132 permissions = defaultdict(set)
133 for i, result in enumerate(results):
134 permission = keys[i].split(':', 2)[-1]
135 permissions[permission] = self._decode_set(result)
136 objects_perms.append(permissions)
137 return objects_perms
138
139 @wrap_redis_error
140 def replace_object_permissions(self, object_id, permissions):
141 keys = ['permission:%s:%s' % (object_id, permission)
142 for permission in permissions]
143 with self._client.pipeline() as pipe:
144 for key in keys:
145 pipe.delete(key)
146 permission = key.split(':', 2)[-1]
147 principals = permissions[permission]
148 if len(principals) > 0:
149 pipe.sadd(key, *principals)
150 pipe.execute()
151
152 @wrap_redis_error
153 def delete_object_permissions(self, *object_id_list):
154 with self._client.pipeline() as pipe:
155 for object_id in object_id_list:
156 keys = list(self._client.scan_iter(
157 match='permission:%s:*' % object_id))
158 if len(keys) > 0:
159 pipe.delete(*keys)
160 pipe.execute()
161
162
163 def load_from_config(config):
164 client = create_from_config(config, prefix='permission_')
165 return Permission(client)
166
[end of kinto/core/permission/redis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/permission/redis.py b/kinto/core/permission/redis.py
--- a/kinto/core/permission/redis.py
+++ b/kinto/core/permission/redis.py
@@ -100,8 +100,8 @@
for key in matched:
authorized = self._decode_set(self._client.smembers(key))
if len(authorized & principals) > 0:
- _, object_id, permission = key.decode('utf-8').split(':')
- perms_by_id.setdefault(object_id, set()).add(permission)
+ _, obj_id, permission = key.decode('utf-8').split(':', 2)
+ perms_by_id.setdefault(obj_id, set()).add(permission)
return perms_by_id
| {"golden_diff": "diff --git a/kinto/core/permission/redis.py b/kinto/core/permission/redis.py\n--- a/kinto/core/permission/redis.py\n+++ b/kinto/core/permission/redis.py\n@@ -100,8 +100,8 @@\n for key in matched:\n authorized = self._decode_set(self._client.smembers(key))\n if len(authorized & principals) > 0:\n- _, object_id, permission = key.decode('utf-8').split(':')\n- perms_by_id.setdefault(object_id, set()).add(permission)\n+ _, obj_id, permission = key.decode('utf-8').split(':', 2)\n+ perms_by_id.setdefault(obj_id, set()).add(permission)\n \n return perms_by_id\n", "issue": "The /permissions endpoint is broken\nTo reproduce just access https://kinto-ota.dev.mozaws.net/v1/permissions\n\n```\n File \"/home/ubuntu/venvs/kinto/local/lib/python2.7/site-packages/kinto/core/permission/re\ndis.py\", line 103, in get_accessible_objects\n _, object_id, permission = key.decode('utf-8').split(':')\nValueError: too many values to unpack\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom collections import defaultdict\n\nfrom kinto.core.permission import PermissionBase\nfrom kinto.core.storage.redis import create_from_config, wrap_redis_error\n\n\nclass Permission(PermissionBase):\n \"\"\"Permission backend implementation using Redis.\n\n Enable in configuration::\n\n kinto.permission_backend = kinto.core.permission.redis\n\n *(Optional)* Instance location URI can be customized::\n\n kinto.permission_url = redis://localhost:6379/2\n\n A threaded connection pool is enabled by default::\n\n kinto.permission_pool_size = 50\n\n :noindex:\n \"\"\"\n\n def __init__(self, client, *args, **kwargs):\n super(Permission, self).__init__(*args, **kwargs)\n self._client = client\n\n @property\n def settings(self):\n return dict(self._client.connection_pool.connection_kwargs)\n\n def initialize_schema(self):\n # Nothing to do.\n pass\n\n def _decode_set(self, results):\n return set([r.decode('utf-8') for r in results])\n\n @wrap_redis_error\n def flush(self):\n self._client.flushdb()\n\n @wrap_redis_error\n def add_user_principal(self, user_id, principal):\n user_key = 'user:%s' % user_id\n self._client.sadd(user_key, principal)\n\n @wrap_redis_error\n def remove_user_principal(self, user_id, principal):\n user_key = 'user:%s' % user_id\n self._client.srem(user_key, principal)\n if self._client.scard(user_key) == 0:\n self._client.delete(user_key)\n\n def remove_principal(self, principal):\n with self._client.pipeline() as pipe:\n user_keys = self._client.scan_iter(match='user:*')\n for user_key in user_keys:\n pipe.srem(user_key, principal)\n pipe.execute()\n\n @wrap_redis_error\n def get_user_principals(self, user_id):\n user_key = 'user:%s' % user_id\n return self._decode_set(self._client.smembers(user_key))\n\n @wrap_redis_error\n def add_principal_to_ace(self, object_id, permission, principal):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n self._client.sadd(permission_key, principal)\n\n @wrap_redis_error\n def remove_principal_from_ace(self, object_id, permission, principal):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n self._client.srem(permission_key, principal)\n if self._client.scard(permission_key) == 0:\n self._client.delete(permission_key)\n\n @wrap_redis_error\n def get_object_permission_principals(self, object_id, permission):\n permission_key = 'permission:%s:%s' % (object_id, permission)\n members = self._client.smembers(permission_key)\n return self._decode_set(members)\n\n @wrap_redis_error\n def 
get_accessible_objects(self, principals, bound_permissions=None):\n principals = set(principals)\n\n if bound_permissions:\n keys = ['permission:%s:%s' % op for op in bound_permissions]\n else:\n keys = ['permission:*']\n\n perms_by_id = dict()\n for key_pattern in keys:\n matched = self._client.scan_iter(match=key_pattern)\n for key in matched:\n authorized = self._decode_set(self._client.smembers(key))\n if len(authorized & principals) > 0:\n _, object_id, permission = key.decode('utf-8').split(':')\n perms_by_id.setdefault(object_id, set()).add(permission)\n\n return perms_by_id\n\n @wrap_redis_error\n def get_authorized_principals(self, bound_permissions):\n keys = ['permission:%s:%s' % (o, p) for (o, p) in bound_permissions]\n if keys:\n return self._decode_set(self._client.sunion(*list(keys)))\n return set()\n\n @wrap_redis_error\n def get_objects_permissions(self, objects_ids, permissions=None):\n objects_perms = []\n for object_id in objects_ids:\n if permissions is not None:\n keys = ['permission:%s:%s' % (object_id, permission)\n for permission in permissions]\n else:\n keys = [key.decode('utf-8') for key in self._client.scan_iter(\n match='permission:%s:*' % object_id)]\n\n with self._client.pipeline() as pipe:\n for permission_key in keys:\n pipe.smembers(permission_key)\n\n results = pipe.execute()\n\n permissions = defaultdict(set)\n for i, result in enumerate(results):\n permission = keys[i].split(':', 2)[-1]\n permissions[permission] = self._decode_set(result)\n objects_perms.append(permissions)\n return objects_perms\n\n @wrap_redis_error\n def replace_object_permissions(self, object_id, permissions):\n keys = ['permission:%s:%s' % (object_id, permission)\n for permission in permissions]\n with self._client.pipeline() as pipe:\n for key in keys:\n pipe.delete(key)\n permission = key.split(':', 2)[-1]\n principals = permissions[permission]\n if len(principals) > 0:\n pipe.sadd(key, *principals)\n pipe.execute()\n\n @wrap_redis_error\n def delete_object_permissions(self, *object_id_list):\n with self._client.pipeline() as pipe:\n for object_id in object_id_list:\n keys = list(self._client.scan_iter(\n match='permission:%s:*' % object_id))\n if len(keys) > 0:\n pipe.delete(*keys)\n pipe.execute()\n\n\ndef load_from_config(config):\n client = create_from_config(config, prefix='permission_')\n return Permission(client)\n", "path": "kinto/core/permission/redis.py"}]} | 2,297 | 163 |
gh_patches_debug_24023 | rasdani/github-patches | git_diff | apache__airflow-13371 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AirflowMacroPluginRemovedRule fails on non-python files
**Apache Airflow version**: 1.10.14
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: X
- **OS** (e.g. from /etc/os-release): X
- **Kernel** (e.g. `uname -a`): X
- **Install tools**: X
- **Others**: X
**What happened**:
The `AirflowMacroPluginRemovedRule` seems unable to process non-standard python files (e.g. `.xlsx`) and chokes out with an unhelpful error message:
```python
========================================================================================================================================================== STATUS ==========================================================================================================================================================
Check for latest versions of apache-airflow and checker...........................................................................................................................................................................................................................................................SUCCESS
Traceback (most recent call last):
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/bin/airflow", line 37, in <module>
args.func(args)
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py", line 88, in run
all_problems = check_upgrade(formatter, rules)
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py", line 37, in check_upgrade
rule_status = RuleStatus.from_rule(rule)
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/problem.py", line 44, in from_rule
result = rule.check()
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py", line 52, in check
problems.extend(self._check_file(file_path))
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py", line 42, in _check_file
for line_number, line in enumerate(file_pointer, 1):
File "/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 16: invalid start byte
```
**What you expected to happen**:
I expected the macro to skip over files it could not process/understand
**How to reproduce it**:
Add an `.xlsx` or other binary document to the DAGs folder and run the upgrade check.
**Suggested resolution**:
I think it's fine to fail out on these files (it led us to add certain items to the `.airflowignore` which should have been there anyway) but I had to modify the upgrade rule directly to tell me _which_ files were the problem. A more helpful error message here, and possibly a message prompting users to add said files to their `.airflowignore` would be ideal.
</issue>
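A minimal sketch of the guard the reporter asks for, in line with the patch reproduced further down in this entry: it is the `_check_file` method of the rule class listed below, reworked so a single unreadable file is reported by path instead of aborting the whole upgrade check (`MACRO_PLUGIN_CLASS` and `_change_info` already exist on that class).

```python
def _check_file(self, file_path):
    problems = []
    class_name_to_check = self.MACRO_PLUGIN_CLASS.split(".")[-1]
    with open(file_path, "r") as file_pointer:
        try:
            for line_number, line in enumerate(file_pointer, 1):
                if class_name_to_check in line:
                    problems.append(self._change_info(file_path, line_number))
        except UnicodeDecodeError:
            # Binary files (e.g. .xlsx) are named instead of crashing the run.
            problems.append("Unable to read python file {}".format(file_path))
    return problems
```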
<code>
[start of airflow/upgrade/rules/airflow_macro_plugin_removed.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 from __future__ import absolute_import
19
20 from airflow import conf
21 from airflow.upgrade.rules.base_rule import BaseRule
22 from airflow.utils.dag_processing import list_py_file_paths
23
24
25 class AirflowMacroPluginRemovedRule(BaseRule):
26
27 title = "Remove airflow.AirflowMacroPlugin class"
28
29 description = "The airflow.AirflowMacroPlugin class has been removed."
30
31 MACRO_PLUGIN_CLASS = "airflow.AirflowMacroPlugin"
32
33 def _change_info(self, file_path, line_number):
34 return "{} will be removed. Affected file: {} (line {})".format(
35 self.MACRO_PLUGIN_CLASS, file_path, line_number
36 )
37
38 def _check_file(self, file_path):
39 problems = []
40 class_name_to_check = self.MACRO_PLUGIN_CLASS.split(".")[-1]
41 with open(file_path, "r") as file_pointer:
42 for line_number, line in enumerate(file_pointer, 1):
43 if class_name_to_check in line:
44 problems.append(self._change_info(file_path, line_number))
45 return problems
46
47 def check(self):
48 dag_folder = conf.get("core", "dags_folder")
49 file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)
50 problems = []
51 for file_path in file_paths:
52 problems.extend(self._check_file(file_path))
53 return problems
54
[end of airflow/upgrade/rules/airflow_macro_plugin_removed.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/upgrade/rules/airflow_macro_plugin_removed.py b/airflow/upgrade/rules/airflow_macro_plugin_removed.py
--- a/airflow/upgrade/rules/airflow_macro_plugin_removed.py
+++ b/airflow/upgrade/rules/airflow_macro_plugin_removed.py
@@ -39,9 +39,12 @@
problems = []
class_name_to_check = self.MACRO_PLUGIN_CLASS.split(".")[-1]
with open(file_path, "r") as file_pointer:
- for line_number, line in enumerate(file_pointer, 1):
- if class_name_to_check in line:
- problems.append(self._change_info(file_path, line_number))
+ try:
+ for line_number, line in enumerate(file_pointer, 1):
+ if class_name_to_check in line:
+ problems.append(self._change_info(file_path, line_number))
+ except UnicodeDecodeError:
+ problems.append("Unable to read python file {}".format(file_path))
return problems
def check(self):
@@ -49,5 +52,7 @@
file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)
problems = []
for file_path in file_paths:
+ if not file_path.endswith(".py"):
+ continue
problems.extend(self._check_file(file_path))
return problems
| {"golden_diff": "diff --git a/airflow/upgrade/rules/airflow_macro_plugin_removed.py b/airflow/upgrade/rules/airflow_macro_plugin_removed.py\n--- a/airflow/upgrade/rules/airflow_macro_plugin_removed.py\n+++ b/airflow/upgrade/rules/airflow_macro_plugin_removed.py\n@@ -39,9 +39,12 @@\n problems = []\n class_name_to_check = self.MACRO_PLUGIN_CLASS.split(\".\")[-1]\n with open(file_path, \"r\") as file_pointer:\n- for line_number, line in enumerate(file_pointer, 1):\n- if class_name_to_check in line:\n- problems.append(self._change_info(file_path, line_number))\n+ try:\n+ for line_number, line in enumerate(file_pointer, 1):\n+ if class_name_to_check in line:\n+ problems.append(self._change_info(file_path, line_number))\n+ except UnicodeDecodeError:\n+ problems.append(\"Unable to read python file {}\".format(file_path))\n return problems\n \n def check(self):\n@@ -49,5 +52,7 @@\n file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n for file_path in file_paths:\n+ if not file_path.endswith(\".py\"):\n+ continue\n problems.extend(self._check_file(file_path))\n return problems\n", "issue": "AirflowMacroPluginRemovedRule fails on non-python files\n**Apache Airflow version**: 1.10.14\r\n\r\n\r\n**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):\r\n\r\n**Environment**:\r\n\r\n- **Cloud provider or hardware configuration**: X\r\n- **OS** (e.g. from /etc/os-release): X\r\n- **Kernel** (e.g. `uname -a`): X\r\n- **Install tools**: X\r\n- **Others**: X\r\n\r\n**What happened**:\r\n\r\nThe `AirflowMacroPluginRemovedRule` seems unable to process non-standard python files (e.g. `.xlsx`) and chokes out with an unhelpful error message.:\r\n\r\n```python\r\n========================================================================================================================================================== STATUS ==========================================================================================================================================================\r\n\r\nCheck for latest versions of apache-airflow and checker...........................................................................................................................................................................................................................................................SUCCESS\r\nTraceback (most recent call last):\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/bin/airflow\", line 37, in <module>\r\n args.func(args)\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py\", line 88, in run\r\n all_problems = check_upgrade(formatter, rules)\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/checker.py\", line 37, in check_upgrade\r\n rule_status = RuleStatus.from_rule(rule)\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/problem.py\", line 44, in from_rule\r\n result = rule.check()\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py\", line 52, in check\r\n problems.extend(self._check_file(file_path))\r\n File \"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/site-packages/airflow/upgrade/rules/airflow_macro_plugin_removed.py\", line 42, in _check_file\r\n for line_number, line in enumerate(file_pointer, 1):\r\n File 
\"/Users/madison/programs/anaconda3/envs/memphis-airflow/lib/python3.8/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 16: invalid start byte\r\n```\r\n\r\n**What you expected to happen**:\r\n\r\nI expected the macro to skip over files it could not process/understand\r\n\r\n**How to reproduce it**:\r\n\r\nAdd an `.xlsx` or other binary document to the DAGs folder and run the upgrade check.\r\n\r\n\r\n**Suggested resolution**:\r\n\r\nI think it's fine to fail out on these files (it led us to add certain items to the `.airflowignore` which should have been there anyway) but I had to modify the upgrade rule directly to tell me _which_ files were the problem. A more helpful error message here, and possibly a message prompting users to add said files to their `.airflowignore` would be ideal.\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom __future__ import absolute_import\n\nfrom airflow import conf\nfrom airflow.upgrade.rules.base_rule import BaseRule\nfrom airflow.utils.dag_processing import list_py_file_paths\n\n\nclass AirflowMacroPluginRemovedRule(BaseRule):\n\n title = \"Remove airflow.AirflowMacroPlugin class\"\n\n description = \"The airflow.AirflowMacroPlugin class has been removed.\"\n\n MACRO_PLUGIN_CLASS = \"airflow.AirflowMacroPlugin\"\n\n def _change_info(self, file_path, line_number):\n return \"{} will be removed. Affected file: {} (line {})\".format(\n self.MACRO_PLUGIN_CLASS, file_path, line_number\n )\n\n def _check_file(self, file_path):\n problems = []\n class_name_to_check = self.MACRO_PLUGIN_CLASS.split(\".\")[-1]\n with open(file_path, \"r\") as file_pointer:\n for line_number, line in enumerate(file_pointer, 1):\n if class_name_to_check in line:\n problems.append(self._change_info(file_path, line_number))\n return problems\n\n def check(self):\n dag_folder = conf.get(\"core\", \"dags_folder\")\n file_paths = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n for file_path in file_paths:\n problems.extend(self._check_file(file_path))\n return problems\n", "path": "airflow/upgrade/rules/airflow_macro_plugin_removed.py"}]} | 1,848 | 293 |
gh_patches_debug_9399 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-2541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bplan template dates saved but not shown in Dashboard
URL: https://mein.berlin.de/dashboard/projects/erweiterung-mauerpark-bebauungsplan-3-64-im-bezirk/bplan/
user: initiator
expected behaviour: date and time that I have entered are still shown after saving the form
behaviour: dates are no longer shown after saving, no error message, I can still publish the project and date is shown correctly on project tile
device & browser: Desktop, Mac, Chrome Version 76.0.3809.132 (Official Build) (64-bit)
Importance: relevant bug, fix before next release
</issue>
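The symptom (values persist but never reappear in the dashboard) is consistent with the dashboard form not listing the date fields at all, so they are neither rendered nor re-populated. Below is a sketch of the corrected form, matching the diff near the end of this entry; `ExternalProjectForm` and `models.Bplan` come from the `forms.py` listing below, and the `start_date` / `end_date` names are taken from that diff.

```python
class BplanProjectForm(ExternalProjectForm):

    class Meta:
        model = models.Bplan
        # start_date / end_date must be listed here, otherwise the dashboard
        # never shows the previously saved values back to the initiator.
        fields = ['name', 'identifier', 'url', 'description', 'tile_image',
                  'tile_image_copyright', 'is_archived', 'office_worker_email',
                  'start_date', 'end_date']
        required_for_project_publish = ['name', 'url', 'description',
                                        'office_worker_email']
```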
<code>
[start of meinberlin/apps/bplan/forms.py]
1 from django import forms
2
3 from meinberlin.apps.extprojects.forms import ExternalProjectCreateForm
4 from meinberlin.apps.extprojects.forms import ExternalProjectForm
5
6 from . import models
7
8
9 class StatementForm(forms.ModelForm):
10 class Meta:
11 model = models.Statement
12 fields = ['name', 'email', 'statement',
13 'street_number', 'postal_code_city']
14
15
16 class BplanProjectCreateForm(ExternalProjectCreateForm):
17
18 class Meta:
19 model = models.Bplan
20 fields = ['name', 'description', 'tile_image', 'tile_image_copyright']
21
22
23 class BplanProjectForm(ExternalProjectForm):
24
25 class Meta:
26 model = models.Bplan
27 fields = ['name', 'identifier', 'url', 'description', 'tile_image',
28 'tile_image_copyright', 'is_archived', 'office_worker_email']
29 required_for_project_publish = ['name', 'url', 'description',
30 'office_worker_email']
31
[end of meinberlin/apps/bplan/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py
--- a/meinberlin/apps/bplan/forms.py
+++ b/meinberlin/apps/bplan/forms.py
@@ -25,6 +25,7 @@
class Meta:
model = models.Bplan
fields = ['name', 'identifier', 'url', 'description', 'tile_image',
- 'tile_image_copyright', 'is_archived', 'office_worker_email']
+ 'tile_image_copyright', 'is_archived', 'office_worker_email',
+ 'start_date', 'end_date']
required_for_project_publish = ['name', 'url', 'description',
'office_worker_email']
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py\n--- a/meinberlin/apps/bplan/forms.py\n+++ b/meinberlin/apps/bplan/forms.py\n@@ -25,6 +25,7 @@\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n- 'tile_image_copyright', 'is_archived', 'office_worker_email']\n+ 'tile_image_copyright', 'is_archived', 'office_worker_email',\n+ 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n", "issue": "bplan template dates saved but not shown in Dashboard\nURL: https://mein.berlin.de/dashboard/projects/erweiterung-mauerpark-bebauungsplan-3-64-im-bezirk/bplan/\r\nuser: initiator\r\nexpected behaviour: date and time that I have entered are still shown after saving form\r\nbehaviour: dates are no longer shown after saving, no error message, I can still publish the project and date is shown correctly on project tile\r\ndevice & browser: Desktop, mac, chrome Version 76.0.3809.132 (Offizieller Build) (64-Bit)\r\nImportance: relevant bug, fix before next release\n", "before_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n", "path": "meinberlin/apps/bplan/forms.py"}]} | 945 | 157 |
gh_patches_debug_29689 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-2787 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
subproject alias 302 redirect to top-level project sharing alias' name
## Details
I have set up a subproject,
Subproject: cloudify-openstack-plugin-fh
Alias: openstack
* Project URL: http://cfy-rtd-demo.readthedocs.io/
* Build URL (if applicable): https://readthedocs.org/projects/cfy-rtd-demo/builds/5162299/
* Read the Docs username (if applicable): funkyhat
## Expected Result
When navigating to http://cfy-rtd-demo.readthedocs.io/projects/openstack
I expect to see (presumably via a redirect to http://cfy-rtd-demo.readthedocs.io/projects/openstack/en/sphinxify-rtd-demo/) my subproject's docs (`sphinxify-rtd-demo` is the current "active branch" for the subproject).
## Actual Result
I am redirected to http://openstack.readthedocs.io/en/latest/ which is unrelated to my project:
```
< HTTP/1.1 302 Found
* Server nginx/1.10.0 (Ubuntu) is not blacklisted
< Server: nginx/1.10.0 (Ubuntu)
< Date: Fri, 17 Mar 2017 13:00:04 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Language, Cookie
< Location: http://openstack.readthedocs.io/en/latest/
< Content-Language: en
< X-Fallback: True
< X-Served: Django
< X-Deity: web03
```
I have tried rebuilding the main project, which produced no change.
This seems potentially related to #1602
</issue>
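The redirect appears to go wrong because the subproject slug is first looked up as a global project slug, so an alias such as `openstack` is shadowed by the unrelated top-level `openstack` project. Below is a sketch of an alias-first lookup, in the direction of the patch reproduced at the end of this entry; it replaces the body of `inner_view` inside `map_subproject_slug` from the `serve.py` listing below and assumes `get_object_or_404` is imported from `django.shortcuts`.

```python
def inner_view(
        request, subproject=None, subproject_slug=None, *args, **kwargs):
    if subproject is None and subproject_slug:
        # Resolve the alias against the parent project first so it cannot
        # be shadowed by an unrelated project that owns the same slug.
        try:
            rel = ProjectRelationship.objects.get(
                parent=kwargs['project'],
                alias=subproject_slug,
            )
            subproject = rel.child
        except (ProjectRelationship.DoesNotExist, KeyError):
            # No alias match: 404 unless a project with that slug exists.
            get_object_or_404(Project, slug=subproject_slug)
    return view_func(request, subproject=subproject, *args, **kwargs)
```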
<code>
[start of readthedocs/core/views/serve.py]
1 # -*- coding: utf-8 -*-
2 """
3 Doc serving from Python.
4
5 In production there are two modes,
6 * Serving from public symlinks in nginx (readthedocs.org & readthedocs.com)
7 * Serving from private symlinks in Python (readthedocs.com only)
8
9 In development, we have two modes:
10 * Serving from public symlinks in Python
11 * Serving from private symlinks in Python
12
13 This means we should only serve from public symlinks in dev,
14 and generally default to serving from private symlinks in Python only.
15
16 Privacy
17 -------
18
19 These views will take into account the version privacy level.
20
21 Settings
22 --------
23
24 PYTHON_MEDIA (False) - Set this to True to serve docs & media from Python
25 SERVE_DOCS (['private']) - The list of ['private', 'public'] docs to serve.
26 """
27
28 from __future__ import (
29 absolute_import, division, print_function, unicode_literals)
30
31 import logging
32 import mimetypes
33 import os
34 from functools import wraps
35
36 from django.conf import settings
37 from django.http import Http404, HttpResponse, HttpResponseRedirect
38 from django.shortcuts import render
39 from django.views.static import serve
40
41 from readthedocs.builds.models import Version
42 from readthedocs.core.permissions import AdminPermission
43 from readthedocs.core.resolver import resolve, resolve_path
44 from readthedocs.core.symlink import PrivateSymlink, PublicSymlink
45 from readthedocs.projects import constants
46 from readthedocs.projects.models import Project, ProjectRelationship
47
48 log = logging.getLogger(__name__)
49
50
51 def map_subproject_slug(view_func):
52 """
53 A decorator that maps a ``subproject_slug`` URL param into a Project.
54
55 :raises: Http404 if the Project doesn't exist
56
57 .. warning:: Does not take into account any kind of privacy settings.
58 """
59 @wraps(view_func)
60 def inner_view(
61 request, subproject=None, subproject_slug=None, *args, **kwargs):
62 if subproject is None and subproject_slug:
63 try:
64 subproject = Project.objects.get(slug=subproject_slug)
65 except Project.DoesNotExist:
66 try:
67 # Depends on a project passed into kwargs
68 rel = ProjectRelationship.objects.get(
69 parent=kwargs['project'],
70 alias=subproject_slug,
71 )
72 subproject = rel.child
73 except (ProjectRelationship.DoesNotExist, KeyError):
74 raise Http404
75 return view_func(request, subproject=subproject, *args, **kwargs)
76
77 return inner_view
78
79
80 def map_project_slug(view_func):
81 """
82 A decorator that maps a ``project_slug`` URL param into a Project.
83
84 :raises: Http404 if the Project doesn't exist
85
86 .. warning:: Does not take into account any kind of privacy settings.
87 """
88 @wraps(view_func)
89 def inner_view(request, project=None, project_slug=None, *args, **kwargs):
90 if project is None:
91 if not project_slug:
92 project_slug = request.slug
93 try:
94 project = Project.objects.get(slug=project_slug)
95 except Project.DoesNotExist:
96 raise Http404('Project does not exist.')
97 return view_func(request, project=project, *args, **kwargs)
98
99 return inner_view
100
101
102 @map_project_slug
103 @map_subproject_slug
104 def redirect_project_slug(request, project, subproject): # pylint: disable=unused-argument
105 """Handle / -> /en/latest/ directs on subdomains."""
106 return HttpResponseRedirect(resolve(subproject or project))
107
108
109 @map_project_slug
110 @map_subproject_slug
111 def redirect_page_with_filename(request, project, subproject, filename): # pylint: disable=unused-argument # noqa
112 """Redirect /page/file.html to /en/latest/file.html."""
113 return HttpResponseRedirect(
114 resolve(subproject or project, filename=filename))
115
116
117 def _serve_401(request, project):
118 res = render(request, '401.html')
119 res.status_code = 401
120 log.error('Unauthorized access to {0} documentation'.format(project.slug))
121 return res
122
123
124 def _serve_file(request, filename, basepath):
125 # Serve the file from the proper location
126 if settings.DEBUG or getattr(settings, 'PYTHON_MEDIA', False):
127 # Serve from Python
128 return serve(request, filename, basepath)
129 else:
130 # Serve from Nginx
131 content_type, encoding = mimetypes.guess_type(
132 os.path.join(basepath, filename))
133 content_type = content_type or 'application/octet-stream'
134 response = HttpResponse(content_type=content_type)
135 if encoding:
136 response['Content-Encoding'] = encoding
137 try:
138 response['X-Accel-Redirect'] = os.path.join(
139 basepath[len(settings.SITE_ROOT):],
140 filename,
141 )
142 except UnicodeEncodeError:
143 raise Http404
144
145 return response
146
147
148 @map_project_slug
149 @map_subproject_slug
150 def serve_docs(
151 request, project, subproject, lang_slug=None, version_slug=None,
152 filename=''):
153 """Exists to map existing proj, lang, version, filename views to the file format."""
154 if not version_slug:
155 version_slug = project.get_default_version()
156 try:
157 version = project.versions.public(request.user).get(slug=version_slug)
158 except Version.DoesNotExist:
159 # Properly raise a 404 if the version doesn't exist & a 401 if it does
160 if project.versions.filter(slug=version_slug).exists():
161 return _serve_401(request, project)
162 raise Http404('Version does not exist.')
163 filename = resolve_path(
164 subproject or project, # Resolve the subproject if it exists
165 version_slug=version_slug,
166 language=lang_slug,
167 filename=filename,
168 subdomain=True, # subdomain will make it a "full" path without a URL prefix
169 )
170 if (version.privacy_level == constants.PRIVATE and
171 not AdminPermission.is_member(user=request.user, obj=project)):
172 return _serve_401(request, project)
173 return _serve_symlink_docs(
174 request,
175 filename=filename,
176 project=project,
177 privacy_level=version.privacy_level,
178 )
179
180
181 @map_project_slug
182 def _serve_symlink_docs(request, project, privacy_level, filename=''):
183 """Serve a file by symlink, or a 404 if not found."""
184 # Handle indexes
185 if filename == '' or filename[-1] == '/':
186 filename += 'index.html'
187
188 # This breaks path joining, by ignoring the root when given an "absolute" path
189 if filename[0] == '/':
190 filename = filename[1:]
191
192 log.info('Serving %s for %s', filename, project)
193
194 files_tried = []
195
196 serve_docs = getattr(settings, 'SERVE_DOCS', [constants.PRIVATE])
197
198 if (settings.DEBUG or constants.PUBLIC in serve_docs) and privacy_level != constants.PRIVATE: # yapf: disable # noqa
199 public_symlink = PublicSymlink(project)
200 basepath = public_symlink.project_root
201 if os.path.exists(os.path.join(basepath, filename)):
202 return _serve_file(request, filename, basepath)
203 else:
204 files_tried.append(os.path.join(basepath, filename))
205
206 if (settings.DEBUG or constants.PRIVATE in serve_docs) and privacy_level == constants.PRIVATE: # yapf: disable # noqa
207 # Handle private
208 private_symlink = PrivateSymlink(project)
209 basepath = private_symlink.project_root
210
211 if os.path.exists(os.path.join(basepath, filename)):
212 return _serve_file(request, filename, basepath)
213 else:
214 files_tried.append(os.path.join(basepath, filename))
215
216 raise Http404(
217 'File not found. Tried these files: %s' % ','.join(files_tried))
218
[end of readthedocs/core/views/serve.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/core/views/serve.py b/readthedocs/core/views/serve.py
--- a/readthedocs/core/views/serve.py
+++ b/readthedocs/core/views/serve.py
@@ -34,7 +34,8 @@
from functools import wraps
from django.conf import settings
-from django.http import Http404, HttpResponse, HttpResponseRedirect
+from django.http import HttpResponse, HttpResponseRedirect, Http404
+from django.shortcuts import get_object_or_404
from django.shortcuts import render
from django.views.static import serve
@@ -60,18 +61,17 @@
def inner_view(
request, subproject=None, subproject_slug=None, *args, **kwargs):
if subproject is None and subproject_slug:
+ # Try to fetch by subproject alias first, otherwise we might end up
+ # redirected to an unrelated project.
try:
- subproject = Project.objects.get(slug=subproject_slug)
- except Project.DoesNotExist:
- try:
- # Depends on a project passed into kwargs
- rel = ProjectRelationship.objects.get(
- parent=kwargs['project'],
- alias=subproject_slug,
- )
- subproject = rel.child
- except (ProjectRelationship.DoesNotExist, KeyError):
- raise Http404
+ # Depends on a project passed into kwargs
+ rel = ProjectRelationship.objects.get(
+ parent=kwargs['project'],
+ alias=subproject_slug,
+ )
+ subproject = rel.child
+ except (ProjectRelationship.DoesNotExist, KeyError):
+ get_object_or_404(Project, slug=subproject_slug)
return view_func(request, subproject=subproject, *args, **kwargs)
return inner_view
| {"golden_diff": "diff --git a/readthedocs/core/views/serve.py b/readthedocs/core/views/serve.py\n--- a/readthedocs/core/views/serve.py\n+++ b/readthedocs/core/views/serve.py\n@@ -34,7 +34,8 @@\n from functools import wraps\n \n from django.conf import settings\n-from django.http import Http404, HttpResponse, HttpResponseRedirect\n+from django.http import HttpResponse, HttpResponseRedirect, Http404\n+from django.shortcuts import get_object_or_404\n from django.shortcuts import render\n from django.views.static import serve\n \n@@ -60,18 +61,17 @@\n def inner_view(\n request, subproject=None, subproject_slug=None, *args, **kwargs):\n if subproject is None and subproject_slug:\n+ # Try to fetch by subproject alias first, otherwise we might end up\n+ # redirected to an unrelated project.\n try:\n- subproject = Project.objects.get(slug=subproject_slug)\n- except Project.DoesNotExist:\n- try:\n- # Depends on a project passed into kwargs\n- rel = ProjectRelationship.objects.get(\n- parent=kwargs['project'],\n- alias=subproject_slug,\n- )\n- subproject = rel.child\n- except (ProjectRelationship.DoesNotExist, KeyError):\n- raise Http404\n+ # Depends on a project passed into kwargs\n+ rel = ProjectRelationship.objects.get(\n+ parent=kwargs['project'],\n+ alias=subproject_slug,\n+ )\n+ subproject = rel.child\n+ except (ProjectRelationship.DoesNotExist, KeyError):\n+ get_object_or_404(Project, slug=subproject_slug)\n return view_func(request, subproject=subproject, *args, **kwargs)\n \n return inner_view\n", "issue": "subproject alias 302 redirect to top-level project sharing alias' name\n## Details\r\nI have set up a subproject,\r\nSubproject: cloudify-openstack-plugin-fh\r\nAlias: openstack\r\n\r\n* Project URL: http://cfy-rtd-demo.readthedocs.io/\r\n* Build URL (if applicable): https://readthedocs.org/projects/cfy-rtd-demo/builds/5162299/\r\n* Read the Docs username (if applicable): funkyhat\r\n\r\n## Expected Result\r\nWhen navigating to http://cfy-rtd-demo.readthedocs.io/projects/openstack\r\n\r\nI expect to see (presumably via a redirect to http://cfy-rtd-demo.readthedocs.io/projects/openstack/en/sphinxify-rtd-demo/) my subproject's docs (`sphinxify-rtd-demo` is the current \"active branch\" for the subproject).\r\n\r\n## Actual Result\r\nI am redirected to http://openstack.readthedocs.io/en/latest/ which is unrelated to my project:\r\n```\r\n< HTTP/1.1 302 Found\r\n* Server nginx/1.10.0 (Ubuntu) is not blacklisted\r\n< Server: nginx/1.10.0 (Ubuntu)\r\n< Date: Fri, 17 Mar 2017 13:00:04 GMT\r\n< Content-Type: text/html; charset=utf-8\r\n< Transfer-Encoding: chunked\r\n< Connection: keep-alive\r\n< Vary: Accept-Language, Cookie\r\n< Location: http://openstack.readthedocs.io/en/latest/\r\n< Content-Language: en\r\n< X-Fallback: True\r\n< X-Served: Django\r\n< X-Deity: web03\r\n```\r\n\r\nI have tried rebuilding the main project, which produced no change.\r\n\r\nThis seems potentially related to #1602 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDoc serving from Python.\n\nIn production there are two modes,\n* Serving from public symlinks in nginx (readthedocs.org & readthedocs.com)\n* Serving from private symlinks in Python (readthedocs.com only)\n\nIn development, we have two modes:\n* Serving from public symlinks in Python\n* Serving from private symlinks in Python\n\nThis means we should only serve from public symlinks in dev,\nand generally default to serving from private symlinks in Python only.\n\nPrivacy\n-------\n\nThese views will take into account the version 
privacy level.\n\nSettings\n--------\n\nPYTHON_MEDIA (False) - Set this to True to serve docs & media from Python\nSERVE_DOCS (['private']) - The list of ['private', 'public'] docs to serve.\n\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport logging\nimport mimetypes\nimport os\nfrom functools import wraps\n\nfrom django.conf import settings\nfrom django.http import Http404, HttpResponse, HttpResponseRedirect\nfrom django.shortcuts import render\nfrom django.views.static import serve\n\nfrom readthedocs.builds.models import Version\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.resolver import resolve, resolve_path\nfrom readthedocs.core.symlink import PrivateSymlink, PublicSymlink\nfrom readthedocs.projects import constants\nfrom readthedocs.projects.models import Project, ProjectRelationship\n\nlog = logging.getLogger(__name__)\n\n\ndef map_subproject_slug(view_func):\n \"\"\"\n A decorator that maps a ``subproject_slug`` URL param into a Project.\n\n :raises: Http404 if the Project doesn't exist\n\n .. warning:: Does not take into account any kind of privacy settings.\n \"\"\"\n @wraps(view_func)\n def inner_view(\n request, subproject=None, subproject_slug=None, *args, **kwargs):\n if subproject is None and subproject_slug:\n try:\n subproject = Project.objects.get(slug=subproject_slug)\n except Project.DoesNotExist:\n try:\n # Depends on a project passed into kwargs\n rel = ProjectRelationship.objects.get(\n parent=kwargs['project'],\n alias=subproject_slug,\n )\n subproject = rel.child\n except (ProjectRelationship.DoesNotExist, KeyError):\n raise Http404\n return view_func(request, subproject=subproject, *args, **kwargs)\n\n return inner_view\n\n\ndef map_project_slug(view_func):\n \"\"\"\n A decorator that maps a ``project_slug`` URL param into a Project.\n\n :raises: Http404 if the Project doesn't exist\n\n .. 
warning:: Does not take into account any kind of privacy settings.\n \"\"\"\n @wraps(view_func)\n def inner_view(request, project=None, project_slug=None, *args, **kwargs):\n if project is None:\n if not project_slug:\n project_slug = request.slug\n try:\n project = Project.objects.get(slug=project_slug)\n except Project.DoesNotExist:\n raise Http404('Project does not exist.')\n return view_func(request, project=project, *args, **kwargs)\n\n return inner_view\n\n\n@map_project_slug\n@map_subproject_slug\ndef redirect_project_slug(request, project, subproject): # pylint: disable=unused-argument\n \"\"\"Handle / -> /en/latest/ directs on subdomains.\"\"\"\n return HttpResponseRedirect(resolve(subproject or project))\n\n\n@map_project_slug\n@map_subproject_slug\ndef redirect_page_with_filename(request, project, subproject, filename): # pylint: disable=unused-argument # noqa\n \"\"\"Redirect /page/file.html to /en/latest/file.html.\"\"\"\n return HttpResponseRedirect(\n resolve(subproject or project, filename=filename))\n\n\ndef _serve_401(request, project):\n res = render(request, '401.html')\n res.status_code = 401\n log.error('Unauthorized access to {0} documentation'.format(project.slug))\n return res\n\n\ndef _serve_file(request, filename, basepath):\n # Serve the file from the proper location\n if settings.DEBUG or getattr(settings, 'PYTHON_MEDIA', False):\n # Serve from Python\n return serve(request, filename, basepath)\n else:\n # Serve from Nginx\n content_type, encoding = mimetypes.guess_type(\n os.path.join(basepath, filename))\n content_type = content_type or 'application/octet-stream'\n response = HttpResponse(content_type=content_type)\n if encoding:\n response['Content-Encoding'] = encoding\n try:\n response['X-Accel-Redirect'] = os.path.join(\n basepath[len(settings.SITE_ROOT):],\n filename,\n )\n except UnicodeEncodeError:\n raise Http404\n\n return response\n\n\n@map_project_slug\n@map_subproject_slug\ndef serve_docs(\n request, project, subproject, lang_slug=None, version_slug=None,\n filename=''):\n \"\"\"Exists to map existing proj, lang, version, filename views to the file format.\"\"\"\n if not version_slug:\n version_slug = project.get_default_version()\n try:\n version = project.versions.public(request.user).get(slug=version_slug)\n except Version.DoesNotExist:\n # Properly raise a 404 if the version doesn't exist & a 401 if it does\n if project.versions.filter(slug=version_slug).exists():\n return _serve_401(request, project)\n raise Http404('Version does not exist.')\n filename = resolve_path(\n subproject or project, # Resolve the subproject if it exists\n version_slug=version_slug,\n language=lang_slug,\n filename=filename,\n subdomain=True, # subdomain will make it a \"full\" path without a URL prefix\n )\n if (version.privacy_level == constants.PRIVATE and\n not AdminPermission.is_member(user=request.user, obj=project)):\n return _serve_401(request, project)\n return _serve_symlink_docs(\n request,\n filename=filename,\n project=project,\n privacy_level=version.privacy_level,\n )\n\n\n@map_project_slug\ndef _serve_symlink_docs(request, project, privacy_level, filename=''):\n \"\"\"Serve a file by symlink, or a 404 if not found.\"\"\"\n # Handle indexes\n if filename == '' or filename[-1] == '/':\n filename += 'index.html'\n\n # This breaks path joining, by ignoring the root when given an \"absolute\" path\n if filename[0] == '/':\n filename = filename[1:]\n\n log.info('Serving %s for %s', filename, project)\n\n files_tried = []\n\n serve_docs = getattr(settings, 
'SERVE_DOCS', [constants.PRIVATE])\n\n if (settings.DEBUG or constants.PUBLIC in serve_docs) and privacy_level != constants.PRIVATE: # yapf: disable # noqa\n public_symlink = PublicSymlink(project)\n basepath = public_symlink.project_root\n if os.path.exists(os.path.join(basepath, filename)):\n return _serve_file(request, filename, basepath)\n else:\n files_tried.append(os.path.join(basepath, filename))\n\n if (settings.DEBUG or constants.PRIVATE in serve_docs) and privacy_level == constants.PRIVATE: # yapf: disable # noqa\n # Handle private\n private_symlink = PrivateSymlink(project)\n basepath = private_symlink.project_root\n\n if os.path.exists(os.path.join(basepath, filename)):\n return _serve_file(request, filename, basepath)\n else:\n files_tried.append(os.path.join(basepath, filename))\n\n raise Http404(\n 'File not found. Tried these files: %s' % ','.join(files_tried))\n", "path": "readthedocs/core/views/serve.py"}]} | 3,169 | 380 |
gh_patches_debug_62942 | rasdani/github-patches | git_diff | great-expectations__great_expectations-3803 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
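A short Python 2 sketch of the two spellings the issue contrasts (the values of `x` and `y` are arbitrary):

```python
from __future__ import division  # must be the first statement in the module

x, y = 3, 2

# Without the future import, x / y truncates to 1 under Python 2, which is
# what pushed code toward the awkward 1. * x / y workaround.
print(1. * x / y)   # 1.5
print(x / y)        # 1.5 once true division is in effect
```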
<code>
[start of great_expectations/core/usage_statistics/anonymizers/anonymizer.py]
1 import logging
2 from hashlib import md5
3 from typing import Optional
4
5 from great_expectations.util import load_class
6
7 logger = logging.getLogger(__name__)
8
9
10 class Anonymizer:
11 """Anonymize string names in an optionally-consistent way."""
12
13 def __init__(self, salt=None):
14 if salt is not None and not isinstance(salt, str):
15 logger.error("invalid salt: must provide a string. Setting a random salt.")
16 salt = None
17 if salt is None:
18 import secrets
19
20 self._salt = secrets.token_hex(8)
21 else:
22 self._salt = salt
23
24 @property
25 def salt(self):
26 return self._salt
27
28 def anonymize(self, string_):
29 if string_ is None:
30 return None
31
32 if not isinstance(string_, str):
33 raise TypeError(
34 f"""The type of the "string_" argument must be a string (Python "str"). The type given is
35 "{str(type(string_))}", which is illegal.
36 """
37 )
38 salted = self._salt + string_
39 return md5(salted.encode("utf-8")).hexdigest()
40
41 def anonymize_object_info(
42 self,
43 anonymized_info_dict,
44 ge_classes,
45 object_=None,
46 object_class=None,
47 object_config=None,
48 runtime_environment=None,
49 ) -> dict:
50 assert (
51 object_ or object_class or object_config
52 ), "Must pass either object_ or object_class or object_config."
53
54 if runtime_environment is None:
55 runtime_environment = {}
56
57 object_class_name: Optional[str] = None
58 try:
59 if object_class is None and object_ is not None:
60 object_class = object_.__class__
61 elif object_class is None and object_config is not None:
62 object_class_name = object_config.get("class_name")
63 object_module_name = object_config.get(
64 "module_name"
65 ) or runtime_environment.get("module_name")
66 object_class = load_class(object_class_name, object_module_name)
67 object_class_name = object_class.__name__
68
69 for ge_class in ge_classes:
70 if issubclass(object_class, ge_class):
71 anonymized_info_dict["parent_class"] = ge_class.__name__
72 if not object_class == ge_class:
73 anonymized_info_dict["anonymized_class"] = self.anonymize(
74 object_class_name
75 )
76 break
77
78 if not anonymized_info_dict.get("parent_class"):
79 anonymized_info_dict["parent_class"] = "__not_recognized__"
80 anonymized_info_dict["anonymized_class"] = self.anonymize(
81 object_class_name
82 )
83 except AttributeError:
84 anonymized_info_dict["parent_class"] = "__not_recognized__"
85 anonymized_info_dict["anonymized_class"] = self.anonymize(object_class_name)
86
87 return anonymized_info_dict
88
89 @staticmethod
90 def _is_parent_class_recognized(
91 classes_to_check,
92 object_=None,
93 object_class=None,
94 object_config=None,
95 ) -> Optional[str]:
96 """
97 Check if the parent class is a subclass of any core GE class.
98 This private method is intended to be used by anonymizers in a public `is_parent_class_recognized()` method. These anonymizers define and provide the core GE classes_to_check.
99 Returns:
100 The name of the parent class found, or None if no parent class was found
101 """
102 assert (
103 object_ or object_class or object_config
104 ), "Must pass either object_ or object_class or object_config."
105 try:
106 if object_class is None and object_ is not None:
107 object_class = object_.__class__
108 elif object_class is None and object_config is not None:
109 object_class_name = object_config.get("class_name")
110 object_module_name = object_config.get("module_name")
111 object_class = load_class(object_class_name, object_module_name)
112
113 for class_to_check in classes_to_check:
114 if issubclass(object_class, class_to_check):
115 return class_to_check.__name__
116
117 return None
118
119 except AttributeError:
120 return None
121
[end of great_expectations/core/usage_statistics/anonymizers/anonymizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py
--- a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py
+++ b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py
@@ -35,6 +35,7 @@
"{str(type(string_))}", which is illegal.
"""
)
+
salted = self._salt + string_
return md5(salted.encode("utf-8")).hexdigest()
| {"golden_diff": "diff --git a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py\n--- a/great_expectations/core/usage_statistics/anonymizers/anonymizer.py\n+++ b/great_expectations/core/usage_statistics/anonymizers/anonymizer.py\n@@ -35,6 +35,7 @@\n \"{str(type(string_))}\", which is illegal.\n \"\"\"\n )\n+\n salted = self._salt + string_\n return md5(salted.encode(\"utf-8\")).hexdigest()\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import logging\nfrom hashlib import md5\nfrom typing import Optional\n\nfrom great_expectations.util import load_class\n\nlogger = logging.getLogger(__name__)\n\n\nclass Anonymizer:\n \"\"\"Anonymize string names in an optionally-consistent way.\"\"\"\n\n def __init__(self, salt=None):\n if salt is not None and not isinstance(salt, str):\n logger.error(\"invalid salt: must provide a string. Setting a random salt.\")\n salt = None\n if salt is None:\n import secrets\n\n self._salt = secrets.token_hex(8)\n else:\n self._salt = salt\n\n @property\n def salt(self):\n return self._salt\n\n def anonymize(self, string_):\n if string_ is None:\n return None\n\n if not isinstance(string_, str):\n raise TypeError(\n f\"\"\"The type of the \"string_\" argument must be a string (Python \"str\"). The type given is\n\"{str(type(string_))}\", which is illegal.\n \"\"\"\n )\n salted = self._salt + string_\n return md5(salted.encode(\"utf-8\")).hexdigest()\n\n def anonymize_object_info(\n self,\n anonymized_info_dict,\n ge_classes,\n object_=None,\n object_class=None,\n object_config=None,\n runtime_environment=None,\n ) -> dict:\n assert (\n object_ or object_class or object_config\n ), \"Must pass either object_ or object_class or object_config.\"\n\n if runtime_environment is None:\n runtime_environment = {}\n\n object_class_name: Optional[str] = None\n try:\n if object_class is None and object_ is not None:\n object_class = object_.__class__\n elif object_class is None and object_config is not None:\n object_class_name = object_config.get(\"class_name\")\n object_module_name = object_config.get(\n \"module_name\"\n ) or runtime_environment.get(\"module_name\")\n object_class = load_class(object_class_name, object_module_name)\n object_class_name = object_class.__name__\n\n for ge_class in ge_classes:\n if issubclass(object_class, ge_class):\n anonymized_info_dict[\"parent_class\"] = ge_class.__name__\n if not object_class == ge_class:\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(\n object_class_name\n )\n break\n\n if not anonymized_info_dict.get(\"parent_class\"):\n anonymized_info_dict[\"parent_class\"] = \"__not_recognized__\"\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(\n object_class_name\n )\n except AttributeError:\n anonymized_info_dict[\"parent_class\"] = \"__not_recognized__\"\n anonymized_info_dict[\"anonymized_class\"] = self.anonymize(object_class_name)\n\n return anonymized_info_dict\n\n @staticmethod\n def _is_parent_class_recognized(\n classes_to_check,\n object_=None,\n object_class=None,\n object_config=None,\n ) -> Optional[str]:\n \"\"\"\n Check if the parent class is a subclass of any core GE class.\n This private method is intended to be used by anonymizers in a public `is_parent_class_recognized()` method. 
These anonymizers define and provide the core GE classes_to_check.\n Returns:\n The name of the parent class found, or None if no parent class was found\n \"\"\"\n assert (\n object_ or object_class or object_config\n ), \"Must pass either object_ or object_class or object_config.\"\n try:\n if object_class is None and object_ is not None:\n object_class = object_.__class__\n elif object_class is None and object_config is not None:\n object_class_name = object_config.get(\"class_name\")\n object_module_name = object_config.get(\"module_name\")\n object_class = load_class(object_class_name, object_module_name)\n\n for class_to_check in classes_to_check:\n if issubclass(object_class, class_to_check):\n return class_to_check.__name__\n\n return None\n\n except AttributeError:\n return None\n", "path": "great_expectations/core/usage_statistics/anonymizers/anonymizer.py"}]} | 1,723 | 124 |
gh_patches_debug_32257 | rasdani/github-patches | git_diff | aio-libs__aiohttp-3694 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sdist build crashes under pip>=19 in dev mode
## Long story short
We have `cython` as an optional dependency. That's why we install it as a prerequisite in CI, as a separate step.
New pip creates a separate build virtualenv which doesn't have access to the pre-installed Cython, and that causes the build to crash.
## Expected behaviour
It succeeds
## Actual behaviour
It tracebacks
## Steps to reproduce
* https://travis-ci.com/aio-libs/aiohttp/jobs/172249543#L198-L219
* https://ci.appveyor.com/project/aio-libs/aiohttp/build/job/uppd0qqw2sbisqtn#L46
## Your environment
Travis CI (doesn't really matter actually)
</issue>
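For context, `pip >= 19` runs the build inside an isolated environment, so a Cython installed beforehand in the outer environment is not visible to `setup.py`. The fix applied further below avoids the problem by building only from pre-generated `.c` sources; the snippet here is a rough sketch of that pattern, not the project's exact code:

```python
# Sketch: declare extensions against pre-cythonized C sources so that the
# isolated build environment never needs Cython at build time.
# The cythonization itself is assumed to happen earlier (e.g. a Makefile step).
from setuptools import Extension

extensions = [
    Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),
    Extension('aiohttp._frozenlist', ['aiohttp/_frozenlist.c']),
    # ...remaining modules follow the same pattern
]
```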
<code>
[start of setup.py]
1 import codecs
2 import pathlib
3 import re
4 import sys
5 from distutils.command.build_ext import build_ext
6 from distutils.errors import (CCompilerError, DistutilsExecError,
7 DistutilsPlatformError)
8
9 from setuptools import Extension, setup
10
11
12 if sys.version_info < (3, 5, 3):
13 raise RuntimeError("aiohttp 3.x requires Python 3.5.3+")
14
15 here = pathlib.Path(__file__).parent
16
17 try:
18 from Cython.Build import cythonize
19 USE_CYTHON = True
20 except ImportError:
21 USE_CYTHON = False
22
23 if (here / '.git').exists() and not USE_CYTHON:
24 print("Install cython when building from git clone", file=sys.stderr)
25 print("Hint:", file=sys.stderr)
26 print(" pip install cython", file=sys.stderr)
27 sys.exit(1)
28
29
30 if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):
31 print("Install submodules when building from git clone", file=sys.stderr)
32 print("Hint:", file=sys.stderr)
33 print(" git submodule update --init", file=sys.stderr)
34 sys.exit(2)
35
36
37 ext = '.pyx' if USE_CYTHON else '.c'
38
39
40 extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]),
41 Extension('aiohttp._http_parser',
42 ['aiohttp/_http_parser' + ext,
43 'vendor/http-parser/http_parser.c',
44 'aiohttp/_find_header.c'],
45 define_macros=[('HTTP_PARSER_STRICT', 0)],
46 ),
47 Extension('aiohttp._frozenlist',
48 ['aiohttp/_frozenlist' + ext]),
49 Extension('aiohttp._helpers',
50 ['aiohttp/_helpers' + ext]),
51 Extension('aiohttp._http_writer',
52 ['aiohttp/_http_writer' + ext])]
53
54
55 if USE_CYTHON:
56 extensions = cythonize(extensions)
57
58
59 class BuildFailed(Exception):
60 pass
61
62
63 class ve_build_ext(build_ext):
64 # This class allows C extension building to fail.
65
66 def run(self):
67 try:
68 build_ext.run(self)
69 except (DistutilsPlatformError, FileNotFoundError):
70 raise BuildFailed()
71
72 def build_extension(self, ext):
73 try:
74 build_ext.build_extension(self, ext)
75 except (CCompilerError, DistutilsExecError,
76 DistutilsPlatformError, ValueError):
77 raise BuildFailed()
78
79
80
81 txt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')
82 try:
83 version = re.findall(r"^__version__ = '([^']+)'\r?$",
84 txt, re.M)[0]
85 except IndexError:
86 raise RuntimeError('Unable to determine version.')
87
88 install_requires = [
89 'attrs>=17.3.0',
90 'chardet>=2.0,<4.0',
91 'multidict>=4.0,<5.0',
92 'async_timeout>=3.0,<4.0',
93 'yarl>=1.0,<2.0',
94 'idna-ssl>=1.0; python_version<"3.7"',
95 'typing_extensions>=3.6.5; python_version<"3.7"',
96 ]
97
98
99 def read(f):
100 return (here / f).read_text('utf-8').strip()
101
102
103 NEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)
104 pytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []
105
106 tests_require = [
107 'pytest', 'gunicorn',
108 'pytest-timeout', 'async-generator',
109 'pytest-xdist',
110 ]
111
112
113 args = dict(
114 name='aiohttp',
115 version=version,
116 description='Async http client/server framework (asyncio)',
117 long_description='\n\n'.join((read('README.rst'), read('CHANGES.rst'))),
118 classifiers=[
119 'License :: OSI Approved :: Apache Software License',
120 'Intended Audience :: Developers',
121 'Programming Language :: Python',
122 'Programming Language :: Python :: 3',
123 'Programming Language :: Python :: 3.5',
124 'Programming Language :: Python :: 3.6',
125 'Programming Language :: Python :: 3.7',
126 'Development Status :: 5 - Production/Stable',
127 'Operating System :: POSIX',
128 'Operating System :: MacOS :: MacOS X',
129 'Operating System :: Microsoft :: Windows',
130 'Topic :: Internet :: WWW/HTTP',
131 'Framework :: AsyncIO',
132 ],
133 author='Nikolay Kim',
134 author_email='[email protected]',
135 maintainer=', '.join(('Nikolay Kim <[email protected]>',
136 'Andrew Svetlov <[email protected]>')),
137 maintainer_email='[email protected]',
138 url='https://github.com/aio-libs/aiohttp',
139 project_urls={
140 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',
141 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',
142 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',
143 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',
144 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',
145 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',
146 'Docs: RTD': 'https://docs.aiohttp.org',
147 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',
148 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',
149 },
150 license='Apache 2',
151 packages=['aiohttp'],
152 python_requires='>=3.5.3',
153 install_requires=install_requires,
154 extras_require={
155 'speedups': [
156 'aiodns',
157 'brotlipy',
158 'cchardet',
159 ],
160 },
161 tests_require=tests_require,
162 setup_requires=pytest_runner,
163 include_package_data=True,
164 ext_modules=extensions,
165 cmdclass=dict(build_ext=ve_build_ext),
166 )
167
168 try:
169 setup(**args)
170 except BuildFailed:
171 print("************************************************************")
172 print("Cannot compile C accelerator module, use pure python version")
173 print("************************************************************")
174 del args['ext_modules']
175 del args['cmdclass']
176 setup(**args)
177
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,18 +14,6 @@
here = pathlib.Path(__file__).parent
-try:
- from Cython.Build import cythonize
- USE_CYTHON = True
-except ImportError:
- USE_CYTHON = False
-
-if (here / '.git').exists() and not USE_CYTHON:
- print("Install cython when building from git clone", file=sys.stderr)
- print("Hint:", file=sys.stderr)
- print(" pip install cython", file=sys.stderr)
- sys.exit(1)
-
if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):
print("Install submodules when building from git clone", file=sys.stderr)
@@ -34,26 +22,21 @@
sys.exit(2)
-ext = '.pyx' if USE_CYTHON else '.c'
+# NOTE: makefile cythonizes all Cython modules
-
-extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]),
+extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),
Extension('aiohttp._http_parser',
- ['aiohttp/_http_parser' + ext,
+ ['aiohttp/_http_parser.c',
'vendor/http-parser/http_parser.c',
'aiohttp/_find_header.c'],
define_macros=[('HTTP_PARSER_STRICT', 0)],
),
Extension('aiohttp._frozenlist',
- ['aiohttp/_frozenlist' + ext]),
+ ['aiohttp/_frozenlist.c']),
Extension('aiohttp._helpers',
- ['aiohttp/_helpers' + ext]),
+ ['aiohttp/_helpers.c']),
Extension('aiohttp._http_writer',
- ['aiohttp/_http_writer' + ext])]
-
-
-if USE_CYTHON:
- extensions = cythonize(extensions)
+ ['aiohttp/_http_writer.c'])]
class BuildFailed(Exception):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,18 +14,6 @@\n \n here = pathlib.Path(__file__).parent\n \n-try:\n- from Cython.Build import cythonize\n- USE_CYTHON = True\n-except ImportError:\n- USE_CYTHON = False\n-\n-if (here / '.git').exists() and not USE_CYTHON:\n- print(\"Install cython when building from git clone\", file=sys.stderr)\n- print(\"Hint:\", file=sys.stderr)\n- print(\" pip install cython\", file=sys.stderr)\n- sys.exit(1)\n-\n \n if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n@@ -34,26 +22,21 @@\n sys.exit(2)\n \n \n-ext = '.pyx' if USE_CYTHON else '.c'\n+# NOTE: makefile cythonizes all Cython modules\n \n-\n-extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]),\n+extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n- ['aiohttp/_http_parser' + ext,\n+ ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n- ['aiohttp/_frozenlist' + ext]),\n+ ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n- ['aiohttp/_helpers' + ext]),\n+ ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n- ['aiohttp/_http_writer' + ext])]\n-\n-\n-if USE_CYTHON:\n- extensions = cythonize(extensions)\n+ ['aiohttp/_http_writer.c'])]\n \n \n class BuildFailed(Exception):\n", "issue": "sdist build gets crashed under pip>=19 in dev mode\n## Long story short\r\n\r\nWe have `cython` as an optional dependency. That's why we install it as a pre-requisite in the CI, as a separate step.\r\nNew pip creates a separate build virtualenv which doesn't have access to the place with cython installed which causes it to crash.\r\n\r\n## Expected behaviour\r\n\r\nIt succeeds\r\n\r\n## Actual behaviour\r\n\r\nIt tracebacks\r\n\r\n## Steps to reproduce\r\n\r\n* https://travis-ci.com/aio-libs/aiohttp/jobs/172249543#L198-L219\r\n* https://ci.appveyor.com/project/aio-libs/aiohttp/build/job/uppd0qqw2sbisqtn#L46\r\n\r\n## Your environment\r\n\r\nTravis CI (doesn't really matter actually)\n", "before_files": [{"content": "import codecs\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\nhere = pathlib.Path(__file__).parent\n\ntry:\n from Cython.Build import cythonize\n USE_CYTHON = True\nexcept ImportError:\n USE_CYTHON = False\n\nif (here / '.git').exists() and not USE_CYTHON:\n print(\"Install cython when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" pip install cython\", file=sys.stderr)\n sys.exit(1)\n\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\next = '.pyx' if USE_CYTHON else '.c'\n\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket' + ext]),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser' + ext,\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n 
define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist' + ext]),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers' + ext]),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer' + ext])]\n\n\nif USE_CYTHON:\n extensions = cythonize(extensions)\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.0,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5; python_version<\"3.7\"',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nNEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)\npytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []\n\ntests_require = [\n 'pytest', 'gunicorn',\n 'pytest-timeout', 'async-generator',\n 'pytest-xdist',\n]\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'brotlipy',\n 'cchardet',\n ],\n },\n tests_require=tests_require,\n setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n)\n\ntry:\n setup(**args)\nexcept BuildFailed:\n print(\"************************************************************\")\n 
print(\"Cannot compile C accelerator module, use pure python version\")\n print(\"************************************************************\")\n del args['ext_modules']\n del args['cmdclass']\n setup(**args)\n", "path": "setup.py"}]} | 2,556 | 465 |
gh_patches_debug_23074 | rasdani/github-patches | git_diff | speechbrain__speechbrain-187 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Train Logger use average as the default summary function
Right now users have to specify a summary function for each statistic; however, average is the function to use in the vast majority of cases (the exception is error rates). Why not make it the default?
</issue>
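A minimal sketch of the requested behaviour: fall back to a plain average whenever no summary function has been registered for a statistic (function and argument names here are illustrative, not the final API):

```python
def summarize_average(stat_list):
    return float(sum(stat_list) / len(stat_list))

def summarize(summary_fns, stat, value_list):
    # Use the registered summary function if one exists, otherwise average.
    fn = summary_fns.get(stat, summarize_average)
    return fn(value_list)
```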
<code>
[start of speechbrain/utils/train_logger.py]
1 """
2 Loggers for experiment monitoring
3
4 Authors
5 * Peter Plantinga 2020
6 """
7 import logging
8 from speechbrain.utils.edit_distance import wer_summary
9
10 logger = logging.getLogger(__name__)
11
12
13 class TrainLogger:
14 """Abstract class defining an interface for training loggers."""
15
16 def log_stats(
17 self,
18 stats_meta,
19 train_stats=None,
20 valid_stats=None,
21 test_stats=None,
22 verbose=False,
23 ):
24 """Log the stats for one epoch.
25
26 Arguments
27 ---------
28 stats_meta : dict of str:scalar pairs
29 Meta information about the stats (e.g. epoch, learning-rate, etc.)
30 train_stats : dict of str:list pairs
31 Each loss type is represented with a str : list pair including
32 all the values for the training pass.
33 valid_stats : dict of str:list pairs
34 Each loss type is represented with a str : list pair including
35 all the values for the validation pass.
36 test_stats : dict of str:list pairs
37 Each loss type is represented with a str : list pair including
38 all the values for the test pass.
39 verbose : bool
40 Whether to also put logging information to the standard logger.
41 """
42 raise NotImplementedError
43
44
45 class FileTrainLogger(TrainLogger):
46 """Text logger of training information
47
48 Arguments
49 ---------
50 save_file : str
51 The file to use for logging train information.
52 summary_fns : dict of str:function pairs
53 Each summary function should take a list produced as output
54 from a training/validation pass and summarize it to a single scalar.
55 """
56
57 def __init__(self, save_file, summary_fns):
58 self.save_file = save_file
59 self.summary_fns = summary_fns
60
61 def _item_to_string(self, key, value, dataset=None):
62 """Convert one item to string, handling floats"""
63 if isinstance(value, float) and 0.01 < value < 100.0:
64 value = f"{value:.2f}"
65 elif isinstance(value, float):
66 value = f"{value:.2e}"
67 if dataset is not None:
68 key = f"{dataset} {key}"
69 return f"{key}: {value}"
70
71 def _stats_to_string(self, stats, dataset=None):
72 """Convert all stats to a single string summary"""
73 return ", ".join(
74 [self._item_to_string(k, v, dataset) for k, v in stats.items()]
75 )
76
77 def log_stats(
78 self,
79 stats_meta,
80 train_stats=None,
81 valid_stats=None,
82 test_stats=None,
83 verbose=True,
84 ):
85 """See TrainLogger.log_stats()"""
86 string_summary = self._stats_to_string(stats_meta)
87 for dataset, stats in [
88 ("train", train_stats),
89 ("valid", valid_stats),
90 ("test", test_stats),
91 ]:
92 if stats is None:
93 continue
94 summary = {}
95 for stat, value_list in stats.items():
96 summary[stat] = self.summary_fns[stat](value_list)
97 string_summary += " - " + self._stats_to_string(summary, dataset)
98
99 with open(self.save_file, "a") as fout:
100 print(string_summary, file=fout)
101 if verbose:
102 logger.info(string_summary)
103
104
105 class TensorboardLogger(TrainLogger):
106 """Logs training information in the format required by Tensorboard.
107
108 Arguments
109 ---------
110 save_dir : str
111 A directory for storing all the relevant logs
112
113 Raises
114 ------
115 ImportError if Tensorboard is not installed.
116 """
117
118 def __init__(self, save_dir):
119 self.save_dir = save_dir
120
121 # Raises ImportError if TensorBoard is not installed
122 from torch.utils.tensorboard import SummaryWriter
123
124 self.writer = SummaryWriter(self.save_dir)
125 self.global_step = {"train": {}, "valid": {}, "meta": 0}
126
127 def log_stats(
128 self,
129 stats_meta,
130 train_stats=None,
131 valid_stats=None,
132 test_stats=None,
133 verbose=False,
134 ):
135 """See TrainLogger.log_stats()"""
136 self.global_step["meta"] += 1
137 for name, value in stats_meta.items():
138 self.writer.add_scalar(name, value, self.global_step["meta"])
139
140 for dataset, stats in [
141 ("train", train_stats),
142 ("valid", valid_stats),
143 ("test", test_stats),
144 ]:
145 if stats is None:
146 continue
147 for stat, value_list in stats.items():
148 if stat not in self.global_step[dataset]:
149 self.global_step[dataset][stat] = 0
150 tag = f"{stat}/{dataset}"
151 for value in value_list:
152 new_global_step = self.global_step[dataset][stat] + 1
153 self.writer.add_scalar(tag, value, new_global_step)
154 self.global_step[dataset][stat] = new_global_step
155
156
157 def summarize_average(stat_list):
158 return float(sum(stat_list) / len(stat_list))
159
160
161 def summarize_error_rate(stat_list):
162 summary = wer_summary(stat_list)
163 return summary["WER"]
164
[end of speechbrain/utils/train_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py
--- a/speechbrain/utils/train_logger.py
+++ b/speechbrain/utils/train_logger.py
@@ -54,9 +54,9 @@
from a training/validation pass and summarize it to a single scalar.
"""
- def __init__(self, save_file, summary_fns):
+ def __init__(self, save_file, summary_fns=None):
self.save_file = save_file
- self.summary_fns = summary_fns
+ self.summary_fns = summary_fns or {}
def _item_to_string(self, key, value, dataset=None):
"""Convert one item to string, handling floats"""
@@ -93,7 +93,10 @@
continue
summary = {}
for stat, value_list in stats.items():
- summary[stat] = self.summary_fns[stat](value_list)
+ if stat in self.summary_fns:
+ summary[stat] = self.summary_fns[stat](value_list)
+ else:
+ summary[stat] = summarize_average(value_list)
string_summary += " - " + self._stats_to_string(summary, dataset)
with open(self.save_file, "a") as fout:
| {"golden_diff": "diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py\n--- a/speechbrain/utils/train_logger.py\n+++ b/speechbrain/utils/train_logger.py\n@@ -54,9 +54,9 @@\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n \n- def __init__(self, save_file, summary_fns):\n+ def __init__(self, save_file, summary_fns=None):\n self.save_file = save_file\n- self.summary_fns = summary_fns\n+ self.summary_fns = summary_fns or {}\n \n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n@@ -93,7 +93,10 @@\n continue\n summary = {}\n for stat, value_list in stats.items():\n- summary[stat] = self.summary_fns[stat](value_list)\n+ if stat in self.summary_fns:\n+ summary[stat] = self.summary_fns[stat](value_list)\n+ else:\n+ summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n \n with open(self.save_file, \"a\") as fout:\n", "issue": "Train Logger use average as the default summary function\nRight now users have to specify a summary function for each statistic, however average is the function to use in the vast majority of cases (the exception is error rates). Why not make it default?\n", "before_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. 
epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns):\n self.save_file = save_file\n self.summary_fns = summary_fns\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n elif isinstance(value, float):\n value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n summary[stat] = self.summary_fns[stat](value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, 
new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}]} | 2,069 | 279 |
gh_patches_debug_35438 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-1881 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider public_storage is broken
During the global build at 2021-05-26-14-42-23, spider **public_storage** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/public_storage.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson))
</issue>
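The patch further down rebuilds the spider around the sitemap and the JSON-LD block embedded in each store page; the sketch below only illustrates the JSON-LD part, and the selector and key names are assumptions rather than verified site markup:

```python
# Rough sketch: read a store page's JSON-LD block and pick out a few fields.
import json

def parse_store(response):
    raw = response.xpath('//script[@type="application/ld+json"]/text()').extract_first()
    data = json.loads(raw)
    return {
        "ref": data.get("@id"),
        "lat": data.get("geo", {}).get("latitude"),
        "lon": data.get("geo", {}).get("longitude"),
    }
```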
<code>
[start of locations/spiders/public_storage.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4
5 from locations.items import GeojsonPointItem
6
7
8 class PublicStorageSpider(scrapy.Spider):
9 name = "public_storage"
10 item_attributes = { 'brand': "Public Storage" }
11 allowed_domains = ["www.publicstorage.com"]
12 start_urls = (
13 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0',
14 )
15
16 def parse(self, response):
17 data = json.loads(response.body_as_unicode())
18
19 for store in data['response']['properties']['property']:
20 lat, lon = map(float, store['lat_long'].split(', '))
21 properties = {
22 "ref": store.get('property_id'),
23 "opening_hours": '; '.join(response.xpath('//time[@itemprop="openingHours"]/@datetime').extract()),
24 "addr_full": store.get('address'),
25 "city": store.get('city'),
26 "state": store.get('state'),
27 "postcode": store.get('zip'),
28 "lat": lat,
29 "lon": lon,
30 }
31
32 yield GeojsonPointItem(**properties)
33
[end of locations/spiders/public_storage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/public_storage.py b/locations/spiders/public_storage.py
--- a/locations/spiders/public_storage.py
+++ b/locations/spiders/public_storage.py
@@ -3,6 +3,7 @@
import json
from locations.items import GeojsonPointItem
+from locations.hours import OpeningHours
class PublicStorageSpider(scrapy.Spider):
@@ -10,23 +11,45 @@
item_attributes = { 'brand': "Public Storage" }
allowed_domains = ["www.publicstorage.com"]
start_urls = (
- 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0',
+ 'https://www.publicstorage.com/sitemap_plp.xml',
)
def parse(self, response):
- data = json.loads(response.body_as_unicode())
-
- for store in data['response']['properties']['property']:
- lat, lon = map(float, store['lat_long'].split(', '))
- properties = {
- "ref": store.get('property_id'),
- "opening_hours": '; '.join(response.xpath('//time[@itemprop="openingHours"]/@datetime').extract()),
- "addr_full": store.get('address'),
- "city": store.get('city'),
- "state": store.get('state'),
- "postcode": store.get('zip'),
- "lat": lat,
- "lon": lon,
- }
-
- yield GeojsonPointItem(**properties)
+ response.selector.remove_namespaces()
+ city_urls = response.xpath('//url/loc/text()').extract()
+ for path in city_urls:
+ yield scrapy.Request(
+ path.strip(),
+ callback=self.parse_store,
+ )
+
+ def parse_hours(self, hours):
+ opening_hours = OpeningHours()
+
+ for hour in hours:
+ for day in hour['dayOfWeek']:
+ opening_hours.add_range(
+ day=day[:2],
+ open_time=hour["opens"],
+ close_time=hour["closes"],
+ )
+
+ return opening_hours.as_opening_hours()
+
+ def parse_store(self, response):
+ data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
+ data = data['@graph'][0]
+
+ properties = {
+ "ref": data['@id'],
+ "opening_hours": self.parse_hours(data['openingHoursSpecification']),
+ "addr_full": data['address']['streetAddress'],
+ "city": data['address']['addressLocality'],
+ "state": data['address']['addressRegion'],
+ "postcode": data['address']['postalCode'],
+ "phone": data['telephone'],
+ "lat": data['geo']['latitude'],
+ "lon": data['geo']['longitude'],
+ }
+
+ yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/public_storage.py b/locations/spiders/public_storage.py\n--- a/locations/spiders/public_storage.py\n+++ b/locations/spiders/public_storage.py\n@@ -3,6 +3,7 @@\n import json\n \n from locations.items import GeojsonPointItem\n+from locations.hours import OpeningHours\n \n \n class PublicStorageSpider(scrapy.Spider):\n@@ -10,23 +11,45 @@\n item_attributes = { 'brand': \"Public Storage\" }\n allowed_domains = [\"www.publicstorage.com\"]\n start_urls = (\n- 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0',\n+ 'https://www.publicstorage.com/sitemap_plp.xml',\n )\n \n def parse(self, response):\n- data = json.loads(response.body_as_unicode())\n-\n- for store in data['response']['properties']['property']:\n- lat, lon = map(float, store['lat_long'].split(', '))\n- properties = {\n- \"ref\": store.get('property_id'),\n- \"opening_hours\": '; '.join(response.xpath('//time[@itemprop=\"openingHours\"]/@datetime').extract()),\n- \"addr_full\": store.get('address'),\n- \"city\": store.get('city'),\n- \"state\": store.get('state'),\n- \"postcode\": store.get('zip'),\n- \"lat\": lat,\n- \"lon\": lon,\n- }\n-\n- yield GeojsonPointItem(**properties)\n+ response.selector.remove_namespaces()\n+ city_urls = response.xpath('//url/loc/text()').extract()\n+ for path in city_urls:\n+ yield scrapy.Request(\n+ path.strip(),\n+ callback=self.parse_store,\n+ )\n+\n+ def parse_hours(self, hours):\n+ opening_hours = OpeningHours()\n+\n+ for hour in hours:\n+ for day in hour['dayOfWeek']:\n+ opening_hours.add_range(\n+ day=day[:2],\n+ open_time=hour[\"opens\"],\n+ close_time=hour[\"closes\"],\n+ )\n+\n+ return opening_hours.as_opening_hours()\n+\n+ def parse_store(self, response):\n+ data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n+ data = data['@graph'][0]\n+\n+ properties = {\n+ \"ref\": data['@id'],\n+ \"opening_hours\": self.parse_hours(data['openingHoursSpecification']),\n+ \"addr_full\": data['address']['streetAddress'],\n+ \"city\": data['address']['addressLocality'],\n+ \"state\": data['address']['addressRegion'],\n+ \"postcode\": data['address']['postalCode'],\n+ \"phone\": data['telephone'],\n+ \"lat\": data['geo']['latitude'],\n+ \"lon\": data['geo']['longitude'],\n+ }\n+\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider public_storage is broken\nDuring the global build at 2021-05-26-14-42-23, spider **public_storage** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/public_storage.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/public_storage.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PublicStorageSpider(scrapy.Spider):\n name = \"public_storage\"\n item_attributes = { 'brand': \"Public Storage\" }\n allowed_domains = [\"www.publicstorage.com\"]\n start_urls = (\n 'https://www.publicstorage.com/handlers/searchcoordinates.ashx?north=90.0&east=180.0&south=-90.0&west=-180.0',\n )\n\n def parse(self, response):\n data = json.loads(response.body_as_unicode())\n\n for store in data['response']['properties']['property']:\n lat, lon = map(float, store['lat_long'].split(', '))\n 
properties = {\n \"ref\": store.get('property_id'),\n \"opening_hours\": '; '.join(response.xpath('//time[@itemprop=\"openingHours\"]/@datetime').extract()),\n \"addr_full\": store.get('address'),\n \"city\": store.get('city'),\n \"state\": store.get('state'),\n \"postcode\": store.get('zip'),\n \"lat\": lat,\n \"lon\": lon,\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/public_storage.py"}]} | 1,039 | 656 |
gh_patches_debug_26560 | rasdani/github-patches | git_diff | fidals__shopelectro-1005 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Products rss for Google Merchant
Google Merchant has a semi-hidden and rather odd subservice that looks like Google AdWords for search. It couldn't be fed from the existing gm.yml file and requires RSS instead. It has no open documentation or validator; we have just the one from the SEO guys.
[Trello task](https://trello.com/c/39zr3xox/21-9-14k-%D0%B4%D0%B5%D0%BB%D0%B0%D0%B9-%D1%84%D0%B8%D0%B4-%D0%BF%D0%BE-%D0%BC%D0%B5%D1%80%D1%87%D0%B0%D0%BD%D1%82-%D1%86%D0%B5%D0%BD%D1%82%D1%80) contains details
</issue>
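Per the patch further down, the change boils down to rendering one extra feed from an RSS template next to the existing YML prices; a minimal sketch of that idea (the output path is an assumption about project layout):

```python
# Sketch: write a feed by rendering a Django template into a file.
from django.template.loader import render_to_string

def write_feed(path, template_path, context):
    with open(path, 'w', encoding='utf-8') as file:
        file.write(render_to_string(template_path, context).strip())

# e.g. write_feed('assets/gm.rss', 'prices/price.rss', Context('GM').context())
```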
<code>
[start of shopelectro/management/commands/price.py]
1 """
2 Django command to generate yml price files for market-places.
3
4 `utm` or `target` defines particular market-place.
5 See `settings.UTM_PRICE_MAP` to explore current list of supported market-places.
6 """
7
8 import logging
9 import os
10 import typing
11 from collections import defaultdict
12
13 from django.conf import settings
14 from django.core.management.base import BaseCommand
15 from django.db.models import QuerySet
16 from django.template.loader import render_to_string
17
18 from catalog import context
19 from shopelectro import models
20
21 logger = logging.getLogger(__name__)
22
23
24 # --- files processing ---
25 class File:
26 def __init__(self, path: str, context: dict):
27 self.path = path
28 self.context = context
29
30 def create(self):
31 with open(self.path, 'w', encoding='utf-8') as file:
32 file.write(render_to_string('prices/price.yml', self.context).strip())
33 logger.info(f'{self.path} generated.')
34
35
36 class Files:
37 def __init__(self, files: typing.List[File]):
38 self.files = files
39
40 def create(self):
41 for file in self.files:
42 file.create()
43
44
45 class Context(context.Context):
46 """DB data, extracted for price file."""
47
48 def __init__(self, target: str):
49 self.target = target
50
51 def context(self) -> dict:
52 categories = CategoriesFilter(self.target).qs()
53 products = ProductsPatch(
54 self.target,
55 products=ProductsFilter(self.target, categories).qs()
56 ).products()
57
58 return {
59 'base_url': settings.BASE_URL,
60 'categories': categories,
61 'products': products,
62 'shop': settings.SHOP,
63 'utm': self.target,
64 }
65
66
67 class CategoriesFilter:
68 """Categories list for particular market place."""
69
70 @property
71 def ignored(self) -> typing.List[str]:
72 return (
73 settings.PRICE_IGNORED_CATEGORIES_MAP['default']
74 + settings.PRICE_IGNORED_CATEGORIES_MAP[self.target]
75 )
76
77 def __init__(self, target: str):
78 assert target in settings.UTM_PRICE_MAP
79 self.target = target
80
81 def qs(self) -> models.SECategoryQuerySet:
82 if self.target == 'SE78':
83 return models.Category.objects.all()
84
85 result_categories = (
86 models.Category.objects
87 .exclude(
88 id__in=(
89 models.Category.objects
90 .filter(name__in=self.ignored)
91 .get_descendants(include_self=True)
92 )
93 )
94 )
95
96 if self.target == 'YM':
97 """
98 Yandex Market feed requires items in some categories to have pictures.
99 To simplify filtering we are excluding all categories
100 which don't contain at least one product with picture.
101 """
102 # @todo #715:30m Try to rm ancestors filter in YM price filter.
103 # Exclude only categories with no pictures, without their ancestors.
104 result_categories = result_categories.get_categories_tree_with_pictures()
105
106 return result_categories
107
108
109 class ProductsFilter:
110 """Filter offers with individual price requirements."""
111
112 @property
113 def ignored(self) -> typing.List[str]:
114 return settings.PRICE_IGNORED_PRODUCTS_MAP[self.target]
115
116 FILTERS = defaultdict(
117 lambda: (lambda qs: qs),
118 # Yandex Market feed requires picture for every offer
119 YM=lambda qs: (
120 qs
121 .filter(page__images__isnull=False)
122 .distinct()
123 ),
124 # Google Merchant feed should not contain offers cheaper then CONST
125 GM=lambda qs: (
126 qs
127 .filter(price__gt=settings.PRICE_GM_LOWER_BOUND)
128 )
129 )
130
131 def __init__(self, target: str, categories: models.SECategoryQuerySet):
132 assert target in settings.UTM_PRICE_MAP
133 self.target = target
134 self.categories = categories
135
136 def qs(self) -> QuerySet:
137 return self.FILTERS[self.target](
138 models.Product.objects.active()
139 .filter(category__in=self.categories, price__gt=0)
140 .exclude(vendor_code__in=self.ignored)
141 )
142
143
144 class ProductsPatch:
145
146 UTM_MEDIUM_DATA = defaultdict(
147 lambda: 'cpc',
148 {'YM': 'cpc-market'}
149 )
150
151 def __init__(self, target: str, products: QuerySet):
152 assert target in settings.UTM_PRICE_MAP
153 self.target = target
154 self._products = products
155
156 def put_params(self, product):
157 product.prepared_params = [
158 (group, tags[0].name)
159 for (group, tags) in filter(
160 lambda x: x[0].name != 'Производитель',
161 product.get_params().items()
162 ) if tags
163 ]
164 return product
165
166 def put_utm(self, product):
167 """Put UTM attribute to product."""
168 utm_marks = [
169 ('utm_source', self.target),
170 ('utm_medium', self.UTM_MEDIUM_DATA[self.target]),
171 ('utm_content', product.get_root_category().page.slug),
172 ('utm_term', str(product.vendor_code)),
173 ]
174
175 utm_mark_query = '&'.join(f'{k}={v}' for k, v in utm_marks)
176 product.utm_url = f'{settings.BASE_URL}{product.url}?{utm_mark_query}'
177
178 return product
179
180 def put_crumbs(self, product): # Ignore PyDocStyleBear
181 """Crumbs for google merchant. https://goo.gl/b0UJQp"""
182 product.crumbs = ' > '.join(
183 product.page.get_ancestors_fields('h1', include_self=False)[1:]
184 )
185 return product
186
187 def put_brand(self, product, brands):
188 product.brand = brands.get(product)
189 return product
190
191 def products(self) -> typing.List[models.Product]:
192 """Path every product with additional fields."""
193 brands = models.Tag.objects.get_brands(self._products)
194 return [
195 self.put_brand(
196 product=self.put_params(self.put_crumbs(self.put_utm(product))),
197 brands=brands
198 )
199 for product in self._products
200 ]
201
202
203 # --- command block ---
204 class Command(BaseCommand):
205 """Generate yml file for a given vendor (YM or price.ru)."""
206
207 # price files will be stored at this dir
208 BASE_DIR = settings.ASSETS_DIR
209
210 def handle(self, *args, **options):
211 Files(
212 [File(
213 path=os.path.join(self.BASE_DIR, filename),
214 context=Context(target).context()
215 ) for target, filename in settings.UTM_PRICE_MAP.items()]
216 ).create()
217
[end of shopelectro/management/commands/price.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py
--- a/shopelectro/management/commands/price.py
+++ b/shopelectro/management/commands/price.py
@@ -23,13 +23,14 @@
# --- files processing ---
class File:
- def __init__(self, path: str, context: dict):
+ def __init__(self, path: str, context: dict, template_path: str):
self.path = path
self.context = context
+ self.template_path = template_path
def create(self):
with open(self.path, 'w', encoding='utf-8') as file:
- file.write(render_to_string('prices/price.yml', self.context).strip())
+ file.write(render_to_string(self.template_path, self.context).strip())
logger.info(f'{self.path} generated.')
@@ -208,9 +209,15 @@
BASE_DIR = settings.ASSETS_DIR
def handle(self, *args, **options):
- Files(
- [File(
+ Files([
+ *[File(
path=os.path.join(self.BASE_DIR, filename),
- context=Context(target).context()
- ) for target, filename in settings.UTM_PRICE_MAP.items()]
- ).create()
+ context=Context(target).context(),
+ template_path='prices/price.yml',
+ ) for target, filename in settings.UTM_PRICE_MAP.items()],
+ File(
+ path=os.path.join(self.BASE_DIR, 'gm.rss'),
+ context=Context('GM').context(),
+ template_path='prices/price.rss',
+ )
+ ]).create()
| {"golden_diff": "diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py\n--- a/shopelectro/management/commands/price.py\n+++ b/shopelectro/management/commands/price.py\n@@ -23,13 +23,14 @@\n \n # --- files processing ---\n class File:\n- def __init__(self, path: str, context: dict):\n+ def __init__(self, path: str, context: dict, template_path: str):\n self.path = path\n self.context = context\n+ self.template_path = template_path\n \n def create(self):\n with open(self.path, 'w', encoding='utf-8') as file:\n- file.write(render_to_string('prices/price.yml', self.context).strip())\n+ file.write(render_to_string(self.template_path, self.context).strip())\n logger.info(f'{self.path} generated.')\n \n \n@@ -208,9 +209,15 @@\n BASE_DIR = settings.ASSETS_DIR\n \n def handle(self, *args, **options):\n- Files(\n- [File(\n+ Files([\n+ *[File(\n path=os.path.join(self.BASE_DIR, filename),\n- context=Context(target).context()\n- ) for target, filename in settings.UTM_PRICE_MAP.items()]\n- ).create()\n+ context=Context(target).context(),\n+ template_path='prices/price.yml',\n+ ) for target, filename in settings.UTM_PRICE_MAP.items()],\n+ File(\n+ path=os.path.join(self.BASE_DIR, 'gm.rss'),\n+ context=Context('GM').context(),\n+ template_path='prices/price.rss',\n+ )\n+ ]).create()\n", "issue": "Products rss for Google Merchant\nGoogle Merchant has some semihidden and strange subservice looking like google adwords for the search. It couldn't integrate with an existing gm.yml file, but requires rss. It has no open documentation and/or validator and we have just one from seo guys\r\n\r\n[Trello task](https://trello.com/c/39zr3xox/21-9-14k-%D0%B4%D0%B5%D0%BB%D0%B0%D0%B9-%D1%84%D0%B8%D0%B4-%D0%BF%D0%BE-%D0%BC%D0%B5%D1%80%D1%87%D0%B0%D0%BD%D1%82-%D1%86%D0%B5%D0%BD%D1%82%D1%80) contains details\n", "before_files": [{"content": "\"\"\"\nDjango command to generate yml price files for market-places.\n\n`utm` or `target` defines particular market-place.\nSee `settings.UTM_PRICE_MAP` to explore current list of supported market-places.\n\"\"\"\n\nimport logging\nimport os\nimport typing\nfrom collections import defaultdict\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.db.models import QuerySet\nfrom django.template.loader import render_to_string\n\nfrom catalog import context\nfrom shopelectro import models\n\nlogger = logging.getLogger(__name__)\n\n\n# --- files processing ---\nclass File:\n def __init__(self, path: str, context: dict):\n self.path = path\n self.context = context\n\n def create(self):\n with open(self.path, 'w', encoding='utf-8') as file:\n file.write(render_to_string('prices/price.yml', self.context).strip())\n logger.info(f'{self.path} generated.')\n\n\nclass Files:\n def __init__(self, files: typing.List[File]):\n self.files = files\n\n def create(self):\n for file in self.files:\n file.create()\n\n\nclass Context(context.Context):\n \"\"\"DB data, extracted for price file.\"\"\"\n\n def __init__(self, target: str):\n self.target = target\n\n def context(self) -> dict:\n categories = CategoriesFilter(self.target).qs()\n products = ProductsPatch(\n self.target,\n products=ProductsFilter(self.target, categories).qs()\n ).products()\n\n return {\n 'base_url': settings.BASE_URL,\n 'categories': categories,\n 'products': products,\n 'shop': settings.SHOP,\n 'utm': self.target,\n }\n\n\nclass CategoriesFilter:\n \"\"\"Categories list for particular market place.\"\"\"\n\n @property\n def ignored(self) 
-> typing.List[str]:\n return (\n settings.PRICE_IGNORED_CATEGORIES_MAP['default']\n + settings.PRICE_IGNORED_CATEGORIES_MAP[self.target]\n )\n\n def __init__(self, target: str):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n\n def qs(self) -> models.SECategoryQuerySet:\n if self.target == 'SE78':\n return models.Category.objects.all()\n\n result_categories = (\n models.Category.objects\n .exclude(\n id__in=(\n models.Category.objects\n .filter(name__in=self.ignored)\n .get_descendants(include_self=True)\n )\n )\n )\n\n if self.target == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures.\n To simplify filtering we are excluding all categories\n which don't contain at least one product with picture.\n \"\"\"\n # @todo #715:30m Try to rm ancestors filter in YM price filter.\n # Exclude only categories with no pictures, without their ancestors.\n result_categories = result_categories.get_categories_tree_with_pictures()\n\n return result_categories\n\n\nclass ProductsFilter:\n \"\"\"Filter offers with individual price requirements.\"\"\"\n\n @property\n def ignored(self) -> typing.List[str]:\n return settings.PRICE_IGNORED_PRODUCTS_MAP[self.target]\n\n FILTERS = defaultdict(\n lambda: (lambda qs: qs),\n # Yandex Market feed requires picture for every offer\n YM=lambda qs: (\n qs\n .filter(page__images__isnull=False)\n .distinct()\n ),\n # Google Merchant feed should not contain offers cheaper then CONST\n GM=lambda qs: (\n qs\n .filter(price__gt=settings.PRICE_GM_LOWER_BOUND)\n )\n )\n\n def __init__(self, target: str, categories: models.SECategoryQuerySet):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n self.categories = categories\n\n def qs(self) -> QuerySet:\n return self.FILTERS[self.target](\n models.Product.objects.active()\n .filter(category__in=self.categories, price__gt=0)\n .exclude(vendor_code__in=self.ignored)\n )\n\n\nclass ProductsPatch:\n\n UTM_MEDIUM_DATA = defaultdict(\n lambda: 'cpc',\n {'YM': 'cpc-market'}\n )\n\n def __init__(self, target: str, products: QuerySet):\n assert target in settings.UTM_PRICE_MAP\n self.target = target\n self._products = products\n\n def put_params(self, product):\n product.prepared_params = [\n (group, tags[0].name)\n for (group, tags) in filter(\n lambda x: x[0].name != '\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c',\n product.get_params().items()\n ) if tags\n ]\n return product\n\n def put_utm(self, product):\n \"\"\"Put UTM attribute to product.\"\"\"\n utm_marks = [\n ('utm_source', self.target),\n ('utm_medium', self.UTM_MEDIUM_DATA[self.target]),\n ('utm_content', product.get_root_category().page.slug),\n ('utm_term', str(product.vendor_code)),\n ]\n\n utm_mark_query = '&'.join(f'{k}={v}' for k, v in utm_marks)\n product.utm_url = f'{settings.BASE_URL}{product.url}?{utm_mark_query}'\n\n return product\n\n def put_crumbs(self, product): # Ignore PyDocStyleBear\n \"\"\"Crumbs for google merchant. 
https://goo.gl/b0UJQp\"\"\"\n product.crumbs = ' > '.join(\n product.page.get_ancestors_fields('h1', include_self=False)[1:]\n )\n return product\n\n def put_brand(self, product, brands):\n product.brand = brands.get(product)\n return product\n\n def products(self) -> typing.List[models.Product]:\n \"\"\"Path every product with additional fields.\"\"\"\n brands = models.Tag.objects.get_brands(self._products)\n return [\n self.put_brand(\n product=self.put_params(self.put_crumbs(self.put_utm(product))),\n brands=brands\n )\n for product in self._products\n ]\n\n\n# --- command block ---\nclass Command(BaseCommand):\n \"\"\"Generate yml file for a given vendor (YM or price.ru).\"\"\"\n\n # price files will be stored at this dir\n BASE_DIR = settings.ASSETS_DIR\n\n def handle(self, *args, **options):\n Files(\n [File(\n path=os.path.join(self.BASE_DIR, filename),\n context=Context(target).context()\n ) for target, filename in settings.UTM_PRICE_MAP.items()]\n ).create()\n", "path": "shopelectro/management/commands/price.py"}]} | 2,703 | 383 |
gh_patches_debug_28889 | rasdani/github-patches | git_diff | piskvorky__gensim-968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lsi distributed fail
Hi,
I've got a problem with distributed LSI. I executed the example:
https://radimrehurek.com/gensim/dist_lsi.html
First I configured the server (environment variables), then I ran the server, worker and dispatcher.
All of that ran without errors. But when I executed the code, I got this failure:

Why does this happen? How can I solve it?
Thank you in advance.
</issue>
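Since the reported traceback is only available as a screenshot, the exact failure is unclear; for reference, a minimal sketch of the launch sequence and call described in the distributed LSI tutorial looks roughly like this (host, topic count and the toy corpus are placeholders, not values from the report):

```python
# Processes assumed to be started beforehand, one per shell:
#   python -m Pyro4.naming -n 0.0.0.0 &       # Pyro name server
#   python -m gensim.models.lsi_worker &      # one or more workers
#   python -m gensim.models.lsi_dispatcher &  # a single dispatcher
from gensim import corpora
from gensim.models import LsiModel

documents = [["distributed", "lsi", "test"], ["pyro", "worker", "dispatcher"]]
dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# distributed=True hands the SVD updates to the dispatcher/workers above.
lsi = LsiModel(corpus, id2word=dictionary, num_topics=2, distributed=True)
```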
<code>
[start of gensim/models/lsi_worker.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) 2010 Radim Rehurek <[email protected]>
5 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
6
7 """
8 USAGE: %(program)s
9
10 Worker ("slave") process used in computing distributed LSI. Run this script \
11 on every node in your cluster. If you wish, you may even run it multiple times \
12 on a single machine, to make better use of multiple cores (just beware that \
13 memory footprint increases accordingly).
14
15 Example: python -m gensim.models.lsi_worker
16 """
17
18
19 from __future__ import with_statement
20 import os, sys, logging
21 import threading
22 import tempfile
23 try:
24 import Queue
25 except ImportError:
26 import queue as Queue
27 import Pyro4
28 from gensim.models import lsimodel
29 from gensim import utils
30
31 logger = logging.getLogger('gensim.models.lsi_worker')
32
33
34 SAVE_DEBUG = 0 # save intermediate models after every SAVE_DEBUG updates (0 for never)
35
36
37
38 class Worker(object):
39 def __init__(self):
40 self.model = None
41
42
43 def initialize(self, myid, dispatcher, **model_params):
44 self.lock_update = threading.Lock()
45 self.jobsdone = 0 # how many jobs has this worker completed?
46 self.myid = myid # id of this worker in the dispatcher; just a convenience var for easy access/logging TODO remove?
47 self.dispatcher = dispatcher
48 self.finished = False
49 logger.info("initializing worker #%s" % myid)
50 self.model = lsimodel.LsiModel(**model_params)
51
52
53 @Pyro4.oneway
54 def requestjob(self):
55 """
56 Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called.
57 """
58 if self.model is None:
59 raise RuntimeError("worker must be initialized before receiving jobs")
60
61 job = None
62 while job is None and not self.finished:
63 try:
64 job = self.dispatcher.getjob(self.myid)
65 except Queue.Empty:
66 # no new job: try again, unless we're finished with all work
67 continue
68 if job is not None:
69 logger.info("worker #%s received job #%i" % (self.myid, self.jobsdone))
70 self.processjob(job)
71 self.dispatcher.jobdone(self.myid)
72 else:
73 logger.info("worker #%i stopping asking for jobs" % self.myid)
74
75
76 @utils.synchronous('lock_update')
77 def processjob(self, job):
78 self.model.add_documents(job)
79 self.jobsdone += 1
80 if SAVE_DEBUG and self.jobsdone % SAVE_DEBUG == 0:
81 fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')
82 self.model.save(fname)
83
84
85 @utils.synchronous('lock_update')
86 def getstate(self):
87 logger.info("worker #%i returning its state after %s jobs" %
88 (self.myid, self.jobsdone))
89 assert isinstance(self.model.projection, lsimodel.Projection)
90 self.finished = True
91 return self.model.projection
92
93
94 @utils.synchronous('lock_update')
95 def reset(self):
96 logger.info("resetting worker #%i" % self.myid)
97 self.model.projection = self.model.projection.empty_like()
98 self.finished = False
99
100
101 @Pyro4.oneway
102 def exit(self):
103 logger.info("terminating worker #%i" % self.myid)
104 os._exit(0)
105 #endclass Worker
106
107
108
109 def main():
110 logging.basicConfig(format = '%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
111 logger.info("running %s" % " ".join(sys.argv))
112
113 program = os.path.basename(sys.argv[0])
114 # make sure we have enough cmd line parameters
115 if len(sys.argv) < 1:
116 print(globals()["__doc__"] % locals())
117 sys.exit(1)
118
119 utils.pyro_daemon('gensim.lsi_worker', Worker(), random_suffix=True)
120
121 logger.info("finished running %s" % program)
122
123
124
125 if __name__ == '__main__':
126 main()
127
[end of gensim/models/lsi_worker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gensim/models/lsi_worker.py b/gensim/models/lsi_worker.py
--- a/gensim/models/lsi_worker.py
+++ b/gensim/models/lsi_worker.py
@@ -39,7 +39,7 @@
def __init__(self):
self.model = None
-
+ @Pyro4.expose
def initialize(self, myid, dispatcher, **model_params):
self.lock_update = threading.Lock()
self.jobsdone = 0 # how many jobs has this worker completed?
@@ -49,7 +49,7 @@
logger.info("initializing worker #%s" % myid)
self.model = lsimodel.LsiModel(**model_params)
-
+ @Pyro4.expose
@Pyro4.oneway
def requestjob(self):
"""
@@ -81,7 +81,7 @@
fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')
self.model.save(fname)
-
+ @Pyro4.expose
@utils.synchronous('lock_update')
def getstate(self):
logger.info("worker #%i returning its state after %s jobs" %
@@ -90,7 +90,7 @@
self.finished = True
return self.model.projection
-
+ @Pyro4.expose
@utils.synchronous('lock_update')
def reset(self):
logger.info("resetting worker #%i" % self.myid)
| {"golden_diff": "diff --git a/gensim/models/lsi_worker.py b/gensim/models/lsi_worker.py\n--- a/gensim/models/lsi_worker.py\n+++ b/gensim/models/lsi_worker.py\n@@ -39,7 +39,7 @@\n def __init__(self):\n self.model = None\n \n-\n+ @Pyro4.expose\n def initialize(self, myid, dispatcher, **model_params):\n self.lock_update = threading.Lock()\n self.jobsdone = 0 # how many jobs has this worker completed?\n@@ -49,7 +49,7 @@\n logger.info(\"initializing worker #%s\" % myid)\n self.model = lsimodel.LsiModel(**model_params)\n \n-\n+ @Pyro4.expose\n @Pyro4.oneway\n def requestjob(self):\n \"\"\"\n@@ -81,7 +81,7 @@\n fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')\n self.model.save(fname)\n \n-\n+ @Pyro4.expose\n @utils.synchronous('lock_update')\n def getstate(self):\n logger.info(\"worker #%i returning its state after %s jobs\" %\n@@ -90,7 +90,7 @@\n self.finished = True\n return self.model.projection\n \n-\n+ @Pyro4.expose\n @utils.synchronous('lock_update')\n def reset(self):\n logger.info(\"resetting worker #%i\" % self.myid)\n", "issue": "Lsi distributed fail\nHi, \nI've got a problem with the lsi distributed. When i executed the example:\n\nhttps://radimrehurek.com/gensim/dist_lsi.html\n\nFirst configure the server (enviroment variables), then i run the server, worker and dispatcher.\n\nAnd all without errros. But when i executed the code. I have this fail:\n\n\nWhy does this happens? How can i solve?\n\nThank you in advance.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2010 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nUSAGE: %(program)s\n\n Worker (\"slave\") process used in computing distributed LSI. Run this script \\\non every node in your cluster. 
If you wish, you may even run it multiple times \\\non a single machine, to make better use of multiple cores (just beware that \\\nmemory footprint increases accordingly).\n\nExample: python -m gensim.models.lsi_worker\n\"\"\"\n\n\nfrom __future__ import with_statement\nimport os, sys, logging\nimport threading\nimport tempfile\ntry:\n import Queue\nexcept ImportError:\n import queue as Queue\nimport Pyro4\nfrom gensim.models import lsimodel\nfrom gensim import utils\n\nlogger = logging.getLogger('gensim.models.lsi_worker')\n\n\nSAVE_DEBUG = 0 # save intermediate models after every SAVE_DEBUG updates (0 for never)\n\n\n\nclass Worker(object):\n def __init__(self):\n self.model = None\n\n\n def initialize(self, myid, dispatcher, **model_params):\n self.lock_update = threading.Lock()\n self.jobsdone = 0 # how many jobs has this worker completed?\n self.myid = myid # id of this worker in the dispatcher; just a convenience var for easy access/logging TODO remove?\n self.dispatcher = dispatcher\n self.finished = False\n logger.info(\"initializing worker #%s\" % myid)\n self.model = lsimodel.LsiModel(**model_params)\n\n\n @Pyro4.oneway\n def requestjob(self):\n \"\"\"\n Request jobs from the dispatcher, in a perpetual loop until `getstate()` is called.\n \"\"\"\n if self.model is None:\n raise RuntimeError(\"worker must be initialized before receiving jobs\")\n\n job = None\n while job is None and not self.finished:\n try:\n job = self.dispatcher.getjob(self.myid)\n except Queue.Empty:\n # no new job: try again, unless we're finished with all work\n continue\n if job is not None:\n logger.info(\"worker #%s received job #%i\" % (self.myid, self.jobsdone))\n self.processjob(job)\n self.dispatcher.jobdone(self.myid)\n else:\n logger.info(\"worker #%i stopping asking for jobs\" % self.myid)\n\n\n @utils.synchronous('lock_update')\n def processjob(self, job):\n self.model.add_documents(job)\n self.jobsdone += 1\n if SAVE_DEBUG and self.jobsdone % SAVE_DEBUG == 0:\n fname = os.path.join(tempfile.gettempdir(), 'lsi_worker.pkl')\n self.model.save(fname)\n\n\n @utils.synchronous('lock_update')\n def getstate(self):\n logger.info(\"worker #%i returning its state after %s jobs\" %\n (self.myid, self.jobsdone))\n assert isinstance(self.model.projection, lsimodel.Projection)\n self.finished = True\n return self.model.projection\n\n\n @utils.synchronous('lock_update')\n def reset(self):\n logger.info(\"resetting worker #%i\" % self.myid)\n self.model.projection = self.model.projection.empty_like()\n self.finished = False\n\n\n @Pyro4.oneway\n def exit(self):\n logger.info(\"terminating worker #%i\" % self.myid)\n os._exit(0)\n#endclass Worker\n\n\n\ndef main():\n logging.basicConfig(format = '%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n logger.info(\"running %s\" % \" \".join(sys.argv))\n\n program = os.path.basename(sys.argv[0])\n # make sure we have enough cmd line parameters\n if len(sys.argv) < 1:\n print(globals()[\"__doc__\"] % locals())\n sys.exit(1)\n\n utils.pyro_daemon('gensim.lsi_worker', Worker(), random_suffix=True)\n\n logger.info(\"finished running %s\" % program)\n\n\n\nif __name__ == '__main__':\n main()\n", "path": "gensim/models/lsi_worker.py"}]} | 1,885 | 325 |
gh_patches_debug_20398 | rasdani/github-patches | git_diff | Mailu__Mailu-2158 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Login attempt on roundcube triggers error 500 on /sso/login endpoint (ZeroDivisionError: division by zero)
Hi everybody!
Thanks in advance for your help!
## Before you open your issue
- [x] Check if no issue or pull-request for this already exists.
- [x] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- [x] You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- [x] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
## Environment & Versions
### Environment
- [x] docker-compose
- [ ] kubernetes
- [ ] docker swarm
### Versions
1.9
## Description
A user encountered an error 500 while trying to log into an account on roundcube. The 500 comes from a ZeroDivisionError in one of the jinja templates (see logs below).
## Replication Steps
Unfortunately I could not reproduce it so far. Apparently it happened on the first login attempt, even though I suspect that it had to do with rate limiting, since there was a message about rate limiting right before the error (see below). Apparently the number of fields on the sso form is zero?
The user also reports, that it is now working (also when rapidly logging out and logging in again).
## Expected behaviour
No error 500 and succesful login.
## Logs
````markdown
```
admin_1 | [2022-01-10 14:59:49,322] WARNING in limiter: Authentication attempt from <REDACTED IP OF USER> for <REDACTED MAIL ACCOUNT> has been rate-limited.
admin_1 | [2022-01-10 14:59:49,334] ERROR in app: Exception on /sso/login [POST]
admin_1 | Traceback (most recent call last):
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
admin_1 | response = self.full_dispatch_request()
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
admin_1 | rv = self.handle_user_exception(e)
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
admin_1 | rv = self.dispatch_request()
admin_1 | File "/usr/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
admin_1 | return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
admin_1 | File "/app/mailu/sso/views/base.py", line 36, in login
admin_1 | return flask.render_template('login.html', form=form)
admin_1 | File "/usr/lib/python3.9/site-packages/flask/templating.py", line 147, in render_template
admin_1 | return _render(
admin_1 | File "/usr/lib/python3.9/site-packages/flask/templating.py", line 128, in _render
admin_1 | rv = template.render(context)
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/environment.py", line 1304, in render
admin_1 | self.environment.handle_exception()
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/environment.py", line 925, in handle_exception
admin_1 | raise rewrite_traceback_stack(source=source)
admin_1 | File "/app/mailu/sso/templates/login.html", line 1, in top-level template code
admin_1 | {%- extends "form_sso.html" %}
admin_1 | File "/app/mailu/sso/templates/form_sso.html", line 1, in top-level template code
admin_1 | {%- extends "base_sso.html" %}
admin_1 | File "/app/mailu/sso/templates/base_sso.html", line 70, in top-level template code
admin_1 | {%- block content %}{%- endblock %}
admin_1 | File "/app/mailu/sso/templates/form_sso.html", line 4, in block 'content'
admin_1 | {%- call macros.card() %}
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/runtime.py", line 828, in _invoke
admin_1 | rv = self._func(*arguments)
admin_1 | File "/app/mailu/ui/templates/macros.html", line 84, in template
admin_1 | {{- caller() }}
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/runtime.py", line 828, in _invoke
admin_1 | rv = self._func(*arguments)
admin_1 | File "/app/mailu/sso/templates/form_sso.html", line 8, in template
admin_1 | {{ macros.form_fields(fields, label=False, class="btn btn-default") }}
admin_1 | File "/usr/lib/python3.9/site-packages/jinja2/runtime.py", line 828, in _invoke
admin_1 | rv = self._func(*arguments)
admin_1 | File "/app/mailu/ui/templates/macros.html", line 22, in template
admin_1 | {%- set width = (12 / fields|length)|int %}
admin_1 | ZeroDivisionError: division by zero
```
````
</issue>
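
The traceback bottoms out in `macros.html` at `{%- set width = (12 / fields|length)|int %}`; a minimal sketch, assuming only that Jinja2 is installed, of how an empty `fields` list reproduces that ZeroDivisionError outside of Mailu:

```python
# Minimal sketch, assuming Jinja2 is installed; the template string below is a
# reduction of the failing line from macros.html, not the real macro.
from jinja2 import Template

failing_line = "{% set width = (12 / fields|length)|int %}{{ width }}"

print(Template(failing_line).render(fields=[["email"], ["password"]]))  # prints "6"
try:
    Template(failing_line).render(fields=[])  # fields not populated -> length 0
except ZeroDivisionError as exc:
    print("same failure as the report:", exc)
```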
<code>
[start of core/admin/mailu/sso/views/base.py]
1 from werkzeug.utils import redirect
2 from mailu import models, utils
3 from mailu.sso import sso, forms
4 from mailu.ui import access
5
6 from flask import current_app as app
7 import flask
8 import flask_login
9
10 @sso.route('/login', methods=['GET', 'POST'])
11 def login():
12 client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)
13 form = forms.LoginForm()
14 form.submitAdmin.label.text = form.submitAdmin.label.text + ' Admin'
15 form.submitWebmail.label.text = form.submitWebmail.label.text + ' Webmail'
16
17 fields = []
18 if str(app.config["WEBMAIL"]).upper() != "NONE":
19 fields.append(form.submitWebmail)
20 if str(app.config["ADMIN"]).upper() != "FALSE":
21 fields.append(form.submitAdmin)
22 fields = [fields]
23
24 if form.validate_on_submit():
25 if form.submitAdmin.data:
26 destination = app.config['WEB_ADMIN']
27 elif form.submitWebmail.data:
28 destination = app.config['WEB_WEBMAIL']
29 device_cookie, device_cookie_username = utils.limiter.parse_device_cookie(flask.request.cookies.get('rate_limit'))
30 username = form.email.data
31 if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):
32 flask.flash('Too many attempts from your IP (rate-limit)', 'error')
33 return flask.render_template('login.html', form=form)
34 if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):
35 flask.flash('Too many attempts for this user (rate-limit)', 'error')
36 return flask.render_template('login.html', form=form)
37 user = models.User.login(username, form.pw.data)
38 if user:
39 flask.session.regenerate()
40 flask_login.login_user(user)
41 response = flask.redirect(destination)
42 response.set_cookie('rate_limit', utils.limiter.device_cookie(username), max_age=31536000, path=flask.url_for('sso.login'), secure=app.config['SESSION_COOKIE_SECURE'], httponly=True)
43 flask.current_app.logger.info(f'Login succeeded for {username} from {client_ip}.')
44 return response
45 else:
46 utils.limiter.rate_limit_user(username, client_ip, device_cookie, device_cookie_username) if models.User.get(username) else utils.limiter.rate_limit_ip(client_ip)
47 flask.current_app.logger.warn(f'Login failed for {username} from {client_ip}.')
48 flask.flash('Wrong e-mail or password', 'error')
49 return flask.render_template('login.html', form=form, fields=fields)
50
51 @sso.route('/logout', methods=['GET'])
52 @access.authenticated
53 def logout():
54 flask_login.logout_user()
55 flask.session.destroy()
56 return flask.redirect(flask.url_for('.login'))
57
58
[end of core/admin/mailu/sso/views/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/sso/views/base.py b/core/admin/mailu/sso/views/base.py
--- a/core/admin/mailu/sso/views/base.py
+++ b/core/admin/mailu/sso/views/base.py
@@ -30,10 +30,10 @@
username = form.email.data
if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):
flask.flash('Too many attempts from your IP (rate-limit)', 'error')
- return flask.render_template('login.html', form=form)
+ return flask.render_template('login.html', form=form, fields=fields)
if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):
flask.flash('Too many attempts for this user (rate-limit)', 'error')
- return flask.render_template('login.html', form=form)
+ return flask.render_template('login.html', form=form, fields=fields)
user = models.User.login(username, form.pw.data)
if user:
flask.session.regenerate()
| {"golden_diff": "diff --git a/core/admin/mailu/sso/views/base.py b/core/admin/mailu/sso/views/base.py\n--- a/core/admin/mailu/sso/views/base.py\n+++ b/core/admin/mailu/sso/views/base.py\n@@ -30,10 +30,10 @@\n username = form.email.data\n if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):\n flask.flash('Too many attempts from your IP (rate-limit)', 'error')\n- return flask.render_template('login.html', form=form)\n+ return flask.render_template('login.html', form=form, fields=fields)\n if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):\n flask.flash('Too many attempts for this user (rate-limit)', 'error')\n- return flask.render_template('login.html', form=form)\n+ return flask.render_template('login.html', form=form, fields=fields)\n user = models.User.login(username, form.pw.data)\n if user:\n flask.session.regenerate()\n", "issue": "Login attempt on roundcube triggers error 500 on /sso/login endpoint (ZeroDivisionError: division by zero)\nHi everybody!\r\n\r\nThanks in advance for your help!\r\n\r\n## Before you open your issue\r\n- [x] Check if no issue or pull-request for this already exists.\r\n- [x] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- [x] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [x] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\n## Environment & Versions\r\n### Environment\r\n - [x] docker-compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.9\r\n\r\n## Description\r\nA user encountered an error 500 while trying to log into an account on roundcube. The 500 comes from a ZeroDivisionError in one of the jinja templates (see logs below).\r\n\r\n## Replication Steps\r\nUnfortunately I could not reproduce it so far. Apparently it happened on the first login attempt, even though I suspect that it had to do with rate limiting, since there was a message about rate limiting right before the error (see below). 
Apparently the number of fields on the sso form is zero?\r\n\r\nThe user also reports, that it is now working (also when rapidly logging out and logging in again).\r\n\r\n## Expected behaviour\r\nNo error 500 and succesful login.\r\n\r\n## Logs\r\n\r\n````markdown\r\n```\r\nadmin_1 | [2022-01-10 14:59:49,322] WARNING in limiter: Authentication attempt from <REDACTED IP OF USER> for <REDACTED MAIL ACCOUNT> has been rate-limited.\r\nadmin_1 | [2022-01-10 14:59:49,334] ERROR in app: Exception on /sso/login [POST]\r\nadmin_1 | Traceback (most recent call last):\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 2073, in wsgi_app\r\nadmin_1 | response = self.full_dispatch_request()\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 1518, in full_dispatch_request\r\nadmin_1 | rv = self.handle_user_exception(e)\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 1516, in full_dispatch_request\r\nadmin_1 | rv = self.dispatch_request()\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/app.py\", line 1502, in dispatch_request\r\nadmin_1 | return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)\r\nadmin_1 | File \"/app/mailu/sso/views/base.py\", line 36, in login\r\nadmin_1 | return flask.render_template('login.html', form=form)\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/templating.py\", line 147, in render_template\r\nadmin_1 | return _render(\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/flask/templating.py\", line 128, in _render\r\nadmin_1 | rv = template.render(context)\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/environment.py\", line 1304, in render\r\nadmin_1 | self.environment.handle_exception()\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/environment.py\", line 925, in handle_exception\r\nadmin_1 | raise rewrite_traceback_stack(source=source)\r\nadmin_1 | File \"/app/mailu/sso/templates/login.html\", line 1, in top-level template code\r\nadmin_1 | {%- extends \"form_sso.html\" %}\r\nadmin_1 | File \"/app/mailu/sso/templates/form_sso.html\", line 1, in top-level template code\r\nadmin_1 | {%- extends \"base_sso.html\" %}\r\nadmin_1 | File \"/app/mailu/sso/templates/base_sso.html\", line 70, in top-level template code\r\nadmin_1 | {%- block content %}{%- endblock %}\r\nadmin_1 | File \"/app/mailu/sso/templates/form_sso.html\", line 4, in block 'content'\r\nadmin_1 | {%- call macros.card() %}\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/runtime.py\", line 828, in _invoke\r\nadmin_1 | rv = self._func(*arguments)\r\nadmin_1 | File \"/app/mailu/ui/templates/macros.html\", line 84, in template\r\nadmin_1 | {{- caller() }}\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/runtime.py\", line 828, in _invoke\r\nadmin_1 | rv = self._func(*arguments)\r\nadmin_1 | File \"/app/mailu/sso/templates/form_sso.html\", line 8, in template\r\nadmin_1 | {{ macros.form_fields(fields, label=False, class=\"btn btn-default\") }}\r\nadmin_1 | File \"/usr/lib/python3.9/site-packages/jinja2/runtime.py\", line 828, in _invoke\r\nadmin_1 | rv = self._func(*arguments)\r\nadmin_1 | File \"/app/mailu/ui/templates/macros.html\", line 22, in template\r\nadmin_1 | {%- set width = (12 / fields|length)|int %}\r\nadmin_1 | ZeroDivisionError: division by zero\r\n\r\n```\r\n````\r\n\n", "before_files": [{"content": "from werkzeug.utils import redirect\nfrom mailu import models, utils\nfrom mailu.sso import sso, forms\nfrom mailu.ui import access\n\nfrom 
flask import current_app as app\nimport flask\nimport flask_login\n\[email protected]('/login', methods=['GET', 'POST'])\ndef login():\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n form = forms.LoginForm()\n form.submitAdmin.label.text = form.submitAdmin.label.text + ' Admin'\n form.submitWebmail.label.text = form.submitWebmail.label.text + ' Webmail'\n\n fields = []\n if str(app.config[\"WEBMAIL\"]).upper() != \"NONE\":\n fields.append(form.submitWebmail)\n if str(app.config[\"ADMIN\"]).upper() != \"FALSE\":\n fields.append(form.submitAdmin)\n fields = [fields]\n\n if form.validate_on_submit():\n if form.submitAdmin.data:\n destination = app.config['WEB_ADMIN']\n elif form.submitWebmail.data:\n destination = app.config['WEB_WEBMAIL']\n device_cookie, device_cookie_username = utils.limiter.parse_device_cookie(flask.request.cookies.get('rate_limit'))\n username = form.email.data\n if username != device_cookie_username and utils.limiter.should_rate_limit_ip(client_ip):\n flask.flash('Too many attempts from your IP (rate-limit)', 'error')\n return flask.render_template('login.html', form=form)\n if utils.limiter.should_rate_limit_user(username, client_ip, device_cookie, device_cookie_username):\n flask.flash('Too many attempts for this user (rate-limit)', 'error')\n return flask.render_template('login.html', form=form)\n user = models.User.login(username, form.pw.data)\n if user:\n flask.session.regenerate()\n flask_login.login_user(user)\n response = flask.redirect(destination)\n response.set_cookie('rate_limit', utils.limiter.device_cookie(username), max_age=31536000, path=flask.url_for('sso.login'), secure=app.config['SESSION_COOKIE_SECURE'], httponly=True)\n flask.current_app.logger.info(f'Login succeeded for {username} from {client_ip}.')\n return response\n else:\n utils.limiter.rate_limit_user(username, client_ip, device_cookie, device_cookie_username) if models.User.get(username) else utils.limiter.rate_limit_ip(client_ip)\n flask.current_app.logger.warn(f'Login failed for {username} from {client_ip}.')\n flask.flash('Wrong e-mail or password', 'error')\n return flask.render_template('login.html', form=form, fields=fields)\n\[email protected]('/logout', methods=['GET'])\[email protected]\ndef logout():\n flask_login.logout_user()\n flask.session.destroy()\n return flask.redirect(flask.url_for('.login'))\n\n", "path": "core/admin/mailu/sso/views/base.py"}]} | 2,604 | 225 |
gh_patches_debug_20388 | rasdani/github-patches | git_diff | vnpy__vnpy-1500 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu CTP import problem
## Environment

* Operating system: Ubuntu 18.04
* Anaconda version: Python 3.7, 64-bit
* vn.py version: DEV-2.0.1 branch, 20190313 (download date)

## Issue type
Choose one of three: Bug

## Expected program behavior
```
from vnpy.gateway.ctp import ctp_gateway  # imports successfully
```

## Actual program behavior
```
'''from vnpy.gateway.ctp.ctp_gateway import CtpGateWay
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vnpy/vnpy/vnpy/gateway/ctp/__init__.py", line 1, in <module>
from .ctp_gateway import CtpGateway
File "/home/vnpy/vnpy/vnpy/gateway/ctp/ctp_gateway.py", line 6, in <module>
from vnpy.api.ctp import (
File "/home/vnpy/vnpy/vnpy/api/ctp/__init__.py", line 1, in <module>
from .vnctpmd import MdApi
ModuleNotFoundError: No module named 'vnpy.api.ctp.vnctpmd'
```

## Steps to reproduce
```
# remove the oes install module from setup.py first
git clone -b v2.0.1-DEV https://github.com/vnpy/vnpy
cd vnpy
vim setup.py  # just delete the oes-related code
chmod +x install.sh && ./install.sh
# the installation itself completes normally
```
For bug-type issues, please provide concrete reproduction steps and an error screenshot.
</issue>
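
The diff later in this record replaces the source-relative `runtime_library_dirs` with `$ORIGIN`; a hedged sketch of the resulting `Extension` declaration (paths and flags copied from the project's setup.py), shown only to illustrate how the rpath ends up relative to the built `.so`, next to which the CTP shared libraries are expected to sit:

```python
# Sketch only: mirrors the vnctpmd Extension from setup.py with the rpath fix.
# "$ORIGIN" is expanded by the dynamic linker to the directory containing the
# built vnctpmd shared object, so a libthostmduserapi.so placed next to it is
# found at import time regardless of the current working directory.
from setuptools import Extension

vnctpmd = Extension(
    "vnpy.api.ctp.vnctpmd",
    sources=["vnpy/api/ctp/vnctp/vnctpmd/vnctpmd.cpp"],
    include_dirs=["vnpy/api/ctp/include", "vnpy/api/ctp/vnctp"],
    library_dirs=["vnpy/api/ctp/libs", "vnpy/api/ctp"],
    libraries=["thostmduserapi", "thosttraderapi"],
    extra_compile_args=["-std=c++17"],
    extra_link_args=["-lstdc++"],
    runtime_library_dirs=["$ORIGIN"],   # rpath relative to the extension itself
    language="cpp",
)
```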
<code>
[start of setup.py]
1 import ast
2 import platform
3 import re
4
5 from setuptools import Extension, find_packages, setup
6
7 with open("vnpy/__init__.py", "rb") as f:
8 version_line = re.search(
9 r"__version__\s+=\s+(.*)", f.read().decode("utf-8")
10 ).group(1)
11 version = str(ast.literal_eval(version_line))
12
13 if platform.uname().system == "Windows":
14 compiler_flags = ["/MP", "/std:c++17", # standard
15 "/O2", "/Ob2", "/Oi", "/Ot", "/Oy", "/GL", # Optimization
16 "/wd4819" # 936 code page
17 ]
18 extra_link_args = []
19 else:
20 compiler_flags = ["-std=c++17",
21 "-Wno-delete-incomplete", "-Wno-sign-compare",
22 ]
23 extra_link_args = ["-lstdc++"]
24
25 vnctpmd = Extension("vnpy.api.ctp.vnctpmd",
26 [
27 "vnpy/api/ctp/vnctp/vnctpmd/vnctpmd.cpp",
28 ],
29 include_dirs=["vnpy/api/ctp/include", "vnpy/api/ctp/vnctp", ],
30 define_macros=[],
31 undef_macros=[],
32 library_dirs=["vnpy/api/ctp/libs", "vnpy/api/ctp"],
33 libraries=["thostmduserapi", "thosttraderapi", ],
34 extra_compile_args=compiler_flags,
35 extra_link_args=extra_link_args,
36 depends=[],
37 runtime_library_dirs=["vnpy/api/ctp"],
38 language="cpp",
39 )
40 vnctptd = Extension("vnpy.api.ctp.vnctptd",
41 [
42 "vnpy/api/ctp/vnctp/vnctptd/vnctptd.cpp",
43 ],
44 include_dirs=["vnpy/api/ctp/include", "vnpy/api/ctp/vnctp", ],
45 define_macros=[],
46 undef_macros=[],
47 library_dirs=["vnpy/api/ctp/libs", "vnpy/api/ctp"],
48 libraries=["thostmduserapi", "thosttraderapi", ],
49 extra_compile_args=compiler_flags,
50 extra_link_args=extra_link_args,
51 runtime_library_dirs=["vnpy/api/ctp"],
52 depends=[],
53 language="cpp",
54 )
55 vnoes = Extension("vnpy.api.oes.vnoes",
56 [
57 "vnpy/api/oes/vnoes/generated_files/classes_1.cpp",
58 "vnpy/api/oes/vnoes/generated_files/classes_2.cpp",
59 "vnpy/api/oes/vnoes/generated_files/module.cpp",
60 ],
61 include_dirs=["vnpy/api/oes/include", "vnpy/api/oes/vnoes", ],
62 define_macros=[("BRIGAND_NO_BOOST_SUPPORT", "1")],
63 undef_macros=[],
64 library_dirs=["vnpy/api/oes/libs"],
65 libraries=["oes_api"],
66 extra_compile_args=compiler_flags,
67 extra_link_args=extra_link_args,
68 depends=[],
69 language="cpp",
70 )
71
72 if platform.uname().system == "Windows":
73 # use pre-built pyd for windows ( support python 3.7 only )
74 ext_modules = []
75 else:
76 ext_modules = [vnctptd, vnctpmd, vnoes]
77
78 pkgs = find_packages()
79
80 setup(
81 name="vnpy",
82 version=version,
83 include_package_data=True,
84 packages=pkgs,
85 package_data={"": [
86 "*.json", "*.md", "*.ico", "*.ini",
87 "*.dll", "*.so", "*.pyd"
88 ]},
89 install_requires=[],
90 ext_modules=ext_modules
91 )
92
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,7 +34,7 @@
extra_compile_args=compiler_flags,
extra_link_args=extra_link_args,
depends=[],
- runtime_library_dirs=["vnpy/api/ctp"],
+ runtime_library_dirs=["$ORIGIN"],
language="cpp",
)
vnctptd = Extension("vnpy.api.ctp.vnctptd",
@@ -48,7 +48,7 @@
libraries=["thostmduserapi", "thosttraderapi", ],
extra_compile_args=compiler_flags,
extra_link_args=extra_link_args,
- runtime_library_dirs=["vnpy/api/ctp"],
+ runtime_library_dirs=["$ORIGIN"],
depends=[],
language="cpp",
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,7 +34,7 @@\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n- runtime_library_dirs=[\"vnpy/api/ctp\"],\n+ runtime_library_dirs=[\"$ORIGIN\"],\n language=\"cpp\",\n )\n vnctptd = Extension(\"vnpy.api.ctp.vnctptd\",\n@@ -48,7 +48,7 @@\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n- runtime_library_dirs=[\"vnpy/api/ctp\"],\n+ runtime_library_dirs=[\"$ORIGIN\"],\n depends=[],\n language=\"cpp\",\n )\n", "issue": "ubuntu\u3000 ctp\u5bfc\u5165\u95ee\u9898\n## \u73af\u5883\r\n\r\n* \u64cd\u4f5c\u7cfb\u7edf: Ubuntu 18.04\r\n* Anaconda\u7248\u672c: Python 3.7 64\u4f4d\r\n* vn.py\u7248\u672c: DEV-2.0.1 branch 20190313\uff08\u4e0b\u8f7d\u65e5\u671f\uff09\r\n\r\n## Issue\u7c7b\u578b\r\n\u4e09\u9009\u4e00\uff1aBu\uff47\r\n\r\n## \u9884\u671f\u7a0b\u5e8f\u884c\u4e3a\r\n```\r\nfrom vnpy.gateway.ctp import ctp_gateway\u5bfc\u5165\u6210\u529f\r\n## \u5b9e\u9645\u7a0b\u5e8f\u884c\u4e3a\r\n'''from vnpy.gateway.ctp.ctp_gateway import CtpGateWay\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vnpy/vnpy/vnpy/gateway/ctp/__init__.py\", line 1, in <module>\r\n from .ctp_gateway import CtpGateway\r\n File \"/home/vnpy/vnpy/vnpy/gateway/ctp/ctp_gateway.py\", line 6, in <module>\r\n from vnpy.api.ctp import (\r\n File \"/home/vnpy/vnpy/vnpy/api/ctp/__init__.py\", line 1, in <module>\r\n from .vnctpmd import MdApi\r\nModuleNotFoundError: No module named 'vnpy.api.ctp.vnctpmd'\r\n```\r\n\r\n## \u91cd\u73b0\u6b65\u9aa4\r\n\r\n```\r\n\u5220\u9664setup\u4e0b\u9762\u7684oes\u5b89\u88c5\u6a21\u5757 \r\ngit clone -b v2.0.1-DEV https://github.com/vnpy/vnpy\r\ncd vnpy\r\nvim setup.py #\u5177\u4f53\u5220\u9664\u5220\u9664\u76f8\u5173\u4ee3\u7801\u5373\u53ef \r\nchmod +x install.sh && ./install.sh \r\n# \u5b89\u88c5\u4f1a\u6b63\u5e38\u8fdb\u884c \r\n```\r\n\r\n\u9488\u5bf9Bug\u7c7b\u578bIssue\uff0c\u8bf7\u63d0\u4f9b\u5177\u4f53\u91cd\u73b0\u6b65\u9aa4\u4ee5\u53ca\u62a5\u9519\u622a\u56fe\r\n\r\n\n", "before_files": [{"content": "import ast\nimport platform\nimport re\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"vnpy/__init__.py\", \"rb\") as f:\n version_line = re.search(\n r\"__version__\\s+=\\s+(.*)\", f.read().decode(\"utf-8\")\n ).group(1)\n version = str(ast.literal_eval(version_line))\n\nif platform.uname().system == \"Windows\":\n compiler_flags = [\"/MP\", \"/std:c++17\", # standard\n \"/O2\", \"/Ob2\", \"/Oi\", \"/Ot\", \"/Oy\", \"/GL\", # Optimization\n \"/wd4819\" # 936 code page\n ]\n extra_link_args = []\nelse:\n compiler_flags = [\"-std=c++17\",\n \"-Wno-delete-incomplete\", \"-Wno-sign-compare\",\n ]\n extra_link_args = [\"-lstdc++\"]\n\nvnctpmd = Extension(\"vnpy.api.ctp.vnctpmd\",\n [\n \"vnpy/api/ctp/vnctp/vnctpmd/vnctpmd.cpp\",\n ],\n include_dirs=[\"vnpy/api/ctp/include\", \"vnpy/api/ctp/vnctp\", ],\n define_macros=[],\n undef_macros=[],\n library_dirs=[\"vnpy/api/ctp/libs\", \"vnpy/api/ctp\"],\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n runtime_library_dirs=[\"vnpy/api/ctp\"],\n language=\"cpp\",\n )\nvnctptd = Extension(\"vnpy.api.ctp.vnctptd\",\n [\n \"vnpy/api/ctp/vnctp/vnctptd/vnctptd.cpp\",\n ],\n include_dirs=[\"vnpy/api/ctp/include\", \"vnpy/api/ctp/vnctp\", ],\n define_macros=[],\n undef_macros=[],\n 
library_dirs=[\"vnpy/api/ctp/libs\", \"vnpy/api/ctp\"],\n libraries=[\"thostmduserapi\", \"thosttraderapi\", ],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n runtime_library_dirs=[\"vnpy/api/ctp\"],\n depends=[],\n language=\"cpp\",\n )\nvnoes = Extension(\"vnpy.api.oes.vnoes\",\n [\n \"vnpy/api/oes/vnoes/generated_files/classes_1.cpp\",\n \"vnpy/api/oes/vnoes/generated_files/classes_2.cpp\",\n \"vnpy/api/oes/vnoes/generated_files/module.cpp\",\n ],\n include_dirs=[\"vnpy/api/oes/include\", \"vnpy/api/oes/vnoes\", ],\n define_macros=[(\"BRIGAND_NO_BOOST_SUPPORT\", \"1\")],\n undef_macros=[],\n library_dirs=[\"vnpy/api/oes/libs\"],\n libraries=[\"oes_api\"],\n extra_compile_args=compiler_flags,\n extra_link_args=extra_link_args,\n depends=[],\n language=\"cpp\",\n )\n\nif platform.uname().system == \"Windows\":\n # use pre-built pyd for windows ( support python 3.7 only )\n ext_modules = []\nelse:\n ext_modules = [vnctptd, vnctpmd, vnoes]\n\npkgs = find_packages()\n\nsetup(\n name=\"vnpy\",\n version=version,\n include_package_data=True,\n packages=pkgs,\n package_data={\"\": [\n \"*.json\", \"*.md\", \"*.ico\", \"*.ini\",\n \"*.dll\", \"*.so\", \"*.pyd\"\n ]},\n install_requires=[],\n ext_modules=ext_modules\n)\n", "path": "setup.py"}]} | 1,867 | 178 |
gh_patches_debug_37915 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-3457 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue terminal chat service
Hello,
First of all thank you for this amazing framework.
I was trying to run the terminal chat example (code from today without changes except port).
It seems that only 'max_worker' clients can connect in total. For example, when I set max_worker=1 in the config.yml, I can connect with the client once successfully. But when I stop the client ('[DONE]') and start it again, it gets stuck right at the beginning.
How can I prevent the server from getting stuck once more than max_workers clients have been connected? I already tried removing the agent from memory; however, it seems that the issue is that the thread is just not ending.

I use Python 3.7.5 and Ubuntu 18.04 LTS.
</issue>
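
The fix this report led to (visible in the diff further down) has the client leave its input loop after `[DONE]` and send one extra `EXIT` message so the server-side world terminates and the worker slot is freed; a reduced sketch of that send path, where `ws` and `chat_id` are assumed to be the `websocket.WebSocketApp` and per-session id from `client.py`:

```python
# Reduced sketch of the client-side change; not the full _run() loop.
import json
import time

def send_message(ws, chat_id, text):
    """Send one user message; returns False when the chat should stop."""
    ws.send(json.dumps({"id": chat_id, "text": text}))
    if text == "[DONE]":
        time.sleep(1)
        # the extra EXIT lets the overworld finish instead of blocking a worker
        ws.send(json.dumps({"id": chat_id, "text": "EXIT"}))
        return False
    return True
```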
<code>
[start of parlai/chat_service/services/terminal_chat/client.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 import json
8 import uuid
9 import websocket
10 import time
11 import threading
12 from parlai.core.params import ParlaiParser
13
14
15 def _get_rand_id():
16 """
17 :return: The string of a random id using uuid4
18 """
19 return str(uuid.uuid4())
20
21
22 def _prBlueBG(text):
23 """
24 Print given in text with a blue background.
25
26 :param text: The text to be printed
27 """
28 print("\033[44m{}\033[0m".format(text), sep="")
29
30
31 def on_message(ws, message):
32 """
33 Prints the incoming message from the server.
34
35 :param ws: a WebSocketApp
36 :param message: json with 'text' field to be printed
37 """
38 incoming_message = json.loads(message)
39 print("\033[0m\n")
40 print("Bot: " + incoming_message['text'])
41 quick_replies = incoming_message.get('quick_replies')
42 if quick_replies is not None and len(quick_replies) > 0:
43 print(f"\nOptions: [{'|'.join(quick_replies)}]")
44 print("\033[44m\n")
45
46
47 def on_error(ws, error):
48 """
49 Prints an error, if occurs.
50
51 :param ws: WebSocketApp
52 :param error: An error
53 """
54 print(error)
55
56
57 def on_close(ws):
58 """
59 Cleanup before closing connection.
60
61 :param ws: WebSocketApp
62 """
63 # Reset color formatting if necessary
64 print("\033[0m")
65 print("Connection closed")
66
67
68 def _run(ws, id):
69 """
70 Takes user input and sends it to a websocket.
71
72 :param ws: websocket.WebSocketApp
73 """
74 while True:
75 x = input("\033[44m Me: ")
76 print("\033[0m", end="")
77 data = {}
78 data['id'] = id
79 data['text'] = x
80 json_data = json.dumps(data)
81 ws.send(json_data)
82 time.sleep(1)
83 if x == "[DONE]":
84 break
85 ws.close()
86
87
88 def on_open(ws):
89 """
90 Starts a new thread that loops, taking user input and sending it to the websocket.
91
92 :param ws: websocket.WebSocketApp that sends messages to a terminal_manager
93 """
94 id = _get_rand_id()
95 threading.Thread(target=_run, args=(ws, id)).start()
96
97
98 def setup_args():
99 """
100 Set up args, specifically for the port number.
101
102 :return: A parser that parses the port from commandline arguments.
103 """
104 parser = ParlaiParser(False, False)
105 parser_grp = parser.add_argument_group('Terminal Chat')
106 parser_grp.add_argument(
107 '--port', default=35496, type=int, help='Port to run the terminal chat server'
108 )
109 return parser.parse_args()
110
111
112 if __name__ == "__main__":
113 opt = setup_args()
114 port = opt.get('port', 34596)
115 print("Connecting to port: ", port)
116 ws = websocket.WebSocketApp(
117 "ws://localhost:{}/websocket".format(port),
118 on_message=on_message,
119 on_error=on_error,
120 on_close=on_close,
121 )
122 ws.on_open = on_open
123 ws.run_forever()
124
[end of parlai/chat_service/services/terminal_chat/client.py]
[start of parlai/chat_service/tasks/chatbot/worlds.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 #
7 # py parlai/chat_service/tasks/overworld_demo/run.py --debug --verbose
8
9 from parlai.core.worlds import World
10 from parlai.chat_service.services.messenger.worlds import OnboardWorld
11 from parlai.core.agents import create_agent_from_shared
12
13
14 # ---------- Chatbot demo ---------- #
15 class MessengerBotChatOnboardWorld(OnboardWorld):
16 """
17 Example messenger onboarding world for Chatbot Model.
18 """
19
20 @staticmethod
21 def generate_world(opt, agents):
22 return MessengerBotChatOnboardWorld(opt=opt, agent=agents[0])
23
24 def parley(self):
25 self.episodeDone = True
26
27
28 class MessengerBotChatTaskWorld(World):
29 """
30 Example one person world that talks to a provided agent (bot).
31 """
32
33 MAX_AGENTS = 1
34 MODEL_KEY = 'blender_90M'
35
36 def __init__(self, opt, agent, bot):
37 self.agent = agent
38 self.episodeDone = False
39 self.model = bot
40 self.first_time = True
41
42 @staticmethod
43 def generate_world(opt, agents):
44 if opt['models'] is None:
45 raise RuntimeError("Model must be specified")
46 return MessengerBotChatTaskWorld(
47 opt,
48 agents[0],
49 create_agent_from_shared(
50 opt['shared_bot_params'][MessengerBotChatTaskWorld.MODEL_KEY]
51 ),
52 )
53
54 @staticmethod
55 def assign_roles(agents):
56 agents[0].disp_id = 'ChatbotAgent'
57
58 def parley(self):
59 if self.first_time:
60 self.agent.observe(
61 {
62 'id': 'World',
63 'text': 'Welcome to the ParlAI Chatbot demo. '
64 'You are now paired with a bot - feel free to send a message.'
65 'Type [DONE] to finish the chat, or [RESET] to reset the dialogue history.',
66 }
67 )
68 self.first_time = False
69 a = self.agent.act()
70 if a is not None:
71 if '[DONE]' in a['text']:
72 self.episodeDone = True
73 elif '[RESET]' in a['text']:
74 self.model.reset()
75 self.agent.observe({"text": "[History Cleared]", "episode_done": False})
76 else:
77 print("===act====")
78 print(a)
79 print("~~~~~~~~~~~")
80 self.model.observe(a)
81 response = self.model.act()
82 print("===response====")
83 print(response)
84 print("~~~~~~~~~~~")
85 self.agent.observe(response)
86
87 def episode_done(self):
88 return self.episodeDone
89
90 def shutdown(self):
91 self.agent.shutdown()
92
93
94 # ---------- Overworld -------- #
95 class MessengerOverworld(World):
96 """
97 World to handle moving agents to their proper places.
98 """
99
100 def __init__(self, opt, agent):
101 self.agent = agent
102 self.opt = opt
103 self.first_time = True
104 self.episodeDone = False
105
106 @staticmethod
107 def generate_world(opt, agents):
108 return MessengerOverworld(opt, agents[0])
109
110 @staticmethod
111 def assign_roles(agents):
112 for a in agents:
113 a.disp_id = 'Agent'
114
115 def episode_done(self):
116 return self.episodeDone
117
118 def parley(self):
119 if self.first_time:
120 self.agent.observe(
121 {
122 'id': 'Overworld',
123 'text': 'Welcome to the overworld for the ParlAI messenger '
124 'chatbot demo. Please type "begin" to start.',
125 'quick_replies': ['begin'],
126 }
127 )
128 self.first_time = False
129 a = self.agent.act()
130 if a is not None and a['text'].lower() == 'begin':
131 self.episodeDone = True
132 return 'default'
133 elif a is not None:
134 self.agent.observe(
135 {
136 'id': 'Overworld',
137 'text': 'Invalid option. Please type "begin".',
138 'quick_replies': ['begin'],
139 }
140 )
141
[end of parlai/chat_service/tasks/chatbot/worlds.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parlai/chat_service/services/terminal_chat/client.py b/parlai/chat_service/services/terminal_chat/client.py
--- a/parlai/chat_service/services/terminal_chat/client.py
+++ b/parlai/chat_service/services/terminal_chat/client.py
@@ -11,6 +11,13 @@
import threading
from parlai.core.params import ParlaiParser
+# the socket callback functions operate asynchronously.
+# upon exit of a chat, we do not want the user to view any additional messages from the server.
+# alas, it is necessary to send two messages ([DONE], and EXIT) in order to fully exist the world pool
+# to prevent receiving a message after sending [DONE], we track the user's state with
+# this global variable.
+RUNNING = True
+
def _get_rand_id():
"""
@@ -35,6 +42,8 @@
:param ws: a WebSocketApp
:param message: json with 'text' field to be printed
"""
+ if not RUNNING:
+ return
incoming_message = json.loads(message)
print("\033[0m\n")
print("Bot: " + incoming_message['text'])
@@ -71,16 +80,22 @@
:param ws: websocket.WebSocketApp
"""
+ global RUNNING
while True:
x = input("\033[44m Me: ")
print("\033[0m", end="")
data = {}
data['id'] = id
data['text'] = x
+ if x == "[DONE]":
+ RUNNING = False
json_data = json.dumps(data)
ws.send(json_data)
time.sleep(1)
if x == "[DONE]":
+ time.sleep(1)
+ data['text'] = 'EXIT'
+ ws.send(json.dumps(data))
break
ws.close()
diff --git a/parlai/chat_service/tasks/chatbot/worlds.py b/parlai/chat_service/tasks/chatbot/worlds.py
--- a/parlai/chat_service/tasks/chatbot/worlds.py
+++ b/parlai/chat_service/tasks/chatbot/worlds.py
@@ -121,12 +121,15 @@
{
'id': 'Overworld',
'text': 'Welcome to the overworld for the ParlAI messenger '
- 'chatbot demo. Please type "begin" to start.',
- 'quick_replies': ['begin'],
+ 'chatbot demo. Please type "begin" to start, or "exit" to exit',
+ 'quick_replies': ['begin', 'exit'],
}
)
self.first_time = False
a = self.agent.act()
+ if a is not None and a['text'].lower() == 'exit':
+ self.episode_done = True
+ return 'EXIT'
if a is not None and a['text'].lower() == 'begin':
self.episodeDone = True
return 'default'
| {"golden_diff": "diff --git a/parlai/chat_service/services/terminal_chat/client.py b/parlai/chat_service/services/terminal_chat/client.py\n--- a/parlai/chat_service/services/terminal_chat/client.py\n+++ b/parlai/chat_service/services/terminal_chat/client.py\n@@ -11,6 +11,13 @@\n import threading\n from parlai.core.params import ParlaiParser\n \n+# the socket callback functions operate asynchronously.\n+# upon exit of a chat, we do not want the user to view any additional messages from the server.\n+# alas, it is necessary to send two messages ([DONE], and EXIT) in order to fully exist the world pool\n+# to prevent receiving a message after sending [DONE], we track the user's state with\n+# this global variable.\n+RUNNING = True\n+\n \n def _get_rand_id():\n \"\"\"\n@@ -35,6 +42,8 @@\n :param ws: a WebSocketApp\n :param message: json with 'text' field to be printed\n \"\"\"\n+ if not RUNNING:\n+ return\n incoming_message = json.loads(message)\n print(\"\\033[0m\\n\")\n print(\"Bot: \" + incoming_message['text'])\n@@ -71,16 +80,22 @@\n \n :param ws: websocket.WebSocketApp\n \"\"\"\n+ global RUNNING\n while True:\n x = input(\"\\033[44m Me: \")\n print(\"\\033[0m\", end=\"\")\n data = {}\n data['id'] = id\n data['text'] = x\n+ if x == \"[DONE]\":\n+ RUNNING = False\n json_data = json.dumps(data)\n ws.send(json_data)\n time.sleep(1)\n if x == \"[DONE]\":\n+ time.sleep(1)\n+ data['text'] = 'EXIT'\n+ ws.send(json.dumps(data))\n break\n ws.close()\n \ndiff --git a/parlai/chat_service/tasks/chatbot/worlds.py b/parlai/chat_service/tasks/chatbot/worlds.py\n--- a/parlai/chat_service/tasks/chatbot/worlds.py\n+++ b/parlai/chat_service/tasks/chatbot/worlds.py\n@@ -121,12 +121,15 @@\n {\n 'id': 'Overworld',\n 'text': 'Welcome to the overworld for the ParlAI messenger '\n- 'chatbot demo. Please type \"begin\" to start.',\n- 'quick_replies': ['begin'],\n+ 'chatbot demo. Please type \"begin\" to start, or \"exit\" to exit',\n+ 'quick_replies': ['begin', 'exit'],\n }\n )\n self.first_time = False\n a = self.agent.act()\n+ if a is not None and a['text'].lower() == 'exit':\n+ self.episode_done = True\n+ return 'EXIT'\n if a is not None and a['text'].lower() == 'begin':\n self.episodeDone = True\n return 'default'\n", "issue": "Issue terminal chat service\nHello,\r\n\r\nFirst of all thank you for this amazing framework.\r\nI was trying to run the terminal chat example (code from today without changes except port).\r\n\r\nIt seems that there are only 'max_worker' clients possible to connect in total. For example when I set max_worker=1 in the config.yml, I can connect with the client one time successfully. But when I stop the client ('[DONE]'), start it again and it gets stuck right at the beginning.\r\n\r\nHow can I prevent the server from getting stuck once >max_workers clients have been connected? I already tried removing the agent from memory, however it seems that the issue is that the thread is just not ending.\r\n\r\n\r\n\r\nI use Python 3.7.5 and Ubuntu 18.04 LTS.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport json\nimport uuid\nimport websocket\nimport time\nimport threading\nfrom parlai.core.params import ParlaiParser\n\n\ndef _get_rand_id():\n \"\"\"\n :return: The string of a random id using uuid4\n \"\"\"\n return str(uuid.uuid4())\n\n\ndef _prBlueBG(text):\n \"\"\"\n Print given in text with a blue background.\n\n :param text: The text to be printed\n \"\"\"\n print(\"\\033[44m{}\\033[0m\".format(text), sep=\"\")\n\n\ndef on_message(ws, message):\n \"\"\"\n Prints the incoming message from the server.\n\n :param ws: a WebSocketApp\n :param message: json with 'text' field to be printed\n \"\"\"\n incoming_message = json.loads(message)\n print(\"\\033[0m\\n\")\n print(\"Bot: \" + incoming_message['text'])\n quick_replies = incoming_message.get('quick_replies')\n if quick_replies is not None and len(quick_replies) > 0:\n print(f\"\\nOptions: [{'|'.join(quick_replies)}]\")\n print(\"\\033[44m\\n\")\n\n\ndef on_error(ws, error):\n \"\"\"\n Prints an error, if occurs.\n\n :param ws: WebSocketApp\n :param error: An error\n \"\"\"\n print(error)\n\n\ndef on_close(ws):\n \"\"\"\n Cleanup before closing connection.\n\n :param ws: WebSocketApp\n \"\"\"\n # Reset color formatting if necessary\n print(\"\\033[0m\")\n print(\"Connection closed\")\n\n\ndef _run(ws, id):\n \"\"\"\n Takes user input and sends it to a websocket.\n\n :param ws: websocket.WebSocketApp\n \"\"\"\n while True:\n x = input(\"\\033[44m Me: \")\n print(\"\\033[0m\", end=\"\")\n data = {}\n data['id'] = id\n data['text'] = x\n json_data = json.dumps(data)\n ws.send(json_data)\n time.sleep(1)\n if x == \"[DONE]\":\n break\n ws.close()\n\n\ndef on_open(ws):\n \"\"\"\n Starts a new thread that loops, taking user input and sending it to the websocket.\n\n :param ws: websocket.WebSocketApp that sends messages to a terminal_manager\n \"\"\"\n id = _get_rand_id()\n threading.Thread(target=_run, args=(ws, id)).start()\n\n\ndef setup_args():\n \"\"\"\n Set up args, specifically for the port number.\n\n :return: A parser that parses the port from commandline arguments.\n \"\"\"\n parser = ParlaiParser(False, False)\n parser_grp = parser.add_argument_group('Terminal Chat')\n parser_grp.add_argument(\n '--port', default=35496, type=int, help='Port to run the terminal chat server'\n )\n return parser.parse_args()\n\n\nif __name__ == \"__main__\":\n opt = setup_args()\n port = opt.get('port', 34596)\n print(\"Connecting to port: \", port)\n ws = websocket.WebSocketApp(\n \"ws://localhost:{}/websocket\".format(port),\n on_message=on_message,\n on_error=on_error,\n on_close=on_close,\n )\n ws.on_open = on_open\n ws.run_forever()\n", "path": "parlai/chat_service/services/terminal_chat/client.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n#\n# py parlai/chat_service/tasks/overworld_demo/run.py --debug --verbose\n\nfrom parlai.core.worlds import World\nfrom parlai.chat_service.services.messenger.worlds import OnboardWorld\nfrom parlai.core.agents import create_agent_from_shared\n\n\n# ---------- Chatbot demo ---------- #\nclass MessengerBotChatOnboardWorld(OnboardWorld):\n \"\"\"\n Example messenger onboarding world for Chatbot Model.\n \"\"\"\n\n @staticmethod\n def generate_world(opt, agents):\n return MessengerBotChatOnboardWorld(opt=opt, agent=agents[0])\n\n def parley(self):\n self.episodeDone = True\n\n\nclass MessengerBotChatTaskWorld(World):\n \"\"\"\n Example one person world that talks to a provided agent (bot).\n \"\"\"\n\n MAX_AGENTS = 1\n MODEL_KEY = 'blender_90M'\n\n def __init__(self, opt, agent, bot):\n self.agent = agent\n self.episodeDone = False\n self.model = bot\n self.first_time = True\n\n @staticmethod\n def generate_world(opt, agents):\n if opt['models'] is None:\n raise RuntimeError(\"Model must be specified\")\n return MessengerBotChatTaskWorld(\n opt,\n agents[0],\n create_agent_from_shared(\n opt['shared_bot_params'][MessengerBotChatTaskWorld.MODEL_KEY]\n ),\n )\n\n @staticmethod\n def assign_roles(agents):\n agents[0].disp_id = 'ChatbotAgent'\n\n def parley(self):\n if self.first_time:\n self.agent.observe(\n {\n 'id': 'World',\n 'text': 'Welcome to the ParlAI Chatbot demo. '\n 'You are now paired with a bot - feel free to send a message.'\n 'Type [DONE] to finish the chat, or [RESET] to reset the dialogue history.',\n }\n )\n self.first_time = False\n a = self.agent.act()\n if a is not None:\n if '[DONE]' in a['text']:\n self.episodeDone = True\n elif '[RESET]' in a['text']:\n self.model.reset()\n self.agent.observe({\"text\": \"[History Cleared]\", \"episode_done\": False})\n else:\n print(\"===act====\")\n print(a)\n print(\"~~~~~~~~~~~\")\n self.model.observe(a)\n response = self.model.act()\n print(\"===response====\")\n print(response)\n print(\"~~~~~~~~~~~\")\n self.agent.observe(response)\n\n def episode_done(self):\n return self.episodeDone\n\n def shutdown(self):\n self.agent.shutdown()\n\n\n# ---------- Overworld -------- #\nclass MessengerOverworld(World):\n \"\"\"\n World to handle moving agents to their proper places.\n \"\"\"\n\n def __init__(self, opt, agent):\n self.agent = agent\n self.opt = opt\n self.first_time = True\n self.episodeDone = False\n\n @staticmethod\n def generate_world(opt, agents):\n return MessengerOverworld(opt, agents[0])\n\n @staticmethod\n def assign_roles(agents):\n for a in agents:\n a.disp_id = 'Agent'\n\n def episode_done(self):\n return self.episodeDone\n\n def parley(self):\n if self.first_time:\n self.agent.observe(\n {\n 'id': 'Overworld',\n 'text': 'Welcome to the overworld for the ParlAI messenger '\n 'chatbot demo. Please type \"begin\" to start.',\n 'quick_replies': ['begin'],\n }\n )\n self.first_time = False\n a = self.agent.act()\n if a is not None and a['text'].lower() == 'begin':\n self.episodeDone = True\n return 'default'\n elif a is not None:\n self.agent.observe(\n {\n 'id': 'Overworld',\n 'text': 'Invalid option. Please type \"begin\".',\n 'quick_replies': ['begin'],\n }\n )\n", "path": "parlai/chat_service/tasks/chatbot/worlds.py"}]} | 3,070 | 662 |
gh_patches_debug_12716 | rasdani/github-patches | git_diff | localstack__localstack-2332 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
s3.upload returns `Location: http://localhost:4566`
# Bug report
# Detailed description
The `AWS.s3.upload()` (official SDK - https://github.com/aws/aws-sdk-js) returns an object with the `Location` key that points to 4566 instead of 4572 (LocalStack S3 port).
## Expected behavior
The `Location` should point to the file on S3.
Example:
```
Location: http://localhost:4572/path/to/bucket.txt
```
## Actual behavior
The `Location` points to the LocalStack entrypoint.
Example:
```
Location: http://localhost:4566/path/to/bucket.txt
```
# Steps to reproduce
- Upload a file to S3 using the official AWS SDK (https://github.com/aws/aws-sdk-js).
- Check out the `Location` property.
## Client code
```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
region: 'us-west-1',
endpoint: 'http://localhost:4566',
apiVersion: '2006-03-01',
s3ForcePathStyle: true,
});
(async () => {
await s3
.createBucket({ Bucket: 'my-bucket', ACL: 'private' })
.promise();
const { Location } = await s3
.upload({ Key: 'file.txt', Body: 'test', Bucket: 'my-bucket' })
.promise();
console.assert(Location === 'http://localhost:4572/my-bucket/file.txt');
})();
```
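
For what it's worth, the reported `Location` is not directly usable either: an unauthenticated GET against it appears to find no forwarding rule in the edge router and gets the router's 404 fallback. A minimal sketch of that check in Python (assuming the default edge port 4566 and the bucket/key from the snippet above — not part of the original report):

```python
import requests

# Hypothetical reproduction: fetch the Location reported by the SDK.
# The request carries no Authorization header and no presigned-URL query
# parameters, so the edge router cannot map it to the S3 backend.
location = "http://localhost:4566/my-bucket/file.txt"
resp = requests.get(location)
print(resp.status_code)  # expected: 404
print(resp.text)         # expected: {"status": "running"}
```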
</issue>
<code>
[start of localstack/services/edge.py]
1 import re
2 import os
3 import sys
4 import json
5 import logging
6 from requests.models import Response
7 from localstack import config
8 from localstack.constants import HEADER_LOCALSTACK_TARGET, HEADER_LOCALSTACK_EDGE_URL, LOCALSTACK_ROOT_FOLDER
9 from localstack.utils.common import run, is_root, TMP_THREADS
10 from localstack.utils.common import safe_requests as requests
11 from localstack.services.generic_proxy import ProxyListener, GenericProxy
12
13 LOG = logging.getLogger(__name__)
14
15 # Header to indicate that the process should kill itself. This is required because if
16 # this process is started as root, then we cannot kill it from a non-root process
17 HEADER_KILL_SIGNAL = 'x-localstack-kill'
18
19
20 class ProxyListenerEdge(ProxyListener):
21
22 def forward_request(self, method, path, data, headers):
23 if method == 'OPTIONS':
24 return 200
25
26 # kill the process if we receive this header
27 headers.get(HEADER_KILL_SIGNAL) and os._exit(0)
28
29 target = headers.get('x-amz-target', '')
30 auth_header = headers.get('authorization', '')
31 host = headers.get('host', '')
32 headers[HEADER_LOCALSTACK_EDGE_URL] = 'https://%s' % host
33
34 # extract API details
35 _, port, path, host = get_api_from_headers(headers, path)
36
37 if not port:
38 # detect S3 presigned URLs
39 if 'AWSAccessKeyId=' in path or 'Signature=' in path:
40 port = config.PORT_S3
41
42 if not port:
43 LOG.info('Unable to find forwarding rule for host "%s", path "%s", target header "%s", auth header "%s"' %
44 (host, path, target, auth_header))
45 response = Response()
46 response.status_code = 404
47 response._content = '{"status": "running"}'
48 return response
49
50 use_ssl = config.USE_SSL
51
52 connect_host = '%s:%s' % (config.HOSTNAME, port)
53 url = 'http%s://%s%s' % ('s' if use_ssl else '', connect_host, path)
54 headers['Host'] = host
55 function = getattr(requests, method.lower())
56 if isinstance(data, dict):
57 data = json.dumps(data)
58
59 response = function(url, data=data, headers=headers, verify=False)
60 return response
61
62
63 def get_api_from_headers(headers, path=None):
64 target = headers.get('x-amz-target', '')
65 host = headers.get('host', '')
66 auth_header = headers.get('authorization', '')
67 ls_target = headers.get(HEADER_LOCALSTACK_TARGET, '')
68 path = path or '/'
69
70 # initialize result
71 result = '_unknown_', 0
72
73 # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
74 try:
75 credential_scope = auth_header.split(',')[0].split()[1]
76 _, _, _, service, _ = credential_scope.split('/')
77 result = service, config.service_port(service)
78 except Exception:
79 pass
80
81 # Fallback rules and route customizations applied below
82
83 if host.endswith('cloudfront.net'):
84 path = path or '/'
85 result = 'cloudfront', config.PORT_CLOUDFRONT
86 elif target.startswith('AWSCognitoIdentityProviderService') or 'cognito-idp.' in host:
87 result = 'cognito-idp', config.PORT_COGNITO_IDP
88 elif target.startswith('AWSCognitoIdentityService') or 'cognito-identity.' in host:
89 result = 'cognito-identity', config.PORT_COGNITO_IDENTITY
90 elif result[0] == 's3' or re.match(r'.*s3(\-website)?\.([^\.]+\.)?amazonaws.com', host):
91 host = re.sub(r's3-website\..*\.amazonaws', 's3.amazonaws', host)
92 result = 's3', config.PORT_S3
93 elif result[0] == 'states' in auth_header or host.startswith('states.'):
94 result = 'stepfunctions', config.PORT_STEPFUNCTIONS
95 elif '.execute-api.' in host:
96 result = 'apigateway', config.PORT_APIGATEWAY
97 elif target.startswith('DynamoDBStreams') or host.startswith('streams.dynamodb.'):
98 result = 'dynamodbstreams', config.PORT_DYNAMODBSTREAMS
99 elif ls_target == 'web' or path == '/graph':
100 result = 'web', config.PORT_WEB_UI
101
102 return result[0], result[1], path, host
103
104
105 def do_start_edge(port, use_ssl, asynchronous=False):
106 try:
107 # start local DNS server, if present
108 from localstack_ext.services import dns_server
109 dns_server.start_servers()
110 except Exception:
111 pass
112
113 # get port and start Edge
114 print('Starting edge router (http%s port %s)...' % ('s' if use_ssl else '', port))
115 # use use=True here because our proxy allows both, HTTP and HTTPS traffic
116 proxy = GenericProxy(port, ssl=True, update_listener=ProxyListenerEdge())
117 proxy.start()
118 if not asynchronous:
119 proxy.join()
120 return proxy
121
122
123 def can_use_sudo():
124 try:
125 run('echo | sudo -S echo', print_error=False)
126 return True
127 except Exception:
128 return False
129
130
131 def ensure_can_use_sudo():
132 if not is_root() and not can_use_sudo():
133 print('Please enter your sudo password (required to configure local network):')
134 run('sudo echo', stdin=True)
135
136
137 def start_edge(port=None, use_ssl=True, asynchronous=False):
138 if not port:
139 port = config.EDGE_PORT
140 if config.EDGE_PORT_HTTP:
141 do_start_edge(config.EDGE_PORT_HTTP, use_ssl=False, asynchronous=True)
142 if port > 1024 or is_root():
143 return do_start_edge(port, use_ssl, asynchronous=asynchronous)
144
145 # process requires priviledged port but we're not root -> try running as sudo
146
147 class Terminator(object):
148
149 def stop(self, quiet=True):
150 try:
151 url = 'http%s://localhost:%s' % ('s' if use_ssl else '', port)
152 requests.verify_ssl = False
153 requests.post(url, headers={HEADER_KILL_SIGNAL: 'kill'})
154 except Exception:
155 pass
156
157 # make sure we can run sudo commands
158 ensure_can_use_sudo()
159
160 # register a signal handler to terminate the sudo process later on
161 TMP_THREADS.append(Terminator())
162
163 # start the process as sudo
164 sudo_cmd = 'sudo '
165 python_cmd = sys.executable
166 cmd = '%sPYTHONPATH=.:%s %s %s %s' % (sudo_cmd, LOCALSTACK_ROOT_FOLDER, python_cmd, __file__, port)
167 process = run(cmd, asynchronous=asynchronous)
168 return process
169
170
171 if __name__ == '__main__':
172 logging.basicConfig()
173 start_edge(int(sys.argv[1]))
174
[end of localstack/services/edge.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/localstack/services/edge.py b/localstack/services/edge.py
--- a/localstack/services/edge.py
+++ b/localstack/services/edge.py
@@ -38,6 +38,10 @@
# detect S3 presigned URLs
if 'AWSAccessKeyId=' in path or 'Signature=' in path:
port = config.PORT_S3
+ # assume that this is an S3 GET request with URL path `/<bucket>/<key ...>`
+ # TODO: move S3 public URLs to a separate port/endpoint, OR check ACLs here first
+ if method == 'GET' and '/' in path.strip('/'):
+ port = config.PORT_S3
if not port:
LOG.info('Unable to find forwarding rule for host "%s", path "%s", target header "%s", auth header "%s"' %
| {"golden_diff": "diff --git a/localstack/services/edge.py b/localstack/services/edge.py\n--- a/localstack/services/edge.py\n+++ b/localstack/services/edge.py\n@@ -38,6 +38,10 @@\n # detect S3 presigned URLs\n if 'AWSAccessKeyId=' in path or 'Signature=' in path:\n port = config.PORT_S3\n+ # assume that this is an S3 GET request with URL path `/<bucket>/<key ...>`\n+ # TODO: move S3 public URLs to a separate port/endpoint, OR check ACLs here first\n+ if method == 'GET' and '/' in path.strip('/'):\n+ port = config.PORT_S3\n \n if not port:\n LOG.info('Unable to find forwarding rule for host \"%s\", path \"%s\", target header \"%s\", auth header \"%s\"' %\n", "issue": "s3.upload returns `Location: http://localhost:4566`\n# Bug report\r\n\r\n# Detailed description\r\n\r\nThe `AWS.s3.upload()` (official SDK - https://github.com/aws/aws-sdk-js) returns an object with the `Location` key that points to 4566 instead of 4572 (LocalStack S3 port).\r\n\r\n## Expected behavior\r\n\r\nThe `Location` should point to the file on S3.\r\n\r\nExample:\r\n\r\n```\r\nLocation: http://localhost:4572/path/to/bucket.txt\r\n```\r\n\r\n## Actual behavior\r\n\r\nThe `Location` points to the LocalStack entrypoint.\r\n\r\nExample:\r\n\r\n```\r\nLocation: http://localhost:4566/path/to/bucket.txt\r\n```\r\n\r\n# Steps to reproduce\r\n\r\n- Upload a file to S3 using the official AWS SDK (https://github.com/aws/aws-sdk-js).\r\n- Check out the `Location` property.\r\n\r\n## Client code\r\n\r\n```javascript\r\nconst AWS = require('aws-sdk');\r\nconst s3 = new AWS.S3({\r\n region: 'us-west-1',\r\n endpoint: 'http://localhost:4566',\r\n apiVersion: '2006-03-01',\r\n s3ForcePathStyle: true,\r\n});\r\n\r\n(async () => {\r\n await s3\r\n .createBucket({ Bucket: 'my-bucket', ACL: 'private' })\r\n .promise();\r\n\r\n const { Location } = await s3\r\n .upload({ Key: 'file.txt', Body: 'test', Bucket: 'my-bucket' })\r\n .promise();\r\n\r\n console.assert(Location === 'http://localhost:4572/my-bucket/file.txt');\r\n})();\r\n```\n", "before_files": [{"content": "import re\nimport os\nimport sys\nimport json\nimport logging\nfrom requests.models import Response\nfrom localstack import config\nfrom localstack.constants import HEADER_LOCALSTACK_TARGET, HEADER_LOCALSTACK_EDGE_URL, LOCALSTACK_ROOT_FOLDER\nfrom localstack.utils.common import run, is_root, TMP_THREADS\nfrom localstack.utils.common import safe_requests as requests\nfrom localstack.services.generic_proxy import ProxyListener, GenericProxy\n\nLOG = logging.getLogger(__name__)\n\n# Header to indicate that the process should kill itself. 
This is required because if\n# this process is started as root, then we cannot kill it from a non-root process\nHEADER_KILL_SIGNAL = 'x-localstack-kill'\n\n\nclass ProxyListenerEdge(ProxyListener):\n\n def forward_request(self, method, path, data, headers):\n if method == 'OPTIONS':\n return 200\n\n # kill the process if we receive this header\n headers.get(HEADER_KILL_SIGNAL) and os._exit(0)\n\n target = headers.get('x-amz-target', '')\n auth_header = headers.get('authorization', '')\n host = headers.get('host', '')\n headers[HEADER_LOCALSTACK_EDGE_URL] = 'https://%s' % host\n\n # extract API details\n _, port, path, host = get_api_from_headers(headers, path)\n\n if not port:\n # detect S3 presigned URLs\n if 'AWSAccessKeyId=' in path or 'Signature=' in path:\n port = config.PORT_S3\n\n if not port:\n LOG.info('Unable to find forwarding rule for host \"%s\", path \"%s\", target header \"%s\", auth header \"%s\"' %\n (host, path, target, auth_header))\n response = Response()\n response.status_code = 404\n response._content = '{\"status\": \"running\"}'\n return response\n\n use_ssl = config.USE_SSL\n\n connect_host = '%s:%s' % (config.HOSTNAME, port)\n url = 'http%s://%s%s' % ('s' if use_ssl else '', connect_host, path)\n headers['Host'] = host\n function = getattr(requests, method.lower())\n if isinstance(data, dict):\n data = json.dumps(data)\n\n response = function(url, data=data, headers=headers, verify=False)\n return response\n\n\ndef get_api_from_headers(headers, path=None):\n target = headers.get('x-amz-target', '')\n host = headers.get('host', '')\n auth_header = headers.get('authorization', '')\n ls_target = headers.get(HEADER_LOCALSTACK_TARGET, '')\n path = path or '/'\n\n # initialize result\n result = '_unknown_', 0\n\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n try:\n credential_scope = auth_header.split(',')[0].split()[1]\n _, _, _, service, _ = credential_scope.split('/')\n result = service, config.service_port(service)\n except Exception:\n pass\n\n # Fallback rules and route customizations applied below\n\n if host.endswith('cloudfront.net'):\n path = path or '/'\n result = 'cloudfront', config.PORT_CLOUDFRONT\n elif target.startswith('AWSCognitoIdentityProviderService') or 'cognito-idp.' in host:\n result = 'cognito-idp', config.PORT_COGNITO_IDP\n elif target.startswith('AWSCognitoIdentityService') or 'cognito-identity.' in host:\n result = 'cognito-identity', config.PORT_COGNITO_IDENTITY\n elif result[0] == 's3' or re.match(r'.*s3(\\-website)?\\.([^\\.]+\\.)?amazonaws.com', host):\n host = re.sub(r's3-website\\..*\\.amazonaws', 's3.amazonaws', host)\n result = 's3', config.PORT_S3\n elif result[0] == 'states' in auth_header or host.startswith('states.'):\n result = 'stepfunctions', config.PORT_STEPFUNCTIONS\n elif '.execute-api.' in host:\n result = 'apigateway', config.PORT_APIGATEWAY\n elif target.startswith('DynamoDBStreams') or host.startswith('streams.dynamodb.'):\n result = 'dynamodbstreams', config.PORT_DYNAMODBSTREAMS\n elif ls_target == 'web' or path == '/graph':\n result = 'web', config.PORT_WEB_UI\n\n return result[0], result[1], path, host\n\n\ndef do_start_edge(port, use_ssl, asynchronous=False):\n try:\n # start local DNS server, if present\n from localstack_ext.services import dns_server\n dns_server.start_servers()\n except Exception:\n pass\n\n # get port and start Edge\n print('Starting edge router (http%s port %s)...' 
% ('s' if use_ssl else '', port))\n # use use=True here because our proxy allows both, HTTP and HTTPS traffic\n proxy = GenericProxy(port, ssl=True, update_listener=ProxyListenerEdge())\n proxy.start()\n if not asynchronous:\n proxy.join()\n return proxy\n\n\ndef can_use_sudo():\n try:\n run('echo | sudo -S echo', print_error=False)\n return True\n except Exception:\n return False\n\n\ndef ensure_can_use_sudo():\n if not is_root() and not can_use_sudo():\n print('Please enter your sudo password (required to configure local network):')\n run('sudo echo', stdin=True)\n\n\ndef start_edge(port=None, use_ssl=True, asynchronous=False):\n if not port:\n port = config.EDGE_PORT\n if config.EDGE_PORT_HTTP:\n do_start_edge(config.EDGE_PORT_HTTP, use_ssl=False, asynchronous=True)\n if port > 1024 or is_root():\n return do_start_edge(port, use_ssl, asynchronous=asynchronous)\n\n # process requires priviledged port but we're not root -> try running as sudo\n\n class Terminator(object):\n\n def stop(self, quiet=True):\n try:\n url = 'http%s://localhost:%s' % ('s' if use_ssl else '', port)\n requests.verify_ssl = False\n requests.post(url, headers={HEADER_KILL_SIGNAL: 'kill'})\n except Exception:\n pass\n\n # make sure we can run sudo commands\n ensure_can_use_sudo()\n\n # register a signal handler to terminate the sudo process later on\n TMP_THREADS.append(Terminator())\n\n # start the process as sudo\n sudo_cmd = 'sudo '\n python_cmd = sys.executable\n cmd = '%sPYTHONPATH=.:%s %s %s %s' % (sudo_cmd, LOCALSTACK_ROOT_FOLDER, python_cmd, __file__, port)\n process = run(cmd, asynchronous=asynchronous)\n return process\n\n\nif __name__ == '__main__':\n logging.basicConfig()\n start_edge(int(sys.argv[1]))\n", "path": "localstack/services/edge.py"}]} | 2,806 | 186 |
gh_patches_debug_35150 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider ljsilvers is broken
During the global build at 2021-06-02-14-42-40, spider **ljsilvers** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/ljsilvers.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson))
Long John Silver's
http://www.ljsilvers.com/
(location search box top right)
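
One way to reproduce locally is to run just this spider and observe that it yields no items. The sketch below is an assumption about the local setup (it presumes a checkout of the alltheplaces repository with its scrapy project settings importable), not part of the original report:

```python
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from locations.spiders.ljsilvers import LjsilversSpider

# Run only the ljsilvers spider; with the current start URL it finishes
# without emitting any GeojsonPointItem, matching the 0-feature build.
process = CrawlerProcess(get_project_settings())
process.crawl(LjsilversSpider)
process.start()
```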
</issue>
<code>
[start of locations/spiders/ljsilvers.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 import re
5
6 from locations.items import GeojsonPointItem
7
8
9 class LjsilversSpider(scrapy.Spider):
10 name = "ljsilvers"
11 item_attributes = { 'brand': "Long John Silver's", 'brand_wikidata': "Q1535221" }
12 allowed_domains = ["ljsilvers.com"]
13 start_urls = (
14 'http://www.ljsilvers.com/locator?postalcode=76010',
15 )
16
17 def parse(self, response):
18 data = response.body_as_unicode()
19 base_data = re.search(r'dataout\s--Array\s\((.*)\)\s\s--><style type="text/css">', data, re.DOTALL).group(1)
20 detail_matches = re.findall(r'\((.*?)\)', base_data, re.DOTALL)
21
22 for detail_match in detail_matches:
23 key_values = re.findall(r'(.*?)\s=>\s(.*)', detail_match)
24 props = {}
25
26 for key_value in key_values:
27 key = key_value[0].strip()
28 value = key_value[1].strip()
29
30 if key == '[storeID]':
31 props['ref'] = value
32 if key == '[address]':
33 props['addr_full'] = value
34 if key == '[city]':
35 props['city'] = value
36 if key == '[state]':
37 props['state'] = value
38 if key == '[zip]':
39 props['postcode'] = value
40 if key == '[phone_number]':
41 props['phone'] = value
42 if key == '[latitude]':
43 props['lat'] = value
44 if key == '[longitude]':
45 props['lon'] = value
46
47 yield GeojsonPointItem(**props)
48
[end of locations/spiders/ljsilvers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/ljsilvers.py b/locations/spiders/ljsilvers.py
--- a/locations/spiders/ljsilvers.py
+++ b/locations/spiders/ljsilvers.py
@@ -1,47 +1,32 @@
# -*- coding: utf-8 -*-
import scrapy
-import json
-import re
from locations.items import GeojsonPointItem
class LjsilversSpider(scrapy.Spider):
name = "ljsilvers"
- item_attributes = { 'brand': "Long John Silver's", 'brand_wikidata': "Q1535221" }
+ item_attributes = {"brand": "Long John Silver's", "brand_wikidata": "Q1535221"}
allowed_domains = ["ljsilvers.com"]
start_urls = (
- 'http://www.ljsilvers.com/locator?postalcode=76010',
+ "https://viewer.blipstar.com/searchdbnew?uid=2483677&lat=45&lng=-103&value=10000",
)
def parse(self, response):
- data = response.body_as_unicode()
- base_data = re.search(r'dataout\s--Array\s\((.*)\)\s\s--><style type="text/css">', data, re.DOTALL).group(1)
- detail_matches = re.findall(r'\((.*?)\)', base_data, re.DOTALL)
-
- for detail_match in detail_matches:
- key_values = re.findall(r'(.*?)\s=>\s(.*)', detail_match)
- props = {}
-
- for key_value in key_values:
- key = key_value[0].strip()
- value = key_value[1].strip()
-
- if key == '[storeID]':
- props['ref'] = value
- if key == '[address]':
- props['addr_full'] = value
- if key == '[city]':
- props['city'] = value
- if key == '[state]':
- props['state'] = value
- if key == '[zip]':
- props['postcode'] = value
- if key == '[phone_number]':
- props['phone'] = value
- if key == '[latitude]':
- props['lat'] = value
- if key == '[longitude]':
- props['lon'] = value
-
- yield GeojsonPointItem(**props)
+ for row in response.json():
+ if row.keys() == {"fulltotal", "total", "units"}:
+ continue
+ addr = scrapy.Selector(text=row["a"])
+ properties = {
+ "name": row["n"],
+ "ref": row["bpid"],
+ "lat": row["lat"],
+ "lon": row["lng"],
+ "addr_full": addr.xpath("//p/text()").extract_first(),
+ "city": addr.css(".storecity ::text").extract_first(),
+ "state": addr.css(".storestate ::text").extract_first(),
+ "postcode": addr.css(".storepostalcode ::text").extract_first(),
+ "country": row["c"],
+ "phone": row.get("p"),
+ }
+ yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/ljsilvers.py b/locations/spiders/ljsilvers.py\n--- a/locations/spiders/ljsilvers.py\n+++ b/locations/spiders/ljsilvers.py\n@@ -1,47 +1,32 @@\n # -*- coding: utf-8 -*-\n import scrapy\n-import json\n-import re\n \n from locations.items import GeojsonPointItem\n \n \n class LjsilversSpider(scrapy.Spider):\n name = \"ljsilvers\"\n- item_attributes = { 'brand': \"Long John Silver's\", 'brand_wikidata': \"Q1535221\" }\n+ item_attributes = {\"brand\": \"Long John Silver's\", \"brand_wikidata\": \"Q1535221\"}\n allowed_domains = [\"ljsilvers.com\"]\n start_urls = (\n- 'http://www.ljsilvers.com/locator?postalcode=76010',\n+ \"https://viewer.blipstar.com/searchdbnew?uid=2483677&lat=45&lng=-103&value=10000\",\n )\n \n def parse(self, response):\n- data = response.body_as_unicode()\n- base_data = re.search(r'dataout\\s--Array\\s\\((.*)\\)\\s\\s--><style type=\"text/css\">', data, re.DOTALL).group(1)\n- detail_matches = re.findall(r'\\((.*?)\\)', base_data, re.DOTALL)\n-\n- for detail_match in detail_matches:\n- key_values = re.findall(r'(.*?)\\s=>\\s(.*)', detail_match)\n- props = {}\n-\n- for key_value in key_values:\n- key = key_value[0].strip()\n- value = key_value[1].strip()\n-\n- if key == '[storeID]':\n- props['ref'] = value\n- if key == '[address]':\n- props['addr_full'] = value\n- if key == '[city]':\n- props['city'] = value\n- if key == '[state]':\n- props['state'] = value\n- if key == '[zip]':\n- props['postcode'] = value\n- if key == '[phone_number]':\n- props['phone'] = value\n- if key == '[latitude]':\n- props['lat'] = value\n- if key == '[longitude]':\n- props['lon'] = value\n-\n- yield GeojsonPointItem(**props)\n+ for row in response.json():\n+ if row.keys() == {\"fulltotal\", \"total\", \"units\"}:\n+ continue\n+ addr = scrapy.Selector(text=row[\"a\"])\n+ properties = {\n+ \"name\": row[\"n\"],\n+ \"ref\": row[\"bpid\"],\n+ \"lat\": row[\"lat\"],\n+ \"lon\": row[\"lng\"],\n+ \"addr_full\": addr.xpath(\"//p/text()\").extract_first(),\n+ \"city\": addr.css(\".storecity ::text\").extract_first(),\n+ \"state\": addr.css(\".storestate ::text\").extract_first(),\n+ \"postcode\": addr.css(\".storepostalcode ::text\").extract_first(),\n+ \"country\": row[\"c\"],\n+ \"phone\": row.get(\"p\"),\n+ }\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider ljsilvers is broken\nDuring the global build at 2021-06-02-14-42-40, spider **ljsilvers** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/ljsilvers.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/ljsilvers.geojson))\nLong John Silver's\nhttp://www.ljsilvers.com/\r\n\r\n(location search box top right)\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass LjsilversSpider(scrapy.Spider):\n name = \"ljsilvers\"\n item_attributes = { 'brand': \"Long John Silver's\", 'brand_wikidata': \"Q1535221\" }\n allowed_domains = [\"ljsilvers.com\"]\n start_urls = (\n 'http://www.ljsilvers.com/locator?postalcode=76010',\n )\n\n def parse(self, response):\n data = response.body_as_unicode()\n base_data = re.search(r'dataout\\s--Array\\s\\((.*)\\)\\s\\s--><style type=\"text/css\">', data, re.DOTALL).group(1)\n detail_matches = re.findall(r'\\((.*?)\\)', base_data, 
re.DOTALL)\n\n for detail_match in detail_matches:\n key_values = re.findall(r'(.*?)\\s=>\\s(.*)', detail_match)\n props = {}\n\n for key_value in key_values:\n key = key_value[0].strip()\n value = key_value[1].strip()\n\n if key == '[storeID]':\n props['ref'] = value\n if key == '[address]':\n props['addr_full'] = value\n if key == '[city]':\n props['city'] = value\n if key == '[state]':\n props['state'] = value\n if key == '[zip]':\n props['postcode'] = value\n if key == '[phone_number]':\n props['phone'] = value\n if key == '[latitude]':\n props['lat'] = value\n if key == '[longitude]':\n props['lon'] = value\n\n yield GeojsonPointItem(**props)\n", "path": "locations/spiders/ljsilvers.py"}]} | 1,237 | 738 |
gh_patches_debug_37386 | rasdani/github-patches | git_diff | translate__pootle-6010 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
update_stores and sync_stores should produce an error if the project doesn't exist
If a non-existent project is passed to `update_stores` or `sync_stores` there is no output. I would expect an error:
```
# pootle update_stores --project=nonexistent-project
# pootle sync_stores --project=nonexistent-project
#
```
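
One possible shape for such a check, sketched only to illustrate the expected behaviour (the function name and its placement are assumptions, not the project's actual fix):

```python
from django.core.management.base import CommandError

from pootle_project.models import Project


def check_projects(project_codes):
    # Illustrative only: fail loudly when any requested project code does
    # not exist, instead of silently iterating over an empty queryset.
    existing = set(
        Project.objects.filter(code__in=project_codes)
        .values_list("code", flat=True)
    )
    unknown = set(project_codes) - existing
    if unknown:
        raise CommandError("Unrecognized projects: %s" % sorted(unknown))
```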
</issue>
<code>
[start of pootle/apps/pootle_app/management/commands/set_filetype.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import os
10
11 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
12
13 from django.core.management.base import CommandError
14
15 from pootle_format.models import Format
16 from pootle_project.models import Project
17
18 from . import PootleCommand
19
20
21 class Command(PootleCommand):
22 help = "Manage Store formats."
23
24 def add_arguments(self, parser):
25 super(Command, self).add_arguments(parser)
26 parser.add_argument(
27 'filetype',
28 action='store',
29 help="File type to set")
30 parser.add_argument(
31 '--from-filetype',
32 action='store',
33 help="Only convert Stores of this file type")
34 parser.add_argument(
35 '--matching',
36 action='store',
37 help="Glob match Store path excluding extension")
38
39 def get_projects(self):
40 if not self.projects:
41 return Project.objects.all()
42 projects = []
43 for project in self.projects:
44 # ensure all projects are valid before proceeding
45 try:
46 projects.append(Project.objects.get(code=project))
47 except Project.DoesNotExist:
48 raise CommandError("Unrecognized project '%s'" % project)
49 return projects
50
51 def get_filetype(self, name):
52 try:
53 return Format.objects.get(name=name)
54 except Format.DoesNotExist:
55 raise CommandError("Unrecognized filetype '%s'" % name)
56
57 def handle_all(self, **options):
58 filetype = self.get_filetype(options["filetype"])
59 from_filetype = (
60 options["from_filetype"]
61 and self.get_filetype(options["from_filetype"])
62 or None)
63 for project in self.get_projects():
64 # add the filetype to project, and convert the stores
65 project.filetype_tool.add_filetype(filetype)
66 project.filetype_tool.set_filetypes(
67 filetype,
68 from_filetype=from_filetype,
69 matching=options["matching"])
70
[end of pootle/apps/pootle_app/management/commands/set_filetype.py]
[start of pootle/apps/pootle_app/management/commands/__init__.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import datetime
10 import logging
11
12 from django.core.management.base import BaseCommand, CommandError
13
14 from pootle.runner import set_sync_mode
15 from pootle_project.models import Project
16
17
18 class SkipChecksMixin(object):
19 def check(self, app_configs=None, tags=None, display_num_errors=False,
20 include_deployment_checks=False):
21 skip_tags = getattr(self, 'skip_system_check_tags', None)
22 if skip_tags is not None:
23 from django.core.checks.registry import registry
24 tags = registry.tags_available() - set(skip_tags)
25
26 super(SkipChecksMixin, self).check(
27 app_configs=app_configs,
28 tags=tags,
29 display_num_errors=display_num_errors,
30 include_deployment_checks=include_deployment_checks)
31
32
33 class PootleCommand(BaseCommand):
34 """Base class for handling recursive pootle store management commands."""
35
36 process_disabled_projects = False
37
38 def add_arguments(self, parser):
39 parser.add_argument(
40 '--project',
41 action='append',
42 dest='projects',
43 help='Project to refresh',
44 )
45 parser.add_argument(
46 '--language',
47 action='append',
48 dest='languages',
49 help='Language to refresh',
50 )
51 parser.add_argument(
52 "--noinput",
53 action="store_true",
54 default=False,
55 help=u"Never prompt for input",
56 )
57 parser.add_argument(
58 "--no-rq",
59 action="store_true",
60 default=False,
61 help=(u"Run all jobs in a single process, without "
62 "using rq workers"),
63 )
64
65 def __init__(self, *args, **kwargs):
66 self.languages = []
67 self.projects = []
68 super(PootleCommand, self).__init__(*args, **kwargs)
69
70 def do_translation_project(self, tp, **options):
71 if hasattr(self, "handle_translation_project"):
72 logging.info(u"Running %s over %s", self.name, tp)
73 if not self.handle_translation_project(tp, **options):
74 return
75 if hasattr(self, "handle_all_stores"):
76 logging.info(u"Running %s over %s's files", self.name, tp)
77 self.handle_all_stores(tp, **options)
78 elif hasattr(self, "handle_store"):
79 store_query = tp.stores.live()
80 for store in store_query.iterator():
81 logging.info(u"Running %s over %s",
82 self.name, store.pootle_path)
83 self.handle_store(store, **options)
84
85 def handle(self, **options):
86 # adjust debug level to the verbosity option
87 debug_levels = {
88 0: logging.ERROR,
89 1: logging.WARNING,
90 2: logging.INFO,
91 3: logging.DEBUG
92 }
93 logging.getLogger().setLevel(
94 debug_levels.get(options['verbosity'], logging.DEBUG)
95 )
96
97 # reduce size of parse pool early on
98 self.name = self.__class__.__module__.split('.')[-1]
99 from pootle_store.fields import TranslationStoreFieldFile
100 TranslationStoreFieldFile._store_cache.maxsize = 2
101 TranslationStoreFieldFile._store_cache.cullsize = 2
102
103 self.projects = options.pop('projects', [])
104 self.languages = options.pop('languages', [])
105
106 # info start
107 start = datetime.datetime.now()
108 logging.info('Start running of %s', self.name)
109
110 try:
111 self.handle_all(**options)
112 except Exception as e:
113 raise CommandError(e)
114
115 # info finish
116 end = datetime.datetime.now()
117 logging.info('All done for %s in %s', self.name, end - start)
118
119 def handle_all(self, **options):
120 if options["no_rq"]:
121 set_sync_mode(options['noinput'])
122
123 if self.process_disabled_projects:
124 project_query = Project.objects.all()
125 else:
126 project_query = Project.objects.enabled()
127
128 if self.projects:
129 project_query = project_query.filter(code__in=self.projects)
130
131 for project in project_query.iterator():
132 tp_query = project.translationproject_set.live() \
133 .order_by('language__code')
134
135 if self.languages:
136 tp_query = tp_query.filter(language__code__in=self.languages)
137
138 for tp in tp_query.iterator():
139 self.do_translation_project(tp, **options)
140
[end of pootle/apps/pootle_app/management/commands/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/pootle_app/management/commands/__init__.py b/pootle/apps/pootle_app/management/commands/__init__.py
--- a/pootle/apps/pootle_app/management/commands/__init__.py
+++ b/pootle/apps/pootle_app/management/commands/__init__.py
@@ -12,6 +12,7 @@
from django.core.management.base import BaseCommand, CommandError
from pootle.runner import set_sync_mode
+from pootle_language.models import Language
from pootle_project.models import Project
@@ -82,6 +83,26 @@
self.name, store.pootle_path)
self.handle_store(store, **options)
+ def check_projects(self, project_codes):
+ existing_projects = Project.objects.filter(
+ code__in=project_codes
+ ).values_list("code", flat=True)
+ if len(existing_projects) != len(project_codes):
+ unrecognized_projects = list(set(project_codes) -
+ set(existing_projects))
+ raise CommandError("Unrecognized projects: %s" %
+ unrecognized_projects)
+
+ def check_languages(self, language_codes):
+ existing_languages = Language.objects.filter(
+ code__in=language_codes
+ ).values_list("code", flat=True)
+ if len(existing_languages) != len(language_codes):
+ unrecognized_languages = list(set(language_codes) -
+ set(existing_languages))
+ raise CommandError("Unrecognized languages: %s" %
+ unrecognized_languages)
+
def handle(self, **options):
# adjust debug level to the verbosity option
debug_levels = {
@@ -102,6 +123,10 @@
self.projects = options.pop('projects', [])
self.languages = options.pop('languages', [])
+ if self.projects:
+ self.check_projects(self.projects)
+ if self.languages:
+ self.check_languages(self.languages)
# info start
start = datetime.datetime.now()
diff --git a/pootle/apps/pootle_app/management/commands/set_filetype.py b/pootle/apps/pootle_app/management/commands/set_filetype.py
--- a/pootle/apps/pootle_app/management/commands/set_filetype.py
+++ b/pootle/apps/pootle_app/management/commands/set_filetype.py
@@ -39,14 +39,8 @@
def get_projects(self):
if not self.projects:
return Project.objects.all()
- projects = []
- for project in self.projects:
- # ensure all projects are valid before proceeding
- try:
- projects.append(Project.objects.get(code=project))
- except Project.DoesNotExist:
- raise CommandError("Unrecognized project '%s'" % project)
- return projects
+
+ return Project.objects.filter(code__in=self.projects)
def get_filetype(self, name):
try:
| {"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/__init__.py b/pootle/apps/pootle_app/management/commands/__init__.py\n--- a/pootle/apps/pootle_app/management/commands/__init__.py\n+++ b/pootle/apps/pootle_app/management/commands/__init__.py\n@@ -12,6 +12,7 @@\n from django.core.management.base import BaseCommand, CommandError\n \n from pootle.runner import set_sync_mode\n+from pootle_language.models import Language\n from pootle_project.models import Project\n \n \n@@ -82,6 +83,26 @@\n self.name, store.pootle_path)\n self.handle_store(store, **options)\n \n+ def check_projects(self, project_codes):\n+ existing_projects = Project.objects.filter(\n+ code__in=project_codes\n+ ).values_list(\"code\", flat=True)\n+ if len(existing_projects) != len(project_codes):\n+ unrecognized_projects = list(set(project_codes) -\n+ set(existing_projects))\n+ raise CommandError(\"Unrecognized projects: %s\" %\n+ unrecognized_projects)\n+\n+ def check_languages(self, language_codes):\n+ existing_languages = Language.objects.filter(\n+ code__in=language_codes\n+ ).values_list(\"code\", flat=True)\n+ if len(existing_languages) != len(language_codes):\n+ unrecognized_languages = list(set(language_codes) -\n+ set(existing_languages))\n+ raise CommandError(\"Unrecognized languages: %s\" %\n+ unrecognized_languages)\n+\n def handle(self, **options):\n # adjust debug level to the verbosity option\n debug_levels = {\n@@ -102,6 +123,10 @@\n \n self.projects = options.pop('projects', [])\n self.languages = options.pop('languages', [])\n+ if self.projects:\n+ self.check_projects(self.projects)\n+ if self.languages:\n+ self.check_languages(self.languages)\n \n # info start\n start = datetime.datetime.now()\ndiff --git a/pootle/apps/pootle_app/management/commands/set_filetype.py b/pootle/apps/pootle_app/management/commands/set_filetype.py\n--- a/pootle/apps/pootle_app/management/commands/set_filetype.py\n+++ b/pootle/apps/pootle_app/management/commands/set_filetype.py\n@@ -39,14 +39,8 @@\n def get_projects(self):\n if not self.projects:\n return Project.objects.all()\n- projects = []\n- for project in self.projects:\n- # ensure all projects are valid before proceeding\n- try:\n- projects.append(Project.objects.get(code=project))\n- except Project.DoesNotExist:\n- raise CommandError(\"Unrecognized project '%s'\" % project)\n- return projects\n+\n+ return Project.objects.filter(code__in=self.projects)\n \n def get_filetype(self, name):\n try:\n", "issue": "update_stores and sync_stores should produce an error if the project doesn't exist\nIf a non-existent project is passed to `update_stores` or `sync_stores` there is no output. I would expect an error:\r\n\r\n```\r\n# pootle update_stores --project=nonexistent-project\r\n# pootle sync_stores --project=nonexistent-project\r\n#\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\n\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom django.core.management.base import CommandError\n\nfrom pootle_format.models import Format\nfrom pootle_project.models import Project\n\nfrom . 
import PootleCommand\n\n\nclass Command(PootleCommand):\n help = \"Manage Store formats.\"\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n 'filetype',\n action='store',\n help=\"File type to set\")\n parser.add_argument(\n '--from-filetype',\n action='store',\n help=\"Only convert Stores of this file type\")\n parser.add_argument(\n '--matching',\n action='store',\n help=\"Glob match Store path excluding extension\")\n\n def get_projects(self):\n if not self.projects:\n return Project.objects.all()\n projects = []\n for project in self.projects:\n # ensure all projects are valid before proceeding\n try:\n projects.append(Project.objects.get(code=project))\n except Project.DoesNotExist:\n raise CommandError(\"Unrecognized project '%s'\" % project)\n return projects\n\n def get_filetype(self, name):\n try:\n return Format.objects.get(name=name)\n except Format.DoesNotExist:\n raise CommandError(\"Unrecognized filetype '%s'\" % name)\n\n def handle_all(self, **options):\n filetype = self.get_filetype(options[\"filetype\"])\n from_filetype = (\n options[\"from_filetype\"]\n and self.get_filetype(options[\"from_filetype\"])\n or None)\n for project in self.get_projects():\n # add the filetype to project, and convert the stores\n project.filetype_tool.add_filetype(filetype)\n project.filetype_tool.set_filetypes(\n filetype,\n from_filetype=from_filetype,\n matching=options[\"matching\"])\n", "path": "pootle/apps/pootle_app/management/commands/set_filetype.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport datetime\nimport logging\n\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle.runner import set_sync_mode\nfrom pootle_project.models import Project\n\n\nclass SkipChecksMixin(object):\n def check(self, app_configs=None, tags=None, display_num_errors=False,\n include_deployment_checks=False):\n skip_tags = getattr(self, 'skip_system_check_tags', None)\n if skip_tags is not None:\n from django.core.checks.registry import registry\n tags = registry.tags_available() - set(skip_tags)\n\n super(SkipChecksMixin, self).check(\n app_configs=app_configs,\n tags=tags,\n display_num_errors=display_num_errors,\n include_deployment_checks=include_deployment_checks)\n\n\nclass PootleCommand(BaseCommand):\n \"\"\"Base class for handling recursive pootle store management commands.\"\"\"\n\n process_disabled_projects = False\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--project',\n action='append',\n dest='projects',\n help='Project to refresh',\n )\n parser.add_argument(\n '--language',\n action='append',\n dest='languages',\n help='Language to refresh',\n )\n parser.add_argument(\n \"--noinput\",\n action=\"store_true\",\n default=False,\n help=u\"Never prompt for input\",\n )\n parser.add_argument(\n \"--no-rq\",\n action=\"store_true\",\n default=False,\n help=(u\"Run all jobs in a single process, without \"\n \"using rq workers\"),\n )\n\n def __init__(self, *args, **kwargs):\n self.languages = []\n self.projects = []\n super(PootleCommand, self).__init__(*args, **kwargs)\n\n def do_translation_project(self, tp, **options):\n if hasattr(self, \"handle_translation_project\"):\n logging.info(u\"Running %s over %s\", self.name, tp)\n if not 
self.handle_translation_project(tp, **options):\n return\n if hasattr(self, \"handle_all_stores\"):\n logging.info(u\"Running %s over %s's files\", self.name, tp)\n self.handle_all_stores(tp, **options)\n elif hasattr(self, \"handle_store\"):\n store_query = tp.stores.live()\n for store in store_query.iterator():\n logging.info(u\"Running %s over %s\",\n self.name, store.pootle_path)\n self.handle_store(store, **options)\n\n def handle(self, **options):\n # adjust debug level to the verbosity option\n debug_levels = {\n 0: logging.ERROR,\n 1: logging.WARNING,\n 2: logging.INFO,\n 3: logging.DEBUG\n }\n logging.getLogger().setLevel(\n debug_levels.get(options['verbosity'], logging.DEBUG)\n )\n\n # reduce size of parse pool early on\n self.name = self.__class__.__module__.split('.')[-1]\n from pootle_store.fields import TranslationStoreFieldFile\n TranslationStoreFieldFile._store_cache.maxsize = 2\n TranslationStoreFieldFile._store_cache.cullsize = 2\n\n self.projects = options.pop('projects', [])\n self.languages = options.pop('languages', [])\n\n # info start\n start = datetime.datetime.now()\n logging.info('Start running of %s', self.name)\n\n try:\n self.handle_all(**options)\n except Exception as e:\n raise CommandError(e)\n\n # info finish\n end = datetime.datetime.now()\n logging.info('All done for %s in %s', self.name, end - start)\n\n def handle_all(self, **options):\n if options[\"no_rq\"]:\n set_sync_mode(options['noinput'])\n\n if self.process_disabled_projects:\n project_query = Project.objects.all()\n else:\n project_query = Project.objects.enabled()\n\n if self.projects:\n project_query = project_query.filter(code__in=self.projects)\n\n for project in project_query.iterator():\n tp_query = project.translationproject_set.live() \\\n .order_by('language__code')\n\n if self.languages:\n tp_query = tp_query.filter(language__code__in=self.languages)\n\n for tp in tp_query.iterator():\n self.do_translation_project(tp, **options)\n", "path": "pootle/apps/pootle_app/management/commands/__init__.py"}]} | 2,565 | 637 |
gh_patches_debug_1832 | rasdani/github-patches | git_diff | conan-io__conan-center-index-18494 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] clickhouse-cpp/*: fPIC option is not respected
In the recipe file the fPIC option is always removed during the configure stage, which can lead to a non-working static library.
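
The conventional Conan pattern is to drop fPIC only for shared builds. A minimal sketch of a `configure()` that respects the option (illustrative only, not necessarily the recipe's final fix):

```python
from conan import ConanFile


class ClickHouseCppConan(ConanFile):
    # ... other recipe members unchanged ...

    def configure(self):
        # Only drop fPIC when building a shared library; keep it
        # meaningful for static builds.
        if self.options.shared:
            self.options.rm_safe("fPIC")
```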
</issue>
<code>
[start of recipes/clickhouse-cpp/all/conanfile.py]
1 from conan import ConanFile
2 from conan.tools.cmake import CMake, CMakeToolchain,CMakeDeps, cmake_layout
3 from conan.tools.files import copy, get
4 from conan.tools.build import check_min_cppstd
5 from conan.errors import ConanInvalidConfiguration
6 from conan.tools.scm import Version
7 import os
8
9 required_conan_version = ">=1.53.0"
10
11 class ClickHouseCppConan(ConanFile):
12 name = "clickhouse-cpp"
13 homepage = "https://github.com/ClickHouse/clickhouse-cpp"
14 url = "https://github.com/conan-io/conan-center-index"
15 description = "ClickHouse C++ API"
16 license = "Apache-2.0"
17 topics = ("database", "db", "clickhouse")
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False],
22 "enable_benchmark": [True, False],
23 "with_openssl": [True, False]
24 }
25 default_options = {
26 "shared": False,
27 "fPIC": True,
28 "enable_benchmark": False,
29 "with_openssl": False
30 }
31
32 def requirements(self):
33
34 self.requires("lz4/1.9.4")
35
36 self.requires("abseil/20230125.3", transitive_headers=True)
37
38 self.requires("cityhash/cci.20130801")
39 if self.options.with_openssl:
40 self.requires("openssl/[>=1.1 <4]")
41
42 def build_requirements(self):
43 if self.options.enable_benchmark:
44 self.requires("benchmark/1.8.0")
45
46 @property
47 def _min_cppstd(self):
48 return "17"
49
50 @property
51 def _compilers_minimum_version(self):
52 return {
53 "Visual Studio": "15",
54 "msvc": "191",
55 "gcc": "7",
56 "clang": "6",
57 }
58
59 @property
60 def _requires_compiler_rt(self):
61 return self.settings.compiler == "clang" and (( self.settings.compiler.libcxx in ["libstdc++", "libstdc++11"] and not self.options.shared) or self.settings.compiler.libcxx == "libc++" )
62
63 def validate(self):
64 if self.settings.compiler.get_safe("cppstd"):
65 check_min_cppstd(self, self._min_cppstd)
66 minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)
67 if minimum_version and Version(self.settings.compiler.version) < minimum_version:
68 raise ConanInvalidConfiguration(f"{self.ref} requires C++17, which your compiler does not support.")
69 if self.settings.os == "Windows" and self.options.shared:
70 raise ConanInvalidConfiguration("f{self.ref} does not support shared library on Windows.")
71 # look at https://github.com/ClickHouse/clickhouse-cpp/pull/226
72
73 def config_options(self):
74 if self.settings.os == "Windows":
75 del self.options.fPIC
76
77 def configure(self):
78 self.options.rm_safe("fPIC")
79
80 def layout(self):
81 cmake_layout(self, src_folder="src")
82
83 def source(self):
84 get(self, **self.conan_data["sources"][self.version],
85 destination=self.source_folder, strip_root=True)
86
87 def generate(self):
88 tc = CMakeToolchain(self)
89 tc.variables["BUILD_BENCHMARK"] = self.options.enable_benchmark
90 tc.cache_variables["BUILD_SHARED_LIBS"] = self.options.shared
91 tc.variables["WITH_OPENSSL"] = self.options.with_openssl
92 tc.cache_variables["WITH_SYSTEM_ABSEIL"] = True
93 tc.cache_variables["WITH_SYSTEM_LZ4"] = True
94 tc.cache_variables["WITH_SYSTEM_CITYHASH"] = True
95 tc.generate()
96
97 cd = CMakeDeps(self)
98 cd.generate()
99
100 def build(self):
101 cmake = CMake(self)
102 cmake.configure()
103 cmake.build()
104
105 def package(self):
106 copy(self, "LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
107 cmake = CMake(self)
108 cmake.install()
109
110 def package_info(self):
111 self.cpp_info.libs.append("clickhouse-cpp-lib")
112 self.cpp_info.set_property("cmake_target_name", "clickhouse-cpp-lib::clickhouse-cpp-lib")
113
114 if self._requires_compiler_rt:
115 ldflags = ["--rtlib=compiler-rt"]
116 self.cpp_info.exelinkflags = ldflags
117 self.cpp_info.sharedlinkflags = ldflags
118 self.cpp_info.system_libs.append("gcc_s")
119
120 self.cpp_info.filenames["cmake_find_package"] = "clickhouse-cpp"
121 self.cpp_info.filenames["cmake_find_package_multi"] = "clickhouse-cpp"
122 self.cpp_info.names["cmake_find_package"] = "clickhouse-cpp-lib"
123 self.cpp_info.names["cmake_find_package_multi"] = "clickhouse-cpp-lib"
124
125 if self.settings.os == 'Windows':
126 self.cpp_info.system_libs = ['ws2_32', 'wsock32']
127
[end of recipes/clickhouse-cpp/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/clickhouse-cpp/all/conanfile.py b/recipes/clickhouse-cpp/all/conanfile.py
--- a/recipes/clickhouse-cpp/all/conanfile.py
+++ b/recipes/clickhouse-cpp/all/conanfile.py
@@ -75,7 +75,8 @@
del self.options.fPIC
def configure(self):
- self.options.rm_safe("fPIC")
+ if self.options.shared:
+ self.options.rm_safe("fPIC")
def layout(self):
cmake_layout(self, src_folder="src")
| {"golden_diff": "diff --git a/recipes/clickhouse-cpp/all/conanfile.py b/recipes/clickhouse-cpp/all/conanfile.py\n--- a/recipes/clickhouse-cpp/all/conanfile.py\n+++ b/recipes/clickhouse-cpp/all/conanfile.py\n@@ -75,7 +75,8 @@\n del self.options.fPIC\n \n def configure(self):\n- self.options.rm_safe(\"fPIC\")\n+ if self.options.shared:\n+ self.options.rm_safe(\"fPIC\")\n \n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n", "issue": "[package] clickhouse-cpp/*: fPIC option is not respected\nIn the recipe file fPIC option is always removed during configure stage, which can lead to not working static library.\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.tools.cmake import CMake, CMakeToolchain,CMakeDeps, cmake_layout\nfrom conan.tools.files import copy, get\nfrom conan.tools.build import check_min_cppstd\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.scm import Version\nimport os\n\nrequired_conan_version = \">=1.53.0\"\n\nclass ClickHouseCppConan(ConanFile):\n name = \"clickhouse-cpp\"\n homepage = \"https://github.com/ClickHouse/clickhouse-cpp\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"ClickHouse C++ API\"\n license = \"Apache-2.0\"\n topics = (\"database\", \"db\", \"clickhouse\")\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"enable_benchmark\": [True, False],\n \"with_openssl\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"enable_benchmark\": False,\n \"with_openssl\": False\n }\n\n def requirements(self):\n\n self.requires(\"lz4/1.9.4\")\n\n self.requires(\"abseil/20230125.3\", transitive_headers=True)\n\n self.requires(\"cityhash/cci.20130801\")\n if self.options.with_openssl:\n self.requires(\"openssl/[>=1.1 <4]\")\n\n def build_requirements(self):\n if self.options.enable_benchmark:\n self.requires(\"benchmark/1.8.0\")\n\n @property\n def _min_cppstd(self):\n return \"17\"\n\n @property\n def _compilers_minimum_version(self):\n return {\n \"Visual Studio\": \"15\",\n \"msvc\": \"191\",\n \"gcc\": \"7\",\n \"clang\": \"6\",\n }\n\n @property\n def _requires_compiler_rt(self):\n return self.settings.compiler == \"clang\" and (( self.settings.compiler.libcxx in [\"libstdc++\", \"libstdc++11\"] and not self.options.shared) or self.settings.compiler.libcxx == \"libc++\" )\n\n def validate(self):\n if self.settings.compiler.get_safe(\"cppstd\"):\n check_min_cppstd(self, self._min_cppstd)\n minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)\n if minimum_version and Version(self.settings.compiler.version) < minimum_version:\n raise ConanInvalidConfiguration(f\"{self.ref} requires C++17, which your compiler does not support.\")\n if self.settings.os == \"Windows\" and self.options.shared:\n raise ConanInvalidConfiguration(\"f{self.ref} does not support shared library on Windows.\")\n # look at https://github.com/ClickHouse/clickhouse-cpp/pull/226\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n self.options.rm_safe(\"fPIC\")\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version],\n destination=self.source_folder, strip_root=True)\n\n def generate(self):\n tc = CMakeToolchain(self)\n tc.variables[\"BUILD_BENCHMARK\"] = self.options.enable_benchmark\n 
tc.cache_variables[\"BUILD_SHARED_LIBS\"] = self.options.shared\n tc.variables[\"WITH_OPENSSL\"] = self.options.with_openssl\n tc.cache_variables[\"WITH_SYSTEM_ABSEIL\"] = True\n tc.cache_variables[\"WITH_SYSTEM_LZ4\"] = True\n tc.cache_variables[\"WITH_SYSTEM_CITYHASH\"] = True\n tc.generate()\n\n cd = CMakeDeps(self)\n cd.generate()\n\n def build(self):\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"LICENSE\", src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs.append(\"clickhouse-cpp-lib\")\n self.cpp_info.set_property(\"cmake_target_name\", \"clickhouse-cpp-lib::clickhouse-cpp-lib\")\n\n if self._requires_compiler_rt:\n ldflags = [\"--rtlib=compiler-rt\"]\n self.cpp_info.exelinkflags = ldflags\n self.cpp_info.sharedlinkflags = ldflags\n self.cpp_info.system_libs.append(\"gcc_s\")\n\n self.cpp_info.filenames[\"cmake_find_package\"] = \"clickhouse-cpp\"\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"clickhouse-cpp\"\n self.cpp_info.names[\"cmake_find_package\"] = \"clickhouse-cpp-lib\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"clickhouse-cpp-lib\"\n\n if self.settings.os == 'Windows':\n self.cpp_info.system_libs = ['ws2_32', 'wsock32']\n", "path": "recipes/clickhouse-cpp/all/conanfile.py"}]} | 2,001 | 127 |
gh_patches_debug_16662 | rasdani/github-patches | git_diff | Pylons__pyramid-2918 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restore the Registry(*args, **kw) API
In the course of #2891, the Registry API was changed in a backwards-incompatible way. While the author of the change (PR 2893) couldn't imagine anyone using the keywords API, I immediately ran into the incompatibility with a number of customer projects. Since the change back to the old API shouldn't introduce another backwards incompatibility, I suggest reverting that change as soon as possible.
</issue>
<code>
[start of pyramid/registry.py]
1 import operator
2 import threading
3
4 from zope.interface import implementer
5
6 from zope.interface.registry import Components
7
8 from pyramid.compat import text_
9 from pyramid.decorator import reify
10
11 from pyramid.interfaces import (
12 IIntrospector,
13 IIntrospectable,
14 ISettings,
15 )
16
17 from pyramid.path import (
18 CALLER_PACKAGE,
19 caller_package,
20 )
21
22 empty = text_('')
23
24 class Registry(Components, dict):
25 """ A registry object is an :term:`application registry`.
26
27 It is used by the framework itself to perform mappings of URLs to view
28 callables, as well as servicing other various framework duties. A registry
29 has its own internal API, but this API is rarely used by Pyramid
30 application developers (it's usually only used by developers of the
31 Pyramid framework and Pyramid addons). But it has a number of attributes
32 that may be useful to application developers within application code,
33 such as ``settings``, which is a dictionary containing application
34 deployment settings.
35
36 For information about the purpose and usage of the application registry,
37 see :ref:`zca_chapter`.
38
39 The registry may be used both as an :class:`pyramid.interfaces.IDict` and
40 as a Zope component registry.
41 These two ways of storing configuration are independent.
42 Applications will tend to prefer to store information as key-values
43 whereas addons may prefer to use the component registry to avoid naming
44 conflicts and to provide more complex lookup mechanisms.
45
46 The application registry is usually accessed as ``request.registry`` in
47 application code. By the time a registry is used to handle requests it
48 should be considered frozen and read-only. Any changes to its internal
49 state should be done with caution and concern for thread-safety.
50
51 """
52
53 # for optimization purposes, if no listeners are listening, don't try
54 # to notify them
55 has_listeners = False
56
57 _settings = None
58
59 def __init__(self, package_name=CALLER_PACKAGE):
60 # add a registry-instance-specific lock, which is used when the lookup
61 # cache is mutated
62 self._lock = threading.Lock()
63 # add a view lookup cache
64 self._clear_view_lookup_cache()
65 if package_name is CALLER_PACKAGE:
66 package_name = caller_package().__name__
67 Components.__init__(self, package_name)
68 dict.__init__(self)
69
70 def _clear_view_lookup_cache(self):
71 self._view_lookup_cache = {}
72
73 def __nonzero__(self):
74 # defeat bool determination via dict.__len__
75 return True
76
77 @reify
78 def package_name(self):
79 return self.__name__
80
81 def registerSubscriptionAdapter(self, *arg, **kw):
82 result = Components.registerSubscriptionAdapter(self, *arg, **kw)
83 self.has_listeners = True
84 return result
85
86 def registerSelfAdapter(self, required=None, provided=None, name=empty,
87 info=empty, event=True):
88 # registerAdapter analogue which always returns the object itself
89 # when required is matched
90 return self.registerAdapter(lambda x: x, required=required,
91 provided=provided, name=name,
92 info=info, event=event)
93
94 def queryAdapterOrSelf(self, object, interface, default=None):
95 # queryAdapter analogue which returns the object if it implements
96 # the interface, otherwise it will return an adaptation to the
97 # interface
98 if not interface.providedBy(object):
99 return self.queryAdapter(object, interface, default=default)
100 return object
101
102 def registerHandler(self, *arg, **kw):
103 result = Components.registerHandler(self, *arg, **kw)
104 self.has_listeners = True
105 return result
106
107 def notify(self, *events):
108 if self.has_listeners:
109 # iterating over subscribers assures they get executed
110 [ _ for _ in self.subscribers(events, None) ]
111
112 # backwards compatibility for code that wants to look up a settings
113 # object via ``registry.getUtility(ISettings)``
114 def _get_settings(self):
115 return self._settings
116
117 def _set_settings(self, settings):
118 self.registerUtility(settings, ISettings)
119 self._settings = settings
120
121 settings = property(_get_settings, _set_settings)
122
123 @implementer(IIntrospector)
124 class Introspector(object):
125 def __init__(self):
126 self._refs = {}
127 self._categories = {}
128 self._counter = 0
129
130 def add(self, intr):
131 category = self._categories.setdefault(intr.category_name, {})
132 category[intr.discriminator] = intr
133 category[intr.discriminator_hash] = intr
134 intr.order = self._counter
135 self._counter += 1
136
137 def get(self, category_name, discriminator, default=None):
138 category = self._categories.setdefault(category_name, {})
139 intr = category.get(discriminator, default)
140 return intr
141
142 def get_category(self, category_name, default=None, sort_key=None):
143 if sort_key is None:
144 sort_key = operator.attrgetter('order')
145 category = self._categories.get(category_name)
146 if category is None:
147 return default
148 values = category.values()
149 values = sorted(set(values), key=sort_key)
150 return [
151 {'introspectable': intr,
152 'related': self.related(intr)}
153 for intr in values
154 ]
155
156 def categorized(self, sort_key=None):
157 L = []
158 for category_name in self.categories():
159 L.append((category_name, self.get_category(category_name,
160 sort_key=sort_key)))
161 return L
162
163 def categories(self):
164 return sorted(self._categories.keys())
165
166 def remove(self, category_name, discriminator):
167 intr = self.get(category_name, discriminator)
168 if intr is None:
169 return
170 L = self._refs.pop(intr, [])
171 for d in L:
172 L2 = self._refs[d]
173 L2.remove(intr)
174 category = self._categories[intr.category_name]
175 del category[intr.discriminator]
176 del category[intr.discriminator_hash]
177
178 def _get_intrs_by_pairs(self, pairs):
179 introspectables = []
180 for pair in pairs:
181 category_name, discriminator = pair
182 intr = self._categories.get(category_name, {}).get(discriminator)
183 if intr is None:
184 raise KeyError((category_name, discriminator))
185 introspectables.append(intr)
186 return introspectables
187
188 def relate(self, *pairs):
189 introspectables = self._get_intrs_by_pairs(pairs)
190 relatable = ((x,y) for x in introspectables for y in introspectables)
191 for x, y in relatable:
192 L = self._refs.setdefault(x, [])
193 if x is not y and y not in L:
194 L.append(y)
195
196 def unrelate(self, *pairs):
197 introspectables = self._get_intrs_by_pairs(pairs)
198 relatable = ((x,y) for x in introspectables for y in introspectables)
199 for x, y in relatable:
200 L = self._refs.get(x, [])
201 if y in L:
202 L.remove(y)
203
204 def related(self, intr):
205 category_name, discriminator = intr.category_name, intr.discriminator
206 intr = self._categories.get(category_name, {}).get(discriminator)
207 if intr is None:
208 raise KeyError((category_name, discriminator))
209 return self._refs.get(intr, [])
210
211 @implementer(IIntrospectable)
212 class Introspectable(dict):
213
214 order = 0 # mutated by introspector.add
215 action_info = None # mutated by self.register
216
217 def __init__(self, category_name, discriminator, title, type_name):
218 self.category_name = category_name
219 self.discriminator = discriminator
220 self.title = title
221 self.type_name = type_name
222 self._relations = []
223
224 def relate(self, category_name, discriminator):
225 self._relations.append((True, category_name, discriminator))
226
227 def unrelate(self, category_name, discriminator):
228 self._relations.append((False, category_name, discriminator))
229
230 def _assert_resolved(self):
231 assert undefer(self.discriminator) is self.discriminator
232
233 @property
234 def discriminator_hash(self):
235 self._assert_resolved()
236 return hash(self.discriminator)
237
238 def __hash__(self):
239 self._assert_resolved()
240 return hash((self.category_name,) + (self.discriminator,))
241
242 def __repr__(self):
243 self._assert_resolved()
244 return '<%s category %r, discriminator %r>' % (self.__class__.__name__,
245 self.category_name,
246 self.discriminator)
247
248 def __nonzero__(self):
249 return True
250
251 __bool__ = __nonzero__ # py3
252
253 def register(self, introspector, action_info):
254 self.discriminator = undefer(self.discriminator)
255 self.action_info = action_info
256 introspector.add(self)
257 for relate, category_name, discriminator in self._relations:
258 discriminator = undefer(discriminator)
259 if relate:
260 method = introspector.relate
261 else:
262 method = introspector.unrelate
263 method(
264 (self.category_name, self.discriminator),
265 (category_name, discriminator)
266 )
267
268 class Deferred(object):
269 """ Can be used by a third-party configuration extender to wrap a
270 :term:`discriminator` during configuration if an immediately hashable
271 discriminator cannot be computed because it relies on unresolved values.
272 The function should accept no arguments and should return a hashable
273 discriminator."""
274 def __init__(self, func):
275 self.func = func
276
277 @reify
278 def value(self):
279 return self.func()
280
281 def resolve(self):
282 return self.value
283
284 def undefer(v):
285 """ Function which accepts an object and returns it unless it is a
286 :class:`pyramid.registry.Deferred` instance. If it is an instance of
287 that class, its ``resolve`` method is called, and the result of the
288 method is returned."""
289 if isinstance(v, Deferred):
290 v = v.resolve()
291 return v
292
293 class predvalseq(tuple):
294 """ A subtype of tuple used to represent a sequence of predicate values """
295 pass
296
297 global_registry = Registry('global')
298
[end of pyramid/registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/registry.py b/pyramid/registry.py
--- a/pyramid/registry.py
+++ b/pyramid/registry.py
@@ -56,7 +56,7 @@
_settings = None
- def __init__(self, package_name=CALLER_PACKAGE):
+ def __init__(self, package_name=CALLER_PACKAGE, *args, **kw):
# add a registry-instance-specific lock, which is used when the lookup
# cache is mutated
self._lock = threading.Lock()
@@ -64,7 +64,7 @@
self._clear_view_lookup_cache()
if package_name is CALLER_PACKAGE:
package_name = caller_package().__name__
- Components.__init__(self, package_name)
+ Components.__init__(self, package_name, *args, **kw)
dict.__init__(self)
def _clear_view_lookup_cache(self):
| {"golden_diff": "diff --git a/pyramid/registry.py b/pyramid/registry.py\n--- a/pyramid/registry.py\n+++ b/pyramid/registry.py\n@@ -56,7 +56,7 @@\n \n _settings = None\n \n- def __init__(self, package_name=CALLER_PACKAGE):\n+ def __init__(self, package_name=CALLER_PACKAGE, *args, **kw):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n@@ -64,7 +64,7 @@\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n- Components.__init__(self, package_name)\n+ Components.__init__(self, package_name, *args, **kw)\n dict.__init__(self)\n \n def _clear_view_lookup_cache(self):\n", "issue": "Restore the Registry(*args, **kw) API\nIn the course of #2891, the Registry API was changed in a backwards-incompatible way. While the author of the change (PR 2893) couldn't imagine anyone using the keywords API, I immediately ran into the incompatibility with a number of customer projects. Since the change back to the old API shouldn't introduce another backwards incompatibility, I suggest reverting that change as soon as possible.\n", "before_files": [{"content": "import operator\nimport threading\n\nfrom zope.interface import implementer\n\nfrom zope.interface.registry import Components\n\nfrom pyramid.compat import text_\nfrom pyramid.decorator import reify\n\nfrom pyramid.interfaces import (\n IIntrospector,\n IIntrospectable,\n ISettings,\n )\n\nfrom pyramid.path import (\n CALLER_PACKAGE,\n caller_package,\n)\n\nempty = text_('')\n\nclass Registry(Components, dict):\n \"\"\" A registry object is an :term:`application registry`.\n\n It is used by the framework itself to perform mappings of URLs to view\n callables, as well as servicing other various framework duties. A registry\n has its own internal API, but this API is rarely used by Pyramid\n application developers (it's usually only used by developers of the\n Pyramid framework and Pyramid addons). But it has a number of attributes\n that may be useful to application developers within application code,\n such as ``settings``, which is a dictionary containing application\n deployment settings.\n\n For information about the purpose and usage of the application registry,\n see :ref:`zca_chapter`.\n\n The registry may be used both as an :class:`pyramid.interfaces.IDict` and\n as a Zope component registry.\n These two ways of storing configuration are independent.\n Applications will tend to prefer to store information as key-values\n whereas addons may prefer to use the component registry to avoid naming\n conflicts and to provide more complex lookup mechanisms.\n\n The application registry is usually accessed as ``request.registry`` in\n application code. By the time a registry is used to handle requests it\n should be considered frozen and read-only. 
Any changes to its internal\n state should be done with caution and concern for thread-safety.\n\n \"\"\"\n\n # for optimization purposes, if no listeners are listening, don't try\n # to notify them\n has_listeners = False\n\n _settings = None\n\n def __init__(self, package_name=CALLER_PACKAGE):\n # add a registry-instance-specific lock, which is used when the lookup\n # cache is mutated\n self._lock = threading.Lock()\n # add a view lookup cache\n self._clear_view_lookup_cache()\n if package_name is CALLER_PACKAGE:\n package_name = caller_package().__name__\n Components.__init__(self, package_name)\n dict.__init__(self)\n\n def _clear_view_lookup_cache(self):\n self._view_lookup_cache = {}\n\n def __nonzero__(self):\n # defeat bool determination via dict.__len__\n return True\n\n @reify\n def package_name(self):\n return self.__name__\n\n def registerSubscriptionAdapter(self, *arg, **kw):\n result = Components.registerSubscriptionAdapter(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def registerSelfAdapter(self, required=None, provided=None, name=empty,\n info=empty, event=True):\n # registerAdapter analogue which always returns the object itself\n # when required is matched\n return self.registerAdapter(lambda x: x, required=required,\n provided=provided, name=name,\n info=info, event=event)\n\n def queryAdapterOrSelf(self, object, interface, default=None):\n # queryAdapter analogue which returns the object if it implements\n # the interface, otherwise it will return an adaptation to the\n # interface\n if not interface.providedBy(object):\n return self.queryAdapter(object, interface, default=default)\n return object\n\n def registerHandler(self, *arg, **kw):\n result = Components.registerHandler(self, *arg, **kw)\n self.has_listeners = True\n return result\n\n def notify(self, *events):\n if self.has_listeners:\n # iterating over subscribers assures they get executed\n [ _ for _ in self.subscribers(events, None) ]\n\n # backwards compatibility for code that wants to look up a settings\n # object via ``registry.getUtility(ISettings)``\n def _get_settings(self):\n return self._settings\n\n def _set_settings(self, settings):\n self.registerUtility(settings, ISettings)\n self._settings = settings\n\n settings = property(_get_settings, _set_settings)\n\n@implementer(IIntrospector)\nclass Introspector(object):\n def __init__(self):\n self._refs = {}\n self._categories = {}\n self._counter = 0\n\n def add(self, intr):\n category = self._categories.setdefault(intr.category_name, {})\n category[intr.discriminator] = intr\n category[intr.discriminator_hash] = intr\n intr.order = self._counter\n self._counter += 1\n\n def get(self, category_name, discriminator, default=None):\n category = self._categories.setdefault(category_name, {})\n intr = category.get(discriminator, default)\n return intr\n\n def get_category(self, category_name, default=None, sort_key=None):\n if sort_key is None:\n sort_key = operator.attrgetter('order')\n category = self._categories.get(category_name)\n if category is None:\n return default\n values = category.values()\n values = sorted(set(values), key=sort_key)\n return [\n {'introspectable': intr,\n 'related': self.related(intr)}\n for intr in values\n ]\n\n def categorized(self, sort_key=None):\n L = []\n for category_name in self.categories():\n L.append((category_name, self.get_category(category_name,\n sort_key=sort_key)))\n return L\n\n def categories(self):\n return sorted(self._categories.keys())\n\n def remove(self, category_name, 
discriminator):\n intr = self.get(category_name, discriminator)\n if intr is None:\n return\n L = self._refs.pop(intr, [])\n for d in L:\n L2 = self._refs[d]\n L2.remove(intr)\n category = self._categories[intr.category_name]\n del category[intr.discriminator]\n del category[intr.discriminator_hash]\n\n def _get_intrs_by_pairs(self, pairs):\n introspectables = []\n for pair in pairs:\n category_name, discriminator = pair\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n introspectables.append(intr)\n return introspectables\n\n def relate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.setdefault(x, [])\n if x is not y and y not in L:\n L.append(y)\n\n def unrelate(self, *pairs):\n introspectables = self._get_intrs_by_pairs(pairs)\n relatable = ((x,y) for x in introspectables for y in introspectables)\n for x, y in relatable:\n L = self._refs.get(x, [])\n if y in L:\n L.remove(y)\n\n def related(self, intr):\n category_name, discriminator = intr.category_name, intr.discriminator\n intr = self._categories.get(category_name, {}).get(discriminator)\n if intr is None:\n raise KeyError((category_name, discriminator))\n return self._refs.get(intr, [])\n\n@implementer(IIntrospectable)\nclass Introspectable(dict):\n\n order = 0 # mutated by introspector.add\n action_info = None # mutated by self.register\n\n def __init__(self, category_name, discriminator, title, type_name):\n self.category_name = category_name\n self.discriminator = discriminator\n self.title = title\n self.type_name = type_name\n self._relations = []\n\n def relate(self, category_name, discriminator):\n self._relations.append((True, category_name, discriminator))\n\n def unrelate(self, category_name, discriminator):\n self._relations.append((False, category_name, discriminator))\n\n def _assert_resolved(self):\n assert undefer(self.discriminator) is self.discriminator\n\n @property\n def discriminator_hash(self):\n self._assert_resolved()\n return hash(self.discriminator)\n\n def __hash__(self):\n self._assert_resolved()\n return hash((self.category_name,) + (self.discriminator,))\n\n def __repr__(self):\n self._assert_resolved()\n return '<%s category %r, discriminator %r>' % (self.__class__.__name__,\n self.category_name,\n self.discriminator)\n\n def __nonzero__(self):\n return True\n\n __bool__ = __nonzero__ # py3\n\n def register(self, introspector, action_info):\n self.discriminator = undefer(self.discriminator)\n self.action_info = action_info\n introspector.add(self)\n for relate, category_name, discriminator in self._relations:\n discriminator = undefer(discriminator)\n if relate:\n method = introspector.relate\n else:\n method = introspector.unrelate\n method(\n (self.category_name, self.discriminator),\n (category_name, discriminator)\n )\n\nclass Deferred(object):\n \"\"\" Can be used by a third-party configuration extender to wrap a\n :term:`discriminator` during configuration if an immediately hashable\n discriminator cannot be computed because it relies on unresolved values.\n The function should accept no arguments and should return a hashable\n discriminator.\"\"\"\n def __init__(self, func):\n self.func = func\n\n @reify\n def value(self):\n return self.func()\n\n def resolve(self):\n return self.value\n\ndef undefer(v):\n \"\"\" Function which accepts an object and returns it unless it is a\n 
:class:`pyramid.registry.Deferred` instance. If it is an instance of\n that class, its ``resolve`` method is called, and the result of the\n method is returned.\"\"\"\n if isinstance(v, Deferred):\n v = v.resolve()\n return v\n\nclass predvalseq(tuple):\n \"\"\" A subtype of tuple used to represent a sequence of predicate values \"\"\"\n pass\n\nglobal_registry = Registry('global')\n", "path": "pyramid/registry.py"}]} | 3,658 | 200 |
gh_patches_debug_30216 | rasdani/github-patches | git_diff | vega__altair-982 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: if selenium is installed but not properly configured, Altair cannot be imported
Fix is to use a more robust lazy import of selenium.
The main issue is that ``import altair`` ends up trying to import selenium. It would be better if selenium weren't imported until it is actually needed. Same for other optional imports.
</issue>
<code>
[start of altair/utils/headless.py]
1 """
2 Utilities that use selenium + chrome headless to save figures
3 """
4
5 import contextlib
6 import os
7 import tempfile
8
9 try:
10 import selenium.webdriver
11 except ImportError:
12 selenium = None
13
14
15 @contextlib.contextmanager
16 def temporary_filename(**kwargs):
17 """Create and clean-up a temporary file
18
19 Arguments are the same as those passed to tempfile.mkstemp
20
21 We could use tempfile.NamedTemporaryFile here, but that causes issues on
22 windows (see https://bugs.python.org/issue14243).
23 """
24 filedescriptor, filename = tempfile.mkstemp(**kwargs)
25 os.close(filedescriptor)
26
27 try:
28 yield filename
29 finally:
30 if os.path.exists(filename):
31 os.remove(filename)
32
33
34 HTML_TEMPLATE = """
35 <!DOCTYPE html>
36 <html>
37 <head>
38 <title>Embedding Vega-Lite</title>
39 <script src="https://cdn.jsdelivr.net/npm/vega@{vega_version}"></script>
40 <script src="https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}"></script>
41 <script src="https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}"></script>
42 </head>
43 <body>
44 <div id="vis"></div>
45 </body>
46 </html>
47 """
48
49 EXTRACT_CODE = {
50 'png': """
51 var spec = arguments[0];
52 var mode = arguments[1];
53 var scaleFactor = arguments[2];
54 var done = arguments[3];
55
56 if(mode === 'vega-lite'){
57 // compile vega-lite to vega
58 const compiled = vl.compile(spec);
59 spec = compiled.spec;
60 }
61
62 new vega.View(vega.parse(spec), {
63 loader: vega.loader(),
64 logLevel: vega.Warn,
65 renderer: 'none',
66 })
67 .initialize()
68 .toCanvas(scaleFactor)
69 .then(function(canvas){return canvas.toDataURL('image/png');})
70 .then(done)
71 .catch(function(err) { console.error(err); });
72 """,
73 'svg': """
74 var spec = arguments[0];
75 var mode = arguments[1];
76 var scaleFactor = arguments[2];
77 var done = arguments[3];
78
79 if(mode === 'vega-lite'){
80 // compile vega-lite to vega
81 const compiled = vl.compile(spec);
82 spec = compiled.spec;
83 }
84
85 new vega.View(vega.parse(spec), {
86 loader: vega.loader(),
87 logLevel: vega.Warn,
88 renderer: 'none',
89 })
90 .initialize()
91 .toSVG(scaleFactor)
92 .then(done)
93 .catch(function(err) { console.error(err); });
94 """,
95 'vega': """
96 var spec = arguments[0];
97 var mode = arguments[1];
98 var done = arguments[3];
99
100 if(mode === 'vega-lite'){
101 // compile vega-lite to vega
102 const compiled = vl.compile(spec);
103 spec = compiled.spec;
104 }
105
106 done(spec);
107 """}
108
109
110 def compile_spec(spec, format, mode,
111 vega_version, vegaembed_version, vegalite_version,
112 scale_factor=1, driver_timeout=20, webdriver='chrome'):
113
114 # TODO: detect & use local Jupyter caches of JS packages?
115
116 if format not in ['png', 'svg', 'vega']:
117 raise NotImplementedError("format must be 'svg', 'png' or 'vega'")
118
119 if mode not in ['vega', 'vega-lite']:
120 raise ValueError("mode must be either 'vega' or 'vega-lite'")
121
122 if vega_version is None:
123 raise ValueError("must specify vega_version")
124
125 if vegaembed_version is None:
126 raise ValueError("must specify vegaembed_version")
127
128 if mode == 'vega-lite' and vegalite_version is None:
129 raise ValueError("must specify vega-lite version")
130
131 if selenium is None:
132 raise ImportError("selenium package is required "
133 "for saving chart as {0}".format(format))
134 if webdriver == 'chrome':
135 webdriver_class = selenium.webdriver.Chrome
136 webdriver_options_class = selenium.webdriver.chrome.options.Options
137 elif webdriver == 'firefox':
138 webdriver_class = selenium.webdriver.Firefox
139 webdriver_options_class = selenium.webdriver.firefox.options.Options
140 else:
141 raise ValueError("webdriver must be 'chrome' or 'firefox'")
142
143 html = HTML_TEMPLATE.format(vega_version=vega_version,
144 vegalite_version=vegalite_version,
145 vegaembed_version=vegaembed_version)
146
147 webdriver_options = webdriver_options_class()
148 webdriver_options.add_argument("--headless")
149
150 if issubclass(webdriver_class, selenium.webdriver.Chrome):
151 # for linux/osx root user, need to add --no-sandbox option.
152 # since geteuid doesn't exist on windows, we don't check it
153 if hasattr(os, 'geteuid') and (os.geteuid() == 0):
154 webdriver_options.add_argument('--no-sandbox')
155
156 driver = webdriver_class(options=webdriver_options)
157
158 try:
159 driver.set_page_load_timeout(driver_timeout)
160
161 with temporary_filename(suffix='.html') as htmlfile:
162 with open(htmlfile, 'w') as f:
163 f.write(html)
164 driver.get("file://" + htmlfile)
165 online = driver.execute_script("return navigator.onLine")
166 if not online:
167 raise ValueError("Internet connection required for saving "
168 "chart as {0}".format(format))
169 return driver.execute_async_script(EXTRACT_CODE[format],
170 spec, mode, scale_factor)
171 finally:
172 driver.close()
173
[end of altair/utils/headless.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/utils/headless.py b/altair/utils/headless.py
--- a/altair/utils/headless.py
+++ b/altair/utils/headless.py
@@ -6,11 +6,6 @@
import os
import tempfile
-try:
- import selenium.webdriver
-except ImportError:
- selenium = None
-
@contextlib.contextmanager
def temporary_filename(**kwargs):
@@ -110,9 +105,15 @@
def compile_spec(spec, format, mode,
vega_version, vegaembed_version, vegalite_version,
scale_factor=1, driver_timeout=20, webdriver='chrome'):
-
# TODO: detect & use local Jupyter caches of JS packages?
+ # selenium is an optional dependency, so import it here
+ try:
+ import selenium.webdriver
+ except ImportError:
+ raise ImportError("selenium package is required "
+ "for saving chart as {0}".format(format))
+
if format not in ['png', 'svg', 'vega']:
raise NotImplementedError("format must be 'svg', 'png' or 'vega'")
@@ -128,9 +129,6 @@
if mode == 'vega-lite' and vegalite_version is None:
raise ValueError("must specify vega-lite version")
- if selenium is None:
- raise ImportError("selenium package is required "
- "for saving chart as {0}".format(format))
if webdriver == 'chrome':
webdriver_class = selenium.webdriver.Chrome
webdriver_options_class = selenium.webdriver.chrome.options.Options
| {"golden_diff": "diff --git a/altair/utils/headless.py b/altair/utils/headless.py\n--- a/altair/utils/headless.py\n+++ b/altair/utils/headless.py\n@@ -6,11 +6,6 @@\n import os\n import tempfile\n \n-try:\n- import selenium.webdriver\n-except ImportError:\n- selenium = None\n-\n \n @contextlib.contextmanager\n def temporary_filename(**kwargs):\n@@ -110,9 +105,15 @@\n def compile_spec(spec, format, mode,\n vega_version, vegaembed_version, vegalite_version,\n scale_factor=1, driver_timeout=20, webdriver='chrome'):\n- \n # TODO: detect & use local Jupyter caches of JS packages?\n \n+ # selenium is an optional dependency, so import it here\n+ try:\n+ import selenium.webdriver\n+ except ImportError:\n+ raise ImportError(\"selenium package is required \"\n+ \"for saving chart as {0}\".format(format))\n+\n if format not in ['png', 'svg', 'vega']:\n raise NotImplementedError(\"format must be 'svg', 'png' or 'vega'\")\n \n@@ -128,9 +129,6 @@\n if mode == 'vega-lite' and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n \n- if selenium is None:\n- raise ImportError(\"selenium package is required \"\n- \"for saving chart as {0}\".format(format))\n if webdriver == 'chrome':\n webdriver_class = selenium.webdriver.Chrome\n webdriver_options_class = selenium.webdriver.chrome.options.Options\n", "issue": "BUG: if selenium is installed but not properly configured, Altair cannot be imported\nFix is to use a more robust lazy import of selenium.\r\n\r\nThe main issue is that ``import altair`` ends up trying to import selenium. It would be better if selenium weren't imported until it is actually needed. Same for other optional imports.\n", "before_files": [{"content": "\"\"\"\nUtilities that use selenium + chrome headless to save figures\n\"\"\"\n\nimport contextlib\nimport os\nimport tempfile\n\ntry:\n import selenium.webdriver\nexcept ImportError:\n selenium = None\n\n\[email protected]\ndef temporary_filename(**kwargs):\n \"\"\"Create and clean-up a temporary file\n\n Arguments are the same as those passed to tempfile.mkstemp\n\n We could use tempfile.NamedTemporaryFile here, but that causes issues on\n windows (see https://bugs.python.org/issue14243).\n \"\"\"\n filedescriptor, filename = tempfile.mkstemp(**kwargs)\n os.close(filedescriptor)\n\n try:\n yield filename\n finally:\n if os.path.exists(filename):\n os.remove(filename)\n\n\nHTML_TEMPLATE = \"\"\"\n<!DOCTYPE html>\n<html>\n<head>\n <title>Embedding Vega-Lite</title>\n <script src=\"https://cdn.jsdelivr.net/npm/vega@{vega_version}\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}\"></script>\n</head>\n<body>\n <div id=\"vis\"></div>\n</body>\n</html>\n\"\"\"\n\nEXTRACT_CODE = {\n'png': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var scaleFactor = arguments[2];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n new vega.View(vega.parse(spec), {\n loader: vega.loader(),\n logLevel: vega.Warn,\n renderer: 'none',\n })\n .initialize()\n .toCanvas(scaleFactor)\n .then(function(canvas){return canvas.toDataURL('image/png');})\n .then(done)\n .catch(function(err) { console.error(err); });\n \"\"\",\n'svg': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var scaleFactor = arguments[2];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite 
to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n new vega.View(vega.parse(spec), {\n loader: vega.loader(),\n logLevel: vega.Warn,\n renderer: 'none',\n })\n .initialize()\n .toSVG(scaleFactor)\n .then(done)\n .catch(function(err) { console.error(err); });\n \"\"\",\n'vega': \"\"\"\n var spec = arguments[0];\n var mode = arguments[1];\n var done = arguments[3];\n\n if(mode === 'vega-lite'){\n // compile vega-lite to vega\n const compiled = vl.compile(spec);\n spec = compiled.spec;\n }\n\n done(spec);\n \"\"\"}\n\n\ndef compile_spec(spec, format, mode,\n vega_version, vegaembed_version, vegalite_version,\n scale_factor=1, driver_timeout=20, webdriver='chrome'):\n \n # TODO: detect & use local Jupyter caches of JS packages?\n\n if format not in ['png', 'svg', 'vega']:\n raise NotImplementedError(\"format must be 'svg', 'png' or 'vega'\")\n\n if mode not in ['vega', 'vega-lite']:\n raise ValueError(\"mode must be either 'vega' or 'vega-lite'\")\n\n if vega_version is None:\n raise ValueError(\"must specify vega_version\")\n\n if vegaembed_version is None:\n raise ValueError(\"must specify vegaembed_version\")\n\n if mode == 'vega-lite' and vegalite_version is None:\n raise ValueError(\"must specify vega-lite version\")\n\n if selenium is None:\n raise ImportError(\"selenium package is required \"\n \"for saving chart as {0}\".format(format))\n if webdriver == 'chrome':\n webdriver_class = selenium.webdriver.Chrome\n webdriver_options_class = selenium.webdriver.chrome.options.Options\n elif webdriver == 'firefox':\n webdriver_class = selenium.webdriver.Firefox\n webdriver_options_class = selenium.webdriver.firefox.options.Options\n else:\n raise ValueError(\"webdriver must be 'chrome' or 'firefox'\")\n\n html = HTML_TEMPLATE.format(vega_version=vega_version,\n vegalite_version=vegalite_version,\n vegaembed_version=vegaembed_version)\n\n webdriver_options = webdriver_options_class()\n webdriver_options.add_argument(\"--headless\")\n\n if issubclass(webdriver_class, selenium.webdriver.Chrome):\n # for linux/osx root user, need to add --no-sandbox option.\n # since geteuid doesn't exist on windows, we don't check it\n if hasattr(os, 'geteuid') and (os.geteuid() == 0):\n webdriver_options.add_argument('--no-sandbox')\n\n driver = webdriver_class(options=webdriver_options)\n\n try:\n driver.set_page_load_timeout(driver_timeout)\n\n with temporary_filename(suffix='.html') as htmlfile:\n with open(htmlfile, 'w') as f:\n f.write(html)\n driver.get(\"file://\" + htmlfile)\n online = driver.execute_script(\"return navigator.onLine\")\n if not online:\n raise ValueError(\"Internet connection required for saving \"\n \"chart as {0}\".format(format))\n return driver.execute_async_script(EXTRACT_CODE[format],\n spec, mode, scale_factor)\n finally:\n driver.close()\n", "path": "altair/utils/headless.py"}]} | 2,229 | 351 |
gh_patches_debug_61783 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1155 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Taiwan TW is offline
Currently, Taiwan is grey and 24-hours-history is empty as well.
- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though.
Maybe there have been some crucial changes?
Some other TW related things that should be fixed:
- The source link on the electricitymap website for Taiwan is not shown / shown as "?".

- In general, the link in README.md will show 404 error and is leading nowhere. Seems like they updated/revised their website a bit?
Here is the website with the 10-min-generation mix that should be linked in README.md:
http://www.taipower.com.tw/tc/page.aspx?mid=206&cid=404&cchk=8ccc1918-8cae-4f40-a2d0-b43454f4f218

Taiwan TW is offline
Currently, Taiwan is grey and 24-hours-history is empty as well.
- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though.
Maybe there have been some crucial changes?
Some other TW related things that should be fixed:
- The source link on the electricitymap website for Taiwan is not shown / shown as "?".

- In general, the link in README.md will show 404 error and is leading nowhere. Seems like they updated/revised their website a bit?
Here is the website with the 10-min-generation mix that should be linked in README.md:
http://www.taipower.com.tw/tc/page.aspx?mid=206&cid=404&cchk=8ccc1918-8cae-4f40-a2d0-b43454f4f218

</issue>
<code>
[start of parsers/TW.py]
1 #!/usr/bin/env python3
2 import arrow
3 import requests
4 import pandas
5 import dateutil
6
7
8 def fetch_production(country_code='TW'):
9 url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'
10 response = requests.get(url)
11 data = response.json()
12
13 dumpDate = data['']
14 prodData = data['aaData']
15
16 tz = 'Asia/Taipei'
17 dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))
18
19 objData = pandas.DataFrame(prodData)
20
21 objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',
22 'additional']
23
24 objData['fueltype'] = objData.fueltype.str.split('(').str[1]
25 objData['fueltype'] = objData.fueltype.str.split(')').str[0]
26 objData.drop('additional', axis=1, inplace=True)
27 objData.drop('percentage', axis=1, inplace=True)
28
29 objData = objData.convert_objects(convert_numeric=True)
30 production = pandas.DataFrame(objData.groupby('fueltype').sum())
31 production.columns = ['capacity', 'output']
32
33 coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity
34 gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity
35 oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity
36
37 coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output
38 gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output
39 oil_production = production.ix['Oil'].output + production.ix['Diesel'].output
40
41 # For storage, note that load will be negative, and generation positive.
42 # We require the opposite
43
44 returndata = {
45 'countryCode': country_code,
46 'datetime': dumpDate.datetime,
47 'production': {
48 'coal': coal_production,
49 'gas': gas_production,
50 'oil': oil_production,
51 'hydro': production.ix['Hydro'].output,
52 'nuclear': production.ix['Nuclear'].output,
53 'solar': production.ix['Solar'].output,
54 'wind': production.ix['Wind'].output,
55 'unknown': production.ix['Co-Gen'].output
56 },
57 'capacity': {
58 'coal': coal_capacity,
59 'gas': gas_capacity,
60 'oil': oil_capacity,
61 'hydro': production.ix['Hydro'].capacity,
62 'nuclear': production.ix['Nuclear'].capacity,
63 'solar': production.ix['Solar'].capacity,
64 'wind': production.ix['Wind'].capacity,
65 'unknown': production.ix['Co-Gen'].capacity
66 },
67 'storage': {
68 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output
69 },
70 'source': 'taipower.com.tw'
71 }
72
73 return returndata
74
75
76 if __name__ == '__main__':
77 print(fetch_production())
78
[end of parsers/TW.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/TW.py b/parsers/TW.py
--- a/parsers/TW.py
+++ b/parsers/TW.py
@@ -5,7 +5,7 @@
import dateutil
-def fetch_production(country_code='TW'):
+def fetch_production(country_code='TW', session=None):
url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'
response = requests.get(url)
data = response.json()
| {"golden_diff": "diff --git a/parsers/TW.py b/parsers/TW.py\n--- a/parsers/TW.py\n+++ b/parsers/TW.py\n@@ -5,7 +5,7 @@\n import dateutil\n \n \n-def fetch_production(country_code='TW'):\n+def fetch_production(country_code='TW', session=None):\n url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n response = requests.get(url)\n data = response.json()\n", "issue": "Taiwan TW is offline\nCurrently, Taiwan is grey and 24-hours-history is empty as well.\r\n\r\n- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though. \r\nMaybe there have been some crucial changes?\r\n\r\nSome other TW related things that should be fixed:\r\n- The source link on the electricitymap website for Taiwan is not shown / shown as \"?\".\r\n\r\n\r\n- In general, the link in README.md will show 404 error and is leading nowhere. Seems like they updated/revised their website a bit?\r\nHere is the website with the 10-min-generation mix that should be linked in README.md:\r\nhttp://www.taipower.com.tw/tc/page.aspx?mid=206&cid=404&cchk=8ccc1918-8cae-4f40-a2d0-b43454f4f218\r\n\r\n\r\n\nTaiwan TW is offline\nCurrently, Taiwan is grey and 24-hours-history is empty as well.\r\n\r\n- [The link ](http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt) in the [TW.py parser](https://github.com/tmrowco/electricitymap/blob/master/parsers/TW.py) seems to show data, though. \r\nMaybe there have been some crucial changes?\r\n\r\nSome other TW related things that should be fixed:\r\n- The source link on the electricitymap website for Taiwan is not shown / shown as \"?\".\r\n\r\n\r\n- In general, the link in README.md will show 404 error and is leading nowhere. 
Seems like they updated/revised their website a bit?\r\nHere is the website with the 10-min-generation mix that should be linked in README.md:\r\nhttp://www.taipower.com.tw/tc/page.aspx?mid=206&cid=404&cchk=8ccc1918-8cae-4f40-a2d0-b43454f4f218\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport arrow\nimport requests\nimport pandas\nimport dateutil\n\n\ndef fetch_production(country_code='TW'):\n url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n response = requests.get(url)\n data = response.json()\n\n dumpDate = data['']\n prodData = data['aaData']\n\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n\n objData = pandas.DataFrame(prodData)\n\n objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',\n 'additional']\n\n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n objData.drop('additional', axis=1, inplace=True)\n objData.drop('percentage', axis=1, inplace=True)\n\n objData = objData.convert_objects(convert_numeric=True)\n production = pandas.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n\n coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity\n gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity\n oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity\n\n coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output\n gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output\n oil_production = production.ix['Oil'].output + production.ix['Diesel'].output\n\n # For storage, note that load will be negative, and generation positive.\n # We require the opposite\n\n returndata = {\n 'countryCode': country_code,\n 'datetime': dumpDate.datetime,\n 'production': {\n 'coal': coal_production,\n 'gas': gas_production,\n 'oil': oil_production,\n 'hydro': production.ix['Hydro'].output,\n 'nuclear': production.ix['Nuclear'].output,\n 'solar': production.ix['Solar'].output,\n 'wind': production.ix['Wind'].output,\n 'unknown': production.ix['Co-Gen'].output\n },\n 'capacity': {\n 'coal': coal_capacity,\n 'gas': gas_capacity,\n 'oil': oil_capacity,\n 'hydro': production.ix['Hydro'].capacity,\n 'nuclear': production.ix['Nuclear'].capacity,\n 'solar': production.ix['Solar'].capacity,\n 'wind': production.ix['Wind'].capacity,\n 'unknown': production.ix['Co-Gen'].capacity\n },\n 'storage': {\n 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output\n },\n 'source': 'taipower.com.tw'\n }\n\n return returndata\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/TW.py"}]} | 2,097 | 113 |
gh_patches_debug_14691 | rasdani/github-patches | git_diff | google__timesketch-406 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pylint not present in requirements.txt
Not pinning version of Pylint makes our build a bit non-deterministic. Pylint's behavior can change between versions and break our build.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # Copyright 2015 Google Inc. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """This is the setup file for the project. The standard setup rules apply:
16
17 python setup.py build
18 sudo python setup.py install
19 """
20
21 import os.path
22 import sys
23 import time
24
25 from setuptools import find_packages
26 from setuptools import setup
27
28 timesketch_version = u'20170721'
29
30 timesketch_description = (
31 u'Timesketch is a web based tool for collaborative forensic timeline '
32 u'analysis. Using sketches you and your collaborators can easily organize '
33 u'timelines and analyze them all at the same time. Add meaning to '
34 u'your raw data with rich annotations, comments, tags and stars.')
35
36 def check_before_upload():
37 """Warn user if frontend build is not present or is not recent.
38
39 Make sure that .js and .css bundles included in the PyPI package are up to
40 date.
41
42 Raises:
43 UserWarning
44 """
45 this_dir = os.path.dirname(__file__)
46 frontend_dist_dir = os.path.join(
47 this_dir, 'timesketch', 'ui', 'static', 'dist',
48 )
49 js = os.path.join(frontend_dist_dir, 'bundle.js')
50 css = os.path.join(frontend_dist_dir, 'bundle.css')
51 if not (os.path.isfile(js) and os.path.isfile(css)):
52 raise UserWarning(
53 "Build the frontend before uploading to PyPI!"
54 + " (see docs/Developers-Guide.md)"
55 )
56 mtime = min(os.path.getmtime(js), os.path.getmtime(css))
57 if time.time() - mtime > 180:
58 raise UserWarning(
59 "Frontend build is older than 3 minutes, please rebuild!"
60 + " (see docs/Developers-Guide.md)"
61 )
62
63 if 'upload' in sys.argv:
64 check_before_upload()
65
66 setup(
67 name=u'timesketch',
68 version=timesketch_version,
69 description=u'Digital forensic timeline analysis',
70 long_description=timesketch_description,
71 license=u'Apache License, Version 2.0',
72 url=u'http://www.timesketch.org/',
73 maintainer=u'Timesketch development team',
74 maintainer_email=u'[email protected]',
75 classifiers=[
76 u'Development Status :: 4 - Beta',
77 u'Environment :: Web Environment',
78 u'Operating System :: OS Independent',
79 u'Programming Language :: Python',
80 ],
81 data_files=[(u'share/timesketch', [u'timesketch.conf'])],
82 packages=find_packages(),
83 include_package_data=True,
84 zip_safe=False,
85 scripts=[u'tsctl'],
86 install_requires=frozenset([
87 u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy',
88 u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate',
89 u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch',
90 u'neo4jrestclient', u'python-dateutil'
91 ]))
92
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,6 +24,8 @@
from setuptools import find_packages
from setuptools import setup
+from pip.req import parse_requirements
+from pip.download import PipSession
timesketch_version = u'20170721'
@@ -83,9 +85,7 @@
include_package_data=True,
zip_safe=False,
scripts=[u'tsctl'],
- install_requires=frozenset([
- u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy',
- u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate',
- u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch',
- u'neo4jrestclient', u'python-dateutil'
- ]))
+ install_requires=[str(req.req) for req in parse_requirements(
+ "requirements.txt", session=PipSession(),
+ )],
+)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,6 +24,8 @@\n \n from setuptools import find_packages\n from setuptools import setup\n+from pip.req import parse_requirements\n+from pip.download import PipSession\n \n timesketch_version = u'20170721'\n \n@@ -83,9 +85,7 @@\n include_package_data=True,\n zip_safe=False,\n scripts=[u'tsctl'],\n- install_requires=frozenset([\n- u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy',\n- u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate',\n- u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch',\n- u'neo4jrestclient', u'python-dateutil'\n- ]))\n+ install_requires=[str(req.req) for req in parse_requirements(\n+ \"requirements.txt\", session=PipSession(),\n+ )],\n+)\n", "issue": "Pylint not present in requirements.txt\nNot pinning version of Pylint makes our build a bit non-deterministic. Pylint's behavior can change between versions and break our build.\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This is the setup file for the project. The standard setup rules apply:\n\n python setup.py build\n sudo python setup.py install\n\"\"\"\n\nimport os.path\nimport sys\nimport time\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\ntimesketch_version = u'20170721'\n\ntimesketch_description = (\n u'Timesketch is a web based tool for collaborative forensic timeline '\n u'analysis. Using sketches you and your collaborators can easily organize '\n u'timelines and analyze them all at the same time. 
Add meaning to '\n u'your raw data with rich annotations, comments, tags and stars.')\n\ndef check_before_upload():\n \"\"\"Warn user if frontend build is not present or is not recent.\n\n Make sure that .js and .css bundles included in the PyPI package are up to\n date.\n\n Raises:\n UserWarning\n \"\"\"\n this_dir = os.path.dirname(__file__)\n frontend_dist_dir = os.path.join(\n this_dir, 'timesketch', 'ui', 'static', 'dist',\n )\n js = os.path.join(frontend_dist_dir, 'bundle.js')\n css = os.path.join(frontend_dist_dir, 'bundle.css')\n if not (os.path.isfile(js) and os.path.isfile(css)):\n raise UserWarning(\n \"Build the frontend before uploading to PyPI!\"\n + \" (see docs/Developers-Guide.md)\"\n )\n mtime = min(os.path.getmtime(js), os.path.getmtime(css))\n if time.time() - mtime > 180:\n raise UserWarning(\n \"Frontend build is older than 3 minutes, please rebuild!\"\n + \" (see docs/Developers-Guide.md)\"\n )\n\nif 'upload' in sys.argv:\n check_before_upload()\n\nsetup(\n name=u'timesketch',\n version=timesketch_version,\n description=u'Digital forensic timeline analysis',\n long_description=timesketch_description,\n license=u'Apache License, Version 2.0',\n url=u'http://www.timesketch.org/',\n maintainer=u'Timesketch development team',\n maintainer_email=u'[email protected]',\n classifiers=[\n u'Development Status :: 4 - Beta',\n u'Environment :: Web Environment',\n u'Operating System :: OS Independent',\n u'Programming Language :: Python',\n ],\n data_files=[(u'share/timesketch', [u'timesketch.conf'])],\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n scripts=[u'tsctl'],\n install_requires=frozenset([\n u'Flask', u'Flask-Login', u'Flask-script', u'Flask-SQLAlchemy',\n u'Flask-Bcrypt', u'Flask-RESTful', u'Flask-WTF', u'Flask-Migrate',\n u'SQLAlchemy', u'celery', u'redis', u'blinker', u'elasticsearch',\n u'neo4jrestclient', u'python-dateutil'\n ]))\n", "path": "setup.py"}]} | 1,564 | 254 |
gh_patches_debug_29180 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-417 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add contribution guide to documentation
Add a guide to the documentation which explains how to contribute to this project.
It should contain the following:
- [x] Bug Reporting Guide
- [x] Code Style Guide
- [x] GitHub Workflow Guide
- [x] Code of Conduct
</issue>
<code>
[start of sphinx/conf.py]
1 """
2 Configuration file for the Sphinx documentation builder.
3
4 This file only contains a selection of the most common options. For a full
5 list see the documentation:
6 https://www.sphinx-doc.org/en/master/usage/configuration.html
7 """
8
9 # -- Path setup --------------------------------------------------------------
10
11 import os
12 import sys
13 import inspect
14 import importlib
15 import django
16
17 from sphinx.writers.html import HTMLTranslator
18
19 from backend.settings import VERSION
20
21 # Append project source directory to path environment variable
22 sys.path.append(os.path.abspath('../src/'))
23 os.environ['DJANGO_SETTINGS_MODULE'] = 'backend.settings'
24
25
26 # Setup Django
27 django.setup()
28
29
30 def setup(app):
31 """
32 Registration and setup.
33
34 This method does the initial setup for the docs generation.
35 """
36 # Register the docstring processor with sphinx to improve the appearance of Django models
37 app.connect('autodoc-process-docstring', process_django_models)
38 # Patch HTMLTranslator to open external links in new tab
39 app.set_translator('html', PatchedHTMLTranslator)
40
41
42 # -- Project information -----------------------------------------------------
43
44
45 project = 'integreat-cms'
46 # pylint: disable=redefined-builtin
47 copyright = '2020, Integreat'
48 author = 'Integreat'
49
50 # The full version, including alpha/beta/rc tags
51 release = VERSION
52
53 # -- General configuration ---------------------------------------------------
54
55 # All enabled sphinx extensions
56 extensions = [
57 'sphinx.ext.autodoc',
58 'sphinx.ext.githubpages',
59 'sphinx.ext.intersphinx',
60 'sphinx.ext.linkcode',
61 'sphinxcontrib_django',
62 'sphinx_rtd_theme',
63 ]
64
65 # Enable cross-references to other documentations
66 intersphinx_mapping = {
67 'python': ('https://docs.python.org/3.7', None),
68 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),
69 'django': ('https://docs.djangoproject.com/en/2.2/',
70 'https://docs.djangoproject.com/en/2.2/_objects/'),
71 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),
72 }
73
74 # The path for patched template files
75 templates_path = ['templates']
76
77 # -- Options for HTML output -------------------------------------------------
78
79 # The theme to use for HTML and HTML Help pages.
80 html_theme = 'sphinx_rtd_theme'
81 # Do not show the project name, only the logo
82 html_theme_options = {
83 'logo_only': True,
84 'collapse_navigation': False,
85 }
86 # The logo shown in the menu bar
87 html_logo = '../src/cms/static/images/integreat-logo-white.png'
88 # The favicon of the html doc files
89 html_favicon = '../src/cms/static/images/favicon.ico'
90 # The url where the docs should be published (via gh-pages)
91 html_baseurl = 'https://Integreat.github.io/cms-django/'
92 # Do not include links to the documentation source (.rst files) in build
93 html_show_sourcelink = False
94
95 # -- Modify default Django model parameter types------------------------------
96
97
98 # pylint: disable=unused-argument, too-many-locals, too-many-branches
99 def process_django_models(app, what, name, obj, options, lines):
100 """Append correct param types from fields to model documentation."""
101 if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):
102 # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link
103 postgres_docu = intersphinx_mapping.get('django')[1][0] + 'ref/contrib/postgres/fields/'
104 # include_hidden to get also ManyToManyFields
105 for field in obj._meta.get_fields(include_hidden=True):
106 field_type = type(field).__name__
107 field_module = type(field).__module__
108 if field_module == 'django.contrib.postgres.fields.array':
109 # Fix intersphinx mappings for django.contrib.postgres fields
110 type_line = f':type {field.name}: `{field_module}.ArrayField <{postgres_docu}#arrayfield>`_'
111 elif field_module == 'django.contrib.postgres.fields.jsonb':
112 # Fix intersphinx mappings for django.contrib.postgres fields
113 type_line = f':type {field.name}: `{field_module}.JSONField <{postgres_docu}#jsonfield>`_'
114 elif field_module in ['django.db.models.fields.related', 'mptt.fields']:
115 # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)
116 # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)
117 remote_model = field.remote_field.get_related_field().model
118 type_line = f':type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`'
119 elif field_module == 'django.db.models.fields.reverse_related':
120 # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)
121 remote_model = field.remote_field.model
122 type_line = f':type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`'
123 else:
124 if 'django.db.models' in field_module:
125 # Scope with django.db.models * imports (remove all sub-module-paths)
126 field_module = 'django.db.models'
127 # Fix type hint to enable correct intersphinx mappings to other documentations
128 type_line = f':type {field.name}: {field_module}.{field_type}'
129 # This loop gets the indexes which are needed to update the type hints of the model parameters.
130 # It makes it possible to split the parameter section into multiple parts, e.g. params inherited from a base
131 # model and params of a sub model (otherwise the type hints would not be recognized when separated from
132 # the parameter description).
133 param_index = None
134 next_param_index = None
135 type_index = None
136 for index, line in enumerate(lines):
137 if param_index is None and f':param {field.name}:' in line:
138 # The index of the field param is only used to determine the next param line
139 param_index = index
140 elif param_index is not None and next_param_index is None and (':param ' in line or line == ''):
141 # The line of the next param after the field, this is the index where we will insert the type.
142 # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.
143 # If the line is empty, the param description is finished, even if it extends over multiple lines.
144 next_param_index = index
145 elif type_index is None and f':type {field.name}:' in line:
146 # The index of the old type hint, we will either move this line or replace it
147 type_index = index
148 break
149 if next_param_index is None:
150 # In case the current field is the last param, we just append the type at the very end of lines
151 next_param_index = len(lines)
152 # For some params, the type line is not automatically generated and thus the type_index might be `None`
153 if type_index is not None:
154 # We delete the old type index, because we will replace it with the new type line
155 del lines[type_index]
156 # Insert the new type line just before the next param
157 lines.insert(next_param_index, type_line)
158 return lines
159
160
161 # -- Source Code links to GitHub ---------------------------------------------
162
163
164 def linkcode_resolve(domain, info):
165 """Link source code to GitHub."""
166 if domain != 'py' or not info['module']:
167 return None
168 filename = info['module'].replace('.', '/')
169 module = importlib.import_module(info['module'])
170 basename = os.path.splitext(module.__file__)[0]
171 if basename.endswith('__init__'):
172 filename += '/__init__'
173 item = module
174 line_number_reference = ''
175 for piece in info['fullname'].split('.'):
176 item = getattr(item, piece)
177 try:
178 line_number_reference = f'#L{inspect.getsourcelines(item)[1]}'
179 except (TypeError, IOError):
180 pass
181 return f"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}"
182
183 # -- Link targets ------------------------------------------------------------
184
185
186 # pylint: disable=abstract-method
187 class PatchedHTMLTranslator(HTMLTranslator):
188 """Open external links in a new tab"""
189
190 def visit_reference(self, node):
191 if (
192 node.get('newtab') or
193 not (
194 node.get('target') or
195 node.get('internal') or
196 'refuri' not in node
197 )
198 ):
199 node['target'] = '_blank'
200 super().visit_reference(node)
201
[end of sphinx/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sphinx/conf.py b/sphinx/conf.py
--- a/sphinx/conf.py
+++ b/sphinx/conf.py
@@ -65,7 +65,9 @@
# Enable cross-references to other documentations
intersphinx_mapping = {
'python': ('https://docs.python.org/3.7', None),
+ 'pipenv': ('https://pipenv.pypa.io/en/latest/', None),
'sphinx': ('https://www.sphinx-doc.org/en/master/', None),
+ 'sphinx-rtd-tutorial': ('https://sphinx-rtd-tutorial.readthedocs.io/en/latest/', None),
'django': ('https://docs.djangoproject.com/en/2.2/',
'https://docs.djangoproject.com/en/2.2/_objects/'),
'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),
@@ -80,7 +82,7 @@
html_theme = 'sphinx_rtd_theme'
# Do not show the project name, only the logo
html_theme_options = {
- 'logo_only': True,
+ 'logo_only': False,
'collapse_navigation': False,
}
# The logo shown in the menu bar
@@ -91,6 +93,10 @@
html_baseurl = 'https://Integreat.github.io/cms-django/'
# Do not include links to the documentation source (.rst files) in build
html_show_sourcelink = False
+# Do not include a link to sphinx
+html_show_sphinx = False
+# Include last updated timestamp
+html_last_updated_fmt = '%b %d, %Y'
# -- Modify default Django model parameter types------------------------------
| {"golden_diff": "diff --git a/sphinx/conf.py b/sphinx/conf.py\n--- a/sphinx/conf.py\n+++ b/sphinx/conf.py\n@@ -65,7 +65,9 @@\n # Enable cross-references to other documentations\n intersphinx_mapping = {\n 'python': ('https://docs.python.org/3.7', None),\n+ 'pipenv': ('https://pipenv.pypa.io/en/latest/', None),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n+ 'sphinx-rtd-tutorial': ('https://sphinx-rtd-tutorial.readthedocs.io/en/latest/', None),\n 'django': ('https://docs.djangoproject.com/en/2.2/',\n 'https://docs.djangoproject.com/en/2.2/_objects/'),\n 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),\n@@ -80,7 +82,7 @@\n html_theme = 'sphinx_rtd_theme'\n # Do not show the project name, only the logo\n html_theme_options = {\n- 'logo_only': True,\n+ 'logo_only': False,\n 'collapse_navigation': False,\n }\n # The logo shown in the menu bar\n@@ -91,6 +93,10 @@\n html_baseurl = 'https://Integreat.github.io/cms-django/'\n # Do not include links to the documentation source (.rst files) in build\n html_show_sourcelink = False\n+# Do not include a link to sphinx\n+html_show_sphinx = False\n+# Include last updated timestamp\n+html_last_updated_fmt = '%b %d, %Y'\n \n # -- Modify default Django model parameter types------------------------------\n", "issue": "Add contribution guide to documentation\nAdd a guide to the documentation which explains how to contribute to this project.\r\n\r\nIt should contain the following:\r\n\r\n- [x] Bug Reporting Guide\r\n- [x] Code Style Guide\r\n- [x] GitHub Workflow Guide\r\n- [x] Code of Conduct\n", "before_files": [{"content": "\"\"\"\nConfiguration file for the Sphinx documentation builder.\n\nThis file only contains a selection of the most common options. For a full\nlist see the documentation:\nhttps://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nimport inspect\nimport importlib\nimport django\n\nfrom sphinx.writers.html import HTMLTranslator\n\nfrom backend.settings import VERSION\n\n# Append project source directory to path environment variable\nsys.path.append(os.path.abspath('../src/'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'backend.settings'\n\n\n# Setup Django\ndjango.setup()\n\n\ndef setup(app):\n \"\"\"\n Registeration and setup.\n\n This method does the initial setup for the docs generation.\n \"\"\"\n # Register the docstring processor with sphinx to improve the appearance of Django models\n app.connect('autodoc-process-docstring', process_django_models)\n # Patch HTMLTranslator to open external links in new tab\n app.set_translator('html', PatchedHTMLTranslator)\n\n\n# -- Project information -----------------------------------------------------\n\n\nproject = 'integreat-cms'\n# pylint: disable=redefined-builtin\ncopyright = '2020, Integreat'\nauthor = 'Integreat'\n\n# The full version, including alpha/beta/rc tags\nrelease = VERSION\n\n# -- General configuration ---------------------------------------------------\n\n# All enabled sphinx extensions\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.githubpages',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.linkcode',\n 'sphinxcontrib_django',\n 'sphinx_rtd_theme',\n]\n\n# Enable cross-references to other documentations\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3.7', None),\n 'sphinx': ('https://www.sphinx-doc.org/en/master/', None),\n 'django': ('https://docs.djangoproject.com/en/2.2/',\n 
'https://docs.djangoproject.com/en/2.2/_objects/'),\n 'django-mptt': ('https://django-mptt.readthedocs.io/en/latest/', None),\n}\n\n# The path for patched template files\ntemplates_path = ['templates']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'sphinx_rtd_theme'\n# Do not show the project name, only the logo\nhtml_theme_options = {\n 'logo_only': True,\n 'collapse_navigation': False,\n}\n# The logo shown in the menu bar\nhtml_logo = '../src/cms/static/images/integreat-logo-white.png'\n# The facivon of the html doc files\nhtml_favicon = '../src/cms/static/images/favicon.ico'\n# The url where the docs should be published (via gh-pages)\nhtml_baseurl = 'https://Integreat.github.io/cms-django/'\n# Do not include links to the documentation source (.rst files) in build\nhtml_show_sourcelink = False\n\n# -- Modify default Django model parameter types------------------------------\n\n\n# pylint: disable=unused-argument, too-many-locals, too-many-branches\ndef process_django_models(app, what, name, obj, options, lines):\n \"\"\"Append correct param types from fields to model documentation.\"\"\"\n if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):\n # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link\n postgres_docu = intersphinx_mapping.get('django')[1][0] + 'ref/contrib/postgres/fields/'\n # include_hidden to get also ManyToManyFields\n for field in obj._meta.get_fields(include_hidden=True):\n field_type = type(field).__name__\n field_module = type(field).__module__\n if field_module == 'django.contrib.postgres.fields.array':\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = f':type {field.name}: `{field_module}.ArrayField <{postgres_docu}#arrayfield>`_'\n elif field_module == 'django.contrib.postgres.fields.jsonb':\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = f':type {field.name}: `{field_module}.JSONField <{postgres_docu}#jsonfield>`_'\n elif field_module in ['django.db.models.fields.related', 'mptt.fields']:\n # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)\n # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)\n remote_model = field.remote_field.get_related_field().model\n type_line = f':type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`'\n elif field_module == 'django.db.models.fields.reverse_related':\n # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)\n remote_model = field.remote_field.model\n type_line = f':type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`'\n else:\n if 'django.db.models' in field_module:\n # Scope with django.db.models * imports (remove all sub-module-paths)\n field_module = 'django.db.models'\n # Fix type hint to enable correct intersphinx mappings to other documentations\n type_line = f':type {field.name}: {field_module}.{field_type}'\n # This loop gets the indexes which are needed to update the type hints of the model parameters.\n # It makes it possible to split the parameter section into multiple parts, e.g. 
params inherited from a base\n # model and params of a sub model (otherwise the type hints would not be recognized when separated from\n # the parameter description).\n param_index = None\n next_param_index = None\n type_index = None\n for index, line in enumerate(lines):\n if param_index is None and f':param {field.name}:' in line:\n # The index of the field param is only used to determine the next param line\n param_index = index\n elif param_index is not None and next_param_index is None and (':param ' in line or line == ''):\n # The line of the next param after the field, this is the index where we will insert the type.\n # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.\n # If the line is empty, the param description is finished, even if it extends over multiple lines.\n next_param_index = index\n elif type_index is None and f':type {field.name}:' in line:\n # The index of the old type hint, we will either move this line or replace it\n type_index = index\n break\n if next_param_index is None:\n # In case the current field is the last param, we just append the type at the very end of lines\n next_param_index = len(lines)\n # For some params, the type line is not automatically generated and thus the type_index might be `None`\n if type_index is not None:\n # We delete the old type index, because we will replace it with the new type line\n del lines[type_index]\n # Insert the new type line just before the next param\n lines.insert(next_param_index, type_line)\n return lines\n\n\n# -- Source Code links to GitHub ---------------------------------------------\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Link source code to GitHub.\"\"\"\n if domain != 'py' or not info['module']:\n return None\n filename = info['module'].replace('.', '/')\n module = importlib.import_module(info['module'])\n basename = os.path.splitext(module.__file__)[0]\n if basename.endswith('__init__'):\n filename += '/__init__'\n item = module\n line_number_reference = ''\n for piece in info['fullname'].split('.'):\n item = getattr(item, piece)\n try:\n line_number_reference = f'#L{inspect.getsourcelines(item)[1]}'\n except (TypeError, IOError):\n pass\n return f\"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}\"\n\n# -- Link targets ------------------------------------------------------------\n\n\n# pylint: disable=abstract-method\nclass PatchedHTMLTranslator(HTMLTranslator):\n \"\"\"Open external links in a new tab\"\"\"\n\n def visit_reference(self, node):\n if (\n node.get('newtab') or\n not (\n node.get('target') or\n node.get('internal') or\n 'refuri' not in node\n )\n ):\n node['target'] = '_blank'\n super().visit_reference(node)\n", "path": "sphinx/conf.py"}]} | 2,975 | 368 |
gh_patches_debug_18530 | rasdani/github-patches | git_diff | pulp__pulpcore-5377 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
</issue>
<code>
[start of pulpcore/tasking/util.py]
1 import logging
2 from gettext import gettext as _
3
4 from django.db import transaction
5 from django.db import connection
6
7 from pulpcore.app.models import Task
8 from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES
9
10 _logger = logging.getLogger(__name__)
11
12
13 def cancel(task_id):
14 """
15 Cancel the task that is represented by the given task_id.
16
17 This method cancels only the task with given task_id, not the spawned tasks. This also updates
18 task's state to either 'canceled' or 'canceling'.
19
20 Args:
21 task_id (str): The ID of the task you wish to cancel
22
23 Raises:
24 rest_framework.exceptions.NotFound: If a task with given task_id does not exist
25 """
26 task_status = Task.objects.get(pk=task_id)
27
28 if task_status.state in TASK_FINAL_STATES:
29 # If the task is already done, just stop
30 _logger.debug(
31 "Task [{task_id}] already in a final state: {state}".format(
32 task_id=task_id, state=task_status.state
33 )
34 )
35 return task_status
36
37 _logger.info(_("Canceling task: {id}").format(id=task_id))
38
39 task = task_status
40 # This is the only valid transition without holding the task lock
41 rows = Task.objects.filter(pk=task.pk, state__in=TASK_INCOMPLETE_STATES).update(
42 state=TASK_STATES.CANCELING
43 )
44 # Notify the worker that might be running that task and other workers to clean up
45 with connection.cursor() as cursor:
46 cursor.execute("SELECT pg_notify('pulp_worker_cancel', %s)", (str(task.pk),))
47 cursor.execute("NOTIFY pulp_worker_wakeup")
48 if rows == 1:
49 task.refresh_from_db()
50 return task
51
52
53 def _delete_incomplete_resources(task):
54 """
55 Delete all incomplete created-resources on a canceled task.
56
57 Args:
58 task (Task): A task.
59 """
60 if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:
61 raise RuntimeError(_("Task must be canceled."))
62 for model in (r.content_object for r in task.created_resources.all()):
63 try:
64 if model.complete:
65 continue
66 except AttributeError:
67 continue
68 try:
69 with transaction.atomic():
70 model.delete()
71 except Exception as error:
72 _logger.error(_("Delete created resource, failed: {}").format(str(error)))
73
[end of pulpcore/tasking/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -4,7 +4,7 @@
from django.db import transaction
from django.db import connection
-from pulpcore.app.models import Task
+from pulpcore.app.models import Artifact, Content, Task
from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES
_logger = logging.getLogger(__name__)
@@ -60,6 +60,8 @@
if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:
raise RuntimeError(_("Task must be canceled."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| {"golden_diff": "diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py\n--- a/pulpcore/tasking/util.py\n+++ b/pulpcore/tasking/util.py\n@@ -4,7 +4,7 @@\n from django.db import transaction\n from django.db import connection\n \n-from pulpcore.app.models import Task\n+from pulpcore.app.models import Artifact, Content, Task\n from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES\n \n _logger = logging.getLogger(__name__)\n@@ -60,6 +60,8 @@\n if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:\n raise RuntimeError(_(\"Task must be canceled.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n+ if isinstance(model, (Artifact, Content)):\n+ continue\n try:\n if model.complete:\n continue\n", "issue": "Task cleanup must not delete content nor artifacts\nDeleting content or artifacts outside of orphan cleanup is breaking the rules.\r\nAnd no, we cannot get away with that.\r\n\n", "before_files": [{"content": "import logging\nfrom gettext import gettext as _\n\nfrom django.db import transaction\nfrom django.db import connection\n\nfrom pulpcore.app.models import Task\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES\n\n_logger = logging.getLogger(__name__)\n\n\ndef cancel(task_id):\n \"\"\"\n Cancel the task that is represented by the given task_id.\n\n This method cancels only the task with given task_id, not the spawned tasks. This also updates\n task's state to either 'canceled' or 'canceling'.\n\n Args:\n task_id (str): The ID of the task you wish to cancel\n\n Raises:\n rest_framework.exceptions.NotFound: If a task with given task_id does not exist\n \"\"\"\n task_status = Task.objects.get(pk=task_id)\n\n if task_status.state in TASK_FINAL_STATES:\n # If the task is already done, just stop\n _logger.debug(\n \"Task [{task_id}] already in a final state: {state}\".format(\n task_id=task_id, state=task_status.state\n )\n )\n return task_status\n\n _logger.info(_(\"Canceling task: {id}\").format(id=task_id))\n\n task = task_status\n # This is the only valid transition without holding the task lock\n rows = Task.objects.filter(pk=task.pk, state__in=TASK_INCOMPLETE_STATES).update(\n state=TASK_STATES.CANCELING\n )\n # Notify the worker that might be running that task and other workers to clean up\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_notify('pulp_worker_cancel', %s)\", (str(task.pk),))\n cursor.execute(\"NOTIFY pulp_worker_wakeup\")\n if rows == 1:\n task.refresh_from_db()\n return task\n\n\ndef _delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:\n raise RuntimeError(_(\"Task must be canceled.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n", "path": "pulpcore/tasking/util.py"}]} | 1,232 | 190 |
gh_patches_debug_10535 | rasdani/github-patches | git_diff | oppia__oppia-16730 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request]: Consider allowing Python version to be 3.8.x where x >= 12
### Is there an existing issue for this?
- [X] I have searched the existing issues
### A clear and concise description of what you want to happen.
In https://github.com/oppia/oppia/issues/16436, @sagangwee noted that he couldn't get Python 3.8.12 to work on his machine, but Python 3.8.13 worked fine. We might want to consider allowing 3.8.13 etc. in the start script checks for local developers.
### Describe the solution you'd like
Consider expanding the start.py script checks to include later versions of Python that are still on the 3.8.x series.
### Describe alternatives you've considered
We could leave the existing behaviour as-is, but this poses serious problems for developers who can't install 3.8.12.
### Additional context
_No response_
</issue>
<code>
[start of scripts/setup.py]
1 # Copyright 2019 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the 'License');
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an 'AS-IS' BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Python execution environment set up for all scripts."""
16
17 from __future__ import annotations
18
19 import argparse
20 import os
21 import subprocess
22 import sys
23 import tarfile
24
25 from typing import Final, List, Optional
26
27 from . import clean
28 from . import common
29
30 _PARSER: Final = argparse.ArgumentParser(
31 description="""
32 Python execution environment set up for all scripts.
33 """)
34
35
36 def create_directory(directory_path: str) -> None:
37 """Creates a new directory. Does not do anything if directory already
38 exists.
39
40 Args:
41 directory_path: str. Directory path to be created.
42 """
43 if os.path.exists(directory_path):
44 return
45 os.makedirs(directory_path)
46
47
48 # This function takes a command for python as its only input.
49 # It checks this input for a specific version of python and returns false
50 # if it does not match the expected prefix.
51 def test_python_version() -> None:
52 running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)
53 if running_python_version != '3.8.12':
54 print('Please use Python 3.8.12. Exiting...')
55 # If OS is Windows, print helpful error message about adding Python to
56 # path.
57 if common.is_windows_os():
58 common.print_each_string_after_two_new_lines([
59 'It looks like you are using Windows. If you have Python '
60 'installed,',
61 'make sure it is in your PATH and that PYTHONPATH is set.',
62 'If you have two versions of Python (ie, Python 2.7 and 3), '
63 'specify 2.7 before other versions of Python when setting the '
64 'PATH.',
65 'Here are some helpful articles:',
66 'http://docs.python-guide.org/en/latest/starting/install/win/',
67 'https://stackoverflow.com/questions/3701646/how-to-add-to-the-'
68 'pythonpath-in-windows-7'])
69 # Exit when no suitable Python environment can be found.
70 raise Exception('No suitable python version found.')
71
72 # Verify that Python 2 is available. Python 2 is needed for the
73 # app_devserver. See the Google Cloud docs:
74 # https://cloud.google.com/appengine/docs/standard/python3/testing-and-deploying-your-app#local-dev-server
75 return_code = subprocess.call(
76 'python2 -V', stderr=subprocess.DEVNULL, shell=True
77 )
78 if return_code != 0:
79 print(
80 '\033[91m'
81 'The Oppia server needs Python 2 to be installed. '
82 'Please follow the instructions at '
83 'https://github.com/oppia/oppia/wiki/Troubleshooting#'
84 'python-2-is-not-available to fix this.'
85 '\033[0m'
86 )
87 sys.exit(1)
88
89
90 def download_and_install_package(url_to_retrieve: str, filename: str) -> None:
91 """Downloads and installs package in Oppia tools directory.
92
93 Args:
94 url_to_retrieve: string. The url from which package is to be
95 downloaded.
96 filename: string. The name of the tar file.
97 """
98 common.url_retrieve(url_to_retrieve, filename)
99 tar = tarfile.open(name=filename)
100 tar.extractall(path=common.OPPIA_TOOLS_DIR)
101 tar.close()
102 rename_yarn_folder(filename, common.OPPIA_TOOLS_DIR)
103 os.remove(filename)
104
105
106 def rename_yarn_folder(filename: str, path: str) -> None:
107 """Removes the `v` from the yarn folder name.
108
109 Args:
110 filename: string. The name of the tar file.
111 path: string. The path of the yarn file.
112 """
113 if 'yarn' in filename:
114 old_name = filename.split('.tar.gz')[0]
115 new_name = ''.join(old_name.split('v'))
116 os.rename(path + '/' + old_name, path + '/' + new_name)
117
118
119 def download_and_install_node() -> None:
120 """Download and install node to Oppia tools directory."""
121 outfile_name = 'node-download'
122
123 if common.is_windows_os():
124 if common.is_x64_architecture():
125 architecture = 'x64'
126 else:
127 architecture = 'x86'
128
129 extension = '.zip'
130 node_file_name = 'node-v%s-win-%s' % (
131 common.NODE_VERSION, architecture)
132 url_to_retrieve = 'https://nodejs.org/dist/v%s/%s%s' % (
133 common.NODE_VERSION, node_file_name, extension)
134 common.url_retrieve(url_to_retrieve, outfile_name)
135 subprocess.check_call(
136 ['powershell.exe', '-c', 'expand-archive',
137 outfile_name, '-DestinationPath',
138 common.OPPIA_TOOLS_DIR])
139 else:
140 extension = '.tar.gz'
141 if common.is_x64_architecture():
142 if common.is_mac_os():
143 node_file_name = 'node-v%s-darwin-x64' % (common.NODE_VERSION)
144 elif common.is_linux_os():
145 node_file_name = 'node-v%s-linux-x64' % (common.NODE_VERSION)
146 # Oppia only supports windows, mac and linux operating systems.
147 else:
148 raise Exception(
149 'System\'s Operating System is not compatible.')
150 else:
151 node_file_name = 'node-v%s' % common.NODE_VERSION
152 download_and_install_package(
153 'https://nodejs.org/dist/v%s/%s%s' % (
154 common.NODE_VERSION, node_file_name, extension),
155 outfile_name)
156 os.rename(
157 os.path.join(common.OPPIA_TOOLS_DIR, node_file_name),
158 common.NODE_PATH)
159 if node_file_name == 'node-v%s' % common.NODE_VERSION:
160 with common.CD(common.NODE_PATH):
161 subprocess.check_call(['./configure'])
162 subprocess.check_call(['make'])
163
164
165 def main(args: Optional[List[str]] = None) -> None:
166 """Runs the script to setup Oppia."""
167 unused_parsed_args = _PARSER.parse_args(args=args)
168 test_python_version()
169
170 # The second option allows this script to also be run from deployment
171 # folders.
172 if not os.getcwd().endswith(('oppia', 'deploy-')):
173 print('')
174 print('WARNING This script should be run from the oppia/ root folder.')
175 print('')
176 raise Exception('Invalid root directory.')
177
178 # Set COMMON_DIR to the absolute path of the directory above OPPIA_DIR. This
179 # is necessary because COMMON_DIR (or subsequent variables which refer to it)
180 # may use it in a situation where relative paths won't work as expected (such
181 # as $PYTHONPATH).
182 create_directory(common.OPPIA_TOOLS_DIR)
183 create_directory(common.THIRD_PARTY_DIR)
184 common.create_readme(
185 common.THIRD_PARTY_DIR,
186 'This folder contains third party libraries used in Oppia codebase.\n'
187 'You can regenerate this folder by deleting it and then running '
188 'the start.py script.\n')
189 create_directory(common.NODE_MODULES_PATH)
190 common.create_readme(
191 common.NODE_MODULES_PATH,
192 'This folder contains node utilities used in Oppia codebase.\n'
193 'You can regenerate this folder by deleting it and then running '
194 'the start.py script.\n')
195
196 # Download and install node.js.
197 print('Checking if node.js is installed in %s' % common.OPPIA_TOOLS_DIR)
198 if not os.path.exists(common.NODE_PATH):
199 print('Installing Node.js')
200 download_and_install_node()
201 # Change ownership of node_modules.
202 # Note: on some machines, these commands seem to take quite a long time.
203 if not common.is_windows_os():
204 common.recursive_chown(common.NODE_MODULES_PATH, os.getuid(), -1)
205 common.recursive_chmod(common.NODE_MODULES_PATH, 0o744)
206
207 # Download and install yarn.
208 print('Checking if yarn is installed in %s' % common.OPPIA_TOOLS_DIR)
209 if not os.path.exists(common.YARN_PATH):
210 print('Removing package-lock.json')
211 clean.delete_file('package-lock.json')
212 common.print_each_string_after_two_new_lines([
213 'Installing yarn',
214 'WARNING: Please note that Oppia uses Yarn to manage node packages',
215 'do *NOT* use npm. For more information on how to use yarn,',
216 'visit https://yarnpkg.com/en/docs/usage.'])
217
218 # NB: Update .yarnrc if the yarn version below is changed.
219 yarn_file_name = 'yarn-v%s.tar.gz' % common.YARN_VERSION
220 download_and_install_package(
221 'https://github.com/yarnpkg/yarn/releases/download/v%s/%s'
222 % (common.YARN_VERSION, yarn_file_name), yarn_file_name)
223
224 print('Environment setup completed.')
225
226
227 # The 'no coverage' pragma is used as this line is un-testable. This is because
228 # it will only be called when setup.py is used as a script.
229 if __name__ == '__main__': # pragma: no cover
230 main()
231
[end of scripts/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/setup.py b/scripts/setup.py
--- a/scripts/setup.py
+++ b/scripts/setup.py
@@ -50,8 +50,8 @@
# if it does not match the expected prefix.
def test_python_version() -> None:
running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)
- if running_python_version != '3.8.12':
- print('Please use Python 3.8.12. Exiting...')
+ if running_python_version != '3.8.15':
+ print('Please use Python 3.8.15. Exiting...')
# If OS is Windows, print helpful error message about adding Python to
# path.
if common.is_windows_os():
| {"golden_diff": "diff --git a/scripts/setup.py b/scripts/setup.py\n--- a/scripts/setup.py\n+++ b/scripts/setup.py\n@@ -50,8 +50,8 @@\n # if it does not match the expected prefix.\n def test_python_version() -> None:\n running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)\n- if running_python_version != '3.8.12':\n- print('Please use Python 3.8.12. Exiting...')\n+ if running_python_version != '3.8.15':\n+ print('Please use Python 3.8.15. Exiting...')\n # If OS is Windows, print helpful error message about adding Python to\n # path.\n if common.is_windows_os():\n", "issue": "[Feature Request]: Consider allowing Python version to be 3.8.x where x >= 12\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### A clear and concise description of what you want to happen.\n\nIn https://github.com/oppia/oppia/issues/16436, @sagangwee noted that he couldn't get Python 3.8.12 to work on his machine, but Python 3.8.13 worked fine. We might want to consider allowing 3.8.13 etc. in the start script checks for local developers.\n\n### Describe the solution you'd like\n\nConsider expanding the start.py script checks to include later versions of Python that are still on the 3.8.x series.\n\n### Describe alternatives you've considered\n\nWe could leave the existing behaviour as-is, but this poses serious problems for developers who can't install 3.8.12. \n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright 2019 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the 'License');\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an 'AS-IS' BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python execution environent set up for all scripts.\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport os\nimport subprocess\nimport sys\nimport tarfile\n\nfrom typing import Final, List, Optional\n\nfrom . import clean\nfrom . import common\n\n_PARSER: Final = argparse.ArgumentParser(\n description=\"\"\"\nPython execution environent set up for all scripts.\n\"\"\")\n\n\ndef create_directory(directory_path: str) -> None:\n \"\"\"Creates a new directory. Does not do anything if directory already\n exists.\n\n Args:\n directory_path: str. Directory path to be created.\n \"\"\"\n if os.path.exists(directory_path):\n return\n os.makedirs(directory_path)\n\n\n# This function takes a command for python as its only input.\n# It checks this input for a specific version of python and returns false\n# if it does not match the expected prefix.\ndef test_python_version() -> None:\n running_python_version = '{0[0]}.{0[1]}.{0[2]}'.format(sys.version_info)\n if running_python_version != '3.8.12':\n print('Please use Python 3.8.12. Exiting...')\n # If OS is Windows, print helpful error message about adding Python to\n # path.\n if common.is_windows_os():\n common.print_each_string_after_two_new_lines([\n 'It looks like you are using Windows. 
If you have Python '\n 'installed,',\n 'make sure it is in your PATH and that PYTHONPATH is set.',\n 'If you have two versions of Python (ie, Python 2.7 and 3), '\n 'specify 2.7 before other versions of Python when setting the '\n 'PATH.',\n 'Here are some helpful articles:',\n 'http://docs.python-guide.org/en/latest/starting/install/win/',\n 'https://stackoverflow.com/questions/3701646/how-to-add-to-the-'\n 'pythonpath-in-windows-7'])\n # Exit when no suitable Python environment can be found.\n raise Exception('No suitable python version found.')\n\n # Verify that Python 2 is available. Python 2 is needed for the\n # app_devserver. See the Google Cloud docs:\n # https://cloud.google.com/appengine/docs/standard/python3/testing-and-deploying-your-app#local-dev-server\n return_code = subprocess.call(\n 'python2 -V', stderr=subprocess.DEVNULL, shell=True\n )\n if return_code != 0:\n print(\n '\\033[91m'\n 'The Oppia server needs Python 2 to be installed. '\n 'Please follow the instructions at '\n 'https://github.com/oppia/oppia/wiki/Troubleshooting#'\n 'python-2-is-not-available to fix this.'\n '\\033[0m'\n )\n sys.exit(1)\n\n\ndef download_and_install_package(url_to_retrieve: str, filename: str) -> None:\n \"\"\"Downloads and installs package in Oppia tools directory.\n\n Args:\n url_to_retrieve: string. The url from which package is to be\n downloaded.\n filename: string. The name of the tar file.\n \"\"\"\n common.url_retrieve(url_to_retrieve, filename)\n tar = tarfile.open(name=filename)\n tar.extractall(path=common.OPPIA_TOOLS_DIR)\n tar.close()\n rename_yarn_folder(filename, common.OPPIA_TOOLS_DIR)\n os.remove(filename)\n\n\ndef rename_yarn_folder(filename: str, path: str) -> None:\n \"\"\"Removes the `v` from the yarn folder name.\n\n Args:\n filename: string. The name of the tar file.\n path: string. 
The path of the yarn file.\n \"\"\"\n if 'yarn' in filename:\n old_name = filename.split('.tar.gz')[0]\n new_name = ''.join(old_name.split('v'))\n os.rename(path + '/' + old_name, path + '/' + new_name)\n\n\ndef download_and_install_node() -> None:\n \"\"\"Download and install node to Oppia tools directory.\"\"\"\n outfile_name = 'node-download'\n\n if common.is_windows_os():\n if common.is_x64_architecture():\n architecture = 'x64'\n else:\n architecture = 'x86'\n\n extension = '.zip'\n node_file_name = 'node-v%s-win-%s' % (\n common.NODE_VERSION, architecture)\n url_to_retrieve = 'https://nodejs.org/dist/v%s/%s%s' % (\n common.NODE_VERSION, node_file_name, extension)\n common.url_retrieve(url_to_retrieve, outfile_name)\n subprocess.check_call(\n ['powershell.exe', '-c', 'expand-archive',\n outfile_name, '-DestinationPath',\n common.OPPIA_TOOLS_DIR])\n else:\n extension = '.tar.gz'\n if common.is_x64_architecture():\n if common.is_mac_os():\n node_file_name = 'node-v%s-darwin-x64' % (common.NODE_VERSION)\n elif common.is_linux_os():\n node_file_name = 'node-v%s-linux-x64' % (common.NODE_VERSION)\n # Oppia only suppports windows, mac and linux operating systems.\n else:\n raise Exception(\n 'System\\'s Operating System is not compatible.')\n else:\n node_file_name = 'node-v%s' % common.NODE_VERSION\n download_and_install_package(\n 'https://nodejs.org/dist/v%s/%s%s' % (\n common.NODE_VERSION, node_file_name, extension),\n outfile_name)\n os.rename(\n os.path.join(common.OPPIA_TOOLS_DIR, node_file_name),\n common.NODE_PATH)\n if node_file_name == 'node-v%s' % common.NODE_VERSION:\n with common.CD(common.NODE_PATH):\n subprocess.check_call(['./configure'])\n subprocess.check_call(['make'])\n\n\ndef main(args: Optional[List[str]] = None) -> None:\n \"\"\"Runs the script to setup Oppia.\"\"\"\n unused_parsed_args = _PARSER.parse_args(args=args)\n test_python_version()\n\n # The second option allows this script to also be run from deployment\n # folders.\n if not os.getcwd().endswith(('oppia', 'deploy-')):\n print('')\n print('WARNING This script should be run from the oppia/ root folder.')\n print('')\n raise Exception('Invalid root directory.')\n\n # Set COMMON_DIR to the absolute path of the directory above OPPIA_DIR. 
This\n # is necessary becaue COMMON_DIR (or subsequent variables which refer to it)\n # may use it in a situation where relative paths won't work as expected(such\n # as $PYTHONPATH).\n create_directory(common.OPPIA_TOOLS_DIR)\n create_directory(common.THIRD_PARTY_DIR)\n common.create_readme(\n common.THIRD_PARTY_DIR,\n 'This folder contains third party libraries used in Oppia codebase.\\n'\n 'You can regenerate this folder by deleting it and then running '\n 'the start.py script.\\n')\n create_directory(common.NODE_MODULES_PATH)\n common.create_readme(\n common.NODE_MODULES_PATH,\n 'This folder contains node utilities used in Oppia codebase.\\n'\n 'You can regenerate this folder by deleting it and then running '\n 'the start.py script.\\n')\n\n # Download and install node.js.\n print('Checking if node.js is installed in %s' % common.OPPIA_TOOLS_DIR)\n if not os.path.exists(common.NODE_PATH):\n print('Installing Node.js')\n download_and_install_node()\n # Change ownership of node_modules.\n # Note: on some machines, these commands seem to take quite a long time.\n if not common.is_windows_os():\n common.recursive_chown(common.NODE_MODULES_PATH, os.getuid(), -1)\n common.recursive_chmod(common.NODE_MODULES_PATH, 0o744)\n\n # Download and install yarn.\n print('Checking if yarn is installed in %s' % common.OPPIA_TOOLS_DIR)\n if not os.path.exists(common.YARN_PATH):\n print('Removing package-lock.json')\n clean.delete_file('package-lock.json')\n common.print_each_string_after_two_new_lines([\n 'Installing yarn',\n 'WARNING: Please note that Oppia uses Yarn to manage node packages',\n 'do *NOT* use npm. For more information on how to use yarn,',\n 'visit https://yarnpkg.com/en/docs/usage.'])\n\n # NB: Update .yarnrc if the yarn version below is changed.\n yarn_file_name = 'yarn-v%s.tar.gz' % common.YARN_VERSION\n download_and_install_package(\n 'https://github.com/yarnpkg/yarn/releases/download/v%s/%s'\n % (common.YARN_VERSION, yarn_file_name), yarn_file_name)\n\n print('Environment setup completed.')\n\n\n# The 'no coverage' pragma is used as this line is un-testable. This is because\n# it will only be called when setup.py is used as a script.\nif __name__ == '__main__': # pragma: no cover\n main()\n", "path": "scripts/setup.py"}]} | 3,433 | 171 |
gh_patches_debug_3397 | rasdani/github-patches | git_diff | Netflix__lemur-238 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Creating an authority does not allow others with the role to issue certificates
When creating an authority, currently only the creator can see it; anyone with the owning role should be able to see and use the certificate.
Currently, even when a valid role is assigned and the user can see the authority, they cannot use it because they cannot access the authority's key.
</issue>
<code>
[start of lemur/authorities/service.py]
1 """
2 .. module: lemur.authorities.service
3 :platform: Unix
4 :synopsis: This module contains all of the services level functions used to
5 administer authorities in Lemur
6 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
7 :license: Apache, see LICENSE for more details.
8 .. moduleauthor:: Kevin Glisson <[email protected]>
9
10 """
11 from flask import g
12 from flask import current_app
13
14 from lemur import database
15 from lemur.authorities.models import Authority
16 from lemur.roles import service as role_service
17 from lemur.notifications import service as notification_service
18
19 from lemur.roles.models import Role
20 from lemur.certificates.models import Certificate
21
22 from lemur.plugins.base import plugins
23
24
25 def update(authority_id, description=None, owner=None, active=None, roles=None):
26 """
27 Update an authority with new values.
28
29 :param authority_id:
30 :param roles: roles that are allowed to use this authority
31 :return:
32 """
33 authority = get(authority_id)
34 if roles:
35 authority = database.update_list(authority, 'roles', Role, roles)
36
37 if active:
38 authority.active = active
39
40 authority.description = description
41 authority.owner = owner
42 return database.update(authority)
43
44
45 def create(kwargs):
46 """
47 Create a new authority.
48
49 :return:
50 """
51
52 issuer = plugins.get(kwargs.get('pluginName'))
53
54 kwargs['creator'] = g.current_user.email
55 cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)
56
57 cert = Certificate(cert_body, chain=intermediate)
58 cert.owner = kwargs['ownerEmail']
59
60 if kwargs['caType'] == 'subca':
61 cert.description = "This is the ROOT certificate for the {0} sub certificate authority the parent \
62 authority is {1}.".format(kwargs.get('caName'), kwargs.get('caParent'))
63 else:
64 cert.description = "This is the ROOT certificate for the {0} certificate authority.".format(
65 kwargs.get('caName')
66 )
67
68 cert.user = g.current_user
69
70 cert.notifications = notification_service.create_default_expiration_notifications(
71 'DEFAULT_SECURITY',
72 current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')
73 )
74
75 # we create and attach any roles that the issuer gives us
76 role_objs = []
77 for r in issuer_roles:
78
79 role = role_service.create(
80 r['name'],
81 password=r['password'],
82 description="{0} auto generated role".format(kwargs.get('pluginName')),
83 username=r['username'])
84
85 # the user creating the authority should be able to administer it
86 if role.username == 'admin':
87 g.current_user.roles.append(role)
88
89 role_objs.append(role)
90
91 authority = Authority(
92 kwargs.get('caName'),
93 kwargs['ownerEmail'],
94 kwargs['pluginName'],
95 cert_body,
96 description=kwargs['caDescription'],
97 chain=intermediate,
98 roles=role_objs
99 )
100
101 database.update(cert)
102 authority = database.create(authority)
103
104 g.current_user.authorities.append(authority)
105
106 return authority
107
108
109 def get_all():
110 """
111 Get all authorities that are currently in Lemur.
112
113 :rtype : List
114 :return:
115 """
116 query = database.session_query(Authority)
117 return database.find_all(query, Authority, {}).all()
118
119
120 def get(authority_id):
121 """
122 Retrieves an authority given its ID
123
124 :param authority_id:
125 :return:
126 """
127 return database.get(Authority, authority_id)
128
129
130 def get_by_name(authority_name):
131 """
132 Retrieves an authority given its name.
133
134 :param authority_name:
135 :return:
136 """
137 return database.get(Authority, authority_name, field='name')
138
139
140 def get_authority_role(ca_name):
141 """
142 Attempts to get the authority role for a given ca, using current_user
143 as a basis for accomplishing that.
144
145 :param ca_name:
146 """
147 if g.current_user.is_admin:
148 authority = get_by_name(ca_name)
149 # TODO we should pick admin ca roles for admin
150 return authority.roles[0]
151 else:
152 for role in g.current_user.roles:
153 if role.authority:
154 if role.authority.name == ca_name:
155 return role
156
157
158 def render(args):
159 """
160 Helper that helps us render the REST Api responses.
161 :param args:
162 :return:
163 """
164 query = database.session_query(Authority)
165 sort_by = args.pop('sort_by')
166 sort_dir = args.pop('sort_dir')
167 page = args.pop('page')
168 count = args.pop('count')
169 filt = args.pop('filter')
170
171 if filt:
172 terms = filt.split(';')
173 if 'active' in filt: # this is really weird but strcmp seems to not work here??
174 query = query.filter(Authority.active == terms[1])
175 else:
176 query = database.filter(query, Authority, terms)
177
178 # we make sure that a user can only use an authority they either own or are a member of - admins can see all
179 if not g.current_user.is_admin:
180 authority_ids = []
181 for role in g.current_user.roles:
182 if role.authority:
183 authority_ids.append(role.authority.id)
184 query = query.filter(Authority.id.in_(authority_ids))
185
186 query = database.find_all(query, Authority, args)
187
188 if sort_by and sort_dir:
189 query = database.sort(query, Authority, sort_by, sort_dir)
190
191 return database.paginate(query, page, count)
192
[end of lemur/authorities/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py
--- a/lemur/authorities/service.py
+++ b/lemur/authorities/service.py
@@ -101,6 +101,10 @@
database.update(cert)
authority = database.create(authority)
+ # the owning dl or role should have this authority associated with it
+ owner_role = role_service.get_by_name(kwargs['ownerEmail'])
+ owner_role.authority = authority
+
g.current_user.authorities.append(authority)
return authority
| {"golden_diff": "diff --git a/lemur/authorities/service.py b/lemur/authorities/service.py\n--- a/lemur/authorities/service.py\n+++ b/lemur/authorities/service.py\n@@ -101,6 +101,10 @@\n database.update(cert)\n authority = database.create(authority)\n \n+ # the owning dl or role should have this authority associated with it\n+ owner_role = role_service.get_by_name(kwargs['ownerEmail'])\n+ owner_role.authority = authority\n+\n g.current_user.authorities.append(authority)\n \n return authority\n", "issue": "Creating an authority does not allow others with the role to issue certificates\nWhen creating an authority currently only the creator can see the authority, anyone with the owning role should be able to see and use the certificate.\n\nCurrently even when a valid role is assigned and the user can see the authority they cannot use it because the cannot access the authorities key.\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.authorities.service\n :platform: Unix\n :synopsis: This module contains all of the services level functions used to\n administer authorities in Lemur\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nfrom flask import g\nfrom flask import current_app\n\nfrom lemur import database\nfrom lemur.authorities.models import Authority\nfrom lemur.roles import service as role_service\nfrom lemur.notifications import service as notification_service\n\nfrom lemur.roles.models import Role\nfrom lemur.certificates.models import Certificate\n\nfrom lemur.plugins.base import plugins\n\n\ndef update(authority_id, description=None, owner=None, active=None, roles=None):\n \"\"\"\n Update a an authority with new values.\n\n :param authority_id:\n :param roles: roles that are allowed to use this authority\n :return:\n \"\"\"\n authority = get(authority_id)\n if roles:\n authority = database.update_list(authority, 'roles', Role, roles)\n\n if active:\n authority.active = active\n\n authority.description = description\n authority.owner = owner\n return database.update(authority)\n\n\ndef create(kwargs):\n \"\"\"\n Create a new authority.\n\n :return:\n \"\"\"\n\n issuer = plugins.get(kwargs.get('pluginName'))\n\n kwargs['creator'] = g.current_user.email\n cert_body, intermediate, issuer_roles = issuer.create_authority(kwargs)\n\n cert = Certificate(cert_body, chain=intermediate)\n cert.owner = kwargs['ownerEmail']\n\n if kwargs['caType'] == 'subca':\n cert.description = \"This is the ROOT certificate for the {0} sub certificate authority the parent \\\n authority is {1}.\".format(kwargs.get('caName'), kwargs.get('caParent'))\n else:\n cert.description = \"This is the ROOT certificate for the {0} certificate authority.\".format(\n kwargs.get('caName')\n )\n\n cert.user = g.current_user\n\n cert.notifications = notification_service.create_default_expiration_notifications(\n 'DEFAULT_SECURITY',\n current_app.config.get('LEMUR_SECURITY_TEAM_EMAIL')\n )\n\n # we create and attach any roles that the issuer gives us\n role_objs = []\n for r in issuer_roles:\n\n role = role_service.create(\n r['name'],\n password=r['password'],\n description=\"{0} auto generated role\".format(kwargs.get('pluginName')),\n username=r['username'])\n\n # the user creating the authority should be able to administer it\n if role.username == 'admin':\n g.current_user.roles.append(role)\n\n role_objs.append(role)\n\n authority = Authority(\n kwargs.get('caName'),\n 
kwargs['ownerEmail'],\n kwargs['pluginName'],\n cert_body,\n description=kwargs['caDescription'],\n chain=intermediate,\n roles=role_objs\n )\n\n database.update(cert)\n authority = database.create(authority)\n\n g.current_user.authorities.append(authority)\n\n return authority\n\n\ndef get_all():\n \"\"\"\n Get all authorities that are currently in Lemur.\n\n :rtype : List\n :return:\n \"\"\"\n query = database.session_query(Authority)\n return database.find_all(query, Authority, {}).all()\n\n\ndef get(authority_id):\n \"\"\"\n Retrieves an authority given it's ID\n\n :param authority_id:\n :return:\n \"\"\"\n return database.get(Authority, authority_id)\n\n\ndef get_by_name(authority_name):\n \"\"\"\n Retrieves an authority given it's name.\n\n :param authority_name:\n :return:\n \"\"\"\n return database.get(Authority, authority_name, field='name')\n\n\ndef get_authority_role(ca_name):\n \"\"\"\n Attempts to get the authority role for a given ca uses current_user\n as a basis for accomplishing that.\n\n :param ca_name:\n \"\"\"\n if g.current_user.is_admin:\n authority = get_by_name(ca_name)\n # TODO we should pick admin ca roles for admin\n return authority.roles[0]\n else:\n for role in g.current_user.roles:\n if role.authority:\n if role.authority.name == ca_name:\n return role\n\n\ndef render(args):\n \"\"\"\n Helper that helps us render the REST Api responses.\n :param args:\n :return:\n \"\"\"\n query = database.session_query(Authority)\n sort_by = args.pop('sort_by')\n sort_dir = args.pop('sort_dir')\n page = args.pop('page')\n count = args.pop('count')\n filt = args.pop('filter')\n\n if filt:\n terms = filt.split(';')\n if 'active' in filt: # this is really weird but strcmp seems to not work here??\n query = query.filter(Authority.active == terms[1])\n else:\n query = database.filter(query, Authority, terms)\n\n # we make sure that a user can only use an authority they either own are are a member of - admins can see all\n if not g.current_user.is_admin:\n authority_ids = []\n for role in g.current_user.roles:\n if role.authority:\n authority_ids.append(role.authority.id)\n query = query.filter(Authority.id.in_(authority_ids))\n\n query = database.find_all(query, Authority, args)\n\n if sort_by and sort_dir:\n query = database.sort(query, Authority, sort_by, sort_dir)\n\n return database.paginate(query, page, count)\n", "path": "lemur/authorities/service.py"}]} | 2,289 | 129 |
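
The fix above works by pointing the owner role at the freshly created authority, so that every holder of that role, not only the creator, can see and use it. A minimal, self-contained sketch of that ownership model in plain Python follows; the class and function names are illustrative stand-ins, not Lemur's actual SQLAlchemy models or service API.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Authority:
    name: str


@dataclass
class Role:
    name: str
    # Mirrors the golden diff: the owner role points at the authority it owns.
    authority: Optional[Authority] = None


@dataclass
class User:
    email: str
    roles: List[Role] = field(default_factory=list)


def create_authority(name: str, owner_role: Role) -> Authority:
    authority = Authority(name=name)
    # Key step from the fix: link the owning role to the new authority.
    owner_role.authority = authority
    return authority


def visible_authorities(user: User) -> List[Authority]:
    # Non-admin visibility: only authorities reachable through the user's roles.
    return [role.authority for role in user.roles if role.authority is not None]


if __name__ == "__main__":
    team = Role(name="[email protected]")
    creator = User(email="[email protected]", roles=[team])
    colleague = User(email="[email protected]", roles=[team])

    create_authority("internal-ca", owner_role=team)

    # Both members of the owning role can now see the authority.
    assert [a.name for a in visible_authorities(creator)] == ["internal-ca"]
    assert [a.name for a in visible_authorities(colleague)] == ["internal-ca"]
```
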
gh_patches_debug_11633 | rasdani/github-patches | git_diff | pypi__warehouse-1181 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Errors in celery don't get sent to Sentry
</issue>
<code>
[start of warehouse/celery.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import celery.backends
14
15 # We need to trick Celery into supporting rediss:// URLs which is how redis-py
16 # signals that you should use Redis with TLS.
17 celery.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
18
19 from celery import Celery, Task
20 from celery.backends.redis import RedisBackend as _RedisBackend
21 from celery.signals import celeryd_init
22 from pyramid import scripting
23 from pyramid.threadlocal import get_current_request
24
25 from warehouse.config import Environment, configure
26
27
28 @celeryd_init.connect
29 def _configure_celery(*args, **kwargs):
30 configure()
31
32
33 class TLSRedisBackend(_RedisBackend):
34
35 def _params_from_url(self, url, defaults):
36 params = super()._params_from_url(url, defaults)
37 params.update({"connection_class": self.redis.SSLConnection})
38 return params
39
40
41 class WarehouseTask(Task):
42
43 abstract = True
44
45 def __call__(self, *args, **kwargs):
46 registry = self.app.pyramid_config.registry
47 pyramid_env = scripting.prepare(registry=registry)
48
49 try:
50 return super().__call__(pyramid_env["request"], *args, **kwargs)
51 finally:
52 pyramid_env["closer"]()
53
54 def apply_async(self, *args, **kwargs):
55 # The API design of Celery makes this threadlocal pretty impossible to
56 # avoid :(
57 request = get_current_request()
58
59 # If for whatever reason we were unable to get a request we'll just
60 # skip this and call the original method to send this immediately.
61 if request is None or not hasattr(request, "tm"):
62 return super().apply_async(*args, **kwargs)
63
64 # This will break things that expect to get an AsyncResult because
65 # we're no longer going to be returning an async result from this when
66 # called from within a request, response cycle. Ideally we shouldn't be
67 # waiting for responses in a request/response cycle anyways though.
68 request.tm.get().addAfterCommitHook(
69 self._after_commit_hook,
70 args=args,
71 kws=kwargs,
72 )
73
74 def _after_commit_hook(self, success, *args, **kwargs):
75 if success:
76 super().apply_async(*args, **kwargs)
77
78
79 app = Celery("warehouse")
80 app.Task = WarehouseTask
81
82
83 task = app.task
84
85
86 def includeme(config):
87 s = config.registry.settings
88 app.pyramid_config = config
89 app.conf.update(
90 BROKER_URL=s["celery.broker_url"],
91 BROKER_USE_SSL=s["warehouse.env"] == Environment.production,
92 CELERY_DISABLE_RATE_LIMITS=True,
93 CELERY_RESULT_BACKEND=s["celery.result_url"],
94 CELERY_RESULT_SERIALIZER="json",
95 CELERY_TASK_SERIALIZER="json",
96 CELERY_ACCEPT_CONTENT=["json", "msgpack"],
97 CELERY_MESSAGE_COMPRESSION="gzip",
98 CELERY_QUEUE_HA_POLICY="all",
99 )
100
[end of warehouse/celery.py]
</code>
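
One detail worth spelling out from the listing above: `WarehouseTask.apply_async` does not enqueue the task immediately when called inside a web request. It registers an after-commit hook on the request's transaction manager and only dispatches once the transaction reports a successful commit. The sketch below reproduces that deferral pattern with a toy transaction object standing in for the real `transaction`/`pyramid_tm` machinery, so the names here are assumptions made for illustration only.

```python
from typing import Any, Callable, List, Optional, Tuple


class ToyTransaction:
    """Collects after-commit hooks and calls them with the commit outcome."""

    def __init__(self) -> None:
        self._hooks: List[Tuple[Callable[..., None], tuple, dict]] = []

    def add_after_commit_hook(
        self, hook: Callable[..., None], args: tuple = (), kws: Optional[dict] = None
    ) -> None:
        self._hooks.append((hook, args, kws or {}))

    def commit(self, success: bool) -> None:
        # As with addAfterCommitHook, each hook receives the commit status first.
        for hook, args, kws in self._hooks:
            hook(success, *args, **kws)


def send_task(name: str, *args: Any) -> None:
    print(f"queued {name} with args {args}")


def after_commit_hook(success: bool, name: str, *args: Any) -> None:
    # Mirrors WarehouseTask._after_commit_hook: dispatch only on a successful commit.
    if success:
        send_task(name, *args)


if __name__ == "__main__":
    committed = ToyTransaction()
    committed.add_after_commit_hook(after_commit_hook, args=("update-stats", 123))
    committed.commit(success=True)   # the task is queued

    rolled_back = ToyTransaction()
    rolled_back.add_after_commit_hook(after_commit_hook, args=("update-stats", 456))
    rolled_back.commit(success=False)  # nothing is queued
```
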
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/celery.py b/warehouse/celery.py
--- a/warehouse/celery.py
+++ b/warehouse/celery.py
@@ -21,13 +21,16 @@
from celery.signals import celeryd_init
from pyramid import scripting
from pyramid.threadlocal import get_current_request
+from raven.contrib.celery import register_signal, register_logger_signal
from warehouse.config import Environment, configure
@celeryd_init.connect
def _configure_celery(*args, **kwargs):
- configure()
+ config = configure()
+ register_logger_signal(config.registry["raven.client"])
+ register_signal(config.registry["raven.client"])
class TLSRedisBackend(_RedisBackend):
| {"golden_diff": "diff --git a/warehouse/celery.py b/warehouse/celery.py\n--- a/warehouse/celery.py\n+++ b/warehouse/celery.py\n@@ -21,13 +21,16 @@\n from celery.signals import celeryd_init\n from pyramid import scripting\n from pyramid.threadlocal import get_current_request\n+from raven.contrib.celery import register_signal, register_logger_signal\n \n from warehouse.config import Environment, configure\n \n \n @celeryd_init.connect\n def _configure_celery(*args, **kwargs):\n- configure()\n+ config = configure()\n+ register_logger_signal(config.registry[\"raven.client\"])\n+ register_signal(config.registry[\"raven.client\"])\n \n \n class TLSRedisBackend(_RedisBackend):\n", "issue": "Errors in celery don't get sent to Sentry\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport celery.backends\n\n# We need to trick Celery into supporting rediss:// URLs which is how redis-py\n# signals that you should use Redis with TLS.\ncelery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n\nfrom celery import Celery, Task\nfrom celery.backends.redis import RedisBackend as _RedisBackend\nfrom celery.signals import celeryd_init\nfrom pyramid import scripting\nfrom pyramid.threadlocal import get_current_request\n\nfrom warehouse.config import Environment, configure\n\n\n@celeryd_init.connect\ndef _configure_celery(*args, **kwargs):\n configure()\n\n\nclass TLSRedisBackend(_RedisBackend):\n\n def _params_from_url(self, url, defaults):\n params = super()._params_from_url(url, defaults)\n params.update({\"connection_class\": self.redis.SSLConnection})\n return params\n\n\nclass WarehouseTask(Task):\n\n abstract = True\n\n def __call__(self, *args, **kwargs):\n registry = self.app.pyramid_config.registry\n pyramid_env = scripting.prepare(registry=registry)\n\n try:\n return super().__call__(pyramid_env[\"request\"], *args, **kwargs)\n finally:\n pyramid_env[\"closer\"]()\n\n def apply_async(self, *args, **kwargs):\n # The API design of Celery makes this threadlocal pretty impossible to\n # avoid :(\n request = get_current_request()\n\n # If for whatever reason we were unable to get a request we'll just\n # skip this and call the original method to send this immediately.\n if request is None or not hasattr(request, \"tm\"):\n return super().apply_async(*args, **kwargs)\n\n # This will break things that expect to get an AsyncResult because\n # we're no longer going to be returning an async result from this when\n # called from within a request, response cycle. 
Ideally we shouldn't be\n # waiting for responses in a request/response cycle anyways though.\n request.tm.get().addAfterCommitHook(\n self._after_commit_hook,\n args=args,\n kws=kwargs,\n )\n\n def _after_commit_hook(self, success, *args, **kwargs):\n if success:\n super().apply_async(*args, **kwargs)\n\n\napp = Celery(\"warehouse\")\napp.Task = WarehouseTask\n\n\ntask = app.task\n\n\ndef includeme(config):\n s = config.registry.settings\n app.pyramid_config = config\n app.conf.update(\n BROKER_URL=s[\"celery.broker_url\"],\n BROKER_USE_SSL=s[\"warehouse.env\"] == Environment.production,\n CELERY_DISABLE_RATE_LIMITS=True,\n CELERY_RESULT_BACKEND=s[\"celery.result_url\"],\n CELERY_RESULT_SERIALIZER=\"json\",\n CELERY_TASK_SERIALIZER=\"json\",\n CELERY_ACCEPT_CONTENT=[\"json\", \"msgpack\"],\n CELERY_MESSAGE_COMPRESSION=\"gzip\",\n CELERY_QUEUE_HA_POLICY=\"all\",\n )\n", "path": "warehouse/celery.py"}]} | 1,498 | 158 |
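
The accepted change for the Sentry issue registers the Raven client with Celery's signals, which is what makes worker exceptions reach Sentry. A standalone sketch of that wiring is shown below; it assumes the legacy `raven` package is installed and uses a placeholder `SENTRY_DSN` environment variable rather than a real project key. Roughly, `register_signal` hooks task failures while `register_logger_signal` forwards ERROR-level log records, so both raised exceptions and logged errors are captured.

```python
import os

from celery import Celery
from raven import Client
from raven.contrib.celery import register_logger_signal, register_signal

app = Celery("example")

# Placeholder DSN taken from the environment; not a real project key.
sentry_client = Client(os.environ.get("SENTRY_DSN", ""))

# Forward ERROR-level log records emitted inside tasks to Sentry.
register_logger_signal(sentry_client)
# Report unhandled task exceptions (the task_failure signal) to Sentry.
register_signal(sentry_client)


@app.task
def boom() -> None:
    raise RuntimeError("this exception should now show up in Sentry")
```
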
gh_patches_debug_29413 | rasdani/github-patches | git_diff | freedomofpress__securedrop-7045 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
determine post-upgrade failure-mode for a SHA-1-signed submission key
## Description
After #6948 (for #6399), redwood will refuse to encrypt to a submission key with a SHA-1 signature.
After #6928, `securedrop-admin sdconfig` will reject a submission key with a SHA-1 signature. This check guarantees that new and reconfigured instances will comply with #6948.
What will happen to an instance with a SHA-1-signed submission key after upgrading to v2.7.0?
## Possible approaches
| Option | Documentation changes | Code changes | Implication |
| --- | --- | --- | --- |
| Fail open, but log | optional | ✓ | Admin must monitor logs and/or OSSEC alerts. |
| Fail open, but document | ✓ | ✗ | Admin must monitor release notes or check documentation. |
| Fail closed | optional | ✓[1] | Admin can contact us for help. |
**Notes:**
1. @legoktm observes that, without a code change to handle this case, Apache will come back up after reboot even if the `postinst` script fails under `unattended-upgrades`.
</issue>
<code>
[start of securedrop/journalist.py]
1 from encryption import EncryptionManager, GpgKeyNotFoundError
2 from execution import asynchronous
3 from journalist_app import create_app
4 from models import Source
5 from sdconfig import SecureDropConfig
6
7 config = SecureDropConfig.get_current()
8 # app is imported by journalist.wsgi
9 app = create_app(config)
10
11
12 @asynchronous
13 def prime_keycache() -> None:
14 """Pre-load the source public keys into Redis."""
15 with app.app_context():
16 encryption_mgr = EncryptionManager.get_default()
17 for source in Source.query.filter_by(pending=False, deleted_at=None).all():
18 try:
19 encryption_mgr.get_source_public_key(source.filesystem_id)
20 except GpgKeyNotFoundError:
21 pass
22
23
24 prime_keycache()
25
26
27 if __name__ == "__main__": # pragma: no cover
28 debug = getattr(config, "env", "prod") != "prod"
29 # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host
30 app.run(debug=debug, host="0.0.0.0", port=8081)
31
[end of securedrop/journalist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/journalist.py b/securedrop/journalist.py
--- a/securedrop/journalist.py
+++ b/securedrop/journalist.py
@@ -1,9 +1,13 @@
+import sys
+
from encryption import EncryptionManager, GpgKeyNotFoundError
from execution import asynchronous
from journalist_app import create_app
from models import Source
from sdconfig import SecureDropConfig
+import redwood
+
config = SecureDropConfig.get_current()
# app is imported by journalist.wsgi
app = create_app(config)
@@ -21,10 +25,28 @@
pass
-prime_keycache()
+def validate_journalist_key() -> None:
+ """Verify the journalist PGP key is valid"""
+ encryption_mgr = EncryptionManager.get_default()
+ # First check that we can read it
+ try:
+ journalist_key = encryption_mgr.get_journalist_public_key()
+ except Exception as e:
+ print(f"ERROR: Unable to read journalist public key: {e}", file=sys.stderr)
+ app.logger.error(f"ERROR: Unable to read journalist public key: {e}")
+ sys.exit(1)
+ # And then what we read is valid
+ try:
+ redwood.is_valid_public_key(journalist_key)
+ except redwood.RedwoodError as e:
+ print(f"ERROR: Journalist public key is not valid: {e}", file=sys.stderr)
+ app.logger.error(f"ERROR: Journalist public key is not valid: {e}")
+ sys.exit(1)
if __name__ == "__main__": # pragma: no cover
+ validate_journalist_key()
+ prime_keycache()
debug = getattr(config, "env", "prod") != "prod"
# nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host
app.run(debug=debug, host="0.0.0.0", port=8081)
| {"golden_diff": "diff --git a/securedrop/journalist.py b/securedrop/journalist.py\n--- a/securedrop/journalist.py\n+++ b/securedrop/journalist.py\n@@ -1,9 +1,13 @@\n+import sys\n+\n from encryption import EncryptionManager, GpgKeyNotFoundError\n from execution import asynchronous\n from journalist_app import create_app\n from models import Source\n from sdconfig import SecureDropConfig\n \n+import redwood\n+\n config = SecureDropConfig.get_current()\n # app is imported by journalist.wsgi\n app = create_app(config)\n@@ -21,10 +25,28 @@\n pass\n \n \n-prime_keycache()\n+def validate_journalist_key() -> None:\n+ \"\"\"Verify the journalist PGP key is valid\"\"\"\n+ encryption_mgr = EncryptionManager.get_default()\n+ # First check that we can read it\n+ try:\n+ journalist_key = encryption_mgr.get_journalist_public_key()\n+ except Exception as e:\n+ print(f\"ERROR: Unable to read journalist public key: {e}\", file=sys.stderr)\n+ app.logger.error(f\"ERROR: Unable to read journalist public key: {e}\")\n+ sys.exit(1)\n+ # And then what we read is valid\n+ try:\n+ redwood.is_valid_public_key(journalist_key)\n+ except redwood.RedwoodError as e:\n+ print(f\"ERROR: Journalist public key is not valid: {e}\", file=sys.stderr)\n+ app.logger.error(f\"ERROR: Journalist public key is not valid: {e}\")\n+ sys.exit(1)\n \n \n if __name__ == \"__main__\": # pragma: no cover\n+ validate_journalist_key()\n+ prime_keycache()\n debug = getattr(config, \"env\", \"prod\") != \"prod\"\n # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host\n app.run(debug=debug, host=\"0.0.0.0\", port=8081)\n", "issue": "determine post-upgrade failure-mode for a SHA-1-signed submission key\n## Description\r\n\r\nAfter #6948 (for #6399), redwood will refuse to encrypt to a submission key with a SHA-1 signature.\r\n\r\nAfter #6928, `securedrop-admin sdconfig` will reject a submission key with a SHA-1 signature. This check guarantees that new and reconfigured instances will comply with #6948.\r\n\r\nWhat will happen to an instance with a SHA-1-signed signature after upgrading to v2.7.0?\r\n\r\n## Possible approaches\r\n\r\n| Option | Documentation changes | Code changes | Implication |\r\n| --- | --- | --- | --- |\r\n| Fail open, but log | optional | \u2713 | Admin must monitor logs and/or OSSEC alerts. |\r\n| Fail open, but document | \u2713 | \u2717 | Admin must monitor release notes or check documentation. |\r\n| Fail closed | optional | \u2713[1] | Admin can contact us for help. |\r\n\r\n**Notes:**\r\n1. 
@legoktm observes that, without a code change to handle this case, Apache will come back up after reboot even if the `postinst` script fails under `unattended-upgrades`.\n", "before_files": [{"content": "from encryption import EncryptionManager, GpgKeyNotFoundError\nfrom execution import asynchronous\nfrom journalist_app import create_app\nfrom models import Source\nfrom sdconfig import SecureDropConfig\n\nconfig = SecureDropConfig.get_current()\n# app is imported by journalist.wsgi\napp = create_app(config)\n\n\n@asynchronous\ndef prime_keycache() -> None:\n \"\"\"Pre-load the source public keys into Redis.\"\"\"\n with app.app_context():\n encryption_mgr = EncryptionManager.get_default()\n for source in Source.query.filter_by(pending=False, deleted_at=None).all():\n try:\n encryption_mgr.get_source_public_key(source.filesystem_id)\n except GpgKeyNotFoundError:\n pass\n\n\nprime_keycache()\n\n\nif __name__ == \"__main__\": # pragma: no cover\n debug = getattr(config, \"env\", \"prod\") != \"prod\"\n # nosemgrep: python.flask.security.audit.app-run-param-config.avoid_app_run_with_bad_host\n app.run(debug=debug, host=\"0.0.0.0\", port=8081)\n", "path": "securedrop/journalist.py"}]} | 1,074 | 440 |
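
The resolution above answers the "fail open or fail closed" question by failing closed: on startup the journalist app reads the submission key, validates it, and exits with a logged error if either step fails, so a SHA-1-signed key is surfaced immediately instead of breaking encryption later. Here is a stripped-down, stdlib-only sketch of that startup check; the validation routine and the key path are placeholders standing in for `redwood.is_valid_public_key` and the real deployment layout.

```python
import logging
import sys

logger = logging.getLogger("startup")
logging.basicConfig(level=logging.INFO)


def load_public_key(path: str) -> str:
    with open(path, encoding="utf-8") as handle:
        return handle.read()


def is_valid_public_key(armored_key: str) -> bool:
    # Stand-in for redwood.is_valid_public_key: a real check would parse the key
    # and reject weak (for example SHA-1) binding signatures.
    return armored_key.startswith("-----BEGIN PGP PUBLIC KEY BLOCK-----")


def validate_submission_key(path: str) -> None:
    try:
        key = load_public_key(path)
    except Exception as exc:
        logger.error("Unable to read submission public key: %s", exc)
        sys.exit(1)
    if not is_valid_public_key(key):
        logger.error("Submission public key is not valid")
        sys.exit(1)


if __name__ == "__main__":
    # Illustrative path; the real layout depends on the deployment being checked.
    validate_submission_key("journalist.pub")
```
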
gh_patches_debug_21071 | rasdani/github-patches | git_diff | netbox-community__netbox-14608 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Datasources stuck in sync when using git + ssh from ./manage.py syncdatasource
### NetBox version
v3.6.1
### Python version
3.11
### Steps to Reproduce
In Data Sources
Add
Name: test
Type: git
URL: [email protected]:netbox-community/netbox.git
Create
docker compose exec netbox ./manage.py syncdatasource test
### Expected Behavior
Usually leads to some sort of ssh question or failure, and I would expect the exception to set the status to failed, and then be able to hit sync again.
I'm not sure exactly how NetBox works, but looking at one of the exceptions...
core.exceptions.SyncError: Fetching remote data failed (HangupException):
class SyncError(Exception):
pass
Does this mean the status is not being reset correctly due to the status being left as syncing?
### Observed Behavior
datasource.status = syncing in nbshell
'syncing' in gui
Sync option is now greyed out and cannot reset status without manually setting it in nbshell:
for d in DataSource.objects.filter(status='syncing'):
d.status = 'failed'
d.save()
</issue>
<code>
[start of netbox/core/management/commands/syncdatasource.py]
1 from django.core.management.base import BaseCommand, CommandError
2
3 from core.models import DataSource
4
5
6 class Command(BaseCommand):
7 help = "Synchronize a data source from its remote upstream"
8
9 def add_arguments(self, parser):
10 parser.add_argument('name', nargs='*', help="Data source(s) to synchronize")
11 parser.add_argument(
12 "--all", action='store_true', dest='sync_all',
13 help="Synchronize all data sources"
14 )
15
16 def handle(self, *args, **options):
17
18 # Find DataSources to sync
19 if options['sync_all']:
20 datasources = DataSource.objects.all()
21 elif options['name']:
22 datasources = DataSource.objects.filter(name__in=options['name'])
23 # Check for invalid names
24 found_names = {ds['name'] for ds in datasources.values('name')}
25 if invalid_names := set(options['name']) - found_names:
26 raise CommandError(f"Invalid data source names: {', '.join(invalid_names)}")
27 else:
28 raise CommandError(f"Must specify at least one data source, or set --all.")
29
30 if len(options['name']) > 1:
31 self.stdout.write(f"Syncing {len(datasources)} data sources.")
32
33 for i, datasource in enumerate(datasources, start=1):
34 self.stdout.write(f"[{i}] Syncing {datasource}... ", ending='')
35 self.stdout.flush()
36 datasource.sync()
37 self.stdout.write(datasource.get_status_display())
38 self.stdout.flush()
39
40 if len(options['name']) > 1:
41 self.stdout.write(f"Finished.")
42
[end of netbox/core/management/commands/syncdatasource.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/core/management/commands/syncdatasource.py b/netbox/core/management/commands/syncdatasource.py
--- a/netbox/core/management/commands/syncdatasource.py
+++ b/netbox/core/management/commands/syncdatasource.py
@@ -1,5 +1,6 @@
from django.core.management.base import BaseCommand, CommandError
+from core.choices import DataSourceStatusChoices
from core.models import DataSource
@@ -33,9 +34,13 @@
for i, datasource in enumerate(datasources, start=1):
self.stdout.write(f"[{i}] Syncing {datasource}... ", ending='')
self.stdout.flush()
- datasource.sync()
- self.stdout.write(datasource.get_status_display())
- self.stdout.flush()
+ try:
+ datasource.sync()
+ self.stdout.write(datasource.get_status_display())
+ self.stdout.flush()
+ except Exception as e:
+ DataSource.objects.filter(pk=datasource.pk).update(status=DataSourceStatusChoices.FAILED)
+ raise e
if len(options['name']) > 1:
self.stdout.write(f"Finished.")
| {"golden_diff": "diff --git a/netbox/core/management/commands/syncdatasource.py b/netbox/core/management/commands/syncdatasource.py\n--- a/netbox/core/management/commands/syncdatasource.py\n+++ b/netbox/core/management/commands/syncdatasource.py\n@@ -1,5 +1,6 @@\n from django.core.management.base import BaseCommand, CommandError\n \n+from core.choices import DataSourceStatusChoices\n from core.models import DataSource\n \n \n@@ -33,9 +34,13 @@\n for i, datasource in enumerate(datasources, start=1):\n self.stdout.write(f\"[{i}] Syncing {datasource}... \", ending='')\n self.stdout.flush()\n- datasource.sync()\n- self.stdout.write(datasource.get_status_display())\n- self.stdout.flush()\n+ try:\n+ datasource.sync()\n+ self.stdout.write(datasource.get_status_display())\n+ self.stdout.flush()\n+ except Exception as e:\n+ DataSource.objects.filter(pk=datasource.pk).update(status=DataSourceStatusChoices.FAILED)\n+ raise e\n \n if len(options['name']) > 1:\n self.stdout.write(f\"Finished.\")\n", "issue": "Datasources stuck in sync when using git + ssh from ./manage.py syncdatasource\n### NetBox version\n\nv3.6.1\n\n### Python version\n\n3.11\n\n### Steps to Reproduce\n\nIn Data Sources\r\nAdd\r\nName: test\r\nType: git\r\nURL: [email protected]:netbox-community/netbox.git\r\nCreate\r\n\r\ndocker compose exec netbox ./manage.py syncdatasource test\r\n\r\n\r\n\r\n\n\n### Expected Behavior\n\nUsually leads to some sort of ssh question or failure, and I would expect the exception to set the status to failed, and then be able to hit sync again.\r\n\r\nI'm not sure exactly how NetBox works, but looking at one of the exceptions...\r\ncore.exceptions.SyncError: Fetching remote data failed (HangupException): \r\n\r\nclass SyncError(Exception):\r\n pass\r\n\r\nDoes this mean the status is not being reset correctly due to the status being left as syncing?\r\n\r\n\n\n### Observed Behavior\n\ndatasource.status = syncing in nbshell\r\n'syncing' in gui\r\nSync option is now greyed out and cannot reset status without manually setting it in nbshell:\r\n\r\nfor d in DataSource.objects.filter(status='syncing'):\r\n d.status = 'failed'\r\n d.save()\r\n\n", "before_files": [{"content": "from django.core.management.base import BaseCommand, CommandError\n\nfrom core.models import DataSource\n\n\nclass Command(BaseCommand):\n help = \"Synchronize a data source from its remote upstream\"\n\n def add_arguments(self, parser):\n parser.add_argument('name', nargs='*', help=\"Data source(s) to synchronize\")\n parser.add_argument(\n \"--all\", action='store_true', dest='sync_all',\n help=\"Synchronize all data sources\"\n )\n\n def handle(self, *args, **options):\n\n # Find DataSources to sync\n if options['sync_all']:\n datasources = DataSource.objects.all()\n elif options['name']:\n datasources = DataSource.objects.filter(name__in=options['name'])\n # Check for invalid names\n found_names = {ds['name'] for ds in datasources.values('name')}\n if invalid_names := set(options['name']) - found_names:\n raise CommandError(f\"Invalid data source names: {', '.join(invalid_names)}\")\n else:\n raise CommandError(f\"Must specify at least one data source, or set --all.\")\n\n if len(options['name']) > 1:\n self.stdout.write(f\"Syncing {len(datasources)} data sources.\")\n\n for i, datasource in enumerate(datasources, start=1):\n self.stdout.write(f\"[{i}] Syncing {datasource}... 
\", ending='')\n self.stdout.flush()\n datasource.sync()\n self.stdout.write(datasource.get_status_display())\n self.stdout.flush()\n\n if len(options['name']) > 1:\n self.stdout.write(f\"Finished.\")\n", "path": "netbox/core/management/commands/syncdatasource.py"}]} | 1,219 | 249 |
gh_patches_debug_28788 | rasdani/github-patches | git_diff | pyload__pyload-284 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with zippyshare
I have a problem with pyload and zippyshare.
When I add a link from Zippyshare, the software responds: 'NoneType' object has no attribute 'group' XX MiB ZippyshareCom.
I had added the swf path in the Zippyshare configuration before installing swftools.
I use Windows 8 64-bit, and I tried with YouTube and it's OK.
The log is this:
https://gist.github.com/djfelix91/6711122
</issue>
<code>
[start of module/plugins/hoster/ZippyshareCom.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import re
5 import subprocess
6 import tempfile
7 import os
8
9 from module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, timestamp
10 from module.plugins.internal.CaptchaService import ReCaptcha
11 from module.common.json_layer import json_loads
12
13
14 class ZippyshareCom(SimpleHoster):
15 __name__ = "ZippyshareCom"
16 __type__ = "hoster"
17 __pattern__ = r"(?P<HOST>http://www\d{0,2}\.zippyshare.com)/v(?:/|iew.jsp.*key=)(?P<KEY>\d+)"
18 __version__ = "0.39"
19 __description__ = """Zippyshare.com Download Hoster"""
20 __author_name__ = ("spoob", "zoidberg", "stickell")
21 __author_mail__ = ("[email protected]", "[email protected]", "[email protected]")
22 __config__ = [("swfdump_path", "string", "Path to swfdump", "")]
23
24 FILE_NAME_PATTERN = r'>Name:</font>\s*<font [^>]*>(?P<N>[^<]+)</font><br />'
25 FILE_SIZE_PATTERN = r'>Size:</font>\s*<font [^>]*>(?P<S>[0-9.,]+) (?P<U>[kKMG]+)i?B</font><br />'
26 FILE_INFO_PATTERN = r'document\.getElementById\(\'dlbutton\'\)\.href = "[^;]*/(?P<N>[^"]+)";'
27 FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'
28
29 DOWNLOAD_URL_PATTERN = r"<script type=\"text/javascript\">([^<]*?)document\.getElementById\('dlbutton'\).href = ([^;]+);"
30 SEED_PATTERN = r'swfobject.embedSWF\("([^"]+)".*?seed: (\d+)'
31 CAPTCHA_KEY_PATTERN = r'Recaptcha.create\("([^"]+)"'
32 CAPTCHA_SHORTENCODE_PATTERN = r"shortencode: '([^']+)'"
33 CAPTCHA_DOWNLOAD_PATTERN = r"document.location = '([^']+)'"
34
35 LAST_KNOWN_VALUES = (9, 2374755) # time = (seed * multiply) % modulo
36
37 def setup(self):
38 self.html = None
39 self.wantReconnect = False
40 self.multiDL = True
41
42 def handleFree(self):
43 url = self.get_file_url()
44 if not url:
45 self.fail("Download URL not found.")
46 self.logDebug("Download URL %s" % url)
47 self.download(url, cookies=True)
48
49 check = self.checkDownload({
50 "swf_values": re.compile(self.SEED_PATTERN)
51 })
52
53 if check == "swf_values":
54 swf_sts = self.getStorage("swf_sts")
55 if not swf_sts:
56 self.setStorage("swf_sts", 2)
57 self.setStorage("swf_stamp", 0)
58 elif swf_sts == '1':
59 self.setStorage("swf_sts", 2)
60
61 self.retry(max_tries=1)
62
63 def get_file_url(self):
64 """ returns the absolute downloadable filepath
65 """
66 url = multiply = modulo = None
67
68 found = re.search(self.DOWNLOAD_URL_PATTERN, self.html, re.S)
69 if found:
70 #Method #1: JS eval
71 js = "\n".join(found.groups())
72 regex = r"document.getElementById\(\\*'dlbutton\\*'\).omg"
73 omg = re.search(regex + r" = ([^;]+);", js).group(1)
74 js = re.sub(regex + r" = ([^;]+);", '', js)
75 js = re.sub(regex, omg, js)
76 url = self.js.eval(js)
77 else:
78 #Method #2: SWF eval
79 seed_search = re.search(self.SEED_PATTERN, self.html)
80 if seed_search:
81 swf_url, file_seed = seed_search.groups()
82
83 swf_sts = self.getStorage("swf_sts")
84 swf_stamp = int(self.getStorage("swf_stamp") or 0)
85 swf_version = self.getStorage("version")
86 self.logDebug("SWF", swf_sts, swf_stamp, swf_version)
87
88 if not swf_sts:
89 self.logDebug('Using default values')
90 multiply, modulo = self.LAST_KNOWN_VALUES
91 elif swf_sts == "1":
92 self.logDebug('Using stored values')
93 multiply = self.getStorage("multiply")
94 modulo = self.getStorage("modulo")
95 elif swf_sts == "2":
96 if swf_version < self.__version__:
97 self.logDebug('Reverting to default values')
98 self.setStorage("swf_sts", "")
99 self.setStorage("version", self.__version__)
100 multiply, modulo = self.LAST_KNOWN_VALUES
101 elif (swf_stamp + 3600000) < timestamp():
102 swfdump = self.get_swfdump_path()
103 if swfdump:
104 multiply, modulo = self.get_swf_values(self.file_info['HOST'] + swf_url, swfdump)
105 else:
106 self.logWarning("Swfdump not found. Install swftools to bypass captcha.")
107
108 if multiply and modulo:
109 self.logDebug("TIME = (%s * %s) %s" % (file_seed, multiply, modulo))
110 url = "/download?key=%s&time=%d" % (self.file_info['KEY'],
111 (int(file_seed) * int(multiply)) % int(modulo))
112
113 if not url:
114 #Method #3: Captcha
115 url = self.do_recaptcha()
116
117 return self.file_info['HOST'] + url
118
119 def get_swf_values(self, swf_url, swfdump):
120 self.logDebug('Parsing values from %s' % swf_url)
121 multiply = modulo = None
122
123 fd, fpath = tempfile.mkstemp()
124 try:
125 swf_data = self.load(swf_url)
126 os.write(fd, swf_data)
127
128 p = subprocess.Popen([swfdump, '-a', fpath], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
129 out, err = p.communicate()
130
131 if err:
132 self.logError(err)
133 else:
134 m_str = re.search(r'::break.*?{(.*?)}', out, re.S).group(1)
135 multiply = re.search(r'pushbyte (\d+)', m_str).group(1)
136 modulo = re.search(r'pushint (\d+)', m_str).group(1)
137 finally:
138 os.close(fd)
139 os.remove(fpath)
140
141 if multiply and modulo:
142 self.setStorage("multiply", multiply)
143 self.setStorage("modulo", modulo)
144 self.setStorage("swf_sts", 1)
145 self.setStorage("version", self.__version__)
146 else:
147 self.logError("Parsing SWF failed: swfdump not installed or plugin out of date")
148 self.setStorage("swf_sts", 2)
149
150 self.setStorage("swf_stamp", timestamp())
151
152 return multiply, modulo
153
154 def get_swfdump_path(self):
155 # used for detecting if swfdump is installed
156 def is_exe(ppath):
157 return os.path.isfile(ppath) and os.access(ppath, os.X_OK)
158
159 program = self.getConfig("swfdump_path") or "swfdump"
160 swfdump = None
161 ppath, pname = os.path.split(program)
162 if ppath:
163 if is_exe(program):
164 swfdump = program
165 else:
166 for ppath in os.environ["PATH"].split(os.pathsep):
167 exe_file = os.path.join(ppath, program)
168 if is_exe(exe_file):
169 swfdump = exe_file
170
171 # return path to the executable or None if not found
172 return swfdump
173
174 def do_recaptcha(self):
175 self.logDebug('Trying to solve captcha')
176 captcha_key = re.search(self.CAPTCHA_KEY_PATTERN, self.html).group(1)
177 shortencode = re.search(self.CAPTCHA_SHORTENCODE_PATTERN, self.html).group(1)
178 url = re.search(self.CAPTCHA_DOWNLOAD_PATTERN, self.html).group(1)
179
180 recaptcha = ReCaptcha(self)
181
182 for i in range(5):
183 challenge, code = recaptcha.challenge(captcha_key)
184
185 response = json_loads(self.load(self.file_info['HOST'] + '/rest/captcha/test',
186 post={'challenge': challenge,
187 'response': code,
188 'shortencode': shortencode}))
189 self.logDebug("reCaptcha response : %s" % response)
190 if response == True:
191 self.correctCaptcha()
192 break
193 else:
194 self.invalidCaptcha()
195 else:
196 self.fail("Invalid captcha")
197
198 return url
199
200
201 getInfo = create_getInfo(ZippyshareCom)
202
[end of module/plugins/hoster/ZippyshareCom.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/module/plugins/hoster/ZippyshareCom.py b/module/plugins/hoster/ZippyshareCom.py
--- a/module/plugins/hoster/ZippyshareCom.py
+++ b/module/plugins/hoster/ZippyshareCom.py
@@ -26,7 +26,7 @@
FILE_INFO_PATTERN = r'document\.getElementById\(\'dlbutton\'\)\.href = "[^;]*/(?P<N>[^"]+)";'
FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'
- DOWNLOAD_URL_PATTERN = r"<script type=\"text/javascript\">([^<]*?)document\.getElementById\('dlbutton'\).href = ([^;]+);"
+ DOWNLOAD_URL_PATTERN = r"<script type=\"text/javascript\">([^<]*?)(document\.getElementById\('dlbutton'\).href = [^;]+;)"
SEED_PATTERN = r'swfobject.embedSWF\("([^"]+)".*?seed: (\d+)'
CAPTCHA_KEY_PATTERN = r'Recaptcha.create\("([^"]+)"'
CAPTCHA_SHORTENCODE_PATTERN = r"shortencode: '([^']+)'"
@@ -69,10 +69,11 @@
if found:
#Method #1: JS eval
js = "\n".join(found.groups())
- regex = r"document.getElementById\(\\*'dlbutton\\*'\).omg"
- omg = re.search(regex + r" = ([^;]+);", js).group(1)
- js = re.sub(regex + r" = ([^;]+);", '', js)
- js = re.sub(regex, omg, js)
+ d = re.search(r'span id="omg" class="(\d*)"', self.html).group(1)
+ regex = r"document.getElementById\('omg'\).getAttribute\('class'\)"
+ js = re.sub(regex, d, js)
+ regex = r"document.getElementById\(\\*'dlbutton\\*'\).href = "
+ js = re.sub(regex, '', js)
url = self.js.eval(js)
else:
#Method #2: SWF eval
| {"golden_diff": "diff --git a/module/plugins/hoster/ZippyshareCom.py b/module/plugins/hoster/ZippyshareCom.py\n--- a/module/plugins/hoster/ZippyshareCom.py\n+++ b/module/plugins/hoster/ZippyshareCom.py\n@@ -26,7 +26,7 @@\n FILE_INFO_PATTERN = r'document\\.getElementById\\(\\'dlbutton\\'\\)\\.href = \"[^;]*/(?P<N>[^\"]+)\";'\n FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'\n \n- DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)document\\.getElementById\\('dlbutton'\\).href = ([^;]+);\"\n+ DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)(document\\.getElementById\\('dlbutton'\\).href = [^;]+;)\"\n SEED_PATTERN = r'swfobject.embedSWF\\(\"([^\"]+)\".*?seed: (\\d+)'\n CAPTCHA_KEY_PATTERN = r'Recaptcha.create\\(\"([^\"]+)\"'\n CAPTCHA_SHORTENCODE_PATTERN = r\"shortencode: '([^']+)'\"\n@@ -69,10 +69,11 @@\n if found:\n #Method #1: JS eval\n js = \"\\n\".join(found.groups())\n- regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).omg\"\n- omg = re.search(regex + r\" = ([^;]+);\", js).group(1)\n- js = re.sub(regex + r\" = ([^;]+);\", '', js)\n- js = re.sub(regex, omg, js)\n+ d = re.search(r'span id=\"omg\" class=\"(\\d*)\"', self.html).group(1)\n+ regex = r\"document.getElementById\\('omg'\\).getAttribute\\('class'\\)\"\n+ js = re.sub(regex, d, js)\n+ regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).href = \"\n+ js = re.sub(regex, '', js)\n url = self.js.eval(js)\n else:\n #Method #2: SWF eval\n", "issue": "Problem with zippyshare\nI have a problem with pyload and zippyshare.\n\nWhen i add a link from zippyshare the software respond: 'NoneType' object has no attribute 'group' XX MiB ZippyshareCom.\n\nI have add the swf path on configuration of zippyshare before the installation of sfwtools.\n\nI use windows 8 on 64 bit and i had try with youtube and it's ok..\n\nthe log is this:\n\nhttps://gist.github.com/djfelix91/6711122\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nimport subprocess\nimport tempfile\nimport os\n\nfrom module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, timestamp\nfrom module.plugins.internal.CaptchaService import ReCaptcha\nfrom module.common.json_layer import json_loads\n\n\nclass ZippyshareCom(SimpleHoster):\n __name__ = \"ZippyshareCom\"\n __type__ = \"hoster\"\n __pattern__ = r\"(?P<HOST>http://www\\d{0,2}\\.zippyshare.com)/v(?:/|iew.jsp.*key=)(?P<KEY>\\d+)\"\n __version__ = \"0.39\"\n __description__ = \"\"\"Zippyshare.com Download Hoster\"\"\"\n __author_name__ = (\"spoob\", \"zoidberg\", \"stickell\")\n __author_mail__ = (\"[email protected]\", \"[email protected]\", \"[email protected]\")\n __config__ = [(\"swfdump_path\", \"string\", \"Path to swfdump\", \"\")]\n\n FILE_NAME_PATTERN = r'>Name:</font>\\s*<font [^>]*>(?P<N>[^<]+)</font><br />'\n FILE_SIZE_PATTERN = r'>Size:</font>\\s*<font [^>]*>(?P<S>[0-9.,]+) (?P<U>[kKMG]+)i?B</font><br />'\n FILE_INFO_PATTERN = r'document\\.getElementById\\(\\'dlbutton\\'\\)\\.href = \"[^;]*/(?P<N>[^\"]+)\";'\n FILE_OFFLINE_PATTERN = r'>File does not exist on this server</div>'\n\n DOWNLOAD_URL_PATTERN = r\"<script type=\\\"text/javascript\\\">([^<]*?)document\\.getElementById\\('dlbutton'\\).href = ([^;]+);\"\n SEED_PATTERN = r'swfobject.embedSWF\\(\"([^\"]+)\".*?seed: (\\d+)'\n CAPTCHA_KEY_PATTERN = r'Recaptcha.create\\(\"([^\"]+)\"'\n CAPTCHA_SHORTENCODE_PATTERN = r\"shortencode: '([^']+)'\"\n CAPTCHA_DOWNLOAD_PATTERN = r\"document.location = '([^']+)'\"\n\n 
LAST_KNOWN_VALUES = (9, 2374755) # time = (seed * multiply) % modulo\n\n def setup(self):\n self.html = None\n self.wantReconnect = False\n self.multiDL = True\n\n def handleFree(self):\n url = self.get_file_url()\n if not url:\n self.fail(\"Download URL not found.\")\n self.logDebug(\"Download URL %s\" % url)\n self.download(url, cookies=True)\n\n check = self.checkDownload({\n \"swf_values\": re.compile(self.SEED_PATTERN)\n })\n\n if check == \"swf_values\":\n swf_sts = self.getStorage(\"swf_sts\")\n if not swf_sts:\n self.setStorage(\"swf_sts\", 2)\n self.setStorage(\"swf_stamp\", 0)\n elif swf_sts == '1':\n self.setStorage(\"swf_sts\", 2)\n\n self.retry(max_tries=1)\n\n def get_file_url(self):\n \"\"\" returns the absolute downloadable filepath\n \"\"\"\n url = multiply = modulo = None\n\n found = re.search(self.DOWNLOAD_URL_PATTERN, self.html, re.S)\n if found:\n #Method #1: JS eval\n js = \"\\n\".join(found.groups())\n regex = r\"document.getElementById\\(\\\\*'dlbutton\\\\*'\\).omg\"\n omg = re.search(regex + r\" = ([^;]+);\", js).group(1)\n js = re.sub(regex + r\" = ([^;]+);\", '', js)\n js = re.sub(regex, omg, js)\n url = self.js.eval(js)\n else:\n #Method #2: SWF eval\n seed_search = re.search(self.SEED_PATTERN, self.html)\n if seed_search:\n swf_url, file_seed = seed_search.groups()\n\n swf_sts = self.getStorage(\"swf_sts\")\n swf_stamp = int(self.getStorage(\"swf_stamp\") or 0)\n swf_version = self.getStorage(\"version\")\n self.logDebug(\"SWF\", swf_sts, swf_stamp, swf_version)\n\n if not swf_sts:\n self.logDebug('Using default values')\n multiply, modulo = self.LAST_KNOWN_VALUES\n elif swf_sts == \"1\":\n self.logDebug('Using stored values')\n multiply = self.getStorage(\"multiply\")\n modulo = self.getStorage(\"modulo\")\n elif swf_sts == \"2\":\n if swf_version < self.__version__:\n self.logDebug('Reverting to default values')\n self.setStorage(\"swf_sts\", \"\")\n self.setStorage(\"version\", self.__version__)\n multiply, modulo = self.LAST_KNOWN_VALUES\n elif (swf_stamp + 3600000) < timestamp():\n swfdump = self.get_swfdump_path()\n if swfdump:\n multiply, modulo = self.get_swf_values(self.file_info['HOST'] + swf_url, swfdump)\n else:\n self.logWarning(\"Swfdump not found. 
Install swftools to bypass captcha.\")\n\n if multiply and modulo:\n self.logDebug(\"TIME = (%s * %s) %s\" % (file_seed, multiply, modulo))\n url = \"/download?key=%s&time=%d\" % (self.file_info['KEY'],\n (int(file_seed) * int(multiply)) % int(modulo))\n\n if not url:\n #Method #3: Captcha\n url = self.do_recaptcha()\n\n return self.file_info['HOST'] + url\n\n def get_swf_values(self, swf_url, swfdump):\n self.logDebug('Parsing values from %s' % swf_url)\n multiply = modulo = None\n\n fd, fpath = tempfile.mkstemp()\n try:\n swf_data = self.load(swf_url)\n os.write(fd, swf_data)\n\n p = subprocess.Popen([swfdump, '-a', fpath], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n out, err = p.communicate()\n\n if err:\n self.logError(err)\n else:\n m_str = re.search(r'::break.*?{(.*?)}', out, re.S).group(1)\n multiply = re.search(r'pushbyte (\\d+)', m_str).group(1)\n modulo = re.search(r'pushint (\\d+)', m_str).group(1)\n finally:\n os.close(fd)\n os.remove(fpath)\n\n if multiply and modulo:\n self.setStorage(\"multiply\", multiply)\n self.setStorage(\"modulo\", modulo)\n self.setStorage(\"swf_sts\", 1)\n self.setStorage(\"version\", self.__version__)\n else:\n self.logError(\"Parsing SWF failed: swfdump not installed or plugin out of date\")\n self.setStorage(\"swf_sts\", 2)\n\n self.setStorage(\"swf_stamp\", timestamp())\n\n return multiply, modulo\n\n def get_swfdump_path(self):\n # used for detecting if swfdump is installed\n def is_exe(ppath):\n return os.path.isfile(ppath) and os.access(ppath, os.X_OK)\n\n program = self.getConfig(\"swfdump_path\") or \"swfdump\"\n swfdump = None\n ppath, pname = os.path.split(program)\n if ppath:\n if is_exe(program):\n swfdump = program\n else:\n for ppath in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(ppath, program)\n if is_exe(exe_file):\n swfdump = exe_file\n\n # return path to the executable or None if not found\n return swfdump\n\n def do_recaptcha(self):\n self.logDebug('Trying to solve captcha')\n captcha_key = re.search(self.CAPTCHA_KEY_PATTERN, self.html).group(1)\n shortencode = re.search(self.CAPTCHA_SHORTENCODE_PATTERN, self.html).group(1)\n url = re.search(self.CAPTCHA_DOWNLOAD_PATTERN, self.html).group(1)\n\n recaptcha = ReCaptcha(self)\n\n for i in range(5):\n challenge, code = recaptcha.challenge(captcha_key)\n\n response = json_loads(self.load(self.file_info['HOST'] + '/rest/captcha/test',\n post={'challenge': challenge,\n 'response': code,\n 'shortencode': shortencode}))\n self.logDebug(\"reCaptcha response : %s\" % response)\n if response == True:\n self.correctCaptcha()\n break\n else:\n self.invalidCaptcha()\n else:\n self.fail(\"Invalid captcha\")\n\n return url\n\n\ngetInfo = create_getInfo(ZippyshareCom)\n", "path": "module/plugins/hoster/ZippyshareCom.py"}]} | 3,151 | 473 |
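
The updated plugin no longer rewrites the removed `omg` property trick; it reads the numeric `class` attribute of the page's `span id="omg"` element and substitutes it into the download-URL JavaScript, then strips the assignment prefix before handing the remainder to the JS evaluator. The toy reproduction below mirrors those two `re.sub` steps; the HTML and JS sample strings are invented for illustration, not captured from a real Zippyshare page.

```python
import re

html = '<span id="omg" class="286">x</span>'
js = (
    "document.getElementById('dlbutton').href = "
    "\"/d/12345/\" + (document.getElementById('omg').getAttribute('class') % 78956) + \"/file.zip\";"
)

# Pull the numeric class value out of the page, as the updated plugin does.
d = re.search(r'span id="omg" class="(\d*)"', html).group(1)

# Replace the DOM lookup with the literal number, then strip the assignment
# prefix so only the URL expression remains for evaluation.
expr = re.sub(r"document.getElementById\('omg'\).getAttribute\('class'\)", d, js)
expr = re.sub(r"document.getElementById\(\\*'dlbutton\\*'\).href = ", "", expr)

print(expr)  # "/d/12345/" + (286 % 78956) + "/file.zip";
```
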
gh_patches_debug_31038 | rasdani/github-patches | git_diff | encode__httpx-2009 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
When using a proxy the request downgrades to HTTP/1
Using the following code
```
client = httpx.Client(http2=True, proxies=proxies)
response = client.get('https://www.truepeoplesearch.com/results?name=John', headers=headers)
response.http_version
```
It appears that it returns HTTP/1
but if I were to make the same request without proxy, it shows HTTP/2
Version of HTTPX being used.
```
Name: httpx
Version: 0.21.1
Summary: The next generation HTTP client.
Home-page: https://github.com/encode/httpx
Author: Tom Christie
Author-email: [email protected]
```
</issue>
<code>
[start of httpx/_transports/default.py]
1 """
2 Custom transports, with nicely configured defaults.
3
4 The following additional keyword arguments are currently supported by httpcore...
5
6 * uds: str
7 * local_address: str
8 * retries: int
9
10 Example usages...
11
12 # Disable HTTP/2 on a single specific domain.
13 mounts = {
14 "all://": httpx.HTTPTransport(http2=True),
15 "all://*example.org": httpx.HTTPTransport()
16 }
17
18 # Using advanced httpcore configuration, with connection retries.
19 transport = httpx.HTTPTransport(retries=1)
20 client = httpx.Client(transport=transport)
21
22 # Using advanced httpcore configuration, with unix domain sockets.
23 transport = httpx.HTTPTransport(uds="socket.uds")
24 client = httpx.Client(transport=transport)
25 """
26 import contextlib
27 import typing
28 from types import TracebackType
29
30 import httpcore
31
32 from .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context
33 from .._exceptions import (
34 ConnectError,
35 ConnectTimeout,
36 LocalProtocolError,
37 NetworkError,
38 PoolTimeout,
39 ProtocolError,
40 ProxyError,
41 ReadError,
42 ReadTimeout,
43 RemoteProtocolError,
44 TimeoutException,
45 UnsupportedProtocol,
46 WriteError,
47 WriteTimeout,
48 )
49 from .._models import Request, Response
50 from .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes
51 from .base import AsyncBaseTransport, BaseTransport
52
53 T = typing.TypeVar("T", bound="HTTPTransport")
54 A = typing.TypeVar("A", bound="AsyncHTTPTransport")
55
56
57 @contextlib.contextmanager
58 def map_httpcore_exceptions() -> typing.Iterator[None]:
59 try:
60 yield
61 except Exception as exc: # noqa: PIE-786
62 mapped_exc = None
63
64 for from_exc, to_exc in HTTPCORE_EXC_MAP.items():
65 if not isinstance(exc, from_exc):
66 continue
67 # We want to map to the most specific exception we can find.
68 # Eg if `exc` is an `httpcore.ReadTimeout`, we want to map to
69 # `httpx.ReadTimeout`, not just `httpx.TimeoutException`.
70 if mapped_exc is None or issubclass(to_exc, mapped_exc):
71 mapped_exc = to_exc
72
73 if mapped_exc is None: # pragma: nocover
74 raise
75
76 message = str(exc)
77 raise mapped_exc(message) from exc
78
79
80 HTTPCORE_EXC_MAP = {
81 httpcore.TimeoutException: TimeoutException,
82 httpcore.ConnectTimeout: ConnectTimeout,
83 httpcore.ReadTimeout: ReadTimeout,
84 httpcore.WriteTimeout: WriteTimeout,
85 httpcore.PoolTimeout: PoolTimeout,
86 httpcore.NetworkError: NetworkError,
87 httpcore.ConnectError: ConnectError,
88 httpcore.ReadError: ReadError,
89 httpcore.WriteError: WriteError,
90 httpcore.ProxyError: ProxyError,
91 httpcore.UnsupportedProtocol: UnsupportedProtocol,
92 httpcore.ProtocolError: ProtocolError,
93 httpcore.LocalProtocolError: LocalProtocolError,
94 httpcore.RemoteProtocolError: RemoteProtocolError,
95 }
96
97
98 class ResponseStream(SyncByteStream):
99 def __init__(self, httpcore_stream: typing.Iterable[bytes]):
100 self._httpcore_stream = httpcore_stream
101
102 def __iter__(self) -> typing.Iterator[bytes]:
103 with map_httpcore_exceptions():
104 for part in self._httpcore_stream:
105 yield part
106
107 def close(self) -> None:
108 if hasattr(self._httpcore_stream, "close"):
109 self._httpcore_stream.close() # type: ignore
110
111
112 class HTTPTransport(BaseTransport):
113 def __init__(
114 self,
115 verify: VerifyTypes = True,
116 cert: CertTypes = None,
117 http1: bool = True,
118 http2: bool = False,
119 limits: Limits = DEFAULT_LIMITS,
120 trust_env: bool = True,
121 proxy: Proxy = None,
122 uds: str = None,
123 local_address: str = None,
124 retries: int = 0,
125 ) -> None:
126 ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)
127
128 if proxy is None:
129 self._pool = httpcore.ConnectionPool(
130 ssl_context=ssl_context,
131 max_connections=limits.max_connections,
132 max_keepalive_connections=limits.max_keepalive_connections,
133 keepalive_expiry=limits.keepalive_expiry,
134 http1=http1,
135 http2=http2,
136 uds=uds,
137 local_address=local_address,
138 retries=retries,
139 )
140 else:
141 self._pool = httpcore.HTTPProxy(
142 proxy_url=httpcore.URL(
143 scheme=proxy.url.raw_scheme,
144 host=proxy.url.raw_host,
145 port=proxy.url.port,
146 target=proxy.url.raw_path,
147 ),
148 proxy_headers=proxy.headers.raw,
149 ssl_context=ssl_context,
150 max_connections=limits.max_connections,
151 max_keepalive_connections=limits.max_keepalive_connections,
152 keepalive_expiry=limits.keepalive_expiry,
153 )
154
155 def __enter__(self: T) -> T: # Use generics for subclass support.
156 self._pool.__enter__()
157 return self
158
159 def __exit__(
160 self,
161 exc_type: typing.Type[BaseException] = None,
162 exc_value: BaseException = None,
163 traceback: TracebackType = None,
164 ) -> None:
165 with map_httpcore_exceptions():
166 self._pool.__exit__(exc_type, exc_value, traceback)
167
168 def handle_request(
169 self,
170 request: Request,
171 ) -> Response:
172 assert isinstance(request.stream, SyncByteStream)
173
174 req = httpcore.Request(
175 method=request.method,
176 url=httpcore.URL(
177 scheme=request.url.raw_scheme,
178 host=request.url.raw_host,
179 port=request.url.port,
180 target=request.url.raw_path,
181 ),
182 headers=request.headers.raw,
183 content=request.stream,
184 extensions=request.extensions,
185 )
186 with map_httpcore_exceptions():
187 resp = self._pool.handle_request(req)
188
189 assert isinstance(resp.stream, typing.Iterable)
190
191 return Response(
192 status_code=resp.status,
193 headers=resp.headers,
194 stream=ResponseStream(resp.stream),
195 extensions=resp.extensions,
196 )
197
198 def close(self) -> None:
199 self._pool.close()
200
201
202 class AsyncResponseStream(AsyncByteStream):
203 def __init__(self, httpcore_stream: typing.AsyncIterable[bytes]):
204 self._httpcore_stream = httpcore_stream
205
206 async def __aiter__(self) -> typing.AsyncIterator[bytes]:
207 with map_httpcore_exceptions():
208 async for part in self._httpcore_stream:
209 yield part
210
211 async def aclose(self) -> None:
212 if hasattr(self._httpcore_stream, "aclose"):
213 await self._httpcore_stream.aclose() # type: ignore
214
215
216 class AsyncHTTPTransport(AsyncBaseTransport):
217 def __init__(
218 self,
219 verify: VerifyTypes = True,
220 cert: CertTypes = None,
221 http1: bool = True,
222 http2: bool = False,
223 limits: Limits = DEFAULT_LIMITS,
224 trust_env: bool = True,
225 proxy: Proxy = None,
226 uds: str = None,
227 local_address: str = None,
228 retries: int = 0,
229 ) -> None:
230 ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)
231
232 if proxy is None:
233 self._pool = httpcore.AsyncConnectionPool(
234 ssl_context=ssl_context,
235 max_connections=limits.max_connections,
236 max_keepalive_connections=limits.max_keepalive_connections,
237 keepalive_expiry=limits.keepalive_expiry,
238 http1=http1,
239 http2=http2,
240 uds=uds,
241 local_address=local_address,
242 retries=retries,
243 )
244 else:
245 self._pool = httpcore.AsyncHTTPProxy(
246 proxy_url=httpcore.URL(
247 scheme=proxy.url.raw_scheme,
248 host=proxy.url.raw_host,
249 port=proxy.url.port,
250 target=proxy.url.raw_path,
251 ),
252 proxy_headers=proxy.headers.raw,
253 ssl_context=ssl_context,
254 max_connections=limits.max_connections,
255 max_keepalive_connections=limits.max_keepalive_connections,
256 keepalive_expiry=limits.keepalive_expiry,
257 )
258
259 async def __aenter__(self: A) -> A: # Use generics for subclass support.
260 await self._pool.__aenter__()
261 return self
262
263 async def __aexit__(
264 self,
265 exc_type: typing.Type[BaseException] = None,
266 exc_value: BaseException = None,
267 traceback: TracebackType = None,
268 ) -> None:
269 with map_httpcore_exceptions():
270 await self._pool.__aexit__(exc_type, exc_value, traceback)
271
272 async def handle_async_request(
273 self,
274 request: Request,
275 ) -> Response:
276 assert isinstance(request.stream, AsyncByteStream)
277
278 req = httpcore.Request(
279 method=request.method,
280 url=httpcore.URL(
281 scheme=request.url.raw_scheme,
282 host=request.url.raw_host,
283 port=request.url.port,
284 target=request.url.raw_path,
285 ),
286 headers=request.headers.raw,
287 content=request.stream,
288 extensions=request.extensions,
289 )
290 with map_httpcore_exceptions():
291 resp = await self._pool.handle_async_request(req)
292
293 assert isinstance(resp.stream, typing.AsyncIterable)
294
295 return Response(
296 status_code=resp.status,
297 headers=resp.headers,
298 stream=AsyncResponseStream(resp.stream),
299 extensions=resp.extensions,
300 )
301
302 async def aclose(self) -> None:
303 await self._pool.aclose()
304
[end of httpx/_transports/default.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/_transports/default.py b/httpx/_transports/default.py
--- a/httpx/_transports/default.py
+++ b/httpx/_transports/default.py
@@ -138,19 +138,36 @@
retries=retries,
)
else:
- self._pool = httpcore.HTTPProxy(
- proxy_url=httpcore.URL(
- scheme=proxy.url.raw_scheme,
- host=proxy.url.raw_host,
- port=proxy.url.port,
- target=proxy.url.raw_path,
- ),
- proxy_headers=proxy.headers.raw,
- ssl_context=ssl_context,
- max_connections=limits.max_connections,
- max_keepalive_connections=limits.max_keepalive_connections,
- keepalive_expiry=limits.keepalive_expiry,
- )
+ try:
+ self._pool = httpcore.HTTPProxy(
+ proxy_url=httpcore.URL(
+ scheme=proxy.url.raw_scheme,
+ host=proxy.url.raw_host,
+ port=proxy.url.port,
+ target=proxy.url.raw_path,
+ ),
+ proxy_headers=proxy.headers.raw,
+ ssl_context=ssl_context,
+ max_connections=limits.max_connections,
+ max_keepalive_connections=limits.max_keepalive_connections,
+ keepalive_expiry=limits.keepalive_expiry,
+ http1=http1,
+ http2=http2,
+ )
+ except TypeError: # pragma: nocover
+ self._pool = httpcore.HTTPProxy(
+ proxy_url=httpcore.URL(
+ scheme=proxy.url.raw_scheme,
+ host=proxy.url.raw_host,
+ port=proxy.url.port,
+ target=proxy.url.raw_path,
+ ),
+ proxy_headers=proxy.headers.raw,
+ ssl_context=ssl_context,
+ max_connections=limits.max_connections,
+ max_keepalive_connections=limits.max_keepalive_connections,
+ keepalive_expiry=limits.keepalive_expiry,
+ )
def __enter__(self: T) -> T: # Use generics for subclass support.
self._pool.__enter__()
| {"golden_diff": "diff --git a/httpx/_transports/default.py b/httpx/_transports/default.py\n--- a/httpx/_transports/default.py\n+++ b/httpx/_transports/default.py\n@@ -138,19 +138,36 @@\n retries=retries,\n )\n else:\n- self._pool = httpcore.HTTPProxy(\n- proxy_url=httpcore.URL(\n- scheme=proxy.url.raw_scheme,\n- host=proxy.url.raw_host,\n- port=proxy.url.port,\n- target=proxy.url.raw_path,\n- ),\n- proxy_headers=proxy.headers.raw,\n- ssl_context=ssl_context,\n- max_connections=limits.max_connections,\n- max_keepalive_connections=limits.max_keepalive_connections,\n- keepalive_expiry=limits.keepalive_expiry,\n- )\n+ try:\n+ self._pool = httpcore.HTTPProxy(\n+ proxy_url=httpcore.URL(\n+ scheme=proxy.url.raw_scheme,\n+ host=proxy.url.raw_host,\n+ port=proxy.url.port,\n+ target=proxy.url.raw_path,\n+ ),\n+ proxy_headers=proxy.headers.raw,\n+ ssl_context=ssl_context,\n+ max_connections=limits.max_connections,\n+ max_keepalive_connections=limits.max_keepalive_connections,\n+ keepalive_expiry=limits.keepalive_expiry,\n+ http1=http1,\n+ http2=http2,\n+ )\n+ except TypeError: # pragma: nocover\n+ self._pool = httpcore.HTTPProxy(\n+ proxy_url=httpcore.URL(\n+ scheme=proxy.url.raw_scheme,\n+ host=proxy.url.raw_host,\n+ port=proxy.url.port,\n+ target=proxy.url.raw_path,\n+ ),\n+ proxy_headers=proxy.headers.raw,\n+ ssl_context=ssl_context,\n+ max_connections=limits.max_connections,\n+ max_keepalive_connections=limits.max_keepalive_connections,\n+ keepalive_expiry=limits.keepalive_expiry,\n+ )\n \n def __enter__(self: T) -> T: # Use generics for subclass support.\n self._pool.__enter__()\n", "issue": "When using proxy the request downgrades to HTTP/1\nUsing the following code\r\n\r\n```\r\nclient = httpx.Client(http2=True, proxies=proxies)\r\n response = client.get('https://www.truepeoplesearch.com/results?name=John', headers=headers)\r\nresponse.http_version\r\n```\r\n\r\nIt appears that it returns HTTP/1\r\n\r\nbut if I were to make the same request without proxy, it shows HTTP/2\r\n\r\n\r\nVersion of HTTPX being used.\r\n```\r\nName: httpx\r\nVersion: 0.21.1\r\nSummary: The next generation HTTP client.\r\nHome-page: https://github.com/encode/httpx\r\nAuthor: Tom Christie\r\nAuthor-email: [email protected]\r\n```\n", "before_files": [{"content": "\"\"\"\nCustom transports, with nicely configured defaults.\n\nThe following additional keyword arguments are currently supported by httpcore...\n\n* uds: str\n* local_address: str\n* retries: int\n\nExample usages...\n\n# Disable HTTP/2 on a single specific domain.\nmounts = {\n \"all://\": httpx.HTTPTransport(http2=True),\n \"all://*example.org\": httpx.HTTPTransport()\n}\n\n# Using advanced httpcore configuration, with connection retries.\ntransport = httpx.HTTPTransport(retries=1)\nclient = httpx.Client(transport=transport)\n\n# Using advanced httpcore configuration, with unix domain sockets.\ntransport = httpx.HTTPTransport(uds=\"socket.uds\")\nclient = httpx.Client(transport=transport)\n\"\"\"\nimport contextlib\nimport typing\nfrom types import TracebackType\n\nimport httpcore\n\nfrom .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context\nfrom .._exceptions import (\n ConnectError,\n ConnectTimeout,\n LocalProtocolError,\n NetworkError,\n PoolTimeout,\n ProtocolError,\n ProxyError,\n ReadError,\n ReadTimeout,\n RemoteProtocolError,\n TimeoutException,\n UnsupportedProtocol,\n WriteError,\n WriteTimeout,\n)\nfrom .._models import Request, Response\nfrom .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes\nfrom .base 
import AsyncBaseTransport, BaseTransport\n\nT = typing.TypeVar(\"T\", bound=\"HTTPTransport\")\nA = typing.TypeVar(\"A\", bound=\"AsyncHTTPTransport\")\n\n\[email protected]\ndef map_httpcore_exceptions() -> typing.Iterator[None]:\n try:\n yield\n except Exception as exc: # noqa: PIE-786\n mapped_exc = None\n\n for from_exc, to_exc in HTTPCORE_EXC_MAP.items():\n if not isinstance(exc, from_exc):\n continue\n # We want to map to the most specific exception we can find.\n # Eg if `exc` is an `httpcore.ReadTimeout`, we want to map to\n # `httpx.ReadTimeout`, not just `httpx.TimeoutException`.\n if mapped_exc is None or issubclass(to_exc, mapped_exc):\n mapped_exc = to_exc\n\n if mapped_exc is None: # pragma: nocover\n raise\n\n message = str(exc)\n raise mapped_exc(message) from exc\n\n\nHTTPCORE_EXC_MAP = {\n httpcore.TimeoutException: TimeoutException,\n httpcore.ConnectTimeout: ConnectTimeout,\n httpcore.ReadTimeout: ReadTimeout,\n httpcore.WriteTimeout: WriteTimeout,\n httpcore.PoolTimeout: PoolTimeout,\n httpcore.NetworkError: NetworkError,\n httpcore.ConnectError: ConnectError,\n httpcore.ReadError: ReadError,\n httpcore.WriteError: WriteError,\n httpcore.ProxyError: ProxyError,\n httpcore.UnsupportedProtocol: UnsupportedProtocol,\n httpcore.ProtocolError: ProtocolError,\n httpcore.LocalProtocolError: LocalProtocolError,\n httpcore.RemoteProtocolError: RemoteProtocolError,\n}\n\n\nclass ResponseStream(SyncByteStream):\n def __init__(self, httpcore_stream: typing.Iterable[bytes]):\n self._httpcore_stream = httpcore_stream\n\n def __iter__(self) -> typing.Iterator[bytes]:\n with map_httpcore_exceptions():\n for part in self._httpcore_stream:\n yield part\n\n def close(self) -> None:\n if hasattr(self._httpcore_stream, \"close\"):\n self._httpcore_stream.close() # type: ignore\n\n\nclass HTTPTransport(BaseTransport):\n def __init__(\n self,\n verify: VerifyTypes = True,\n cert: CertTypes = None,\n http1: bool = True,\n http2: bool = False,\n limits: Limits = DEFAULT_LIMITS,\n trust_env: bool = True,\n proxy: Proxy = None,\n uds: str = None,\n local_address: str = None,\n retries: int = 0,\n ) -> None:\n ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)\n\n if proxy is None:\n self._pool = httpcore.ConnectionPool(\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n http1=http1,\n http2=http2,\n uds=uds,\n local_address=local_address,\n retries=retries,\n )\n else:\n self._pool = httpcore.HTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n )\n\n def __enter__(self: T) -> T: # Use generics for subclass support.\n self._pool.__enter__()\n return self\n\n def __exit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n with map_httpcore_exceptions():\n self._pool.__exit__(exc_type, exc_value, traceback)\n\n def handle_request(\n self,\n request: Request,\n ) -> Response:\n assert isinstance(request.stream, SyncByteStream)\n\n req = httpcore.Request(\n method=request.method,\n url=httpcore.URL(\n scheme=request.url.raw_scheme,\n 
host=request.url.raw_host,\n port=request.url.port,\n target=request.url.raw_path,\n ),\n headers=request.headers.raw,\n content=request.stream,\n extensions=request.extensions,\n )\n with map_httpcore_exceptions():\n resp = self._pool.handle_request(req)\n\n assert isinstance(resp.stream, typing.Iterable)\n\n return Response(\n status_code=resp.status,\n headers=resp.headers,\n stream=ResponseStream(resp.stream),\n extensions=resp.extensions,\n )\n\n def close(self) -> None:\n self._pool.close()\n\n\nclass AsyncResponseStream(AsyncByteStream):\n def __init__(self, httpcore_stream: typing.AsyncIterable[bytes]):\n self._httpcore_stream = httpcore_stream\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n with map_httpcore_exceptions():\n async for part in self._httpcore_stream:\n yield part\n\n async def aclose(self) -> None:\n if hasattr(self._httpcore_stream, \"aclose\"):\n await self._httpcore_stream.aclose() # type: ignore\n\n\nclass AsyncHTTPTransport(AsyncBaseTransport):\n def __init__(\n self,\n verify: VerifyTypes = True,\n cert: CertTypes = None,\n http1: bool = True,\n http2: bool = False,\n limits: Limits = DEFAULT_LIMITS,\n trust_env: bool = True,\n proxy: Proxy = None,\n uds: str = None,\n local_address: str = None,\n retries: int = 0,\n ) -> None:\n ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)\n\n if proxy is None:\n self._pool = httpcore.AsyncConnectionPool(\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n http1=http1,\n http2=http2,\n uds=uds,\n local_address=local_address,\n retries=retries,\n )\n else:\n self._pool = httpcore.AsyncHTTPProxy(\n proxy_url=httpcore.URL(\n scheme=proxy.url.raw_scheme,\n host=proxy.url.raw_host,\n port=proxy.url.port,\n target=proxy.url.raw_path,\n ),\n proxy_headers=proxy.headers.raw,\n ssl_context=ssl_context,\n max_connections=limits.max_connections,\n max_keepalive_connections=limits.max_keepalive_connections,\n keepalive_expiry=limits.keepalive_expiry,\n )\n\n async def __aenter__(self: A) -> A: # Use generics for subclass support.\n await self._pool.__aenter__()\n return self\n\n async def __aexit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n with map_httpcore_exceptions():\n await self._pool.__aexit__(exc_type, exc_value, traceback)\n\n async def handle_async_request(\n self,\n request: Request,\n ) -> Response:\n assert isinstance(request.stream, AsyncByteStream)\n\n req = httpcore.Request(\n method=request.method,\n url=httpcore.URL(\n scheme=request.url.raw_scheme,\n host=request.url.raw_host,\n port=request.url.port,\n target=request.url.raw_path,\n ),\n headers=request.headers.raw,\n content=request.stream,\n extensions=request.extensions,\n )\n with map_httpcore_exceptions():\n resp = await self._pool.handle_async_request(req)\n\n assert isinstance(resp.stream, typing.AsyncIterable)\n\n return Response(\n status_code=resp.status,\n headers=resp.headers,\n stream=AsyncResponseStream(resp.stream),\n extensions=resp.extensions,\n )\n\n async def aclose(self) -> None:\n await self._pool.aclose()\n", "path": "httpx/_transports/default.py"}]} | 3,549 | 454 |
gh_patches_debug_16318 | rasdani/github-patches | git_diff | angr__angr-4080 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sigaction syscall triggers "Not enough data for store" error
### Description
Running Angr on an AMD64 binary that makes a `sigaction` syscall triggers an error
```
Traceback (most recent call last):
File "/home/ubuntu/angr-exp/sigactionbug/demo.py", line 7, in <module>
state_successors = state_successors[0].step()
File "/home/ubuntu/angr-dev/angr/angr/sim_state.py", line 607, in step
return self.project.factory.successors(self, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/factory.py", line 77, in successors
return self.default_engine.process(*args, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/engines/vex/light/slicing.py", line 20, in process
return super().process(*args, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/engines/engine.py", line 163, in process
self.process_successors(self.successors, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/engines/failure.py", line 24, in process_successors
return super().process_successors(successors, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/engines/syscall.py", line 50, in process_successors
return self.process_procedure(state, successors, sys_procedure, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/engines/procedure.py", line 39, in process_procedure
inst = procedure.execute(state, successors, ret_to=ret_to, arguments=arguments)
File "/home/ubuntu/angr-dev/angr/angr/sim_procedure.py", line 286, in execute
inst.ret(r)
File "/home/ubuntu/angr-dev/angr/angr/sim_procedure.py", line 459, in ret
ret_addr = self.cc.teardown_callsite(self.state, return_val=expr, prototype=self.prototype)
File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 921, in teardown_callsite
self.set_return_val(state, return_val, prototype.returnty)
File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 1339, in set_return_val
super().set_return_val(state, val, ty, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 798, in set_return_val
loc.set_value(state, val, stack_base=stack_base)
File "/home/ubuntu/angr-dev/angr/angr/calling_conventions.py", line 310, in set_value
state.registers.store(offset, value, size=self.size)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/unwrapper_mixin.py", line 10, in store
return super().store(
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/name_resolution_mixin.py", line 60, in store
return super().store(addr, data, size=size, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/bvv_conversion_mixin.py", line 26, in store
super().store(addr, data_bv, size=size, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/simplification_mixin.py", line 13, in store
super().store(addr, real_data, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/clouseau_mixin.py", line 7, in store
super().store(addr, data, size=size, condition=condition, endness=endness, inspect=inspect, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/actions_mixin.py", line 34, in store
super().store(addr, data, size=size, action=action, condition=condition, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/underconstrained_mixin.py", line 28, in store
super().store(addr, data, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py", line 99, in store
super().store(addr, data, size=size, condition=condition, **kwargs)
File "/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py", line 43, in store
raise SimMemoryError("Not enough data for store")
angr.errors.SimMemoryError: Not enough data for store
```
### Steps to reproduce the bug
Here is a minimal code that triggers the bug :
```C
#include <signal.h>
#include <stdio.h>
int main() {
sigaction(SIGSEGV, NULL, NULL);
return 0;
}
```
```python
import angr
angr_project = angr.Project("./target_program", exclude_sim_procedures_list=['sigaction'])
state_successors = angr_project.factory.entry_state().step()
while not state_successors.is_empty:
state_successors = state_successors[0].step()
```
The bug happens when the SimProcedure for the sigaction syscall is executed (not the SimProcedure for the libc wrapper). Excluding the libc wrapper SimProcedure is necessary to actually reach the code calling the syscall and trigger the bug.
### Environment
_No response_
### Additional context
As far as I understand, this is due to an inconsistency between the prototype and the implementation of the SimProcedure for the `sigaction` syscall, similar to #4033. According to its prototype, the SimProcedure should return a `long`, but it actually returns an `int`.
The error happens when the SimProcedure for the syscall returns. It is raised from this check in `angr/storage/memory_mixins/size_resolution_mixin.py`
```
if out_size > max_size:
raise SimMemoryError("Not enough data for store")
```
because `max_size` is 4 and `out_size` is 8.
`max_size` is determined by the length of the value returned by the SimProcedure, i.e. an int in the current implementation in `angr/procedures/linux_kernel/sigaction.py`
```python
class rt_sigaction(angr.SimProcedure):
def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument
# TODO: actually do something
# ...hack
if self.state.solver.is_true(signum == 33):
return self.state.libc.ret_errno("EINVAL")
return self.state.solver.BVV(0, self.arch.sizeof["int"])
```
`out_size` is determined by the return type given in the prototype of the SimProcedure in `angr/procedures/definitions/linux_kernel.py`, i.e. a long:
```python
# long sys_rt_sigaction(int, const struct sigaction *, struct sigaction *, size_t);
'rt_sigaction': SimTypeFunction([SimTypeInt(signed=True), SimTypePointer(SimStruct({}, name="sigaction", pack=False, align=None), offset=0), SimTypePointer(SimStruct({}, name="sigaction", pack=False, align=None), offset=0), SimTypeLong(signed=False, label="size_t")], SimTypeLong(signed=True), arg_names=["None", "None", "None", "None"]),
```
Replacing `int` with `long` in the return of the SimProcedure fixes the problem for me.
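For illustration, a minimal sketch of that change (it assumes the `rt_sigaction` SimProcedure keeps its current structure; only the width of the returned bitvector differs from the snippet above):

```python
import angr


class rt_sigaction(angr.SimProcedure):
    def run(self, signum, act, oldact, sigsetsize):  # pylint:disable=arguments-differ,unused-argument
        # TODO: actually do something
        # ...hack
        if self.state.solver.is_true(signum == 33):
            return self.state.libc.ret_errno("EINVAL")
        # Match the width of the `long` return type declared in the syscall
        # prototype, so the calling convention stores a full register's worth.
        return self.state.solver.BVV(0, self.arch.sizeof["long"])
```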
</issue>
<code>
[start of angr/procedures/linux_kernel/sigaction.py]
1 import angr
2
3
4 class sigaction(angr.SimProcedure):
5 def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument
6 # TODO: actually do something
7 return self.state.solver.BVV(0, self.arch.sizeof["int"])
8
9
10 class rt_sigaction(angr.SimProcedure):
11 def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument
12 # TODO: actually do something
13 # ...hack
14 if self.state.solver.is_true(signum == 33):
15 return self.state.libc.ret_errno("EINVAL")
16 return self.state.solver.BVV(0, self.arch.sizeof["int"])
17
[end of angr/procedures/linux_kernel/sigaction.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/angr/procedures/linux_kernel/sigaction.py b/angr/procedures/linux_kernel/sigaction.py
--- a/angr/procedures/linux_kernel/sigaction.py
+++ b/angr/procedures/linux_kernel/sigaction.py
@@ -4,7 +4,7 @@
class sigaction(angr.SimProcedure):
def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument
# TODO: actually do something
- return self.state.solver.BVV(0, self.arch.sizeof["int"])
+ return self.state.solver.BVV(0, self.arch.sizeof["long"])
class rt_sigaction(angr.SimProcedure):
@@ -13,4 +13,4 @@
# ...hack
if self.state.solver.is_true(signum == 33):
return self.state.libc.ret_errno("EINVAL")
- return self.state.solver.BVV(0, self.arch.sizeof["int"])
+ return self.state.solver.BVV(0, self.arch.sizeof["long"])
| {"golden_diff": "diff --git a/angr/procedures/linux_kernel/sigaction.py b/angr/procedures/linux_kernel/sigaction.py\n--- a/angr/procedures/linux_kernel/sigaction.py\n+++ b/angr/procedures/linux_kernel/sigaction.py\n@@ -4,7 +4,7 @@\n class sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n- return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n+ return self.state.solver.BVV(0, self.arch.sizeof[\"long\"])\n \n \n class rt_sigaction(angr.SimProcedure):\n@@ -13,4 +13,4 @@\n # ...hack\n if self.state.solver.is_true(signum == 33):\n return self.state.libc.ret_errno(\"EINVAL\")\n- return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n+ return self.state.solver.BVV(0, self.arch.sizeof[\"long\"])\n", "issue": "Sigaction syscall triggers \"Not enough data for store\" error\n### Description\r\n\r\nRunning Angr on an AMD64 binary that makes a `sigaction` syscall triggers an error\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/angr-exp/sigactionbug/demo.py\", line 7, in <module>\r\n state_successors = state_successors[0].step()\r\n File \"/home/ubuntu/angr-dev/angr/angr/sim_state.py\", line 607, in step\r\n return self.project.factory.successors(self, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/factory.py\", line 77, in successors\r\n return self.default_engine.process(*args, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/vex/light/slicing.py\", line 20, in process\r\n return super().process(*args, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/engine.py\", line 163, in process\r\n self.process_successors(self.successors, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/failure.py\", line 24, in process_successors\r\n return super().process_successors(successors, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/syscall.py\", line 50, in process_successors\r\n return self.process_procedure(state, successors, sys_procedure, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/engines/procedure.py\", line 39, in process_procedure\r\n inst = procedure.execute(state, successors, ret_to=ret_to, arguments=arguments)\r\n File \"/home/ubuntu/angr-dev/angr/angr/sim_procedure.py\", line 286, in execute\r\n inst.ret(r)\r\n File \"/home/ubuntu/angr-dev/angr/angr/sim_procedure.py\", line 459, in ret\r\n ret_addr = self.cc.teardown_callsite(self.state, return_val=expr, prototype=self.prototype)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 921, in teardown_callsite\r\n self.set_return_val(state, return_val, prototype.returnty)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 1339, in set_return_val\r\n super().set_return_val(state, val, ty, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 798, in set_return_val\r\n loc.set_value(state, val, stack_base=stack_base)\r\n File \"/home/ubuntu/angr-dev/angr/angr/calling_conventions.py\", line 310, in set_value\r\n state.registers.store(offset, value, size=self.size)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/unwrapper_mixin.py\", line 10, in store\r\n return super().store(\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/name_resolution_mixin.py\", line 60, in store\r\n return super().store(addr, data, size=size, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/bvv_conversion_mixin.py\", line 26, in store\r\n 
super().store(addr, data_bv, size=size, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/simplification_mixin.py\", line 13, in store\r\n super().store(addr, real_data, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/clouseau_mixin.py\", line 7, in store\r\n super().store(addr, data, size=size, condition=condition, endness=endness, inspect=inspect, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/actions_mixin.py\", line 34, in store\r\n super().store(addr, data, size=size, action=action, condition=condition, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/underconstrained_mixin.py\", line 28, in store\r\n super().store(addr, data, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py\", line 99, in store\r\n super().store(addr, data, size=size, condition=condition, **kwargs)\r\n File \"/home/ubuntu/angr-dev/angr/angr/storage/memory_mixins/size_resolution_mixin.py\", line 43, in store\r\n raise SimMemoryError(\"Not enough data for store\")\r\nangr.errors.SimMemoryError: Not enough data for store\r\n```\r\n\r\n### Steps to reproduce the bug\r\n\r\nHere is a minimal code that triggers the bug :\r\n\r\n```C\r\n#include <signal.h>\r\n#include <stdio.h>\r\n\r\nint main() {\r\n sigaction(SIGSEGV, NULL, NULL);\r\n return 0;\r\n}\r\n```\r\n\r\n```python\r\nimport angr\r\n\r\nangr_project = angr.Project(\"./target_program\", exclude_sim_procedures_list=['sigaction'])\r\nstate_successors = angr_project.factory.entry_state().step()\r\n\r\nwhile not state_successors.is_empty:\r\n state_successors = state_successors[0].step()\r\n```\r\n\r\nThe bug happens when the SimProcedure for the sigaction syscall is executed (not the SimProcedure for the libc wrapper ). Excluding the libc wrapper SimProcedure is necessary to actually reach the code calling the syscall and trigger the bug.\r\n\r\n### Environment\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nAs far as I understand, this is due to an inconsistency between the prototype and the implementation of the SimProcedure for the `sigaction` syscall, like #4033. According to its prototype, the SimProcedure should return a long but it actually returns an int.\r\n\r\nThe error happens when the SimProcedure for the syscall returns. It is raised from this check in `angr/storage/memory_mixins/size_resolution_mixin.py`\r\n\r\n```\r\n if out_size > max_size:\r\n raise SimMemoryError(\"Not enough data for store\")\r\n```\r\n\r\nbecause `max_size` is 4 and `out_size` is 8.\r\n\r\n`max_size` is determined by the length of the value returned by the SimProcedure, i.e. an int in the current implementation in `angr/procedures/linux_kernel/sigaction.py`\r\n```python\r\nclass rt_sigaction(angr.SimProcedure):\r\n def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument\r\n # TODO: actually do something\r\n # ...hack\r\n if self.state.solver.is_true(signum == 33):\r\n return self.state.libc.ret_errno(\"EINVAL\")\r\n return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\r\n```\r\n\r\n\r\n`out_size` is determined by the return type given in the prototype of the SimProcedure in `angr/procedures/definitions/linux_kernel.py`, i.e. 
a long :\r\n\r\n```python\r\n# long sys_rt_sigaction(int, const struct sigaction *, struct sigaction *, size_t);\r\n'rt_sigaction': SimTypeFunction([SimTypeInt(signed=True), SimTypePointer(SimStruct({}, name=\"sigaction\", pack=False, align=None), offset=0), SimTypePointer(SimStruct({}, name=\"sigaction\", pack=False, align=None), offset=0), SimTypeLong(signed=False, label=\"size_t\")], SimTypeLong(signed=True), arg_names=[\"None\", \"None\", \"None\", \"None\"]),\r\n```\r\n\r\nReplacing `int` with `long` in the return of the SimProcedure fixes the problem for me.\n", "before_files": [{"content": "import angr\n\n\nclass sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n\n\nclass rt_sigaction(angr.SimProcedure):\n def run(self, signum, act, oldact, sigsetsize): # pylint:disable=arguments-differ,unused-argument\n # TODO: actually do something\n # ...hack\n if self.state.solver.is_true(signum == 33):\n return self.state.libc.ret_errno(\"EINVAL\")\n return self.state.solver.BVV(0, self.arch.sizeof[\"int\"])\n", "path": "angr/procedures/linux_kernel/sigaction.py"}]} | 2,483 | 239 |
gh_patches_debug_22664 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2243 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move to IATI currencies
Currently, there are only two options for a currency in RSR: Euros and US Dollars. However, with the introduction of IATI fields we have also incorporated the IATI codelists for some fields, which allow the full IATI currency list, i.e. all possible currencies. We should extend RSR's currency support, keeping in mind:
- [x] Project editor should allow selecting all possible currencies and display them for any 'currency' field.
- [x] Show the currency code (e.g. EUR, USD, YEN) everywhere:
- [x] Projects list
- [x] Project page (summary, full report and finance tabs)
- [x] Organisation page
- [x] All fields in the project editor should respond to the currency selection: switching to a different currency should update every field in the editor that uses a currency.
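
As one concrete angle on the 'show the currency code' items above, the model layer could expose an amount already prefixed with the project's currency code — a sketch only; the method name and its placement on the `Partnership` model are assumptions:

```python
# Inside the existing Partnership model (akvo/rsr/models/partnership.py):
def funding_amount_with_currency(self):
    """Return the funding amount prefixed with the project's currency code, e.g. 'EUR 1000.00'."""
    if self.funding_amount and self.project and self.project.currency:
        return u'{0} {1}'.format(self.project.currency, self.funding_amount)
    return self.funding_amount
```

A serializer could then expose this as a read-only field so list and detail pages can show amounts like 'EUR 1000.00'.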
</issue>
<code>
[start of akvo/rsr/models/partnership.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from django.core.exceptions import ValidationError
9 from django.db import models
10 from django.utils.translation import ugettext_lazy as _
11
12 from ..fields import ValidXMLCharField
13
14
15 class Partnership(models.Model):
16 # the old way
17 FIELD_PARTNER = u'field'
18 FUNDING_PARTNER = u'funding'
19 SPONSOR_PARTNER = u'sponsor'
20 SUPPORT_PARTNER = u'support'
21 EXTENDING_PARTNER = u'extending'
22
23 PARTNER_TYPE_LIST = [
24 FIELD_PARTNER, FUNDING_PARTNER, SPONSOR_PARTNER, SUPPORT_PARTNER, EXTENDING_PARTNER
25 ]
26 PARTNER_LABELS = [
27 _(u'Implementing partner'),
28 _(u'Funding partner'),
29 _(u'Sponsor partner'),
30 _(u'Accountable partner'),
31 _(u'Extending partner'),
32 ]
33 PARTNER_TYPES = zip(PARTNER_TYPE_LIST, PARTNER_LABELS)
34
35 # the new way
36 IATI_FUNDING_PARTNER = 1
37 IATI_ACCOUNTABLE_PARTNER = 2
38 IATI_EXTENDING_PARTNER = 3
39 IATI_IMPLEMENTING_PARTNER = 4
40 AKVO_SPONSOR_PARTNER = 100 # not part of the IATI OrganisationRole codelist!
41 IATI_REPORTING_ORGANISATION = 101
42
43 # make sure the AKVO_SPONSOR_PARTNER is last in the list
44 IATI_ROLE_LIST = [
45 IATI_FUNDING_PARTNER, IATI_ACCOUNTABLE_PARTNER, IATI_EXTENDING_PARTNER,
46 IATI_IMPLEMENTING_PARTNER, AKVO_SPONSOR_PARTNER, IATI_REPORTING_ORGANISATION
47 ]
48 IATI_ROLE_LABELS = [
49 _(u'Funding partner'),
50 _(u'Accountable partner'),
51 _(u'Extending partner'),
52 _(u'Implementing partner'),
53 _(u'Sponsor partner'),
54 _(u'Reporting organisation'),
55 ]
56 IATI_ROLES = zip(IATI_ROLE_LIST, IATI_ROLE_LABELS)
57
58 # used when migrating
59 PARTNER_TYPES_TO_ROLES_MAP = {
60 FUNDING_PARTNER: IATI_FUNDING_PARTNER,
61 SUPPORT_PARTNER: IATI_ACCOUNTABLE_PARTNER,
62 FIELD_PARTNER: IATI_IMPLEMENTING_PARTNER,
63 SPONSOR_PARTNER: AKVO_SPONSOR_PARTNER,
64 }
65
66 # backwards compatibility
67 ROLES_TO_PARTNER_TYPES_MAP = {
68 IATI_FUNDING_PARTNER: FUNDING_PARTNER,
69 IATI_ACCOUNTABLE_PARTNER: SUPPORT_PARTNER,
70 IATI_EXTENDING_PARTNER: EXTENDING_PARTNER,
71 IATI_IMPLEMENTING_PARTNER: FIELD_PARTNER,
72 AKVO_SPONSOR_PARTNER: SPONSOR_PARTNER,
73 # TODO: not backwards compatible
74 IATI_REPORTING_ORGANISATION: u''
75 }
76
77 ALLIANCE_PARTNER = u'alliance'
78 KNOWLEDGE_PARTNER = u'knowledge'
79 NETWORK_PARTNER = u'network'
80
81 PARTNER_TYPE_EXTRAS_LIST = (ALLIANCE_PARTNER, KNOWLEDGE_PARTNER, NETWORK_PARTNER)
82 PARTNER_TYPE_EXTRA_LABELS = (
83 _(u'Alliance'),
84 _(u'Knowledge'),
85 _(u'Network')
86 )
87
88 PARTNER_TYPE_EXTRAS = zip(PARTNER_TYPE_EXTRAS_LIST, PARTNER_TYPE_EXTRA_LABELS)
89
90 organisation = models.ForeignKey(
91 'Organisation', verbose_name=_(u'organisation'), related_name='partnerships', null=True,
92 blank=True,
93 help_text=_(u'Select an organisation that is taking an active role in the project.')
94 )
95 project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='partnerships')
96 iati_organisation_role = models.PositiveSmallIntegerField(
97 _(u'organisation role'), choices=IATI_ROLES, db_index=True, null=True, blank=True,
98 help_text=_(u'Select the role of the organisation within the project:<br/>'
99 u'- Funding organisation: a government or organisation that provides funds to '
100 u'the project<br/>'
101 u'- Implementing organisation: an organisation involved in carrying out the '
102 u'activity or intervention<br/>'
103 u'- Accountable organisation: an organisation responsible for oversight of '
104 u'the project and its outcomes<br/>'
105 u'- Extending organisation: an organisation that manages the budget and '
106 u'direction of a project on behalf of the funding organisation<br/>'
107 u'- Reporting organisation: an organisation that will report this project in '
108 u'an IATI file')
109 )
110 # is_secondary_reporter is only used when the iati_organisation_role is set to
111 # IATI_REPORTING_ORGANISATION, thus the use of NullBooleanField
112 is_secondary_reporter = models.NullBooleanField(
113 _(u'secondary reporter'),
114 help_text=_(
115 u'This indicates whether the reporting organisation is a secondary publisher: '
116 u'publishing data for which it is not directly responsible.'
117 )
118 )
119 funding_amount = models.DecimalField(
120 _(u'funding amount'), max_digits=14, decimal_places=2, blank=True, null=True, db_index=True,
121 help_text=_(u'It’s only possible to indicate a funding amount for funding partners. Use a '
122 u'period to denote decimals.')
123 )
124 partner_type_extra = ValidXMLCharField(
125 _(u'partner type extra'), max_length=30, blank=True, null=True, choices=PARTNER_TYPE_EXTRAS,
126 help_text=_(u'RSR specific partner type.')
127 )
128 iati_activity_id = ValidXMLCharField(
129 _(u'IATI activity ID'), max_length=100, blank=True, null=True, db_index=True,
130 help_text=_(u'A valid activity identifier published by the participating organisation '
131 u'which points to the activity that it has published to IATI that describes '
132 u'its role in this activity.')
133 )
134 internal_id = ValidXMLCharField(
135 _(u'Internal ID'), max_length=75, blank=True, null=True, db_index=True,
136 help_text=_(u'This field can be used to indicate an internal identifier that is used by '
137 u'the organisation for this project. (75 characters)')
138 )
139 iati_url = models.URLField(
140 blank=True,
141 help_text=_(
142 u'Please enter the URL for where the IATI Activity Id Funding details are published. '
143 u'For projects directly or indirectly funded by the Dutch Government, this should '
144 u'be the OpenAid.nl page. For other projects, an alternative URL can be used.'
145 )
146 )
147 related_activity_id = ValidXMLCharField(
148 _(u'related IATI activity ID'), max_length=100, blank=True
149 )
150
151 def iati_organisation_role_label(self):
152 if self.iati_organisation_role:
153 return dict(self.IATI_ROLES).get(self.iati_organisation_role)
154 else:
155 return ''
156
157 def iati_role_to_partner_type(self):
158 if self.iati_organisation_role:
159 return dict(self.ROLES_TO_PARTNER_TYPES_MAP).get(int(self.iati_organisation_role))
160 else:
161 return None
162
163 def organisation_show_link(self):
164 if self.organisation:
165 return u'<a href="{0}">{1}</a>'.format(self.organisation.get_absolute_url(),
166 self.organisation.long_name or
167 self.organisation.name)
168 return ''
169
170 class Meta:
171 app_label = 'rsr'
172 verbose_name = _(u'project partner')
173 verbose_name_plural = _(u'project partners')
174 ordering = ['iati_organisation_role']
175
176 def __unicode__(self):
177 if self.organisation:
178 if self.organisation.name:
179 organisation_unicode = self.organisation.name
180 elif self.organisation.long_name:
181 organisation_unicode = self.organisation.long_name
182 else:
183 organisation_unicode = u'%s' % _(u'Organisation name not specified')
184 else:
185 organisation_unicode = u'%s' % _(u'Organisation not specified')
186
187 if self.iati_organisation_role:
188 organisation_unicode += u' ({})'.format(
189 unicode(dict(self.IATI_ROLES)[self.iati_organisation_role])
190 )
191 return organisation_unicode
192
193 def clean(self):
194 # Don't allow multiple reporting organisations
195 if self.iati_organisation_role == self.IATI_REPORTING_ORGANISATION:
196 reporting_orgs = self.project.partnerships.filter(
197 iati_organisation_role=self.IATI_REPORTING_ORGANISATION
198 )
199
200 if reporting_orgs.count() > 1:
201 raise ValidationError(
202 {'iati_organisation_role': u'%s' % _(u'Project can only have one reporting '
203 u'organisation')}
204 )
205
206 def save(self, *args, **kwargs):
207 super(Partnership, self).save(*args, **kwargs)
208 self.set_primary_organisation()
209
210 def delete(self, *args, **kwargs):
211 super(Partnership, self).delete(*args, **kwargs)
212 self.set_primary_organisation()
213
214 def set_primary_organisation(self):
215 # Check which organisation should be set to the primary organisation of the project
216 # This is done to get better performance on the project list page
217 self.project.primary_organisation = self.project.find_primary_organisation()
218 self.project.save(update_fields=['primary_organisation'])
219
[end of akvo/rsr/models/partnership.py]
[start of akvo/rest/serializers/partnership.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from rest_framework import serializers
8
9 from akvo.rsr.models import Partnership
10
11 from akvo.rest.serializers.organisation import OrganisationBasicSerializer
12 from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer
13
14
15 class PartnershipRawSerializer(BaseRSRSerializer):
16
17 class Meta:
18 model = Partnership
19
20
21 class PartnershipSerializer(PartnershipRawSerializer):
22
23 organisation_show_link = serializers.Field(source='organisation_show_link')
24 partner_type = serializers.Field(source='iati_role_to_partner_type')
25 organisation_role_label = serializers.Field(source='iati_organisation_role_label')
26
27
28 class PartnershipBasicSerializer(BaseRSRSerializer):
29
30 organisation = OrganisationBasicSerializer(source='organisation')
31 iati_organisation_role_label = serializers.Field(source='iati_organisation_role_label')
32
33 class Meta:
34 model = Partnership
35 fields = (
36 'id',
37 'project',
38 'organisation',
39 'iati_organisation_role',
40 'iati_organisation_role_label',
41 )
42
[end of akvo/rest/serializers/partnership.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/serializers/partnership.py b/akvo/rest/serializers/partnership.py
--- a/akvo/rest/serializers/partnership.py
+++ b/akvo/rest/serializers/partnership.py
@@ -23,6 +23,7 @@
organisation_show_link = serializers.Field(source='organisation_show_link')
partner_type = serializers.Field(source='iati_role_to_partner_type')
organisation_role_label = serializers.Field(source='iati_organisation_role_label')
+ funding_amount_label = serializers.Field(source='funding_amount_with_currency')
class PartnershipBasicSerializer(BaseRSRSerializer):
diff --git a/akvo/rsr/models/partnership.py b/akvo/rsr/models/partnership.py
--- a/akvo/rsr/models/partnership.py
+++ b/akvo/rsr/models/partnership.py
@@ -167,6 +167,12 @@
self.organisation.name)
return ''
+ def funding_amount_with_currency(self):
+ """Returns the funding amount, prepended by the project's currency."""
+ if self.funding_amount and self.project and self.project.currency:
+ return u'{0} {1}'.format(self.project.currency, self.funding_amount)
+ return self.funding_amount
+
class Meta:
app_label = 'rsr'
verbose_name = _(u'project partner')
| {"golden_diff": "diff --git a/akvo/rest/serializers/partnership.py b/akvo/rest/serializers/partnership.py\n--- a/akvo/rest/serializers/partnership.py\n+++ b/akvo/rest/serializers/partnership.py\n@@ -23,6 +23,7 @@\n organisation_show_link = serializers.Field(source='organisation_show_link')\n partner_type = serializers.Field(source='iati_role_to_partner_type')\n organisation_role_label = serializers.Field(source='iati_organisation_role_label')\n+ funding_amount_label = serializers.Field(source='funding_amount_with_currency')\n \n \n class PartnershipBasicSerializer(BaseRSRSerializer):\ndiff --git a/akvo/rsr/models/partnership.py b/akvo/rsr/models/partnership.py\n--- a/akvo/rsr/models/partnership.py\n+++ b/akvo/rsr/models/partnership.py\n@@ -167,6 +167,12 @@\n self.organisation.name)\n return ''\n \n+ def funding_amount_with_currency(self):\n+ \"\"\"Returns the funding amount, prepended by the project's currency.\"\"\"\n+ if self.funding_amount and self.project and self.project.currency:\n+ return u'{0} {1}'.format(self.project.currency, self.funding_amount)\n+ return self.funding_amount\n+\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'project partner')\n", "issue": "Move to IATI currencies\nCurrently, there's only 2 options for a currency in RSR: Euros and US Dollars. However, with the introduction of IATI fields we have also incorporated the IATI codelists for some fields that allow the list of IATI currencies including all possible currencies. We should extend the currencies of RSR, keeping in mind:\n- [x] Project editor should allow to select all possible currencies and display it for any 'currency' field.\n- [x] Show the currency code (e.g. EUR, USD, YEN) everywhere:\n - [x] Projects list\n - [x] Project page (summary, full report and finance tabs)\n - [x] Organisation page\n- [x] All fields in the project editor should respond to the currency selection. 
When you switch to a different currency, this should happen for all fields that use a currency in the editor.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\n\nclass Partnership(models.Model):\n # the old way\n FIELD_PARTNER = u'field'\n FUNDING_PARTNER = u'funding'\n SPONSOR_PARTNER = u'sponsor'\n SUPPORT_PARTNER = u'support'\n EXTENDING_PARTNER = u'extending'\n\n PARTNER_TYPE_LIST = [\n FIELD_PARTNER, FUNDING_PARTNER, SPONSOR_PARTNER, SUPPORT_PARTNER, EXTENDING_PARTNER\n ]\n PARTNER_LABELS = [\n _(u'Implementing partner'),\n _(u'Funding partner'),\n _(u'Sponsor partner'),\n _(u'Accountable partner'),\n _(u'Extending partner'),\n ]\n PARTNER_TYPES = zip(PARTNER_TYPE_LIST, PARTNER_LABELS)\n\n # the new way\n IATI_FUNDING_PARTNER = 1\n IATI_ACCOUNTABLE_PARTNER = 2\n IATI_EXTENDING_PARTNER = 3\n IATI_IMPLEMENTING_PARTNER = 4\n AKVO_SPONSOR_PARTNER = 100 # not part of the IATI OrganisationRole codelist!\n IATI_REPORTING_ORGANISATION = 101\n\n # make sure the AKVO_SPONSOR_PARTNER is last in the list\n IATI_ROLE_LIST = [\n IATI_FUNDING_PARTNER, IATI_ACCOUNTABLE_PARTNER, IATI_EXTENDING_PARTNER,\n IATI_IMPLEMENTING_PARTNER, AKVO_SPONSOR_PARTNER, IATI_REPORTING_ORGANISATION\n ]\n IATI_ROLE_LABELS = [\n _(u'Funding partner'),\n _(u'Accountable partner'),\n _(u'Extending partner'),\n _(u'Implementing partner'),\n _(u'Sponsor partner'),\n _(u'Reporting organisation'),\n ]\n IATI_ROLES = zip(IATI_ROLE_LIST, IATI_ROLE_LABELS)\n\n # used when migrating\n PARTNER_TYPES_TO_ROLES_MAP = {\n FUNDING_PARTNER: IATI_FUNDING_PARTNER,\n SUPPORT_PARTNER: IATI_ACCOUNTABLE_PARTNER,\n FIELD_PARTNER: IATI_IMPLEMENTING_PARTNER,\n SPONSOR_PARTNER: AKVO_SPONSOR_PARTNER,\n }\n\n # backwards compatibility\n ROLES_TO_PARTNER_TYPES_MAP = {\n IATI_FUNDING_PARTNER: FUNDING_PARTNER,\n IATI_ACCOUNTABLE_PARTNER: SUPPORT_PARTNER,\n IATI_EXTENDING_PARTNER: EXTENDING_PARTNER,\n IATI_IMPLEMENTING_PARTNER: FIELD_PARTNER,\n AKVO_SPONSOR_PARTNER: SPONSOR_PARTNER,\n # TODO: not backwards compatible\n IATI_REPORTING_ORGANISATION: u''\n }\n\n ALLIANCE_PARTNER = u'alliance'\n KNOWLEDGE_PARTNER = u'knowledge'\n NETWORK_PARTNER = u'network'\n\n PARTNER_TYPE_EXTRAS_LIST = (ALLIANCE_PARTNER, KNOWLEDGE_PARTNER, NETWORK_PARTNER)\n PARTNER_TYPE_EXTRA_LABELS = (\n _(u'Alliance'),\n _(u'Knowledge'),\n _(u'Network')\n )\n\n PARTNER_TYPE_EXTRAS = zip(PARTNER_TYPE_EXTRAS_LIST, PARTNER_TYPE_EXTRA_LABELS)\n\n organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'organisation'), related_name='partnerships', null=True,\n blank=True,\n help_text=_(u'Select an organisation that is taking an active role in the project.')\n )\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='partnerships')\n iati_organisation_role = models.PositiveSmallIntegerField(\n _(u'organisation role'), choices=IATI_ROLES, db_index=True, null=True, blank=True,\n help_text=_(u'Select the role of the organisation within the project:<br/>'\n u'- Funding organisation: a government or organisation that provides funds to '\n u'the project<br/>'\n u'- Implementing organisation: an organisation 
involved in carrying out the '\n u'activity or intervention<br/>'\n u'- Accountable organisation: an organisation responsible for oversight of '\n u'the project and its outcomes<br/>'\n u'- Extending organisation: an organisation that manages the budget and '\n u'direction of a project on behalf of the funding organisation<br/>'\n u'- Reporting organisation: an organisation that will report this project in '\n u'an IATI file')\n )\n # is_secondary_reporter is only used when the iati_organisation_role is set to\n # IATI_REPORTING_ORGANISATION, thus the use of NullBooleanField\n is_secondary_reporter = models.NullBooleanField(\n _(u'secondary reporter'),\n help_text=_(\n u'This indicates whether the reporting organisation is a secondary publisher: '\n u'publishing data for which it is not directly responsible.'\n )\n )\n funding_amount = models.DecimalField(\n _(u'funding amount'), max_digits=14, decimal_places=2, blank=True, null=True, db_index=True,\n help_text=_(u'It\u2019s only possible to indicate a funding amount for funding partners. Use a '\n u'period to denote decimals.')\n )\n partner_type_extra = ValidXMLCharField(\n _(u'partner type extra'), max_length=30, blank=True, null=True, choices=PARTNER_TYPE_EXTRAS,\n help_text=_(u'RSR specific partner type.')\n )\n iati_activity_id = ValidXMLCharField(\n _(u'IATI activity ID'), max_length=100, blank=True, null=True, db_index=True,\n help_text=_(u'A valid activity identifier published by the participating organisation '\n u'which points to the activity that it has published to IATI that describes '\n u'its role in this activity.')\n )\n internal_id = ValidXMLCharField(\n _(u'Internal ID'), max_length=75, blank=True, null=True, db_index=True,\n help_text=_(u'This field can be used to indicate an internal identifier that is used by '\n u'the organisation for this project. (75 characters)')\n )\n iati_url = models.URLField(\n blank=True,\n help_text=_(\n u'Please enter the URL for where the IATI Activity Id Funding details are published. '\n u'For projects directly or indirectly funded by the Dutch Government, this should '\n u'be the OpenAid.nl page. 
For other projects, an alternative URL can be used.'\n )\n )\n related_activity_id = ValidXMLCharField(\n _(u'related IATI activity ID'), max_length=100, blank=True\n )\n\n def iati_organisation_role_label(self):\n if self.iati_organisation_role:\n return dict(self.IATI_ROLES).get(self.iati_organisation_role)\n else:\n return ''\n\n def iati_role_to_partner_type(self):\n if self.iati_organisation_role:\n return dict(self.ROLES_TO_PARTNER_TYPES_MAP).get(int(self.iati_organisation_role))\n else:\n return None\n\n def organisation_show_link(self):\n if self.organisation:\n return u'<a href=\"{0}\">{1}</a>'.format(self.organisation.get_absolute_url(),\n self.organisation.long_name or\n self.organisation.name)\n return ''\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'project partner')\n verbose_name_plural = _(u'project partners')\n ordering = ['iati_organisation_role']\n\n def __unicode__(self):\n if self.organisation:\n if self.organisation.name:\n organisation_unicode = self.organisation.name\n elif self.organisation.long_name:\n organisation_unicode = self.organisation.long_name\n else:\n organisation_unicode = u'%s' % _(u'Organisation name not specified')\n else:\n organisation_unicode = u'%s' % _(u'Organisation not specified')\n\n if self.iati_organisation_role:\n organisation_unicode += u' ({})'.format(\n unicode(dict(self.IATI_ROLES)[self.iati_organisation_role])\n )\n return organisation_unicode\n\n def clean(self):\n # Don't allow multiple reporting organisations\n if self.iati_organisation_role == self.IATI_REPORTING_ORGANISATION:\n reporting_orgs = self.project.partnerships.filter(\n iati_organisation_role=self.IATI_REPORTING_ORGANISATION\n )\n\n if reporting_orgs.count() > 1:\n raise ValidationError(\n {'iati_organisation_role': u'%s' % _(u'Project can only have one reporting '\n u'organisation')}\n )\n\n def save(self, *args, **kwargs):\n super(Partnership, self).save(*args, **kwargs)\n self.set_primary_organisation()\n\n def delete(self, *args, **kwargs):\n super(Partnership, self).delete(*args, **kwargs)\n self.set_primary_organisation()\n\n def set_primary_organisation(self):\n # Check which organisation should be set to the primary organisation of the project\n # This is done to get better performance on the project list page\n self.project.primary_organisation = self.project.find_primary_organisation()\n self.project.save(update_fields=['primary_organisation'])\n", "path": "akvo/rsr/models/partnership.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import serializers\n\nfrom akvo.rsr.models import Partnership\n\nfrom akvo.rest.serializers.organisation import OrganisationBasicSerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\n\n\nclass PartnershipRawSerializer(BaseRSRSerializer):\n\n class Meta:\n model = Partnership\n\n\nclass PartnershipSerializer(PartnershipRawSerializer):\n\n organisation_show_link = serializers.Field(source='organisation_show_link')\n partner_type = serializers.Field(source='iati_role_to_partner_type')\n organisation_role_label = serializers.Field(source='iati_organisation_role_label')\n\n\nclass PartnershipBasicSerializer(BaseRSRSerializer):\n\n organisation = OrganisationBasicSerializer(source='organisation')\n iati_organisation_role_label 
= serializers.Field(source='iati_organisation_role_label')\n\n class Meta:\n model = Partnership\n fields = (\n 'id',\n 'project',\n 'organisation',\n 'iati_organisation_role',\n 'iati_organisation_role_label',\n )\n", "path": "akvo/rest/serializers/partnership.py"}]} | 3,815 | 306 |
gh_patches_debug_27449 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-321 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Function to handle deleting schemas
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Users might want to delete schemas. We don't currently support this.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
A function that handles deleting schemas in the database. We should raise an error if anything outside the schema still references it.
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
This should be in the `db` module.
</issue>
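As a concrete illustration of what the request amounts to, here is a minimal sketch of such a helper. It is an example under assumptions, not the project's implementation: it assumes SQLAlchemy's `DropSchema` construct and a `cascade` flag mirroring Postgres semantics (Postgres refuses to drop a schema that other objects still reference unless `CASCADE` is given, which is exactly the error behaviour the issue asks for).

```
from sqlalchemy.schema import DropSchema


def delete_schema(schema, engine, cascade=False):
    # Without cascade, Postgres raises an error when anything outside the
    # schema still depends on it; with cascade=True the dependents are
    # dropped along with the schema.
    with engine.begin() as connection:
        connection.execute(DropSchema(schema, cascade=cascade))
```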
<code>
[start of db/schemas.py]
1 import logging
2 import warnings
3 from sqlalchemy.schema import CreateSchema
4 from sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table
5
6 from db import types
7
8 logger = logging.getLogger(__name__)
9
10 TYPES_SCHEMA = types.base.SCHEMA
11
12 EXCLUDED_SCHEMATA = [TYPES_SCHEMA, "information_schema"]
13
14
15 def get_schema_name_from_oid(oid, engine):
16 return reflect_schema(engine, oid=oid)["name"]
17
18
19 def get_schema_oid_from_name(name, engine):
20 return reflect_schema(engine, name=name)["oid"]
21
22
23 def reflect_schema(engine, name=None, oid=None):
24 # If we have both arguments, the behavior is undefined.
25 try:
26 assert name is None or oid is None
27 except AssertionError as e:
28 logger.error("ERROR: Only one of 'name' or 'oid' can be given!")
29 raise e
30 metadata = MetaData()
31 with warnings.catch_warnings():
32 warnings.filterwarnings("ignore", message="Did not recognize type")
33 pg_namespace = Table("pg_namespace", metadata, autoload_with=engine)
34 sel = (
35 select(pg_namespace.c.oid, pg_namespace.c.nspname.label("name"))
36 .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid))
37 )
38 with engine.begin() as conn:
39 schema_info = conn.execute(sel).fetchone()
40 return schema_info
41
42
43 def get_mathesar_schemas(engine):
44 return [schema for schema, _ in get_mathesar_schemas_with_oids(engine)]
45
46
47 def get_mathesar_schemas_with_oids(engine):
48 metadata = MetaData()
49 with warnings.catch_warnings():
50 warnings.filterwarnings("ignore", message="Did not recognize type")
51 pg_namespace = Table("pg_namespace", metadata, autoload_with=engine)
52 sel = (
53 select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid)
54 .where(
55 and_(
56 *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA],
57 not_(pg_namespace.c.nspname.like("pg_%"))
58 )
59 )
60 )
61 with engine.begin() as conn:
62 result = conn.execute(sel).fetchall()
63 return result
64
65
66 def get_all_schemas(engine):
67 inspector = inspect(engine)
68 # We don't need to exclude system schemas (i.e., starting with "pg_")
69 # since Inspector.get_schema_names already excludes them. Thus, this
70 # function actually gets all non-pg-reserved schemas.
71 return inspector.get_schema_names()
72
73
74 def create_schema(schema, engine):
75 """
76 This method creates a Postgres schema.
77 """
78 if schema not in get_all_schemas(engine):
79 with engine.begin() as connection:
80 connection.execute(CreateSchema(schema))
81
[end of db/schemas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/schemas.py b/db/schemas.py
--- a/db/schemas.py
+++ b/db/schemas.py
@@ -1,7 +1,11 @@
import logging
import warnings
-from sqlalchemy.schema import CreateSchema
+from sqlalchemy.schema import CreateSchema, DropSchema
from sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table
+from sqlalchemy.exc import InternalError
+from sqlalchemy.schema import DDLElement
+from sqlalchemy.ext import compiler
+from psycopg2.errors import DependentObjectsStillExist
from db import types
@@ -78,3 +82,42 @@
if schema not in get_all_schemas(engine):
with engine.begin() as connection:
connection.execute(CreateSchema(schema))
+
+
+def delete_schema(schema, engine, cascade=False, if_exists=False):
+ """
+ This method deletes a Postgres schema.
+ """
+ if if_exists and schema not in get_all_schemas(engine):
+ return
+
+ with engine.begin() as connection:
+ try:
+ connection.execute(DropSchema(schema, cascade=cascade))
+ except InternalError as e:
+ if isinstance(e.orig, DependentObjectsStillExist):
+ raise e.orig
+ else:
+ raise e
+
+
+class RenameSchema(DDLElement):
+ def __init__(self, schema, rename_to):
+ self.schema = schema
+ self.rename_to = rename_to
+
+
[email protected](RenameSchema)
+def compile_rename_schema(element, compiler, **_):
+ return "ALTER SCHEMA %s RENAME TO %s" % (
+ element.schema,
+ element.rename_to
+ )
+
+
+def rename_schema(schema, engine, rename_to):
+ """
+ This method renames a Postgres schema.
+ """
+ with engine.begin() as connection:
+ connection.execute(RenameSchema(schema, rename_to))
| {"golden_diff": "diff --git a/db/schemas.py b/db/schemas.py\n--- a/db/schemas.py\n+++ b/db/schemas.py\n@@ -1,7 +1,11 @@\n import logging\n import warnings\n-from sqlalchemy.schema import CreateSchema\n+from sqlalchemy.schema import CreateSchema, DropSchema\n from sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table\n+from sqlalchemy.exc import InternalError\n+from sqlalchemy.schema import DDLElement\n+from sqlalchemy.ext import compiler\n+from psycopg2.errors import DependentObjectsStillExist\n \n from db import types\n \n@@ -78,3 +82,42 @@\n if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n+\n+\n+def delete_schema(schema, engine, cascade=False, if_exists=False):\n+ \"\"\"\n+ This method deletes a Postgres schema.\n+ \"\"\"\n+ if if_exists and schema not in get_all_schemas(engine):\n+ return\n+\n+ with engine.begin() as connection:\n+ try:\n+ connection.execute(DropSchema(schema, cascade=cascade))\n+ except InternalError as e:\n+ if isinstance(e.orig, DependentObjectsStillExist):\n+ raise e.orig\n+ else:\n+ raise e\n+\n+\n+class RenameSchema(DDLElement):\n+ def __init__(self, schema, rename_to):\n+ self.schema = schema\n+ self.rename_to = rename_to\n+\n+\[email protected](RenameSchema)\n+def compile_rename_schema(element, compiler, **_):\n+ return \"ALTER SCHEMA %s RENAME TO %s\" % (\n+ element.schema,\n+ element.rename_to\n+ )\n+\n+\n+def rename_schema(schema, engine, rename_to):\n+ \"\"\"\n+ This method renames a Postgres schema.\n+ \"\"\"\n+ with engine.begin() as connection:\n+ connection.execute(RenameSchema(schema, rename_to))\n", "issue": "Function to handle deleting schemas\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nUsers might want to delete schemas. We don't currently support this.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nA function that handles deleting of schemas in the database. 
We should raise an error if there is anything outside of the schema referencing the schema.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nThis should be in the `db` module.\n", "before_files": [{"content": "import logging\nimport warnings\nfrom sqlalchemy.schema import CreateSchema\nfrom sqlalchemy import inspect, MetaData, select, and_, not_, or_, Table\n\nfrom db import types\n\nlogger = logging.getLogger(__name__)\n\nTYPES_SCHEMA = types.base.SCHEMA\n\nEXCLUDED_SCHEMATA = [TYPES_SCHEMA, \"information_schema\"]\n\n\ndef get_schema_name_from_oid(oid, engine):\n return reflect_schema(engine, oid=oid)[\"name\"]\n\n\ndef get_schema_oid_from_name(name, engine):\n return reflect_schema(engine, name=name)[\"oid\"]\n\n\ndef reflect_schema(engine, name=None, oid=None):\n # If we have both arguments, the behavior is undefined.\n try:\n assert name is None or oid is None\n except AssertionError as e:\n logger.error(\"ERROR: Only one of 'name' or 'oid' can be given!\")\n raise e\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.oid, pg_namespace.c.nspname.label(\"name\"))\n .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid))\n )\n with engine.begin() as conn:\n schema_info = conn.execute(sel).fetchone()\n return schema_info\n\n\ndef get_mathesar_schemas(engine):\n return [schema for schema, _ in get_mathesar_schemas_with_oids(engine)]\n\n\ndef get_mathesar_schemas_with_oids(engine):\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid)\n .where(\n and_(\n *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA],\n not_(pg_namespace.c.nspname.like(\"pg_%\"))\n )\n )\n )\n with engine.begin() as conn:\n result = conn.execute(sel).fetchall()\n return result\n\n\ndef get_all_schemas(engine):\n inspector = inspect(engine)\n # We don't need to exclude system schemas (i.e., starting with \"pg_\")\n # since Inspector.get_schema_names already excludes them. Thus, this\n # function actually gets all non-pg-reserved schemas.\n return inspector.get_schema_names()\n\n\ndef create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n", "path": "db/schemas.py"}]} | 1,400 | 421 |
gh_patches_debug_20114 | rasdani/github-patches | git_diff | kubeflow__pipelines-5165 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problems upgrading to TFX 0.27.0
### What steps did you take:
Installed Kubeflow Pipelines on GCP via kustomize manifests.
Tried to run the Taxi TFX Demo.
### What happened:
On the first step, I got the error "No module named 'tfx.dsl.components'"
### What did you expect to happen:
To successfully run the TFX Taxi Demo.
### Environment:
How did you deploy Kubeflow Pipelines (KFP)?
Via the kustomize manifests in GCP.
KFP version: 1.4.0-rc.1
/kind bug
/area backend
</issue>
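The missing `tfx.dsl.components` module is the kind of error that shows up when a pipeline compiled with a newer TFX SDK runs inside an older TFX container image. As a rough sketch — assuming the fix is simply to keep the image tag in step with the installed SDK — the runner configuration in the sample would look something like this:

```
from tfx.orchestration.kubeflow import kubeflow_dag_runner

config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
    kubeflow_metadata_config=(
        kubeflow_dag_runner.get_default_kubeflow_metadata_config()),
    # The image tag should match the tfx SDK version used to compile the
    # pipeline; older images lay out modules differently.
    tfx_image='gcr.io/tfx-oss-public/tfx:0.27.0',
)
```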
<code>
[start of samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py]
1 #!/usr/bin/env python3
2 # Copyright 2019 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17
18 from typing import Text
19
20 import kfp
21 import tensorflow_model_analysis as tfma
22 from tfx.components.evaluator.component import Evaluator
23 from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
24 from tfx.components.example_validator.component import ExampleValidator
25 from tfx.components.pusher.component import Pusher
26 from tfx.components.schema_gen.component import SchemaGen
27 from tfx.components.statistics_gen.component import StatisticsGen
28 from tfx.components.trainer.component import Trainer
29 from tfx.components.transform.component import Transform
30 from tfx.orchestration import data_types
31 from tfx.orchestration import pipeline
32 from tfx.orchestration.kubeflow import kubeflow_dag_runner
33 from tfx.utils.dsl_utils import external_input
34 from tfx.proto import pusher_pb2
35 from tfx.proto import trainer_pb2
36
37 # Define pipeline params used for pipeline execution.
38 # Path to the module file, should be a GCS path,
39 # or a module file baked in the docker image used by the pipeline.
40 _taxi_module_file_param = data_types.RuntimeParameter(
41 name='module-file',
42 default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',
43 ptype=Text,
44 )
45
46 # Path to the CSV data file, under which their should be a data.csv file.
47 _data_root_param = data_types.RuntimeParameter(
48 name='data-root',
49 default='gs://ml-pipeline/sample-data/chicago-taxi/data',
50 ptype=Text,
51 )
52
53 # Path of pipeline root, should be a GCS path.
54 pipeline_root = os.path.join(
55 'gs://{{kfp-default-bucket}}', 'tfx_taxi_simple', kfp.dsl.RUN_ID_PLACEHOLDER
56 )
57
58
59 def _create_pipeline(
60 pipeline_root: Text, csv_input_location: data_types.RuntimeParameter,
61 taxi_module_file: data_types.RuntimeParameter, enable_cache: bool
62 ):
63 """Creates a simple Kubeflow-based Chicago Taxi TFX pipeline.
64
65 Args:
66 pipeline_root: The root of the pipeline output.
67 csv_input_location: The location of the input data directory.
68 taxi_module_file: The location of the module file for Transform/Trainer.
69 enable_cache: Whether to enable cache or not.
70
71 Returns:
72 A logical TFX pipeline.Pipeline object.
73 """
74 examples = external_input(csv_input_location)
75
76 example_gen = CsvExampleGen(input=examples)
77 statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
78 infer_schema = SchemaGen(
79 statistics=statistics_gen.outputs['statistics'],
80 infer_feature_shape=False,
81 )
82 validate_stats = ExampleValidator(
83 statistics=statistics_gen.outputs['statistics'],
84 schema=infer_schema.outputs['schema'],
85 )
86 transform = Transform(
87 examples=example_gen.outputs['examples'],
88 schema=infer_schema.outputs['schema'],
89 module_file=taxi_module_file,
90 )
91 trainer = Trainer(
92 module_file=taxi_module_file,
93 transformed_examples=transform.outputs['transformed_examples'],
94 schema=infer_schema.outputs['schema'],
95 transform_graph=transform.outputs['transform_graph'],
96 train_args=trainer_pb2.TrainArgs(num_steps=10),
97 eval_args=trainer_pb2.EvalArgs(num_steps=5),
98 )
99 # Set the TFMA config for Model Evaluation and Validation.
100 eval_config = tfma.EvalConfig(
101 model_specs=[
102 # Using signature 'eval' implies the use of an EvalSavedModel. To use
103 # a serving model remove the signature to defaults to 'serving_default'
104 # and add a label_key.
105 tfma.ModelSpec(signature_name='eval')
106 ],
107 metrics_specs=[
108 tfma.MetricsSpec(
109 # The metrics added here are in addition to those saved with the
110 # model (assuming either a keras model or EvalSavedModel is used).
111 # Any metrics added into the saved model (for example using
112 # model.compile(..., metrics=[...]), etc) will be computed
113 # automatically.
114 metrics=[tfma.MetricConfig(class_name='ExampleCount')],
115 # To add validation thresholds for metrics saved with the model,
116 # add them keyed by metric name to the thresholds map.
117 thresholds={
118 'binary_accuracy':
119 tfma.MetricThreshold(
120 value_threshold=tfma.GenericValueThreshold(
121 lower_bound={'value': 0.5}
122 ),
123 change_threshold=tfma.GenericChangeThreshold(
124 direction=tfma.MetricDirection.HIGHER_IS_BETTER,
125 absolute={'value': -1e-10}
126 )
127 )
128 }
129 )
130 ],
131 slicing_specs=[
132 # An empty slice spec means the overall slice, i.e. the whole dataset.
133 tfma.SlicingSpec(),
134 # Data can be sliced along a feature column. In this case, data is
135 # sliced along feature column trip_start_hour.
136 tfma.SlicingSpec(feature_keys=['trip_start_hour'])
137 ]
138 )
139
140 model_analyzer = Evaluator(
141 examples=example_gen.outputs['examples'],
142 model=trainer.outputs['model'],
143 eval_config=eval_config,
144 )
145
146 pusher = Pusher(
147 model=trainer.outputs['model'],
148 model_blessing=model_analyzer.outputs['blessing'],
149 push_destination=pusher_pb2.PushDestination(
150 filesystem=pusher_pb2.PushDestination.Filesystem(
151 base_directory=os.path.
152 join(str(pipeline.ROOT_PARAMETER), 'model_serving')
153 )
154 ),
155 )
156
157 return pipeline.Pipeline(
158 pipeline_name='parameterized_tfx_oss',
159 pipeline_root=pipeline_root,
160 components=[
161 example_gen, statistics_gen, infer_schema, validate_stats, transform,
162 trainer, model_analyzer, pusher
163 ],
164 enable_cache=enable_cache,
165 )
166
167
168 if __name__ == '__main__':
169 enable_cache = True
170 pipeline = _create_pipeline(
171 pipeline_root,
172 _data_root_param,
173 _taxi_module_file_param,
174 enable_cache=enable_cache,
175 )
176 # Make sure the version of TFX image used is consistent with the version of
177 # TFX SDK.
178 config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
179 kubeflow_metadata_config=kubeflow_dag_runner.
180 get_default_kubeflow_metadata_config(),
181 tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',
182 )
183 kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(
184 output_filename=__file__ + '.yaml', config=config
185 )
186
187 kfp_runner.run(pipeline)
188
[end of samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
--- a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
+++ b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py
@@ -39,7 +39,7 @@
# or a module file baked in the docker image used by the pipeline.
_taxi_module_file_param = data_types.RuntimeParameter(
name='module-file',
- default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',
+ default='/tfx/src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',
ptype=Text,
)
@@ -178,7 +178,7 @@
config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
kubeflow_metadata_config=kubeflow_dag_runner.
get_default_kubeflow_metadata_config(),
- tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',
+ tfx_image='gcr.io/tfx-oss-public/tfx:0.27.0',
)
kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(
output_filename=__file__ + '.yaml', config=config
| {"golden_diff": "diff --git a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py\n--- a/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py\n+++ b/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py\n@@ -39,7 +39,7 @@\n # or a module file baked in the docker image used by the pipeline.\n _taxi_module_file_param = data_types.RuntimeParameter(\n name='module-file',\n- default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n+ default='/tfx/src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n ptype=Text,\n )\n \n@@ -178,7 +178,7 @@\n config = kubeflow_dag_runner.KubeflowDagRunnerConfig(\n kubeflow_metadata_config=kubeflow_dag_runner.\n get_default_kubeflow_metadata_config(),\n- tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',\n+ tfx_image='gcr.io/tfx-oss-public/tfx:0.27.0',\n )\n kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(\n output_filename=__file__ + '.yaml', config=config\n", "issue": "Problems upgrading to TFX 0.27.0\n### What steps did you take:\r\nInstalled Kubeflow Pipelines on GCP via kustomize manifests.\r\nTried to run the Taxi TFX Demo.\r\n\r\n### What happened:\r\nOn the first step, I got the error \"No module named 'tfx.dsl.components'\"\r\n\r\n### What did you expect to happen:\r\nTo successfully run the TFX Taxi Demo.\r\n\r\n### Environment:\r\nHow did you deploy Kubeflow Pipelines (KFP)?\r\nVia the kustomize manifests in GCP.\r\n\r\nKFP version: 1.4.0-rc.1\r\n\r\n/kind bug\r\n/area backend\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom typing import Text\n\nimport kfp\nimport tensorflow_model_analysis as tfma\nfrom tfx.components.evaluator.component import Evaluator\nfrom tfx.components.example_gen.csv_example_gen.component import CsvExampleGen\nfrom tfx.components.example_validator.component import ExampleValidator\nfrom tfx.components.pusher.component import Pusher\nfrom tfx.components.schema_gen.component import SchemaGen\nfrom tfx.components.statistics_gen.component import StatisticsGen\nfrom tfx.components.trainer.component import Trainer\nfrom tfx.components.transform.component import Transform\nfrom tfx.orchestration import data_types\nfrom tfx.orchestration import pipeline\nfrom tfx.orchestration.kubeflow import kubeflow_dag_runner\nfrom tfx.utils.dsl_utils import external_input\nfrom tfx.proto import pusher_pb2\nfrom tfx.proto import trainer_pb2\n\n# Define pipeline params used for pipeline execution.\n# Path to the module file, should be a GCS path,\n# or a module file baked in the docker image used by the pipeline.\n_taxi_module_file_param = data_types.RuntimeParameter(\n name='module-file',\n default='/tfx-src/tfx/examples/chicago_taxi_pipeline/taxi_utils.py',\n ptype=Text,\n)\n\n# Path to the CSV data file, under which their should be a data.csv file.\n_data_root_param = data_types.RuntimeParameter(\n name='data-root',\n 
default='gs://ml-pipeline/sample-data/chicago-taxi/data',\n ptype=Text,\n)\n\n# Path of pipeline root, should be a GCS path.\npipeline_root = os.path.join(\n 'gs://{{kfp-default-bucket}}', 'tfx_taxi_simple', kfp.dsl.RUN_ID_PLACEHOLDER\n)\n\n\ndef _create_pipeline(\n pipeline_root: Text, csv_input_location: data_types.RuntimeParameter,\n taxi_module_file: data_types.RuntimeParameter, enable_cache: bool\n):\n \"\"\"Creates a simple Kubeflow-based Chicago Taxi TFX pipeline.\n\n Args:\n pipeline_root: The root of the pipeline output.\n csv_input_location: The location of the input data directory.\n taxi_module_file: The location of the module file for Transform/Trainer.\n enable_cache: Whether to enable cache or not.\n\n Returns:\n A logical TFX pipeline.Pipeline object.\n \"\"\"\n examples = external_input(csv_input_location)\n\n example_gen = CsvExampleGen(input=examples)\n statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])\n infer_schema = SchemaGen(\n statistics=statistics_gen.outputs['statistics'],\n infer_feature_shape=False,\n )\n validate_stats = ExampleValidator(\n statistics=statistics_gen.outputs['statistics'],\n schema=infer_schema.outputs['schema'],\n )\n transform = Transform(\n examples=example_gen.outputs['examples'],\n schema=infer_schema.outputs['schema'],\n module_file=taxi_module_file,\n )\n trainer = Trainer(\n module_file=taxi_module_file,\n transformed_examples=transform.outputs['transformed_examples'],\n schema=infer_schema.outputs['schema'],\n transform_graph=transform.outputs['transform_graph'],\n train_args=trainer_pb2.TrainArgs(num_steps=10),\n eval_args=trainer_pb2.EvalArgs(num_steps=5),\n )\n # Set the TFMA config for Model Evaluation and Validation.\n eval_config = tfma.EvalConfig(\n model_specs=[\n # Using signature 'eval' implies the use of an EvalSavedModel. To use\n # a serving model remove the signature to defaults to 'serving_default'\n # and add a label_key.\n tfma.ModelSpec(signature_name='eval')\n ],\n metrics_specs=[\n tfma.MetricsSpec(\n # The metrics added here are in addition to those saved with the\n # model (assuming either a keras model or EvalSavedModel is used).\n # Any metrics added into the saved model (for example using\n # model.compile(..., metrics=[...]), etc) will be computed\n # automatically.\n metrics=[tfma.MetricConfig(class_name='ExampleCount')],\n # To add validation thresholds for metrics saved with the model,\n # add them keyed by metric name to the thresholds map.\n thresholds={\n 'binary_accuracy':\n tfma.MetricThreshold(\n value_threshold=tfma.GenericValueThreshold(\n lower_bound={'value': 0.5}\n ),\n change_threshold=tfma.GenericChangeThreshold(\n direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n absolute={'value': -1e-10}\n )\n )\n }\n )\n ],\n slicing_specs=[\n # An empty slice spec means the overall slice, i.e. the whole dataset.\n tfma.SlicingSpec(),\n # Data can be sliced along a feature column. 
In this case, data is\n # sliced along feature column trip_start_hour.\n tfma.SlicingSpec(feature_keys=['trip_start_hour'])\n ]\n )\n\n model_analyzer = Evaluator(\n examples=example_gen.outputs['examples'],\n model=trainer.outputs['model'],\n eval_config=eval_config,\n )\n\n pusher = Pusher(\n model=trainer.outputs['model'],\n model_blessing=model_analyzer.outputs['blessing'],\n push_destination=pusher_pb2.PushDestination(\n filesystem=pusher_pb2.PushDestination.Filesystem(\n base_directory=os.path.\n join(str(pipeline.ROOT_PARAMETER), 'model_serving')\n )\n ),\n )\n\n return pipeline.Pipeline(\n pipeline_name='parameterized_tfx_oss',\n pipeline_root=pipeline_root,\n components=[\n example_gen, statistics_gen, infer_schema, validate_stats, transform,\n trainer, model_analyzer, pusher\n ],\n enable_cache=enable_cache,\n )\n\n\nif __name__ == '__main__':\n enable_cache = True\n pipeline = _create_pipeline(\n pipeline_root,\n _data_root_param,\n _taxi_module_file_param,\n enable_cache=enable_cache,\n )\n # Make sure the version of TFX image used is consistent with the version of\n # TFX SDK.\n config = kubeflow_dag_runner.KubeflowDagRunnerConfig(\n kubeflow_metadata_config=kubeflow_dag_runner.\n get_default_kubeflow_metadata_config(),\n tfx_image='gcr.io/tfx-oss-public/tfx:0.22.0',\n )\n kfp_runner = kubeflow_dag_runner.KubeflowDagRunner(\n output_filename=__file__ + '.yaml', config=config\n )\n\n kfp_runner.run(pipeline)\n", "path": "samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py"}]} | 2,706 | 320 |
gh_patches_debug_2454 | rasdani/github-patches | git_diff | mkdocs__mkdocs-904 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error while executing gh-deploy
I've successfully deployed a MkDocs site using the gh-deploy command. When I try to deploy some additional changes to my master branch, I get the following error:
```
c:\docs>mkdocs gh-deploy --clean
INFO - Cleaning site directory
INFO - Building documentation to directory: c:\docs\site
INFO - Copying 'c:\docs\site' to 'gh-pages' branch and pushing to GitHub.
Traceback (most recent call last):
File "C:\Python34\lib\runpy.py", line 170, in _run_module_as_main
"__main__", mod_spec)
File "C:\Python34\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Python34\Scripts\mkdocs.exe\__main__.py", line 9, in <module>
File "C:\Python34\lib\site-packages\click\core.py", line 664, in __call__
return self.main(*args, **kwargs)
File "C:\Python34\lib\site-packages\click\core.py", line 644, in main
rv = self.invoke(ctx)
File "C:\Python34\lib\site-packages\click\core.py", line 991, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Python34\lib\site-packages\click\core.py", line 837, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Python34\lib\site-packages\click\core.py", line 464, in invoke
return callback(*args, **kwargs)
File "C:\Python34\lib\site-packages\mkdocs\cli.py", line 186, in gh_deploy_command
gh_deploy.gh_deploy(config, message=message)
File "C:\Python34\lib\site-packages\mkdocs\gh_deploy.py", line 69, in gh_deploy
remote_branch)
File "C:\Python34\lib\site-packages\mkdocs\utils\ghp_import.py", line 163, in ghp_import
if not try_rebase(remote, branch):
File "C:\Python34\lib\site-packages\mkdocs\utils\ghp_import.py", line 78, in try_rebase
if sp.call(cmd) != 0:
File "C:\Python34\lib\subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Python34\lib\subprocess.py", line 859, in __init__
restore_signals, start_new_session)
File "C:\Python34\lib\subprocess.py", line 1086, in _execute_child
args = list2cmdline(args)
File "C:\Python34\lib\subprocess.py", line 663, in list2cmdline
needquote = (" " in arg) or ("\t" in arg) or not arg
TypeError: 'str' does not support the buffer interface
```
</issue>
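The traceback bottoms out in `subprocess.list2cmdline`, which on Python 3 cannot handle a `bytes` element mixed into a list of `str` arguments. A toy reproduction of the failure mode — hypothetical, not the project's code — looks like this:

```
import subprocess as sp

rev = b'2f1d9c3\n'  # git rev-list output is read as bytes on Python 3
cmd = ['git', 'update-ref', 'refs/heads/gh-pages', rev.strip()]
# sp.call(cmd) raises TypeError on Windows because list2cmdline cannot
# join a bytes element with str elements; decoding the revision first
# avoids the crash:
cmd = ['git', 'update-ref', 'refs/heads/gh-pages', rev.strip().decode('utf-8')]
```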
<code>
[start of mkdocs/utils/ghp_import.py]
1 #! /usr/bin/env python
2 #
3 # This file is part of the ghp-import package released under
4 # the Tumbolia Public License.
5
6 # Tumbolia Public License
7
8 # Copyright 2013, Paul Davis <[email protected]>
9
10 # Copying and distribution of this file, with or without modification, are
11 # permitted in any medium without royalty provided the copyright notice and this
12 # notice are preserved.
13
14 # TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
15
16 # 0. opan saurce LOL
17
18 from __future__ import unicode_literals
19
20 import errno
21 import logging
22 import os
23 import subprocess as sp
24 import sys
25 import time
26 import unicodedata
27
28 log = logging.getLogger(__name__)
29
30
31 if sys.version_info[0] == 3:
32 def enc(text):
33 if isinstance(text, bytes):
34 return text
35 return text.encode()
36
37 def dec(text):
38 if isinstance(text, bytes):
39 return text.decode('utf-8')
40 return text
41
42 def write(pipe, data):
43 try:
44 pipe.stdin.write(data)
45 except IOError as e:
46 if e.errno != errno.EPIPE:
47 raise
48 else:
49 def enc(text):
50 if isinstance(text, unicode):
51 return text.encode('utf-8')
52 return text
53
54 def dec(text):
55 if isinstance(text, unicode):
56 return text
57 return text.decode('utf-8')
58
59 def write(pipe, data):
60 pipe.stdin.write(data)
61
62
63 def normalize_path(path):
64 # Fix unicode pathnames on OS X
65 # See: http://stackoverflow.com/a/5582439/44289
66 if sys.platform == "darwin":
67 return unicodedata.normalize("NFKC", dec(path))
68 return path
69
70
71 def try_rebase(remote, branch):
72 cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)]
73 p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
74 (rev, _) = p.communicate()
75 if p.wait() != 0:
76 return True
77 cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()]
78 if sp.call(cmd) != 0:
79 return False
80 return True
81
82
83 def get_config(key):
84 p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE)
85 (value, _) = p.communicate()
86 return value.decode('utf-8').strip()
87
88
89 def get_prev_commit(branch):
90 cmd = ['git', 'rev-list', '--max-count=1', branch, '--']
91 p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
92 (rev, _) = p.communicate()
93 if p.wait() != 0:
94 return None
95 return rev.decode('utf-8').strip()
96
97
98 def mk_when(timestamp=None):
99 if timestamp is None:
100 timestamp = int(time.time())
101 currtz = "%+05d" % (-1 * time.timezone / 36) # / 3600 * 100
102 return "%s %s" % (timestamp, currtz)
103
104
105 def start_commit(pipe, branch, message):
106 uname = dec(get_config("user.name"))
107 email = dec(get_config("user.email"))
108 write(pipe, enc('commit refs/heads/%s\n' % branch))
109 write(pipe, enc('committer %s <%s> %s\n' % (uname, email, mk_when())))
110 write(pipe, enc('data %d\n%s\n' % (len(message), message)))
111 head = get_prev_commit(branch)
112 if head:
113 write(pipe, enc('from %s\n' % head))
114 write(pipe, enc('deleteall\n'))
115
116
117 def add_file(pipe, srcpath, tgtpath):
118 with open(srcpath, "rb") as handle:
119 if os.access(srcpath, os.X_OK):
120 write(pipe, enc('M 100755 inline %s\n' % tgtpath))
121 else:
122 write(pipe, enc('M 100644 inline %s\n' % tgtpath))
123 data = handle.read()
124 write(pipe, enc('data %d\n' % len(data)))
125 write(pipe, enc(data))
126 write(pipe, enc('\n'))
127
128
129 def add_nojekyll(pipe):
130 write(pipe, enc('M 100644 inline .nojekyll\n'))
131 write(pipe, enc('data 0\n'))
132 write(pipe, enc('\n'))
133
134
135 def gitpath(fname):
136 norm = os.path.normpath(fname)
137 return "/".join(norm.split(os.path.sep))
138
139
140 def run_import(srcdir, branch, message, nojekyll):
141 cmd = ['git', 'fast-import', '--date-format=raw', '--quiet']
142 kwargs = {"stdin": sp.PIPE}
143 if sys.version_info >= (3, 2, 0):
144 kwargs["universal_newlines"] = False
145 pipe = sp.Popen(cmd, **kwargs)
146 start_commit(pipe, branch, message)
147 for path, _, fnames in os.walk(srcdir):
148 for fn in fnames:
149 fpath = os.path.join(path, fn)
150 fpath = normalize_path(fpath)
151 gpath = gitpath(os.path.relpath(fpath, start=srcdir))
152 add_file(pipe, fpath, gpath)
153 if nojekyll:
154 add_nojekyll(pipe)
155 write(pipe, enc('\n'))
156 pipe.stdin.close()
157 if pipe.wait() != 0:
158 sys.stdout.write(enc("Failed to process commit.\n"))
159
160
161 def ghp_import(directory, message, remote='origin', branch='gh-pages'):
162
163 if not try_rebase(remote, branch):
164 log.error("Failed to rebase %s branch.", branch)
165
166 nojekyll = True
167
168 run_import(directory, branch, message, nojekyll)
169
170 proc = sp.Popen(['git', 'push', remote, branch],
171 stdout=sp.PIPE, stderr=sp.PIPE)
172 proc.communicate()
173 return proc.wait() == 0
174
[end of mkdocs/utils/ghp_import.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/utils/ghp_import.py b/mkdocs/utils/ghp_import.py
--- a/mkdocs/utils/ghp_import.py
+++ b/mkdocs/utils/ghp_import.py
@@ -74,7 +74,7 @@
(rev, _) = p.communicate()
if p.wait() != 0:
return True
- cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()]
+ cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())]
if sp.call(cmd) != 0:
return False
return True
| {"golden_diff": "diff --git a/mkdocs/utils/ghp_import.py b/mkdocs/utils/ghp_import.py\n--- a/mkdocs/utils/ghp_import.py\n+++ b/mkdocs/utils/ghp_import.py\n@@ -74,7 +74,7 @@\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return True\n- cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()]\n+ cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())]\n if sp.call(cmd) != 0:\n return False\n return True\n", "issue": "Error while executing gh-deploy\nI've successfully deployed a MkDocs site using the gh-deploy command. When I try to deploy some additional changes to my master branch, I get the following error:\n\n```\nc:\\docs>mkdocs gh-deploy --clean\nINFO - Cleaning site directory\nINFO - Building documentation to directory: c:\\docs\\site\nINFO - Copying 'c:\\docs\\site' to 'gh-pages' branch and pushing to GitHub.\nTraceback (most recent call last):\n File \"C:\\Python34\\lib\\runpy.py\", line 170, in _run_module_as_main\n \"__main__\", mod_spec)\n File \"C:\\Python34\\lib\\runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"c:\\Python34\\Scripts\\mkdocs.exe\\__main__.py\", line 9, in <module>\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 664, in __call__\n return self.main(*args, **kwargs)\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 644, in main\n rv = self.invoke(ctx)\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 991, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 837, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"C:\\Python34\\lib\\site-packages\\click\\core.py\", line 464, in invoke\n return callback(*args, **kwargs)\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\cli.py\", line 186, in gh_deploy_command\n gh_deploy.gh_deploy(config, message=message)\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\gh_deploy.py\", line 69, in gh_deploy\n remote_branch)\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\utils\\ghp_import.py\", line 163, in ghp_import\n if not try_rebase(remote, branch):\n File \"C:\\Python34\\lib\\site-packages\\mkdocs\\utils\\ghp_import.py\", line 78, in try_rebase\n if sp.call(cmd) != 0:\n File \"C:\\Python34\\lib\\subprocess.py\", line 537, in call\n with Popen(*popenargs, **kwargs) as p:\n File \"C:\\Python34\\lib\\subprocess.py\", line 859, in __init__\n restore_signals, start_new_session)\n File \"C:\\Python34\\lib\\subprocess.py\", line 1086, in _execute_child\n args = list2cmdline(args)\n File \"C:\\Python34\\lib\\subprocess.py\", line 663, in list2cmdline\n needquote = (\" \" in arg) or (\"\\t\" in arg) or not arg\nTypeError: 'str' does not support the buffer interface\n```\n\n", "before_files": [{"content": "#! /usr/bin/env python\n#\n# This file is part of the ghp-import package released under\n# the Tumbolia Public License.\n\n# Tumbolia Public License\n\n# Copyright 2013, Paul Davis <[email protected]>\n\n# Copying and distribution of this file, with or without modification, are\n# permitted in any medium without royalty provided the copyright notice and this\n# notice are preserved.\n\n# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n# 0. 
opan saurce LOL\n\nfrom __future__ import unicode_literals\n\nimport errno\nimport logging\nimport os\nimport subprocess as sp\nimport sys\nimport time\nimport unicodedata\n\nlog = logging.getLogger(__name__)\n\n\nif sys.version_info[0] == 3:\n def enc(text):\n if isinstance(text, bytes):\n return text\n return text.encode()\n\n def dec(text):\n if isinstance(text, bytes):\n return text.decode('utf-8')\n return text\n\n def write(pipe, data):\n try:\n pipe.stdin.write(data)\n except IOError as e:\n if e.errno != errno.EPIPE:\n raise\nelse:\n def enc(text):\n if isinstance(text, unicode):\n return text.encode('utf-8')\n return text\n\n def dec(text):\n if isinstance(text, unicode):\n return text\n return text.decode('utf-8')\n\n def write(pipe, data):\n pipe.stdin.write(data)\n\n\ndef normalize_path(path):\n # Fix unicode pathnames on OS X\n # See: http://stackoverflow.com/a/5582439/44289\n if sys.platform == \"darwin\":\n return unicodedata.normalize(\"NFKC\", dec(path))\n return path\n\n\ndef try_rebase(remote, branch):\n cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)]\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return True\n cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, rev.strip()]\n if sp.call(cmd) != 0:\n return False\n return True\n\n\ndef get_config(key):\n p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE)\n (value, _) = p.communicate()\n return value.decode('utf-8').strip()\n\n\ndef get_prev_commit(branch):\n cmd = ['git', 'rev-list', '--max-count=1', branch, '--']\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return None\n return rev.decode('utf-8').strip()\n\n\ndef mk_when(timestamp=None):\n if timestamp is None:\n timestamp = int(time.time())\n currtz = \"%+05d\" % (-1 * time.timezone / 36) # / 3600 * 100\n return \"%s %s\" % (timestamp, currtz)\n\n\ndef start_commit(pipe, branch, message):\n uname = dec(get_config(\"user.name\"))\n email = dec(get_config(\"user.email\"))\n write(pipe, enc('commit refs/heads/%s\\n' % branch))\n write(pipe, enc('committer %s <%s> %s\\n' % (uname, email, mk_when())))\n write(pipe, enc('data %d\\n%s\\n' % (len(message), message)))\n head = get_prev_commit(branch)\n if head:\n write(pipe, enc('from %s\\n' % head))\n write(pipe, enc('deleteall\\n'))\n\n\ndef add_file(pipe, srcpath, tgtpath):\n with open(srcpath, \"rb\") as handle:\n if os.access(srcpath, os.X_OK):\n write(pipe, enc('M 100755 inline %s\\n' % tgtpath))\n else:\n write(pipe, enc('M 100644 inline %s\\n' % tgtpath))\n data = handle.read()\n write(pipe, enc('data %d\\n' % len(data)))\n write(pipe, enc(data))\n write(pipe, enc('\\n'))\n\n\ndef add_nojekyll(pipe):\n write(pipe, enc('M 100644 inline .nojekyll\\n'))\n write(pipe, enc('data 0\\n'))\n write(pipe, enc('\\n'))\n\n\ndef gitpath(fname):\n norm = os.path.normpath(fname)\n return \"/\".join(norm.split(os.path.sep))\n\n\ndef run_import(srcdir, branch, message, nojekyll):\n cmd = ['git', 'fast-import', '--date-format=raw', '--quiet']\n kwargs = {\"stdin\": sp.PIPE}\n if sys.version_info >= (3, 2, 0):\n kwargs[\"universal_newlines\"] = False\n pipe = sp.Popen(cmd, **kwargs)\n start_commit(pipe, branch, message)\n for path, _, fnames in os.walk(srcdir):\n for fn in fnames:\n fpath = os.path.join(path, fn)\n fpath = normalize_path(fpath)\n gpath = gitpath(os.path.relpath(fpath, start=srcdir))\n add_file(pipe, fpath, gpath)\n if nojekyll:\n 
add_nojekyll(pipe)\n write(pipe, enc('\\n'))\n pipe.stdin.close()\n if pipe.wait() != 0:\n sys.stdout.write(enc(\"Failed to process commit.\\n\"))\n\n\ndef ghp_import(directory, message, remote='origin', branch='gh-pages'):\n\n if not try_rebase(remote, branch):\n log.error(\"Failed to rebase %s branch.\", branch)\n\n nojekyll = True\n\n run_import(directory, branch, message, nojekyll)\n\n proc = sp.Popen(['git', 'push', remote, branch],\n stdout=sp.PIPE, stderr=sp.PIPE)\n proc.communicate()\n return proc.wait() == 0\n", "path": "mkdocs/utils/ghp_import.py"}]} | 3,050 | 152 |
gh_patches_debug_17903 | rasdani/github-patches | git_diff | ipython__ipython-9820 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IPython 5 shell does not react to SIGQUIT (CTRL + \)
In previous IPython versions it was possible to terminate an IPython session quickly by sending a `SIGQUIT`, i.e., by pressing <kbd>CTRL</kbd>+<kbd> \ </kbd>. This is useful when an `embed()` call sits inside a loop with a large number of iterations. Pressing <kbd>CTRL</kbd>+<kbd>d</kbd> followed by <kbd>y</kbd> is not practical. Since `%kill_embedded` is currently also broken, there is no convenient way to terminate the process.
</issue>
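One low-impact way to restore the old behaviour would be a dedicated prompt_toolkit key binding for <kbd>CTRL</kbd>+<kbd> \ </kbd> that exits the process. The sketch below is illustrative only and assumes the installed prompt_toolkit exposes `Keys.ControlBackslash`:

```
import sys

from prompt_toolkit.keys import Keys


def force_exit(event):
    """Terminate the process immediately with a non-zero exit status."""
    sys.exit("Quit")

# Inside register_ipython_shortcuts(registry, shell):
#     registry.add_binding(Keys.ControlBackslash)(force_exit)
```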
<code>
[start of IPython/terminal/shortcuts.py]
1 import signal
2 import sys
3
4 from prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER
5 from prompt_toolkit.filters import (HasFocus, HasSelection, Condition,
6 ViInsertMode, EmacsInsertMode, HasCompletions)
7 from prompt_toolkit.filters.cli import ViMode
8 from prompt_toolkit.keys import Keys
9 from prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline
10
11 from IPython.utils.decorators import undoc
12
13 @Condition
14 def cursor_in_leading_ws(cli):
15 before = cli.application.buffer.document.current_line_before_cursor
16 return (not before) or before.isspace()
17
18 def register_ipython_shortcuts(registry, shell):
19 """Set up the prompt_toolkit keyboard shortcuts for IPython"""
20 insert_mode = ViInsertMode() | EmacsInsertMode()
21
22 # Ctrl+J == Enter, seemingly
23 registry.add_binding(Keys.ControlJ,
24 filter=(HasFocus(DEFAULT_BUFFER)
25 & ~HasSelection()
26 & insert_mode
27 ))(newline_or_execute_outer(shell))
28
29 registry.add_binding(Keys.ControlP,
30 filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)
31 ))(previous_history_or_previous_completion)
32
33 registry.add_binding(Keys.ControlN,
34 filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)
35 ))(next_history_or_next_completion)
36
37 registry.add_binding(Keys.ControlG,
38 filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions()
39 ))(dismiss_completion)
40
41 registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER)
42 )(reset_buffer)
43
44 registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER)
45 )(reset_search_buffer)
46
47 supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP'))
48 registry.add_binding(Keys.ControlZ, filter=supports_suspend
49 )(suspend_to_bg)
50
51 # Ctrl+I == Tab
52 registry.add_binding(Keys.ControlI,
53 filter=(HasFocus(DEFAULT_BUFFER)
54 & ~HasSelection()
55 & insert_mode
56 & cursor_in_leading_ws
57 ))(indent_buffer)
58
59 registry.add_binding(Keys.ControlO,
60 filter=(HasFocus(DEFAULT_BUFFER)
61 & EmacsInsertMode()))(newline_with_copy_margin)
62
63 if shell.display_completions == 'readlinelike':
64 registry.add_binding(Keys.ControlI,
65 filter=(HasFocus(DEFAULT_BUFFER)
66 & ~HasSelection()
67 & insert_mode
68 & ~cursor_in_leading_ws
69 ))(display_completions_like_readline)
70
71 if sys.platform == 'win32':
72 registry.add_binding(Keys.ControlV,
73 filter=(
74 HasFocus(
75 DEFAULT_BUFFER) & ~ViMode()
76 ))(win_paste)
77
78
79 def newline_or_execute_outer(shell):
80 def newline_or_execute(event):
81 """When the user presses return, insert a newline or execute the code."""
82 b = event.current_buffer
83 d = b.document
84
85 if b.complete_state:
86 cc = b.complete_state.current_completion
87 if cc:
88 b.apply_completion(cc)
89 else:
90 b.cancel_completion()
91 return
92
93 if not (d.on_last_line or d.cursor_position_row >= d.line_count
94 - d.empty_line_count_at_the_end()):
95 b.newline()
96 return
97
98 status, indent = shell.input_splitter.check_complete(d.text + '\n')
99
100 if (status != 'incomplete') and b.accept_action.is_returnable:
101 b.accept_action.validate_and_handle(event.cli, b)
102 else:
103 b.insert_text('\n' + (' ' * (indent or 0)))
104 return newline_or_execute
105
106
107 def previous_history_or_previous_completion(event):
108 """
109 Control-P in vi edit mode on readline is history next, unlike default prompt toolkit.
110
111 If completer is open this still select previous completion.
112 """
113 event.current_buffer.auto_up()
114
115
116 def next_history_or_next_completion(event):
117 """
118 Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit.
119
120 If completer is open this still select next completion.
121 """
122 event.current_buffer.auto_down()
123
124
125 def dismiss_completion(event):
126 b = event.current_buffer
127 if b.complete_state:
128 b.cancel_completion()
129
130
131 def reset_buffer(event):
132 b = event.current_buffer
133 if b.complete_state:
134 b.cancel_completion()
135 else:
136 b.reset()
137
138
139 def reset_search_buffer(event):
140 if event.current_buffer.document.text:
141 event.current_buffer.reset()
142 else:
143 event.cli.push_focus(DEFAULT_BUFFER)
144
145 def suspend_to_bg(event):
146 event.cli.suspend_to_background()
147
148 def indent_buffer(event):
149 event.current_buffer.insert_text(' ' * 4)
150
151 def newline_with_copy_margin(event):
152 """
153 Preserve margin and cursor position when using
154 Control-O to insert a newline in EMACS mode
155 """
156 b = event.current_buffer
157 cursor_start_pos = b.document.cursor_position_col
158 b.newline(copy_margin=True)
159 b.cursor_up(count=1)
160 cursor_end_pos = b.document.cursor_position_col
161 if cursor_start_pos != cursor_end_pos:
162 pos_diff = cursor_start_pos - cursor_end_pos
163 b.cursor_right(count=pos_diff)
164
165
166
167
168 if sys.platform == 'win32':
169 from IPython.core.error import TryNext
170 from IPython.lib.clipboard import (ClipboardEmpty,
171 win32_clipboard_get,
172 tkinter_clipboard_get)
173
174 @undoc
175 def win_paste(event):
176 try:
177 text = win32_clipboard_get()
178 except TryNext:
179 try:
180 text = tkinter_clipboard_get()
181 except (TryNext, ClipboardEmpty):
182 return
183 except ClipboardEmpty:
184 return
185 event.current_buffer.insert_text(text.replace('\t', ' ' * 4))
186
[end of IPython/terminal/shortcuts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/terminal/shortcuts.py b/IPython/terminal/shortcuts.py
--- a/IPython/terminal/shortcuts.py
+++ b/IPython/terminal/shortcuts.py
@@ -26,6 +26,8 @@
& insert_mode
))(newline_or_execute_outer(shell))
+ registry.add_binding(Keys.ControlBackslash)(force_exit)
+
registry.add_binding(Keys.ControlP,
filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)
))(previous_history_or_previous_completion)
@@ -145,6 +147,12 @@
def suspend_to_bg(event):
event.cli.suspend_to_background()
+def force_exit(event):
+ """
+ Force exit (with a non-zero return value)
+ """
+ sys.exit("Quit")
+
def indent_buffer(event):
event.current_buffer.insert_text(' ' * 4)
| {"golden_diff": "diff --git a/IPython/terminal/shortcuts.py b/IPython/terminal/shortcuts.py\n--- a/IPython/terminal/shortcuts.py\n+++ b/IPython/terminal/shortcuts.py\n@@ -26,6 +26,8 @@\n & insert_mode\n ))(newline_or_execute_outer(shell))\n \n+ registry.add_binding(Keys.ControlBackslash)(force_exit)\n+\n registry.add_binding(Keys.ControlP,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(previous_history_or_previous_completion)\n@@ -145,6 +147,12 @@\n def suspend_to_bg(event):\n event.cli.suspend_to_background()\n \n+def force_exit(event):\n+ \"\"\"\n+ Force exit (with a non-zero return value)\n+ \"\"\"\n+ sys.exit(\"Quit\")\n+\n def indent_buffer(event):\n event.current_buffer.insert_text(' ' * 4)\n", "issue": "IPython 5 shell does not react to SIGQUIT (CTRL + \\)\nIn previous IPython versions it was possible to terminate an IPython session quickly by sending an `SIGQUIT`, i.e., by pressing <kbd>CTRL</kbd>+<kbd> \\ </kbd>. This is useful when having an `embed()` in a loop with a large number of iterations. Pressing <kbd>CTRL</kbd>+<kbd>d</kbd> followed by <kbd>y</kbd> is not practical. Since `%kill_embedded` is currently also broken there is no convenient way to terminate the process.\n\n", "before_files": [{"content": "import signal\nimport sys\n\nfrom prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER\nfrom prompt_toolkit.filters import (HasFocus, HasSelection, Condition,\n ViInsertMode, EmacsInsertMode, HasCompletions)\nfrom prompt_toolkit.filters.cli import ViMode\nfrom prompt_toolkit.keys import Keys\nfrom prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline\n\nfrom IPython.utils.decorators import undoc\n\n@Condition\ndef cursor_in_leading_ws(cli):\n before = cli.application.buffer.document.current_line_before_cursor\n return (not before) or before.isspace()\n\ndef register_ipython_shortcuts(registry, shell):\n \"\"\"Set up the prompt_toolkit keyboard shortcuts for IPython\"\"\"\n insert_mode = ViInsertMode() | EmacsInsertMode()\n\n # Ctrl+J == Enter, seemingly\n registry.add_binding(Keys.ControlJ,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n ))(newline_or_execute_outer(shell))\n\n registry.add_binding(Keys.ControlP,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(previous_history_or_previous_completion)\n\n registry.add_binding(Keys.ControlN,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(next_history_or_next_completion)\n\n registry.add_binding(Keys.ControlG,\n filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions()\n ))(dismiss_completion)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER)\n )(reset_buffer)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER)\n )(reset_search_buffer)\n\n supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP'))\n registry.add_binding(Keys.ControlZ, filter=supports_suspend\n )(suspend_to_bg)\n\n # Ctrl+I == Tab\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & cursor_in_leading_ws\n ))(indent_buffer)\n\n registry.add_binding(Keys.ControlO,\n filter=(HasFocus(DEFAULT_BUFFER)\n & EmacsInsertMode()))(newline_with_copy_margin)\n\n if shell.display_completions == 'readlinelike':\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & ~cursor_in_leading_ws\n ))(display_completions_like_readline)\n\n if sys.platform == 'win32':\n registry.add_binding(Keys.ControlV,\n filter=(\n HasFocus(\n 
DEFAULT_BUFFER) & ~ViMode()\n ))(win_paste)\n\n\ndef newline_or_execute_outer(shell):\n def newline_or_execute(event):\n \"\"\"When the user presses return, insert a newline or execute the code.\"\"\"\n b = event.current_buffer\n d = b.document\n\n if b.complete_state:\n cc = b.complete_state.current_completion\n if cc:\n b.apply_completion(cc)\n else:\n b.cancel_completion()\n return\n\n if not (d.on_last_line or d.cursor_position_row >= d.line_count\n - d.empty_line_count_at_the_end()):\n b.newline()\n return\n\n status, indent = shell.input_splitter.check_complete(d.text + '\\n')\n\n if (status != 'incomplete') and b.accept_action.is_returnable:\n b.accept_action.validate_and_handle(event.cli, b)\n else:\n b.insert_text('\\n' + (' ' * (indent or 0)))\n return newline_or_execute\n\n\ndef previous_history_or_previous_completion(event):\n \"\"\"\n Control-P in vi edit mode on readline is history next, unlike default prompt toolkit.\n\n If completer is open this still select previous completion.\n \"\"\"\n event.current_buffer.auto_up()\n\n\ndef next_history_or_next_completion(event):\n \"\"\"\n Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit.\n\n If completer is open this still select next completion.\n \"\"\"\n event.current_buffer.auto_down()\n\n\ndef dismiss_completion(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n\n\ndef reset_buffer(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n else:\n b.reset()\n\n\ndef reset_search_buffer(event):\n if event.current_buffer.document.text:\n event.current_buffer.reset()\n else:\n event.cli.push_focus(DEFAULT_BUFFER)\n\ndef suspend_to_bg(event):\n event.cli.suspend_to_background()\n\ndef indent_buffer(event):\n event.current_buffer.insert_text(' ' * 4)\n\ndef newline_with_copy_margin(event):\n \"\"\"\n Preserve margin and cursor position when using\n Control-O to insert a newline in EMACS mode\n \"\"\"\n b = event.current_buffer\n cursor_start_pos = b.document.cursor_position_col\n b.newline(copy_margin=True)\n b.cursor_up(count=1)\n cursor_end_pos = b.document.cursor_position_col\n if cursor_start_pos != cursor_end_pos:\n pos_diff = cursor_start_pos - cursor_end_pos\n b.cursor_right(count=pos_diff)\n\n\n\n\nif sys.platform == 'win32':\n from IPython.core.error import TryNext\n from IPython.lib.clipboard import (ClipboardEmpty,\n win32_clipboard_get,\n tkinter_clipboard_get)\n\n @undoc\n def win_paste(event):\n try:\n text = win32_clipboard_get()\n except TryNext:\n try:\n text = tkinter_clipboard_get()\n except (TryNext, ClipboardEmpty):\n return\n except ClipboardEmpty:\n return\n event.current_buffer.insert_text(text.replace('\\t', ' ' * 4))\n", "path": "IPython/terminal/shortcuts.py"}]} | 2,342 | 194 |
gh_patches_debug_20625 | rasdani/github-patches | git_diff | boto__botocore-66 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix output in multi result pagination (build_full_result)
Because we use izip_longest you can get a response like this:
```
{"CommonPrefixes": [null, null, null, null],
"Content": [{...}, {...}, {...}, {...}
}
```
Really, if the value is null we shouldn't add it to the list. Then our response _should_ look like:
```
{"CommonPrefixes": [],
"Content": [{...}, {...}, {...}, {...}
}
```
</issue>
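Editor's note: the sketch below is not part of the original report. It is a minimal, standalone illustration of the filtering the reporter is asking for, namely skipping the `None` padding that `zip_longest` produces instead of appending it. The function and its `iterators`/`key_names` parameters are simplified stand-ins for the `build_full_result` method on botocore's `PageIterator`.

```
from itertools import zip_longest


def build_full_result(iterators, key_names):
    # Start every result key with an empty list so keys that yield no
    # values come back as [] instead of [None, None, ...].
    response = {key: [] for key in key_names}
    for vals in zip_longest(*iterators):
        for key, val in zip(key_names, vals):
            # zip_longest pads the shorter iterators with None; drop the padding.
            if val is not None:
                response[key].append(val)
    return response
```

With four `Content` items and no `CommonPrefixes`, this yields `{"CommonPrefixes": [], "Content": [...]}` rather than a list of `None` placeholders.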
<code>
[start of botocore/paginate.py]
1 # Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the
5 # "Software"), to deal in the Software without restriction, including
6 # without limitation the rights to use, copy, modify, merge, publish, dis-
7 # tribute, sublicense, and/or sell copies of the Software, and to permit
8 # persons to whom the Software is furnished to do so, subject to the fol-
9 # lowing conditions:
10 #
11 # The above copyright notice and this permission notice shall be included
12 # in all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
15 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
16 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
17 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
20 # IN THE SOFTWARE.
21 #
22 from itertools import tee
23 from collections import defaultdict
24 try:
25 from itertools import zip_longest
26 except ImportError:
27 # Python2.x is izip_longest.
28 from itertools import izip_longest as zip_longest
29
30 try:
31 zip
32 except NameError:
33 # Python2.x is izip.
34 from itertools import izip as zip
35
36 import jmespath
37 from botocore.exceptions import PaginationError
38
39
40 class Paginator(object):
41 def __init__(self, operation):
42 self._operation = operation
43 self._pagination_cfg = operation.pagination
44 self._output_token = self._get_output_tokens(self._pagination_cfg)
45 self._input_token = self._get_input_tokens(self._pagination_cfg)
46 self._more_results = self._get_more_results_token(self._pagination_cfg)
47 self._result_key = self._get_result_key(self._pagination_cfg)
48
49 def _get_output_tokens(self, config):
50 output = []
51 output_token = config['output_token']
52 if not isinstance(output_token, list):
53 output_token = [output_token]
54 for config in output_token:
55 output.append(jmespath.compile(config))
56 return output
57
58 def _get_input_tokens(self, config):
59 input_token = self._pagination_cfg['py_input_token']
60 if not isinstance(input_token, list):
61 input_token = [input_token]
62 return input_token
63
64 def _get_more_results_token(self, config):
65 more_results = config.get('more_results')
66 if more_results is not None:
67 return jmespath.compile(more_results)
68
69 def _get_result_key(self, config):
70 result_key = config.get('result_key')
71 if result_key is not None:
72 if not isinstance(result_key, list):
73 result_key = [result_key]
74 return result_key
75
76 def paginate(self, endpoint, **kwargs):
77 """Paginate responses to an operation.
78
79 The responses to some operations are too large for a single response.
80 When this happens, the service will indicate that there are more
81 results in its response. This method handles the details of how
82 to detect when this happens and how to retrieve more results.
83
84 This method returns an iterator. Each element in the iterator
85 is the result of an ``Operation.call`` call, so each element is
86 a tuple of (``http_response``, ``parsed_result``).
87
88 """
89 return PageIterator(self._operation, self._input_token,
90 self._output_token, self._more_results,
91 self._result_key, endpoint, kwargs)
92
93
94
95 class PageIterator(object):
96 def __init__(self, operation, input_token, output_token, more_results,
97 result_key, endpoint, op_kwargs):
98 self._operation = operation
99 self._input_token = input_token
100 self._output_token = output_token
101 self._more_results = more_results
102 self._result_key = result_key
103 self._endpoint = endpoint
104 self._op_kwargs = op_kwargs
105 self._http_responses = []
106
107 @property
108 def http_responses(self):
109 return self._http_responses
110
111 def __iter__(self):
112 current_kwargs = self._op_kwargs
113 endpoint = self._endpoint
114 previous_next_token = None
115 while True:
116 http_response, parsed = self._operation.call(endpoint,
117 **current_kwargs)
118 self._http_responses.append(http_response)
119 yield http_response, parsed
120 next_token = self._get_next_token(parsed)
121 if all(t is None for t in next_token):
122 break
123 if previous_next_token is not None and \
124 previous_next_token == next_token:
125 message = ("The same next token was received "
126 "twice: %s" % next_token)
127 raise PaginationError(message=message)
128 for name, token in zip(self._input_token, next_token):
129 current_kwargs[name] = token
130 previous_next_token = next_token
131
132 def _get_next_token(self, parsed):
133 if self._more_results is not None:
134 if not self._more_results.search(parsed):
135 return [None]
136 next_tokens = []
137 for token in self._output_token:
138 next_tokens.append(token.search(parsed))
139 return next_tokens
140
141 def result_key_iters(self):
142 teed_results = tee(self, len(self._result_key))
143 return [ResultKeyIterator(i, result_key) for i, result_key
144 in zip(teed_results, self._result_key)]
145
146 def build_full_result(self):
147 iterators = self.result_key_iters()
148 if len(iterators) > 1:
149 response = defaultdict(list)
150 key_names = [i.result_key for i in iterators]
151 for vals in zip_longest(*iterators):
152 for k, val in zip(key_names, vals):
153 response[k].append(val)
154 else:
155 response = list(iterators[0])
156 return response
157
158
159 class ResultKeyIterator(object):
160 """Iterates over the results of paginated responses."""
161 def __init__(self, pages_iterator, result_key):
162 self._pages_iterator = pages_iterator
163 self.result_key = result_key
164
165 def __iter__(self):
166 for _, page in self._pages_iterator:
167 for result in page.get(self.result_key, []):
168 yield result
169
[end of botocore/paginate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/botocore/paginate.py b/botocore/paginate.py
--- a/botocore/paginate.py
+++ b/botocore/paginate.py
@@ -20,7 +20,6 @@
# IN THE SOFTWARE.
#
from itertools import tee
-from collections import defaultdict
try:
from itertools import zip_longest
except ImportError:
@@ -146,11 +145,14 @@
def build_full_result(self):
iterators = self.result_key_iters()
if len(iterators) > 1:
- response = defaultdict(list)
+ response = {}
key_names = [i.result_key for i in iterators]
+ for key in key_names:
+ response[key] = []
for vals in zip_longest(*iterators):
for k, val in zip(key_names, vals):
- response[k].append(val)
+ if val is not None:
+ response[k].append(val)
else:
response = list(iterators[0])
return response
| {"golden_diff": "diff --git a/botocore/paginate.py b/botocore/paginate.py\n--- a/botocore/paginate.py\n+++ b/botocore/paginate.py\n@@ -20,7 +20,6 @@\n # IN THE SOFTWARE.\n #\n from itertools import tee\n-from collections import defaultdict\n try:\n from itertools import zip_longest\n except ImportError:\n@@ -146,11 +145,14 @@\n def build_full_result(self):\n iterators = self.result_key_iters()\n if len(iterators) > 1:\n- response = defaultdict(list)\n+ response = {}\n key_names = [i.result_key for i in iterators]\n+ for key in key_names:\n+ response[key] = []\n for vals in zip_longest(*iterators):\n for k, val in zip(key_names, vals):\n- response[k].append(val)\n+ if val is not None:\n+ response[k].append(val)\n else:\n response = list(iterators[0])\n return response\n", "issue": "Fix output in multi result pagination (build_full_result)\nBecause we use izip_longest you can get a response like this:\n\n```\n{\"CommonPrefixes\": [null, null, null, null],\n \"Content\": [{...}, {...}, {...}, {...}\n}\n```\n\nWhen really if the null we shouldn't add it to the list. Then our response _should_ look like:\n\n```\n{\"CommonPrefixes\": [],\n \"Content\": [{...}, {...}, {...}, {...}\n}\n```\n\n", "before_files": [{"content": "# Copyright (c) 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nfrom itertools import tee\nfrom collections import defaultdict\ntry:\n from itertools import zip_longest\nexcept ImportError:\n # Python2.x is izip_longest.\n from itertools import izip_longest as zip_longest\n\ntry:\n zip\nexcept NameError:\n # Python2.x is izip.\n from itertools import izip as zip\n\nimport jmespath\nfrom botocore.exceptions import PaginationError\n\n\nclass Paginator(object):\n def __init__(self, operation):\n self._operation = operation\n self._pagination_cfg = operation.pagination\n self._output_token = self._get_output_tokens(self._pagination_cfg)\n self._input_token = self._get_input_tokens(self._pagination_cfg)\n self._more_results = self._get_more_results_token(self._pagination_cfg)\n self._result_key = self._get_result_key(self._pagination_cfg)\n\n def _get_output_tokens(self, config):\n output = []\n output_token = config['output_token']\n if not isinstance(output_token, list):\n output_token = [output_token]\n for config in output_token:\n output.append(jmespath.compile(config))\n return output\n\n def _get_input_tokens(self, config):\n input_token = self._pagination_cfg['py_input_token']\n if not isinstance(input_token, list):\n input_token = [input_token]\n return input_token\n\n def _get_more_results_token(self, config):\n more_results = config.get('more_results')\n if more_results is not None:\n return jmespath.compile(more_results)\n\n def _get_result_key(self, config):\n result_key = config.get('result_key')\n if result_key is not None:\n if not isinstance(result_key, list):\n result_key = [result_key]\n return result_key\n\n def paginate(self, endpoint, **kwargs):\n \"\"\"Paginate responses to an operation.\n\n The responses to some operations are too large for a single response.\n When this happens, the service will indicate that there are more\n results in its response. This method handles the details of how\n to detect when this happens and how to retrieve more results.\n\n This method returns an iterator. 
Each element in the iterator\n is the result of an ``Operation.call`` call, so each element is\n a tuple of (``http_response``, ``parsed_result``).\n\n \"\"\"\n return PageIterator(self._operation, self._input_token,\n self._output_token, self._more_results,\n self._result_key, endpoint, kwargs)\n\n\n\nclass PageIterator(object):\n def __init__(self, operation, input_token, output_token, more_results,\n result_key, endpoint, op_kwargs):\n self._operation = operation\n self._input_token = input_token\n self._output_token = output_token\n self._more_results = more_results\n self._result_key = result_key\n self._endpoint = endpoint\n self._op_kwargs = op_kwargs\n self._http_responses = []\n\n @property\n def http_responses(self):\n return self._http_responses\n\n def __iter__(self):\n current_kwargs = self._op_kwargs\n endpoint = self._endpoint\n previous_next_token = None\n while True:\n http_response, parsed = self._operation.call(endpoint,\n **current_kwargs)\n self._http_responses.append(http_response)\n yield http_response, parsed\n next_token = self._get_next_token(parsed)\n if all(t is None for t in next_token):\n break\n if previous_next_token is not None and \\\n previous_next_token == next_token:\n message = (\"The same next token was received \"\n \"twice: %s\" % next_token)\n raise PaginationError(message=message)\n for name, token in zip(self._input_token, next_token):\n current_kwargs[name] = token\n previous_next_token = next_token\n\n def _get_next_token(self, parsed):\n if self._more_results is not None:\n if not self._more_results.search(parsed):\n return [None]\n next_tokens = []\n for token in self._output_token:\n next_tokens.append(token.search(parsed))\n return next_tokens\n\n def result_key_iters(self):\n teed_results = tee(self, len(self._result_key))\n return [ResultKeyIterator(i, result_key) for i, result_key\n in zip(teed_results, self._result_key)]\n\n def build_full_result(self):\n iterators = self.result_key_iters()\n if len(iterators) > 1:\n response = defaultdict(list)\n key_names = [i.result_key for i in iterators]\n for vals in zip_longest(*iterators):\n for k, val in zip(key_names, vals):\n response[k].append(val)\n else:\n response = list(iterators[0])\n return response\n\n\nclass ResultKeyIterator(object):\n \"\"\"Iterates over the results of paginated responses.\"\"\"\n def __init__(self, pages_iterator, result_key):\n self._pages_iterator = pages_iterator\n self.result_key = result_key\n\n def __iter__(self):\n for _, page in self._pages_iterator:\n for result in page.get(self.result_key, []):\n yield result\n", "path": "botocore/paginate.py"}]} | 2,420 | 229 |
gh_patches_debug_27745 | rasdani/github-patches | git_diff | NVIDIA-Merlin__NVTabular-1357 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Test_s3 failing
```
with s3_context(s3_base=s3_base, bucket=engine, files=files) as s3fs:
# Create nvt.Dataset from mock s3 paths
url = f"s3://{engine}" if engine == "parquet" else f"s3://{engine}/*"
dataset = nvt.Dataset(url, engine=engine, storage_options=s3so)
# Check that the iteration API works
columns = mycols_pq if engine == "parquet" else mycols_csv
gdf = nvt.dispatch._concat(list(dataset.to_iter()))[columns]
assert_eq(gdf.reset_index(drop=True), df.reset_index(drop=True))
cat_names = ["name-cat", "name-string"] if engine == "parquet" else ["name-string"]
cont_names = ["x", "y", "id"]
label_name = ["label"]
conts = cont_names >> ops.FillMissing() >> ops.Clip(min_value=0) >> ops.LogOp()
cats = cat_names >> ops.Categorify(cat_cache="host")
processor = nvt.Workflow(conts + cats + label_name)
processor.fit(dataset)
# make sure we can write out the dataset back to S3
# (https://github.com/NVIDIA-Merlin/NVTabular/issues/1214)
> processor.transform(dataset).to_parquet(f"s3://{engine}/output")
/nvtabular/tests/unit/test_s3.py:111:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/nvtabular/nvtabular/io/dataset.py:906: in to_parquet
self.schema.write(output_path)
/nvtabular/nvtabular/graph/schema.py:154: in write
return PbTxt_SchemaWriter.write(self, schema_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'nvtabular.graph.schema_io.schema_writer_pbtxt.PbTxt_SchemaWriter'>
schema = [{'name': 'x', 'tags': [<Tags.CONTINUOUS: 'continuous'>], 'properties': {}, 'dtype': <class 'float'>, '_is_list': Fals...int'>, '_is_list': False}, {'name': 'label', 'tags': [], 'properties': {}, 'dtype': dtype('int64'), '_is_list': False}]
schema_path = PosixPath('s3:/csv/output')
@classmethod
def write(cls, schema, schema_path):
schema_path = Path(schema_path)
if not schema_path.is_dir():
> raise ValueError(f"The path provided is not a valid directory: {schema_path}")
E ValueError: The path provided is not a valid directory: s3:/csv/output
/nvtabular/nvtabular/graph/schema_io/schema_writer_pbtxt.py:45: ValueError
```
</issue>
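Editor's note: the snippet below is an illustration added for this entry, not code from the repository. The traceback above shows `PosixPath('s3:/csv/output')`, which is what happens when an `s3://` URL is forced through `pathlib`: the double slash collapses and `is_dir()` then fails on what looks like a nonexistent local path. A hedged sketch of the alternative, resolving the filesystem through `fsspec` so local directories and object stores both work, might look like this (the helper name `write_schema_text` and its arguments are invented for the example; the real method also serializes the protobuf schema before writing):

```
import fsspec


def write_schema_text(schema_text, schema_path):
    # Resolve the right filesystem (local, s3, gcs, ...) from the path itself.
    fs = fsspec.get_fs_token_paths(str(schema_path))[0]
    # Join with the filesystem's own separator and write through its open().
    with fs.open(fs.sep.join([str(schema_path), "schema.pbtxt"]), "w") as f:
        f.write(schema_text)
```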
<code>
[start of nvtabular/graph/schema_io/schema_writer_pbtxt.py]
1 #
2 # Copyright (c) 2021, NVIDIA CORPORATION.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 import os
17 from pathlib import Path
18
19 import numpy
20
21 os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
22 from google.protobuf import json_format, text_format # noqa
23 from google.protobuf.any_pb2 import Any # noqa
24 from google.protobuf.struct_pb2 import Struct # noqa
25 from tensorflow_metadata.proto.v0 import schema_pb2 # noqa
26
27 import nvtabular as nvt # noqa
28 from nvtabular.graph.schema_io.schema_writer_base import SchemaWriter # noqa
29 from nvtabular.graph.tags import Tags # noqa
30
31
32 class PbTxt_SchemaWriter(SchemaWriter):
33 @classmethod
34 def _read(cls, schema_path):
35 with open(schema_path, "r") as f:
36 schema = schema_pb2.Schema()
37 text_format.Parse(f.read(), schema)
38
39 return schema
40
41 @classmethod
42 def write(cls, schema, schema_path):
43 schema_path = Path(schema_path)
44 if not schema_path.is_dir():
45 raise ValueError(f"The path provided is not a valid directory: {schema_path}")
46
47 # traverse list of column schema
48 schema_file = schema_pb2.Schema()
49 features = []
50 for col_name, col_schema in schema.column_schemas.items():
51 features.append(create_protobuf_feature(col_schema))
52 schema_file.feature.extend(features)
53
54 with open(schema_path / "schema.pbtxt", "w") as f:
55 f.write(text_format.MessageToString(schema_file))
56 return schema
57
58 @classmethod
59 def load(cls, schema_path):
60 columns = []
61 if isinstance(schema_path, (str, Path)):
62 if isinstance(schema_path, str):
63 schema_path = Path(schema_path)
64 if schema_path.is_dir():
65 schema_path = schema_path / "schema.pbtxt"
66 schema = cls._read(schema_path)
67
68 for feat in schema.feature:
69 _is_list = False
70 dtype = None
71 properties = {}
72 tags = list(feat.annotation.tag) or []
73 # only one item should ever be in extra_metadata
74 if len(feat.annotation.extra_metadata) > 1:
75 raise ValueError(
76 f"{feat.name}: extra_metadata should have 1 item, has \
77 {len(feat.annotation.extra_metadata)}"
78 )
79 if feat.annotation.extra_metadata:
80 properties = json_format.MessageToDict(feat.annotation.extra_metadata[0])["value"]
81 # what domain
82 # load the domain values
83 shape_name = feat.WhichOneof("shape_type")
84 if shape_name:
85 _is_list = True
86 field_name = feat.WhichOneof("domain_info")
87 if field_name:
88 domain_values = getattr(feat, field_name)
89 # if zero no values were passed
90 if domain_values.max > 0:
91 properties["domain"] = {"min": domain_values.min, "max": domain_values.max}
92 if feat.type:
93 if feat.type == 2:
94 dtype = numpy.int
95 elif feat.type == 3:
96 dtype = numpy.float
97 columns.append(
98 nvt.ColumnSchema(
99 feat.name, tags=tags, properties=properties, dtype=dtype, _is_list=_is_list
100 )
101 )
102
103 return nvt.Schema(columns)
104
105
106 def register_extra_metadata(column_schema, feature):
107 filtered_properties = {k: v for k, v in column_schema.properties.items() if k != "domain"}
108 msg_struct = Struct()
109 # must pack message into "Any" type
110 any_pack = Any()
111 any_pack.Pack(json_format.ParseDict(filtered_properties, msg_struct))
112 # extra_metadata only takes type "Any" messages
113 feature.annotation.extra_metadata.add().CopyFrom(any_pack)
114 return feature
115
116
117 def register_list(column_schema, feature):
118 if str(column_schema._is_list):
119 min_length, max_length = None, None
120 if "value_count" in column_schema.properties:
121 min_length = column_schema.properties["value_count"]["min"]
122 max_length = column_schema.properties["value_count"]["max"]
123 if min_length and max_length and min_length == max_length:
124 shape = schema_pb2.FixedShape()
125 dim = shape.dim.add()
126 dim.size = min_length
127 feature.shape.CopyFrom(shape)
128 elif min_length and max_length and min_length < max_length:
129 feature.value_count.CopyFrom(schema_pb2.ValueCount(min=min_length, max=max_length))
130 else:
131 # if no min max available set dummy value, to signal this is list
132 feature.value_count.CopyFrom(schema_pb2.ValueCount(min=0, max=0))
133 return feature
134
135
136 def set_protobuf_float(column_schema, feature):
137 domain = column_schema.properties.get("domain", {})
138 feature.float_domain.CopyFrom(
139 schema_pb2.FloatDomain(
140 name=column_schema.name,
141 min=domain.get("min", None),
142 max=domain.get("max", None),
143 )
144 )
145 feature.type = schema_pb2.FeatureType.FLOAT
146 return feature
147
148
149 def set_protobuf_int(column_schema, feature):
150 domain = column_schema.properties.get("domain", {})
151 feature.int_domain.CopyFrom(
152 schema_pb2.IntDomain(
153 name=column_schema.name,
154 min=domain.get("min", None),
155 max=domain.get("max", None),
156 is_categorical=(
157 Tags.CATEGORICAL in column_schema.tags
158 or Tags.CATEGORICAL.value in column_schema.tags
159 ),
160 )
161 )
162 feature.type = schema_pb2.FeatureType.INT
163 return feature
164
165
166 def register_dtype(column_schema, feature):
167 # column_schema is a dict, changes are held
168 # TODO: this double check can be refactored
169 if column_schema.dtype:
170 if column_schema._is_list:
171 feature = proto_dict["list"](column_schema, feature)
172 if hasattr(column_schema.dtype, "kind"):
173 string_name = numpy.core._dtype._kind_name(column_schema.dtype)
174 elif hasattr(column_schema.dtype, "item"):
175 string_name = type(column_schema.dtype(1).item()).__name__
176 elif isinstance(column_schema.dtype, str):
177 string_name = column_schema.dtype
178 elif hasattr(column_schema.dtype, "__name__"):
179 string_name = column_schema.dtype.__name__
180 else:
181 raise TypeError(f"unsupported dtype for column schema: {column_schema.dtype}")
182
183 if string_name in proto_dict:
184 feature = proto_dict[string_name](column_schema, feature)
185 return feature
186
187
188 proto_dict = {
189 "list": register_list,
190 "float": set_protobuf_float,
191 "int": set_protobuf_int,
192 "uint": set_protobuf_int,
193 }
194
195
196 def create_protobuf_feature(column_schema):
197 feature = schema_pb2.Feature()
198 feature.name = column_schema.name
199 feature = register_dtype(column_schema, feature)
200 annotation = feature.annotation
201 annotation.tag.extend(
202 [tag.value if hasattr(tag, "value") else tag for tag in column_schema.tags]
203 )
204 # can be instantiated with no values
205 # if so, unnecessary to dump
206 # import pdb; pdb.set_trace()
207 if len(column_schema.properties) > 0:
208 feature = register_extra_metadata(column_schema, feature)
209 return feature
210
[end of nvtabular/graph/schema_io/schema_writer_pbtxt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvtabular/graph/schema_io/schema_writer_pbtxt.py b/nvtabular/graph/schema_io/schema_writer_pbtxt.py
--- a/nvtabular/graph/schema_io/schema_writer_pbtxt.py
+++ b/nvtabular/graph/schema_io/schema_writer_pbtxt.py
@@ -16,6 +16,7 @@
import os
from pathlib import Path
+import fsspec
import numpy
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
@@ -40,9 +41,7 @@
@classmethod
def write(cls, schema, schema_path):
- schema_path = Path(schema_path)
- if not schema_path.is_dir():
- raise ValueError(f"The path provided is not a valid directory: {schema_path}")
+ fs = fsspec.get_fs_token_paths(schema_path)[0]
# traverse list of column schema
schema_file = schema_pb2.Schema()
@@ -51,9 +50,16 @@
features.append(create_protobuf_feature(col_schema))
schema_file.feature.extend(features)
- with open(schema_path / "schema.pbtxt", "w") as f:
- f.write(text_format.MessageToString(schema_file))
- return schema
+ try:
+ with fs.open(fs.sep.join([str(schema_path), "schema.pbtxt"]), "w") as f:
+ f.write(text_format.MessageToString(schema_file))
+ return schema
+ except Exception as e:
+ if not fs.isdir(schema_path):
+ raise ValueError(
+ f"The path provided is not a valid directory: {schema_path}"
+ ) from e
+ raise
@classmethod
def load(cls, schema_path):
| {"golden_diff": "diff --git a/nvtabular/graph/schema_io/schema_writer_pbtxt.py b/nvtabular/graph/schema_io/schema_writer_pbtxt.py\n--- a/nvtabular/graph/schema_io/schema_writer_pbtxt.py\n+++ b/nvtabular/graph/schema_io/schema_writer_pbtxt.py\n@@ -16,6 +16,7 @@\n import os\n from pathlib import Path\n \n+import fsspec\n import numpy\n \n os.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n@@ -40,9 +41,7 @@\n \n @classmethod\n def write(cls, schema, schema_path):\n- schema_path = Path(schema_path)\n- if not schema_path.is_dir():\n- raise ValueError(f\"The path provided is not a valid directory: {schema_path}\")\n+ fs = fsspec.get_fs_token_paths(schema_path)[0]\n \n # traverse list of column schema\n schema_file = schema_pb2.Schema()\n@@ -51,9 +50,16 @@\n features.append(create_protobuf_feature(col_schema))\n schema_file.feature.extend(features)\n \n- with open(schema_path / \"schema.pbtxt\", \"w\") as f:\n- f.write(text_format.MessageToString(schema_file))\n- return schema\n+ try:\n+ with fs.open(fs.sep.join([str(schema_path), \"schema.pbtxt\"]), \"w\") as f:\n+ f.write(text_format.MessageToString(schema_file))\n+ return schema\n+ except Exception as e:\n+ if not fs.isdir(schema_path):\n+ raise ValueError(\n+ f\"The path provided is not a valid directory: {schema_path}\"\n+ ) from e\n+ raise\n \n @classmethod\n def load(cls, schema_path):\n", "issue": "[BUG] Test_s3 failing\n```\r\n with s3_context(s3_base=s3_base, bucket=engine, files=files) as s3fs:\r\n # Create nvt.Dataset from mock s3 paths\r\n url = f\"s3://{engine}\" if engine == \"parquet\" else f\"s3://{engine}/*\"\r\n dataset = nvt.Dataset(url, engine=engine, storage_options=s3so)\r\n \r\n # Check that the iteration API works\r\n columns = mycols_pq if engine == \"parquet\" else mycols_csv\r\n gdf = nvt.dispatch._concat(list(dataset.to_iter()))[columns]\r\n assert_eq(gdf.reset_index(drop=True), df.reset_index(drop=True))\r\n \r\n cat_names = [\"name-cat\", \"name-string\"] if engine == \"parquet\" else [\"name-string\"]\r\n cont_names = [\"x\", \"y\", \"id\"]\r\n label_name = [\"label\"]\r\n \r\n conts = cont_names >> ops.FillMissing() >> ops.Clip(min_value=0) >> ops.LogOp()\r\n cats = cat_names >> ops.Categorify(cat_cache=\"host\")\r\n \r\n processor = nvt.Workflow(conts + cats + label_name)\r\n processor.fit(dataset)\r\n \r\n # make sure we can write out the dataset back to S3\r\n # (https://github.com/NVIDIA-Merlin/NVTabular/issues/1214)\r\n> processor.transform(dataset).to_parquet(f\"s3://{engine}/output\")\r\n\r\n/nvtabular/tests/unit/test_s3.py:111: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/nvtabular/nvtabular/io/dataset.py:906: in to_parquet\r\n self.schema.write(output_path)\r\n/nvtabular/nvtabular/graph/schema.py:154: in write\r\n return PbTxt_SchemaWriter.write(self, schema_path)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\ncls = <class 'nvtabular.graph.schema_io.schema_writer_pbtxt.PbTxt_SchemaWriter'>\r\nschema = [{'name': 'x', 'tags': [<Tags.CONTINUOUS: 'continuous'>], 'properties': {}, 'dtype': <class 'float'>, '_is_list': Fals...int'>, '_is_list': False}, {'name': 'label', 'tags': [], 'properties': {}, 'dtype': dtype('int64'), '_is_list': False}]\r\nschema_path = PosixPath('s3:/csv/output')\r\n\r\n @classmethod\r\n def write(cls, schema, schema_path):\r\n schema_path = Path(schema_path)\r\n if not schema_path.is_dir():\r\n> raise ValueError(f\"The path provided is not a valid directory: {schema_path}\")\r\nE 
ValueError: The path provided is not a valid directory: s3:/csv/output\r\n\r\n/nvtabular/nvtabular/graph/schema_io/schema_writer_pbtxt.py:45: ValueError\r\n```\n", "before_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport os\nfrom pathlib import Path\n\nimport numpy\n\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\nfrom google.protobuf import json_format, text_format # noqa\nfrom google.protobuf.any_pb2 import Any # noqa\nfrom google.protobuf.struct_pb2 import Struct # noqa\nfrom tensorflow_metadata.proto.v0 import schema_pb2 # noqa\n\nimport nvtabular as nvt # noqa\nfrom nvtabular.graph.schema_io.schema_writer_base import SchemaWriter # noqa\nfrom nvtabular.graph.tags import Tags # noqa\n\n\nclass PbTxt_SchemaWriter(SchemaWriter):\n @classmethod\n def _read(cls, schema_path):\n with open(schema_path, \"r\") as f:\n schema = schema_pb2.Schema()\n text_format.Parse(f.read(), schema)\n\n return schema\n\n @classmethod\n def write(cls, schema, schema_path):\n schema_path = Path(schema_path)\n if not schema_path.is_dir():\n raise ValueError(f\"The path provided is not a valid directory: {schema_path}\")\n\n # traverse list of column schema\n schema_file = schema_pb2.Schema()\n features = []\n for col_name, col_schema in schema.column_schemas.items():\n features.append(create_protobuf_feature(col_schema))\n schema_file.feature.extend(features)\n\n with open(schema_path / \"schema.pbtxt\", \"w\") as f:\n f.write(text_format.MessageToString(schema_file))\n return schema\n\n @classmethod\n def load(cls, schema_path):\n columns = []\n if isinstance(schema_path, (str, Path)):\n if isinstance(schema_path, str):\n schema_path = Path(schema_path)\n if schema_path.is_dir():\n schema_path = schema_path / \"schema.pbtxt\"\n schema = cls._read(schema_path)\n\n for feat in schema.feature:\n _is_list = False\n dtype = None\n properties = {}\n tags = list(feat.annotation.tag) or []\n # only one item should ever be in extra_metadata\n if len(feat.annotation.extra_metadata) > 1:\n raise ValueError(\n f\"{feat.name}: extra_metadata should have 1 item, has \\\n {len(feat.annotation.extra_metadata)}\"\n )\n if feat.annotation.extra_metadata:\n properties = json_format.MessageToDict(feat.annotation.extra_metadata[0])[\"value\"]\n # what domain\n # load the domain values\n shape_name = feat.WhichOneof(\"shape_type\")\n if shape_name:\n _is_list = True\n field_name = feat.WhichOneof(\"domain_info\")\n if field_name:\n domain_values = getattr(feat, field_name)\n # if zero no values were passed\n if domain_values.max > 0:\n properties[\"domain\"] = {\"min\": domain_values.min, \"max\": domain_values.max}\n if feat.type:\n if feat.type == 2:\n dtype = numpy.int\n elif feat.type == 3:\n dtype = numpy.float\n columns.append(\n nvt.ColumnSchema(\n feat.name, tags=tags, properties=properties, dtype=dtype, _is_list=_is_list\n )\n )\n\n return nvt.Schema(columns)\n\n\ndef register_extra_metadata(column_schema, feature):\n 
filtered_properties = {k: v for k, v in column_schema.properties.items() if k != \"domain\"}\n msg_struct = Struct()\n # must pack message into \"Any\" type\n any_pack = Any()\n any_pack.Pack(json_format.ParseDict(filtered_properties, msg_struct))\n # extra_metadata only takes type \"Any\" messages\n feature.annotation.extra_metadata.add().CopyFrom(any_pack)\n return feature\n\n\ndef register_list(column_schema, feature):\n if str(column_schema._is_list):\n min_length, max_length = None, None\n if \"value_count\" in column_schema.properties:\n min_length = column_schema.properties[\"value_count\"][\"min\"]\n max_length = column_schema.properties[\"value_count\"][\"max\"]\n if min_length and max_length and min_length == max_length:\n shape = schema_pb2.FixedShape()\n dim = shape.dim.add()\n dim.size = min_length\n feature.shape.CopyFrom(shape)\n elif min_length and max_length and min_length < max_length:\n feature.value_count.CopyFrom(schema_pb2.ValueCount(min=min_length, max=max_length))\n else:\n # if no min max available set dummy value, to signal this is list\n feature.value_count.CopyFrom(schema_pb2.ValueCount(min=0, max=0))\n return feature\n\n\ndef set_protobuf_float(column_schema, feature):\n domain = column_schema.properties.get(\"domain\", {})\n feature.float_domain.CopyFrom(\n schema_pb2.FloatDomain(\n name=column_schema.name,\n min=domain.get(\"min\", None),\n max=domain.get(\"max\", None),\n )\n )\n feature.type = schema_pb2.FeatureType.FLOAT\n return feature\n\n\ndef set_protobuf_int(column_schema, feature):\n domain = column_schema.properties.get(\"domain\", {})\n feature.int_domain.CopyFrom(\n schema_pb2.IntDomain(\n name=column_schema.name,\n min=domain.get(\"min\", None),\n max=domain.get(\"max\", None),\n is_categorical=(\n Tags.CATEGORICAL in column_schema.tags\n or Tags.CATEGORICAL.value in column_schema.tags\n ),\n )\n )\n feature.type = schema_pb2.FeatureType.INT\n return feature\n\n\ndef register_dtype(column_schema, feature):\n # column_schema is a dict, changes are held\n # TODO: this double check can be refactored\n if column_schema.dtype:\n if column_schema._is_list:\n feature = proto_dict[\"list\"](column_schema, feature)\n if hasattr(column_schema.dtype, \"kind\"):\n string_name = numpy.core._dtype._kind_name(column_schema.dtype)\n elif hasattr(column_schema.dtype, \"item\"):\n string_name = type(column_schema.dtype(1).item()).__name__\n elif isinstance(column_schema.dtype, str):\n string_name = column_schema.dtype\n elif hasattr(column_schema.dtype, \"__name__\"):\n string_name = column_schema.dtype.__name__\n else:\n raise TypeError(f\"unsupported dtype for column schema: {column_schema.dtype}\")\n\n if string_name in proto_dict:\n feature = proto_dict[string_name](column_schema, feature)\n return feature\n\n\nproto_dict = {\n \"list\": register_list,\n \"float\": set_protobuf_float,\n \"int\": set_protobuf_int,\n \"uint\": set_protobuf_int,\n}\n\n\ndef create_protobuf_feature(column_schema):\n feature = schema_pb2.Feature()\n feature.name = column_schema.name\n feature = register_dtype(column_schema, feature)\n annotation = feature.annotation\n annotation.tag.extend(\n [tag.value if hasattr(tag, \"value\") else tag for tag in column_schema.tags]\n )\n # can be instantiated with no values\n # if so, unnecessary to dump\n # import pdb; pdb.set_trace()\n if len(column_schema.properties) > 0:\n feature = register_extra_metadata(column_schema, feature)\n return feature\n", "path": "nvtabular/graph/schema_io/schema_writer_pbtxt.py"}]} | 3,415 | 370 |
gh_patches_debug_33708 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1304 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PEP257 docstrings for file "./docs/conf.py"
Cover `./docs/conf.py` file with docstrings and follow [PEP257](https://www.python.org/dev/peps/pep-0257/). We use [pydocstyle](https://pypi.org/project/pydocstyle/) for validation.
Current validation log:
```
./docs/conf.py:1 at module level:
D100: Missing docstring in public module
./docs/conf.py:28 in public class `Mock`:
D101: Missing docstring in public class
./docs/conf.py:29 in public method `__init__`:
D107: Missing docstring in __init__
./docs/conf.py:32 in public method `__call__`:
D102: Missing docstring in public method
./docs/conf.py:36 in public method `__getattr__`:
D105: Missing docstring in magic method
```
Subtask for #742
</issue>
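Editor's note: `docs/conf.py` itself is not included in this excerpt, so the block below is only an illustration. The `Mock` method signatures are assumed from the pydocstyle output above, the bodies are intentionally empty, and the docstring wording is just an example of the PEP 257 style being requested (a one-line summary, capitalized, ending with a period).

```
# -*- coding: utf-8 -*-
"""Sphinx configuration for building the cookiecutter documentation."""


class Mock(object):
    """Stand-in for modules that are unavailable while docs are built."""

    def __init__(self, *args, **kwargs):
        """Accept and ignore any constructor arguments."""

    def __call__(self, *args, **kwargs):
        """Allow the mock to be used as a callable."""

    def __getattr__(self, name):
        """Handle attribute lookups on the mock."""
```

Docstrings along these lines clear the D100, D101, D102, D105 and D107 findings listed in the log.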
<code>
[start of docs/ccext.py]
1 # -*- coding: utf-8 -*-
2
3 """Custom Sphinx extension to build a list of all of cookiecutter's cli."""
4
5 import click
6 from docutils import nodes
7 from docutils.parsers import rst
8 from docutils.statemachine import ViewList
9
10 from cookiecutter import cli
11
12
13 class CcCommandLineOptions(rst.Directive):
14 def _format_option(self, option):
15 return [
16 ".. _`%s`:" % option.name,
17 "",
18 ".. option:: " + ", ".join(option.opts),
19 "",
20 option.help,
21 ""
22 ]
23
24 def process_actions(self):
25 for option in cli.main.params:
26 if isinstance(option, click.core.Option):
27 for line in self._format_option(option):
28 self.view_list.append(line, "")
29
30 def run(self):
31 node = nodes.paragraph()
32 node.document = self.state.document
33 self.view_list = ViewList()
34 self.process_actions()
35 self.state.nested_parse(self.view_list, 0, node)
36 return [node]
37
38
39 def setup(app):
40 app.add_directive('cc-command-line-options', CcCommandLineOptions)
41
[end of docs/ccext.py]
[start of cookiecutter/extensions.py]
1 # -*- coding: utf-8 -*-
2
3 """Jinja2 extensions."""
4
5 import json
6 import string
7 try:
8 # Python 3.6 and above
9 from secrets import choice
10 except ImportError:
11 from random import choice
12
13 from jinja2.ext import Extension
14
15
16 class JsonifyExtension(Extension):
17 """Jinja2 extension to convert a Python object to JSON."""
18
19 def __init__(self, environment):
20 """Initialize the extension with the given environment."""
21 super(JsonifyExtension, self).__init__(environment)
22
23 def jsonify(obj):
24 return json.dumps(obj, sort_keys=True, indent=4)
25
26 environment.filters['jsonify'] = jsonify
27
28
29 class RandomStringExtension(Extension):
30 """Jinja2 extension to create a random string."""
31
32 def __init__(self, environment):
33 """Jinja2 Extension Constructor"""
34 super(RandomStringExtension, self).__init__(environment)
35
36 def random_ascii_string(length, punctuation=False):
37 if punctuation:
38 corpus = "".join((string.ascii_letters, string.punctuation))
39 else:
40 corpus = string.ascii_letters
41 return "".join(choice(corpus) for _ in range(length))
42 environment.globals.update(random_ascii_string=random_ascii_string)
43
[end of cookiecutter/extensions.py]
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """cookiecutter distutils configuration"""
5
6 import os
7 import io
8 import sys
9
10 from setuptools import setup
11
12 version = "1.7.0"
13
14 if sys.argv[-1] == 'publish':
15 os.system('python setup.py sdist upload')
16 os.system('python setup.py bdist_wheel upload')
17 sys.exit()
18
19 if sys.argv[-1] == 'tag':
20 os.system("git tag -a %s -m 'version %s'" % (version, version))
21 os.system("git push --tags")
22 sys.exit()
23
24 with io.open('README.md', 'r', encoding='utf-8') as readme_file:
25 readme = readme_file.read()
26
27 requirements = [
28 'binaryornot>=0.2.0',
29 'jinja2>=2.7',
30 'click>=7.0',
31 'poyo>=0.1.0',
32 'jinja2-time>=0.1.0',
33 'requests>=2.18.0',
34 'six>=1.10',
35 ]
36
37 if sys.argv[-1] == 'readme':
38 print(readme)
39 sys.exit()
40
41
42 setup(
43 name='cookiecutter',
44 version=version,
45 description=('A command-line utility that creates projects from project '
46 'templates, e.g. creating a Python package project from a '
47 'Python package project template.'),
48 long_description=readme,
49 long_description_content_type='text/markdown',
50 author='Audrey Roy',
51 author_email='[email protected]',
52 url='https://github.com/cookiecutter/cookiecutter',
53 packages=[
54 'cookiecutter',
55 ],
56 package_dir={'cookiecutter': 'cookiecutter'},
57 entry_points={
58 'console_scripts': [
59 'cookiecutter = cookiecutter.__main__:main',
60 ]
61 },
62 include_package_data=True,
63 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
64 install_requires=requirements,
65 extras_require={
66 ':python_version<"3.3"': ['whichcraft>=0.4.0'],
67 },
68 license='BSD',
69 zip_safe=False,
70 classifiers=[
71 "Development Status :: 5 - Production/Stable",
72 "Environment :: Console",
73 "Intended Audience :: Developers",
74 "Natural Language :: English",
75 "License :: OSI Approved :: BSD License",
76 "Programming Language :: Python",
77 "Programming Language :: Python :: 2",
78 "Programming Language :: Python :: 2.7",
79 "Programming Language :: Python :: 3",
80 "Programming Language :: Python :: 3.5",
81 "Programming Language :: Python :: 3.6",
82 "Programming Language :: Python :: 3.7",
83 "Programming Language :: Python :: 3.8",
84 "Programming Language :: Python :: Implementation :: CPython",
85 "Programming Language :: Python :: Implementation :: PyPy",
86 "Topic :: Software Development",
87 ],
88 keywords=(
89 'cookiecutter, Python, projects, project templates, Jinja2, '
90 'skeleton, scaffolding, project directory, setup.py, package, '
91 'packaging'
92 ),
93 )
94
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py
--- a/cookiecutter/extensions.py
+++ b/cookiecutter/extensions.py
@@ -30,7 +30,7 @@
"""Jinja2 extension to create a random string."""
def __init__(self, environment):
- """Jinja2 Extension Constructor"""
+ """Jinja2 Extension Constructor."""
super(RandomStringExtension, self).__init__(environment)
def random_ascii_string(length, punctuation=False):
diff --git a/docs/ccext.py b/docs/ccext.py
--- a/docs/ccext.py
+++ b/docs/ccext.py
@@ -11,7 +11,10 @@
class CcCommandLineOptions(rst.Directive):
+ """Custom docutils extension class to parse cli commands from code."""
+
def _format_option(self, option):
+ """Do cli options formatting."""
return [
".. _`%s`:" % option.name,
"",
@@ -22,12 +25,14 @@
]
def process_actions(self):
+ """Get options from cookiecutter, send to formatter, prepare result."""
for option in cli.main.params:
if isinstance(option, click.core.Option):
for line in self._format_option(option):
self.view_list.append(line, "")
def run(self):
+ """Override `run` in `rst.Directive` class."""
node = nodes.paragraph()
node.document = self.state.document
self.view_list = ViewList()
@@ -37,4 +42,5 @@
def setup(app):
+ """Register a Docutils extension directive."""
app.add_directive('cc-command-line-options', CcCommandLineOptions)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-"""cookiecutter distutils configuration"""
+"""cookiecutter distutils configuration."""
import os
import io
| {"golden_diff": "diff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py\n--- a/cookiecutter/extensions.py\n+++ b/cookiecutter/extensions.py\n@@ -30,7 +30,7 @@\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n \n def __init__(self, environment):\n- \"\"\"Jinja2 Extension Constructor\"\"\"\n+ \"\"\"Jinja2 Extension Constructor.\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n \n def random_ascii_string(length, punctuation=False):\ndiff --git a/docs/ccext.py b/docs/ccext.py\n--- a/docs/ccext.py\n+++ b/docs/ccext.py\n@@ -11,7 +11,10 @@\n \n \n class CcCommandLineOptions(rst.Directive):\n+ \"\"\"Custom docutils extension class to parse cli commands from code.\"\"\"\n+\n def _format_option(self, option):\n+ \"\"\"Do cli options formatting.\"\"\"\n return [\n \".. _`%s`:\" % option.name,\n \"\",\n@@ -22,12 +25,14 @@\n ]\n \n def process_actions(self):\n+ \"\"\"Get options from cookiecutter, send to formatter, prepare result.\"\"\"\n for option in cli.main.params:\n if isinstance(option, click.core.Option):\n for line in self._format_option(option):\n self.view_list.append(line, \"\")\n \n def run(self):\n+ \"\"\"Override `run` in `rst.Directive` class.\"\"\"\n node = nodes.paragraph()\n node.document = self.state.document\n self.view_list = ViewList()\n@@ -37,4 +42,5 @@\n \n \n def setup(app):\n+ \"\"\"Register a Docutils extension directive.\"\"\"\n app.add_directive('cc-command-line-options', CcCommandLineOptions)\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,7 +1,7 @@\n #!/usr/bin/env python\n # -*- coding: utf-8 -*-\n \n-\"\"\"cookiecutter distutils configuration\"\"\"\n+\"\"\"cookiecutter distutils configuration.\"\"\"\n \n import os\n import io\n", "issue": "PEP257 docstrings for file \"./docs/conf.py\"\nCover `./docs/conf.py` file with docstrings and follow [PEP257](https://www.python.org/dev/peps/pep-0257/). We use [pydocstyle](https://pypi.org/project/pydocstyle/) for validation.\r\n\r\nCurrent validation log:\r\n\r\n```\r\n./docs/conf.py:1 at module level:\r\n D100: Missing docstring in public module\r\n./docs/conf.py:28 in public class `Mock`:\r\n D101: Missing docstring in public class\r\n./docs/conf.py:29 in public method `__init__`:\r\n D107: Missing docstring in __init__\r\n./docs/conf.py:32 in public method `__call__`:\r\n D102: Missing docstring in public method\r\n./docs/conf.py:36 in public method `__getattr__`:\r\n D105: Missing docstring in magic method\r\n```\r\n\r\nSubtask for #742 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Custom Sphinx extension to build a list of all of cookiecutter's cli.\"\"\"\n\nimport click\nfrom docutils import nodes\nfrom docutils.parsers import rst\nfrom docutils.statemachine import ViewList\n\nfrom cookiecutter import cli\n\n\nclass CcCommandLineOptions(rst.Directive):\n def _format_option(self, option):\n return [\n \".. _`%s`:\" % option.name,\n \"\",\n \".. 
option:: \" + \", \".join(option.opts),\n \"\",\n option.help,\n \"\"\n ]\n\n def process_actions(self):\n for option in cli.main.params:\n if isinstance(option, click.core.Option):\n for line in self._format_option(option):\n self.view_list.append(line, \"\")\n\n def run(self):\n node = nodes.paragraph()\n node.document = self.state.document\n self.view_list = ViewList()\n self.process_actions()\n self.state.nested_parse(self.view_list, 0, node)\n return [node]\n\n\ndef setup(app):\n app.add_directive('cc-command-line-options', CcCommandLineOptions)\n", "path": "docs/ccext.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Jinja2 extensions.\"\"\"\n\nimport json\nimport string\ntry:\n # Python 3.6 and above\n from secrets import choice\nexcept ImportError:\n from random import choice\n\nfrom jinja2.ext import Extension\n\n\nclass JsonifyExtension(Extension):\n \"\"\"Jinja2 extension to convert a Python object to JSON.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Initialize the extension with the given environment.\"\"\"\n super(JsonifyExtension, self).__init__(environment)\n\n def jsonify(obj):\n return json.dumps(obj, sort_keys=True, indent=4)\n\n environment.filters['jsonify'] = jsonify\n\n\nclass RandomStringExtension(Extension):\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension Constructor\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n\n def random_ascii_string(length, punctuation=False):\n if punctuation:\n corpus = \"\".join((string.ascii_letters, string.punctuation))\n else:\n corpus = string.ascii_letters\n return \"\".join(choice(corpus) for _ in range(length))\n environment.globals.update(random_ascii_string=random_ascii_string)\n", "path": "cookiecutter/extensions.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"cookiecutter distutils configuration\"\"\"\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.7.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.md', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=7.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n 'requests>=2.18.0',\n 'six>=1.10',\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n extras_require={\n ':python_version<\"3.3\"': ['whichcraft>=0.4.0'],\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]} | 2,324 | 446 |
gh_patches_debug_4070 | rasdani/github-patches | git_diff | scrapy__scrapy-4033 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
may be 'accessible'?
in the function [request_fingerprint](https://github.com/scrapy/scrapy/blob/master/scrapy/utils/request.py), ‘accesible’ may be ‘accessible’ in the comments. OCD XD..
</issue>
<code>
[start of scrapy/utils/request.py]
1 """
2 This module provides some useful functions for working with
3 scrapy.http.Request objects
4 """
5
6 from __future__ import print_function
7 import hashlib
8 import weakref
9 from six.moves.urllib.parse import urlunparse
10
11 from w3lib.http import basic_auth_header
12 from scrapy.utils.python import to_bytes, to_native_str
13
14 from w3lib.url import canonicalize_url
15 from scrapy.utils.httpobj import urlparse_cached
16
17
18 _fingerprint_cache = weakref.WeakKeyDictionary()
19 def request_fingerprint(request, include_headers=None):
20 """
21 Return the request fingerprint.
22
23 The request fingerprint is a hash that uniquely identifies the resource the
24 request points to. For example, take the following two urls:
25
26 http://www.example.com/query?id=111&cat=222
27 http://www.example.com/query?cat=222&id=111
28
29 Even though those are two different URLs both point to the same resource
30 and are equivalent (ie. they should return the same response).
31
32 Another example are cookies used to store session ids. Suppose the
33 following page is only accesible to authenticated users:
34
35 http://www.example.com/members/offers.html
36
37 Lot of sites use a cookie to store the session id, which adds a random
38 component to the HTTP Request and thus should be ignored when calculating
39 the fingerprint.
40
41 For this reason, request headers are ignored by default when calculating
42 the fingeprint. If you want to include specific headers use the
43 include_headers argument, which is a list of Request headers to include.
44
45 """
46 if include_headers:
47 include_headers = tuple(to_bytes(h.lower())
48 for h in sorted(include_headers))
49 cache = _fingerprint_cache.setdefault(request, {})
50 if include_headers not in cache:
51 fp = hashlib.sha1()
52 fp.update(to_bytes(request.method))
53 fp.update(to_bytes(canonicalize_url(request.url)))
54 fp.update(request.body or b'')
55 if include_headers:
56 for hdr in include_headers:
57 if hdr in request.headers:
58 fp.update(hdr)
59 for v in request.headers.getlist(hdr):
60 fp.update(v)
61 cache[include_headers] = fp.hexdigest()
62 return cache[include_headers]
63
64
65 def request_authenticate(request, username, password):
66 """Autenticate the given request (in place) using the HTTP basic access
67 authentication mechanism (RFC 2617) and the given username and password
68 """
69 request.headers['Authorization'] = basic_auth_header(username, password)
70
71
72 def request_httprepr(request):
73 """Return the raw HTTP representation (as bytes) of the given request.
74 This is provided only for reference since it's not the actual stream of
75 bytes that will be send when performing the request (that's controlled
76 by Twisted).
77 """
78 parsed = urlparse_cached(request)
79 path = urlunparse(('', '', parsed.path or '/', parsed.params, parsed.query, ''))
80 s = to_bytes(request.method) + b" " + to_bytes(path) + b" HTTP/1.1\r\n"
81 s += b"Host: " + to_bytes(parsed.hostname or b'') + b"\r\n"
82 if request.headers:
83 s += request.headers.to_string() + b"\r\n"
84 s += b"\r\n"
85 s += request.body
86 return s
87
88
89 def referer_str(request):
90 """ Return Referer HTTP header suitable for logging. """
91 referrer = request.headers.get('Referer')
92 if referrer is None:
93 return referrer
94 return to_native_str(referrer, errors='replace')
95
[end of scrapy/utils/request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/utils/request.py b/scrapy/utils/request.py
--- a/scrapy/utils/request.py
+++ b/scrapy/utils/request.py
@@ -30,7 +30,7 @@
and are equivalent (ie. they should return the same response).
Another example are cookies used to store session ids. Suppose the
- following page is only accesible to authenticated users:
+ following page is only accessible to authenticated users:
http://www.example.com/members/offers.html
| {"golden_diff": "diff --git a/scrapy/utils/request.py b/scrapy/utils/request.py\n--- a/scrapy/utils/request.py\n+++ b/scrapy/utils/request.py\n@@ -30,7 +30,7 @@\n and are equivalent (ie. they should return the same response).\n \n Another example are cookies used to store session ids. Suppose the\n- following page is only accesible to authenticated users:\n+ following page is only accessible to authenticated users:\n \n http://www.example.com/members/offers.html\n", "issue": "may be 'accessible'?\nin the function [request_fingerprint](https://github.com/scrapy/scrapy/blob/master/scrapy/utils/request.py) \uff0c\u2018accesible\u2019 may be \u2018accessible\u2019 in comments. OCD XD..\r\n\n", "before_files": [{"content": "\"\"\"\nThis module provides some useful functions for working with\nscrapy.http.Request objects\n\"\"\"\n\nfrom __future__ import print_function\nimport hashlib\nimport weakref\nfrom six.moves.urllib.parse import urlunparse\n\nfrom w3lib.http import basic_auth_header\nfrom scrapy.utils.python import to_bytes, to_native_str\n\nfrom w3lib.url import canonicalize_url\nfrom scrapy.utils.httpobj import urlparse_cached\n\n\n_fingerprint_cache = weakref.WeakKeyDictionary()\ndef request_fingerprint(request, include_headers=None):\n \"\"\"\n Return the request fingerprint.\n\n The request fingerprint is a hash that uniquely identifies the resource the\n request points to. For example, take the following two urls:\n\n http://www.example.com/query?id=111&cat=222\n http://www.example.com/query?cat=222&id=111\n\n Even though those are two different URLs both point to the same resource\n and are equivalent (ie. they should return the same response).\n\n Another example are cookies used to store session ids. Suppose the\n following page is only accesible to authenticated users:\n\n http://www.example.com/members/offers.html\n\n Lot of sites use a cookie to store the session id, which adds a random\n component to the HTTP Request and thus should be ignored when calculating\n the fingerprint.\n\n For this reason, request headers are ignored by default when calculating\n the fingeprint. 
If you want to include specific headers use the\n include_headers argument, which is a list of Request headers to include.\n\n \"\"\"\n if include_headers:\n include_headers = tuple(to_bytes(h.lower())\n for h in sorted(include_headers))\n cache = _fingerprint_cache.setdefault(request, {})\n if include_headers not in cache:\n fp = hashlib.sha1()\n fp.update(to_bytes(request.method))\n fp.update(to_bytes(canonicalize_url(request.url)))\n fp.update(request.body or b'')\n if include_headers:\n for hdr in include_headers:\n if hdr in request.headers:\n fp.update(hdr)\n for v in request.headers.getlist(hdr):\n fp.update(v)\n cache[include_headers] = fp.hexdigest()\n return cache[include_headers]\n\n\ndef request_authenticate(request, username, password):\n \"\"\"Autenticate the given request (in place) using the HTTP basic access\n authentication mechanism (RFC 2617) and the given username and password\n \"\"\"\n request.headers['Authorization'] = basic_auth_header(username, password)\n\n\ndef request_httprepr(request):\n \"\"\"Return the raw HTTP representation (as bytes) of the given request.\n This is provided only for reference since it's not the actual stream of\n bytes that will be send when performing the request (that's controlled\n by Twisted).\n \"\"\"\n parsed = urlparse_cached(request)\n path = urlunparse(('', '', parsed.path or '/', parsed.params, parsed.query, ''))\n s = to_bytes(request.method) + b\" \" + to_bytes(path) + b\" HTTP/1.1\\r\\n\"\n s += b\"Host: \" + to_bytes(parsed.hostname or b'') + b\"\\r\\n\"\n if request.headers:\n s += request.headers.to_string() + b\"\\r\\n\"\n s += b\"\\r\\n\"\n s += request.body\n return s\n\n\ndef referer_str(request):\n \"\"\" Return Referer HTTP header suitable for logging. \"\"\"\n referrer = request.headers.get('Referer')\n if referrer is None:\n return referrer\n return to_native_str(referrer, errors='replace')\n", "path": "scrapy/utils/request.py"}]} | 1,537 | 109 |
gh_patches_debug_6415 | rasdani/github-patches | git_diff | open-mmlab__mmpose-1338 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
框架有很多没有通过测试的Bug
在我运行res50_freihand2d_224x224.py的过程中就碰到了两个问题
1. tools/dist_train.sh里面没有定义$MASTER_ADDR
2. res50_freihand2d_224x224.py 没有定义runner = dict(type='EpochBasedRunner', max_epochs=total_epochs)
其他的还没有跑,但感觉会有很多小问题
发布的版本有点粗糙呀,期待把这些小Bug尽快解决,后续很期待在这个框架上做更多实验
</issue>
<code>
[start of tools/train.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import argparse
3 import copy
4 import os
5 import os.path as osp
6 import time
7 import warnings
8
9 import mmcv
10 import torch
11 import torch.distributed as dist
12 from mmcv import Config, DictAction
13 from mmcv.runner import get_dist_info, init_dist, set_random_seed
14 from mmcv.utils import get_git_hash
15
16 from mmpose import __version__
17 from mmpose.apis import init_random_seed, train_model
18 from mmpose.datasets import build_dataset
19 from mmpose.models import build_posenet
20 from mmpose.utils import collect_env, get_root_logger, setup_multi_processes
21
22
23 def parse_args():
24 parser = argparse.ArgumentParser(description='Train a pose model')
25 parser.add_argument('config', help='train config file path')
26 parser.add_argument('--work-dir', help='the dir to save logs and models')
27 parser.add_argument(
28 '--resume-from', help='the checkpoint file to resume from')
29 parser.add_argument(
30 '--no-validate',
31 action='store_true',
32 help='whether not to evaluate the checkpoint during training')
33 group_gpus = parser.add_mutually_exclusive_group()
34 group_gpus.add_argument(
35 '--gpus',
36 type=int,
37 help='(Deprecated, please use --gpu-id) number of gpus to use '
38 '(only applicable to non-distributed training)')
39 group_gpus.add_argument(
40 '--gpu-ids',
41 type=int,
42 nargs='+',
43 help='(Deprecated, please use --gpu-id) ids of gpus to use '
44 '(only applicable to non-distributed training)')
45 group_gpus.add_argument(
46 '--gpu-id',
47 type=int,
48 default=0,
49 help='id of gpu to use '
50 '(only applicable to non-distributed training)')
51 parser.add_argument('--seed', type=int, default=None, help='random seed')
52 parser.add_argument(
53 '--diff_seed',
54 action='store_true',
55 help='Whether or not set different seeds for different ranks')
56 parser.add_argument(
57 '--deterministic',
58 action='store_true',
59 help='whether to set deterministic options for CUDNN backend.')
60 parser.add_argument(
61 '--cfg-options',
62 nargs='+',
63 action=DictAction,
64 default={},
65 help='override some settings in the used config, the key-value pair '
66 'in xxx=yyy format will be merged into config file. For example, '
67 "'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'")
68 parser.add_argument(
69 '--launcher',
70 choices=['none', 'pytorch', 'slurm', 'mpi'],
71 default='none',
72 help='job launcher')
73 parser.add_argument('--local-rank', type=int, default=0)
74 parser.add_argument(
75 '--local_rank', type=int, default=0, help='An alias to --local-rank')
76 parser.add_argument(
77 '--autoscale-lr',
78 action='store_true',
79 help='automatically scale lr with the number of gpus')
80 args = parser.parse_args()
81 if 'LOCAL_RANK' not in os.environ:
82 os.environ['LOCAL_RANK'] = str(args.local_rank)
83
84 return args
85
86
87 def main():
88 args = parse_args()
89
90 cfg = Config.fromfile(args.config)
91
92 if args.cfg_options is not None:
93 cfg.merge_from_dict(args.cfg_options)
94
95 # set multi-process settings
96 setup_multi_processes(cfg)
97
98 # set cudnn_benchmark
99 if cfg.get('cudnn_benchmark', False):
100 torch.backends.cudnn.benchmark = True
101
102 # work_dir is determined in this priority: CLI > segment in file > filename
103 if args.work_dir is not None:
104 # update configs according to CLI args if args.work_dir is not None
105 cfg.work_dir = args.work_dir
106 elif cfg.get('work_dir', None) is None:
107 # use config filename as default work_dir if cfg.work_dir is None
108 cfg.work_dir = osp.join('./work_dirs',
109 osp.splitext(osp.basename(args.config))[0])
110 if args.resume_from is not None:
111 cfg.resume_from = args.resume_from
112 if args.gpus is not None:
113 cfg.gpu_ids = range(1)
114 warnings.warn('`--gpus` is deprecated because we only support '
115 'single GPU mode in non-distributed training. '
116 'Use `gpus=1` now.')
117 if args.gpu_ids is not None:
118 cfg.gpu_ids = args.gpu_ids[0:1]
119 warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '
120 'Because we only support single GPU mode in '
121 'non-distributed training. Use the first GPU '
122 'in `gpu_ids` now.')
123 if args.gpus is None and args.gpu_ids is None:
124 cfg.gpu_ids = [args.gpu_id]
125
126 if args.autoscale_lr:
127 # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)
128 cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8
129
130 # init distributed env first, since logger depends on the dist info.
131 if args.launcher == 'none':
132 distributed = False
133 if len(cfg.gpu_ids) > 1:
134 warnings.warn(
135 f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '
136 f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '
137 'non-distribute training time.')
138 cfg.gpu_ids = cfg.gpu_ids[0:1]
139 else:
140 distributed = True
141 init_dist(args.launcher, **cfg.dist_params)
142 # re-set gpu_ids with distributed training mode
143 _, world_size = get_dist_info()
144 cfg.gpu_ids = range(world_size)
145
146 # create work_dir
147 mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
148 # init the logger before other steps
149 timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
150 log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
151 logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)
152
153 # init the meta dict to record some important information such as
154 # environment info and seed, which will be logged
155 meta = dict()
156 # log env info
157 env_info_dict = collect_env()
158 env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])
159 dash_line = '-' * 60 + '\n'
160 logger.info('Environment info:\n' + dash_line + env_info + '\n' +
161 dash_line)
162 meta['env_info'] = env_info
163
164 # log some basic info
165 logger.info(f'Distributed training: {distributed}')
166 logger.info(f'Config:\n{cfg.pretty_text}')
167
168 # set random seeds
169 seed = init_random_seed(args.seed)
170 seed = seed + dist.get_rank() if args.diff_seed else seed
171 logger.info(f'Set random seed to {seed}, '
172 f'deterministic: {args.deterministic}')
173 set_random_seed(seed, deterministic=args.deterministic)
174 cfg.seed = seed
175 meta['seed'] = seed
176
177 model = build_posenet(cfg.model)
178 datasets = [build_dataset(cfg.data.train)]
179
180 if len(cfg.workflow) == 2:
181 val_dataset = copy.deepcopy(cfg.data.val)
182 val_dataset.pipeline = cfg.data.train.pipeline
183 datasets.append(build_dataset(val_dataset))
184
185 if cfg.checkpoint_config is not None:
186 # save mmpose version, config file content
187 # checkpoints as meta data
188 cfg.checkpoint_config.meta = dict(
189 mmpose_version=__version__ + get_git_hash(digits=7),
190 config=cfg.pretty_text,
191 )
192 train_model(
193 model,
194 datasets,
195 cfg,
196 distributed=distributed,
197 validate=(not args.no_validate),
198 timestamp=timestamp,
199 meta=meta)
200
201
202 if __name__ == '__main__':
203 main()
204
[end of tools/train.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/train.py b/tools/train.py
--- a/tools/train.py
+++ b/tools/train.py
@@ -70,9 +70,7 @@
choices=['none', 'pytorch', 'slurm', 'mpi'],
default='none',
help='job launcher')
- parser.add_argument('--local-rank', type=int, default=0)
- parser.add_argument(
- '--local_rank', type=int, default=0, help='An alias to --local-rank')
+ parser.add_argument('--local_rank', type=int, default=0)
parser.add_argument(
'--autoscale-lr',
action='store_true',
| {"golden_diff": "diff --git a/tools/train.py b/tools/train.py\n--- a/tools/train.py\n+++ b/tools/train.py\n@@ -70,9 +70,7 @@\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n- parser.add_argument('--local-rank', type=int, default=0)\n- parser.add_argument(\n- '--local_rank', type=int, default=0, help='An alias to --local-rank')\n+ parser.add_argument('--local_rank', type=int, default=0)\n parser.add_argument(\n '--autoscale-lr',\n action='store_true',\n", "issue": "\u6846\u67b6\u6709\u5f88\u591a\u6ca1\u6709\u901a\u8fc7\u6d4b\u8bd5\u7684Bug\n\u5728\u6211\u8fd0\u884cres50_freihand2d_224x224.py\u7684\u8fc7\u7a0b\u4e2d\u5c31\u78b0\u5230\u4e86\u4e24\u4e2a\u95ee\u9898\r\n\r\n1. tools/dist_train.sh\u91cc\u9762\u6ca1\u6709\u5b9a\u4e49$MASTER_ADDR\r\n\r\n2. res50_freihand2d_224x224.py \u6ca1\u6709\u5b9a\u4e49runner = dict(type='EpochBasedRunner', max_epochs=total_epochs)\r\n\r\n\u5176\u4ed6\u7684\u8fd8\u6ca1\u6709\u8dd1\uff0c\u4f46\u611f\u89c9\u4f1a\u6709\u5f88\u591a\u5c0f\u95ee\u9898\r\n\r\n\u53d1\u5e03\u7684\u7248\u672c\u6709\u70b9\u7c97\u7cd9\u5440\uff0c\u671f\u5f85\u628a\u8fd9\u4e9b\u5c0fBug\u5c3d\u5feb\u89e3\u51b3\uff0c\u540e\u7eed\u5f88\u671f\u5f85\u5728\u8fd9\u4e2a\u6846\u67b6\u4e0a\u505a\u66f4\u591a\u5b9e\u9a8c\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\nimport warnings\n\nimport mmcv\nimport torch\nimport torch.distributed as dist\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import get_dist_info, init_dist, set_random_seed\nfrom mmcv.utils import get_git_hash\n\nfrom mmpose import __version__\nfrom mmpose.apis import init_random_seed, train_model\nfrom mmpose.datasets import build_dataset\nfrom mmpose.models import build_posenet\nfrom mmpose.utils import collect_env, get_root_logger, setup_multi_processes\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Train a pose model')\n parser.add_argument('config', help='train config file path')\n parser.add_argument('--work-dir', help='the dir to save logs and models')\n parser.add_argument(\n '--resume-from', help='the checkpoint file to resume from')\n parser.add_argument(\n '--no-validate',\n action='store_true',\n help='whether not to evaluate the checkpoint during training')\n group_gpus = parser.add_mutually_exclusive_group()\n group_gpus.add_argument(\n '--gpus',\n type=int,\n help='(Deprecated, please use --gpu-id) number of gpus to use '\n '(only applicable to non-distributed training)')\n group_gpus.add_argument(\n '--gpu-ids',\n type=int,\n nargs='+',\n help='(Deprecated, please use --gpu-id) ids of gpus to use '\n '(only applicable to non-distributed training)')\n group_gpus.add_argument(\n '--gpu-id',\n type=int,\n default=0,\n help='id of gpu to use '\n '(only applicable to non-distributed training)')\n parser.add_argument('--seed', type=int, default=None, help='random seed')\n parser.add_argument(\n '--diff_seed',\n action='store_true',\n help='Whether or not set different seeds for different ranks')\n parser.add_argument(\n '--deterministic',\n action='store_true',\n help='whether to set deterministic options for CUDNN backend.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n default={},\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. 
For example, '\n \"'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'\")\n parser.add_argument(\n '--launcher',\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n parser.add_argument('--local-rank', type=int, default=0)\n parser.add_argument(\n '--local_rank', type=int, default=0, help='An alias to --local-rank')\n parser.add_argument(\n '--autoscale-lr',\n action='store_true',\n help='automatically scale lr with the number of gpus')\n args = parser.parse_args()\n if 'LOCAL_RANK' not in os.environ:\n os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n return args\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n\n # set multi-process settings\n setup_multi_processes(cfg)\n\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n\n # work_dir is determined in this priority: CLI > segment in file > filename\n if args.work_dir is not None:\n # update configs according to CLI args if args.work_dir is not None\n cfg.work_dir = args.work_dir\n elif cfg.get('work_dir', None) is None:\n # use config filename as default work_dir if cfg.work_dir is None\n cfg.work_dir = osp.join('./work_dirs',\n osp.splitext(osp.basename(args.config))[0])\n if args.resume_from is not None:\n cfg.resume_from = args.resume_from\n if args.gpus is not None:\n cfg.gpu_ids = range(1)\n warnings.warn('`--gpus` is deprecated because we only support '\n 'single GPU mode in non-distributed training. '\n 'Use `gpus=1` now.')\n if args.gpu_ids is not None:\n cfg.gpu_ids = args.gpu_ids[0:1]\n warnings.warn('`--gpu-ids` is deprecated, please use `--gpu-id`. '\n 'Because we only support single GPU mode in '\n 'non-distributed training. 
Use the first GPU '\n 'in `gpu_ids` now.')\n if args.gpus is None and args.gpu_ids is None:\n cfg.gpu_ids = [args.gpu_id]\n\n if args.autoscale_lr:\n # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)\n cfg.optimizer['lr'] = cfg.optimizer['lr'] * len(cfg.gpu_ids) / 8\n\n # init distributed env first, since logger depends on the dist info.\n if args.launcher == 'none':\n distributed = False\n if len(cfg.gpu_ids) > 1:\n warnings.warn(\n f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '\n f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '\n 'non-distribute training time.')\n cfg.gpu_ids = cfg.gpu_ids[0:1]\n else:\n distributed = True\n init_dist(args.launcher, **cfg.dist_params)\n # re-set gpu_ids with distributed training mode\n _, world_size = get_dist_info()\n cfg.gpu_ids = range(world_size)\n\n # create work_dir\n mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n # init the logger before other steps\n timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n # init the meta dict to record some important information such as\n # environment info and seed, which will be logged\n meta = dict()\n # log env info\n env_info_dict = collect_env()\n env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n dash_line = '-' * 60 + '\\n'\n logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n dash_line)\n meta['env_info'] = env_info\n\n # log some basic info\n logger.info(f'Distributed training: {distributed}')\n logger.info(f'Config:\\n{cfg.pretty_text}')\n\n # set random seeds\n seed = init_random_seed(args.seed)\n seed = seed + dist.get_rank() if args.diff_seed else seed\n logger.info(f'Set random seed to {seed}, '\n f'deterministic: {args.deterministic}')\n set_random_seed(seed, deterministic=args.deterministic)\n cfg.seed = seed\n meta['seed'] = seed\n\n model = build_posenet(cfg.model)\n datasets = [build_dataset(cfg.data.train)]\n\n if len(cfg.workflow) == 2:\n val_dataset = copy.deepcopy(cfg.data.val)\n val_dataset.pipeline = cfg.data.train.pipeline\n datasets.append(build_dataset(val_dataset))\n\n if cfg.checkpoint_config is not None:\n # save mmpose version, config file content\n # checkpoints as meta data\n cfg.checkpoint_config.meta = dict(\n mmpose_version=__version__ + get_git_hash(digits=7),\n config=cfg.pretty_text,\n )\n train_model(\n model,\n datasets,\n cfg,\n distributed=distributed,\n validate=(not args.no_validate),\n timestamp=timestamp,\n meta=meta)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/train.py"}]} | 2,909 | 143 |
gh_patches_debug_13920 | rasdani/github-patches | git_diff | searx__searx-1135 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Yacy results crash
Getting:
Engines cannot retrieve results:
yacy (unexpected crash)
> ERROR:searx.search:engine yacy : exception : 'url'
> Traceback (most recent call last):
> File "/home/leo/searx/searx/search.py", line 118, in search_one_request_safe
> search_results = search_one_request(engine, query, request_params, start_time, timeout_limit)
> File "/home/leo/searx/searx/search.py", line 110, in search_one_request
> return engine.response(response)
> File "/home/leo/searx/searx/engines/yacy.py", line 80, in response
> results.append({'url': result['url'],
> KeyError: 'url'
</issue>
<code>
[start of searx/engines/yacy.py]
1 # Yacy (Web, Images, Videos, Music, Files)
2 #
3 # @website http://yacy.net
4 # @provide-api yes
5 # (http://www.yacy-websuche.de/wiki/index.php/Dev:APIyacysearch)
6 #
7 # @using-api yes
8 # @results JSON
9 # @stable yes
10 # @parse (general) url, title, content, publishedDate
11 # @parse (images) url, title, img_src
12 #
13 # @todo parse video, audio and file results
14
15 from json import loads
16 from dateutil import parser
17 from searx.url_utils import urlencode
18
19 from searx.utils import html_to_text
20
21 # engine dependent config
22 categories = ['general', 'images'] # TODO , 'music', 'videos', 'files'
23 paging = True
24 language_support = True
25 number_of_results = 5
26
27 # search-url
28 base_url = 'http://localhost:8090'
29 search_url = '/yacysearch.json?{query}'\
30 '&startRecord={offset}'\
31 '&maximumRecords={limit}'\
32 '&contentdom={search_type}'\
33 '&resource=global'
34
35 # yacy specific type-definitions
36 search_types = {'general': 'text',
37 'images': 'image',
38 'files': 'app',
39 'music': 'audio',
40 'videos': 'video'}
41
42
43 # do search-request
44 def request(query, params):
45 offset = (params['pageno'] - 1) * number_of_results
46 search_type = search_types.get(params.get('category'), '0')
47
48 params['url'] = base_url +\
49 search_url.format(query=urlencode({'query': query}),
50 offset=offset,
51 limit=number_of_results,
52 search_type=search_type)
53
54 params['url'] += '&lr=lang_' + params['language'].split('-')[0]
55
56 return params
57
58
59 # get response from search-request
60 def response(resp):
61 results = []
62
63 raw_search_results = loads(resp.text)
64
65 # return empty array if there are no results
66 if not raw_search_results:
67 return []
68
69 search_results = raw_search_results.get('channels', [])
70
71 if len(search_results) == 0:
72 return []
73
74 for result in search_results[0].get('items', []):
75 # parse image results
76 if result.get('image'):
77 # append result
78 results.append({'url': result['url'],
79 'title': result['title'],
80 'content': '',
81 'img_src': result['image'],
82 'template': 'images.html'})
83
84 # parse general results
85 else:
86 publishedDate = parser.parse(result['pubDate'])
87
88 # append result
89 results.append({'url': result['link'],
90 'title': result['title'],
91 'content': html_to_text(result['description']),
92 'publishedDate': publishedDate})
93
94 # TODO parse video, audio and file results
95
96 # return results
97 return results
98
[end of searx/engines/yacy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/yacy.py b/searx/engines/yacy.py
--- a/searx/engines/yacy.py
+++ b/searx/engines/yacy.py
@@ -74,8 +74,17 @@
for result in search_results[0].get('items', []):
# parse image results
if result.get('image'):
+
+ result_url = ''
+ if 'url' in result:
+ result_url = result['url']
+ elif 'link' in result:
+ result_url = result['link']
+ else:
+ continue
+
# append result
- results.append({'url': result['url'],
+ results.append({'url': result_url,
'title': result['title'],
'content': '',
'img_src': result['image'],
| {"golden_diff": "diff --git a/searx/engines/yacy.py b/searx/engines/yacy.py\n--- a/searx/engines/yacy.py\n+++ b/searx/engines/yacy.py\n@@ -74,8 +74,17 @@\n for result in search_results[0].get('items', []):\n # parse image results\n if result.get('image'):\n+\n+ result_url = ''\n+ if 'url' in result:\n+ result_url = result['url']\n+ elif 'link' in result:\n+ result_url = result['link']\n+ else:\n+ continue\n+\n # append result\n- results.append({'url': result['url'],\n+ results.append({'url': result_url,\n 'title': result['title'],\n 'content': '',\n 'img_src': result['image'],\n", "issue": "Yacy results crash\nGetting:\r\nEngines cannot retrieve results:\r\nyacy (unexpected crash)\r\n\r\n> ERROR:searx.search:engine yacy : exception : 'url'\r\n> Traceback (most recent call last):\r\n> File \"/home/leo/searx/searx/search.py\", line 118, in search_one_request_safe\r\n> search_results = search_one_request(engine, query, request_params, start_time, timeout_limit)\r\n> File \"/home/leo/searx/searx/search.py\", line 110, in search_one_request\r\n> return engine.response(response)\r\n> File \"/home/leo/searx/searx/engines/yacy.py\", line 80, in response\r\n> results.append({'url': result['url'],\r\n> KeyError: 'url'\n", "before_files": [{"content": "# Yacy (Web, Images, Videos, Music, Files)\n#\n# @website http://yacy.net\n# @provide-api yes\n# (http://www.yacy-websuche.de/wiki/index.php/Dev:APIyacysearch)\n#\n# @using-api yes\n# @results JSON\n# @stable yes\n# @parse (general) url, title, content, publishedDate\n# @parse (images) url, title, img_src\n#\n# @todo parse video, audio and file results\n\nfrom json import loads\nfrom dateutil import parser\nfrom searx.url_utils import urlencode\n\nfrom searx.utils import html_to_text\n\n# engine dependent config\ncategories = ['general', 'images'] # TODO , 'music', 'videos', 'files'\npaging = True\nlanguage_support = True\nnumber_of_results = 5\n\n# search-url\nbase_url = 'http://localhost:8090'\nsearch_url = '/yacysearch.json?{query}'\\\n '&startRecord={offset}'\\\n '&maximumRecords={limit}'\\\n '&contentdom={search_type}'\\\n '&resource=global'\n\n# yacy specific type-definitions\nsearch_types = {'general': 'text',\n 'images': 'image',\n 'files': 'app',\n 'music': 'audio',\n 'videos': 'video'}\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * number_of_results\n search_type = search_types.get(params.get('category'), '0')\n\n params['url'] = base_url +\\\n search_url.format(query=urlencode({'query': query}),\n offset=offset,\n limit=number_of_results,\n search_type=search_type)\n\n params['url'] += '&lr=lang_' + params['language'].split('-')[0]\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n raw_search_results = loads(resp.text)\n\n # return empty array if there are no results\n if not raw_search_results:\n return []\n\n search_results = raw_search_results.get('channels', [])\n\n if len(search_results) == 0:\n return []\n\n for result in search_results[0].get('items', []):\n # parse image results\n if result.get('image'):\n # append result\n results.append({'url': result['url'],\n 'title': result['title'],\n 'content': '',\n 'img_src': result['image'],\n 'template': 'images.html'})\n\n # parse general results\n else:\n publishedDate = parser.parse(result['pubDate'])\n\n # append result\n results.append({'url': result['link'],\n 'title': result['title'],\n 'content': html_to_text(result['description']),\n 'publishedDate': publishedDate})\n\n # TODO parse video, 
audio and file results\n\n # return results\n return results\n", "path": "searx/engines/yacy.py"}]} | 1,555 | 189 |
gh_patches_debug_64039 | rasdani/github-patches | git_diff | Rapptz__discord.py-1057 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot connect to voice channels
Running `await voice_channel.connect()` raises
`AttributeError: 'LP_EncoderStruct' object has no attribute 'value'`
Relevant portions of the traceback ([and the line itself](https://github.com/Rapptz/discord.py/blob/rewrite/discord/opus.py#L52)):
```
File ".../lib/python3.6/site-packages/discord/abc.py", line 985, in connect
voice = VoiceClient(state=state, timeout=timeout, channel=self)
File ".../lib/python3.6/site-packages/discord/voice_client.py", line 109, in __init__
self.encoder = opus.Encoder()
File ".../lib/python3.6/site-packages/discord/opus.py", line 225, in __init__
self._state = self._create_state()
File ".../lib/python3.6/site-packages/discord/opus.py", line 239, in _create_state
return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))
File ".../lib/python3.6/site-packages/discord/opus.py", line 52, in _err_ne
if result.value != 0:
AttributeError: 'LP_EncoderStruct' object has no attribute 'value'
```
I have opus 1.2.1 installed on 64-bit Linux, and it is loaded according to `discord.opus.is_loaded()`.
Any clue as to what might be the issue?
</issue>
<code>
[start of discord/opus.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 The MIT License (MIT)
5
6 Copyright (c) 2015-2017 Rapptz
7
8 Permission is hereby granted, free of charge, to any person obtaining a
9 copy of this software and associated documentation files (the "Software"),
10 to deal in the Software without restriction, including without limitation
11 the rights to use, copy, modify, merge, publish, distribute, sublicense,
12 and/or sell copies of the Software, and to permit persons to whom the
13 Software is furnished to do so, subject to the following conditions:
14
15 The above copyright notice and this permission notice shall be included in
16 all copies or substantial portions of the Software.
17
18 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
19 OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
20 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
21 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
22 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
23 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
24 DEALINGS IN THE SOFTWARE.
25 """
26
27 import ctypes
28 import ctypes.util
29 import array
30 from .errors import DiscordException
31 import logging
32 import sys
33 import os.path
34
35 log = logging.getLogger(__name__)
36 c_int_ptr = ctypes.POINTER(ctypes.c_int)
37 c_int16_ptr = ctypes.POINTER(ctypes.c_int16)
38 c_float_ptr = ctypes.POINTER(ctypes.c_float)
39
40 class EncoderStruct(ctypes.Structure):
41 pass
42
43 EncoderStructPtr = ctypes.POINTER(EncoderStruct)
44
45 def _err_lt(result, func, args):
46 if result < 0:
47 log.info('error has happened in {0.__name__}'.format(func))
48 raise OpusError(result)
49 return result
50
51 def _err_ne(result, func, args):
52 if result.value != 0:
53 log.info('error has happened in {0.__name__}'.format(func))
54 raise OpusError(result.value)
55 return result
56
57 # A list of exported functions.
58 # The first argument is obviously the name.
59 # The second one are the types of arguments it takes.
60 # The third is the result type.
61 # The fourth is the error handler.
62 exported_functions = [
63 ('opus_strerror',
64 [ctypes.c_int], ctypes.c_char_p, None),
65 ('opus_encoder_get_size',
66 [ctypes.c_int], ctypes.c_int, None),
67 ('opus_encoder_create',
68 [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),
69 ('opus_encode',
70 [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),
71 ('opus_encoder_ctl',
72 None, ctypes.c_int32, _err_lt),
73 ('opus_encoder_destroy',
74 [EncoderStructPtr], None, None),
75 ]
76
77 def libopus_loader(name):
78 # create the library...
79 lib = ctypes.cdll.LoadLibrary(name)
80
81 # register the functions...
82 for item in exported_functions:
83 try:
84 func = getattr(lib, item[0])
85 except Exception as e:
86 raise e
87
88 try:
89 if item[1]:
90 func.argtypes = item[1]
91
92 func.restype = item[2]
93 except KeyError:
94 pass
95
96 try:
97 if item[3]:
98 func.errcheck = item[3]
99 except KeyError:
100 log.exception("Error assigning check function to %s", func)
101
102 return lib
103
104 try:
105 if sys.platform == 'win32':
106 _basedir = os.path.dirname(os.path.abspath(__file__))
107 _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'
108 _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))
109 _lib = libopus_loader(_filename)
110 else:
111 _lib = libopus_loader(ctypes.util.find_library('opus'))
112 except Exception as e:
113 _lib = None
114
115 def load_opus(name):
116 """Loads the libopus shared library for use with voice.
117
118 If this function is not called then the library uses the function
119 `ctypes.util.find_library`__ and then loads that one
120 if available.
121
122 .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries
123 __ `find library`_
124
125 Not loading a library leads to voice not working.
126
127 This function propagates the exceptions thrown.
128
129 Warning
130 --------
131 The bitness of the library must match the bitness of your python
132 interpreter. If the library is 64-bit then your python interpreter
133 must be 64-bit as well. Usually if there's a mismatch in bitness then
134 the load will throw an exception.
135
136 Note
137 ----
138 On Windows, the .dll extension is not necessary. However, on Linux
139 the full extension is required to load the library, e.g. ``libopus.so.1``.
140 On Linux however, `find library`_ will usually find the library automatically
141 without you having to call this.
142
143 Parameters
144 ----------
145 name: str
146 The filename of the shared library.
147 """
148 global _lib
149 _lib = libopus_loader(name)
150
151 def is_loaded():
152 """Function to check if opus lib is successfully loaded either
153 via the ``ctypes.util.find_library`` call of :func:`load_opus`.
154
155 This must return ``True`` for voice to work.
156
157 Returns
158 -------
159 bool
160 Indicates if the opus library has been loaded.
161 """
162 global _lib
163 return _lib is not None
164
165 class OpusError(DiscordException):
166 """An exception that is thrown for libopus related errors.
167
168 Attributes
169 ----------
170 code : :class:`int`
171 The error code returned.
172 """
173
174 def __init__(self, code):
175 self.code = code
176 msg = _lib.opus_strerror(self.code).decode('utf-8')
177 log.info('"%s" has happened', msg)
178 super().__init__(msg)
179
180 class OpusNotLoaded(DiscordException):
181 """An exception that is thrown for when libopus is not loaded."""
182 pass
183
184
185 # Some constants...
186 OK = 0
187 APPLICATION_AUDIO = 2049
188 APPLICATION_VOIP = 2048
189 APPLICATION_LOWDELAY = 2051
190 CTL_SET_BITRATE = 4002
191 CTL_SET_BANDWIDTH = 4008
192 CTL_SET_FEC = 4012
193 CTL_SET_PLP = 4014
194 CTL_SET_SIGNAL = 4024
195
196 band_ctl = {
197 'narrow': 1101,
198 'medium': 1102,
199 'wide': 1103,
200 'superwide': 1104,
201 'full': 1105,
202 }
203
204 signal_ctl = {
205 'auto': -1000,
206 'voice': 3001,
207 'music': 3002,
208 }
209
210 class Encoder:
211 SAMPLING_RATE = 48000
212 CHANNELS = 2
213 FRAME_LENGTH = 20
214 SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)
215 SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)
216
217 FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE
218
219 def __init__(self, application=APPLICATION_AUDIO):
220 self.application = application
221
222 if not is_loaded():
223 raise OpusNotLoaded()
224
225 self._state = self._create_state()
226 self.set_bitrate(128)
227 self.set_fec(True)
228 self.set_expected_packet_loss_percent(0.15)
229 self.set_bandwidth('full')
230 self.set_signal_type('auto')
231
232 def __del__(self):
233 if hasattr(self, '_state'):
234 _lib.opus_encoder_destroy(self._state)
235 self._state = None
236
237 def _create_state(self):
238 ret = ctypes.c_int()
239 return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))
240
241 def set_bitrate(self, kbps):
242 kbps = min(128, max(16, int(kbps)))
243
244 _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)
245 return kbps
246
247 def set_bandwidth(self, req):
248 if req not in band_ctl:
249 raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))
250
251 k = band_ctl[req]
252 _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)
253
254 def set_signal_type(self, req):
255 if req not in signal_ctl:
256 raise KeyError('%r is not a valid signal setting. Try one of: %s' % (req, ','.join(signal_ctl)))
257
258 k = signal_ctl[req]
259 _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)
260
261 def set_fec(self, enabled=True):
262 _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)
263
264 def set_expected_packet_loss_percent(self, percentage):
265 _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))
266
267 def encode(self, pcm, frame_size):
268 max_data_bytes = len(pcm)
269 pcm = ctypes.cast(pcm, c_int16_ptr)
270 data = (ctypes.c_char * max_data_bytes)()
271
272 ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)
273
274 return array.array('b', data[:ret]).tobytes()
275
[end of discord/opus.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/discord/opus.py b/discord/opus.py
--- a/discord/opus.py
+++ b/discord/opus.py
@@ -49,9 +49,10 @@
return result
def _err_ne(result, func, args):
- if result.value != 0:
+ ret = args[-1]._obj
+ if ret.value != 0:
log.info('error has happened in {0.__name__}'.format(func))
- raise OpusError(result.value)
+ raise OpusError(ret.value)
return result
# A list of exported functions.
| {"golden_diff": "diff --git a/discord/opus.py b/discord/opus.py\n--- a/discord/opus.py\n+++ b/discord/opus.py\n@@ -49,9 +49,10 @@\n return result\n \n def _err_ne(result, func, args):\n- if result.value != 0:\n+ ret = args[-1]._obj\n+ if ret.value != 0:\n log.info('error has happened in {0.__name__}'.format(func))\n- raise OpusError(result.value)\n+ raise OpusError(ret.value)\n return result\n \n # A list of exported functions.\n", "issue": "Cannot connect to voice channels\nRunning `await voice_channel.connect()` raises\r\n`AttributeError: 'LP_EncoderStruct' object has no attribute 'value'`\r\n\r\nRelevant portions of the traceback ([and the line itself](https://github.com/Rapptz/discord.py/blob/rewrite/discord/opus.py#L52)):\r\n```\r\n File \".../lib/python3.6/site-packages/discord/abc.py\", line 985, in connect\r\n voice = VoiceClient(state=state, timeout=timeout, channel=self)\r\n File \".../lib/python3.6/site-packages/discord/voice_client.py\", line 109, in __init__\r\n self.encoder = opus.Encoder()\r\n File \".../lib/python3.6/site-packages/discord/opus.py\", line 225, in __init__\r\n self._state = self._create_state()\r\n File \".../lib/python3.6/site-packages/discord/opus.py\", line 239, in _create_state\r\n return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\r\n File \".../lib/python3.6/site-packages/discord/opus.py\", line 52, in _err_ne\r\n if result.value != 0:\r\nAttributeError: 'LP_EncoderStruct' object has no attribute 'value'\r\n```\r\nI have opus 1.2.1 installed on 64-bit Linux, and it is loaded according to `discord.opus.is_loaded()`.\r\n\r\nAny clue as to what might be the issue?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport ctypes\nimport ctypes.util\nimport array\nfrom .errors import DiscordException\nimport logging\nimport sys\nimport os.path\n\nlog = logging.getLogger(__name__)\nc_int_ptr = ctypes.POINTER(ctypes.c_int)\nc_int16_ptr = ctypes.POINTER(ctypes.c_int16)\nc_float_ptr = ctypes.POINTER(ctypes.c_float)\n\nclass EncoderStruct(ctypes.Structure):\n pass\n\nEncoderStructPtr = ctypes.POINTER(EncoderStruct)\n\ndef _err_lt(result, func, args):\n if result < 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(result)\n return result\n\ndef _err_ne(result, func, args):\n if result.value != 0:\n log.info('error has happened in {0.__name__}'.format(func))\n raise OpusError(result.value)\n return result\n\n# A list of exported functions.\n# The first argument is obviously the name.\n# The second one are the types of arguments it takes.\n# The third is the result type.\n# The fourth is the error handler.\nexported_functions = [\n ('opus_strerror',\n [ctypes.c_int], ctypes.c_char_p, None),\n ('opus_encoder_get_size',\n [ctypes.c_int], ctypes.c_int, None),\n ('opus_encoder_create',\n [ctypes.c_int, ctypes.c_int, ctypes.c_int, c_int_ptr], EncoderStructPtr, _err_ne),\n ('opus_encode',\n [EncoderStructPtr, c_int16_ptr, ctypes.c_int, ctypes.c_char_p, ctypes.c_int32], ctypes.c_int32, _err_lt),\n ('opus_encoder_ctl',\n None, ctypes.c_int32, _err_lt),\n ('opus_encoder_destroy',\n [EncoderStructPtr], None, None),\n]\n\ndef libopus_loader(name):\n # create the library...\n lib = ctypes.cdll.LoadLibrary(name)\n\n # register the functions...\n for item in exported_functions:\n try:\n func = getattr(lib, item[0])\n except Exception as e:\n raise e\n\n try:\n if item[1]:\n func.argtypes = item[1]\n\n func.restype = item[2]\n except KeyError:\n pass\n\n try:\n if item[3]:\n func.errcheck = item[3]\n except KeyError:\n log.exception(\"Error assigning check function to %s\", func)\n\n return lib\n\ntry:\n if sys.platform == 'win32':\n _basedir = os.path.dirname(os.path.abspath(__file__))\n _bitness = 'x64' if sys.maxsize > 2**32 else 'x86'\n _filename = os.path.join(_basedir, 'bin', 'libopus-0.{}.dll'.format(_bitness))\n _lib = libopus_loader(_filename)\n else:\n _lib = libopus_loader(ctypes.util.find_library('opus'))\nexcept Exception as e:\n _lib = None\n\ndef load_opus(name):\n \"\"\"Loads the libopus shared library for use with voice.\n\n If this function is not called then the library uses the function\n `ctypes.util.find_library`__ and then loads that one\n if available.\n\n .. _find library: https://docs.python.org/3.5/library/ctypes.html#finding-shared-libraries\n __ `find library`_\n\n Not loading a library leads to voice not working.\n\n This function propagates the exceptions thrown.\n\n Warning\n --------\n The bitness of the library must match the bitness of your python\n interpreter. If the library is 64-bit then your python interpreter\n must be 64-bit as well. Usually if there's a mismatch in bitness then\n the load will throw an exception.\n\n Note\n ----\n On Windows, the .dll extension is not necessary. However, on Linux\n the full extension is required to load the library, e.g. 
``libopus.so.1``.\n On Linux however, `find library`_ will usually find the library automatically\n without you having to call this.\n\n Parameters\n ----------\n name: str\n The filename of the shared library.\n \"\"\"\n global _lib\n _lib = libopus_loader(name)\n\ndef is_loaded():\n \"\"\"Function to check if opus lib is successfully loaded either\n via the ``ctypes.util.find_library`` call of :func:`load_opus`.\n\n This must return ``True`` for voice to work.\n\n Returns\n -------\n bool\n Indicates if the opus library has been loaded.\n \"\"\"\n global _lib\n return _lib is not None\n\nclass OpusError(DiscordException):\n \"\"\"An exception that is thrown for libopus related errors.\n\n Attributes\n ----------\n code : :class:`int`\n The error code returned.\n \"\"\"\n\n def __init__(self, code):\n self.code = code\n msg = _lib.opus_strerror(self.code).decode('utf-8')\n log.info('\"%s\" has happened', msg)\n super().__init__(msg)\n\nclass OpusNotLoaded(DiscordException):\n \"\"\"An exception that is thrown for when libopus is not loaded.\"\"\"\n pass\n\n\n# Some constants...\nOK = 0\nAPPLICATION_AUDIO = 2049\nAPPLICATION_VOIP = 2048\nAPPLICATION_LOWDELAY = 2051\nCTL_SET_BITRATE = 4002\nCTL_SET_BANDWIDTH = 4008\nCTL_SET_FEC = 4012\nCTL_SET_PLP = 4014\nCTL_SET_SIGNAL = 4024\n\nband_ctl = {\n 'narrow': 1101,\n 'medium': 1102,\n 'wide': 1103,\n 'superwide': 1104,\n 'full': 1105,\n}\n\nsignal_ctl = {\n 'auto': -1000,\n 'voice': 3001,\n 'music': 3002,\n}\n\nclass Encoder:\n SAMPLING_RATE = 48000\n CHANNELS = 2\n FRAME_LENGTH = 20\n SAMPLE_SIZE = 4 # (bit_rate / 8) * CHANNELS (bit_rate == 16)\n SAMPLES_PER_FRAME = int(SAMPLING_RATE / 1000 * FRAME_LENGTH)\n\n FRAME_SIZE = SAMPLES_PER_FRAME * SAMPLE_SIZE\n\n def __init__(self, application=APPLICATION_AUDIO):\n self.application = application\n\n if not is_loaded():\n raise OpusNotLoaded()\n\n self._state = self._create_state()\n self.set_bitrate(128)\n self.set_fec(True)\n self.set_expected_packet_loss_percent(0.15)\n self.set_bandwidth('full')\n self.set_signal_type('auto')\n\n def __del__(self):\n if hasattr(self, '_state'):\n _lib.opus_encoder_destroy(self._state)\n self._state = None\n\n def _create_state(self):\n ret = ctypes.c_int()\n return _lib.opus_encoder_create(self.SAMPLING_RATE, self.CHANNELS, self.application, ctypes.byref(ret))\n\n def set_bitrate(self, kbps):\n kbps = min(128, max(16, int(kbps)))\n\n _lib.opus_encoder_ctl(self._state, CTL_SET_BITRATE, kbps * 1024)\n return kbps\n\n def set_bandwidth(self, req):\n if req not in band_ctl:\n raise KeyError('%r is not a valid bandwidth setting. Try one of: %s' % (req, ','.join(band_ctl)))\n\n k = band_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_BANDWIDTH, k)\n\n def set_signal_type(self, req):\n if req not in signal_ctl:\n raise KeyError('%r is not a valid signal setting. Try one of: %s' % (req, ','.join(signal_ctl)))\n\n k = signal_ctl[req]\n _lib.opus_encoder_ctl(self._state, CTL_SET_SIGNAL, k)\n\n def set_fec(self, enabled=True):\n _lib.opus_encoder_ctl(self._state, CTL_SET_FEC, 1 if enabled else 0)\n\n def set_expected_packet_loss_percent(self, percentage):\n _lib.opus_encoder_ctl(self._state, CTL_SET_PLP, min(100, max(0, int(percentage * 100))))\n\n def encode(self, pcm, frame_size):\n max_data_bytes = len(pcm)\n pcm = ctypes.cast(pcm, c_int16_ptr)\n data = (ctypes.c_char * max_data_bytes)()\n\n ret = _lib.opus_encode(self._state, pcm, frame_size, data, max_data_bytes)\n\n return array.array('b', data[:ret]).tobytes()\n", "path": "discord/opus.py"}]} | 3,803 | 133 |
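Each row above ends with a verification_info JSON blob whose keys (golden_diff, issue, before_files with per-file path/content) mirror the prompt and diff shown in that row. Below is a minimal sketch of how such a record could be checked locally; the Hugging Face dataset ID ("rasdani/github-patches", taken from the source field), the split name, and the verification_info column name are assumptions, not confirmed by this dump.

```python
# Sketch only: reconstruct the "before" files from one record and verify
# that its golden diff applies cleanly with `git apply --check`.
# Assumptions (not confirmed by this dump): the dataset ID, the split
# name, and the `verification_info` column name.
import json
import os
import subprocess

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")   # assumed ID/split
record = json.loads(ds[0]["verification_info"])               # assumed column name

workdir = "record_repo"
for f in record["before_files"]:                  # each entry has "path" and "content"
    dst = os.path.join(workdir, f["path"])
    os.makedirs(os.path.dirname(dst) or workdir, exist_ok=True)
    with open(dst, "w", encoding="utf-8") as fh:
        fh.write(f["content"])

patch_path = os.path.abspath("golden.patch")
with open(patch_path, "w", encoding="utf-8") as fh:
    fh.write(record["golden_diff"])

# --check only validates that the patch applies; drop it to actually apply.
subprocess.run(["git", "-C", workdir, "apply", "--check", patch_path], check=True)
print(record["issue"].splitlines()[0])            # first line of the issue text
```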