| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.71k-18.9k | stringlengths 145-5.13k | stringlengths 465-23.6k | int64 556-4.1k | int64 47-1.02k |
gh_patches_debug_5466 | rasdani/github-patches | git_diff | docker__docker-py-820 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
POST /volumes is now POST /volumes/create
https://github.com/docker/docker/pull/17136
</issue>
<code>
[start of docker/api/volume.py]
1 from .. import utils
2
3
4 class VolumeApiMixin(object):
5 @utils.minimum_version('1.21')
6 def volumes(self, filters=None):
7 params = {
8 'filter': utils.convert_filters(filters) if filters else None
9 }
10 url = self._url('/volumes')
11 return self._result(self._get(url, params=params), True)
12
13 @utils.minimum_version('1.21')
14 def create_volume(self, name, driver=None, driver_opts=None):
15 url = self._url('/volumes')
16 if driver_opts is not None and not isinstance(driver_opts, dict):
17 raise TypeError('driver_opts must be a dictionary')
18
19 data = {
20 'Name': name,
21 'Driver': driver,
22 'DriverOpts': driver_opts,
23 }
24 return self._result(self._post_json(url, data=data), True)
25
26 @utils.minimum_version('1.21')
27 def inspect_volume(self, name):
28 url = self._url('/volumes/{0}', name)
29 return self._result(self._get(url), True)
30
31 @utils.minimum_version('1.21')
32 def remove_volume(self, name):
33 url = self._url('/volumes/{0}', name)
34 resp = self._delete(url)
35 self._raise_for_status(resp)
36 return True
37
[end of docker/api/volume.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/api/volume.py b/docker/api/volume.py
--- a/docker/api/volume.py
+++ b/docker/api/volume.py
@@ -12,7 +12,7 @@
@utils.minimum_version('1.21')
def create_volume(self, name, driver=None, driver_opts=None):
- url = self._url('/volumes')
+ url = self._url('/volumes/create')
if driver_opts is not None and not isinstance(driver_opts, dict):
raise TypeError('driver_opts must be a dictionary')
| {"golden_diff": "diff --git a/docker/api/volume.py b/docker/api/volume.py\n--- a/docker/api/volume.py\n+++ b/docker/api/volume.py\n@@ -12,7 +12,7 @@\n \n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n- url = self._url('/volumes')\n+ url = self._url('/volumes/create')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n", "issue": "POST /volumes is now POST /volumes/create\nhttps://github.com/docker/docker/pull/17136\n\n", "before_files": [{"content": "from .. import utils\n\n\nclass VolumeApiMixin(object):\n @utils.minimum_version('1.21')\n def volumes(self, filters=None):\n params = {\n 'filter': utils.convert_filters(filters) if filters else None\n }\n url = self._url('/volumes')\n return self._result(self._get(url, params=params), True)\n\n @utils.minimum_version('1.21')\n def create_volume(self, name, driver=None, driver_opts=None):\n url = self._url('/volumes')\n if driver_opts is not None and not isinstance(driver_opts, dict):\n raise TypeError('driver_opts must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'DriverOpts': driver_opts,\n }\n return self._result(self._post_json(url, data=data), True)\n\n @utils.minimum_version('1.21')\n def inspect_volume(self, name):\n url = self._url('/volumes/{0}', name)\n return self._result(self._get(url), True)\n\n @utils.minimum_version('1.21')\n def remove_volume(self, name):\n url = self._url('/volumes/{0}', name)\n resp = self._delete(url)\n self._raise_for_status(resp)\n return True\n", "path": "docker/api/volume.py"}]} | 910 | 120 |
gh_patches_debug_5879 | rasdani/github-patches | git_diff | inventree__InvenTree-1860 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Migration warns for phantom part changes
Here is the warning:
```
Your models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
Running `manage.py makemigrations` does **not** generate a new migration file...
Running `manage.py showmigrations part` shows all part migrations are complete.
I found this warning both with PostGreSQL and SQLite3, if that has anything to do with backend dependency.
</issue>
<code>
[start of InvenTree/InvenTree/fields.py]
1 """ Custom fields used in InvenTree """
2
3 # -*- coding: utf-8 -*-
4 from __future__ import unicode_literals
5 import sys
6
7 from .validators import allowable_url_schemes
8
9 from django.utils.translation import ugettext_lazy as _
10
11 from django.forms.fields import URLField as FormURLField
12 from django.db import models as models
13 from django.core import validators
14 from django import forms
15
16 from decimal import Decimal
17
18 from djmoney.models.fields import MoneyField as ModelMoneyField
19 from djmoney.forms.fields import MoneyField
20 from djmoney.models.validators import MinMoneyValidator
21
22 import InvenTree.helpers
23
24
25 class InvenTreeURLFormField(FormURLField):
26 """ Custom URL form field with custom scheme validators """
27
28 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
29
30
31 class InvenTreeURLField(models.URLField):
32 """ Custom URL field which has custom scheme validators """
33
34 default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]
35
36 def formfield(self, **kwargs):
37 return super().formfield(**{
38 'form_class': InvenTreeURLFormField
39 })
40
41
42 def money_kwargs():
43 """ returns the database settings for MoneyFields """
44 from common.settings import currency_code_mappings, currency_code_default
45
46 kwargs = {}
47 kwargs['currency_choices'] = currency_code_mappings()
48 kwargs['default_currency'] = currency_code_default()
49 return kwargs
50
51
52 class InvenTreeModelMoneyField(ModelMoneyField):
53 """
54 Custom MoneyField for clean migrations while using dynamic currency settings
55 """
56
57 def __init__(self, **kwargs):
58 # detect if creating migration
59 if 'makemigrations' in sys.argv:
60 # remove currency information for a clean migration
61 kwargs['default_currency'] = ''
62 kwargs['currency_choices'] = []
63 else:
64 # set defaults
65 kwargs.update(money_kwargs())
66
67 # Set a minimum value validator
68 validators = kwargs.get('validators', [])
69
70 if len(validators) == 0:
71 validators.append(
72 MinMoneyValidator(0),
73 )
74
75 kwargs['validators'] = validators
76
77 super().__init__(**kwargs)
78
79 def formfield(self, **kwargs):
80 """ override form class to use own function """
81 kwargs['form_class'] = InvenTreeMoneyField
82 return super().formfield(**kwargs)
83
84
85 class InvenTreeMoneyField(MoneyField):
86 """ custom MoneyField for clean migrations while using dynamic currency settings """
87 def __init__(self, *args, **kwargs):
88 # override initial values with the real info from database
89 kwargs.update(money_kwargs())
90 super().__init__(*args, **kwargs)
91
92
93 class DatePickerFormField(forms.DateField):
94 """
95 Custom date-picker field
96 """
97
98 def __init__(self, **kwargs):
99
100 help_text = kwargs.get('help_text', _('Enter date'))
101 label = kwargs.get('label', None)
102 required = kwargs.get('required', False)
103 initial = kwargs.get('initial', None)
104
105 widget = forms.DateInput(
106 attrs={
107 'type': 'date',
108 }
109 )
110
111 forms.DateField.__init__(
112 self,
113 required=required,
114 initial=initial,
115 help_text=help_text,
116 widget=widget,
117 label=label
118 )
119
120
121 def round_decimal(value, places):
122 """
123 Round value to the specified number of places.
124 """
125
126 if value is not None:
127 # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options
128 return value.quantize(Decimal(10) ** -places)
129 return value
130
131
132 class RoundingDecimalFormField(forms.DecimalField):
133 def to_python(self, value):
134 value = super(RoundingDecimalFormField, self).to_python(value)
135 value = round_decimal(value, self.decimal_places)
136 return value
137
138 def prepare_value(self, value):
139 """
140 Override the 'prepare_value' method, to remove trailing zeros when displaying.
141 Why? It looks nice!
142 """
143
144 if type(value) == Decimal:
145 return InvenTree.helpers.normalize(value)
146 else:
147 return value
148
149
150 class RoundingDecimalField(models.DecimalField):
151 def to_python(self, value):
152 value = super(RoundingDecimalField, self).to_python(value)
153 return round_decimal(value, self.decimal_places)
154
155 def formfield(self, **kwargs):
156 defaults = {
157 'form_class': RoundingDecimalFormField
158 }
159
160 defaults.update(kwargs)
161
162 return super().formfield(**kwargs)
163
[end of InvenTree/InvenTree/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py
--- a/InvenTree/InvenTree/fields.py
+++ b/InvenTree/InvenTree/fields.py
@@ -55,7 +55,7 @@
def __init__(self, **kwargs):
# detect if creating migration
- if 'makemigrations' in sys.argv:
+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:
# remove currency information for a clean migration
kwargs['default_currency'] = ''
kwargs['currency_choices'] = []
| {"golden_diff": "diff --git a/InvenTree/InvenTree/fields.py b/InvenTree/InvenTree/fields.py\n--- a/InvenTree/InvenTree/fields.py\n+++ b/InvenTree/InvenTree/fields.py\n@@ -55,7 +55,7 @@\n \n def __init__(self, **kwargs):\n # detect if creating migration\n- if 'makemigrations' in sys.argv:\n+ if 'migrate' in sys.argv or 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n", "issue": "Migration warns for phantom part changes \nHere is the warning:\r\n\r\n```\r\nYour models in app(s): 'part' have changes that are not yet reflected in a migration, and so won't be applied.\r\nRun 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.\r\n```\r\n\r\nRunning `manage.py makemigrations` does **not** generate new migration file...\r\n\r\nRunning `manage.py showmigrations part` shows all part migrations are complete.\r\n\r\nI found this warning both with PostGreSQL and SQLite3, if that has anything to do with backend dependency.\n", "before_files": [{"content": "\"\"\" Custom fields used in InvenTree \"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .validators import allowable_url_schemes\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.forms.fields import URLField as FormURLField\nfrom django.db import models as models\nfrom django.core import validators\nfrom django import forms\n\nfrom decimal import Decimal\n\nfrom djmoney.models.fields import MoneyField as ModelMoneyField\nfrom djmoney.forms.fields import MoneyField\nfrom djmoney.models.validators import MinMoneyValidator\n\nimport InvenTree.helpers\n\n\nclass InvenTreeURLFormField(FormURLField):\n \"\"\" Custom URL form field with custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n\nclass InvenTreeURLField(models.URLField):\n \"\"\" Custom URL field which has custom scheme validators \"\"\"\n\n default_validators = [validators.URLValidator(schemes=allowable_url_schemes())]\n\n def formfield(self, **kwargs):\n return super().formfield(**{\n 'form_class': InvenTreeURLFormField\n })\n\n\ndef money_kwargs():\n \"\"\" returns the database settings for MoneyFields \"\"\"\n from common.settings import currency_code_mappings, currency_code_default\n\n kwargs = {}\n kwargs['currency_choices'] = currency_code_mappings()\n kwargs['default_currency'] = currency_code_default()\n return kwargs\n\n\nclass InvenTreeModelMoneyField(ModelMoneyField):\n \"\"\"\n Custom MoneyField for clean migrations while using dynamic currency settings\n \"\"\"\n \n def __init__(self, **kwargs):\n # detect if creating migration\n if 'makemigrations' in sys.argv:\n # remove currency information for a clean migration\n kwargs['default_currency'] = ''\n kwargs['currency_choices'] = []\n else:\n # set defaults\n kwargs.update(money_kwargs())\n\n # Set a minimum value validator\n validators = kwargs.get('validators', [])\n\n if len(validators) == 0:\n validators.append(\n MinMoneyValidator(0),\n )\n\n kwargs['validators'] = validators\n\n super().__init__(**kwargs)\n\n def formfield(self, **kwargs):\n \"\"\" override form class to use own function \"\"\"\n kwargs['form_class'] = InvenTreeMoneyField\n return super().formfield(**kwargs)\n\n\nclass InvenTreeMoneyField(MoneyField):\n \"\"\" custom MoneyField for clean migrations while using dynamic currency settings \"\"\"\n def __init__(self, *args, 
**kwargs):\n # override initial values with the real info from database\n kwargs.update(money_kwargs())\n super().__init__(*args, **kwargs)\n\n\nclass DatePickerFormField(forms.DateField):\n \"\"\"\n Custom date-picker field\n \"\"\"\n\n def __init__(self, **kwargs):\n\n help_text = kwargs.get('help_text', _('Enter date'))\n label = kwargs.get('label', None)\n required = kwargs.get('required', False)\n initial = kwargs.get('initial', None)\n\n widget = forms.DateInput(\n attrs={\n 'type': 'date',\n }\n )\n\n forms.DateField.__init__(\n self,\n required=required,\n initial=initial,\n help_text=help_text,\n widget=widget,\n label=label\n )\n\n\ndef round_decimal(value, places):\n \"\"\"\n Round value to the specified number of places.\n \"\"\"\n\n if value is not None:\n # see https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize for options\n return value.quantize(Decimal(10) ** -places)\n return value\n\n\nclass RoundingDecimalFormField(forms.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalFormField, self).to_python(value)\n value = round_decimal(value, self.decimal_places)\n return value\n\n def prepare_value(self, value):\n \"\"\"\n Override the 'prepare_value' method, to remove trailing zeros when displaying.\n Why? It looks nice!\n \"\"\"\n\n if type(value) == Decimal:\n return InvenTree.helpers.normalize(value)\n else:\n return value\n\n\nclass RoundingDecimalField(models.DecimalField):\n def to_python(self, value):\n value = super(RoundingDecimalField, self).to_python(value)\n return round_decimal(value, self.decimal_places)\n\n def formfield(self, **kwargs):\n defaults = {\n 'form_class': RoundingDecimalFormField\n }\n\n defaults.update(kwargs)\n\n return super().formfield(**kwargs)\n", "path": "InvenTree/InvenTree/fields.py"}]} | 2,029 | 144 |
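The fix in the row above widens a command-line check; here is a self-contained sketch of that logic (the helper name and the commented usage are invented, the condition itself is the one from the golden diff):

```python
# Strip the dynamic currency settings whenever Django is building *or* applying
# migrations, so `manage.py migrate` no longer reports phantom model changes.
import sys

def building_migrations(argv=None) -> bool:
    argv = sys.argv if argv is None else argv
    return "migrate" in argv or "makemigrations" in argv  # condition from the patch

# Hypothetical use inside a field's __init__:
#   if building_migrations():
#       kwargs["default_currency"] = ""
#       kwargs["currency_choices"] = []

if __name__ == "__main__":
    print(building_migrations(["manage.py", "makemigrations"]))  # True
    print(building_migrations(["manage.py", "migrate"]))         # True
    print(building_migrations(["manage.py", "runserver"]))       # False
```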
gh_patches_debug_14841 | rasdani/github-patches | git_diff | koxudaxi__datamodel-code-generator-421 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IPv4Address doesn't import from pydantic.validators
**Describe the bug**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic import IPv4Address
```
This isn't a valid import.
**To Reproduce**
Example schema:
```yaml
openapi: 3.0.0
info:
version: 0.0.1
title: Foo API
paths:
/foo:
get:
responses:
"200":
description: Success
components:
schemas:
Foo:
type: object
properties:
ip:
type: string
format: ipv4
```
Used commandline:
```
$ datamodel-codegen --input openapi.yaml
```
**Expected behavior**
When using `format: ipv4`, the following import is added to the output:
```py
from pydantic.validators import IPv4Address
```
**Version:**
- OS: MacOS
- Python version: `3.9.2`
- datamodel-code-generator version: `0.8.2`
**Additional context**
None
</issue>
<code>
[start of datamodel_code_generator/model/pydantic/imports.py]
1 from datamodel_code_generator.imports import Import
2
3 IMPORT_CONSTR = Import.from_full_path('pydantic.constr')
4 IMPORT_CONINT = Import.from_full_path('pydantic.conint')
5 IMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')
6 IMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')
7 IMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')
8 IMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')
9 IMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')
10 IMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')
11 IMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')
12 IMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')
13 IMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')
14 IMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')
15 IMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')
16 IMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')
17 IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
18 IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
19 IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
20 IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
21 IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
22 IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
23 IMPORT_FIELD = Import.from_full_path('pydantic.Field')
24 IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
25 IMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')
26 IMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')
27 IMPORT_STRICT_BOOL = Import.from_full_path('pydantic.StrictBool')
28 IMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')
29 IMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')
30
[end of datamodel_code_generator/model/pydantic/imports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py
--- a/datamodel_code_generator/model/pydantic/imports.py
+++ b/datamodel_code_generator/model/pydantic/imports.py
@@ -17,8 +17,8 @@
IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')
IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')
IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')
-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')
-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')
+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')
+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')
IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')
IMPORT_FIELD = Import.from_full_path('pydantic.Field')
IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')
| {"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic/imports.py b/datamodel_code_generator/model/pydantic/imports.py\n--- a/datamodel_code_generator/model/pydantic/imports.py\n+++ b/datamodel_code_generator/model/pydantic/imports.py\n@@ -17,8 +17,8 @@\n IMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\n IMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\n IMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\n-IMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\n-IMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\n+IMPORT_IPV4ADDRESS = Import.from_full_path('ipaddress.IPv4Address')\n+IMPORT_IPV6ADDRESS = Import.from_full_path('ipaddress.IPv6Address')\n IMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\n IMPORT_FIELD = Import.from_full_path('pydantic.Field')\n IMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\n", "issue": "IPv4Address doesn't import from pydantic.validators\n**Describe the bug**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic import IPv4Address\r\n```\r\n\r\nThis isn't a valid import.\r\n\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```yaml\r\nopenapi: 3.0.0\r\n\r\ninfo:\r\n version: 0.0.1\r\n title: Foo API\r\n\r\npaths:\r\n /foo:\r\n get:\r\n responses:\r\n \"200\":\r\n description: Success\r\n\r\ncomponents:\r\n schemas:\r\n Foo:\r\n type: object\r\n properties:\r\n ip:\r\n type: string\r\n format: ipv4\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input openapi.yaml\r\n```\r\n\r\n**Expected behavior**\r\n\r\nWhen using `format: ipv4`, the following import is added to the output:\r\n\r\n```py\r\nfrom pydantic.validators import IPv4Address\r\n```\r\n\r\n**Version:**\r\n - OS: MacOS\r\n - Python version: `3.9.2`\r\n - datamodel-code-generator version: `0.8.2`\r\n\r\n**Additional context**\r\nNone\r\n\n", "before_files": [{"content": "from datamodel_code_generator.imports import Import\n\nIMPORT_CONSTR = Import.from_full_path('pydantic.constr')\nIMPORT_CONINT = Import.from_full_path('pydantic.conint')\nIMPORT_CONFLOAT = Import.from_full_path('pydantic.confloat')\nIMPORT_CONDECIMAL = Import.from_full_path('pydantic.condecimal')\nIMPORT_CONBYTES = Import.from_full_path('pydantic.conbytes')\nIMPORT_POSITIVE_INT = Import.from_full_path('pydantic.PositiveInt')\nIMPORT_NEGATIVE_INT = Import.from_full_path('pydantic.NegativeInt')\nIMPORT_POSITIVE_FLOAT = Import.from_full_path('pydantic.PositiveFloat')\nIMPORT_NEGATIVE_FLOAT = Import.from_full_path('pydantic.NegativeFloat')\nIMPORT_SECRET_STR = Import.from_full_path('pydantic.SecretStr')\nIMPORT_EMAIL_STR = Import.from_full_path('pydantic.EmailStr')\nIMPORT_UUID1 = Import.from_full_path('pydantic.UUID1')\nIMPORT_UUID2 = Import.from_full_path('pydantic.UUID2')\nIMPORT_UUID3 = Import.from_full_path('pydantic.UUID3')\nIMPORT_UUID4 = Import.from_full_path('pydantic.UUID4')\nIMPORT_UUID5 = Import.from_full_path('pydantic.UUID5')\nIMPORT_ANYURL = Import.from_full_path('pydantic.AnyUrl')\nIMPORT_IPV4ADDRESS = Import.from_full_path('pydantic.IPv4Address')\nIMPORT_IPV6ADDRESS = Import.from_full_path('pydantic.IPv6Address')\nIMPORT_EXTRA = Import.from_full_path('pydantic.Extra')\nIMPORT_FIELD = Import.from_full_path('pydantic.Field')\nIMPORT_STRICT_INT = Import.from_full_path('pydantic.StrictInt')\nIMPORT_STRICT_FLOAT = Import.from_full_path('pydantic.StrictFloat')\nIMPORT_STRICT_STR = Import.from_full_path('pydantic.StrictStr')\nIMPORT_STRICT_BOOL = 
Import.from_full_path('pydantic.StrictBool')\nIMPORT_STRICT_BYTES = Import.from_full_path('pydantic.StrictBytes')\nIMPORT_DATACLASS = Import.from_full_path('pydantic.dataclasses.dataclass')\n", "path": "datamodel_code_generator/model/pydantic/imports.py"}]} | 1,281 | 230 |
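As a quick, standard-library-only check of why the golden diff above points these imports at `ipaddress` (nothing here depends on pydantic or the code generator):

```python
# IPv4Address / IPv6Address live in the standard-library ipaddress module;
# the issue's `from pydantic import IPv4Address` is not a valid import.
from ipaddress import IPv4Address, IPv6Address

print(IPv4Address("127.0.0.1"))  # 127.0.0.1
print(IPv6Address("::1"))        # ::1
```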
gh_patches_debug_417 | rasdani/github-patches | git_diff | python__python-docs-es-1712 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translate 'library/base64.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.
Meanwhile, the English version is shown.
Current stats for `library/base64.po`:
* Fuzzy: 4
* Percent translated: 90.9%
* Entries: 50 / 55
* Untranslated: 5
Please, comment here if you want this file to be assigned to you and a member will assign it to you as soon as possible, so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
</issue>
<code>
[start of scripts/translate.py]
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":doc:`[^`]+`",
46 "``[^`]+``",
47 "`[^`]+`__",
48 "`[^`]+`_",
49 "\*\*.+\*\*", # bold text between **
50 "\*.+\*", # italic text between *
51 ]
52
53 _exps = [re.compile(e) for e in _patterns]
54
55 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
56 """
57 Parameters:
58 string containing the text to translate
59
60 Returns:
61 dictionary containing all the placeholder text as keys
62 and the correct value.
63 """
64
65 i = 0
66 d: Dict[str, str] = {}
67 for exp in _exps:
68 matches = exp.findall(s)
69 if DEBUG:
70 print(exp, matches)
71 for match in matches:
72 ph = f"XASDF{str(i).zfill(2)}"
73 s = s.replace(match, ph)
74 if ph in d and VERBOSE:
75 print(f"Error: {ph} is already in the dictionary")
76 print("new", match)
77 print("old", d[ph])
78 d[ph] = match
79 i += 1
80 return d, s
81
82
83 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
84 for ph, value in placeholders.items():
85 translated_text = translated_text.replace(ph, value)
86 if DEBUG:
87 print(ph, value)
88 print(translated_text)
89 return translated_text
90
91
92 if __name__ == "__main__":
93 filename = sys.argv[1]
94 if not os.path.isfile(filename):
95 print(f"File not found: '{filename}'")
96 sys.exit(-1)
97
98 po = polib.pofile(filename)
99 translator = GoogleTranslator(source="en", target="es")
100
101 for entry in po:
102 # If the entry has already a translation, skip.
103 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
104 continue
105
106 print("\nEN|", entry.msgid)
107 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
108 if VERBOSE:
109 print(temp_text)
110 print(placeholders)
111
112 # Translate the temporary text without sphinx statements
113 translated_text = translator.translate(temp_text)
114
115 # Recover sphinx statements
116 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
117 print("ES|", real_text)
118
119 # Replace the po file translated entry
120 entry.msgstr = real_text
121
122 # Save the file after all the entries are translated
123 po.save()
124
[end of scripts/translate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -42,6 +42,7 @@
":program:`[^`]+`",
":keyword:`[^`]+`",
":RFC:`[^`]+`",
+ ":rfc:`[^`]+`",
":doc:`[^`]+`",
"``[^`]+``",
"`[^`]+`__",
| {"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -42,6 +42,7 @@\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n+ \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n", "issue": "Translate 'library/base64.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/library/base64.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `library/base64.po`:\n\n* Fuzzy: 4\n* Percent translated: 90.9%\n* Entries: 50 / 55\n* Untranslated: 5\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx 
statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]} | 1,847 | 103 |
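A tiny, runnable illustration of what the one-line golden diff above adds: the pattern list is case-sensitive, so without the lower-case entry a `:rfc:` role would not be protected before machine translation. The sample sentence is invented.

```python
# Sphinx roles are swapped for placeholders before translating; only the
# upper-case ":RFC:" pattern existed, so lower-case ":rfc:" roles leaked through.
import re

patterns = [r":RFC:`[^`]+`", r":rfc:`[^`]+`"]  # second entry is the new one
text = "Vea :rfc:`2822` y :RFC:`3339` para más detalles."  # invented sample

for pattern in patterns:
    print(pattern, "->", re.findall(pattern, text))
```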
gh_patches_debug_3070 | rasdani/github-patches | git_diff | pallets__werkzeug-1539 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ProfilerMiddleware's default filename_format causes ValueError when ProfilerMiddleware is used with profile_dir
## Environment
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.13.6
BuildVersion: 17G3025
$ python --version
Python 3.7.2
$ pip freeze
Click==7.0
Flask==1.0.2
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
Werkzeug==0.15.2
```
Basically, the only Python dependency I installed was Flask because that's what I'm most familiar with. However, the error I'm describing looks to be contained within werkzeug.
## Observed Behavior
When using `ProfilerMiddleware` with its `profile_dir` argument, the following error gets raised after a request is sent to the server:
```
Error on request:
Traceback (most recent call last):
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py", line 302, in run_wsgi
execute(self.server.app)
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py", line 290, in execute
application_iter = app(environ, start_response)
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py", line 119, in __call__
time=time.time(),
ValueError: Unknown format code 'd' for object of type 'float'
```
## Expected Behavior
No `ValueError`.
## Steps to Reproduce
1. `pip install flask`
2. Save the following file as app.py.
```python
# app.py
from flask import Flask
from werkzeug.middleware.profiler import ProfilerMiddleware
app = Flask(__name__)
app.wsgi_app = ProfilerMiddleware(app.wsgi_app, profile_dir=".")
@app.route("/", methods=["GET"])
def get_index():
return "Hello, world!"
```
3. Start the server with `FLASK_APP=app.py flask run`.
4. Send a request to the server (e.g. http://127.0.0.1:5000/).
## Workaround/Solution
Slightly modify `ProfilerMiddleware`'s `filename_format`, replacing the `d` with `f`. For example:
```python
app.wsgi_app = ProfilerMiddleware(
app.wsgi_app, profile_dir=".", filename_format="{method}.{path}.{elapsed:06f}ms.{time:f}.prof"
)
```
Both instances of `d` need to be replaced because both `elapsed` and `time` are floating point numbers.
</issue>
<code>
[start of src/werkzeug/middleware/profiler.py]
1 """
2 Application Profiler
3 ====================
4
5 This module provides a middleware that profiles each request with the
6 :mod:`cProfile` module. This can help identify bottlenecks in your code
7 that may be slowing down your application.
8
9 .. autoclass:: ProfilerMiddleware
10
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
13 """
14 from __future__ import print_function
15
16 import os.path
17 import sys
18 import time
19 from pstats import Stats
20
21 try:
22 from cProfile import Profile
23 except ImportError:
24 from profile import Profile
25
26
27 class ProfilerMiddleware(object):
28 """Wrap a WSGI application and profile the execution of each
29 request. Responses are buffered so that timings are more exact.
30
31 If ``stream`` is given, :class:`pstats.Stats` are written to it
32 after each request. If ``profile_dir`` is given, :mod:`cProfile`
33 data files are saved to that directory, one file per request.
34
35 The filename can be customized by passing ``filename_format``. If
36 it is a string, it will be formatted using :meth:`str.format` with
37 the following fields available:
38
39 - ``{method}`` - The request method; GET, POST, etc.
40 - ``{path}`` - The request path or 'root' should one not exist.
41 - ``{elapsed}`` - The elapsed time of the request.
42 - ``{time}`` - The time of the request.
43
44 If it is a callable, it will be called with the WSGI ``environ``
45 dict and should return a filename.
46
47 :param app: The WSGI application to wrap.
48 :param stream: Write stats to this stream. Disable with ``None``.
49 :param sort_by: A tuple of columns to sort stats by. See
50 :meth:`pstats.Stats.sort_stats`.
51 :param restrictions: A tuple of restrictions to filter stats by. See
52 :meth:`pstats.Stats.print_stats`.
53 :param profile_dir: Save profile data files to this directory.
54 :param filename_format: Format string for profile data file names,
55 or a callable returning a name. See explanation above.
56
57 .. code-block:: python
58
59 from werkzeug.middleware.profiler import ProfilerMiddleware
60 app = ProfilerMiddleware(app)
61
62 .. versionchanged:: 0.15
63 Stats are written even if ``profile_dir`` is given, and can be
64 disable by passing ``stream=None``.
65
66 .. versionadded:: 0.15
67 Added ``filename_format``.
68
69 .. versionadded:: 0.9
70 Added ``restrictions`` and ``profile_dir``.
71 """
72
73 def __init__(
74 self,
75 app,
76 stream=sys.stdout,
77 sort_by=("time", "calls"),
78 restrictions=(),
79 profile_dir=None,
80 filename_format="{method}.{path}.{elapsed:06d}ms.{time:d}.prof",
81 ):
82 self._app = app
83 self._stream = stream
84 self._sort_by = sort_by
85 self._restrictions = restrictions
86 self._profile_dir = profile_dir
87 self._filename_format = filename_format
88
89 def __call__(self, environ, start_response):
90 response_body = []
91
92 def catching_start_response(status, headers, exc_info=None):
93 start_response(status, headers, exc_info)
94 return response_body.append
95
96 def runapp():
97 app_iter = self._app(environ, catching_start_response)
98 response_body.extend(app_iter)
99
100 if hasattr(app_iter, "close"):
101 app_iter.close()
102
103 profile = Profile()
104 start = time.time()
105 profile.runcall(runapp)
106 body = b"".join(response_body)
107 elapsed = time.time() - start
108
109 if self._profile_dir is not None:
110 if callable(self._filename_format):
111 filename = self._filename_format(environ)
112 else:
113 filename = self._filename_format.format(
114 method=environ["REQUEST_METHOD"],
115 path=(
116 environ.get("PATH_INFO").strip("/").replace("/", ".") or "root"
117 ),
118 elapsed=elapsed * 1000.0,
119 time=time.time(),
120 )
121 filename = os.path.join(self._profile_dir, filename)
122 profile.dump_stats(filename)
123
124 if self._stream is not None:
125 stats = Stats(profile, stream=self._stream)
126 stats.sort_stats(*self._sort_by)
127 print("-" * 80, file=self._stream)
128 print("PATH: {!r}".format(environ.get("PATH_INFO", "")), file=self._stream)
129 stats.print_stats(*self._restrictions)
130 print("-" * 80 + "\n", file=self._stream)
131
132 return [body]
133
[end of src/werkzeug/middleware/profiler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py
--- a/src/werkzeug/middleware/profiler.py
+++ b/src/werkzeug/middleware/profiler.py
@@ -77,7 +77,7 @@
sort_by=("time", "calls"),
restrictions=(),
profile_dir=None,
- filename_format="{method}.{path}.{elapsed:06d}ms.{time:d}.prof",
+ filename_format="{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof",
):
self._app = app
self._stream = stream
| {"golden_diff": "diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py\n--- a/src/werkzeug/middleware/profiler.py\n+++ b/src/werkzeug/middleware/profiler.py\n@@ -77,7 +77,7 @@\n sort_by=(\"time\", \"calls\"),\n restrictions=(),\n profile_dir=None,\n- filename_format=\"{method}.{path}.{elapsed:06d}ms.{time:d}.prof\",\n+ filename_format=\"{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof\",\n ):\n self._app = app\n self._stream = stream\n", "issue": "ProfilerMiddleware's default filename_format causes ValueError when ProfilerMiddleware is used with profile_dir\n## Environment\r\n\r\n```\r\n$ sw_vers \r\nProductName:\tMac OS X\r\nProductVersion:\t10.13.6\r\nBuildVersion:\t17G3025\r\n\r\n$ python --version\r\nPython 3.7.2\r\n\r\n$ pip freeze\r\nClick==7.0\r\nFlask==1.0.2\r\nitsdangerous==1.1.0\r\nJinja2==2.10.1\r\nMarkupSafe==1.1.1\r\nWerkzeug==0.15.2\r\n```\r\n\r\nBasically, the only Python dependency I installed was Flask because that's what I'm most familiar with. However, the error I'm describing looks to be contained within werkzeug.\r\n\r\n\r\n## Observed Behavior\r\n\r\nWhen using `ProfilerMiddleware` with its `profile_dir` argument, the following error gets raised after a request is sent to the server:\r\n\r\n```\r\nError on request:\r\nTraceback (most recent call last):\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py\", line 302, in run_wsgi\r\n execute(self.server.app)\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/serving.py\", line 290, in execute\r\n application_iter = app(environ, start_response)\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/flask/app.py\", line 2309, in __call__\r\n return self.wsgi_app(environ, start_response)\r\n File \"/dev/jlove-bazaarvoice/werkzeug-profiler-bug/.venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py\", line 119, in __call__\r\n time=time.time(),\r\nValueError: Unknown format code 'd' for object of type 'float'\r\n```\r\n\r\n## Expected Behavior\r\n\r\nNo `ValueError`.\r\n\r\n## Steps to Reproduce\r\n\r\n1. `pip install flask`\r\n2. Save the following file as app.py.\r\n```python\r\n# app.py\r\nfrom flask import Flask\r\nfrom werkzeug.middleware.profiler import ProfilerMiddleware\r\n\r\napp = Flask(__name__)\r\napp.wsgi_app = ProfilerMiddleware(app.wsgi_app, profile_dir=\".\")\r\n\r\n\r\[email protected](\"/\", methods=[\"GET\"])\r\ndef get_index():\r\n return \"Hello, world!\"\r\n```\r\n3. Start the server with `FLASK_APP=app.py flask run`.\r\n4. Send a request to the server (e.g. http://127.0.0.1:5000/).\r\n\r\n## Workaround/Solution\r\n\r\nSlightly modify `ProfilerMiddleware`'s `filename_format`, replacing the `d` with `f`. For example:\r\n```python\r\napp.wsgi_app = ProfilerMiddleware(\r\n app.wsgi_app, profile_dir=\".\", filename_format=\"{method}.{path}.{elapsed:06f}ms.{time:f}.prof\"\r\n)\r\n```\r\n\r\nBoth instances of `d` need to be replaced because both `elapsed` and `time` are floating point numbers.\n", "before_files": [{"content": "\"\"\"\nApplication Profiler\n====================\n\nThis module provides a middleware that profiles each request with the\n:mod:`cProfile` module. This can help identify bottlenecks in your code\nthat may be slowing down your application.\n\n.. 
autoclass:: ProfilerMiddleware\n\n:copyright: 2007 Pallets\n:license: BSD-3-Clause\n\"\"\"\nfrom __future__ import print_function\n\nimport os.path\nimport sys\nimport time\nfrom pstats import Stats\n\ntry:\n from cProfile import Profile\nexcept ImportError:\n from profile import Profile\n\n\nclass ProfilerMiddleware(object):\n \"\"\"Wrap a WSGI application and profile the execution of each\n request. Responses are buffered so that timings are more exact.\n\n If ``stream`` is given, :class:`pstats.Stats` are written to it\n after each request. If ``profile_dir`` is given, :mod:`cProfile`\n data files are saved to that directory, one file per request.\n\n The filename can be customized by passing ``filename_format``. If\n it is a string, it will be formatted using :meth:`str.format` with\n the following fields available:\n\n - ``{method}`` - The request method; GET, POST, etc.\n - ``{path}`` - The request path or 'root' should one not exist.\n - ``{elapsed}`` - The elapsed time of the request.\n - ``{time}`` - The time of the request.\n\n If it is a callable, it will be called with the WSGI ``environ``\n dict and should return a filename.\n\n :param app: The WSGI application to wrap.\n :param stream: Write stats to this stream. Disable with ``None``.\n :param sort_by: A tuple of columns to sort stats by. See\n :meth:`pstats.Stats.sort_stats`.\n :param restrictions: A tuple of restrictions to filter stats by. See\n :meth:`pstats.Stats.print_stats`.\n :param profile_dir: Save profile data files to this directory.\n :param filename_format: Format string for profile data file names,\n or a callable returning a name. See explanation above.\n\n .. code-block:: python\n\n from werkzeug.middleware.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n .. versionchanged:: 0.15\n Stats are written even if ``profile_dir`` is given, and can be\n disable by passing ``stream=None``.\n\n .. versionadded:: 0.15\n Added ``filename_format``.\n\n .. 
versionadded:: 0.9\n Added ``restrictions`` and ``profile_dir``.\n \"\"\"\n\n def __init__(\n self,\n app,\n stream=sys.stdout,\n sort_by=(\"time\", \"calls\"),\n restrictions=(),\n profile_dir=None,\n filename_format=\"{method}.{path}.{elapsed:06d}ms.{time:d}.prof\",\n ):\n self._app = app\n self._stream = stream\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n self._filename_format = filename_format\n\n def __call__(self, environ, start_response):\n response_body = []\n\n def catching_start_response(status, headers, exc_info=None):\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp():\n app_iter = self._app(environ, catching_start_response)\n response_body.extend(app_iter)\n\n if hasattr(app_iter, \"close\"):\n app_iter.close()\n\n profile = Profile()\n start = time.time()\n profile.runcall(runapp)\n body = b\"\".join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n if callable(self._filename_format):\n filename = self._filename_format(environ)\n else:\n filename = self._filename_format.format(\n method=environ[\"REQUEST_METHOD\"],\n path=(\n environ.get(\"PATH_INFO\").strip(\"/\").replace(\"/\", \".\") or \"root\"\n ),\n elapsed=elapsed * 1000.0,\n time=time.time(),\n )\n filename = os.path.join(self._profile_dir, filename)\n profile.dump_stats(filename)\n\n if self._stream is not None:\n stats = Stats(profile, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n print(\"-\" * 80, file=self._stream)\n print(\"PATH: {!r}\".format(environ.get(\"PATH_INFO\", \"\")), file=self._stream)\n stats.print_stats(*self._restrictions)\n print(\"-\" * 80 + \"\\n\", file=self._stream)\n\n return [body]\n", "path": "src/werkzeug/middleware/profiler.py"}]} | 2,565 | 139 |
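The failure in the row above reproduces without Werkzeug at all; a short sketch of the format-spec behaviour the fix relies on:

```python
# "{:06d}" rejects floats (elapsed * 1000.0 and time.time() are floats), which is
# exactly the ValueError in the traceback; "{:.0f}", used by the fixed default,
# accepts them and drops the fractional part.
elapsed_ms = 12.345

print("{:.0f}".format(elapsed_ms))       # "12": works with a float
try:
    print("{:06d}".format(elapsed_ms))   # raises for a float
except ValueError as exc:
    print(exc)  # Unknown format code 'd' for object of type 'float'
```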
gh_patches_debug_6326 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-472 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OpenTracing propagator does not use a TraceFlags object
I set up a client and server that propagated spans using the OpenTracing propagator. The server side reported this error:
```
[2021-04-26 16:41:13,377] ERROR in app: Exception on /ping [GET]
Traceback (most recent call last):
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "server.py", line 53, in ping
with tracer.start_as_current_span(
File "/home/ocelotl/.pyenv/versions/3.8.3/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py", line 863, in start_as_current_span
span = self.start_span(
File "/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py", line 917, in start_span
sampling_result = self.sampler.should_sample(
File "/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py", line 326, in should_sample
if parent_span_context.trace_flags.sampled:
AttributeError: 'int' object has no attribute 'sampled'
```
This happens because when instantiating a context during propagation with the OpenTracing propagator, a `TraceFlags` object is not used for the trace flags.
</issue>
<code>
[start of propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from re import compile as re_compile
16 from typing import Any, Iterable, Optional
17
18 from opentelemetry.baggage import get_all, set_baggage
19 from opentelemetry.context import Context
20 from opentelemetry.propagators.textmap import (
21 CarrierT,
22 Getter,
23 Setter,
24 TextMapPropagator,
25 default_getter,
26 default_setter,
27 )
28 from opentelemetry.trace import (
29 INVALID_SPAN_ID,
30 INVALID_TRACE_ID,
31 NonRecordingSpan,
32 SpanContext,
33 TraceFlags,
34 get_current_span,
35 set_span_in_context,
36 )
37
38 OT_TRACE_ID_HEADER = "ot-tracer-traceid"
39 OT_SPAN_ID_HEADER = "ot-tracer-spanid"
40 OT_SAMPLED_HEADER = "ot-tracer-sampled"
41 OT_BAGGAGE_PREFIX = "ot-baggage-"
42
43 _valid_header_name = re_compile(r"[\w_^`!#$%&'*+.|~]+")
44 _valid_header_value = re_compile(r"[\t\x20-\x7e\x80-\xff]+")
45 _valid_extract_traceid = re_compile(r"[0-9a-f]{1,32}")
46 _valid_extract_spanid = re_compile(r"[0-9a-f]{1,16}")
47
48
49 class OTTracePropagator(TextMapPropagator):
50 """Propagator for the OTTrace HTTP header format"""
51
52 def extract(
53 self,
54 carrier: CarrierT,
55 context: Optional[Context] = None,
56 getter: Getter = default_getter,
57 ) -> Context:
58
59 traceid = _extract_first_element(
60 getter.get(carrier, OT_TRACE_ID_HEADER), INVALID_TRACE_ID
61 )
62
63 spanid = _extract_first_element(
64 getter.get(carrier, OT_SPAN_ID_HEADER), INVALID_SPAN_ID
65 )
66
67 sampled = _extract_first_element(
68 getter.get(carrier, OT_SAMPLED_HEADER)
69 )
70
71 if sampled == "true":
72 traceflags = TraceFlags.SAMPLED
73 else:
74 traceflags = TraceFlags.DEFAULT
75
76 if (
77 traceid != INVALID_TRACE_ID
78 and _valid_extract_traceid.fullmatch(traceid) is not None
79 and spanid != INVALID_SPAN_ID
80 and _valid_extract_spanid.fullmatch(spanid) is not None
81 ):
82 context = set_span_in_context(
83 NonRecordingSpan(
84 SpanContext(
85 trace_id=int(traceid, 16),
86 span_id=int(spanid, 16),
87 is_remote=True,
88 trace_flags=traceflags,
89 )
90 ),
91 context,
92 )
93
94 baggage = get_all(context) or {}
95
96 for key in getter.keys(carrier):
97
98 if not key.startswith(OT_BAGGAGE_PREFIX):
99 continue
100
101 baggage[
102 key[len(OT_BAGGAGE_PREFIX) :]
103 ] = _extract_first_element(getter.get(carrier, key))
104
105 for key, value in baggage.items():
106 context = set_baggage(key, value, context)
107
108 return context
109
110 def inject(
111 self,
112 carrier: CarrierT,
113 context: Optional[Context] = None,
114 setter: Setter = default_setter,
115 ) -> None:
116
117 span_context = get_current_span(context).get_span_context()
118
119 if span_context.trace_id == INVALID_TRACE_ID:
120 return
121
122 setter.set(
123 carrier, OT_TRACE_ID_HEADER, hex(span_context.trace_id)[2:][-16:]
124 )
125 setter.set(
126 carrier, OT_SPAN_ID_HEADER, hex(span_context.span_id)[2:][-16:],
127 )
128
129 if span_context.trace_flags == TraceFlags.SAMPLED:
130 traceflags = "true"
131 else:
132 traceflags = "false"
133
134 setter.set(carrier, OT_SAMPLED_HEADER, traceflags)
135
136 baggage = get_all(context)
137
138 if not baggage:
139 return
140
141 for header_name, header_value in baggage.items():
142
143 if (
144 _valid_header_name.fullmatch(header_name) is None
145 or _valid_header_value.fullmatch(header_value) is None
146 ):
147 continue
148
149 setter.set(
150 carrier,
151 "".join([OT_BAGGAGE_PREFIX, header_name]),
152 header_value,
153 )
154
155 @property
156 def fields(self):
157 """Returns a set with the fields set in `inject`.
158
159 See
160 `opentelemetry.propagators.textmap.TextMapPropagator.fields`
161 """
162 return {
163 OT_TRACE_ID_HEADER,
164 OT_SPAN_ID_HEADER,
165 OT_SAMPLED_HEADER,
166 }
167
168
169 def _extract_first_element(
170 items: Iterable[CarrierT], default: Any = None,
171 ) -> Optional[CarrierT]:
172 if items is None:
173 return default
174 return next(iter(items), None)
175
[end of propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py
--- a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py
+++ b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py
@@ -85,7 +85,7 @@
trace_id=int(traceid, 16),
span_id=int(spanid, 16),
is_remote=True,
- trace_flags=traceflags,
+ trace_flags=TraceFlags(traceflags),
)
),
context,
| {"golden_diff": "diff --git a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py\n--- a/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py\n+++ b/propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py\n@@ -85,7 +85,7 @@\n trace_id=int(traceid, 16),\n span_id=int(spanid, 16),\n is_remote=True,\n- trace_flags=traceflags,\n+ trace_flags=TraceFlags(traceflags),\n )\n ),\n context,\n", "issue": "OpenTracing propagator does not use a TraceFlags object\nI set up a client and server that propagated spans using the OpenTracing propagator. The server side reported this error:\r\n\r\n```\r\n[2021-04-26 16:41:13,377] ERROR in app: Exception on /ping [GET]\r\nTraceback (most recent call last):\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 2447, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1952, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1821, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/_compat.py\", line 39, in reraise\r\n raise value\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1950, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/home/ocelotl/virtual_environments/LS-22507/lib/python3.8/site-packages/flask/app.py\", line 1936, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"server.py\", line 53, in ping\r\n with tracer.start_as_current_span(\r\n File \"/home/ocelotl/.pyenv/versions/3.8.3/lib/python3.8/contextlib.py\", line 113, in __enter__\r\n return next(self.gen)\r\n File \"/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py\", line 863, in start_as_current_span\r\n span = self.start_span(\r\n File \"/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py\", line 917, in start_span\r\n sampling_result = self.sampler.should_sample(\r\n File \"/home/ocelotl/github/ocelotl/opentelemetry-python/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py\", line 326, in should_sample\r\n if parent_span_context.trace_flags.sampled:\r\nAttributeError: 'int' object has no attribute 'sampled'\r\n```\r\n\r\nThis happens because when instantiating a context during propagation with the OpenTracing propagator, a `TracFlags` object is not used for the trace flags.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# 
limitations under the License.\n\nfrom re import compile as re_compile\nfrom typing import Any, Iterable, Optional\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\nfrom opentelemetry.trace import (\n INVALID_SPAN_ID,\n INVALID_TRACE_ID,\n NonRecordingSpan,\n SpanContext,\n TraceFlags,\n get_current_span,\n set_span_in_context,\n)\n\nOT_TRACE_ID_HEADER = \"ot-tracer-traceid\"\nOT_SPAN_ID_HEADER = \"ot-tracer-spanid\"\nOT_SAMPLED_HEADER = \"ot-tracer-sampled\"\nOT_BAGGAGE_PREFIX = \"ot-baggage-\"\n\n_valid_header_name = re_compile(r\"[\\w_^`!#$%&'*+.|~]+\")\n_valid_header_value = re_compile(r\"[\\t\\x20-\\x7e\\x80-\\xff]+\")\n_valid_extract_traceid = re_compile(r\"[0-9a-f]{1,32}\")\n_valid_extract_spanid = re_compile(r\"[0-9a-f]{1,16}\")\n\n\nclass OTTracePropagator(TextMapPropagator):\n \"\"\"Propagator for the OTTrace HTTP header format\"\"\"\n\n def extract(\n self,\n carrier: CarrierT,\n context: Optional[Context] = None,\n getter: Getter = default_getter,\n ) -> Context:\n\n traceid = _extract_first_element(\n getter.get(carrier, OT_TRACE_ID_HEADER), INVALID_TRACE_ID\n )\n\n spanid = _extract_first_element(\n getter.get(carrier, OT_SPAN_ID_HEADER), INVALID_SPAN_ID\n )\n\n sampled = _extract_first_element(\n getter.get(carrier, OT_SAMPLED_HEADER)\n )\n\n if sampled == \"true\":\n traceflags = TraceFlags.SAMPLED\n else:\n traceflags = TraceFlags.DEFAULT\n\n if (\n traceid != INVALID_TRACE_ID\n and _valid_extract_traceid.fullmatch(traceid) is not None\n and spanid != INVALID_SPAN_ID\n and _valid_extract_spanid.fullmatch(spanid) is not None\n ):\n context = set_span_in_context(\n NonRecordingSpan(\n SpanContext(\n trace_id=int(traceid, 16),\n span_id=int(spanid, 16),\n is_remote=True,\n trace_flags=traceflags,\n )\n ),\n context,\n )\n\n baggage = get_all(context) or {}\n\n for key in getter.keys(carrier):\n\n if not key.startswith(OT_BAGGAGE_PREFIX):\n continue\n\n baggage[\n key[len(OT_BAGGAGE_PREFIX) :]\n ] = _extract_first_element(getter.get(carrier, key))\n\n for key, value in baggage.items():\n context = set_baggage(key, value, context)\n\n return context\n\n def inject(\n self,\n carrier: CarrierT,\n context: Optional[Context] = None,\n setter: Setter = default_setter,\n ) -> None:\n\n span_context = get_current_span(context).get_span_context()\n\n if span_context.trace_id == INVALID_TRACE_ID:\n return\n\n setter.set(\n carrier, OT_TRACE_ID_HEADER, hex(span_context.trace_id)[2:][-16:]\n )\n setter.set(\n carrier, OT_SPAN_ID_HEADER, hex(span_context.span_id)[2:][-16:],\n )\n\n if span_context.trace_flags == TraceFlags.SAMPLED:\n traceflags = \"true\"\n else:\n traceflags = \"false\"\n\n setter.set(carrier, OT_SAMPLED_HEADER, traceflags)\n\n baggage = get_all(context)\n\n if not baggage:\n return\n\n for header_name, header_value in baggage.items():\n\n if (\n _valid_header_name.fullmatch(header_name) is None\n or _valid_header_value.fullmatch(header_value) is None\n ):\n continue\n\n setter.set(\n carrier,\n \"\".join([OT_BAGGAGE_PREFIX, header_name]),\n header_value,\n )\n\n @property\n def fields(self):\n \"\"\"Returns a set with the fields set in `inject`.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.fields`\n \"\"\"\n return {\n OT_TRACE_ID_HEADER,\n OT_SPAN_ID_HEADER,\n OT_SAMPLED_HEADER,\n }\n\n\ndef _extract_first_element(\n items: Iterable[CarrierT], default: Any = 
None,\n) -> Optional[CarrierT]:\n if items is None:\n return default\n return next(iter(items), None)\n", "path": "propagator/opentelemetry-propagator-ot-trace/src/opentelemetry/propagators/ot_trace/__init__.py"}]} | 2,838 | 192 |
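
Note on the fix above: the sampler traceback in the issue (`'int' object has no attribute 'sampled'`) comes from handing the SDK a bare integer where it later expects a `TraceFlags` instance, which is exactly what wrapping the extracted value in `TraceFlags(...)` addresses. The sketch below only illustrates that distinction; it is not the propagator's actual code path and assumes an installed `opentelemetry-api` package.

```python
# Minimal sketch: why the extracted flags need to be wrapped in TraceFlags.
from opentelemetry.trace import TraceFlags

raw_flags = 1                          # plain int, as derived from the sampled header
wrapped = TraceFlags(raw_flags)        # TraceFlags subclasses int

print(wrapped.sampled)                 # True  -- the property samplers rely on
print(hasattr(raw_flags, "sampled"))   # False -- the source of the AttributeError
```
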
gh_patches_debug_5952 | rasdani/github-patches | git_diff | Kinto__kinto-386 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Activate POST on collections
```
$ curl -H "Content-Type: application/json" \
-X POST -d '{"data": {"test": "some_data"}}' --user testuser:abc123 \
https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections
{"errno":115,"message":"Method not allowed on this endpoint.","code":405,"error":"Method Not Allowed"}
```
</issue>
<code>
[start of kinto/views/collections.py]
1 import colander
2 import jsonschema
3 from cliquet import resource
4 from jsonschema import exceptions as jsonschema_exceptions
5
6 from kinto.views import NameGenerator
7
8
9 class JSONSchemaMapping(colander.SchemaNode):
10 def schema_type(self, **kw):
11 return colander.Mapping(unknown='preserve')
12
13 def deserialize(self, cstruct=colander.null):
14 # Start by deserializing a simple mapping.
15 validated = super(JSONSchemaMapping, self).deserialize(cstruct)
16
17 # In case it is optional in parent schema.
18 if not validated or validated in (colander.null, colander.drop):
19 return validated
20
21 try:
22 jsonschema.Draft4Validator.check_schema(validated)
23 except jsonschema_exceptions.SchemaError as e:
24 self.raise_invalid(e.path.pop() + e.message)
25 return validated
26
27
28 class CollectionSchema(resource.ResourceSchema):
29 schema = JSONSchemaMapping(missing=colander.drop)
30 cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)
31
32 class Options:
33 preserve_unknown = True
34
35
36 @resource.register(name='collection',
37 collection_methods=('GET',),
38 collection_path='/buckets/{{bucket_id}}/collections',
39 record_path='/buckets/{{bucket_id}}/collections/{{id}}')
40 class Collection(resource.ProtectedResource):
41 mapping = CollectionSchema()
42 permissions = ('read', 'write', 'record:create')
43
44 def __init__(self, *args, **kwargs):
45 super(Collection, self).__init__(*args, **kwargs)
46 self.model.id_generator = NameGenerator()
47
48 def get_parent_id(self, request):
49 bucket_id = request.matchdict['bucket_id']
50 parent_id = '/buckets/%s' % bucket_id
51 return parent_id
52
53 def delete(self):
54 result = super(Collection, self).delete()
55
56 # Delete records.
57 storage = self.model.storage
58 parent_id = '%s/collections/%s' % (self.model.parent_id,
59 self.record_id)
60 storage.delete_all(collection_id='record',
61 parent_id=parent_id,
62 with_deleted=False)
63 storage.purge_deleted(collection_id='record', parent_id=parent_id)
64
65 return result
66
[end of kinto/views/collections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/collections.py b/kinto/views/collections.py
--- a/kinto/views/collections.py
+++ b/kinto/views/collections.py
@@ -34,7 +34,7 @@
@resource.register(name='collection',
- collection_methods=('GET',),
+ collection_methods=('GET', 'POST'),
collection_path='/buckets/{{bucket_id}}/collections',
record_path='/buckets/{{bucket_id}}/collections/{{id}}')
class Collection(resource.ProtectedResource):
| {"golden_diff": "diff --git a/kinto/views/collections.py b/kinto/views/collections.py\n--- a/kinto/views/collections.py\n+++ b/kinto/views/collections.py\n@@ -34,7 +34,7 @@\n \n \n @resource.register(name='collection',\n- collection_methods=('GET',),\n+ collection_methods=('GET', 'POST'),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\n class Collection(resource.ProtectedResource):\n", "issue": "Activate POST on collections\n```\n$ curl -H \"Content-Type: application/json\" \\\n -X POST -d '{\"data\": {\"test\": \"some_data\"}}' --user testuser:abc123 \\\n https://kinto.dev.mozaws.net/v1/buckets/test_bucket/collections\n\n{\"errno\":115,\"message\":\"Method not allowed on this endpoint.\",\"code\":405,\"error\":\"Method Not Allowed\"}\n```\n\n", "before_files": [{"content": "import colander\nimport jsonschema\nfrom cliquet import resource\nfrom jsonschema import exceptions as jsonschema_exceptions\n\nfrom kinto.views import NameGenerator\n\n\nclass JSONSchemaMapping(colander.SchemaNode):\n def schema_type(self, **kw):\n return colander.Mapping(unknown='preserve')\n\n def deserialize(self, cstruct=colander.null):\n # Start by deserializing a simple mapping.\n validated = super(JSONSchemaMapping, self).deserialize(cstruct)\n\n # In case it is optional in parent schema.\n if not validated or validated in (colander.null, colander.drop):\n return validated\n\n try:\n jsonschema.Draft4Validator.check_schema(validated)\n except jsonschema_exceptions.SchemaError as e:\n self.raise_invalid(e.path.pop() + e.message)\n return validated\n\n\nclass CollectionSchema(resource.ResourceSchema):\n schema = JSONSchemaMapping(missing=colander.drop)\n cache_expires = colander.SchemaNode(colander.Int(), missing=colander.drop)\n\n class Options:\n preserve_unknown = True\n\n\[email protected](name='collection',\n collection_methods=('GET',),\n collection_path='/buckets/{{bucket_id}}/collections',\n record_path='/buckets/{{bucket_id}}/collections/{{id}}')\nclass Collection(resource.ProtectedResource):\n mapping = CollectionSchema()\n permissions = ('read', 'write', 'record:create')\n\n def __init__(self, *args, **kwargs):\n super(Collection, self).__init__(*args, **kwargs)\n self.model.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = '/buckets/%s' % bucket_id\n return parent_id\n\n def delete(self):\n result = super(Collection, self).delete()\n\n # Delete records.\n storage = self.model.storage\n parent_id = '%s/collections/%s' % (self.model.parent_id,\n self.record_id)\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/collections.py"}]} | 1,220 | 108 |
gh_patches_debug_5335 | rasdani/github-patches | git_diff | Nitrate__Nitrate-415 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
import xml says wrong xml_version
Importing XML is not working; it reports the wrong xml_version, 1.1.
I export a test case to generate the XML, then try to import that same file, and it does not work.
thanks in advance
</issue>
<code>
[start of src/tcms/settings/product.py]
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 # For Kerberos authentication, uncomment out RemoteUserMiddleware.
22 # MIDDLEWARE += (
23 # 'django.contrib.auth.middleware.RemoteUserMiddleware',
24 # )
25
26 # Remote kerberos authentication backends
27 # AUTHENTICATION_BACKENDS = (
28 # 'tcms.auth.backends.ModAuthKerbBackend',
29 # )
30
31 # To enable database routers for read/write separation.
32 # DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']
33
34 # Kerberos realm
35 # KRB5_REALM = 'EXAMPLE.COM'
36
37 # User authentication by Bugzilla settings
38 # BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'
39
40
41 TEMPLATES[0].update({
42 'DIRS': ['/usr/share/nitrate/templates'],
43 })
44
45 # Set the default send mail address
46 EMAIL_HOST = 'smtp.example.com'
47 EMAIL_FROM = '[email protected]'
48
49 # Site-specific messages
50
51 # First run - to determine if it needs to prompt user or not.
52 FIRST_RUN = False
53
54 # You can add a help link on the footer of home page as following format:
55 # ('http://foo.com', 'foo')
56 FOOTER_LINKS = (
57 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
58 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
59 )
60
61 # added for nitrate3.4 compatibility
62 DEFAULT_GROUPS = ['default']
63 TESTOPIA_XML_VERSION = '1.0'
64
65 # admin settings
66 ADMINS = (
67 # ('Your Name', '[email protected]'),
68 )
69
70 DEFAULT_PAGE_SIZE = 100
71
[end of src/tcms/settings/product.py]
[start of docker/released/product.py]
1 # Django settings for product env.
2
3 from tcms.settings.common import * # noqa
4
5 # Debug settings
6 DEBUG = False
7 TEMPLATE_DEBUG = DEBUG
8
9 # Database settings
10 DATABASES = {
11 'default': {
12 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],
13 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),
14 'USER': env.get('NITRATE_DB_USER', 'nitrate'),
15 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),
16 'HOST': env.get('NITRATE_DB_HOST', ''),
17 'PORT': env.get('NITRATE_DB_PORT', ''),
18 },
19 }
20
21 AUTHENTICATION_BACKENDS = (
22 'django.contrib.auth.backends.ModelBackend',
23 )
24
25 TEMPLATES[0].update({
26 'DIRS': ['/usr/share/nitrate/templates'],
27 })
28
29 # Set the default send mail address
30 EMAIL_HOST = 'smtp.example.com'
31 EMAIL_FROM = '[email protected]'
32
33 # Site-specific messages
34
35 # First run - to determine if it needs to prompt user or not.
36 FIRST_RUN = False
37
38 # You can add a help link on the footer of home page as following format:
39 # ('http://foo.com', 'foo')
40 FOOTER_LINKS = (
41 ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),
42 ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),
43 )
44
45 # added for nitrate3.4 compatibility
46 DEFAULT_GROUPS = ['default']
47 TESTOPIA_XML_VERSION = '1.0'
48
49 ADMINS = (
50 )
51
52 DEFAULT_PAGE_SIZE = 100
53
[end of docker/released/product.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/released/product.py b/docker/released/product.py
--- a/docker/released/product.py
+++ b/docker/released/product.py
@@ -44,7 +44,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
ADMINS = (
)
diff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py
--- a/src/tcms/settings/product.py
+++ b/src/tcms/settings/product.py
@@ -60,7 +60,6 @@
# added for nitrate3.4 compatibility
DEFAULT_GROUPS = ['default']
-TESTOPIA_XML_VERSION = '1.0'
# admin settings
ADMINS = (
| {"golden_diff": "diff --git a/docker/released/product.py b/docker/released/product.py\n--- a/docker/released/product.py\n+++ b/docker/released/product.py\n@@ -44,7 +44,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n ADMINS = (\n )\ndiff --git a/src/tcms/settings/product.py b/src/tcms/settings/product.py\n--- a/src/tcms/settings/product.py\n+++ b/src/tcms/settings/product.py\n@@ -60,7 +60,6 @@\n \n # added for nitrate3.4 compatibility\n DEFAULT_GROUPS = ['default']\n-TESTOPIA_XML_VERSION = '1.0'\n \n # admin settings\n ADMINS = (\n", "issue": "import xml says Worng xml_version\nimport xml in not working says worng xml_version 1.1\r\n\r\ni export the test case and generate xml and try to import same not work\r\n\r\nthanks in advance\n", "before_files": [{"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', 'nitrate'),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\n# For Kerberos authentication, uncomment out RemoteUserMiddleware.\n# MIDDLEWARE += (\n# 'django.contrib.auth.middleware.RemoteUserMiddleware',\n# )\n\n# Remote kerberos authentication backends\n# AUTHENTICATION_BACKENDS = (\n# 'tcms.auth.backends.ModAuthKerbBackend',\n# )\n\n# To enable database routers for read/write separation.\n# DATABASE_ROUTERS = ['tcms.core.utils.tcms_router.RWRouter']\n\n# Kerberos realm\n# KRB5_REALM = 'EXAMPLE.COM'\n\n# User authentication by Bugzilla settings\n# BUGZILLA_XMLRPC_URL = 'https://bugzilla.example.com/xmlrpc.cgi'\n\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\n# admin settings\nADMINS = (\n # ('Your Name', '[email protected]'),\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "src/tcms/settings/product.py"}, {"content": "# Django settings for product env.\n\nfrom tcms.settings.common import * # noqa\n\n# Debug settings\nDEBUG = False\nTEMPLATE_DEBUG = DEBUG\n\n# Database settings\nDATABASES = {\n 'default': {\n 'ENGINE': SUPPORTED_DB_ENGINES[DB_ENGINE],\n 'NAME': env.get('NITRATE_DB_NAME', 'nitrate'),\n 'USER': env.get('NITRATE_DB_USER', 'nitrate'),\n 'PASSWORD': env.get('NITRATE_DB_PASSWORD', ''),\n 'HOST': env.get('NITRATE_DB_HOST', ''),\n 'PORT': env.get('NITRATE_DB_PORT', ''),\n },\n}\n\nAUTHENTICATION_BACKENDS = (\n 'django.contrib.auth.backends.ModelBackend',\n)\n\nTEMPLATES[0].update({\n 'DIRS': ['/usr/share/nitrate/templates'],\n})\n\n# Set the default send mail address\nEMAIL_HOST = 'smtp.example.com'\nEMAIL_FROM = '[email protected]'\n\n# Site-specific messages\n\n# First run - to determine if it needs to prompt user or 
not.\nFIRST_RUN = False\n\n# You can add a help link on the footer of home page as following format:\n# ('http://foo.com', 'foo')\nFOOTER_LINKS = (\n ('https://nitrate.readthedocs.io/en/latest/api/xmlrpc.html', 'XML-RPC Service'),\n ('https://nitrate.readthedocs.io/en/latest/guide.html', 'User Guide'),\n)\n\n# added for nitrate3.4 compatibility\nDEFAULT_GROUPS = ['default']\nTESTOPIA_XML_VERSION = '1.0'\n\nADMINS = (\n)\n\nDEFAULT_PAGE_SIZE = 100\n", "path": "docker/released/product.py"}]} | 1,679 | 167 |
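
The patch above only deletes the hard-coded `TESTOPIA_XML_VERSION = '1.0'` from the two product settings modules. Because both modules begin with `from tcms.settings.common import *`, removing the local assignment lets whatever the common settings define apply instead, which is presumably what allows XML exported with version 1.1 to be imported again. The snippet below is an illustrative settings-shadowing sketch, not Nitrate's real module contents; the `'1.1'` value in particular is an assumption.

```python
# Illustrative only: how a later assignment shadows a star-imported setting.

# common.py
TESTOPIA_XML_VERSION = '1.1'      # assumed: the version the exporter writes

# product.py
from common import *              # brings in '1.1'
TESTOPIA_XML_VERSION = '1.0'      # local override shadows it; deleting this line
                                  # (as the patch does) restores the common value
```
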
gh_patches_debug_25743 | rasdani/github-patches | git_diff | getsentry__sentry-python-3099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
2.2.1
### Steps to Reproduce
```console
$ docker run --rm -it ubuntu:22.04
root@e264f830878b:/# apt update
root@e264f830878b:/# apt install -y python3-apport virtualenv
root@e264f830878b:/# virtualenv venv
root@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk
…
Successfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1
root@e264f830878b:/# cat > test.py <<EOF
exec(open("venv/bin/activate_this.py").read(), {"__file__": "venv/bin/activate_this.py"})
import sentry_sdk
sentry_sdk.init(dsn="https://[email protected]/1234")
import exceptiongroup
EOF
root@e264f830878b:/# python3 test.py
```
### Expected Result
No error.
### Actual Result
```pytb
Traceback (most recent call last):
File "//test.py", line 4, in <module>
import exceptiongroup
File "/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py", line 20, in <module>
from ._formatting import (
File "/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py", line 394, in <module>
assert sys.excepthook is apport_python_hook.apport_excepthook
AssertionError
Sentry is attempting to send 2 pending events
Waiting up to 2 seconds
Press Ctrl-C to quit
```
The [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is
```python
if getattr(sys.excepthook, "__name__", None) in (
"apport_excepthook",
# on ubuntu 22.10 the hook was renamed to partial_apport_excepthook
"partial_apport_excepthook",
):
…
import apport_python_hook
assert sys.excepthook is apport_python_hook.apport_excepthook
```
which fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of
- #2906
(cc @sentrivana)
This is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it since it’s a popular library; for example, it’s a dependency of IPython.
</issue>
<code>
[start of sentry_sdk/integrations/excepthook.py]
1 import sys
2
3 import sentry_sdk
4 from sentry_sdk.utils import (
5 capture_internal_exceptions,
6 ensure_integration_enabled,
7 event_from_exception,
8 )
9 from sentry_sdk.integrations import Integration
10
11 from sentry_sdk._types import TYPE_CHECKING
12
13 if TYPE_CHECKING:
14 from typing import Callable
15 from typing import Any
16 from typing import Type
17 from typing import Optional
18
19 from types import TracebackType
20
21 Excepthook = Callable[
22 [Type[BaseException], BaseException, Optional[TracebackType]],
23 Any,
24 ]
25
26
27 class ExcepthookIntegration(Integration):
28 identifier = "excepthook"
29
30 always_run = False
31
32 def __init__(self, always_run=False):
33 # type: (bool) -> None
34
35 if not isinstance(always_run, bool):
36 raise ValueError(
37 "Invalid value for always_run: %s (must be type boolean)"
38 % (always_run,)
39 )
40 self.always_run = always_run
41
42 @staticmethod
43 def setup_once():
44 # type: () -> None
45 sys.excepthook = _make_excepthook(sys.excepthook)
46
47
48 def _make_excepthook(old_excepthook):
49 # type: (Excepthook) -> Excepthook
50 @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
51 def sentry_sdk_excepthook(type_, value, traceback):
52 # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
53 integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
54
55 if _should_send(integration.always_run):
56 with capture_internal_exceptions():
57 event, hint = event_from_exception(
58 (type_, value, traceback),
59 client_options=sentry_sdk.get_client().options,
60 mechanism={"type": "excepthook", "handled": False},
61 )
62 sentry_sdk.capture_event(event, hint=hint)
63
64 return old_excepthook(type_, value, traceback)
65
66 return sentry_sdk_excepthook
67
68
69 def _should_send(always_run=False):
70 # type: (bool) -> bool
71 if always_run:
72 return True
73
74 if hasattr(sys, "ps1"):
75 # Disable the excepthook for interactive Python shells, otherwise
76 # every typo gets sent to Sentry.
77 return False
78
79 return True
80
[end of sentry_sdk/integrations/excepthook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py
--- a/sentry_sdk/integrations/excepthook.py
+++ b/sentry_sdk/integrations/excepthook.py
@@ -3,7 +3,6 @@
import sentry_sdk
from sentry_sdk.utils import (
capture_internal_exceptions,
- ensure_integration_enabled,
event_from_exception,
)
from sentry_sdk.integrations import Integration
@@ -47,11 +46,16 @@
def _make_excepthook(old_excepthook):
# type: (Excepthook) -> Excepthook
- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)
def sentry_sdk_excepthook(type_, value, traceback):
# type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None
integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)
+ # Note: If we replace this with ensure_integration_enabled then
+ # we break the exceptiongroup backport;
+ # See: https://github.com/getsentry/sentry-python/issues/3097
+ if integration is None:
+ return old_excepthook(type_, value, traceback)
+
if _should_send(integration.always_run):
with capture_internal_exceptions():
event, hint = event_from_exception(
| {"golden_diff": "diff --git a/sentry_sdk/integrations/excepthook.py b/sentry_sdk/integrations/excepthook.py\n--- a/sentry_sdk/integrations/excepthook.py\n+++ b/sentry_sdk/integrations/excepthook.py\n@@ -3,7 +3,6 @@\n import sentry_sdk\n from sentry_sdk.utils import (\n capture_internal_exceptions,\n- ensure_integration_enabled,\n event_from_exception,\n )\n from sentry_sdk.integrations import Integration\n@@ -47,11 +46,16 @@\n \n def _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n- @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n \n+ # Note: If we replace this with ensure_integration_enabled then\n+ # we break the exceptiongroup backport;\n+ # See: https://github.com/getsentry/sentry-python/issues/3097\n+ if integration is None:\n+ return old_excepthook(type_, value, traceback)\n+\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n", "issue": "`sentry_sdk.init` breaks `import exceptiongroup` in virtualenv activated with `activate_this.py`\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n2.2.1\r\n\r\n### Steps to Reproduce\r\n\r\n```console\r\n$ docker run --rm -it ubuntu:22.04\r\nroot@e264f830878b:/# apt update\r\nroot@e264f830878b:/# apt install -y python3-apport virtualenv\r\nroot@e264f830878b:/# virtualenv venv\r\nroot@e264f830878b:/# venv/bin/pip install exceptiongroup sentry-sdk\r\n\u2026\r\nSuccessfully installed certifi-2024.2.2 exceptiongroup-1.2.1 sentry-sdk-2.2.1 urllib3-2.2.1\r\nroot@e264f830878b:/# cat > test.py <<EOF\r\nexec(open(\"venv/bin/activate_this.py\").read(), {\"__file__\": \"venv/bin/activate_this.py\"})\r\nimport sentry_sdk\r\nsentry_sdk.init(dsn=\"https://[email protected]/1234\")\r\nimport exceptiongroup\r\nEOF\r\nroot@e264f830878b:/# python3 test.py\r\n```\r\n\r\n### Expected Result\r\n\r\nNo error.\r\n\r\n### Actual Result\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"//test.py\", line 4, in <module>\r\n import exceptiongroup\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/__init__.py\", line 20, in <module>\r\n from ._formatting import (\r\n File \"/venv/lib/python3.10/site-packages/exceptiongroup/_formatting.py\", line 394, in <module>\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\nAssertionError\r\nSentry is attempting to send 2 pending events\r\nWaiting up to 2 seconds\r\nPress Ctrl-C to quit\r\n```\r\n\r\nThe [relevant code within `exceptiongroup`](https://github.com/agronholm/exceptiongroup/blob/1.2.1/src/exceptiongroup/_formatting.py#L374-L400) is\r\n\r\n```python\r\nif getattr(sys.excepthook, \"__name__\", None) in (\r\n \"apport_excepthook\",\r\n # on ubuntu 22.10 the hook was renamed to partial_apport_excepthook\r\n \"partial_apport_excepthook\",\r\n):\r\n \u2026\r\n import apport_python_hook\r\n\r\n assert sys.excepthook is apport_python_hook.apport_excepthook\r\n```\r\n\r\nwhich fails because Sentry has patched `sys.excepthook` but retained the same `__name__`, due to `functools.wraps` within the `ensure_integration_enabled` decorator, used by `_make_excepthook` as of\r\n\r\n- #2906\r\n\r\n(cc @sentrivana)\r\n\r\nThis is arguably poor code within `exceptiongroup` (I opened agronholm/exceptiongroup#123), but Sentry should avoid breaking it 
since it\u2019s a popular library; for example, it\u2019s a dependency of IPython.\n", "before_files": [{"content": "import sys\n\nimport sentry_sdk\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n ensure_integration_enabled,\n event_from_exception,\n)\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Callable\n from typing import Any\n from typing import Type\n from typing import Optional\n\n from types import TracebackType\n\n Excepthook = Callable[\n [Type[BaseException], BaseException, Optional[TracebackType]],\n Any,\n ]\n\n\nclass ExcepthookIntegration(Integration):\n identifier = \"excepthook\"\n\n always_run = False\n\n def __init__(self, always_run=False):\n # type: (bool) -> None\n\n if not isinstance(always_run, bool):\n raise ValueError(\n \"Invalid value for always_run: %s (must be type boolean)\"\n % (always_run,)\n )\n self.always_run = always_run\n\n @staticmethod\n def setup_once():\n # type: () -> None\n sys.excepthook = _make_excepthook(sys.excepthook)\n\n\ndef _make_excepthook(old_excepthook):\n # type: (Excepthook) -> Excepthook\n @ensure_integration_enabled(ExcepthookIntegration, old_excepthook)\n def sentry_sdk_excepthook(type_, value, traceback):\n # type: (Type[BaseException], BaseException, Optional[TracebackType]) -> None\n integration = sentry_sdk.get_client().get_integration(ExcepthookIntegration)\n\n if _should_send(integration.always_run):\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n (type_, value, traceback),\n client_options=sentry_sdk.get_client().options,\n mechanism={\"type\": \"excepthook\", \"handled\": False},\n )\n sentry_sdk.capture_event(event, hint=hint)\n\n return old_excepthook(type_, value, traceback)\n\n return sentry_sdk_excepthook\n\n\ndef _should_send(always_run=False):\n # type: (bool) -> bool\n if always_run:\n return True\n\n if hasattr(sys, \"ps1\"):\n # Disable the excepthook for interactive Python shells, otherwise\n # every typo gets sent to Sentry.\n return False\n\n return True\n", "path": "sentry_sdk/integrations/excepthook.py"}]} | 1,957 | 321 |
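
The fix above drops the `ensure_integration_enabled` decorator from `_make_excepthook` and re-implements its "integration is None" check inline. The root cause described in the issue is that the decorator's `functools.wraps` copies the wrapped hook's `__name__`, so exceptiongroup's name-based detection matched while its identity assertion failed. The stand-alone sketch below reproduces just that mechanism; it is not sentry-sdk code, and `apport_excepthook` here is only a stand-in for Ubuntu's hook.

```python
import functools
import sys

def apport_excepthook(exc_type, exc, tb):       # stand-in for Ubuntu's apport hook
    pass

@functools.wraps(apport_excepthook)
def wrapped_hook(exc_type, exc, tb):            # what a wraps-based patch produces
    return apport_excepthook(exc_type, exc, tb)

sys.excepthook = wrapped_hook

print(sys.excepthook.__name__)                  # 'apport_excepthook' -- name check passes
print(sys.excepthook is apport_excepthook)      # False -- identity assert fails
```
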
gh_patches_debug_30470 | rasdani/github-patches | git_diff | vega__altair-2643 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
x-axis tick labels in Natural Disasters case study need clean up
See:

</issue>
<code>
[start of altair/examples/natural_disasters.py]
1 """
2 Natural Disasters
3 -----------------
4 This example shows a visualization of global deaths from natural disasters.
5 """
6 # category: case studies
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.disasters.url
11
12 alt.Chart(source).mark_circle(
13 opacity=0.8,
14 stroke='black',
15 strokeWidth=1
16 ).encode(
17 alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
18 alt.Y('Entity:N'),
19 alt.Size('Deaths:Q',
20 scale=alt.Scale(range=[0, 4000]),
21 legend=alt.Legend(title='Annual Global Deaths')
22 ),
23 alt.Color('Entity:N', legend=None)
24 ).properties(
25 width=450,
26 height=320
27 ).transform_filter(
28 alt.datum.Entity != 'All natural disasters'
29 )
30
[end of altair/examples/natural_disasters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py
--- a/altair/examples/natural_disasters.py
+++ b/altair/examples/natural_disasters.py
@@ -1,7 +1,7 @@
"""
-Natural Disasters
------------------
-This example shows a visualization of global deaths from natural disasters.
+Global Deaths from Natural Disasters
+------------------------------------
+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.
"""
# category: case studies
import altair as alt
@@ -9,21 +9,44 @@
source = data.disasters.url
-alt.Chart(source).mark_circle(
+alt.Chart(source).transform_filter(
+ alt.datum.Entity != 'All natural disasters'
+).mark_circle(
opacity=0.8,
stroke='black',
- strokeWidth=1
+ strokeWidth=1,
+ strokeOpacity=0.4
).encode(
- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),
- alt.Y('Entity:N'),
- alt.Size('Deaths:Q',
- scale=alt.Scale(range=[0, 4000]),
- legend=alt.Legend(title='Annual Global Deaths')
+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),
+ y=alt.Y(
+ 'Entity:N',
+ sort=alt.EncodingSortField(field="Deaths", op="sum", order='descending'),
+ title=None
+ ),
+ size=alt.Size('Deaths:Q',
+ scale=alt.Scale(range=[0, 2500]),
+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')
),
- alt.Color('Entity:N', legend=None)
+ color=alt.Color('Entity:N', legend=None),
+ tooltip=[
+ "Entity:N",
+ alt.Tooltip("Year:T", format='%Y'),
+ alt.Tooltip("Deaths:Q", format='~s')
+ ],
).properties(
width=450,
- height=320
-).transform_filter(
- alt.datum.Entity != 'All natural disasters'
+ height=320,
+ title=alt.TitleParams(
+ text="Global Deaths from Natural Disasters (1900-2017)",
+ subtitle="The size of the bubble represents the total death count per year, by type of disaster",
+ anchor='start'
+ )
+).configure_axisY(
+ domain=False,
+ ticks=False,
+ offset=10
+).configure_axisX(
+ grid=False,
+).configure_view(
+ stroke=None
)
| {"golden_diff": "diff --git a/altair/examples/natural_disasters.py b/altair/examples/natural_disasters.py\n--- a/altair/examples/natural_disasters.py\n+++ b/altair/examples/natural_disasters.py\n@@ -1,7 +1,7 @@\n \"\"\"\n-Natural Disasters\n------------------\n-This example shows a visualization of global deaths from natural disasters.\n+Global Deaths from Natural Disasters\n+------------------------------------\n+This example shows a proportional symbols visualization of deaths from natural disasters by year and type.\n \"\"\"\n # category: case studies\n import altair as alt\n@@ -9,21 +9,44 @@\n \n source = data.disasters.url\n \n-alt.Chart(source).mark_circle(\n+alt.Chart(source).transform_filter(\n+ alt.datum.Entity != 'All natural disasters'\n+).mark_circle(\n opacity=0.8,\n stroke='black',\n- strokeWidth=1\n+ strokeWidth=1,\n+ strokeOpacity=0.4\n ).encode(\n- alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n- alt.Y('Entity:N'),\n- alt.Size('Deaths:Q',\n- scale=alt.Scale(range=[0, 4000]),\n- legend=alt.Legend(title='Annual Global Deaths')\n+ x=alt.X('Year:T', title=None, scale=alt.Scale(domain=['1899','2018'])),\n+ y=alt.Y(\n+ 'Entity:N',\n+ sort=alt.EncodingSortField(field=\"Deaths\", op=\"sum\", order='descending'),\n+ title=None\n+ ),\n+ size=alt.Size('Deaths:Q',\n+ scale=alt.Scale(range=[0, 2500]),\n+ legend=alt.Legend(title='Deaths', clipHeight=30, format='s')\n ),\n- alt.Color('Entity:N', legend=None)\n+ color=alt.Color('Entity:N', legend=None),\n+ tooltip=[\n+ \"Entity:N\", \n+ alt.Tooltip(\"Year:T\", format='%Y'), \n+ alt.Tooltip(\"Deaths:Q\", format='~s')\n+ ],\n ).properties(\n width=450,\n- height=320\n-).transform_filter(\n- alt.datum.Entity != 'All natural disasters'\n+ height=320,\n+ title=alt.TitleParams(\n+ text=\"Global Deaths from Natural Disasters (1900-2017)\",\n+ subtitle=\"The size of the bubble represents the total death count per year, by type of disaster\",\n+ anchor='start'\n+ )\n+).configure_axisY(\n+ domain=False,\n+ ticks=False,\n+ offset=10\n+).configure_axisX(\n+ grid=False,\n+).configure_view(\n+ stroke=None\n )\n", "issue": "x-axis tick labels in Natural Disasters case study need clean up\nSee:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nNatural Disasters\n-----------------\nThis example shows a visualization of global deaths from natural disasters.\n\"\"\"\n# category: case studies\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.disasters.url\n\nalt.Chart(source).mark_circle(\n opacity=0.8,\n stroke='black',\n strokeWidth=1\n).encode(\n alt.X('Year:O', axis=alt.Axis(labelAngle=0)),\n alt.Y('Entity:N'),\n alt.Size('Deaths:Q',\n scale=alt.Scale(range=[0, 4000]),\n legend=alt.Legend(title='Annual Global Deaths')\n ),\n alt.Color('Entity:N', legend=None)\n).properties(\n width=450,\n height=320\n).transform_filter(\n alt.datum.Entity != 'All natural disasters'\n)\n", "path": "altair/examples/natural_disasters.py"}]} | 869 | 609 |
gh_patches_debug_33544 | rasdani/github-patches | git_diff | rasterio__rasterio-822 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rio warp like silently ignores res override
The obvious starting place is `rio warp --like` but this doesn't allow you to override the resolution. It silently ignores the `--res` option which could be considered a bug.
```
$ rio warp --like b.tif --res 5 a.tif c.tif
$ rio info --res c.tif
1.0 1.0
```
In this case, warp should either a) override the resolution of the like raster or b) raise an exception saying that it's not supported
</issue>
<code>
[start of rasterio/rio/warp.py]
1 import logging
2 from math import ceil, floor, log
3 import warnings
4
5 import click
6 from cligj import files_inout_arg, format_opt
7
8 from .helpers import resolve_inout
9 from . import options
10 import rasterio
11 from rasterio.crs import CRS
12 from rasterio.errors import CRSError
13 from rasterio.transform import Affine
14 from rasterio.warp import (
15 reproject, Resampling, calculate_default_transform, transform_bounds)
16
17
18 # Improper usage of rio-warp can lead to accidental creation of
19 # extremely large datasets. We'll put a hard limit on the size of
20 # datasets and raise a usage error if the limits are exceeded.
21 MAX_OUTPUT_WIDTH = 100000
22 MAX_OUTPUT_HEIGHT = 100000
23
24
25 @click.command(short_help='Warp a raster dataset.')
26 @files_inout_arg
27 @options.output_opt
28 @format_opt
29 @click.option(
30 '--like',
31 type=click.Path(exists=True),
32 help='Raster dataset to use as a template for obtaining affine '
33 'transform (bounds and resolution), and crs.')
34 @click.option('--dst-crs', default=None,
35 help='Target coordinate reference system.')
36 @options.dimensions_opt
37 @click.option(
38 '--src-bounds',
39 nargs=4, type=float, default=None,
40 help="Determine output extent from source bounds: left bottom right top "
41 ". Cannot be used with destination --bounds")
42 @click.option(
43 '--bounds', '--dst-bounds', nargs=4, type=float, default=None,
44 help="Determine output extent from destination bounds: left bottom right top")
45 @options.resolution_opt
46 @click.option('--resampling', type=click.Choice([r.name for r in Resampling]),
47 default='nearest', help="Resampling method.",
48 show_default=True)
49 @click.option('--src-nodata', default=None, show_default=True,
50 type=float, help="Manually override source nodata")
51 @click.option('--dst-nodata', default=None, show_default=True,
52 type=float, help="Manually override destination nodata")
53 @click.option('--threads', type=int, default=1,
54 help='Number of processing threads.')
55 @click.option('--check-invert-proj', is_flag=True, default=True,
56 help='Constrain output to valid coordinate region in dst-crs')
57 @options.force_overwrite_opt
58 @options.creation_options
59 @click.pass_context
60 def warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,
61 dst_bounds, res, resampling, src_nodata, dst_nodata, threads,
62 check_invert_proj, force_overwrite, creation_options):
63 """
64 Warp a raster dataset.
65
66 If a template raster is provided using the --like option, the
67 coordinate reference system, affine transform, and dimensions of
68 that raster will be used for the output. In this case --dst-crs,
69 --bounds, --res, and --dimensions options are ignored.
70
71 \b
72 $ rio warp input.tif output.tif --like template.tif
73
74 The output coordinate reference system may be either a PROJ.4 or
75 EPSG:nnnn string,
76
77 \b
78 --dst-crs EPSG:4326
79 --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'
80
81 or a JSON text-encoded PROJ.4 object.
82
83 \b
84 --dst-crs '{"proj": "utm", "zone": 18, ...}'
85
86 If --dimensions are provided, --res and --bounds are ignored.
87 Resolution is calculated based on the relationship between the
88 raster bounds in the target coordinate system and the dimensions,
89 and may produce rectangular rather than square pixels.
90
91 \b
92 $ rio warp input.tif output.tif --dimensions 100 200 \\
93 > --dst-crs EPSG:4326
94
95 If --bounds are provided, --res is required if --dst-crs is provided
96 (defaults to source raster resolution otherwise).
97
98 \b
99 $ rio warp input.tif output.tif \\
100 > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326
101
102 """
103 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
104
105 output, files = resolve_inout(
106 files=files, output=output, force_overwrite=force_overwrite)
107
108 resampling = Resampling[resampling] # get integer code for method
109
110 if not len(res):
111 # Click sets this as an empty tuple if not provided
112 res = None
113 else:
114 # Expand one value to two if needed
115 res = (res[0], res[0]) if len(res) == 1 else res
116
117 with rasterio.Env(CPL_DEBUG=verbosity > 2,
118 CHECK_WITH_INVERT_PROJ=check_invert_proj):
119 with rasterio.open(files[0]) as src:
120 l, b, r, t = src.bounds
121 out_kwargs = src.profile.copy()
122 out_kwargs['driver'] = driver
123
124 # Sort out the bounds options.
125 if src_bounds and dst_bounds:
126 raise click.BadParameter(
127 "--src-bounds and destination --bounds may not be specified "
128 "simultaneously.")
129
130 if like:
131 with rasterio.open(like) as template_ds:
132 dst_crs = template_ds.crs
133 dst_transform = template_ds.affine
134 dst_height = template_ds.height
135 dst_width = template_ds.width
136
137 elif dst_crs is not None:
138 try:
139 dst_crs = CRS.from_string(dst_crs)
140 except ValueError as err:
141 raise click.BadParameter(
142 str(err), param='dst_crs', param_hint='dst_crs')
143
144 if dimensions:
145 # Calculate resolution appropriate for dimensions
146 # in target.
147 dst_width, dst_height = dimensions
148 try:
149 xmin, ymin, xmax, ymax = transform_bounds(
150 src.crs, dst_crs, *src.bounds)
151 except CRSError as err:
152 raise click.BadParameter(
153 str(err), param='dst_crs', param_hint='dst_crs')
154 dst_transform = Affine(
155 (xmax - xmin) / float(dst_width),
156 0, xmin, 0,
157 (ymin - ymax) / float(dst_height),
158 ymax
159 )
160
161 elif src_bounds or dst_bounds:
162 if not res:
163 raise click.BadParameter(
164 "Required when using --bounds.",
165 param='res', param_hint='res')
166
167 if src_bounds:
168 try:
169 xmin, ymin, xmax, ymax = transform_bounds(
170 src.crs, dst_crs, *src_bounds)
171 except CRSError as err:
172 raise click.BadParameter(
173 str(err), param='dst_crs',
174 param_hint='dst_crs')
175 else:
176 xmin, ymin, xmax, ymax = dst_bounds
177
178 dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
179 dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
180 dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)
181
182 else:
183 try:
184 dst_transform, dst_width, dst_height = calculate_default_transform(
185 src.crs, dst_crs, src.width, src.height,
186 *src.bounds, resolution=res)
187 except CRSError as err:
188 raise click.BadParameter(
189 str(err), param='dst_crs', param_hint='dst_crs')
190 elif dimensions:
191 # Same projection, different dimensions, calculate resolution.
192 dst_crs = src.crs
193 dst_width, dst_height = dimensions
194 dst_transform = Affine(
195 (r - l) / float(dst_width),
196 0, l, 0,
197 (b - t) / float(dst_height),
198 t
199 )
200
201 elif src_bounds or dst_bounds:
202 # Same projection, different dimensions and possibly
203 # different resolution.
204 if not res:
205 res = (src.affine.a, -src.affine.e)
206
207 dst_crs = src.crs
208 xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)
209 dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
210 dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
211 dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)
212
213 elif res:
214 # Same projection, different resolution.
215 dst_crs = src.crs
216 dst_transform = Affine(res[0], 0, l, 0, -res[1], t)
217 dst_width = max(int(ceil((r - l) / res[0])), 1)
218 dst_height = max(int(ceil((t - b) / res[1])), 1)
219
220 else:
221 dst_crs = src.crs
222 dst_transform = src.affine
223 dst_width = src.width
224 dst_height = src.height
225
226 # If src_nodata is not None, update the dst metadata NODATA
227 # value to src_nodata (will be overridden by dst_nodata if it is not None
228 if src_nodata is not None:
229 # Update the dst nodata value
230 out_kwargs.update({
231 'nodata': src_nodata
232 })
233
234 # Validate a manually set destination NODATA value
235 # against the input datatype.
236 if dst_nodata is not None:
237 if src_nodata is None and src.meta['nodata'] is None:
238 raise click.BadParameter(
239 "--src-nodata must be provided because dst-nodata is not None")
240 else:
241 # Update the dst nodata value
242 out_kwargs.update({'nodata': dst_nodata})
243
244 # When the bounds option is misused, extreme values of
245 # destination width and height may result.
246 if (dst_width < 0 or dst_height < 0 or
247 dst_width > MAX_OUTPUT_WIDTH or
248 dst_height > MAX_OUTPUT_HEIGHT):
249 raise click.BadParameter(
250 "Invalid output dimensions: {0}.".format(
251 (dst_width, dst_height)))
252
253 out_kwargs.update({
254 'crs': dst_crs,
255 'transform': dst_transform,
256 'affine': dst_transform,
257 'width': dst_width,
258 'height': dst_height
259 })
260
261 # Adjust block size if necessary.
262 if ('blockxsize' in out_kwargs and
263 dst_width < out_kwargs['blockxsize']):
264 del out_kwargs['blockxsize']
265 if ('blockysize' in out_kwargs and
266 dst_height < out_kwargs['blockysize']):
267 del out_kwargs['blockysize']
268
269 out_kwargs.update(**creation_options)
270
271 with rasterio.open(output, 'w', **out_kwargs) as dst:
272 reproject(
273 source=rasterio.band(src, list(range(1, src.count + 1))),
274 destination=rasterio.band(
275 dst, list(range(1, src.count + 1))),
276 src_transform=src.affine,
277 src_crs=src.crs,
278 src_nodata=src_nodata,
279 dst_transform=out_kwargs['transform'],
280 dst_crs=out_kwargs['crs'],
281 dst_nodata=dst_nodata,
282 resampling=resampling,
283 num_threads=threads)
284
[end of rasterio/rio/warp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/rio/warp.py b/rasterio/rio/warp.py
--- a/rasterio/rio/warp.py
+++ b/rasterio/rio/warp.py
@@ -66,7 +66,8 @@
If a template raster is provided using the --like option, the
coordinate reference system, affine transform, and dimensions of
that raster will be used for the output. In this case --dst-crs,
- --bounds, --res, and --dimensions options are ignored.
+ --bounds, --res, and --dimensions options are not applicable and
+ an exception will be raised.
\b
$ rio warp input.tif output.tif --like template.tif
@@ -83,7 +84,8 @@
\b
--dst-crs '{"proj": "utm", "zone": 18, ...}'
- If --dimensions are provided, --res and --bounds are ignored.
+ If --dimensions are provided, --res and --bounds are not applicable and an
+ exception will be raised.
Resolution is calculated based on the relationship between the
raster bounds in the target coordinate system and the dimensions,
and may produce rectangular rather than square pixels.
@@ -114,6 +116,20 @@
# Expand one value to two if needed
res = (res[0], res[0]) if len(res) == 1 else res
+ # Check invalid parameter combinations
+ if like:
+ invalid_combos = (dimensions, dst_bounds, dst_crs, res)
+ if any(p for p in invalid_combos if p is not None):
+ raise click.BadParameter(
+ "--like cannot be used with any of --dimensions, --bounds, "
+ "--dst-crs, or --res")
+
+ elif dimensions:
+ invalid_combos = (dst_bounds, res)
+ if any(p for p in invalid_combos if p is not None):
+ raise click.BadParameter(
+ "--dimensions cannot be used with --bounds or --res")
+
with rasterio.Env(CPL_DEBUG=verbosity > 2,
CHECK_WITH_INVERT_PROJ=check_invert_proj):
with rasterio.open(files[0]) as src:
| {"golden_diff": "diff --git a/rasterio/rio/warp.py b/rasterio/rio/warp.py\n--- a/rasterio/rio/warp.py\n+++ b/rasterio/rio/warp.py\n@@ -66,7 +66,8 @@\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. In this case --dst-crs,\n- --bounds, --res, and --dimensions options are ignored.\n+ --bounds, --res, and --dimensions options are not applicable and\n+ an exception will be raised.\n \n \\b\n $ rio warp input.tif output.tif --like template.tif\n@@ -83,7 +84,8 @@\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n \n- If --dimensions are provided, --res and --bounds are ignored.\n+ If --dimensions are provided, --res and --bounds are not applicable and an\n+ exception will be raised.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n@@ -114,6 +116,20 @@\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n \n+ # Check invalid parameter combinations\n+ if like:\n+ invalid_combos = (dimensions, dst_bounds, dst_crs, res)\n+ if any(p for p in invalid_combos if p is not None):\n+ raise click.BadParameter(\n+ \"--like cannot be used with any of --dimensions, --bounds, \"\n+ \"--dst-crs, or --res\")\n+\n+ elif dimensions:\n+ invalid_combos = (dst_bounds, res)\n+ if any(p for p in invalid_combos if p is not None):\n+ raise click.BadParameter(\n+ \"--dimensions cannot be used with --bounds or --res\")\n+\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n", "issue": "rio warp like silently ignores res override\nThe obvious starting place is `rio warp --like` but this doesn't allow you to override the resolution. It silently ignores the `--res` option which could be considered a bug. \n\n```\n$ rio warp --like b.tif --res 5 a.tif c.tif\n$ rio info --res c.tif\n1.0 1.0\n```\n\nIn this case, warp should either a) override the resolution of the like raster or b) raise an exception saying that it's not supported\n\n", "before_files": [{"content": "import logging\nfrom math import ceil, floor, log\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.crs import CRS\nfrom rasterio.errors import CRSError\nfrom rasterio.transform import Affine\nfrom rasterio.warp import (\n reproject, Resampling, calculate_default_transform, transform_bounds)\n\n\n# Improper usage of rio-warp can lead to accidental creation of\n# extremely large datasets. We'll put a hard limit on the size of\n# datasets and raise a usage error if the limits are exceeded.\nMAX_OUTPUT_WIDTH = 100000\nMAX_OUTPUT_HEIGHT = 100000\n\n\[email protected](short_help='Warp a raster dataset.')\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), and crs.')\[email protected]('--dst-crs', default=None,\n help='Target coordinate reference system.')\[email protected]_opt\[email protected](\n '--src-bounds',\n nargs=4, type=float, default=None,\n help=\"Determine output extent from source bounds: left bottom right top \"\n \". 
Cannot be used with destination --bounds\")\[email protected](\n '--bounds', '--dst-bounds', nargs=4, type=float, default=None,\n help=\"Determine output extent from destination bounds: left bottom right top\")\[email protected]_opt\[email protected]('--resampling', type=click.Choice([r.name for r in Resampling]),\n default='nearest', help=\"Resampling method.\",\n show_default=True)\[email protected]('--src-nodata', default=None, show_default=True,\n type=float, help=\"Manually override source nodata\")\[email protected]('--dst-nodata', default=None, show_default=True,\n type=float, help=\"Manually override destination nodata\")\[email protected]('--threads', type=int, default=1,\n help='Number of processing threads.')\[email protected]('--check-invert-proj', is_flag=True, default=True,\n help='Constrain output to valid coordinate region in dst-crs')\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,\n dst_bounds, res, resampling, src_nodata, dst_nodata, threads,\n check_invert_proj, force_overwrite, creation_options):\n \"\"\"\n Warp a raster dataset.\n\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. In this case --dst-crs,\n --bounds, --res, and --dimensions options are ignored.\n\n \\b\n $ rio warp input.tif output.tif --like template.tif\n\n The output coordinate reference system may be either a PROJ.4 or\n EPSG:nnnn string,\n\n \\b\n --dst-crs EPSG:4326\n --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'\n\n or a JSON text-encoded PROJ.4 object.\n\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n\n If --dimensions are provided, --res and --bounds are ignored.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n\n \\b\n $ rio warp input.tif output.tif --dimensions 100 200 \\\\\n > --dst-crs EPSG:4326\n\n If --bounds are provided, --res is required if --dst-crs is provided\n (defaults to source raster resolution otherwise).\n\n \\b\n $ rio warp input.tif output.tif \\\\\n > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326\n\n \"\"\"\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n\n output, files = resolve_inout(\n files=files, output=output, force_overwrite=force_overwrite)\n\n resampling = Resampling[resampling] # get integer code for method\n\n if not len(res):\n # Click sets this as an empty tuple if not provided\n res = None\n else:\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n l, b, r, t = src.bounds\n out_kwargs = src.profile.copy()\n out_kwargs['driver'] = driver\n\n # Sort out the bounds options.\n if src_bounds and dst_bounds:\n raise click.BadParameter(\n \"--src-bounds and destination --bounds may not be specified \"\n \"simultaneously.\")\n\n if like:\n with rasterio.open(like) as template_ds:\n dst_crs = template_ds.crs\n dst_transform = template_ds.affine\n dst_height = template_ds.height\n dst_width = template_ds.width\n\n elif dst_crs is not None:\n try:\n dst_crs = CRS.from_string(dst_crs)\n except ValueError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n\n 
if dimensions:\n # Calculate resolution appropriate for dimensions\n # in target.\n dst_width, dst_height = dimensions\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src.bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n dst_transform = Affine(\n (xmax - xmin) / float(dst_width),\n 0, xmin, 0,\n (ymin - ymax) / float(dst_height),\n ymax\n )\n\n elif src_bounds or dst_bounds:\n if not res:\n raise click.BadParameter(\n \"Required when using --bounds.\",\n param='res', param_hint='res')\n\n if src_bounds:\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src_bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs',\n param_hint='dst_crs')\n else:\n xmin, ymin, xmax, ymax = dst_bounds\n\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n else:\n try:\n dst_transform, dst_width, dst_height = calculate_default_transform(\n src.crs, dst_crs, src.width, src.height,\n *src.bounds, resolution=res)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n elif dimensions:\n # Same projection, different dimensions, calculate resolution.\n dst_crs = src.crs\n dst_width, dst_height = dimensions\n dst_transform = Affine(\n (r - l) / float(dst_width),\n 0, l, 0,\n (b - t) / float(dst_height),\n t\n )\n\n elif src_bounds or dst_bounds:\n # Same projection, different dimensions and possibly\n # different resolution.\n if not res:\n res = (src.affine.a, -src.affine.e)\n\n dst_crs = src.crs\n xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n elif res:\n # Same projection, different resolution.\n dst_crs = src.crs\n dst_transform = Affine(res[0], 0, l, 0, -res[1], t)\n dst_width = max(int(ceil((r - l) / res[0])), 1)\n dst_height = max(int(ceil((t - b) / res[1])), 1)\n\n else:\n dst_crs = src.crs\n dst_transform = src.affine\n dst_width = src.width\n dst_height = src.height\n\n # If src_nodata is not None, update the dst metadata NODATA\n # value to src_nodata (will be overridden by dst_nodata if it is not None\n if src_nodata is not None:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': src_nodata\n })\n\n # Validate a manually set destination NODATA value\n # against the input datatype.\n if dst_nodata is not None:\n if src_nodata is None and src.meta['nodata'] is None:\n raise click.BadParameter(\n \"--src-nodata must be provided because dst-nodata is not None\")\n else:\n # Update the dst nodata value\n out_kwargs.update({'nodata': dst_nodata})\n\n # When the bounds option is misused, extreme values of\n # destination width and height may result.\n if (dst_width < 0 or dst_height < 0 or\n dst_width > MAX_OUTPUT_WIDTH or\n dst_height > MAX_OUTPUT_HEIGHT):\n raise click.BadParameter(\n \"Invalid output dimensions: {0}.\".format(\n (dst_width, dst_height)))\n\n out_kwargs.update({\n 'crs': dst_crs,\n 'transform': dst_transform,\n 'affine': dst_transform,\n 'width': dst_width,\n 'height': dst_height\n })\n\n # Adjust block size if necessary.\n if ('blockxsize' in out_kwargs and\n dst_width < out_kwargs['blockxsize']):\n del out_kwargs['blockxsize']\n if ('blockysize' in out_kwargs and\n dst_height < 
out_kwargs['blockysize']):\n del out_kwargs['blockysize']\n\n out_kwargs.update(**creation_options)\n\n with rasterio.open(output, 'w', **out_kwargs) as dst:\n reproject(\n source=rasterio.band(src, list(range(1, src.count + 1))),\n destination=rasterio.band(\n dst, list(range(1, src.count + 1))),\n src_transform=src.affine,\n src_crs=src.crs,\n src_nodata=src_nodata,\n dst_transform=out_kwargs['transform'],\n dst_crs=out_kwargs['crs'],\n dst_nodata=dst_nodata,\n resampling=resampling,\n num_threads=threads)\n", "path": "rasterio/rio/warp.py"}]} | 3,894 | 498 |
gh_patches_debug_15350 | rasdani/github-patches | git_diff | mkdocs__mkdocs-418 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mkdocs build cleaning removes .git when site_dir points to a parent directory
`mkdocs build --clean` wipes out the project when attempting to split doc development and mkdocs' build output across two git branches (gh-pages, gh-pages-dev) with the following layout:
```
<branch: gh-pages-dev>
$PROJ_ROOT/
|- dev
` |- doc/
   `- mkdocs.yml  # NOTE: site_dir=../
<branch: gh-pages>
$PROJ_ROOT/
`- ...  # build output
```
This is so I can both keep everything in the same project and also track the dev/ directory on the dev branch and have the output where it should be on the gh-pages branch. It seems obvious now that this would wipe out everything including the .git/ for the project (glad this was a test). Possibly, it could recursively check `site_dir` for a .git, warn and exit. Or just a disclaimer [here](http://www.mkdocs.org/user-guide/configuration/#build-directories) of this possibility would be enough, so no one else tries this and wipes out their project (and maybe you could recommend to new users how to maintain site development and build output in the same repo).
Thanks,
Kris
</issue>
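The reporter's interim suggestion above — recursively checking `site_dir` for a `.git` directory and refusing to clean — could look roughly like the sketch below. This illustrates the proposal only, not the change the project ultimately made, and the names are illustrative:

```
# Illustrative guard for the reporter's proposal: refuse to clean a
# directory tree that contains a git repository.
import os
import sys

def refuse_to_clean_repo(site_dir):
    for root, dirs, _files in os.walk(site_dir):
        if '.git' in dirs:
            sys.exit("Refusing to clean %s: found %s"
                     % (site_dir, os.path.join(root, '.git')))
```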
<code>
[start of mkdocs/utils.py]
1 # coding: utf-8
2
3 """
4 Standalone file utils.
5
6 Nothing in this module should have an knowledge of config or the layout
7 and structure of the site and pages in the site.
8 """
9
10 import os
11 import shutil
12
13 from mkdocs.compat import urlparse
14
15
16 def copy_file(source_path, output_path):
17 """
18 Copy source_path to output_path, making sure any parent directories exist.
19 """
20 output_dir = os.path.dirname(output_path)
21 if not os.path.exists(output_dir):
22 os.makedirs(output_dir)
23 shutil.copy(source_path, output_path)
24
25
26 def write_file(content, output_path):
27 """
28 Write content to output_path, making sure any parent directories exist.
29 """
30 output_dir = os.path.dirname(output_path)
31 if not os.path.exists(output_dir):
32 os.makedirs(output_dir)
33 open(output_path, 'wb').write(content)
34
35
36 def clean_directory(directory):
37 """
38 Remove the content of a directory recursively but not the directory itself.
39 """
40 if os.path.exists(directory):
41 for entry in os.listdir(directory):
42 path = os.path.join(directory, entry)
43 if os.path.isdir(path):
44 shutil.rmtree(path, True)
45 else:
46 os.unlink(path)
47
48
49 def copy_media_files(from_dir, to_dir):
50 """
51 Recursively copy all files except markdown and HTML into another directory.
52 """
53 for (source_dir, dirnames, filenames) in os.walk(from_dir):
54 relative_path = os.path.relpath(source_dir, from_dir)
55 output_dir = os.path.normpath(os.path.join(to_dir, relative_path))
56
57 # Filter filenames starting with a '.'
58 filenames = [f for f in filenames if not f.startswith('.')]
59
60 # Filter the dirnames that start with a '.' and update the list in
61 # place to prevent us walking these.
62 dirnames[:] = [d for d in dirnames if not d.startswith('.')]
63
64 for filename in filenames:
65 if not is_markdown_file(filename) and not is_html_file(filename):
66 source_path = os.path.join(source_dir, filename)
67 output_path = os.path.join(output_dir, filename)
68 copy_file(source_path, output_path)
69
70
71 def get_html_path(path):
72 """
73 Map a source file path to an output html path.
74
75 Paths like 'index.md' will be converted to 'index.html'
76 Paths like 'about.md' will be converted to 'about/index.html'
77 Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'
78 """
79 path = os.path.splitext(path)[0]
80 if os.path.basename(path) == 'index':
81 return path + '.html'
82 return "/".join((path, 'index.html'))
83
84
85 def get_url_path(path, use_directory_urls=True):
86 """
87 Map a source file path to an output html path.
88
89 Paths like 'index.md' will be converted to '/'
90 Paths like 'about.md' will be converted to '/about/'
91 Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'
92
93 If `use_directory_urls` is `False`, returned URLs will include the a trailing
94 `index.html` rather than just returning the directory path.
95 """
96 path = get_html_path(path)
97 url = '/' + path.replace(os.path.sep, '/')
98 if use_directory_urls:
99 return url[:-len('index.html')]
100 return url
101
102
103 def is_homepage(path):
104 return os.path.splitext(path)[0] == 'index'
105
106
107 def is_markdown_file(path):
108 """
109 Return True if the given file path is a Markdown file.
110
111 http://superuser.com/questions/249436/file-extension-for-markdown-files
112 """
113 ext = os.path.splitext(path)[1].lower()
114 return ext in [
115 '.markdown',
116 '.mdown',
117 '.mkdn',
118 '.mkd',
119 '.md',
120 ]
121
122
123 def is_css_file(path):
124 """
125 Return True if the given file path is a CSS file.
126 """
127 ext = os.path.splitext(path)[1].lower()
128 return ext in [
129 '.css',
130 ]
131
132
133 def is_javascript_file(path):
134 """
135 Return True if the given file path is a Javascript file.
136 """
137 ext = os.path.splitext(path)[1].lower()
138 return ext in [
139 '.js',
140 '.javascript'
141 ]
142
143
144 def is_html_file(path):
145 """
146 Return True if the given file path is an HTML file.
147 """
148 ext = os.path.splitext(path)[1].lower()
149 return ext in [
150 '.html',
151 '.htm',
152 ]
153
154
155 def create_media_urls(nav, url_list):
156 """
157 Return a list of URLs that have been processed correctly for inclusion in a page.
158 """
159 final_urls = []
160 for url in url_list:
161 # Allow links to fully qualified URL's
162 parsed = urlparse(url)
163 if parsed.netloc:
164 final_urls.append(url)
165 else:
166 relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)
167 final_urls.append(relative_url)
168 return final_urls
169
170
171 def create_relative_media_url(nav, url):
172 """
173 For a current page, create a relative url based on the given URL.
174
175 On index.md (which becomes /index.html):
176 image.png -> ./image.png
177 /image.png -> ./image.png
178
179 on sub/page.md (which becomes /sub/page/index.html):
180 image.png -> ../image.png
181 /image.png -> ../../image.png
182
183 """
184
185 # Allow links to fully qualified URL's
186 parsed = urlparse(url)
187 if parsed.netloc:
188 return url
189
190 # If the URL we are looking at starts with a /, then it should be
191 # considered as absolute and will be 'relative' to the root.
192 if url.startswith('/'):
193 base = '/'
194 url = url[1:]
195 else:
196 base = nav.url_context.base_path
197
198 relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)
199
200 # TODO: Fix this, this is a hack. Relative urls are not being calculated
201 # correctly for images in the same directory as the markdown. I think this
202 # is due to us moving it into a directory with index.html, but I'm not sure
203 if nav.url_context.base_path is not '/' and relative_url.startswith("./"):
204 relative_url = ".%s" % relative_url
205
206 return relative_url
207
[end of mkdocs/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/utils.py b/mkdocs/utils.py
--- a/mkdocs/utils.py
+++ b/mkdocs/utils.py
@@ -37,13 +37,21 @@
"""
Remove the content of a directory recursively but not the directory itself.
"""
- if os.path.exists(directory):
- for entry in os.listdir(directory):
- path = os.path.join(directory, entry)
- if os.path.isdir(path):
- shutil.rmtree(path, True)
- else:
- os.unlink(path)
+ if not os.path.exists(directory):
+ return
+
+ for entry in os.listdir(directory):
+
+ # Don't remove hidden files from the directory. We never copy files
+ # that are hidden, so we shouldn't delete them either.
+ if entry.startswith('.'):
+ continue
+
+ path = os.path.join(directory, entry)
+ if os.path.isdir(path):
+ shutil.rmtree(path, True)
+ else:
+ os.unlink(path)
def copy_media_files(from_dir, to_dir):
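The change makes `clean_directory` skip hidden entries, so a `site_dir` that resolves to the repository root no longer has its `.git` (or other dotfiles) removed. A condensed sketch of the patched behaviour:

```
# Condensed sketch of the patched clean_directory (mirrors the diff above).
import os
import shutil

def clean_directory(directory):
    if not os.path.exists(directory):
        return
    for entry in os.listdir(directory):
        if entry.startswith('.'):   # hidden entries such as .git are kept
            continue
        path = os.path.join(directory, entry)
        if os.path.isdir(path):
            shutil.rmtree(path, True)
        else:
            os.unlink(path)
```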
| {"golden_diff": "diff --git a/mkdocs/utils.py b/mkdocs/utils.py\n--- a/mkdocs/utils.py\n+++ b/mkdocs/utils.py\n@@ -37,13 +37,21 @@\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n- if os.path.exists(directory):\n- for entry in os.listdir(directory):\n- path = os.path.join(directory, entry)\n- if os.path.isdir(path):\n- shutil.rmtree(path, True)\n- else:\n- os.unlink(path)\n+ if not os.path.exists(directory):\n+ return\n+\n+ for entry in os.listdir(directory):\n+\n+ # Don't remove hidden files from the directory. We never copy files\n+ # that are hidden, so we shouldn't delete them either.\n+ if entry.startswith('.'):\n+ continue\n+\n+ path = os.path.join(directory, entry)\n+ if os.path.isdir(path):\n+ shutil.rmtree(path, True)\n+ else:\n+ os.unlink(path)\n \n \n def copy_media_files(from_dir, to_dir):\n", "issue": "mkdocs build cleaning removes .git when site_dir points to a parent directory\n`mkdocs build --clean` wipes out the project when attempting to split doc development and mkdocs' build output across two git branches (gh-pages, gh-pages-dev) with the following layout:\n\n```\n<branch: gh-pages-dev>\n$PROJ_ROOT/\n|- dev\n` |- doc/\n `- mkdoc.yml \\# NOTE: site_dir=../\n\n<branch: gh-pages>\n$PROJ_ROOT/\n`- ... \\# build output\n```\n\nThis is so I can both keep everything in the same project and also track the dev/ directory on the dev branch and have the output where it should be on the gh-pages branch. It seems obvious now that this would wipe out everything including the .git/ for the project (glad this was a test). Possibly, it could recursively check `site_dir` for a .git, warn and exit. Or just a disclaimer [here](http://www.mkdocs.org/user-guide/configuration/#build-directories) of this possibility would be enough, so no one else tries this and wipes out their project (and maybe you could recommend to new users how to maintain site development and build output in the same repo).\n\nThanks,\nKris\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nimport os\nimport shutil\n\nfrom mkdocs.compat import urlparse\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n shutil.copy(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n open(output_path, 'wb').write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if os.path.exists(directory):\n for entry in os.listdir(directory):\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef copy_media_files(from_dir, to_dir):\n \"\"\"\n Recursively copy all files except markdown and HTML into another directory.\n \"\"\"\n for (source_dir, dirnames, filenames) in os.walk(from_dir):\n relative_path = os.path.relpath(source_dir, from_dir)\n output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n\n # Filter filenames starting with a '.'\n 
filenames = [f for f in filenames if not f.startswith('.')]\n\n # Filter the dirnames that start with a '.' and update the list in\n # place to prevent us walking these.\n dirnames[:] = [d for d in dirnames if not d.startswith('.')]\n\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n output_path = os.path.join(output_dir, filename)\n copy_file(source_path, output_path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_homepage(path):\n return os.path.splitext(path)[0] == 'index'\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n http://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md',\n ]\n\n\ndef is_css_file(path):\n \"\"\"\n Return True if the given file path is a CSS file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.css',\n ]\n\n\ndef is_javascript_file(path):\n \"\"\"\n Return True if the given file path is a Javascript file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.js',\n '.javascript'\n ]\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef create_media_urls(nav, url_list):\n \"\"\"\n Return a list of URLs that have been processed correctly for inclusion in a page.\n \"\"\"\n final_urls = []\n for url in url_list:\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n final_urls.append(url)\n else:\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n\n\ndef create_relative_media_url(nav, url):\n \"\"\"\n For a current page, create a relative url based on the given URL.\n\n On index.md (which becomes /index.html):\n image.png -> ./image.png\n /image.png -> ./image.png\n\n on sub/page.md (which becomes /sub/page/index.html):\n image.png -> ../image.png\n /image.png -> ../../image.png\n\n \"\"\"\n\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n return url\n\n # If the URL we are looking at starts with a /, then it should be\n # considered as absolute and will be 'relative' to the root.\n if url.startswith('/'):\n base = '/'\n url = url[1:]\n else:\n base = 
nav.url_context.base_path\n\n relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)\n\n # TODO: Fix this, this is a hack. Relative urls are not being calculated\n # correctly for images in the same directory as the markdown. I think this\n # is due to us moving it into a directory with index.html, but I'm not sure\n if nav.url_context.base_path is not '/' and relative_url.startswith(\"./\"):\n relative_url = \".%s\" % relative_url\n\n return relative_url\n", "path": "mkdocs/utils.py"}]} | 2,698 | 234 |
gh_patches_debug_32903 | rasdani/github-patches | git_diff | netbox-community__netbox-11331 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Initial NetBox 3.4.x migration fails when a plugin using SearchIndex is installed
### NetBox version
3.4.1
### Python version
3.8
### Steps to Reproduce
1. Install NetBox 3.4.1
2. Install any plugin that uses SearchIndex functionality (e.g. netbox-dns) and configure it in PLUGINS
3. Run the initial migration
### Expected Behavior
The migration succeeds
### Observed Behavior
The migration fails with ProgrammingError exception:
```
Operations to perform:
Apply all migrations: admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, netbox_dns, sessions, social_django, taggit, tenancy, users, virtualization, wireless
Running migrations:
Applying extras.0083_search...Reindexing 67 models.
Clearing cached values... 0 entries deleted.
Indexing models
netbox_dns.nameserver... Traceback (most recent call last):
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "netbox_dns_nameserver" does not exist
LINE 1: ...name", "netbox_dns_nameserver"."description" FROM "netbox_dn...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/netbox/netbox/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 96, in wrapped
res = handle_func(*args, **kwargs)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 349, in handle
post_migrate_state = executor.migrate(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py", line 135, in migrate
state = self._migrate_all_forwards(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py", line 167, in _migrate_all_forwards
state = self.apply_migration(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py", line 252, in apply_migration
state = migration.apply(state, schema_editor)
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/migration.py", line 130, in apply
operation.database_forwards(
File "/opt/netbox/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 193, in database_forwards
self.code(from_state.apps, schema_editor)
File "/opt/netbox/netbox/extras/migrations/0083_search.py", line 13, in reindex
management.call_command('reindex')
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py", line 198, in call_command
return command.execute(*args, **defaults)
File "/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/opt/netbox/netbox/extras/management/commands/reindex.py", line 68, in handle
i = search_backend.cache(model.objects.iterator(), remove_existing=False)
File "/opt/netbox/netbox/netbox/search/backends.py", line 148, in cache
for instance in instances:
File "/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py", line 512, in _iterator
yield from iterable
File "/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py", line 87, in __iter__
results = compiler.execute_sql(
File "/opt/netbox/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1398, in execute_sql
cursor.execute(sql, params)
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 103, in execute
return super().execute(sql, params)
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/opt/netbox/lib/python3.8/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "netbox_dns_nameserver" does not exist
LINE 1: ...name", "netbox_dns_nameserver"."description" FROM "netbox_dn...
```
The `netbox_dns_nameserver` relation for the `netbox_dns.models.NameServer` model uses `SearchIndex`:
```
@register_search
class NameServerIndex(SearchIndex):
model = NameServer
fields = (
("name", 100),
("description", 500),
)
```
That results in the NetBox migration `extras.0083_search` trying to reindex the plugin's relation, which does not exist yet at that point.
The obvious workaround is to disable the plugin, run the migration, re-enable the plugin and then run the migration again.
</issue>
<code>
[start of netbox/extras/management/commands/reindex.py]
1 from django.contrib.contenttypes.models import ContentType
2 from django.core.management.base import BaseCommand, CommandError
3
4 from netbox.registry import registry
5 from netbox.search.backends import search_backend
6
7
8 class Command(BaseCommand):
9 help = 'Reindex objects for search'
10
11 def add_arguments(self, parser):
12 parser.add_argument(
13 'args',
14 metavar='app_label[.ModelName]',
15 nargs='*',
16 help='One or more apps or models to reindex',
17 )
18
19 def _get_indexers(self, *model_names):
20 indexers = {}
21
22 # No models specified; pull in all registered indexers
23 if not model_names:
24 for idx in registry['search'].values():
25 indexers[idx.model] = idx
26
27 # Return only indexers for the specified models
28 else:
29 for label in model_names:
30 try:
31 app_label, model_name = label.lower().split('.')
32 except ValueError:
33 raise CommandError(
34 f"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>."
35 )
36 try:
37 idx = registry['search'][f'{app_label}.{model_name}']
38 indexers[idx.model] = idx
39 except KeyError:
40 raise CommandError(f"No indexer registered for {label}")
41
42 return indexers
43
44 def handle(self, *model_labels, **kwargs):
45
46 # Determine which models to reindex
47 indexers = self._get_indexers(*model_labels)
48 if not indexers:
49 raise CommandError("No indexers found!")
50 self.stdout.write(f'Reindexing {len(indexers)} models.')
51
52 # Clear all cached values for the specified models
53 self.stdout.write('Clearing cached values... ', ending='')
54 self.stdout.flush()
55 content_types = [
56 ContentType.objects.get_for_model(model) for model in indexers.keys()
57 ]
58 deleted_count = search_backend.clear(content_types)
59 self.stdout.write(f'{deleted_count} entries deleted.')
60
61 # Index models
62 self.stdout.write('Indexing models')
63 for model, idx in indexers.items():
64 app_label = model._meta.app_label
65 model_name = model._meta.model_name
66 self.stdout.write(f' {app_label}.{model_name}... ', ending='')
67 self.stdout.flush()
68 i = search_backend.cache(model.objects.iterator(), remove_existing=False)
69 if i:
70 self.stdout.write(f'{i} entries cached.')
71 else:
72 self.stdout.write(f'None found.')
73
74 msg = f'Completed.'
75 if total_count := search_backend.size:
76 msg += f' Total entries: {total_count}'
77 self.stdout.write(msg, self.style.SUCCESS)
78
[end of netbox/extras/management/commands/reindex.py]
[start of netbox/extras/migrations/0083_search.py]
1 import sys
2 import uuid
3
4 import django.db.models.deletion
5 import django.db.models.lookups
6 from django.core import management
7 from django.db import migrations, models
8
9
10 def reindex(apps, schema_editor):
11 # Build the search index (except during tests)
12 if 'test' not in sys.argv:
13 management.call_command('reindex')
14
15
16 class Migration(migrations.Migration):
17
18 dependencies = [
19 ('circuits', '0041_standardize_description_comments'),
20 ('contenttypes', '0002_remove_content_type_name'),
21 ('dcim', '0166_virtualdevicecontext'),
22 ('extras', '0082_savedfilter'),
23 ('ipam', '0063_standardize_description_comments'),
24 ('tenancy', '0009_standardize_description_comments'),
25 ('virtualization', '0034_standardize_description_comments'),
26 ('wireless', '0008_wirelesslan_status'),
27 ]
28
29 operations = [
30 migrations.AddField(
31 model_name='customfield',
32 name='search_weight',
33 field=models.PositiveSmallIntegerField(default=1000),
34 ),
35 migrations.CreateModel(
36 name='CachedValue',
37 fields=[
38 ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
39 ('timestamp', models.DateTimeField(auto_now_add=True)),
40 ('object_id', models.PositiveBigIntegerField()),
41 ('field', models.CharField(max_length=200)),
42 ('type', models.CharField(max_length=30)),
43 ('value', models.TextField()),
44 ('weight', models.PositiveSmallIntegerField(default=1000)),
45 ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),
46 ],
47 options={
48 'ordering': ('weight', 'object_type', 'object_id'),
49 },
50 ),
51 migrations.RunPython(
52 code=reindex,
53 reverse_code=migrations.RunPython.noop
54 ),
55 ]
56
[end of netbox/extras/migrations/0083_search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/extras/management/commands/reindex.py b/netbox/extras/management/commands/reindex.py
--- a/netbox/extras/management/commands/reindex.py
+++ b/netbox/extras/management/commands/reindex.py
@@ -27,17 +27,28 @@
# Return only indexers for the specified models
else:
for label in model_names:
- try:
- app_label, model_name = label.lower().split('.')
- except ValueError:
+ labels = label.lower().split('.')
+
+ # Label specifies an exact model
+ if len(labels) == 2:
+ app_label, model_name = labels
+ try:
+ idx = registry['search'][f'{app_label}.{model_name}']
+ indexers[idx.model] = idx
+ except KeyError:
+ raise CommandError(f"No indexer registered for {label}")
+
+ # Label specifies all the models of an app
+ elif len(labels) == 1:
+ app_label = labels[0] + '.'
+ for indexer_label, idx in registry['search'].items():
+ if indexer_label.startswith(app_label):
+ indexers[idx.model] = idx
+
+ else:
raise CommandError(
- f"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>."
+ f"Invalid model: {label}. Model names must be in the format <app_label> or <app_label>.<model_name>."
)
- try:
- idx = registry['search'][f'{app_label}.{model_name}']
- indexers[idx.model] = idx
- except KeyError:
- raise CommandError(f"No indexer registered for {label}")
return indexers
diff --git a/netbox/extras/migrations/0083_search.py b/netbox/extras/migrations/0083_search.py
--- a/netbox/extras/migrations/0083_search.py
+++ b/netbox/extras/migrations/0083_search.py
@@ -10,7 +10,16 @@
def reindex(apps, schema_editor):
# Build the search index (except during tests)
if 'test' not in sys.argv:
- management.call_command('reindex')
+ management.call_command(
+ 'reindex',
+ 'circuits',
+ 'dcim',
+ 'extras',
+ 'ipam',
+ 'tenancy',
+ 'virtualization',
+ 'wireless',
+ )
class Migration(migrations.Migration):
| {"golden_diff": "diff --git a/netbox/extras/management/commands/reindex.py b/netbox/extras/management/commands/reindex.py\n--- a/netbox/extras/management/commands/reindex.py\n+++ b/netbox/extras/management/commands/reindex.py\n@@ -27,17 +27,28 @@\n # Return only indexers for the specified models\n else:\n for label in model_names:\n- try:\n- app_label, model_name = label.lower().split('.')\n- except ValueError:\n+ labels = label.lower().split('.')\n+\n+ # Label specifies an exact model\n+ if len(labels) == 2:\n+ app_label, model_name = labels\n+ try:\n+ idx = registry['search'][f'{app_label}.{model_name}']\n+ indexers[idx.model] = idx\n+ except KeyError:\n+ raise CommandError(f\"No indexer registered for {label}\")\n+\n+ # Label specifies all the models of an app\n+ elif len(labels) == 1:\n+ app_label = labels[0] + '.'\n+ for indexer_label, idx in registry['search'].items():\n+ if indexer_label.startswith(app_label):\n+ indexers[idx.model] = idx\n+\n+ else:\n raise CommandError(\n- f\"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>.\"\n+ f\"Invalid model: {label}. Model names must be in the format <app_label> or <app_label>.<model_name>.\"\n )\n- try:\n- idx = registry['search'][f'{app_label}.{model_name}']\n- indexers[idx.model] = idx\n- except KeyError:\n- raise CommandError(f\"No indexer registered for {label}\")\n \n return indexers\n \ndiff --git a/netbox/extras/migrations/0083_search.py b/netbox/extras/migrations/0083_search.py\n--- a/netbox/extras/migrations/0083_search.py\n+++ b/netbox/extras/migrations/0083_search.py\n@@ -10,7 +10,16 @@\n def reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n- management.call_command('reindex')\n+ management.call_command(\n+ 'reindex',\n+ 'circuits',\n+ 'dcim',\n+ 'extras',\n+ 'ipam',\n+ 'tenancy',\n+ 'virtualization',\n+ 'wireless',\n+ )\n \n \n class Migration(migrations.Migration):\n", "issue": "Initial NetBox 3.4.x migration fails when a plugin using SearchIndex is installed\n### NetBox version\n\n3.4.1\n\n### Python version\n\n3.8\n\n### Steps to Reproduce\n\n1. Install NetBox 3.4.1\r\n2. Install any plugin that uses SearchIndex functionality (e.g. netbox-dns) and configure it in PLUGINS\r\n3. Run the initial migration\n\n### Expected Behavior\n\nThe migration succeeds\n\n### Observed Behavior\n\nThe migration fails with ProgrammingError exception:\r\n```\r\nOperations to perform:\r\n Apply all migrations: admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, netbox_dns, sessions, social_django, taggit, tenancy, users, virtualization, wireless\r\nRunning migrations:\r\n Applying extras.0083_search...Reindexing 67 models.\r\nClearing cached values... 0 entries deleted.\r\nIndexing models\r\n netbox_dns.nameserver... 
Traceback (most recent call last):\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\npsycopg2.errors.UndefinedTable: relation \"netbox_dns_nameserver\" does not exist\r\nLINE 1: ...name\", \"netbox_dns_nameserver\".\"description\" FROM \"netbox_dn...\r\n ^\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/netbox/netbox/manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py\", line 446, in execute_from_command_line\r\n utility.execute()\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py\", line 440, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 402, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 448, in execute\r\n output = self.handle(*args, **options)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 96, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/commands/migrate.py\", line 349, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 135, in migrate\r\n state = self._migrate_all_forwards(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 167, in _migrate_all_forwards\r\n state = self.apply_migration(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 252, in apply_migration\r\n state = migration.apply(state, schema_editor)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/migration.py\", line 130, in apply\r\n operation.database_forwards(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/migrations/operations/special.py\", line 193, in database_forwards\r\n self.code(from_state.apps, schema_editor)\r\n File \"/opt/netbox/netbox/extras/migrations/0083_search.py\", line 13, in reindex\r\n management.call_command('reindex')\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/__init__.py\", line 198, in call_command\r\n return command.execute(*args, **defaults)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/core/management/base.py\", line 448, in execute\r\n output = self.handle(*args, **options)\r\n File \"/opt/netbox/netbox/extras/management/commands/reindex.py\", line 68, in handle\r\n i = search_backend.cache(model.objects.iterator(), remove_existing=False)\r\n File \"/opt/netbox/netbox/netbox/search/backends.py\", line 148, in cache\r\n for instance in instances:\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py\", line 512, in _iterator\r\n yield from iterable\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/models/query.py\", line 87, in __iter__\r\n results = compiler.execute_sql(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/models/sql/compiler.py\", line 1398, in execute_sql\r\n cursor.execute(sql, params)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 103, in execute\r\n return super().execute(sql, 
params)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/opt/netbox/lib/python3.8/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\ndjango.db.utils.ProgrammingError: relation \"netbox_dns_nameserver\" does not exist\r\nLINE 1: ...name\", \"netbox_dns_nameserver\".\"description\" FROM \"netbox_dn...\r\n```\r\n\r\nThe `netbox_dns_nameserver` relation for the `netbox_dns.models.NameServer` model uses `SearchIndex`:\r\n```\r\n@register_search\r\nclass NameServerIndex(SearchIndex):\r\n model = NameServer\r\n fields = (\r\n (\"name\", 100),\r\n (\"description\", 500),\r\n )\r\n```\r\n\r\nThat results in the NetBox migration `extras.0083_search` trying to reindex the plugin's relation, which does not exist yet at that point.\r\n\r\nThe obvious workaround is to disable the plugin, run the migration, re-enable the plugin and then run the migration again. \n", "before_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom netbox.registry import registry\nfrom netbox.search.backends import search_backend\n\n\nclass Command(BaseCommand):\n help = 'Reindex objects for search'\n\n def add_arguments(self, parser):\n parser.add_argument(\n 'args',\n metavar='app_label[.ModelName]',\n nargs='*',\n help='One or more apps or models to reindex',\n )\n\n def _get_indexers(self, *model_names):\n indexers = {}\n\n # No models specified; pull in all registered indexers\n if not model_names:\n for idx in registry['search'].values():\n indexers[idx.model] = idx\n\n # Return only indexers for the specified models\n else:\n for label in model_names:\n try:\n app_label, model_name = label.lower().split('.')\n except ValueError:\n raise CommandError(\n f\"Invalid model: {label}. Model names must be in the format <app_label>.<model_name>.\"\n )\n try:\n idx = registry['search'][f'{app_label}.{model_name}']\n indexers[idx.model] = idx\n except KeyError:\n raise CommandError(f\"No indexer registered for {label}\")\n\n return indexers\n\n def handle(self, *model_labels, **kwargs):\n\n # Determine which models to reindex\n indexers = self._get_indexers(*model_labels)\n if not indexers:\n raise CommandError(\"No indexers found!\")\n self.stdout.write(f'Reindexing {len(indexers)} models.')\n\n # Clear all cached values for the specified models\n self.stdout.write('Clearing cached values... ', ending='')\n self.stdout.flush()\n content_types = [\n ContentType.objects.get_for_model(model) for model in indexers.keys()\n ]\n deleted_count = search_backend.clear(content_types)\n self.stdout.write(f'{deleted_count} entries deleted.')\n\n # Index models\n self.stdout.write('Indexing models')\n for model, idx in indexers.items():\n app_label = model._meta.app_label\n model_name = model._meta.model_name\n self.stdout.write(f' {app_label}.{model_name}... 
', ending='')\n self.stdout.flush()\n i = search_backend.cache(model.objects.iterator(), remove_existing=False)\n if i:\n self.stdout.write(f'{i} entries cached.')\n else:\n self.stdout.write(f'None found.')\n\n msg = f'Completed.'\n if total_count := search_backend.size:\n msg += f' Total entries: {total_count}'\n self.stdout.write(msg, self.style.SUCCESS)\n", "path": "netbox/extras/management/commands/reindex.py"}, {"content": "import sys\nimport uuid\n\nimport django.db.models.deletion\nimport django.db.models.lookups\nfrom django.core import management\nfrom django.db import migrations, models\n\n\ndef reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n management.call_command('reindex')\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('circuits', '0041_standardize_description_comments'),\n ('contenttypes', '0002_remove_content_type_name'),\n ('dcim', '0166_virtualdevicecontext'),\n ('extras', '0082_savedfilter'),\n ('ipam', '0063_standardize_description_comments'),\n ('tenancy', '0009_standardize_description_comments'),\n ('virtualization', '0034_standardize_description_comments'),\n ('wireless', '0008_wirelesslan_status'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='customfield',\n name='search_weight',\n field=models.PositiveSmallIntegerField(default=1000),\n ),\n migrations.CreateModel(\n name='CachedValue',\n fields=[\n ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),\n ('timestamp', models.DateTimeField(auto_now_add=True)),\n ('object_id', models.PositiveBigIntegerField()),\n ('field', models.CharField(max_length=200)),\n ('type', models.CharField(max_length=30)),\n ('value', models.TextField()),\n ('weight', models.PositiveSmallIntegerField(default=1000)),\n ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),\n ],\n options={\n 'ordering': ('weight', 'object_type', 'object_id'),\n },\n ),\n migrations.RunPython(\n code=reindex,\n reverse_code=migrations.RunPython.noop\n ),\n ]\n", "path": "netbox/extras/migrations/0083_search.py"}]} | 3,348 | 573 |
gh_patches_debug_34492 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9861 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider using ahocorasick-rs instead of pyahocorasick
### Describe the problem
https://pypi.org/project/ahocorasick-rs/ seems to be a faster alternative to pyahocorasick.
### Describe the solution you'd like
It would be useful to benchmark it in Weblate's use case and switch to it if it outperforms pyahocorasick.
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
> That being said, I've seen ahocorasick_rs run 1.5× to 7× as fast as pyahocorasick, depending on the options used.
</issue>
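A rough way to benchmark the two libraries on a glossary-style workload is sketched below. The pyahocorasick usage mirrors `get_glossary_automaton()` and `get_glossary_terms()` in the listing that follows; the `ahocorasick_rs.AhoCorasick` and `find_matches_as_indexes` calls are my understanding of that library's API and should be verified against its documentation, and the two libraries may count matches differently (overlapping vs. leftmost semantics):

```
# Rough benchmark sketch (illustrative only, not part of the issue).
import timeit

terms = ["glossary", "translation", "component", "source string"]
text = "a translation component with a glossary and a source string " * 200

def with_pyahocorasick():
    import ahocorasick
    automaton = ahocorasick.Automaton()
    for term in terms:
        automaton.add_word(term, term)
    automaton.make_automaton()
    return sum(1 for _ in automaton.iter(text))

def with_ahocorasick_rs():
    import ahocorasick_rs
    ac = ahocorasick_rs.AhoCorasick(terms)
    return len(ac.find_matches_as_indexes(text))

# Timing includes automaton construction, matching how Weblate rebuilds
# the automaton per project in get_glossary_automaton().
print("pyahocorasick :", timeit.timeit(with_pyahocorasick, number=200))
print("ahocorasick-rs:", timeit.timeit(with_ahocorasick_rs, number=200))
```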
<code>
[start of weblate/glossary/models.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 import re
6 from itertools import chain
7
8 import ahocorasick
9 import sentry_sdk
10 from django.db.models import Q
11 from django.db.models.functions import Lower
12
13 from weblate.trans.models.unit import Unit
14 from weblate.trans.util import PLURAL_SEPARATOR
15 from weblate.utils.db import re_escape, using_postgresql
16 from weblate.utils.state import STATE_TRANSLATED
17
18 SPLIT_RE = re.compile(r"[\s,.:!?]+", re.UNICODE)
19 NON_WORD_RE = re.compile(r"\W", re.UNICODE)
20
21
22 def get_glossary_sources(component):
23 # Fetch list of terms defined in a translation
24 return list(
25 set(
26 component.source_translation.unit_set.filter(
27 state__gte=STATE_TRANSLATED
28 ).values_list(Lower("source"), flat=True)
29 )
30 )
31
32
33 def get_glossary_automaton(project):
34 with sentry_sdk.start_span(op="glossary.automaton", description=project.slug):
35 # Chain terms
36 terms = set(
37 chain.from_iterable(
38 glossary.glossary_sources for glossary in project.glossaries
39 )
40 )
41 # Build automaton for efficient Aho-Corasick search
42 automaton = ahocorasick.Automaton()
43 for term in terms:
44 automaton.add_word(term, term)
45 automaton.make_automaton()
46 return automaton
47
48
49 def get_glossary_terms(unit):
50 """Return list of term pairs for an unit."""
51 if unit.glossary_terms is not None:
52 return unit.glossary_terms
53 translation = unit.translation
54 language = translation.language
55 component = translation.component
56 project = component.project
57 source_language = component.source_language
58
59 units = (
60 Unit.objects.prefetch()
61 .select_related("source_unit")
62 .order_by("translation__component__priority", Lower("source"))
63 )
64 if language == source_language:
65 return units.none()
66
67 # Build complete source for matching
68 parts = []
69 for text in unit.get_source_plurals():
70 text = text.lower().strip()
71 if text:
72 parts.append(text)
73 source = PLURAL_SEPARATOR.join(parts)
74
75 uses_ngram = source_language.uses_ngram()
76
77 terms = set()
78 automaton = project.glossary_automaton
79 if automaton.kind == ahocorasick.AHOCORASICK:
80 # Extract terms present in the source
81 with sentry_sdk.start_span(op="glossary.match", description=project.slug):
82 for end, term in automaton.iter(source):
83 if uses_ngram or (
84 (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))
85 and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))
86 ):
87 terms.add(term)
88
89 if using_postgresql():
90 match = r"^({})$".format("|".join(re_escape(term) for term in terms))
91 # Use regex as that is utilizing pg_trgm index
92 query = Q(source__iregex=match) | Q(variant__unit__source__iregex=match)
93 else:
94 # With MySQL we utilize it does case insensitive lookup
95 query = Q(source__in=terms) | Q(variant__unit__source__in=terms)
96
97 units = units.filter(
98 query,
99 translation__component__in=project.glossaries,
100 translation__component__source_language=source_language,
101 translation__language=language,
102 ).distinct()
103
104 # Store in a unit cache
105 unit.glossary_terms = units
106
107 return units
108
[end of weblate/glossary/models.py]
[start of weblate/utils/requirements.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 import sys
6 from importlib.metadata import PackageNotFoundError, metadata
7
8 from django.conf import settings
9 from django.core.cache import cache
10 from django.core.exceptions import ImproperlyConfigured
11 from django.db import connection
12
13 import weblate.utils.version
14 from weblate.utils.db import using_postgresql
15 from weblate.utils.errors import report_error
16 from weblate.vcs.git import GitRepository, GitWithGerritRepository, SubversionRepository
17 from weblate.vcs.mercurial import HgRepository
18
19 REQUIRES = [
20 "Django",
21 "siphashc",
22 "translate-toolkit",
23 "lxml",
24 "Pillow",
25 "nh3",
26 "python-dateutil",
27 "social-auth-core",
28 "social-auth-app-django",
29 "django-crispy-forms",
30 "oauthlib",
31 "django-compressor",
32 "djangorestframework",
33 "django-filter",
34 "django-appconf",
35 "user-agents",
36 "filelock",
37 "rapidfuzz",
38 "openpyxl",
39 "celery",
40 "django-celery-beat",
41 "kombu",
42 "translation-finder",
43 "weblate-language-data",
44 "html2text",
45 "pycairo",
46 "pygobject",
47 "diff-match-patch",
48 "requests",
49 "django-redis",
50 "hiredis",
51 "sentry_sdk",
52 "Cython",
53 "misaka",
54 "GitPython",
55 "borgbackup",
56 "pyparsing",
57 "pyahocorasick",
58 "python-redis-lock",
59 "charset-normalizer",
60 ]
61
62 OPTIONAL = [
63 "psycopg2",
64 "psycopg2-binary",
65 "phply",
66 "ruamel.yaml",
67 "tesserocr",
68 "akismet",
69 "boto3",
70 "zeep",
71 "aeidon",
72 "iniparse",
73 "mysqlclient",
74 ]
75
76
77 def get_version_module(name, optional=False):
78 """
79 Return module object.
80
81 On error raises verbose exception with name and URL.
82 """
83 try:
84 package = metadata(name)
85 except PackageNotFoundError as exc:
86 if optional:
87 return None
88 raise ImproperlyConfigured(
89 f"Missing dependency {name}, please install using: pip install {name}"
90 ) from exc
91 url = package.get("Home-page")
92 if url is None:
93 for project_url in package.get_all("Project-URL"):
94 name, current_url = project_url.split(",", 1)
95 if name.lower().strip() == "homepage":
96 url = current_url.strip()
97 break
98 if url is None:
99 url = f"https://pypi.org/project/{name}/"
100 return (
101 package.get("Name"),
102 url,
103 package.get("Version"),
104 )
105
106
107 def get_optional_versions():
108 """Return versions of optional modules."""
109 result = []
110
111 for name in OPTIONAL:
112 module = get_version_module(name, True)
113 if module is not None:
114 result.append(module)
115
116 if HgRepository.is_supported():
117 result.append(
118 ("Mercurial", "https://www.mercurial-scm.org/", HgRepository.get_version())
119 )
120
121 if SubversionRepository.is_supported():
122 result.append(
123 (
124 "git-svn",
125 "https://git-scm.com/docs/git-svn",
126 SubversionRepository.get_version(),
127 )
128 )
129
130 if GitWithGerritRepository.is_supported():
131 result.append(
132 (
133 "git-review",
134 "https://pypi.org/project/git-review/",
135 GitWithGerritRepository.get_version(),
136 )
137 )
138
139 return result
140
141
142 def get_versions():
143 """Return list of used versions."""
144 result = [get_version_module(name) for name in REQUIRES]
145
146 result.append(("Python", "https://www.python.org/", sys.version.split()[0]))
147
148 try:
149 result.append(("Git", "https://git-scm.com/", GitRepository.get_version()))
150 except OSError as exc:
151 raise ImproperlyConfigured("Could not run git, please install it.") from exc
152
153 return result
154
155
156 def get_db_version():
157 if using_postgresql():
158 try:
159 with connection.cursor() as cursor:
160 cursor.execute("SHOW server_version")
161 version = cursor.fetchone()
162 except RuntimeError:
163 report_error(cause="PostgreSQL version check")
164 return None
165
166 return (
167 "PostgreSQL server",
168 "https://www.postgresql.org/",
169 version[0].split(" ")[0],
170 )
171 try:
172 with connection.cursor() as cursor:
173 version = cursor.connection.get_server_info()
174 except RuntimeError:
175 report_error(cause="MySQL version check")
176 return None
177 return (
178 f"{connection.display_name} sever",
179 "https://mariadb.org/"
180 if connection.mysql_is_mariadb
181 else "https://www.mysql.com/",
182 version.split("-", 1)[0],
183 )
184
185
186 def get_cache_version():
187 if settings.CACHES["default"]["BACKEND"] == "django_redis.cache.RedisCache":
188 try:
189 version = cache.client.get_client().info()["redis_version"]
190 except RuntimeError:
191 report_error(cause="Redis version check")
192 return None
193
194 return ("Redis server", "https://redis.io/", version)
195
196 return None
197
198
199 def get_db_cache_version():
200 """Returns the list of all the Database and Cache version."""
201 result = []
202 cache_version = get_cache_version()
203 if cache_version:
204 result.append(cache_version)
205 db_version = get_db_version()
206 if db_version:
207 result.append(db_version)
208 return result
209
210
211 def get_versions_list():
212 """Return list with version information summary."""
213 return [
214 ("Weblate", "https://weblate.org/", weblate.utils.version.GIT_VERSION),
215 *get_versions(),
216 *get_optional_versions(),
217 *get_db_cache_version(),
218 ]
219
[end of weblate/utils/requirements.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/weblate/glossary/models.py b/weblate/glossary/models.py
--- a/weblate/glossary/models.py
+++ b/weblate/glossary/models.py
@@ -5,7 +5,7 @@
import re
from itertools import chain
-import ahocorasick
+import ahocorasick_rs
import sentry_sdk
from django.db.models import Q
from django.db.models.functions import Lower
@@ -39,11 +39,11 @@
)
)
# Build automaton for efficient Aho-Corasick search
- automaton = ahocorasick.Automaton()
- for term in terms:
- automaton.add_word(term, term)
- automaton.make_automaton()
- return automaton
+ return ahocorasick_rs.AhoCorasick(
+ terms,
+ implementation=ahocorasick_rs.Implementation.ContiguousNFA,
+ store_patterns=False,
+ )
def get_glossary_terms(unit):
@@ -76,15 +76,16 @@
terms = set()
automaton = project.glossary_automaton
- if automaton.kind == ahocorasick.AHOCORASICK:
- # Extract terms present in the source
- with sentry_sdk.start_span(op="glossary.match", description=project.slug):
- for end, term in automaton.iter(source):
- if uses_ngram or (
- (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))
- and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))
- ):
- terms.add(term)
+ # Extract terms present in the source
+ with sentry_sdk.start_span(op="glossary.match", description=project.slug):
+ for _termno, start, end in automaton.find_matches_as_indexes(
+ source, overlapping=True
+ ):
+ if uses_ngram or (
+ (start == 0 or NON_WORD_RE.match(source[start - 1]))
+ and (end >= len(source) or NON_WORD_RE.match(source[end]))
+ ):
+ terms.add(source[start:end])
if using_postgresql():
match = r"^({})$".format("|".join(re_escape(term) for term in terms))
diff --git a/weblate/utils/requirements.py b/weblate/utils/requirements.py
--- a/weblate/utils/requirements.py
+++ b/weblate/utils/requirements.py
@@ -54,7 +54,7 @@
"GitPython",
"borgbackup",
"pyparsing",
- "pyahocorasick",
+ "ahocorasick_rs",
"python-redis-lock",
"charset-normalizer",
]
| {"golden_diff": "diff --git a/weblate/glossary/models.py b/weblate/glossary/models.py\n--- a/weblate/glossary/models.py\n+++ b/weblate/glossary/models.py\n@@ -5,7 +5,7 @@\n import re\n from itertools import chain\n \n-import ahocorasick\n+import ahocorasick_rs\n import sentry_sdk\n from django.db.models import Q\n from django.db.models.functions import Lower\n@@ -39,11 +39,11 @@\n )\n )\n # Build automaton for efficient Aho-Corasick search\n- automaton = ahocorasick.Automaton()\n- for term in terms:\n- automaton.add_word(term, term)\n- automaton.make_automaton()\n- return automaton\n+ return ahocorasick_rs.AhoCorasick(\n+ terms,\n+ implementation=ahocorasick_rs.Implementation.ContiguousNFA,\n+ store_patterns=False,\n+ )\n \n \n def get_glossary_terms(unit):\n@@ -76,15 +76,16 @@\n \n terms = set()\n automaton = project.glossary_automaton\n- if automaton.kind == ahocorasick.AHOCORASICK:\n- # Extract terms present in the source\n- with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n- for end, term in automaton.iter(source):\n- if uses_ngram or (\n- (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))\n- and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))\n- ):\n- terms.add(term)\n+ # Extract terms present in the source\n+ with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n+ for _termno, start, end in automaton.find_matches_as_indexes(\n+ source, overlapping=True\n+ ):\n+ if uses_ngram or (\n+ (start == 0 or NON_WORD_RE.match(source[start - 1]))\n+ and (end >= len(source) or NON_WORD_RE.match(source[end]))\n+ ):\n+ terms.add(source[start:end])\n \n if using_postgresql():\n match = r\"^({})$\".format(\"|\".join(re_escape(term) for term in terms))\ndiff --git a/weblate/utils/requirements.py b/weblate/utils/requirements.py\n--- a/weblate/utils/requirements.py\n+++ b/weblate/utils/requirements.py\n@@ -54,7 +54,7 @@\n \"GitPython\",\n \"borgbackup\",\n \"pyparsing\",\n- \"pyahocorasick\",\n+ \"ahocorasick_rs\",\n \"python-redis-lock\",\n \"charset-normalizer\",\n ]\n", "issue": "Consider using ahocorasick-rs instead of pyahocorasick\n### Describe the problem\n\nhttps://pypi.org/project/ahocorasick-rs/ seems faster alternative to pyahocorasick.\n\n### Describe the solution you'd like\n\nIt would be useful to benchmark it in Weblate use-case and switch to it in case it outperforms pyahocorasick.\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n> That being said, I've seen ahocorasick_rs run 1.5\u00d7 to 7\u00d7 as fast as pyahocorasick, depending on the options used.\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport re\nfrom itertools import chain\n\nimport ahocorasick\nimport sentry_sdk\nfrom django.db.models import Q\nfrom django.db.models.functions import Lower\n\nfrom weblate.trans.models.unit import Unit\nfrom weblate.trans.util import PLURAL_SEPARATOR\nfrom weblate.utils.db import re_escape, using_postgresql\nfrom weblate.utils.state import STATE_TRANSLATED\n\nSPLIT_RE = re.compile(r\"[\\s,.:!?]+\", re.UNICODE)\nNON_WORD_RE = re.compile(r\"\\W\", re.UNICODE)\n\n\ndef get_glossary_sources(component):\n # Fetch list of terms defined in a translation\n return list(\n set(\n component.source_translation.unit_set.filter(\n state__gte=STATE_TRANSLATED\n ).values_list(Lower(\"source\"), flat=True)\n )\n )\n\n\ndef 
get_glossary_automaton(project):\n with sentry_sdk.start_span(op=\"glossary.automaton\", description=project.slug):\n # Chain terms\n terms = set(\n chain.from_iterable(\n glossary.glossary_sources for glossary in project.glossaries\n )\n )\n # Build automaton for efficient Aho-Corasick search\n automaton = ahocorasick.Automaton()\n for term in terms:\n automaton.add_word(term, term)\n automaton.make_automaton()\n return automaton\n\n\ndef get_glossary_terms(unit):\n \"\"\"Return list of term pairs for an unit.\"\"\"\n if unit.glossary_terms is not None:\n return unit.glossary_terms\n translation = unit.translation\n language = translation.language\n component = translation.component\n project = component.project\n source_language = component.source_language\n\n units = (\n Unit.objects.prefetch()\n .select_related(\"source_unit\")\n .order_by(\"translation__component__priority\", Lower(\"source\"))\n )\n if language == source_language:\n return units.none()\n\n # Build complete source for matching\n parts = []\n for text in unit.get_source_plurals():\n text = text.lower().strip()\n if text:\n parts.append(text)\n source = PLURAL_SEPARATOR.join(parts)\n\n uses_ngram = source_language.uses_ngram()\n\n terms = set()\n automaton = project.glossary_automaton\n if automaton.kind == ahocorasick.AHOCORASICK:\n # Extract terms present in the source\n with sentry_sdk.start_span(op=\"glossary.match\", description=project.slug):\n for end, term in automaton.iter(source):\n if uses_ngram or (\n (end + 1 == len(term) or NON_WORD_RE.match(source[end - len(term)]))\n and (end + 1 == len(source) or NON_WORD_RE.match(source[end + 1]))\n ):\n terms.add(term)\n\n if using_postgresql():\n match = r\"^({})$\".format(\"|\".join(re_escape(term) for term in terms))\n # Use regex as that is utilizing pg_trgm index\n query = Q(source__iregex=match) | Q(variant__unit__source__iregex=match)\n else:\n # With MySQL we utilize it does case insensitive lookup\n query = Q(source__in=terms) | Q(variant__unit__source__in=terms)\n\n units = units.filter(\n query,\n translation__component__in=project.glossaries,\n translation__component__source_language=source_language,\n translation__language=language,\n ).distinct()\n\n # Store in a unit cache\n unit.glossary_terms = units\n\n return units\n", "path": "weblate/glossary/models.py"}, {"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nimport sys\nfrom importlib.metadata import PackageNotFoundError, metadata\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import connection\n\nimport weblate.utils.version\nfrom weblate.utils.db import using_postgresql\nfrom weblate.utils.errors import report_error\nfrom weblate.vcs.git import GitRepository, GitWithGerritRepository, SubversionRepository\nfrom weblate.vcs.mercurial import HgRepository\n\nREQUIRES = [\n \"Django\",\n \"siphashc\",\n \"translate-toolkit\",\n \"lxml\",\n \"Pillow\",\n \"nh3\",\n \"python-dateutil\",\n \"social-auth-core\",\n \"social-auth-app-django\",\n \"django-crispy-forms\",\n \"oauthlib\",\n \"django-compressor\",\n \"djangorestframework\",\n \"django-filter\",\n \"django-appconf\",\n \"user-agents\",\n \"filelock\",\n \"rapidfuzz\",\n \"openpyxl\",\n \"celery\",\n \"django-celery-beat\",\n \"kombu\",\n \"translation-finder\",\n \"weblate-language-data\",\n \"html2text\",\n \"pycairo\",\n \"pygobject\",\n \"diff-match-patch\",\n 
\"requests\",\n \"django-redis\",\n \"hiredis\",\n \"sentry_sdk\",\n \"Cython\",\n \"misaka\",\n \"GitPython\",\n \"borgbackup\",\n \"pyparsing\",\n \"pyahocorasick\",\n \"python-redis-lock\",\n \"charset-normalizer\",\n]\n\nOPTIONAL = [\n \"psycopg2\",\n \"psycopg2-binary\",\n \"phply\",\n \"ruamel.yaml\",\n \"tesserocr\",\n \"akismet\",\n \"boto3\",\n \"zeep\",\n \"aeidon\",\n \"iniparse\",\n \"mysqlclient\",\n]\n\n\ndef get_version_module(name, optional=False):\n \"\"\"\n Return module object.\n\n On error raises verbose exception with name and URL.\n \"\"\"\n try:\n package = metadata(name)\n except PackageNotFoundError as exc:\n if optional:\n return None\n raise ImproperlyConfigured(\n f\"Missing dependency {name}, please install using: pip install {name}\"\n ) from exc\n url = package.get(\"Home-page\")\n if url is None:\n for project_url in package.get_all(\"Project-URL\"):\n name, current_url = project_url.split(\",\", 1)\n if name.lower().strip() == \"homepage\":\n url = current_url.strip()\n break\n if url is None:\n url = f\"https://pypi.org/project/{name}/\"\n return (\n package.get(\"Name\"),\n url,\n package.get(\"Version\"),\n )\n\n\ndef get_optional_versions():\n \"\"\"Return versions of optional modules.\"\"\"\n result = []\n\n for name in OPTIONAL:\n module = get_version_module(name, True)\n if module is not None:\n result.append(module)\n\n if HgRepository.is_supported():\n result.append(\n (\"Mercurial\", \"https://www.mercurial-scm.org/\", HgRepository.get_version())\n )\n\n if SubversionRepository.is_supported():\n result.append(\n (\n \"git-svn\",\n \"https://git-scm.com/docs/git-svn\",\n SubversionRepository.get_version(),\n )\n )\n\n if GitWithGerritRepository.is_supported():\n result.append(\n (\n \"git-review\",\n \"https://pypi.org/project/git-review/\",\n GitWithGerritRepository.get_version(),\n )\n )\n\n return result\n\n\ndef get_versions():\n \"\"\"Return list of used versions.\"\"\"\n result = [get_version_module(name) for name in REQUIRES]\n\n result.append((\"Python\", \"https://www.python.org/\", sys.version.split()[0]))\n\n try:\n result.append((\"Git\", \"https://git-scm.com/\", GitRepository.get_version()))\n except OSError as exc:\n raise ImproperlyConfigured(\"Could not run git, please install it.\") from exc\n\n return result\n\n\ndef get_db_version():\n if using_postgresql():\n try:\n with connection.cursor() as cursor:\n cursor.execute(\"SHOW server_version\")\n version = cursor.fetchone()\n except RuntimeError:\n report_error(cause=\"PostgreSQL version check\")\n return None\n\n return (\n \"PostgreSQL server\",\n \"https://www.postgresql.org/\",\n version[0].split(\" \")[0],\n )\n try:\n with connection.cursor() as cursor:\n version = cursor.connection.get_server_info()\n except RuntimeError:\n report_error(cause=\"MySQL version check\")\n return None\n return (\n f\"{connection.display_name} sever\",\n \"https://mariadb.org/\"\n if connection.mysql_is_mariadb\n else \"https://www.mysql.com/\",\n version.split(\"-\", 1)[0],\n )\n\n\ndef get_cache_version():\n if settings.CACHES[\"default\"][\"BACKEND\"] == \"django_redis.cache.RedisCache\":\n try:\n version = cache.client.get_client().info()[\"redis_version\"]\n except RuntimeError:\n report_error(cause=\"Redis version check\")\n return None\n\n return (\"Redis server\", \"https://redis.io/\", version)\n\n return None\n\n\ndef get_db_cache_version():\n \"\"\"Returns the list of all the Database and Cache version.\"\"\"\n result = []\n cache_version = get_cache_version()\n if cache_version:\n 
result.append(cache_version)\n db_version = get_db_version()\n if db_version:\n result.append(db_version)\n return result\n\n\ndef get_versions_list():\n \"\"\"Return list with version information summary.\"\"\"\n return [\n (\"Weblate\", \"https://weblate.org/\", weblate.utils.version.GIT_VERSION),\n *get_versions(),\n *get_optional_versions(),\n *get_db_cache_version(),\n ]\n", "path": "weblate/utils/requirements.py"}]} | 3,614 | 621 |
gh_patches_debug_31552 | rasdani/github-patches | git_diff | CTFd__CTFd-1516 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change Configs detail API GET/PATCH for a more structured response
The API endpoints for GET, PATCH /api/v1/configs/{config_key} return badly structured data. This should return better structured data.
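
For reference, a detail response shaped like the bulk `/api/v1/configs` endpoint would look roughly like the sketch below. The concrete key/value are invented examples; the real field set would come from `ConfigSchema`.

```python
# Hypothetical shape of a better-structured GET/PATCH detail response,
# mirroring the bulk endpoint's {"success": ..., "data": ...} envelope.
expected_get_response = {
    "success": True,
    "data": {
        "id": 1,                 # example values only
        "key": "ctf_name",
        "value": "Example CTF",
    },
}
```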
</issue>
<code>
[start of CTFd/api/v1/config.py]
1 from typing import List
2
3 from flask import request
4 from flask_restx import Namespace, Resource
5
6 from CTFd.api.v1.helpers.models import build_model_filters
7 from CTFd.api.v1.helpers.request import validate_args
8 from CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic
9 from CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse
10 from CTFd.cache import clear_config, clear_standings
11 from CTFd.constants import RawEnum
12 from CTFd.models import Configs, db
13 from CTFd.schemas.config import ConfigSchema
14 from CTFd.utils import get_config, set_config
15 from CTFd.utils.decorators import admins_only
16
17 configs_namespace = Namespace("configs", description="Endpoint to retrieve Configs")
18
19 ConfigModel = sqlalchemy_to_pydantic(Configs)
20
21
22 class ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):
23 data: ConfigModel
24
25
26 class ConfigListSuccessResponse(APIListSuccessResponse):
27 data: List[ConfigModel]
28
29
30 configs_namespace.schema_model(
31 "ConfigDetailedSuccessResponse", ConfigDetailedSuccessResponse.apidoc()
32 )
33
34 configs_namespace.schema_model(
35 "ConfigListSuccessResponse", ConfigListSuccessResponse.apidoc()
36 )
37
38
39 @configs_namespace.route("")
40 class ConfigList(Resource):
41 @admins_only
42 @configs_namespace.doc(
43 description="Endpoint to get Config objects in bulk",
44 responses={
45 200: ("Success", "ConfigListSuccessResponse"),
46 400: (
47 "An error occured processing the provided or stored data",
48 "APISimpleErrorResponse",
49 ),
50 },
51 )
52 @validate_args(
53 {
54 "key": (str, None),
55 "value": (str, None),
56 "q": (str, None),
57 "field": (RawEnum("ConfigFields", {"key": "key", "value": "value"}), None),
58 },
59 location="query",
60 )
61 def get(self, query_args):
62 q = query_args.pop("q", None)
63 field = str(query_args.pop("field", None))
64 filters = build_model_filters(model=Configs, query=q, field=field)
65
66 configs = Configs.query.filter_by(**query_args).filter(*filters).all()
67 schema = ConfigSchema(many=True)
68 response = schema.dump(configs)
69 if response.errors:
70 return {"success": False, "errors": response.errors}, 400
71
72 return {"success": True, "data": response.data}
73
74 @admins_only
75 @configs_namespace.doc(
76 description="Endpoint to get create a Config object",
77 responses={
78 200: ("Success", "ConfigDetailedSuccessResponse"),
79 400: (
80 "An error occured processing the provided or stored data",
81 "APISimpleErrorResponse",
82 ),
83 },
84 )
85 def post(self):
86 req = request.get_json()
87 schema = ConfigSchema()
88 response = schema.load(req)
89
90 if response.errors:
91 return {"success": False, "errors": response.errors}, 400
92
93 db.session.add(response.data)
94 db.session.commit()
95
96 response = schema.dump(response.data)
97 db.session.close()
98
99 clear_config()
100 clear_standings()
101
102 return {"success": True, "data": response.data}
103
104 @admins_only
105 @configs_namespace.doc(
106 description="Endpoint to get patch Config objects in bulk",
107 responses={200: ("Success", "APISimpleSuccessResponse")},
108 )
109 def patch(self):
110 req = request.get_json()
111
112 for key, value in req.items():
113 set_config(key=key, value=value)
114
115 clear_config()
116 clear_standings()
117
118 return {"success": True}
119
120
121 @configs_namespace.route("/<config_key>")
122 class Config(Resource):
123 @admins_only
124 # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
125 def get(self, config_key):
126
127 return {"success": True, "data": get_config(config_key)}
128
129 @admins_only
130 # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
131 def patch(self, config_key):
132 config = Configs.query.filter_by(key=config_key).first()
133 data = request.get_json()
134 if config:
135 schema = ConfigSchema(instance=config, partial=True)
136 response = schema.load(data)
137 else:
138 schema = ConfigSchema()
139 data["key"] = config_key
140 response = schema.load(data)
141 db.session.add(response.data)
142
143 if response.errors:
144 return response.errors, 400
145
146 db.session.commit()
147
148 response = schema.dump(response.data)
149 db.session.close()
150
151 clear_config()
152 clear_standings()
153
154 return {"success": True, "data": response.data}
155
156 @admins_only
157 @configs_namespace.doc(
158 description="Endpoint to delete a Config object",
159 responses={200: ("Success", "APISimpleSuccessResponse")},
160 )
161 def delete(self, config_key):
162 config = Configs.query.filter_by(key=config_key).first_or_404()
163
164 db.session.delete(config)
165 db.session.commit()
166 db.session.close()
167
168 clear_config()
169 clear_standings()
170
171 return {"success": True}
172
[end of CTFd/api/v1/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/api/v1/config.py b/CTFd/api/v1/config.py
--- a/CTFd/api/v1/config.py
+++ b/CTFd/api/v1/config.py
@@ -11,7 +11,7 @@
from CTFd.constants import RawEnum
from CTFd.models import Configs, db
from CTFd.schemas.config import ConfigSchema
-from CTFd.utils import get_config, set_config
+from CTFd.utils import set_config
from CTFd.utils.decorators import admins_only
configs_namespace = Namespace("configs", description="Endpoint to retrieve Configs")
@@ -121,13 +121,33 @@
@configs_namespace.route("/<config_key>")
class Config(Resource):
@admins_only
- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
+ @configs_namespace.doc(
+ description="Endpoint to get a specific Config object",
+ responses={
+ 200: ("Success", "ConfigDetailedSuccessResponse"),
+ 400: (
+ "An error occured processing the provided or stored data",
+ "APISimpleErrorResponse",
+ ),
+ },
+ )
def get(self, config_key):
-
- return {"success": True, "data": get_config(config_key)}
+ config = Configs.query.filter_by(key=config_key).first_or_404()
+ schema = ConfigSchema()
+ response = schema.dump(config)
+ return {"success": True, "data": response.data}
@admins_only
- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506
+ @configs_namespace.doc(
+ description="Endpoint to edit a specific Config object",
+ responses={
+ 200: ("Success", "ConfigDetailedSuccessResponse"),
+ 400: (
+ "An error occured processing the provided or stored data",
+ "APISimpleErrorResponse",
+ ),
+ },
+ )
def patch(self, config_key):
config = Configs.query.filter_by(key=config_key).first()
data = request.get_json()
| {"golden_diff": "diff --git a/CTFd/api/v1/config.py b/CTFd/api/v1/config.py\n--- a/CTFd/api/v1/config.py\n+++ b/CTFd/api/v1/config.py\n@@ -11,7 +11,7 @@\n from CTFd.constants import RawEnum\n from CTFd.models import Configs, db\n from CTFd.schemas.config import ConfigSchema\n-from CTFd.utils import get_config, set_config\n+from CTFd.utils import set_config\n from CTFd.utils.decorators import admins_only\n \n configs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n@@ -121,13 +121,33 @@\n @configs_namespace.route(\"/<config_key>\")\n class Config(Resource):\n @admins_only\n- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n+ @configs_namespace.doc(\n+ description=\"Endpoint to get a specific Config object\",\n+ responses={\n+ 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n+ 400: (\n+ \"An error occured processing the provided or stored data\",\n+ \"APISimpleErrorResponse\",\n+ ),\n+ },\n+ )\n def get(self, config_key):\n-\n- return {\"success\": True, \"data\": get_config(config_key)}\n+ config = Configs.query.filter_by(key=config_key).first_or_404()\n+ schema = ConfigSchema()\n+ response = schema.dump(config)\n+ return {\"success\": True, \"data\": response.data}\n \n @admins_only\n- # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n+ @configs_namespace.doc(\n+ description=\"Endpoint to edit a specific Config object\",\n+ responses={\n+ 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n+ 400: (\n+ \"An error occured processing the provided or stored data\",\n+ \"APISimpleErrorResponse\",\n+ ),\n+ },\n+ )\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n", "issue": "Change Configs detail API GET/PATCH for a more structured response\nThe API endpoints for GET, PATCH /api/v1/configs/{config_key} return badly structured data. This should return better structured data. 
\n", "before_files": [{"content": "from typing import List\n\nfrom flask import request\nfrom flask_restx import Namespace, Resource\n\nfrom CTFd.api.v1.helpers.models import build_model_filters\nfrom CTFd.api.v1.helpers.request import validate_args\nfrom CTFd.api.v1.helpers.schemas import sqlalchemy_to_pydantic\nfrom CTFd.api.v1.schemas import APIDetailedSuccessResponse, APIListSuccessResponse\nfrom CTFd.cache import clear_config, clear_standings\nfrom CTFd.constants import RawEnum\nfrom CTFd.models import Configs, db\nfrom CTFd.schemas.config import ConfigSchema\nfrom CTFd.utils import get_config, set_config\nfrom CTFd.utils.decorators import admins_only\n\nconfigs_namespace = Namespace(\"configs\", description=\"Endpoint to retrieve Configs\")\n\nConfigModel = sqlalchemy_to_pydantic(Configs)\n\n\nclass ConfigDetailedSuccessResponse(APIDetailedSuccessResponse):\n data: ConfigModel\n\n\nclass ConfigListSuccessResponse(APIListSuccessResponse):\n data: List[ConfigModel]\n\n\nconfigs_namespace.schema_model(\n \"ConfigDetailedSuccessResponse\", ConfigDetailedSuccessResponse.apidoc()\n)\n\nconfigs_namespace.schema_model(\n \"ConfigListSuccessResponse\", ConfigListSuccessResponse.apidoc()\n)\n\n\n@configs_namespace.route(\"\")\nclass ConfigList(Resource):\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get Config objects in bulk\",\n responses={\n 200: (\"Success\", \"ConfigListSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n @validate_args(\n {\n \"key\": (str, None),\n \"value\": (str, None),\n \"q\": (str, None),\n \"field\": (RawEnum(\"ConfigFields\", {\"key\": \"key\", \"value\": \"value\"}), None),\n },\n location=\"query\",\n )\n def get(self, query_args):\n q = query_args.pop(\"q\", None)\n field = str(query_args.pop(\"field\", None))\n filters = build_model_filters(model=Configs, query=q, field=field)\n\n configs = Configs.query.filter_by(**query_args).filter(*filters).all()\n schema = ConfigSchema(many=True)\n response = schema.dump(configs)\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get create a Config object\",\n responses={\n 200: (\"Success\", \"ConfigDetailedSuccessResponse\"),\n 400: (\n \"An error occured processing the provided or stored data\",\n \"APISimpleErrorResponse\",\n ),\n },\n )\n def post(self):\n req = request.get_json()\n schema = ConfigSchema()\n response = schema.load(req)\n\n if response.errors:\n return {\"success\": False, \"errors\": response.errors}, 400\n\n db.session.add(response.data)\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to get patch Config objects in bulk\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def patch(self):\n req = request.get_json()\n\n for key, value in req.items():\n set_config(key=key, value=value)\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n\n\n@configs_namespace.route(\"/<config_key>\")\nclass Config(Resource):\n @admins_only\n # TODO: This returns weirdly structured data. 
It should more closely match ConfigDetailedSuccessResponse #1506\n def get(self, config_key):\n\n return {\"success\": True, \"data\": get_config(config_key)}\n\n @admins_only\n # TODO: This returns weirdly structured data. It should more closely match ConfigDetailedSuccessResponse #1506\n def patch(self, config_key):\n config = Configs.query.filter_by(key=config_key).first()\n data = request.get_json()\n if config:\n schema = ConfigSchema(instance=config, partial=True)\n response = schema.load(data)\n else:\n schema = ConfigSchema()\n data[\"key\"] = config_key\n response = schema.load(data)\n db.session.add(response.data)\n\n if response.errors:\n return response.errors, 400\n\n db.session.commit()\n\n response = schema.dump(response.data)\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True, \"data\": response.data}\n\n @admins_only\n @configs_namespace.doc(\n description=\"Endpoint to delete a Config object\",\n responses={200: (\"Success\", \"APISimpleSuccessResponse\")},\n )\n def delete(self, config_key):\n config = Configs.query.filter_by(key=config_key).first_or_404()\n\n db.session.delete(config)\n db.session.commit()\n db.session.close()\n\n clear_config()\n clear_standings()\n\n return {\"success\": True}\n", "path": "CTFd/api/v1/config.py"}]} | 2,144 | 485 |
gh_patches_debug_15874 | rasdani/github-patches | git_diff | kubeflow__pipelines-4104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make it flexible, e.g. allow using S3, or change the namespace or bucket names.
I suggest making it configurable; I can submit such a PR if we agree it's needed.
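
To make the proposal concrete, the sketch below shows the kind of knobs that could be exposed instead of the hard-coded endpoint. None of these parameter names are real KFP options; they only illustrate what "configurable" could mean here.

```python
from dataclasses import dataclass

@dataclass
class ArtifactStoreConfig:
    # Current hard-coded default from _op_to_template.py
    endpoint: str = "minio-service.kubeflow:9000"
    # Everything below is an invented illustration of what could become configurable
    bucket: str = "mlpipeline"
    secret_name: str = "mlpipeline-minio-artifact"
    insecure: bool = True

# e.g. an S3-backed deployment could then override the defaults:
s3_store = ArtifactStoreConfig(
    endpoint="s3.amazonaws.com",
    bucket="my-pipeline-artifacts",
    insecure=False,
)
```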
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), which seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
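
A minimal sketch of the proposed default resolution follows. Only the env-var lookup line comes from the suggestion above; the surrounding helper and its name are assumed for illustration, not the real `_client.py` code.

```python
import os

# Current hard-coded in-cluster default
IN_CLUSTER_DNS_NAME = "ml-pipeline.kubeflow.svc.cluster.local:8888"

def _resolve_host(host=None):
    # Explicit argument wins, then the env var, then the in-cluster default
    return host or os.environ.get("ML_PIPELINE_DNS_NAME", IN_CLUSTER_DNS_NAME)

# Usage: set ML_PIPELINE_DNS_NAME=my-pipelines.example.com:8888 before creating Client()
```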
</issue>
<code>
[start of sdk/python/kfp/dsl/_component_bridge.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 from typing import Any, Mapping
17 from ..components.structures import ComponentSpec, ComponentReference
18 from ..components._components import _default_component_name, _resolve_command_line_and_paths
19 from ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table
20 from .. import dsl
21
22
23 def _create_container_op_from_component_and_arguments(
24 component_spec: ComponentSpec,
25 arguments: Mapping[str, Any],
26 component_ref: ComponentReference = None,
27 ) -> 'dsl.ContainerOp':
28 # Check types of the reference arguments and serialize PipelineParams
29 arguments = arguments.copy()
30 for input_name, argument_value in arguments.items():
31 if isinstance(argument_value, dsl.PipelineParam):
32 input_type = component_spec._inputs_dict[input_name].type
33 reference_type = argument_value.param_type
34 dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input "{}" of component "{}": '.format(input_name, component_spec.name))
35
36 arguments[input_name] = str(argument_value)
37
38 resolved_cmd = _resolve_command_line_and_paths(
39 component_spec=component_spec,
40 arguments=arguments,
41 )
42
43 container_spec = component_spec.implementation.container
44
45 task = dsl.ContainerOp(
46 name=component_spec.name or _default_component_name,
47 image=container_spec.image,
48 command=resolved_cmd.command,
49 arguments=resolved_cmd.args,
50 file_outputs=resolved_cmd.output_paths,
51 artifact_argument_paths=[
52 dsl.InputArgumentPath(
53 argument=arguments[input_name],
54 input=input_name,
55 path=path,
56 )
57 for input_name, path in resolved_cmd.input_paths.items()
58 ],
59 )
60
61 component_meta = copy.copy(component_spec)
62 task._set_metadata(component_meta)
63 component_ref_without_spec = copy.copy(component_ref)
64 component_ref_without_spec.spec = None
65 task._component_ref = component_ref_without_spec
66
67 # Previously, ContainerOp had strict requirements for the output names, so we had to
68 # convert all the names before passing them to the ContainerOp constructor.
69 # Outputs with non-pythonic names could not be accessed using their original names.
70 # Now ContainerOp supports any output names, so we're now using the original output names.
71 # However to support legacy pipelines, we're also adding output references with pythonic names.
72 # TODO: Add warning when people use the legacy output names.
73 output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering
74 output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)
75 for output_name in output_names:
76 pythonic_output_name = output_name_to_python[output_name]
77 # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)
78 if pythonic_output_name not in task.outputs and output_name in task.outputs:
79 task.outputs[pythonic_output_name] = task.outputs[output_name]
80
81 if container_spec.env:
82 from kubernetes import client as k8s_client
83 for name, value in container_spec.env.items():
84 task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
85
86 if component_spec.metadata:
87 for key, value in (component_spec.metadata.annotations or {}).items():
88 task.add_pod_annotation(key, value)
89 for key, value in (component_spec.metadata.labels or {}).items():
90 task.add_pod_label(key, value)
91
92 return task
93
[end of sdk/python/kfp/dsl/_component_bridge.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py
--- a/sdk/python/kfp/dsl/_component_bridge.py
+++ b/sdk/python/kfp/dsl/_component_bridge.py
@@ -84,9 +84,13 @@
task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))
if component_spec.metadata:
- for key, value in (component_spec.metadata.annotations or {}).items():
+ annotations = component_spec.metadata.annotations or {}
+ for key, value in annotations.items():
task.add_pod_annotation(key, value)
for key, value in (component_spec.metadata.labels or {}).items():
task.add_pod_label(key, value)
+ # Disabling the caching for the volatile components by default
+ if annotations.get('volatile_component', 'false') == 'true':
+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'
return task
| {"golden_diff": "diff --git a/sdk/python/kfp/dsl/_component_bridge.py b/sdk/python/kfp/dsl/_component_bridge.py\n--- a/sdk/python/kfp/dsl/_component_bridge.py\n+++ b/sdk/python/kfp/dsl/_component_bridge.py\n@@ -84,9 +84,13 @@\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n \n if component_spec.metadata:\n- for key, value in (component_spec.metadata.annotations or {}).items():\n+ annotations = component_spec.metadata.annotations or {}\n+ for key, value in annotations.items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n+ # Disabling the caching for the volatile components by default\n+ if annotations.get('volatile_component', 'false') == 'true':\n+ task.execution_options.caching_strategy.max_cache_staleness = 'P0D'\n \n return task\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Mapping\nfrom ..components.structures import ComponentSpec, ComponentReference\nfrom ..components._components import _default_component_name, _resolve_command_line_and_paths\nfrom ..components._naming import _sanitize_python_function_name, generate_unique_name_conversion_table\nfrom .. 
import dsl\n\n\ndef _create_container_op_from_component_and_arguments(\n component_spec: ComponentSpec,\n arguments: Mapping[str, Any],\n component_ref: ComponentReference = None,\n) -> 'dsl.ContainerOp':\n # Check types of the reference arguments and serialize PipelineParams\n arguments = arguments.copy()\n for input_name, argument_value in arguments.items():\n if isinstance(argument_value, dsl.PipelineParam):\n input_type = component_spec._inputs_dict[input_name].type\n reference_type = argument_value.param_type\n dsl.types.verify_type_compatibility(reference_type, input_type, 'Incompatible argument passed to the input \"{}\" of component \"{}\": '.format(input_name, component_spec.name))\n\n arguments[input_name] = str(argument_value)\n\n resolved_cmd = _resolve_command_line_and_paths(\n component_spec=component_spec,\n arguments=arguments,\n )\n\n container_spec = component_spec.implementation.container\n\n task = dsl.ContainerOp(\n name=component_spec.name or _default_component_name,\n image=container_spec.image,\n command=resolved_cmd.command,\n arguments=resolved_cmd.args,\n file_outputs=resolved_cmd.output_paths,\n artifact_argument_paths=[\n dsl.InputArgumentPath(\n argument=arguments[input_name],\n input=input_name,\n path=path,\n )\n for input_name, path in resolved_cmd.input_paths.items()\n ],\n )\n\n component_meta = copy.copy(component_spec)\n task._set_metadata(component_meta)\n component_ref_without_spec = copy.copy(component_ref)\n component_ref_without_spec.spec = None\n task._component_ref = component_ref_without_spec\n\n # Previously, ContainerOp had strict requirements for the output names, so we had to\n # convert all the names before passing them to the ContainerOp constructor.\n # Outputs with non-pythonic names could not be accessed using their original names.\n # Now ContainerOp supports any output names, so we're now using the original output names.\n # However to support legacy pipelines, we're also adding output references with pythonic names.\n # TODO: Add warning when people use the legacy output names.\n output_names = [output_spec.name for output_spec in component_spec.outputs or []] # Stabilizing the ordering\n output_name_to_python = generate_unique_name_conversion_table(output_names, _sanitize_python_function_name)\n for output_name in output_names:\n pythonic_output_name = output_name_to_python[output_name]\n # Note: Some component outputs are currently missing from task.outputs (e.g. MLPipeline UI Metadata)\n if pythonic_output_name not in task.outputs and output_name in task.outputs:\n task.outputs[pythonic_output_name] = task.outputs[output_name]\n\n if container_spec.env:\n from kubernetes import client as k8s_client\n for name, value in container_spec.env.items():\n task.container.add_env_variable(k8s_client.V1EnvVar(name=name, value=value))\n\n if component_spec.metadata:\n for key, value in (component_spec.metadata.annotations or {}).items():\n task.add_pod_annotation(key, value)\n for key, value in (component_spec.metadata.labels or {}).items():\n task.add_pod_label(key, value)\n\n return task\n", "path": "sdk/python/kfp/dsl/_component_bridge.py"}]} | 1,948 | 215 |
gh_patches_debug_3647 | rasdani/github-patches | git_diff | wagtail__wagtail-1272 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Getting an item after slicing ElasticSearchResults object gives wrong result
For example, let's say we have a list of results with the items A, B, C and D
If you run results[0], you get A.
If you run results[2:], you get [C, D].
But if you run results[2:][0], you will get A (you should get C).
Fix coming shortly
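
For reference, the reported behaviour in code form (`results` stands for any search-results object wrapping A, B, C, D; the names are illustrative):

```python
results[0]       # -> A
results[2:]      # -> [C, D]
results[2:][0]   # -> currently A, expected C: the slice's start offset is
                 #    dropped when indexing into the already-sliced object
```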
</issue>
<code>
[start of wagtail/wagtailsearch/backends/base.py]
1 from six import text_type
2
3 from django.db.models.query import QuerySet
4 from django.db.models.lookups import Lookup
5 from django.db.models.sql.where import SubqueryConstraint, WhereNode
6
7 from wagtail.wagtailsearch.index import class_is_indexed
8
9
10 class FilterError(Exception):
11 pass
12
13
14 class FieldError(Exception):
15 pass
16
17
18 class BaseSearchQuery(object):
19 def __init__(self, queryset, query_string, fields=None):
20 self.queryset = queryset
21 self.query_string = query_string
22 self.fields = fields
23
24 def _get_searchable_field(self, field_attname):
25 # Get field
26 field = dict(
27 (field.get_attname(self.queryset.model), field)
28 for field in self.queryset.model.get_searchable_search_fields()
29 ).get(field_attname, None)
30
31 return field
32
33 def _get_filterable_field(self, field_attname):
34 # Get field
35 field = dict(
36 (field.get_attname(self.queryset.model), field)
37 for field in self.queryset.model.get_filterable_search_fields()
38 ).get(field_attname, None)
39
40 return field
41
42 def _process_lookup(self, field, lookup, value):
43 raise NotImplementedError
44
45 def _connect_filters(self, filters, connector, negated):
46 raise NotImplementedError
47
48 def _process_filter(self, field_attname, lookup, value):
49 # Get the field
50 field = self._get_filterable_field(field_attname)
51
52 if field is None:
53 raise FieldError('Cannot filter search results with field "' + field_attname + '". Please add index.FilterField(\'' + field_attname + '\') to ' + self.queryset.model.__name__ + '.search_fields.')
54
55 # Process the lookup
56 result = self._process_lookup(field, lookup, value)
57
58 if result is None:
59 raise FilterError('Could not apply filter on search results: "' + field_attname + '__' + lookup + ' = ' + text_type(value) + '". Lookup "' + lookup + '"" not recognosed.')
60
61 return result
62
63 def _get_filters_from_where_node(self, where_node):
64 # Check if this is a leaf node
65 if isinstance(where_node, Lookup):
66 field_attname = where_node.lhs.target.attname
67 lookup = where_node.lookup_name
68 value = where_node.rhs
69
70 # Process the filter
71 return self._process_filter(field_attname, lookup, value)
72
73 elif isinstance(where_node, SubqueryConstraint):
74 raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')
75
76 elif isinstance(where_node, WhereNode):
77 # Get child filters
78 connector = where_node.connector
79 child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]
80 child_filters = [child_filter for child_filter in child_filters if child_filter]
81
82 return self._connect_filters(child_filters, connector, where_node.negated)
83
84 else:
85 raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))
86
87 def _get_filters_from_queryset(self):
88 return self._get_filters_from_where_node(self.queryset.query.where)
89
90
91 class BaseSearchResults(object):
92 def __init__(self, backend, query, prefetch_related=None):
93 self.backend = backend
94 self.query = query
95 self.prefetch_related = prefetch_related
96 self.start = 0
97 self.stop = None
98 self._results_cache = None
99 self._count_cache = None
100
101 def _set_limits(self, start=None, stop=None):
102 if stop is not None:
103 if self.stop is not None:
104 self.stop = min(self.stop, self.start + stop)
105 else:
106 self.stop = self.start + stop
107
108 if start is not None:
109 if self.stop is not None:
110 self.start = min(self.stop, self.start + start)
111 else:
112 self.start = self.start + start
113
114 def _clone(self):
115 klass = self.__class__
116 new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)
117 new.start = self.start
118 new.stop = self.stop
119 return new
120
121 def _do_search(self):
122 raise NotImplementedError
123
124 def _do_count(self):
125 raise NotImplementedError
126
127 def results(self):
128 if self._results_cache is None:
129 self._results_cache = self._do_search()
130 return self._results_cache
131
132 def count(self):
133 if self._count_cache is None:
134 if self._results_cache is not None:
135 self._count_cache = len(self._results_cache)
136 else:
137 self._count_cache = self._do_count()
138 return self._count_cache
139
140 def __getitem__(self, key):
141 new = self._clone()
142
143 if isinstance(key, slice):
144 # Set limits
145 start = int(key.start) if key.start else None
146 stop = int(key.stop) if key.stop else None
147 new._set_limits(start, stop)
148
149 # Copy results cache
150 if self._results_cache is not None:
151 new._results_cache = self._results_cache[key]
152
153 return new
154 else:
155 if self._results_cache is not None:
156 return self._results_cache[key]
157
158 new.start = key
159 new.stop = key + 1
160 return list(new)[0]
161
162 def __iter__(self):
163 return iter(self.results())
164
165 def __len__(self):
166 return len(self.results())
167
168 def __repr__(self):
169 data = list(self[:21])
170 if len(data) > 20:
171 data[-1] = "...(remaining elements truncated)..."
172 return repr(data)
173
174
175 class BaseSearch(object):
176 def __init__(self, params):
177 pass
178
179 def reset_index(self):
180 raise NotImplementedError
181
182 def add_type(self, model):
183 raise NotImplementedError
184
185 def refresh_index(self):
186 raise NotImplementedError
187
188 def add(self, obj):
189 raise NotImplementedError
190
191 def add_bulk(self, model, obj_list):
192 raise NotImplementedError
193
194 def delete(self, obj):
195 raise NotImplementedError
196
197 def _search(self, queryset, query_string, fields=None):
198 raise NotImplementedError
199
200 def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):
201 # Find model/queryset
202 if isinstance(model_or_queryset, QuerySet):
203 model = model_or_queryset.model
204 queryset = model_or_queryset
205 else:
206 model = model_or_queryset
207 queryset = model_or_queryset.objects.all()
208
209 # Model must be a class that is in the index
210 if not class_is_indexed(model):
211 return []
212
213 # Check that theres still a query string after the clean up
214 if query_string == "":
215 return []
216
217 # Apply filters to queryset
218 if filters:
219 queryset = queryset.filter(**filters)
220
221 # Prefetch related
222 if prefetch_related:
223 for prefetch in prefetch_related:
224 queryset = queryset.prefetch_related(prefetch)
225
226 # Search
227 return self._search(queryset, query_string, fields=fields)
228
[end of wagtail/wagtailsearch/backends/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailsearch/backends/base.py b/wagtail/wagtailsearch/backends/base.py
--- a/wagtail/wagtailsearch/backends/base.py
+++ b/wagtail/wagtailsearch/backends/base.py
@@ -155,8 +155,8 @@
if self._results_cache is not None:
return self._results_cache[key]
- new.start = key
- new.stop = key + 1
+ new.start = self.start + key
+ new.stop = self.start + key + 1
return list(new)[0]
def __iter__(self):
| {"golden_diff": "diff --git a/wagtail/wagtailsearch/backends/base.py b/wagtail/wagtailsearch/backends/base.py\n--- a/wagtail/wagtailsearch/backends/base.py\n+++ b/wagtail/wagtailsearch/backends/base.py\n@@ -155,8 +155,8 @@\n if self._results_cache is not None:\n return self._results_cache[key]\n \n- new.start = key\n- new.stop = key + 1\n+ new.start = self.start + key\n+ new.stop = self.start + key + 1\n return list(new)[0]\n \n def __iter__(self):\n", "issue": "Getting an item after slicing ElasticSearchResults object gives wrong result\nFor example, let's say we have a list of results with the items A, B, C and D\n\nIf you run results[0]. You get A\nIf you run results[2:]. You get [C, D]\nBut if you run results[2:][0]. You will get A (you should get C)\n\nFix coming shortly\n\n", "before_files": [{"content": "from six import text_type\n\nfrom django.db.models.query import QuerySet\nfrom django.db.models.lookups import Lookup\nfrom django.db.models.sql.where import SubqueryConstraint, WhereNode\n\nfrom wagtail.wagtailsearch.index import class_is_indexed\n\n\nclass FilterError(Exception):\n pass\n\n\nclass FieldError(Exception):\n pass\n\n\nclass BaseSearchQuery(object):\n def __init__(self, queryset, query_string, fields=None):\n self.queryset = queryset\n self.query_string = query_string\n self.fields = fields\n\n def _get_searchable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_searchable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _get_filterable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_filterable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _process_lookup(self, field, lookup, value):\n raise NotImplementedError\n\n def _connect_filters(self, filters, connector, negated):\n raise NotImplementedError\n\n def _process_filter(self, field_attname, lookup, value):\n # Get the field\n field = self._get_filterable_field(field_attname)\n\n if field is None:\n raise FieldError('Cannot filter search results with field \"' + field_attname + '\". Please add index.FilterField(\\'' + field_attname + '\\') to ' + self.queryset.model.__name__ + '.search_fields.')\n\n # Process the lookup\n result = self._process_lookup(field, lookup, value)\n\n if result is None:\n raise FilterError('Could not apply filter on search results: \"' + field_attname + '__' + lookup + ' = ' + text_type(value) + '\". 
Lookup \"' + lookup + '\"\" not recognosed.')\n\n return result\n\n def _get_filters_from_where_node(self, where_node):\n # Check if this is a leaf node\n if isinstance(where_node, Lookup):\n field_attname = where_node.lhs.target.attname\n lookup = where_node.lookup_name\n value = where_node.rhs\n\n # Process the filter\n return self._process_filter(field_attname, lookup, value)\n\n elif isinstance(where_node, SubqueryConstraint):\n raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')\n\n elif isinstance(where_node, WhereNode):\n # Get child filters\n connector = where_node.connector\n child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]\n child_filters = [child_filter for child_filter in child_filters if child_filter]\n\n return self._connect_filters(child_filters, connector, where_node.negated)\n\n else:\n raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))\n\n def _get_filters_from_queryset(self):\n return self._get_filters_from_where_node(self.queryset.query.where)\n\n\nclass BaseSearchResults(object):\n def __init__(self, backend, query, prefetch_related=None):\n self.backend = backend\n self.query = query\n self.prefetch_related = prefetch_related\n self.start = 0\n self.stop = None\n self._results_cache = None\n self._count_cache = None\n\n def _set_limits(self, start=None, stop=None):\n if stop is not None:\n if self.stop is not None:\n self.stop = min(self.stop, self.start + stop)\n else:\n self.stop = self.start + stop\n\n if start is not None:\n if self.stop is not None:\n self.start = min(self.stop, self.start + start)\n else:\n self.start = self.start + start\n\n def _clone(self):\n klass = self.__class__\n new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)\n new.start = self.start\n new.stop = self.stop\n return new\n\n def _do_search(self):\n raise NotImplementedError\n\n def _do_count(self):\n raise NotImplementedError\n\n def results(self):\n if self._results_cache is None:\n self._results_cache = self._do_search()\n return self._results_cache\n\n def count(self):\n if self._count_cache is None:\n if self._results_cache is not None:\n self._count_cache = len(self._results_cache)\n else:\n self._count_cache = self._do_count()\n return self._count_cache\n\n def __getitem__(self, key):\n new = self._clone()\n\n if isinstance(key, slice):\n # Set limits\n start = int(key.start) if key.start else None\n stop = int(key.stop) if key.stop else None\n new._set_limits(start, stop)\n\n # Copy results cache\n if self._results_cache is not None:\n new._results_cache = self._results_cache[key]\n\n return new\n else:\n if self._results_cache is not None:\n return self._results_cache[key]\n\n new.start = key\n new.stop = key + 1\n return list(new)[0]\n\n def __iter__(self):\n return iter(self.results())\n\n def __len__(self):\n return len(self.results())\n\n def __repr__(self):\n data = list(self[:21])\n if len(data) > 20:\n data[-1] = \"...(remaining elements truncated)...\"\n return repr(data)\n\n\nclass BaseSearch(object):\n def __init__(self, params):\n pass\n\n def reset_index(self):\n raise NotImplementedError\n\n def add_type(self, model):\n raise NotImplementedError\n\n def refresh_index(self):\n raise NotImplementedError\n\n def add(self, obj):\n raise NotImplementedError\n\n def add_bulk(self, model, obj_list):\n raise NotImplementedError\n\n def delete(self, obj):\n raise NotImplementedError\n\n def _search(self, 
queryset, query_string, fields=None):\n raise NotImplementedError\n\n def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):\n # Find model/queryset\n if isinstance(model_or_queryset, QuerySet):\n model = model_or_queryset.model\n queryset = model_or_queryset\n else:\n model = model_or_queryset\n queryset = model_or_queryset.objects.all()\n\n # Model must be a class that is in the index\n if not class_is_indexed(model):\n return []\n\n # Check that theres still a query string after the clean up\n if query_string == \"\":\n return []\n\n # Apply filters to queryset\n if filters:\n queryset = queryset.filter(**filters)\n\n # Prefetch related\n if prefetch_related:\n for prefetch in prefetch_related:\n queryset = queryset.prefetch_related(prefetch)\n\n # Search\n return self._search(queryset, query_string, fields=fields)\n", "path": "wagtail/wagtailsearch/backends/base.py"}]} | 2,755 | 144 |
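Editor's note on the wagtail entry above: the golden diff fixes an offset bug in `__getitem__` — integer indexing ignored the `start` offset accumulated by earlier slicing, so `results[2:][0]` returned the first unsliced item. The following is a minimal, self-contained sketch of that bookkeeping (a hypothetical stand-in class, not the real backend), included only to illustrate the fix:

```python
# Hypothetical stand-in mimicking BaseSearchResults' start/stop bookkeeping.
class SlicedResults:
    def __init__(self, data, start=0, stop=None):
        self.data = data
        self.start = start
        self.stop = stop

    def __getitem__(self, key):
        if isinstance(key, slice):
            start = self.start + (key.start or 0)
            stop = self.start + key.stop if key.stop is not None else self.stop
            return SlicedResults(self.data, start, stop)
        # Buggy version ignored the accumulated offset:
        #   return self.data[key]
        # Fixed version (mirroring the diff above) applies it first:
        return self.data[self.start + key]


results = SlicedResults(["A", "B", "C", "D"])
assert results[2:][0] == "C"   # the buggy version returned "A"
```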
gh_patches_debug_5825 | rasdani/github-patches | git_diff | Kinto__kinto-500 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
POST with If-None-Match: * and provided id in body always returns 412
Detected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205
See https://github.com/mozilla-services/cliquet/issues/673
</issue>
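For context, a hypothetical reproduction of the report might look like the sketch below; the server URL, bucket/collection names, record id and credentials are placeholders, not values taken from the issue:

```python
# Hypothetical reproduction sketch; endpoint, ids and credentials are made up.
import requests

resp = requests.post(
    "http://localhost:8888/v1/buckets/default/collections/tasks/records",
    json={"data": {"id": "abc-123", "title": "example"}},
    headers={"If-None-Match": "*"},  # create only if the record does not exist yet
    auth=("token", "s3cr3t"),
)
# The report says this returned 412 Precondition Failed even for a fresh id;
# the golden diff below only bumps cliquet to >=3.1, where the handling was fixed.
print(resp.status_code)
```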
<code>
[start of setup.py]
1 import codecs
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 def read_file(filename):
10 """Open a related file and return its content."""
11 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
12 content = f.read()
13 return content
14
15 README = read_file('README.rst')
16 CHANGELOG = read_file('CHANGELOG.rst')
17 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
18
19 REQUIREMENTS = [
20 'waitress',
21 'cliquet>=3,<4',
22 'jsonschema',
23 ]
24
25 POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
26 'cliquet[postgresql]>=3,<4'
27 ]
28
29 MONITORING_REQUIREMENTS = REQUIREMENTS + [
30 'cliquet[monitoring]>=3,<4'
31 ]
32
33 FXA_REQUIREMENTS = REQUIREMENTS + [
34 'cliquet-fxa<2'
35 ]
36
37 ENTRY_POINTS = {
38 'paste.app_factory': [
39 'main = kinto:main',
40 ],
41 'console_scripts': [
42 'kinto = kinto.__main__:main'
43 ],
44 }
45
46 DEPENDENCY_LINKS = [
47 ]
48
49 setup(name='kinto',
50 version='1.12.0.dev0',
51 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
52 long_description=README + "\n\n" + CHANGELOG + "\n\n" + CONTRIBUTORS,
53 license='Apache License (2.0)',
54 classifiers=[
55 "Programming Language :: Python",
56 "Programming Language :: Python :: 2",
57 "Programming Language :: Python :: 2.7",
58 "Programming Language :: Python :: 3",
59 "Programming Language :: Python :: 3.4",
60 "Programming Language :: Python :: 3.5",
61 "Programming Language :: Python :: Implementation :: CPython",
62 "Programming Language :: Python :: Implementation :: PyPy",
63 "Topic :: Internet :: WWW/HTTP",
64 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
65 "License :: OSI Approved :: Apache Software License"
66 ],
67 keywords="web sync json storage",
68 author='Mozilla Services',
69 author_email='[email protected]',
70 url='https://github.com/Kinto/kinto',
71 packages=find_packages(),
72 include_package_data=True,
73 zip_safe=False,
74 install_requires=REQUIREMENTS,
75 extras_require={
76 'postgresql': POSTGRESQL_REQUIREMENTS,
77 'monitoring': MONITORING_REQUIREMENTS,
78 'fxa': FXA_REQUIREMENTS,
79 ":python_version=='2.7'": ["functools32"],
80 },
81 test_suite="kinto.tests",
82 entry_points=ENTRY_POINTS,
83 dependency_links=DEPENDENCY_LINKS)
84
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,16 +18,16 @@
REQUIREMENTS = [
'waitress',
- 'cliquet>=3,<4',
+ 'cliquet>=3.1,<4',
'jsonschema',
]
POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [
- 'cliquet[postgresql]>=3,<4'
+ 'cliquet[postgresql]>=3.1,<4'
]
MONITORING_REQUIREMENTS = REQUIREMENTS + [
- 'cliquet[monitoring]>=3,<4'
+ 'cliquet[monitoring]>=3.1,<4'
]
FXA_REQUIREMENTS = REQUIREMENTS + [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,16 +18,16 @@\n \n REQUIREMENTS = [\n 'waitress',\n- 'cliquet>=3,<4',\n+ 'cliquet>=3.1,<4',\n 'jsonschema',\n ]\n \n POSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[postgresql]>=3,<4'\n+ 'cliquet[postgresql]>=3.1,<4'\n ]\n \n MONITORING_REQUIREMENTS = REQUIREMENTS + [\n- 'cliquet[monitoring]>=3,<4'\n+ 'cliquet[monitoring]>=3.1,<4'\n ]\n \n FXA_REQUIREMENTS = REQUIREMENTS + [\n", "issue": "POST with If-None-Match: * and provided id in body always return 412\nDetected using kinto-client v0.4.0 https://github.com/Kinto/kinto-client/blob/v0.4.0/src/requests.js#L188-L205\n\nSee https://github.com/mozilla-services/cliquet/issues/673\n\n", "before_files": [{"content": "import codecs\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'waitress',\n 'cliquet>=3,<4',\n 'jsonschema',\n]\n\nPOSTGRESQL_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[postgresql]>=3,<4'\n]\n\nMONITORING_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet[monitoring]>=3,<4'\n]\n\nFXA_REQUIREMENTS = REQUIREMENTS + [\n 'cliquet-fxa<2'\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\nDEPENDENCY_LINKS = [\n]\n\nsetup(name='kinto',\n version='1.12.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIREMENTS,\n 'monitoring': MONITORING_REQUIREMENTS,\n 'fxa': FXA_REQUIREMENTS,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n entry_points=ENTRY_POINTS,\n dependency_links=DEPENDENCY_LINKS)\n", "path": "setup.py"}]} | 1,364 | 166 |
gh_patches_debug_19521 | rasdani/github-patches | git_diff | streamlink__streamlink-453 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Less violent way of closing player when stream ends
Currently streamlink uses SIGKILL to close the player when a stream ends. This prevents the player from doing its own cleanup. For example, mpv leaves DPMS/screensaver disabled because of this. I know there is a --player-no-close option, but that has an unwanted side-effect of not closing the player immediately in some situations.
I suggest fixing it by using SIGTERM instead:
```diff
diff -bur streamlink-0.1.0-orig/src/streamlink_cli/output.py streamlink-0.1.0/src/streamlink_cli/output.py
--- streamlink-0.1.0-orig/src/streamlink_cli/output.py 2016-11-21 21:56:29.000000000 +0200
+++ streamlink-0.1.0/src/streamlink_cli/output.py 2016-12-08 22:08:23.000000000 +0200
@@ -161,7 +161,7 @@
if self.kill:
with ignored(Exception):
- self.player.kill()
+ self.player.terminate()
self.player.wait()
def _write(self, data):
```
</issue>
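A rough sketch of the graceful-shutdown idea requested above — SIGTERM first, SIGKILL only as a fallback — is shown below. The helper name and the 10-second grace period are assumptions for illustration, not streamlink's actual API (the merged change in the golden diff further down follows the same pattern):

```python
# Illustrative sketch only: terminate, wait for a grace period, then kill.
from time import sleep

def close_player(player, timeout=10.0):
    """player is a subprocess.Popen; timeout is the grace period in seconds."""
    player.terminate()                      # SIGTERM lets the player restore DPMS, etc.
    waited = 0.0
    while player.poll() is None and waited < timeout:
        sleep(0.5)
        waited += 0.5
    if player.poll() is None:               # still running after the grace period
        player.kill()                       # fall back to SIGKILL
    player.wait()
```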
<code>
[start of src/streamlink_cli/output.py]
1 import os
2 import shlex
3 import subprocess
4 import sys
5
6 from time import sleep
7
8 import re
9
10 from .compat import is_win32, stdout
11 from .constants import DEFAULT_PLAYER_ARGUMENTS
12 from .utils import ignored
13
14 if is_win32:
15 import msvcrt
16
17
18 class Output(object):
19 def __init__(self):
20 self.opened = False
21
22 def open(self):
23 self._open()
24 self.opened = True
25
26 def close(self):
27 if self.opened:
28 self._close()
29
30 self.opened = False
31
32 def write(self, data):
33 if not self.opened:
34 raise IOError("Output is not opened")
35
36 return self._write(data)
37
38 def _open(self):
39 pass
40
41 def _close(self):
42 pass
43
44 def _write(self, data):
45 pass
46
47
48 class FileOutput(Output):
49 def __init__(self, filename=None, fd=None):
50 super(FileOutput, self).__init__()
51 self.filename = filename
52 self.fd = fd
53
54 def _open(self):
55 if self.filename:
56 self.fd = open(self.filename, "wb")
57
58 if is_win32:
59 msvcrt.setmode(self.fd.fileno(), os.O_BINARY)
60
61 def _close(self):
62 if self.fd is not stdout:
63 self.fd.close()
64
65 def _write(self, data):
66 self.fd.write(data)
67
68
69 class PlayerOutput(Output):
70 def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,
71 namedpipe=None):
72 super(PlayerOutput, self).__init__()
73 self.cmd = cmd
74 self.args = args
75 self.kill = kill
76 self.call = call
77 self.quiet = quiet
78
79 self.filename = filename
80 self.namedpipe = namedpipe
81 self.http = http
82
83 if self.namedpipe or self.filename or self.http:
84 self.stdin = sys.stdin
85 else:
86 self.stdin = subprocess.PIPE
87
88 if self.quiet:
89 self.stdout = open(os.devnull, "w")
90 self.stderr = open(os.devnull, "w")
91 else:
92 self.stdout = sys.stdout
93 self.stderr = sys.stderr
94
95 @property
96 def running(self):
97 sleep(0.5)
98 self.player.poll()
99 return self.player.returncode is None
100
101 def _create_arguments(self):
102 if self.namedpipe:
103 filename = self.namedpipe.path
104 elif self.filename:
105 filename = self.filename
106 elif self.http:
107 filename = self.http.url
108 else:
109 filename = "-"
110
111 args = self.args.format(filename=filename)
112 cmd = self.cmd
113 if is_win32:
114 return cmd + " " + args
115
116 return shlex.split(cmd) + shlex.split(args)
117
118 def _open(self):
119 try:
120 if self.call and self.filename:
121 self._open_call()
122 else:
123 self._open_subprocess()
124 finally:
125 if self.quiet:
126 # Output streams no longer needed in parent process
127 self.stdout.close()
128 self.stderr.close()
129
130 def _open_call(self):
131 subprocess.call(self._create_arguments(),
132 stdout=self.stdout,
133 stderr=self.stderr)
134
135 def _open_subprocess(self):
136 # Force bufsize=0 on all Python versions to avoid writing the
137 # unflushed buffer when closing a broken input pipe
138 self.player = subprocess.Popen(self._create_arguments(),
139 stdin=self.stdin, bufsize=0,
140 stdout=self.stdout,
141 stderr=self.stderr)
142 # Wait 0.5 seconds to see if program exited prematurely
143 if not self.running:
144 raise OSError("Process exited prematurely")
145
146 if self.namedpipe:
147 self.namedpipe.open("wb")
148 elif self.http:
149 self.http.open()
150
151 def _close(self):
152 # Close input to the player first to signal the end of the
153 # stream and allow the player to terminate of its own accord
154 if self.namedpipe:
155 self.namedpipe.close()
156 elif self.http:
157 self.http.close()
158 elif not self.filename:
159 self.player.stdin.close()
160
161 if self.kill:
162 with ignored(Exception):
163 self.player.kill()
164 self.player.wait()
165
166 def _write(self, data):
167 if self.namedpipe:
168 self.namedpipe.write(data)
169 elif self.http:
170 self.http.write(data)
171 else:
172 self.player.stdin.write(data)
173
174
175 __all__ = ["PlayerOutput", "FileOutput"]
176
[end of src/streamlink_cli/output.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py
--- a/src/streamlink_cli/output.py
+++ b/src/streamlink_cli/output.py
@@ -67,6 +67,8 @@
class PlayerOutput(Output):
+ PLAYER_TERMINATE_TIMEOUT = 10.0
+
def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,
namedpipe=None):
super(PlayerOutput, self).__init__()
@@ -160,7 +162,15 @@
if self.kill:
with ignored(Exception):
- self.player.kill()
+ self.player.terminate()
+ if not is_win32:
+ t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT
+ while not self.player.poll() and t < timeout:
+ sleep(0.5)
+ t += 0.5
+
+ if not self.player.returncode:
+ self.player.kill()
self.player.wait()
def _write(self, data):
| {"golden_diff": "diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py\n--- a/src/streamlink_cli/output.py\n+++ b/src/streamlink_cli/output.py\n@@ -67,6 +67,8 @@\n \n \n class PlayerOutput(Output):\n+ PLAYER_TERMINATE_TIMEOUT = 10.0\n+\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n@@ -160,7 +162,15 @@\n \n if self.kill:\n with ignored(Exception):\n- self.player.kill()\n+ self.player.terminate()\n+ if not is_win32:\n+ t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n+ while not self.player.poll() and t < timeout:\n+ sleep(0.5)\n+ t += 0.5\n+\n+ if not self.player.returncode:\n+ self.player.kill()\n self.player.wait()\n \n def _write(self, data):\n", "issue": "Less violent way of closing player when stream ends\nCurrently streamlink uses SIGKILL to close the player when a stream ends. This prevents the player from doing its own cleanup. For example, mpv leaves DPMS/screensaver disabled because of this. I know there is --player-no-close option, but that has an unwanted side-effect of not closing the player immediately in some situations.\r\n\r\nI suggest fixing it by using SIGTERM instead:\r\n```diff\r\ndiff -bur streamlink-0.1.0-orig/src/streamlink_cli/output.py streamlink-0.1.0/src/streamlink_cli/output.py\r\n--- streamlink-0.1.0-orig/src/streamlink_cli/output.py 2016-11-21 21:56:29.000000000 +0200\r\n+++ streamlink-0.1.0/src/streamlink_cli/output.py 2016-12-08 22:08:23.000000000 +0200\r\n@@ -161,7 +161,7 @@\r\n \r\n if self.kill:\r\n with ignored(Exception):\r\n- self.player.kill()\r\n+ self.player.terminate()\r\n self.player.wait()\r\n \r\n def _write(self, data):\r\n```\n", "before_files": [{"content": "import os\nimport shlex\nimport subprocess\nimport sys\n\nfrom time import sleep\n\nimport re\n\nfrom .compat import is_win32, stdout\nfrom .constants import DEFAULT_PLAYER_ARGUMENTS\nfrom .utils import ignored\n\nif is_win32:\n import msvcrt\n\n\nclass Output(object):\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise IOError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None):\n super(FileOutput, self).__init__()\n self.filename = filename\n self.fd = fd\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n\n def _write(self, data):\n self.fd.write(data)\n\n\nclass PlayerOutput(Output):\n def __init__(self, cmd, args=DEFAULT_PLAYER_ARGUMENTS, filename=None, quiet=True, kill=True, call=False, http=False,\n namedpipe=None):\n super(PlayerOutput, self).__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n @property\n def 
running(self):\n sleep(0.5)\n self.player.poll()\n return self.player.returncode is None\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n\n args = self.args.format(filename=filename)\n cmd = self.cmd\n if is_win32:\n return cmd + \" \" + args\n\n return shlex.split(cmd) + shlex.split(args)\n\n def _open(self):\n try:\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n subprocess.call(self._create_arguments(),\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n self.player = subprocess.Popen(self._create_arguments(),\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}]} | 2,225 | 238 |
gh_patches_debug_3223 | rasdani/github-patches | git_diff | searx__searx-2454 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Input turns language to Chinese
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SEARX -->
**Version of Searx, commit number if you are using on master branch and stipulate if you forked Searx**
0.17.0-17b48ff6e858b0c74116068cf6444bd578bbb747
<!-- If you are running on master branch using git execute this command
in order to fetch the latest commit ID:
```
git log -1
```
If you are using searx-docker then look at the bottom of the Searx page
and check for the version after "Powered by searx"
Please also stipulate if you are using a forked version of Searx and
include a link to the fork source code.
-->
**How did you install Searx?**
Manual install
<!-- Did you install Searx using the official wiki or using searx-docker
or manually by executing the searx/webapp.py file? -->
**What happened?**
If I search the phrase `parser error : invalid character in attribute value`, the search language changes to `zh`.
<!-- A clear and concise description of what the bug is. -->
**How To Reproduce**
This happens on every searx instance I can find. Just search the phrase `parser error : invalid character in attribute value`.
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
**Expected behavior**
Results in the language chosen.
<!-- A clear and concise description of what you expected to happen. -->
</issue>
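To illustrate why this phrase trips the parser: the query is split on whitespace, so the bare `:` token is treated as a language prefix with an empty language code. The simplified sketch below shows the split and the length guard that the golden diff adds; everything else is deliberately stripped down:

```python
# Simplified sketch of the query split and the added length check.
import re

query = "parser error : invalid character in attribute value"
parts = [p for p in re.split(r'(\s+)', query) if p.strip()]
# parts == ['parser', 'error', ':', 'invalid', 'character', 'in', 'attribute', 'value']

for query_part in parts:
    if query_part[0] == ':' and len(query_part) > 1:   # length check added by the fix
        print("language prefix:", query_part[1:])
    else:
        print("plain search term:", query_part)
```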
<code>
[start of searx/query.py]
1 #!/usr/bin/env python
2
3 '''
4 searx is free software: you can redistribute it and/or modify
5 it under the terms of the GNU Affero General Public License as published by
6 the Free Software Foundation, either version 3 of the License, or
7 (at your option) any later version.
8
9 searx is distributed in the hope that it will be useful,
10 but WITHOUT ANY WARRANTY; without even the implied warranty of
11 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 GNU Affero General Public License for more details.
13
14 You should have received a copy of the GNU Affero General Public License
15 along with searx. If not, see < http://www.gnu.org/licenses/ >.
16
17 (C) 2014 by Thomas Pointhuber, <[email protected]>
18 '''
19
20 import re
21
22 from searx.languages import language_codes
23 from searx.engines import categories, engines, engine_shortcuts
24 from searx.search import EngineRef
25 from searx.webutils import VALID_LANGUAGE_CODE
26
27
28 class RawTextQuery:
29 """parse raw text query (the value from the html input)"""
30
31 def __init__(self, query, disabled_engines):
32 assert isinstance(query, str)
33 self.query = query
34 self.disabled_engines = []
35
36 if disabled_engines:
37 self.disabled_engines = disabled_engines
38
39 self.query_parts = []
40 self.user_query_parts = []
41 self.enginerefs = []
42 self.languages = []
43 self.timeout_limit = None
44 self.external_bang = None
45 self.specific = False
46 self._parse_query()
47
48 # parse query, if tags are set, which
49 # change the search engine or search-language
50 def _parse_query(self):
51 self.query_parts = []
52
53 # split query, including whitespaces
54 raw_query_parts = re.split(r'(\s+)', self.query)
55
56 for query_part in raw_query_parts:
57 searx_query_part = False
58
59 # part does only contain spaces, skip
60 if query_part.isspace()\
61 or query_part == '':
62 continue
63
64 # this force the timeout
65 if query_part[0] == '<':
66 try:
67 raw_timeout_limit = int(query_part[1:])
68 if raw_timeout_limit < 100:
69 # below 100, the unit is the second ( <3 = 3 seconds timeout )
70 self.timeout_limit = float(raw_timeout_limit)
71 else:
72 # 100 or above, the unit is the millisecond ( <850 = 850 milliseconds timeout )
73 self.timeout_limit = raw_timeout_limit / 1000.0
74 searx_query_part = True
75 except ValueError:
76 # error not reported to the user
77 pass
78
79 # this force a language
80 if query_part[0] == ':':
81 lang = query_part[1:].lower().replace('_', '-')
82
83 # check if any language-code is equal with
84 # declared language-codes
85 for lc in language_codes:
86 lang_id, lang_name, country, english_name = map(str.lower, lc)
87
88 # if correct language-code is found
89 # set it as new search-language
90 if (lang == lang_id
91 or lang == lang_name
92 or lang == english_name
93 or lang.replace('-', ' ') == country)\
94 and lang not in self.languages:
95 searx_query_part = True
96 lang_parts = lang_id.split('-')
97 if len(lang_parts) == 2:
98 self.languages.append(lang_parts[0] + '-' + lang_parts[1].upper())
99 else:
100 self.languages.append(lang_id)
101 # to ensure best match (first match is not necessarily the best one)
102 if lang == lang_id:
103 break
104
105 # user may set a valid, yet not selectable language
106 if VALID_LANGUAGE_CODE.match(lang):
107 lang_parts = lang.split('-')
108 if len(lang_parts) > 1:
109 lang = lang_parts[0].lower() + '-' + lang_parts[1].upper()
110 if lang not in self.languages:
111 self.languages.append(lang)
112 searx_query_part = True
113
114 # external bang
115 if query_part[0:2] == "!!":
116 self.external_bang = query_part[2:]
117 searx_query_part = True
118 continue
119 # this force a engine or category
120 if query_part[0] == '!' or query_part[0] == '?':
121 prefix = query_part[1:].replace('-', ' ').replace('_', ' ')
122
123 # check if prefix is equal with engine shortcut
124 if prefix in engine_shortcuts:
125 searx_query_part = True
126 engine_name = engine_shortcuts[prefix]
127 if engine_name in engines:
128 self.enginerefs.append(EngineRef(engine_name, 'none'))
129
130 # check if prefix is equal with engine name
131 elif prefix in engines:
132 searx_query_part = True
133 self.enginerefs.append(EngineRef(prefix, 'none'))
134
135 # check if prefix is equal with categorie name
136 elif prefix in categories:
137 # using all engines for that search, which
138 # are declared under that categorie name
139 searx_query_part = True
140 self.enginerefs.extend(EngineRef(engine.name, prefix)
141 for engine in categories[prefix]
142 if (engine.name, prefix) not in self.disabled_engines)
143
144 if query_part[0] == '!':
145 self.specific = True
146
147 # append query part to query_part list
148 if searx_query_part:
149 self.query_parts.append(query_part)
150 else:
151 self.user_query_parts.append(query_part)
152
153 def changeQuery(self, query):
154 self.user_query_parts = query.strip().split()
155 return self
156
157 def getQuery(self):
158 return ' '.join(self.user_query_parts)
159
160 def getFullQuery(self):
161 # get full querry including whitespaces
162 return '{0} {1}'.format(''.join(self.query_parts), self.getQuery()).strip()
163
[end of searx/query.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/query.py b/searx/query.py
--- a/searx/query.py
+++ b/searx/query.py
@@ -77,7 +77,7 @@
pass
# this force a language
- if query_part[0] == ':':
+ if query_part[0] == ':' and len(query_part) > 1:
lang = query_part[1:].lower().replace('_', '-')
# check if any language-code is equal with
| {"golden_diff": "diff --git a/searx/query.py b/searx/query.py\n--- a/searx/query.py\n+++ b/searx/query.py\n@@ -77,7 +77,7 @@\n pass\n \n # this force a language\n- if query_part[0] == ':':\n+ if query_part[0] == ':' and len(query_part) > 1:\n lang = query_part[1:].lower().replace('_', '-')\n \n # check if any language-code is equal with\n", "issue": "Input turns language to Chinese\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SEARX -->\r\n\r\n**Version of Searx, commit number if you are using on master branch and stipulate if you forked Searx**\r\n0.17.0-17b48ff6e858b0c74116068cf6444bd578bbb747\r\n<!-- If you are running on master branch using git execute this command\r\nin order to fetch the latest commit ID:\r\n```\r\ngit log -1\r\n``` \r\nIf you are using searx-docker then look at the bottom of the Searx page\r\nand check for the version after \"Powered by searx\"\r\n\r\nPlease also stipulate if you are using a forked version of Searx and\r\ninclude a link to the fork source code.\r\n-->\r\n**How did you install Searx?**\r\nManual install\r\n<!-- Did you install Searx using the official wiki or using searx-docker\r\nor manually by executing the searx/webapp.py file? -->\r\n**What happened?**\r\nIf I search the phrase `parser error : invalid character in attribute value`, the search language changes to `zh`.\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n**How To Reproduce**\r\nThis works on every searx instance I can find. Just search the phrase `parser error : invalid character in attribute value`.\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n\r\n**Expected behavior**\r\nResults in the language chosen.\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. 
If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2014 by Thomas Pointhuber, <[email protected]>\n'''\n\nimport re\n\nfrom searx.languages import language_codes\nfrom searx.engines import categories, engines, engine_shortcuts\nfrom searx.search import EngineRef\nfrom searx.webutils import VALID_LANGUAGE_CODE\n\n\nclass RawTextQuery:\n \"\"\"parse raw text query (the value from the html input)\"\"\"\n\n def __init__(self, query, disabled_engines):\n assert isinstance(query, str)\n self.query = query\n self.disabled_engines = []\n\n if disabled_engines:\n self.disabled_engines = disabled_engines\n\n self.query_parts = []\n self.user_query_parts = []\n self.enginerefs = []\n self.languages = []\n self.timeout_limit = None\n self.external_bang = None\n self.specific = False\n self._parse_query()\n\n # parse query, if tags are set, which\n # change the search engine or search-language\n def _parse_query(self):\n self.query_parts = []\n\n # split query, including whitespaces\n raw_query_parts = re.split(r'(\\s+)', self.query)\n\n for query_part in raw_query_parts:\n searx_query_part = False\n\n # part does only contain spaces, skip\n if query_part.isspace()\\\n or query_part == '':\n continue\n\n # this force the timeout\n if query_part[0] == '<':\n try:\n raw_timeout_limit = int(query_part[1:])\n if raw_timeout_limit < 100:\n # below 100, the unit is the second ( <3 = 3 seconds timeout )\n self.timeout_limit = float(raw_timeout_limit)\n else:\n # 100 or above, the unit is the millisecond ( <850 = 850 milliseconds timeout )\n self.timeout_limit = raw_timeout_limit / 1000.0\n searx_query_part = True\n except ValueError:\n # error not reported to the user\n pass\n\n # this force a language\n if query_part[0] == ':':\n lang = query_part[1:].lower().replace('_', '-')\n\n # check if any language-code is equal with\n # declared language-codes\n for lc in language_codes:\n lang_id, lang_name, country, english_name = map(str.lower, lc)\n\n # if correct language-code is found\n # set it as new search-language\n if (lang == lang_id\n or lang == lang_name\n or lang == english_name\n or lang.replace('-', ' ') == country)\\\n and lang not in self.languages:\n searx_query_part = True\n lang_parts = lang_id.split('-')\n if len(lang_parts) == 2:\n self.languages.append(lang_parts[0] + '-' + lang_parts[1].upper())\n else:\n self.languages.append(lang_id)\n # to ensure best match (first match is not necessarily the best one)\n if lang == lang_id:\n break\n\n # user may set a valid, yet not selectable language\n if VALID_LANGUAGE_CODE.match(lang):\n lang_parts = lang.split('-')\n if len(lang_parts) > 1:\n lang = lang_parts[0].lower() + '-' + lang_parts[1].upper()\n if lang not in self.languages:\n self.languages.append(lang)\n searx_query_part = True\n\n # external bang\n if query_part[0:2] == \"!!\":\n self.external_bang = query_part[2:]\n searx_query_part = True\n continue\n # this force a engine or category\n if query_part[0] == '!' 
or query_part[0] == '?':\n prefix = query_part[1:].replace('-', ' ').replace('_', ' ')\n\n # check if prefix is equal with engine shortcut\n if prefix in engine_shortcuts:\n searx_query_part = True\n engine_name = engine_shortcuts[prefix]\n if engine_name in engines:\n self.enginerefs.append(EngineRef(engine_name, 'none'))\n\n # check if prefix is equal with engine name\n elif prefix in engines:\n searx_query_part = True\n self.enginerefs.append(EngineRef(prefix, 'none'))\n\n # check if prefix is equal with categorie name\n elif prefix in categories:\n # using all engines for that search, which\n # are declared under that categorie name\n searx_query_part = True\n self.enginerefs.extend(EngineRef(engine.name, prefix)\n for engine in categories[prefix]\n if (engine.name, prefix) not in self.disabled_engines)\n\n if query_part[0] == '!':\n self.specific = True\n\n # append query part to query_part list\n if searx_query_part:\n self.query_parts.append(query_part)\n else:\n self.user_query_parts.append(query_part)\n\n def changeQuery(self, query):\n self.user_query_parts = query.strip().split()\n return self\n\n def getQuery(self):\n return ' '.join(self.user_query_parts)\n\n def getFullQuery(self):\n # get full querry including whitespaces\n return '{0} {1}'.format(''.join(self.query_parts), self.getQuery()).strip()\n", "path": "searx/query.py"}]} | 2,595 | 109 |
gh_patches_debug_24906 | rasdani/github-patches | git_diff | dotkom__onlineweb4-2341 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OW doesn't support use of employee mails
**Describe the bug**
Apparently OW4 doesn't support use of employee mails, which means the user has issues with verifying their membership if their main email is their employee mail.
**Expected behavior**
A user should be able to verify their membership and use OW as long as they are a student too.
</issue>
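The merged fix (see the golden diff further down) stops deriving the NTNU username from the primary e-mail and takes it from the Feide user id instead. A small sketch of that extraction is below; the userinfo payload is made up for illustration, only the field shapes matter:

```python
# Made-up userinfo payload; values are examples, not real accounts.
userinfo = {
    "email": "[email protected]",                    # employee mail: wrong source
    "connect-userid_sec": ["feide:[email protected]"],   # "feide:<username>@ntnu.no"
}

# Old approach (breaks when the primary e-mail is an employee address):
username_from_email = userinfo["email"].split("@")[0]          # "ola.nordmann"

# Approach from the fix below:
username_from_feide = userinfo["connect-userid_sec"][0].split(":")[1].split("@")[0]
assert username_from_feide == "olanord"
```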
<code>
[start of apps/dataporten/views.py]
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth.decorators import login_required
6 from django.db import IntegrityError
7 from django.shortcuts import redirect
8 from oic import rndstr
9 from oic.oauth2 import AuthorizationResponse, ResponseError
10
11 from apps.dataporten.study.tasks import (fetch_groups_information, find_user_study_and_update,
12 set_ntnu_username)
13
14 from .client import client_setup
15
16 logger = logging.getLogger(__name__)
17
18 DATAPORTEN_CLIENT_ID = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_ID')
19 DATAPORTEN_CLIENT_SECRET = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_SECRET')
20 DATAPORTEN_REDIRECT_URI = settings.DATAPORTEN.get('STUDY', {}).get('REDIRECT_URI')
21 DATAPORTEN_SCOPES = settings.DATAPORTEN.get('STUDY', {}).get('SCOPES')
22
23
24 @login_required()
25 def study(request):
26 """This view redirects the user to Dataporten to request authorization for fetching information about the
27 user's groups membership, which can be used to verify eligibility for membership of Online."""
28
29 # If the user already is a member we can return early. However, if we're in testing, we want to skip the check.
30 if settings.DATAPORTEN.get('STUDY').get('ENABLED') and request.user.is_member:
31 messages.info(request, 'Du er allerede registrert som medlem.')
32 return redirect('profiles_active', active_tab='membership')
33
34 logger.debug(
35 '{} wants to automatically confirm study programme through Dataporten.'.format(request.user),
36 extra={'user': request.user}
37 )
38
39 client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)
40
41 # Generate random values used to verify that it's the same user when in the callback.
42 state = rndstr()
43 nonce = rndstr()
44
45 request.session['dataporten_study_state'] = state
46 request.session['dataporten_study_nonce'] = nonce
47
48 args = {
49 'client_id': DATAPORTEN_CLIENT_ID,
50 'response_type': 'code',
51 'scope': DATAPORTEN_SCOPES,
52 'redirect_uri': DATAPORTEN_REDIRECT_URI,
53 'nonce': nonce,
54 'state': state,
55 }
56
57 logger.debug(
58 'Constructing authorization request and redirecting user to authorize through Dataporten.',
59 extra={'user': request.user}
60 )
61
62 auth_req = client.construct_AuthorizationRequest(request_args=args)
63 login_url = auth_req.request(client.authorization_endpoint)
64
65 return redirect(login_url)
66
67
68 @login_required() # noqa: C901
69 def study_callback(request):
70 """This view fetches information from Dataporten to verify the eligibility. This is done by fetching
71 the /me/groups-API from Dataporten and further processing the fetched groups to find group membership.
72
73 Dataporten Groups API: https://docs.dataporten.no/docs/groups/"""
74 logger.debug('Fetching study programme for user {}'.format(request.user), extra={'user': request.user})
75
76 client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)
77
78 queryparams = request.GET.urlencode()
79
80 try:
81 auth_resp = client.parse_response(AuthorizationResponse, info=queryparams, sformat='urlencoded')
82 except ResponseError:
83 messages.error(request, 'Forespørselen mangler påkrevde felter, vennligst prøv igjen.')
84 return redirect('profiles_active', active_tab='membership')
85
86 if not request.session.get('dataporten_study_state', '') or \
87 request.session['dataporten_study_state'] != auth_resp['state']:
88 logger.warning('Dataporten state did not equal the one in session!')
89 messages.error(request, 'Verifisering av forespørselen feilet. Vennligst prøv igjen.')
90 return redirect('profiles_active', active_tab='membership')
91
92 args = {
93 'code': auth_resp['code'],
94 'redirect_uri': DATAPORTEN_REDIRECT_URI,
95 }
96
97 token_request = client.do_access_token_request(
98 state=auth_resp['state'], request_args=args, authn_method='client_secret_basic',
99 )
100
101 access_token = token_request.get('access_token')
102
103 # Do user info request
104 userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')
105 ntnu_username_dataporten = userinfo.get('email').split('@')[0]
106 if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:
107 logger.warning(
108 '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'
109 .format(request.user),
110 extra={
111 'user': request.user,
112 'ntnu_username__ow4': request.user.ntnu_username,
113 'ntnu_username__dataporten': ntnu_username_dataporten
114 }
115 )
116 messages.error(
117 request,
118 'Brukernavnet for brukerkontoen brukt til verifisering i Dataporten stemmer ikke overens med '
119 'kontoen du er logget inn med hos Online. Pass på at du er logget inn på din egen konto begge '
120 'steder og prøv igjen.'
121 )
122 return redirect('profiles_active', active_tab='membership')
123 elif not request.user.ntnu_username:
124 pass
125 # @ToDo: Register email address. Maybe store it, but ask user to confirm? -> resend auth email
126
127 # Getting information about study of the user
128 groups = fetch_groups_information(access_token)
129
130 try:
131 if not request.user.ntnu_username:
132 set_ntnu_username(request.user, ntnu_username_dataporten)
133 studies_info = find_user_study_and_update(request.user, groups)
134
135 if not studies_info:
136 logger.warning(
137 'Dataporten groups do not match groups for informatics',
138 extra={
139 'user': request.user,
140 'groups': groups,
141 }
142 )
143 messages.error(
144 request,
145 'Studieretningen du studerer ved gir ikke medlemskap i Online. ',
146 'Hvis du mener dette er en feil; ta vennligst kontakt Dotkom slik at vi kan feilsøke prosessen.'
147 )
148 return redirect('profiles_active', active_tab='membership')
149
150 studies_informatics, study_name, study_year = studies_info
151 except IntegrityError:
152 messages.error(
153 request,
154 'En bruker er allerede knyttet til denne NTNU-kontoen. '
155 'Dersom du har glemt passordet til din andre bruker kan du bruke "glemt passord"-funksjonen.'
156 )
157 return redirect('profiles_active', active_tab='membership')
158
159 if studies_informatics:
160 messages.success(
161 request,
162 'Bekreftet studieretning som {} i {}. klasse. Dersom dette er feil, '
163 'kontakt dotkom slik at vi kan rette opp og finne ut hva som gikk galt.'
164 .format(study_name, study_year)
165 )
166 else:
167 messages.error(
168 request,
169 'Det ser ikke ut som du tar informatikkfag. Dersom du mener dette er galt kan du sende inn en søknad '
170 'manuelt. Ta gjerne kontakt med dotkom slik at vi kan feilsøke prosessen.'
171 )
172
173 return redirect('profiles_active', active_tab='membership')
174
[end of apps/dataporten/views.py]
[start of apps/dataporten/settings.py]
1 from decouple import config
2
3 DATAPORTEN = {
4 'STUDY': {
5 'ENABLED': config('OW4_DP_STUDY_ENABLED', cast=bool, default=False),
6 'TESTING': config('OW4_DP_STUDY_TESTING', cast=bool, default=True),
7 'CLIENT_ID': config('OW4_DP_STUDY_CLIENT_ID', default=''),
8 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),
9 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),
10 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',
11 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],
12 }
13 }
14
[end of apps/dataporten/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/dataporten/settings.py b/apps/dataporten/settings.py
--- a/apps/dataporten/settings.py
+++ b/apps/dataporten/settings.py
@@ -8,6 +8,6 @@
'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),
'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),
'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',
- 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],
+ 'SCOPES': ['openid', 'userid-feide', 'profile', 'groups', 'email'],
}
}
diff --git a/apps/dataporten/views.py b/apps/dataporten/views.py
--- a/apps/dataporten/views.py
+++ b/apps/dataporten/views.py
@@ -102,7 +102,8 @@
# Do user info request
userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')
- ntnu_username_dataporten = userinfo.get('email').split('@')[0]
+ # connect-userid_sec format is array with "feide:[email protected]"
+ ntnu_username_dataporten = userinfo.get('connect-userid_sec')[0].split(':')[1].split('@')[0]
if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:
logger.warning(
'{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'
| {"golden_diff": "diff --git a/apps/dataporten/settings.py b/apps/dataporten/settings.py\n--- a/apps/dataporten/settings.py\n+++ b/apps/dataporten/settings.py\n@@ -8,6 +8,6 @@\n 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),\n 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),\n 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',\n- 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],\n+ 'SCOPES': ['openid', 'userid-feide', 'profile', 'groups', 'email'],\n }\n }\ndiff --git a/apps/dataporten/views.py b/apps/dataporten/views.py\n--- a/apps/dataporten/views.py\n+++ b/apps/dataporten/views.py\n@@ -102,7 +102,8 @@\n \n # Do user info request\n userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')\n- ntnu_username_dataporten = userinfo.get('email').split('@')[0]\n+ # connect-userid_sec format is array with \"feide:[email protected]\"\n+ ntnu_username_dataporten = userinfo.get('connect-userid_sec')[0].split(':')[1].split('@')[0]\n if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:\n logger.warning(\n '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'\n", "issue": "OW doesnt support use of employee mails\n**Describe the bug**\r\nApperantly OW4 doesnt support use of employee mails, which means the user has issues with verifying their membership if their main email is their employee mail\r\n\r\n**Expected behavior**\r\nAn user should be able to verify their membership and use OW as long as they are a student too.\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.db import IntegrityError\nfrom django.shortcuts import redirect\nfrom oic import rndstr\nfrom oic.oauth2 import AuthorizationResponse, ResponseError\n\nfrom apps.dataporten.study.tasks import (fetch_groups_information, find_user_study_and_update,\n set_ntnu_username)\n\nfrom .client import client_setup\n\nlogger = logging.getLogger(__name__)\n\nDATAPORTEN_CLIENT_ID = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_ID')\nDATAPORTEN_CLIENT_SECRET = settings.DATAPORTEN.get('STUDY', {}).get('CLIENT_SECRET')\nDATAPORTEN_REDIRECT_URI = settings.DATAPORTEN.get('STUDY', {}).get('REDIRECT_URI')\nDATAPORTEN_SCOPES = settings.DATAPORTEN.get('STUDY', {}).get('SCOPES')\n\n\n@login_required()\ndef study(request):\n \"\"\"This view redirects the user to Dataporten to request authorization for fetching information about the\n user's groups membership, which can be used to verify eligibility for membership of Online.\"\"\"\n\n # If the user already is a member we can return early. 
However, if we're in testing, we want to skip the check.\n if settings.DATAPORTEN.get('STUDY').get('ENABLED') and request.user.is_member:\n messages.info(request, 'Du er allerede registrert som medlem.')\n return redirect('profiles_active', active_tab='membership')\n\n logger.debug(\n '{} wants to automatically confirm study programme through Dataporten.'.format(request.user),\n extra={'user': request.user}\n )\n\n client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)\n\n # Generate random values used to verify that it's the same user when in the callback.\n state = rndstr()\n nonce = rndstr()\n\n request.session['dataporten_study_state'] = state\n request.session['dataporten_study_nonce'] = nonce\n\n args = {\n 'client_id': DATAPORTEN_CLIENT_ID,\n 'response_type': 'code',\n 'scope': DATAPORTEN_SCOPES,\n 'redirect_uri': DATAPORTEN_REDIRECT_URI,\n 'nonce': nonce,\n 'state': state,\n }\n\n logger.debug(\n 'Constructing authorization request and redirecting user to authorize through Dataporten.',\n extra={'user': request.user}\n )\n\n auth_req = client.construct_AuthorizationRequest(request_args=args)\n login_url = auth_req.request(client.authorization_endpoint)\n\n return redirect(login_url)\n\n\n@login_required() # noqa: C901\ndef study_callback(request):\n \"\"\"This view fetches information from Dataporten to verify the eligibility. This is done by fetching\n the /me/groups-API from Dataporten and further processing the fetched groups to find group membership.\n\n Dataporten Groups API: https://docs.dataporten.no/docs/groups/\"\"\"\n logger.debug('Fetching study programme for user {}'.format(request.user), extra={'user': request.user})\n\n client = client_setup(DATAPORTEN_CLIENT_ID, DATAPORTEN_CLIENT_SECRET)\n\n queryparams = request.GET.urlencode()\n\n try:\n auth_resp = client.parse_response(AuthorizationResponse, info=queryparams, sformat='urlencoded')\n except ResponseError:\n messages.error(request, 'Foresp\u00f8rselen mangler p\u00e5krevde felter, vennligst pr\u00f8v igjen.')\n return redirect('profiles_active', active_tab='membership')\n\n if not request.session.get('dataporten_study_state', '') or \\\n request.session['dataporten_study_state'] != auth_resp['state']:\n logger.warning('Dataporten state did not equal the one in session!')\n messages.error(request, 'Verifisering av foresp\u00f8rselen feilet. Vennligst pr\u00f8v igjen.')\n return redirect('profiles_active', active_tab='membership')\n\n args = {\n 'code': auth_resp['code'],\n 'redirect_uri': DATAPORTEN_REDIRECT_URI,\n }\n\n token_request = client.do_access_token_request(\n state=auth_resp['state'], request_args=args, authn_method='client_secret_basic',\n )\n\n access_token = token_request.get('access_token')\n\n # Do user info request\n userinfo = client.do_user_info_request(state=auth_resp['state'], behavior='use_authorization_header')\n ntnu_username_dataporten = userinfo.get('email').split('@')[0]\n if request.user.ntnu_username and request.user.ntnu_username != ntnu_username_dataporten:\n logger.warning(\n '{} tried to authorize, but the registered ntnu_username and the one received from Dataporten differ.'\n .format(request.user),\n extra={\n 'user': request.user,\n 'ntnu_username__ow4': request.user.ntnu_username,\n 'ntnu_username__dataporten': ntnu_username_dataporten\n }\n )\n messages.error(\n request,\n 'Brukernavnet for brukerkontoen brukt til verifisering i Dataporten stemmer ikke overens med '\n 'kontoen du er logget inn med hos Online. 
Pass p\u00e5 at du er logget inn p\u00e5 din egen konto begge '\n 'steder og pr\u00f8v igjen.'\n )\n return redirect('profiles_active', active_tab='membership')\n elif not request.user.ntnu_username:\n pass\n # @ToDo: Register email address. Maybe store it, but ask user to confirm? -> resend auth email\n\n # Getting information about study of the user\n groups = fetch_groups_information(access_token)\n\n try:\n if not request.user.ntnu_username:\n set_ntnu_username(request.user, ntnu_username_dataporten)\n studies_info = find_user_study_and_update(request.user, groups)\n\n if not studies_info:\n logger.warning(\n 'Dataporten groups do not match groups for informatics',\n extra={\n 'user': request.user,\n 'groups': groups,\n }\n )\n messages.error(\n request,\n 'Studieretningen du studerer ved gir ikke medlemskap i Online. ',\n 'Hvis du mener dette er en feil; ta vennligst kontakt Dotkom slik at vi kan feils\u00f8ke prosessen.'\n )\n return redirect('profiles_active', active_tab='membership')\n\n studies_informatics, study_name, study_year = studies_info\n except IntegrityError:\n messages.error(\n request,\n 'En bruker er allerede knyttet til denne NTNU-kontoen. '\n 'Dersom du har glemt passordet til din andre bruker kan du bruke \"glemt passord\"-funksjonen.'\n )\n return redirect('profiles_active', active_tab='membership')\n\n if studies_informatics:\n messages.success(\n request,\n 'Bekreftet studieretning som {} i {}. klasse. Dersom dette er feil, '\n 'kontakt dotkom slik at vi kan rette opp og finne ut hva som gikk galt.'\n .format(study_name, study_year)\n )\n else:\n messages.error(\n request,\n 'Det ser ikke ut som du tar informatikkfag. Dersom du mener dette er galt kan du sende inn en s\u00f8knad '\n 'manuelt. Ta gjerne kontakt med dotkom slik at vi kan feils\u00f8ke prosessen.'\n )\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/dataporten/views.py"}, {"content": "from decouple import config\n\nDATAPORTEN = {\n 'STUDY': {\n 'ENABLED': config('OW4_DP_STUDY_ENABLED', cast=bool, default=False),\n 'TESTING': config('OW4_DP_STUDY_TESTING', cast=bool, default=True),\n 'CLIENT_ID': config('OW4_DP_STUDY_CLIENT_ID', default=''),\n 'CLIENT_SECRET': config('OW4_DP_STUDY_CLIENT_SECRET', default=''),\n 'REDIRECT_URI': config('OW4_DP_STUDY_REDIRECT_URI', default=''),\n 'PROVIDER_URL': 'https://auth.dataporten.no/oauth/token',\n 'SCOPES': ['openid', 'userid', 'profile', 'groups', 'email'],\n }\n}\n", "path": "apps/dataporten/settings.py"}]} | 2,906 | 364 |
gh_patches_debug_5184 | rasdani/github-patches | git_diff | googleapis__python-bigquery-51 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQuery: TypeError: from_arrays() takes at least 2 positional arguments (1 given)
Hi all, I tried the BigQuery client in Python with the default example. Since I moved from 1.23.1 to 1.24.0 last week I have been getting the following issue.
It's related to pyarrow, but I was not upgrading pyarrow (it worked before).
#### Environment details
- Python 3.7.6
- bigquery.__version__ '1.24.0'
- pyarrow.__version__ '0.11.1'
- Linux jupyter-generic 4.15.0-1057-aws #59-Ubuntu SMP Wed Dec 4 10:02:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Name: google-cloud-bigquery
- Version: 1.24.0
- Summary: Google BigQuery API client library
- Location: /opt/conda/lib/python3.7/site-packages
- Requires: google-cloud-core, google-auth, six, google-resumable-media, protobuf, google-api-core
- Required-by: pandas-gbq
#### Steps to reproduce
Just running the default example from the web: https://cloud.google.com/bigquery/docs/bigquery-storage-python-pandas
```python
import google.auth
from google.cloud import bigquery
client = bigquery.Client.from_service_account_json('cred.json')
# Download query results.
query_string = """
SELECT
CONCAT(
'https://stackoverflow.com/questions/',
CAST(id as STRING)) as url,
view_count
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE tags like '%google-bigquery%'
ORDER BY view_count DESC
"""
dataframe = (
client.query(query_string)
.result()
.to_dataframe()
)
print(dataframe.head())
```
#### Stack trace
```
--------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-61d06599dbdd> in <module>
12
13 dataframe = (
---> 14 client.query(query_string)
15 .result()
16 .to_dataframe()
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_dataframe(self, bqstorage_client, dtypes, progress_bar_type, create_bqstorage_client)
1727 progress_bar_type=progress_bar_type,
1728 bqstorage_client=bqstorage_client,
-> 1729 create_bqstorage_client=create_bqstorage_client,
1730 )
1731 df = record_batch.to_pandas()
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_arrow(self, progress_bar_type, bqstorage_client, create_bqstorage_client)
1541 record_batches = []
1542 for record_batch in self._to_arrow_iterable(
-> 1543 bqstorage_client=bqstorage_client
1544 ):
1545 record_batches.append(record_batch)
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in _to_page_iterable(self, bqstorage_download, tabledata_list_download, bqstorage_client)
1433 )
1434 )
-> 1435 for item in tabledata_list_download():
1436 yield item
1437
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in download_arrow_tabledata_list(pages, bq_schema)
523
524 for page in pages:
--> 525 yield _tabledata_list_page_to_arrow(page, column_names, arrow_types)
526
527
/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in _tabledata_list_page_to_arrow(page, column_names, arrow_types)
499
500 if isinstance(column_names, pyarrow.Schema):
--> 501 return pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)
502 return pyarrow.RecordBatch.from_arrays(arrays, names=column_names)
503
/opt/conda/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_arrays()
TypeError: from_arrays() takes at least 2 positional arguments (1 given)
```
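
For what it's worth, the failing frame is `pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)` in `_pandas_helpers.py`, and it is the `schema=` keyword that pyarrow 0.11.1 cannot handle. A rough sanity check to catch this before query time (the `0.16.0` lower bound here is an assumption taken from the dependency pin in the eventual fix, not from pyarrow's changelog):

```python
import pyarrow

# google-cloud-bigquery 1.24.0 builds record batches with
# pyarrow.RecordBatch.from_arrays(arrays, schema=column_names); very old
# pyarrow releases (0.11.1 in this report) reject the schema= keyword with
# the TypeError shown in the traceback above.
major, minor = (int(part) for part in pyarrow.__version__.split(".")[:2])
if (major, minor) < (0, 16):  # assumed minimum, matching the pin in the fix
    raise RuntimeError(
        "pyarrow %s is too old for google-cloud-bigquery 1.24.0; "
        "try `pip install --upgrade 'pyarrow>=0.16.0'`" % pyarrow.__version__
    )
```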
</issue>
<code>
[start of setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25 version = "1.24.0"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 'enum34; python_version < "3.4"',
33 "google-auth >= 1.9.0, < 2.0dev",
34 "google-api-core >= 1.15.0, < 2.0dev",
35 "google-cloud-core >= 1.1.0, < 2.0dev",
36 "google-resumable-media >= 0.5.0, < 0.6dev",
37 "protobuf >= 3.6.0",
38 "six >=1.13.0,< 2.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev",
43 # Bad Linux release for 0.14.0.
44 # https://issues.apache.org/jira/browse/ARROW-5868
45 "pyarrow>=0.13.0, != 0.14.0",
46 ],
47 "pandas": ["pandas>=0.17.1"],
48 # Exclude PyArrow dependency from Windows Python 2.7.
49 'pyarrow: platform_system != "Windows" or python_version >= "3.4"': [
50 # Bad Linux release for 0.14.0.
51 # https://issues.apache.org/jira/browse/ARROW-5868
52 "pyarrow>=0.4.1, != 0.14.0"
53 ],
54 "tqdm": ["tqdm >= 4.0.0, <5.0.0dev"],
55 "fastparquet": ["fastparquet", "python-snappy"],
56 }
57
58 all_extras = []
59
60 for extra in extras:
61 if extra == "fastparquet":
62 # Skip fastparquet from "all" because it is redundant with pyarrow and
63 # creates a dependency on pre-release versions of numpy. See:
64 # https://github.com/googleapis/google-cloud-python/issues/8549
65 continue
66 all_extras.extend(extras[extra])
67
68 extras["all"] = all_extras
69
70 # Setup boilerplate below this line.
71
72 package_root = os.path.abspath(os.path.dirname(__file__))
73
74 readme_filename = os.path.join(package_root, "README.rst")
75 with io.open(readme_filename, encoding="utf-8") as readme_file:
76 readme = readme_file.read()
77
78 # Only include packages under the 'google' namespace. Do not include tests,
79 # benchmarks, etc.
80 packages = [
81 package for package in setuptools.find_packages() if package.startswith("google")
82 ]
83
84 # Determine which namespaces are needed.
85 namespaces = ["google"]
86 if "google.cloud" in packages:
87 namespaces.append("google.cloud")
88
89
90 setuptools.setup(
91 name=name,
92 version=version,
93 description=description,
94 long_description=readme,
95 author="Google LLC",
96 author_email="[email protected]",
97 license="Apache 2.0",
98 url="https://github.com/googleapis/python-bigquery",
99 classifiers=[
100 release_status,
101 "Intended Audience :: Developers",
102 "License :: OSI Approved :: Apache Software License",
103 "Programming Language :: Python",
104 "Programming Language :: Python :: 2",
105 "Programming Language :: Python :: 2.7",
106 "Programming Language :: Python :: 3",
107 "Programming Language :: Python :: 3.5",
108 "Programming Language :: Python :: 3.6",
109 "Programming Language :: Python :: 3.7",
110 "Operating System :: OS Independent",
111 "Topic :: Internet",
112 ],
113 platforms="Posix; MacOS X; Windows",
114 packages=packages,
115 namespace_packages=namespaces,
116 install_requires=dependencies,
117 extras_require=extras,
118 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
119 include_package_data=True,
120 zip_safe=False,
121 )
122
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -40,9 +40,7 @@
extras = {
"bqstorage": [
"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev",
- # Bad Linux release for 0.14.0.
- # https://issues.apache.org/jira/browse/ARROW-5868
- "pyarrow>=0.13.0, != 0.14.0",
+ "pyarrow>=0.16.0, < 2.0dev",
],
"pandas": ["pandas>=0.17.1"],
# Exclude PyArrow dependency from Windows Python 2.7.
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -40,9 +40,7 @@\n extras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n- # Bad Linux release for 0.14.0.\n- # https://issues.apache.org/jira/browse/ARROW-5868\n- \"pyarrow>=0.13.0, != 0.14.0\",\n+ \"pyarrow>=0.16.0, < 2.0dev\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n", "issue": "BigQuery: TypeError: from_arrays() takes at least 2 positional arguments (1 given)\nHi all, i tried bq client in python with the default example. Since i moved from 1.23.1 to 1.24.0 last week i get the following issue.\r\n\r\nIts related to pyarrow but i was not upgrading pyarrow (worked with it before)\r\n\r\n#### Environment details\r\n- Python 3.7.6\r\n- bigquery.__version__ '1.24.0'\r\n- pyarrow.__version__ '0.11.1'\r\n- Linux jupyter-generic 4.15.0-1057-aws googleapis/google-cloud-python#59-Ubuntu SMP Wed Dec 4 10:02:00 UTC 2019 x86_64 \r\n\r\n- x86_64 x86_64 GNU/Linux\r\n- Name: google-cloud-bigquery\r\n- Version: 1.24.0\r\n- Summary: Google BigQuery API client library\r\n- Location: /opt/conda/lib/python3.7/site-packages\r\n- Requires: google-cloud-core, google-auth, six, google-resumable-media, protobuf, google-api-core\r\n- Required-by: pandas-gbq\r\n\r\n#### Steps to reproduce\r\njust running a default example form the webhttps://cloud.google.com/bigquery/docs/bigquery-storage-python-pandas\r\n\r\n```python\r\nimport google.auth\r\nfrom google.cloud import bigquery\r\nclient = bigquery.Client.from_service_account_json('cred.json')\r\n\r\n# Download query results.\r\nquery_string = \"\"\"\r\nSELECT\r\nCONCAT(\r\n 'https://stackoverflow.com/questions/',\r\n CAST(id as STRING)) as url,\r\nview_count\r\nFROM `bigquery-public-data.stackoverflow.posts_questions`\r\nWHERE tags like '%google-bigquery%'\r\nORDER BY view_count DESC\r\n\"\"\"\r\n\r\ndataframe = (\r\n client.query(query_string)\r\n .result()\r\n .to_dataframe()\r\n)\r\nprint(dataframe.head())\r\n```\r\n\r\n\r\n#### Stack trace\r\n```\r\n--------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-11-61d06599dbdd> in <module>\r\n 12 \r\n 13 dataframe = (\r\n---> 14 client.query(query_string)\r\n 15 .result()\r\n 16 .to_dataframe()\r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_dataframe(self, bqstorage_client, dtypes, progress_bar_type, create_bqstorage_client)\r\n 1727 progress_bar_type=progress_bar_type,\r\n 1728 bqstorage_client=bqstorage_client,\r\n-> 1729 create_bqstorage_client=create_bqstorage_client,\r\n 1730 )\r\n 1731 df = record_batch.to_pandas()\r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in to_arrow(self, progress_bar_type, bqstorage_client, create_bqstorage_client)\r\n 1541 record_batches = []\r\n 1542 for record_batch in self._to_arrow_iterable(\r\n-> 1543 bqstorage_client=bqstorage_client\r\n 1544 ):\r\n 1545 record_batches.append(record_batch)\r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/table.py in _to_page_iterable(self, bqstorage_download, tabledata_list_download, bqstorage_client)\r\n 1433 )\r\n 1434 )\r\n-> 1435 for item in tabledata_list_download():\r\n 1436 yield item\r\n 1437 \r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in download_arrow_tabledata_list(pages, bq_schema)\r\n 523 \r\n 524 for page in pages:\r\n--> 525 yield 
_tabledata_list_page_to_arrow(page, column_names, arrow_types)\r\n 526 \r\n 527 \r\n\r\n/opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/_pandas_helpers.py in _tabledata_list_page_to_arrow(page, column_names, arrow_types)\r\n 499 \r\n 500 if isinstance(column_names, pyarrow.Schema):\r\n--> 501 return pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)\r\n 502 return pyarrow.RecordBatch.from_arrays(arrays, names=column_names)\r\n 503 \r\n\r\n/opt/conda/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.RecordBatch.from_arrays()\r\n\r\nTypeError: from_arrays() takes at least 2 positional arguments (1 given)\r\n\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\nversion = \"1.24.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n 'enum34; python_version < \"3.4\"',\n \"google-auth >= 1.9.0, < 2.0dev\",\n \"google-api-core >= 1.15.0, < 2.0dev\",\n \"google-cloud-core >= 1.1.0, < 2.0dev\",\n \"google-resumable-media >= 0.5.0, < 0.6dev\",\n \"protobuf >= 3.6.0\",\n \"six >=1.13.0,< 2.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 0.6.0, <2.0.0dev\",\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.13.0, != 0.14.0\",\n ],\n \"pandas\": [\"pandas>=0.17.1\"],\n # Exclude PyArrow dependency from Windows Python 2.7.\n 'pyarrow: platform_system != \"Windows\" or python_version >= \"3.4\"': [\n # Bad Linux release for 0.14.0.\n # https://issues.apache.org/jira/browse/ARROW-5868\n \"pyarrow>=0.4.1, != 0.14.0\"\n ],\n \"tqdm\": [\"tqdm >= 4.0.0, <5.0.0dev\"],\n \"fastparquet\": [\"fastparquet\", \"python-snappy\"],\n}\n\nall_extras = []\n\nfor extra in extras:\n if extra == \"fastparquet\":\n # Skip fastparquet from \"all\" because it is redundant with pyarrow and\n # creates a dependency on pre-release versions of numpy. See:\n # https://github.com/googleapis/google-cloud-python/issues/8549\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 2,885 | 174 |
gh_patches_debug_7018 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong baseurl in remote @-mention notification
When someone on a remote server @-mentions me on Bookwyrm, the "status" link in notifications has the wrong baseUrl.
Example:

The url that the username `Flancian` links to is `https://bookwyrm.social/user/[email protected]`. The url that `status` links to is `https://social.coop/users/flancian/status/14794`, which takes me to a 404 on the `social.coop` server. The correct url that status should be linking to is `https://bookwyrm.social/user/[email protected]/status/14794`.
I've confirmed this happens for @-mentions from any remote server.
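
For reference, here is a rough sketch (hypothetical values, not taken from a real database) of how `BookWyrmModel.get_remote_id` in `bookwyrm/models/base_model.py` (included below) ends up minting the bad link when the status author is a remote user: the base path comes from the author's `remote_id`, which points at the other server.

```python
# Hypothetical values for a remote author mirrored on this instance.
remote_user_remote_id = "https://social.coop/users/flancian"
status_id = 14794

# What get_remote_id currently does for a status by that user:
base_path = remote_user_remote_id            # hasattr(self, 'user') branch
model_name = "status"                        # type(self).__name__.lower()
bad_link = "%s/%s/%d" % (base_path, model_name, status_id)
# bad_link == "https://social.coop/users/flancian/status/14794"  -> 404 on social.coop

# What the notification should link to instead (a path on this server):
expected = "https://bookwyrm.social/user/[email protected]/status/%d" % status_id
```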
</issue>
<code>
[start of bookwyrm/models/base_model.py]
1 ''' base model with default fields '''
2 from base64 import b64encode
3 from functools import reduce
4 import operator
5 from uuid import uuid4
6
7 from Crypto.PublicKey import RSA
8 from Crypto.Signature import pkcs1_15
9 from Crypto.Hash import SHA256
10 from django.core.paginator import Paginator
11 from django.db import models
12 from django.db.models import Q
13 from django.dispatch import receiver
14
15 from bookwyrm import activitypub
16 from bookwyrm.settings import DOMAIN, PAGE_LENGTH
17 from .fields import ImageField, ManyToManyField, RemoteIdField
18
19
20 class BookWyrmModel(models.Model):
21 ''' shared fields '''
22 created_date = models.DateTimeField(auto_now_add=True)
23 updated_date = models.DateTimeField(auto_now=True)
24 remote_id = RemoteIdField(null=True, activitypub_field='id')
25
26 def get_remote_id(self):
27 ''' generate a url that resolves to the local object '''
28 base_path = 'https://%s' % DOMAIN
29 if hasattr(self, 'user'):
30 base_path = self.user.remote_id
31 model_name = type(self).__name__.lower()
32 return '%s/%s/%d' % (base_path, model_name, self.id)
33
34 class Meta:
35 ''' this is just here to provide default fields for other models '''
36 abstract = True
37
38 @property
39 def local_path(self):
40 ''' how to link to this object in the local app '''
41 return self.get_remote_id().replace('https://%s' % DOMAIN, '')
42
43
44 @receiver(models.signals.post_save)
45 #pylint: disable=unused-argument
46 def execute_after_save(sender, instance, created, *args, **kwargs):
47 ''' set the remote_id after save (when the id is available) '''
48 if not created or not hasattr(instance, 'get_remote_id'):
49 return
50 if not instance.remote_id:
51 instance.remote_id = instance.get_remote_id()
52 instance.save()
53
54
55 def unfurl_related_field(related_field, sort_field=None):
56 ''' load reverse lookups (like public key owner or Status attachment '''
57 if hasattr(related_field, 'all'):
58 return [unfurl_related_field(i) for i in related_field.order_by(
59 sort_field).all()]
60 if related_field.reverse_unfurl:
61 return related_field.field_to_activity()
62 return related_field.remote_id
63
64
65 class ActivitypubMixin:
66 ''' add this mixin for models that are AP serializable '''
67 activity_serializer = lambda: {}
68 reverse_unfurl = False
69
70 def __init__(self, *args, **kwargs):
71 ''' collect some info on model fields '''
72 self.image_fields = []
73 self.many_to_many_fields = []
74 self.simple_fields = [] # "simple"
75 for field in self._meta.get_fields():
76 if not hasattr(field, 'field_to_activity'):
77 continue
78
79 if isinstance(field, ImageField):
80 self.image_fields.append(field)
81 elif isinstance(field, ManyToManyField):
82 self.many_to_many_fields.append(field)
83 else:
84 self.simple_fields.append(field)
85
86 self.activity_fields = self.image_fields + \
87 self.many_to_many_fields + self.simple_fields
88
89 self.deserialize_reverse_fields = self.deserialize_reverse_fields \
90 if hasattr(self, 'deserialize_reverse_fields') else []
91 self.serialize_reverse_fields = self.serialize_reverse_fields \
92 if hasattr(self, 'serialize_reverse_fields') else []
93
94 super().__init__(*args, **kwargs)
95
96
97 @classmethod
98 def find_existing_by_remote_id(cls, remote_id):
99 ''' look up a remote id in the db '''
100 return cls.find_existing({'id': remote_id})
101
102 @classmethod
103 def find_existing(cls, data):
104 ''' compare data to fields that can be used for deduplation.
105 This always includes remote_id, but can also be unique identifiers
106 like an isbn for an edition '''
107 filters = []
108 for field in cls._meta.get_fields():
109 if not hasattr(field, 'deduplication_field') or \
110 not field.deduplication_field:
111 continue
112
113 value = data.get(field.get_activitypub_field())
114 if not value:
115 continue
116 filters.append({field.name: value})
117
118 if hasattr(cls, 'origin_id') and 'id' in data:
119 # kinda janky, but this handles special case for books
120 filters.append({'origin_id': data['id']})
121
122 if not filters:
123 # if there are no deduplication fields, it will match the first
124 # item no matter what. this shouldn't happen but just in case.
125 return None
126
127 objects = cls.objects
128 if hasattr(objects, 'select_subclasses'):
129 objects = objects.select_subclasses()
130
131 # an OR operation on all the match fields
132 match = objects.filter(
133 reduce(
134 operator.or_, (Q(**f) for f in filters)
135 )
136 )
137 # there OUGHT to be only one match
138 return match.first()
139
140
141 def to_activity(self):
142 ''' convert from a model to an activity '''
143 activity = generate_activity(self)
144 return self.activity_serializer(**activity).serialize()
145
146
147 def to_create_activity(self, user, **kwargs):
148 ''' returns the object wrapped in a Create activity '''
149 activity_object = self.to_activity(**kwargs)
150
151 signature = None
152 create_id = self.remote_id + '/activity'
153 if 'content' in activity_object:
154 signer = pkcs1_15.new(RSA.import_key(user.key_pair.private_key))
155 content = activity_object['content']
156 signed_message = signer.sign(SHA256.new(content.encode('utf8')))
157
158 signature = activitypub.Signature(
159 creator='%s#main-key' % user.remote_id,
160 created=activity_object['published'],
161 signatureValue=b64encode(signed_message).decode('utf8')
162 )
163
164 return activitypub.Create(
165 id=create_id,
166 actor=user.remote_id,
167 to=activity_object['to'],
168 cc=activity_object['cc'],
169 object=activity_object,
170 signature=signature,
171 ).serialize()
172
173
174 def to_delete_activity(self, user):
175 ''' notice of deletion '''
176 return activitypub.Delete(
177 id=self.remote_id + '/activity',
178 actor=user.remote_id,
179 to=['%s/followers' % user.remote_id],
180 cc=['https://www.w3.org/ns/activitystreams#Public'],
181 object=self.to_activity(),
182 ).serialize()
183
184
185 def to_update_activity(self, user):
186 ''' wrapper for Updates to an activity '''
187 activity_id = '%s#update/%s' % (self.remote_id, uuid4())
188 return activitypub.Update(
189 id=activity_id,
190 actor=user.remote_id,
191 to=['https://www.w3.org/ns/activitystreams#Public'],
192 object=self.to_activity()
193 ).serialize()
194
195
196 def to_undo_activity(self, user):
197 ''' undo an action '''
198 return activitypub.Undo(
199 id='%s#undo' % self.remote_id,
200 actor=user.remote_id,
201 object=self.to_activity()
202 ).serialize()
203
204
205 class OrderedCollectionPageMixin(ActivitypubMixin):
206 ''' just the paginator utilities, so you don't HAVE to
207 override ActivitypubMixin's to_activity (ie, for outbox '''
208 @property
209 def collection_remote_id(self):
210 ''' this can be overriden if there's a special remote id, ie outbox '''
211 return self.remote_id
212
213
214 def to_ordered_collection(self, queryset, \
215 remote_id=None, page=False, collection_only=False, **kwargs):
216 ''' an ordered collection of whatevers '''
217 if not queryset.ordered:
218 raise RuntimeError('queryset must be ordered')
219
220 remote_id = remote_id or self.remote_id
221 if page:
222 return to_ordered_collection_page(
223 queryset, remote_id, **kwargs)
224
225 if collection_only or not hasattr(self, 'activity_serializer'):
226 serializer = activitypub.OrderedCollection
227 activity = {}
228 else:
229 serializer = self.activity_serializer
230 # a dict from the model fields
231 activity = generate_activity(self)
232
233 if remote_id:
234 activity['id'] = remote_id
235
236 paginated = Paginator(queryset, PAGE_LENGTH)
237 # add computed fields specific to orderd collections
238 activity['totalItems'] = paginated.count
239 activity['first'] = '%s?page=1' % remote_id
240 activity['last'] = '%s?page=%d' % (remote_id, paginated.num_pages)
241
242 return serializer(**activity).serialize()
243
244
245 # pylint: disable=unused-argument
246 def to_ordered_collection_page(
247 queryset, remote_id, id_only=False, page=1, **kwargs):
248 ''' serialize and pagiante a queryset '''
249 paginated = Paginator(queryset, PAGE_LENGTH)
250
251 activity_page = paginated.page(page)
252 if id_only:
253 items = [s.remote_id for s in activity_page.object_list]
254 else:
255 items = [s.to_activity() for s in activity_page.object_list]
256
257 prev_page = next_page = None
258 if activity_page.has_next():
259 next_page = '%s?page=%d' % (remote_id, activity_page.next_page_number())
260 if activity_page.has_previous():
261 prev_page = '%s?page=%d' % \
262 (remote_id, activity_page.previous_page_number())
263 return activitypub.OrderedCollectionPage(
264 id='%s?page=%s' % (remote_id, page),
265 partOf=remote_id,
266 orderedItems=items,
267 next=next_page,
268 prev=prev_page
269 ).serialize()
270
271
272 class OrderedCollectionMixin(OrderedCollectionPageMixin):
273 ''' extends activitypub models to work as ordered collections '''
274 @property
275 def collection_queryset(self):
276 ''' usually an ordered collection model aggregates a different model '''
277 raise NotImplementedError('Model must define collection_queryset')
278
279 activity_serializer = activitypub.OrderedCollection
280
281 def to_activity(self, **kwargs):
282 ''' an ordered collection of the specified model queryset '''
283 return self.to_ordered_collection(self.collection_queryset, **kwargs)
284
285
286 def generate_activity(obj):
287 ''' go through the fields on an object '''
288 activity = {}
289 for field in obj.activity_fields:
290 field.set_activity_from_field(activity, obj)
291
292 if hasattr(obj, 'serialize_reverse_fields'):
293 # for example, editions of a work
294 for model_field_name, activity_field_name, sort_field in \
295 obj.serialize_reverse_fields:
296 related_field = getattr(obj, model_field_name)
297 activity[activity_field_name] = \
298 unfurl_related_field(related_field, sort_field)
299
300 if not activity.get('id'):
301 activity['id'] = obj.get_remote_id()
302 return activity
303
[end of bookwyrm/models/base_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/models/base_model.py b/bookwyrm/models/base_model.py
--- a/bookwyrm/models/base_model.py
+++ b/bookwyrm/models/base_model.py
@@ -27,7 +27,7 @@
''' generate a url that resolves to the local object '''
base_path = 'https://%s' % DOMAIN
if hasattr(self, 'user'):
- base_path = self.user.remote_id
+ base_path = '%s%s' % (base_path, self.user.local_path)
model_name = type(self).__name__.lower()
return '%s/%s/%d' % (base_path, model_name, self.id)
| {"golden_diff": "diff --git a/bookwyrm/models/base_model.py b/bookwyrm/models/base_model.py\n--- a/bookwyrm/models/base_model.py\n+++ b/bookwyrm/models/base_model.py\n@@ -27,7 +27,7 @@\n ''' generate a url that resolves to the local object '''\n base_path = 'https://%s' % DOMAIN\n if hasattr(self, 'user'):\n- base_path = self.user.remote_id\n+ base_path = '%s%s' % (base_path, self.user.local_path)\n model_name = type(self).__name__.lower()\n return '%s/%s/%d' % (base_path, model_name, self.id)\n", "issue": "Wrong baseurl in remote @-mention notification\nWhen someone on a remote server @-mentions me on Bookwyrm, the \"status\" link in notifications has the wrong baseUrl.\r\n\r\nExample:\r\n\r\n\r\n\r\nThe url that the username `Flancian` links to is `https://bookwyrm.social/user/[email protected]`. The url that `status` links to is `https://social.coop/users/flancian/status/14794`, which takes me to a 404 on the `social.coop` server. The correct url that status should be linking to is `https://bookwyrm.social/user/[email protected]/status/14794`.\r\n\r\nI've confirmed this happens for @-mentions from any remote server.\n", "before_files": [{"content": "''' base model with default fields '''\nfrom base64 import b64encode\nfrom functools import reduce\nimport operator\nfrom uuid import uuid4\n\nfrom Crypto.PublicKey import RSA\nfrom Crypto.Signature import pkcs1_15\nfrom Crypto.Hash import SHA256\nfrom django.core.paginator import Paginator\nfrom django.db import models\nfrom django.db.models import Q\nfrom django.dispatch import receiver\n\nfrom bookwyrm import activitypub\nfrom bookwyrm.settings import DOMAIN, PAGE_LENGTH\nfrom .fields import ImageField, ManyToManyField, RemoteIdField\n\n\nclass BookWyrmModel(models.Model):\n ''' shared fields '''\n created_date = models.DateTimeField(auto_now_add=True)\n updated_date = models.DateTimeField(auto_now=True)\n remote_id = RemoteIdField(null=True, activitypub_field='id')\n\n def get_remote_id(self):\n ''' generate a url that resolves to the local object '''\n base_path = 'https://%s' % DOMAIN\n if hasattr(self, 'user'):\n base_path = self.user.remote_id\n model_name = type(self).__name__.lower()\n return '%s/%s/%d' % (base_path, model_name, self.id)\n\n class Meta:\n ''' this is just here to provide default fields for other models '''\n abstract = True\n\n @property\n def local_path(self):\n ''' how to link to this object in the local app '''\n return self.get_remote_id().replace('https://%s' % DOMAIN, '')\n\n\n@receiver(models.signals.post_save)\n#pylint: disable=unused-argument\ndef execute_after_save(sender, instance, created, *args, **kwargs):\n ''' set the remote_id after save (when the id is available) '''\n if not created or not hasattr(instance, 'get_remote_id'):\n return\n if not instance.remote_id:\n instance.remote_id = instance.get_remote_id()\n instance.save()\n\n\ndef unfurl_related_field(related_field, sort_field=None):\n ''' load reverse lookups (like public key owner or Status attachment '''\n if hasattr(related_field, 'all'):\n return [unfurl_related_field(i) for i in related_field.order_by(\n sort_field).all()]\n if related_field.reverse_unfurl:\n return related_field.field_to_activity()\n return related_field.remote_id\n\n\nclass ActivitypubMixin:\n ''' add this mixin for models that are AP serializable '''\n activity_serializer = lambda: {}\n reverse_unfurl = False\n\n def __init__(self, *args, **kwargs):\n ''' collect some info on model fields '''\n self.image_fields = []\n self.many_to_many_fields = []\n 
self.simple_fields = [] # \"simple\"\n for field in self._meta.get_fields():\n if not hasattr(field, 'field_to_activity'):\n continue\n\n if isinstance(field, ImageField):\n self.image_fields.append(field)\n elif isinstance(field, ManyToManyField):\n self.many_to_many_fields.append(field)\n else:\n self.simple_fields.append(field)\n\n self.activity_fields = self.image_fields + \\\n self.many_to_many_fields + self.simple_fields\n\n self.deserialize_reverse_fields = self.deserialize_reverse_fields \\\n if hasattr(self, 'deserialize_reverse_fields') else []\n self.serialize_reverse_fields = self.serialize_reverse_fields \\\n if hasattr(self, 'serialize_reverse_fields') else []\n\n super().__init__(*args, **kwargs)\n\n\n @classmethod\n def find_existing_by_remote_id(cls, remote_id):\n ''' look up a remote id in the db '''\n return cls.find_existing({'id': remote_id})\n\n @classmethod\n def find_existing(cls, data):\n ''' compare data to fields that can be used for deduplation.\n This always includes remote_id, but can also be unique identifiers\n like an isbn for an edition '''\n filters = []\n for field in cls._meta.get_fields():\n if not hasattr(field, 'deduplication_field') or \\\n not field.deduplication_field:\n continue\n\n value = data.get(field.get_activitypub_field())\n if not value:\n continue\n filters.append({field.name: value})\n\n if hasattr(cls, 'origin_id') and 'id' in data:\n # kinda janky, but this handles special case for books\n filters.append({'origin_id': data['id']})\n\n if not filters:\n # if there are no deduplication fields, it will match the first\n # item no matter what. this shouldn't happen but just in case.\n return None\n\n objects = cls.objects\n if hasattr(objects, 'select_subclasses'):\n objects = objects.select_subclasses()\n\n # an OR operation on all the match fields\n match = objects.filter(\n reduce(\n operator.or_, (Q(**f) for f in filters)\n )\n )\n # there OUGHT to be only one match\n return match.first()\n\n\n def to_activity(self):\n ''' convert from a model to an activity '''\n activity = generate_activity(self)\n return self.activity_serializer(**activity).serialize()\n\n\n def to_create_activity(self, user, **kwargs):\n ''' returns the object wrapped in a Create activity '''\n activity_object = self.to_activity(**kwargs)\n\n signature = None\n create_id = self.remote_id + '/activity'\n if 'content' in activity_object:\n signer = pkcs1_15.new(RSA.import_key(user.key_pair.private_key))\n content = activity_object['content']\n signed_message = signer.sign(SHA256.new(content.encode('utf8')))\n\n signature = activitypub.Signature(\n creator='%s#main-key' % user.remote_id,\n created=activity_object['published'],\n signatureValue=b64encode(signed_message).decode('utf8')\n )\n\n return activitypub.Create(\n id=create_id,\n actor=user.remote_id,\n to=activity_object['to'],\n cc=activity_object['cc'],\n object=activity_object,\n signature=signature,\n ).serialize()\n\n\n def to_delete_activity(self, user):\n ''' notice of deletion '''\n return activitypub.Delete(\n id=self.remote_id + '/activity',\n actor=user.remote_id,\n to=['%s/followers' % user.remote_id],\n cc=['https://www.w3.org/ns/activitystreams#Public'],\n object=self.to_activity(),\n ).serialize()\n\n\n def to_update_activity(self, user):\n ''' wrapper for Updates to an activity '''\n activity_id = '%s#update/%s' % (self.remote_id, uuid4())\n return activitypub.Update(\n id=activity_id,\n actor=user.remote_id,\n to=['https://www.w3.org/ns/activitystreams#Public'],\n object=self.to_activity()\n 
).serialize()\n\n\n def to_undo_activity(self, user):\n ''' undo an action '''\n return activitypub.Undo(\n id='%s#undo' % self.remote_id,\n actor=user.remote_id,\n object=self.to_activity()\n ).serialize()\n\n\nclass OrderedCollectionPageMixin(ActivitypubMixin):\n ''' just the paginator utilities, so you don't HAVE to\n override ActivitypubMixin's to_activity (ie, for outbox '''\n @property\n def collection_remote_id(self):\n ''' this can be overriden if there's a special remote id, ie outbox '''\n return self.remote_id\n\n\n def to_ordered_collection(self, queryset, \\\n remote_id=None, page=False, collection_only=False, **kwargs):\n ''' an ordered collection of whatevers '''\n if not queryset.ordered:\n raise RuntimeError('queryset must be ordered')\n\n remote_id = remote_id or self.remote_id\n if page:\n return to_ordered_collection_page(\n queryset, remote_id, **kwargs)\n\n if collection_only or not hasattr(self, 'activity_serializer'):\n serializer = activitypub.OrderedCollection\n activity = {}\n else:\n serializer = self.activity_serializer\n # a dict from the model fields\n activity = generate_activity(self)\n\n if remote_id:\n activity['id'] = remote_id\n\n paginated = Paginator(queryset, PAGE_LENGTH)\n # add computed fields specific to orderd collections\n activity['totalItems'] = paginated.count\n activity['first'] = '%s?page=1' % remote_id\n activity['last'] = '%s?page=%d' % (remote_id, paginated.num_pages)\n\n return serializer(**activity).serialize()\n\n\n# pylint: disable=unused-argument\ndef to_ordered_collection_page(\n queryset, remote_id, id_only=False, page=1, **kwargs):\n ''' serialize and pagiante a queryset '''\n paginated = Paginator(queryset, PAGE_LENGTH)\n\n activity_page = paginated.page(page)\n if id_only:\n items = [s.remote_id for s in activity_page.object_list]\n else:\n items = [s.to_activity() for s in activity_page.object_list]\n\n prev_page = next_page = None\n if activity_page.has_next():\n next_page = '%s?page=%d' % (remote_id, activity_page.next_page_number())\n if activity_page.has_previous():\n prev_page = '%s?page=%d' % \\\n (remote_id, activity_page.previous_page_number())\n return activitypub.OrderedCollectionPage(\n id='%s?page=%s' % (remote_id, page),\n partOf=remote_id,\n orderedItems=items,\n next=next_page,\n prev=prev_page\n ).serialize()\n\n\nclass OrderedCollectionMixin(OrderedCollectionPageMixin):\n ''' extends activitypub models to work as ordered collections '''\n @property\n def collection_queryset(self):\n ''' usually an ordered collection model aggregates a different model '''\n raise NotImplementedError('Model must define collection_queryset')\n\n activity_serializer = activitypub.OrderedCollection\n\n def to_activity(self, **kwargs):\n ''' an ordered collection of the specified model queryset '''\n return self.to_ordered_collection(self.collection_queryset, **kwargs)\n\n\ndef generate_activity(obj):\n ''' go through the fields on an object '''\n activity = {}\n for field in obj.activity_fields:\n field.set_activity_from_field(activity, obj)\n\n if hasattr(obj, 'serialize_reverse_fields'):\n # for example, editions of a work\n for model_field_name, activity_field_name, sort_field in \\\n obj.serialize_reverse_fields:\n related_field = getattr(obj, model_field_name)\n activity[activity_field_name] = \\\n unfurl_related_field(related_field, sort_field)\n\n if not activity.get('id'):\n activity['id'] = obj.get_remote_id()\n return activity\n", "path": "bookwyrm/models/base_model.py"}]} | 3,843 | 144 |
gh_patches_debug_15039 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-469 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`TypeError` when trying to issue `BatchHttpRequest` when service was built using application default credentials
This issue is very similar to #211. My team has an app-engine application that uses batch requests in the Gmail API. After an upgrade of this client library, we started seeing failures:
```
...
File "/base/data/home/apps/s~app-id/modname:version/path/to/my/code.py", line 241, in _user_method
return batch_request.execute(http=self._http)
File "/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py", line 1417, in execute
self._execute(http, self._order, self._requests)
File "/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py", line 1333, in _execute
body = self._serialize_request(request)
File "/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py", line 1204, in _serialize_request
request.http.request.credentials.apply(headers)
File "/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/client.py", line 558, in apply
headers['Authorization'] = 'Bearer ' + self.access_token
TypeError: cannot concatenate 'str' and 'NoneType' objects
```
Our usage of this client library looks like the following:
```python
from googleapiclient.discovery import build
gmail_service = build('gmail', 'v1') # Uses application credentials
...
authorized_http = user_credentials.authorize(httplib2.Http()) # Uses end user credentials
batch_request = gmail_service.new_batch_http_request()
batch_request.add(
gmail_service.users().threads().get(id=thread_id, userId='me'),
callback=callback,
request_id=thread_id)
batch_request.execute(http=authorized_http)
```
The `gmail_service` is cached at the application level to avoid needing to do an API call for each request (which is why it ends up with the application credentials). What I believe is happening is that
```py
gmail_service.users().threads().get(id=thread_id, userId='me')
```
ends up creating an `HttpRequest` that uses the same `http` that was used to construct `gmail_service` (i.e. the application credentials) and I believe that the application credentials have no `access_token`. This results in the `TypeError` seen above. The fix in [#232](https://github.com/google/google-api-python-client/pull/232/files) won't help (the `http` that is getting passed to `batch_request.execute` is valid).
It seems to me that the application of credentials in [_serialize_request](https://github.com/google/google-api-python-client/blob/master/googleapiclient/http.py#L1202) should be conditional on the credentials actually having an access token:
```python
if request.http is not None and hasattr(request.http.request,
'credentials'):
if request.http.request.credentials.access_token:
request.http.request.credentials.apply(headers)
```
Otherwise, if the sub-request isn't authenticated then the authentication from the outer request should be used (at least, if the [gmail docs](https://developers.google.com/gmail/api/guides/batch) are any indicator):
> The HTTP headers for the outer batch request, except for the Content- headers such as Content-Type, apply to every request in the batch. If you specify a given HTTP header in both the outer request and an individual call, then the individual call header's value overrides the outer batch request header's value. The headers for an individual call apply only to that call.
> For example, if you provide an Authorization header for a specific call, then that header applies only to that call. If you provide an Authorization header for the outer request, then that header applies to all of the individual calls unless they override it with Authorization headers of their own.
--------
In the event that this behavior is actually working as intended, or if my proposed fix isn't satisfactory and a real fix would be too difficult, a workaround for this issue is possible by updating the requests that you add to the batch with an appropriately authorized http instance, i.e. changing the above to:
```python
batch_request = gmail_service.new_batch_http_request()
request = gmail_service.users().threads().get(id=thread_id, userId='me')
request.http = authorized_http # explicitly set the authorization on the request.
batch_request.add(
request,
callback=callback,
request_id=thread_id)
batch_request.execute(http=authorized_http)
```
Seems to fix the issue.
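
As a sketch of the other direction, hardening the library itself rather than each caller, `googleapiclient/_auth.py` (shown below) could guard `apply_credentials` so that credentials without a usable access token are refreshed before being applied. This reuses that module's existing helpers (`HAS_GOOGLE_AUTH`, `refresh_credentials`) and is an illustration of the idea, not necessarily the exact upstream change:

```python
def apply_credentials(credentials, headers):
    # Refresh first so oauth2client credentials that have never obtained an
    # access token (e.g. fresh application-default credentials) do not end up
    # concatenating None into the Authorization header.
    if not is_valid(credentials):
        refresh_credentials(credentials)
    return credentials.apply(headers)


def is_valid(credentials):
    if HAS_GOOGLE_AUTH and isinstance(
            credentials, google.auth.credentials.Credentials):
        return credentials.valid
    # For oauth2client, treat "no token yet" the same as "token expired".
    return (credentials.access_token is not None and
            not credentials.access_token_expired)
```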
</issue>
<code>
[start of googleapiclient/_auth.py]
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for authentication using oauth2client or google-auth."""
16
17 import httplib2
18
19 try:
20 import google.auth
21 import google.auth.credentials
22 HAS_GOOGLE_AUTH = True
23 except ImportError: # pragma: NO COVER
24 HAS_GOOGLE_AUTH = False
25
26 try:
27 import google_auth_httplib2
28 except ImportError: # pragma: NO COVER
29 google_auth_httplib2 = None
30
31 try:
32 import oauth2client
33 import oauth2client.client
34 HAS_OAUTH2CLIENT = True
35 except ImportError: # pragma: NO COVER
36 HAS_OAUTH2CLIENT = False
37
38
39 def default_credentials():
40 """Returns Application Default Credentials."""
41 if HAS_GOOGLE_AUTH:
42 credentials, _ = google.auth.default()
43 return credentials
44 elif HAS_OAUTH2CLIENT:
45 return oauth2client.client.GoogleCredentials.get_application_default()
46 else:
47 raise EnvironmentError(
48 'No authentication library is available. Please install either '
49 'google-auth or oauth2client.')
50
51
52 def with_scopes(credentials, scopes):
53 """Scopes the credentials if necessary.
54
55 Args:
56 credentials (Union[
57 google.auth.credentials.Credentials,
58 oauth2client.client.Credentials]): The credentials to scope.
59 scopes (Sequence[str]): The list of scopes.
60
61 Returns:
62 Union[google.auth.credentials.Credentials,
63 oauth2client.client.Credentials]: The scoped credentials.
64 """
65 if HAS_GOOGLE_AUTH and isinstance(
66 credentials, google.auth.credentials.Credentials):
67 return google.auth.credentials.with_scopes_if_required(
68 credentials, scopes)
69 else:
70 try:
71 if credentials.create_scoped_required():
72 return credentials.create_scoped(scopes)
73 else:
74 return credentials
75 except AttributeError:
76 return credentials
77
78
79 def authorized_http(credentials):
80 """Returns an http client that is authorized with the given credentials.
81
82 Args:
83 credentials (Union[
84 google.auth.credentials.Credentials,
85 oauth2client.client.Credentials]): The credentials to use.
86
87 Returns:
88 Union[httplib2.Http, google_auth_httplib2.AuthorizedHttp]: An
89 authorized http client.
90 """
91 from googleapiclient.http import build_http
92
93 if HAS_GOOGLE_AUTH and isinstance(
94 credentials, google.auth.credentials.Credentials):
95 if google_auth_httplib2 is None:
96 raise ValueError(
97 'Credentials from google.auth specified, but '
98 'google-api-python-client is unable to use these credentials '
99 'unless google-auth-httplib2 is installed. Please install '
100 'google-auth-httplib2.')
101 return google_auth_httplib2.AuthorizedHttp(credentials,
102 http=build_http())
103 else:
104 return credentials.authorize(build_http())
105
106
107 def refresh_credentials(credentials):
108 # Refresh must use a new http instance, as the one associated with the
109 # credentials could be a AuthorizedHttp or an oauth2client-decorated
110 # Http instance which would cause a weird recursive loop of refreshing
111 # and likely tear a hole in spacetime.
112 refresh_http = httplib2.Http()
113 if HAS_GOOGLE_AUTH and isinstance(
114 credentials, google.auth.credentials.Credentials):
115 request = google_auth_httplib2.Request(refresh_http)
116 return credentials.refresh(request)
117 else:
118 return credentials.refresh(refresh_http)
119
120
121 def apply_credentials(credentials, headers):
122 # oauth2client and google-auth have the same interface for this.
123 return credentials.apply(headers)
124
125
126 def is_valid(credentials):
127 if HAS_GOOGLE_AUTH and isinstance(
128 credentials, google.auth.credentials.Credentials):
129 return credentials.valid
130 else:
131 return not credentials.access_token_expired
132
133
134 def get_credentials_from_http(http):
135 if http is None:
136 return None
137 elif hasattr(http.request, 'credentials'):
138 return http.request.credentials
139 elif (hasattr(http, 'credentials')
140 and not isinstance(http.credentials, httplib2.Credentials)):
141 return http.credentials
142 else:
143 return None
144
[end of googleapiclient/_auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/googleapiclient/_auth.py b/googleapiclient/_auth.py
--- a/googleapiclient/_auth.py
+++ b/googleapiclient/_auth.py
@@ -120,6 +120,8 @@
def apply_credentials(credentials, headers):
# oauth2client and google-auth have the same interface for this.
+ if not is_valid(credentials):
+ refresh_credentials(credentials)
return credentials.apply(headers)
@@ -128,7 +130,9 @@
credentials, google.auth.credentials.Credentials):
return credentials.valid
else:
- return not credentials.access_token_expired
+ return (
+ credentials.access_token is not None and
+ not credentials.access_token_expired)
def get_credentials_from_http(http):
| {"golden_diff": "diff --git a/googleapiclient/_auth.py b/googleapiclient/_auth.py\n--- a/googleapiclient/_auth.py\n+++ b/googleapiclient/_auth.py\n@@ -120,6 +120,8 @@\n \n def apply_credentials(credentials, headers):\n # oauth2client and google-auth have the same interface for this.\n+ if not is_valid(credentials):\n+ refresh_credentials(credentials)\n return credentials.apply(headers)\n \n \n@@ -128,7 +130,9 @@\n credentials, google.auth.credentials.Credentials):\n return credentials.valid\n else:\n- return not credentials.access_token_expired\n+ return (\n+ credentials.access_token is not None and\n+ not credentials.access_token_expired)\n \n \n def get_credentials_from_http(http):\n", "issue": "`TypeError` when trying to issue `BatchHttpRequest` when service was built using application default credentials\nThis issue is very similar to #211. My team has an app-engine application that uses batch requests in the Gmail API. After an upgrade of this client library, we started seeing failures:\r\n\r\n```\r\n...\r\n File \"/base/data/home/apps/s~app-id/modname:version/path/to/my/code.py\", line 241, in _user_method\r\n return batch_request.execute(http=self._http)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/_helpers.py\", line 133, in positional_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py\", line 1417, in execute\r\n self._execute(http, self._order, self._requests)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py\", line 1333, in _execute\r\n body = self._serialize_request(request)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/googleapiclient/http.py\", line 1204, in _serialize_request\r\n request.http.request.credentials.apply(headers)\r\n File \"/base/data/home/apps/s~app-id/modname:version/third_party/oauth2client/client.py\", line 558, in apply\r\n headers['Authorization'] = 'Bearer ' + self.access_token\r\nTypeError: cannot concatenate 'str' and 'NoneType' objects\r\n```\r\n\r\nOur usage of this client library looks like the following:\r\n\r\n```python\r\nfrom googleapiclient.discovery import build\r\ngmail_service = build('gmail', 'v1') # Uses application credentials\r\n\r\n...\r\n\r\nauthorized_http = user_credentials.authorize(httplib2.Http()) # Uses end user credentials\r\n\r\nbatch_request = gmail_service.new_batch_http_request()\r\nbatch_request.add(\r\n gmail_service.users().threads().get(id=thread_id, userId='me'),\r\n callback=callback,\r\n request_id=thread_id)\r\n\r\nbatch_request.execute(http=authorized_http)\r\n```\r\n\r\nThe `gmail_service` is cached at the application level to avoid needing to do an API call for each request (which is why it ends up with the application credentials). What I believe is happening is that\r\n\r\n```py\r\ngmail_service.users().threads().get(id=thread_id, userId='me')\r\n```\r\n\r\nends up creating an `HttpRequest` that uses the same `http` that was used to construct `gmail_service` (i.e. the application credentials) and I believe that the application credentials have no `access_token`. This results in the `TypeError` seen above. 
The fix in [#232](https://github.com/google/google-api-python-client/pull/232/files) won't help (the `http` that is getting passed to `batch_request.execute` is valid).\r\n\r\nIt seems to me that the application of credentials in [_serialize_request](https://github.com/google/google-api-python-client/blob/master/googleapiclient/http.py#L1202) should be conditional on the credentials actually having an access token:\r\n\r\n```python\r\nif request.http is not None and hasattr(request.http.request,\r\n 'credentials'):\r\n if request.http.request.credentials.access_token:\r\n request.http.request.credentials.apply(headers)\r\n```\r\n\r\nOtherwise, if the sub-request isn't authenticated then the authentication from the outer request should be used (at least, if the [gmail docs](https://developers.google.com/gmail/api/guides/batch) are any indicator):\r\n\r\n> The HTTP headers for the outer batch request, except for the Content- headers such as Content-Type, apply to every request in the batch. If you specify a given HTTP header in both the outer request and an individual call, then the individual call header's value overrides the outer batch request header's value. The headers for an individual call apply only to that call.\r\n\r\n> For example, if you provide an Authorization header for a specific call, then that header applies only to that call. If you provide an Authorization header for the outer request, then that header applies to all of the individual calls unless they override it with Authorization headers of their own.\r\n\r\n--------\r\n\r\nIn the event that this behavior is actually working as intended or if my proposed fix isn't satisfactory and a real fix would be too difficult, a work-around to this issue is possible by updating the requests that you add to the batch with an appropriately authorized http instance. i.e changing the above to:\r\n\r\n```python\r\n\r\nbatch_request = gmail_service.new_batch_http_request()\r\n\r\nrequest = gmail_service.users().threads().get(id=thread_id, userId='me')\r\nrequest.http = authorized_http # explicitly set the authorization on the request.\r\n\r\nbatch_request.add(\r\n request,\r\n callback=callback,\r\n request_id=thread_id)\r\n\r\nbatch_request.execute(http=authorized_http)\r\n```\r\n\r\nSeems to fix the issue.\n", "before_files": [{"content": "# Copyright 2016 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for authentication using oauth2client or google-auth.\"\"\"\n\nimport httplib2\n\ntry:\n import google.auth\n import google.auth.credentials\n HAS_GOOGLE_AUTH = True\nexcept ImportError: # pragma: NO COVER\n HAS_GOOGLE_AUTH = False\n\ntry:\n import google_auth_httplib2\nexcept ImportError: # pragma: NO COVER\n google_auth_httplib2 = None\n\ntry:\n import oauth2client\n import oauth2client.client\n HAS_OAUTH2CLIENT = True\nexcept ImportError: # pragma: NO COVER\n HAS_OAUTH2CLIENT = False\n\n\ndef default_credentials():\n \"\"\"Returns Application Default Credentials.\"\"\"\n if HAS_GOOGLE_AUTH:\n credentials, _ = google.auth.default()\n return credentials\n elif HAS_OAUTH2CLIENT:\n return oauth2client.client.GoogleCredentials.get_application_default()\n else:\n raise EnvironmentError(\n 'No authentication library is available. Please install either '\n 'google-auth or oauth2client.')\n\n\ndef with_scopes(credentials, scopes):\n \"\"\"Scopes the credentials if necessary.\n\n Args:\n credentials (Union[\n google.auth.credentials.Credentials,\n oauth2client.client.Credentials]): The credentials to scope.\n scopes (Sequence[str]): The list of scopes.\n\n Returns:\n Union[google.auth.credentials.Credentials,\n oauth2client.client.Credentials]: The scoped credentials.\n \"\"\"\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n return google.auth.credentials.with_scopes_if_required(\n credentials, scopes)\n else:\n try:\n if credentials.create_scoped_required():\n return credentials.create_scoped(scopes)\n else:\n return credentials\n except AttributeError:\n return credentials\n\n\ndef authorized_http(credentials):\n \"\"\"Returns an http client that is authorized with the given credentials.\n\n Args:\n credentials (Union[\n google.auth.credentials.Credentials,\n oauth2client.client.Credentials]): The credentials to use.\n\n Returns:\n Union[httplib2.Http, google_auth_httplib2.AuthorizedHttp]: An\n authorized http client.\n \"\"\"\n from googleapiclient.http import build_http\n\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n if google_auth_httplib2 is None:\n raise ValueError(\n 'Credentials from google.auth specified, but '\n 'google-api-python-client is unable to use these credentials '\n 'unless google-auth-httplib2 is installed. 
Please install '\n 'google-auth-httplib2.')\n return google_auth_httplib2.AuthorizedHttp(credentials,\n http=build_http())\n else:\n return credentials.authorize(build_http())\n\n\ndef refresh_credentials(credentials):\n # Refresh must use a new http instance, as the one associated with the\n # credentials could be a AuthorizedHttp or an oauth2client-decorated\n # Http instance which would cause a weird recursive loop of refreshing\n # and likely tear a hole in spacetime.\n refresh_http = httplib2.Http()\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n request = google_auth_httplib2.Request(refresh_http)\n return credentials.refresh(request)\n else:\n return credentials.refresh(refresh_http)\n\n\ndef apply_credentials(credentials, headers):\n # oauth2client and google-auth have the same interface for this.\n return credentials.apply(headers)\n\n\ndef is_valid(credentials):\n if HAS_GOOGLE_AUTH and isinstance(\n credentials, google.auth.credentials.Credentials):\n return credentials.valid\n else:\n return not credentials.access_token_expired\n\n\ndef get_credentials_from_http(http):\n if http is None:\n return None\n elif hasattr(http.request, 'credentials'):\n return http.request.credentials\n elif (hasattr(http, 'credentials')\n and not isinstance(http.credentials, httplib2.Credentials)):\n return http.credentials\n else:\n return None\n", "path": "googleapiclient/_auth.py"}]} | 2,879 | 168 |
gh_patches_debug_18802 | rasdani/github-patches | git_diff | cobbler__cobbler-3620 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scm_track: Push script not working
### Describe the bug
After the refactoring of
### Steps to reproduce
1. Enable `scm_track`
2. Perform any change action in Cobbler
3. See error in logs
Note: The error with pathspec is already fixed on `main` through #3021.
### Expected behavior
Cobbler can push the commits to the specified remote.
### Cobbler version
<!--- Paste output from `cobbler version` -->
````paste below
cobbler:~ # cobbler version
Cobbler 3.3.3
source: ?, ?
build time: Thu Dec 19 12:00:00 2019
````
### Operating system
SLES 15 SP5
### Cobbler log
<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->
````paste below
[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python triggers from /var/lib/cobbler/triggers/change/*
[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python trigger cobbler.modules.scm_track
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'collections']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr:
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'templates']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr:
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'snippets']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr:
[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'commit', '-m', 'API', 'update', '--author', 'Cobbler <[email protected]>']
[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout:
[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: error: pathspec 'update' did not match any file(s) known to git
````
### Screenshots
None
### Additional information
Snippet from the settings:
```yaml
scm_track_enabled: true
scm_track_mode: "git"
scm_track_author: "Cobbler <[email protected]>"
# scm_push_script: "git push"
scm_push_script: ""
```
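For reference, the failing `git commit` call in the log above is a plain argv list: every list element becomes its own argument, so a multi-word commit message has to stay a single element, and a push script stored as one string (`"git push"`) has to be split before it can be run the same way. A minimal standalone sketch with plain `subprocess` (not Cobbler's `utils.subprocess_call`):
```python
# Standalone sketch: how argv lists behave (not Cobbler code).
import subprocess
author = "Cobbler <[email protected]>"
# Broken: "API" and "update" are separate argv entries, so git reads
# "update" as a pathspec -> error: pathspec 'update' did not match any file(s)
# subprocess.run(["git", "commit", "-m", "API", "update", "--author", author])
# Works: the whole message is a single argv entry.
subprocess.run(["git", "commit", "-m", "API update", "--author", author])
# A push script configured as one string must be split into argv entries
# before it can be executed without a shell.
push_script = "git push"
subprocess.run(push_script.split(" "))
```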
</issue>
<code>
[start of cobbler/modules/scm_track.py]
1 """
2 Cobbler Trigger Module that puts the content of the Cobbler data directory under version control. Depending on
3 ``scm_track_mode`` in the settings, this can either be git or Mercurial.
4 """
5
6 # SPDX-License-Identifier: GPL-2.0-or-later
7 # SPDX-FileCopyrightText: Copyright 2009, Red Hat Inc.
8 # SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
9
10
11 import os
12 from typing import TYPE_CHECKING, Any
13
14 from cobbler import utils
15 from cobbler.cexceptions import CX
16
17 if TYPE_CHECKING:
18 from cobbler.api import CobblerAPI
19
20
21 def register() -> str:
22 """
23 This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method
24 indicates the trigger type
25 :return: Always: ``/var/lib/cobbler/triggers/change/*``
26 """
27
28 return "/var/lib/cobbler/triggers/change/*"
29
30
31 def run(api: "CobblerAPI", args: Any):
32 """
33 Runs the trigger, meaning in this case track any changed which happen to a config or data file.
34
35 :param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.
36 :param args: The parameter is currently unused for this trigger.
37 :return: 0 on success, otherwise an exception is risen.
38 """
39 settings = api.settings()
40
41 if not settings.scm_track_enabled:
42 # feature disabled
43 return 0
44
45 mode = str(settings.scm_track_mode).lower()
46 author = str(settings.scm_track_author)
47 push_script = str(settings.scm_push_script)
48
49 if mode == "git":
50 old_dir = os.getcwd()
51 os.chdir("/var/lib/cobbler")
52 if os.getcwd() != "/var/lib/cobbler":
53 raise CX("danger will robinson")
54
55 if not os.path.exists("/var/lib/cobbler/.git"):
56 utils.subprocess_call(["git", "init"], shell=False)
57
58 # FIXME: If we know the remote user of an XMLRPC call use them as the author
59 utils.subprocess_call(["git", "add", "--all", "collections"], shell=False)
60 utils.subprocess_call(["git", "add", "--all", "templates"], shell=False)
61 utils.subprocess_call(["git", "add", "--all", "snippets"], shell=False)
62 utils.subprocess_call(
63 ["git", "commit", "-m", "API update", "--author", author], shell=False
64 )
65
66 if push_script:
67 utils.subprocess_call([push_script], shell=False)
68
69 os.chdir(old_dir)
70 return 0
71
72 if mode == "hg":
73 # use mercurial
74 old_dir = os.getcwd()
75 os.chdir("/var/lib/cobbler")
76 if os.getcwd() != "/var/lib/cobbler":
77 raise CX("danger will robinson")
78
79 if not os.path.exists("/var/lib/cobbler/.hg"):
80 utils.subprocess_call(["hg", "init"], shell=False)
81
82 # FIXME: If we know the remote user of an XMLRPC call use them as the user
83 utils.subprocess_call(["hg", "add collections"], shell=False)
84 utils.subprocess_call(["hg", "add templates"], shell=False)
85 utils.subprocess_call(["hg", "add snippets"], shell=False)
86 utils.subprocess_call(
87 ["hg", "commit", "-m", "API", "update", "--user", author], shell=False
88 )
89
90 if push_script:
91 utils.subprocess_call([push_script], shell=False)
92
93 os.chdir(old_dir)
94 return 0
95
96 raise CX(f"currently unsupported SCM type: {mode}")
97
[end of cobbler/modules/scm_track.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cobbler/modules/scm_track.py b/cobbler/modules/scm_track.py
--- a/cobbler/modules/scm_track.py
+++ b/cobbler/modules/scm_track.py
@@ -64,7 +64,7 @@
)
if push_script:
- utils.subprocess_call([push_script], shell=False)
+ utils.subprocess_call(push_script.split(" "), shell=False)
os.chdir(old_dir)
return 0
@@ -84,11 +84,11 @@
utils.subprocess_call(["hg", "add templates"], shell=False)
utils.subprocess_call(["hg", "add snippets"], shell=False)
utils.subprocess_call(
- ["hg", "commit", "-m", "API", "update", "--user", author], shell=False
+ ["hg", "commit", "-m", "API update", "--user", author], shell=False
)
if push_script:
- utils.subprocess_call([push_script], shell=False)
+ utils.subprocess_call(push_script.split(" "), shell=False)
os.chdir(old_dir)
return 0
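One caveat with the `push_script.split(" ")` change above: a plain split on spaces does not honour shell quoting, so an argument that itself contains a space would be broken apart. A small sketch of the standard-library alternative (illustrative only, not part of the patch):
```python
# Illustrative only: shlex.split honours quotes, str.split(" ") does not.
import shlex
push_script = 'git push origin "my branch"'
print(push_script.split(" "))    # ['git', 'push', 'origin', '"my', 'branch"']
print(shlex.split(push_script))  # ['git', 'push', 'origin', 'my branch']
```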
| {"golden_diff": "diff --git a/cobbler/modules/scm_track.py b/cobbler/modules/scm_track.py\n--- a/cobbler/modules/scm_track.py\n+++ b/cobbler/modules/scm_track.py\n@@ -64,7 +64,7 @@\n )\n \n if push_script:\n- utils.subprocess_call([push_script], shell=False)\n+ utils.subprocess_call(push_script.split(\" \"), shell=False)\n \n os.chdir(old_dir)\n return 0\n@@ -84,11 +84,11 @@\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call(\n- [\"hg\", \"commit\", \"-m\", \"API\", \"update\", \"--user\", author], shell=False\n+ [\"hg\", \"commit\", \"-m\", \"API update\", \"--user\", author], shell=False\n )\n \n if push_script:\n- utils.subprocess_call([push_script], shell=False)\n+ utils.subprocess_call(push_script.split(\" \"), shell=False)\n \n os.chdir(old_dir)\n return 0\n", "issue": "scm_track: Push script not working\n### Describe the bug\r\n\r\nAfter the refactoring of \r\n\r\n### Steps to reproduce\r\n\r\n1. Enable `scm_track` \r\n2. Perform any change action in Cobbler\r\n3. See error in logs\r\n\r\nNote: The error with pathspec is already fixed on `main` through #3021.\r\n\r\n### Expected behavior\r\n\r\nCobbler can push the commits to the specified remote.\r\n\r\n### Cobbler version\r\n\r\n<!--- Paste output from `cobbler version` -->\r\n````paste below\r\ncobbler:~ # cobbler version\r\nCobbler 3.3.3\r\n source: ?, ?\r\n build time: Thu Dec 19 12:00:00 2019\r\n````\r\n\r\n### Operating system\r\n\r\nSLES 15 SP5\r\n\r\n### Cobbler log\r\n\r\n<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->\r\n````paste below\r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python triggers from /var/lib/cobbler/triggers/change/*\r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | running python trigger cobbler.modules.scm_track\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'collections']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: \r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'templates']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: \r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'add', '--all', 'snippets']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: \r\n[Thread-20] 2024-02-12T16:03:34 - INFO | running: ['git', 'commit', '-m', 'API', 'update', '--author', 'Cobbler <[email protected]>']\r\n[Thread-20] 2024-02-12T16:03:34 - INFO | received on stdout: \r\n[Thread-20] 2024-02-12T16:03:34 - DEBUG | received on stderr: error: pathspec 'update' did not match any file(s) known to git\r\n````\r\n\r\n### Screenshots\r\n\r\nNone\r\n\r\n### Additional information\r\n\r\nSnippet for from the settings:\r\n\r\n```yaml\r\nscm_track_enabled: true\r\nscm_track_mode: \"git\"\r\nscm_track_author: \"Cobbler <[email protected]>\"\r\n# scm_push_script: \"git push\"\r\nscm_push_script: \"\"\r\n```\n", "before_files": [{"content": "\"\"\"\nCobbler Trigger Module that puts the content of the Cobbler data directory under version control. 
Depending on\n``scm_track_mode`` in the settings, this can either be git or Mercurial.\n\"\"\"\n\n# SPDX-License-Identifier: GPL-2.0-or-later\n# SPDX-FileCopyrightText: Copyright 2009, Red Hat Inc.\n# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n\n\nimport os\nfrom typing import TYPE_CHECKING, Any\n\nfrom cobbler import utils\nfrom cobbler.cexceptions import CX\n\nif TYPE_CHECKING:\n from cobbler.api import CobblerAPI\n\n\ndef register() -> str:\n \"\"\"\n This pure python trigger acts as if it were a legacy shell-trigger, but is much faster. The return of this method\n indicates the trigger type\n :return: Always: ``/var/lib/cobbler/triggers/change/*``\n \"\"\"\n\n return \"/var/lib/cobbler/triggers/change/*\"\n\n\ndef run(api: \"CobblerAPI\", args: Any):\n \"\"\"\n Runs the trigger, meaning in this case track any changed which happen to a config or data file.\n\n :param api: The api instance of the Cobbler server. Used to look up if scm_track_enabled is true.\n :param args: The parameter is currently unused for this trigger.\n :return: 0 on success, otherwise an exception is risen.\n \"\"\"\n settings = api.settings()\n\n if not settings.scm_track_enabled:\n # feature disabled\n return 0\n\n mode = str(settings.scm_track_mode).lower()\n author = str(settings.scm_track_author)\n push_script = str(settings.scm_push_script)\n\n if mode == \"git\":\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.git\"):\n utils.subprocess_call([\"git\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the author\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"collections\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"templates\"], shell=False)\n utils.subprocess_call([\"git\", \"add\", \"--all\", \"snippets\"], shell=False)\n utils.subprocess_call(\n [\"git\", \"commit\", \"-m\", \"API update\", \"--author\", author], shell=False\n )\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n if mode == \"hg\":\n # use mercurial\n old_dir = os.getcwd()\n os.chdir(\"/var/lib/cobbler\")\n if os.getcwd() != \"/var/lib/cobbler\":\n raise CX(\"danger will robinson\")\n\n if not os.path.exists(\"/var/lib/cobbler/.hg\"):\n utils.subprocess_call([\"hg\", \"init\"], shell=False)\n\n # FIXME: If we know the remote user of an XMLRPC call use them as the user\n utils.subprocess_call([\"hg\", \"add collections\"], shell=False)\n utils.subprocess_call([\"hg\", \"add templates\"], shell=False)\n utils.subprocess_call([\"hg\", \"add snippets\"], shell=False)\n utils.subprocess_call(\n [\"hg\", \"commit\", \"-m\", \"API\", \"update\", \"--user\", author], shell=False\n )\n\n if push_script:\n utils.subprocess_call([push_script], shell=False)\n\n os.chdir(old_dir)\n return 0\n\n raise CX(f\"currently unsupported SCM type: {mode}\")\n", "path": "cobbler/modules/scm_track.py"}]} | 2,410 | 248 |
gh_patches_debug_4727 | rasdani/github-patches | git_diff | kserve__kserve-658 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Help wanted] Add e2e test for canary rollout
/kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
</issue>
<code>
[start of python/kfserving/kfserving/constants/constants.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 # KFServing K8S constants
18 KFSERVING_GROUP = 'serving.kubeflow.org'
19 KFSERVING_KIND = 'InferenceService'
20 KFSERVING_PLURAL = 'inferenceservices'
21 KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
22
23 KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
24
25 # INFERENCESERVICE credentials common constants
26 INFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'
27 INFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'
28 DEFAULT_SECRET_NAME = "kfserving-secret-"
29 DEFAULT_SA_NAME = "kfserving-service-credentials"
30
31 # S3 credentials constants
32 S3_ACCESS_KEY_ID_DEFAULT_NAME = "awsAccessKeyID"
33 S3_SECRET_ACCESS_KEY_DEFAULT_NAME = "awsSecretAccessKey"
34 S3_DEFAULT_CREDS_FILE = '~/.aws/credentials'
35
36 # GCS credentials constants
37 GCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'
38 GCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'
39
40 # Azure credentials constants
41 AZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'
42
[end of python/kfserving/kfserving/constants/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py
--- a/python/kfserving/kfserving/constants/constants.py
+++ b/python/kfserving/kfserving/constants/constants.py
@@ -19,6 +19,7 @@
KFSERVING_KIND = 'InferenceService'
KFSERVING_PLURAL = 'inferenceservices'
KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')
+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION
KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()
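With the defaults in this module, the constant added above composes the group/version string that Kubernetes manifests use in their `apiVersion` field; a tiny sketch (illustrative only):
```python
# Illustrative only: what the new constant evaluates to with default values.
KFSERVING_GROUP = 'serving.kubeflow.org'
KFSERVING_VERSION = 'v1alpha2'
KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION
assert KFSERVING_API_VERSION == 'serving.kubeflow.org/v1alpha2'
```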
| {"golden_diff": "diff --git a/python/kfserving/kfserving/constants/constants.py b/python/kfserving/kfserving/constants/constants.py\n--- a/python/kfserving/kfserving/constants/constants.py\n+++ b/python/kfserving/kfserving/constants/constants.py\n@@ -19,6 +19,7 @@\n KFSERVING_KIND = 'InferenceService'\n KFSERVING_PLURAL = 'inferenceservices'\n KFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n+KFSERVING_API_VERSION = KFSERVING_GROUP + '/' + KFSERVING_VERSION\n \n KFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n", "issue": "[Help wanted] Add e2e test for canary rollout\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\n[A clear and concise description of what you want to happen.]\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\n# KFServing K8S constants\nKFSERVING_GROUP = 'serving.kubeflow.org'\nKFSERVING_KIND = 'InferenceService'\nKFSERVING_PLURAL = 'inferenceservices'\nKFSERVING_VERSION = os.environ.get('KFSERVING_VERSION', 'v1alpha2')\n\nKFSERVING_LOGLEVEL = os.environ.get('KFSERVING_LOGLEVEL', 'INFO').upper()\n\n# INFERENCESERVICE credentials common constants\nINFERENCESERVICE_CONFIG_MAP_NAME = 'inferenceservice-config'\nINFERENCESERVICE_SYSTEM_NAMESPACE = 'kfserving-system'\nDEFAULT_SECRET_NAME = \"kfserving-secret-\"\nDEFAULT_SA_NAME = \"kfserving-service-credentials\"\n\n# S3 credentials constants\nS3_ACCESS_KEY_ID_DEFAULT_NAME = \"awsAccessKeyID\"\nS3_SECRET_ACCESS_KEY_DEFAULT_NAME = \"awsSecretAccessKey\"\nS3_DEFAULT_CREDS_FILE = '~/.aws/credentials'\n\n# GCS credentials constants\nGCS_CREDS_FILE_DEFAULT_NAME = 'gcloud-application-credentials.json'\nGCS_DEFAULT_CREDS_FILE = '~/.config/gcloud/application_default_credentials.json'\n\n# Azure credentials constants\nAZ_DEFAULT_CREDS_FILE = '~/.azure/azure_credentials.json'\n", "path": "python/kfserving/kfserving/constants/constants.py"}]} | 1,082 | 154 |
gh_patches_debug_31066 | rasdani/github-patches | git_diff | getsentry__sentry-python-434 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception: raise OSError("handle is closed")
When I initialize sentry_sdk and use concurrent.futures.process.ProcessPoolExecutor, the exception below is raised after Python exits.
```
from concurrent.futures.process import ProcessPoolExecutor
import sentry_sdk
sentry_sdk.init(dsn="")
def test():
...
if __name__ == "__main__":
with ProcessPoolExecutor(max_workers=4) as worker:
worker.submit(test)
```
The exception:
```
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
thread_wakeup.wakeup()
File "/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
self._writer.send_bytes(b"")
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
self._check_closed()
File "/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
</issue>
<code>
[start of sentry_sdk/integrations/threading.py]
1 from __future__ import absolute_import
2
3 import sys
4
5 from threading import Thread
6
7 from sentry_sdk import Hub
8 from sentry_sdk._compat import reraise
9 from sentry_sdk.utils import event_from_exception
10 from sentry_sdk.integrations import Integration
11
12 from sentry_sdk._types import MYPY
13
14 if MYPY:
15 from typing import Any
16
17
18 class ThreadingIntegration(Integration):
19 identifier = "threading"
20
21 def __init__(self, propagate_hub=False):
22 self.propagate_hub = propagate_hub
23
24 @staticmethod
25 def setup_once():
26 # type: () -> None
27 old_start = Thread.start
28
29 def sentry_start(self, *a, **kw):
30 hub = Hub.current
31 integration = hub.get_integration(ThreadingIntegration)
32 if integration is not None:
33 if not integration.propagate_hub:
34 hub_ = None
35 else:
36 hub_ = Hub(hub)
37
38 self.run = _wrap_run(hub_, self.run)
39
40 return old_start(self, *a, **kw) # type: ignore
41
42 Thread.start = sentry_start # type: ignore
43
44
45 def _wrap_run(parent_hub, old_run):
46 def run(*a, **kw):
47 hub = parent_hub or Hub.current
48
49 with hub:
50 try:
51 return old_run(*a, **kw)
52 except Exception:
53 reraise(*_capture_exception())
54
55 return run
56
57
58 def _capture_exception():
59 hub = Hub.current
60 exc_info = sys.exc_info()
61
62 if hub.get_integration(ThreadingIntegration) is not None:
63 # If an integration is there, a client has to be there.
64 client = hub.client # type: Any
65
66 event, hint = event_from_exception(
67 exc_info,
68 client_options=client.options,
69 mechanism={"type": "threading", "handled": False},
70 )
71 hub.capture_event(event, hint=hint)
72
73 return exc_info
74
[end of sentry_sdk/integrations/threading.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py
--- a/sentry_sdk/integrations/threading.py
+++ b/sentry_sdk/integrations/threading.py
@@ -1,15 +1,13 @@
from __future__ import absolute_import
import sys
-
-from threading import Thread
+from threading import Thread, current_thread
from sentry_sdk import Hub
from sentry_sdk._compat import reraise
-from sentry_sdk.utils import event_from_exception
-from sentry_sdk.integrations import Integration
-
from sentry_sdk._types import MYPY
+from sentry_sdk.integrations import Integration
+from sentry_sdk.utils import event_from_exception
if MYPY:
from typing import Any
@@ -34,21 +32,26 @@
hub_ = None
else:
hub_ = Hub(hub)
-
- self.run = _wrap_run(hub_, self.run)
+ # Patching instance methods in `start()` creates a reference cycle if
+ # done in a naive way. See
+ # https://github.com/getsentry/sentry-python/pull/434
+ #
+ # In threading module, using current_thread API will access current thread instance
+ # without holding it to avoid a reference cycle in an easier way.
+ self.run = _wrap_run(hub_, self.run.__func__)
return old_start(self, *a, **kw) # type: ignore
Thread.start = sentry_start # type: ignore
-def _wrap_run(parent_hub, old_run):
+def _wrap_run(parent_hub, old_run_func):
def run(*a, **kw):
hub = parent_hub or Hub.current
-
with hub:
try:
- return old_run(*a, **kw)
+ self = current_thread()
+ return old_run_func(self, *a, **kw)
except Exception:
reraise(*_capture_exception())
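The comment block in the patch mentions a reference cycle; a minimal sketch, independent of sentry-sdk, of why storing a wrapper around the bound `self.run` back onto the instance creates one:
```python
# Sketch only: why `self.run = wrapper(self.run)` builds a reference cycle.
class Worker:
    def run(self):
        pass
w = Worker()
bound = w.run        # a bound method object keeps a reference to w
w.run = bound        # w -> instance dict -> bound method -> w  (a cycle)
# The wrapper closure in the old code captured the bound method the same way,
# so a started Thread could stay alive until the cyclic garbage collector ran.
# Fetching the instance at call time via threading.current_thread() and keeping
# only the plain function (self.run.__func__) avoids storing that back-reference.
```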
| {"golden_diff": "diff --git a/sentry_sdk/integrations/threading.py b/sentry_sdk/integrations/threading.py\n--- a/sentry_sdk/integrations/threading.py\n+++ b/sentry_sdk/integrations/threading.py\n@@ -1,15 +1,13 @@\n from __future__ import absolute_import\n \n import sys\n-\n-from threading import Thread\n+from threading import Thread, current_thread\n \n from sentry_sdk import Hub\n from sentry_sdk._compat import reraise\n-from sentry_sdk.utils import event_from_exception\n-from sentry_sdk.integrations import Integration\n-\n from sentry_sdk._types import MYPY\n+from sentry_sdk.integrations import Integration\n+from sentry_sdk.utils import event_from_exception\n \n if MYPY:\n from typing import Any\n@@ -34,21 +32,26 @@\n hub_ = None\n else:\n hub_ = Hub(hub)\n-\n- self.run = _wrap_run(hub_, self.run)\n+ # Patching instance methods in `start()` creates a reference cycle if\n+ # done in a naive way. See\n+ # https://github.com/getsentry/sentry-python/pull/434\n+ #\n+ # In threading module, using current_thread API will access current thread instance\n+ # without holding it to avoid a reference cycle in an easier way.\n+ self.run = _wrap_run(hub_, self.run.__func__)\n \n return old_start(self, *a, **kw) # type: ignore\n \n Thread.start = sentry_start # type: ignore\n \n \n-def _wrap_run(parent_hub, old_run):\n+def _wrap_run(parent_hub, old_run_func):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n-\n with hub:\n try:\n- return old_run(*a, **kw)\n+ self = current_thread()\n+ return old_run_func(self, *a, **kw)\n except Exception:\n reraise(*_capture_exception())\n", "issue": "Exception: raise OSError(\"handle is closed\")\nWhen I initialized sentry_sdk and used concurrent.futures.process.ProcessPoolExecutor, the exception will be raised after python exit.\r\n\r\n```\r\nfrom concurrent.futures.process import ProcessPoolExecutor\r\n\r\nimport sentry_sdk\r\n\r\nsentry_sdk.init(dsn=\"\")\r\n\r\n\r\ndef test():\r\n ...\r\n\r\n\r\nif __name__ == \"__main__\":\r\n with ProcessPoolExecutor(max_workers=4) as worker:\r\n worker.submit(test)\r\n```\r\n\r\nThe exception:\r\n```\r\nError in atexit._run_exitfuncs:\r\nTraceback (most recent call last):\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 101, in _python_exit\r\n thread_wakeup.wakeup()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/concurrent/futures/process.py\", line 89, in wakeup\r\n self._writer.send_bytes(b\"\")\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 183, in send_bytes\r\n self._check_closed()\r\n File \"/Users/tony.li/miniconda3/lib/python3.7/multiprocessing/connection.py\", line 136, in _check_closed\r\n raise OSError(\"handle is closed\")\r\nOSError: handle is closed\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom threading import Thread\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.utils import event_from_exception\nfrom sentry_sdk.integrations import Integration\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n\n\nclass ThreadingIntegration(Integration):\n identifier = \"threading\"\n\n def __init__(self, propagate_hub=False):\n self.propagate_hub = propagate_hub\n\n @staticmethod\n def setup_once():\n # type: () -> None\n old_start = Thread.start\n\n def sentry_start(self, *a, **kw):\n hub = Hub.current\n integration = hub.get_integration(ThreadingIntegration)\n if integration is not None:\n if not 
integration.propagate_hub:\n hub_ = None\n else:\n hub_ = Hub(hub)\n\n self.run = _wrap_run(hub_, self.run)\n\n return old_start(self, *a, **kw) # type: ignore\n\n Thread.start = sentry_start # type: ignore\n\n\ndef _wrap_run(parent_hub, old_run):\n def run(*a, **kw):\n hub = parent_hub or Hub.current\n\n with hub:\n try:\n return old_run(*a, **kw)\n except Exception:\n reraise(*_capture_exception())\n\n return run\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(ThreadingIntegration) is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"threading\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/threading.py"}]} | 1,406 | 441 |
gh_patches_debug_18060 | rasdani/github-patches | git_diff | scrapy__scrapy-4378 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated
There is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.
It would be a good chance to review the rest of the lines in that file; some others may be worth removing as well.
Related to https://github.com/scrapy/scrapy/issues/4356
</issue>
<code>
[start of scrapy/settings/deprecated.py]
1 import warnings
2 from scrapy.exceptions import ScrapyDeprecationWarning
3
4 DEPRECATED_SETTINGS = [
5 ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),
6 ('RESPONSE_CLASSES', 'no longer supported'),
7 ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),
8 ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),
9 ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
10 ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
11 ('SQLITE_DB', 'no longer supported'),
12 ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
13 ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
14 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
15 ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
16 ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
17 ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
18 ]
19
20
21 def check_deprecated_settings(settings):
22 deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]
23 if deprecated:
24 msg = "You are using the following settings which are deprecated or obsolete"
25 msg += " (ask [email protected] for alternatives):"
26 msg = msg + "\n " + "\n ".join("%s: %s" % x for x in deprecated)
27 warnings.warn(msg, ScrapyDeprecationWarning)
28
[end of scrapy/settings/deprecated.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py
--- a/scrapy/settings/deprecated.py
+++ b/scrapy/settings/deprecated.py
@@ -9,10 +9,8 @@
('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),
('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),
('SQLITE_DB', 'no longer supported'),
- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),
('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),
('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),
('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),
('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),
]
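For context, `check_deprecated_settings()` from the listing above only warns about deprecated keys that are actually set; a small usage sketch (the settings values here are hypothetical):
```python
# Hypothetical usage sketch of the helper shown in the listing.
from scrapy.settings import Settings
from scrapy.settings.deprecated import check_deprecated_settings
settings = Settings({"SQLITE_DB": "scrapy.db"})   # an obsolete key
check_deprecated_settings(settings)               # warns: SQLITE_DB no longer supported
```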
| {"golden_diff": "diff --git a/scrapy/settings/deprecated.py b/scrapy/settings/deprecated.py\n--- a/scrapy/settings/deprecated.py\n+++ b/scrapy/settings/deprecated.py\n@@ -9,10 +9,8 @@\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n- ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n- ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n ]\n", "issue": "Remove SCRAPY_SELECTORS_BACKEND from scrapy.settings.deprecated\nThere is no trace of `SCRAPY_SELECTORS_BACKEND` in the code anymore, so that line should go.\r\n\r\nIt would be a good chance to review the rest of the lines in that file, some others may be worth removing as well.\r\n\r\nRelated to https://github.com/scrapy/scrapy/issues/4356\n", "before_files": [{"content": "import warnings\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\nDEPRECATED_SETTINGS = [\n ('TRACK_REFS', 'no longer needed (trackref is always enabled)'),\n ('RESPONSE_CLASSES', 'no longer supported'),\n ('DEFAULT_RESPONSE_ENCODING', 'no longer supported'),\n ('BOT_VERSION', 'no longer used (user agent defaults to Scrapy now)'),\n ('ENCODING_ALIASES', 'no longer needed (encoding discovery uses w3lib now)'),\n ('STATS_ENABLED', 'no longer supported (change STATS_CLASS instead)'),\n ('SQLITE_DB', 'no longer supported'),\n ('SELECTORS_BACKEND', 'use SCRAPY_SELECTORS_BACKEND environment variable instead'),\n ('AUTOTHROTTLE_MIN_DOWNLOAD_DELAY', 'use DOWNLOAD_DELAY instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('AUTOTHROTTLE_MAX_CONCURRENCY', 'use CONCURRENT_REQUESTS_PER_DOMAIN instead'),\n ('REDIRECT_MAX_METAREFRESH_DELAY', 'use METAREFRESH_MAXDELAY instead'),\n ('LOG_UNSERIALIZABLE_REQUESTS', 'use SCHEDULER_DEBUG instead'),\n]\n\n\ndef check_deprecated_settings(settings):\n deprecated = [x for x in DEPRECATED_SETTINGS if settings[x[0]] is not None]\n if deprecated:\n msg = \"You are using the following settings which are deprecated or obsolete\"\n msg += \" (ask [email protected] for alternatives):\"\n msg = msg + \"\\n \" + \"\\n \".join(\"%s: %s\" % x for x in deprecated)\n warnings.warn(msg, ScrapyDeprecationWarning)\n", "path": "scrapy/settings/deprecated.py"}]} | 1,015 | 218 |
gh_patches_debug_22000 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG]: colossalai run failed with unknown reason
### 🐛 Describe the bug
Some users have reported that they encounter a launch failure when using `colossalai run`, but there is no error message telling exactly what went wrong. A sample output is given below.
```text
Error: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1
```
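The bare `except:` in `run_on_host()` (shown in the code below) discards the underlying exception, which is why the message above says nothing about the actual cause. A hedged sketch of the general pattern for surfacing it (`run_command` is a placeholder, not a real ColossalAI function):
```python
# Sketch only: keep the caught exception in the error message instead of dropping it.
try:
    run_command()  # placeholder for the fabric local()/run() call
except Exception as exc:
    print(f"Error: failed to run the command, exception: {exc}")
```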
### Environment
_No response_
</issue>
<code>
[start of colossalai/cli/launcher/multinode_runner.py]
1 import fabric
2 from .hostinfo import HostInfo, HostInfoList
3 from multiprocessing import Pipe, Process
4 from multiprocessing import connection as mp_connection
5 import click
6
7
8 def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
9 send_conn: mp_connection.Connection, env: dict) -> None:
10 """
11 Use fabric connection to execute command on local or remote hosts.
12
13 Args:
14 hostinfo (HostInfo): host information
15 workdir (str): the directory to execute the command
16 recv_conn (multiprocessing.connection.Connection): receive messages from the master sender
17 send_conn (multiprocessing.connection.Connection): send messages to the master receiver
18 env (dict): a dictionary for environment variables
19 """
20
21 fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)
22 finish = False
23 env_msg = ' '.join([f'{k}=\"{v}\"' for k, v in env.items()])
24
25 # keep listening until exit
26 while not finish:
27 # receive cmd
28 cmds = recv_conn.recv()
29
30 if cmds == 'exit':
31 # exit from the loop
32 finish = True
33 break
34 else:
35 # execute the commands
36 try:
37 # cd to execute directory
38 with fab_conn.cd(workdir):
39 # propagate the runtime environment
40 with fab_conn.prefix(f"export {env_msg}"):
41 if hostinfo.is_local_host:
42 # execute on the local machine
43 fab_conn.local(cmds, hide=False)
44 else:
45 # execute on the remote machine
46 fab_conn.run(cmds, hide=False)
47 send_conn.send('success')
48 except:
49 click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
50 send_conn.send('failure')
51
52 # shutdown
53 send_conn.send("finish")
54 fab_conn.close()
55
56
57 class MultiNodeRunner:
58 """
59 A runner to execute commands on an array of machines. This runner
60 is inspired by Nezha (https://github.com/zhuzilin/NeZha).
61 """
62
63 def __init__(self):
64 self.processes = {}
65 self.master_send_conns = {}
66 self.master_recv_conns = {}
67
68 def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:
69 """
70 Establish connections to a list of hosts
71
72 Args:
73 host_info_list (HostInfoList): a list of HostInfo objects
74 workdir (str): the directory where command is executed
75 env (dict): environment variables to propagate to hosts
76 """
77 for hostinfo in host_info_list:
78 master_send_conn, worker_recv_conn = Pipe()
79 master_recv_conn, worker_send_conn = Pipe()
80 p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))
81 p.start()
82 self.processes[hostinfo.hostname] = p
83 self.master_recv_conns[hostinfo.hostname] = master_recv_conn
84 self.master_send_conns[hostinfo.hostname] = master_send_conn
85
86 def send(self, hostinfo: HostInfo, cmd: str) -> None:
87 """
88 Send a command to a local/remote host.
89
90 Args:
91 hostinfo (HostInfo): host information
92 cmd (str): the command to execute
93 """
94
95 assert hostinfo.hostname in self.master_send_conns, \
96 f'{hostinfo} is not found in the current connections'
97 conn = self.master_send_conns[hostinfo.hostname]
98 conn.send(cmd)
99
100 def stop_all(self) -> None:
101 """
102 Stop connections to all hosts.
103 """
104
105 for hostname, conn in self.master_send_conns.items():
106 conn.send('exit')
107
108 def recv_from_all(self) -> dict:
109 """
110 Receive messages from all hosts
111
112 Returns:
113 msg_from_node (dict): a dictionry which contains messages from each node
114 """
115
116 msg_from_node = dict()
117 for hostname, conn in self.master_recv_conns.items():
118 msg_from_node[hostname] = conn.recv()
119 return msg_from_node
120
[end of colossalai/cli/launcher/multinode_runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py
--- a/colossalai/cli/launcher/multinode_runner.py
+++ b/colossalai/cli/launcher/multinode_runner.py
@@ -1,8 +1,10 @@
-import fabric
-from .hostinfo import HostInfo, HostInfoList
from multiprocessing import Pipe, Process
from multiprocessing import connection as mp_connection
+
import click
+import fabric
+
+from .hostinfo import HostInfo, HostInfoList
def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,
@@ -45,8 +47,10 @@
# execute on the remote machine
fab_conn.run(cmds, hide=False)
send_conn.send('success')
- except:
- click.echo(f"Error: failed to run {cmds} on {hostinfo.hostname}")
+ except Exception as e:
+ click.echo(
+ f"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}"
+ )
send_conn.send('failure')
# shutdown
| {"golden_diff": "diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py\n--- a/colossalai/cli/launcher/multinode_runner.py\n+++ b/colossalai/cli/launcher/multinode_runner.py\n@@ -1,8 +1,10 @@\n-import fabric\n-from .hostinfo import HostInfo, HostInfoList\n from multiprocessing import Pipe, Process\n from multiprocessing import connection as mp_connection\n+\n import click\n+import fabric\n+\n+from .hostinfo import HostInfo, HostInfoList\n \n \n def run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n@@ -45,8 +47,10 @@\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n- except:\n- click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n+ except Exception as e:\n+ click.echo(\n+ f\"Error: failed to run {cmds} on {hostinfo.hostname}, is localhost: {hostinfo.is_local_host}, exception: {e}\"\n+ )\n send_conn.send('failure')\n \n # shutdown\n", "issue": "[BUG]: colossalai run failed with unknown reason\n### \ud83d\udc1b Describe the bug\n\nSome users have reported that they encounter launch failure when using `colossalai run`, but there is no error message to tell exactly what went wrong. A sample output is given below.\r\n\r\n```text\r\nError: failed to run torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=127.0.0.1:29500 --rdzv_id=colossalai-default-job train.py --config config.py -s on 127.0.0.1\r\n```\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "import fabric\nfrom .hostinfo import HostInfo, HostInfoList\nfrom multiprocessing import Pipe, Process\nfrom multiprocessing import connection as mp_connection\nimport click\n\n\ndef run_on_host(hostinfo: HostInfo, workdir: str, recv_conn: mp_connection.Connection,\n send_conn: mp_connection.Connection, env: dict) -> None:\n \"\"\"\n Use fabric connection to execute command on local or remote hosts.\n\n Args:\n hostinfo (HostInfo): host information\n workdir (str): the directory to execute the command\n recv_conn (multiprocessing.connection.Connection): receive messages from the master sender\n send_conn (multiprocessing.connection.Connection): send messages to the master receiver\n env (dict): a dictionary for environment variables\n \"\"\"\n\n fab_conn = fabric.Connection(hostinfo.hostname, port=hostinfo.port)\n finish = False\n env_msg = ' '.join([f'{k}=\\\"{v}\\\"' for k, v in env.items()])\n\n # keep listening until exit\n while not finish:\n # receive cmd\n cmds = recv_conn.recv()\n\n if cmds == 'exit':\n # exit from the loop\n finish = True\n break\n else:\n # execute the commands\n try:\n # cd to execute directory\n with fab_conn.cd(workdir):\n # propagate the runtime environment\n with fab_conn.prefix(f\"export {env_msg}\"):\n if hostinfo.is_local_host:\n # execute on the local machine\n fab_conn.local(cmds, hide=False)\n else:\n # execute on the remote machine\n fab_conn.run(cmds, hide=False)\n send_conn.send('success')\n except:\n click.echo(f\"Error: failed to run {cmds} on {hostinfo.hostname}\")\n send_conn.send('failure')\n\n # shutdown\n send_conn.send(\"finish\")\n fab_conn.close()\n\n\nclass MultiNodeRunner:\n \"\"\"\n A runner to execute commands on an array of machines. 
This runner\n is inspired by Nezha (https://github.com/zhuzilin/NeZha).\n \"\"\"\n\n def __init__(self):\n self.processes = {}\n self.master_send_conns = {}\n self.master_recv_conns = {}\n\n def connect(self, host_info_list: HostInfoList, workdir: str, env: dict) -> None:\n \"\"\"\n Establish connections to a list of hosts\n\n Args:\n host_info_list (HostInfoList): a list of HostInfo objects\n workdir (str): the directory where command is executed\n env (dict): environment variables to propagate to hosts\n \"\"\"\n for hostinfo in host_info_list:\n master_send_conn, worker_recv_conn = Pipe()\n master_recv_conn, worker_send_conn = Pipe()\n p = Process(target=run_on_host, args=(hostinfo, workdir, worker_recv_conn, worker_send_conn, env))\n p.start()\n self.processes[hostinfo.hostname] = p\n self.master_recv_conns[hostinfo.hostname] = master_recv_conn\n self.master_send_conns[hostinfo.hostname] = master_send_conn\n\n def send(self, hostinfo: HostInfo, cmd: str) -> None:\n \"\"\"\n Send a command to a local/remote host.\n\n Args:\n hostinfo (HostInfo): host information\n cmd (str): the command to execute\n \"\"\"\n\n assert hostinfo.hostname in self.master_send_conns, \\\n f'{hostinfo} is not found in the current connections'\n conn = self.master_send_conns[hostinfo.hostname]\n conn.send(cmd)\n\n def stop_all(self) -> None:\n \"\"\"\n Stop connections to all hosts.\n \"\"\"\n\n for hostname, conn in self.master_send_conns.items():\n conn.send('exit')\n\n def recv_from_all(self) -> dict:\n \"\"\"\n Receive messages from all hosts\n\n Returns:\n msg_from_node (dict): a dictionry which contains messages from each node\n \"\"\"\n\n msg_from_node = dict()\n for hostname, conn in self.master_recv_conns.items():\n msg_from_node[hostname] = conn.recv()\n return msg_from_node\n", "path": "colossalai/cli/launcher/multinode_runner.py"}]} | 1,846 | 268 |
gh_patches_debug_5847 | rasdani/github-patches | git_diff | enthought__chaco-401 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`six.moves` incorrectly imported in `errorbar_plot.py`
**Problem Description**
Noticed a stacktrace in my work that pointed out a `NameError` (`global name 'sm' is not defined`) at `errorbar_plot.py:77` of 4.7.1, which is:
l1, l2, l3 = sm.map(len, (index, value_low, value_high))
It seems that this is because normally `six.moves` is imported like `import six.moves as sm` and the author was used to this, but this file just has `import six.moves`. This seems to result from [`dc08831`](https://github.com/enthought/chaco/commit/dc08831d35c60057b0e26466e412e644dea1c89b#diff-7b3ca9023e76b4689bb2a0e42bf4d8f1)
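In other words, `import six.moves` only binds the top-level name `six`; no name `sm` ever exists, so the later `sm.map(...)` lookup fails. A minimal sketch of the two import forms:
```python
# Minimal sketch: what each import form binds.
import six.moves            # binds the name "six"; "sm" is not defined
# sm.map(len, ([], []))     # would raise NameError: name 'sm' is not defined
import six.moves as sm      # binds the name "sm"
l1, l2, l3 = sm.map(len, ([1], [2, 2], []))   # works: 1, 2, 0
```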
**Reproduction Steps:**
I don't have clear-cut reproduction steps, but a cursory glance at the code seems to make the cause of the error obvious. I'm afraid I'm not actually able to easily modify the setup we have to create a more minimal example, or even to test the proposed change (which should be to just add the alias to the import).
Especially since the traceback doesn't even contain any of our code, so I don't have an easy way to find the offending code, haha. Traceback is:
Traceback (most recent call last):
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py", line 202, in paintEvent
self.handler.paintEvent(event)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py", line 54, in paintEvent
self._enable_window._paint(event)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/abstract_window.py", line 468, in _paint
self.component.draw(gc, view_bounds=(0, 0, size[0], size[1]))
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py", line 427, in draw
self._draw(gc, view_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py", line 769, in _draw
self._dispatch_draw(layer, bb, view_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py", line 272, in _dispatch_draw
component._dispatch_draw(layer, gc, new_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py", line 272, in _dispatch_draw
component._dispatch_draw(layer, gc, new_bounds, mode)
File "/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py", line 799, in _dispatch_draw
handler(gc, view_bounds, mode)
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py", line 466, in _draw_plot
self._draw_component(gc, view_bounds, mode)
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py", line 473, in _draw_component
pts = self.get_screen_points()
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py", line 61, in get_screen_points
self._gather_points()
File "/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py", line 77, in _gather_points
l1, l2, l3 = sm.map(len, (index, value_low, value_high))
NameError: global name 'sm' is not defined
**Expected behavior:**
The traceback does not occur.
**OS, Python version:**
Ubuntu 14.04, Python 2.7.14, and chaco 4.7.1 with enable 4.7.1.
</issue>
<code>
[start of chaco/errorbar_plot.py]
1
2 from __future__ import with_statement
3
4 import six
5 import six.moves
6
7 # Major library imports
8 from numpy import column_stack, compress, invert, isnan, transpose
9 import logging
10
11 # Enthought library imports
12 from traits.api import Any, Enum, Float, Instance
13
14 # Chaco imports
15 from .lineplot import LinePlot
16 from .abstract_data_source import AbstractDataSource
17
18 # Set up a logger for this module
19 logger = logging.getLogger(__name__)
20
21
22
23 class ErrorBarPlot(LinePlot):
24 """ Renders errorbars at various points.
25 """
26
27 # The datasource containing the low values
28 value_low = Instance(AbstractDataSource)
29
30 # The datasource containing the high values
31 value_high = Instance(AbstractDataSource)
32
33 # The screen-space width of the endcap bars
34 endcap_size = Float(5.0)
35
36 # The kind of encap to render on error bars
37 endcap_style = Enum("bar", "none", None)
38
39 # Override the inherited trait definition
40 _cached_data_pts = Any
41
42 def map_screen(self, data_array):
43 """ data_array can be Nx2 or Nx3. In the former case, each row is
44 treated as (index, value), and this method returns screen X and Y
45 coordinates. In the latter case, each row is treated as (index,
46 value_low, value_high), and the method returns either (x, ylow, yhigh)
47 or (y, xlow, xhigh) depending on self.orientation.
48 """
49 if len(data_array) == 0:
50 return []
51 elif data_array.shape[1] == 2:
52 return LinePlot.map_screen(self, data_array)
53 else:
54 x, ylow, yhigh = transpose(data_array)
55 sx = self.index_mapper.map_screen(x)
56 sylow = self.value_mapper.map_screen(ylow)
57 syhigh = self.value_mapper.map_screen(yhigh)
58 return column_stack((sx, sylow, syhigh))
59
60 def get_screen_points(self):
61 self._gather_points()
62 return self.map_screen(self._cached_data_pts)
63
64 def _gather_points(self):
65
66 if self._cache_valid:
67 return
68
69 if not self.index or not self.value_low or not self.value_high:
70 return
71
72 index, index_mask = self.index.get_data_mask()
73 value_low, value_low_mask = self.value_low.get_data_mask()
74 value_high, value_high_mask = self.value_high.get_data_mask()
75 value_mask = value_low_mask & value_high_mask
76
77 l1, l2, l3 = sm.map(len, (index, value_low, value_high))
78 if 0 in (l1, l2, l3) or not (l1 == l2 == l3):
79 logger.warn("Chaco: using empty dataset; index_len=%d, value_low_len=%d, value_high_len=%d." % (l1,l2,l3))
80 self._cached_data_pts = []
81 self._cache_valid = True
82 return
83
84 index_range_mask = self.index_mapper.range.mask_data(index)
85 value_low_mask = self.value_mapper.range.mask_data(value_low)
86 value_high_mask = self.value_mapper.range.mask_data(value_high)
87 value_range_mask = value_low_mask | value_high_mask
88
89 nan_mask = invert(isnan(index_mask) | isnan(value_mask))
90 point_mask = index_mask & value_mask & nan_mask & index_range_mask & value_range_mask
91
92 points = column_stack((index, value_low, value_high))
93
94 self._cached_data_pts = compress(point_mask, points, axis=0)
95 self._cache_valid = True
96 return
97
98 def _render(self, gc, points, icon_mode=False):
99 if len(points) == 0:
100 return
101
102 if not icon_mode:
103 gc.clip_to_rect(self.x, self.y, self.width, self.height)
104
105 with gc:
106 gc.set_antialias(False)
107 gc.set_stroke_color(self.color_)
108 gc.set_line_width(self.line_width)
109 gc.set_line_dash(self.line_style_)
110
111 if self.orientation == "h":
112 x, ylow, yhigh = transpose(points)
113 start, end = column_stack((x, ylow)), column_stack((x, yhigh))
114 gc.line_set(start, end)
115 axis = 0
116 low = ylow
117 high = yhigh
118
119 else:
120 y, xlow, xhigh = transpose(points)
121 start, end = column_stack((xlow, y)), column_stack((xhigh, y))
122 gc.line_set(start, end)
123 axis = 1
124 low = xlow
125 high = xhigh
126
127 if self.endcap_style == "bar":
128 self._render_bar_endcap(gc, start, end, low, high, axis)
129 else:
130 gc.stroke_path()
131
132 if not icon_mode:
133 self._draw_default_axes(gc)
134 return
135
136
137 def _render_bar_endcap(self, gc, start, end, low, high, axis):
138 """ Renders the endcaps for endcap_style == "bar". start and end are
139 the two endpoints of the bare errorbar. axis is the column index
140 corresponding to the index direction, so for orientation of 'h', axis
141 is 0.
142
143 This method modifies start and end.
144 """
145 delta = self.endcap_size / 2.0
146 start[:,axis] -= delta
147 end[:,axis] += delta
148
149 start[:,1-axis] = low
150 end[:,1-axis] = low
151 gc.line_set(start, end)
152
153 start[:,1-axis] = high
154 end[:,1-axis] = high
155 gc.line_set(start, end)
156 gc.stroke_path()
157 return
158
159
160 def _render_icon(self, gc, x, y, width, height):
161 pass
162
163
[end of chaco/errorbar_plot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chaco/errorbar_plot.py b/chaco/errorbar_plot.py
--- a/chaco/errorbar_plot.py
+++ b/chaco/errorbar_plot.py
@@ -2,7 +2,7 @@
from __future__ import with_statement
import six
-import six.moves
+import six.moves as sm
# Major library imports
from numpy import column_stack, compress, invert, isnan, transpose
@@ -159,4 +159,3 @@
def _render_icon(self, gc, x, y, width, height):
pass
-
| {"golden_diff": "diff --git a/chaco/errorbar_plot.py b/chaco/errorbar_plot.py\n--- a/chaco/errorbar_plot.py\n+++ b/chaco/errorbar_plot.py\n@@ -2,7 +2,7 @@\n from __future__ import with_statement\n \n import six\n-import six.moves\n+import six.moves as sm\n \n # Major library imports\n from numpy import column_stack, compress, invert, isnan, transpose\n@@ -159,4 +159,3 @@\n \n def _render_icon(self, gc, x, y, width, height):\n pass\n-\n", "issue": "`six.moves` incorrectly imported in `errorbar_plot.py`\n**Problem Description**\r\nNoticed a stacktrace in my work that pointed out a `NameError` (`global name 'sm' is not defined`) at `errorbar_plot.py:77` of 4.7.1, which is:\r\n\r\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\r\n\r\nIt seems that this is because normally `six.moves` is imported like `import six.moves as sm` and the author was used to this, but this file just has `import six.moves`. This seems to result from [`dc08831`](https://github.com/enthought/chaco/commit/dc08831d35c60057b0e26466e412e644dea1c89b#diff-7b3ca9023e76b4689bb2a0e42bf4d8f1)\r\n\r\n**Reproduction Steps:**\r\n\r\nI don't have clear cut reproduction steps, but a cursory glance at the code seems to make the cause of the error obvious. I'm afraid I'm not actually even able to easily modify the setup we have to create a more minimal example or to even test the proposed change (which should be to just add the alias to the import).\r\n\r\nEspecially since the traceback doesn't even contain any of our code, so I don't have an easy way to find the offending code, haha. Traceback is:\r\n\r\n Traceback (most recent call last):\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py\", line 202, in paintEvent\r\n self.handler.paintEvent(event)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/qt4/base_window.py\", line 54, in paintEvent\r\n self._enable_window._paint(event)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/abstract_window.py\", line 468, in _paint\r\n self.component.draw(gc, view_bounds=(0, 0, size[0], size[1]))\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py\", line 427, in draw\r\n self._draw(gc, view_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py\", line 769, in _draw\r\n self._dispatch_draw(layer, bb, view_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py\", line 272, in _dispatch_draw\r\n component._dispatch_draw(layer, gc, new_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/container.py\", line 272, in _dispatch_draw\r\n component._dispatch_draw(layer, gc, new_bounds, mode)\r\n File \"/[removed]/site-packages/enable-4.7.1-py2.7-linux-x86_64.egg/enable/component.py\", line 799, in _dispatch_draw\r\n handler(gc, view_bounds, mode)\r\n File \"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py\", line 466, in _draw_plot\r\n self._draw_component(gc, view_bounds, mode)\r\n File \"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/base_xy_plot.py\", line 473, in _draw_component\r\n pts = self.get_screen_points()\r\n File \"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py\", line 61, in get_screen_points\r\n self._gather_points()\r\n File 
\"/[removed]/site-packages/chaco-4.7.1-py2.7-linux-x86_64.egg/chaco/errorbar_plot.py\", line 77, in _gather_points\r\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\r\n NameError: global name 'sm' is not defined\r\n\r\n**Expected behavior:**\r\n\r\nThe traceback does not occur.\r\n\r\n**OS, Python version:**\r\n\r\nUbuntu 14.04, Python 2.7.14, and chaco 4.7.1 with enable 4.7.1.\r\n\n", "before_files": [{"content": "\nfrom __future__ import with_statement\n\nimport six\nimport six.moves\n\n# Major library imports\nfrom numpy import column_stack, compress, invert, isnan, transpose\nimport logging\n\n# Enthought library imports\nfrom traits.api import Any, Enum, Float, Instance\n\n# Chaco imports\nfrom .lineplot import LinePlot\nfrom .abstract_data_source import AbstractDataSource\n\n# Set up a logger for this module\nlogger = logging.getLogger(__name__)\n\n\n\nclass ErrorBarPlot(LinePlot):\n \"\"\" Renders errorbars at various points.\n \"\"\"\n\n # The datasource containing the low values\n value_low = Instance(AbstractDataSource)\n\n # The datasource containing the high values\n value_high = Instance(AbstractDataSource)\n\n # The screen-space width of the endcap bars\n endcap_size = Float(5.0)\n\n # The kind of encap to render on error bars\n endcap_style = Enum(\"bar\", \"none\", None)\n\n # Override the inherited trait definition\n _cached_data_pts = Any\n\n def map_screen(self, data_array):\n \"\"\" data_array can be Nx2 or Nx3. In the former case, each row is\n treated as (index, value), and this method returns screen X and Y\n coordinates. In the latter case, each row is treated as (index,\n value_low, value_high), and the method returns either (x, ylow, yhigh)\n or (y, xlow, xhigh) depending on self.orientation.\n \"\"\"\n if len(data_array) == 0:\n return []\n elif data_array.shape[1] == 2:\n return LinePlot.map_screen(self, data_array)\n else:\n x, ylow, yhigh = transpose(data_array)\n sx = self.index_mapper.map_screen(x)\n sylow = self.value_mapper.map_screen(ylow)\n syhigh = self.value_mapper.map_screen(yhigh)\n return column_stack((sx, sylow, syhigh))\n\n def get_screen_points(self):\n self._gather_points()\n return self.map_screen(self._cached_data_pts)\n\n def _gather_points(self):\n\n if self._cache_valid:\n return\n\n if not self.index or not self.value_low or not self.value_high:\n return\n\n index, index_mask = self.index.get_data_mask()\n value_low, value_low_mask = self.value_low.get_data_mask()\n value_high, value_high_mask = self.value_high.get_data_mask()\n value_mask = value_low_mask & value_high_mask\n\n l1, l2, l3 = sm.map(len, (index, value_low, value_high))\n if 0 in (l1, l2, l3) or not (l1 == l2 == l3):\n logger.warn(\"Chaco: using empty dataset; index_len=%d, value_low_len=%d, value_high_len=%d.\" % (l1,l2,l3))\n self._cached_data_pts = []\n self._cache_valid = True\n return\n\n index_range_mask = self.index_mapper.range.mask_data(index)\n value_low_mask = self.value_mapper.range.mask_data(value_low)\n value_high_mask = self.value_mapper.range.mask_data(value_high)\n value_range_mask = value_low_mask | value_high_mask\n\n nan_mask = invert(isnan(index_mask) | isnan(value_mask))\n point_mask = index_mask & value_mask & nan_mask & index_range_mask & value_range_mask\n\n points = column_stack((index, value_low, value_high))\n\n self._cached_data_pts = compress(point_mask, points, axis=0)\n self._cache_valid = True\n return\n\n def _render(self, gc, points, icon_mode=False):\n if len(points) == 0:\n return\n\n if not icon_mode:\n gc.clip_to_rect(self.x, 
self.y, self.width, self.height)\n\n with gc:\n gc.set_antialias(False)\n gc.set_stroke_color(self.color_)\n gc.set_line_width(self.line_width)\n gc.set_line_dash(self.line_style_)\n\n if self.orientation == \"h\":\n x, ylow, yhigh = transpose(points)\n start, end = column_stack((x, ylow)), column_stack((x, yhigh))\n gc.line_set(start, end)\n axis = 0\n low = ylow\n high = yhigh\n\n else:\n y, xlow, xhigh = transpose(points)\n start, end = column_stack((xlow, y)), column_stack((xhigh, y))\n gc.line_set(start, end)\n axis = 1\n low = xlow\n high = xhigh\n\n if self.endcap_style == \"bar\":\n self._render_bar_endcap(gc, start, end, low, high, axis)\n else:\n gc.stroke_path()\n\n if not icon_mode:\n self._draw_default_axes(gc)\n return\n\n\n def _render_bar_endcap(self, gc, start, end, low, high, axis):\n \"\"\" Renders the endcaps for endcap_style == \"bar\". start and end are\n the two endpoints of the bare errorbar. axis is the column index\n corresponding to the index direction, so for orientation of 'h', axis\n is 0.\n\n This method modifies start and end.\n \"\"\"\n delta = self.endcap_size / 2.0\n start[:,axis] -= delta\n end[:,axis] += delta\n\n start[:,1-axis] = low\n end[:,1-axis] = low\n gc.line_set(start, end)\n\n start[:,1-axis] = high\n end[:,1-axis] = high\n gc.line_set(start, end)\n gc.stroke_path()\n return\n\n\n def _render_icon(self, gc, x, y, width, height):\n pass\n\n", "path": "chaco/errorbar_plot.py"}]} | 3,312 | 124 |
gh_patches_debug_17718 | rasdani/github-patches | git_diff | bokeh__bokeh-8795 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DirectoryHandler does not handle ipynb files correctly
The documentation says that the ``DirectoryHandler`` allows serving main.ipynb notebook files inside a directory and the code does have codepaths that mention notebooks. However the implementation tries to load notebook files as if they were normal scripts, leading to immediate errors because notebook files are actually JSON. If the DirectoryHandler finds an ipynb file it should apply the same nbconvert transform used by the NotebookHandler.
</issue>
<code>
[start of bokeh/application/handlers/directory.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Provide a Bokeh Application Handler to build up documents by running
8 the code from ``main.py`` or ``main.ipynb`` files in specified directories.
9
10 The directory may also optionally contain:
11
12 * A ``server_lifecyle.py`` module to provide lifecycle callbacks for the
13 application and sessions.
14
15 * A ``static`` subdirectory containing app-specific static resources to
16 serve.
17
18 * A ``theme.yaml`` file containing a Bokeh theme to automatically apply to
19 all new documents.
20
21 * A ``templates`` subdirectory containing templates for app display
22
23 A full directory layout might look like:
24
25 .. code-block:: none
26
27 myapp
28 |
29 +---main.py
30 +---server_lifecycle.py
31 +---static
32 +---theme.yaml
33 +---templates
34 +---index.html
35
36 '''
37
38 #-----------------------------------------------------------------------------
39 # Boilerplate
40 #-----------------------------------------------------------------------------
41 from __future__ import absolute_import, division, print_function, unicode_literals
42
43 import logging
44 log = logging.getLogger(__name__)
45
46 #-----------------------------------------------------------------------------
47 # Imports
48 #-----------------------------------------------------------------------------
49
50 # Standard library imports
51 from os.path import basename, dirname, exists, join
52
53 # External imports
54 from jinja2 import Environment, FileSystemLoader
55
56 # Bokeh imports
57 from .handler import Handler
58 from .script import ScriptHandler
59 from .server_lifecycle import ServerLifecycleHandler
60
61 #-----------------------------------------------------------------------------
62 # Globals and constants
63 #-----------------------------------------------------------------------------
64
65 __all__ = (
66 'DirectoryHandler',
67 )
68
69 #-----------------------------------------------------------------------------
70 # General API
71 #-----------------------------------------------------------------------------
72
73 #-----------------------------------------------------------------------------
74 # Dev API
75 #-----------------------------------------------------------------------------
76
77 class DirectoryHandler(Handler):
78 ''' Load an application directory which modifies a Document.
79
80 '''
81
82 def __init__(self, *args, **kwargs):
83 '''
84 Keywords:
85 filename (str) : a path to an application directory with either "main.py" or "main.ipynb"
86
87 argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py
88 '''
89 super(DirectoryHandler, self).__init__(*args, **kwargs)
90
91 if 'filename' not in kwargs:
92 raise ValueError('Must pass a filename to DirectoryHandler')
93 src_path = kwargs['filename']
94 argv = kwargs.get('argv', [])
95
96 main_py = join(src_path, 'main.py')
97 main_ipy = join(src_path, 'main.ipynb')
98 if exists(main_py) and exists(main_ipy):
99 log.warning("Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'" % (src_path))
100 main = main_py
101 elif exists(main_py):
102 main = main_py
103 elif exists(main_ipy):
104 main = main_ipy
105 else:
106 raise ValueError("No 'main.py' or 'main.ipynb' in %s" % (src_path))
107 self._path = src_path
108 self._main = main
109 self._main_handler = ScriptHandler(filename=self._main, argv=argv)
110
111 lifecycle = join(src_path, 'server_lifecycle.py')
112 if exists(lifecycle):
113 self._lifecycle = lifecycle
114 self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)
115 else:
116 self._lifecycle = None
117 self._lifecycle_handler = Handler() # no-op handler
118
119 self._theme = None
120 themeyaml = join(src_path, 'theme.yaml')
121 if exists(themeyaml):
122 from bokeh.themes import Theme
123 self._theme = Theme(filename=themeyaml)
124
125 appstatic = join(src_path, 'static')
126 if exists(appstatic):
127 self._static = appstatic
128
129 self._template = None
130 appindex = join(src_path, 'templates', 'index.html')
131 if exists(appindex):
132 env = Environment(loader=FileSystemLoader(dirname(appindex)))
133 self._template = env.get_template('index.html')
134
135 # Properties --------------------------------------------------------------
136
137 @property
138 def error(self):
139 ''' If the handler fails, may contain a related error message.
140
141 '''
142 return self._main_handler.error or self._lifecycle_handler.error
143
144 @property
145 def error_detail(self):
146 ''' If the handler fails, may contain a traceback or other details.
147
148 '''
149 return self._main_handler.error_detail or self._lifecycle_handler.error_detail
150
151 @property
152 def failed(self):
153 ''' ``True`` if the handler failed to modify the doc
154
155 '''
156 return self._main_handler.failed or self._lifecycle_handler.failed
157
158 @property
159 def safe_to_fork(self):
160 ''' Whether it is still safe for the Bokeh server to fork new workers.
161
162 ``False`` if the configured code (script, notebook, etc.) has already
163 been run.
164
165 '''
166 return self._main_handler.safe_to_fork
167
168 # Public methods ----------------------------------------------------------
169
170 def modify_document(self, doc):
171 ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the
172 document.
173
174 This method will also search the app directory for any theme or
175 template files, and automatically configure the document with them
176 if they are found.
177
178 '''
179 if self._lifecycle_handler.failed:
180 return
181 # Note: we do NOT copy self._theme, which assumes the Theme
182 # class is immutable (has no setters)
183 if self._theme is not None:
184 doc.theme = self._theme
185
186 if self._template is not None:
187 doc.template = self._template
188
189 # This internal handler should never add a template
190 self._main_handler.modify_document(doc)
191
192 def on_server_loaded(self, server_context):
193 ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if
194 it is defined) when the server is first started.
195
196 Args:
197 server_context (ServerContext) :
198
199 '''
200 return self._lifecycle_handler.on_server_loaded(server_context)
201
202 def on_server_unloaded(self, server_context):
203 ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if
204 it is defined) when the server cleanly exits. (Before stopping the
205 server's ``IOLoop``.)
206
207 Args:
208 server_context (ServerContext) :
209
210 .. warning::
211 In practice this code may not run, since servers are often killed
212 by a signal.
213
214
215 '''
216 return self._lifecycle_handler.on_server_unloaded(server_context)
217
218 def on_session_created(self, session_context):
219 ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if
220 it is defined) when a new session is created.
221
222 Args:
223 session_context (SessionContext) :
224
225 '''
226 return self._lifecycle_handler.on_session_created(session_context)
227
228 def on_session_destroyed(self, session_context):
229 ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if
230 it is defined) when a session is destroyed.
231
232 Args:
233 session_context (SessionContext) :
234
235 '''
236 return self._lifecycle_handler.on_session_destroyed(session_context)
237
238 def url_path(self):
239 ''' The last path component for the basename of the path to the
240 configured directory.
241
242 '''
243 if self.failed:
244 return None
245 else:
246 # TODO should fix invalid URL characters
247 return '/' + basename(self._path)
248
249 #-----------------------------------------------------------------------------
250 # Private API
251 #-----------------------------------------------------------------------------
252
253 #-----------------------------------------------------------------------------
254 # Code
255 #-----------------------------------------------------------------------------
256
[end of bokeh/application/handlers/directory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bokeh/application/handlers/directory.py b/bokeh/application/handlers/directory.py
--- a/bokeh/application/handlers/directory.py
+++ b/bokeh/application/handlers/directory.py
@@ -55,6 +55,7 @@
# Bokeh imports
from .handler import Handler
+from .notebook import NotebookHandler
from .script import ScriptHandler
from .server_lifecycle import ServerLifecycleHandler
@@ -106,7 +107,9 @@
raise ValueError("No 'main.py' or 'main.ipynb' in %s" % (src_path))
self._path = src_path
self._main = main
- self._main_handler = ScriptHandler(filename=self._main, argv=argv)
+
+ handler = NotebookHandler if main.endswith('.ipynb') else ScriptHandler
+ self._main_handler = handler(filename=self._main, argv=argv)
lifecycle = join(src_path, 'server_lifecycle.py')
if exists(lifecycle):
| {"golden_diff": "diff --git a/bokeh/application/handlers/directory.py b/bokeh/application/handlers/directory.py\n--- a/bokeh/application/handlers/directory.py\n+++ b/bokeh/application/handlers/directory.py\n@@ -55,6 +55,7 @@\n \n # Bokeh imports\n from .handler import Handler\n+from .notebook import NotebookHandler\n from .script import ScriptHandler\n from .server_lifecycle import ServerLifecycleHandler\n \n@@ -106,7 +107,9 @@\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n- self._main_handler = ScriptHandler(filename=self._main, argv=argv)\n+\n+ handler = NotebookHandler if main.endswith('.ipynb') else ScriptHandler\n+ self._main_handler = handler(filename=self._main, argv=argv)\n \n lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n", "issue": "DirectoryHandler does not handle ipynb files correctly\nThe documentation says that the ``DirectoryHandler`` allows serving main.ipynb notebook files inside a directory and the code does have codepaths that mention notebooks. However the implementation tries to load notebook files as if they were normal scripts, leading to immediate errors because notebook files are actually JSON. If the DirectoryHandler finds an ipynb file it should apply the same nbconvert transform used by the NotebookHandler.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from ``main.py`` or ``main.ipynb`` files in specified directories.\n\nThe directory may also optionally contain:\n\n* A ``server_lifecyle.py`` module to provide lifecycle callbacks for the\n application and sessions.\n\n* A ``static`` subdirectory containing app-specific static resources to\n serve.\n\n* A ``theme.yaml`` file containing a Bokeh theme to automatically apply to\n all new documents.\n\n* A ``templates`` subdirectory containing templates for app display\n\nA full directory layout might look like:\n\n.. 
code-block:: none\n\n myapp\n |\n +---main.py\n +---server_lifecycle.py\n +---static\n +---theme.yaml\n +---templates\n +---index.html\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nfrom os.path import basename, dirname, exists, join\n\n# External imports\nfrom jinja2 import Environment, FileSystemLoader\n\n# Bokeh imports\nfrom .handler import Handler\nfrom .script import ScriptHandler\nfrom .server_lifecycle import ServerLifecycleHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'DirectoryHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass DirectoryHandler(Handler):\n ''' Load an application directory which modifies a Document.\n\n '''\n\n def __init__(self, *args, **kwargs):\n '''\n Keywords:\n filename (str) : a path to an application directory with either \"main.py\" or \"main.ipynb\"\n\n argv (list[str], optional) : a list of string arguments to make available as sys.argv to main.py\n '''\n super(DirectoryHandler, self).__init__(*args, **kwargs)\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to DirectoryHandler')\n src_path = kwargs['filename']\n argv = kwargs.get('argv', [])\n\n main_py = join(src_path, 'main.py')\n main_ipy = join(src_path, 'main.ipynb')\n if exists(main_py) and exists(main_ipy):\n log.warning(\"Found both 'main.py' and 'main.ipynb' in %s, using 'main.py'\" % (src_path))\n main = main_py\n elif exists(main_py):\n main = main_py\n elif exists(main_ipy):\n main = main_ipy\n else:\n raise ValueError(\"No 'main.py' or 'main.ipynb' in %s\" % (src_path))\n self._path = src_path\n self._main = main\n self._main_handler = ScriptHandler(filename=self._main, argv=argv)\n\n lifecycle = join(src_path, 'server_lifecycle.py')\n if exists(lifecycle):\n self._lifecycle = lifecycle\n self._lifecycle_handler = ServerLifecycleHandler(filename=self._lifecycle, argv=argv)\n else:\n self._lifecycle = None\n self._lifecycle_handler = Handler() # no-op handler\n\n self._theme = None\n themeyaml = join(src_path, 'theme.yaml')\n if exists(themeyaml):\n from bokeh.themes import Theme\n self._theme = Theme(filename=themeyaml)\n\n appstatic = join(src_path, 'static')\n if exists(appstatic):\n self._static = appstatic\n\n self._template = None\n appindex = join(src_path, 'templates', 'index.html')\n if exists(appindex):\n env = Environment(loader=FileSystemLoader(dirname(appindex)))\n self._template = env.get_template('index.html')\n\n # Properties --------------------------------------------------------------\n\n @property\n def error(self):\n ''' If the handler fails, may contain a related error message.\n\n '''\n return 
self._main_handler.error or self._lifecycle_handler.error\n\n @property\n def error_detail(self):\n ''' If the handler fails, may contain a traceback or other details.\n\n '''\n return self._main_handler.error_detail or self._lifecycle_handler.error_detail\n\n @property\n def failed(self):\n ''' ``True`` if the handler failed to modify the doc\n\n '''\n return self._main_handler.failed or self._lifecycle_handler.failed\n\n @property\n def safe_to_fork(self):\n ''' Whether it is still safe for the Bokeh server to fork new workers.\n\n ``False`` if the configured code (script, notebook, etc.) has already\n been run.\n\n '''\n return self._main_handler.safe_to_fork\n\n # Public methods ----------------------------------------------------------\n\n def modify_document(self, doc):\n ''' Execute the configured ``main.py`` or ``main.ipynb`` to modify the\n document.\n\n This method will also search the app directory for any theme or\n template files, and automatically configure the document with them\n if they are found.\n\n '''\n if self._lifecycle_handler.failed:\n return\n # Note: we do NOT copy self._theme, which assumes the Theme\n # class is immutable (has no setters)\n if self._theme is not None:\n doc.theme = self._theme\n\n if self._template is not None:\n doc.template = self._template\n\n # This internal handler should never add a template\n self._main_handler.modify_document(doc)\n\n def on_server_loaded(self, server_context):\n ''' Execute `on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server is first started.\n\n Args:\n server_context (ServerContext) :\n\n '''\n return self._lifecycle_handler.on_server_loaded(server_context)\n\n def on_server_unloaded(self, server_context):\n ''' Execute ``on_server_unloaded`` from ``server_lifecycle.py`` (if\n it is defined) when the server cleanly exits. (Before stopping the\n server's ``IOLoop``.)\n\n Args:\n server_context (ServerContext) :\n\n .. warning::\n In practice this code may not run, since servers are often killed\n by a signal.\n\n\n '''\n return self._lifecycle_handler.on_server_unloaded(server_context)\n\n def on_session_created(self, session_context):\n ''' Execute ``on_session_created`` from ``server_lifecycle.py`` (if\n it is defined) when a new session is created.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_created(session_context)\n\n def on_session_destroyed(self, session_context):\n ''' Execute ``on_session_destroyed`` from ``server_lifecycle.py`` (if\n it is defined) when a session is destroyed.\n\n Args:\n session_context (SessionContext) :\n\n '''\n return self._lifecycle_handler.on_session_destroyed(session_context)\n\n def url_path(self):\n ''' The last path component for the basename of the path to the\n configured directory.\n\n '''\n if self.failed:\n return None\n else:\n # TODO should fix invalid URL characters\n return '/' + basename(self._path)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/application/handlers/directory.py"}]} | 2,973 | 230 |
gh_patches_debug_32894 | rasdani/github-patches | git_diff | facebookresearch__hydra-609 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request] Allow @hydra.main() to take a config object and pass it through
# 🚀 Feature Request
Allow @hydra.main() to take a config and pass it through
</issue>
<code>
[start of hydra/main.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import functools
3 from typing import Callable, Optional
4
5 from ._internal.utils import get_args_parser, run_hydra
6 from .types import TaskFunction
7
8
9 def main(
10 config_path: Optional[str] = None,
11 config_name: Optional[str] = None,
12 strict: Optional[bool] = None,
13 ) -> Callable[[TaskFunction], Callable[[], None]]:
14 """
15 :param config_path: the config path, a directory relative to the declaring python file.
16 :param config_name: the name of the config (usually the file name without the .yaml extension)
17 :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an
18 existing key or if the code is accessing a non existent key
19 """
20
21 def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
22 @functools.wraps(task_function)
23 def decorated_main() -> None:
24 run_hydra(
25 args_parser=get_args_parser(),
26 task_function=task_function,
27 config_path=config_path,
28 config_name=config_name,
29 strict=strict,
30 )
31
32 return decorated_main
33
34 return main_decorator
35
[end of hydra/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/main.py b/hydra/main.py
--- a/hydra/main.py
+++ b/hydra/main.py
@@ -1,6 +1,8 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import functools
-from typing import Callable, Optional
+from typing import Any, Callable, Optional
+
+from omegaconf import DictConfig
from ._internal.utils import get_args_parser, run_hydra
from .types import TaskFunction
@@ -10,7 +12,7 @@
config_path: Optional[str] = None,
config_name: Optional[str] = None,
strict: Optional[bool] = None,
-) -> Callable[[TaskFunction], Callable[[], None]]:
+) -> Callable[[TaskFunction], Any]:
"""
:param config_path: the config path, a directory relative to the declaring python file.
:param config_name: the name of the config (usually the file name without the .yaml extension)
@@ -20,14 +22,20 @@
def main_decorator(task_function: TaskFunction) -> Callable[[], None]:
@functools.wraps(task_function)
- def decorated_main() -> None:
- run_hydra(
- args_parser=get_args_parser(),
- task_function=task_function,
- config_path=config_path,
- config_name=config_name,
- strict=strict,
- )
+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:
+ if cfg_passthrough is not None:
+ return task_function(cfg_passthrough)
+ else:
+ args = get_args_parser()
+ # no return value from run_hydra() as it may sometime actually run the task_function
+ # multiple times (--multirun)
+ run_hydra(
+ args_parser=args,
+ task_function=task_function,
+ config_path=config_path,
+ config_name=config_name,
+ strict=strict,
+ )
return decorated_main
| {"golden_diff": "diff --git a/hydra/main.py b/hydra/main.py\n--- a/hydra/main.py\n+++ b/hydra/main.py\n@@ -1,6 +1,8 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import functools\n-from typing import Callable, Optional\n+from typing import Any, Callable, Optional\n+\n+from omegaconf import DictConfig\n \n from ._internal.utils import get_args_parser, run_hydra\n from .types import TaskFunction\n@@ -10,7 +12,7 @@\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n-) -> Callable[[TaskFunction], Callable[[], None]]:\n+) -> Callable[[TaskFunction], Any]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n@@ -20,14 +22,20 @@\n \n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n- def decorated_main() -> None:\n- run_hydra(\n- args_parser=get_args_parser(),\n- task_function=task_function,\n- config_path=config_path,\n- config_name=config_name,\n- strict=strict,\n- )\n+ def decorated_main(cfg_passthrough: Optional[DictConfig] = None) -> Any:\n+ if cfg_passthrough is not None:\n+ return task_function(cfg_passthrough)\n+ else:\n+ args = get_args_parser()\n+ # no return value from run_hydra() as it may sometime actually run the task_function\n+ # multiple times (--multirun)\n+ run_hydra(\n+ args_parser=args,\n+ task_function=task_function,\n+ config_path=config_path,\n+ config_name=config_name,\n+ strict=strict,\n+ )\n \n return decorated_main\n", "issue": "[Feature Request] Allow @hydra.main() to take a config object and pass it through\n# \ud83d\ude80 Feature Request\r\n\r\nAllow @hydra.main() to take a config and pass it through\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport functools\nfrom typing import Callable, Optional\n\nfrom ._internal.utils import get_args_parser, run_hydra\nfrom .types import TaskFunction\n\n\ndef main(\n config_path: Optional[str] = None,\n config_name: Optional[str] = None,\n strict: Optional[bool] = None,\n) -> Callable[[TaskFunction], Callable[[], None]]:\n \"\"\"\n :param config_path: the config path, a directory relative to the declaring python file.\n :param config_name: the name of the config (usually the file name without the .yaml extension)\n :param strict: (Deprecated) strict mode, will throw an error if command line overrides are not changing an\n existing key or if the code is accessing a non existent key\n \"\"\"\n\n def main_decorator(task_function: TaskFunction) -> Callable[[], None]:\n @functools.wraps(task_function)\n def decorated_main() -> None:\n run_hydra(\n args_parser=get_args_parser(),\n task_function=task_function,\n config_path=config_path,\n config_name=config_name,\n strict=strict,\n )\n\n return decorated_main\n\n return main_decorator\n", "path": "hydra/main.py"}]} | 902 | 444 |
gh_patches_debug_44261 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-3308 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Niftisaver doesn't save new voxelspacing
### Discussed in https://github.com/Project-MONAI/MONAI/discussions/3299 as well as https://github.com/Project-MONAI/MONAI/discussions/2029
<div type='discussions-op-text'>
<sup>Originally posted by **Gijz33** November 10, 2021</sup>
Hi,
I am changing the voxel spacing of my CT data using monai.transforms.Spacingd and saving the CT with new spacing into a .nii.gz-file using monai.data.NiftiSaver.
The voxel size transformation is succesful inside the python script. However, opening the .nii.gz file using different software (3D-slicer) the spacing of the old/orginal CT is used. Can someone help me out? A snippet of my code is shown below:
```python
train_images = sorted(glob.glob(os.path.join(data_folder, "*_ct.nii.gz")))
train_labels = sorted(glob.glob(os.path.join(data_folder, "*_seg.nii.gz")))
data_dicts = [ {"image": image_name, "label": label_name} for image_name, label_name in zip(train_images, train_labels) ]
loader = LoadImaged(keys=("image","label"))
data_dict = loader(data_dicts[0])
add_channel = AddChanneld(keys=["image", "label"])
data_dict = add_channel(data_dict)
orientation = Orientationd(keys=["image", "label"], axcodes="LPS")
data_dict = orientation(data_dict)
spacing = Spacingd(keys=["image", "label"], pixdim=(0.8, 0.8, 3.0), mode=("bilinear"))
data_dict = spacing(data_dict)
saver = NiftiSaver(output_dir="./", output_postfix="test" ,output_ext=".nii.gz",mode="nearest")
saver.save(data_dict["image"], data_dict['image_meta_dict'])
```
</div>
</issue>
<code>
[start of monai/data/nifti_saver.py]
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from pathlib import Path
13 from typing import Dict, Optional, Union
14
15 import numpy as np
16 import torch
17
18 from monai.config import DtypeLike
19 from monai.data.nifti_writer import write_nifti
20 from monai.data.utils import create_file_basename
21 from monai.utils import GridSampleMode, GridSamplePadMode
22 from monai.utils import ImageMetaKey as Key
23
24
25 class NiftiSaver:
26 """
27 Save the data as NIfTI file, it can support single data content or a batch of data.
28 Typically, the data can be segmentation predictions, call `save` for single data
29 or call `save_batch` to save a batch of data together.
30 The name of saved file will be `{input_image_name}_{output_postfix}{output_ext}`,
31 where the input image name is extracted from the provided meta data dictionary.
32 If no meta data provided, use index from 0 as the filename prefix.
33
34 Note: image should include channel dimension: [B],C,H,W,[D].
35
36 """
37
38 def __init__(
39 self,
40 output_dir: Union[Path, str] = "./",
41 output_postfix: str = "seg",
42 output_ext: str = ".nii.gz",
43 resample: bool = True,
44 mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,
45 padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,
46 align_corners: bool = False,
47 dtype: DtypeLike = np.float64,
48 output_dtype: DtypeLike = np.float32,
49 squeeze_end_dims: bool = True,
50 data_root_dir: str = "",
51 separate_folder: bool = True,
52 print_log: bool = True,
53 ) -> None:
54 """
55 Args:
56 output_dir: output image directory.
57 output_postfix: a string appended to all output file names.
58 output_ext: output file extension name.
59 resample: whether to resample before saving the data array.
60 mode: {``"bilinear"``, ``"nearest"``}
61 This option is used when ``resample = True``.
62 Interpolation mode to calculate output values. Defaults to ``"bilinear"``.
63 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
64 padding_mode: {``"zeros"``, ``"border"``, ``"reflection"``}
65 This option is used when ``resample = True``.
66 Padding mode for outside grid values. Defaults to ``"border"``.
67 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
68 align_corners: Geometrically, we consider the pixels of the input as squares rather than points.
69 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
70 dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.
71 If None, use the data type of input data.
72 output_dtype: data type for saving data. Defaults to ``np.float32``.
73 squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel
74 has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and
75 then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,
76 image will always be saved as (H,W,D,C).
77 data_root_dir: if not empty, it specifies the beginning parts of the input file's
78 absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from
79 `data_root_dir` to preserve folder structure when saving in case there are files in different
80 folders with the same file names. for example:
81 input_file_name: /foo/bar/test1/image.nii,
82 postfix: seg
83 output_ext: nii.gz
84 output_dir: /output,
85 data_root_dir: /foo/bar,
86 output will be: /output/test1/image/image_seg.nii.gz
87 separate_folder: whether to save every file in a separate folder, for example: if input filename is
88 `image.nii`, postfix is `seg` and folder_path is `output`, if `True`, save as:
89 `output/image/image_seg.nii`, if `False`, save as `output/image_seg.nii`. default to `True`.
90 print_log: whether to print log about the saved NIfTI file path, etc. default to `True`.
91
92 """
93 self.output_dir = output_dir
94 self.output_postfix = output_postfix
95 self.output_ext = output_ext
96 self.resample = resample
97 self.mode: GridSampleMode = GridSampleMode(mode)
98 self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)
99 self.align_corners = align_corners
100 self.dtype = dtype
101 self.output_dtype = output_dtype
102 self._data_index = 0
103 self.squeeze_end_dims = squeeze_end_dims
104 self.data_root_dir = data_root_dir
105 self.separate_folder = separate_folder
106 self.print_log = print_log
107
108 def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
109 """
110 Save data into a Nifti file.
111 The meta_data could optionally have the following keys:
112
113 - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.
114 - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.
115 - ``'affine'`` -- for data output affine, defaulting to an identity matrix.
116 - ``'spatial_shape'`` -- for data output shape.
117 - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.
118
119 When meta_data is specified, the saver will try to resample batch data from the space
120 defined by "affine" to the space defined by "original_affine".
121
122 If meta_data is None, use the default index (starting from 0) as the filename.
123
124 Args:
125 data: target data content that to be saved as a NIfTI format file.
126 Assuming the data shape starts with a channel dimension and followed by spatial dimensions.
127 meta_data: the meta data information corresponding to the data.
128
129 See Also
130 :py:meth:`monai.data.nifti_writer.write_nifti`
131 """
132 filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)
133 self._data_index += 1
134 original_affine = meta_data.get("original_affine", None) if meta_data else None
135 affine = meta_data.get("affine", None) if meta_data else None
136 spatial_shape = meta_data.get("spatial_shape", None) if meta_data else None
137 patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None
138
139 if isinstance(data, torch.Tensor):
140 data = data.detach().cpu().numpy()
141
142 path = create_file_basename(
143 postfix=self.output_postfix,
144 input_file_name=filename,
145 folder_path=self.output_dir,
146 data_root_dir=self.data_root_dir,
147 separate_folder=self.separate_folder,
148 patch_index=patch_index,
149 )
150 path = f"{path}{self.output_ext}"
151 # change data shape to be (channel, h, w, d)
152 while len(data.shape) < 4:
153 data = np.expand_dims(data, -1)
154 # change data to "channel last" format and write to nifti format file
155 data = np.moveaxis(np.asarray(data), 0, -1)
156
157 # if desired, remove trailing singleton dimensions
158 if self.squeeze_end_dims:
159 while data.shape[-1] == 1:
160 data = np.squeeze(data, -1)
161
162 write_nifti(
163 data,
164 file_name=path,
165 affine=affine,
166 target_affine=original_affine,
167 resample=self.resample,
168 output_spatial_shape=spatial_shape,
169 mode=self.mode,
170 padding_mode=self.padding_mode,
171 align_corners=self.align_corners,
172 dtype=self.dtype,
173 output_dtype=self.output_dtype,
174 )
175
176 if self.print_log:
177 print(f"file written: {path}.")
178
179 def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
180 """
181 Save a batch of data into Nifti format files.
182
183 Spatially it supports up to three dimensions, that is, H, HW, HWD for
184 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).
185
186 When saving multiple time steps or multiple channels `batch_data`,
187 time and/or modality axes should be appended after the batch dimensions.
188 For example, the shape of a batch of 2D eight-class
189 segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;
190 in this case each item in the batch will be saved as (64, 64, 1, 8)
191 NIfTI file (the third dimension is reserved as a spatial dimension).
192
193 Args:
194 batch_data: target batch data content that save into NIfTI format.
195 meta_data: every key-value in the meta_data is corresponding to a batch of data.
196
197 """
198 for i, data in enumerate(batch_data): # save a batch of files
199 self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)
200
[end of monai/data/nifti_saver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/monai/data/nifti_saver.py b/monai/data/nifti_saver.py
--- a/monai/data/nifti_saver.py
+++ b/monai/data/nifti_saver.py
@@ -56,7 +56,8 @@
output_dir: output image directory.
output_postfix: a string appended to all output file names.
output_ext: output file extension name.
- resample: whether to resample before saving the data array.
+ resample: whether to convert the data array to it's original coordinate system
+ based on `original_affine` in the `meta_data`.
mode: {``"bilinear"``, ``"nearest"``}
This option is used when ``resample = True``.
Interpolation mode to calculate output values. Defaults to ``"bilinear"``.
@@ -107,7 +108,7 @@
def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
"""
- Save data into a Nifti file.
+ Save data into a NIfTI file.
The meta_data could optionally have the following keys:
- ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.
@@ -116,7 +117,7 @@
- ``'spatial_shape'`` -- for data output shape.
- ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.
- When meta_data is specified, the saver will try to resample batch data from the space
+ When meta_data is specified and `resample=True`, the saver will try to resample batch data from the space
defined by "affine" to the space defined by "original_affine".
If meta_data is None, use the default index (starting from 0) as the filename.
@@ -131,7 +132,7 @@
"""
filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)
self._data_index += 1
- original_affine = meta_data.get("original_affine", None) if meta_data else None
+ original_affine = meta_data.get("original_affine", None) if meta_data and self.resample else None
affine = meta_data.get("affine", None) if meta_data else None
spatial_shape = meta_data.get("spatial_shape", None) if meta_data else None
patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None
@@ -151,7 +152,7 @@
# change data shape to be (channel, h, w, d)
while len(data.shape) < 4:
data = np.expand_dims(data, -1)
- # change data to "channel last" format and write to nifti format file
+ # change data to "channel last" format and write to NIfTI format file
data = np.moveaxis(np.asarray(data), 0, -1)
# if desired, remove trailing singleton dimensions
@@ -164,7 +165,7 @@
file_name=path,
affine=affine,
target_affine=original_affine,
- resample=self.resample,
+ resample=True,
output_spatial_shape=spatial_shape,
mode=self.mode,
padding_mode=self.padding_mode,
@@ -178,7 +179,7 @@
def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
"""
- Save a batch of data into Nifti format files.
+ Save a batch of data into NIfTI format files.
Spatially it supports up to three dimensions, that is, H, HW, HWD for
1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).
| {"golden_diff": "diff --git a/monai/data/nifti_saver.py b/monai/data/nifti_saver.py\n--- a/monai/data/nifti_saver.py\n+++ b/monai/data/nifti_saver.py\n@@ -56,7 +56,8 @@\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n- resample: whether to resample before saving the data array.\n+ resample: whether to convert the data array to it's original coordinate system\n+ based on `original_affine` in the `meta_data`.\n mode: {``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n@@ -107,7 +108,7 @@\n \n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n- Save data into a Nifti file.\n+ Save data into a NIfTI file.\n The meta_data could optionally have the following keys:\n \n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n@@ -116,7 +117,7 @@\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n \n- When meta_data is specified, the saver will try to resample batch data from the space\n+ When meta_data is specified and `resample=True`, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n \n If meta_data is None, use the default index (starting from 0) as the filename.\n@@ -131,7 +132,7 @@\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n- original_affine = meta_data.get(\"original_affine\", None) if meta_data else None\n+ original_affine = meta_data.get(\"original_affine\", None) if meta_data and self.resample else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n@@ -151,7 +152,7 @@\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n- # change data to \"channel last\" format and write to nifti format file\n+ # change data to \"channel last\" format and write to NIfTI format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n \n # if desired, remove trailing singleton dimensions\n@@ -164,7 +165,7 @@\n file_name=path,\n affine=affine,\n target_affine=original_affine,\n- resample=self.resample,\n+ resample=True,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n@@ -178,7 +179,7 @@\n \n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n- Save a batch of data into Nifti format files.\n+ Save a batch of data into NIfTI format files.\n \n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n", "issue": "Niftisaver doesn't save new voxelspacing\n### Discussed in https://github.com/Project-MONAI/MONAI/discussions/3299 as well as https://github.com/Project-MONAI/MONAI/discussions/2029\r\n\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **Gijz33** November 10, 2021</sup>\r\nHi,\r\n\r\nI am changing the voxel spacing of my CT data using monai.transforms.Spacingd and saving the CT with new spacing into a 
.nii.gz-file using monai.data.NiftiSaver.\r\nThe voxel size transformation is succesful inside the python script. However, opening the .nii.gz file using different software (3D-slicer) the spacing of the old/orginal CT is used. Can someone help me out? A snippet of my code is shown below:\r\n\r\n\r\n```python\r\ntrain_images = sorted(glob.glob(os.path.join(data_folder, \"*_ct.nii.gz\")))\r\ntrain_labels = sorted(glob.glob(os.path.join(data_folder, \"*_seg.nii.gz\")))\r\n\r\ndata_dicts = [ {\"image\": image_name, \"label\": label_name} for image_name, label_name in zip(train_images, train_labels) ]\r\n\r\nloader = LoadImaged(keys=(\"image\",\"label\"))\r\ndata_dict = loader(data_dicts[0])\r\n \r\nadd_channel = AddChanneld(keys=[\"image\", \"label\"])\r\ndata_dict = add_channel(data_dict)\r\n\r\norientation = Orientationd(keys=[\"image\", \"label\"], axcodes=\"LPS\")\r\ndata_dict = orientation(data_dict)\r\n\r\nspacing = Spacingd(keys=[\"image\", \"label\"], pixdim=(0.8, 0.8, 3.0), mode=(\"bilinear\")) \r\ndata_dict = spacing(data_dict)\r\n\r\n\r\nsaver = NiftiSaver(output_dir=\"./\", output_postfix=\"test\" ,output_ext=\".nii.gz\",mode=\"nearest\")\r\nsaver.save(data_dict[\"image\"], data_dict['image_meta_dict'])\r\n```\r\n\r\n\r\n\r\n</div>\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom monai.config import DtypeLike\nfrom monai.data.nifti_writer import write_nifti\nfrom monai.data.utils import create_file_basename\nfrom monai.utils import GridSampleMode, GridSamplePadMode\nfrom monai.utils import ImageMetaKey as Key\n\n\nclass NiftiSaver:\n \"\"\"\n Save the data as NIfTI file, it can support single data content or a batch of data.\n Typically, the data can be segmentation predictions, call `save` for single data\n or call `save_batch` to save a batch of data together.\n The name of saved file will be `{input_image_name}_{output_postfix}{output_ext}`,\n where the input image name is extracted from the provided meta data dictionary.\n If no meta data provided, use index from 0 as the filename prefix.\n\n Note: image should include channel dimension: [B],C,H,W,[D].\n\n \"\"\"\n\n def __init__(\n self,\n output_dir: Union[Path, str] = \"./\",\n output_postfix: str = \"seg\",\n output_ext: str = \".nii.gz\",\n resample: bool = True,\n mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,\n padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,\n align_corners: bool = False,\n dtype: DtypeLike = np.float64,\n output_dtype: DtypeLike = np.float32,\n squeeze_end_dims: bool = True,\n data_root_dir: str = \"\",\n separate_folder: bool = True,\n print_log: bool = True,\n ) -> None:\n \"\"\"\n Args:\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n resample: whether to resample before saving the data 
array.\n mode: {``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n padding_mode: {``\"zeros\"``, ``\"border\"``, ``\"reflection\"``}\n This option is used when ``resample = True``.\n Padding mode for outside grid values. Defaults to ``\"border\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n align_corners: Geometrically, we consider the pixels of the input as squares rather than points.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.\n If None, use the data type of input data.\n output_dtype: data type for saving data. Defaults to ``np.float32``.\n squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel\n has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and\n then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,\n image will always be saved as (H,W,D,C).\n data_root_dir: if not empty, it specifies the beginning parts of the input file's\n absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from\n `data_root_dir` to preserve folder structure when saving in case there are files in different\n folders with the same file names. for example:\n input_file_name: /foo/bar/test1/image.nii,\n postfix: seg\n output_ext: nii.gz\n output_dir: /output,\n data_root_dir: /foo/bar,\n output will be: /output/test1/image/image_seg.nii.gz\n separate_folder: whether to save every file in a separate folder, for example: if input filename is\n `image.nii`, postfix is `seg` and folder_path is `output`, if `True`, save as:\n `output/image/image_seg.nii`, if `False`, save as `output/image_seg.nii`. default to `True`.\n print_log: whether to print log about the saved NIfTI file path, etc. 
default to `True`.\n\n \"\"\"\n self.output_dir = output_dir\n self.output_postfix = output_postfix\n self.output_ext = output_ext\n self.resample = resample\n self.mode: GridSampleMode = GridSampleMode(mode)\n self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)\n self.align_corners = align_corners\n self.dtype = dtype\n self.output_dtype = output_dtype\n self._data_index = 0\n self.squeeze_end_dims = squeeze_end_dims\n self.data_root_dir = data_root_dir\n self.separate_folder = separate_folder\n self.print_log = print_log\n\n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save data into a Nifti file.\n The meta_data could optionally have the following keys:\n\n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.\n - ``'affine'`` -- for data output affine, defaulting to an identity matrix.\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n\n When meta_data is specified, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n\n If meta_data is None, use the default index (starting from 0) as the filename.\n\n Args:\n data: target data content that to be saved as a NIfTI format file.\n Assuming the data shape starts with a channel dimension and followed by spatial dimensions.\n meta_data: the meta data information corresponding to the data.\n\n See Also\n :py:meth:`monai.data.nifti_writer.write_nifti`\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n original_affine = meta_data.get(\"original_affine\", None) if meta_data else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n\n if isinstance(data, torch.Tensor):\n data = data.detach().cpu().numpy()\n\n path = create_file_basename(\n postfix=self.output_postfix,\n input_file_name=filename,\n folder_path=self.output_dir,\n data_root_dir=self.data_root_dir,\n separate_folder=self.separate_folder,\n patch_index=patch_index,\n )\n path = f\"{path}{self.output_ext}\"\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n # change data to \"channel last\" format and write to nifti format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n\n # if desired, remove trailing singleton dimensions\n if self.squeeze_end_dims:\n while data.shape[-1] == 1:\n data = np.squeeze(data, -1)\n\n write_nifti(\n data,\n file_name=path,\n affine=affine,\n target_affine=original_affine,\n resample=self.resample,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n align_corners=self.align_corners,\n dtype=self.dtype,\n output_dtype=self.output_dtype,\n )\n\n if self.print_log:\n print(f\"file written: {path}.\")\n\n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save a batch of data into Nifti format files.\n\n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n\n When saving multiple 
time steps or multiple channels `batch_data`,\n time and/or modality axes should be appended after the batch dimensions.\n For example, the shape of a batch of 2D eight-class\n segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;\n in this case each item in the batch will be saved as (64, 64, 1, 8)\n NIfTI file (the third dimension is reserved as a spatial dimension).\n\n Args:\n batch_data: target batch data content that save into NIfTI format.\n meta_data: every key-value in the meta_data is corresponding to a batch of data.\n\n \"\"\"\n for i, data in enumerate(batch_data): # save a batch of files\n self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)\n", "path": "monai/data/nifti_saver.py"}]} | 3,720 | 899 |
gh_patches_debug_41639 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-477 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Undesirable record grouping behaviours
## Description
Record grouping has a set of behaviours that are not desirable.
* It considers order_by, which leads to the formation of an incorrect query on the backend if we don't group by the sorted column.

* It considers limit and offset. These apply to the grouped result itself and are unrelated to the record limit & offset.


## Expected behavior
* It should not consider order_by.
* It should not consider limit and offset.
We could also probably have a dedicated API for this. It could also obtain the values for columns to filter the grouped results. Having it as part of the records API makes less sense, since the group count is not a reflection of the record results.
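As a rough sketch of the behaviour described above (the helper name is hypothetical, this is not actual Mathesar code), the group counts could be computed from the filters alone, ignoring order_by, limit, and offset:

```python
from sqlalchemy import select, func
from sqlalchemy_filters import apply_filters


def group_counts_sketch(table, engine, group_by, filters=None):
    # Base query: apply only the filters -- no order_by, no limit, no offset.
    query = select(table)
    if filters:
        query = apply_filters(query, filters)
    subquery = query.subquery()

    # Group the filtered rows and count each group.
    columns = [subquery.columns[col] for col in group_by]
    count_query = select(*columns, func.count()).group_by(*columns)
    with engine.begin() as conn:
        return {tuple(row[:-1]): row[-1] for row in conn.execute(count_query).fetchall()}
```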
</issue>
<code>
[start of db/records.py]
1 import logging
2 from sqlalchemy import delete, select, Column, func
3 from sqlalchemy.inspection import inspect
4 from sqlalchemy_filters import apply_filters, apply_sort
5 from sqlalchemy_filters.exceptions import FieldNotFound
6
7 from db.constants import ID
8
9 logger = logging.getLogger(__name__)
10
11
12 # Grouping exceptions follow the sqlalchemy_filters exceptions patterns
13 class BadGroupFormat(Exception):
14 pass
15
16
17 class GroupFieldNotFound(FieldNotFound):
18 pass
19
20
21 def _get_primary_key_column(table):
22 primary_key_list = list(inspect(table).primary_key)
23 # We do not support getting by composite primary keys
24 assert len(primary_key_list) == 1
25 return primary_key_list[0]
26
27
28 def _create_col_objects(table, column_list):
29 return [
30 table.columns[col] if type(col) == str else col
31 for col in column_list
32 ]
33
34
35 def get_record(table, engine, id_value):
36 primary_key_column = _get_primary_key_column(table)
37 query = select(table).where(primary_key_column == id_value)
38 with engine.begin() as conn:
39 result = conn.execute(query).fetchall()
40 assert len(result) <= 1
41 return result[0] if result else None
42
43
44 def get_records(
45 table, engine, limit=None, offset=None, order_by=[], filters=[],
46 ):
47 """
48 Returns records from a table.
49
50 Args:
51 table: SQLAlchemy table object
52 engine: SQLAlchemy engine object
53 limit: int, gives number of rows to return
54 offset: int, gives number of rows to skip
55 order_by: list of dictionaries, where each dictionary has a 'field' and
56 'direction' field.
57 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
58 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
59 field, in addition to an 'value' field if appropriate.
60 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
61 """
62 query = select(table).limit(limit).offset(offset)
63 if order_by is not None:
64 query = apply_sort(query, order_by)
65 if filters is not None:
66 query = apply_filters(query, filters)
67 with engine.begin() as conn:
68 return conn.execute(query).fetchall()
69
70
71 def get_group_counts(
72 table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],
73 ):
74 """
75 Returns counts by specified groupings
76
77 Args:
78 table: SQLAlchemy table object
79 engine: SQLAlchemy engine object
80 limit: int, gives number of rows to return
81 offset: int, gives number of rows to skip
82 group_by: list or tuple of column names or column objects to group by
83 order_by: list of dictionaries, where each dictionary has a 'field' and
84 'direction' field.
85 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
86 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
87 field, in addition to an 'value' field if appropriate.
88 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
89 """
90 if type(group_by) not in (tuple, list):
91 raise BadGroupFormat(f"Group spec {group_by} must be list or tuple.")
92 for field in group_by:
93 if type(field) not in (str, Column):
94 raise BadGroupFormat(f"Group field {field} must be a string or Column.")
95 field_name = field if type(field) == str else field.name
96 if field_name not in table.c:
97 raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
98
99 query = (
100 select(table)
101 .limit(limit)
102 .offset(offset)
103 )
104 if order_by is not None:
105 query = apply_sort(query, order_by)
106 if filters is not None:
107 query = apply_filters(query, filters)
108 subquery = query.subquery()
109
110 group_by = [
111 subquery.columns[col] if type(col) == str else subquery.columns[col.name]
112 for col in group_by
113 ]
114 query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)
115 with engine.begin() as conn:
116 records = conn.execute(query).fetchall()
117
118 # Last field is the count, preceding fields are the group by fields
119 counts = {
120 (*record[:-1],): record[-1]
121 for record in records
122 }
123 return counts
124
125
126 def get_distinct_tuple_values(
127 column_list, engine, table=None, limit=None, offset=None,
128 ):
129 """
130 Returns distinct tuples from a given list of columns.
131
132 Args:
133 column_list: list of column names or SQLAlchemy column objects
134 engine: SQLAlchemy engine object
135 table: SQLAlchemy table object
136 limit: int, gives number of rows to return
137 offset: int, gives number of rows to skip
138
139 If no table is given, the column_list must consist entirely of
140 SQLAlchemy column objects associated with a table.
141 """
142 if table is not None:
143 column_objects = _create_col_objects(table, column_list)
144 else:
145 column_objects = column_list
146 try:
147 assert all([type(col) == Column for col in column_objects])
148 except AssertionError as e:
149 logger.error("All columns must be str or sqlalchemy.Column type")
150 raise e
151
152 query = (
153 select(*column_objects)
154 .distinct()
155 .limit(limit)
156 .offset(offset)
157 )
158 with engine.begin() as conn:
159 res = conn.execute(query).fetchall()
160 return [tuple(zip(column_objects, row)) for row in res]
161
162
163 def distinct_tuples_to_filter(distinct_tuples):
164 filters = []
165 for col, value in distinct_tuples:
166 filters.append({
167 "field": col,
168 "op": "==",
169 "value": value,
170 })
171 return filters
172
173
174 def create_record_or_records(table, engine, record_data):
175 """
176 record_data can be a dictionary, tuple, or list of dictionaries or tuples.
177 if record_data is a list, it creates multiple records.
178 """
179 id_value = None
180 with engine.begin() as connection:
181 result = connection.execute(table.insert(), record_data)
182 # If there was only a single record created, return the record.
183 if result.rowcount == 1:
184 # We need to manually commit insertion so that we can retrieve the record.
185 connection.commit()
186 id_value = result.inserted_primary_key[0]
187 if id_value is not None:
188 return get_record(table, engine, id_value)
189 # Do not return any records if multiple rows were added.
190 return None
191
192
193 def create_records_from_csv(
194 table,
195 engine,
196 csv_filename,
197 column_names,
198 header,
199 delimiter=None,
200 escape=None,
201 quote=None,
202 ):
203 with open(csv_filename, 'rb') as csv_file:
204 with engine.begin() as conn:
205 cursor = conn.connection.cursor()
206 relation = '.'.join('"{}"'.format(part) for part in (table.schema, table.name))
207 formatted_columns = '({})'.format(','.join([f'"{column_name}"' for column_name in column_names]))
208
209 copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'
210 if header:
211 copy_sql += " HEADER"
212 if delimiter:
213 copy_sql += f" DELIMITER E'{delimiter}'"
214 if escape:
215 copy_sql += f" ESCAPE '{escape}'"
216 if quote:
217 if quote == "'":
218 quote = "''"
219 copy_sql += f" QUOTE '{quote}'"
220
221 cursor.copy_expert(copy_sql, csv_file)
222
223
224 def update_record(table, engine, id_value, record_data):
225 primary_key_column = _get_primary_key_column(table)
226 with engine.begin() as connection:
227 connection.execute(
228 table.update().where(primary_key_column == id_value).values(record_data)
229 )
230 return get_record(table, engine, id_value)
231
232
233 def delete_record(table, engine, id_value):
234 primary_key_column = _get_primary_key_column(table)
235 query = delete(table).where(primary_key_column == id_value)
236 with engine.begin() as conn:
237 return conn.execute(query)
238
[end of db/records.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/records.py b/db/records.py
--- a/db/records.py
+++ b/db/records.py
@@ -4,7 +4,6 @@
from sqlalchemy_filters import apply_filters, apply_sort
from sqlalchemy_filters.exceptions import FieldNotFound
-from db.constants import ID
logger = logging.getLogger(__name__)
@@ -32,13 +31,27 @@
]
+def _get_query(table, limit, offset, order_by, filters):
+ query = select(table).limit(limit).offset(offset)
+ if order_by is not None:
+ query = apply_sort(query, order_by)
+ if filters is not None:
+ query = apply_filters(query, filters)
+ return query
+
+
+def _execute_query(query, engine):
+ with engine.begin() as conn:
+ records = conn.execute(query).fetchall()
+ return records
+
+
def get_record(table, engine, id_value):
primary_key_column = _get_primary_key_column(table)
query = select(table).where(primary_key_column == id_value)
- with engine.begin() as conn:
- result = conn.execute(query).fetchall()
- assert len(result) <= 1
- return result[0] if result else None
+ result = _execute_query(query, engine)
+ assert len(result) <= 1
+ return result[0] if result else None
def get_records(
@@ -59,13 +72,8 @@
field, in addition to an 'value' field if appropriate.
See: https://github.com/centerofci/sqlalchemy-filters#filters-format
"""
- query = select(table).limit(limit).offset(offset)
- if order_by is not None:
- query = apply_sort(query, order_by)
- if filters is not None:
- query = apply_filters(query, filters)
- with engine.begin() as conn:
- return conn.execute(query).fetchall()
+ query = _get_query(table, limit, offset, order_by, filters)
+ return _execute_query(query, engine)
def get_group_counts(
@@ -96,24 +104,17 @@
if field_name not in table.c:
raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
- query = (
- select(table)
- .limit(limit)
- .offset(offset)
- )
- if order_by is not None:
- query = apply_sort(query, order_by)
- if filters is not None:
- query = apply_filters(query, filters)
- subquery = query.subquery()
+ # Get the list of groups that we should count.
+ # We're considering limit and offset here so that we only count relevant groups
+ relevant_groups_query = _get_query(table, limit, offset, order_by, filters)
+ subquery = relevant_groups_query.subquery()
- group_by = [
+ columns = [
subquery.columns[col] if type(col) == str else subquery.columns[col.name]
for col in group_by
]
- query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)
- with engine.begin() as conn:
- records = conn.execute(query).fetchall()
+ count_query = select(*columns, func.count(columns[0])).group_by(*columns)
+ records = _execute_query(count_query, engine)
# Last field is the count, preceding fields are the group by fields
counts = {
@@ -155,9 +156,8 @@
.limit(limit)
.offset(offset)
)
- with engine.begin() as conn:
- res = conn.execute(query).fetchall()
- return [tuple(zip(column_objects, row)) for row in res]
+ result = _execute_query(query, engine)
+ return [tuple(zip(column_objects, row)) for row in result]
def distinct_tuples_to_filter(distinct_tuples):
| {"golden_diff": "diff --git a/db/records.py b/db/records.py\n--- a/db/records.py\n+++ b/db/records.py\n@@ -4,7 +4,6 @@\n from sqlalchemy_filters import apply_filters, apply_sort\n from sqlalchemy_filters.exceptions import FieldNotFound\n \n-from db.constants import ID\n \n logger = logging.getLogger(__name__)\n \n@@ -32,13 +31,27 @@\n ]\n \n \n+def _get_query(table, limit, offset, order_by, filters):\n+ query = select(table).limit(limit).offset(offset)\n+ if order_by is not None:\n+ query = apply_sort(query, order_by)\n+ if filters is not None:\n+ query = apply_filters(query, filters)\n+ return query\n+\n+\n+def _execute_query(query, engine):\n+ with engine.begin() as conn:\n+ records = conn.execute(query).fetchall()\n+ return records\n+\n+\n def get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n- with engine.begin() as conn:\n- result = conn.execute(query).fetchall()\n- assert len(result) <= 1\n- return result[0] if result else None\n+ result = _execute_query(query, engine)\n+ assert len(result) <= 1\n+ return result[0] if result else None\n \n \n def get_records(\n@@ -59,13 +72,8 @@\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n- query = select(table).limit(limit).offset(offset)\n- if order_by is not None:\n- query = apply_sort(query, order_by)\n- if filters is not None:\n- query = apply_filters(query, filters)\n- with engine.begin() as conn:\n- return conn.execute(query).fetchall()\n+ query = _get_query(table, limit, offset, order_by, filters)\n+ return _execute_query(query, engine)\n \n \n def get_group_counts(\n@@ -96,24 +104,17 @@\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n \n- query = (\n- select(table)\n- .limit(limit)\n- .offset(offset)\n- )\n- if order_by is not None:\n- query = apply_sort(query, order_by)\n- if filters is not None:\n- query = apply_filters(query, filters)\n- subquery = query.subquery()\n+ # Get the list of groups that we should count.\n+ # We're considering limit and offset here so that we only count relevant groups\n+ relevant_groups_query = _get_query(table, limit, offset, order_by, filters)\n+ subquery = relevant_groups_query.subquery()\n \n- group_by = [\n+ columns = [\n subquery.columns[col] if type(col) == str else subquery.columns[col.name]\n for col in group_by\n ]\n- query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)\n- with engine.begin() as conn:\n- records = conn.execute(query).fetchall()\n+ count_query = select(*columns, func.count(columns[0])).group_by(*columns)\n+ records = _execute_query(count_query, engine)\n \n # Last field is the count, preceding fields are the group by fields\n counts = {\n@@ -155,9 +156,8 @@\n .limit(limit)\n .offset(offset)\n )\n- with engine.begin() as conn:\n- res = conn.execute(query).fetchall()\n- return [tuple(zip(column_objects, row)) for row in res]\n+ result = _execute_query(query, engine)\n+ return [tuple(zip(column_objects, row)) for row in result]\n \n \n def distinct_tuples_to_filter(distinct_tuples):\n", "issue": "Undesirable record grouping behaviours\n## Description\r\nRecord grouping has a set of behaviours, that are not desirable.\r\n* It considers order_by, which leads to formation of incorrect query on the backend, if we don't group by the sorted column.\r\n\r\n\r\n* It considers limit and offset. 
These apply on the grouped result itself, and is unrelated to the record limit & offset.\r\n\r\n\r\n\r\n\r\n## Expected behavior\r\n* It should not consider order_by.\r\n* It should not consider limit and offset.\r\n\r\nWe could also probably have a dedicated API for this. It could also obtain the values for columns, to filter the grouped results. Having it as part of records API makes less sense, since the group count is not a reflection of the record results.\n", "before_files": [{"content": "import logging\nfrom sqlalchemy import delete, select, Column, func\nfrom sqlalchemy.inspection import inspect\nfrom sqlalchemy_filters import apply_filters, apply_sort\nfrom sqlalchemy_filters.exceptions import FieldNotFound\n\nfrom db.constants import ID\n\nlogger = logging.getLogger(__name__)\n\n\n# Grouping exceptions follow the sqlalchemy_filters exceptions patterns\nclass BadGroupFormat(Exception):\n pass\n\n\nclass GroupFieldNotFound(FieldNotFound):\n pass\n\n\ndef _get_primary_key_column(table):\n primary_key_list = list(inspect(table).primary_key)\n # We do not support getting by composite primary keys\n assert len(primary_key_list) == 1\n return primary_key_list[0]\n\n\ndef _create_col_objects(table, column_list):\n return [\n table.columns[col] if type(col) == str else col\n for col in column_list\n ]\n\n\ndef get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n result = conn.execute(query).fetchall()\n assert len(result) <= 1\n return result[0] if result else None\n\n\ndef get_records(\n table, engine, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns records from a table.\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n query = select(table).limit(limit).offset(offset)\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n with engine.begin() as conn:\n return conn.execute(query).fetchall()\n\n\ndef get_group_counts(\n table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns counts by specified groupings\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n group_by: list or tuple of column names or column objects to group by\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n if type(group_by) not in (tuple, list):\n raise BadGroupFormat(f\"Group spec {group_by} must be list or tuple.\")\n for field in group_by:\n if type(field) not in (str, Column):\n raise BadGroupFormat(f\"Group field 
{field} must be a string or Column.\")\n field_name = field if type(field) == str else field.name\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n\n query = (\n select(table)\n .limit(limit)\n .offset(offset)\n )\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n subquery = query.subquery()\n\n group_by = [\n subquery.columns[col] if type(col) == str else subquery.columns[col.name]\n for col in group_by\n ]\n query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)\n with engine.begin() as conn:\n records = conn.execute(query).fetchall()\n\n # Last field is the count, preceding fields are the group by fields\n counts = {\n (*record[:-1],): record[-1]\n for record in records\n }\n return counts\n\n\ndef get_distinct_tuple_values(\n column_list, engine, table=None, limit=None, offset=None,\n):\n \"\"\"\n Returns distinct tuples from a given list of columns.\n\n Args:\n column_list: list of column names or SQLAlchemy column objects\n engine: SQLAlchemy engine object\n table: SQLAlchemy table object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n\n If no table is given, the column_list must consist entirely of\n SQLAlchemy column objects associated with a table.\n \"\"\"\n if table is not None:\n column_objects = _create_col_objects(table, column_list)\n else:\n column_objects = column_list\n try:\n assert all([type(col) == Column for col in column_objects])\n except AssertionError as e:\n logger.error(\"All columns must be str or sqlalchemy.Column type\")\n raise e\n\n query = (\n select(*column_objects)\n .distinct()\n .limit(limit)\n .offset(offset)\n )\n with engine.begin() as conn:\n res = conn.execute(query).fetchall()\n return [tuple(zip(column_objects, row)) for row in res]\n\n\ndef distinct_tuples_to_filter(distinct_tuples):\n filters = []\n for col, value in distinct_tuples:\n filters.append({\n \"field\": col,\n \"op\": \"==\",\n \"value\": value,\n })\n return filters\n\n\ndef create_record_or_records(table, engine, record_data):\n \"\"\"\n record_data can be a dictionary, tuple, or list of dictionaries or tuples.\n if record_data is a list, it creates multiple records.\n \"\"\"\n id_value = None\n with engine.begin() as connection:\n result = connection.execute(table.insert(), record_data)\n # If there was only a single record created, return the record.\n if result.rowcount == 1:\n # We need to manually commit insertion so that we can retrieve the record.\n connection.commit()\n id_value = result.inserted_primary_key[0]\n if id_value is not None:\n return get_record(table, engine, id_value)\n # Do not return any records if multiple rows were added.\n return None\n\n\ndef create_records_from_csv(\n table,\n engine,\n csv_filename,\n column_names,\n header,\n delimiter=None,\n escape=None,\n quote=None,\n):\n with open(csv_filename, 'rb') as csv_file:\n with engine.begin() as conn:\n cursor = conn.connection.cursor()\n relation = '.'.join('\"{}\"'.format(part) for part in (table.schema, table.name))\n formatted_columns = '({})'.format(','.join([f'\"{column_name}\"' for column_name in column_names]))\n\n copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'\n if header:\n copy_sql += \" HEADER\"\n if delimiter:\n copy_sql += f\" DELIMITER E'{delimiter}'\"\n if escape:\n copy_sql += f\" ESCAPE '{escape}'\"\n if quote:\n if quote == \"'\":\n quote = \"''\"\n copy_sql += f\" 
QUOTE '{quote}'\"\n\n cursor.copy_expert(copy_sql, csv_file)\n\n\ndef update_record(table, engine, id_value, record_data):\n primary_key_column = _get_primary_key_column(table)\n with engine.begin() as connection:\n connection.execute(\n table.update().where(primary_key_column == id_value).values(record_data)\n )\n return get_record(table, engine, id_value)\n\n\ndef delete_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = delete(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n return conn.execute(query)\n", "path": "db/records.py"}]} | 3,338 | 877 |
gh_patches_debug_12083 | rasdani/github-patches | git_diff | huggingface__text-generation-inference-609 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't load local flan-small models due to weight conversion failure
### System Info
OS Version:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
8 A100 GPUs
Using the latest text-generation-inference docker version.
I've run fine-tuning on a [Flan-T5-Small](https://huggingface.co/google/flan-t5-small) model and saved the checkpoint in my local directory. I've stored this local model checkpoint in my data2 volume and run the command as follows:
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data2 ghcr.io/huggingface/text-generation-inference:0.9 --model-id /data2/checkpoint-20 --num-shard $num_shard
But I run into errors when converting the weights, as mentioned below.
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Run the docker command above.
I get this error now:
2023-07-12T05:45:31.707548Z INFO text_generation_launcher: Args { model_id: "/data2/checkpoint-20", revision: None, sharded: None, num_shard: Some(2), quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: "0341f92fe465", port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }
2023-07-12T05:45:31.707602Z INFO text_generation_launcher: Sharding model on 2 processes
2023-07-12T05:45:31.707781Z INFO text_generation_launcher: Starting download process.
2023-07-12T05:45:33.261253Z WARN download: text_generation_launcher: No safetensors weights found for model /data2/checkpoint-20 at revision None. Converting PyTorch weights to safetensors.
2023-07-12T05:45:33.711218Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):
File "/opt/conda/bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 164, in download_weights
utils.convert_files(local_pt_files, local_st_files)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py", line 53, in convert_files
convert_file(pt_file, sf_file)
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py", line 21, in convert_file
if "state_dict" in loaded:
TypeError: argument of type 'Seq2SeqTrainingArguments' is not iterable
Error: DownloadError
### Expected behavior
I would expect the local model to load as do the models from the hugging-face library. Appreciate any help!
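For what it's worth, my best guess at the root cause (only a guess -- the file name below is simply what the Trainer writes into my checkpoint folder, I have not traced the server code): the checkpoint directory contains `training_args.bin`, the converter appears to pick it up along with the weight files, and `torch.load` then returns a `Seq2SeqTrainingArguments` object rather than a state dict:
```python
import torch

# training_args.bin is written by transformers' Trainer next to the weight files
loaded = torch.load("checkpoint-20/training_args.bin", map_location="cpu")
print(type(loaded))     # a Seq2SeqTrainingArguments instance, not a dict of tensors
"state_dict" in loaded  # TypeError: argument of type 'Seq2SeqTrainingArguments' is not iterable
```
Skipping non-weight files such as `training_args.bin` during conversion would presumably avoid this.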
</issue>
<code>
[start of server/text_generation_server/utils/convert.py]
1 import datetime
2 import torch
3 import os
4
5 from loguru import logger
6 from pathlib import Path
7 from safetensors.torch import save_file, load_file, _find_shared_tensors, _is_complete
8 from typing import List, Dict
9 from collections import defaultdict
10
11
12 def _remove_duplicate_names(
13 state_dict: Dict[str, torch.Tensor],
14 *,
15 preferred_names: List[str] = None,
16 discard_names: List[str] = None,
17 ) -> Dict[str, List[str]]:
18 if preferred_names is None:
19 preferred_names = []
20 preferred_names = set(preferred_names)
21 if discard_names is None:
22 discard_names = []
23 discard_names = set(discard_names)
24
25 shareds = _find_shared_tensors(state_dict)
26 to_remove = defaultdict(list)
27 for shared in shareds:
28 complete_names = set(
29 [name for name in shared if _is_complete(state_dict[name])]
30 )
31 if not complete_names:
32 raise RuntimeError(
33 f"Error while trying to find names to remove to save state dict, but found no suitable name to keep for saving amongst: {shared}. None is covering the entire storage.Refusing to save/load the model since you could be storing much more memory than needed. Please refer to https://huggingface.co/docs/safetensors/torch_shared_tensors for more information. Or open an issue."
34 )
35
36 keep_name = sorted(list(complete_names))[0]
37
38 # Mecanism to preferentially select keys to keep
39 # coming from the on-disk file to allow
40 # loading models saved with a different choice
41 # of keep_name
42 preferred = complete_names.difference(discard_names)
43 if preferred:
44 keep_name = sorted(list(preferred))[0]
45
46 if preferred_names:
47 preferred = preferred_names.intersection(complete_names)
48 if preferred:
49 keep_name = sorted(list(preferred))[0]
50 for name in sorted(shared):
51 if name != keep_name:
52 to_remove[keep_name].append(name)
53 return to_remove
54
55
56 def convert_file(pt_file: Path, sf_file: Path, discard_names: List[str]):
57 """
58 Convert a pytorch file to a safetensors file
59 This will remove duplicate tensors from the file.
60
61 Unfortunately, this might not respect *transformers* convention.
62 Forcing us to check for potentially different keys during load when looking
63 for specific tensors (making tensor sharing explicit).
64 """
65 loaded = torch.load(pt_file, map_location="cpu")
66 if "state_dict" in loaded:
67 loaded = loaded["state_dict"]
68 to_removes = _remove_duplicate_names(loaded, discard_names=discard_names)
69
70 metadata = {"format": "pt"}
71 for kept_name, to_remove_group in to_removes.items():
72 for to_remove in to_remove_group:
73 if to_remove not in metadata:
74 metadata[to_remove] = kept_name
75 del loaded[to_remove]
76 # Force tensors to be contiguous
77 loaded = {k: v.contiguous() for k, v in loaded.items()}
78
79 dirname = os.path.dirname(sf_file)
80 os.makedirs(dirname, exist_ok=True)
81 save_file(loaded, sf_file, metadata=metadata)
82 reloaded = load_file(sf_file)
83 for k in loaded:
84 pt_tensor = loaded[k]
85 sf_tensor = reloaded[k]
86 if not torch.equal(pt_tensor, sf_tensor):
87 raise RuntimeError(f"The output tensors do not match for key {k}")
88
89
90 def convert_files(pt_files: List[Path], sf_files: List[Path], discard_names: List[str]):
91 assert len(pt_files) == len(sf_files)
92
93 N = len(pt_files)
94 # We do this instead of using tqdm because we want to parse the logs with the launcher
95
96 for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):
97 start = datetime.datetime.now()
98 convert_file(pt_file, sf_file, discard_names)
99 elapsed = datetime.datetime.now() - start
100 logger.info(f"Convert: [{i + 1}/{N}] -- Took: {elapsed}")
101
[end of server/text_generation_server/utils/convert.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/server/text_generation_server/utils/convert.py b/server/text_generation_server/utils/convert.py
--- a/server/text_generation_server/utils/convert.py
+++ b/server/text_generation_server/utils/convert.py
@@ -94,6 +94,14 @@
# We do this instead of using tqdm because we want to parse the logs with the launcher
for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):
+ # Skip blacklisted files
+ if (
+ "arguments" in pt_file.name
+ or "args" in pt_file.name
+ or "training" in pt_file.name
+ ):
+ continue
+
start = datetime.datetime.now()
convert_file(pt_file, sf_file, discard_names)
elapsed = datetime.datetime.now() - start
| {"golden_diff": "diff --git a/server/text_generation_server/utils/convert.py b/server/text_generation_server/utils/convert.py\n--- a/server/text_generation_server/utils/convert.py\n+++ b/server/text_generation_server/utils/convert.py\n@@ -94,6 +94,14 @@\n # We do this instead of using tqdm because we want to parse the logs with the launcher\n \n for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):\n+ # Skip blacklisted files\n+ if (\n+ \"arguments\" in pt_file.name\n+ or \"args\" in pt_file.name\n+ or \"training\" in pt_file.name\n+ ):\n+ continue\n+\n start = datetime.datetime.now()\n convert_file(pt_file, sf_file, discard_names)\n elapsed = datetime.datetime.now() - start\n", "issue": "Can't load local flan-small models due to weight conversion failure \n### System Info\n\nOS Version: \r\nDistributor ID: Ubuntu\r\nDescription: Ubuntu 20.04.3 LTS\r\nRelease: 20.04\r\nCodename: focal\r\n\r\n8 A-100 GPUS\r\n\r\nUsing latest text-generation-inference docker version. \r\n\r\nI've run fine-tuning on a [Flan-T5-Small](https://huggingface.co/google/flan-t5-small) model and saved the checkpoint in my local directory. I've stored this local model checkpoint in my data2 volume and run the command as follows:\r\ndocker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data2 ghcr.io/huggingface/text-generation-inference:0.9 --model-id /data2/checkpoint-20 --num-shard $num_shard\r\n\r\nBut I run into errors with the converting weights as mentioned below. \n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nRun docker command above. \r\n\r\nI get this error now: \r\n\r\n2023-07-12T05:45:31.707548Z INFO text_generation_launcher: Args { model_id: \"/data2/checkpoint-20\", revision: None, sharded: None, num_shard: Some(2), quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: \"0341f92fe465\", port: 80, shard_uds_path: \"/tmp/text-generation-server\", master_addr: \"localhost\", master_port: 29500, huggingface_hub_cache: Some(\"/data\"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }\r\n2023-07-12T05:45:31.707602Z INFO text_generation_launcher: Sharding model on 2 processes\r\n2023-07-12T05:45:31.707781Z INFO text_generation_launcher: Starting download process.\r\n2023-07-12T05:45:33.261253Z WARN download: text_generation_launcher: No safetensors weights found for model /data2/checkpoint-20 at revision None. 
Converting PyTorch weights to safetensors.\r\n\r\n2023-07-12T05:45:33.711218Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):\r\n\r\n File \"/opt/conda/bin/text-generation-server\", line 8, in <module>\r\n sys.exit(app())\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py\", line 164, in download_weights\r\n utils.convert_files(local_pt_files, local_st_files)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py\", line 53, in convert_files\r\n convert_file(pt_file, sf_file)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/convert.py\", line 21, in convert_file\r\n if \"state_dict\" in loaded:\r\n\r\nTypeError: argument of type 'Seq2SeqTrainingArguments' is not iterable\r\n\r\n\r\nError: DownloadError\r\n\n\n### Expected behavior\n\nI would expect the local model to load as do the models from the hugging-face library. Appreciate any help!\n", "before_files": [{"content": "import datetime\nimport torch\nimport os\n\nfrom loguru import logger\nfrom pathlib import Path\nfrom safetensors.torch import save_file, load_file, _find_shared_tensors, _is_complete\nfrom typing import List, Dict\nfrom collections import defaultdict\n\n\ndef _remove_duplicate_names(\n state_dict: Dict[str, torch.Tensor],\n *,\n preferred_names: List[str] = None,\n discard_names: List[str] = None,\n) -> Dict[str, List[str]]:\n if preferred_names is None:\n preferred_names = []\n preferred_names = set(preferred_names)\n if discard_names is None:\n discard_names = []\n discard_names = set(discard_names)\n\n shareds = _find_shared_tensors(state_dict)\n to_remove = defaultdict(list)\n for shared in shareds:\n complete_names = set(\n [name for name in shared if _is_complete(state_dict[name])]\n )\n if not complete_names:\n raise RuntimeError(\n f\"Error while trying to find names to remove to save state dict, but found no suitable name to keep for saving amongst: {shared}. None is covering the entire storage.Refusing to save/load the model since you could be storing much more memory than needed. Please refer to https://huggingface.co/docs/safetensors/torch_shared_tensors for more information. 
Or open an issue.\"\n )\n\n keep_name = sorted(list(complete_names))[0]\n\n # Mecanism to preferentially select keys to keep\n # coming from the on-disk file to allow\n # loading models saved with a different choice\n # of keep_name\n preferred = complete_names.difference(discard_names)\n if preferred:\n keep_name = sorted(list(preferred))[0]\n\n if preferred_names:\n preferred = preferred_names.intersection(complete_names)\n if preferred:\n keep_name = sorted(list(preferred))[0]\n for name in sorted(shared):\n if name != keep_name:\n to_remove[keep_name].append(name)\n return to_remove\n\n\ndef convert_file(pt_file: Path, sf_file: Path, discard_names: List[str]):\n \"\"\"\n Convert a pytorch file to a safetensors file\n This will remove duplicate tensors from the file.\n\n Unfortunately, this might not respect *transformers* convention.\n Forcing us to check for potentially different keys during load when looking\n for specific tensors (making tensor sharing explicit).\n \"\"\"\n loaded = torch.load(pt_file, map_location=\"cpu\")\n if \"state_dict\" in loaded:\n loaded = loaded[\"state_dict\"]\n to_removes = _remove_duplicate_names(loaded, discard_names=discard_names)\n\n metadata = {\"format\": \"pt\"}\n for kept_name, to_remove_group in to_removes.items():\n for to_remove in to_remove_group:\n if to_remove not in metadata:\n metadata[to_remove] = kept_name\n del loaded[to_remove]\n # Force tensors to be contiguous\n loaded = {k: v.contiguous() for k, v in loaded.items()}\n\n dirname = os.path.dirname(sf_file)\n os.makedirs(dirname, exist_ok=True)\n save_file(loaded, sf_file, metadata=metadata)\n reloaded = load_file(sf_file)\n for k in loaded:\n pt_tensor = loaded[k]\n sf_tensor = reloaded[k]\n if not torch.equal(pt_tensor, sf_tensor):\n raise RuntimeError(f\"The output tensors do not match for key {k}\")\n\n\ndef convert_files(pt_files: List[Path], sf_files: List[Path], discard_names: List[str]):\n assert len(pt_files) == len(sf_files)\n\n N = len(pt_files)\n # We do this instead of using tqdm because we want to parse the logs with the launcher\n\n for i, (pt_file, sf_file) in enumerate(zip(pt_files, sf_files)):\n start = datetime.datetime.now()\n convert_file(pt_file, sf_file, discard_names)\n elapsed = datetime.datetime.now() - start\n logger.info(f\"Convert: [{i + 1}/{N}] -- Took: {elapsed}\")\n", "path": "server/text_generation_server/utils/convert.py"}]} | 2,548 | 178 |
gh_patches_debug_354 | rasdani/github-patches | git_diff | sanic-org__sanic-1343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pin versions for LTS release
I think that versions of (some) dependencies should be allowed to float, but when we are ready for an LTS release, the versions should be pinned at that time.
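For illustration only (the exact version numbers here are hypothetical), the difference in `setup.py` would look something like:
```python
# Floating: any compatible release pip can resolve
requirements = ['httptools>=0.0.9']
# Pinned for an LTS release: locked to the version we tested against
requirements = ['httptools==0.0.10']
```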
@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins
</issue>
<code>
[start of setup.py]
1 """
2 Sanic
3 """
4 import codecs
5 import os
6 import re
7 from distutils.errors import DistutilsPlatformError
8 from distutils.util import strtobool
9
10 from setuptools import setup
11
12
13 def open_local(paths, mode='r', encoding='utf8'):
14 path = os.path.join(
15 os.path.abspath(os.path.dirname(__file__)),
16 *paths
17 )
18
19 return codecs.open(path, mode, encoding)
20
21
22 with open_local(['sanic', '__init__.py'], encoding='latin1') as fp:
23 try:
24 version = re.findall(r"^__version__ = '([^']+)'\r?$",
25 fp.read(), re.M)[0]
26 except IndexError:
27 raise RuntimeError('Unable to determine version.')
28
29
30 with open_local(['README.rst']) as rm:
31 long_description = rm.read()
32
33 setup_kwargs = {
34 'name': 'sanic',
35 'version': version,
36 'url': 'http://github.com/channelcat/sanic/',
37 'license': 'MIT',
38 'author': 'Channel Cat',
39 'author_email': '[email protected]',
40 'description': (
41 'A microframework based on uvloop, httptools, and learnings of flask'),
42 'long_description': long_description,
43 'packages': ['sanic'],
44 'platforms': 'any',
45 'classifiers': [
46 'Development Status :: 4 - Beta',
47 'Environment :: Web Environment',
48 'License :: OSI Approved :: MIT License',
49 'Programming Language :: Python :: 3.5',
50 'Programming Language :: Python :: 3.6',
51 ],
52 }
53
54 env_dependency = '; sys_platform != "win32" and implementation_name == "cpython"'
55 ujson = 'ujson>=1.35' + env_dependency
56 uvloop = 'uvloop>=0.5.3' + env_dependency
57
58 requirements = [
59 'httptools>=0.0.9',
60 uvloop,
61 ujson,
62 'aiofiles>=0.3.0',
63 'websockets>=5.0,<6.0',
64 'multidict>=4.0,<5.0',
65 ]
66 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
67 print("Installing without uJSON")
68 requirements.remove(ujson)
69
70 # 'nt' means windows OS
71 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
72 print("Installing without uvLoop")
73 requirements.remove(uvloop)
74
75 setup_kwargs['install_requires'] = requirements
76 setup(**setup_kwargs)
77
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
uvloop = 'uvloop>=0.5.3' + env_dependency
requirements = [
- 'httptools>=0.0.9',
+ 'httptools>=0.0.10',
uvloop,
ujson,
'aiofiles>=0.3.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n uvloop = 'uvloop>=0.5.3' + env_dependency\n \n requirements = [\n- 'httptools>=0.0.9',\n+ 'httptools>=0.0.10',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n", "issue": "Pin versions for LTS release\nI think that versions of (some) should be allowed to float but when we are ready for an LTS release, the versions should be pinned at that time.\r\n\r\n@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins \n", "before_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nfrom distutils.errors import DistutilsPlatformError\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\n\n\ndef open_local(paths, mode='r', encoding='utf8'):\n path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n *paths\n )\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local(['sanic', '__init__.py'], encoding='latin1') as fp:\n try:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError('Unable to determine version.')\n\n\nwith open_local(['README.rst']) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n 'name': 'sanic',\n 'version': version,\n 'url': 'http://github.com/channelcat/sanic/',\n 'license': 'MIT',\n 'author': 'Channel Cat',\n 'author_email': '[email protected]',\n 'description': (\n 'A microframework based on uvloop, httptools, and learnings of flask'),\n 'long_description': long_description,\n 'packages': ['sanic'],\n 'platforms': 'any',\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n}\n\nenv_dependency = '; sys_platform != \"win32\" and implementation_name == \"cpython\"'\nujson = 'ujson>=1.35' + env_dependency\nuvloop = 'uvloop>=0.5.3' + env_dependency\n\nrequirements = [\n 'httptools>=0.0.9',\n uvloop,\n ujson,\n 'aiofiles>=0.3.0',\n 'websockets>=5.0,<6.0',\n 'multidict>=4.0,<5.0',\n]\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nsetup_kwargs['install_requires'] = requirements\nsetup(**setup_kwargs)\n", "path": "setup.py"}]} | 1,292 | 99 |
gh_patches_debug_55601 | rasdani/github-patches | git_diff | xonsh__xonsh-138 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
In .xonshrc, import does not create a global name
xonsh: git checkout f44013b31756ba5491f2a7e1dffb7ad64513b28e
python: 3.4.1
OS: Fedora 21
With this as your .xonshrc:
``` python
import subprocess
def get_tty():
tty = subprocess.check_output('tty').decode().strip()
segments = tty.split('/')
return '/'.join(segments[-2:])
$PROMPT='{tty}@{{hostname}}$ '.format(tty=get_tty())
```
Trying to start .xonshrc yields a traceback:
```
Traceback (most recent call last):
File "scripts/xonsh", line 3, in <module>
main()
File "/srv/git/wishlist/xonsh/xonsh/main.py", line 36, in main
shell = Shell()
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 94, in __init__
execer=self.execer)
File "/srv/git/wishlist/xonsh/xonsh/environ.py", line 168, in xonshrc_context
execer.exec(rc, glbs={}, locs=env)
File "/srv/git/wishlist/xonsh/xonsh/execer.py", line 110, in exec
return exec(code, glbs, locs)
File "/home/badger/.xonshrc", line 7, in <module>
File "/home/badger/.xonshrc", line 259, in get_tty
NameError: name 'subprocess' is not defined
Exception ignored in: <bound method Shell.__del__ of <xonsh.shell.Shell object at 0x7f383127e4e0>>
Traceback (most recent call last):
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 102, in __del__
teardown_readline()
File "/srv/git/wishlist/xonsh/xonsh/shell.py", line 65, in teardown_readline
import readline
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2222, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 2164, in _find_spec
File "<frozen importlib._bootstrap>", line 1940, in find_spec
File "<frozen importlib._bootstrap>", line 1908, in _get_spec
TypeError: 'NoneType' object is not iterable
```
If I change .xonshrc to have the subprocess import inside of the function then it starts up fine. So it seems like importing does not create a globally available name. The other things I tried such as:
``` python
import subprocess as subprocess
subprocess = __import__('subprocess')
```
also lead to the same traceback.
</issue>
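For context, the failure reported above is ordinary CPython `exec()` behaviour rather than anything specific to xonsh: when `exec()` is given separate globals and locals mappings, top-level statements in the executed source (including `import`) bind names in the locals mapping, while functions defined by that source resolve free names through the globals mapping, so the imported module is invisible inside the function body. The snippet below is an illustrative, self-contained sketch of that behaviour; it is not code from the xonsh repository, and the names in it are invented for the example.

```python
# Minimal reproduction of the scoping behaviour behind this issue
# (illustrative only -- not xonsh code).
rc_src = """
import subprocess

def mod_name():
    # 'subprocess' is looked up via this function's globals mapping
    return subprocess.__name__

print(mod_name())
"""

glbs, locs = {}, {}
try:
    # mirrors execer.exec(rc, glbs={}, locs=env): the import binds in locs,
    # but mod_name() resolves 'subprocess' through the empty glbs dict
    exec(rc_src, glbs, locs)
except NameError as err:
    print(err)  # name 'subprocess' is not defined

# a single shared namespace makes the import visible inside the function
exec(rc_src, {})  # prints: subprocess
```

Passing one mapping for both roles (as the eventual fix does with `glbs=env`) avoids this split, so module-level imports in `.xonshrc` behave as expected.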
<code>
[start of xonsh/environ.py]
1 """Environment for the xonsh shell.
2 """
3 import os
4 import re
5 import socket
6 import locale
7 import builtins
8 import platform
9 import subprocess
10 from warnings import warn
11
12 from xonsh.tools import TERM_COLORS
13
14 def current_branch(cwd=None):
15 """Gets the branch for a current working directory. Returns None
16 if the cwd is not a repository. This currently only works for git,
17     but should be extended in the future.
18 """
19 branch = None
20 cwd = os.getcwd() if cwd is None else cwd
21
22 # step out completely if git is not installed
23 try:
24 binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,
25 stderr=subprocess.PIPE,
26 universal_newlines=True)
27 if not binary_location:
28 return branch
29 except subprocess.CalledProcessError:
30 return branch
31
32 prompt_scripts = [
33 '/usr/lib/git-core/git-sh-prompt',
34 '/usr/local/etc/bash_completion.d/git-prompt.sh'
35 ]
36
37 for script in prompt_scripts:
38 # note that this is about 10x faster than bash -i "__git_ps1"
39 _input = ('source {}; __git_ps1 "${{1:-%s}}"'.format(script))
40 try:
41 branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,
42 stderr=subprocess.PIPE,
43 universal_newlines=True) or None
44 except subprocess.CalledProcessError:
45 continue
46
47 # fall back to using the git binary if the above failed
48 if branch is None:
49 try:
50 s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],
51 stderr=subprocess.PIPE, cwd=cwd,
52 universal_newlines=True)
53 s = s.strip()
54 if len(s) > 0:
55 branch = s
56 except subprocess.CalledProcessError:
57 pass
58
59 return branch
60
61
62 default_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '
63 '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')
64 default_title = '{user}@{hostname}: {cwd} | xonsh'
65
66 def format_prompt(template=default_prompt):
67 """Formats a xonsh prompt template string.
68
69 The following keyword arguments are recognized in the template string:
70
71 + user -- Name of current user
72 + hostname -- Name of host computer
73 + cwd -- Current working directory
74 + curr_branch -- Name of current git branch (preceded by a space), if any
75 + (QUALIFIER\_)COLORNAME -- Inserts an ANSI color code
76 - COLORNAME can be any of:
77 BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE
78 - QUALIFIER is optional and can be any of:
79 BOLD, UNDERLINE, BACKGROUND, INTENSE,
80 BOLD_INTENSE, BACKGROUND_INTENSE
81 + NO_COLOR -- Resets any previously used color codes
82 """
83 env = builtins.__xonsh_env__
84 cwd = env['PWD']
85 branch = current_branch(cwd=cwd)
86 branch = '' if branch is None else ' ' + branch
87 p = template.format(
88 user=env.get('USER', '<user>'),
89 hostname=socket.gethostname(),
90 cwd=cwd.replace(env['HOME'], '~'),
91 curr_branch=branch,
92 **TERM_COLORS
93 )
94 return p
95
96
97 RE_HIDDEN = re.compile('\001.*?\002')
98
99 def multiline_prompt():
100 """Returns the filler text for the prompt in multiline scenarios."""
101 curr = builtins.__xonsh_env__.get('PROMPT', "set '$PROMPT = ...' $ ")
102 curr = curr() if callable(curr) else curr
103 curr = format_prompt(curr)
104 line = curr.rsplit('\n', 1)[1] if '\n' in curr else curr
105 line = RE_HIDDEN.sub('', line) # gets rid of colors
106 # most prompts end in whitespace, head is the part before that.
107 head = line.rstrip()
108 headlen = len(head)
109 # tail is the trailing whitespace
110 tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]
111     # now to construct the actual string
112 dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')
113 dots = dots() if callable(dots) else dots
114 if dots is None or len(dots) == 0:
115 return ''
116 return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail
117
118
119 BASE_ENV = {
120 'INDENT': ' ',
121 'PROMPT': default_prompt,
122 'TITLE': default_title,
123 'MULTILINE_PROMPT': '.',
124 'XONSHRC': os.path.expanduser('~/.xonshrc'),
125 'XONSH_HISTORY_SIZE': 8128,
126 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),
127 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),
128 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),
129 'LC_TIME': locale.setlocale(locale.LC_TIME),
130 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),
131 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),
132 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),
133 }
134
135 if platform.system() == 'Darwin':
136 BASE_ENV['BASH_COMPLETIONS'] = []
137 else:
138 BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion',
139 '/usr/share/bash-completion/completions/git']
140
141 def bash_env():
142     """Attempts to compute the bash environment variables."""
143 currenv = None
144 if hasattr(builtins, '__xonsh_env__'):
145 currenv = builtins.__xonsh_env__.detype()
146 try:
147 s = subprocess.check_output(['bash', '-i'], input='env', env=currenv,
148 stderr=subprocess.PIPE,
149 universal_newlines=True)
150 except subprocess.CalledProcessError:
151 s = ''
152 items = [line.split('=', 1) for line in s.splitlines() if '=' in line]
153 env = dict(items)
154 return env
155
156 def xonshrc_context(rcfile=None, execer=None):
157 """Attempts to read in xonshrc file, and return the contents."""
158 if rcfile is None or execer is None or not os.path.isfile(rcfile):
159 return {}
160 with open(rcfile, 'r') as f:
161 rc = f.read()
162 if not rc.endswith('\n'):
163 rc += '\n'
164 fname = execer.filename
165 env = {}
166 try:
167 execer.filename = rcfile
168 execer.exec(rc, glbs={}, locs=env)
169 except SyntaxError as err:
170 msg = 'syntax error in xonsh run control file {0!r}: {1!s}'
171 warn(msg.format(rcfile, err), RuntimeWarning)
172 finally:
173 execer.filename = fname
174 return env
175
176 def default_env(env=None):
177 """Constructs a default xonsh environment."""
178 # in order of increasing precedence
179 ctx = dict(BASE_ENV)
180 ctx.update(os.environ)
181 ctx.update(bash_env())
182 if env is not None:
183 ctx.update(env)
184 return ctx
185
[end of xonsh/environ.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/environ.py b/xonsh/environ.py
--- a/xonsh/environ.py
+++ b/xonsh/environ.py
@@ -165,7 +165,7 @@
env = {}
try:
execer.filename = rcfile
- execer.exec(rc, glbs={}, locs=env)
+ execer.exec(rc, glbs=env)
except SyntaxError as err:
msg = 'syntax error in xonsh run control file {0!r}: {1!s}'
warn(msg.format(rcfile, err), RuntimeWarning)
| {"golden_diff": "diff --git a/xonsh/environ.py b/xonsh/environ.py\n--- a/xonsh/environ.py\n+++ b/xonsh/environ.py\n@@ -165,7 +165,7 @@\n env = {}\n try:\n execer.filename = rcfile\n- execer.exec(rc, glbs={}, locs=env)\n+ execer.exec(rc, glbs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n", "issue": "In .xonshrc, import does not create a global name\nxonsh: git checkout f44013b31756ba5491f2a7e1dffb7ad64513b28e\npython: 3.4.1\nOS: Fedora 21\n\nWith this as your .xonshrc:\n\n``` python\nimport subprocess\n\ndef get_tty():\n tty = subprocess.check_output('tty').decode().strip()\n segments = tty.split('/')\n return '/'.join(segments[-2:])\n\n$PROMPT='{tty}@{{hostname}}$ '.format(tty=get_tty())\n```\n\nTrying to start .xonshrc yields a traceback:\n\n```\nTraceback (most recent call last):\n File \"scripts/xonsh\", line 3, in <module>\n main()\n File \"/srv/git/wishlist/xonsh/xonsh/main.py\", line 36, in main\n shell = Shell()\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 94, in __init__\n execer=self.execer)\n File \"/srv/git/wishlist/xonsh/xonsh/environ.py\", line 168, in xonshrc_context\n execer.exec(rc, glbs={}, locs=env)\n File \"/srv/git/wishlist/xonsh/xonsh/execer.py\", line 110, in exec\n return exec(code, glbs, locs)\n File \"/home/badger/.xonshrc\", line 7, in <module>\n\n File \"/home/badger/.xonshrc\", line 259, in get_tty\nNameError: name 'subprocess' is not defined\nException ignored in: <bound method Shell.__del__ of <xonsh.shell.Shell object at 0x7f383127e4e0>>\nTraceback (most recent call last):\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 102, in __del__\n teardown_readline()\n File \"/srv/git/wishlist/xonsh/xonsh/shell.py\", line 65, in teardown_readline\n import readline\n File \"<frozen importlib._bootstrap>\", line 2237, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 2222, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 2164, in _find_spec\n File \"<frozen importlib._bootstrap>\", line 1940, in find_spec\n File \"<frozen importlib._bootstrap>\", line 1908, in _get_spec\nTypeError: 'NoneType' object is not iterable\n```\n\nIf I change .xonshrc to have the subprocess import inside of the function then it starts up fine. So it seems like importing does not create a globally available name. The other things I tried such as:\n\n``` python\nimport subprocess as subprocess\nsubprocess = __import__('subprocess')\n```\n\nalso lead to the same traceback.\n\n", "before_files": [{"content": "\"\"\"Environment for the xonsh shell.\n\"\"\"\nimport os\nimport re\nimport socket\nimport locale\nimport builtins\nimport platform\nimport subprocess\nfrom warnings import warn\n\nfrom xonsh.tools import TERM_COLORS\n\ndef current_branch(cwd=None):\n \"\"\"Gets the branch for a current working directory. Returns None\n if the cwd is not a repository. 
This currently only works for git, \n bust should be extended in the future.\n \"\"\"\n branch = None\n cwd = os.getcwd() if cwd is None else cwd\n\n # step out completely if git is not installed\n try:\n binary_location = subprocess.check_output(['which', 'git'], cwd=cwd,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n if not binary_location:\n return branch\n except subprocess.CalledProcessError:\n return branch\n\n prompt_scripts = [\n '/usr/lib/git-core/git-sh-prompt',\n '/usr/local/etc/bash_completion.d/git-prompt.sh'\n ]\n\n for script in prompt_scripts:\n # note that this is about 10x faster than bash -i \"__git_ps1\"\n _input = ('source {}; __git_ps1 \"${{1:-%s}}\"'.format(script))\n try:\n branch = subprocess.check_output(['bash',], cwd=cwd, input=_input,\n stderr=subprocess.PIPE,\n universal_newlines=True) or None\n except subprocess.CalledProcessError:\n continue\n\n # fall back to using the git binary if the above failed\n if branch is None:\n try:\n s = subprocess.check_output(['git', 'rev-parse','--abbrev-ref', 'HEAD'],\n stderr=subprocess.PIPE, cwd=cwd,\n universal_newlines=True) \n s = s.strip()\n if len(s) > 0:\n branch = s\n except subprocess.CalledProcessError:\n pass\n\n return branch\n\n\ndefault_prompt = ('{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE} '\n '{cwd}{BOLD_RED}{curr_branch} {BOLD_BLUE}${NO_COLOR} ')\ndefault_title = '{user}@{hostname}: {cwd} | xonsh'\n\ndef format_prompt(template=default_prompt):\n \"\"\"Formats a xonsh prompt template string.\n\n The following keyword arguments are recognized in the template string:\n\n + user -- Name of current user\n + hostname -- Name of host computer\n + cwd -- Current working directory\n + curr_branch -- Name of current git branch (preceded by a space), if any\n + (QUALIFIER\\_)COLORNAME -- Inserts an ANSI color code\n - COLORNAME can be any of:\n BLACK, RED, GREEN, YELLOW, BLUE, PURPLE, CYAN, WHITE\n - QUALIFIER is optional and can be any of:\n BOLD, UNDERLINE, BACKGROUND, INTENSE,\n BOLD_INTENSE, BACKGROUND_INTENSE\n + NO_COLOR -- Resets any previously used color codes\n \"\"\"\n env = builtins.__xonsh_env__\n cwd = env['PWD']\n branch = current_branch(cwd=cwd)\n branch = '' if branch is None else ' ' + branch\n p = template.format(\n user=env.get('USER', '<user>'),\n hostname=socket.gethostname(),\n cwd=cwd.replace(env['HOME'], '~'),\n curr_branch=branch,\n **TERM_COLORS\n )\n return p\n\n\nRE_HIDDEN = re.compile('\\001.*?\\002')\n\ndef multiline_prompt():\n \"\"\"Returns the filler text for the prompt in multiline scenarios.\"\"\"\n curr = builtins.__xonsh_env__.get('PROMPT', \"set '$PROMPT = ...' 
$ \")\n curr = curr() if callable(curr) else curr\n curr = format_prompt(curr)\n line = curr.rsplit('\\n', 1)[1] if '\\n' in curr else curr\n line = RE_HIDDEN.sub('', line) # gets rid of colors\n # most prompts end in whitespace, head is the part before that.\n head = line.rstrip()\n headlen = len(head)\n # tail is the trailing whitespace\n tail = line if headlen == 0 else line.rsplit(head[-1], 1)[1]\n # now to constuct the actual string\n dots = builtins.__xonsh_env__.get('MULTILINE_PROMPT', '.')\n dots = dots() if callable(dots) else dots\n if dots is None or len(dots) == 0:\n return ''\n return (dots*(headlen//len(dots))) + dots[:headlen%len(dots)] + tail\n\n\nBASE_ENV = {\n 'INDENT': ' ',\n 'PROMPT': default_prompt,\n 'TITLE': default_title,\n 'MULTILINE_PROMPT': '.',\n 'XONSHRC': os.path.expanduser('~/.xonshrc'),\n 'XONSH_HISTORY_SIZE': 8128,\n 'XONSH_HISTORY_FILE': os.path.expanduser('~/.xonsh_history'),\n 'LC_CTYPE': locale.setlocale(locale.LC_CTYPE),\n 'LC_COLLATE': locale.setlocale(locale.LC_COLLATE),\n 'LC_TIME': locale.setlocale(locale.LC_TIME),\n 'LC_MONETARY': locale.setlocale(locale.LC_MONETARY),\n 'LC_MESSAGES': locale.setlocale(locale.LC_MESSAGES),\n 'LC_NUMERIC': locale.setlocale(locale.LC_NUMERIC),\n }\n\nif platform.system() == 'Darwin':\n BASE_ENV['BASH_COMPLETIONS'] = []\nelse:\n BASE_ENV['BASH_COMPLETIONS'] = ['/etc/bash_completion', \n '/usr/share/bash-completion/completions/git']\n\ndef bash_env():\n \"\"\"Attempts to compute the bash envinronment variables.\"\"\"\n currenv = None\n if hasattr(builtins, '__xonsh_env__'):\n currenv = builtins.__xonsh_env__.detype()\n try:\n s = subprocess.check_output(['bash', '-i'], input='env', env=currenv, \n stderr=subprocess.PIPE,\n universal_newlines=True)\n except subprocess.CalledProcessError:\n s = ''\n items = [line.split('=', 1) for line in s.splitlines() if '=' in line]\n env = dict(items)\n return env\n\ndef xonshrc_context(rcfile=None, execer=None):\n \"\"\"Attempts to read in xonshrc file, and return the contents.\"\"\"\n if rcfile is None or execer is None or not os.path.isfile(rcfile):\n return {}\n with open(rcfile, 'r') as f:\n rc = f.read()\n if not rc.endswith('\\n'):\n rc += '\\n'\n fname = execer.filename\n env = {}\n try:\n execer.filename = rcfile\n execer.exec(rc, glbs={}, locs=env)\n except SyntaxError as err:\n msg = 'syntax error in xonsh run control file {0!r}: {1!s}'\n warn(msg.format(rcfile, err), RuntimeWarning)\n finally:\n execer.filename = fname\n return env\n\ndef default_env(env=None):\n \"\"\"Constructs a default xonsh environment.\"\"\"\n # in order of increasing precedence\n ctx = dict(BASE_ENV)\n ctx.update(os.environ)\n ctx.update(bash_env())\n if env is not None:\n ctx.update(env)\n return ctx\n", "path": "xonsh/environ.py"}]} | 3,264 | 134 |
gh_patches_debug_32361 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3420 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of applications/Chat/coati/models/gpt/gpt_actor.py]
1 from typing import Optional
2
3 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
4 from transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel
5
6 from ..base import Actor
7
8
9 class GPTActor(Actor):
10 """
11 GPT Actor model.
12
13 Args:
14 pretrained (str): Pretrained model name or path.
15 config (GPT2Config): Model config.
16 checkpoint (bool): Enable gradient checkpointing.
17 lora_rank (int): Rank of the LoRa layer.
18 lora_train_bias (str): Bias training strategy for the LoRa layer.
19 """
20
21 def __init__(self,
22 pretrained: Optional[str] = None,
23 config: Optional[GPT2Config] = None,
24 checkpoint: bool = False,
25 lora_rank: int = 0,
26 lora_train_bias: str = 'none') -> None:
27 if pretrained is not None:
28 model = GPT2LMHeadModel.from_pretrained(pretrained)
29 elif config is not None:
30 model = GPT2LMHeadModel(config)
31 else:
32 model = GPT2LMHeadModel(GPT2Config())
33 if checkpoint:
34 model.gradient_checkpointing_enable()
35 super().__init__(model, lora_rank, lora_train_bias)
36
[end of applications/Chat/coati/models/gpt/gpt_actor.py]
[start of applications/Chat/coati/models/gpt/gpt_critic.py]
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
5 from transformers.models.gpt2.modeling_gpt2 import GPT2Model
6
7 from ..base import Critic
8
9
10 class GPTCritic(Critic):
11 """
12 GPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (GPT2Config): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 lora_rank (int): Rank of the LO-RA decomposition.
19 lora_train_bias (str): LoRA bias training mode.
20 """
21
22 def __init__(self,
23 pretrained: Optional[str] = None,
24 config: Optional[GPT2Config] = None,
25 checkpoint: bool = False,
26 lora_rank: int = 0,
27 lora_train_bias: str = 'none') -> None:
28 if pretrained is not None:
29 model = GPT2Model.from_pretrained(pretrained)
30 elif config is not None:
31 model = GPT2Model(config)
32 else:
33 model = GPT2Model(GPT2Config())
34 if checkpoint:
35 model.gradient_checkpointing_enable()
36 value_head = nn.Linear(model.config.n_embd, 1)
37 super().__init__(model, value_head, lora_rank, lora_train_bias)
38
[end of applications/Chat/coati/models/gpt/gpt_critic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py
--- a/applications/Chat/coati/models/gpt/gpt_actor.py
+++ b/applications/Chat/coati/models/gpt/gpt_actor.py
@@ -23,7 +23,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2LMHeadModel.from_pretrained(pretrained)
elif config is not None:
@@ -32,4 +33,4 @@
model = GPT2LMHeadModel(GPT2Config())
if checkpoint:
model.gradient_checkpointing_enable()
- super().__init__(model, lora_rank, lora_train_bias)
+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)
diff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py
--- a/applications/Chat/coati/models/gpt/gpt_critic.py
+++ b/applications/Chat/coati/models/gpt/gpt_critic.py
@@ -24,7 +24,8 @@
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
lora_rank: int = 0,
- lora_train_bias: str = 'none') -> None:
+ lora_train_bias: str = 'none',
+ **kwargs) -> None:
if pretrained is not None:
model = GPT2Model.from_pretrained(pretrained)
elif config is not None:
@@ -34,4 +35,4 @@
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.n_embd, 1)
- super().__init__(model, value_head, lora_rank, lora_train_bias)
+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
| {"golden_diff": "diff --git a/applications/Chat/coati/models/gpt/gpt_actor.py b/applications/Chat/coati/models/gpt/gpt_actor.py\n--- a/applications/Chat/coati/models/gpt/gpt_actor.py\n+++ b/applications/Chat/coati/models/gpt/gpt_actor.py\n@@ -23,7 +23,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n@@ -32,4 +33,4 @@\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n- super().__init__(model, lora_rank, lora_train_bias)\n+ super().__init__(model, lora_rank, lora_train_bias, **kwargs)\ndiff --git a/applications/Chat/coati/models/gpt/gpt_critic.py b/applications/Chat/coati/models/gpt/gpt_critic.py\n--- a/applications/Chat/coati/models/gpt/gpt_critic.py\n+++ b/applications/Chat/coati/models/gpt/gpt_critic.py\n@@ -24,7 +24,8 @@\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n- lora_train_bias: str = 'none') -> None:\n+ lora_train_bias: str = 'none',\n+ **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n@@ -34,4 +35,4 @@\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n- super().__init__(model, value_head, lora_rank, lora_train_bias)\n+ super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_actor.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n 
if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias)\n", "path": "applications/Chat/coati/models/gpt/gpt_critic.py"}]} | 1,311 | 495 |
gh_patches_debug_12471 | rasdani/github-patches | git_diff | deis__deis-207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update AMIs in all EC2 regions
Our images are behind on some kernel and security updates and should be re-published as v0.1.0 versions. It's also kind of a performance optimization since we do apt-get upgrade during bootstrap.
</issue>
<code>
[start of provider/ec2.py]
1 """
2 Deis cloud provider implementation for Amazon EC2.
3 """
4
5 from __future__ import unicode_literals
6
7 import json
8 import time
9
10 from boto import ec2
11 from boto.exception import EC2ResponseError
12
13 # from api.ssh import connect_ssh, exec_ssh
14 from deis import settings
15
16
17 # Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,
18 # and large docker images (e.g. buildstep) pre-installed
19 IMAGE_MAP = {
20 'ap-northeast-1': 'ami-6da8356c',
21 'ap-southeast-1': 'ami-a66f24f4',
22 'ap-southeast-2': 'ami-d5f66bef',
23 'eu-west-1': 'ami-acbf5adb',
24 'sa-east-1': 'ami-f9fd5ae4',
25 'us-east-1': 'ami-69f3bc00',
26 'us-west-1': 'ami-f0695cb5',
27 'us-west-2': 'ami-ea1e82da',
28 }
29
30
31 def seed_flavors():
32 """Seed the database with default flavors for each EC2 region.
33
34 :rtype: list of dicts containing flavor data
35 """
36 flavors = []
37 for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',
38 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',
39 'sa-east-1'):
40 flavors.append({'id': 'ec2-{}'.format(r),
41 'provider': 'ec2',
42 'params': json.dumps({
43 'region': r,
44 'image': IMAGE_MAP[r],
45 'zone': 'any',
46 'size': 'm1.medium'})})
47 return flavors
48
49
50 def build_layer(layer):
51 """
52 Build a layer.
53
54 :param layer: a dict containing formation, id, params, and creds info
55 """
56 region = layer['params'].get('region', 'us-east-1')
57 conn = _create_ec2_connection(layer['creds'], region)
58 # create a new sg and authorize all ports
59 # use iptables on the host to firewall ports
60 name = "{formation}-{id}".format(**layer)
61 sg = conn.create_security_group(name, 'Created by Deis')
62 # import a new keypair using the layer key material
63 conn.import_key_pair(name, layer['ssh_public_key'])
64 # loop until the sg is *actually* there
65 for i in xrange(10):
66 try:
67 sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,
68 cidr_ip='0.0.0.0/0')
69 break
70 except EC2ResponseError:
71 if i < 10:
72 time.sleep(1.5)
73 continue
74 else:
75 raise RuntimeError('Failed to authorize security group')
76
77
78 def destroy_layer(layer):
79 """
80 Destroy a layer.
81
82 :param layer: a dict containing formation, id, params, and creds info
83 """
84 region = layer['params'].get('region', 'us-east-1')
85 name = "{formation}-{id}".format(**layer)
86 conn = _create_ec2_connection(layer['creds'], region)
87 conn.delete_key_pair(name)
88 # there's an ec2 race condition on instances terminating
89 # successfully but still holding a lock on the security group
90 # let's take a nap
91 time.sleep(5)
92 try:
93 conn.delete_security_group(name)
94 except EC2ResponseError as e:
95 if e.code != 'InvalidGroup.NotFound':
96 raise e
97
98
99 def build_node(node):
100 """
101 Build a node.
102
103 :param node: a dict containing formation, layer, params, and creds info.
104 :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)
105 """
106 params, creds = node['params'], node['creds']
107 region = params.setdefault('region', 'us-east-1')
108 conn = _create_ec2_connection(creds, region)
109 name = "{formation}-{layer}".format(**node)
110 params['key_name'] = name
111 sg = conn.get_all_security_groups(name)[0]
112 params.setdefault('security_groups', []).append(sg.name)
113 image_id = params.get(
114 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])
115 images = conn.get_all_images([image_id])
116 if len(images) != 1:
117 raise LookupError('Could not find AMI: %s' % image_id)
118 image = images[0]
119 kwargs = _prepare_run_kwargs(params)
120 reservation = image.run(**kwargs)
121 instances = reservation.instances
122 boto = instances[0]
123 # sleep before tagging
124 time.sleep(10)
125 boto.update()
126 boto.add_tag('Name', node['id'])
127 # loop until running
128 while(True):
129 time.sleep(2)
130 boto.update()
131 if boto.state == 'running':
132 break
133 # prepare return values
134 provider_id = boto.id
135 fqdn = boto.public_dns_name
136 metadata = _format_metadata(boto)
137 return provider_id, fqdn, metadata
138
139
140 def destroy_node(node):
141 """
142 Destroy a node.
143
144 :param node: a dict containing a node's provider_id, params, and creds
145 """
146 provider_id = node['provider_id']
147 region = node['params'].get('region', 'us-east-1')
148 conn = _create_ec2_connection(node['creds'], region)
149 if provider_id:
150 conn.terminate_instances([provider_id])
151 i = conn.get_all_instances([provider_id])[0].instances[0]
152 while(True):
153 time.sleep(2)
154 i.update()
155 if i.state == "terminated":
156 break
157
158
159 def _create_ec2_connection(creds, region):
160 """
161 Connect to an EC2 region with the given credentials.
162
163 :param creds: a dict containing an EC2 access_key and secret_key
164 :region: the name of an EC2 region, such as "us-west-2"
165 :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`
166 :raises EnvironmentError: if no credentials are provided
167 """
168 if not creds:
169 raise EnvironmentError('No credentials provided')
170 return ec2.connect_to_region(region,
171 aws_access_key_id=creds['access_key'],
172 aws_secret_access_key=creds['secret_key'])
173
174
175 def _prepare_run_kwargs(params):
176 # start with sane defaults
177 kwargs = {
178 'min_count': 1, 'max_count': 1,
179 'user_data': None, 'addressing_type': None,
180 'instance_type': None, 'placement': None,
181 'kernel_id': None, 'ramdisk_id': None,
182 'monitoring_enabled': False, 'subnet_id': None,
183 'block_device_map': None,
184 }
185 # convert zone "any" to NoneType
186 requested_zone = params.get('zone')
187 if requested_zone and requested_zone.lower() == 'any':
188 requested_zone = None
189 # lookup kwargs from params
190 param_kwargs = {
191 'instance_type': params.get('size', 'm1.medium'),
192 'security_groups': params['security_groups'],
193 'placement': requested_zone,
194 'key_name': params['key_name'],
195 'kernel_id': params.get('kernel', None),
196 }
197 # add user_data if provided in params
198 user_data = params.get('user_data')
199 if user_data:
200 kwargs.update({'user_data': user_data})
201 # params override defaults
202 kwargs.update(param_kwargs)
203 return kwargs
204
205
206 def _format_metadata(boto):
207 return {
208 'architecture': boto.architecture,
209 'block_device_mapping': {
210 k: v.volume_id for k, v in boto.block_device_mapping.items()
211 },
212 'client_token': boto.client_token,
213 'dns_name': boto.dns_name,
214 'ebs_optimized': boto.ebs_optimized,
215 'eventsSet': boto.eventsSet,
216 'group_name': boto.group_name,
217 'groups': [g.id for g in boto.groups],
218 'hypervisor': boto.hypervisor,
219 'id': boto.id,
220 'image_id': boto.image_id,
221 'instance_profile': boto.instance_profile,
222 'instance_type': boto.instance_type,
223 'interfaces': list(boto.interfaces),
224 'ip_address': boto.ip_address,
225 'kernel': boto.kernel,
226 'key_name': boto.key_name,
227 'launch_time': boto.launch_time,
228 'monitored': boto.monitored,
229 'monitoring_state': boto.monitoring_state,
230 'persistent': boto.persistent,
231 'placement': boto.placement,
232 'placement_group': boto.placement_group,
233 'placement_tenancy': boto.placement_tenancy,
234 'previous_state': boto.previous_state,
235 'private_dns_name': boto.private_dns_name,
236 'private_ip_address': boto.private_ip_address,
237 'public_dns_name': boto.public_dns_name,
238 'ramdisk': boto.ramdisk,
239 'region': boto.region.name,
240 'root_device_name': boto.root_device_name,
241 'root_device_type': boto.root_device_type,
242 'spot_instance_request_id': boto.spot_instance_request_id,
243 'state': boto.state,
244 'state_code': boto.state_code,
245 'state_reason': boto.state_reason,
246 'subnet_id': boto.subnet_id,
247 'tags': dict(boto.tags),
248 'virtualization_type': boto.virtualization_type,
249 'vpc_id': boto.vpc_id,
250 }
251
[end of provider/ec2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/provider/ec2.py b/provider/ec2.py
--- a/provider/ec2.py
+++ b/provider/ec2.py
@@ -17,14 +17,14 @@
# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,
# and large docker images (e.g. buildstep) pre-installed
IMAGE_MAP = {
- 'ap-northeast-1': 'ami-6da8356c',
- 'ap-southeast-1': 'ami-a66f24f4',
- 'ap-southeast-2': 'ami-d5f66bef',
- 'eu-west-1': 'ami-acbf5adb',
- 'sa-east-1': 'ami-f9fd5ae4',
- 'us-east-1': 'ami-69f3bc00',
- 'us-west-1': 'ami-f0695cb5',
- 'us-west-2': 'ami-ea1e82da',
+ 'ap-northeast-1': 'ami-d95ac4d8',
+ 'ap-southeast-1': 'ami-1823694a',
+ 'ap-southeast-2': 'ami-e56af7df',
+ 'eu-west-1': 'ami-7447a003',
+ 'sa-east-1': 'ami-334bec2e',
+ 'us-east-1': 'ami-493d6a20',
+ 'us-west-1': 'ami-0e2b1f4b',
+ 'us-west-2': 'ami-72e27c42',
}
| {"golden_diff": "diff --git a/provider/ec2.py b/provider/ec2.py\n--- a/provider/ec2.py\n+++ b/provider/ec2.py\n@@ -17,14 +17,14 @@\n # Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n # and large docker images (e.g. buildstep) pre-installed\n IMAGE_MAP = {\n- 'ap-northeast-1': 'ami-6da8356c',\n- 'ap-southeast-1': 'ami-a66f24f4',\n- 'ap-southeast-2': 'ami-d5f66bef',\n- 'eu-west-1': 'ami-acbf5adb',\n- 'sa-east-1': 'ami-f9fd5ae4',\n- 'us-east-1': 'ami-69f3bc00',\n- 'us-west-1': 'ami-f0695cb5',\n- 'us-west-2': 'ami-ea1e82da',\n+ 'ap-northeast-1': 'ami-d95ac4d8',\n+ 'ap-southeast-1': 'ami-1823694a',\n+ 'ap-southeast-2': 'ami-e56af7df',\n+ 'eu-west-1': 'ami-7447a003',\n+ 'sa-east-1': 'ami-334bec2e',\n+ 'us-east-1': 'ami-493d6a20',\n+ 'us-west-1': 'ami-0e2b1f4b',\n+ 'us-west-2': 'ami-72e27c42',\n }\n", "issue": "Update AMIs in all EC2 regions\nOur images are behind on some kernel and security updates and should be re-published as v0.1.0 versions. It's also kind of a performance optimization since we do apt-get upgrade during bootstrap.\n\n", "before_files": [{"content": "\"\"\"\nDeis cloud provider implementation for Amazon EC2.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport json\nimport time\n\nfrom boto import ec2\nfrom boto.exception import EC2ResponseError\n\n# from api.ssh import connect_ssh, exec_ssh\nfrom deis import settings\n\n\n# Deis-optimized EC2 amis -- with 3.8 kernel, chef 11 deps,\n# and large docker images (e.g. buildstep) pre-installed\nIMAGE_MAP = {\n 'ap-northeast-1': 'ami-6da8356c',\n 'ap-southeast-1': 'ami-a66f24f4',\n 'ap-southeast-2': 'ami-d5f66bef',\n 'eu-west-1': 'ami-acbf5adb',\n 'sa-east-1': 'ami-f9fd5ae4',\n 'us-east-1': 'ami-69f3bc00',\n 'us-west-1': 'ami-f0695cb5',\n 'us-west-2': 'ami-ea1e82da',\n}\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for each EC2 region.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for r in ('us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',\n 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2',\n 'sa-east-1'):\n flavors.append({'id': 'ec2-{}'.format(r),\n 'provider': 'ec2',\n 'params': json.dumps({\n 'region': r,\n 'image': IMAGE_MAP[r],\n 'zone': 'any',\n 'size': 'm1.medium'})})\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(layer['creds'], region)\n # create a new sg and authorize all ports\n # use iptables on the host to firewall ports\n name = \"{formation}-{id}\".format(**layer)\n sg = conn.create_security_group(name, 'Created by Deis')\n # import a new keypair using the layer key material\n conn.import_key_pair(name, layer['ssh_public_key'])\n # loop until the sg is *actually* there\n for i in xrange(10):\n try:\n sg.authorize(ip_protocol='tcp', from_port=1, to_port=65535,\n cidr_ip='0.0.0.0/0')\n break\n except EC2ResponseError:\n if i < 10:\n time.sleep(1.5)\n continue\n else:\n raise RuntimeError('Failed to authorize security group')\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n region = layer['params'].get('region', 'us-east-1')\n name = \"{formation}-{id}\".format(**layer)\n conn = _create_ec2_connection(layer['creds'], region)\n conn.delete_key_pair(name)\n # there's an ec2 race condition on instances terminating\n # successfully but still holding a lock on 
the security group\n # let's take a nap\n time.sleep(5)\n try:\n conn.delete_security_group(name)\n except EC2ResponseError as e:\n if e.code != 'InvalidGroup.NotFound':\n raise e\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n params, creds = node['params'], node['creds']\n region = params.setdefault('region', 'us-east-1')\n conn = _create_ec2_connection(creds, region)\n name = \"{formation}-{layer}\".format(**node)\n params['key_name'] = name\n sg = conn.get_all_security_groups(name)[0]\n params.setdefault('security_groups', []).append(sg.name)\n image_id = params.get(\n 'image', getattr(settings, 'IMAGE_MAP', IMAGE_MAP)[region])\n images = conn.get_all_images([image_id])\n if len(images) != 1:\n raise LookupError('Could not find AMI: %s' % image_id)\n image = images[0]\n kwargs = _prepare_run_kwargs(params)\n reservation = image.run(**kwargs)\n instances = reservation.instances\n boto = instances[0]\n # sleep before tagging\n time.sleep(10)\n boto.update()\n boto.add_tag('Name', node['id'])\n # loop until running\n while(True):\n time.sleep(2)\n boto.update()\n if boto.state == 'running':\n break\n # prepare return values\n provider_id = boto.id\n fqdn = boto.public_dns_name\n metadata = _format_metadata(boto)\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n \"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n provider_id = node['provider_id']\n region = node['params'].get('region', 'us-east-1')\n conn = _create_ec2_connection(node['creds'], region)\n if provider_id:\n conn.terminate_instances([provider_id])\n i = conn.get_all_instances([provider_id])[0].instances[0]\n while(True):\n time.sleep(2)\n i.update()\n if i.state == \"terminated\":\n break\n\n\ndef _create_ec2_connection(creds, region):\n \"\"\"\n Connect to an EC2 region with the given credentials.\n\n :param creds: a dict containing an EC2 access_key and secret_key\n :region: the name of an EC2 region, such as \"us-west-2\"\n :rtype: a connected :class:`~boto.ec2.connection.EC2Connection`\n :raises EnvironmentError: if no credentials are provided\n \"\"\"\n if not creds:\n raise EnvironmentError('No credentials provided')\n return ec2.connect_to_region(region,\n aws_access_key_id=creds['access_key'],\n aws_secret_access_key=creds['secret_key'])\n\n\ndef _prepare_run_kwargs(params):\n # start with sane defaults\n kwargs = {\n 'min_count': 1, 'max_count': 1,\n 'user_data': None, 'addressing_type': None,\n 'instance_type': None, 'placement': None,\n 'kernel_id': None, 'ramdisk_id': None,\n 'monitoring_enabled': False, 'subnet_id': None,\n 'block_device_map': None,\n }\n # convert zone \"any\" to NoneType\n requested_zone = params.get('zone')\n if requested_zone and requested_zone.lower() == 'any':\n requested_zone = None\n # lookup kwargs from params\n param_kwargs = {\n 'instance_type': params.get('size', 'm1.medium'),\n 'security_groups': params['security_groups'],\n 'placement': requested_zone,\n 'key_name': params['key_name'],\n 'kernel_id': params.get('kernel', None),\n }\n # add user_data if provided in params\n user_data = params.get('user_data')\n if user_data:\n kwargs.update({'user_data': user_data})\n # params override defaults\n kwargs.update(param_kwargs)\n return kwargs\n\n\ndef _format_metadata(boto):\n return {\n 'architecture': boto.architecture,\n 'block_device_mapping': 
{\n k: v.volume_id for k, v in boto.block_device_mapping.items()\n },\n 'client_token': boto.client_token,\n 'dns_name': boto.dns_name,\n 'ebs_optimized': boto.ebs_optimized,\n 'eventsSet': boto.eventsSet,\n 'group_name': boto.group_name,\n 'groups': [g.id for g in boto.groups],\n 'hypervisor': boto.hypervisor,\n 'id': boto.id,\n 'image_id': boto.image_id,\n 'instance_profile': boto.instance_profile,\n 'instance_type': boto.instance_type,\n 'interfaces': list(boto.interfaces),\n 'ip_address': boto.ip_address,\n 'kernel': boto.kernel,\n 'key_name': boto.key_name,\n 'launch_time': boto.launch_time,\n 'monitored': boto.monitored,\n 'monitoring_state': boto.monitoring_state,\n 'persistent': boto.persistent,\n 'placement': boto.placement,\n 'placement_group': boto.placement_group,\n 'placement_tenancy': boto.placement_tenancy,\n 'previous_state': boto.previous_state,\n 'private_dns_name': boto.private_dns_name,\n 'private_ip_address': boto.private_ip_address,\n 'public_dns_name': boto.public_dns_name,\n 'ramdisk': boto.ramdisk,\n 'region': boto.region.name,\n 'root_device_name': boto.root_device_name,\n 'root_device_type': boto.root_device_type,\n 'spot_instance_request_id': boto.spot_instance_request_id,\n 'state': boto.state,\n 'state_code': boto.state_code,\n 'state_reason': boto.state_reason,\n 'subnet_id': boto.subnet_id,\n 'tags': dict(boto.tags),\n 'virtualization_type': boto.virtualization_type,\n 'vpc_id': boto.vpc_id,\n }\n", "path": "provider/ec2.py"}]} | 3,330 | 392 |
gh_patches_debug_33155 | rasdani/github-patches | git_diff | fossasia__open-event-server-6566 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Call for Speakers Signup Form: "Don't require this email" possible for everyone
Server issue for [fossasia/open-event-frontend#3506](https://github.com/fossasia/open-event-frontend/issues/3506)
</issue>
<code>
[start of app/api/speakers.py]
1 from flask import request
2 from flask_login import current_user
3 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
4 from flask_rest_jsonapi.exceptions import ObjectNotFound
5
6 from app.api.bootstrap import api
7 from app.api.helpers.db import safe_query, get_count, save_to_db
8 from app.api.helpers.exceptions import ForbiddenException
9 from app.api.helpers.permission_manager import has_access
10 from app.api.helpers.query import event_query
11 from app.api.helpers.utilities import require_relationship
12 from app.api.schema.speakers import SpeakerSchema
13 from app.models import db
14 from app.models.event import Event
15 from app.models.session import Session
16 from app.models.speaker import Speaker
17 from app.models.session_speaker_link import SessionsSpeakersLink
18 from app.models.user import User
19
20
21 class SpeakerListPost(ResourceList):
22 """
23 List and create speakers
24 """
25
26 def before_post(self, args, kwargs, data=None):
27 """
28 method to add user_id to view_kwargs before post
29 :param args:
30 :param kwargs:
31 :param data:
32 :return:
33 """
34 require_relationship(['event', 'user'], data)
35
36 if not has_access('is_coorganizer', event_id=data['event']):
37 event = db.session.query(Event).filter_by(id=data['event']).one()
38 if event.state == "draft":
39 raise ObjectNotFound({'parameter': 'event_id'},
40 "Event: {} not found".format(data['event_id']))
41
42 if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:
43 raise ForbiddenException({'pointer': ''}, "Speakers are disabled for this Event")
44
45 if not data.get('is_email_overridden') and \
46 get_count(db.session.query(Speaker).filter_by(event_id=int(data['event']), email=data['email'],
47 deleted_at=None)) > 0:
48 raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')
49
50 if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):
51 raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
52 'Organizer access required to override email')
53 elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \
54 not data.get('email'):
55 data['email'] = current_user.email
56
57 if 'sessions' in data:
58 session_ids = data['sessions']
59 for session_id in session_ids:
60 if not has_access('is_session_self_submitted', session_id=session_id):
61 raise ObjectNotFound({'parameter': 'session_id'},
62 "Session: {} not found".format(session_id))
63
64 def after_create_object(self, speaker, data, view_kwargs):
65 """
66 after create method to save resized images for speaker
67 :param speaker:
68 :param data:
69 :param view_kwargs:
70 :return:
71 """
72
73 if data.get('photo_url'):
74 start_image_resizing_tasks(speaker, data['photo_url'])
75
76 schema = SpeakerSchema
77 methods = ['POST', ]
78 data_layer = {'session': db.session,
79 'model': Speaker,
80 'methods': {
81 'after_create_object': after_create_object
82 }}
83
84
85 class SpeakerList(ResourceList):
86 """
87 List speakers based on different params from view_kwargs
88 """
89
90 def query(self, view_kwargs):
91 """
92 query method for speakers list class
93 :param view_kwargs:
94 :return:
95 """
96 query_ = self.session.query(Speaker)
97 query_ = event_query(self, query_, view_kwargs)
98
99 if view_kwargs.get('user_id'):
100 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
101 query_ = query_.join(User).filter(User.id == user.id)
102
103 if view_kwargs.get('session_id'):
104 session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
105 # session-speaker :: many-to-many relationship
106 query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))
107 if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):
108 if not has_access('is_session_self_submitted', session_id=session.id):
109 query_ = query_.filter(Session.state == "approved" or Session.state == "accepted")
110
111 return query_
112
113 view_kwargs = True
114 schema = SpeakerSchema
115 methods = ['GET', ]
116 data_layer = {'session': db.session,
117 'model': Speaker,
118 'methods': {
119 'query': query,
120 }}
121
122
123 class SpeakerDetail(ResourceDetail):
124 """
125 Speakers Detail by id
126 """
127 def before_update_object(self, speaker, data, view_kwargs):
128 """
129 method to save image urls before updating speaker object
130 :param speaker:
131 :param data:
132 :param view_kwargs:
133 :return:
134 """
135 if data.get('photo_url') and data['photo_url'] != speaker.photo_url:
136 start_image_resizing_tasks(speaker, data['photo_url'])
137
138 if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):
139 raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
140 'Organizer access required to override email')
141 elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \
142 not data.get('email'):
143 data['email'] = current_user.email
144
145 def after_patch(self, result):
146 """
147 method to create session speaker link
148 :param result:
149 """
150 # This method is executed when a new speaker is created
151 # and added to an existing session
152 speaker_id = result['data']['id']
153 speaker = Speaker.query.filter_by(id=speaker_id).first()
154 if SessionsSpeakersLink.query.filter_by(speaker_id=speaker_id).count() == 0:
155 all_sessions = Session.query.filter_by(deleted_at=None)
156 for session in all_sessions:
157 if speaker in session.speakers:
158 session_speaker_link = SessionsSpeakersLink(session_state=session.state,
159 session_id=session.id,
160 event_id=session.event.id,
161 speaker_id=speaker.id)
162 save_to_db(session_speaker_link, "Session Speaker Link Saved")
163
164 decorators = (api.has_permission('is_speaker_itself_or_admin', methods="PATCH,DELETE", fetch="event_id",
165 fetch_as="event_id", model=Speaker),
166 api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
167 fetch_as="event_id", model=Speaker),)
168 schema = SpeakerSchema
169 data_layer = {'session': db.session,
170 'model': Speaker,
171 'methods': {
172 'before_update_object': before_update_object
173 }}
174
175
176 class SpeakerRelationshipRequired(ResourceRelationship):
177 """
178 Speaker Relationship class for required entities
179 """
180 decorators = (api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
181 fetch_as="event_id", model=Speaker),)
182 methods = ['GET', 'PATCH']
183 schema = SpeakerSchema
184 data_layer = {'session': db.session,
185 'model': Speaker}
186
187
188 class SpeakerRelationshipOptional(ResourceRelationship):
189 """
190 Speaker Relationship class
191 """
192 decorators = (api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
193 fetch_as="event_id", model=Speaker),)
194 schema = SpeakerSchema
195 data_layer = {'session': db.session,
196 'model': Speaker}
197
198
199 def start_image_resizing_tasks(speaker, photo_url):
200 speaker_id = str(speaker.id)
201 from .helpers.tasks import resize_speaker_images_task
202 resize_speaker_images_task.delay(speaker_id, photo_url)
203
[end of app/api/speakers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/speakers.py b/app/api/speakers.py
--- a/app/api/speakers.py
+++ b/app/api/speakers.py
@@ -47,10 +47,10 @@
deleted_at=None)) > 0:
raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')
- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):
- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=data['event']):
+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},
'Organizer access required to override email')
- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \
+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=data['event']) and \
not data.get('email'):
data['email'] = current_user.email
@@ -135,10 +135,10 @@
if data.get('photo_url') and data['photo_url'] != speaker.photo_url:
start_image_resizing_tasks(speaker, data['photo_url'])
- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):
- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},
+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=speaker.event_id):
+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},
'Organizer access required to override email')
- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \
+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=speaker.event_id) and \
not data.get('email'):
data['email'] = current_user.email
| {"golden_diff": "diff --git a/app/api/speakers.py b/app/api/speakers.py\n--- a/app/api/speakers.py\n+++ b/app/api/speakers.py\n@@ -47,10 +47,10 @@\n deleted_at=None)) > 0:\n raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')\n \n- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):\n- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=data['event']):\n+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},\n 'Organizer access required to override email')\n- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \\\n+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=data['event']) and \\\n not data.get('email'):\n data['email'] = current_user.email\n \n@@ -135,10 +135,10 @@\n if data.get('photo_url') and data['photo_url'] != speaker.photo_url:\n start_image_resizing_tasks(speaker, data['photo_url'])\n \n- if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):\n- raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n+ if data.get('is_email_overridden') and not has_access('is_organizer', event_id=speaker.event_id):\n+ raise ForbiddenException({'pointer': 'data/attributes/is_email_overridden'},\n 'Organizer access required to override email')\n- elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n+ elif data.get('is_email_overridden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n not data.get('email'):\n data['email'] = current_user.email\n", "issue": "Call for Speakers Signup Form: \"Don't require this email\" possible for everyone\nServer issue for [fossasia/open-event-frontend#3506](https://github.com/fossasia/open-event-frontend/issues/3506) \n", "before_files": [{"content": "from flask import request\nfrom flask_login import current_user\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count, save_to_db\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.speakers import SpeakerSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.session_speaker_link import SessionsSpeakersLink\nfrom app.models.user import User\n\n\nclass SpeakerListPost(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n\n def before_post(self, args, kwargs, data=None):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n event = db.session.query(Event).filter_by(id=data['event']).one()\n if event.state == \"draft\":\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:\n raise 
ForbiddenException({'pointer': ''}, \"Speakers are disabled for this Event\")\n\n if not data.get('is_email_overridden') and \\\n get_count(db.session.query(Speaker).filter_by(event_id=int(data['event']), email=data['email'],\n deleted_at=None)) > 0:\n raise ForbiddenException({'pointer': ''}, 'Speaker with this Email ID already exists')\n\n if data.get('is_email_overriden') and not has_access('is_organizer', event_id=data['event']):\n raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n 'Organizer access required to override email')\n elif data.get('is_email_overriden') and has_access('is_organizer', event_id=data['event']) and \\\n not data.get('email'):\n data['email'] = current_user.email\n\n if 'sessions' in data:\n session_ids = data['sessions']\n for session_id in session_ids:\n if not has_access('is_session_self_submitted', session_id=session_id):\n raise ObjectNotFound({'parameter': 'session_id'},\n \"Session: {} not found\".format(session_id))\n\n def after_create_object(self, speaker, data, view_kwargs):\n \"\"\"\n after create method to save resized images for speaker\n :param speaker:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n\n if data.get('photo_url'):\n start_image_resizing_tasks(speaker, data['photo_url'])\n\n schema = SpeakerSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'after_create_object': after_create_object\n }}\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List speakers based on different params from view_kwargs\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n query_ = event_query(self, query_, view_kwargs)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):\n if not has_access('is_session_self_submitted', session_id=session.id):\n query_ = query_.filter(Session.state == \"approved\" or Session.state == \"accepted\")\n\n return query_\n\n view_kwargs = True\n schema = SpeakerSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n def before_update_object(self, speaker, data, view_kwargs):\n \"\"\"\n method to save image urls before updating speaker object\n :param speaker:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if data.get('photo_url') and data['photo_url'] != speaker.photo_url:\n start_image_resizing_tasks(speaker, data['photo_url'])\n\n if data.get('is_email_overriden') and not has_access('is_organizer', event_id=speaker.event_id):\n raise ForbiddenException({'pointer': 'data/attributes/is_email_overriden'},\n 'Organizer access required to override email')\n elif data.get('is_email_overriden') and has_access('is_organizer', event_id=speaker.event_id) and \\\n not data.get('email'):\n data['email'] = current_user.email\n\n def after_patch(self, result):\n \"\"\"\n method to create session speaker link\n :param 
result:\n \"\"\"\n # This method is executed when a new speaker is created\n # and added to an existing session\n speaker_id = result['data']['id']\n speaker = Speaker.query.filter_by(id=speaker_id).first()\n if SessionsSpeakersLink.query.filter_by(speaker_id=speaker_id).count() == 0:\n all_sessions = Session.query.filter_by(deleted_at=None)\n for session in all_sessions:\n if speaker in session.speakers:\n session_speaker_link = SessionsSpeakersLink(session_state=session.state,\n session_id=session.id,\n event_id=session.event.id,\n speaker_id=speaker.id)\n save_to_db(session_speaker_link, \"Session Speaker Link Saved\")\n\n decorators = (api.has_permission('is_speaker_itself_or_admin', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),\n api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'before_update_object': before_update_object\n }}\n\n\nclass SpeakerRelationshipRequired(ResourceRelationship):\n \"\"\"\n Speaker Relationship class for required entities\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n methods = ['GET', 'PATCH']\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipOptional(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\ndef start_image_resizing_tasks(speaker, photo_url):\n speaker_id = str(speaker.id)\n from .helpers.tasks import resize_speaker_images_task\n resize_speaker_images_task.delay(speaker_id, photo_url)\n", "path": "app/api/speakers.py"}]} | 2,801 | 480 |
gh_patches_debug_57504 | rasdani/github-patches | git_diff | dotkom__onlineweb4-745 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Filtering my events doesn't work
_Actually I'm not even sure if it's just my local setup that's fucking around or not, but this doesn't seem to work at all.
I can't check with moonshine or prod because everything is down atm, so I'm just making this before I forget._
```
if filters['myevents'] == 'true':
kwargs['attendance_event__attendees'] = request.user
events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(
'attendance_event', 'attendance_event__attendees')
```
in events/views.py _search_indexed
Comparing attendance_event__attendees (Attendee) with request.user (OnlineUser) doesn't make sense.
It should be attendance_event__attendees__user which from limited testing seems to work.
</issue>
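For illustration, a minimal sketch of the corrected lookup described above. The helper name is invented for the example; the `Event` import path and the filter/prefetch strings match the view module shown below.

```
from django.utils import timezone

from apps.events.models import Event


def my_upcoming_events(user):
    """Future events the given user is signed up for (sketch only)."""
    return (
        Event.objects.filter(
            event_start__gte=timezone.now(),
            # attendance_event__attendees resolves to Attendee rows, so the
            # extra __user hop is required before comparing against the user.
            attendance_event__attendees__user=user,
        )
        .order_by('event_start')
        .prefetch_related('attendance_event', 'attendance_event__attendees')
    )
```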
<code>
[start of apps/events/views.py]
1 #-*- coding: utf-8 -*-
2
3 import datetime
4
5 from django.utils import timezone
6
7 from django.conf import settings
8 from django.contrib import messages
9 from django.contrib.auth.decorators import login_required, user_passes_test
10 from django.core.urlresolvers import reverse
11 from django.http import HttpResponseRedirect
12 from django.shortcuts import render, get_object_or_404, redirect
13 from django.utils.translation import ugettext as _
14
15 import watson
16
17 from apps.events.forms import CaptchaForm
18 from apps.events.models import Event, AttendanceEvent, Attendee
19 from apps.events.pdf_generator import EventPDF
20
21
22 def index(request):
23 return render(request, 'events/index.html', {})
24
25 def details(request, event_id, event_slug):
26 event = get_object_or_404(Event, pk=event_id)
27
28 is_attendance_event = False
29 user_anonymous = True
30 user_attending = False
31 place_on_wait_list = 0
32 will_be_on_wait_list = False
33 rules = []
34 user_status = False
35
36 try:
37 attendance_event = AttendanceEvent.objects.get(pk=event_id)
38 is_attendance_event = True
39 form = CaptchaForm(user=request.user)
40
41 if attendance_event.rule_bundles:
42 for rule_bundle in attendance_event.rule_bundles.all():
43 rules.append(rule_bundle.get_rule_strings)
44
45 if request.user.is_authenticated():
46 user_anonymous = False
47 if attendance_event.is_attendee(request.user):
48 user_attending = True
49
50
51 will_be_on_wait_list = attendance_event.will_i_be_on_wait_list
52
53 user_status = event.is_eligible_for_signup(request.user)
54
55 # Check if this user is on the waitlist
56 place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)
57
58 except AttendanceEvent.DoesNotExist:
59 pass
60
61 if is_attendance_event:
62 context = {
63 'now': timezone.now(),
64 'event': event,
65 'attendance_event': attendance_event,
66 'user_anonymous': user_anonymous,
67 'user_attending': user_attending,
68 'will_be_on_wait_list': will_be_on_wait_list,
69 'rules': rules,
70 'user_status': user_status,
71 'place_on_wait_list': int(place_on_wait_list),
72 #'position_in_wait_list': position_in_wait_list,
73 'captcha_form': form,
74 }
75
76 return render(request, 'events/details.html', context)
77 else:
78 return render(request, 'events/details.html', {'event': event})
79
80
81 def get_attendee(attendee_id):
82 return get_object_or_404(Attendee, pk=attendee_id)
83
84 @login_required
85 def attendEvent(request, event_id):
86
87 event = get_object_or_404(Event, pk=event_id)
88
89 if not request.POST:
90 messages.error(request, _(u'Vennligst fyll ut skjemaet.'))
91 return redirect(event)
92
93 form = CaptchaForm(request.POST, user=request.user)
94
95 if not form.is_valid():
96 for field,errors in form.errors.items():
97 for error in errors:
98 messages.error(request, error)
99
100 return redirect(event)
101
102 # Check if the user is eligible to attend this event.
103 # If not, an error message will be present in the returned dict
104 attendance_event = event.attendance_event
105
106 response = event.is_eligible_for_signup(request.user);
107
108 if response['status']:
109 Attendee(event=attendance_event, user=request.user).save()
110 messages.success(request, _(u"Du er nå påmeldt på arrangementet!"))
111 return redirect(event)
112 else:
113 messages.error(request, response['message'])
114 return redirect(event)
115
116 @login_required
117 def unattendEvent(request, event_id):
118
119 event = get_object_or_404(Event, pk=event_id)
120 attendance_event = event.attendance_event
121
122 # Check if the deadline for unattending has passed
123 if attendance_event.unattend_deadline < timezone.now():
124 messages.error(request, _(u"Avmeldingsfristen for dette arrangementet har utløpt."))
125 return redirect(event)
126
127 event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=request.user)
128 Attendee.objects.get(event=attendance_event, user=request.user).delete()
129
130 messages.success(request, _(u"Du ble meldt av arrangementet."))
131 return redirect(event)
132
133 def search_events(request):
134 query = request.GET.get('query')
135 filters = {
136 'future' : request.GET.get('future'),
137 'myevents' : request.GET.get('myevents')
138 }
139 events = _search_indexed(request, query, filters)
140
141 return render(request, 'events/search.html', {'events': events})
142
143
144 def _search_indexed(request, query, filters):
145 results = []
146 kwargs = {}
147
148 if filters['future'] == 'true':
149 kwargs['event_start__gte'] = timezone.now()
150
151 if filters['myevents'] == 'true':
152 kwargs['attendance_event__attendees'] = request.user
153
154 events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(
155 'attendance_event', 'attendance_event__attendees')
156
157 if query:
158 for result in watson.search(query, models=(events,)):
159 results.append(result.object)
160 return results[:10]
161
162 return events
163
164
165 @login_required()
166 @user_passes_test(lambda u: u.groups.filter(name='Komiteer').count() == 1)
167 def generate_pdf(request, event_id):
168
169 event = get_object_or_404(Event, pk=event_id)
170
171 groups = request.user.groups.all()
172 if not (groups.filter(name='dotKom').count() == 1 or groups.filter(name='Hovedstyret').count() == 1):
173 if event.event_type == 1 and not groups.filter(name='arrKom').count() == 1:
174 messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))
175 return redirect(event)
176
177 if event.event_type == 2 and not groups.filter(name='bedKom').count() == 1:
178 messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))
179 return redirect(event)
180
181 if event.event_type == 3 and not groups.filter(name='fagKom').count() == 1:
182 messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))
183 return redirect(event)
184
185 return EventPDF(event).render_pdf()
186
[end of apps/events/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/events/views.py b/apps/events/views.py
--- a/apps/events/views.py
+++ b/apps/events/views.py
@@ -149,7 +149,7 @@
kwargs['event_start__gte'] = timezone.now()
if filters['myevents'] == 'true':
- kwargs['attendance_event__attendees'] = request.user
+ kwargs['attendance_event__attendees__user'] = request.user
events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(
'attendance_event', 'attendance_event__attendees')
| {"golden_diff": "diff --git a/apps/events/views.py b/apps/events/views.py\n--- a/apps/events/views.py\n+++ b/apps/events/views.py\n@@ -149,7 +149,7 @@\n kwargs['event_start__gte'] = timezone.now()\n \n if filters['myevents'] == 'true':\n- kwargs['attendance_event__attendees'] = request.user\n+ kwargs['attendance_event__attendees__user'] = request.user\n \n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n", "issue": "Filtering my events doesn't work\n_Actually I'm not even sure if it's just my local setup that's fucking around or not, but this doesn't seem to work at all.\nI can't check with moonshine or prod because everything is down atm, so I'm just making this before I forget._\n\n```\nif filters['myevents'] == 'true':\n kwargs['attendance_event__attendees'] = request.user\n\n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n```\n\nin events/views.py _search_indexed\n\nComparing attendance_event__attendees (Attendee) with request.user (OnlineUser) doesn't make sense. \n\nIt should be attendance_event__attendees__user which from limited testing seems to work. \n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\n\nimport datetime\n\nfrom django.utils import timezone\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required, user_passes_test\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\n\nimport watson\n\nfrom apps.events.forms import CaptchaForm\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.events.pdf_generator import EventPDF\n\n\ndef index(request):\n return render(request, 'events/index.html', {})\n\ndef details(request, event_id, event_slug):\n event = get_object_or_404(Event, pk=event_id)\n\n is_attendance_event = False\n user_anonymous = True\n user_attending = False\n place_on_wait_list = 0\n will_be_on_wait_list = False\n rules = []\n user_status = False\n\n try:\n attendance_event = AttendanceEvent.objects.get(pk=event_id)\n is_attendance_event = True\n form = CaptchaForm(user=request.user)\n\n if attendance_event.rule_bundles:\n for rule_bundle in attendance_event.rule_bundles.all():\n rules.append(rule_bundle.get_rule_strings)\n\n if request.user.is_authenticated():\n user_anonymous = False\n if attendance_event.is_attendee(request.user):\n user_attending = True\n\n \n will_be_on_wait_list = attendance_event.will_i_be_on_wait_list\n\n user_status = event.is_eligible_for_signup(request.user)\n\n # Check if this user is on the waitlist\n place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)\n\n except AttendanceEvent.DoesNotExist:\n pass\n\n if is_attendance_event:\n context = {\n 'now': timezone.now(),\n 'event': event,\n 'attendance_event': attendance_event,\n 'user_anonymous': user_anonymous,\n 'user_attending': user_attending,\n 'will_be_on_wait_list': will_be_on_wait_list,\n 'rules': rules,\n 'user_status': user_status,\n 'place_on_wait_list': int(place_on_wait_list),\n #'position_in_wait_list': position_in_wait_list,\n 'captcha_form': form,\n }\n \n return render(request, 'events/details.html', context)\n else:\n return render(request, 'events/details.html', {'event': event})\n\n\ndef 
get_attendee(attendee_id):\n return get_object_or_404(Attendee, pk=attendee_id)\n\n@login_required\ndef attendEvent(request, event_id):\n \n event = get_object_or_404(Event, pk=event_id)\n\n if not request.POST:\n messages.error(request, _(u'Vennligst fyll ut skjemaet.'))\n return redirect(event)\n\n form = CaptchaForm(request.POST, user=request.user)\n\n if not form.is_valid():\n for field,errors in form.errors.items():\n for error in errors:\n messages.error(request, error)\n\n return redirect(event)\n\n # Check if the user is eligible to attend this event.\n # If not, an error message will be present in the returned dict\n attendance_event = event.attendance_event\n\n response = event.is_eligible_for_signup(request.user);\n\n if response['status']: \n Attendee(event=attendance_event, user=request.user).save()\n messages.success(request, _(u\"Du er n\u00e5 p\u00e5meldt p\u00e5 arrangementet!\"))\n return redirect(event)\n else:\n messages.error(request, response['message'])\n return redirect(event)\n\n@login_required\ndef unattendEvent(request, event_id):\n\n event = get_object_or_404(Event, pk=event_id)\n attendance_event = event.attendance_event\n\n # Check if the deadline for unattending has passed\n if attendance_event.unattend_deadline < timezone.now():\n messages.error(request, _(u\"Avmeldingsfristen for dette arrangementet har utl\u00f8pt.\"))\n return redirect(event)\n\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=request.user)\n Attendee.objects.get(event=attendance_event, user=request.user).delete()\n\n messages.success(request, _(u\"Du ble meldt av arrangementet.\"))\n return redirect(event)\n\ndef search_events(request):\n query = request.GET.get('query')\n filters = {\n 'future' : request.GET.get('future'),\n 'myevents' : request.GET.get('myevents')\n }\n events = _search_indexed(request, query, filters)\n\n return render(request, 'events/search.html', {'events': events})\n\n\ndef _search_indexed(request, query, filters):\n results = []\n kwargs = {}\n\n if filters['future'] == 'true':\n kwargs['event_start__gte'] = timezone.now()\n\n if filters['myevents'] == 'true':\n kwargs['attendance_event__attendees'] = request.user\n\n events = Event.objects.filter(**kwargs).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n\n if query:\n for result in watson.search(query, models=(events,)):\n results.append(result.object)\n return results[:10]\n\n return events\n\n\n@login_required()\n@user_passes_test(lambda u: u.groups.filter(name='Komiteer').count() == 1)\ndef generate_pdf(request, event_id):\n\n event = get_object_or_404(Event, pk=event_id)\n\n groups = request.user.groups.all()\n if not (groups.filter(name='dotKom').count() == 1 or groups.filter(name='Hovedstyret').count() == 1):\n if event.event_type == 1 and not groups.filter(name='arrKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))\n return redirect(event)\n\n if event.event_type == 2 and not groups.filter(name='bedKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.'))\n return redirect(event)\n\n if event.event_type == 3 and not groups.filter(name='fagKom').count() == 1:\n messages.error(request, _(u'Du har ikke tilgang til listen for dette arrangementet.')) \n return redirect(event)\n\n return EventPDF(event).render_pdf()\n", "path": "apps/events/views.py"}]} | 2,575 | 127 |
gh_patches_debug_30523 | rasdani/github-patches | git_diff | meltano__meltano-6695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use `pytest-randomly` plugin
https://github.com/pytest-dev/pytest-randomly
This plugin randomizes the order the tests are run in which prevents us from relying on their order to succeed. It also seeds various sources of randomness to improve reproducibility. For instance, if we encounter a test failure in CI due to randomness, we can run the tests locally with the `pytest --randomly-seed=<seed used in the CI test run>` to reproduce.
I've used this plugin before, and it has been helpful numerous times. I've already run into problems with Meltano's tests that are caused by their execution order. Using this plugin should force us to fix those issues, and prevent them from returning.
</issue>
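For context, a minimal sketch of a nox session that installs the plugin and lets a CI seed be replayed locally. The session name, Python version, and dependency list are illustrative rather than Meltano's actual configuration; the `--randomly-seed` flag is the reproduction switch quoted in the issue.

```
import nox


@nox.session(python="3.9")
def tests(session: nox.Session) -> None:
    # pytest-randomly shuffles test order and reports the seed it used,
    # so re-running with that seed reproduces the same ordering.
    session.install(".", "pytest", "pytest-randomly")
    # Extra posargs are forwarded straight to pytest, e.g.:
    #   nox -s tests -- --randomly-seed=123456789
    session.run("pytest", *session.posargs)
```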
<code>
[start of noxfile.py]
1 """Nox configuration."""
2
3 from __future__ import annotations
4
5 import os
6 import sys
7 from pathlib import Path
8 from textwrap import dedent
9
10 try:
11 from nox_poetry import Session
12 from nox_poetry import session as nox_session
13 except ImportError:
14 message = f"""\
15 Nox failed to import the 'nox-poetry' package.
16 Please install it using the following command:
17 {sys.executable} -m pip install nox-poetry"""
18 raise SystemExit(dedent(message)) from None
19
20
21 package = "meltano"
22 python_versions = ["3.10", "3.9", "3.8", "3.7"]
23 main_python_version = "3.9"
24 locations = "src", "tests", "noxfile.py"
25
26
27 @nox_session(python=python_versions)
28 def tests(session: Session) -> None:
29 """Execute pytest tests and compute coverage.
30
31 Args:
32 session: Nox session.
33 """
34 backend_db = os.environ.get("PYTEST_BACKEND", "sqlite")
35
36 if backend_db == "mssql":
37 session.install(".[mssql]")
38 else:
39 session.install(".")
40
41 session.install(
42 "coverage[toml]",
43 "freezegun",
44 "mock",
45 "pytest",
46 "pytest-asyncio",
47 "pytest-docker",
48 "requests-mock",
49 )
50
51 try:
52 session.run(
53 "coverage",
54 "run",
55 "--parallel",
56 "-m",
57 "pytest",
58 *session.posargs,
59 env={"NOX_CURRENT_SESSION": "tests"},
60 )
61 finally:
62 if session.interactive:
63 session.notify("coverage", posargs=[])
64
65
66 @nox_session(python=main_python_version)
67 def coverage(session: Session) -> None:
68 """Upload coverage data.
69
70 Args:
71 session: Nox session.
72 """
73 args = session.posargs or ["report"]
74
75 session.install("coverage[toml]")
76
77 if not session.posargs and any(Path().glob(".coverage.*")):
78 session.run("coverage", "combine")
79
80 session.run("coverage", *args)
81
[end of noxfile.py]
[start of src/meltano/core/db.py]
1 """Defines helpers related to the system database."""
2
3 from __future__ import annotations
4
5 import logging
6 import time
7
8 from sqlalchemy import create_engine
9 from sqlalchemy.engine import Connection, Engine
10 from sqlalchemy.exc import OperationalError
11 from sqlalchemy.orm import sessionmaker
12 from sqlalchemy.sql import text
13
14 from meltano.core.project import Project
15
16 from .project_settings_service import ProjectSettingsService
17
18 # Keep a Project → Engine mapping to serve
19 # the same engine for the same Project
20 _engines = {}
21
22
23 def project_engine(
24 project: Project,
25 default: bool = False,
26 ) -> tuple[Engine, sessionmaker]:
27 """Create and register a SQLAlchemy engine for a Meltano project instance.
28
29 Args:
30 project: The Meltano project that the engine will be connected to.
31 default: Whether the engine created should be stored as the default
32 engine for this project.
33
34 Returns:
35 The engine, and a session maker bound to the engine.
36 """
37 existing_engine = _engines.get(project)
38 if existing_engine:
39 return existing_engine
40
41 settings = ProjectSettingsService(project)
42
43 engine_uri = settings.get("database_uri")
44 logging.debug(f"Creating engine {project}@{engine_uri}")
45 engine = create_engine(engine_uri, pool_pre_ping=True)
46
47 # Connect to the database to ensure it is available.
48 connect(
49 engine,
50 max_retries=settings.get("database_max_retries"),
51 retry_timeout=settings.get("database_retry_timeout"),
52 )
53
54 init_hook(engine)
55
56 engine_session = (engine, sessionmaker(bind=engine))
57
58 if default:
59 # register the default engine
60 _engines[project] = engine_session
61
62 return engine_session
63
64
65 def connect(
66 engine: Engine,
67 max_retries: int,
68 retry_timeout: float,
69 ) -> Connection:
70 """Connect to the database.
71
72 Args:
73 engine: The DB engine with which the check will be performed.
74 max_retries: The maximum number of retries that will be attempted.
75 retry_timeout: The number of seconds to wait between retries.
76
77 Raises:
78 OperationalError: Error during DB connection - max retries exceeded.
79
80 Returns:
81 A connection to the database.
82 """
83 attempt = 0
84 while True:
85 try:
86 return engine.connect()
87 except OperationalError:
88 if attempt >= max_retries:
89 logging.error(
90 f"Could not connect to the database after {attempt} "
91 "attempts. Max retries exceeded."
92 )
93 raise
94 attempt += 1
95 logging.info(
96 f"DB connection failed. Will retry after {retry_timeout}s. "
97 f"Attempt {attempt}/{max_retries}"
98 )
99 time.sleep(retry_timeout)
100
101
102 init_hooks = {
103 "sqlite": lambda x: x.execute("PRAGMA journal_mode=WAL"),
104 }
105
106
107 def init_hook(engine: Engine) -> None:
108 """Run the initialization hook for the provided DB engine.
109
110 The initialization hooks are taken from the `meltano.core.db.init_hooks`
111 dictionary, which maps the dialect name of the engine to a unary function
112 which will be called with the provided DB engine.
113
114 Args:
115 engine: The engine for which the init hook will be run.
116
117 Raises:
118 Exception: The init hook raised an exception.
119 """
120 try:
121 hook = init_hooks[engine.dialect.name]
122 except KeyError:
123 return
124
125 try:
126 hook(engine)
127 except Exception as ex:
128 raise Exception(f"Failed to initialize database: {ex!s}") from ex
129
130
131 def ensure_schema_exists(
132 engine: Engine,
133 schema_name: str,
134 grant_roles: tuple[str] = (),
135 ) -> None:
136 """Ensure the specified `schema_name` exists in the database.
137
138 Args:
139 engine: The DB engine to be used.
140 schema_name: The name of the schema.
141 grant_roles: Roles to grant to the specified schema.
142 """
143 schema_identifier = schema_name
144 group_identifiers = ",".join(grant_roles)
145
146 create_schema = text(f"CREATE SCHEMA IF NOT EXISTS {schema_identifier}")
147 grant_select_schema = text(
148 f"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_identifier} GRANT SELECT ON TABLES TO {group_identifiers}"
149 )
150 grant_usage_schema = text(
151 f"GRANT USAGE ON SCHEMA {schema_identifier} TO {group_identifiers}"
152 )
153
154 with engine.connect() as conn, conn.begin():
155 conn.execute(create_schema)
156 if grant_roles:
157 conn.execute(grant_select_schema)
158 conn.execute(grant_usage_schema)
159
160 logging.info(f"Schema {schema_name} has been created successfully.")
161 for role in grant_roles:
162 logging.info(f"Usage has been granted for role: {role}.")
163
[end of src/meltano/core/db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -5,6 +5,7 @@
import os
import sys
from pathlib import Path
+from random import randint
from textwrap import dedent
try:
@@ -45,6 +46,8 @@
"pytest",
"pytest-asyncio",
"pytest-docker",
+ "pytest-order",
+ "pytest-randomly",
"requests-mock",
)
@@ -55,6 +58,7 @@
"--parallel",
"-m",
"pytest",
+ f"--randomly-seed={randint(0, 2**32-1)}", # noqa: S311, WPS432
*session.posargs,
env={"NOX_CURRENT_SESSION": "tests"},
)
diff --git a/src/meltano/core/db.py b/src/meltano/core/db.py
--- a/src/meltano/core/db.py
+++ b/src/meltano/core/db.py
@@ -9,6 +9,7 @@
from sqlalchemy.engine import Connection, Engine
from sqlalchemy.exc import OperationalError
from sqlalchemy.orm import sessionmaker
+from sqlalchemy.pool import NullPool
from sqlalchemy.sql import text
from meltano.core.project import Project
@@ -41,8 +42,9 @@
settings = ProjectSettingsService(project)
engine_uri = settings.get("database_uri")
- logging.debug(f"Creating engine {project}@{engine_uri}")
- engine = create_engine(engine_uri, pool_pre_ping=True)
+ logging.debug(f"Creating engine '{project}@{engine_uri}'")
+
+ engine = create_engine(engine_uri, poolclass=NullPool)
# Connect to the database to ensure it is available.
connect(
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -5,6 +5,7 @@\n import os\n import sys\n from pathlib import Path\n+from random import randint\n from textwrap import dedent\n \n try:\n@@ -45,6 +46,8 @@\n \"pytest\",\n \"pytest-asyncio\",\n \"pytest-docker\",\n+ \"pytest-order\",\n+ \"pytest-randomly\",\n \"requests-mock\",\n )\n \n@@ -55,6 +58,7 @@\n \"--parallel\",\n \"-m\",\n \"pytest\",\n+ f\"--randomly-seed={randint(0, 2**32-1)}\", # noqa: S311, WPS432\n *session.posargs,\n env={\"NOX_CURRENT_SESSION\": \"tests\"},\n )\ndiff --git a/src/meltano/core/db.py b/src/meltano/core/db.py\n--- a/src/meltano/core/db.py\n+++ b/src/meltano/core/db.py\n@@ -9,6 +9,7 @@\n from sqlalchemy.engine import Connection, Engine\n from sqlalchemy.exc import OperationalError\n from sqlalchemy.orm import sessionmaker\n+from sqlalchemy.pool import NullPool\n from sqlalchemy.sql import text\n \n from meltano.core.project import Project\n@@ -41,8 +42,9 @@\n settings = ProjectSettingsService(project)\n \n engine_uri = settings.get(\"database_uri\")\n- logging.debug(f\"Creating engine {project}@{engine_uri}\")\n- engine = create_engine(engine_uri, pool_pre_ping=True)\n+ logging.debug(f\"Creating engine '{project}@{engine_uri}'\")\n+\n+ engine = create_engine(engine_uri, poolclass=NullPool)\n \n # Connect to the database to ensure it is available.\n connect(\n", "issue": "Use `pytest-randomly` plugin\nhttps://github.com/pytest-dev/pytest-randomly\r\n\r\nThis plugin randomizes the order the tests are run in which prevents us from relying on their order to succeed. It also seeds various sources of randomness to improve reproducibility. For instance, if we encounter a test failure in CI due to randomness, we can run the tests locally with the `pytest --randomly-seed=<seed used in the CI test run>` to reproduce.\r\n\r\nI've used this plugin before, and it has been helpful numerous times. I've already run into problems with Meltano's tests that are caused by their execution order. 
Using this plugin should force us to fix those issues, and prevent them from returning.\n", "before_files": [{"content": "\"\"\"Nox configuration.\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom pathlib import Path\nfrom textwrap import dedent\n\ntry:\n from nox_poetry import Session\n from nox_poetry import session as nox_session\nexcept ImportError:\n message = f\"\"\"\\\n Nox failed to import the 'nox-poetry' package.\n Please install it using the following command:\n {sys.executable} -m pip install nox-poetry\"\"\"\n raise SystemExit(dedent(message)) from None\n\n\npackage = \"meltano\"\npython_versions = [\"3.10\", \"3.9\", \"3.8\", \"3.7\"]\nmain_python_version = \"3.9\"\nlocations = \"src\", \"tests\", \"noxfile.py\"\n\n\n@nox_session(python=python_versions)\ndef tests(session: Session) -> None:\n \"\"\"Execute pytest tests and compute coverage.\n\n Args:\n session: Nox session.\n \"\"\"\n backend_db = os.environ.get(\"PYTEST_BACKEND\", \"sqlite\")\n\n if backend_db == \"mssql\":\n session.install(\".[mssql]\")\n else:\n session.install(\".\")\n\n session.install(\n \"coverage[toml]\",\n \"freezegun\",\n \"mock\",\n \"pytest\",\n \"pytest-asyncio\",\n \"pytest-docker\",\n \"requests-mock\",\n )\n\n try:\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel\",\n \"-m\",\n \"pytest\",\n *session.posargs,\n env={\"NOX_CURRENT_SESSION\": \"tests\"},\n )\n finally:\n if session.interactive:\n session.notify(\"coverage\", posargs=[])\n\n\n@nox_session(python=main_python_version)\ndef coverage(session: Session) -> None:\n \"\"\"Upload coverage data.\n\n Args:\n session: Nox session.\n \"\"\"\n args = session.posargs or [\"report\"]\n\n session.install(\"coverage[toml]\")\n\n if not session.posargs and any(Path().glob(\".coverage.*\")):\n session.run(\"coverage\", \"combine\")\n\n session.run(\"coverage\", *args)\n", "path": "noxfile.py"}, {"content": "\"\"\"Defines helpers related to the system database.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport time\n\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.engine import Connection, Engine\nfrom sqlalchemy.exc import OperationalError\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.sql import text\n\nfrom meltano.core.project import Project\n\nfrom .project_settings_service import ProjectSettingsService\n\n# Keep a Project \u2192 Engine mapping to serve\n# the same engine for the same Project\n_engines = {}\n\n\ndef project_engine(\n project: Project,\n default: bool = False,\n) -> tuple[Engine, sessionmaker]:\n \"\"\"Create and register a SQLAlchemy engine for a Meltano project instance.\n\n Args:\n project: The Meltano project that the engine will be connected to.\n default: Whether the engine created should be stored as the default\n engine for this project.\n\n Returns:\n The engine, and a session maker bound to the engine.\n \"\"\"\n existing_engine = _engines.get(project)\n if existing_engine:\n return existing_engine\n\n settings = ProjectSettingsService(project)\n\n engine_uri = settings.get(\"database_uri\")\n logging.debug(f\"Creating engine {project}@{engine_uri}\")\n engine = create_engine(engine_uri, pool_pre_ping=True)\n\n # Connect to the database to ensure it is available.\n connect(\n engine,\n max_retries=settings.get(\"database_max_retries\"),\n retry_timeout=settings.get(\"database_retry_timeout\"),\n )\n\n init_hook(engine)\n\n engine_session = (engine, sessionmaker(bind=engine))\n\n if default:\n # register the default engine\n _engines[project] = 
engine_session\n\n return engine_session\n\n\ndef connect(\n engine: Engine,\n max_retries: int,\n retry_timeout: float,\n) -> Connection:\n \"\"\"Connect to the database.\n\n Args:\n engine: The DB engine with which the check will be performed.\n max_retries: The maximum number of retries that will be attempted.\n retry_timeout: The number of seconds to wait between retries.\n\n Raises:\n OperationalError: Error during DB connection - max retries exceeded.\n\n Returns:\n A connection to the database.\n \"\"\"\n attempt = 0\n while True:\n try:\n return engine.connect()\n except OperationalError:\n if attempt >= max_retries:\n logging.error(\n f\"Could not connect to the database after {attempt} \"\n \"attempts. Max retries exceeded.\"\n )\n raise\n attempt += 1\n logging.info(\n f\"DB connection failed. Will retry after {retry_timeout}s. \"\n f\"Attempt {attempt}/{max_retries}\"\n )\n time.sleep(retry_timeout)\n\n\ninit_hooks = {\n \"sqlite\": lambda x: x.execute(\"PRAGMA journal_mode=WAL\"),\n}\n\n\ndef init_hook(engine: Engine) -> None:\n \"\"\"Run the initialization hook for the provided DB engine.\n\n The initialization hooks are taken from the `meltano.core.db.init_hooks`\n dictionary, which maps the dialect name of the engine to a unary function\n which will be called with the provided DB engine.\n\n Args:\n engine: The engine for which the init hook will be run.\n\n Raises:\n Exception: The init hook raised an exception.\n \"\"\"\n try:\n hook = init_hooks[engine.dialect.name]\n except KeyError:\n return\n\n try:\n hook(engine)\n except Exception as ex:\n raise Exception(f\"Failed to initialize database: {ex!s}\") from ex\n\n\ndef ensure_schema_exists(\n engine: Engine,\n schema_name: str,\n grant_roles: tuple[str] = (),\n) -> None:\n \"\"\"Ensure the specified `schema_name` exists in the database.\n\n Args:\n engine: The DB engine to be used.\n schema_name: The name of the schema.\n grant_roles: Roles to grant to the specified schema.\n \"\"\"\n schema_identifier = schema_name\n group_identifiers = \",\".join(grant_roles)\n\n create_schema = text(f\"CREATE SCHEMA IF NOT EXISTS {schema_identifier}\")\n grant_select_schema = text(\n f\"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_identifier} GRANT SELECT ON TABLES TO {group_identifiers}\"\n )\n grant_usage_schema = text(\n f\"GRANT USAGE ON SCHEMA {schema_identifier} TO {group_identifiers}\"\n )\n\n with engine.connect() as conn, conn.begin():\n conn.execute(create_schema)\n if grant_roles:\n conn.execute(grant_select_schema)\n conn.execute(grant_usage_schema)\n\n logging.info(f\"Schema {schema_name} has been created successfully.\")\n for role in grant_roles:\n logging.info(f\"Usage has been granted for role: {role}.\")\n", "path": "src/meltano/core/db.py"}]} | 2,727 | 398 |
gh_patches_debug_25257 | rasdani/github-patches | git_diff | ESMCI__cime-4442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cs.status reset to force rebuild
I would like an additional option to cs.status or perhaps create_test that
would reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that
all tests are rebuilt before being restarted.
</issue>
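A minimal sketch of the requested reset, assuming the `TestStatus` helpers exposed by `CIME.test_status`; the free-standing function and its name are illustrative only.

```
from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS


def reset_for_rebuild(test_dirs):
    """Mark each case's shared-library build phase PEND so it rebuilds (sketch)."""
    for test_dir in test_dirs:
        ts = TestStatus(test_dir=test_dir)
        with ts:  # the context manager persists the updated status on exit
            ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
```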
<code>
[start of CIME/cs_status.py]
1 """
2 Implementation of the cs.status script, which prints the status of all
3 of the tests in one or more test suites
4 """
5
6 from __future__ import print_function
7 from CIME.XML.standard_module_setup import *
8 from CIME.XML.expected_fails_file import ExpectedFailsFile
9 from CIME.test_status import TestStatus
10 import os
11 import sys
12 from collections import defaultdict
13
14
15 def cs_status(
16 test_paths,
17 summary=False,
18 fails_only=False,
19 count_fails_phase_list=None,
20 check_throughput=False,
21 check_memory=False,
22 expected_fails_filepath=None,
23 out=sys.stdout,
24 ):
25 """Print the test statuses of all tests in test_paths. The default
26 is to print to stdout, but this can be overridden with the 'out'
27 argument.
28
29 If summary is True, then only the overall status of each test is printed
30
31 If fails_only is True, then only test failures are printed (this
32 includes PENDs as well as FAILs).
33
34 If count_fails_phase_list is provided, it should be a list of phases
35 (from the phases given by test_status.ALL_PHASES). For each phase in
36 this list: do not give line-by-line output; instead, just report the
37 total number of tests that have not PASSed this phase (this includes
38 PENDs and FAILs). (This is typically used with the fails_only
39 option, but it can also be used without that option.)
40
41 If expected_fails_filepath is provided, it should be a string giving
42 the full path to a file listing expected failures for this test
43 suite. Expected failures are then labeled as such in the output.
44 """
45 expect(not (summary and fails_only), "Cannot have both summary and fails_only")
46 expect(
47 not (summary and count_fails_phase_list),
48 "Cannot have both summary and count_fails_phase_list",
49 )
50 if count_fails_phase_list is None:
51 count_fails_phase_list = []
52 non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)
53 xfails = _get_xfails(expected_fails_filepath)
54 test_id_output = defaultdict(str)
55 test_id_counts = defaultdict(int)
56 for test_path in test_paths:
57 test_dir = os.path.dirname(test_path)
58 ts = TestStatus(test_dir=test_dir)
59 test_id = os.path.basename(test_dir).split(".")[-1]
60 if summary:
61 output = _overall_output(
62 ts, " {status} {test_name}\n", check_throughput, check_memory
63 )
64 else:
65 if fails_only:
66 output = ""
67 else:
68 output = _overall_output(
69 ts,
70 " {test_name} (Overall: {status}) details:\n",
71 check_throughput,
72 check_memory,
73 )
74 output += ts.phase_statuses_dump(
75 prefix=" ",
76 skip_passes=fails_only,
77 skip_phase_list=count_fails_phase_list,
78 xfails=xfails.get(ts.get_name()),
79 )
80 if count_fails_phase_list:
81 ts.increment_non_pass_counts(non_pass_counts)
82
83 test_id_output[test_id] += output
84 test_id_counts[test_id] += 1
85
86 for test_id in sorted(test_id_output):
87 count = test_id_counts[test_id]
88 print(
89 "{}: {} test{}".format(test_id, count, "s" if count > 1 else ""), file=out
90 )
91 print(test_id_output[test_id], file=out)
92 print(" ", file=out)
93
94 if count_fails_phase_list:
95 print(72 * "=", file=out)
96 print("Non-PASS results for select phases:", file=out)
97 for phase in count_fails_phase_list:
98 print("{} non-passes: {}".format(phase, non_pass_counts[phase]), file=out)
99
100
101 def _get_xfails(expected_fails_filepath):
102 """Returns a dictionary of ExpectedFails objects, where the keys are test names
103
104 expected_fails_filepath should be either a string giving the path to
105 the file containing expected failures, or None. If None, then this
106 returns an empty dictionary (as if expected_fails_filepath were
107 pointing to a file with no expected failures listed).
108 """
109 if expected_fails_filepath is not None:
110 expected_fails_file = ExpectedFailsFile(expected_fails_filepath)
111 xfails = expected_fails_file.get_expected_fails()
112 else:
113 xfails = {}
114 return xfails
115
116
117 def _overall_output(ts, format_str, check_throughput, check_memory):
118 """Returns a string giving the overall test status
119
120 Args:
121 ts: TestStatus object
122 format_str (string): string giving the format of the output; must
123 contain place-holders for status and test_name
124 """
125 test_name = ts.get_name()
126 status = ts.get_overall_test_status(
127 check_throughput=check_throughput,
128 check_memory=check_memory,
129 )[0]
130 return format_str.format(status=status, test_name=test_name)
131
[end of CIME/cs_status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CIME/cs_status.py b/CIME/cs_status.py
--- a/CIME/cs_status.py
+++ b/CIME/cs_status.py
@@ -6,7 +6,7 @@
from __future__ import print_function
from CIME.XML.standard_module_setup import *
from CIME.XML.expected_fails_file import ExpectedFailsFile
-from CIME.test_status import TestStatus
+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS
import os
import sys
from collections import defaultdict
@@ -20,6 +20,7 @@
check_throughput=False,
check_memory=False,
expected_fails_filepath=None,
+ force_rebuild=False,
out=sys.stdout,
):
"""Print the test statuses of all tests in test_paths. The default
@@ -56,6 +57,11 @@
for test_path in test_paths:
test_dir = os.path.dirname(test_path)
ts = TestStatus(test_dir=test_dir)
+
+ if force_rebuild:
+ with ts:
+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)
+
test_id = os.path.basename(test_dir).split(".")[-1]
if summary:
output = _overall_output(
| {"golden_diff": "diff --git a/CIME/cs_status.py b/CIME/cs_status.py\n--- a/CIME/cs_status.py\n+++ b/CIME/cs_status.py\n@@ -6,7 +6,7 @@\n from __future__ import print_function\n from CIME.XML.standard_module_setup import *\n from CIME.XML.expected_fails_file import ExpectedFailsFile\n-from CIME.test_status import TestStatus\n+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS\n import os\n import sys\n from collections import defaultdict\n@@ -20,6 +20,7 @@\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n+ force_rebuild=False,\n out=sys.stdout,\n ):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n@@ -56,6 +57,11 @@\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n+\n+ if force_rebuild:\n+ with ts:\n+ ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS)\n+\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n", "issue": "cs.status reset to force rebuild\nI would like an additional option to cs.status or perhaps create_test that\r\nwould reset all cases in a test suite to the PEND SHAREDLIB_BUILD state so that \r\nall tests are rebuilt before being restarted. \n", "before_files": [{"content": "\"\"\"\nImplementation of the cs.status script, which prints the status of all\nof the tests in one or more test suites\n\"\"\"\n\nfrom __future__ import print_function\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.expected_fails_file import ExpectedFailsFile\nfrom CIME.test_status import TestStatus\nimport os\nimport sys\nfrom collections import defaultdict\n\n\ndef cs_status(\n test_paths,\n summary=False,\n fails_only=False,\n count_fails_phase_list=None,\n check_throughput=False,\n check_memory=False,\n expected_fails_filepath=None,\n out=sys.stdout,\n):\n \"\"\"Print the test statuses of all tests in test_paths. The default\n is to print to stdout, but this can be overridden with the 'out'\n argument.\n\n If summary is True, then only the overall status of each test is printed\n\n If fails_only is True, then only test failures are printed (this\n includes PENDs as well as FAILs).\n\n If count_fails_phase_list is provided, it should be a list of phases\n (from the phases given by test_status.ALL_PHASES). For each phase in\n this list: do not give line-by-line output; instead, just report the\n total number of tests that have not PASSed this phase (this includes\n PENDs and FAILs). (This is typically used with the fails_only\n option, but it can also be used without that option.)\n\n If expected_fails_filepath is provided, it should be a string giving\n the full path to a file listing expected failures for this test\n suite. 
Expected failures are then labeled as such in the output.\n \"\"\"\n expect(not (summary and fails_only), \"Cannot have both summary and fails_only\")\n expect(\n not (summary and count_fails_phase_list),\n \"Cannot have both summary and count_fails_phase_list\",\n )\n if count_fails_phase_list is None:\n count_fails_phase_list = []\n non_pass_counts = dict.fromkeys(count_fails_phase_list, 0)\n xfails = _get_xfails(expected_fails_filepath)\n test_id_output = defaultdict(str)\n test_id_counts = defaultdict(int)\n for test_path in test_paths:\n test_dir = os.path.dirname(test_path)\n ts = TestStatus(test_dir=test_dir)\n test_id = os.path.basename(test_dir).split(\".\")[-1]\n if summary:\n output = _overall_output(\n ts, \" {status} {test_name}\\n\", check_throughput, check_memory\n )\n else:\n if fails_only:\n output = \"\"\n else:\n output = _overall_output(\n ts,\n \" {test_name} (Overall: {status}) details:\\n\",\n check_throughput,\n check_memory,\n )\n output += ts.phase_statuses_dump(\n prefix=\" \",\n skip_passes=fails_only,\n skip_phase_list=count_fails_phase_list,\n xfails=xfails.get(ts.get_name()),\n )\n if count_fails_phase_list:\n ts.increment_non_pass_counts(non_pass_counts)\n\n test_id_output[test_id] += output\n test_id_counts[test_id] += 1\n\n for test_id in sorted(test_id_output):\n count = test_id_counts[test_id]\n print(\n \"{}: {} test{}\".format(test_id, count, \"s\" if count > 1 else \"\"), file=out\n )\n print(test_id_output[test_id], file=out)\n print(\" \", file=out)\n\n if count_fails_phase_list:\n print(72 * \"=\", file=out)\n print(\"Non-PASS results for select phases:\", file=out)\n for phase in count_fails_phase_list:\n print(\"{} non-passes: {}\".format(phase, non_pass_counts[phase]), file=out)\n\n\ndef _get_xfails(expected_fails_filepath):\n \"\"\"Returns a dictionary of ExpectedFails objects, where the keys are test names\n\n expected_fails_filepath should be either a string giving the path to\n the file containing expected failures, or None. If None, then this\n returns an empty dictionary (as if expected_fails_filepath were\n pointing to a file with no expected failures listed).\n \"\"\"\n if expected_fails_filepath is not None:\n expected_fails_file = ExpectedFailsFile(expected_fails_filepath)\n xfails = expected_fails_file.get_expected_fails()\n else:\n xfails = {}\n return xfails\n\n\ndef _overall_output(ts, format_str, check_throughput, check_memory):\n \"\"\"Returns a string giving the overall test status\n\n Args:\n ts: TestStatus object\n format_str (string): string giving the format of the output; must\n contain place-holders for status and test_name\n \"\"\"\n test_name = ts.get_name()\n status = ts.get_overall_test_status(\n check_throughput=check_throughput,\n check_memory=check_memory,\n )[0]\n return format_str.format(status=status, test_name=test_name)\n", "path": "CIME/cs_status.py"}]} | 1,947 | 272 |
gh_patches_debug_13744 | rasdani/github-patches | git_diff | saleor__saleor-1471 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor displaying success messages in the dashboard
The code responsible for displaying success messages in the dashboard lives in the [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS, which isn't very elegant. Instead, there should be a function written entirely in JS that takes care of rendering those messages, with the data passed from the backend through `data-*` attributes.
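For illustration, the backend half of that approach could be a template tag that serializes `django.contrib.messages` to JSON so the frontend can read it from a `data-*` attribute. This is only a sketch; the tag name and module are placeholders, not Saleor's actual code:

```python
# Sketch only: serialize django.contrib.messages so a JS helper can render them.
# Names here are illustrative placeholders.
from json import dumps

from django.template import Library

register = Library()


@register.simple_tag(takes_context=True)
def serialize_messages(context):
    """Return the queued messages as a JSON object keyed by index."""
    messages = context.get('messages', [])
    return dumps({i: str(message) for i, message in enumerate(messages)})
```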
</issue>
<code>
[start of saleor/dashboard/templatetags/utils.py]
1 from urllib.parse import urlencode
2
3 from django import forms
4 from django.template import Library
5 from django_filters.fields import RangeField
6 from versatileimagefield.widgets import VersatileImagePPOIClickWidget
7
8 from ...product.utils import get_margin_for_variant, get_variant_costs_data
9 from ..product.widgets import ImagePreviewWidget
10 from .chips import (
11 handle_default, handle_multiple_choice, handle_multiple_model_choice,
12 handle_nullboolean, handle_range, handle_single_choice,
13 handle_single_model_choice)
14
15 register = Library()
16
17
18 @register.simple_tag(takes_context=True)
19 def construct_get_query(context, **params):
20 request_get = context['request'].GET.dict()
21 if not (request_get or params):
22 return ''
23 all_params = {}
24 all_params.update(request_get)
25 all_params.update(params)
26 all_params.update(context.get('default_pagination_params', {}))
27 return '?' + urlencode(all_params)
28
29
30 @register.filter
31 def is_versatile_image_ppoi_click_widget(field):
32 '''
33 This filter checks if image field widget is used when user wants to edit
34 existing product image.
35 '''
36 return isinstance(field.field.widget, VersatileImagePPOIClickWidget)
37
38
39 @register.filter
40 def is_image_preview_widget(field):
41 '''
42 This filter checks if image field widget is used when user wants to add new
43 product image.
44 '''
45 return isinstance(field.field.widget, ImagePreviewWidget)
46
47
48 @register.inclusion_tag('dashboard/product/product_variant/_image_select.html')
49 def render_image_choice(field):
50 choices = zip(field, field.field.queryset)
51 return {'field': field, 'choices_with_images': choices}
52
53
54 @register.inclusion_tag('dashboard/includes/_pagination.html',
55 takes_context=True)
56 def paginate(context, page_obj, num_of_pages=5):
57 context['page_obj'] = page_obj
58 context['n_forward'] = num_of_pages + 1
59 context['n_backward'] = -num_of_pages - 1
60 context['next_section'] = (2 * num_of_pages) + 1
61 context['previous_section'] = (-2 * num_of_pages) - 1
62 return context
63
64
65 @register.simple_tag
66 def margin_for_variant(stock):
67 return get_margin_for_variant(stock)
68
69
70 @register.simple_tag
71 def margins_for_variant(variant):
72 margins = get_variant_costs_data(variant)['margins']
73 return margins
74
75
76 @register.inclusion_tag('dashboard/includes/_filters.html', takes_context=True)
77 def add_filters(context, filter_set, sort_by_filter_name='sort_by'):
78 chips = []
79 request_get = context['request'].GET.copy()
80 for filter_name in filter_set.form.cleaned_data.keys():
81 if filter_name == sort_by_filter_name:
82 # Skip processing of sort_by filter, as it's rendered differently
83 continue
84
85 field = filter_set.form[filter_name]
86 if field.value() not in ['', None]:
87 if isinstance(field.field, forms.NullBooleanField):
88 items = handle_nullboolean(field, request_get)
89 elif isinstance(field.field, forms.ModelMultipleChoiceField):
90 items = handle_multiple_model_choice(field, request_get)
91 elif isinstance(field.field, forms.MultipleChoiceField):
92 items = handle_multiple_choice(field, request_get)
93 elif isinstance(field.field, forms.ModelChoiceField):
94 items = handle_single_model_choice(field, request_get)
95 elif isinstance(field.field, forms.ChoiceField):
96 items = handle_single_choice(field, request_get)
97 elif isinstance(field.field, RangeField):
98 items = handle_range(field, request_get)
99 else:
100 items = handle_default(field, request_get)
101 chips.extend(items)
102 return {
103 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
104 'sort_by': request_get.get(sort_by_filter_name, None)}
105
[end of saleor/dashboard/templatetags/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py
--- a/saleor/dashboard/templatetags/utils.py
+++ b/saleor/dashboard/templatetags/utils.py
@@ -1,3 +1,5 @@
+from __future__ import unicode_literals
+from json import dumps
from urllib.parse import urlencode
from django import forms
@@ -102,3 +104,13 @@
return {
'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),
'sort_by': request_get.get(sort_by_filter_name, None)}
+
+
[email protected]_tag(takes_context=True)
+def serialize_messages(context):
+ """Serialize django.contrib.messages to JSON"""
+ messages = context.get('messages', [])
+ data = {}
+ for i, message in enumerate(messages):
+ data[i] = str(message)
+ return dumps(data)
| {"golden_diff": "diff --git a/saleor/dashboard/templatetags/utils.py b/saleor/dashboard/templatetags/utils.py\n--- a/saleor/dashboard/templatetags/utils.py\n+++ b/saleor/dashboard/templatetags/utils.py\n@@ -1,3 +1,5 @@\n+from __future__ import unicode_literals\n+from json import dumps\n from urllib.parse import urlencode\n \n from django import forms\n@@ -102,3 +104,13 @@\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n+\n+\[email protected]_tag(takes_context=True)\n+def serialize_messages(context):\n+ \"\"\"Serialize django.contrib.messages to JSON\"\"\"\n+ messages = context.get('messages', [])\n+ data = {}\n+ for i, message in enumerate(messages):\n+ data[i] = str(message)\n+ return dumps(data)\n", "issue": "Refactor displaying success messages in the dashboard\nThe code responsible for displaying success messages in the dashboard lives in [_messages.html](https://github.com/mirumee/saleor/blob/master/templates/dashboard/includes/_messages.html) template and mixes Django's templating language with JS which isn't very elegant. Instead, there should be a function written entirely in JS that would take care of rendering those messages with data passed from backend through `data-*` attributes.\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom django import forms\nfrom django.template import Library\nfrom django_filters.fields import RangeField\nfrom versatileimagefield.widgets import VersatileImagePPOIClickWidget\n\nfrom ...product.utils import get_margin_for_variant, get_variant_costs_data\nfrom ..product.widgets import ImagePreviewWidget\nfrom .chips import (\n handle_default, handle_multiple_choice, handle_multiple_model_choice,\n handle_nullboolean, handle_range, handle_single_choice,\n handle_single_model_choice)\n\nregister = Library()\n\n\[email protected]_tag(takes_context=True)\ndef construct_get_query(context, **params):\n request_get = context['request'].GET.dict()\n if not (request_get or params):\n return ''\n all_params = {}\n all_params.update(request_get)\n all_params.update(params)\n all_params.update(context.get('default_pagination_params', {}))\n return '?' 
+ urlencode(all_params)\n\n\[email protected]\ndef is_versatile_image_ppoi_click_widget(field):\n '''\n This filter checks if image field widget is used when user wants to edit\n existing product image.\n '''\n return isinstance(field.field.widget, VersatileImagePPOIClickWidget)\n\n\[email protected]\ndef is_image_preview_widget(field):\n '''\n This filter checks if image field widget is used when user wants to add new\n product image.\n '''\n return isinstance(field.field.widget, ImagePreviewWidget)\n\n\[email protected]_tag('dashboard/product/product_variant/_image_select.html')\ndef render_image_choice(field):\n choices = zip(field, field.field.queryset)\n return {'field': field, 'choices_with_images': choices}\n\n\[email protected]_tag('dashboard/includes/_pagination.html',\n takes_context=True)\ndef paginate(context, page_obj, num_of_pages=5):\n context['page_obj'] = page_obj\n context['n_forward'] = num_of_pages + 1\n context['n_backward'] = -num_of_pages - 1\n context['next_section'] = (2 * num_of_pages) + 1\n context['previous_section'] = (-2 * num_of_pages) - 1\n return context\n\n\[email protected]_tag\ndef margin_for_variant(stock):\n return get_margin_for_variant(stock)\n\n\[email protected]_tag\ndef margins_for_variant(variant):\n margins = get_variant_costs_data(variant)['margins']\n return margins\n\n\[email protected]_tag('dashboard/includes/_filters.html', takes_context=True)\ndef add_filters(context, filter_set, sort_by_filter_name='sort_by'):\n chips = []\n request_get = context['request'].GET.copy()\n for filter_name in filter_set.form.cleaned_data.keys():\n if filter_name == sort_by_filter_name:\n # Skip processing of sort_by filter, as it's rendered differently\n continue\n\n field = filter_set.form[filter_name]\n if field.value() not in ['', None]:\n if isinstance(field.field, forms.NullBooleanField):\n items = handle_nullboolean(field, request_get)\n elif isinstance(field.field, forms.ModelMultipleChoiceField):\n items = handle_multiple_model_choice(field, request_get)\n elif isinstance(field.field, forms.MultipleChoiceField):\n items = handle_multiple_choice(field, request_get)\n elif isinstance(field.field, forms.ModelChoiceField):\n items = handle_single_model_choice(field, request_get)\n elif isinstance(field.field, forms.ChoiceField):\n items = handle_single_choice(field, request_get)\n elif isinstance(field.field, RangeField):\n items = handle_range(field, request_get)\n else:\n items = handle_default(field, request_get)\n chips.extend(items)\n return {\n 'chips': chips, 'filter': filter_set, 'count': filter_set.qs.count(),\n 'sort_by': request_get.get(sort_by_filter_name, None)}\n", "path": "saleor/dashboard/templatetags/utils.py"}]} | 1,654 | 215 |
gh_patches_debug_18707 | rasdani/github-patches | git_diff | dynaconf__dynaconf-42 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModuleNotFoundError: No module named 'flask'
Dynaconf requires Flask by default. Is that a mistake, or is it intentional?
```bash
File "/app/.heroku/python/lib/python3.6/site-packages/dynaconf/__init__.py", line 5, in <module>
from dynaconf.contrib import FlaskDynaconf
File "/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/__init__.py", line 1, in <module>
from dynaconf.contrib.flask_dynaconf import FlaskDynaconf, DynaconfConfig # noqa
File "/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/flask_dynaconf.py", line 2, in <module>
from flask.config import Config
ModuleNotFoundError: No module named 'flask'
```
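
One common way to avoid a hard dependency is to guard the import and fail only when the Flask extension is actually used. This is just a sketch of that pattern, not necessarily how the project should solve it:

```python
# Sketch of an optional-dependency guard (illustrative only).
try:
    from flask.config import Config
    FLASK_INSTALLED = True
except ImportError:
    Config = object  # placeholder base class so the module still imports
    FLASK_INSTALLED = False


def require_flask():
    """Raise a clear error only when the Flask integration is actually used."""
    if not FLASK_INSTALLED:
        raise RuntimeError(
            "Flask is required for FlaskDynaconf: pip install flask")
```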
</issue>
<code>
[start of dynaconf/contrib/flask_dynaconf.py]
1 # coding: utf-8
2 from flask.config import Config
3 from dynaconf import LazySettings
4
5
6 class FlaskDynaconf(object):
7 """
8 The arguments are.
9 app = The created app
10 dynaconf_args = Extra args to be passed to Dynaconf (validator for example)
11
12 All other values are stored as config vars specially:
13
14 ENVVAR_FOR_DYNACONF = Name of environment variable to use if you want to
15 change the settings file from env vars
16 example:
17 export MYSITE_SETTINGS_MODULE=/tmp/settings.py
18 with the above the settings will be loaded from that
19 file
20 Dynaconf supports .py, .yml, .toml
21
22 DYNACONF_NAMESPACE = Namespace prefix for your envvars to become settings
23 example:
24 export MYSITE_SQL_PORT='@int 5445'
25
26 with that exported to env you access using:
27 app.config.SQL_PORT
28 app.config.get('SQL_PORT')
29 app.config.get('sql_port')
30 # get is case insensitive
31 app.config['SQL_PORT']
32
33 Dynaconf uses `@int, @bool, @float, @json` to cast env
34 vars
35
36 SETTINGS_MODULE_FOR_DYNACONF = The name of the module or file to use as
37 default to load settings. If nothing is passed
38 it will be `settings.py` or value found in
39 `ENVVAR_FOR_DYNACONF`
40 Dynaconf supports .py, .yml, .toml
41
42 YAML = If using YAML for settings module, you pass an extra yaml file here
43 It is general useful to have a different file to store secrets
44 example `.secrets.yml` and then values in that file will
45 override other values. And you can exclude the .secrets from your
46 public repositories.
47
48 --------------------------------------------------------------------------
49
50 ATTENTION: Take a look at `settings.yml` and `.secrets.yml` to know the
51 required settings format.
52
53 Settings load order in Dynaconf:
54 0) Load all defaults and Flask defaults
55 1) Load all passed variables when applying FlaskDynaconf
56 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF
57 3) Update with data in YAML extra file if provided
58 4) Update with data in environmente vars `DYNACONF_NAMESPACE_`
59
60 YAML files are very useful to have `namespaced` settings, lets say,
61 `production` and `development`.
62
63 You can also achieve the same using multiple `.py` files naming as
64 `settings.py`, `production_settings.py` and `development_settings.py`
65 (see examples/validator)
66
67 Example::
68
69 app = Flask(__name__)
70 FlaskDynaconf(
71 app,
72 ENVVAR_FOR_DYNACONF="MYSITE_SETTINGS_MODULE",
73 DYNACONF_NAMESPACE='MYSITE',
74 SETTINGS_MODULE_FOR_DYNACONF='settings.yml',
75 YAML='.secrets.yml',
76 EXTRA_VALUE='You can add aditional config vars here'
77 )
78
79 Take a look at examples/flask in Dynaconf repository
80
81 """
82 def __init__(self, app=None, instance_relative_config=False,
83 dynaconf_instance=None, **kwargs):
84 """kwargs holds initial dynaconf configuration"""
85 self.kwargs = kwargs
86 if 'DYNACONF_NAMESPACE' not in kwargs:
87 kwargs['DYNACONF_NAMESPACE'] = 'FLASK'
88 self.dynaconf_instance = dynaconf_instance
89 self.instance_relative_config = instance_relative_config
90 if app:
91 self.init_app(app, **kwargs)
92
93 def init_app(self, app, **kwargs):
94 """kwargs holds initial dynaconf configuration"""
95 self.kwargs.update(kwargs)
96 self.settings = self.dynaconf_instance or LazySettings(**self.kwargs)
97 app.config = self.make_config(app)
98 app.dynaconf = self.settings
99
100 def make_config(self, app):
101 root_path = app.root_path
102 if self.instance_relative_config: # pragma: no cover
103 root_path = app.instance_path
104 if self.dynaconf_instance:
105 self.settings.update(self.kwargs)
106 return DynaconfConfig(
107 root_path=root_path,
108 defaults=app.config,
109 _settings=self.settings
110 )
111
112
113 class DynaconfConfig(Config):
114 """
115 Settings load order in Dynaconf
116 0) Load all defaults and Flask defaults
117 1) Load all passed variables above
118 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF
119 3) Update with data in YAML
120 4) Update with data in rnvironmente vars `DYNACONF_NAMESPACE_`
121 """
122
123 def get(self, key, default=None):
124 """Gets config from dynaconf variables
125 if variables does not exists in dynaconf try getting from
126 app.config to support runtime settings."""
127 return self._settings.get(key, Config.get(self, key, default))
128
129 def __init__(self, _settings, *args, **kwargs):
130 """perform the initial load"""
131 super(DynaconfConfig, self).__init__(*args, **kwargs)
132 Config.update(self, _settings.store)
133 self._settings = _settings
134
135 def __getitem__(self, key):
136 """
137 First try to get value from dynaconf then from Flask
138 """
139 return self.get(key)
140
141 def __getattr__(self, name):
142 """
143 First try to get value from dynaconf then from Flask
144 """
145 try:
146 return getattr(self._settings, name)
147 except AttributeError:
148 return self[name]
149
150 def __call__(self, name, *args, **kwargs):
151 return self.get(name, *args, **kwargs)
152
[end of dynaconf/contrib/flask_dynaconf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dynaconf/contrib/flask_dynaconf.py b/dynaconf/contrib/flask_dynaconf.py
--- a/dynaconf/contrib/flask_dynaconf.py
+++ b/dynaconf/contrib/flask_dynaconf.py
@@ -1,5 +1,12 @@
# coding: utf-8
-from flask.config import Config
+try:
+ from flask.config import Config
+ flask_installed = True
+except ImportError:
+ flask_installed = False
+ Config = object
+
+
from dynaconf import LazySettings
@@ -82,6 +89,11 @@
def __init__(self, app=None, instance_relative_config=False,
dynaconf_instance=None, **kwargs):
"""kwargs holds initial dynaconf configuration"""
+ if not flask_installed:
+ raise RuntimeError(
+ "To use this extension Flask must be installed "
+ "install it with: pip install flask"
+ )
self.kwargs = kwargs
if 'DYNACONF_NAMESPACE' not in kwargs:
kwargs['DYNACONF_NAMESPACE'] = 'FLASK'
| {"golden_diff": "diff --git a/dynaconf/contrib/flask_dynaconf.py b/dynaconf/contrib/flask_dynaconf.py\n--- a/dynaconf/contrib/flask_dynaconf.py\n+++ b/dynaconf/contrib/flask_dynaconf.py\n@@ -1,5 +1,12 @@\n # coding: utf-8\n-from flask.config import Config\n+try:\n+ from flask.config import Config\n+ flask_installed = True\n+except ImportError:\n+ flask_installed = False\n+ Config = object\n+\n+\n from dynaconf import LazySettings\n \n \n@@ -82,6 +89,11 @@\n def __init__(self, app=None, instance_relative_config=False,\n dynaconf_instance=None, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n+ if not flask_installed:\n+ raise RuntimeError(\n+ \"To use this extension Flask must be installed \"\n+ \"install it with: pip install flask\"\n+ )\n self.kwargs = kwargs\n if 'DYNACONF_NAMESPACE' not in kwargs:\n kwargs['DYNACONF_NAMESPACE'] = 'FLASK'\n", "issue": "ModuleNotFoundError: No module named 'flask'\nDynaconf requires Flask by default, is that by mistake or is it intentionally?\r\n\r\n```bash\r\n File \"/app/.heroku/python/lib/python3.6/site-packages/dynaconf/__init__.py\", line 5, in <module> \r\n from dynaconf.contrib import FlaskDynaconf \r\n File \"/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/__init__.py\", line 1, in <module> \r\n from dynaconf.contrib.flask_dynaconf import FlaskDynaconf, DynaconfConfig # noqa \r\n File \"/app/.heroku/python/lib/python3.6/site-packages/dynaconf/contrib/flask_dynaconf.py\", line 2, in <module> \r\n from flask.config import Config \r\nModuleNotFoundError: No module named 'flask'\r\n```\n", "before_files": [{"content": "# coding: utf-8\nfrom flask.config import Config\nfrom dynaconf import LazySettings\n\n\nclass FlaskDynaconf(object):\n \"\"\"\n The arguments are.\n app = The created app\n dynaconf_args = Extra args to be passed to Dynaconf (validator for example)\n\n All other values are stored as config vars specially:\n\n ENVVAR_FOR_DYNACONF = Name of environment variable to use if you want to\n change the settings file from env vars\n example:\n export MYSITE_SETTINGS_MODULE=/tmp/settings.py\n with the above the settings will be loaded from that\n file\n Dynaconf supports .py, .yml, .toml\n\n DYNACONF_NAMESPACE = Namespace prefix for your envvars to become settings\n example:\n export MYSITE_SQL_PORT='@int 5445'\n\n with that exported to env you access using:\n app.config.SQL_PORT\n app.config.get('SQL_PORT')\n app.config.get('sql_port')\n # get is case insensitive\n app.config['SQL_PORT']\n\n Dynaconf uses `@int, @bool, @float, @json` to cast env\n vars\n\n SETTINGS_MODULE_FOR_DYNACONF = The name of the module or file to use as\n default to load settings. If nothing is passed\n it will be `settings.py` or value found in\n `ENVVAR_FOR_DYNACONF`\n Dynaconf supports .py, .yml, .toml\n\n YAML = If using YAML for settings module, you pass an extra yaml file here\n It is general useful to have a different file to store secrets\n example `.secrets.yml` and then values in that file will\n override other values. 
And you can exclude the .secrets from your\n public repositories.\n\n --------------------------------------------------------------------------\n\n ATTENTION: Take a look at `settings.yml` and `.secrets.yml` to know the\n required settings format.\n\n Settings load order in Dynaconf:\n 0) Load all defaults and Flask defaults\n 1) Load all passed variables when applying FlaskDynaconf\n 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF\n 3) Update with data in YAML extra file if provided\n 4) Update with data in environmente vars `DYNACONF_NAMESPACE_`\n\n YAML files are very useful to have `namespaced` settings, lets say,\n `production` and `development`.\n\n You can also achieve the same using multiple `.py` files naming as\n `settings.py`, `production_settings.py` and `development_settings.py`\n (see examples/validator)\n\n Example::\n\n app = Flask(__name__)\n FlaskDynaconf(\n app,\n ENVVAR_FOR_DYNACONF=\"MYSITE_SETTINGS_MODULE\",\n DYNACONF_NAMESPACE='MYSITE',\n SETTINGS_MODULE_FOR_DYNACONF='settings.yml',\n YAML='.secrets.yml',\n EXTRA_VALUE='You can add aditional config vars here'\n )\n\n Take a look at examples/flask in Dynaconf repository\n\n \"\"\"\n def __init__(self, app=None, instance_relative_config=False,\n dynaconf_instance=None, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n self.kwargs = kwargs\n if 'DYNACONF_NAMESPACE' not in kwargs:\n kwargs['DYNACONF_NAMESPACE'] = 'FLASK'\n self.dynaconf_instance = dynaconf_instance\n self.instance_relative_config = instance_relative_config\n if app:\n self.init_app(app, **kwargs)\n\n def init_app(self, app, **kwargs):\n \"\"\"kwargs holds initial dynaconf configuration\"\"\"\n self.kwargs.update(kwargs)\n self.settings = self.dynaconf_instance or LazySettings(**self.kwargs)\n app.config = self.make_config(app)\n app.dynaconf = self.settings\n\n def make_config(self, app):\n root_path = app.root_path\n if self.instance_relative_config: # pragma: no cover\n root_path = app.instance_path\n if self.dynaconf_instance:\n self.settings.update(self.kwargs)\n return DynaconfConfig(\n root_path=root_path,\n defaults=app.config,\n _settings=self.settings\n )\n\n\nclass DynaconfConfig(Config):\n \"\"\"\n Settings load order in Dynaconf\n 0) Load all defaults and Flask defaults\n 1) Load all passed variables above\n 2) Update with data in SETTINGS_MODULE_FOR_DYNACONF\n 3) Update with data in YAML\n 4) Update with data in rnvironmente vars `DYNACONF_NAMESPACE_`\n \"\"\"\n\n def get(self, key, default=None):\n \"\"\"Gets config from dynaconf variables\n if variables does not exists in dynaconf try getting from\n app.config to support runtime settings.\"\"\"\n return self._settings.get(key, Config.get(self, key, default))\n\n def __init__(self, _settings, *args, **kwargs):\n \"\"\"perform the initial load\"\"\"\n super(DynaconfConfig, self).__init__(*args, **kwargs)\n Config.update(self, _settings.store)\n self._settings = _settings\n\n def __getitem__(self, key):\n \"\"\"\n First try to get value from dynaconf then from Flask\n \"\"\"\n return self.get(key)\n\n def __getattr__(self, name):\n \"\"\"\n First try to get value from dynaconf then from Flask\n \"\"\"\n try:\n return getattr(self._settings, name)\n except AttributeError:\n return self[name]\n\n def __call__(self, name, *args, **kwargs):\n return self.get(name, *args, **kwargs)\n", "path": "dynaconf/contrib/flask_dynaconf.py"}]} | 2,338 | 247 |
gh_patches_debug_12447 | rasdani/github-patches | git_diff | searxng__searxng-3204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: lingva engine / redirects & Key-Errors
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/return42/searxng
Branch: darmarit.org
Version: 2024.2.3+a6f5d690
**How did you install SearXNG?**
(unmodified fork/brand) from master branch
**What happened?**
With the default config / the "official" instance, we get the errors reported below:
https://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041
**How To Reproduce**
```
!lingva en-de convenient
```
**Technical report**
```
Error
* Error: httpx.ReadTimeout
* Percentage: 50
* Parameters: `(None, None, 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:118`
* Function: `_send_http_request`
* Code: `response = req(params['url'], **request_args)`
```
```
Error
* Error: 1 redirects, maximum: 0
* Percentage: 50
* Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`
* File name: `searx/search/processors/online.py:127`
* Function: `_send_http_request`
* Code: `count_error(`
```
```
Error
* Error: KeyError
* Percentage: 50
* Parameters: `()`
* File name: `searx/engines/lingva.py:51`
* Function: `response`
* Code: `infobox += f"<b>{translation['type']}</b>"`
```
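
To make the two failure modes concrete, here is a standalone illustration with made-up response data: a base URL without the trailing slash avoids the extra redirect, and `dict.get` avoids the `KeyError` when `type` is absent. This is only an illustration, not the engine's actual fix:

```python
# Standalone illustration with hypothetical data, not the engine's real code.
url = "https://lingva.thedaviddelta.com"  # no trailing slash -> no extra redirect

info = {"extraTranslations": [{"list": [{"word": "bequem", "meanings": ["convenient"]}]}]}
infobox = ""
for translation in info["extraTranslations"]:
    infobox += f"<b>{translation.get('type', '')}</b>"  # tolerate a missing key
    for word in translation["list"]:
        infobox += f"<dl><dt>{word['word']}</dt>"
        for meaning in word["meanings"]:
            infobox += f"<dd>{meaning}</dd>"
        infobox += "</dl>"
print(infobox)
```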
</issue>
<code>
[start of searx/engines/lingva.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Lingva (alternative Google Translate frontend)"""
4
5 from json import loads
6
7 about = {
8 "website": 'https://lingva.ml',
9 "wikidata_id": None,
10 "official_api_documentation": 'https://github.com/thedaviddelta/lingva-translate#public-apis',
11 "use_official_api": True,
12 "require_api_key": False,
13 "results": 'JSON',
14 }
15
16 engine_type = 'online_dictionary'
17 categories = ['general']
18
19 url = "https://lingva.thedaviddelta.com/"
20 search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
21
22
23 def request(_query, params):
24 params['url'] = search_url.format(
25 url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']
26 )
27 return params
28
29
30 def response(resp):
31 results = []
32
33 result = loads(resp.text)
34 info = result["info"]
35 from_to_prefix = "%s-%s " % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])
36
37 if "typo" in info:
38 results.append({"suggestion": from_to_prefix + info["typo"]})
39
40 if 'definitions' in info: # pylint: disable=too-many-nested-blocks
41 for definition in info['definitions']:
42 if 'list' in definition:
43 for item in definition['list']:
44 if 'synonyms' in item:
45 for synonym in item['synonyms']:
46 results.append({"suggestion": from_to_prefix + synonym})
47
48 infobox = ""
49
50 for translation in info["extraTranslations"]:
51 infobox += f"<b>{translation['type']}</b>"
52
53 for word in translation["list"]:
54 infobox += f"<dl><dt>{word['word']}</dt>"
55
56 for meaning in word["meanings"]:
57 infobox += f"<dd>{meaning}</dd>"
58
59 infobox += "</dl>"
60
61 results.append(
62 {
63 'infobox': result["translation"],
64 'content': infobox,
65 }
66 )
67
68 return results
69
[end of searx/engines/lingva.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py
--- a/searx/engines/lingva.py
+++ b/searx/engines/lingva.py
@@ -16,7 +16,7 @@
engine_type = 'online_dictionary'
categories = ['general']
-url = "https://lingva.thedaviddelta.com/"
+url = "https://lingva.thedaviddelta.com"
search_url = "{url}/api/v1/{from_lang}/{to_lang}/{query}"
@@ -48,8 +48,6 @@
infobox = ""
for translation in info["extraTranslations"]:
- infobox += f"<b>{translation['type']}</b>"
-
for word in translation["list"]:
infobox += f"<dl><dt>{word['word']}</dt>"
| {"golden_diff": "diff --git a/searx/engines/lingva.py b/searx/engines/lingva.py\n--- a/searx/engines/lingva.py\n+++ b/searx/engines/lingva.py\n@@ -16,7 +16,7 @@\n engine_type = 'online_dictionary'\n categories = ['general']\n \n-url = \"https://lingva.thedaviddelta.com/\"\n+url = \"https://lingva.thedaviddelta.com\"\n search_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n \n \n@@ -48,8 +48,6 @@\n infobox = \"\"\n \n for translation in info[\"extraTranslations\"]:\n- infobox += f\"<b>{translation['type']}</b>\"\n-\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n", "issue": "Bug: lingva engine / redirects & Key-Errors\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n\r\nRepository: https://github.com/return42/searxng\r\nBranch: darmarit.org\r\nVersion: 2024.2.3+a6f5d690\r\n\r\n**How did you install SearXNG?**\r\n\r\n(unmodified fork/brand) from master branch\r\n\r\n**What happened?**\r\n\r\nWith the default config / the \"official instance\" we have the errors reported below:\r\n\r\nhttps://github.com/searxng/searxng/blob/df1a774003c285866a96b149bf92412037b4932d/searx/settings.yml#L1037-L1041\r\n\r\n**How To Reproduce**\r\n\r\n```\r\n!lingva en-de convenient\r\n```\r\n\r\n**Technical report**\r\n\r\n```\r\nError\r\n * Error: httpx.ReadTimeout\r\n * Percentage: 50\r\n * Parameters: `(None, None, 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:118`\r\n * Function: `_send_http_request`\r\n * Code: `response = req(params['url'], **request_args)`\r\n```\r\n\r\n```\r\nError\r\n * Error: 1 redirects, maximum: 0\r\n * Percentage: 50\r\n * Parameters: `('200', 'OK', 'lingva.thedaviddelta.com')`\r\n * File name: `searx/search/processors/online.py:127`\r\n * Function: `_send_http_request`\r\n * Code: `count_error(`\r\n```\r\n\r\n```\r\nError\r\n * Error: KeyError\r\n * Percentage: 50\r\n * Parameters: `()`\r\n * File name: `searx/engines/lingva.py:51`\r\n * Function: `response`\r\n * Code: `infobox += f\"<b>{translation['type']}</b>\"`\r\n```\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Lingva (alternative Google Translate frontend)\"\"\"\n\nfrom json import loads\n\nabout = {\n \"website\": 'https://lingva.ml',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://github.com/thedaviddelta/lingva-translate#public-apis',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\nengine_type = 'online_dictionary'\ncategories = ['general']\n\nurl = \"https://lingva.thedaviddelta.com/\"\nsearch_url = \"{url}/api/v1/{from_lang}/{to_lang}/{query}\"\n\n\ndef request(_query, params):\n params['url'] = search_url.format(\n url=url, from_lang=params['from_lang'][1], to_lang=params['to_lang'][1], query=params['query']\n )\n return params\n\n\ndef response(resp):\n results = []\n\n result = loads(resp.text)\n info = result[\"info\"]\n from_to_prefix = \"%s-%s \" % (resp.search_params['from_lang'][1], resp.search_params['to_lang'][1])\n\n if \"typo\" in info:\n results.append({\"suggestion\": from_to_prefix + info[\"typo\"]})\n\n if 'definitions' in info: # pylint: disable=too-many-nested-blocks\n for definition in info['definitions']:\n if 'list' in definition:\n for item in definition['list']:\n if 'synonyms' in item:\n for synonym in item['synonyms']:\n results.append({\"suggestion\": from_to_prefix + synonym})\n\n infobox = \"\"\n\n for translation in 
info[\"extraTranslations\"]:\n infobox += f\"<b>{translation['type']}</b>\"\n\n for word in translation[\"list\"]:\n infobox += f\"<dl><dt>{word['word']}</dt>\"\n\n for meaning in word[\"meanings\"]:\n infobox += f\"<dd>{meaning}</dd>\"\n\n infobox += \"</dl>\"\n\n results.append(\n {\n 'infobox': result[\"translation\"],\n 'content': infobox,\n }\n )\n\n return results\n", "path": "searx/engines/lingva.py"}]} | 1,631 | 192 |
gh_patches_debug_254 | rasdani/github-patches | git_diff | mindee__doctr-123 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[docs] Enable documentation of multiple versions at once
As of now, only the latest version of the documentation is deployed publicly. A better alternative would be:
- having the latest version by default
- having the documentation of each release accessible as well using a displayed selector
Hugging Face transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh
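The essence of that deploy script is to build one folder of HTML per release and keep the folders side by side behind a version selector. A rough Python sketch of the idea (the tag names and output paths below are made up):

```python
# Rough sketch of "one built folder per version"; refs and paths are made up.
import subprocess

for ref in ("v0.1.0", "v0.1.1", "main"):
    subprocess.run(["git", "checkout", ref], check=True)
    subprocess.run(["sphinx-build", "docs/source", f"public/{ref}"], check=True)
```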
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import sphinx_rtd_theme
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('../..'))
18 import doctr
19
20 # -- Project information -----------------------------------------------------
21
22 master_doc = 'index'
23 project = 'doctr'
24 copyright = '2021, Mindee'
25 author = 'François-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'
26
27 # The full version, including alpha/beta/rc tags
28 version = doctr.__version__
29 release = doctr.__version__ + '-git'
30
31
32 # -- General configuration ---------------------------------------------------
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.napoleon',
40 'sphinx.ext.viewcode',
41 'sphinx.ext.coverage',
42 'sphinx.ext.mathjax',
43 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/
44 'sphinx_copybutton',
45 ]
46
47 napoleon_use_ivar = True
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # List of patterns, relative to source directory, that match files and
53 # directories to ignore when looking for source files.
54 # This pattern also affects html_static_path and html_extra_path.
55 exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
56
57
58 # The name of the Pygments (syntax highlighting) style to use.
59 pygments_style = 'sphinx'
60 highlight_language = 'python3'
61
62 # -- Options for HTML output -------------------------------------------------
63
64 # The theme to use for HTML and HTML Help pages. See the documentation for
65 # a list of builtin themes.
66 #
67 html_theme = 'sphinx_rtd_theme'
68 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
69
70 # Theme options are theme-specific and customize the look and feel of a theme
71 # further. For a list of options available for each theme, see the
72 # documentation.
73 #
74 html_theme_options = {
75 'collapse_navigation': False,
76 'display_version': True,
77 'logo_only': False,
78 }
79
80 # html_logo = '_static/images/logo.png'
81
82
83 # Add any paths that contain custom static files (such as style sheets) here,
84 # relative to this directory. They are copied after the builtin static files,
85 # so a file named "default.css" will overwrite the builtin "default.css".
86 html_static_path = ['_static']
87
88 # A list of files that should not be packed into the epub file.
89 epub_exclude_files = ['search.html']
90
91 def setup(app):
92 app.add_css_file('css/mindee.css')
93 app.add_js_file('js/custom.js')
94
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -73,7 +73,7 @@
#
html_theme_options = {
'collapse_navigation': False,
- 'display_version': True,
+ 'display_version': False,
'logo_only': False,
}
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -73,7 +73,7 @@\n #\n html_theme_options = {\n 'collapse_navigation': False,\n- 'display_version': True,\n+ 'display_version': False,\n 'logo_only': False,\n }\n", "issue": "[docs] Enable documentation of multiple versions at once\nAs of now, the documentation that would be deployed publicly is only the latest version. The better alternative would be:\r\n- having the latest version by default\r\n- having the documentation of each release accessible as well using a displayed selector\r\n\r\nHugginface transformers did the following: https://github.com/huggingface/transformers/blob/master/.circleci/deploy.sh\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'doctr'\ncopyright = '2021, Mindee'\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard, Mohamed Biaz'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': False,\n}\n\n# html_logo = '_static/images/logo.png'\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}]} | 1,504 | 77 |
gh_patches_debug_49809 | rasdani/github-patches | git_diff | plotly__plotly.py-699 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
jsonschema.SchemaError when a figure is validated
Here is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020
The notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:
_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:
Notebook Validation failed_:
`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:
`{
"data": [
{
"colorscale": "Viridis",
"z": [
[
2,
27,
105,
100
],
[
87,
14,
121,
102
],
[
26,
121,
73,
34
],
[
44,
105,
111,
127
]
],
"type": "heatmap",
"zsmooth": "best"
}
],
"layout": {
"width": 400,
"height": 400
}
}`
Initially I formulated this issue only for heatmaps, but I have since realized that this behaviour manifests for any type of plot.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3 exec (open('plotly/version.py').read())
4
5
6 def readme():
7 with open('README.rst') as f:
8 return f.read()
9
10
11 setup(name='plotly',
12 version=__version__,
13 use_2to3=False,
14 author='Chris P',
15 author_email='[email protected]',
16 maintainer='Chris P',
17 maintainer_email='[email protected]',
18 url='https://plot.ly/python/',
19 description="Python plotting library for collaborative, "
20 "interactive, publication-quality graphs.",
21 long_description=readme(),
22 classifiers=[
23 'Development Status :: 4 - Beta',
24 'Programming Language :: Python :: 2',
25 'Programming Language :: Python :: 2.7',
26 'Programming Language :: Python :: 3',
27 'Programming Language :: Python :: 3.3',
28 'Programming Language :: Python :: 3.4',
29 'Programming Language :: Python :: 3.5',
30 'Topic :: Scientific/Engineering :: Visualization',
31 ],
32 license='MIT',
33 packages=['plotly',
34 'plotly/api',
35 'plotly/api/v1',
36 'plotly/api/v2',
37 'plotly/plotly',
38 'plotly/plotly/chunked_requests',
39 'plotly/figure_factory',
40 'plotly/graph_objs',
41 'plotly/grid_objs',
42 'plotly/widgets',
43 'plotly/offline',
44 'plotly/matplotlylib',
45 'plotly/matplotlylib/mplexporter',
46 'plotly/matplotlylib/mplexporter/renderers'],
47 package_data={'plotly': ['package_data/*']},
48 install_requires=['decorator', 'requests', 'six', 'pytz'],
49 zip_safe=False)
50
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,5 +45,9 @@
'plotly/matplotlylib/mplexporter',
'plotly/matplotlylib/mplexporter/renderers'],
package_data={'plotly': ['package_data/*']},
- install_requires=['decorator', 'requests', 'six', 'pytz'],
+ install_requires=['decorator',
+ 'nbformat>=4.2',
+ 'pytz',
+ 'requests',
+ 'six'],
zip_safe=False)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,5 +45,9 @@\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n- install_requires=['decorator', 'requests', 'six', 'pytz'],\n+ install_requires=['decorator',\n+ 'nbformat>=4.2',\n+ 'pytz',\n+ 'requests',\n+ 'six'],\n zip_safe=False)\n", "issue": "jsonschema.SchemaError when a figure is validated\nHere is a minimal example that reproduces the bug: http://nbviewer.jupyter.org/gist/empet/cf922d7c7f4269d6f63432ec67a5d020\r\n\r\nThe notebook runs OK (with plotly 2.0.1) when I call `plot(fig)`. `iplot(fig)` generates the plot too, but an error box pops up whenever Jupyter tries to save the notebook. The box has the following content:\r\n\r\n_The save operation succeeded, but the notebook does not appear to be valid. The validation error was:\r\nNotebook Validation failed_:\r\n`u'data': [{u'colorscale': u'Viridis', u'z': [[2, 27, 105, 100], [87, 14, 121, 102], [26, 121, 73, 34], [44, 105, 111, 127]], u'type': u'heatmap', u'zsmooth': u'best'}], u'layout': {u'width': 400, u'height': 400}}` _is not valid under any of the given schemas_:\r\n\r\n`{\r\n \"data\": [\r\n {\r\n \"colorscale\": \"Viridis\",\r\n \"z\": [\r\n [\r\n 2,\r\n 27,\r\n 105,\r\n 100\r\n ],\r\n [\r\n 87,\r\n 14,\r\n 121,\r\n 102\r\n ],\r\n [\r\n 26,\r\n 121,\r\n 73,\r\n 34\r\n ],\r\n [\r\n 44,\r\n 105,\r\n 111,\r\n 127\r\n ]\r\n ],\r\n \"type\": \"heatmap\",\r\n \"zsmooth\": \"best\"\r\n }\r\n ],\r\n \"layout\": {\r\n \"width\": 400,\r\n \"height\": 400\r\n }\r\n}`\r\n\r\nInitially I formulated this issue only for heatmaps, but meanwhile I realized that this behaviour manifests for any type of plot.\n", "before_files": [{"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/python/',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/api',\n 'plotly/api/v1',\n 'plotly/api/v2',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/figure_factory',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/offline',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['package_data/*']},\n install_requires=['decorator', 'requests', 'six', 'pytz'],\n zip_safe=False)\n", "path": "setup.py"}]} | 1,486 | 127 |
gh_patches_debug_38019 | rasdani/github-patches | git_diff | open-mmlab__mmaction2-642 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Seed in sampler
https://github.com/open-mmlab/mmdetection/pull/4665
</issue>
<code>
[start of mmaction/datasets/builder.py]
1 import platform
2 import random
3 from functools import partial
4
5 import numpy as np
6 from mmcv.parallel import collate
7 from mmcv.runner import get_dist_info
8 from mmcv.utils import build_from_cfg
9 from torch.utils.data import DataLoader
10
11 from .dataset_wrappers import RepeatDataset
12 from .registry import DATASETS
13 from .samplers import DistributedPowerSampler, DistributedSampler
14
15 if platform.system() != 'Windows':
16 # https://github.com/pytorch/pytorch/issues/973
17 import resource
18 rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
19 hard_limit = rlimit[1]
20 soft_limit = min(4096, hard_limit)
21 resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
22
23
24 def build_dataset(cfg, default_args=None):
25 """Build a dataset from config dict.
26
27 Args:
28 cfg (dict): Config dict. It should at least contain the key "type".
29 default_args (dict | None, optional): Default initialization arguments.
30 Default: None.
31
32 Returns:
33 Dataset: The constructed dataset.
34 """
35 if cfg['type'] == 'RepeatDataset':
36 dataset = RepeatDataset(
37 build_dataset(cfg['dataset'], default_args), cfg['times'])
38 else:
39 dataset = build_from_cfg(cfg, DATASETS, default_args)
40 return dataset
41
42
43 def build_dataloader(dataset,
44 videos_per_gpu,
45 workers_per_gpu,
46 num_gpus=1,
47 dist=True,
48 shuffle=True,
49 seed=None,
50 drop_last=False,
51 pin_memory=True,
52 **kwargs):
53 """Build PyTorch DataLoader.
54
55 In distributed training, each GPU/process has a dataloader.
56 In non-distributed training, there is only one dataloader for all GPUs.
57
58 Args:
59 dataset (:obj:`Dataset`): A PyTorch dataset.
60 videos_per_gpu (int): Number of videos on each GPU, i.e.,
61 batch size of each GPU.
62 workers_per_gpu (int): How many subprocesses to use for data
63 loading for each GPU.
64 num_gpus (int): Number of GPUs. Only used in non-distributed
65 training. Default: 1.
66 dist (bool): Distributed training/test or not. Default: True.
67 shuffle (bool): Whether to shuffle the data at every epoch.
68 Default: True.
69 seed (int | None): Seed to be used. Default: None.
70 drop_last (bool): Whether to drop the last incomplete batch in epoch.
71 Default: False
72 pin_memory (bool): Whether to use pin_memory in DataLoader.
73 Default: True
74 kwargs (dict, optional): Any keyword argument to be used to initialize
75 DataLoader.
76
77 Returns:
78 DataLoader: A PyTorch dataloader.
79 """
80 rank, world_size = get_dist_info()
81 sample_by_class = getattr(dataset, 'sample_by_class', False)
82 power = getattr(dataset, 'power', None)
83
84 if dist:
85 if sample_by_class:
86 assert power is not None
87 sampler = DistributedPowerSampler(dataset, world_size, rank, power)
88 else:
89 sampler = DistributedSampler(
90 dataset, world_size, rank, shuffle=shuffle)
91 shuffle = False
92 batch_size = videos_per_gpu
93 num_workers = workers_per_gpu
94 else:
95 sampler = None
96 batch_size = num_gpus * videos_per_gpu
97 num_workers = num_gpus * workers_per_gpu
98
99 init_fn = partial(
100 worker_init_fn, num_workers=num_workers, rank=rank,
101 seed=seed) if seed is not None else None
102
103 data_loader = DataLoader(
104 dataset,
105 batch_size=batch_size,
106 sampler=sampler,
107 num_workers=num_workers,
108 collate_fn=partial(collate, samples_per_gpu=videos_per_gpu),
109 pin_memory=pin_memory,
110 shuffle=shuffle,
111 worker_init_fn=init_fn,
112 drop_last=drop_last,
113 **kwargs)
114
115 return data_loader
116
117
118 def worker_init_fn(worker_id, num_workers, rank, seed):
119 """Init the random seed for various workers."""
120 # The seed of each worker equals to
121 # num_worker * rank + worker_id + user_seed
122 worker_seed = num_workers * rank + worker_id + seed
123 np.random.seed(worker_seed)
124 random.seed(worker_seed)
125
[end of mmaction/datasets/builder.py]
[start of mmaction/datasets/samplers/distributed_sampler.py]
1 import torch
2 from torch.utils.data import DistributedSampler as _DistributedSampler
3
4
5 class DistributedSampler(_DistributedSampler):
6 """DistributedSampler inheriting from
7 ``torch.utils.data.DistributedSampler``.
8
9 In pytorch of lower versions, there is no ``shuffle`` argument. This child
10 class will port one to DistributedSampler.
11 """
12
13 def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
14 super().__init__(dataset, num_replicas=num_replicas, rank=rank)
15 self.shuffle = shuffle
16
17 def __iter__(self):
18 # deterministically shuffle based on epoch
19 if self.shuffle:
20 g = torch.Generator()
21 g.manual_seed(self.epoch)
22 indices = torch.randperm(len(self.dataset), generator=g).tolist()
23 else:
24 indices = torch.arange(len(self.dataset)).tolist()
25
26 # add extra samples to make it evenly divisible
27 indices += indices[:(self.total_size - len(indices))]
28 assert len(indices) == self.total_size
29
30 # subsample
31 indices = indices[self.rank:self.total_size:self.num_replicas]
32 assert len(indices) == self.num_samples
33 return iter(indices)
34
35
36 class DistributedPowerSampler(_DistributedSampler):
37 """DistributedPowerSampler inheriting from
38 ``torch.utils.data.DistributedSampler``.
39
40 Samples are sampled with the probability that is proportional to the power
41 of label frequency (freq ^ power). The sampler only applies to single class
42 recognition dataset.
43
44 The default value of power is 1, which is equivalent to bootstrap sampling
45 from the entire dataset.
46 """
47
48 def __init__(self, dataset, num_replicas=None, rank=None, power=1):
49 super().__init__(dataset, num_replicas=num_replicas, rank=rank)
50 self.power = power
51
52 def __iter__(self):
53 # deterministically shuffle based on epoch
54 g = torch.Generator()
55 g.manual_seed(self.epoch)
56 video_infos_by_class = self.dataset.video_infos_by_class
57 num_classes = self.dataset.num_classes
58 # For simplicity, discontinuous labels are not permitted
59 assert set(video_infos_by_class) == set(range(num_classes))
60 counts = [len(video_infos_by_class[i]) for i in range(num_classes)]
61 counts = [cnt**self.power for cnt in counts]
62
63 indices = torch.multinomial(
64 torch.Tensor(counts),
65 self.total_size,
66 replacement=True,
67 generator=g)
68 indices = indices.data.numpy().tolist()
69 assert len(indices) == self.total_size
70
71 indices = indices[self.rank:self.total_size:self.num_replicas]
72 assert len(indices) == self.num_samples
73
74 return iter(indices)
75
[end of mmaction/datasets/samplers/distributed_sampler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmaction/datasets/builder.py b/mmaction/datasets/builder.py
--- a/mmaction/datasets/builder.py
+++ b/mmaction/datasets/builder.py
@@ -84,10 +84,11 @@
if dist:
if sample_by_class:
assert power is not None
- sampler = DistributedPowerSampler(dataset, world_size, rank, power)
+ sampler = DistributedPowerSampler(
+ dataset, world_size, rank, power, seed=seed)
else:
sampler = DistributedSampler(
- dataset, world_size, rank, shuffle=shuffle)
+ dataset, world_size, rank, shuffle=shuffle, seed=seed)
shuffle = False
batch_size = videos_per_gpu
num_workers = workers_per_gpu
diff --git a/mmaction/datasets/samplers/distributed_sampler.py b/mmaction/datasets/samplers/distributed_sampler.py
--- a/mmaction/datasets/samplers/distributed_sampler.py
+++ b/mmaction/datasets/samplers/distributed_sampler.py
@@ -10,15 +10,22 @@
class will port one to DistributedSampler.
"""
- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank)
- self.shuffle = shuffle
+ def __init__(self,
+ dataset,
+ num_replicas=None,
+ rank=None,
+ shuffle=True,
+ seed=0):
+ super().__init__(
+ dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
+ # for the compatibility from PyTorch 1.3+
+ self.seed = seed if seed is not None else 0
def __iter__(self):
# deterministically shuffle based on epoch
if self.shuffle:
g = torch.Generator()
- g.manual_seed(self.epoch)
+ g.manual_seed(self.epoch + self.seed)
indices = torch.randperm(len(self.dataset), generator=g).tolist()
else:
indices = torch.arange(len(self.dataset)).tolist()
@@ -45,14 +52,15 @@
from the entire dataset.
"""
- def __init__(self, dataset, num_replicas=None, rank=None, power=1):
+ def __init__(self, dataset, num_replicas=None, rank=None, power=1, seed=0):
super().__init__(dataset, num_replicas=num_replicas, rank=rank)
self.power = power
+ self.seed = seed if seed is not None else 0
def __iter__(self):
# deterministically shuffle based on epoch
g = torch.Generator()
- g.manual_seed(self.epoch)
+ g.manual_seed(self.epoch + self.seed)
video_infos_by_class = self.dataset.video_infos_by_class
num_classes = self.dataset.num_classes
# For simplicity, discontinuous labels are not permitted
| {"golden_diff": "diff --git a/mmaction/datasets/builder.py b/mmaction/datasets/builder.py\n--- a/mmaction/datasets/builder.py\n+++ b/mmaction/datasets/builder.py\n@@ -84,10 +84,11 @@\n if dist:\n if sample_by_class:\n assert power is not None\n- sampler = DistributedPowerSampler(dataset, world_size, rank, power)\n+ sampler = DistributedPowerSampler(\n+ dataset, world_size, rank, power, seed=seed)\n else:\n sampler = DistributedSampler(\n- dataset, world_size, rank, shuffle=shuffle)\n+ dataset, world_size, rank, shuffle=shuffle, seed=seed)\n shuffle = False\n batch_size = videos_per_gpu\n num_workers = workers_per_gpu\ndiff --git a/mmaction/datasets/samplers/distributed_sampler.py b/mmaction/datasets/samplers/distributed_sampler.py\n--- a/mmaction/datasets/samplers/distributed_sampler.py\n+++ b/mmaction/datasets/samplers/distributed_sampler.py\n@@ -10,15 +10,22 @@\n class will port one to DistributedSampler.\n \"\"\"\n \n- def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n- super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n- self.shuffle = shuffle\n+ def __init__(self,\n+ dataset,\n+ num_replicas=None,\n+ rank=None,\n+ shuffle=True,\n+ seed=0):\n+ super().__init__(\n+ dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)\n+ # for the compatibility from PyTorch 1.3+\n+ self.seed = seed if seed is not None else 0\n \n def __iter__(self):\n # deterministically shuffle based on epoch\n if self.shuffle:\n g = torch.Generator()\n- g.manual_seed(self.epoch)\n+ g.manual_seed(self.epoch + self.seed)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n@@ -45,14 +52,15 @@\n from the entire dataset.\n \"\"\"\n \n- def __init__(self, dataset, num_replicas=None, rank=None, power=1):\n+ def __init__(self, dataset, num_replicas=None, rank=None, power=1, seed=0):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.power = power\n+ self.seed = seed if seed is not None else 0\n \n def __iter__(self):\n # deterministically shuffle based on epoch\n g = torch.Generator()\n- g.manual_seed(self.epoch)\n+ g.manual_seed(self.epoch + self.seed)\n video_infos_by_class = self.dataset.video_infos_by_class\n num_classes = self.dataset.num_classes\n # For simplicity, discontinuous labels are not permitted\n", "issue": "Seed in sampler\nhttps://github.com/open-mmlab/mmdetection/pull/4665\n", "before_files": [{"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom mmcv.utils import build_from_cfg\nfrom torch.utils.data import DataLoader\n\nfrom .dataset_wrappers import RepeatDataset\nfrom .registry import DATASETS\nfrom .samplers import DistributedPowerSampler, DistributedSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n hard_limit = rlimit[1]\n soft_limit = min(4096, hard_limit)\n resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))\n\n\ndef build_dataset(cfg, default_args=None):\n \"\"\"Build a dataset from config dict.\n\n Args:\n cfg (dict): Config dict. 
It should at least contain the key \"type\".\n default_args (dict | None, optional): Default initialization arguments.\n Default: None.\n\n Returns:\n Dataset: The constructed dataset.\n \"\"\"\n if cfg['type'] == 'RepeatDataset':\n dataset = RepeatDataset(\n build_dataset(cfg['dataset'], default_args), cfg['times'])\n else:\n dataset = build_from_cfg(cfg, DATASETS, default_args)\n return dataset\n\n\ndef build_dataloader(dataset,\n videos_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n drop_last=False,\n pin_memory=True,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (:obj:`Dataset`): A PyTorch dataset.\n videos_per_gpu (int): Number of videos on each GPU, i.e.,\n batch size of each GPU.\n workers_per_gpu (int): How many subprocesses to use for data\n loading for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed\n training. Default: 1.\n dist (bool): Distributed training/test or not. Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n seed (int | None): Seed to be used. Default: None.\n drop_last (bool): Whether to drop the last incomplete batch in epoch.\n Default: False\n pin_memory (bool): Whether to use pin_memory in DataLoader.\n Default: True\n kwargs (dict, optional): Any keyword argument to be used to initialize\n DataLoader.\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n sample_by_class = getattr(dataset, 'sample_by_class', False)\n power = getattr(dataset, 'power', None)\n\n if dist:\n if sample_by_class:\n assert power is not None\n sampler = DistributedPowerSampler(dataset, world_size, rank, power)\n else:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=shuffle)\n shuffle = False\n batch_size = videos_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = None\n batch_size = num_gpus * videos_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n init_fn = partial(\n worker_init_fn, num_workers=num_workers, rank=rank,\n seed=seed) if seed is not None else None\n\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=videos_per_gpu),\n pin_memory=pin_memory,\n shuffle=shuffle,\n worker_init_fn=init_fn,\n drop_last=drop_last,\n **kwargs)\n\n return data_loader\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n \"\"\"Init the random seed for various workers.\"\"\"\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n", "path": "mmaction/datasets/builder.py"}, {"content": "import torch\nfrom torch.utils.data import DistributedSampler as _DistributedSampler\n\n\nclass DistributedSampler(_DistributedSampler):\n \"\"\"DistributedSampler inheriting from\n ``torch.utils.data.DistributedSampler``.\n\n In pytorch of lower versions, there is no ``shuffle`` argument. 
This child\n class will port one to DistributedSampler.\n \"\"\"\n\n def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.shuffle = shuffle\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n if self.shuffle:\n g = torch.Generator()\n g.manual_seed(self.epoch)\n indices = torch.randperm(len(self.dataset), generator=g).tolist()\n else:\n indices = torch.arange(len(self.dataset)).tolist()\n\n # add extra samples to make it evenly divisible\n indices += indices[:(self.total_size - len(indices))]\n assert len(indices) == self.total_size\n\n # subsample\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n return iter(indices)\n\n\nclass DistributedPowerSampler(_DistributedSampler):\n \"\"\"DistributedPowerSampler inheriting from\n ``torch.utils.data.DistributedSampler``.\n\n Samples are sampled with the probability that is proportional to the power\n of label frequency (freq ^ power). The sampler only applies to single class\n recognition dataset.\n\n The default value of power is 1, which is equivalent to bootstrap sampling\n from the entire dataset.\n \"\"\"\n\n def __init__(self, dataset, num_replicas=None, rank=None, power=1):\n super().__init__(dataset, num_replicas=num_replicas, rank=rank)\n self.power = power\n\n def __iter__(self):\n # deterministically shuffle based on epoch\n g = torch.Generator()\n g.manual_seed(self.epoch)\n video_infos_by_class = self.dataset.video_infos_by_class\n num_classes = self.dataset.num_classes\n # For simplicity, discontinuous labels are not permitted\n assert set(video_infos_by_class) == set(range(num_classes))\n counts = [len(video_infos_by_class[i]) for i in range(num_classes)]\n counts = [cnt**self.power for cnt in counts]\n\n indices = torch.multinomial(\n torch.Tensor(counts),\n self.total_size,\n replacement=True,\n generator=g)\n indices = indices.data.numpy().tolist()\n assert len(indices) == self.total_size\n\n indices = indices[self.rank:self.total_size:self.num_replicas]\n assert len(indices) == self.num_samples\n\n return iter(indices)\n", "path": "mmaction/datasets/samplers/distributed_sampler.py"}]} | 2,515 | 651 |
gh_patches_debug_12617 | rasdani/github-patches | git_diff | learningequality__kolibri-3529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can change username with duplicates
<!--
Instructions:
* Fill out the sections below, replace …'s with information about your issue
* Use the 'preview' function above this text box to verify formatting before submitting
-->
### Observed behavior
<!--
Description of the behavior that was observed, including screenshots or other references when applicable
-->
I was sure I fixed the issue before. It's now reoccurring as seen below:


### Expected behavior
<!--
Description of what behavior was expected but did not occur
-->
Should throw an error that says username already exists
### User-facing consequences
<!--
Implications and real-world consequences for learners, coaches, admins, and other users of the application
-->
Can't login?
### Errors and logs
<!--
Relevant logs from:
* the command line
* ~/.kolibri/kolibri.log
* the browser console
Please wrap errors in triple backticks for clean formatting like this:
```
01:10 info: something happened
01:12 error: something bad happened
```
-->
### Steps to reproduce
<!--
Precise steps that someone else can follow in order to see this behavior
-->
### Context
<!--
Tell us about your environment, including:
* Kolibri version
* Operating system
* Browser
-->
Kolibri version: develop
Same username suggested while signing in (case sensitivity not accounted for) after the user edits a name using the edit-username feature.
### Observed behavior
When the user edits the username from his/her profile and then tries to log in again, there can be two suggestions of the same username differing only by case, e.g. sahilm and sahilM.
### Expected behavior
The username must not be allowed to duplicate an existing one (as mentioned in #3458), and suggestions must not differ only by case; e.g. sahilm and sahilM cannot exist simultaneously.
### User-facing consequences
It would create confusion in the classroom if the students are shown the same username.
### Errors and logs


### Steps to reproduce
1. Log in as admin and give users permission to edit their username.
2. Log in as a user and edit the username to one that matches an existing username but with different casing, like abc1 vs. ABC1.
3. Try to sign in as a user and look at the username suggestions, which will show the same name with different casing.
### Context
Kolibri version : kolibri 0.9.0
Operating system : ubuntu 14.04
Browser : Chrome
### Screenshot



</issue>
<code>
[start of kolibri/auth/serializers.py]
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 from django.utils.translation import ugettext_lazy as _
6 from rest_framework import serializers
7 from rest_framework.validators import UniqueTogetherValidator
8
9 from .models import Classroom
10 from .models import Facility
11 from .models import FacilityDataset
12 from .models import FacilityUser
13 from .models import LearnerGroup
14 from .models import Membership
15 from .models import Role
16
17
18 class RoleSerializer(serializers.ModelSerializer):
19 collection_parent = serializers.SerializerMethodField()
20
21 class Meta:
22 model = Role
23 fields = ('id', 'kind', 'collection', 'user', 'collection_parent',)
24
25 def get_collection_parent(self, instance):
26 if instance.collection.parent is not None:
27 return instance.collection.parent.id
28 else:
29 return None
30
31
32 class FacilityUserSerializer(serializers.ModelSerializer):
33 roles = RoleSerializer(many=True, read_only=True)
34
35 class Meta:
36 model = FacilityUser
37 extra_kwargs = {'password': {'write_only': True}}
38 fields = ('id', 'username', 'full_name', 'password', 'facility', 'roles', 'is_superuser')
39
40 def create(self, validated_data):
41 if FacilityUser.objects.filter(username__iexact=validated_data['username']).exists():
42 raise serializers.ValidationError(_('An account with that username already exists'))
43 return super(FacilityUserSerializer, self).create(validated_data)
44
45
46 class FacilityUserSignupSerializer(FacilityUserSerializer):
47
48 def validate_username(self, value):
49 if FacilityUser.objects.filter(username__iexact=value).exists():
50 raise serializers.ValidationError(_('An account with that username already exists'))
51 return value
52
53
54 class FacilityUsernameSerializer(serializers.ModelSerializer):
55
56 class Meta:
57 model = FacilityUser
58 fields = ('username', )
59
60
61 class MembershipSerializer(serializers.ModelSerializer):
62
63 class Meta:
64 model = Membership
65 fields = ('id', 'collection', 'user')
66
67
68 class FacilityDatasetSerializer(serializers.ModelSerializer):
69
70 class Meta:
71 model = FacilityDataset
72 fields = ('id', 'learner_can_edit_username', 'learner_can_edit_name', 'learner_can_edit_password',
73 'learner_can_sign_up', 'learner_can_delete_account', 'learner_can_login_with_no_password',
74 'show_download_button_in_learn', 'description', 'location')
75
76
77 class FacilitySerializer(serializers.ModelSerializer):
78 dataset = FacilityDatasetSerializer(read_only=True)
79
80 class Meta:
81 model = Facility
82 extra_kwargs = {'id': {'read_only': True}, 'dataset': {'read_only': True}}
83 fields = ('id', 'name', 'dataset')
84
85
86 class PublicFacilitySerializer(serializers.ModelSerializer):
87
88 class Meta:
89 model = Facility
90 fields = ('dataset', 'name')
91
92
93 class ClassroomSerializer(serializers.ModelSerializer):
94 learner_count = serializers.SerializerMethodField()
95 coaches = serializers.SerializerMethodField()
96
97 def get_learner_count(self, instance):
98 return instance.get_members().count()
99
100 def get_coaches(self, instance):
101 return FacilityUserSerializer(instance.get_coaches(), many=True).data
102
103 class Meta:
104 model = Classroom
105 fields = (
106 'id',
107 'name',
108 'parent',
109 'learner_count',
110 'coaches',
111 )
112
113 validators = [
114 UniqueTogetherValidator(
115 queryset=Classroom.objects.all(),
116 fields=('parent', 'name')
117 )
118 ]
119
120
121 class LearnerGroupSerializer(serializers.ModelSerializer):
122
123 user_ids = serializers.SerializerMethodField()
124
125 def get_user_ids(self, group):
126 return [str(user_id['id']) for user_id in group.get_members().values('id')]
127
128 class Meta:
129 model = LearnerGroup
130 fields = ('id', 'name', 'parent', 'user_ids')
131
132 validators = [
133 UniqueTogetherValidator(
134 queryset=Classroom.objects.all(),
135 fields=('parent', 'name')
136 )
137 ]
138
[end of kolibri/auth/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/auth/serializers.py b/kolibri/auth/serializers.py
--- a/kolibri/auth/serializers.py
+++ b/kolibri/auth/serializers.py
@@ -42,6 +42,11 @@
raise serializers.ValidationError(_('An account with that username already exists'))
return super(FacilityUserSerializer, self).create(validated_data)
+ def update(self, instance, validated_data):
+ if validated_data.get('username') and FacilityUser.objects.exclude(id__exact=instance.id).filter(username__iexact=validated_data['username']).exists():
+ raise serializers.ValidationError(_('An account with that username already exists'))
+ return super(FacilityUserSerializer, self).update(instance, validated_data)
+
class FacilityUserSignupSerializer(FacilityUserSerializer):
| {"golden_diff": "diff --git a/kolibri/auth/serializers.py b/kolibri/auth/serializers.py\n--- a/kolibri/auth/serializers.py\n+++ b/kolibri/auth/serializers.py\n@@ -42,6 +42,11 @@\n raise serializers.ValidationError(_('An account with that username already exists'))\n return super(FacilityUserSerializer, self).create(validated_data)\n \n+ def update(self, instance, validated_data):\n+ if validated_data.get('username') and FacilityUser.objects.exclude(id__exact=instance.id).filter(username__iexact=validated_data['username']).exists():\n+ raise serializers.ValidationError(_('An account with that username already exists'))\n+ return super(FacilityUserSerializer, self).update(instance, validated_data)\n+\n \n class FacilityUserSignupSerializer(FacilityUserSerializer):\n", "issue": "Can change username with duplicates\n<!--\r\nInstructions:\r\n * Fill out the sections below, replace \u2026's with information about your issue\r\n * Use the 'preview' function above this text box to verify formatting before submitting\r\n-->\r\n\r\n### Observed behavior\r\n<!--\r\nDescription of the behavior that was observed, including screenshots or other references when applicable\r\n-->\r\n\r\nI was sure I fixed the issue before. It's now reoccurring as seen below:\r\n\r\n\r\n\r\n \r\n\r\n\r\n\r\n### Expected behavior\r\n<!--\r\nDescription of what behavior was expected but did not occur\r\n-->\r\n\r\nShould throw an error that says username already exists\r\n\r\n### User-facing consequences\r\n<!--\r\nImplications and real-world consequences for learners, coaches, admins, and other users of the application\r\n-->\r\n\r\nCan't login?\r\n\r\n### Errors and logs\r\n<!--\r\nRelevant logs from:\r\n * the command line\r\n * ~/.kolibri/kolibri.log\r\n * the browser console\r\n\r\nPlease wrap errors in triple backticks for clean formatting like this:\r\n```\r\n01:10 info: something happened\r\n01:12 error: something bad happened\r\n```\r\n-->\r\n\r\n### Steps to reproduce\r\n<!--\r\nPrecise steps that someone else can follow in order to see this behavior\r\n-->\r\n\r\n\r\n### Context\r\n<!--\r\nTell us about your environment, including:\r\n * Kolibri version\r\n * Operating system\r\n * Browser\r\n-->\r\n\r\nKolibri version: develop\r\n\nSame username suggestion while signing-in (case-sensitive feature not accounted for) after the user edits a name using the edit username feature.\n### Observed behavior\r\nWhen the user edits the username from his/her profile and then again tries to login, there can be 2 suggestions of same username based on case-sensitive nature, for eg sahilm, sahilM\r\n\r\n### Expected behavior\r\n\r\nThe username must not be allowed to be the same (as mentioned in #3458) and suggestions must not be case-sensitive nature, for eg sahilm and sahilM cannot exist simultaneously.\r\n\r\n### User-facing consequences\r\nIt would create confusion in the classroom if the students are shown same username.\r\n\r\n### Errors and logs\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Login as admin and give permission to the users to edit the username.\r\n2. Login as a user and edit the username which is same as existing but has different cases, like abc1, ABC1.\r\n3. 
Try to sign in as user and look for suggestions which will be same with different cases.\r\n\r\n### Context\r\n\r\nKolibri version : kolibri 0.9.0\r\nOperating system : ubuntu 14.04\r\nBrowser : Chrome\r\n\r\n### Screenshot\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom .models import Classroom\nfrom .models import Facility\nfrom .models import FacilityDataset\nfrom .models import FacilityUser\nfrom .models import LearnerGroup\nfrom .models import Membership\nfrom .models import Role\n\n\nclass RoleSerializer(serializers.ModelSerializer):\n collection_parent = serializers.SerializerMethodField()\n\n class Meta:\n model = Role\n fields = ('id', 'kind', 'collection', 'user', 'collection_parent',)\n\n def get_collection_parent(self, instance):\n if instance.collection.parent is not None:\n return instance.collection.parent.id\n else:\n return None\n\n\nclass FacilityUserSerializer(serializers.ModelSerializer):\n roles = RoleSerializer(many=True, read_only=True)\n\n class Meta:\n model = FacilityUser\n extra_kwargs = {'password': {'write_only': True}}\n fields = ('id', 'username', 'full_name', 'password', 'facility', 'roles', 'is_superuser')\n\n def create(self, validated_data):\n if FacilityUser.objects.filter(username__iexact=validated_data['username']).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return super(FacilityUserSerializer, self).create(validated_data)\n\n\nclass FacilityUserSignupSerializer(FacilityUserSerializer):\n\n def validate_username(self, value):\n if FacilityUser.objects.filter(username__iexact=value).exists():\n raise serializers.ValidationError(_('An account with that username already exists'))\n return value\n\n\nclass FacilityUsernameSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = FacilityUser\n fields = ('username', )\n\n\nclass MembershipSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Membership\n fields = ('id', 'collection', 'user')\n\n\nclass FacilityDatasetSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = FacilityDataset\n fields = ('id', 'learner_can_edit_username', 'learner_can_edit_name', 'learner_can_edit_password',\n 'learner_can_sign_up', 'learner_can_delete_account', 'learner_can_login_with_no_password',\n 'show_download_button_in_learn', 'description', 'location')\n\n\nclass FacilitySerializer(serializers.ModelSerializer):\n dataset = FacilityDatasetSerializer(read_only=True)\n\n class Meta:\n model = Facility\n extra_kwargs = {'id': {'read_only': True}, 'dataset': {'read_only': True}}\n fields = ('id', 'name', 'dataset')\n\n\nclass PublicFacilitySerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Facility\n fields = ('dataset', 'name')\n\n\nclass ClassroomSerializer(serializers.ModelSerializer):\n learner_count = serializers.SerializerMethodField()\n coaches = serializers.SerializerMethodField()\n\n def get_learner_count(self, instance):\n return instance.get_members().count()\n\n def get_coaches(self, instance):\n return FacilityUserSerializer(instance.get_coaches(), many=True).data\n\n class Meta:\n model = Classroom\n fields = (\n 'id',\n 'name',\n 'parent',\n 'learner_count',\n 'coaches',\n )\n\n validators = [\n 
UniqueTogetherValidator(\n queryset=Classroom.objects.all(),\n fields=('parent', 'name')\n )\n ]\n\n\nclass LearnerGroupSerializer(serializers.ModelSerializer):\n\n user_ids = serializers.SerializerMethodField()\n\n def get_user_ids(self, group):\n return [str(user_id['id']) for user_id in group.get_members().values('id')]\n\n class Meta:\n model = LearnerGroup\n fields = ('id', 'name', 'parent', 'user_ids')\n\n validators = [\n UniqueTogetherValidator(\n queryset=Classroom.objects.all(),\n fields=('parent', 'name')\n )\n ]\n", "path": "kolibri/auth/serializers.py"}]} | 2,777 | 177 |
gh_patches_debug_57139 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace `sphinxcontrib-napoleon`
It is now bundled with `sphinx` as `sphinx.ext.napoleon`.
So, we need to remove this dependency from both:
- `pyproject.toml`
- `docs/requirements.txt`
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/master/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('..'))
18
19
20 # -- Project information -----------------------------------------------------
21
22 def _get_project_meta():
23 import tomlkit
24
25 with open('../pyproject.toml') as pyproject:
26 contents = pyproject.read()
27
28 return tomlkit.parse(contents)['tool']['poetry']
29
30
31 pkg_meta = _get_project_meta()
32 project = pkg_meta['name']
33 copyright = '2018, wemake.services'
34 author = 'wemake.services'
35
36 # The short X.Y version
37 version = pkg_meta['version']
38 # The full version, including alpha/beta/rc tags
39 release = version
40
41
42 # -- General configuration ---------------------------------------------------
43
44 # If your documentation needs a minimal Sphinx version, state it here.
45 #
46 # needs_sphinx = '1.0'
47
48 # Add any Sphinx extension module names here, as strings. They can be
49 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
50 # ones.
51 extensions = [
52 'sphinx.ext.autodoc',
53 'sphinx.ext.doctest',
54 'sphinx.ext.todo',
55 'sphinx.ext.coverage',
56 'sphinx.ext.viewcode',
57 'sphinx.ext.autosummary',
58
59 # Used to include .md files:
60 'm2r',
61
62 # Used to write python docstrings in a readable way:
63 'sphinxcontrib.napoleon',
64
65 # Used to insert typehints into the final docs:
66 'sphinx_autodoc_typehints',
67
68 # Used to embed values from the source code into the docs:
69 'added_value',
70 ]
71
72 autoclass_content = 'class'
73 autodoc_member_order = 'bysource'
74
75 autodoc_mock_imports = [
76 'attr',
77 ]
78
79 autodoc_member_order = 'bysource'
80 autodoc_default_flags = {
81 'members': '',
82 'undoc-members': 'code,error_template',
83 'exclude-members': '__dict__,__weakref__',
84 }
85
86 # Add any paths that contain templates here, relative to this directory.
87 templates_path = ['_templates']
88
89 # The suffix(es) of source filenames.
90 # You can specify multiple suffix as a list of string:
91
92 source_suffix = ['.rst', '.md']
93
94 # The master toctree document.
95 master_doc = 'index'
96
97 # The language for content autogenerated by Sphinx. Refer to documentation
98 # for a list of supported languages.
99 #
100 # This is also used if you do content translation via gettext catalogs.
101 # Usually you set "language" from the command line for these cases.
102 language = None
103
104 # List of patterns, relative to source directory, that match files and
105 # directories to ignore when looking for source files.
106 # This pattern also affects html_static_path and html_extra_path .
107 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
108
109 # The name of the Pygments (syntax highlighting) style to use.
110 pygments_style = 'sphinx'
111
112 add_module_names = False
113
114 autodoc_default_options = {
115 'show-inheritance': True,
116 }
117
118
119 # -- Options for HTML output -------------------------------------------------
120
121 # The theme to use for HTML and HTML Help pages. See the documentation for
122 # a list of builtin themes.
123 #
124 html_theme = 'alabaster'
125
126 # Theme options are theme-specific and customize the look and feel of a theme
127 # further. For a list of options available for each theme, see the
128 # documentation.
129 html_theme_options = {
130 'sidebar_collapse': False,
131 'show_powered_by': False,
132 }
133
134 # Add any paths that contain custom static files (such as style sheets) here,
135 # relative to this directory. They are copied after the builtin static files,
136 # so a file named "default.css" will overwrite the builtin "default.css".
137 html_static_path = ['_static']
138
139 # Custom sidebar templates, must be a dictionary that maps document names
140 # to template names.
141 #
142 # This is required for the alabaster theme
143 # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
144 html_sidebars = {
145 '**': [
146 'about.html',
147 'navigation.html',
148 'moreinfo.html',
149 'github.html',
150 'searchbox.html',
151 ]
152 }
153
154
155 # -- Options for HTMLHelp output ---------------------------------------------
156
157 # Output file base name for HTML help builder.
158 htmlhelp_basename = 'wemake-python-styleguidedoc'
159
160
161 # -- Options for LaTeX output ------------------------------------------------
162
163 latex_elements = {
164 # The paper size ('letterpaper' or 'a4paper').
165 #
166 # 'papersize': 'letterpaper',
167
168 # The font size ('10pt', '11pt' or '12pt').
169 #
170 # 'pointsize': '10pt',
171
172 # Additional stuff for the LaTeX preamble.
173 #
174 # 'preamble': '',
175
176 # Latex figure (float) alignment
177 #
178 # 'figure_align': 'htbp',
179 }
180
181 # Grouping the document tree into LaTeX files. List of tuples
182 # (source start file, target name, title,
183 # author, documentclass [howto, manual, or own class]).
184 latex_documents = [
185 (
186 master_doc,
187 'wemake-python-styleguide.tex',
188 'wemake-python-styleguide Documentation',
189 'wemake.services',
190 'manual',
191 ),
192 ]
193
194
195 # -- Options for manual page output ------------------------------------------
196
197 # One entry per manual page. List of tuples
198 # (source start file, name, description, authors, manual section).
199 man_pages = [
200 (
201 master_doc,
202 'wemake-python-styleguide',
203 'wemake-python-styleguide Documentation',
204 [author],
205 1,
206 )
207 ]
208
209
210 # -- Options for Texinfo output ----------------------------------------------
211
212 # Grouping the document tree into Texinfo files. List of tuples
213 # (source start file, target name, title, author,
214 # dir menu entry, description, category)
215 texinfo_documents = [
216 (
217 master_doc,
218 'wemake-python-styleguide',
219 'wemake-python-styleguide Documentation',
220 author,
221 'wemake-python-styleguide',
222 'One line description of project.',
223 'Miscellaneous',
224 ),
225 ]
226
227
228 # -- Extension configuration -------------------------------------------------
229
230 napoleon_numpy_docstring = False
231
232 # -- Options for todo extension ----------------------------------------------
233
234 # If true, `todo` and `todoList` produce output, else they produce nothing.
235 todo_include_todos = True
236
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -55,13 +55,11 @@
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'sphinx.ext.autosummary',
+ 'sphinx.ext.napoleon',
# Used to include .md files:
'm2r',
- # Used to write python docstrings in a readable way:
- 'sphinxcontrib.napoleon',
-
# Used to insert typehints into the final docs:
'sphinx_autodoc_typehints',
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -55,13 +55,11 @@\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n+ 'sphinx.ext.napoleon',\n \n # Used to include .md files:\n 'm2r',\n \n- # Used to write python docstrings in a readable way:\n- 'sphinxcontrib.napoleon',\n-\n # Used to insert typehints into the final docs:\n 'sphinx_autodoc_typehints',\n", "issue": "Replace `sphinxcontrib-napoleon`\nIt is now bundled with `sphinx` as `sphinx.ext.napoleon`.\r\n\r\nSo, we need to remove this dependency from both:\r\n- `pyproject.toml`\r\n- `docs/requirements.txt`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\n\n# -- Project information -----------------------------------------------------\n\ndef _get_project_meta():\n import tomlkit\n\n with open('../pyproject.toml') as pyproject:\n contents = pyproject.read()\n\n return tomlkit.parse(contents)['tool']['poetry']\n\n\npkg_meta = _get_project_meta()\nproject = pkg_meta['name']\ncopyright = '2018, wemake.services'\nauthor = 'wemake.services'\n\n# The short X.Y version\nversion = pkg_meta['version']\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n\n # Used to include .md files:\n 'm2r',\n\n # Used to write python docstrings in a readable way:\n 'sphinxcontrib.napoleon',\n\n # Used to insert typehints into the final docs:\n 'sphinx_autodoc_typehints',\n\n # Used to embed values from the source code into the docs:\n 'added_value',\n]\n\nautoclass_content = 'class'\nautodoc_member_order = 'bysource'\n\nautodoc_mock_imports = [\n 'attr',\n]\n\nautodoc_member_order = 'bysource'\nautodoc_default_flags = {\n 'members': '',\n 'undoc-members': 'code,error_template',\n 'exclude-members': '__dict__,__weakref__',\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n\nsource_suffix = ['.rst', '.md']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\nadd_module_names = False\n\nautodoc_default_options = {\n 'show-inheritance': True,\n}\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'sidebar_collapse': False,\n 'show_powered_by': False,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n '**': [\n 'about.html',\n 'navigation.html',\n 'moreinfo.html',\n 'github.html',\n 'searchbox.html',\n ]\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'wemake-python-styleguidedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n 'wemake-python-styleguide.tex',\n 'wemake-python-styleguide Documentation',\n 'wemake.services',\n 'manual',\n ),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (\n master_doc,\n 'wemake-python-styleguide',\n 'wemake-python-styleguide Documentation',\n [author],\n 1,\n )\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n 'wemake-python-styleguide',\n 'wemake-python-styleguide Documentation',\n author,\n 'wemake-python-styleguide',\n 'One line description of project.',\n 'Miscellaneous',\n ),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\nnapoleon_numpy_docstring = False\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}]} | 2,695 | 138 |
gh_patches_debug_20709 | rasdani/github-patches | git_diff | keras-team__keras-nlp-131 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with tf.data and MLMMaskGenerator for dense inputs
When using the MLMMaskGenerator to map over dense, batched inputs in a tf.data.Dataset, we get the following error...
`TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'`
This colab has a reproduction https://colab.research.google.com/gist/mattdangerw/4596df85105ff6e6731128fc79d16bf3/mlmmaskgenerator-bug.ipynb
tf.data + dense, batched inputs might be the most common use case for this layer, so this is an important one to fix.
</issue>
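The reproduction lives in the linked colab and is not quoted in the issue, so here is a minimal sketch of the failing pattern. The import path follows the layer's docstring below, and the dataset shapes are assumed purely for illustration:

```python
import tensorflow as tf
from keras_nlp.layers.preprocessing import MLMMaskGenerator

masker = MLMMaskGenerator(
    vocabulary_size=10,
    mask_selection_rate=0.5,
    mask_token_id=0,
    mask_selection_length=5,
)

# Dense, batched token ids flowing through tf.data:
ds = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform((32, 8), maxval=10, dtype=tf.int64)
).batch(4)

# Dataset.map traces the layer's call() in graph mode; the dynamic
# tf.rank(inputs) check below is a symbolic tensor there rather than a
# Python bool, and the failure surfaces as the TypeError quoted above.
ds = ds.map(masker)
```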
<code>
[start of keras_nlp/layers/preprocessing/mlm_mask_generator.py]
1 # Copyright 2022 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import tensorflow as tf
16 import tensorflow_text as tf_text
17 from tensorflow import keras
18
19
20 class MLMMaskGenerator(keras.layers.Layer):
21 """Layer that applies language model masking.
22
23 This layer is useful for preparing inputs for masked languaged modeling
24 (MLM) tasks. It follows the masking strategy described in the [original BERT
25 paper](https://arxiv.org/abs/1810.04805). Given tokenized text,
26 it randomly selects certain number of tokens for masking. Then for each
27 selected token, it has a chance (configurable) to be replaced by
28 "mask token" or random token, or stay unchanged.
29
30 Users should use this layer with `tf.data` to generate masks.
31
32 Args:
33 vocabulary_size: int, the size of the vocabulary.
34 mask_selection_rate: float, the probability of a token is selected for
35 masking.
36 mask_token_id: int. The id of mask token.
37 mask_selection_length: int, defaults to None. Maximum number of tokens
38 selected for masking in each sequence. If set, the output
39 `mask_positions`, `mask_ids` and `mask_weights` will be padded
40 to dense tensors of length `mask_selection_length`,
41 otherwise the output will be a RaggedTensor.
42 unselectable_token_ids: A list of tokens, defaults to [0] (the default
43 `padding_token_id`).
44 mask_token_rate: float, defaults to 0.8. `mask_token_rate` must be
45 between 0 and 1 which indicates how often the mask_token is
46 substituted for tokens selected for masking.
47 random_token_rate: float, defaults to 0.1. `random_token_rate` must be
48 between 0 and 1 which indicates how often a random token is
49 substituted for tokens selected for masking. Default is 0.1.
50 Note: mask_token_rate + random_token_rate <= 1, and for
51 (1 - mask_token_rate - random_token_rate), the token will not be
52 changed.
53
54 Input:
55 A 1D integer tensor of shape [sequence_length] or a 2D integer tensor
56 of shape [batch_size, sequence_length], or a 2D integer RaggedTensor.
57 Represents the sequence to mask.
58
59 Returns:
60 A Dict with 4 keys:
61 tokens: Tensor or RaggedTensor, has the same type and shape of
62 input. Sequence after getting masked.
63 mask_positions: Tensor, or RaggedTensor if `mask_selection_length`
64 is None. The positions of tokens getting masked.
65 mask_ids: Tensor, or RaggedTensor if `mask_selection_length` is
66 None. The original token ids at masked positions.
67 mask_weights: Tensor, or RaggedTensor if `mask_selection_length` is
68 None. `mask_weights` has the same shape as `mask_positions` and
69 `mask_ids`. Each element in `mask_weights` should be 0 or 1,
70 1 means the corresponding position in `mask_positions` is an
71 actual mask, 0 means it is a pad.
72
73 Examples:
74
75 Basic usage.
76 >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \
77 vocabulary_size=10, mask_selection_rate=0.2, mask_token_id=0, \
78 mask_selection_length=5)
79 >>> masker(tf.constant([1, 2, 3, 4, 5]))
80
81 Ragged Input:
82 >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \
83 vocabulary_size=10, mask_selection_rate=0.5, mask_token_id=0, \
84 mask_selection_length=5)
85 >>> masker(tf.ragged.constant([[1, 2], [1, 2, 3, 4]]))
86 """
87
88 def __init__(
89 self,
90 vocabulary_size,
91 mask_selection_rate,
92 mask_token_id,
93 mask_selection_length=None,
94 unselectable_token_ids=[0],
95 mask_token_rate=0.8,
96 random_token_rate=0.1,
97 **kwargs,
98 ):
99 super().__init__(**kwargs)
100 self.vocabulary_size = vocabulary_size
101 self.unselectable_token_ids = unselectable_token_ids
102 self.mask_selection_rate = mask_selection_rate
103 self.mask_selection_length = mask_selection_length
104 self.mask_token_rate = mask_token_rate
105 self.random_token_rate = random_token_rate
106
107 if mask_token_id >= vocabulary_size:
108 raise ValueError(
109 f"Mask token id should be in range [0, vocabulary_size - 1], "
110 f"but received mask_token_id={mask_token_id}."
111 )
112 self.mask_token_id = mask_token_id
113
114 max_selections = self.mask_selection_length
115 if max_selections is None:
116 # Set a large number to remove the `max_selections_per_batch` cap.
117 max_selections = 2**31 - 1
118 self._random_selector = tf_text.RandomItemSelector(
119 max_selections_per_batch=max_selections,
120 selection_rate=self.mask_selection_rate,
121 unselectable_ids=self.unselectable_token_ids,
122 )
123 self._mask_values_chooser = tf_text.MaskValuesChooser(
124 self.vocabulary_size,
125 self.mask_token_id,
126 mask_token_rate=self.mask_token_rate,
127 random_token_rate=self.random_token_rate,
128 )
129
130 def call(self, inputs):
131 input_is_ragged = isinstance(inputs, tf.RaggedTensor)
132 input_is_1d = tf.rank(inputs) == 1
133 if input_is_1d:
134 # If inputs is of rank 1, we manually add the batch axis.
135 inputs = inputs[tf.newaxis, :]
136 if not input_is_ragged:
137 # `tf_text.mask_language_model` requires a ragged tensor, so
138 # convert dense to ragged.
139 inputs = tf.RaggedTensor.from_tensor(inputs)
140 (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(
141 inputs,
142 item_selector=self._random_selector,
143 mask_values_chooser=self._mask_values_chooser,
144 )
145
146 if not input_is_ragged:
147 # If we converted the input from dense to ragged, convert back.
148 tokens = tokens.to_tensor()
149
150 mask_weights = tf.ones_like(mask_positions, self.compute_dtype)
151 # If mask_selection_length is set, covert to raggeds to dense.
152 if self.mask_selection_length:
153 target_shape = tf.cast([-1, self.mask_selection_length], tf.int64)
154 mask_positions = mask_positions.to_tensor(shape=target_shape)
155 mask_ids = mask_ids.to_tensor(shape=target_shape)
156 mask_weights = mask_weights.to_tensor(shape=target_shape)
157
158 if input_is_1d:
159 # If inputs is 1D, we format the output to be 1D as well.
160 tokens = tf.squeeze(tokens, axis=0)
161 mask_positions = tf.squeeze(mask_positions, axis=0)
162 mask_ids = tf.squeeze(mask_ids, axis=0)
163 mask_weights = tf.squeeze(mask_weights, axis=0)
164
165 output_dict = {
166 "tokens": tokens,
167 "mask_positions": mask_positions,
168 "mask_ids": mask_ids,
169 "mask_weights": mask_weights,
170 }
171 return output_dict
172
173 def get_config(self):
174 config = super().get_config()
175 config.update(
176 {
177 "vocabulary_size": self.vocabulary_size,
178 "mask_selection_rate": self.mask_selection_rate,
179 "mask_selection_length": self.mask_selection_length,
180 "unselectable_token_ids": self.unselectable_token_ids,
181 "mask_token_id": self.mask_token_id,
182 "mask_token_rate": self.mask_token_rate,
183 "random_token_rate": self.random_token_rate,
184 }
185 )
186 return config
187
[end of keras_nlp/layers/preprocessing/mlm_mask_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/keras_nlp/layers/preprocessing/mlm_mask_generator.py b/keras_nlp/layers/preprocessing/mlm_mask_generator.py
--- a/keras_nlp/layers/preprocessing/mlm_mask_generator.py
+++ b/keras_nlp/layers/preprocessing/mlm_mask_generator.py
@@ -129,7 +129,7 @@
def call(self, inputs):
input_is_ragged = isinstance(inputs, tf.RaggedTensor)
- input_is_1d = tf.rank(inputs) == 1
+ input_is_1d = inputs.shape.rank == 1
if input_is_1d:
# If inputs is of rank 1, we manually add the batch axis.
inputs = inputs[tf.newaxis, :]
@@ -137,6 +137,7 @@
# `tf_text.mask_language_model` requires a ragged tensor, so
# convert dense to ragged.
inputs = tf.RaggedTensor.from_tensor(inputs)
+
(tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(
inputs,
item_selector=self._random_selector,
| {"golden_diff": "diff --git a/keras_nlp/layers/preprocessing/mlm_mask_generator.py b/keras_nlp/layers/preprocessing/mlm_mask_generator.py\n--- a/keras_nlp/layers/preprocessing/mlm_mask_generator.py\n+++ b/keras_nlp/layers/preprocessing/mlm_mask_generator.py\n@@ -129,7 +129,7 @@\n \n def call(self, inputs):\n input_is_ragged = isinstance(inputs, tf.RaggedTensor)\n- input_is_1d = tf.rank(inputs) == 1\n+ input_is_1d = inputs.shape.rank == 1\n if input_is_1d:\n # If inputs is of rank 1, we manually add the batch axis.\n inputs = inputs[tf.newaxis, :]\n@@ -137,6 +137,7 @@\n # `tf_text.mask_language_model` requires a ragged tensor, so\n # convert dense to ragged.\n inputs = tf.RaggedTensor.from_tensor(inputs)\n+\n (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(\n inputs,\n item_selector=self._random_selector,\n", "issue": "Error with tf.data and MLMMaskGenerator for dense inputs\nWhen using the MLMMaskGenerator for to map over dense, batched inputs in a tf.data.Dataset, we get the following error...\r\n\r\n`TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'`\r\n\r\nThis colab has a reproduction https://colab.research.google.com/gist/mattdangerw/4596df85105ff6e6731128fc79d16bf3/mlmmaskgenerator-bug.ipynb\r\n\r\ntf.data + dense, batched inputs might be the most common use case for this layer, so this is an important one to fix.\n", "before_files": [{"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tensorflow as tf\nimport tensorflow_text as tf_text\nfrom tensorflow import keras\n\n\nclass MLMMaskGenerator(keras.layers.Layer):\n \"\"\"Layer that applies language model masking.\n\n This layer is useful for preparing inputs for masked languaged modeling\n (MLM) tasks. It follows the masking strategy described in the [original BERT\n paper](https://arxiv.org/abs/1810.04805). Given tokenized text,\n it randomly selects certain number of tokens for masking. Then for each\n selected token, it has a chance (configurable) to be replaced by\n \"mask token\" or random token, or stay unchanged.\n\n Users should use this layer with `tf.data` to generate masks.\n\n Args:\n vocabulary_size: int, the size of the vocabulary.\n mask_selection_rate: float, the probability of a token is selected for\n masking.\n mask_token_id: int. The id of mask token.\n mask_selection_length: int, defaults to None. Maximum number of tokens\n selected for masking in each sequence. If set, the output\n `mask_positions`, `mask_ids` and `mask_weights` will be padded\n to dense tensors of length `mask_selection_length`,\n otherwise the output will be a RaggedTensor.\n unselectable_token_ids: A list of tokens, defaults to [0] (the default\n `padding_token_id`).\n mask_token_rate: float, defaults to 0.8. `mask_token_rate` must be\n between 0 and 1 which indicates how often the mask_token is\n substituted for tokens selected for masking.\n random_token_rate: float, defaults to 0.1. 
`random_token_rate` must be\n between 0 and 1 which indicates how often a random token is\n substituted for tokens selected for masking. Default is 0.1.\n Note: mask_token_rate + random_token_rate <= 1, and for\n (1 - mask_token_rate - random_token_rate), the token will not be\n changed.\n\n Input:\n A 1D integer tensor of shape [sequence_length] or a 2D integer tensor\n of shape [batch_size, sequence_length], or a 2D integer RaggedTensor.\n Represents the sequence to mask.\n\n Returns:\n A Dict with 4 keys:\n tokens: Tensor or RaggedTensor, has the same type and shape of\n input. Sequence after getting masked.\n mask_positions: Tensor, or RaggedTensor if `mask_selection_length`\n is None. The positions of tokens getting masked.\n mask_ids: Tensor, or RaggedTensor if `mask_selection_length` is\n None. The original token ids at masked positions.\n mask_weights: Tensor, or RaggedTensor if `mask_selection_length` is\n None. `mask_weights` has the same shape as `mask_positions` and\n `mask_ids`. Each element in `mask_weights` should be 0 or 1,\n 1 means the corresponding position in `mask_positions` is an\n actual mask, 0 means it is a pad.\n\n Examples:\n\n Basic usage.\n >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \\\n vocabulary_size=10, mask_selection_rate=0.2, mask_token_id=0, \\\n mask_selection_length=5)\n >>> masker(tf.constant([1, 2, 3, 4, 5]))\n\n Ragged Input:\n >>> masker = keras_nlp.layers.preprocessing.MLMMaskGenerator( \\\n vocabulary_size=10, mask_selection_rate=0.5, mask_token_id=0, \\\n mask_selection_length=5)\n >>> masker(tf.ragged.constant([[1, 2], [1, 2, 3, 4]]))\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n mask_selection_rate,\n mask_token_id,\n mask_selection_length=None,\n unselectable_token_ids=[0],\n mask_token_rate=0.8,\n random_token_rate=0.1,\n **kwargs,\n ):\n super().__init__(**kwargs)\n self.vocabulary_size = vocabulary_size\n self.unselectable_token_ids = unselectable_token_ids\n self.mask_selection_rate = mask_selection_rate\n self.mask_selection_length = mask_selection_length\n self.mask_token_rate = mask_token_rate\n self.random_token_rate = random_token_rate\n\n if mask_token_id >= vocabulary_size:\n raise ValueError(\n f\"Mask token id should be in range [0, vocabulary_size - 1], \"\n f\"but received mask_token_id={mask_token_id}.\"\n )\n self.mask_token_id = mask_token_id\n\n max_selections = self.mask_selection_length\n if max_selections is None:\n # Set a large number to remove the `max_selections_per_batch` cap.\n max_selections = 2**31 - 1\n self._random_selector = tf_text.RandomItemSelector(\n max_selections_per_batch=max_selections,\n selection_rate=self.mask_selection_rate,\n unselectable_ids=self.unselectable_token_ids,\n )\n self._mask_values_chooser = tf_text.MaskValuesChooser(\n self.vocabulary_size,\n self.mask_token_id,\n mask_token_rate=self.mask_token_rate,\n random_token_rate=self.random_token_rate,\n )\n\n def call(self, inputs):\n input_is_ragged = isinstance(inputs, tf.RaggedTensor)\n input_is_1d = tf.rank(inputs) == 1\n if input_is_1d:\n # If inputs is of rank 1, we manually add the batch axis.\n inputs = inputs[tf.newaxis, :]\n if not input_is_ragged:\n # `tf_text.mask_language_model` requires a ragged tensor, so\n # convert dense to ragged.\n inputs = tf.RaggedTensor.from_tensor(inputs)\n (tokens, mask_positions, mask_ids,) = tf_text.mask_language_model(\n inputs,\n item_selector=self._random_selector,\n mask_values_chooser=self._mask_values_chooser,\n )\n\n if not input_is_ragged:\n # If we converted 
the input from dense to ragged, convert back.\n tokens = tokens.to_tensor()\n\n mask_weights = tf.ones_like(mask_positions, self.compute_dtype)\n # If mask_selection_length is set, covert to raggeds to dense.\n if self.mask_selection_length:\n target_shape = tf.cast([-1, self.mask_selection_length], tf.int64)\n mask_positions = mask_positions.to_tensor(shape=target_shape)\n mask_ids = mask_ids.to_tensor(shape=target_shape)\n mask_weights = mask_weights.to_tensor(shape=target_shape)\n\n if input_is_1d:\n # If inputs is 1D, we format the output to be 1D as well.\n tokens = tf.squeeze(tokens, axis=0)\n mask_positions = tf.squeeze(mask_positions, axis=0)\n mask_ids = tf.squeeze(mask_ids, axis=0)\n mask_weights = tf.squeeze(mask_weights, axis=0)\n\n output_dict = {\n \"tokens\": tokens,\n \"mask_positions\": mask_positions,\n \"mask_ids\": mask_ids,\n \"mask_weights\": mask_weights,\n }\n return output_dict\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"mask_selection_rate\": self.mask_selection_rate,\n \"mask_selection_length\": self.mask_selection_length,\n \"unselectable_token_ids\": self.unselectable_token_ids,\n \"mask_token_id\": self.mask_token_id,\n \"mask_token_rate\": self.mask_token_rate,\n \"random_token_rate\": self.random_token_rate,\n }\n )\n return config\n", "path": "keras_nlp/layers/preprocessing/mlm_mask_generator.py"}]} | 2,969 | 248 |
gh_patches_debug_5672 | rasdani/github-patches | git_diff | sosreport__sos-471 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[block] Don't use parted human readable output - rhbz #1183770
Changed the parted command to return data in sector units
instead of human-readable form.
Signed-off-by: Shane Bradley [email protected]
</issue>
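For context, a small sketch, not part of the plugin, contrasting the two invocations the issue is about. The device path is a placeholder and the flags assume GNU parted's `unit` subcommand.

```python
# Hypothetical comparison of parted output modes; /dev/sda is a placeholder device.
device = "/dev/sda"
human_readable_cmd = "parted -s %s print" % device       # sizes printed like "500GB"
sector_units_cmd = "parted -s %s unit s print" % device  # sizes printed in sectors
```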
<code>
[start of sos/plugins/block.py]
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; either version 2 of the License, or
4 # (at your option) any later version.
5
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
9 # GNU General Public License for more details.
10
11 # You should have received a copy of the GNU General Public License
12 # along with this program; if not, write to the Free Software
13 # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
14
15 import os
16 from sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin
17
18
19 class Block(Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin):
20 """Block device information
21 """
22
23 plugin_name = 'block'
24 profiles = ('storage', 'hardware')
25
26 def setup(self):
27 self.add_cmd_output([
28 "lsblk",
29 "blkid -c /dev/null",
30 "ls -lanR /dev",
31 "ls -lanR /sys/block"
32 ])
33
34 # legacy location for non-/run distributions
35 self.add_copy_spec([
36 "/etc/blkid.tab",
37 "/run/blkid/blkid.tab",
38 "/proc/partitions",
39 "/proc/diskstats"
40 ])
41
42 if os.path.isdir("/sys/block"):
43 for disk in os.listdir("/sys/block"):
44 if disk in [".", ".."] or disk.startswith("ram"):
45 continue
46 disk_path = os.path.join('/dev/', disk)
47 self.add_cmd_output([
48 "udevadm info -ap /sys/block/%s" % (disk),
49 "parted -s %s print" % (disk_path),
50 "fdisk -l %s" % disk_path
51 ])
52
53 # vim: et ts=4 sw=4
54
[end of sos/plugins/block.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sos/plugins/block.py b/sos/plugins/block.py
--- a/sos/plugins/block.py
+++ b/sos/plugins/block.py
@@ -46,7 +46,7 @@
disk_path = os.path.join('/dev/', disk)
self.add_cmd_output([
"udevadm info -ap /sys/block/%s" % (disk),
- "parted -s %s print" % (disk_path),
+ "parted -s %s unit s print" % (disk_path),
"fdisk -l %s" % disk_path
])
| {"golden_diff": "diff --git a/sos/plugins/block.py b/sos/plugins/block.py\n--- a/sos/plugins/block.py\n+++ b/sos/plugins/block.py\n@@ -46,7 +46,7 @@\n disk_path = os.path.join('/dev/', disk)\n self.add_cmd_output([\n \"udevadm info -ap /sys/block/%s\" % (disk),\n- \"parted -s %s print\" % (disk_path),\n+ \"parted -s %s unit s print\" % (disk_path),\n \"fdisk -l %s\" % disk_path\n ])\n", "issue": "[block] Don't use parted human readable output - rhbz #1183770\nChanged the parted command to return data in sectors units\ninstead of human readable form.\n\nSigned-off-by: Shane Bradley [email protected]\n\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nimport os\nfrom sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin\n\n\nclass Block(Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin):\n \"\"\"Block device information\n \"\"\"\n\n plugin_name = 'block'\n profiles = ('storage', 'hardware')\n\n def setup(self):\n self.add_cmd_output([\n \"lsblk\",\n \"blkid -c /dev/null\",\n \"ls -lanR /dev\",\n \"ls -lanR /sys/block\"\n ])\n\n # legacy location for non-/run distributions\n self.add_copy_spec([\n \"/etc/blkid.tab\",\n \"/run/blkid/blkid.tab\",\n \"/proc/partitions\",\n \"/proc/diskstats\"\n ])\n\n if os.path.isdir(\"/sys/block\"):\n for disk in os.listdir(\"/sys/block\"):\n if disk in [\".\", \"..\"] or disk.startswith(\"ram\"):\n continue\n disk_path = os.path.join('/dev/', disk)\n self.add_cmd_output([\n \"udevadm info -ap /sys/block/%s\" % (disk),\n \"parted -s %s print\" % (disk_path),\n \"fdisk -l %s\" % disk_path\n ])\n\n# vim: et ts=4 sw=4\n", "path": "sos/plugins/block.py"}]} | 1,122 | 128 |
gh_patches_debug_6080 | rasdani/github-patches | git_diff | frappe__frappe-8232 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug(API): TypeError, cannot create document
```bash
curl -X POST https://my.erpnext.com/api/resource/Lead \
-H 'Accept: application/json' \
-H 'Authorization: Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==' \
-H 'Content-Type: application/json' \
-d '{"lead_name": "Jon Doe"}'
```
Returns
```json
{"exc":"[\"Traceback (most recent call last):\\n File \\\"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\\\", line 60, in application\\n response = frappe.api.handle()\\n File \\\"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\\\", line 116, in handle\\n data = json.loads(frappe.local.form_dict.data)\\n File \\\"/usr/lib64/python2.7/json/__init__.py\\\", line 338, in loads\\n return _default_decoder.decode(s)\\n File \\\"/usr/lib64/python2.7/json/decoder.py\\\", line 366, in decode\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\nTypeError: expected string or buffer\\n\"]"}
```
Cleaned up stack trace:
```
Traceback (most recent call last):
File "/home/frappe/frappe-bench/apps/frappe/frappe/app.py", line 60, in application
response = frappe.api.handle()
File "/home/frappe/frappe-bench/apps/frappe/frappe/api.py", line 116, in handle
data = json.loads(frappe.local.form_dict.data)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
```
This seems to apply to any DocType – it's not possible to create documents this way.
@netchampfaris may [this](https://github.com/frappe/frappe/commit/f63ad574e580360996807931f9c9cfa363385c3d#diff-d65e8dff7122e8822cd2009d1ef1a963) be the cause?
### Versions
ERPNext: v12.0.6 (version-12)
Frappe Framework: v12.0.6 (version-12)
</issue>
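For context, the traceback above appears to reduce to `json.loads` receiving `None`: with an `application/json` body, `frappe.local.form_dict.data` is not populated. A minimal sketch of that failure mode, not Frappe code, just an illustration:

```python
import json

payload = None  # stand-in for frappe.local.form_dict.data when the body is raw JSON
try:
    json.loads(payload)
except TypeError as exc:
    # Python 2: "expected string or buffer"
    # Python 3: "the JSON object must be str, bytes or bytearray, not NoneType"
    print(exc)
```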
<code>
[start of frappe/api.py]
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3 from __future__ import unicode_literals
4
5 import json
6 import frappe
7 import frappe.handler
8 import frappe.client
9 from frappe.utils.response import build_response
10 from frappe import _
11 from six.moves.urllib.parse import urlparse, urlencode
12 import base64
13
14 def handle():
15 """
16 Handler for `/api` methods
17
18 ### Examples:
19
20 `/api/method/{methodname}` will call a whitelisted method
21
22 `/api/resource/{doctype}` will query a table
23 examples:
24 - `?fields=["name", "owner"]`
25 - `?filters=[["Task", "name", "like", "%005"]]`
26 - `?limit_start=0`
27 - `?limit_page_length=20`
28
29 `/api/resource/{doctype}/{name}` will point to a resource
30 `GET` will return doclist
31 `POST` will insert
32 `PUT` will update
33 `DELETE` will delete
34
35 `/api/resource/{doctype}/{name}?run_method={method}` will run a whitelisted controller method
36 """
37
38 validate_oauth()
39 validate_auth_via_api_keys()
40
41 parts = frappe.request.path[1:].split("/",3)
42 call = doctype = name = None
43
44 if len(parts) > 1:
45 call = parts[1]
46
47 if len(parts) > 2:
48 doctype = parts[2]
49
50 if len(parts) > 3:
51 name = parts[3]
52
53 if call=="method":
54 frappe.local.form_dict.cmd = doctype
55 return frappe.handler.handle()
56
57 elif call=="resource":
58 if "run_method" in frappe.local.form_dict:
59 method = frappe.local.form_dict.pop("run_method")
60 doc = frappe.get_doc(doctype, name)
61 doc.is_whitelisted(method)
62
63 if frappe.local.request.method=="GET":
64 if not doc.has_permission("read"):
65 frappe.throw(_("Not permitted"), frappe.PermissionError)
66 frappe.local.response.update({"data": doc.run_method(method, **frappe.local.form_dict)})
67
68 if frappe.local.request.method=="POST":
69 if not doc.has_permission("write"):
70 frappe.throw(_("Not permitted"), frappe.PermissionError)
71
72 frappe.local.response.update({"data": doc.run_method(method, **frappe.local.form_dict)})
73 frappe.db.commit()
74
75 else:
76 if name:
77 if frappe.local.request.method=="GET":
78 doc = frappe.get_doc(doctype, name)
79 if not doc.has_permission("read"):
80 raise frappe.PermissionError
81 frappe.local.response.update({"data": doc})
82
83 if frappe.local.request.method=="PUT":
84 data = json.loads(frappe.local.form_dict.data)
85 doc = frappe.get_doc(doctype, name)
86
87 if "flags" in data:
88 del data["flags"]
89
90 # Not checking permissions here because it's checked in doc.save
91 doc.update(data)
92
93 frappe.local.response.update({
94 "data": doc.save().as_dict()
95 })
96 frappe.db.commit()
97
98 if frappe.local.request.method=="DELETE":
99 # Not checking permissions here because it's checked in delete_doc
100 frappe.delete_doc(doctype, name, ignore_missing=False)
101 frappe.local.response.http_status_code = 202
102 frappe.local.response.message = "ok"
103 frappe.db.commit()
104
105
106 elif doctype:
107 if frappe.local.request.method=="GET":
108 if frappe.local.form_dict.get('fields'):
109 frappe.local.form_dict['fields'] = json.loads(frappe.local.form_dict['fields'])
110 frappe.local.form_dict.setdefault('limit_page_length', 20)
111 frappe.local.response.update({
112 "data": frappe.call(frappe.client.get_list,
113 doctype, **frappe.local.form_dict)})
114
115 if frappe.local.request.method=="POST":
116 data = json.loads(frappe.local.form_dict.data)
117 data.update({
118 "doctype": doctype
119 })
120 frappe.local.response.update({
121 "data": frappe.get_doc(data).insert().as_dict()
122 })
123 frappe.db.commit()
124 else:
125 raise frappe.DoesNotExistError
126
127 else:
128 raise frappe.DoesNotExistError
129
130 return build_response("json")
131
132 def validate_oauth():
133 from frappe.oauth import get_url_delimiter
134 form_dict = frappe.local.form_dict
135 authorization_header = frappe.get_request_header("Authorization").split(" ") if frappe.get_request_header("Authorization") else None
136 if authorization_header and authorization_header[0].lower() == "bearer":
137 from frappe.integrations.oauth2 import get_oauth_server
138 token = authorization_header[1]
139 r = frappe.request
140 parsed_url = urlparse(r.url)
141 access_token = { "access_token": token}
142 uri = parsed_url.scheme + "://" + parsed_url.netloc + parsed_url.path + "?" + urlencode(access_token)
143 http_method = r.method
144 body = r.get_data()
145 headers = r.headers
146
147 required_scopes = frappe.db.get_value("OAuth Bearer Token", token, "scopes").split(get_url_delimiter())
148
149 valid, oauthlib_request = get_oauth_server().verify_request(uri, http_method, body, headers, required_scopes)
150
151 if valid:
152 frappe.set_user(frappe.db.get_value("OAuth Bearer Token", token, "user"))
153 frappe.local.form_dict = form_dict
154
155
156 def validate_auth_via_api_keys():
157 """
158 authentication using api key and api secret
159
160 set user
161 """
162 try:
163 authorization_header = frappe.get_request_header("Authorization", None).split(" ") if frappe.get_request_header("Authorization") else None
164 if authorization_header and authorization_header[0] == 'Basic':
165 token = frappe.safe_decode(base64.b64decode(authorization_header[1])).split(":")
166 validate_api_key_secret(token[0], token[1])
167 elif authorization_header and authorization_header[0] == 'token':
168 token = authorization_header[1].split(":")
169 validate_api_key_secret(token[0], token[1])
170 except Exception as e:
171 raise e
172
173 def validate_api_key_secret(api_key, api_secret):
174 user = frappe.db.get_value(
175 doctype="User",
176 filters={"api_key": api_key},
177 fieldname=['name']
178 )
179 form_dict = frappe.local.form_dict
 180 	user_secret = frappe.utils.password.get_decrypted_password("User", user, fieldname='api_secret')
181 if api_secret == user_secret:
182 frappe.set_user(user)
183 frappe.local.form_dict = form_dict
184
[end of frappe/api.py]
</code>
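As a usage note, the list endpoint documented in `handle()`'s docstring can be exercised roughly as follows. The base URL, doctype and credentials are placeholders; the `token` header format matches `validate_auth_via_api_keys` above.

```python
import json
import requests

# Hypothetical query against the /api/resource/{doctype} list endpoint.
resp = requests.get(
    "http://localhost:8000/api/resource/Lead",
    params={"fields": json.dumps(["name", "owner"]), "limit_page_length": 5},
    headers={"Authorization": "token <api_key>:<api_secret>"},
)
print(resp.json())
```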
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/frappe/api.py b/frappe/api.py
--- a/frappe/api.py
+++ b/frappe/api.py
@@ -113,7 +113,10 @@
doctype, **frappe.local.form_dict)})
if frappe.local.request.method=="POST":
- data = json.loads(frappe.local.form_dict.data)
+ if frappe.local.form_dict.data is None:
+ data = json.loads(frappe.local.request.get_data())
+ else:
+ data = json.loads(frappe.local.form_dict.data)
data.update({
"doctype": doctype
})
| {"golden_diff": "diff --git a/frappe/api.py b/frappe/api.py\n--- a/frappe/api.py\n+++ b/frappe/api.py\n@@ -113,7 +113,10 @@\n \t\t\t\t\t\t\tdoctype, **frappe.local.form_dict)})\n \n \t\t\t\tif frappe.local.request.method==\"POST\":\n-\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n+\t\t\t\t\tif frappe.local.form_dict.data is None:\n+\t\t\t\t\t\tdata = json.loads(frappe.local.request.get_data())\n+\t\t\t\t\telse:\n+\t\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n \t\t\t\t\tdata.update({\n \t\t\t\t\t\t\"doctype\": doctype\n \t\t\t\t\t})\n", "issue": "bug(API): TypeError, cannot create document \n\r\n```bash\r\ncurl -X POST https://my.erpnext.com/api/resource/Lead \\\r\n -H 'Accept: application/json' \\\r\n -H 'Authorization: Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==' \\\r\n -H 'Content-Type: application/json' \\\r\n -d '{\"lead_name\": \"Jon Doe\"}'\r\n```\r\n\r\nReturns\r\n\r\n```json\r\n{\"exc\":\"[\\\"Traceback (most recent call last):\\\\n File \\\\\\\"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\\\\\\\", line 60, in application\\\\n response = frappe.api.handle()\\\\n File \\\\\\\"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\\\\\\\", line 116, in handle\\\\n data = json.loads(frappe.local.form_dict.data)\\\\n File \\\\\\\"/usr/lib64/python2.7/json/__init__.py\\\\\\\", line 338, in loads\\\\n return _default_decoder.decode(s)\\\\n File \\\\\\\"/usr/lib64/python2.7/json/decoder.py\\\\\\\", line 366, in decode\\\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\\\nTypeError: expected string or buffer\\\\n\\\"]\"}\r\n```\r\n\r\nCleaned up stack trace:\r\n\r\n```\r\nTraceback (most recent call last):\r\nFile \"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\", line 60, in application\r\n\tresponse = frappe.api.handle()\r\nFile \"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\", line 116, in handle\r\n\tdata = json.loads(frappe.local.form_dict.data)\r\nFile \"/usr/lib64/python2.7/json/__init__.py\", line 338, in loads\r\n\treturn _default_decoder.decode(s)\r\nFile \"/usr/lib64/python2.7/json/decoder.py\", line 366, in decode\r\n\tobj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\nTypeError: expected string or buffer\r\n```\r\n\r\nThis seems to apply to any DocType \u2013 it's not possible to create documents this way.\r\n\r\n@netchampfaris may [this](https://github.com/frappe/frappe/commit/f63ad574e580360996807931f9c9cfa363385c3d#diff-d65e8dff7122e8822cd2009d1ef1a963) be the cause?\r\n\r\n### Versions\r\nERPNext: v12.0.6 (version-12)\r\nFrappe Framework: v12.0.6 (version-12)\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. 
See license.txt\nfrom __future__ import unicode_literals\n\nimport json\nimport frappe\nimport frappe.handler\nimport frappe.client\nfrom frappe.utils.response import build_response\nfrom frappe import _\nfrom six.moves.urllib.parse import urlparse, urlencode\nimport base64\n\ndef handle():\n\t\"\"\"\n\tHandler for `/api` methods\n\n\t### Examples:\n\n\t`/api/method/{methodname}` will call a whitelisted method\n\n\t`/api/resource/{doctype}` will query a table\n\t\texamples:\n\t\t- `?fields=[\"name\", \"owner\"]`\n\t\t- `?filters=[[\"Task\", \"name\", \"like\", \"%005\"]]`\n\t\t- `?limit_start=0`\n\t\t- `?limit_page_length=20`\n\n\t`/api/resource/{doctype}/{name}` will point to a resource\n\t\t`GET` will return doclist\n\t\t`POST` will insert\n\t\t`PUT` will update\n\t\t`DELETE` will delete\n\n\t`/api/resource/{doctype}/{name}?run_method={method}` will run a whitelisted controller method\n\t\"\"\"\n\n\tvalidate_oauth()\n\tvalidate_auth_via_api_keys()\n\n\tparts = frappe.request.path[1:].split(\"/\",3)\n\tcall = doctype = name = None\n\n\tif len(parts) > 1:\n\t\tcall = parts[1]\n\n\tif len(parts) > 2:\n\t\tdoctype = parts[2]\n\n\tif len(parts) > 3:\n\t\tname = parts[3]\n\n\tif call==\"method\":\n\t\tfrappe.local.form_dict.cmd = doctype\n\t\treturn frappe.handler.handle()\n\n\telif call==\"resource\":\n\t\tif \"run_method\" in frappe.local.form_dict:\n\t\t\tmethod = frappe.local.form_dict.pop(\"run_method\")\n\t\t\tdoc = frappe.get_doc(doctype, name)\n\t\t\tdoc.is_whitelisted(method)\n\n\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\tif not doc.has_permission(\"read\"):\n\t\t\t\t\tfrappe.throw(_(\"Not permitted\"), frappe.PermissionError)\n\t\t\t\tfrappe.local.response.update({\"data\": doc.run_method(method, **frappe.local.form_dict)})\n\n\t\t\tif frappe.local.request.method==\"POST\":\n\t\t\t\tif not doc.has_permission(\"write\"):\n\t\t\t\t\tfrappe.throw(_(\"Not permitted\"), frappe.PermissionError)\n\n\t\t\t\tfrappe.local.response.update({\"data\": doc.run_method(method, **frappe.local.form_dict)})\n\t\t\t\tfrappe.db.commit()\n\n\t\telse:\n\t\t\tif name:\n\t\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\t\tdoc = frappe.get_doc(doctype, name)\n\t\t\t\t\tif not doc.has_permission(\"read\"):\n\t\t\t\t\t\traise frappe.PermissionError\n\t\t\t\t\tfrappe.local.response.update({\"data\": doc})\n\n\t\t\t\tif frappe.local.request.method==\"PUT\":\n\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n\t\t\t\t\tdoc = frappe.get_doc(doctype, name)\n\n\t\t\t\t\tif \"flags\" in data:\n\t\t\t\t\t\tdel data[\"flags\"]\n\n\t\t\t\t\t# Not checking permissions here because it's checked in doc.save\n\t\t\t\t\tdoc.update(data)\n\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": doc.save().as_dict()\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.db.commit()\n\n\t\t\t\tif frappe.local.request.method==\"DELETE\":\n\t\t\t\t\t# Not checking permissions here because it's checked in delete_doc\n\t\t\t\t\tfrappe.delete_doc(doctype, name, ignore_missing=False)\n\t\t\t\t\tfrappe.local.response.http_status_code = 202\n\t\t\t\t\tfrappe.local.response.message = \"ok\"\n\t\t\t\t\tfrappe.db.commit()\n\n\n\t\t\telif doctype:\n\t\t\t\tif frappe.local.request.method==\"GET\":\n\t\t\t\t\tif frappe.local.form_dict.get('fields'):\n\t\t\t\t\t\tfrappe.local.form_dict['fields'] = json.loads(frappe.local.form_dict['fields'])\n\t\t\t\t\tfrappe.local.form_dict.setdefault('limit_page_length', 20)\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": 
frappe.call(frappe.client.get_list,\n\t\t\t\t\t\t\tdoctype, **frappe.local.form_dict)})\n\n\t\t\t\tif frappe.local.request.method==\"POST\":\n\t\t\t\t\tdata = json.loads(frappe.local.form_dict.data)\n\t\t\t\t\tdata.update({\n\t\t\t\t\t\t\"doctype\": doctype\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.local.response.update({\n\t\t\t\t\t\t\"data\": frappe.get_doc(data).insert().as_dict()\n\t\t\t\t\t})\n\t\t\t\t\tfrappe.db.commit()\n\t\t\telse:\n\t\t\t\traise frappe.DoesNotExistError\n\n\telse:\n\t\traise frappe.DoesNotExistError\n\n\treturn build_response(\"json\")\n\ndef validate_oauth():\n\tfrom frappe.oauth import get_url_delimiter\n\tform_dict = frappe.local.form_dict\n\tauthorization_header = frappe.get_request_header(\"Authorization\").split(\" \") if frappe.get_request_header(\"Authorization\") else None\n\tif authorization_header and authorization_header[0].lower() == \"bearer\":\n\t\tfrom frappe.integrations.oauth2 import get_oauth_server\n\t\ttoken = authorization_header[1]\n\t\tr = frappe.request\n\t\tparsed_url = urlparse(r.url)\n\t\taccess_token = { \"access_token\": token}\n\t\turi = parsed_url.scheme + \"://\" + parsed_url.netloc + parsed_url.path + \"?\" + urlencode(access_token)\n\t\thttp_method = r.method\n\t\tbody = r.get_data()\n\t\theaders = r.headers\n\n\t\trequired_scopes = frappe.db.get_value(\"OAuth Bearer Token\", token, \"scopes\").split(get_url_delimiter())\n\n\t\tvalid, oauthlib_request = get_oauth_server().verify_request(uri, http_method, body, headers, required_scopes)\n\n\t\tif valid:\n\t\t\tfrappe.set_user(frappe.db.get_value(\"OAuth Bearer Token\", token, \"user\"))\n\t\t\tfrappe.local.form_dict = form_dict\n\n\ndef validate_auth_via_api_keys():\n\t\"\"\"\n\tauthentication using api key and api secret\n\n\tset user\n\t\"\"\"\n\ttry:\n\t\tauthorization_header = frappe.get_request_header(\"Authorization\", None).split(\" \") if frappe.get_request_header(\"Authorization\") else None\n\t\tif authorization_header and authorization_header[0] == 'Basic':\n\t\t\ttoken = frappe.safe_decode(base64.b64decode(authorization_header[1])).split(\":\")\n\t\t\tvalidate_api_key_secret(token[0], token[1])\n\t\telif authorization_header and authorization_header[0] == 'token':\n\t\t\ttoken = authorization_header[1].split(\":\")\n\t\t\tvalidate_api_key_secret(token[0], token[1])\n\texcept Exception as e:\n\t\traise e\n\ndef validate_api_key_secret(api_key, api_secret):\n\tuser = frappe.db.get_value(\n\t\tdoctype=\"User\",\n\t\tfilters={\"api_key\": api_key},\n\t\tfieldname=['name']\n\t)\n\tform_dict = frappe.local.form_dict\n\tuser_secret = frappe.utils.password.get_decrypted_password (\"User\", user, fieldname='api_secret')\n\tif api_secret == user_secret:\n\t\tfrappe.set_user(user)\n\t\tfrappe.local.form_dict = form_dict\n", "path": "frappe/api.py"}]} | 3,125 | 138 |
gh_patches_debug_50224 | rasdani/github-patches | git_diff | pex-tool__pex-1692 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.74
On the docket:
+ [x] Add support for locking VCS requirements. (#1687)
+ [x] Fix `--lock` for multiplatform via sdists. (#1689)
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.73"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.73"
+__version__ = "2.1.74"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.73\"\n+__version__ = \"2.1.74\"\n", "issue": "Release 2.1.74\nOn the docket:\r\n+ [x] Add support for locking VCS requirements. (#1687)\r\n+ [x] Fix `--lock` for multiplatform via sdists. (#1689)\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.73\"\n", "path": "pex/version.py"}]} | 638 | 96 |
gh_patches_debug_33963 | rasdani/github-patches | git_diff | learningequality__kolibri-10461 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Removing the last user from an "on my own" facility does not remove the facility from the device
## Observed behavior
Reported by @rtibbles in the alpha9 bug bash
When the last user is migrated out of an "on my own" facility, that facility is not removed from the device.
## Expected behavior
The facility should be removed from the device
</issue>
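For context, a minimal sketch of the kind of check the expected behavior implies: once the last member has been migrated out, the now-empty facility needs to be detected before it can be cleaned up. The helper below is hypothetical; only `FacilityUser` and its `facility` relation come from the code in this repository.

```python
from kolibri.core.auth.models import FacilityUser

def facility_is_empty(facility):
    # Hypothetical helper: true when no FacilityUser rows reference this facility.
    return not FacilityUser.objects.filter(facility=facility).exists()
```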
<code>
[start of kolibri/plugins/user_profile/tasks.py]
1 import requests
2 from django.core.management import call_command
3 from morango.errors import MorangoError
4 from rest_framework import serializers
5 from rest_framework.exceptions import AuthenticationFailed
6 from rest_framework.status import HTTP_201_CREATED
7
8 from .utils import TokenGenerator
9 from kolibri.core.auth.constants import role_kinds
10 from kolibri.core.auth.models import FacilityUser
11 from kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator
12 from kolibri.core.auth.utils.migrate import merge_users
13 from kolibri.core.device.models import DevicePermissions
14 from kolibri.core.device.utils import set_device_settings
15 from kolibri.core.tasks.decorators import register_task
16 from kolibri.core.tasks.job import JobStatus
17 from kolibri.core.tasks.job import Priority
18 from kolibri.core.tasks.permissions import IsFacilityAdmin
19 from kolibri.core.tasks.permissions import IsSelf
20 from kolibri.core.tasks.permissions import IsSuperAdmin
21 from kolibri.core.tasks.permissions import PermissionsFromAny
22 from kolibri.core.tasks.utils import get_current_job
23 from kolibri.core.utils.urls import reverse_remote
24 from kolibri.utils.translation import ugettext as _
25
26
27 class MergeUserValidator(PeerImportSingleSyncJobValidator):
28 local_user_id = serializers.PrimaryKeyRelatedField(
29 queryset=FacilityUser.objects.all()
30 )
31 new_superuser_id = serializers.PrimaryKeyRelatedField(
32 queryset=FacilityUser.objects.all(), required=False
33 )
34 facility_name = serializers.CharField(default="")
35
36 def validate(self, data):
37 try:
38 job_data = super(MergeUserValidator, self).validate(data)
39 except AuthenticationFailed:
40 self.create_remote_user(data)
41 job_data = super(MergeUserValidator, self).validate(data)
42
43 job_data["kwargs"]["local_user_id"] = data["local_user_id"].id
44 job_data["extra_metadata"].update(user_fullname=data["local_user_id"].full_name)
45 if data.get("new_superuser_id"):
46 job_data["kwargs"]["new_superuser_id"] = data["new_superuser_id"].id
47
48 return job_data
49
50 def create_remote_user(self, data):
51 baseurl = data["baseurl"]
52 facility = data["facility"]
53 user_data = {
54 "username": data["username"],
55 "password": data["password"],
56 "facility": facility,
57 }
58 for f in ["gender", "birth_year", "id_number", "full_name"]:
59 if getattr(data["local_user_id"], f, "NOT_SPECIFIED") != "NOT_SPECIFIED":
60 user_data[f] = getattr(data["local_user_id"], f, None)
61 public_signup_url = reverse_remote(baseurl, "kolibri:core:publicsignup-list")
62 response = requests.post(public_signup_url, data=user_data)
63 if response.status_code != HTTP_201_CREATED:
64 raise serializers.ValidationError(response.json()[0]["id"])
65
66
67 def status_fn(job):
68 # Translators: A notification title shown to users when their learner account is joining a new learning facility.
69 account_transfer_in_progress = _("Account transfer in progress")
70 # Translators: Notification text shown to users when their learner account is joining a new learning facility.
71 notification_text = _(
72 "Moving {learner_name} to learning facility {facility_name}"
73 ).format(
74 learner_name=job.extra_metadata["user_fullname"],
75 facility_name=job.extra_metadata["facility_name"],
76 )
77 return JobStatus(account_transfer_in_progress, notification_text)
78
79
80 @register_task(
81 queue="soud",
82 validator=MergeUserValidator,
83 priority=Priority.HIGH,
84 cancellable=False,
85 track_progress=True,
86 permission_classes=[
87 PermissionsFromAny(IsSelf(), IsSuperAdmin(), IsFacilityAdmin())
88 ],
89 status_fn=status_fn,
90 )
91 def mergeuser(command, **kwargs):
92 """
93 This is an example of the POST payload to create this task:
94 {
95 "type": "kolibri.plugins.user_profile.tasks.mergeuser",
96 "baseurl": "http://192.168.0.201:80/",
97 "facility": "41d0e8bb1600347f17ab3d9172fff87a",
98 "username": "uno",
99 "local_user_id": "05685392311d1d259fe01c65c7a6c28e"
100 }
 101     where baseurl, facility and username are all parameters of the remote server.
 102     If the remote server requires a password to authenticate the user,
 103     a "password" parameter must be added; otherwise it's not needed.
104
105 If the username/password does not exist in the remote server,
106 this task will try to create the user.
107 """
108
109 local_user_id = kwargs.pop("local_user_id")
110 local_user = FacilityUser.objects.get(id=local_user_id)
111 job = get_current_job()
112
113 # Sync with the server to get the remote user:
114 kwargs["no_push"] = True
115 try:
116 call_command(command, **kwargs)
117 except MorangoError:
118 # error syncing with the server, probably a networking issue
119 raise
120
121 remote_user = FacilityUser.objects.get(id=kwargs["user"])
122 merge_users(local_user, remote_user)
123 set_device_settings(subset_of_users_device=True)
124
125 # Resync with the server to update the merged records
126 del kwargs["no_push"]
127
128 try:
129 call_command(command, **kwargs)
130 except MorangoError:
131 # error syncing with the server, probably a networking issue
132 # syncing will happen later in scheduled syncs
133 from kolibri.core.auth.tasks import begin_request_soud_sync
134
135 begin_request_soud_sync(kwargs["baseurl"], remote_user.id)
136
137 new_superuser_id = kwargs.get("new_superuser_id")
138 if new_superuser_id:
139 new_superuser = FacilityUser.objects.get(id=new_superuser_id)
140 # make the user a new super user for this device:
141 new_superuser.facility.add_role(new_superuser, role_kinds.ADMIN)
142 DevicePermissions.objects.create(
143 user=new_superuser, is_superuser=True, can_manage_content=True
144 )
145
146 # create token to validate user in the new facility
147 # after it's deleted in the current facility:
148 remote_user_pk = job.kwargs["user"]
149 remote_user = FacilityUser.objects.get(pk=remote_user_pk)
150 token = TokenGenerator().make_token(remote_user)
151 job.extra_metadata["token"] = token
152 job.extra_metadata["remote_user_pk"] = remote_user_pk
153 job.save_meta()
154 job.update_progress(1.0, 1.0)
155 local_user.delete()
156
[end of kolibri/plugins/user_profile/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/plugins/user_profile/tasks.py b/kolibri/plugins/user_profile/tasks.py
--- a/kolibri/plugins/user_profile/tasks.py
+++ b/kolibri/plugins/user_profile/tasks.py
@@ -9,6 +9,7 @@
from kolibri.core.auth.constants import role_kinds
from kolibri.core.auth.models import FacilityUser
from kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator
+from kolibri.core.auth.utils.delete import delete_facility
from kolibri.core.auth.utils.migrate import merge_users
from kolibri.core.device.models import DevicePermissions
from kolibri.core.device.utils import set_device_settings
@@ -32,6 +33,7 @@
queryset=FacilityUser.objects.all(), required=False
)
facility_name = serializers.CharField(default="")
+ set_as_super_user = serializers.BooleanField(required=False)
def validate(self, data):
try:
@@ -44,6 +46,8 @@
job_data["extra_metadata"].update(user_fullname=data["local_user_id"].full_name)
if data.get("new_superuser_id"):
job_data["kwargs"]["new_superuser_id"] = data["new_superuser_id"].id
+ if data.get("set_as_super_user"):
+ job_data["kwargs"]["set_as_super_user"] = data["set_as_super_user"]
return job_data
@@ -152,4 +156,14 @@
job.extra_metadata["remote_user_pk"] = remote_user_pk
job.save_meta()
job.update_progress(1.0, 1.0)
- local_user.delete()
+
+ # check if current user should be set as superuser:
+ set_as_super_user = kwargs.get("set_as_super_user")
+ if set_as_super_user:
+ DevicePermissions.objects.create(
+ user=remote_user, is_superuser=True, can_manage_content=True
+ )
+ delete_facility(local_user.facility)
+ set_device_settings(default_facility=remote_user.facility)
+ else:
+ local_user.delete()
| {"golden_diff": "diff --git a/kolibri/plugins/user_profile/tasks.py b/kolibri/plugins/user_profile/tasks.py\n--- a/kolibri/plugins/user_profile/tasks.py\n+++ b/kolibri/plugins/user_profile/tasks.py\n@@ -9,6 +9,7 @@\n from kolibri.core.auth.constants import role_kinds\n from kolibri.core.auth.models import FacilityUser\n from kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator\n+from kolibri.core.auth.utils.delete import delete_facility\n from kolibri.core.auth.utils.migrate import merge_users\n from kolibri.core.device.models import DevicePermissions\n from kolibri.core.device.utils import set_device_settings\n@@ -32,6 +33,7 @@\n queryset=FacilityUser.objects.all(), required=False\n )\n facility_name = serializers.CharField(default=\"\")\n+ set_as_super_user = serializers.BooleanField(required=False)\n \n def validate(self, data):\n try:\n@@ -44,6 +46,8 @@\n job_data[\"extra_metadata\"].update(user_fullname=data[\"local_user_id\"].full_name)\n if data.get(\"new_superuser_id\"):\n job_data[\"kwargs\"][\"new_superuser_id\"] = data[\"new_superuser_id\"].id\n+ if data.get(\"set_as_super_user\"):\n+ job_data[\"kwargs\"][\"set_as_super_user\"] = data[\"set_as_super_user\"]\n \n return job_data\n \n@@ -152,4 +156,14 @@\n job.extra_metadata[\"remote_user_pk\"] = remote_user_pk\n job.save_meta()\n job.update_progress(1.0, 1.0)\n- local_user.delete()\n+\n+ # check if current user should be set as superuser:\n+ set_as_super_user = kwargs.get(\"set_as_super_user\")\n+ if set_as_super_user:\n+ DevicePermissions.objects.create(\n+ user=remote_user, is_superuser=True, can_manage_content=True\n+ )\n+ delete_facility(local_user.facility)\n+ set_device_settings(default_facility=remote_user.facility)\n+ else:\n+ local_user.delete()\n", "issue": "Removing last user from \"on my own facility\" does not remove the facility from the device\n\r\n## Observed behavior\r\nReported by @rtibbles in the alpha9 bug bash \r\n\r\nWhen the last user is migrated out of an on my own facility, that facility is not removed from the device\r\n\r\n## Expected behavior\r\nThe facility should be removed from the device\r\n\n", "before_files": [{"content": "import requests\nfrom django.core.management import call_command\nfrom morango.errors import MorangoError\nfrom rest_framework import serializers\nfrom rest_framework.exceptions import AuthenticationFailed\nfrom rest_framework.status import HTTP_201_CREATED\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.constants import role_kinds\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.tasks import PeerImportSingleSyncJobValidator\nfrom kolibri.core.auth.utils.migrate import merge_users\nfrom kolibri.core.device.models import DevicePermissions\nfrom kolibri.core.device.utils import set_device_settings\nfrom kolibri.core.tasks.decorators import register_task\nfrom kolibri.core.tasks.job import JobStatus\nfrom kolibri.core.tasks.job import Priority\nfrom kolibri.core.tasks.permissions import IsFacilityAdmin\nfrom kolibri.core.tasks.permissions import IsSelf\nfrom kolibri.core.tasks.permissions import IsSuperAdmin\nfrom kolibri.core.tasks.permissions import PermissionsFromAny\nfrom kolibri.core.tasks.utils import get_current_job\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.translation import ugettext as _\n\n\nclass MergeUserValidator(PeerImportSingleSyncJobValidator):\n local_user_id = serializers.PrimaryKeyRelatedField(\n queryset=FacilityUser.objects.all()\n )\n new_superuser_id = 
serializers.PrimaryKeyRelatedField(\n queryset=FacilityUser.objects.all(), required=False\n )\n facility_name = serializers.CharField(default=\"\")\n\n def validate(self, data):\n try:\n job_data = super(MergeUserValidator, self).validate(data)\n except AuthenticationFailed:\n self.create_remote_user(data)\n job_data = super(MergeUserValidator, self).validate(data)\n\n job_data[\"kwargs\"][\"local_user_id\"] = data[\"local_user_id\"].id\n job_data[\"extra_metadata\"].update(user_fullname=data[\"local_user_id\"].full_name)\n if data.get(\"new_superuser_id\"):\n job_data[\"kwargs\"][\"new_superuser_id\"] = data[\"new_superuser_id\"].id\n\n return job_data\n\n def create_remote_user(self, data):\n baseurl = data[\"baseurl\"]\n facility = data[\"facility\"]\n user_data = {\n \"username\": data[\"username\"],\n \"password\": data[\"password\"],\n \"facility\": facility,\n }\n for f in [\"gender\", \"birth_year\", \"id_number\", \"full_name\"]:\n if getattr(data[\"local_user_id\"], f, \"NOT_SPECIFIED\") != \"NOT_SPECIFIED\":\n user_data[f] = getattr(data[\"local_user_id\"], f, None)\n public_signup_url = reverse_remote(baseurl, \"kolibri:core:publicsignup-list\")\n response = requests.post(public_signup_url, data=user_data)\n if response.status_code != HTTP_201_CREATED:\n raise serializers.ValidationError(response.json()[0][\"id\"])\n\n\ndef status_fn(job):\n # Translators: A notification title shown to users when their learner account is joining a new learning facility.\n account_transfer_in_progress = _(\"Account transfer in progress\")\n # Translators: Notification text shown to users when their learner account is joining a new learning facility.\n notification_text = _(\n \"Moving {learner_name} to learning facility {facility_name}\"\n ).format(\n learner_name=job.extra_metadata[\"user_fullname\"],\n facility_name=job.extra_metadata[\"facility_name\"],\n )\n return JobStatus(account_transfer_in_progress, notification_text)\n\n\n@register_task(\n queue=\"soud\",\n validator=MergeUserValidator,\n priority=Priority.HIGH,\n cancellable=False,\n track_progress=True,\n permission_classes=[\n PermissionsFromAny(IsSelf(), IsSuperAdmin(), IsFacilityAdmin())\n ],\n status_fn=status_fn,\n)\ndef mergeuser(command, **kwargs):\n \"\"\"\n This is an example of the POST payload to create this task:\n {\n \"type\": \"kolibri.plugins.user_profile.tasks.mergeuser\",\n \"baseurl\": \"http://192.168.0.201:80/\",\n \"facility\": \"41d0e8bb1600347f17ab3d9172fff87a\",\n \"username\": \"uno\",\n \"local_user_id\": \"05685392311d1d259fe01c65c7a6c28e\"\n }\n being baseurl, facility and username all parameters of the remote server.\n If the remote server requires password to authenticate user,\n a \"password\" parameter must be added, otherwise it's not needed.\n\n If the username/password does not exist in the remote server,\n this task will try to create the user.\n \"\"\"\n\n local_user_id = kwargs.pop(\"local_user_id\")\n local_user = FacilityUser.objects.get(id=local_user_id)\n job = get_current_job()\n\n # Sync with the server to get the remote user:\n kwargs[\"no_push\"] = True\n try:\n call_command(command, **kwargs)\n except MorangoError:\n # error syncing with the server, probably a networking issue\n raise\n\n remote_user = FacilityUser.objects.get(id=kwargs[\"user\"])\n merge_users(local_user, remote_user)\n set_device_settings(subset_of_users_device=True)\n\n # Resync with the server to update the merged records\n del kwargs[\"no_push\"]\n\n try:\n call_command(command, **kwargs)\n except MorangoError:\n # 
error syncing with the server, probably a networking issue\n # syncing will happen later in scheduled syncs\n from kolibri.core.auth.tasks import begin_request_soud_sync\n\n begin_request_soud_sync(kwargs[\"baseurl\"], remote_user.id)\n\n new_superuser_id = kwargs.get(\"new_superuser_id\")\n if new_superuser_id:\n new_superuser = FacilityUser.objects.get(id=new_superuser_id)\n # make the user a new super user for this device:\n new_superuser.facility.add_role(new_superuser, role_kinds.ADMIN)\n DevicePermissions.objects.create(\n user=new_superuser, is_superuser=True, can_manage_content=True\n )\n\n # create token to validate user in the new facility\n # after it's deleted in the current facility:\n remote_user_pk = job.kwargs[\"user\"]\n remote_user = FacilityUser.objects.get(pk=remote_user_pk)\n token = TokenGenerator().make_token(remote_user)\n job.extra_metadata[\"token\"] = token\n job.extra_metadata[\"remote_user_pk\"] = remote_user_pk\n job.save_meta()\n job.update_progress(1.0, 1.0)\n local_user.delete()\n", "path": "kolibri/plugins/user_profile/tasks.py"}]} | 2,376 | 441 |
gh_patches_debug_19495 | rasdani/github-patches | git_diff | Pyomo__pyomo-1273 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyNumero support on Windows
We need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.
</issue>
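For context, one Windows-specific wrinkle is DLL naming: `ctypes.util.find_library` searches for the exact name it is given (plus `.dll`), while the libraries bundled with PyNumero are prefixed with `lib`, as the `lib{}.dll` pattern in the loader below shows. A small illustration; the library names are placeholders and results depend on what is actually on the search path:

```python
from ctypes.util import find_library

# Hypothetical lookups on Windows; only a name matching the DLL file resolves.
print(find_library("pynumero_ASL"))     # None when the file on disk is libpynumero_ASL.dll
print(find_library("libpynumero_ASL"))  # may resolve if that DLL is on the search path
```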
<code>
[start of pyomo/contrib/pynumero/extensions/utils.py]
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10 from ctypes.util import find_library
11 import sys
12 import os
13
14
15 def find_pynumero_library(library_name):
16
17 asl_path = find_library(library_name)
18 if asl_path is not None:
19 return asl_path
20 else:
21 # try looking into extensions directory now
22 file_path = os.path.abspath(__file__)
23 dir_path = os.path.dirname(file_path)
24
25 if os.name in ['nt', 'dos']:
26 libname = 'lib/Windows/lib{}.dll'.format(library_name)
27 elif sys.platform in ['darwin']:
28 libname = 'lib/Darwin/lib{}.dylib'.format(library_name)
29 else:
30 libname = 'lib/Linux/lib{}.so'.format(library_name)
31
32 asl_lib_path = os.path.join(dir_path, libname)
33
34 if os.path.exists(asl_lib_path):
35 return asl_lib_path
36 return None
37
38
39 def found_pynumero_libraries():
40
41 p1 = find_pynumero_library('pynumero_ASL')
42 p2 = find_pynumero_library('pynumero_SPARSE')
43
44 if p1 is not None and p2 is not None:
45 return True
46 return False
47
[end of pyomo/contrib/pynumero/extensions/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyomo/contrib/pynumero/extensions/utils.py b/pyomo/contrib/pynumero/extensions/utils.py
--- a/pyomo/contrib/pynumero/extensions/utils.py
+++ b/pyomo/contrib/pynumero/extensions/utils.py
@@ -14,9 +14,14 @@
def find_pynumero_library(library_name):
- asl_path = find_library(library_name)
- if asl_path is not None:
- return asl_path
+ lib_path = find_library(library_name)
+ if lib_path is not None:
+ return lib_path
+
+ # On windows the library is prefixed with 'lib'
+ lib_path = find_library('lib'+library_name)
+ if lib_path is not None:
+ return lib_path
else:
# try looking into extensions directory now
file_path = os.path.abspath(__file__)
@@ -29,10 +34,10 @@
else:
libname = 'lib/Linux/lib{}.so'.format(library_name)
- asl_lib_path = os.path.join(dir_path, libname)
+ lib_path = os.path.join(dir_path, libname)
- if os.path.exists(asl_lib_path):
- return asl_lib_path
+ if os.path.exists(lib_path):
+ return lib_path
return None
| {"golden_diff": "diff --git a/pyomo/contrib/pynumero/extensions/utils.py b/pyomo/contrib/pynumero/extensions/utils.py\n--- a/pyomo/contrib/pynumero/extensions/utils.py\n+++ b/pyomo/contrib/pynumero/extensions/utils.py\n@@ -14,9 +14,14 @@\n \n def find_pynumero_library(library_name):\n \n- asl_path = find_library(library_name)\n- if asl_path is not None:\n- return asl_path\n+ lib_path = find_library(library_name)\n+ if lib_path is not None:\n+ return lib_path\n+\n+ # On windows the library is prefixed with 'lib'\n+ lib_path = find_library('lib'+library_name)\n+ if lib_path is not None:\n+ return lib_path\n else:\n # try looking into extensions directory now\n file_path = os.path.abspath(__file__)\n@@ -29,10 +34,10 @@\n else:\n libname = 'lib/Linux/lib{}.so'.format(library_name)\n \n- asl_lib_path = os.path.join(dir_path, libname)\n+ lib_path = os.path.join(dir_path, libname)\n \n- if os.path.exists(asl_lib_path):\n- return asl_lib_path\n+ if os.path.exists(lib_path):\n+ return lib_path\n return None\n", "issue": "PyNumero support on Windows\nWe need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.\nPyNumero support on Windows\nWe need to make sure PyNumero installs/runs on Windows and make sure any extra installation steps are well documented. Some discussion on this was started in #1253. When this is resolved we also need to make sure to enable the tests currently being skipped on Windows.\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\nfrom ctypes.util import find_library\nimport sys\nimport os\n\n\ndef find_pynumero_library(library_name):\n\n asl_path = find_library(library_name)\n if asl_path is not None:\n return asl_path\n else:\n # try looking into extensions directory now\n file_path = os.path.abspath(__file__)\n dir_path = os.path.dirname(file_path)\n\n if os.name in ['nt', 'dos']:\n libname = 'lib/Windows/lib{}.dll'.format(library_name)\n elif sys.platform in ['darwin']:\n libname = 'lib/Darwin/lib{}.dylib'.format(library_name)\n else:\n libname = 'lib/Linux/lib{}.so'.format(library_name)\n\n asl_lib_path = os.path.join(dir_path, libname)\n\n if os.path.exists(asl_lib_path):\n return asl_lib_path\n return None\n\n\ndef found_pynumero_libraries():\n\n p1 = find_pynumero_library('pynumero_ASL')\n p2 = find_pynumero_library('pynumero_SPARSE')\n\n if p1 is not None and p2 is not None:\n return True\n return False\n", "path": "pyomo/contrib/pynumero/extensions/utils.py"}]} | 1,124 | 300 |
gh_patches_debug_3268 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1158 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AU battery returning type error
```
fetch_production("AUS-SA") ->
Traceback (most recent call last):
File "AU.py", line 558, in <module>
print(fetch_production('AUS-SA'))
File "AU.py", line 422, in fetch_production
data['storage']['battery'] = AU_battery.fetch_SA_battery()
File "/home/chris/electricitymap/parsers/lib/AU_battery.py", line 30, in fetch_SA_battery
latest = json.loads(data[-1])
File "/usr/lib/python3.5/json/__init__.py", line 312, in loads
s.__class__.__name__))
TypeError: the JSON object must be str, not 'bytes'
```
</issue>
<code>
[start of parsers/lib/AU_battery.py]
1 #!/usr/bin/env python3
2
3 """Parser for South Australia's 129MWh battery built by Tesla."""
4 import arrow
5 import json
6 import requests
7
8 # nemlog_url gets generation status in 5 min intervals.
9
10
11 def fetch_SA_battery(session=None):
12 """
13 Makes a request to the nemlog api for South Australia battery data.
14 Returns a float or None.
15 """
16
17 today = arrow.now('Australia/Adelaide')
18 current = today.format('YYYYMMDD')
19 old = today.shift(days=-2).format('YYYYMMDD')
20 nemlog_url = 'http://nemlog.com.au/api/unit/HPRL1/{}/{}/json'.format(old, current)
21
22 s = session or requests.Session()
23 req = s.get(nemlog_url)
24
25 data = []
26 for line in req.iter_lines():
27 data.append(line)
28
29 try:
30 latest = json.loads(data[-1])
31 except IndexError:
32 # No data available.
33 return None
34
35 state = float(latest["SCADAVALUE"])
36
37 # Source classifies charge/discharge opposite to EM.
38 battery_status = -1 * state
39
40 return battery_status
41
42
43 if __name__ == '__main__':
44 print('fetch_SA_battery() ->')
45 print(fetch_SA_battery())
46
[end of parsers/lib/AU_battery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/lib/AU_battery.py b/parsers/lib/AU_battery.py
--- a/parsers/lib/AU_battery.py
+++ b/parsers/lib/AU_battery.py
@@ -21,11 +21,9 @@
s = session or requests.Session()
req = s.get(nemlog_url)
-
data = []
- for line in req.iter_lines():
+ for line in req.iter_lines(decode_unicode=True):
data.append(line)
-
try:
latest = json.loads(data[-1])
except IndexError:
| {"golden_diff": "diff --git a/parsers/lib/AU_battery.py b/parsers/lib/AU_battery.py\n--- a/parsers/lib/AU_battery.py\n+++ b/parsers/lib/AU_battery.py\n@@ -21,11 +21,9 @@\n \n s = session or requests.Session()\n req = s.get(nemlog_url)\n-\n data = []\n- for line in req.iter_lines():\n+ for line in req.iter_lines(decode_unicode=True):\n data.append(line)\n-\n try:\n latest = json.loads(data[-1])\n except IndexError:\n", "issue": "AU battery returning type error\n```\r\nfetch_production(\"AUS-SA\") ->\r\nTraceback (most recent call last):\r\n File \"AU.py\", line 558, in <module>\r\n print(fetch_production('AUS-SA'))\r\n File \"AU.py\", line 422, in fetch_production\r\n data['storage']['battery'] = AU_battery.fetch_SA_battery()\r\n File \"/home/chris/electricitymap/parsers/lib/AU_battery.py\", line 30, in fetch_SA_battery\r\n latest = json.loads(data[-1])\r\n File \"/usr/lib/python3.5/json/__init__.py\", line 312, in loads\r\n s.__class__.__name__))\r\nTypeError: the JSON object must be str, not 'bytes'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Parser for South Australia's 129MWh battery built by Tesla.\"\"\"\nimport arrow\nimport json\nimport requests\n\n# nemlog_url gets generation status in 5 min intervals.\n\n\ndef fetch_SA_battery(session=None):\n \"\"\"\n Makes a request to the nemlog api for South Australia battery data.\n Returns a float or None.\n \"\"\"\n\n today = arrow.now('Australia/Adelaide')\n current = today.format('YYYYMMDD')\n old = today.shift(days=-2).format('YYYYMMDD')\n nemlog_url = 'http://nemlog.com.au/api/unit/HPRL1/{}/{}/json'.format(old, current)\n\n s = session or requests.Session()\n req = s.get(nemlog_url)\n\n data = []\n for line in req.iter_lines():\n data.append(line)\n\n try:\n latest = json.loads(data[-1])\n except IndexError:\n # No data available.\n return None\n\n state = float(latest[\"SCADAVALUE\"])\n\n # Source classifies charge/discharge opposite to EM.\n battery_status = -1 * state\n\n return battery_status\n\n\nif __name__ == '__main__':\n print('fetch_SA_battery() ->')\n print(fetch_SA_battery())\n", "path": "parsers/lib/AU_battery.py"}]} | 1,063 | 121 |
gh_patches_debug_16023 | rasdani/github-patches | git_diff | databricks__koalas-161 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show pandas style Table of Contents on the left side in docs
Right now our docs show a weird Table of Contents for only a section, rather than the entire doc. Can we fix it so it shows the Table of Contents of the entire docs, e.g. start from the top level?
<img width="647" alt="Screen Shot 2019-04-23 at 4 40 38 PM" src="https://user-images.githubusercontent.com/323388/56622865-9351b600-65e6-11e9-98b3-7930660b1c93.png">
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # http://www.sphinx-doc.org/en/master/config
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12
13 import os
14 import sys
15 from databricks import koalas
16 sys.path.insert(0, os.path.abspath('.'))
17
18
19 # -- Project information -----------------------------------------------------
20
21 project = 'Koalas'
22 copyright = '2019, Databricks'
23 author = 'The Koalas Team'
24
25 # The full version, including alpha/beta/rc tags
26 release = os.environ.get('RELEASE_VERSION', koalas.__version__)
27
28
29 # -- General configuration ---------------------------------------------------
30
31 # If your documentation needs a minimal Sphinx version, state it here.
32 needs_sphinx = '1.2'
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 'sphinx.ext.autodoc',
39 'sphinx.ext.viewcode',
40 'numpydoc', # handle NumPy documentation formatted docstrings. Needs to install
41 'nbsphinx', # Jupyter Notebook. Needs to install
42 ]
43
44 # Add any paths that contain templates here, relative to this directory.
45 templates_path = ['_templates']
46
47 # List of patterns, relative to source directory, that match files and
48 # directories to ignore when looking for source files.
49 # This pattern also affects html_static_path and html_extra_path.
50 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
51
52 # The name of the Pygments (syntax highlighting) style to use.
53 pygments_style = 'sphinx'
54
55 # The master toctree document.
56 master_doc = 'index'
57
58 numpydoc_show_class_members = False
59
60 # -- Options for auto output -------------------------------------------------
61
62 autoclass_content = 'both'
63 autosummary_generate = True
64
65
66 # -- Options for HTML output -------------------------------------------------
67
68 # The theme to use for HTML and HTML Help pages. See the documentation for
69 # a list of builtin themes.
70 #
71 html_theme = 'nature'
72
73 # Add any paths that contain custom static files (such as style sheets) here,
74 # relative to this directory. They are copied after the builtin static files,
75 # so a file named "default.css" will overwrite the builtin "default.css".
76 html_static_path = ['_static']
77
78 # If false, no index is generated.
79 html_use_index = False
80
81 # If false, no module index is generated.
82 html_domain_indices = False
83
84
85 # -- Options for manual page output ---------------------------------------
86
87 # One entry per manual page. List of tuples
88 # (source start file, name, description, authors, manual section).
89 man_pages = [
90 ('index', 'databricks.koalas', u'databricks.koalas Documentation',
91 [u'Author'], 1)
92 ]
93
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -68,13 +68,16 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'nature'
+html_theme = 'nature_with_gtoc'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
+# Add any paths that contain custom themes here, relative to this directory.
+html_theme_path = ['themes']
+
# If false, no index is generated.
html_use_index = False
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -68,13 +68,16 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n-html_theme = 'nature'\n+html_theme = 'nature_with_gtoc'\n \n # Add any paths that contain custom static files (such as style sheets) here,\n # relative to this directory. They are copied after the builtin static files,\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n html_static_path = ['_static']\n \n+# Add any paths that contain custom themes here, relative to this directory.\n+html_theme_path = ['themes']\n+\n # If false, no index is generated.\n html_use_index = False\n", "issue": "Show pandas style Table of Contents on the left side in docs\nRight now our docs show a weird Table of Contents for only a section, rather than the entire doc. Can we fix it so it shows the Table of Contents of the entire docs, e.g. start from the top level?\r\n\r\n<img width=\"647\" alt=\"Screen Shot 2019-04-23 at 4 40 38 PM\" src=\"https://user-images.githubusercontent.com/323388/56622865-9351b600-65e6-11e9-98b3-7930660b1c93.png\">\r\n\r\n\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\nfrom databricks import koalas\nsys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Koalas'\ncopyright = '2019, Databricks'\nauthor = 'The Koalas Team'\n\n# The full version, including alpha/beta/rc tags\nrelease = os.environ.get('RELEASE_VERSION', koalas.__version__)\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = '1.2'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.viewcode',\n 'numpydoc', # handle NumPy documentation formatted docstrings. Needs to install\n 'nbsphinx', # Jupyter Notebook. Needs to install\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# The master toctree document.\nmaster_doc = 'index'\n\nnumpydoc_show_class_members = False\n\n# -- Options for auto output -------------------------------------------------\n\nautoclass_content = 'both'\nautosummary_generate = True\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'nature'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If false, no module index is generated.\nhtml_domain_indices = False\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'databricks.koalas', u'databricks.koalas Documentation',\n [u'Author'], 1)\n]\n", "path": "docs/source/conf.py"}]} | 1,548 | 182 |
gh_patches_debug_23082 | rasdani/github-patches | git_diff | microsoft__playwright-python-401 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Interactive mode (REPL) Error !!!
**pip install playwright==0.162.2**
from playwright import sync_playwright
**playwright = sync_playwright().start()**
Traceback (most recent call last):
File "<pyshell#1>", line 1, in
playwright = sync_playwright().start()
File "C:\Python37\lib\site-packages\playwright_init_.py", line 34, in sync_playwright
return SyncPlaywrightContextManager()
File "C:\Python37\lib\site-packages\playwright\main.py", line 81, in init
self._connection = run_driver()
File "C:\Python37\lib\site-packages\playwright\main.py", line 76, in run_driver
return loop.run_until_complete(run_driver_async())
File "C:\Python37\lib\asyncio\base_events.py", line 587, in run_until_complete
return future.result()
File "C:\Python37\lib\site-packages\playwright\main.py", line 61, in run_driver_async
stderr=_get_stderr_fileno(),
File "C:\Python37\lib\site-packages\playwright\main.py", line 54, in _get_stderr_fileno
return sys.stderr.fileno()
**AttributeError: 'NoneType' object has no attribute 'fileno'**
</issue>
<code>
[start of playwright/_impl/_transport.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import json
17 import os
18 import sys
19 from pathlib import Path
20 from typing import Dict
21
22
23 class Transport:
24 def __init__(self, driver_executable: Path) -> None:
25 super().__init__()
26 self.on_message = lambda _: None
27 self._stopped = False
28 self._driver_executable = driver_executable
29 self._loop: asyncio.AbstractEventLoop
30
31 def stop(self) -> None:
32 self._stopped = True
33 self._output.close()
34
35 async def run(self) -> None:
36 self._loop = asyncio.get_running_loop()
37 driver_executable = self._driver_executable
38
39 proc = await asyncio.create_subprocess_exec(
40 str(driver_executable),
41 "run-driver",
42 stdin=asyncio.subprocess.PIPE,
43 stdout=asyncio.subprocess.PIPE,
44 stderr=sys.stderr,
45 limit=32768,
46 )
47 assert proc.stdout
48 assert proc.stdin
49 self._output = proc.stdin
50
51 while not self._stopped:
52 try:
53 buffer = await proc.stdout.readexactly(4)
54 length = int.from_bytes(buffer, byteorder="little", signed=False)
55 buffer = bytes(0)
56 while length:
57 to_read = min(length, 32768)
58 data = await proc.stdout.readexactly(to_read)
59 length -= to_read
60 if len(buffer):
61 buffer = buffer + data
62 else:
63 buffer = data
64 obj = json.loads(buffer)
65
66 if "DEBUGP" in os.environ: # pragma: no cover
67 print("\x1b[33mRECV>\x1b[0m", json.dumps(obj, indent=2))
68 self.on_message(obj)
69 except asyncio.IncompleteReadError:
70 break
71 await asyncio.sleep(0)
72
73 def send(self, message: Dict) -> None:
74 msg = json.dumps(message)
75 if "DEBUGP" in os.environ: # pragma: no cover
76 print("\x1b[32mSEND>\x1b[0m", json.dumps(message, indent=2))
77 data = msg.encode()
78 self._output.write(
79 len(data).to_bytes(4, byteorder="little", signed=False) + data
80 )
81
[end of playwright/_impl/_transport.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/playwright/_impl/_transport.py b/playwright/_impl/_transport.py
--- a/playwright/_impl/_transport.py
+++ b/playwright/_impl/_transport.py
@@ -13,11 +13,25 @@
# limitations under the License.
import asyncio
+import io
import json
import os
import sys
from pathlib import Path
-from typing import Dict
+from typing import Dict, Optional
+
+
+# Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77
+def _get_stderr_fileno() -> Optional[int]:
+ try:
+ return sys.stderr.fileno()
+ except (AttributeError, io.UnsupportedOperation):
+ # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
+ # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
+ # This is potentially dangerous, but the best we can do.
+ if not hasattr(sys, "__stderr__"):
+ return None
+ return sys.__stderr__.fileno()
class Transport:
@@ -41,7 +55,7 @@
"run-driver",
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
- stderr=sys.stderr,
+ stderr=_get_stderr_fileno(),
limit=32768,
)
assert proc.stdout
| {"golden_diff": "diff --git a/playwright/_impl/_transport.py b/playwright/_impl/_transport.py\n--- a/playwright/_impl/_transport.py\n+++ b/playwright/_impl/_transport.py\n@@ -13,11 +13,25 @@\n # limitations under the License.\n \n import asyncio\n+import io\n import json\n import os\n import sys\n from pathlib import Path\n-from typing import Dict\n+from typing import Dict, Optional\n+\n+\n+# Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77\n+def _get_stderr_fileno() -> Optional[int]:\n+ try:\n+ return sys.stderr.fileno()\n+ except (AttributeError, io.UnsupportedOperation):\n+ # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n+ # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n+ # This is potentially dangerous, but the best we can do.\n+ if not hasattr(sys, \"__stderr__\"):\n+ return None\n+ return sys.__stderr__.fileno()\n \n \n class Transport:\n@@ -41,7 +55,7 @@\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n- stderr=sys.stderr,\n+ stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n", "issue": "Interactive mode (REPL) Error !!!\n**pip install playwright==0.162.2**\r\n\r\nfrom playwright import sync_playwright\r\n**playwright = sync_playwright().start()**\r\n\r\nTraceback (most recent call last):\r\nFile \"<pyshell#1>\", line 1, in\r\nplaywright = sync_playwright().start()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright_init_.py\", line 34, in sync_playwright\r\nreturn SyncPlaywrightContextManager()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 81, in init\r\nself._connection = run_driver()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 76, in run_driver\r\nreturn loop.run_until_complete(run_driver_async())\r\nFile \"C:\\Python37\\lib\\asyncio\\base_events.py\", line 587, in run_until_complete\r\nreturn future.result()\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 61, in run_driver_async\r\nstderr=_get_stderr_fileno(),\r\nFile \"C:\\Python37\\lib\\site-packages\\playwright\\main.py\", line 54, in _get_stderr_fileno\r\nreturn sys.stderr.fileno()\r\n**AttributeError: 'NoneType' object has no attribute 'fileno'**\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport json\nimport os\nimport sys\nfrom pathlib import Path\nfrom typing import Dict\n\n\nclass Transport:\n def __init__(self, driver_executable: Path) -> None:\n super().__init__()\n self.on_message = lambda _: None\n self._stopped = False\n self._driver_executable = driver_executable\n self._loop: asyncio.AbstractEventLoop\n\n def stop(self) -> None:\n self._stopped = True\n self._output.close()\n\n async def run(self) -> None:\n self._loop = asyncio.get_running_loop()\n driver_executable = self._driver_executable\n\n proc = await 
asyncio.create_subprocess_exec(\n str(driver_executable),\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=sys.stderr,\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n self._output = proc.stdin\n\n while not self._stopped:\n try:\n buffer = await proc.stdout.readexactly(4)\n length = int.from_bytes(buffer, byteorder=\"little\", signed=False)\n buffer = bytes(0)\n while length:\n to_read = min(length, 32768)\n data = await proc.stdout.readexactly(to_read)\n length -= to_read\n if len(buffer):\n buffer = buffer + data\n else:\n buffer = data\n obj = json.loads(buffer)\n\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[33mRECV>\\x1b[0m\", json.dumps(obj, indent=2))\n self.on_message(obj)\n except asyncio.IncompleteReadError:\n break\n await asyncio.sleep(0)\n\n def send(self, message: Dict) -> None:\n msg = json.dumps(message)\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[32mSEND>\\x1b[0m\", json.dumps(message, indent=2))\n data = msg.encode()\n self._output.write(\n len(data).to_bytes(4, byteorder=\"little\", signed=False) + data\n )\n", "path": "playwright/_impl/_transport.py"}]} | 1,607 | 350 |
gh_patches_debug_16461 | rasdani/github-patches | git_diff | conda__conda-build-3212 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow overriding the .so check for noarch: python?
`compas` is a pure-python package that could be made `noarch: python` except that it [ships some `.so` and `.dll` files](https://github.com/compas-dev/compas/tree/master/src/compas/numerical/fd/__fd_cpp) (that it doesn't compile, so they're the same across platforms). That's triggering [this check](https://github.com/conda/conda-build/blob/19f5d7847ee8a4c5b979680595103fa6cc21e4b1/conda_build/noarch_python.py#L60-L66) and killing the build ([PR](https://github.com/conda-forge/compas-feedstock/pull/6#issuecomment-429024394), [failed build](https://circleci.com/gh/conda-forge/compas-feedstock/37?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).
The check definitely makes sense in general. But maybe there should be a way to override it for cases like this?
</issue>
<code>
[start of conda_build/noarch_python.py]
1 import io
2 import json
3 import locale
4 import logging
5 import os
6 from os.path import basename, dirname, isdir, join, isfile
7 import shutil
8 import sys
9
10 ISWIN = sys.platform.startswith('win')
11
12
13 def _force_dir(dirname):
14 if not isdir(dirname):
15 os.makedirs(dirname)
16
17
18 def _error_exit(exit_message):
19 sys.exit("[noarch_python] %s" % exit_message)
20
21
22 def rewrite_script(fn, prefix):
23 """Take a file from the bin directory and rewrite it into the python-scripts
24 directory with the same permissions after it passes some sanity checks for
25 noarch pacakges"""
26
27 # Load and check the source file for not being a binary
28 src = join(prefix, 'Scripts' if ISWIN else 'bin', fn)
29 with io.open(src, encoding=locale.getpreferredencoding()) as fi:
30 try:
31 data = fi.read()
32 except UnicodeDecodeError: # file is binary
33 _error_exit("Noarch package contains binary script: %s" % fn)
34 src_mode = os.stat(src).st_mode
35 os.unlink(src)
36
37 # Get rid of '-script.py' suffix on Windows
38 if ISWIN and fn.endswith('-script.py'):
39 fn = fn[:-10]
40
41 # Rewrite the file to the python-scripts directory
42 dst_dir = join(prefix, 'python-scripts')
43 _force_dir(dst_dir)
44 dst = join(dst_dir, fn)
45 with open(dst, 'w') as fo:
46 fo.write(data)
47 os.chmod(dst, src_mode)
48 return fn
49
50
51 def handle_file(f, d, prefix):
52 """Process a file for inclusion in a noarch python package.
53 """
54 path = join(prefix, f)
55
56 # Ignore egg-info and pyc files.
57 if f.endswith(('.egg-info', '.pyc', '.pyo')):
58 os.unlink(path)
59
60 # The presence of .so indicated this is not a noarch package
61 elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):
62 if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or
63 basename(f[:-4]) in d['python-scripts']):
64 os.unlink(path) # this is an entry point with a matching xx-script.py
65 return
66 _error_exit("Error: Binary library or executable found: %s" % f)
67
68 elif 'site-packages' in f:
69 nsp = join(prefix, 'site-packages')
70 _force_dir(nsp)
71
72 g = f[f.find('site-packages'):]
73 dst = join(prefix, g)
74 dst_dir = dirname(dst)
75 _force_dir(dst_dir)
76 shutil.move(path, dst)
77 d['site-packages'].append(g[14:])
78
79 # Treat scripts specially with the logic from above
80 elif f.startswith(('bin/', 'Scripts')):
81 fn = basename(path)
82 fn = rewrite_script(fn, prefix)
83 d['python-scripts'].append(fn)
84
85 # Include examples in the metadata doc
86 elif f.startswith(('Examples/', 'Examples\\')):
87 d['Examples'].append(f[9:])
88 # No special treatment for other files
89 # leave them as-is
90 else:
91 # this should be the built-in logging module, not conda-build's stuff, because this file is standalone.
92 log = logging.getLogger(__name__)
93 log.debug("Don't know how to handle file: %s. Including it as-is." % f)
94
95
96 def populate_files(m, files, prefix, entry_point_scripts=None):
97 d = {'dist': m.dist(),
98 'site-packages': [],
99 'python-scripts': [],
100 'Examples': []}
101
102 # Populate site-package, python-scripts, and Examples into above
103 for f in files:
104 handle_file(f, d, prefix)
105
106 # Windows path conversion
107 if ISWIN:
108 for fns in (d['site-packages'], d['Examples']):
109 for i, fn in enumerate(fns):
110 fns[i] = fn.replace('\\', '/')
111
112 if entry_point_scripts:
113 for entry_point in entry_point_scripts:
114 src = join(prefix, entry_point)
115 if os.path.isfile(src):
116 os.unlink(src)
117
118 return d
119
120
121 def transform(m, files, prefix):
122 bin_dir = join(prefix, 'bin')
123 _force_dir(bin_dir)
124
125 scripts_dir = join(prefix, 'Scripts')
126 _force_dir(scripts_dir)
127
128 name = m.name()
129
130 # Create *nix prelink script
131 # Note: it's important to use LF newlines or it wont work if we build on Win
132 with open(join(bin_dir, '.%s-pre-link.sh' % name), 'wb') as fo:
133 fo.write('''\
134 #!/bin/bash
135 $PREFIX/bin/python $SOURCE_DIR/link.py
136 '''.encode('utf-8'))
137
138 # Create windows prelink script (be nice and use Windows newlines)
139 with open(join(scripts_dir, '.%s-pre-link.bat' % name), 'wb') as fo:
140 fo.write('''\
141 @echo off
142 "%PREFIX%\\python.exe" "%SOURCE_DIR%\\link.py"
143 '''.replace('\n', '\r\n').encode('utf-8'))
144
145 d = populate_files(m, files, prefix)
146
147 # Find our way to this directory
148 this_dir = dirname(__file__)
149
150 # copy in windows exe shims if there are any python-scripts
151 if d['python-scripts']:
152 for fn in 'cli-32.exe', 'cli-64.exe':
153 shutil.copyfile(join(this_dir, fn), join(prefix, fn))
154
155 # Read the local _link.py
156 with open(join(this_dir, '_link.py')) as fi:
157 link_code = fi.read()
158
159 # Write the package metadata, and bumper with code for linking
160 with open(join(prefix, 'link.py'), 'w') as fo:
161 fo.write('DATA = ')
162 json.dump(d, fo, indent=2, sort_keys=True)
163 fo.write('\n## END DATA\n\n')
164 fo.write(link_code)
165
[end of conda_build/noarch_python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda_build/noarch_python.py b/conda_build/noarch_python.py
--- a/conda_build/noarch_python.py
+++ b/conda_build/noarch_python.py
@@ -57,13 +57,10 @@
if f.endswith(('.egg-info', '.pyc', '.pyo')):
os.unlink(path)
- # The presence of .so indicated this is not a noarch package
- elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):
- if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or
- basename(f[:-4]) in d['python-scripts']):
- os.unlink(path) # this is an entry point with a matching xx-script.py
- return
- _error_exit("Error: Binary library or executable found: %s" % f)
+ if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or
+ basename(f[:-4]) in d['python-scripts']):
+ os.unlink(path) # this is an entry point with a matching xx-script.py
+ return
elif 'site-packages' in f:
nsp = join(prefix, 'site-packages')
| {"golden_diff": "diff --git a/conda_build/noarch_python.py b/conda_build/noarch_python.py\n--- a/conda_build/noarch_python.py\n+++ b/conda_build/noarch_python.py\n@@ -57,13 +57,10 @@\n if f.endswith(('.egg-info', '.pyc', '.pyo')):\n os.unlink(path)\n \n- # The presence of .so indicated this is not a noarch package\n- elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):\n- if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n- basename(f[:-4]) in d['python-scripts']):\n- os.unlink(path) # this is an entry point with a matching xx-script.py\n- return\n- _error_exit(\"Error: Binary library or executable found: %s\" % f)\n+ if f.endswith('.exe') and (isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n+ basename(f[:-4]) in d['python-scripts']):\n+ os.unlink(path) # this is an entry point with a matching xx-script.py\n+ return\n \n elif 'site-packages' in f:\n nsp = join(prefix, 'site-packages')\n", "issue": "allow overriding the .so check for noarch: python?\n`compas` is a pure-python package that could be made `noarch: python` except that it [ships some `.so` and `.dll` files](https://github.com/compas-dev/compas/tree/master/src/compas/numerical/fd/__fd_cpp) (that it doesn't compile, so they're the same across platforms). That's triggering [this check](https://github.com/conda/conda-build/blob/19f5d7847ee8a4c5b979680595103fa6cc21e4b1/conda_build/noarch_python.py#L60-L66) and killing the build ([PR](https://github.com/conda-forge/compas-feedstock/pull/6#issuecomment-429024394), [failed build](https://circleci.com/gh/conda-forge/compas-feedstock/37?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).\r\n\r\nThe check definitely makes sense in general. But maybe there should be a way to override it for cases like this?\n", "before_files": [{"content": "import io\nimport json\nimport locale\nimport logging\nimport os\nfrom os.path import basename, dirname, isdir, join, isfile\nimport shutil\nimport sys\n\nISWIN = sys.platform.startswith('win')\n\n\ndef _force_dir(dirname):\n if not isdir(dirname):\n os.makedirs(dirname)\n\n\ndef _error_exit(exit_message):\n sys.exit(\"[noarch_python] %s\" % exit_message)\n\n\ndef rewrite_script(fn, prefix):\n \"\"\"Take a file from the bin directory and rewrite it into the python-scripts\n directory with the same permissions after it passes some sanity checks for\n noarch pacakges\"\"\"\n\n # Load and check the source file for not being a binary\n src = join(prefix, 'Scripts' if ISWIN else 'bin', fn)\n with io.open(src, encoding=locale.getpreferredencoding()) as fi:\n try:\n data = fi.read()\n except UnicodeDecodeError: # file is binary\n _error_exit(\"Noarch package contains binary script: %s\" % fn)\n src_mode = os.stat(src).st_mode\n os.unlink(src)\n\n # Get rid of '-script.py' suffix on Windows\n if ISWIN and fn.endswith('-script.py'):\n fn = fn[:-10]\n\n # Rewrite the file to the python-scripts directory\n dst_dir = join(prefix, 'python-scripts')\n _force_dir(dst_dir)\n dst = join(dst_dir, fn)\n with open(dst, 'w') as fo:\n fo.write(data)\n os.chmod(dst, src_mode)\n return fn\n\n\ndef handle_file(f, d, prefix):\n \"\"\"Process a file for inclusion in a noarch python package.\n \"\"\"\n path = join(prefix, f)\n\n # Ignore egg-info and pyc files.\n if f.endswith(('.egg-info', '.pyc', '.pyo')):\n os.unlink(path)\n\n # The presence of .so indicated this is not a noarch package\n elif f.endswith(('.so', '.dll', '.pyd', '.exe', '.dylib')):\n if f.endswith('.exe') and 
(isfile(os.path.join(prefix, f[:-4] + '-script.py')) or\n basename(f[:-4]) in d['python-scripts']):\n os.unlink(path) # this is an entry point with a matching xx-script.py\n return\n _error_exit(\"Error: Binary library or executable found: %s\" % f)\n\n elif 'site-packages' in f:\n nsp = join(prefix, 'site-packages')\n _force_dir(nsp)\n\n g = f[f.find('site-packages'):]\n dst = join(prefix, g)\n dst_dir = dirname(dst)\n _force_dir(dst_dir)\n shutil.move(path, dst)\n d['site-packages'].append(g[14:])\n\n # Treat scripts specially with the logic from above\n elif f.startswith(('bin/', 'Scripts')):\n fn = basename(path)\n fn = rewrite_script(fn, prefix)\n d['python-scripts'].append(fn)\n\n # Include examples in the metadata doc\n elif f.startswith(('Examples/', 'Examples\\\\')):\n d['Examples'].append(f[9:])\n # No special treatment for other files\n # leave them as-is\n else:\n # this should be the built-in logging module, not conda-build's stuff, because this file is standalone.\n log = logging.getLogger(__name__)\n log.debug(\"Don't know how to handle file: %s. Including it as-is.\" % f)\n\n\ndef populate_files(m, files, prefix, entry_point_scripts=None):\n d = {'dist': m.dist(),\n 'site-packages': [],\n 'python-scripts': [],\n 'Examples': []}\n\n # Populate site-package, python-scripts, and Examples into above\n for f in files:\n handle_file(f, d, prefix)\n\n # Windows path conversion\n if ISWIN:\n for fns in (d['site-packages'], d['Examples']):\n for i, fn in enumerate(fns):\n fns[i] = fn.replace('\\\\', '/')\n\n if entry_point_scripts:\n for entry_point in entry_point_scripts:\n src = join(prefix, entry_point)\n if os.path.isfile(src):\n os.unlink(src)\n\n return d\n\n\ndef transform(m, files, prefix):\n bin_dir = join(prefix, 'bin')\n _force_dir(bin_dir)\n\n scripts_dir = join(prefix, 'Scripts')\n _force_dir(scripts_dir)\n\n name = m.name()\n\n # Create *nix prelink script\n # Note: it's important to use LF newlines or it wont work if we build on Win\n with open(join(bin_dir, '.%s-pre-link.sh' % name), 'wb') as fo:\n fo.write('''\\\n #!/bin/bash\n $PREFIX/bin/python $SOURCE_DIR/link.py\n '''.encode('utf-8'))\n\n # Create windows prelink script (be nice and use Windows newlines)\n with open(join(scripts_dir, '.%s-pre-link.bat' % name), 'wb') as fo:\n fo.write('''\\\n @echo off\n \"%PREFIX%\\\\python.exe\" \"%SOURCE_DIR%\\\\link.py\"\n '''.replace('\\n', '\\r\\n').encode('utf-8'))\n\n d = populate_files(m, files, prefix)\n\n # Find our way to this directory\n this_dir = dirname(__file__)\n\n # copy in windows exe shims if there are any python-scripts\n if d['python-scripts']:\n for fn in 'cli-32.exe', 'cli-64.exe':\n shutil.copyfile(join(this_dir, fn), join(prefix, fn))\n\n # Read the local _link.py\n with open(join(this_dir, '_link.py')) as fi:\n link_code = fi.read()\n\n # Write the package metadata, and bumper with code for linking\n with open(join(prefix, 'link.py'), 'w') as fo:\n fo.write('DATA = ')\n json.dump(d, fo, indent=2, sort_keys=True)\n fo.write('\\n## END DATA\\n\\n')\n fo.write(link_code)\n", "path": "conda_build/noarch_python.py"}]} | 2,525 | 287 |
gh_patches_debug_24405 | rasdani/github-patches | git_diff | zulip__zulip-14591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove the TODO "7 days" restriction for edit and move topics
Right now we have a restriction to move only the messages from the last week, in the method
`update_messages_for_topic_edit` in the file `zerver/lib/topic.py`:
```
# We only change messages up to 7 days in the past, to avoid hammering our
# DB by changing an unbounded amount of messages
#
# TODO: Look at removing this restriction and/or add a "change_last_week"
# option; this behavior feels buggy.
```
</issue>
<code>
[start of zerver/lib/topic.py]
1 import datetime
2
3 from django.db import connection
4 from django.db.models.query import QuerySet, Q
5 from django.utils.timezone import now as timezone_now
6
7 from sqlalchemy.sql import (
8 column,
9 literal,
10 func,
11 )
12
13 from zerver.lib.request import REQ
14 from zerver.models import (
15 Message,
16 Recipient,
17 Stream,
18 UserMessage,
19 UserProfile,
20 )
21
22 from typing import Any, Dict, List, Optional, Tuple
23
24 # Only use these constants for events.
25 ORIG_TOPIC = "orig_subject"
26 TOPIC_NAME = "subject"
27 TOPIC_LINKS = "topic_links"
28 MATCH_TOPIC = "match_subject"
29
30 # This constant is actually embedded into
31 # the JSON data for message edit history,
32 # so we'll always need to handle legacy data
33 # unless we do a pretty tricky migration.
34 LEGACY_PREV_TOPIC = "prev_subject"
35
36 # This constant is pretty closely coupled to the
37 # database, but it's the JSON field.
38 EXPORT_TOPIC_NAME = "subject"
39
40 '''
41 The following functions are for user-facing APIs
42 where we'll want to support "subject" for a while.
43 '''
44
45 def get_topic_from_message_info(message_info: Dict[str, Any]) -> str:
46 '''
47 Use this where you are getting dicts that are based off of messages
48 that may come from the outside world, especially from third party
49 APIs and bots.
50
51 We prefer 'topic' to 'subject' here. We expect at least one field
52 to be present (or the caller must know how to handle KeyError).
53 '''
54 if 'topic' in message_info:
55 return message_info['topic']
56
57 return message_info['subject']
58
59 def REQ_topic() -> Optional[str]:
60 # REQ handlers really return a REQ, but we
61 # lie to make the rest of the type matching work.
62 return REQ(
63 whence='topic',
64 aliases=['subject'],
65 converter=lambda x: x.strip(),
66 default=None,
67 )
68
69 '''
70 TRY TO KEEP THIS DIVIDING LINE.
71
72 Below this line we want to make it so that functions are only
73 using "subject" in the DB sense, and nothing customer facing.
74
75 '''
76
77 # This is used in low-level message functions in
78 # zerver/lib/message.py, and it's not user facing.
79 DB_TOPIC_NAME = "subject"
80 MESSAGE__TOPIC = 'message__subject'
81
82 def topic_match_sa(topic_name: str) -> Any:
83 # _sa is short for Sql Alchemy, which we use mostly for
84 # queries that search messages
85 topic_cond = func.upper(column("subject")) == func.upper(literal(topic_name))
86 return topic_cond
87
88 def topic_column_sa() -> Any:
89 return column("subject")
90
91 def filter_by_exact_message_topic(query: QuerySet, message: Message) -> QuerySet:
92 topic_name = message.topic_name()
93 return query.filter(subject=topic_name)
94
95 def filter_by_topic_name_via_message(query: QuerySet, topic_name: str) -> QuerySet:
96 return query.filter(message__subject__iexact=topic_name)
97
98 def messages_for_topic(stream_recipient_id: int, topic_name: str) -> QuerySet:
99 return Message.objects.filter(
100 recipient_id=stream_recipient_id,
101 subject__iexact=topic_name,
102 )
103
104 def save_message_for_edit_use_case(message: Message) -> None:
105 message.save(update_fields=[TOPIC_NAME, "content", "rendered_content",
106 "rendered_content_version", "last_edit_time",
107 "edit_history", "has_attachment", "has_image",
108 "has_link", "recipient_id"])
109
110
111 def user_message_exists_for_topic(user_profile: UserProfile,
112 recipient: Recipient,
113 topic_name: str) -> bool:
114 return UserMessage.objects.filter(
115 user_profile=user_profile,
116 message__recipient=recipient,
117 message__subject__iexact=topic_name,
118 ).exists()
119
120 def update_messages_for_topic_edit(message: Message,
121 propagate_mode: str,
122 orig_topic_name: str,
123 topic_name: Optional[str],
124 new_stream: Optional[Stream]) -> List[Message]:
125 propagate_query = Q(recipient = message.recipient, subject = orig_topic_name)
126 if propagate_mode == 'change_all':
127 # We only change messages up to 7 days in the past, to avoid hammering our
128 # DB by changing an unbounded amount of messages
129 #
130 # TODO: Look at removing this restriction and/or add a "change_last_week"
131 # option; this behavior feels buggy.
132 before_bound = timezone_now() - datetime.timedelta(days=7)
133
134 propagate_query = (propagate_query & ~Q(id = message.id) &
135 Q(date_sent__range=(before_bound, timezone_now())))
136 if propagate_mode == 'change_later':
137 propagate_query = propagate_query & Q(id__gt = message.id)
138
139 messages = Message.objects.filter(propagate_query).select_related()
140
141 update_fields = {}
142
143 # Evaluate the query before running the update
144 messages_list = list(messages)
145
146 # The cached ORM objects are not changed by the upcoming
147 # messages.update(), and the remote cache update (done by the
148 # caller) requires the new value, so we manually update the
149 # objects in addition to sending a bulk query to the database.
150 if new_stream is not None:
151 update_fields["recipient"] = new_stream.recipient
152 for m in messages_list:
153 m.recipient = new_stream.recipient
154 if topic_name is not None:
155 update_fields["subject"] = topic_name
156 for m in messages_list:
157 m.set_topic_name(topic_name)
158
159 messages.update(**update_fields)
160
161 return messages_list
162
163 def generate_topic_history_from_db_rows(rows: List[Tuple[str, int]]) -> List[Dict[str, Any]]:
164 canonical_topic_names: Dict[str, Tuple[int, str]] = {}
165
166 # Sort rows by max_message_id so that if a topic
167 # has many different casings, we use the most
168 # recent row.
169 rows = sorted(rows, key=lambda tup: tup[1])
170
171 for (topic_name, max_message_id) in rows:
172 canonical_name = topic_name.lower()
173 canonical_topic_names[canonical_name] = (max_message_id, topic_name)
174
175 history = []
176 for canonical_topic, (max_message_id, topic_name) in canonical_topic_names.items():
177 history.append(dict(
178 name=topic_name,
179 max_id=max_message_id)
180 )
181 return sorted(history, key=lambda x: -x['max_id'])
182
183 def get_topic_history_for_stream(user_profile: UserProfile,
184 recipient: Recipient,
185 public_history: bool) -> List[Dict[str, Any]]:
186 cursor = connection.cursor()
187 if public_history:
188 query = '''
189 SELECT
190 "zerver_message"."subject" as topic,
191 max("zerver_message".id) as max_message_id
192 FROM "zerver_message"
193 WHERE (
194 "zerver_message"."recipient_id" = %s
195 )
196 GROUP BY (
197 "zerver_message"."subject"
198 )
199 ORDER BY max("zerver_message".id) DESC
200 '''
201 cursor.execute(query, [recipient.id])
202 else:
203 query = '''
204 SELECT
205 "zerver_message"."subject" as topic,
206 max("zerver_message".id) as max_message_id
207 FROM "zerver_message"
208 INNER JOIN "zerver_usermessage" ON (
209 "zerver_usermessage"."message_id" = "zerver_message"."id"
210 )
211 WHERE (
212 "zerver_usermessage"."user_profile_id" = %s AND
213 "zerver_message"."recipient_id" = %s
214 )
215 GROUP BY (
216 "zerver_message"."subject"
217 )
218 ORDER BY max("zerver_message".id) DESC
219 '''
220 cursor.execute(query, [user_profile.id, recipient.id])
221 rows = cursor.fetchall()
222 cursor.close()
223
224 return generate_topic_history_from_db_rows(rows)
225
226 def get_topic_history_for_web_public_stream(recipient: Recipient) -> List[Dict[str, Any]]:
227 cursor = connection.cursor()
228 query = '''
229 SELECT
230 "zerver_message"."subject" as topic,
231 max("zerver_message".id) as max_message_id
232 FROM "zerver_message"
233 WHERE (
234 "zerver_message"."recipient_id" = %s
235 )
236 GROUP BY (
237 "zerver_message"."subject"
238 )
239 ORDER BY max("zerver_message".id) DESC
240 '''
241 cursor.execute(query, [recipient.id])
242 rows = cursor.fetchall()
243 cursor.close()
244
245 return generate_topic_history_from_db_rows(rows)
246
[end of zerver/lib/topic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/lib/topic.py b/zerver/lib/topic.py
--- a/zerver/lib/topic.py
+++ b/zerver/lib/topic.py
@@ -1,8 +1,5 @@
-import datetime
-
from django.db import connection
from django.db.models.query import QuerySet, Q
-from django.utils.timezone import now as timezone_now
from sqlalchemy.sql import (
column,
@@ -124,15 +121,7 @@
new_stream: Optional[Stream]) -> List[Message]:
propagate_query = Q(recipient = message.recipient, subject = orig_topic_name)
if propagate_mode == 'change_all':
- # We only change messages up to 7 days in the past, to avoid hammering our
- # DB by changing an unbounded amount of messages
- #
- # TODO: Look at removing this restriction and/or add a "change_last_week"
- # option; this behavior feels buggy.
- before_bound = timezone_now() - datetime.timedelta(days=7)
-
- propagate_query = (propagate_query & ~Q(id = message.id) &
- Q(date_sent__range=(before_bound, timezone_now())))
+ propagate_query = propagate_query & ~Q(id = message.id)
if propagate_mode == 'change_later':
propagate_query = propagate_query & Q(id__gt = message.id)
| {"golden_diff": "diff --git a/zerver/lib/topic.py b/zerver/lib/topic.py\n--- a/zerver/lib/topic.py\n+++ b/zerver/lib/topic.py\n@@ -1,8 +1,5 @@\n-import datetime\n-\n from django.db import connection\n from django.db.models.query import QuerySet, Q\n-from django.utils.timezone import now as timezone_now\n \n from sqlalchemy.sql import (\n column,\n@@ -124,15 +121,7 @@\n new_stream: Optional[Stream]) -> List[Message]:\n propagate_query = Q(recipient = message.recipient, subject = orig_topic_name)\n if propagate_mode == 'change_all':\n- # We only change messages up to 7 days in the past, to avoid hammering our\n- # DB by changing an unbounded amount of messages\n- #\n- # TODO: Look at removing this restriction and/or add a \"change_last_week\"\n- # option; this behavior feels buggy.\n- before_bound = timezone_now() - datetime.timedelta(days=7)\n-\n- propagate_query = (propagate_query & ~Q(id = message.id) &\n- Q(date_sent__range=(before_bound, timezone_now())))\n+ propagate_query = propagate_query & ~Q(id = message.id)\n if propagate_mode == 'change_later':\n propagate_query = propagate_query & Q(id__gt = message.id)\n", "issue": "Remove the TODO \"7 days\" restriction for edit and move topics\nRight now we have a restriction to move just the messages in the last week in method:\r\n`update_messages_for_topic_edit` file `zerver/lib/topic.py`\r\n```\r\n # We only change messages up to 7 days in the past, to avoid hammering our\r\n # DB by changing an unbounded amount of messages\r\n #\r\n # TODO: Look at removing this restriction and/or add a \"change_last_week\"\r\n # option; this behavior feels buggy.\r\n```\n", "before_files": [{"content": "import datetime\n\nfrom django.db import connection\nfrom django.db.models.query import QuerySet, Q\nfrom django.utils.timezone import now as timezone_now\n\nfrom sqlalchemy.sql import (\n column,\n literal,\n func,\n)\n\nfrom zerver.lib.request import REQ\nfrom zerver.models import (\n Message,\n Recipient,\n Stream,\n UserMessage,\n UserProfile,\n)\n\nfrom typing import Any, Dict, List, Optional, Tuple\n\n# Only use these constants for events.\nORIG_TOPIC = \"orig_subject\"\nTOPIC_NAME = \"subject\"\nTOPIC_LINKS = \"topic_links\"\nMATCH_TOPIC = \"match_subject\"\n\n# This constant is actually embedded into\n# the JSON data for message edit history,\n# so we'll always need to handle legacy data\n# unless we do a pretty tricky migration.\nLEGACY_PREV_TOPIC = \"prev_subject\"\n\n# This constant is pretty closely coupled to the\n# database, but it's the JSON field.\nEXPORT_TOPIC_NAME = \"subject\"\n\n'''\nThe following functions are for user-facing APIs\nwhere we'll want to support \"subject\" for a while.\n'''\n\ndef get_topic_from_message_info(message_info: Dict[str, Any]) -> str:\n '''\n Use this where you are getting dicts that are based off of messages\n that may come from the outside world, especially from third party\n APIs and bots.\n\n We prefer 'topic' to 'subject' here. 
We expect at least one field\n to be present (or the caller must know how to handle KeyError).\n '''\n if 'topic' in message_info:\n return message_info['topic']\n\n return message_info['subject']\n\ndef REQ_topic() -> Optional[str]:\n # REQ handlers really return a REQ, but we\n # lie to make the rest of the type matching work.\n return REQ(\n whence='topic',\n aliases=['subject'],\n converter=lambda x: x.strip(),\n default=None,\n )\n\n'''\nTRY TO KEEP THIS DIVIDING LINE.\n\nBelow this line we want to make it so that functions are only\nusing \"subject\" in the DB sense, and nothing customer facing.\n\n'''\n\n# This is used in low-level message functions in\n# zerver/lib/message.py, and it's not user facing.\nDB_TOPIC_NAME = \"subject\"\nMESSAGE__TOPIC = 'message__subject'\n\ndef topic_match_sa(topic_name: str) -> Any:\n # _sa is short for Sql Alchemy, which we use mostly for\n # queries that search messages\n topic_cond = func.upper(column(\"subject\")) == func.upper(literal(topic_name))\n return topic_cond\n\ndef topic_column_sa() -> Any:\n return column(\"subject\")\n\ndef filter_by_exact_message_topic(query: QuerySet, message: Message) -> QuerySet:\n topic_name = message.topic_name()\n return query.filter(subject=topic_name)\n\ndef filter_by_topic_name_via_message(query: QuerySet, topic_name: str) -> QuerySet:\n return query.filter(message__subject__iexact=topic_name)\n\ndef messages_for_topic(stream_recipient_id: int, topic_name: str) -> QuerySet:\n return Message.objects.filter(\n recipient_id=stream_recipient_id,\n subject__iexact=topic_name,\n )\n\ndef save_message_for_edit_use_case(message: Message) -> None:\n message.save(update_fields=[TOPIC_NAME, \"content\", \"rendered_content\",\n \"rendered_content_version\", \"last_edit_time\",\n \"edit_history\", \"has_attachment\", \"has_image\",\n \"has_link\", \"recipient_id\"])\n\n\ndef user_message_exists_for_topic(user_profile: UserProfile,\n recipient: Recipient,\n topic_name: str) -> bool:\n return UserMessage.objects.filter(\n user_profile=user_profile,\n message__recipient=recipient,\n message__subject__iexact=topic_name,\n ).exists()\n\ndef update_messages_for_topic_edit(message: Message,\n propagate_mode: str,\n orig_topic_name: str,\n topic_name: Optional[str],\n new_stream: Optional[Stream]) -> List[Message]:\n propagate_query = Q(recipient = message.recipient, subject = orig_topic_name)\n if propagate_mode == 'change_all':\n # We only change messages up to 7 days in the past, to avoid hammering our\n # DB by changing an unbounded amount of messages\n #\n # TODO: Look at removing this restriction and/or add a \"change_last_week\"\n # option; this behavior feels buggy.\n before_bound = timezone_now() - datetime.timedelta(days=7)\n\n propagate_query = (propagate_query & ~Q(id = message.id) &\n Q(date_sent__range=(before_bound, timezone_now())))\n if propagate_mode == 'change_later':\n propagate_query = propagate_query & Q(id__gt = message.id)\n\n messages = Message.objects.filter(propagate_query).select_related()\n\n update_fields = {}\n\n # Evaluate the query before running the update\n messages_list = list(messages)\n\n # The cached ORM objects are not changed by the upcoming\n # messages.update(), and the remote cache update (done by the\n # caller) requires the new value, so we manually update the\n # objects in addition to sending a bulk query to the database.\n if new_stream is not None:\n update_fields[\"recipient\"] = new_stream.recipient\n for m in messages_list:\n m.recipient = new_stream.recipient\n if topic_name 
is not None:\n update_fields[\"subject\"] = topic_name\n for m in messages_list:\n m.set_topic_name(topic_name)\n\n messages.update(**update_fields)\n\n return messages_list\n\ndef generate_topic_history_from_db_rows(rows: List[Tuple[str, int]]) -> List[Dict[str, Any]]:\n canonical_topic_names: Dict[str, Tuple[int, str]] = {}\n\n # Sort rows by max_message_id so that if a topic\n # has many different casings, we use the most\n # recent row.\n rows = sorted(rows, key=lambda tup: tup[1])\n\n for (topic_name, max_message_id) in rows:\n canonical_name = topic_name.lower()\n canonical_topic_names[canonical_name] = (max_message_id, topic_name)\n\n history = []\n for canonical_topic, (max_message_id, topic_name) in canonical_topic_names.items():\n history.append(dict(\n name=topic_name,\n max_id=max_message_id)\n )\n return sorted(history, key=lambda x: -x['max_id'])\n\ndef get_topic_history_for_stream(user_profile: UserProfile,\n recipient: Recipient,\n public_history: bool) -> List[Dict[str, Any]]:\n cursor = connection.cursor()\n if public_history:\n query = '''\n SELECT\n \"zerver_message\".\"subject\" as topic,\n max(\"zerver_message\".id) as max_message_id\n FROM \"zerver_message\"\n WHERE (\n \"zerver_message\".\"recipient_id\" = %s\n )\n GROUP BY (\n \"zerver_message\".\"subject\"\n )\n ORDER BY max(\"zerver_message\".id) DESC\n '''\n cursor.execute(query, [recipient.id])\n else:\n query = '''\n SELECT\n \"zerver_message\".\"subject\" as topic,\n max(\"zerver_message\".id) as max_message_id\n FROM \"zerver_message\"\n INNER JOIN \"zerver_usermessage\" ON (\n \"zerver_usermessage\".\"message_id\" = \"zerver_message\".\"id\"\n )\n WHERE (\n \"zerver_usermessage\".\"user_profile_id\" = %s AND\n \"zerver_message\".\"recipient_id\" = %s\n )\n GROUP BY (\n \"zerver_message\".\"subject\"\n )\n ORDER BY max(\"zerver_message\".id) DESC\n '''\n cursor.execute(query, [user_profile.id, recipient.id])\n rows = cursor.fetchall()\n cursor.close()\n\n return generate_topic_history_from_db_rows(rows)\n\ndef get_topic_history_for_web_public_stream(recipient: Recipient) -> List[Dict[str, Any]]:\n cursor = connection.cursor()\n query = '''\n SELECT\n \"zerver_message\".\"subject\" as topic,\n max(\"zerver_message\".id) as max_message_id\n FROM \"zerver_message\"\n WHERE (\n \"zerver_message\".\"recipient_id\" = %s\n )\n GROUP BY (\n \"zerver_message\".\"subject\"\n )\n ORDER BY max(\"zerver_message\".id) DESC\n '''\n cursor.execute(query, [recipient.id])\n rows = cursor.fetchall()\n cursor.close()\n\n return generate_topic_history_from_db_rows(rows)\n", "path": "zerver/lib/topic.py"}]} | 3,122 | 291 |
gh_patches_debug_26087 | rasdani/github-patches | git_diff | nvaccess__nvda-9208 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NVDA hangs up in terminal, when a large piece of text is loaded
I know that normal user doesn't have this problem.
But developers, mainly developers working in terminal, could have.
When large piece of text is loaded to the terminal at the same time, for example, 10000 characters of more, NVDA is hanging up. Also, after a moment, the system hangs up.
The only way is to wait.
NVDA is not reading the text, it's reading pieces of text, then a moment of silence, different piece, silence...
For example, I can call this in ruby by writing
for i in 1..100000
print("A fragment number #{i.to_s} ")
end
Also, we can find this error, when we'll using in terminal app, which writes big pieces of text.
In console commands, like tree, we won't observe this eror, because it isn't loading of text at the same time, there's a while between printing new files.
What is interesting...
The problem is hanging up the all system, you can not open task manager or other apps.
Thank you for help
Greetings,
Dawid Pieper
</issue>
<code>
[start of source/winInputHook.py]
1 #winInputHook.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #Copyright (C) 2006-2008 NVDA Contributors <http://www.nvda-project.org/>
4 #This file is covered by the GNU General Public License.
5 #See the file COPYING for more details.
6
7 import threading
8 import comtypes.client
9 import time
10 from ctypes import *
11 from ctypes.wintypes import *
12 from win32con import WM_QUIT, HC_ACTION, WH_KEYBOARD_LL, LLKHF_UP, LLKHF_EXTENDED, LLKHF_INJECTED, WH_MOUSE_LL, LLMHF_INJECTED
13
14 class KBDLLHOOKSTRUCT(Structure):
15 _fields_=[
16 ('vkCode',DWORD),
17 ('scanCode',DWORD),
18 ('flags',DWORD),
19 ('time',DWORD),
20 ('dwExtraInfo',DWORD),
21 ]
22
23 class MSLLHOOKSTRUCT(Structure):
24 _fields_=[
25 ('pt',POINT),
26 ('mouseData',DWORD),
27 ('flags',DWORD),
28 ('time',DWORD),
29 ('dwExtraInfo',DWORD),
30 ]
31
32 keyDownCallback=None
33 keyUpCallback=None
34 mouseCallback=None
35
36 @WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)
37 def keyboardHook(code,wParam,lParam):
38 if code!=HC_ACTION:
39 return windll.user32.CallNextHookEx(0,code,wParam,lParam)
40 kbd=KBDLLHOOKSTRUCT.from_address(lParam)
41 if keyUpCallback and kbd.flags&LLKHF_UP:
42 if not keyUpCallback(kbd.vkCode,kbd.scanCode,bool(kbd.flags&LLKHF_EXTENDED),bool(kbd.flags&LLKHF_INJECTED)):
43 return 1
44 elif keyDownCallback:
45 if not keyDownCallback(kbd.vkCode,kbd.scanCode,bool(kbd.flags&LLKHF_EXTENDED),bool(kbd.flags&LLKHF_INJECTED)):
46 return 1
47 return windll.user32.CallNextHookEx(0,code,wParam,lParam)
48
49 @WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)
50 def mouseHook(code,wParam,lParam):
51 if code!=HC_ACTION:
52 return windll.user32.CallNextHookEx(0,code,wParam,lParam)
53 msll=MSLLHOOKSTRUCT.from_address(lParam)
54 if mouseCallback:
55 if not mouseCallback(wParam,msll.pt.x,msll.pt.y,msll.flags&LLMHF_INJECTED):
56 return 1
57 return windll.user32.CallNextHookEx(0,code,wParam,lParam)
58
59 hookThread=None
60 hookThreadRefCount=0
61
62 def hookThreadFunc():
63 keyHookID=windll.user32.SetWindowsHookExW(WH_KEYBOARD_LL,keyboardHook,windll.kernel32.GetModuleHandleW(None),0)
64 if keyHookID==0:
65 raise OSError("Could not register keyboard hook")
66 mouseHookID=windll.user32.SetWindowsHookExW(WH_MOUSE_LL,mouseHook,windll.kernel32.GetModuleHandleW(None),0)
67 if mouseHookID==0:
68 raise OSError("Could not register mouse hook")
69 msg=MSG()
70 while windll.user32.GetMessageW(byref(msg),None,0,0):
71 pass
72 if windll.user32.UnhookWindowsHookEx(keyHookID)==0:
73 raise OSError("could not unregister key hook %s"%keyHookID)
74 if windll.user32.UnhookWindowsHookEx(mouseHookID)==0:
75 raise OSError("could not unregister mouse hook %s"%mouseHookID)
76
77 def initialize():
78 global hookThread, hookThreadRefCount
79 hookThreadRefCount+=1
80 if hookThreadRefCount==1:
81 hookThread=threading.Thread(target=hookThreadFunc)
82 hookThread.start()
83
84 def setCallbacks(keyUp=None,keyDown=None,mouse=None):
85 global keyUpCallback, keyDownCallback, mouseCallback
86 if keyUp:
87 keyUpCallback=keyUp
88 if keyDown:
89 keyDownCallback=keyDown
90 if mouse:
91 mouseCallback=mouse
92
93 def terminate():
94 global hookThread, hookThreadRefCount
95 if not hookThread:
96 raise RuntimeError("winInputHook not running")
97 hookThreadRefCount-=1
98 if hookThreadRefCount==0:
99 windll.user32.PostThreadMessageW(hookThread.ident,WM_QUIT,0,0)
100 hookThread.join()
101 hookThread=None
102
[end of source/winInputHook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/source/winInputHook.py b/source/winInputHook.py
--- a/source/winInputHook.py
+++ b/source/winInputHook.py
@@ -10,6 +10,7 @@
from ctypes import *
from ctypes.wintypes import *
from win32con import WM_QUIT, HC_ACTION, WH_KEYBOARD_LL, LLKHF_UP, LLKHF_EXTENDED, LLKHF_INJECTED, WH_MOUSE_LL, LLMHF_INJECTED
+import watchdog
class KBDLLHOOKSTRUCT(Structure):
_fields_=[
@@ -35,7 +36,7 @@
@WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)
def keyboardHook(code,wParam,lParam):
- if code!=HC_ACTION:
+ if watchdog.isAttemptingRecovery or code!=HC_ACTION:
return windll.user32.CallNextHookEx(0,code,wParam,lParam)
kbd=KBDLLHOOKSTRUCT.from_address(lParam)
if keyUpCallback and kbd.flags&LLKHF_UP:
@@ -48,7 +49,7 @@
@WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)
def mouseHook(code,wParam,lParam):
- if code!=HC_ACTION:
+ if watchdog.isAttemptingRecovery or code!=HC_ACTION:
return windll.user32.CallNextHookEx(0,code,wParam,lParam)
msll=MSLLHOOKSTRUCT.from_address(lParam)
if mouseCallback:
| {"golden_diff": "diff --git a/source/winInputHook.py b/source/winInputHook.py\n--- a/source/winInputHook.py\n+++ b/source/winInputHook.py\n@@ -10,6 +10,7 @@\n from ctypes import *\r\n from ctypes.wintypes import *\r\n from win32con import WM_QUIT, HC_ACTION, WH_KEYBOARD_LL, LLKHF_UP, LLKHF_EXTENDED, LLKHF_INJECTED, WH_MOUSE_LL, LLMHF_INJECTED\r\n+import watchdog\r\n \r\n class KBDLLHOOKSTRUCT(Structure):\r\n \t_fields_=[\r\n@@ -35,7 +36,7 @@\n \r\n @WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)\r\n def keyboardHook(code,wParam,lParam):\r\n-\tif code!=HC_ACTION:\r\n+\tif watchdog.isAttemptingRecovery or code!=HC_ACTION:\r\n \t\treturn windll.user32.CallNextHookEx(0,code,wParam,lParam)\r\n \tkbd=KBDLLHOOKSTRUCT.from_address(lParam)\r\n \tif keyUpCallback and kbd.flags&LLKHF_UP:\r\n@@ -48,7 +49,7 @@\n \r\n @WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)\r\n def mouseHook(code,wParam,lParam):\r\n-\tif code!=HC_ACTION:\r\n+\tif watchdog.isAttemptingRecovery or code!=HC_ACTION:\r\n \t\treturn windll.user32.CallNextHookEx(0,code,wParam,lParam)\r\n \tmsll=MSLLHOOKSTRUCT.from_address(lParam)\r\n \tif mouseCallback:\n", "issue": "NVDA hangs up in terminal, when a large piece of text is loaded\nI know that normal user doesn't have this problem.\nBut developers, mainly developers working in terminal, could have.\n\nWhen large piece of text is loaded to the terminal at the same time, for example, 10000 characters of more, NVDA is hanging up. Also, after a moment, the system hangs up.\nThe only way is to wait.\nNVDA is not reading the text, it's reading pieces of text, then a moment of silence, different piece, silence...\n\nFor example, I can call this in ruby by writing\n\nfor i in 1..100000\nprint(\"A fragment number #{i.to_s} \")\nend\n\nAlso, we can find this error, when we'll using in terminal app, which writes big pieces of text.\nIn console commands, like tree, we won't observe this eror, because it isn't loading of text at the same time, there's a while between printing new files.\n\nWhat is interesting...\nThe problem is hanging up the all system, you can not open task manager or other apps.\n\nThank you for help\nGreetings,\nDawid Pieper\n\n", "before_files": [{"content": "#winInputHook.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2008 NVDA Contributors <http://www.nvda-project.org/>\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\nimport threading\r\nimport comtypes.client\r\nimport time\r\nfrom ctypes import *\r\nfrom ctypes.wintypes import *\r\nfrom win32con import WM_QUIT, HC_ACTION, WH_KEYBOARD_LL, LLKHF_UP, LLKHF_EXTENDED, LLKHF_INJECTED, WH_MOUSE_LL, LLMHF_INJECTED\r\n\r\nclass KBDLLHOOKSTRUCT(Structure):\r\n\t_fields_=[\r\n\t\t('vkCode',DWORD),\r\n\t\t('scanCode',DWORD),\r\n\t\t('flags',DWORD),\r\n\t\t('time',DWORD),\r\n\t\t('dwExtraInfo',DWORD),\r\n\t]\r\n\r\nclass MSLLHOOKSTRUCT(Structure):\r\n\t_fields_=[\r\n\t\t('pt',POINT),\r\n\t\t('mouseData',DWORD),\r\n\t\t('flags',DWORD),\r\n\t\t('time',DWORD),\r\n\t\t('dwExtraInfo',DWORD),\r\n\t]\r\n\r\nkeyDownCallback=None\r\nkeyUpCallback=None\r\nmouseCallback=None\r\n\r\n@WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)\r\ndef keyboardHook(code,wParam,lParam):\r\n\tif code!=HC_ACTION:\r\n\t\treturn windll.user32.CallNextHookEx(0,code,wParam,lParam)\r\n\tkbd=KBDLLHOOKSTRUCT.from_address(lParam)\r\n\tif keyUpCallback and kbd.flags&LLKHF_UP:\r\n\t\tif not keyUpCallback(kbd.vkCode,kbd.scanCode,bool(kbd.flags&LLKHF_EXTENDED),bool(kbd.flags&LLKHF_INJECTED)):\r\n\t\t\treturn 
1\r\n\telif keyDownCallback:\r\n\t\tif not keyDownCallback(kbd.vkCode,kbd.scanCode,bool(kbd.flags&LLKHF_EXTENDED),bool(kbd.flags&LLKHF_INJECTED)):\r\n\t\t\treturn 1\r\n\treturn windll.user32.CallNextHookEx(0,code,wParam,lParam)\r\n\r\n@WINFUNCTYPE(c_long,c_int,WPARAM,LPARAM)\r\ndef mouseHook(code,wParam,lParam):\r\n\tif code!=HC_ACTION:\r\n\t\treturn windll.user32.CallNextHookEx(0,code,wParam,lParam)\r\n\tmsll=MSLLHOOKSTRUCT.from_address(lParam)\r\n\tif mouseCallback:\r\n\t\tif not mouseCallback(wParam,msll.pt.x,msll.pt.y,msll.flags&LLMHF_INJECTED):\r\n\t\t\treturn 1\r\n\treturn windll.user32.CallNextHookEx(0,code,wParam,lParam)\r\n\r\nhookThread=None\r\nhookThreadRefCount=0\r\n\r\ndef hookThreadFunc():\r\n\tkeyHookID=windll.user32.SetWindowsHookExW(WH_KEYBOARD_LL,keyboardHook,windll.kernel32.GetModuleHandleW(None),0)\r\n\tif keyHookID==0:\r\n\t\traise OSError(\"Could not register keyboard hook\")\r\n\tmouseHookID=windll.user32.SetWindowsHookExW(WH_MOUSE_LL,mouseHook,windll.kernel32.GetModuleHandleW(None),0)\r\n\tif mouseHookID==0:\r\n\t\traise OSError(\"Could not register mouse hook\")\r\n\tmsg=MSG()\r\n\twhile windll.user32.GetMessageW(byref(msg),None,0,0):\r\n\t\tpass\r\n\tif windll.user32.UnhookWindowsHookEx(keyHookID)==0:\r\n\t\traise OSError(\"could not unregister key hook %s\"%keyHookID)\r\n\tif windll.user32.UnhookWindowsHookEx(mouseHookID)==0:\r\n\t\traise OSError(\"could not unregister mouse hook %s\"%mouseHookID)\r\n\r\ndef initialize():\r\n\tglobal hookThread, hookThreadRefCount\r\n\thookThreadRefCount+=1\r\n\tif hookThreadRefCount==1:\r\n\t\thookThread=threading.Thread(target=hookThreadFunc)\r\n\t\thookThread.start()\r\n\r\ndef setCallbacks(keyUp=None,keyDown=None,mouse=None):\r\n\tglobal keyUpCallback, keyDownCallback, mouseCallback\r\n\tif keyUp:\r\n\t\tkeyUpCallback=keyUp\r\n\tif keyDown:\r\n\t\tkeyDownCallback=keyDown\r\n\tif mouse:\r\n\t\tmouseCallback=mouse\r\n\r\ndef terminate():\r\n\tglobal hookThread, hookThreadRefCount\r\n\tif not hookThread:\r\n\t\traise RuntimeError(\"winInputHook not running\")\r\n\thookThreadRefCount-=1\r\n\tif hookThreadRefCount==0:\r\n\t\twindll.user32.PostThreadMessageW(hookThread.ident,WM_QUIT,0,0)\r\n\t\thookThread.join()\r\n\t\thookThread=None\r\n", "path": "source/winInputHook.py"}]} | 1,965 | 315 |
gh_patches_debug_3947 | rasdani/github-patches | git_diff | openai__gym-558 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Box2d won't find some RAND_LIMIT_swigconstant
Hello!
It's probably some silly mistake on my side, but i wasn't able to fix by random lever pulling, as usual.
Installing Box2d as in instuctions (using `pip install -e .[all]`) will throw error when trying to use some of Box2D examples.
Code that reproduces the issue:
```
import gym
atari = gym.make('LunarLander-v0')
atari.reset()
```
```
[2016-05-16 02:14:25,430] Making new env: LunarLander-v0
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-f89e78f4410b> in <module>()
1 import gym
----> 2 atari = gym.make('LunarLander-v0')
3 atari.reset()
4 #plt.imshow(atari.render('rgb_array'))
/home/jheuristic/yozhik/gym/gym/envs/registration.pyc in make(self, id)
77 logger.info('Making new env: %s', id)
78 spec = self.spec(id)
---> 79 return spec.make()
80
81 def all(self):
/home/jheuristic/yozhik/gym/gym/envs/registration.pyc in make(self)
52 raise error.Error('Attempting to make deprecated env {}. (HINT: is there a newer registered version of this env?)'.format(self.id))
53
---> 54 cls = load(self._entry_point)
55 env = cls(**self._kwargs)
56
/home/jheuristic/yozhik/gym/gym/envs/registration.pyc in load(name)
11 def load(name):
12 entry_point = pkg_resources.EntryPoint.parse('x={}'.format(name))
---> 13 result = entry_point.load(False)
14 return result
15
/home/jheuristic/thenv/local/lib/python2.7/site-packages/pkg_resources/__init__.pyc in load(self, require, *args, **kwargs)
2378 if require:
2379 self.require(*args, **kwargs)
-> 2380 return self.resolve()
2381
2382 def resolve(self):
/home/jheuristic/thenv/local/lib/python2.7/site-packages/pkg_resources/__init__.pyc in resolve(self)
2384 Resolve the entry point from its module and attrs.
2385 """
-> 2386 module = __import__(self.module_name, fromlist=['__name__'], level=0)
2387 try:
2388 return functools.reduce(getattr, self.attrs, module)
/home/jheuristic/yozhik/gym/gym/envs/box2d/__init__.py in <module>()
----> 1 from gym.envs.box2d.lunar_lander import LunarLander
2 from gym.envs.box2d.bipedal_walker import BipedalWalker, BipedalWalkerHardcore
/home/jheuristic/yozhik/gym/gym/envs/box2d/lunar_lander.py in <module>()
3 from six.moves import xrange
4
----> 5 import Box2D
6 from Box2D.b2 import (edgeShape, circleShape, fixtureDef, polygonShape, revoluteJointDef, contactListener)
7
/home/jheuristic/thenv/local/lib/python2.7/site-packages/Box2D/__init__.py in <module>()
18 # 3. This notice may not be removed or altered from any source distribution.
19 #
---> 20 from .Box2D import *
21 __author__ = '$Date$'
22 __version__ = '2.3.1'
/home/jheuristic/thenv/local/lib/python2.7/site-packages/Box2D/Box2D.py in <module>()
433 return _Box2D.b2CheckPolygon(shape, additional_checks)
434
--> 435 _Box2D.RAND_LIMIT_swigconstant(_Box2D)
436 RAND_LIMIT = _Box2D.RAND_LIMIT
437
AttributeError: 'module' object has no attribute 'RAND_LIMIT_swigconstant'
```
What didn't help:
```
pip uninstall gym
apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl
git clone https://github.com/openai/gym
cd gym
pip install -e .[all] --upgrade
```
The OS is Ubuntu 14.04 Server x64
It may be a clue that i am running the thing from inside python2 virtualenv (with all numpys, etc. installed)
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 import sys, os.path
3
4 # Don't import gym module here, since deps may not be installed
5 sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'gym'))
6 from version import VERSION
7
8 # Environment-specific dependencies.
9 extras = {
10 'atari': ['atari_py>=0.0.21', 'Pillow', 'PyOpenGL'],
11 'board_game' : ['pachi-py>=0.0.19'],
12 'box2d': ['box2d-py'],
13 'classic_control': ['PyOpenGL'],
14 'mujoco': ['mujoco_py>=0.4.3', 'imageio'],
15 'parameter_tuning': ['keras', 'theano'],
16 }
17
18 # Meta dependency groups.
19 all_deps = []
20 for group_name in extras:
21 all_deps += extras[group_name]
22 extras['all'] = all_deps
23
24 setup(name='gym',
25 version=VERSION,
26 description='The OpenAI Gym: A toolkit for developing and comparing your reinforcement learning agents.',
27 url='https://github.com/openai/gym',
28 author='OpenAI',
29 author_email='[email protected]',
30 license='',
31 packages=[package for package in find_packages()
32 if package.startswith('gym')],
33 zip_safe=False,
34 install_requires=[
35 'numpy>=1.10.4', 'requests>=2.0', 'six', 'pyglet>=1.2.0',
36 ],
37 extras_require=extras,
38 package_data={'gym': ['envs/mujoco/assets/*.xml', 'envs/classic_control/assets/*.png']},
39 tests_require=['pytest', 'mock'],
40 )
41
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
extras = {
'atari': ['atari_py>=0.0.21', 'Pillow', 'PyOpenGL'],
'board_game' : ['pachi-py>=0.0.19'],
- 'box2d': ['box2d-py'],
+ 'box2d': ['Box2D-kengz'],
'classic_control': ['PyOpenGL'],
'mujoco': ['mujoco_py>=0.4.3', 'imageio'],
'parameter_tuning': ['keras', 'theano'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n extras = {\n 'atari': ['atari_py>=0.0.21', 'Pillow', 'PyOpenGL'],\n 'board_game' : ['pachi-py>=0.0.19'],\n- 'box2d': ['box2d-py'],\n+ 'box2d': ['Box2D-kengz'],\n 'classic_control': ['PyOpenGL'],\n 'mujoco': ['mujoco_py>=0.4.3', 'imageio'],\n 'parameter_tuning': ['keras', 'theano'],\n", "issue": "Box2d won't find some RAND_LIMIT_swigconstant\nHello!\n\nIt's probably some silly mistake on my side, but i wasn't able to fix by random lever pulling, as usual.\n\nInstalling Box2d as in instuctions (using `pip install -e .[all]`) will throw error when trying to use some of Box2D examples.\n\nCode that reproduces the issue:\n\n```\nimport gym\natari = gym.make('LunarLander-v0')\natari.reset()\n```\n\n```\n[2016-05-16 02:14:25,430] Making new env: LunarLander-v0\n\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-1-f89e78f4410b> in <module>()\n 1 import gym\n----> 2 atari = gym.make('LunarLander-v0')\n 3 atari.reset()\n 4 #plt.imshow(atari.render('rgb_array'))\n\n/home/jheuristic/yozhik/gym/gym/envs/registration.pyc in make(self, id)\n 77 logger.info('Making new env: %s', id)\n 78 spec = self.spec(id)\n---> 79 return spec.make()\n 80 \n 81 def all(self):\n\n/home/jheuristic/yozhik/gym/gym/envs/registration.pyc in make(self)\n 52 raise error.Error('Attempting to make deprecated env {}. (HINT: is there a newer registered version of this env?)'.format(self.id))\n 53 \n---> 54 cls = load(self._entry_point)\n 55 env = cls(**self._kwargs)\n 56 \n\n/home/jheuristic/yozhik/gym/gym/envs/registration.pyc in load(name)\n 11 def load(name):\n 12 entry_point = pkg_resources.EntryPoint.parse('x={}'.format(name))\n---> 13 result = entry_point.load(False)\n 14 return result\n 15 \n\n/home/jheuristic/thenv/local/lib/python2.7/site-packages/pkg_resources/__init__.pyc in load(self, require, *args, **kwargs)\n 2378 if require:\n 2379 self.require(*args, **kwargs)\n-> 2380 return self.resolve()\n 2381 \n 2382 def resolve(self):\n\n/home/jheuristic/thenv/local/lib/python2.7/site-packages/pkg_resources/__init__.pyc in resolve(self)\n 2384 Resolve the entry point from its module and attrs.\n 2385 \"\"\"\n-> 2386 module = __import__(self.module_name, fromlist=['__name__'], level=0)\n 2387 try:\n 2388 return functools.reduce(getattr, self.attrs, module)\n\n/home/jheuristic/yozhik/gym/gym/envs/box2d/__init__.py in <module>()\n----> 1 from gym.envs.box2d.lunar_lander import LunarLander\n 2 from gym.envs.box2d.bipedal_walker import BipedalWalker, BipedalWalkerHardcore\n\n/home/jheuristic/yozhik/gym/gym/envs/box2d/lunar_lander.py in <module>()\n 3 from six.moves import xrange\n 4 \n----> 5 import Box2D\n 6 from Box2D.b2 import (edgeShape, circleShape, fixtureDef, polygonShape, revoluteJointDef, contactListener)\n 7 \n\n/home/jheuristic/thenv/local/lib/python2.7/site-packages/Box2D/__init__.py in <module>()\n 18 # 3. 
This notice may not be removed or altered from any source distribution.\n 19 #\n---> 20 from .Box2D import *\n 21 __author__ = '$Date$'\n 22 __version__ = '2.3.1'\n\n/home/jheuristic/thenv/local/lib/python2.7/site-packages/Box2D/Box2D.py in <module>()\n 433 return _Box2D.b2CheckPolygon(shape, additional_checks)\n 434 \n--> 435 _Box2D.RAND_LIMIT_swigconstant(_Box2D)\n 436 RAND_LIMIT = _Box2D.RAND_LIMIT\n 437 \n\nAttributeError: 'module' object has no attribute 'RAND_LIMIT_swigconstant'\n\n```\n\nWhat didn't help:\n\n```\npip uninstall gym\napt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl\ngit clone https://github.com/openai/gym\ncd gym\npip install -e .[all] --upgrade\n```\n\nThe OS is Ubuntu 14.04 Server x64\nIt may be a clue that i am running the thing from inside python2 virtualenv (with all numpys, etc. installed)\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport sys, os.path\n\n# Don't import gym module here, since deps may not be installed\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), 'gym'))\nfrom version import VERSION\n\n# Environment-specific dependencies.\nextras = {\n 'atari': ['atari_py>=0.0.21', 'Pillow', 'PyOpenGL'],\n 'board_game' : ['pachi-py>=0.0.19'],\n 'box2d': ['box2d-py'],\n 'classic_control': ['PyOpenGL'],\n 'mujoco': ['mujoco_py>=0.4.3', 'imageio'],\n 'parameter_tuning': ['keras', 'theano'],\n}\n\n# Meta dependency groups.\nall_deps = []\nfor group_name in extras:\n all_deps += extras[group_name]\nextras['all'] = all_deps\n\nsetup(name='gym',\n version=VERSION,\n description='The OpenAI Gym: A toolkit for developing and comparing your reinforcement learning agents.',\n url='https://github.com/openai/gym',\n author='OpenAI',\n author_email='[email protected]',\n license='',\n packages=[package for package in find_packages()\n if package.startswith('gym')],\n zip_safe=False,\n install_requires=[\n 'numpy>=1.10.4', 'requests>=2.0', 'six', 'pyglet>=1.2.0',\n ],\n extras_require=extras,\n package_data={'gym': ['envs/mujoco/assets/*.xml', 'envs/classic_control/assets/*.png']},\n tests_require=['pytest', 'mock'],\n)\n", "path": "setup.py"}]} | 2,098 | 151 |
gh_patches_debug_33666 | rasdani/github-patches | git_diff | google__fuzzbench-776 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move/publish reports of experimental experiments under fuzzbench.com/reports/experimental
Experimental experiments are experiments with fuzzers not in this list:
https://github.com/google/fuzzbench/blob/master/service/core-fuzzers.yaml
</issue>
<code>
[start of experiment/reporter.py]
1 #!/usr/bin/env python3
2 # Copyright 2020 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """A module containing the interface used by an experiment for generating
16 reports."""
17 import os
18 import posixpath
19
20 from common import experiment_utils
21 from common import experiment_path as exp_path
22 from common import filesystem
23 from common import filestore_utils
24 from common import logs
25 from common import yaml_utils
26 from analysis import generate_report
27 from analysis import data_utils
28
29 CORE_FUZZERS_YAML = os.path.join(os.path.dirname(__file__), '..', 'service',
30 'core-fuzzers.yaml')
31
32 logger = logs.Logger('reporter') # pylint: disable=invalid-name
33
34
35 def get_reports_dir():
36 """Return reports directory."""
37 return exp_path.path('reports')
38
39
40 def output_report(experiment_config: dict,
41 in_progress=False,
42 coverage_report=False):
43 """Generate the HTML report and write it to |web_bucket|."""
44 experiment_name = experiment_utils.get_experiment_name()
45 web_filestore_path = posixpath.join(experiment_config['report_filestore'],
46 experiment_name)
47
48 reports_dir = get_reports_dir()
49
50 core_fuzzers = yaml_utils.read(CORE_FUZZERS_YAML)['fuzzers']
51 fuzzers = sorted(set(experiment_config['fuzzers']).union(set(core_fuzzers)))
52
53 # Don't merge with nonprivate experiments until the very end as doing it
54 # while the experiment is in progress will produce unusable realtime
55 # results.
56 merge_with_nonprivate = (not in_progress and experiment_config.get(
57 'merge_with_nonprivate', False))
58
59 try:
60 logger.debug('Generating report.')
61 filesystem.recreate_directory(reports_dir)
62 generate_report.generate_report(
63 [experiment_name],
64 str(reports_dir),
65 report_name=experiment_name,
66 fuzzers=fuzzers,
67 in_progress=in_progress,
68 merge_with_clobber_nonprivate=merge_with_nonprivate,
69 coverage_report=coverage_report)
70 filestore_utils.rsync(
71 str(reports_dir),
72 web_filestore_path,
73 delete=False, # Don't remove existing coverage jsons.
74 gsutil_options=[
75 '-h', 'Cache-Control:public,max-age=0,no-transform'
76 ])
77 logger.debug('Done generating report.')
78 except data_utils.EmptyDataError:
79 logs.warning('No snapshot data.')
80 except Exception: # pylint: disable=broad-except
81 logger.error('Error generating HTML report.')
82
[end of experiment/reporter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/experiment/reporter.py b/experiment/reporter.py
--- a/experiment/reporter.py
+++ b/experiment/reporter.py
@@ -26,8 +26,9 @@
from analysis import generate_report
from analysis import data_utils
-CORE_FUZZERS_YAML = os.path.join(os.path.dirname(__file__), '..', 'service',
- 'core-fuzzers.yaml')
+CORE_FUZZERS_YAML = os.path.abspath(
+ os.path.join(os.path.dirname(__file__), '..', 'service',
+ 'core-fuzzers.yaml'))
logger = logs.Logger('reporter') # pylint: disable=invalid-name
@@ -37,18 +38,29 @@
return exp_path.path('reports')
+def get_core_fuzzers():
+ """Return list of core fuzzers to be used for merging experiment data."""
+ return yaml_utils.read(CORE_FUZZERS_YAML)['fuzzers']
+
+
def output_report(experiment_config: dict,
in_progress=False,
coverage_report=False):
"""Generate the HTML report and write it to |web_bucket|."""
experiment_name = experiment_utils.get_experiment_name()
- web_filestore_path = posixpath.join(experiment_config['report_filestore'],
- experiment_name)
-
reports_dir = get_reports_dir()
- core_fuzzers = yaml_utils.read(CORE_FUZZERS_YAML)['fuzzers']
- fuzzers = sorted(set(experiment_config['fuzzers']).union(set(core_fuzzers)))
+ core_fuzzers = set(get_core_fuzzers())
+ experiment_fuzzers = set(experiment_config['fuzzers'])
+ fuzzers = experiment_fuzzers.union(core_fuzzers)
+
+ # Calculate path to store report files in filestore.
+ web_filestore_path = experiment_config['report_filestore']
+ if not fuzzers.issubset(core_fuzzers):
+ # This means that we are running an experimental report with fuzzers
+ # not in the core list. So, store these in |experimental| sub-directory.
+ web_filestore_path = os.path.join(web_filestore_path, 'experimental')
+ web_filestore_path = posixpath.join(web_filestore_path, experiment_name)
# Don't merge with nonprivate experiments until the very end as doing it
# while the experiment is in progress will produce unusable realtime
| {"golden_diff": "diff --git a/experiment/reporter.py b/experiment/reporter.py\n--- a/experiment/reporter.py\n+++ b/experiment/reporter.py\n@@ -26,8 +26,9 @@\n from analysis import generate_report\n from analysis import data_utils\n \n-CORE_FUZZERS_YAML = os.path.join(os.path.dirname(__file__), '..', 'service',\n- 'core-fuzzers.yaml')\n+CORE_FUZZERS_YAML = os.path.abspath(\n+ os.path.join(os.path.dirname(__file__), '..', 'service',\n+ 'core-fuzzers.yaml'))\n \n logger = logs.Logger('reporter') # pylint: disable=invalid-name\n \n@@ -37,18 +38,29 @@\n return exp_path.path('reports')\n \n \n+def get_core_fuzzers():\n+ \"\"\"Return list of core fuzzers to be used for merging experiment data.\"\"\"\n+ return yaml_utils.read(CORE_FUZZERS_YAML)['fuzzers']\n+\n+\n def output_report(experiment_config: dict,\n in_progress=False,\n coverage_report=False):\n \"\"\"Generate the HTML report and write it to |web_bucket|.\"\"\"\n experiment_name = experiment_utils.get_experiment_name()\n- web_filestore_path = posixpath.join(experiment_config['report_filestore'],\n- experiment_name)\n-\n reports_dir = get_reports_dir()\n \n- core_fuzzers = yaml_utils.read(CORE_FUZZERS_YAML)['fuzzers']\n- fuzzers = sorted(set(experiment_config['fuzzers']).union(set(core_fuzzers)))\n+ core_fuzzers = set(get_core_fuzzers())\n+ experiment_fuzzers = set(experiment_config['fuzzers'])\n+ fuzzers = experiment_fuzzers.union(core_fuzzers)\n+\n+ # Calculate path to store report files in filestore.\n+ web_filestore_path = experiment_config['report_filestore']\n+ if not fuzzers.issubset(core_fuzzers):\n+ # This means that we are running an experimental report with fuzzers\n+ # not in the core list. So, store these in |experimental| sub-directory.\n+ web_filestore_path = os.path.join(web_filestore_path, 'experimental')\n+ web_filestore_path = posixpath.join(web_filestore_path, experiment_name)\n \n # Don't merge with nonprivate experiments until the very end as doing it\n # while the experiment is in progress will produce unusable realtime\n", "issue": "Move/publish reports of experimental experiments under fuzzbench.com/reports/experimental\nExperimental experiments are experiments with fuzzers not in this list:\r\nhttps://github.com/google/fuzzbench/blob/master/service/core-fuzzers.yaml\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A module containing the interface used by an experiment for generating\nreports.\"\"\"\nimport os\nimport posixpath\n\nfrom common import experiment_utils\nfrom common import experiment_path as exp_path\nfrom common import filesystem\nfrom common import filestore_utils\nfrom common import logs\nfrom common import yaml_utils\nfrom analysis import generate_report\nfrom analysis import data_utils\n\nCORE_FUZZERS_YAML = os.path.join(os.path.dirname(__file__), '..', 'service',\n 'core-fuzzers.yaml')\n\nlogger = logs.Logger('reporter') # pylint: disable=invalid-name\n\n\ndef get_reports_dir():\n \"\"\"Return reports 
directory.\"\"\"\n return exp_path.path('reports')\n\n\ndef output_report(experiment_config: dict,\n in_progress=False,\n coverage_report=False):\n \"\"\"Generate the HTML report and write it to |web_bucket|.\"\"\"\n experiment_name = experiment_utils.get_experiment_name()\n web_filestore_path = posixpath.join(experiment_config['report_filestore'],\n experiment_name)\n\n reports_dir = get_reports_dir()\n\n core_fuzzers = yaml_utils.read(CORE_FUZZERS_YAML)['fuzzers']\n fuzzers = sorted(set(experiment_config['fuzzers']).union(set(core_fuzzers)))\n\n # Don't merge with nonprivate experiments until the very end as doing it\n # while the experiment is in progress will produce unusable realtime\n # results.\n merge_with_nonprivate = (not in_progress and experiment_config.get(\n 'merge_with_nonprivate', False))\n\n try:\n logger.debug('Generating report.')\n filesystem.recreate_directory(reports_dir)\n generate_report.generate_report(\n [experiment_name],\n str(reports_dir),\n report_name=experiment_name,\n fuzzers=fuzzers,\n in_progress=in_progress,\n merge_with_clobber_nonprivate=merge_with_nonprivate,\n coverage_report=coverage_report)\n filestore_utils.rsync(\n str(reports_dir),\n web_filestore_path,\n delete=False, # Don't remove existing coverage jsons.\n gsutil_options=[\n '-h', 'Cache-Control:public,max-age=0,no-transform'\n ])\n logger.debug('Done generating report.')\n except data_utils.EmptyDataError:\n logs.warning('No snapshot data.')\n except Exception: # pylint: disable=broad-except\n logger.error('Error generating HTML report.')\n", "path": "experiment/reporter.py"}]} | 1,373 | 525 |
gh_patches_debug_8982 | rasdani/github-patches | git_diff | scrapy__scrapy-4778 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation example fails with `proxy URL with no authority`
Running the [example](https://doc.scrapy.org/en/1.5/intro/overview.html#walk-through-of-an-example-spider) from the documentation yields this:
```
10:11 $ scrapy runspider quotes.py
2018-07-11 10:12:04 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: scrapybot)
2018-07-11 10:12:04 [scrapy.utils.log] INFO: Versions: lxml 3.5.0.0, libxml2 2.9.3, cssselect 0.9.1, parsel 1.5.0, w3lib 1.19.0, Twisted 16.0.0, Python 2.7.12 (default, Dec 4 2017, 14:50:18) - [GCC 5.4.0 20160609], pyOpenSSL 0.15.1 (OpenSSL 1.0.2g 1 Mar 2016), cryptography 1.2.3, Platform Linux-4.4.0-130-generic-x86_64-with-Ubuntu-16.04-xenial
2018-07-11 10:12:04 [scrapy.crawler] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
2018-07-11 10:12:04 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
Unhandled error in Deferred:
2018-07-11 10:12:04 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/runspider.py", line 88, in run
self.crawler_process.crawl(spidercls, **opts.spargs)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 171, in crawl
return self._crawl(crawler, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 175, in _crawl
d = crawler.crawl(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 98, in crawl
six.reraise(*exc_info)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 80, in crawl
self.engine = self._create_engine()
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 105, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 69, in __init__
self.downloader = downloader_cls(crawler)
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 36, in from_settings
mw = mwcls.from_crawler(crawler)
File "/usr/local/lib/python2.7/dist-packages/scrapy/downloadermiddlewares/httpproxy.py", line 29, in from_crawler
return cls(auth_encoding)
File "/usr/local/lib/python2.7/dist-packages/scrapy/downloadermiddlewares/httpproxy.py", line 22, in __init__
self.proxies[type] = self._get_proxy(url, type)
File "/usr/local/lib/python2.7/dist-packages/scrapy/downloadermiddlewares/httpproxy.py", line 39, in _get_proxy
proxy_type, user, password, hostport = _parse_proxy(url)
File "/usr/lib/python2.7/urllib2.py", line 721, in _parse_proxy
raise ValueError("proxy URL with no authority: %r" % proxy)
exceptions.ValueError: proxy URL with no authority: '/var/run/docker.sock'
2018-07-11 10:12:04 [twisted] CRITICAL:
```
Looks like proxy code does not handle `no_proxy` correctly.
</issue>
<code>
[start of scrapy/downloadermiddlewares/httpproxy.py]
1 import base64
2 from urllib.parse import unquote, urlunparse
3 from urllib.request import getproxies, proxy_bypass, _parse_proxy
4
5 from scrapy.exceptions import NotConfigured
6 from scrapy.utils.httpobj import urlparse_cached
7 from scrapy.utils.python import to_bytes
8
9
10 class HttpProxyMiddleware:
11
12 def __init__(self, auth_encoding='latin-1'):
13 self.auth_encoding = auth_encoding
14 self.proxies = {}
15 for type_, url in getproxies().items():
16 self.proxies[type_] = self._get_proxy(url, type_)
17
18 @classmethod
19 def from_crawler(cls, crawler):
20 if not crawler.settings.getbool('HTTPPROXY_ENABLED'):
21 raise NotConfigured
22 auth_encoding = crawler.settings.get('HTTPPROXY_AUTH_ENCODING')
23 return cls(auth_encoding)
24
25 def _basic_auth_header(self, username, password):
26 user_pass = to_bytes(
27 f'{unquote(username)}:{unquote(password)}',
28 encoding=self.auth_encoding)
29 return base64.b64encode(user_pass)
30
31 def _get_proxy(self, url, orig_type):
32 proxy_type, user, password, hostport = _parse_proxy(url)
33 proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))
34
35 if user:
36 creds = self._basic_auth_header(user, password)
37 else:
38 creds = None
39
40 return creds, proxy_url
41
42 def process_request(self, request, spider):
43 # ignore if proxy is already set
44 if 'proxy' in request.meta:
45 if request.meta['proxy'] is None:
46 return
47 # extract credentials if present
48 creds, proxy_url = self._get_proxy(request.meta['proxy'], '')
49 request.meta['proxy'] = proxy_url
50 if creds and not request.headers.get('Proxy-Authorization'):
51 request.headers['Proxy-Authorization'] = b'Basic ' + creds
52 return
53 elif not self.proxies:
54 return
55
56 parsed = urlparse_cached(request)
57 scheme = parsed.scheme
58
59 # 'no_proxy' is only supported by http schemes
60 if scheme in ('http', 'https') and proxy_bypass(parsed.hostname):
61 return
62
63 if scheme in self.proxies:
64 self._set_proxy(request, scheme)
65
66 def _set_proxy(self, request, scheme):
67 creds, proxy = self.proxies[scheme]
68 request.meta['proxy'] = proxy
69 if creds:
70 request.headers['Proxy-Authorization'] = b'Basic ' + creds
71
[end of scrapy/downloadermiddlewares/httpproxy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/downloadermiddlewares/httpproxy.py b/scrapy/downloadermiddlewares/httpproxy.py
--- a/scrapy/downloadermiddlewares/httpproxy.py
+++ b/scrapy/downloadermiddlewares/httpproxy.py
@@ -13,7 +13,12 @@
self.auth_encoding = auth_encoding
self.proxies = {}
for type_, url in getproxies().items():
- self.proxies[type_] = self._get_proxy(url, type_)
+ try:
+ self.proxies[type_] = self._get_proxy(url, type_)
+ # some values such as '/var/run/docker.sock' can't be parsed
+ # by _parse_proxy and as such should be skipped
+ except ValueError:
+ continue
@classmethod
def from_crawler(cls, crawler):
| {"golden_diff": "diff --git a/scrapy/downloadermiddlewares/httpproxy.py b/scrapy/downloadermiddlewares/httpproxy.py\n--- a/scrapy/downloadermiddlewares/httpproxy.py\n+++ b/scrapy/downloadermiddlewares/httpproxy.py\n@@ -13,7 +13,12 @@\n self.auth_encoding = auth_encoding\n self.proxies = {}\n for type_, url in getproxies().items():\n- self.proxies[type_] = self._get_proxy(url, type_)\n+ try:\n+ self.proxies[type_] = self._get_proxy(url, type_)\n+ # some values such as '/var/run/docker.sock' can't be parsed\n+ # by _parse_proxy and as such should be skipped\n+ except ValueError:\n+ continue\n \n @classmethod\n def from_crawler(cls, crawler):\n", "issue": "Documentation example fails with `proxy URL with no authority`\nRunning the [example](https://doc.scrapy.org/en/1.5/intro/overview.html#walk-through-of-an-example-spider) from the documentation yields this:\r\n```\r\n10:11 $ scrapy runspider quotes.py \r\n2018-07-11 10:12:04 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: scrapybot)\r\n2018-07-11 10:12:04 [scrapy.utils.log] INFO: Versions: lxml 3.5.0.0, libxml2 2.9.3, cssselect 0.9.1, parsel 1.5.0, w3lib 1.19.0, Twisted 16.0.0, Python 2.7.12 (default, Dec 4 2017, 14:50:18) - [GCC 5.4.0 20160609], pyOpenSSL 0.15.1 (OpenSSL 1.0.2g 1 Mar 2016), cryptography 1.2.3, Platform Linux-4.4.0-130-generic-x86_64-with-Ubuntu-16.04-xenial\r\n2018-07-11 10:12:04 [scrapy.crawler] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}\r\n2018-07-11 10:12:04 [scrapy.middleware] INFO: Enabled extensions:\r\n['scrapy.extensions.memusage.MemoryUsage',\r\n 'scrapy.extensions.logstats.LogStats',\r\n 'scrapy.extensions.telnet.TelnetConsole',\r\n 'scrapy.extensions.corestats.CoreStats']\r\nUnhandled error in Deferred:\r\n2018-07-11 10:12:04 [twisted] CRITICAL: Unhandled error in Deferred:\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/commands/runspider.py\", line 88, in run\r\n self.crawler_process.crawl(spidercls, **opts.spargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 171, in crawl\r\n return self._crawl(crawler, *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 175, in _crawl\r\n d = crawler.crawl(*args, **kwargs)\r\n File \"/usr/lib/python2.7/dist-packages/twisted/internet/defer.py\", line 1274, in unwindGenerator\r\n return _inlineCallbacks(None, gen, Deferred())\r\n--- <exception caught here> ---\r\n File \"/usr/lib/python2.7/dist-packages/twisted/internet/defer.py\", line 1128, in _inlineCallbacks\r\n result = g.send(result)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 98, in crawl\r\n six.reraise(*exc_info)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 80, in crawl\r\n self.engine = self._create_engine()\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 105, in _create_engine\r\n return ExecutionEngine(self, lambda _: self.stop())\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py\", line 69, in __init__\r\n self.downloader = downloader_cls(crawler)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/__init__.py\", line 88, in __init__\r\n self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py\", line 58, in from_crawler\r\n return cls.from_settings(crawler.settings, crawler)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py\", line 
36, in from_settings\r\n mw = mwcls.from_crawler(crawler)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/downloadermiddlewares/httpproxy.py\", line 29, in from_crawler\r\n return cls(auth_encoding)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/downloadermiddlewares/httpproxy.py\", line 22, in __init__\r\n self.proxies[type] = self._get_proxy(url, type)\r\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/downloadermiddlewares/httpproxy.py\", line 39, in _get_proxy\r\n proxy_type, user, password, hostport = _parse_proxy(url)\r\n File \"/usr/lib/python2.7/urllib2.py\", line 721, in _parse_proxy\r\n raise ValueError(\"proxy URL with no authority: %r\" % proxy)\r\nexceptions.ValueError: proxy URL with no authority: '/var/run/docker.sock'\r\n2018-07-11 10:12:04 [twisted] CRITICAL:\r\n```\r\nLooks like proxy code does not handle `no_proxy` correctly.\n", "before_files": [{"content": "import base64\nfrom urllib.parse import unquote, urlunparse\nfrom urllib.request import getproxies, proxy_bypass, _parse_proxy\n\nfrom scrapy.exceptions import NotConfigured\nfrom scrapy.utils.httpobj import urlparse_cached\nfrom scrapy.utils.python import to_bytes\n\n\nclass HttpProxyMiddleware:\n\n def __init__(self, auth_encoding='latin-1'):\n self.auth_encoding = auth_encoding\n self.proxies = {}\n for type_, url in getproxies().items():\n self.proxies[type_] = self._get_proxy(url, type_)\n\n @classmethod\n def from_crawler(cls, crawler):\n if not crawler.settings.getbool('HTTPPROXY_ENABLED'):\n raise NotConfigured\n auth_encoding = crawler.settings.get('HTTPPROXY_AUTH_ENCODING')\n return cls(auth_encoding)\n\n def _basic_auth_header(self, username, password):\n user_pass = to_bytes(\n f'{unquote(username)}:{unquote(password)}',\n encoding=self.auth_encoding)\n return base64.b64encode(user_pass)\n\n def _get_proxy(self, url, orig_type):\n proxy_type, user, password, hostport = _parse_proxy(url)\n proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))\n\n if user:\n creds = self._basic_auth_header(user, password)\n else:\n creds = None\n\n return creds, proxy_url\n\n def process_request(self, request, spider):\n # ignore if proxy is already set\n if 'proxy' in request.meta:\n if request.meta['proxy'] is None:\n return\n # extract credentials if present\n creds, proxy_url = self._get_proxy(request.meta['proxy'], '')\n request.meta['proxy'] = proxy_url\n if creds and not request.headers.get('Proxy-Authorization'):\n request.headers['Proxy-Authorization'] = b'Basic ' + creds\n return\n elif not self.proxies:\n return\n\n parsed = urlparse_cached(request)\n scheme = parsed.scheme\n\n # 'no_proxy' is only supported by http schemes\n if scheme in ('http', 'https') and proxy_bypass(parsed.hostname):\n return\n\n if scheme in self.proxies:\n self._set_proxy(request, scheme)\n\n def _set_proxy(self, request, scheme):\n creds, proxy = self.proxies[scheme]\n request.meta['proxy'] = proxy\n if creds:\n request.headers['Proxy-Authorization'] = b'Basic ' + creds\n", "path": "scrapy/downloadermiddlewares/httpproxy.py"}]} | 2,413 | 185 |
gh_patches_debug_31036 | rasdani/github-patches | git_diff | goauthentik__authentik-6325 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docker compose run --rm server ldap_sync - doesn't work anymore to debug
**Describe the bug**
this command doesnt work anymore as described here (https://goauthentik.io/docs/troubleshooting/ldap_source)
```
docker compose run --rm server ldap_sync nxnet
```
it will just create a backgorund taks instead of running it in foreground!
**To Reproduce**
```
docker compose run --rm server ldap_sync SLUGofLDAPsource
```
**Expected behavior**
it should run an LDAP synchronization in the foreground and show any errors or warnings that occur directly
**Logs**
{"event": "Task published", "level": "info", "logger": "authentik.root.celery", "pid": 7, "task_id": "17af668f-1d9f-4732-a0eb-044c4a16beed", "task_name": "authentik.sources.ldap.tasks.ldap_sync", "timestamp": "2023-07-20T15:10:12.575247"}
**Version and Deployment (please complete the following information):**
- authentik version: 2023.6.1
- Deployment: docker compose
</issue>
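For context, here is a minimal sketch (not authentik code; names and values are illustrative) of the difference the report describes: dispatching a Celery task only publishes a message to the broker, while calling the task function directly runs the work in the current process.

```
# Illustrative only: queueing a Celery task vs. running the same work inline.
from celery import Celery

app = Celery("demo")

@app.task
def ldap_sync(source_slug):
    print(f"syncing {source_slug}")

ldap_sync.delay("nxnet")   # publishes the task to the broker and returns immediately
ldap_sync("nxnet")         # runs synchronously, so errors and warnings surface directly
```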
<code>
[start of authentik/sources/ldap/management/commands/ldap_sync.py]
1 """LDAP Sync"""
2 from django.core.management.base import BaseCommand
3 from structlog.stdlib import get_logger
4
5 from authentik.sources.ldap.models import LDAPSource
6 from authentik.sources.ldap.tasks import ldap_sync_single
7
8 LOGGER = get_logger()
9
10
11 class Command(BaseCommand):
12 """Run sync for an LDAP Source"""
13
14 def add_arguments(self, parser):
15 parser.add_argument("source_slugs", nargs="+", type=str)
16
17 def handle(self, **options):
18 for source_slug in options["source_slugs"]:
19 source = LDAPSource.objects.filter(slug=source_slug).first()
20 if not source:
21 LOGGER.warning("Source does not exist", slug=source_slug)
22 continue
23 ldap_sync_single(source)
24
[end of authentik/sources/ldap/management/commands/ldap_sync.py]
[start of authentik/sources/ldap/sync/users.py]
1 """Sync LDAP Users into authentik"""
2 from typing import Generator
3
4 from django.core.exceptions import FieldError
5 from django.db.utils import IntegrityError
6 from ldap3 import ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES, SUBTREE
7
8 from authentik.core.models import User
9 from authentik.events.models import Event, EventAction
10 from authentik.sources.ldap.sync.base import LDAP_UNIQUENESS, BaseLDAPSynchronizer
11 from authentik.sources.ldap.sync.vendor.freeipa import FreeIPA
12 from authentik.sources.ldap.sync.vendor.ms_ad import MicrosoftActiveDirectory
13
14
15 class UserLDAPSynchronizer(BaseLDAPSynchronizer):
16 """Sync LDAP Users into authentik"""
17
18 @staticmethod
19 def name() -> str:
20 return "users"
21
22 def get_objects(self, **kwargs) -> Generator:
23 return self.search_paginator(
24 search_base=self.base_dn_users,
25 search_filter=self._source.user_object_filter,
26 search_scope=SUBTREE,
27 attributes=[ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES],
28 **kwargs,
29 )
30
31 def sync(self, page_data: list) -> int:
32 """Iterate over all LDAP Users and create authentik_core.User instances"""
33 if not self._source.sync_users:
34 self.message("User syncing is disabled for this Source")
35 return -1
36 user_count = 0
37 for user in page_data:
38 if "attributes" not in user:
39 continue
40 attributes = user.get("attributes", {})
41 user_dn = self._flatten(user.get("entryDN", user.get("dn")))
42 if self._source.object_uniqueness_field not in attributes:
43 self.message(
44 f"Cannot find uniqueness field in attributes: '{user_dn}'",
45 attributes=attributes.keys(),
46 dn=user_dn,
47 )
48 continue
49 uniq = self._flatten(attributes[self._source.object_uniqueness_field])
50 try:
51 defaults = self.build_user_properties(user_dn, **attributes)
52 self._logger.debug("Creating user with attributes", **defaults)
53 if "username" not in defaults:
54 raise IntegrityError("Username was not set by propertymappings")
55 ak_user, created = self.update_or_create_attributes(
56 User, {f"attributes__{LDAP_UNIQUENESS}": uniq}, defaults
57 )
58 except (IntegrityError, FieldError, TypeError, AttributeError) as exc:
59 Event.new(
60 EventAction.CONFIGURATION_ERROR,
61 message=(
62 f"Failed to create user: {str(exc)} "
63 "To merge new user with existing user, set the user's "
64 f"Attribute '{LDAP_UNIQUENESS}' to '{uniq}'"
65 ),
66 source=self._source,
67 dn=user_dn,
68 ).save()
69 else:
70 self._logger.debug("Synced User", user=ak_user.username, created=created)
71 user_count += 1
72 MicrosoftActiveDirectory(self._source).sync(attributes, ak_user, created)
73 FreeIPA(self._source).sync(attributes, ak_user, created)
74 return user_count
75
[end of authentik/sources/ldap/sync/users.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/sources/ldap/management/commands/ldap_sync.py b/authentik/sources/ldap/management/commands/ldap_sync.py
--- a/authentik/sources/ldap/management/commands/ldap_sync.py
+++ b/authentik/sources/ldap/management/commands/ldap_sync.py
@@ -3,7 +3,10 @@
from structlog.stdlib import get_logger
from authentik.sources.ldap.models import LDAPSource
-from authentik.sources.ldap.tasks import ldap_sync_single
+from authentik.sources.ldap.sync.groups import GroupLDAPSynchronizer
+from authentik.sources.ldap.sync.membership import MembershipLDAPSynchronizer
+from authentik.sources.ldap.sync.users import UserLDAPSynchronizer
+from authentik.sources.ldap.tasks import ldap_sync_paginator
LOGGER = get_logger()
@@ -20,4 +23,10 @@
if not source:
LOGGER.warning("Source does not exist", slug=source_slug)
continue
- ldap_sync_single(source)
+ tasks = (
+ ldap_sync_paginator(source, UserLDAPSynchronizer)
+ + ldap_sync_paginator(source, GroupLDAPSynchronizer)
+ + ldap_sync_paginator(source, MembershipLDAPSynchronizer)
+ )
+ for task in tasks:
+ task()
diff --git a/authentik/sources/ldap/sync/users.py b/authentik/sources/ldap/sync/users.py
--- a/authentik/sources/ldap/sync/users.py
+++ b/authentik/sources/ldap/sync/users.py
@@ -49,7 +49,7 @@
uniq = self._flatten(attributes[self._source.object_uniqueness_field])
try:
defaults = self.build_user_properties(user_dn, **attributes)
- self._logger.debug("Creating user with attributes", **defaults)
+ self._logger.debug("Writing user with attributes", **defaults)
if "username" not in defaults:
raise IntegrityError("Username was not set by propertymappings")
ak_user, created = self.update_or_create_attributes(
| {"golden_diff": "diff --git a/authentik/sources/ldap/management/commands/ldap_sync.py b/authentik/sources/ldap/management/commands/ldap_sync.py\n--- a/authentik/sources/ldap/management/commands/ldap_sync.py\n+++ b/authentik/sources/ldap/management/commands/ldap_sync.py\n@@ -3,7 +3,10 @@\n from structlog.stdlib import get_logger\n \n from authentik.sources.ldap.models import LDAPSource\n-from authentik.sources.ldap.tasks import ldap_sync_single\n+from authentik.sources.ldap.sync.groups import GroupLDAPSynchronizer\n+from authentik.sources.ldap.sync.membership import MembershipLDAPSynchronizer\n+from authentik.sources.ldap.sync.users import UserLDAPSynchronizer\n+from authentik.sources.ldap.tasks import ldap_sync_paginator\n \n LOGGER = get_logger()\n \n@@ -20,4 +23,10 @@\n if not source:\n LOGGER.warning(\"Source does not exist\", slug=source_slug)\n continue\n- ldap_sync_single(source)\n+ tasks = (\n+ ldap_sync_paginator(source, UserLDAPSynchronizer)\n+ + ldap_sync_paginator(source, GroupLDAPSynchronizer)\n+ + ldap_sync_paginator(source, MembershipLDAPSynchronizer)\n+ )\n+ for task in tasks:\n+ task()\ndiff --git a/authentik/sources/ldap/sync/users.py b/authentik/sources/ldap/sync/users.py\n--- a/authentik/sources/ldap/sync/users.py\n+++ b/authentik/sources/ldap/sync/users.py\n@@ -49,7 +49,7 @@\n uniq = self._flatten(attributes[self._source.object_uniqueness_field])\n try:\n defaults = self.build_user_properties(user_dn, **attributes)\n- self._logger.debug(\"Creating user with attributes\", **defaults)\n+ self._logger.debug(\"Writing user with attributes\", **defaults)\n if \"username\" not in defaults:\n raise IntegrityError(\"Username was not set by propertymappings\")\n ak_user, created = self.update_or_create_attributes(\n", "issue": "docker compose run --rm server ldap_sync - doesn't work anymore to debug\n**Describe the bug**\r\nthis command doesnt work anymore as described here (https://goauthentik.io/docs/troubleshooting/ldap_source)\r\n```\r\ndocker compose run --rm server ldap_sync nxnet\r\n```\r\nit will just create a backgorund taks instead of running it in foreground!\r\n\r\n**To Reproduce**\r\n```\r\ndocker compose run --rm server ldap_sync SLUGofLDAPsource\r\n```\r\n\r\n**Expected behavior**\r\nit will run an LDAP- synchronization in the foreground and see any errors or warnings that might happen directly\r\n\r\n**Logs**\r\n{\"event\": \"Task published\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 7, \"task_id\": \"17af668f-1d9f-4732-a0eb-044c4a16beed\", \"task_name\": \"authentik.sources.ldap.tasks.ldap_sync\", \"timestamp\": \"2023-07-20T15:10:12.575247\"}\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.6.1\r\n- Deployment: docker compose\r\n\r\n\n", "before_files": [{"content": "\"\"\"LDAP Sync\"\"\"\nfrom django.core.management.base import BaseCommand\nfrom structlog.stdlib import get_logger\n\nfrom authentik.sources.ldap.models import LDAPSource\nfrom authentik.sources.ldap.tasks import ldap_sync_single\n\nLOGGER = get_logger()\n\n\nclass Command(BaseCommand):\n \"\"\"Run sync for an LDAP Source\"\"\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"source_slugs\", nargs=\"+\", type=str)\n\n def handle(self, **options):\n for source_slug in options[\"source_slugs\"]:\n source = LDAPSource.objects.filter(slug=source_slug).first()\n if not source:\n LOGGER.warning(\"Source does not exist\", slug=source_slug)\n continue\n ldap_sync_single(source)\n", "path": 
"authentik/sources/ldap/management/commands/ldap_sync.py"}, {"content": "\"\"\"Sync LDAP Users into authentik\"\"\"\nfrom typing import Generator\n\nfrom django.core.exceptions import FieldError\nfrom django.db.utils import IntegrityError\nfrom ldap3 import ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES, SUBTREE\n\nfrom authentik.core.models import User\nfrom authentik.events.models import Event, EventAction\nfrom authentik.sources.ldap.sync.base import LDAP_UNIQUENESS, BaseLDAPSynchronizer\nfrom authentik.sources.ldap.sync.vendor.freeipa import FreeIPA\nfrom authentik.sources.ldap.sync.vendor.ms_ad import MicrosoftActiveDirectory\n\n\nclass UserLDAPSynchronizer(BaseLDAPSynchronizer):\n \"\"\"Sync LDAP Users into authentik\"\"\"\n\n @staticmethod\n def name() -> str:\n return \"users\"\n\n def get_objects(self, **kwargs) -> Generator:\n return self.search_paginator(\n search_base=self.base_dn_users,\n search_filter=self._source.user_object_filter,\n search_scope=SUBTREE,\n attributes=[ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES],\n **kwargs,\n )\n\n def sync(self, page_data: list) -> int:\n \"\"\"Iterate over all LDAP Users and create authentik_core.User instances\"\"\"\n if not self._source.sync_users:\n self.message(\"User syncing is disabled for this Source\")\n return -1\n user_count = 0\n for user in page_data:\n if \"attributes\" not in user:\n continue\n attributes = user.get(\"attributes\", {})\n user_dn = self._flatten(user.get(\"entryDN\", user.get(\"dn\")))\n if self._source.object_uniqueness_field not in attributes:\n self.message(\n f\"Cannot find uniqueness field in attributes: '{user_dn}'\",\n attributes=attributes.keys(),\n dn=user_dn,\n )\n continue\n uniq = self._flatten(attributes[self._source.object_uniqueness_field])\n try:\n defaults = self.build_user_properties(user_dn, **attributes)\n self._logger.debug(\"Creating user with attributes\", **defaults)\n if \"username\" not in defaults:\n raise IntegrityError(\"Username was not set by propertymappings\")\n ak_user, created = self.update_or_create_attributes(\n User, {f\"attributes__{LDAP_UNIQUENESS}\": uniq}, defaults\n )\n except (IntegrityError, FieldError, TypeError, AttributeError) as exc:\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=(\n f\"Failed to create user: {str(exc)} \"\n \"To merge new user with existing user, set the user's \"\n f\"Attribute '{LDAP_UNIQUENESS}' to '{uniq}'\"\n ),\n source=self._source,\n dn=user_dn,\n ).save()\n else:\n self._logger.debug(\"Synced User\", user=ak_user.username, created=created)\n user_count += 1\n MicrosoftActiveDirectory(self._source).sync(attributes, ak_user, created)\n FreeIPA(self._source).sync(attributes, ak_user, created)\n return user_count\n", "path": "authentik/sources/ldap/sync/users.py"}]} | 1,847 | 452 |
gh_patches_debug_43971 | rasdani/github-patches | git_diff | pyca__cryptography-272 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check IV lengths are correct for cipher
OpenSSL can't be bothered to check these itself, because crypto is real easy and not important.
</issue>
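As a rough sketch of the kind of check the issue asks for (illustrative names; it assumes `block_size` is expressed in bits, as in the interfaces below, while the IV is given in bytes):

```
# Illustrative sketch: reject an IV whose length does not match the block size.
def validate_iv(algorithm, initialization_vector):
    if len(initialization_vector) * 8 != algorithm.block_size:
        raise ValueError(
            "Invalid iv size ({0}) for {1}".format(
                len(initialization_vector), algorithm.name
            )
        )
```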
<code>
[start of cryptography/hazmat/primitives/ciphers/base.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.exceptions import (
18 AlreadyFinalized, NotYetFinalized, AlreadyUpdated,
19 )
20 from cryptography.hazmat.primitives import interfaces
21
22
23 class Cipher(object):
24 def __init__(self, algorithm, mode, backend):
25 if not isinstance(algorithm, interfaces.CipherAlgorithm):
26 raise TypeError("Expected interface of interfaces.CipherAlgorithm")
27
28 self.algorithm = algorithm
29 self.mode = mode
30 self._backend = backend
31
32 def encryptor(self):
33 if isinstance(self.mode, interfaces.ModeWithAuthenticationTag):
34 if self.mode.tag is not None:
35 raise ValueError(
36 "Authentication tag must be None when encrypting"
37 )
38 ctx = self._backend.create_symmetric_encryption_ctx(
39 self.algorithm, self.mode
40 )
41 return self._wrap_ctx(ctx, encrypt=True)
42
43 def decryptor(self):
44 if isinstance(self.mode, interfaces.ModeWithAuthenticationTag):
45 if self.mode.tag is None:
46 raise ValueError(
47 "Authentication tag must be provided when decrypting"
48 )
49 ctx = self._backend.create_symmetric_decryption_ctx(
50 self.algorithm, self.mode
51 )
52 return self._wrap_ctx(ctx, encrypt=False)
53
54 def _wrap_ctx(self, ctx, encrypt):
55 if isinstance(self.mode, interfaces.ModeWithAuthenticationTag):
56 if encrypt:
57 return _AEADEncryptionContext(ctx)
58 else:
59 return _AEADCipherContext(ctx)
60 else:
61 return _CipherContext(ctx)
62
63
64 @utils.register_interface(interfaces.CipherContext)
65 class _CipherContext(object):
66 def __init__(self, ctx):
67 self._ctx = ctx
68
69 def update(self, data):
70 if self._ctx is None:
71 raise AlreadyFinalized("Context was already finalized")
72 return self._ctx.update(data)
73
74 def finalize(self):
75 if self._ctx is None:
76 raise AlreadyFinalized("Context was already finalized")
77 data = self._ctx.finalize()
78 self._ctx = None
79 return data
80
81
82 @utils.register_interface(interfaces.AEADCipherContext)
83 @utils.register_interface(interfaces.CipherContext)
84 class _AEADCipherContext(object):
85 def __init__(self, ctx):
86 self._ctx = ctx
87 self._tag = None
88 self._updated = False
89
90 def update(self, data):
91 if self._ctx is None:
92 raise AlreadyFinalized("Context was already finalized")
93 self._updated = True
94 return self._ctx.update(data)
95
96 def finalize(self):
97 if self._ctx is None:
98 raise AlreadyFinalized("Context was already finalized")
99 data = self._ctx.finalize()
100 self._tag = self._ctx.tag
101 self._ctx = None
102 return data
103
104 def authenticate_additional_data(self, data):
105 if self._ctx is None:
106 raise AlreadyFinalized("Context was already finalized")
107 if self._updated:
108 raise AlreadyUpdated("Update has been called on this context")
109 self._ctx.authenticate_additional_data(data)
110
111
112 @utils.register_interface(interfaces.AEADEncryptionContext)
113 class _AEADEncryptionContext(_AEADCipherContext):
114 @property
115 def tag(self):
116 if self._ctx is not None:
117 raise NotYetFinalized("You must finalize encryption before "
118 "getting the tag")
119 return self._tag
120
[end of cryptography/hazmat/primitives/ciphers/base.py]
[start of cryptography/hazmat/primitives/interfaces.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 import abc
17
18 import six
19
20
21 class CipherAlgorithm(six.with_metaclass(abc.ABCMeta)):
22 @abc.abstractproperty
23 def name(self):
24 """
25 A string naming this mode (e.g. "AES", "Camellia").
26 """
27
28 @abc.abstractproperty
29 def key_size(self):
30 """
31 The size of the key being used as an integer in bits (e.g. 128, 256).
32 """
33
34
35 class BlockCipherAlgorithm(six.with_metaclass(abc.ABCMeta)):
36 @abc.abstractproperty
37 def block_size(self):
38 """
39 The size of a block as an integer in bits (e.g. 64, 128).
40 """
41
42
43 class Mode(six.with_metaclass(abc.ABCMeta)):
44 @abc.abstractproperty
45 def name(self):
46 """
47 A string naming this mode (e.g. "ECB", "CBC").
48 """
49
50
51 class ModeWithInitializationVector(six.with_metaclass(abc.ABCMeta)):
52 @abc.abstractproperty
53 def initialization_vector(self):
54 """
55 The value of the initialization vector for this mode as bytes.
56 """
57
58
59 class ModeWithNonce(six.with_metaclass(abc.ABCMeta)):
60 @abc.abstractproperty
61 def nonce(self):
62 """
63 The value of the nonce for this mode as bytes.
64 """
65
66
67 class ModeWithAuthenticationTag(six.with_metaclass(abc.ABCMeta)):
68 @abc.abstractproperty
69 def tag(self):
70 """
71 The value of the tag supplied to the constructor of this mode.
72 """
73
74
75 class CipherContext(six.with_metaclass(abc.ABCMeta)):
76 @abc.abstractmethod
77 def update(self, data):
78 """
79 Processes the provided bytes through the cipher and returns the results
80 as bytes.
81 """
82
83 @abc.abstractmethod
84 def finalize(self):
85 """
86 Returns the results of processing the final block as bytes.
87 """
88
89
90 class AEADCipherContext(six.with_metaclass(abc.ABCMeta)):
91 @abc.abstractmethod
92 def authenticate_additional_data(self, data):
93 """
94 Authenticates the provided bytes.
95 """
96
97
98 class AEADEncryptionContext(six.with_metaclass(abc.ABCMeta)):
99 @abc.abstractproperty
100 def tag(self):
101 """
102 Returns tag bytes. This is only available after encryption is
103 finalized.
104 """
105
106
107 class PaddingContext(six.with_metaclass(abc.ABCMeta)):
108 @abc.abstractmethod
109 def update(self, data):
110 """
111 Pads the provided bytes and returns any available data as bytes.
112 """
113
114 @abc.abstractmethod
115 def finalize(self):
116 """
117 Finalize the padding, returns bytes.
118 """
119
120
121 class HashAlgorithm(six.with_metaclass(abc.ABCMeta)):
122 @abc.abstractproperty
123 def name(self):
124 """
125 A string naming this algorithm (e.g. "sha256", "md5").
126 """
127
128 @abc.abstractproperty
129 def digest_size(self):
130 """
131 The size of the resulting digest in bytes.
132 """
133
134 @abc.abstractproperty
135 def block_size(self):
136 """
137 The internal block size of the hash algorithm in bytes.
138 """
139
140
141 class HashContext(six.with_metaclass(abc.ABCMeta)):
142 @abc.abstractproperty
143 def algorithm(self):
144 """
145 A HashAlgorithm that will be used by this context.
146 """
147
148 @abc.abstractmethod
149 def update(self, data):
150 """
151 Processes the provided bytes through the hash.
152 """
153
154 @abc.abstractmethod
155 def finalize(self):
156 """
157 Finalizes the hash context and returns the hash digest as bytes.
158 """
159
160 @abc.abstractmethod
161 def copy(self):
162 """
163 Return a HashContext that is a copy of the current context.
164 """
165
[end of cryptography/hazmat/primitives/interfaces.py]
[start of cryptography/hazmat/primitives/ciphers/modes.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.hazmat.primitives import interfaces
18
19
20 @utils.register_interface(interfaces.Mode)
21 @utils.register_interface(interfaces.ModeWithInitializationVector)
22 class CBC(object):
23 name = "CBC"
24
25 def __init__(self, initialization_vector):
26 self.initialization_vector = initialization_vector
27
28
29 @utils.register_interface(interfaces.Mode)
30 class ECB(object):
31 name = "ECB"
32
33
34 @utils.register_interface(interfaces.Mode)
35 @utils.register_interface(interfaces.ModeWithInitializationVector)
36 class OFB(object):
37 name = "OFB"
38
39 def __init__(self, initialization_vector):
40 self.initialization_vector = initialization_vector
41
42
43 @utils.register_interface(interfaces.Mode)
44 @utils.register_interface(interfaces.ModeWithInitializationVector)
45 class CFB(object):
46 name = "CFB"
47
48 def __init__(self, initialization_vector):
49 self.initialization_vector = initialization_vector
50
51
52 @utils.register_interface(interfaces.Mode)
53 @utils.register_interface(interfaces.ModeWithNonce)
54 class CTR(object):
55 name = "CTR"
56
57 def __init__(self, nonce):
58 self.nonce = nonce
59
60
61 @utils.register_interface(interfaces.Mode)
62 @utils.register_interface(interfaces.ModeWithInitializationVector)
63 @utils.register_interface(interfaces.ModeWithAuthenticationTag)
64 class GCM(object):
65 name = "GCM"
66
67 def __init__(self, initialization_vector, tag=None):
68 if tag is not None and len(tag) < 4:
69 raise ValueError(
70 "Authentication tag must be 4 bytes or longer"
71 )
72
73 self.initialization_vector = initialization_vector
74 self.tag = tag
75
[end of cryptography/hazmat/primitives/ciphers/modes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cryptography/hazmat/primitives/ciphers/base.py b/cryptography/hazmat/primitives/ciphers/base.py
--- a/cryptography/hazmat/primitives/ciphers/base.py
+++ b/cryptography/hazmat/primitives/ciphers/base.py
@@ -25,6 +25,9 @@
if not isinstance(algorithm, interfaces.CipherAlgorithm):
raise TypeError("Expected interface of interfaces.CipherAlgorithm")
+ if mode is not None:
+ mode.validate_for_algorithm(algorithm)
+
self.algorithm = algorithm
self.mode = mode
self._backend = backend
diff --git a/cryptography/hazmat/primitives/ciphers/modes.py b/cryptography/hazmat/primitives/ciphers/modes.py
--- a/cryptography/hazmat/primitives/ciphers/modes.py
+++ b/cryptography/hazmat/primitives/ciphers/modes.py
@@ -25,11 +25,20 @@
def __init__(self, initialization_vector):
self.initialization_vector = initialization_vector
+ def validate_for_algorithm(self, algorithm):
+ if len(self.initialization_vector) * 8 != algorithm.block_size:
+ raise ValueError("Invalid iv size ({0}) for {1}".format(
+ len(self.initialization_vector), self.name
+ ))
+
@utils.register_interface(interfaces.Mode)
class ECB(object):
name = "ECB"
+ def validate_for_algorithm(self, algorithm):
+ pass
+
@utils.register_interface(interfaces.Mode)
@utils.register_interface(interfaces.ModeWithInitializationVector)
@@ -39,6 +48,12 @@
def __init__(self, initialization_vector):
self.initialization_vector = initialization_vector
+ def validate_for_algorithm(self, algorithm):
+ if len(self.initialization_vector) * 8 != algorithm.block_size:
+ raise ValueError("Invalid iv size ({0}) for {1}".format(
+ len(self.initialization_vector), self.name
+ ))
+
@utils.register_interface(interfaces.Mode)
@utils.register_interface(interfaces.ModeWithInitializationVector)
@@ -48,6 +63,12 @@
def __init__(self, initialization_vector):
self.initialization_vector = initialization_vector
+ def validate_for_algorithm(self, algorithm):
+ if len(self.initialization_vector) * 8 != algorithm.block_size:
+ raise ValueError("Invalid iv size ({0}) for {1}".format(
+ len(self.initialization_vector), self.name
+ ))
+
@utils.register_interface(interfaces.Mode)
@utils.register_interface(interfaces.ModeWithNonce)
@@ -57,6 +78,12 @@
def __init__(self, nonce):
self.nonce = nonce
+ def validate_for_algorithm(self, algorithm):
+ if len(self.nonce) * 8 != algorithm.block_size:
+ raise ValueError("Invalid nonce size ({0}) for {1}".format(
+ len(self.nonce), self.name
+ ))
+
@utils.register_interface(interfaces.Mode)
@utils.register_interface(interfaces.ModeWithInitializationVector)
@@ -65,6 +92,9 @@
name = "GCM"
def __init__(self, initialization_vector, tag=None):
+ # len(initialization_vector) must in [1, 2 ** 64), but it's impossible
+ # to actually construct a bytes object that large, so we don't check
+ # for it
if tag is not None and len(tag) < 4:
raise ValueError(
"Authentication tag must be 4 bytes or longer"
@@ -72,3 +102,6 @@
self.initialization_vector = initialization_vector
self.tag = tag
+
+ def validate_for_algorithm(self, algorithm):
+ pass
diff --git a/cryptography/hazmat/primitives/interfaces.py b/cryptography/hazmat/primitives/interfaces.py
--- a/cryptography/hazmat/primitives/interfaces.py
+++ b/cryptography/hazmat/primitives/interfaces.py
@@ -47,6 +47,13 @@
A string naming this mode (e.g. "ECB", "CBC").
"""
+ @abc.abstractmethod
+ def validate_for_algorithm(self, algorithm):
+ """
+ Checks that all the necessary invariants of this (mode, algorithm)
+ combination are met.
+ """
+
class ModeWithInitializationVector(six.with_metaclass(abc.ABCMeta)):
@abc.abstractproperty
| {"golden_diff": "diff --git a/cryptography/hazmat/primitives/ciphers/base.py b/cryptography/hazmat/primitives/ciphers/base.py\n--- a/cryptography/hazmat/primitives/ciphers/base.py\n+++ b/cryptography/hazmat/primitives/ciphers/base.py\n@@ -25,6 +25,9 @@\n if not isinstance(algorithm, interfaces.CipherAlgorithm):\n raise TypeError(\"Expected interface of interfaces.CipherAlgorithm\")\n \n+ if mode is not None:\n+ mode.validate_for_algorithm(algorithm)\n+\n self.algorithm = algorithm\n self.mode = mode\n self._backend = backend\ndiff --git a/cryptography/hazmat/primitives/ciphers/modes.py b/cryptography/hazmat/primitives/ciphers/modes.py\n--- a/cryptography/hazmat/primitives/ciphers/modes.py\n+++ b/cryptography/hazmat/primitives/ciphers/modes.py\n@@ -25,11 +25,20 @@\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n \n+ def validate_for_algorithm(self, algorithm):\n+ if len(self.initialization_vector) * 8 != algorithm.block_size:\n+ raise ValueError(\"Invalid iv size ({0}) for {1}\".format(\n+ len(self.initialization_vector), self.name\n+ ))\n+\n \n @utils.register_interface(interfaces.Mode)\n class ECB(object):\n name = \"ECB\"\n \n+ def validate_for_algorithm(self, algorithm):\n+ pass\n+\n \n @utils.register_interface(interfaces.Mode)\n @utils.register_interface(interfaces.ModeWithInitializationVector)\n@@ -39,6 +48,12 @@\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n \n+ def validate_for_algorithm(self, algorithm):\n+ if len(self.initialization_vector) * 8 != algorithm.block_size:\n+ raise ValueError(\"Invalid iv size ({0}) for {1}\".format(\n+ len(self.initialization_vector), self.name\n+ ))\n+\n \n @utils.register_interface(interfaces.Mode)\n @utils.register_interface(interfaces.ModeWithInitializationVector)\n@@ -48,6 +63,12 @@\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n \n+ def validate_for_algorithm(self, algorithm):\n+ if len(self.initialization_vector) * 8 != algorithm.block_size:\n+ raise ValueError(\"Invalid iv size ({0}) for {1}\".format(\n+ len(self.initialization_vector), self.name\n+ ))\n+\n \n @utils.register_interface(interfaces.Mode)\n @utils.register_interface(interfaces.ModeWithNonce)\n@@ -57,6 +78,12 @@\n def __init__(self, nonce):\n self.nonce = nonce\n \n+ def validate_for_algorithm(self, algorithm):\n+ if len(self.nonce) * 8 != algorithm.block_size:\n+ raise ValueError(\"Invalid nonce size ({0}) for {1}\".format(\n+ len(self.nonce), self.name\n+ ))\n+\n \n @utils.register_interface(interfaces.Mode)\n @utils.register_interface(interfaces.ModeWithInitializationVector)\n@@ -65,6 +92,9 @@\n name = \"GCM\"\n \n def __init__(self, initialization_vector, tag=None):\n+ # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n+ # to actually construct a bytes object that large, so we don't check\n+ # for it\n if tag is not None and len(tag) < 4:\n raise ValueError(\n \"Authentication tag must be 4 bytes or longer\"\n@@ -72,3 +102,6 @@\n \n self.initialization_vector = initialization_vector\n self.tag = tag\n+\n+ def validate_for_algorithm(self, algorithm):\n+ pass\ndiff --git a/cryptography/hazmat/primitives/interfaces.py b/cryptography/hazmat/primitives/interfaces.py\n--- a/cryptography/hazmat/primitives/interfaces.py\n+++ b/cryptography/hazmat/primitives/interfaces.py\n@@ -47,6 +47,13 @@\n A string naming this mode (e.g. 
\"ECB\", \"CBC\").\n \"\"\"\n \n+ @abc.abstractmethod\n+ def validate_for_algorithm(self, algorithm):\n+ \"\"\"\n+ Checks that all the necessary invariants of this (mode, algorithm)\n+ combination are met.\n+ \"\"\"\n+\n \n class ModeWithInitializationVector(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n", "issue": "Check IV lengths are correct for cipher\nOpenSSL can't be bothered to check these itself, because crypto is real easy and not important.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, NotYetFinalized, AlreadyUpdated,\n)\nfrom cryptography.hazmat.primitives import interfaces\n\n\nclass Cipher(object):\n def __init__(self, algorithm, mode, backend):\n if not isinstance(algorithm, interfaces.CipherAlgorithm):\n raise TypeError(\"Expected interface of interfaces.CipherAlgorithm\")\n\n self.algorithm = algorithm\n self.mode = mode\n self._backend = backend\n\n def encryptor(self):\n if isinstance(self.mode, interfaces.ModeWithAuthenticationTag):\n if self.mode.tag is not None:\n raise ValueError(\n \"Authentication tag must be None when encrypting\"\n )\n ctx = self._backend.create_symmetric_encryption_ctx(\n self.algorithm, self.mode\n )\n return self._wrap_ctx(ctx, encrypt=True)\n\n def decryptor(self):\n if isinstance(self.mode, interfaces.ModeWithAuthenticationTag):\n if self.mode.tag is None:\n raise ValueError(\n \"Authentication tag must be provided when decrypting\"\n )\n ctx = self._backend.create_symmetric_decryption_ctx(\n self.algorithm, self.mode\n )\n return self._wrap_ctx(ctx, encrypt=False)\n\n def _wrap_ctx(self, ctx, encrypt):\n if isinstance(self.mode, interfaces.ModeWithAuthenticationTag):\n if encrypt:\n return _AEADEncryptionContext(ctx)\n else:\n return _AEADCipherContext(ctx)\n else:\n return _CipherContext(ctx)\n\n\[email protected]_interface(interfaces.CipherContext)\nclass _CipherContext(object):\n def __init__(self, ctx):\n self._ctx = ctx\n\n def update(self, data):\n if self._ctx is None:\n raise AlreadyFinalized(\"Context was already finalized\")\n return self._ctx.update(data)\n\n def finalize(self):\n if self._ctx is None:\n raise AlreadyFinalized(\"Context was already finalized\")\n data = self._ctx.finalize()\n self._ctx = None\n return data\n\n\[email protected]_interface(interfaces.AEADCipherContext)\[email protected]_interface(interfaces.CipherContext)\nclass _AEADCipherContext(object):\n def __init__(self, ctx):\n self._ctx = ctx\n self._tag = None\n self._updated = False\n\n def update(self, data):\n if self._ctx is None:\n raise AlreadyFinalized(\"Context was already finalized\")\n self._updated = True\n return self._ctx.update(data)\n\n def finalize(self):\n if self._ctx is None:\n raise AlreadyFinalized(\"Context was already finalized\")\n data = self._ctx.finalize()\n self._tag = self._ctx.tag\n self._ctx = None\n return data\n\n def 
authenticate_additional_data(self, data):\n if self._ctx is None:\n raise AlreadyFinalized(\"Context was already finalized\")\n if self._updated:\n raise AlreadyUpdated(\"Update has been called on this context\")\n self._ctx.authenticate_additional_data(data)\n\n\[email protected]_interface(interfaces.AEADEncryptionContext)\nclass _AEADEncryptionContext(_AEADCipherContext):\n @property\n def tag(self):\n if self._ctx is not None:\n raise NotYetFinalized(\"You must finalize encryption before \"\n \"getting the tag\")\n return self._tag\n", "path": "cryptography/hazmat/primitives/ciphers/base.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\n\nimport six\n\n\nclass CipherAlgorithm(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def name(self):\n \"\"\"\n A string naming this mode (e.g. \"AES\", \"Camellia\").\n \"\"\"\n\n @abc.abstractproperty\n def key_size(self):\n \"\"\"\n The size of the key being used as an integer in bits (e.g. 128, 256).\n \"\"\"\n\n\nclass BlockCipherAlgorithm(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def block_size(self):\n \"\"\"\n The size of a block as an integer in bits (e.g. 64, 128).\n \"\"\"\n\n\nclass Mode(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def name(self):\n \"\"\"\n A string naming this mode (e.g. \"ECB\", \"CBC\").\n \"\"\"\n\n\nclass ModeWithInitializationVector(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def initialization_vector(self):\n \"\"\"\n The value of the initialization vector for this mode as bytes.\n \"\"\"\n\n\nclass ModeWithNonce(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def nonce(self):\n \"\"\"\n The value of the nonce for this mode as bytes.\n \"\"\"\n\n\nclass ModeWithAuthenticationTag(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def tag(self):\n \"\"\"\n The value of the tag supplied to the constructor of this mode.\n \"\"\"\n\n\nclass CipherContext(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractmethod\n def update(self, data):\n \"\"\"\n Processes the provided bytes through the cipher and returns the results\n as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def finalize(self):\n \"\"\"\n Returns the results of processing the final block as bytes.\n \"\"\"\n\n\nclass AEADCipherContext(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractmethod\n def authenticate_additional_data(self, data):\n \"\"\"\n Authenticates the provided bytes.\n \"\"\"\n\n\nclass AEADEncryptionContext(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def tag(self):\n \"\"\"\n Returns tag bytes. 
This is only available after encryption is\n finalized.\n \"\"\"\n\n\nclass PaddingContext(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractmethod\n def update(self, data):\n \"\"\"\n Pads the provided bytes and returns any available data as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def finalize(self):\n \"\"\"\n Finalize the padding, returns bytes.\n \"\"\"\n\n\nclass HashAlgorithm(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def name(self):\n \"\"\"\n A string naming this algorithm (e.g. \"sha256\", \"md5\").\n \"\"\"\n\n @abc.abstractproperty\n def digest_size(self):\n \"\"\"\n The size of the resulting digest in bytes.\n \"\"\"\n\n @abc.abstractproperty\n def block_size(self):\n \"\"\"\n The internal block size of the hash algorithm in bytes.\n \"\"\"\n\n\nclass HashContext(six.with_metaclass(abc.ABCMeta)):\n @abc.abstractproperty\n def algorithm(self):\n \"\"\"\n A HashAlgorithm that will be used by this context.\n \"\"\"\n\n @abc.abstractmethod\n def update(self, data):\n \"\"\"\n Processes the provided bytes through the hash.\n \"\"\"\n\n @abc.abstractmethod\n def finalize(self):\n \"\"\"\n Finalizes the hash context and returns the hash digest as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def copy(self):\n \"\"\"\n Return a HashContext that is a copy of the current context.\n \"\"\"\n", "path": "cryptography/hazmat/primitives/interfaces.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CBC(object):\n name = \"CBC\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n\[email protected]_interface(interfaces.Mode)\nclass ECB(object):\n name = \"ECB\"\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass OFB(object):\n name = \"OFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB(object):\n name = \"CFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithNonce)\nclass CTR(object):\n name = \"CTR\"\n\n def __init__(self, nonce):\n self.nonce = nonce\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\[email protected]_interface(interfaces.ModeWithAuthenticationTag)\nclass GCM(object):\n name = \"GCM\"\n\n def __init__(self, initialization_vector, tag=None):\n if tag is not None and len(tag) < 4:\n raise ValueError(\n \"Authentication tag must be 4 bytes or longer\"\n )\n\n 
self.initialization_vector = initialization_vector\n self.tag = tag\n", "path": "cryptography/hazmat/primitives/ciphers/modes.py"}]} | 3,697 | 971 |
gh_patches_debug_58129 | rasdani/github-patches | git_diff | alibaba__FederatedScope-496 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The message requesting local pretraining is missing the "content" parameter when training a graph model in distributed mode?
If there is no "content" parameter, ValueError('The data type {} has not been supported.'.format(type(value))) will be raised in the Message.create_by_type() function.
</issue>
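A minimal sketch of how the failure surfaces, based on the `Message` class below (field values are placeholders): `content` defaults to `None`, and `create_by_type()` only handles dict, list/tuple, int, str and float, so serializing the message for gRPC raises the error.

```
# Illustrative only: building a message without an explicit content value.
msg = Message(msg_type='local_pre_training', sender=0, receiver=1, state=0)
msg.transform(to_list=True)
# ValueError: The data type <class 'NoneType'> has not been supported.
```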
<code>
[start of federatedscope/core/message.py]
1 import json
2 import numpy as np
3 from federatedscope.core.proto import gRPC_comm_manager_pb2
4
5
6 class Message(object):
7 """
8 The data exchanged during an FL course are abstracted as 'Message' in
9 FederatedScope.
10 A message object includes:
11 msg_type: The type of message, which is used to trigger the
12 corresponding handlers of server/client
13 sender: The sender's ID
14 receiver: The receiver's ID
15 state: The training round of the message, which is determined by
16 the sender and used to filter out the outdated messages.
17 strategy: redundant attribute
18 """
19 def __init__(self,
20 msg_type=None,
21 sender=0,
22 receiver=0,
23 state=0,
24 content=None,
25 timestamp=0,
26 strategy=None):
27 self._msg_type = msg_type
28 self._sender = sender
29 self._receiver = receiver
30 self._state = state
31 self._content = content
32 self._timestamp = timestamp
33 self._strategy = strategy
34
35 @property
36 def msg_type(self):
37 return self._msg_type
38
39 @msg_type.setter
40 def msg_type(self, value):
41 self._msg_type = value
42
43 @property
44 def sender(self):
45 return self._sender
46
47 @sender.setter
48 def sender(self, value):
49 self._sender = value
50
51 @property
52 def receiver(self):
53 return self._receiver
54
55 @receiver.setter
56 def receiver(self, value):
57 self._receiver = value
58
59 @property
60 def state(self):
61 return self._state
62
63 @state.setter
64 def state(self, value):
65 self._state = value
66
67 @property
68 def content(self):
69 return self._content
70
71 @content.setter
72 def content(self, value):
73 self._content = value
74
75 @property
76 def timestamp(self):
77 return self._timestamp
78
79 @timestamp.setter
80 def timestamp(self, value):
81 assert isinstance(value, int) or isinstance(value, float), \
82 "We only support an int or a float value for timestamp"
83 self._timestamp = value
84
85 @property
86 def strategy(self):
87 return self._strategy
88
89 @strategy.setter
90 def strategy(self, value):
91 self._strategy = value
92
93 def __lt__(self, other):
94 if self.timestamp != other.timestamp:
95 return self.timestamp < other.timestamp
96 else:
97 return self.state < other.state
98
99 def transform_to_list(self, x):
100 if isinstance(x, list) or isinstance(x, tuple):
101 return [self.transform_to_list(each_x) for each_x in x]
102 elif isinstance(x, dict):
103 for key in x.keys():
104 x[key] = self.transform_to_list(x[key])
105 return x
106 else:
107 if hasattr(x, 'tolist'):
108 return x.tolist()
109 else:
110 return x
111
112 def msg_to_json(self, to_list=False):
113 if to_list:
114 self.content = self.transform_to_list(self.content)
115
116 json_msg = {
117 'msg_type': self.msg_type,
118 'sender': self.sender,
119 'receiver': self.receiver,
120 'state': self.state,
121 'content': self.content,
122 'timestamp': self.timestamp,
123 'strategy': self.strategy,
124 }
125 return json.dumps(json_msg)
126
127 def json_to_msg(self, json_string):
128 json_msg = json.loads(json_string)
129 self.msg_type = json_msg['msg_type']
130 self.sender = json_msg['sender']
131 self.receiver = json_msg['receiver']
132 self.state = json_msg['state']
133 self.content = json_msg['content']
134 self.timestamp = json_msg['timestamp']
135 self.strategy = json_msg['strategy']
136
137 def create_by_type(self, value, nested=False):
138 if isinstance(value, dict):
139 if isinstance(list(value.keys())[0], str):
140 m_dict = gRPC_comm_manager_pb2.mDict_keyIsString()
141 key_type = 'string'
142 else:
143 m_dict = gRPC_comm_manager_pb2.mDict_keyIsInt()
144 key_type = 'int'
145
146 for key in value.keys():
147 m_dict.dict_value[key].MergeFrom(
148 self.create_by_type(value[key], nested=True))
149 if nested:
150 msg_value = gRPC_comm_manager_pb2.MsgValue()
151 if key_type == 'string':
152 msg_value.dict_msg_stringkey.MergeFrom(m_dict)
153 else:
154 msg_value.dict_msg_intkey.MergeFrom(m_dict)
155 return msg_value
156 else:
157 return m_dict
158 elif isinstance(value, list) or isinstance(value, tuple):
159 m_list = gRPC_comm_manager_pb2.mList()
160 for each in value:
161 m_list.list_value.append(self.create_by_type(each,
162 nested=True))
163 if nested:
164 msg_value = gRPC_comm_manager_pb2.MsgValue()
165 msg_value.list_msg.MergeFrom(m_list)
166 return msg_value
167 else:
168 return m_list
169 else:
170 m_single = gRPC_comm_manager_pb2.mSingle()
171 if type(value) in [int, np.int32]:
172 m_single.int_value = value
173 elif type(value) in [str]:
174 m_single.str_value = value
175 elif type(value) in [float, np.float32]:
176 m_single.float_value = value
177 else:
178 raise ValueError(
179 'The data type {} has not been supported.'.format(
180 type(value)))
181
182 if nested:
183 msg_value = gRPC_comm_manager_pb2.MsgValue()
184 msg_value.single_msg.MergeFrom(m_single)
185 return msg_value
186 else:
187 return m_single
188
189 def build_msg_value(self, value):
190 msg_value = gRPC_comm_manager_pb2.MsgValue()
191
192 if isinstance(value, list) or isinstance(value, tuple):
193 msg_value.list_msg.MergeFrom(self.create_by_type(value))
194 elif isinstance(value, dict):
195 if isinstance(list(value.keys())[0], str):
196 msg_value.dict_msg_stringkey.MergeFrom(
197 self.create_by_type(value))
198 else:
199 msg_value.dict_msg_intkey.MergeFrom(self.create_by_type(value))
200 else:
201 msg_value.single_msg.MergeFrom(self.create_by_type(value))
202
203 return msg_value
204
205 def transform(self, to_list=False):
206 if to_list:
207 self.content = self.transform_to_list(self.content)
208
209 splited_msg = gRPC_comm_manager_pb2.MessageRequest() # map/dict
210 splited_msg.msg['sender'].MergeFrom(self.build_msg_value(self.sender))
211 splited_msg.msg['receiver'].MergeFrom(
212 self.build_msg_value(self.receiver))
213 splited_msg.msg['state'].MergeFrom(self.build_msg_value(self.state))
214 splited_msg.msg['msg_type'].MergeFrom(
215 self.build_msg_value(self.msg_type))
216 splited_msg.msg['content'].MergeFrom(self.build_msg_value(
217 self.content))
218 splited_msg.msg['timestamp'].MergeFrom(
219 self.build_msg_value(self.timestamp))
220 return splited_msg
221
222 def _parse_msg(self, value):
223 if isinstance(value, gRPC_comm_manager_pb2.MsgValue) or isinstance(
224 value, gRPC_comm_manager_pb2.mSingle):
225 return self._parse_msg(getattr(value, value.WhichOneof("type")))
226 elif isinstance(value, gRPC_comm_manager_pb2.mList):
227 return [self._parse_msg(each) for each in value.list_value]
228 elif isinstance(value, gRPC_comm_manager_pb2.mDict_keyIsString) or \
229 isinstance(value, gRPC_comm_manager_pb2.mDict_keyIsInt):
230 return {
231 k: self._parse_msg(value.dict_value[k])
232 for k in value.dict_value
233 }
234 else:
235 return value
236
237 def parse(self, received_msg):
238 self.sender = self._parse_msg(received_msg['sender'])
239 self.receiver = self._parse_msg(received_msg['receiver'])
240 self.msg_type = self._parse_msg(received_msg['msg_type'])
241 self.state = self._parse_msg(received_msg['state'])
242 self.content = self._parse_msg(received_msg['content'])
243 self.timestamp = self._parse_msg(received_msg['timestamp'])
244
245 def count_bytes(self):
246 """
247 calculate the message bytes to be sent/received
248 :return: tuple of bytes of the message to be sent and received
249 """
250 from pympler import asizeof
251 download_bytes = asizeof.asizeof(self.content)
252 upload_cnt = len(self.receiver) if isinstance(self.receiver,
253 list) else 1
254 upload_bytes = download_bytes * upload_cnt
255 return download_bytes, upload_bytes
256
[end of federatedscope/core/message.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/federatedscope/core/message.py b/federatedscope/core/message.py
--- a/federatedscope/core/message.py
+++ b/federatedscope/core/message.py
@@ -21,7 +21,7 @@
sender=0,
receiver=0,
state=0,
- content=None,
+ content='None',
timestamp=0,
strategy=None):
self._msg_type = msg_type
| {"golden_diff": "diff --git a/federatedscope/core/message.py b/federatedscope/core/message.py\n--- a/federatedscope/core/message.py\n+++ b/federatedscope/core/message.py\n@@ -21,7 +21,7 @@\n sender=0,\n receiver=0,\n state=0,\n- content=None,\n+ content='None',\n timestamp=0,\n strategy=None):\n self._msg_type = msg_type\n", "issue": "Message asked for local pretraining is missing the \"content\" para when train a graph model in distributed mode?\n\r\nIf no \"content\" para, there will raise ValueError('The data type {} has not been supported.'.format(type(value))) in Message.create_by_type() function.\n", "before_files": [{"content": "import json\nimport numpy as np\nfrom federatedscope.core.proto import gRPC_comm_manager_pb2\n\n\nclass Message(object):\n \"\"\"\n The data exchanged during an FL course are abstracted as 'Message' in\n FederatedScope.\n A message object includes:\n msg_type: The type of message, which is used to trigger the\n corresponding handlers of server/client\n sender: The sender's ID\n receiver: The receiver's ID\n state: The training round of the message, which is determined by\n the sender and used to filter out the outdated messages.\n strategy: redundant attribute\n \"\"\"\n def __init__(self,\n msg_type=None,\n sender=0,\n receiver=0,\n state=0,\n content=None,\n timestamp=0,\n strategy=None):\n self._msg_type = msg_type\n self._sender = sender\n self._receiver = receiver\n self._state = state\n self._content = content\n self._timestamp = timestamp\n self._strategy = strategy\n\n @property\n def msg_type(self):\n return self._msg_type\n\n @msg_type.setter\n def msg_type(self, value):\n self._msg_type = value\n\n @property\n def sender(self):\n return self._sender\n\n @sender.setter\n def sender(self, value):\n self._sender = value\n\n @property\n def receiver(self):\n return self._receiver\n\n @receiver.setter\n def receiver(self, value):\n self._receiver = value\n\n @property\n def state(self):\n return self._state\n\n @state.setter\n def state(self, value):\n self._state = value\n\n @property\n def content(self):\n return self._content\n\n @content.setter\n def content(self, value):\n self._content = value\n\n @property\n def timestamp(self):\n return self._timestamp\n\n @timestamp.setter\n def timestamp(self, value):\n assert isinstance(value, int) or isinstance(value, float), \\\n \"We only support an int or a float value for timestamp\"\n self._timestamp = value\n\n @property\n def strategy(self):\n return self._strategy\n\n @strategy.setter\n def strategy(self, value):\n self._strategy = value\n\n def __lt__(self, other):\n if self.timestamp != other.timestamp:\n return self.timestamp < other.timestamp\n else:\n return self.state < other.state\n\n def transform_to_list(self, x):\n if isinstance(x, list) or isinstance(x, tuple):\n return [self.transform_to_list(each_x) for each_x in x]\n elif isinstance(x, dict):\n for key in x.keys():\n x[key] = self.transform_to_list(x[key])\n return x\n else:\n if hasattr(x, 'tolist'):\n return x.tolist()\n else:\n return x\n\n def msg_to_json(self, to_list=False):\n if to_list:\n self.content = self.transform_to_list(self.content)\n\n json_msg = {\n 'msg_type': self.msg_type,\n 'sender': self.sender,\n 'receiver': self.receiver,\n 'state': self.state,\n 'content': self.content,\n 'timestamp': self.timestamp,\n 'strategy': self.strategy,\n }\n return json.dumps(json_msg)\n\n def json_to_msg(self, json_string):\n json_msg = json.loads(json_string)\n self.msg_type = json_msg['msg_type']\n self.sender = 
json_msg['sender']\n self.receiver = json_msg['receiver']\n self.state = json_msg['state']\n self.content = json_msg['content']\n self.timestamp = json_msg['timestamp']\n self.strategy = json_msg['strategy']\n\n def create_by_type(self, value, nested=False):\n if isinstance(value, dict):\n if isinstance(list(value.keys())[0], str):\n m_dict = gRPC_comm_manager_pb2.mDict_keyIsString()\n key_type = 'string'\n else:\n m_dict = gRPC_comm_manager_pb2.mDict_keyIsInt()\n key_type = 'int'\n\n for key in value.keys():\n m_dict.dict_value[key].MergeFrom(\n self.create_by_type(value[key], nested=True))\n if nested:\n msg_value = gRPC_comm_manager_pb2.MsgValue()\n if key_type == 'string':\n msg_value.dict_msg_stringkey.MergeFrom(m_dict)\n else:\n msg_value.dict_msg_intkey.MergeFrom(m_dict)\n return msg_value\n else:\n return m_dict\n elif isinstance(value, list) or isinstance(value, tuple):\n m_list = gRPC_comm_manager_pb2.mList()\n for each in value:\n m_list.list_value.append(self.create_by_type(each,\n nested=True))\n if nested:\n msg_value = gRPC_comm_manager_pb2.MsgValue()\n msg_value.list_msg.MergeFrom(m_list)\n return msg_value\n else:\n return m_list\n else:\n m_single = gRPC_comm_manager_pb2.mSingle()\n if type(value) in [int, np.int32]:\n m_single.int_value = value\n elif type(value) in [str]:\n m_single.str_value = value\n elif type(value) in [float, np.float32]:\n m_single.float_value = value\n else:\n raise ValueError(\n 'The data type {} has not been supported.'.format(\n type(value)))\n\n if nested:\n msg_value = gRPC_comm_manager_pb2.MsgValue()\n msg_value.single_msg.MergeFrom(m_single)\n return msg_value\n else:\n return m_single\n\n def build_msg_value(self, value):\n msg_value = gRPC_comm_manager_pb2.MsgValue()\n\n if isinstance(value, list) or isinstance(value, tuple):\n msg_value.list_msg.MergeFrom(self.create_by_type(value))\n elif isinstance(value, dict):\n if isinstance(list(value.keys())[0], str):\n msg_value.dict_msg_stringkey.MergeFrom(\n self.create_by_type(value))\n else:\n msg_value.dict_msg_intkey.MergeFrom(self.create_by_type(value))\n else:\n msg_value.single_msg.MergeFrom(self.create_by_type(value))\n\n return msg_value\n\n def transform(self, to_list=False):\n if to_list:\n self.content = self.transform_to_list(self.content)\n\n splited_msg = gRPC_comm_manager_pb2.MessageRequest() # map/dict\n splited_msg.msg['sender'].MergeFrom(self.build_msg_value(self.sender))\n splited_msg.msg['receiver'].MergeFrom(\n self.build_msg_value(self.receiver))\n splited_msg.msg['state'].MergeFrom(self.build_msg_value(self.state))\n splited_msg.msg['msg_type'].MergeFrom(\n self.build_msg_value(self.msg_type))\n splited_msg.msg['content'].MergeFrom(self.build_msg_value(\n self.content))\n splited_msg.msg['timestamp'].MergeFrom(\n self.build_msg_value(self.timestamp))\n return splited_msg\n\n def _parse_msg(self, value):\n if isinstance(value, gRPC_comm_manager_pb2.MsgValue) or isinstance(\n value, gRPC_comm_manager_pb2.mSingle):\n return self._parse_msg(getattr(value, value.WhichOneof(\"type\")))\n elif isinstance(value, gRPC_comm_manager_pb2.mList):\n return [self._parse_msg(each) for each in value.list_value]\n elif isinstance(value, gRPC_comm_manager_pb2.mDict_keyIsString) or \\\n isinstance(value, gRPC_comm_manager_pb2.mDict_keyIsInt):\n return {\n k: self._parse_msg(value.dict_value[k])\n for k in value.dict_value\n }\n else:\n return value\n\n def parse(self, received_msg):\n self.sender = self._parse_msg(received_msg['sender'])\n self.receiver = 
self._parse_msg(received_msg['receiver'])\n self.msg_type = self._parse_msg(received_msg['msg_type'])\n self.state = self._parse_msg(received_msg['state'])\n self.content = self._parse_msg(received_msg['content'])\n self.timestamp = self._parse_msg(received_msg['timestamp'])\n\n def count_bytes(self):\n \"\"\"\n calculate the message bytes to be sent/received\n :return: tuple of bytes of the message to be sent and received\n \"\"\"\n from pympler import asizeof\n download_bytes = asizeof.asizeof(self.content)\n upload_cnt = len(self.receiver) if isinstance(self.receiver,\n list) else 1\n upload_bytes = download_bytes * upload_cnt\n return download_bytes, upload_bytes\n", "path": "federatedscope/core/message.py"}]} | 3,110 | 96 |
gh_patches_debug_51991 | rasdani/github-patches | git_diff | pydantic__pydantic-391 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include a PEP 561 marker file
# Feature Request
Hi,
The new version 0.19 has improved typing support, which is great, but it looks like it doesn't work out of the box. I had similar problems to those described in #245, but after adding the installation to MYPYPATH it works fine.
I think a PEP 561 marker file `py.typed` should be added so that tools like mypy can utilize the inline type information without any configuration. Reading the mypy docs, it looks like there is a downside: `zip_safe` must be disabled for this.
https://mypy.readthedocs.io/en/latest/installed_packages.html
https://www.python.org/dev/peps/pep-0561/
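For illustration, here is a minimal sketch of the kind of packaging change PEP 561 describes for an inline-typed package: an empty `py.typed` marker shipped next to the modules, declared via `package_data`, with `zip_safe` disabled. The exact keyword values below are an assumption for illustration, not this project's actual patch:

```python
# setup.py (sketch, assuming the type information lives inline in the
# "pydantic" package itself)
from setuptools import setup

setup(
    name='pydantic',
    packages=['pydantic'],
    # Ship the empty marker file pydantic/py.typed so type checkers know
    # the package carries inline type information (PEP 561).
    package_data={'pydantic': ['py.typed']},
    # mypy cannot read types out of a zipped egg, hence zip_safe=False.
    zip_safe=False,
)
```

The marker itself is simply an empty file created at `pydantic/py.typed`.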
</issue>
<code>
[start of setup.py]
1 import re
2 from importlib.machinery import SourceFileLoader
3 from pathlib import Path
4 from setuptools import setup
5
6
7 class ReplaceLinks:
8 def __init__(self):
9 self.links = set()
10
11 def replace_issues(self, m):
12 id = m.group(1)
13 self.links.add(f'.. _#{id}: https://github.com/samuelcolvin/pydantic/issues/{id}')
14 return f'`#{id}`_'
15
16 def replace_users(self, m):
17 name = m.group(2)
18 self.links.add(f'.. _@{name}: https://github.com/{name}')
19 return f'{m.group(1)}`@{name}`_'
20
21 def extra(self):
22 return '\n\n' + '\n'.join(self.links) + '\n'
23
24
25 description = 'Data validation and settings management using python 3.6 type hinting'
26 THIS_DIR = Path(__file__).resolve().parent
27 try:
28 history = THIS_DIR.joinpath('HISTORY.rst').read_text()
29
30 replacer = ReplaceLinks()
31 history = re.sub(r'#(\d+)', replacer.replace_issues, history)
32 history = re.sub(r'( +)@(\w+)', replacer.replace_users, history, flags=re.I)
33 history = re.sub(r'@@', '@', history)
34 history += replacer.extra()
35
36 long_description = '\n\n'.join([THIS_DIR.joinpath('README.rst').read_text(), history])
37 except FileNotFoundError:
38 long_description = description + '.\n\nSee https://pydantic-docs.helpmanual.io/ for documentation.'
39
40 # avoid loading the package before requirements are installed:
41 version = SourceFileLoader('version', 'pydantic/version.py').load_module()
42
43 setup(
44 name='pydantic',
45 version=str(version.VERSION),
46 description=description,
47 long_description=long_description,
48 classifiers=[
49 'Development Status :: 5 - Production/Stable',
50 'Programming Language :: Python',
51 'Programming Language :: Python :: 3',
52 'Programming Language :: Python :: 3 :: Only',
53 'Programming Language :: Python :: 3.6',
54 'Programming Language :: Python :: 3.7',
55 'Intended Audience :: Developers',
56 'Intended Audience :: Information Technology',
57 'Intended Audience :: System Administrators',
58 'License :: OSI Approved :: MIT License',
59 'Operating System :: Unix',
60 'Operating System :: POSIX :: Linux',
61 'Environment :: Console',
62 'Environment :: MacOS X',
63 'Topic :: Software Development :: Libraries :: Python Modules',
64 'Topic :: Internet',
65 ],
66 author='Samuel Colvin',
67 author_email='[email protected]',
68 url='https://github.com/samuelcolvin/pydantic',
69 license='MIT',
70 packages=['pydantic'],
71 python_requires='>=3.6',
72 zip_safe=True,
73 install_requires=[
74 'dataclasses>=0.6;python_version<"3.7"'
75 ],
76 extras_require={
77 'ujson': ['ujson>=1.35'],
78 'email': ['email-validator>=1.0.3'],
79 }
80 )
81
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -68,8 +68,9 @@
url='https://github.com/samuelcolvin/pydantic',
license='MIT',
packages=['pydantic'],
+ package_data={'pydantic': ['py.typed']},
python_requires='>=3.6',
- zip_safe=True,
+ zip_safe=False, # https://mypy.readthedocs.io/en/latest/installed_packages.html
install_requires=[
'dataclasses>=0.6;python_version<"3.7"'
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -68,8 +68,9 @@\n url='https://github.com/samuelcolvin/pydantic',\n license='MIT',\n packages=['pydantic'],\n+ package_data={'pydantic': ['py.typed']},\n python_requires='>=3.6',\n- zip_safe=True,\n+ zip_safe=False, # https://mypy.readthedocs.io/en/latest/installed_packages.html\n install_requires=[\n 'dataclasses>=0.6;python_version<\"3.7\"'\n ],\n", "issue": "Include a PEP 561 marker file\n# Feature Request\r\n\r\nHi,\r\n\r\nThe new version 0.19 has improved typing support which is great, but looks like it doesn't work out of the box. I had similar problems as described in #245 , but after adding the installation to MYPYPATH it works fine.\r\n\r\nI think a PEP 561 marker file `py.typed` should be added so that tools like mypy can utilize the inline type information without any configuration. Reading mypy docs looks like there is a downside that `zip_safe` must be disabled for this.\r\n\r\nhttps://mypy.readthedocs.io/en/latest/installed_packages.html\r\nhttps://www.python.org/dev/peps/pep-0561/\nInclude a PEP 561 marker file\n# Feature Request\r\n\r\nHi,\r\n\r\nThe new version 0.19 has improved typing support which is great, but looks like it doesn't work out of the box. I had similar problems as described in #245 , but after adding the installation to MYPYPATH it works fine.\r\n\r\nI think a PEP 561 marker file `py.typed` should be added so that tools like mypy can utilize the inline type information without any configuration. Reading mypy docs looks like there is a downside that `zip_safe` must be disabled for this.\r\n\r\nhttps://mypy.readthedocs.io/en/latest/installed_packages.html\r\nhttps://www.python.org/dev/peps/pep-0561/\n", "before_files": [{"content": "import re\nfrom importlib.machinery import SourceFileLoader\nfrom pathlib import Path\nfrom setuptools import setup\n\n\nclass ReplaceLinks:\n def __init__(self):\n self.links = set()\n\n def replace_issues(self, m):\n id = m.group(1)\n self.links.add(f'.. _#{id}: https://github.com/samuelcolvin/pydantic/issues/{id}')\n return f'`#{id}`_'\n\n def replace_users(self, m):\n name = m.group(2)\n self.links.add(f'.. 
_@{name}: https://github.com/{name}')\n return f'{m.group(1)}`@{name}`_'\n\n def extra(self):\n return '\\n\\n' + '\\n'.join(self.links) + '\\n'\n\n\ndescription = 'Data validation and settings management using python 3.6 type hinting'\nTHIS_DIR = Path(__file__).resolve().parent\ntry:\n history = THIS_DIR.joinpath('HISTORY.rst').read_text()\n\n replacer = ReplaceLinks()\n history = re.sub(r'#(\\d+)', replacer.replace_issues, history)\n history = re.sub(r'( +)@(\\w+)', replacer.replace_users, history, flags=re.I)\n history = re.sub(r'@@', '@', history)\n history += replacer.extra()\n\n long_description = '\\n\\n'.join([THIS_DIR.joinpath('README.rst').read_text(), history])\nexcept FileNotFoundError:\n long_description = description + '.\\n\\nSee https://pydantic-docs.helpmanual.io/ for documentation.'\n\n# avoid loading the package before requirements are installed:\nversion = SourceFileLoader('version', 'pydantic/version.py').load_module()\n\nsetup(\n name='pydantic',\n version=str(version.VERSION),\n description=description,\n long_description=long_description,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: Unix',\n 'Operating System :: POSIX :: Linux',\n 'Environment :: Console',\n 'Environment :: MacOS X',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet',\n ],\n author='Samuel Colvin',\n author_email='[email protected]',\n url='https://github.com/samuelcolvin/pydantic',\n license='MIT',\n packages=['pydantic'],\n python_requires='>=3.6',\n zip_safe=True,\n install_requires=[\n 'dataclasses>=0.6;python_version<\"3.7\"'\n ],\n extras_require={\n 'ujson': ['ujson>=1.35'],\n 'email': ['email-validator>=1.0.3'],\n }\n)\n", "path": "setup.py"}]} | 1,676 | 134 |
gh_patches_debug_9907 | rasdani/github-patches | git_diff | netbox-community__netbox-13028 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sporadically broken HTML formatting since at least 3.5.0
### NetBox version
v3.5.1
### Python version
3.10
### Steps to Reproduce
Spontaneously observed behavior:
1. Open any list
2. Click on any object
3. Press "Go back" (or ALT+←) in the browser
### Expected Behavior
The list should be rendered as usual.
### Observed Behavior
Only part of the page is loaded, without the sidebar and with an empty head tag:

</issue>
<code>
[start of netbox/netbox/middleware.py]
1 import logging
2 import uuid
3 from urllib import parse
4
5 from django.conf import settings
6 from django.contrib import auth, messages
7 from django.contrib.auth.middleware import RemoteUserMiddleware as RemoteUserMiddleware_
8 from django.core.exceptions import ImproperlyConfigured
9 from django.db import connection, ProgrammingError
10 from django.db.utils import InternalError
11 from django.http import Http404, HttpResponseRedirect
12
13 from extras.context_managers import change_logging
14 from netbox.config import clear_config, get_config
15 from netbox.views import handler_500
16 from utilities.api import is_api_request, rest_api_server_error
17
18 __all__ = (
19 'CoreMiddleware',
20 'MaintenanceModeMiddleware',
21 'RemoteUserMiddleware',
22 )
23
24
25 class CoreMiddleware:
26
27 def __init__(self, get_response):
28 self.get_response = get_response
29
30 def __call__(self, request):
31
32 # Assign a random unique ID to the request. This will be used for change logging.
33 request.id = uuid.uuid4()
34
35 # Enforce the LOGIN_REQUIRED config parameter. If true, redirect all non-exempt unauthenticated requests
36 # to the login page.
37 if (
38 settings.LOGIN_REQUIRED and
39 not request.user.is_authenticated and
40 not request.path_info.startswith(settings.AUTH_EXEMPT_PATHS)
41 ):
42 login_url = f'{settings.LOGIN_URL}?next={parse.quote(request.get_full_path_info())}'
43 return HttpResponseRedirect(login_url)
44
45 # Enable the change_logging context manager and process the request.
46 with change_logging(request):
47 response = self.get_response(request)
48
49 # Attach the unique request ID as an HTTP header.
50 response['X-Request-ID'] = request.id
51
52 # If this is an API request, attach an HTTP header annotating the API version (e.g. '3.5').
53 if is_api_request(request):
54 response['API-Version'] = settings.REST_FRAMEWORK_VERSION
55
56 # Clear any cached dynamic config parameters after each request.
57 clear_config()
58
59 return response
60
61 def process_exception(self, request, exception):
62 """
63 Implement custom error handling logic for production deployments.
64 """
65 # Don't catch exceptions when in debug mode
66 if settings.DEBUG:
67 return
68
69 # Cleanly handle exceptions that occur from REST API requests
70 if is_api_request(request):
71 return rest_api_server_error(request)
72
73 # Ignore Http404s (defer to Django's built-in 404 handling)
74 if isinstance(exception, Http404):
75 return
76
77 # Determine the type of exception. If it's a common issue, return a custom error page with instructions.
78 custom_template = None
79 if isinstance(exception, ProgrammingError):
80 custom_template = 'exceptions/programming_error.html'
81 elif isinstance(exception, ImportError):
82 custom_template = 'exceptions/import_error.html'
83 elif isinstance(exception, PermissionError):
84 custom_template = 'exceptions/permission_error.html'
85
86 # Return a custom error message, or fall back to Django's default 500 error handling
87 if custom_template:
88 return handler_500(request, template_name=custom_template)
89
90
91 class RemoteUserMiddleware(RemoteUserMiddleware_):
92 """
93 Custom implementation of Django's RemoteUserMiddleware which allows for a user-configurable HTTP header name.
94 """
95 force_logout_if_no_header = False
96
97 @property
98 def header(self):
99 return settings.REMOTE_AUTH_HEADER
100
101 def process_request(self, request):
102 logger = logging.getLogger(
103 'netbox.authentication.RemoteUserMiddleware')
104 # Bypass middleware if remote authentication is not enabled
105 if not settings.REMOTE_AUTH_ENABLED:
106 return
107 # AuthenticationMiddleware is required so that request.user exists.
108 if not hasattr(request, 'user'):
109 raise ImproperlyConfigured(
110 "The Django remote user auth middleware requires the"
111 " authentication middleware to be installed. Edit your"
112 " MIDDLEWARE setting to insert"
113 " 'django.contrib.auth.middleware.AuthenticationMiddleware'"
114 " before the RemoteUserMiddleware class.")
115 try:
116 username = request.META[self.header]
117 except KeyError:
118 # If specified header doesn't exist then remove any existing
119 # authenticated remote-user, or return (leaving request.user set to
120 # AnonymousUser by the AuthenticationMiddleware).
121 if self.force_logout_if_no_header and request.user.is_authenticated:
122 self._remove_invalid_user(request)
123 return
124 # If the user is already authenticated and that user is the user we are
125 # getting passed in the headers, then the correct user is already
126 # persisted in the session and we don't need to continue.
127 if request.user.is_authenticated:
128 if request.user.get_username() == self.clean_username(username, request):
129 return
130 else:
131 # An authenticated user is associated with the request, but
132 # it does not match the authorized user in the header.
133 self._remove_invalid_user(request)
134
135 # We are seeing this user for the first time in this session, attempt
136 # to authenticate the user.
137 if settings.REMOTE_AUTH_GROUP_SYNC_ENABLED:
138 logger.debug("Trying to sync Groups")
139 user = auth.authenticate(
140 request, remote_user=username, remote_groups=self._get_groups(request))
141 else:
142 user = auth.authenticate(request, remote_user=username)
143 if user:
144 # User is valid.
145 # Update the User's Profile if set by request headers
146 if settings.REMOTE_AUTH_USER_FIRST_NAME in request.META:
147 user.first_name = request.META[settings.REMOTE_AUTH_USER_FIRST_NAME]
148 if settings.REMOTE_AUTH_USER_LAST_NAME in request.META:
149 user.last_name = request.META[settings.REMOTE_AUTH_USER_LAST_NAME]
150 if settings.REMOTE_AUTH_USER_EMAIL in request.META:
151 user.email = request.META[settings.REMOTE_AUTH_USER_EMAIL]
152 user.save()
153
154 # Set request.user and persist user in the session
155 # by logging the user in.
156 request.user = user
157 auth.login(request, user)
158
159 def _get_groups(self, request):
160 logger = logging.getLogger(
161 'netbox.authentication.RemoteUserMiddleware')
162
163 groups_string = request.META.get(
164 settings.REMOTE_AUTH_GROUP_HEADER, None)
165 if groups_string:
166 groups = groups_string.split(settings.REMOTE_AUTH_GROUP_SEPARATOR)
167 else:
168 groups = []
169 logger.debug(f"Groups are {groups}")
170 return groups
171
172
173 class MaintenanceModeMiddleware:
174 """
175 Middleware that checks if the application is in maintenance mode
176 and restricts write-related operations to the database.
177 """
178
179 def __init__(self, get_response):
180 self.get_response = get_response
181
182 def __call__(self, request):
183 if get_config().MAINTENANCE_MODE:
184 self._set_session_type(
185 allow_write=request.path_info.startswith(settings.MAINTENANCE_EXEMPT_PATHS)
186 )
187
188 return self.get_response(request)
189
190 @staticmethod
191 def _set_session_type(allow_write):
192 """
193 Prevent any write-related database operations.
194
195 Args:
196 allow_write (bool): If True, write operations will be permitted.
197 """
198 with connection.cursor() as cursor:
199 mode = 'READ WRITE' if allow_write else 'READ ONLY'
200 cursor.execute(f'SET SESSION CHARACTERISTICS AS TRANSACTION {mode};')
201
202 def process_exception(self, request, exception):
203 """
204 Prevent any write-related database operations if an exception is raised.
205 """
206 if get_config().MAINTENANCE_MODE and isinstance(exception, InternalError):
207 error_message = 'NetBox is currently operating in maintenance mode and is unable to perform write ' \
208 'operations. Please try again later.'
209
210 if is_api_request(request):
211 return rest_api_server_error(request, error=error_message)
212
213 messages.error(request, error_message)
214 return HttpResponseRedirect(request.path_info)
215
[end of netbox/netbox/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/netbox/middleware.py b/netbox/netbox/middleware.py
--- a/netbox/netbox/middleware.py
+++ b/netbox/netbox/middleware.py
@@ -49,6 +49,9 @@
# Attach the unique request ID as an HTTP header.
response['X-Request-ID'] = request.id
+ # Enable the Vary header to help with caching of HTMX responses
+ response['Vary'] = 'HX-Request'
+
# If this is an API request, attach an HTTP header annotating the API version (e.g. '3.5').
if is_api_request(request):
response['API-Version'] = settings.REST_FRAMEWORK_VERSION
| {"golden_diff": "diff --git a/netbox/netbox/middleware.py b/netbox/netbox/middleware.py\n--- a/netbox/netbox/middleware.py\n+++ b/netbox/netbox/middleware.py\n@@ -49,6 +49,9 @@\n # Attach the unique request ID as an HTTP header.\n response['X-Request-ID'] = request.id\n \n+ # Enable the Vary header to help with caching of HTMX responses\n+ response['Vary'] = 'HX-Request'\n+\n # If this is an API request, attach an HTTP header annotating the API version (e.g. '3.5').\n if is_api_request(request):\n response['API-Version'] = settings.REST_FRAMEWORK_VERSION\n", "issue": "Sporadically broken HTML formatting since at least 3.5.0\n### NetBox version\n\nv3.5.1\n\n### Python version\n\n3.10\n\n### Steps to Reproduce\n\nSpontaneously observed behavior:\r\n1. Open any list\r\n2. Click to any object\r\n3. Press \"Go back\" (or ALT+\u2190) in browser\n\n### Expected Behavior\n\nList will be rendered as usual.\n\n### Observed Behavior\n\nLoaded only part of page, without sidebar, with empty head tag:\r\n\r\n\r\n\n", "before_files": [{"content": "import logging\nimport uuid\nfrom urllib import parse\n\nfrom django.conf import settings\nfrom django.contrib import auth, messages\nfrom django.contrib.auth.middleware import RemoteUserMiddleware as RemoteUserMiddleware_\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import connection, ProgrammingError\nfrom django.db.utils import InternalError\nfrom django.http import Http404, HttpResponseRedirect\n\nfrom extras.context_managers import change_logging\nfrom netbox.config import clear_config, get_config\nfrom netbox.views import handler_500\nfrom utilities.api import is_api_request, rest_api_server_error\n\n__all__ = (\n 'CoreMiddleware',\n 'MaintenanceModeMiddleware',\n 'RemoteUserMiddleware',\n)\n\n\nclass CoreMiddleware:\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n\n # Assign a random unique ID to the request. This will be used for change logging.\n request.id = uuid.uuid4()\n\n # Enforce the LOGIN_REQUIRED config parameter. If true, redirect all non-exempt unauthenticated requests\n # to the login page.\n if (\n settings.LOGIN_REQUIRED and\n not request.user.is_authenticated and\n not request.path_info.startswith(settings.AUTH_EXEMPT_PATHS)\n ):\n login_url = f'{settings.LOGIN_URL}?next={parse.quote(request.get_full_path_info())}'\n return HttpResponseRedirect(login_url)\n\n # Enable the change_logging context manager and process the request.\n with change_logging(request):\n response = self.get_response(request)\n\n # Attach the unique request ID as an HTTP header.\n response['X-Request-ID'] = request.id\n\n # If this is an API request, attach an HTTP header annotating the API version (e.g. '3.5').\n if is_api_request(request):\n response['API-Version'] = settings.REST_FRAMEWORK_VERSION\n\n # Clear any cached dynamic config parameters after each request.\n clear_config()\n\n return response\n\n def process_exception(self, request, exception):\n \"\"\"\n Implement custom error handling logic for production deployments.\n \"\"\"\n # Don't catch exceptions when in debug mode\n if settings.DEBUG:\n return\n\n # Cleanly handle exceptions that occur from REST API requests\n if is_api_request(request):\n return rest_api_server_error(request)\n\n # Ignore Http404s (defer to Django's built-in 404 handling)\n if isinstance(exception, Http404):\n return\n\n # Determine the type of exception. 
If it's a common issue, return a custom error page with instructions.\n custom_template = None\n if isinstance(exception, ProgrammingError):\n custom_template = 'exceptions/programming_error.html'\n elif isinstance(exception, ImportError):\n custom_template = 'exceptions/import_error.html'\n elif isinstance(exception, PermissionError):\n custom_template = 'exceptions/permission_error.html'\n\n # Return a custom error message, or fall back to Django's default 500 error handling\n if custom_template:\n return handler_500(request, template_name=custom_template)\n\n\nclass RemoteUserMiddleware(RemoteUserMiddleware_):\n \"\"\"\n Custom implementation of Django's RemoteUserMiddleware which allows for a user-configurable HTTP header name.\n \"\"\"\n force_logout_if_no_header = False\n\n @property\n def header(self):\n return settings.REMOTE_AUTH_HEADER\n\n def process_request(self, request):\n logger = logging.getLogger(\n 'netbox.authentication.RemoteUserMiddleware')\n # Bypass middleware if remote authentication is not enabled\n if not settings.REMOTE_AUTH_ENABLED:\n return\n # AuthenticationMiddleware is required so that request.user exists.\n if not hasattr(request, 'user'):\n raise ImproperlyConfigured(\n \"The Django remote user auth middleware requires the\"\n \" authentication middleware to be installed. Edit your\"\n \" MIDDLEWARE setting to insert\"\n \" 'django.contrib.auth.middleware.AuthenticationMiddleware'\"\n \" before the RemoteUserMiddleware class.\")\n try:\n username = request.META[self.header]\n except KeyError:\n # If specified header doesn't exist then remove any existing\n # authenticated remote-user, or return (leaving request.user set to\n # AnonymousUser by the AuthenticationMiddleware).\n if self.force_logout_if_no_header and request.user.is_authenticated:\n self._remove_invalid_user(request)\n return\n # If the user is already authenticated and that user is the user we are\n # getting passed in the headers, then the correct user is already\n # persisted in the session and we don't need to continue.\n if request.user.is_authenticated:\n if request.user.get_username() == self.clean_username(username, request):\n return\n else:\n # An authenticated user is associated with the request, but\n # it does not match the authorized user in the header.\n self._remove_invalid_user(request)\n\n # We are seeing this user for the first time in this session, attempt\n # to authenticate the user.\n if settings.REMOTE_AUTH_GROUP_SYNC_ENABLED:\n logger.debug(\"Trying to sync Groups\")\n user = auth.authenticate(\n request, remote_user=username, remote_groups=self._get_groups(request))\n else:\n user = auth.authenticate(request, remote_user=username)\n if user:\n # User is valid.\n # Update the User's Profile if set by request headers\n if settings.REMOTE_AUTH_USER_FIRST_NAME in request.META:\n user.first_name = request.META[settings.REMOTE_AUTH_USER_FIRST_NAME]\n if settings.REMOTE_AUTH_USER_LAST_NAME in request.META:\n user.last_name = request.META[settings.REMOTE_AUTH_USER_LAST_NAME]\n if settings.REMOTE_AUTH_USER_EMAIL in request.META:\n user.email = request.META[settings.REMOTE_AUTH_USER_EMAIL]\n user.save()\n\n # Set request.user and persist user in the session\n # by logging the user in.\n request.user = user\n auth.login(request, user)\n\n def _get_groups(self, request):\n logger = logging.getLogger(\n 'netbox.authentication.RemoteUserMiddleware')\n\n groups_string = request.META.get(\n settings.REMOTE_AUTH_GROUP_HEADER, None)\n if groups_string:\n groups = 
groups_string.split(settings.REMOTE_AUTH_GROUP_SEPARATOR)\n else:\n groups = []\n logger.debug(f\"Groups are {groups}\")\n return groups\n\n\nclass MaintenanceModeMiddleware:\n \"\"\"\n Middleware that checks if the application is in maintenance mode\n and restricts write-related operations to the database.\n \"\"\"\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n if get_config().MAINTENANCE_MODE:\n self._set_session_type(\n allow_write=request.path_info.startswith(settings.MAINTENANCE_EXEMPT_PATHS)\n )\n\n return self.get_response(request)\n\n @staticmethod\n def _set_session_type(allow_write):\n \"\"\"\n Prevent any write-related database operations.\n\n Args:\n allow_write (bool): If True, write operations will be permitted.\n \"\"\"\n with connection.cursor() as cursor:\n mode = 'READ WRITE' if allow_write else 'READ ONLY'\n cursor.execute(f'SET SESSION CHARACTERISTICS AS TRANSACTION {mode};')\n\n def process_exception(self, request, exception):\n \"\"\"\n Prevent any write-related database operations if an exception is raised.\n \"\"\"\n if get_config().MAINTENANCE_MODE and isinstance(exception, InternalError):\n error_message = 'NetBox is currently operating in maintenance mode and is unable to perform write ' \\\n 'operations. Please try again later.'\n\n if is_api_request(request):\n return rest_api_server_error(request, error=error_message)\n\n messages.error(request, error_message)\n return HttpResponseRedirect(request.path_info)\n", "path": "netbox/netbox/middleware.py"}]} | 2,903 | 155 |
gh_patches_debug_4719 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-946 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
valid_batch_size
In the train_single.py file, lines 120 to 124:
```
def train_iter_fct(): return build_dataset_iter(
lazily_load_dataset("train", opt), fields, opt)
def valid_iter_fct(): return build_dataset_iter(
lazily_load_dataset("valid", opt), fields)
```
should be changed to:
```
def train_iter_fct(): return build_dataset_iter(
lazily_load_dataset("train", opt), fields, opt)
def valid_iter_fct(): return build_dataset_iter(
lazily_load_dataset("valid", opt), fields, opt, is_train=False)
```
Without this change, the validation iterator will not use `valid_batch_size`.
</issue>
<code>
[start of onmt/train_single.py]
1 #!/usr/bin/env python
2 """
3 Training on a single process
4 """
5 from __future__ import division
6
7 import argparse
8 import os
9 import random
10 import torch
11
12 import onmt.opts as opts
13
14 from onmt.inputters.inputter import build_dataset_iter, lazily_load_dataset, \
15 _load_fields, _collect_report_features
16 from onmt.model_builder import build_model
17 from onmt.utils.optimizers import build_optim
18 from onmt.trainer import build_trainer
19 from onmt.models import build_model_saver
20 from onmt.utils.logging import init_logger, logger
21
22
23 def _check_save_model_path(opt):
24 save_model_path = os.path.abspath(opt.save_model)
25 model_dirname = os.path.dirname(save_model_path)
26 if not os.path.exists(model_dirname):
27 os.makedirs(model_dirname)
28
29
30 def _tally_parameters(model):
31 n_params = sum([p.nelement() for p in model.parameters()])
32 enc = 0
33 dec = 0
34 for name, param in model.named_parameters():
35 if 'encoder' in name:
36 enc += param.nelement()
37 elif 'decoder' or 'generator' in name:
38 dec += param.nelement()
39 return n_params, enc, dec
40
41
42 def training_opt_postprocessing(opt):
43 if opt.word_vec_size != -1:
44 opt.src_word_vec_size = opt.word_vec_size
45 opt.tgt_word_vec_size = opt.word_vec_size
46
47 if opt.layers != -1:
48 opt.enc_layers = opt.layers
49 opt.dec_layers = opt.layers
50
51 opt.brnn = (opt.encoder_type == "brnn")
52
53 if opt.rnn_type == "SRU" and not opt.gpuid:
54 raise AssertionError("Using SRU requires -gpuid set.")
55
56 if torch.cuda.is_available() and not opt.gpuid:
57 logger.info("WARNING: You have a CUDA device, should run with -gpuid")
58
59 if opt.gpuid:
60 torch.cuda.set_device(opt.device_id)
61 if opt.seed > 0:
62 # this one is needed for torchtext random call (shuffled iterator)
63 # in multi gpu it ensures datasets are read in the same order
64 random.seed(opt.seed)
65 # These ensure same initialization in multi gpu mode
66 torch.manual_seed(opt.seed)
67 torch.cuda.manual_seed(opt.seed)
68
69 return opt
70
71
72 def main(opt):
73 opt = training_opt_postprocessing(opt)
74 init_logger(opt.log_file)
75 # Load checkpoint if we resume from a previous training.
76 if opt.train_from:
77 logger.info('Loading checkpoint from %s' % opt.train_from)
78 checkpoint = torch.load(opt.train_from,
79 map_location=lambda storage, loc: storage)
80 model_opt = checkpoint['opt']
81 else:
82 checkpoint = None
83 model_opt = opt
84
85 # Peek the first dataset to determine the data_type.
86 # (All datasets have the same data_type).
87 first_dataset = next(lazily_load_dataset("train", opt))
88 data_type = first_dataset.data_type
89
90 # Load fields generated from preprocess phase.
91 fields = _load_fields(first_dataset, data_type, opt, checkpoint)
92
93 # Report src/tgt features.
94
95 src_features, tgt_features = _collect_report_features(fields)
96 for j, feat in enumerate(src_features):
97 logger.info(' * src feature %d size = %d'
98 % (j, len(fields[feat].vocab)))
99 for j, feat in enumerate(tgt_features):
100 logger.info(' * tgt feature %d size = %d'
101 % (j, len(fields[feat].vocab)))
102
103 # Build model.
104 model = build_model(model_opt, opt, fields, checkpoint)
105 n_params, enc, dec = _tally_parameters(model)
106 logger.info('encoder: %d' % enc)
107 logger.info('decoder: %d' % dec)
108 logger.info('* number of parameters: %d' % n_params)
109 _check_save_model_path(opt)
110
111 # Build optimizer.
112 optim = build_optim(model, opt, checkpoint)
113
114 # Build model saver
115 model_saver = build_model_saver(model_opt, opt, model, fields, optim)
116
117 trainer = build_trainer(
118 opt, model, fields, optim, data_type, model_saver=model_saver)
119
120 def train_iter_fct(): return build_dataset_iter(
121 lazily_load_dataset("train", opt), fields, opt)
122
123 def valid_iter_fct(): return build_dataset_iter(
124 lazily_load_dataset("valid", opt), fields, opt)
125
126 # Do training.
127 trainer.train(train_iter_fct, valid_iter_fct, opt.train_steps,
128 opt.valid_steps)
129
130 if opt.tensorboard:
131 trainer.report_manager.tensorboard_writer.close()
132
133
134 if __name__ == "__main__":
135 parser = argparse.ArgumentParser(
136 description='train.py',
137 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
138
139 opts.add_md_help_argument(parser)
140 opts.model_opts(parser)
141 opts.train_opts(parser)
142
143 opt = parser.parse_args()
144 main(opt)
145
[end of onmt/train_single.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/onmt/train_single.py b/onmt/train_single.py
--- a/onmt/train_single.py
+++ b/onmt/train_single.py
@@ -121,7 +121,7 @@
lazily_load_dataset("train", opt), fields, opt)
def valid_iter_fct(): return build_dataset_iter(
- lazily_load_dataset("valid", opt), fields, opt)
+ lazily_load_dataset("valid", opt), fields, opt, is_train=False)
# Do training.
trainer.train(train_iter_fct, valid_iter_fct, opt.train_steps,
| {"golden_diff": "diff --git a/onmt/train_single.py b/onmt/train_single.py\n--- a/onmt/train_single.py\n+++ b/onmt/train_single.py\n@@ -121,7 +121,7 @@\n lazily_load_dataset(\"train\", opt), fields, opt)\n \n def valid_iter_fct(): return build_dataset_iter(\n- lazily_load_dataset(\"valid\", opt), fields, opt)\n+ lazily_load_dataset(\"valid\", opt), fields, opt, is_train=False)\n \n # Do training.\n trainer.train(train_iter_fct, valid_iter_fct, opt.train_steps,\n", "issue": "valid_batch_size\nIn the train_single.py file , lines 120 to 124\r\n```\r\ndef train_iter_fct(): return build_dataset_iter(\r\n lazily_load_dataset(\"train\", opt), fields, opt)\r\n\r\n def valid_iter_fct(): return build_dataset_iter(\r\n lazily_load_dataset(\"valid\", opt), fields)\r\n```\r\nshould be changed\r\n```\r\ndef train_iter_fct(): return build_dataset_iter(\r\n lazily_load_dataset(\"train\", opt), fields, opt)\r\n\r\n def valid_iter_fct(): return build_dataset_iter(\r\n lazily_load_dataset(\"valid\", opt), fields, opt, is_train=False)\r\n```\r\nIf it doesn't, it will not use `valid_batch_size`.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\n Training on a single process\n\"\"\"\nfrom __future__ import division\n\nimport argparse\nimport os\nimport random\nimport torch\n\nimport onmt.opts as opts\n\nfrom onmt.inputters.inputter import build_dataset_iter, lazily_load_dataset, \\\n _load_fields, _collect_report_features\nfrom onmt.model_builder import build_model\nfrom onmt.utils.optimizers import build_optim\nfrom onmt.trainer import build_trainer\nfrom onmt.models import build_model_saver\nfrom onmt.utils.logging import init_logger, logger\n\n\ndef _check_save_model_path(opt):\n save_model_path = os.path.abspath(opt.save_model)\n model_dirname = os.path.dirname(save_model_path)\n if not os.path.exists(model_dirname):\n os.makedirs(model_dirname)\n\n\ndef _tally_parameters(model):\n n_params = sum([p.nelement() for p in model.parameters()])\n enc = 0\n dec = 0\n for name, param in model.named_parameters():\n if 'encoder' in name:\n enc += param.nelement()\n elif 'decoder' or 'generator' in name:\n dec += param.nelement()\n return n_params, enc, dec\n\n\ndef training_opt_postprocessing(opt):\n if opt.word_vec_size != -1:\n opt.src_word_vec_size = opt.word_vec_size\n opt.tgt_word_vec_size = opt.word_vec_size\n\n if opt.layers != -1:\n opt.enc_layers = opt.layers\n opt.dec_layers = opt.layers\n\n opt.brnn = (opt.encoder_type == \"brnn\")\n\n if opt.rnn_type == \"SRU\" and not opt.gpuid:\n raise AssertionError(\"Using SRU requires -gpuid set.\")\n\n if torch.cuda.is_available() and not opt.gpuid:\n logger.info(\"WARNING: You have a CUDA device, should run with -gpuid\")\n\n if opt.gpuid:\n torch.cuda.set_device(opt.device_id)\n if opt.seed > 0:\n # this one is needed for torchtext random call (shuffled iterator)\n # in multi gpu it ensures datasets are read in the same order\n random.seed(opt.seed)\n # These ensure same initialization in multi gpu mode\n torch.manual_seed(opt.seed)\n torch.cuda.manual_seed(opt.seed)\n\n return opt\n\n\ndef main(opt):\n opt = training_opt_postprocessing(opt)\n init_logger(opt.log_file)\n # Load checkpoint if we resume from a previous training.\n if opt.train_from:\n logger.info('Loading checkpoint from %s' % opt.train_from)\n checkpoint = torch.load(opt.train_from,\n map_location=lambda storage, loc: storage)\n model_opt = checkpoint['opt']\n else:\n checkpoint = None\n model_opt = opt\n\n # Peek the first dataset to determine the data_type.\n # (All datasets 
have the same data_type).\n first_dataset = next(lazily_load_dataset(\"train\", opt))\n data_type = first_dataset.data_type\n\n # Load fields generated from preprocess phase.\n fields = _load_fields(first_dataset, data_type, opt, checkpoint)\n\n # Report src/tgt features.\n\n src_features, tgt_features = _collect_report_features(fields)\n for j, feat in enumerate(src_features):\n logger.info(' * src feature %d size = %d'\n % (j, len(fields[feat].vocab)))\n for j, feat in enumerate(tgt_features):\n logger.info(' * tgt feature %d size = %d'\n % (j, len(fields[feat].vocab)))\n\n # Build model.\n model = build_model(model_opt, opt, fields, checkpoint)\n n_params, enc, dec = _tally_parameters(model)\n logger.info('encoder: %d' % enc)\n logger.info('decoder: %d' % dec)\n logger.info('* number of parameters: %d' % n_params)\n _check_save_model_path(opt)\n\n # Build optimizer.\n optim = build_optim(model, opt, checkpoint)\n\n # Build model saver\n model_saver = build_model_saver(model_opt, opt, model, fields, optim)\n\n trainer = build_trainer(\n opt, model, fields, optim, data_type, model_saver=model_saver)\n\n def train_iter_fct(): return build_dataset_iter(\n lazily_load_dataset(\"train\", opt), fields, opt)\n\n def valid_iter_fct(): return build_dataset_iter(\n lazily_load_dataset(\"valid\", opt), fields, opt)\n\n # Do training.\n trainer.train(train_iter_fct, valid_iter_fct, opt.train_steps,\n opt.valid_steps)\n\n if opt.tensorboard:\n trainer.report_manager.tensorboard_writer.close()\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(\n description='train.py',\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\n opts.add_md_help_argument(parser)\n opts.model_opts(parser)\n opts.train_opts(parser)\n\n opt = parser.parse_args()\n main(opt)\n", "path": "onmt/train_single.py"}]} | 2,106 | 130 |
gh_patches_debug_38208 | rasdani/github-patches | git_diff | lk-geimfari__mimesis-772 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restructure Numbers provider
# Feature request
## Thesis
While I was implementing the ``matrix()`` function in the ``Numbers`` provider, I was thinking about some changes that we could make in this provider:
- Add a function ``complex(start, end, length)`` that returns a random array of complex numbers (a rough sketch is given at the end of this request)
- Make the API uniform, so that every function in the ``Numbers`` provider has the arguments ``start, end, length`` (where possible). Maybe in the ``complex()`` function we could add ``start_real, end_real, start_imaginary, end_imaginary``?
- Remove the function ``rating()`` and add an argument ``decimal_digits`` to the function ``floats()`` to specify the number of decimal digits to keep.
## Reasoning
I think these changes would make the provider more uniform and easy to use.
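To make the complex-number part of the proposal concrete, here is a rough sketch of what such a helper with a uniform ``start/end/length``-style signature could look like. The function name, parameter names and defaults are assumptions for illustration, not a final API, and a real provider would draw from its own seeded random source rather than the module-level ``random``:

```python
import random
from typing import List


def complexes(start_real: float = 0.0, end_real: float = 1.0,
              start_imag: float = 0.0, end_imag: float = 1.0,
              length: int = 10) -> List[complex]:
    """Return a list of random complex numbers within the given ranges."""
    return [
        complex(random.uniform(start_real, end_real),
                random.uniform(start_imag, end_imag))
        for _ in range(length)
    ]


# e.g. complexes(end_real=10, end_imag=2, length=3)
# -> [(7.3+1.1j), (0.8+0.2j), (4.6+1.9j)]  (values vary per run)
```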
</issue>
<code>
[start of mimesis/providers/numbers.py]
1 # -*- coding: utf-8 -*-
2
3 """Provides data related to numbers."""
4
5 from typing import List, Union
6
7 from mimesis.providers.base import BaseProvider
8
9 __all__ = ['Numbers']
10
11
12 class Numbers(BaseProvider):
13 """Class for generating numbers."""
14
15 class Meta:
16 """Class for metadata."""
17
18 name = 'numbers'
19
20 def floats(self, n: int = 2) -> List[float]:
21 """Generate a list of random float numbers.
22
23 :param n: Raise 10 to the 'n' power.
24 :return: The list of floating-point numbers.
25 """
26 nums = [self.random.random()
27 for _ in range(10 ** int(n))]
28 return nums
29
30 def integers(self, start: int = 0, end: int = 10,
31 length: int = 10) -> List[int]:
32 """Generate a list of random integers.
33
34 Integers can be negative or positive numbers.
35 .. note: You can use both positive and negative numbers.
36
37 :param start: Start.
38 :param end: End.
39 :param length: Length of list.
40 :return: List of integers.
41
42 :Example:
43 [-20, -19, -18, -17]
44 """
45 return self.random.randints(
46 length, start, end)
47
48 @staticmethod
49 def primes(start: int = 1, end: int = 999) -> List[int]:
50 """Generate a list of prime numbers.
51
52 :param start: First value of range.
53 :param end: Last value of range.
54 :return: A list of prime numbers from start to end.
55 """
56 # TODO: It should generate random primes with passed length.
57 sieve_size = (end // 2 - 1) if end % 2 == 0 else (end // 2)
58 sieve = [True] * sieve_size
59
60 primes = [] # list of primes
61 # add 2 to the list if it's in the given range
62 if end >= 2:
63 primes.append(2)
64 for i in range(sieve_size):
65 if sieve[i]:
66 value_at_i = i * 2 + 3
67 primes.append(value_at_i)
68 for j in range(i, sieve_size, value_at_i):
69 sieve[j] = False
70
71 chop_index = 0
72 for i in range(len(primes)):
73 if primes[i] >= start:
74 chop_index = i
75 break
76 return primes[chop_index:]
77
78 def digit(self, to_bin: bool = False) -> Union[str, int]:
79 """Get a random digit.
80
81 :param to_bin: If True then convert to binary.
82 :return: Digit.
83
84 :Example:
85 4.
86 """
87 digit = self.random.randint(0, 9)
88
89 if to_bin:
90 return bin(digit)
91
92 return digit
93
94 def between(self, minimum: int = 1, maximum: int = 1000) -> int:
95 """Generate a random number between minimum and maximum.
96
97 :param minimum: Minimum of range.
98 :param maximum: Maximum of range.
99 :return: Number.
100 """
101 return self.random.randint(minimum, maximum)
102
103 def rating(self, maximum: float = 5.0) -> float:
104 """Generate a random rating for something.
105
106 :param maximum: Maximum value (default is 5.0).
107 :return: Rating.
108
109 :Example:
110 4.7
111 """
112 return self.random.uniform(0, maximum, 1)
113
[end of mimesis/providers/numbers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mimesis/providers/numbers.py b/mimesis/providers/numbers.py
--- a/mimesis/providers/numbers.py
+++ b/mimesis/providers/numbers.py
@@ -17,18 +17,20 @@
name = 'numbers'
- def floats(self, n: int = 2) -> List[float]:
+ def floats(self, start: float = 0, end: float = 1, n: int = 10,
+ rounding: int = 15) -> List[float]:
"""Generate a list of random float numbers.
- :param n: Raise 10 to the 'n' power.
+ :param start: Start range.
+ :param end: End range.
+ :param n: Length of the list.
+ :param rounding: Max number of decimal digits.
:return: The list of floating-point numbers.
"""
- nums = [self.random.random()
- for _ in range(10 ** int(n))]
- return nums
+ return [self.random.uniform(start, end, rounding) for _ in range(n)]
def integers(self, start: int = 0, end: int = 10,
- length: int = 10) -> List[int]:
+ n: int = 10) -> List[int]:
"""Generate a list of random integers.
Integers can be negative or positive numbers.
@@ -36,14 +38,33 @@
:param start: Start.
:param end: End.
- :param length: Length of list.
+ :param n: Length of list.
:return: List of integers.
:Example:
[-20, -19, -18, -17]
"""
- return self.random.randints(
- length, start, end)
+ return self.random.randints(n, start, end)
+
+ def complexes(self, start_real: float = 0, end_real: float = 1,
+ start_imag: float = 0, end_imag: float = 1,
+ rounding_real: int = 15, rounding_imag: int = 15,
+ n: int = 10) -> List[complex]:
+ """Generate a list of random complex numbers.
+
+ :param start_real: Start real range.
+ :param end_real: End real range.
+ :param start_imag: Start imaginary range.
+ :param end_imag: End imaginary range.
+ :param rounding_real: Rounding real part.
+ :param rounding_imag: Roungind imaginary part.
+ :param n: Length of the list.
+ :return: A list of random complex numbers.
+ """
+ return [
+ complex(self.random.uniform(start_real, end_real, rounding_real),
+ self.random.uniform(start_imag, end_imag, rounding_imag))
+ for _ in range(n)]
@staticmethod
def primes(start: int = 1, end: int = 999) -> List[int]:
@@ -99,14 +120,3 @@
:return: Number.
"""
return self.random.randint(minimum, maximum)
-
- def rating(self, maximum: float = 5.0) -> float:
- """Generate a random rating for something.
-
- :param maximum: Maximum value (default is 5.0).
- :return: Rating.
-
- :Example:
- 4.7
- """
- return self.random.uniform(0, maximum, 1)
| {"golden_diff": "diff --git a/mimesis/providers/numbers.py b/mimesis/providers/numbers.py\n--- a/mimesis/providers/numbers.py\n+++ b/mimesis/providers/numbers.py\n@@ -17,18 +17,20 @@\n \n name = 'numbers'\n \n- def floats(self, n: int = 2) -> List[float]:\n+ def floats(self, start: float = 0, end: float = 1, n: int = 10,\n+ rounding: int = 15) -> List[float]:\n \"\"\"Generate a list of random float numbers.\n \n- :param n: Raise 10 to the 'n' power.\n+ :param start: Start range.\n+ :param end: End range.\n+ :param n: Length of the list.\n+ :param rounding: Max number of decimal digits.\n :return: The list of floating-point numbers.\n \"\"\"\n- nums = [self.random.random()\n- for _ in range(10 ** int(n))]\n- return nums\n+ return [self.random.uniform(start, end, rounding) for _ in range(n)]\n \n def integers(self, start: int = 0, end: int = 10,\n- length: int = 10) -> List[int]:\n+ n: int = 10) -> List[int]:\n \"\"\"Generate a list of random integers.\n \n Integers can be negative or positive numbers.\n@@ -36,14 +38,33 @@\n \n :param start: Start.\n :param end: End.\n- :param length: Length of list.\n+ :param n: Length of list.\n :return: List of integers.\n \n :Example:\n [-20, -19, -18, -17]\n \"\"\"\n- return self.random.randints(\n- length, start, end)\n+ return self.random.randints(n, start, end)\n+\n+ def complexes(self, start_real: float = 0, end_real: float = 1,\n+ start_imag: float = 0, end_imag: float = 1,\n+ rounding_real: int = 15, rounding_imag: int = 15,\n+ n: int = 10) -> List[complex]:\n+ \"\"\"Generate a list of random complex numbers.\n+\n+ :param start_real: Start real range.\n+ :param end_real: End real range.\n+ :param start_imag: Start imaginary range.\n+ :param end_imag: End imaginary range.\n+ :param rounding_real: Rounding real part.\n+ :param rounding_imag: Roungind imaginary part.\n+ :param n: Length of the list.\n+ :return: A list of random complex numbers.\n+ \"\"\"\n+ return [\n+ complex(self.random.uniform(start_real, end_real, rounding_real),\n+ self.random.uniform(start_imag, end_imag, rounding_imag))\n+ for _ in range(n)]\n \n @staticmethod\n def primes(start: int = 1, end: int = 999) -> List[int]:\n@@ -99,14 +120,3 @@\n :return: Number.\n \"\"\"\n return self.random.randint(minimum, maximum)\n-\n- def rating(self, maximum: float = 5.0) -> float:\n- \"\"\"Generate a random rating for something.\n-\n- :param maximum: Maximum value (default is 5.0).\n- :return: Rating.\n-\n- :Example:\n- 4.7\n- \"\"\"\n- return self.random.uniform(0, maximum, 1)\n", "issue": "Restructure Numbers provider\n# Feature request\r\n\r\n## Thesis\r\n\r\nWhile I was implementing the ``matrix()`` function in the ``Numbers`` provider and I was thinking about some changes that we could make in this provider:\r\n\r\n- Add a function ``complex(start, end, length)`` that return a random array of complex numbers\r\n- Make the API uniform, so that every function in the ``Numbers`` provider has the arguments ``start, end, length`` (where possible). 
Maybe in the ``complex()`` function we can add ``start_real, end_real, start_imaginary, end_imaginary`` ?\r\n- Remove the function ``ranting()`` and add an argument ``decimal_digits`` in the function ``floats()`` to specify the number of decimal digits to keep.\r\n\r\n## Reasoning\r\n\r\nI think these changes would make the provider more uniform and easy to use.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides data related to numbers.\"\"\"\n\nfrom typing import List, Union\n\nfrom mimesis.providers.base import BaseProvider\n\n__all__ = ['Numbers']\n\n\nclass Numbers(BaseProvider):\n \"\"\"Class for generating numbers.\"\"\"\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'numbers'\n\n def floats(self, n: int = 2) -> List[float]:\n \"\"\"Generate a list of random float numbers.\n\n :param n: Raise 10 to the 'n' power.\n :return: The list of floating-point numbers.\n \"\"\"\n nums = [self.random.random()\n for _ in range(10 ** int(n))]\n return nums\n\n def integers(self, start: int = 0, end: int = 10,\n length: int = 10) -> List[int]:\n \"\"\"Generate a list of random integers.\n\n Integers can be negative or positive numbers.\n .. note: You can use both positive and negative numbers.\n\n :param start: Start.\n :param end: End.\n :param length: Length of list.\n :return: List of integers.\n\n :Example:\n [-20, -19, -18, -17]\n \"\"\"\n return self.random.randints(\n length, start, end)\n\n @staticmethod\n def primes(start: int = 1, end: int = 999) -> List[int]:\n \"\"\"Generate a list of prime numbers.\n\n :param start: First value of range.\n :param end: Last value of range.\n :return: A list of prime numbers from start to end.\n \"\"\"\n # TODO: It should generate random primes with passed length.\n sieve_size = (end // 2 - 1) if end % 2 == 0 else (end // 2)\n sieve = [True] * sieve_size\n\n primes = [] # list of primes\n # add 2 to the list if it's in the given range\n if end >= 2:\n primes.append(2)\n for i in range(sieve_size):\n if sieve[i]:\n value_at_i = i * 2 + 3\n primes.append(value_at_i)\n for j in range(i, sieve_size, value_at_i):\n sieve[j] = False\n\n chop_index = 0\n for i in range(len(primes)):\n if primes[i] >= start:\n chop_index = i\n break\n return primes[chop_index:]\n\n def digit(self, to_bin: bool = False) -> Union[str, int]:\n \"\"\"Get a random digit.\n\n :param to_bin: If True then convert to binary.\n :return: Digit.\n\n :Example:\n 4.\n \"\"\"\n digit = self.random.randint(0, 9)\n\n if to_bin:\n return bin(digit)\n\n return digit\n\n def between(self, minimum: int = 1, maximum: int = 1000) -> int:\n \"\"\"Generate a random number between minimum and maximum.\n\n :param minimum: Minimum of range.\n :param maximum: Maximum of range.\n :return: Number.\n \"\"\"\n return self.random.randint(minimum, maximum)\n\n def rating(self, maximum: float = 5.0) -> float:\n \"\"\"Generate a random rating for something.\n\n :param maximum: Maximum value (default is 5.0).\n :return: Rating.\n\n :Example:\n 4.7\n \"\"\"\n return self.random.uniform(0, maximum, 1)\n", "path": "mimesis/providers/numbers.py"}]} | 1,739 | 795 |
gh_patches_debug_29784 | rasdani/github-patches | git_diff | spectrochempy__spectrochempy-77 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
read_opus() shifts the xaxis
Author: @atravert (Arnaud TRAVERT)
Redmine Issue: 75, https://redmine.spectrochempy.fr/issues/75
---
A bug in the brukeropusreader module leads to a shift of the x-axis.
It has been corrected in the spectrochempy/brukeropusreader fork (PR #1, "FIX wrong setting of wavenumbers axis"), but a change in read_opus() is also needed.
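For context, a short sketch of how the wavenumber axis is reconstructed from the OPUS `AB Data Parameter` block (FXV = first x value, LXV = last x value, NPT = number of stored points), as in the reader code further below. If any of these values is taken from the wrong block, or the point count does not match the stored spectrum, every channel ends up shifted. The numeric values here are made up for illustration:

```python
import numpy as np

# Parameters of this kind come from the "AB Data Parameter" block of an
# OPUS file; the numbers below are invented for the example.
fxv = 3999.6   # wavenumber of the first data point (cm^-1)
lxv = 599.9    # wavenumber of the last data point (cm^-1)
npt = 2568     # number of points actually stored for the spectrum

# The axis must contain exactly `npt` points running from FXV to LXV;
# building it with a point count taken from elsewhere stretches or
# shifts the whole axis relative to the intensities.
xaxis = np.linspace(fxv, lxv, npt)
assert xaxis.size == npt and xaxis[0] == fxv and xaxis[-1] == lxv
```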
</issue>
<code>
[start of spectrochempy/core/readers/readopus.py]
1 # -*- coding: utf-8 -*-
2 #
3 # ======================================================================================================================
4 # Copyright (©) 2015-2020 LCS
5 # Laboratoire Catalyse et Spectrochimie, Caen, France.
6 # CeCILL-B FREE SOFTWARE LICENSE AGREEMENT
7 # See full LICENSE agreement in the root directory
8 # ======================================================================================================================
9
10 """This module to extend NDDataset with the import methods method.
11
12 """
13 __all__ = ['read_opus']
14
15 __dataset_methods__ = __all__
16
17 # ----------------------------------------------------------------------------------------------------------------------
18 # standard imports
19 # ----------------------------------------------------------------------------------------------------------------------
20
21
22 from brukeropusreader import read_file
23 from warnings import warn
24 from datetime import datetime, timezone, timedelta
25 from numpy import linspace
26
27 # ----------------------------------------------------------------------------------------------------------------------
28 # third party imports
29 # ----------------------------------------------------------------------------------------------------------------------
30 # ----------------------------------------------------------------------------------------------------------------------
31 # local imports
32 # ----------------------------------------------------------------------------------------------------------------------
33 from spectrochempy.core import debug_
34 from spectrochempy.core.dataset.nddataset import NDDataset
35 from spectrochempy.core.dataset.ndcoord import Coord
36 from spectrochempy.utils import readfilename
37
38
39 # ======================================================================================================================
40 # Public functions
41 # ======================================================================================================================
42
43 # .............................................................................
44 def read_opus(dataset=None, **kwargs):
45 """Open Bruker Opus file(s) and group them in a single dataset. Only the spectrum is
46 extracted ("AB" field). Returns an error if dimensions are incompatibles.
47
48 Parameters
49 ----------
50 filename : `None`, `str`, or list of `str`
51 Filename of the file(s) to load. If `None` : opens a dialog box to select
52 files. If `str` : a single filename. It list of str :
53 a list of filenames.
54 directory : str, optional, default="".
55 From where to read the specified filename. If not specified, read in
56 the defaults datadir.
57
58 Returns
59 -------
60 dataset : |NDDataset|
61 A dataset corresponding to the (set of) bruker file(s).
62
63 Examples
64 --------
65 >>> A = NDDataset.read_opus('irdata\\spectrum.0001')
66 >>> print(A)
67 NDDataset: [float64] a.u. (shape: (y:1, x:2568))
68 """
69 debug_("reading bruker opus files")
70
71 # filename will be given by a keyword parameter except if the first parameters is already
72 # the filename
73 filename = kwargs.get('filename', None)
74
75 # check if the first parameter is a dataset because we allow not to pass it
76 if not isinstance(dataset, NDDataset):
77 # probably did not specify a dataset
78 # so the first parameters must be the filename
79 if isinstance(dataset, (str, list)) and dataset != '':
80 filename = dataset
81
82 # check if directory was specified
83 directory = kwargs.get("directory", None)
84 sortbydate = kwargs.get("sortbydate", True)
85
86 # returns a list of files to read
87 files = readfilename(filename,
88 directory=directory,
89 filetypes=['Bruker files (*.*)',
90 'all files (*)'],
91 dictionary=False)
92 # todo: see how to use regular expression in Qt filters
93
94 if not files:
95 # there is no files, return nothing
96 return None
97
98 xaxis = None
99 intensities = []
100 names = []
101 acquisitiondates = []
102 timestamps = []
103 for file in files:
104 opus_data = read_file(file)
105 try:
106 opus_data["AB"]
107 except KeyError: # not an absorbance spectrum
108 warn("opus file {} could not be read".format(file))
109 continue
110
111 npt = opus_data['AB Data Parameter']['NPT']
112 fxv = opus_data['AB Data Parameter']['FXV']
113 lxv = opus_data['AB Data Parameter']['LXV']
114 xdata = linspace(fxv, lxv, npt)
115
116 if not xaxis:
117 xaxis = Coord(x=xdata, title='Wavenumbers', units='cm^-1')
118
119 elif (xdata != xaxis.data).any():
120 raise ValueError("spectra have incompatible dimensions (xaxis)")
121
122 intensities.append(opus_data["AB"][:npt])
123 names.append(opus_data["Sample"]['SNM'])
124 acqdate = opus_data["AB Data Parameter"]["DAT"]
125 acqtime = opus_data["AB Data Parameter"]["TIM"]
126 GMT_offset_hour = float(acqtime.split('GMT')[1].split(')')[0])
127 date_time = datetime.strptime(acqdate + '_' + acqtime.split()[0],
128 '%d/%m/%Y_%H:%M:%S.%f')
129 UTC_date_time = date_time - timedelta(hours=GMT_offset_hour)
130 UTC_date_time = UTC_date_time.replace(tzinfo=timezone.utc)
131 # Transform to timestamp for storage in the Coord object
132 # use datetime.fromtimestamp(d, timezone.utc)) to transform back to datetime
133 timestamp = UTC_date_time.timestamp()
134 acquisitiondates.append(UTC_date_time)
135 timestamps.append(timestamp)
136
137 # return if none of the files could be read:
138 if not xaxis:
139 return
140
141 yaxis = Coord(timestamps,
142 title='Acquisition timestamp (GMT)',
143 units='s',
144 labels=(acquisitiondates, names))
145
146 dataset = NDDataset(intensities)
147 dataset.set_coords(y=yaxis, x=xaxis)
148 dataset.units = 'absorbance'
149 dataset.title = 'Absorbance'
150
151 # Set origin, description and history
152 dataset.origin = "opus"
153 dataset.description = ('Dataset from opus files. \n')
154
155 if sortbydate:
156 dataset.sort(dim='y', inplace=True)
157
158 dataset.history = str(datetime.now()) + ':import from opus files \n'
159
160 # Set the NDDataset date
161 dataset._date = datetime.now()
162 dataset._modified = dataset.date
163 # debug_("end of reading")
164
165 return dataset
166
[end of spectrochempy/core/readers/readopus.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/spectrochempy/core/readers/readopus.py b/spectrochempy/core/readers/readopus.py
--- a/spectrochempy/core/readers/readopus.py
+++ b/spectrochempy/core/readers/readopus.py
@@ -65,7 +65,10 @@
>>> A = NDDataset.read_opus('irdata\\spectrum.0001')
>>> print(A)
NDDataset: [float64] a.u. (shape: (y:1, x:2568))
+
+
"""
+
debug_("reading bruker opus files")
# filename will be given by a keyword parameter except if the first parameters is already
@@ -114,7 +117,7 @@
xdata = linspace(fxv, lxv, npt)
if not xaxis:
- xaxis = Coord(x=xdata, title='Wavenumbers', units='cm^-1')
+ xaxis = Coord(xdata, title='Wavenumbers', units='cm^-1')
elif (xdata != xaxis.data).any():
raise ValueError("spectra have incompatible dimensions (xaxis)")
@@ -150,16 +153,13 @@
# Set origin, description and history
dataset.origin = "opus"
- dataset.description = ('Dataset from opus files. \n')
+ dataset.description = 'Dataset from opus files. \n'
if sortbydate:
dataset.sort(dim='y', inplace=True)
- dataset.history = str(datetime.now()) + ':import from opus files \n'
-
- # Set the NDDataset date
+ dataset.history = str(datetime.now()) + ': import from opus files \n'
dataset._date = datetime.now()
dataset._modified = dataset.date
- # debug_("end of reading")
return dataset
| {"golden_diff": "diff --git a/spectrochempy/core/readers/readopus.py b/spectrochempy/core/readers/readopus.py\n--- a/spectrochempy/core/readers/readopus.py\n+++ b/spectrochempy/core/readers/readopus.py\n@@ -65,7 +65,10 @@\n >>> A = NDDataset.read_opus('irdata\\\\spectrum.0001')\n >>> print(A)\n NDDataset: [float64] a.u. (shape: (y:1, x:2568))\n+\n+\n \"\"\"\n+\n debug_(\"reading bruker opus files\")\n \n # filename will be given by a keyword parameter except if the first parameters is already\n@@ -114,7 +117,7 @@\n xdata = linspace(fxv, lxv, npt)\n \n if not xaxis:\n- xaxis = Coord(x=xdata, title='Wavenumbers', units='cm^-1')\n+ xaxis = Coord(xdata, title='Wavenumbers', units='cm^-1')\n \n elif (xdata != xaxis.data).any():\n raise ValueError(\"spectra have incompatible dimensions (xaxis)\")\n@@ -150,16 +153,13 @@\n \n # Set origin, description and history\n dataset.origin = \"opus\"\n- dataset.description = ('Dataset from opus files. \\n')\n+ dataset.description = 'Dataset from opus files. \\n'\n \n if sortbydate:\n dataset.sort(dim='y', inplace=True)\n \n- dataset.history = str(datetime.now()) + ':import from opus files \\n'\n-\n- # Set the NDDataset date\n+ dataset.history = str(datetime.now()) + ': import from opus files \\n'\n dataset._date = datetime.now()\n dataset._modified = dataset.date\n- # debug_(\"end of reading\")\n \n return dataset\n", "issue": "read_opus() shifts the xaxis\nAuthor: @atravert (Arnaud TRAVERT)\n\nRedmine Issue: 75, https://redmine.spectrochempy.fr/issues/75\n\n---\n\nA bug in brukeropusreader module leads to a shift of the xaxis.\r\nIt has been corrected on the spectrochempy/brukeropusreader fork (PR FIX wrong setting of wavenumbers axis #1) \r\nbut a change in read_opus() is also needed.\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# ======================================================================================================================\n# Copyright (\u00a9) 2015-2020 LCS\n# Laboratoire Catalyse et Spectrochimie, Caen, France.\n# CeCILL-B FREE SOFTWARE LICENSE AGREEMENT\n# See full LICENSE agreement in the root directory\n# ======================================================================================================================\n\n\"\"\"This module to extend NDDataset with the import methods method.\n\n\"\"\"\n__all__ = ['read_opus']\n\n__dataset_methods__ = __all__\n\n# ----------------------------------------------------------------------------------------------------------------------\n# standard imports\n# ----------------------------------------------------------------------------------------------------------------------\n\n\nfrom brukeropusreader import read_file\nfrom warnings import warn\nfrom datetime import datetime, timezone, timedelta\nfrom numpy import linspace\n\n# ----------------------------------------------------------------------------------------------------------------------\n# third party imports\n# ----------------------------------------------------------------------------------------------------------------------\n# ----------------------------------------------------------------------------------------------------------------------\n# local imports\n# ----------------------------------------------------------------------------------------------------------------------\nfrom spectrochempy.core import debug_\nfrom spectrochempy.core.dataset.nddataset import NDDataset\nfrom spectrochempy.core.dataset.ndcoord import Coord\nfrom spectrochempy.utils import readfilename\n\n\n# 
======================================================================================================================\n# Public functions\n# ======================================================================================================================\n\n# .............................................................................\ndef read_opus(dataset=None, **kwargs):\n \"\"\"Open Bruker Opus file(s) and group them in a single dataset. Only the spectrum is\n extracted (\"AB\" field). Returns an error if dimensions are incompatibles.\n\n Parameters\n ----------\n filename : `None`, `str`, or list of `str`\n Filename of the file(s) to load. If `None` : opens a dialog box to select\n files. If `str` : a single filename. It list of str :\n a list of filenames.\n directory : str, optional, default=\"\".\n From where to read the specified filename. If not specified, read in\n the defaults datadir.\n\n Returns\n -------\n dataset : |NDDataset|\n A dataset corresponding to the (set of) bruker file(s).\n\n Examples\n --------\n >>> A = NDDataset.read_opus('irdata\\\\spectrum.0001')\n >>> print(A)\n NDDataset: [float64] a.u. (shape: (y:1, x:2568))\n \"\"\"\n debug_(\"reading bruker opus files\")\n\n # filename will be given by a keyword parameter except if the first parameters is already\n # the filename\n filename = kwargs.get('filename', None)\n\n # check if the first parameter is a dataset because we allow not to pass it\n if not isinstance(dataset, NDDataset):\n # probably did not specify a dataset\n # so the first parameters must be the filename\n if isinstance(dataset, (str, list)) and dataset != '':\n filename = dataset\n\n # check if directory was specified\n directory = kwargs.get(\"directory\", None)\n sortbydate = kwargs.get(\"sortbydate\", True)\n\n # returns a list of files to read\n files = readfilename(filename,\n directory=directory,\n filetypes=['Bruker files (*.*)',\n 'all files (*)'],\n dictionary=False)\n # todo: see how to use regular expression in Qt filters\n\n if not files:\n # there is no files, return nothing\n return None\n\n xaxis = None\n intensities = []\n names = []\n acquisitiondates = []\n timestamps = []\n for file in files:\n opus_data = read_file(file)\n try:\n opus_data[\"AB\"]\n except KeyError: # not an absorbance spectrum\n warn(\"opus file {} could not be read\".format(file))\n continue\n\n npt = opus_data['AB Data Parameter']['NPT']\n fxv = opus_data['AB Data Parameter']['FXV']\n lxv = opus_data['AB Data Parameter']['LXV']\n xdata = linspace(fxv, lxv, npt)\n\n if not xaxis:\n xaxis = Coord(x=xdata, title='Wavenumbers', units='cm^-1')\n\n elif (xdata != xaxis.data).any():\n raise ValueError(\"spectra have incompatible dimensions (xaxis)\")\n\n intensities.append(opus_data[\"AB\"][:npt])\n names.append(opus_data[\"Sample\"]['SNM'])\n acqdate = opus_data[\"AB Data Parameter\"][\"DAT\"]\n acqtime = opus_data[\"AB Data Parameter\"][\"TIM\"]\n GMT_offset_hour = float(acqtime.split('GMT')[1].split(')')[0])\n date_time = datetime.strptime(acqdate + '_' + acqtime.split()[0],\n '%d/%m/%Y_%H:%M:%S.%f')\n UTC_date_time = date_time - timedelta(hours=GMT_offset_hour)\n UTC_date_time = UTC_date_time.replace(tzinfo=timezone.utc)\n # Transform to timestamp for storage in the Coord object\n # use datetime.fromtimestamp(d, timezone.utc)) to transform back to datetime\n timestamp = UTC_date_time.timestamp()\n acquisitiondates.append(UTC_date_time)\n timestamps.append(timestamp)\n\n # return if none of the files could be read:\n if not xaxis:\n return\n\n yaxis = 
Coord(timestamps,\n title='Acquisition timestamp (GMT)',\n units='s',\n labels=(acquisitiondates, names))\n\n dataset = NDDataset(intensities)\n dataset.set_coords(y=yaxis, x=xaxis)\n dataset.units = 'absorbance'\n dataset.title = 'Absorbance'\n\n # Set origin, description and history\n dataset.origin = \"opus\"\n dataset.description = ('Dataset from opus files. \\n')\n\n if sortbydate:\n dataset.sort(dim='y', inplace=True)\n\n dataset.history = str(datetime.now()) + ':import from opus files \\n'\n\n # Set the NDDataset date\n dataset._date = datetime.now()\n dataset._modified = dataset.date\n # debug_(\"end of reading\")\n\n return dataset\n", "path": "spectrochempy/core/readers/readopus.py"}]} | 2,327 | 425 |
gh_patches_debug_7357 | rasdani/github-patches | git_diff | scrapy__scrapy-1793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PY3: error decoding Content-Disposition header
This request
```
scrapy shell 'http://npe.com.cn/plus/save_to_doc.php?id=1666'
```
raises this error:
```
Traceback (most recent call last):
File "/Users/kmike/envs/dl/bin/scrapy", line 9, in <module>
load_entry_point('Scrapy', 'console_scripts', 'scrapy')()
File "/Users/kmike/svn/scrapy/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Users/kmike/svn/scrapy/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/Users/kmike/svn/scrapy/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/Users/kmike/svn/scrapy/scrapy/commands/shell.py", line 71, in run
shell.start(url=url)
File "/Users/kmike/svn/scrapy/scrapy/shell.py", line 47, in start
self.fetch(url, spider)
File "/Users/kmike/svn/scrapy/scrapy/shell.py", line 112, in fetch
reactor, self._schedule, request, spider)
File "/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/internet/threads.py", line 122, in blockingCallFromThread
result.raiseException()
File "/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/python/failure.py", line 368, in raiseException
raise self.value.with_traceback(self.tb)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 25: invalid start byte
```
The error points to a wrong location (similar to #1760); the real traceback is
```
Traceback (most recent call last):
File "/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/internet/defer.py", line 1126, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/Users/kmike/svn/scrapy/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
File "/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/internet/defer.py", line 588, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/Users/kmike/svn/scrapy/scrapy/core/downloader/handlers/http11.py", line 272, in _cb_bodydone
respcls = responsetypes.from_args(headers=headers, url=url)
File "/Users/kmike/svn/scrapy/scrapy/responsetypes.py", line 110, in from_args
cls = self.from_headers(headers)
File "/Users/kmike/svn/scrapy/scrapy/responsetypes.py", line 78, in from_headers
cls = self.from_content_disposition(headers[b'Content-Disposition'])
File "/Users/kmike/svn/scrapy/scrapy/responsetypes.py", line 62, in from_content_disposition
filename = to_native_str(content_disposition).split(';')[1].split('=')[1]
File "/Users/kmike/svn/scrapy/scrapy/utils/python.py", line 129, in to_native_str
return to_unicode(text, encoding, errors)
File "/Users/kmike/svn/scrapy/scrapy/utils/python.py", line 107, in to_unicode
return text.decode(encoding, errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 25: invalid start byte
```
It looks like Content-Disposition is decoded using utf-8, but the encoding was not UTF-8.
</issue>
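A small, self-contained sketch of a tolerant decode, in line with the patch further down (Latin-1 maps every byte, so the split can no longer raise UnicodeDecodeError). The helper name is illustrative and not part of Scrapy's API.

```python
# Sketch only: extract a filename from a raw Content-Disposition header
# without assuming the bytes are UTF-8.
def filename_from_content_disposition(header: bytes) -> str:
    text = header.decode('latin-1', errors='replace')  # never raises
    try:
        return text.split(';')[1].split('=')[1].strip('"\'')
    except IndexError:
        return ''

print(filename_from_content_disposition(b'attachment; filename=save_to_doc.doc'))
```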
<code>
[start of scrapy/responsetypes.py]
1 """
2 This module implements a class which returns the appropriate Response class
3 based on different criteria.
4 """
5 from __future__ import absolute_import
6 from mimetypes import MimeTypes
7 from pkgutil import get_data
8 from io import StringIO
9 import six
10
11 from scrapy.http import Response
12 from scrapy.utils.misc import load_object
13 from scrapy.utils.python import isbinarytext, to_bytes, to_native_str
14
15
16 class ResponseTypes(object):
17
18 CLASSES = {
19 'text/html': 'scrapy.http.HtmlResponse',
20 'application/atom+xml': 'scrapy.http.XmlResponse',
21 'application/rdf+xml': 'scrapy.http.XmlResponse',
22 'application/rss+xml': 'scrapy.http.XmlResponse',
23 'application/xhtml+xml': 'scrapy.http.HtmlResponse',
24 'application/vnd.wap.xhtml+xml': 'scrapy.http.HtmlResponse',
25 'application/xml': 'scrapy.http.XmlResponse',
26 'application/json': 'scrapy.http.TextResponse',
27 'application/x-json': 'scrapy.http.TextResponse',
28 'application/javascript': 'scrapy.http.TextResponse',
29 'application/x-javascript': 'scrapy.http.TextResponse',
30 'text/xml': 'scrapy.http.XmlResponse',
31 'text/*': 'scrapy.http.TextResponse',
32 }
33
34 def __init__(self):
35 self.classes = {}
36 self.mimetypes = MimeTypes()
37 mimedata = get_data('scrapy', 'mime.types').decode('utf8')
38 self.mimetypes.readfp(StringIO(mimedata))
39 for mimetype, cls in six.iteritems(self.CLASSES):
40 self.classes[mimetype] = load_object(cls)
41
42 def from_mimetype(self, mimetype):
43 """Return the most appropriate Response class for the given mimetype"""
44 if mimetype is None:
45 return Response
46 elif mimetype in self.classes:
47 return self.classes[mimetype]
48 else:
49 basetype = "%s/*" % mimetype.split('/')[0]
50 return self.classes.get(basetype, Response)
51
52 def from_content_type(self, content_type, content_encoding=None):
53 """Return the most appropriate Response class from an HTTP Content-Type
54 header """
55 if content_encoding:
56 return Response
57 mimetype = to_native_str(content_type).split(';')[0].strip().lower()
58 return self.from_mimetype(mimetype)
59
60 def from_content_disposition(self, content_disposition):
61 try:
62 filename = to_native_str(content_disposition).split(';')[1].split('=')[1]
63 filename = filename.strip('"\'')
64 return self.from_filename(filename)
65 except IndexError:
66 return Response
67
68 def from_headers(self, headers):
69 """Return the most appropriate Response class by looking at the HTTP
70 headers"""
71 cls = Response
72 if b'Content-Type' in headers:
73 cls = self.from_content_type(
74 content_type=headers[b'Content-type'],
75 content_encoding=headers.get(b'Content-Encoding')
76 )
77 if cls is Response and b'Content-Disposition' in headers:
78 cls = self.from_content_disposition(headers[b'Content-Disposition'])
79 return cls
80
81 def from_filename(self, filename):
82 """Return the most appropriate Response class from a file name"""
83 mimetype, encoding = self.mimetypes.guess_type(filename)
84 if mimetype and not encoding:
85 return self.from_mimetype(mimetype)
86 else:
87 return Response
88
89 def from_body(self, body):
90 """Try to guess the appropriate response based on the body content.
91 This method is a bit magic and could be improved in the future, but
92 it's not meant to be used except for special cases where response types
93 cannot be guess using more straightforward methods."""
94 chunk = body[:5000]
95 chunk = to_bytes(chunk)
96 if isbinarytext(chunk):
97 return self.from_mimetype('application/octet-stream')
98 elif b"<html>" in chunk.lower():
99 return self.from_mimetype('text/html')
100 elif b"<?xml" in chunk.lower():
101 return self.from_mimetype('text/xml')
102 else:
103 return self.from_mimetype('text')
104
105 def from_args(self, headers=None, url=None, filename=None, body=None):
106 """Guess the most appropriate Response class based on
107 the given arguments."""
108 cls = Response
109 if headers is not None:
110 cls = self.from_headers(headers)
111 if cls is Response and url is not None:
112 cls = self.from_filename(url)
113 if cls is Response and filename is not None:
114 cls = self.from_filename(filename)
115 if cls is Response and body is not None:
116 cls = self.from_body(body)
117 return cls
118
119 responsetypes = ResponseTypes()
120
[end of scrapy/responsetypes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/responsetypes.py b/scrapy/responsetypes.py
--- a/scrapy/responsetypes.py
+++ b/scrapy/responsetypes.py
@@ -59,7 +59,8 @@
def from_content_disposition(self, content_disposition):
try:
- filename = to_native_str(content_disposition).split(';')[1].split('=')[1]
+ filename = to_native_str(content_disposition,
+ encoding='latin-1', errors='replace').split(';')[1].split('=')[1]
filename = filename.strip('"\'')
return self.from_filename(filename)
except IndexError:
| {"golden_diff": "diff --git a/scrapy/responsetypes.py b/scrapy/responsetypes.py\n--- a/scrapy/responsetypes.py\n+++ b/scrapy/responsetypes.py\n@@ -59,7 +59,8 @@\n \n def from_content_disposition(self, content_disposition):\n try:\n- filename = to_native_str(content_disposition).split(';')[1].split('=')[1]\n+ filename = to_native_str(content_disposition,\n+ encoding='latin-1', errors='replace').split(';')[1].split('=')[1]\n filename = filename.strip('\"\\'')\n return self.from_filename(filename)\n except IndexError:\n", "issue": "PY3: error decoding Content-Disposition header\nThis request\n\n```\nscrapy shell 'http://npe.com.cn/plus/save_to_doc.php?id=1666'\n```\n\nraises this error:\n\n```\nTraceback (most recent call last):\n File \"/Users/kmike/envs/dl/bin/scrapy\", line 9, in <module>\n load_entry_point('Scrapy', 'console_scripts', 'scrapy')()\n File \"/Users/kmike/svn/scrapy/scrapy/cmdline.py\", line 142, in execute\n _run_print_help(parser, _run_command, cmd, args, opts)\n File \"/Users/kmike/svn/scrapy/scrapy/cmdline.py\", line 88, in _run_print_help\n func(*a, **kw)\n File \"/Users/kmike/svn/scrapy/scrapy/cmdline.py\", line 149, in _run_command\n cmd.run(args, opts)\n File \"/Users/kmike/svn/scrapy/scrapy/commands/shell.py\", line 71, in run\n shell.start(url=url)\n File \"/Users/kmike/svn/scrapy/scrapy/shell.py\", line 47, in start\n self.fetch(url, spider)\n File \"/Users/kmike/svn/scrapy/scrapy/shell.py\", line 112, in fetch\n reactor, self._schedule, request, spider)\n File \"/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/internet/threads.py\", line 122, in blockingCallFromThread\n result.raiseException()\n File \"/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/python/failure.py\", line 368, in raiseException\n raise self.value.with_traceback(self.tb)\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 25: invalid start byte\n```\n\nThe error points to a wrong location (similar to #1760); the real traceback is\n\n```\nTraceback (most recent call last):\n File \"/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/internet/defer.py\", line 1126, in _inlineCallbacks\n result = result.throwExceptionIntoGenerator(g)\n File \"/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/python/failure.py\", line 389, in throwExceptionIntoGenerator\n return g.throw(self.type, self.value, self.tb)\n File \"/Users/kmike/svn/scrapy/scrapy/core/downloader/middleware.py\", line 43, in process_request\n defer.returnValue((yield download_func(request=request,spider=spider)))\n File \"/Users/kmike/envs/dl/lib/python3.5/site-packages/Twisted-15.5.0-py3.5.egg/twisted/internet/defer.py\", line 588, in _runCallbacks\n current.result = callback(current.result, *args, **kw)\n File \"/Users/kmike/svn/scrapy/scrapy/core/downloader/handlers/http11.py\", line 272, in _cb_bodydone\n respcls = responsetypes.from_args(headers=headers, url=url)\n File \"/Users/kmike/svn/scrapy/scrapy/responsetypes.py\", line 110, in from_args\n cls = self.from_headers(headers)\n File \"/Users/kmike/svn/scrapy/scrapy/responsetypes.py\", line 78, in from_headers\n cls = self.from_content_disposition(headers[b'Content-Disposition'])\n File \"/Users/kmike/svn/scrapy/scrapy/responsetypes.py\", line 62, in from_content_disposition\n filename = to_native_str(content_disposition).split(';')[1].split('=')[1]\n File \"/Users/kmike/svn/scrapy/scrapy/utils/python.py\", line 129, in to_native_str\n 
return to_unicode(text, encoding, errors)\n File \"/Users/kmike/svn/scrapy/scrapy/utils/python.py\", line 107, in to_unicode\n return text.decode(encoding, errors)\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 25: invalid start byte\n```\n\nIt looks like Content-Disposition is decoded using utf-8, but the encoding was not UTF-8.\n\n", "before_files": [{"content": "\"\"\"\nThis module implements a class which returns the appropriate Response class\nbased on different criteria.\n\"\"\"\nfrom __future__ import absolute_import\nfrom mimetypes import MimeTypes\nfrom pkgutil import get_data\nfrom io import StringIO\nimport six\n\nfrom scrapy.http import Response\nfrom scrapy.utils.misc import load_object\nfrom scrapy.utils.python import isbinarytext, to_bytes, to_native_str\n\n\nclass ResponseTypes(object):\n\n CLASSES = {\n 'text/html': 'scrapy.http.HtmlResponse',\n 'application/atom+xml': 'scrapy.http.XmlResponse',\n 'application/rdf+xml': 'scrapy.http.XmlResponse',\n 'application/rss+xml': 'scrapy.http.XmlResponse',\n 'application/xhtml+xml': 'scrapy.http.HtmlResponse',\n 'application/vnd.wap.xhtml+xml': 'scrapy.http.HtmlResponse',\n 'application/xml': 'scrapy.http.XmlResponse',\n 'application/json': 'scrapy.http.TextResponse',\n 'application/x-json': 'scrapy.http.TextResponse',\n 'application/javascript': 'scrapy.http.TextResponse',\n 'application/x-javascript': 'scrapy.http.TextResponse',\n 'text/xml': 'scrapy.http.XmlResponse',\n 'text/*': 'scrapy.http.TextResponse',\n }\n\n def __init__(self):\n self.classes = {}\n self.mimetypes = MimeTypes()\n mimedata = get_data('scrapy', 'mime.types').decode('utf8')\n self.mimetypes.readfp(StringIO(mimedata))\n for mimetype, cls in six.iteritems(self.CLASSES):\n self.classes[mimetype] = load_object(cls)\n\n def from_mimetype(self, mimetype):\n \"\"\"Return the most appropriate Response class for the given mimetype\"\"\"\n if mimetype is None:\n return Response\n elif mimetype in self.classes:\n return self.classes[mimetype]\n else:\n basetype = \"%s/*\" % mimetype.split('/')[0]\n return self.classes.get(basetype, Response)\n\n def from_content_type(self, content_type, content_encoding=None):\n \"\"\"Return the most appropriate Response class from an HTTP Content-Type\n header \"\"\"\n if content_encoding:\n return Response\n mimetype = to_native_str(content_type).split(';')[0].strip().lower()\n return self.from_mimetype(mimetype)\n\n def from_content_disposition(self, content_disposition):\n try:\n filename = to_native_str(content_disposition).split(';')[1].split('=')[1]\n filename = filename.strip('\"\\'')\n return self.from_filename(filename)\n except IndexError:\n return Response\n\n def from_headers(self, headers):\n \"\"\"Return the most appropriate Response class by looking at the HTTP\n headers\"\"\"\n cls = Response\n if b'Content-Type' in headers:\n cls = self.from_content_type(\n content_type=headers[b'Content-type'],\n content_encoding=headers.get(b'Content-Encoding')\n )\n if cls is Response and b'Content-Disposition' in headers:\n cls = self.from_content_disposition(headers[b'Content-Disposition'])\n return cls\n\n def from_filename(self, filename):\n \"\"\"Return the most appropriate Response class from a file name\"\"\"\n mimetype, encoding = self.mimetypes.guess_type(filename)\n if mimetype and not encoding:\n return self.from_mimetype(mimetype)\n else:\n return Response\n\n def from_body(self, body):\n \"\"\"Try to guess the appropriate response based on the body content.\n This method is a bit magic and could 
be improved in the future, but\n it's not meant to be used except for special cases where response types\n cannot be guess using more straightforward methods.\"\"\"\n chunk = body[:5000]\n chunk = to_bytes(chunk)\n if isbinarytext(chunk):\n return self.from_mimetype('application/octet-stream')\n elif b\"<html>\" in chunk.lower():\n return self.from_mimetype('text/html')\n elif b\"<?xml\" in chunk.lower():\n return self.from_mimetype('text/xml')\n else:\n return self.from_mimetype('text')\n\n def from_args(self, headers=None, url=None, filename=None, body=None):\n \"\"\"Guess the most appropriate Response class based on\n the given arguments.\"\"\"\n cls = Response\n if headers is not None:\n cls = self.from_headers(headers)\n if cls is Response and url is not None:\n cls = self.from_filename(url)\n if cls is Response and filename is not None:\n cls = self.from_filename(filename)\n if cls is Response and body is not None:\n cls = self.from_body(body)\n return cls\n\nresponsetypes = ResponseTypes()\n", "path": "scrapy/responsetypes.py"}]} | 2,817 | 141 |
gh_patches_debug_40197 | rasdani/github-patches | git_diff | fossasia__open-event-server-8379 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Organizer video tab shows errors if there are video rooms not using BBB
Organizer video tab shows errors if there are video rooms not using BBB.


Compare https://eventyay.com/events/3ea940a8/video/all (only accessible to organizers)
Related to https://github.com/fossasia/open-event-frontend/pull/7927
</issue>
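A hedged sketch of the guard implied by the patch further down: the BigBlueButton recordings lookup should only run for streams whose channel provider is actually `bbb` and which have stored meeting metadata. The function is illustrative; only the attribute names mirror `app/api/video_recordings.py`.

```python
# Sketch only: decide whether a video stream should be queried for BBB recordings.
def should_fetch_bbb_recordings(stream) -> bool:
    return (
        stream.channel is not None
        and stream.channel.provider == 'bbb'
        and stream.extra is not None  # holds the BBB meetingID response
    )

# In VideoRecordingList.before_get(), roughly:
#     if should_fetch_bbb_recordings(stream):
#         ...fetch recordings via bbb.request('getRecordings', params)...
#     # non-BBB rooms are simply skipped instead of raising errors
```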
<code>
[start of app/api/video_recordings.py]
1 from datetime import datetime
2
3 from flask_rest_jsonapi import ResourceDetail, ResourceList
4 from flask_rest_jsonapi.resource import ResourceRelationship
5
6 from app.api.helpers.db import get_or_create, safe_query_kwargs
7 from app.api.helpers.errors import ForbiddenError, UnprocessableEntityError
8 from app.api.helpers.permission_manager import has_access
9 from app.api.helpers.permissions import jwt_required
10 from app.api.schema.video_recordings import VideoRecordingSchema
11 from app.api.video_channels.bbb import BigBlueButton
12 from app.models import db
13 from app.models.video_recording import VideoRecording
14 from app.models.video_stream import VideoStream
15
16
17 class VideoRecordingList(ResourceList):
18 def before_get(self, args, kwargs):
19 if kwargs.get('video_stream_id'):
20 stream = safe_query_kwargs(VideoStream, kwargs, 'video_stream_id', 'id')
21
22 if not has_access('is_organizer', event_id=stream.event_id):
23 raise ForbiddenError(
24 {'pointer': 'event_id'},
25 'You need to be the event organizer to access video recordings.',
26 )
27
28 params = dict(
29 meetingID=stream.extra['response']['meetingID'],
30 )
31 channel = stream.channel
32 bbb = BigBlueButton(channel.api_url, channel.api_key)
33 result = bbb.request('getRecordings', params)
34
35 if result.data['response']['recordings']:
36 recordings = []
37 if type(result.data['response']['recordings']['recording']) is list:
38 recordings = result.data['response']['recordings']['recording']
39 else:
40 recordings.append(result.data['response']['recordings']['recording'])
41 for recording in recordings:
42 get_or_create(
43 VideoRecording,
44 bbb_record_id=recording['recordID'],
45 participants=recording['participants'],
46 url=recording['playback']['format']['url'],
47 start_time=datetime.fromtimestamp(
48 int(int(recording['startTime']) / 1000)
49 ),
50 end_time=datetime.fromtimestamp(
51 int(int(recording['endTime']) / 1000)
52 ),
53 video_stream=stream,
54 )
55
56 def query(self, view_kwargs):
57 query_ = VideoRecording.query
58 if view_kwargs.get('video_stream_id'):
59 stream = safe_query_kwargs(VideoStream, view_kwargs, 'video_stream_id')
60 query_ = VideoRecording.query.filter(
61 VideoRecording.video_stream_id == stream.id
62 )
63 else:
64 if not has_access('is_admin'):
65 raise ForbiddenError(
66 {'pointer': 'user'},
67 'You need to be the admin to access video recordings.',
68 )
69
70 return query_
71
72 methods = ['GET']
73 view_kwargs = True
74 decorators = (jwt_required,)
75 schema = VideoRecordingSchema
76 data_layer = {
77 'session': db.session,
78 'model': VideoRecording,
79 'methods': {
80 'query': query,
81 'before_get': before_get,
82 },
83 }
84
85
86 class VideoRecordingDetail(ResourceDetail):
87 def before_get_object(self, view_kwargs):
88 if view_kwargs.get('video_stream_id'):
89 video_stream = safe_query_kwargs(
90 VideoStream,
91 view_kwargs,
92 'video_stream_id',
93 )
94 view_kwargs['id'] = video_stream.id
95
96 def after_get_object(self, video_recording, view_kwargs):
97 if not has_access('is_organizer', event_id=video_recording.video_stream.event_id):
98 raise ForbiddenError(
99 {'pointer': 'event_id'},
100 'You need to be the event organizer to access video recordings.',
101 )
102
103 def before_delete_object(self, video_recording, kwargs):
104 """
105 before delete object method for recording detail
106 :param obj:
107 :param kwargs:
108 :return:
109 """
110 if not has_access('is_admin'):
111 raise ForbiddenError(
112 {'source': 'User'}, 'You are not authorized to access this.'
113 )
114 stream = video_recording.video_stream
115 params = dict(
116 recordID=video_recording.bbb_record_id,
117 )
118 channel = stream.channel
119 bbb = BigBlueButton(channel.api_url, channel.api_key)
120 result = bbb.request('deleteRecordings', params)
121
122 if not result.success:
123 raise UnprocessableEntityError(
124 {'source': 'recording_id'}, 'error while deleting recording'
125 )
126
127 methods = ['GET', 'DELETE']
128 schema = VideoRecordingSchema
129 decorators = (jwt_required,)
130 data_layer = {
131 'session': db.session,
132 'model': VideoRecording,
133 'methods': {
134 'before_get_object': before_get_object,
135 'after_get_object': after_get_object,
136 'before_delete_object': before_delete_object,
137 },
138 }
139
140
141 class VideoRecordingRelationship(ResourceRelationship):
142 schema = VideoRecordingSchema
143 methods = ['GET']
144 data_layer = {'session': db.session, 'model': VideoRecording}
145
[end of app/api/video_recordings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/video_recordings.py b/app/api/video_recordings.py
--- a/app/api/video_recordings.py
+++ b/app/api/video_recordings.py
@@ -19,39 +19,48 @@
if kwargs.get('video_stream_id'):
stream = safe_query_kwargs(VideoStream, kwargs, 'video_stream_id', 'id')
- if not has_access('is_organizer', event_id=stream.event_id):
- raise ForbiddenError(
- {'pointer': 'event_id'},
- 'You need to be the event organizer to access video recordings.',
- )
+ if stream.channel and stream.channel.provider == 'bbb':
+ if not has_access('is_organizer', event_id=stream.event_id):
+ raise ForbiddenError(
+ {'pointer': 'event_id'},
+ 'You need to be the event organizer to access video recordings.',
+ )
- params = dict(
- meetingID=stream.extra['response']['meetingID'],
- )
- channel = stream.channel
- bbb = BigBlueButton(channel.api_url, channel.api_key)
- result = bbb.request('getRecordings', params)
-
- if result.data['response']['recordings']:
- recordings = []
- if type(result.data['response']['recordings']['recording']) is list:
- recordings = result.data['response']['recordings']['recording']
- else:
- recordings.append(result.data['response']['recordings']['recording'])
- for recording in recordings:
- get_or_create(
- VideoRecording,
- bbb_record_id=recording['recordID'],
- participants=recording['participants'],
- url=recording['playback']['format']['url'],
- start_time=datetime.fromtimestamp(
- int(int(recording['startTime']) / 1000)
- ),
- end_time=datetime.fromtimestamp(
- int(int(recording['endTime']) / 1000)
- ),
- video_stream=stream,
+ if stream.extra is not None:
+ params = dict(
+ meetingID=stream.extra['response']['meetingID'],
)
+ channel = stream.channel
+ bbb = BigBlueButton(channel.api_url, channel.api_key)
+ result = bbb.request('getRecordings', params)
+
+ if result.data['response']['recordings']:
+ recordings = []
+ if (
+ type(result.data['response']['recordings']['recording'])
+ is list
+ ):
+ recordings = result.data['response']['recordings'][
+ 'recording'
+ ]
+ else:
+ recordings.append(
+ result.data['response']['recordings']['recording']
+ )
+ for recording in recordings:
+ get_or_create(
+ VideoRecording,
+ bbb_record_id=recording['recordID'],
+ participants=recording['participants'],
+ url=recording['playback']['format']['url'],
+ start_time=datetime.fromtimestamp(
+ int(int(recording['startTime']) / 1000)
+ ),
+ end_time=datetime.fromtimestamp(
+ int(int(recording['endTime']) / 1000)
+ ),
+ video_stream=stream,
+ )
def query(self, view_kwargs):
query_ = VideoRecording.query
| {"golden_diff": "diff --git a/app/api/video_recordings.py b/app/api/video_recordings.py\n--- a/app/api/video_recordings.py\n+++ b/app/api/video_recordings.py\n@@ -19,39 +19,48 @@\n if kwargs.get('video_stream_id'):\n stream = safe_query_kwargs(VideoStream, kwargs, 'video_stream_id', 'id')\n \n- if not has_access('is_organizer', event_id=stream.event_id):\n- raise ForbiddenError(\n- {'pointer': 'event_id'},\n- 'You need to be the event organizer to access video recordings.',\n- )\n+ if stream.channel and stream.channel.provider == 'bbb':\n+ if not has_access('is_organizer', event_id=stream.event_id):\n+ raise ForbiddenError(\n+ {'pointer': 'event_id'},\n+ 'You need to be the event organizer to access video recordings.',\n+ )\n \n- params = dict(\n- meetingID=stream.extra['response']['meetingID'],\n- )\n- channel = stream.channel\n- bbb = BigBlueButton(channel.api_url, channel.api_key)\n- result = bbb.request('getRecordings', params)\n-\n- if result.data['response']['recordings']:\n- recordings = []\n- if type(result.data['response']['recordings']['recording']) is list:\n- recordings = result.data['response']['recordings']['recording']\n- else:\n- recordings.append(result.data['response']['recordings']['recording'])\n- for recording in recordings:\n- get_or_create(\n- VideoRecording,\n- bbb_record_id=recording['recordID'],\n- participants=recording['participants'],\n- url=recording['playback']['format']['url'],\n- start_time=datetime.fromtimestamp(\n- int(int(recording['startTime']) / 1000)\n- ),\n- end_time=datetime.fromtimestamp(\n- int(int(recording['endTime']) / 1000)\n- ),\n- video_stream=stream,\n+ if stream.extra is not None:\n+ params = dict(\n+ meetingID=stream.extra['response']['meetingID'],\n )\n+ channel = stream.channel\n+ bbb = BigBlueButton(channel.api_url, channel.api_key)\n+ result = bbb.request('getRecordings', params)\n+\n+ if result.data['response']['recordings']:\n+ recordings = []\n+ if (\n+ type(result.data['response']['recordings']['recording'])\n+ is list\n+ ):\n+ recordings = result.data['response']['recordings'][\n+ 'recording'\n+ ]\n+ else:\n+ recordings.append(\n+ result.data['response']['recordings']['recording']\n+ )\n+ for recording in recordings:\n+ get_or_create(\n+ VideoRecording,\n+ bbb_record_id=recording['recordID'],\n+ participants=recording['participants'],\n+ url=recording['playback']['format']['url'],\n+ start_time=datetime.fromtimestamp(\n+ int(int(recording['startTime']) / 1000)\n+ ),\n+ end_time=datetime.fromtimestamp(\n+ int(int(recording['endTime']) / 1000)\n+ ),\n+ video_stream=stream,\n+ )\n \n def query(self, view_kwargs):\n query_ = VideoRecording.query\n", "issue": "Organizer video tab shows errors if there are video rooms not using BBB\nOrganizer video tab shows errors if there are video rooms not using BBB.\r\n\r\n\r\n\r\n\r\nCompare https://eventyay.com/events/3ea940a8/video/all (only for organizer accessible)\r\n\r\nRelated to https://github.com/fossasia/open-event-frontend/pull/7927\n", "before_files": [{"content": "from datetime import datetime\n\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList\nfrom flask_rest_jsonapi.resource import ResourceRelationship\n\nfrom app.api.helpers.db import get_or_create, safe_query_kwargs\nfrom app.api.helpers.errors import ForbiddenError, UnprocessableEntityError\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.schema.video_recordings import VideoRecordingSchema\nfrom app.api.video_channels.bbb import 
BigBlueButton\nfrom app.models import db\nfrom app.models.video_recording import VideoRecording\nfrom app.models.video_stream import VideoStream\n\n\nclass VideoRecordingList(ResourceList):\n def before_get(self, args, kwargs):\n if kwargs.get('video_stream_id'):\n stream = safe_query_kwargs(VideoStream, kwargs, 'video_stream_id', 'id')\n\n if not has_access('is_organizer', event_id=stream.event_id):\n raise ForbiddenError(\n {'pointer': 'event_id'},\n 'You need to be the event organizer to access video recordings.',\n )\n\n params = dict(\n meetingID=stream.extra['response']['meetingID'],\n )\n channel = stream.channel\n bbb = BigBlueButton(channel.api_url, channel.api_key)\n result = bbb.request('getRecordings', params)\n\n if result.data['response']['recordings']:\n recordings = []\n if type(result.data['response']['recordings']['recording']) is list:\n recordings = result.data['response']['recordings']['recording']\n else:\n recordings.append(result.data['response']['recordings']['recording'])\n for recording in recordings:\n get_or_create(\n VideoRecording,\n bbb_record_id=recording['recordID'],\n participants=recording['participants'],\n url=recording['playback']['format']['url'],\n start_time=datetime.fromtimestamp(\n int(int(recording['startTime']) / 1000)\n ),\n end_time=datetime.fromtimestamp(\n int(int(recording['endTime']) / 1000)\n ),\n video_stream=stream,\n )\n\n def query(self, view_kwargs):\n query_ = VideoRecording.query\n if view_kwargs.get('video_stream_id'):\n stream = safe_query_kwargs(VideoStream, view_kwargs, 'video_stream_id')\n query_ = VideoRecording.query.filter(\n VideoRecording.video_stream_id == stream.id\n )\n else:\n if not has_access('is_admin'):\n raise ForbiddenError(\n {'pointer': 'user'},\n 'You need to be the admin to access video recordings.',\n )\n\n return query_\n\n methods = ['GET']\n view_kwargs = True\n decorators = (jwt_required,)\n schema = VideoRecordingSchema\n data_layer = {\n 'session': db.session,\n 'model': VideoRecording,\n 'methods': {\n 'query': query,\n 'before_get': before_get,\n },\n }\n\n\nclass VideoRecordingDetail(ResourceDetail):\n def before_get_object(self, view_kwargs):\n if view_kwargs.get('video_stream_id'):\n video_stream = safe_query_kwargs(\n VideoStream,\n view_kwargs,\n 'video_stream_id',\n )\n view_kwargs['id'] = video_stream.id\n\n def after_get_object(self, video_recording, view_kwargs):\n if not has_access('is_organizer', event_id=video_recording.video_stream.event_id):\n raise ForbiddenError(\n {'pointer': 'event_id'},\n 'You need to be the event organizer to access video recordings.',\n )\n\n def before_delete_object(self, video_recording, kwargs):\n \"\"\"\n before delete object method for recording detail\n :param obj:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_admin'):\n raise ForbiddenError(\n {'source': 'User'}, 'You are not authorized to access this.'\n )\n stream = video_recording.video_stream\n params = dict(\n recordID=video_recording.bbb_record_id,\n )\n channel = stream.channel\n bbb = BigBlueButton(channel.api_url, channel.api_key)\n result = bbb.request('deleteRecordings', params)\n\n if not result.success:\n raise UnprocessableEntityError(\n {'source': 'recording_id'}, 'error while deleting recording'\n )\n\n methods = ['GET', 'DELETE']\n schema = VideoRecordingSchema\n decorators = (jwt_required,)\n data_layer = {\n 'session': db.session,\n 'model': VideoRecording,\n 'methods': {\n 'before_get_object': before_get_object,\n 'after_get_object': after_get_object,\n 
'before_delete_object': before_delete_object,\n },\n }\n\n\nclass VideoRecordingRelationship(ResourceRelationship):\n schema = VideoRecordingSchema\n methods = ['GET']\n data_layer = {'session': db.session, 'model': VideoRecording}\n", "path": "app/api/video_recordings.py"}]} | 2,146 | 733 |
gh_patches_debug_39683 | rasdani/github-patches | git_diff | qutip__qutip-1754 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
hardware_info fails (again) on M1 MacBook Pro running Big Sur 11.4
**Describe the bug**
qutip fails to import with the following error:
```
Python 3.9.6 (default, Jun 28 2021, 19:24:41)
[Clang 12.0.5 (clang-1205.0.22.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import qutip
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.9/site-packages/qutip/__init__.py", line 115, in <module>
info = qutip.hardware_info.hardware_info()
File "/opt/homebrew/lib/python3.9/site-packages/qutip/hardware_info.py", line 133, in hardware_info
out = _mac_hardware_info()
File "/opt/homebrew/lib/python3.9/site-packages/qutip/hardware_info.py", line 50, in _mac_hardware_info
results.update({'cpu_freq': int(float(os.popen('sysctl hw.cpufrequency')
IndexError: list index out of range
```
This appears to be caused by Apple having removed hw.cpufrequency from the list of sysctls (see below).
**To Reproduce**
Installed qutip using homebrew/pip3 (after using the workaround of specifying OPENBLAS properly before building scipy etc., so that part is all sorted). Then run python3 and import qutip.
```python
from qutip import identity
print(identity(2))
```
The terminal output (after I hacked up a workaround for this issue):
```
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[1. 0.]
[0. 1.]]
```
**Expected behavior**
qutip successfully initializes :-)
**Your Environment**
Please use `qutip.about()` to get the information about your environment and paste it here.
```
>>> qutip.about()
QuTiP: Quantum Toolbox in Python
================================
Copyright (c) QuTiP team 2011 and later.
Current admin team: Alexander Pitchford, Nathan Shammah, Shahnawaz Ahmed, Neill Lambert, Eric Giguère, Boxi Li, Jake Lishman and Simon Cross.
Board members: Daniel Burgarth, Robert Johansson, Anton F. Kockum, Franco Nori and Will Zeng.
Original developers: R. J. Johansson & P. D. Nation.
Previous lead developers: Chris Granade & A. Grimsmo.
Currently developed through wide collaboration. See https://github.com/qutip for details.
QuTiP Version: 4.6.2
Numpy Version: 1.21.0
Scipy Version: 1.7.0
Cython Version: 0.29.23
Matplotlib Version: 3.4.2
Python Version: 3.9.6
Number of CPUs: 8
BLAS Info: OPENBLAS
OPENMP Installed: False
INTEL MKL Ext: False
Platform Info: Darwin (arm64)
Installation path: /opt/homebrew/lib/python3.9/site-packages/qutip
================================================================================
Please cite QuTiP in your publication.
================================================================================
```
**Additional context**
No sysctl hw.cpufrequency at all on this machine, so it blows up (some error handling in that function would be good :-)
```
$ sysctl hw
hw.ncpu: 8
hw.byteorder: 1234
hw.memsize: 17179869184
hw.activecpu: 8
hw.optional.amx_version: 2
hw.optional.arm64: 1
hw.optional.armv8_1_atomics: 1
hw.optional.armv8_2_fhm: 1
hw.optional.armv8_2_sha3: 1
hw.optional.armv8_2_sha512: 1
hw.optional.armv8_crc32: 1
hw.optional.breakpoint: 6
hw.optional.floatingpoint: 1
hw.optional.neon: 1
hw.optional.neon_fp16: 1
hw.optional.neon_hpfp: 1
hw.optional.ucnormal_mem: 1
hw.optional.watchpoint: 4
hw.cacheconfig: 8 1 1 0 0 0 0 0 0 0
hw.cachelinesize: 128
hw.cachesize: 3616980992 65536 4194304 0 0 0 0 0 0 0
hw.cpu64bit_capable: 1
hw.cpufamily: 458787763
hw.cpusubfamily: 2
hw.cpusubtype: 2
hw.cputype: 16777228
hw.ephemeral_storage: 0
hw.l1dcachesize: 65536
hw.l1icachesize: 131072
hw.l2cachesize: 4194304
hw.logicalcpu: 8
hw.logicalcpu_max: 8
hw.osenvironment:
hw.packages: 1
hw.pagesize: 16384
hw.pagesize32: 16384
hw.physicalcpu: 8
hw.physicalcpu_max: 8
hw.serialdebugmode: 0
hw.tbfrequency: 24000000
hw.use_kernelmanagerd: 1
hw.use_recovery_securityd: 0
hw.targettype: J293
```
</issue>
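A self-contained sketch of the kind of guard that avoids the crash, in the spirit of the patch further down: read `sysctl hw.cpufrequency` and fall back to a default when the key does not exist (as on Apple Silicon) instead of indexing an empty list. The 3.2 GHz fallback matches the patch's assumption about current M1 parts; it is not a detected value.

```python
# Sketch only: tolerate the missing hw.cpufrequency sysctl on Apple Silicon.
import os

def mac_cpu_freq_ghz(default_ghz: float = 3.2) -> float:
    """CPU frequency in GHz, or `default_ghz` when sysctl has no such key."""
    with os.popen('sysctl hw.cpufrequency') as f:
        lines = f.readlines()
    if not lines:                      # Apple Silicon: the OID simply is not there
        return default_ghz
    return float(lines[0].split(':')[1]) / 1e9

print(mac_cpu_freq_ghz())
```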
<code>
[start of qutip/hardware_info.py]
1 __all__ = ['hardware_info']
2
3 import multiprocessing
4 import os
5 import sys
6
7 import numpy as np
8
9
10 def _mac_hardware_info():
11 info = dict()
12 results = dict()
13 for l in [l.split(':') for l in os.popen('sysctl hw').readlines()[1:]]:
14 info[l[0].strip(' "').replace(' ', '_').lower().strip('hw.')] = \
15 l[1].strip('.\n ')
16 results.update({'cpus': int(info['physicalcpu'])})
17 results.update({'cpu_freq': int(float(os.popen('sysctl hw.cpufrequency')
18 .readlines()[0].split(':')[
19 1]) / 1000000)})
20 results.update({'memsize': int(int(info['memsize']) / (1024 ** 2))})
21 # add OS information
22 results.update({'os': 'Mac OSX'})
23 return results
24
25
26 def _linux_hardware_info():
27 results = {}
28 # get cpu number
29 sockets = 0
30 cores_per_socket = 0
31 frequency = 0.0
32 with open("/proc/cpuinfo") as f:
33 for l in [l.split(':') for l in f.readlines()]:
34 if (l[0].strip() == "physical id"):
35 sockets = np.maximum(sockets,int(l[1].strip())+1)
36 if (l[0].strip() == "cpu cores"):
37 cores_per_socket = int(l[1].strip())
38 if (l[0].strip() == "cpu MHz"):
39 frequency = float(l[1].strip()) / 1000.
40 results.update({'cpus': sockets * cores_per_socket})
41 # get cpu frequency directly (bypasses freq scaling)
42 try:
43 with open(
44 "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq") as f:
45 line = f.readlines()[0]
46 frequency = float(line.strip('\n')) / 1000000.
47 except:
48 pass
49 results.update({'cpu_freq': frequency})
50
51 # get total amount of memory
52 mem_info = dict()
53 with open("/proc/meminfo") as f:
54 for l in [l.split(':') for l in f.readlines()]:
55 mem_info[l[0]] = l[1].strip('.\n ').strip('kB')
56 results.update({'memsize': int(mem_info['MemTotal']) / 1024})
57 # add OS information
58 results.update({'os': 'Linux'})
59 return results
60
61
62 def _freebsd_hardware_info():
63 results = {}
64 results.update({'cpus': int(os.popen('sysctl -n hw.ncpu').readlines()[0])})
65 results.update(
66 {'cpu_freq': int(os.popen('sysctl -n dev.cpu.0.freq').readlines()[0])})
67 results.update({'memsize': int(
68 os.popen('sysctl -n hw.realmem').readlines()[0]) / 1024})
69 results.update({'os': 'FreeBSD'})
70 return results
71
72
73 def _win_hardware_info():
74 try:
75 from comtypes.client import CoGetObject
76 winmgmts_root = CoGetObject(r"winmgmts:root\cimv2")
77 cpus = winmgmts_root.ExecQuery("Select * from Win32_Processor")
78 ncpus = 0
79 for cpu in cpus:
80 ncpus += int(cpu.Properties_['NumberOfCores'].Value)
81 except:
82 ncpus = int(multiprocessing.cpu_count())
83 return {'os': 'Windows', 'cpus': ncpus}
84
85
86 def hardware_info():
87 """
88 Returns basic hardware information about the computer.
89
90 Gives actual number of CPU's in the machine, even when hyperthreading is
91 turned on.
92
93 Returns
94 -------
95 info : dict
96 Dictionary containing cpu and memory information.
97
98 """
99 if sys.platform == 'darwin':
100 out = _mac_hardware_info()
101 elif sys.platform == 'win32':
102 out = _win_hardware_info()
103 elif sys.platform in ['linux', 'linux2']:
104 out = _linux_hardware_info()
105 elif sys.platform.startswith('freebsd'):
106 out = _freebsd_hardware_info()
107 else:
108 out = {}
109 return out
110
111
112 if __name__ == '__main__':
113 print(hardware_info())
114
[end of qutip/hardware_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutip/hardware_info.py b/qutip/hardware_info.py
--- a/qutip/hardware_info.py
+++ b/qutip/hardware_info.py
@@ -8,15 +8,27 @@
def _mac_hardware_info():
- info = dict()
- results = dict()
- for l in [l.split(':') for l in os.popen('sysctl hw').readlines()[1:]]:
- info[l[0].strip(' "').replace(' ', '_').lower().strip('hw.')] = \
- l[1].strip('.\n ')
+ info = {}
+ results = {}
+ with os.popen('sysctl hw') as f:
+ lines = f.readlines()
+ for line in lines[1:]:
+ key, _, value = line.partition(':')
+ key = key.strip(' "').replace(' ', '_').lower().strip('hw.')
+ value = value.strip('.\n ')
+ info[key] = value
results.update({'cpus': int(info['physicalcpu'])})
- results.update({'cpu_freq': int(float(os.popen('sysctl hw.cpufrequency')
- .readlines()[0].split(':')[
- 1]) / 1000000)})
+ # Mac OS currently doesn't not provide hw.cpufrequency on the M1
+ with os.popen('sysctl hw.cpufrequency') as f:
+ cpu_freq_lines = f.readlines()
+ if cpu_freq_lines:
+ # Yay, hw.cpufrequency present
+ results.update({
+ 'cpu_freq': float(cpu_freq_lines[0].split(':')[1]) / 1000000,
+ })
+ else:
+ # No hw.cpufrequency, assume Apple M1 CPU (all are 3.2 GHz currently)
+ results['cpu_freq'] = 3.2
results.update({'memsize': int(int(info['memsize']) / (1024 ** 2))})
# add OS information
results.update({'os': 'Mac OSX'})
@@ -32,19 +44,19 @@
with open("/proc/cpuinfo") as f:
for l in [l.split(':') for l in f.readlines()]:
if (l[0].strip() == "physical id"):
- sockets = np.maximum(sockets,int(l[1].strip())+1)
+ sockets = np.maximum(sockets, int(l[1].strip()) + 1)
if (l[0].strip() == "cpu cores"):
cores_per_socket = int(l[1].strip())
if (l[0].strip() == "cpu MHz"):
frequency = float(l[1].strip()) / 1000.
- results.update({'cpus': sockets * cores_per_socket})
+ results.update({'cpus': int(sockets * cores_per_socket)})
# get cpu frequency directly (bypasses freq scaling)
try:
with open(
"/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq") as f:
line = f.readlines()[0]
frequency = float(line.strip('\n')) / 1000000.
- except:
+ except Exception:
pass
results.update({'cpu_freq': frequency})
@@ -78,7 +90,7 @@
ncpus = 0
for cpu in cpus:
ncpus += int(cpu.Properties_['NumberOfCores'].Value)
- except:
+ except Exception:
ncpus = int(multiprocessing.cpu_count())
return {'os': 'Windows', 'cpus': ncpus}
| {"golden_diff": "diff --git a/qutip/hardware_info.py b/qutip/hardware_info.py\n--- a/qutip/hardware_info.py\n+++ b/qutip/hardware_info.py\n@@ -8,15 +8,27 @@\n \n \n def _mac_hardware_info():\n- info = dict()\n- results = dict()\n- for l in [l.split(':') for l in os.popen('sysctl hw').readlines()[1:]]:\n- info[l[0].strip(' \"').replace(' ', '_').lower().strip('hw.')] = \\\n- l[1].strip('.\\n ')\n+ info = {}\n+ results = {}\n+ with os.popen('sysctl hw') as f:\n+ lines = f.readlines()\n+ for line in lines[1:]:\n+ key, _, value = line.partition(':')\n+ key = key.strip(' \"').replace(' ', '_').lower().strip('hw.')\n+ value = value.strip('.\\n ')\n+ info[key] = value\n results.update({'cpus': int(info['physicalcpu'])})\n- results.update({'cpu_freq': int(float(os.popen('sysctl hw.cpufrequency')\n- .readlines()[0].split(':')[\n- 1]) / 1000000)})\n+ # Mac OS currently doesn't not provide hw.cpufrequency on the M1\n+ with os.popen('sysctl hw.cpufrequency') as f:\n+ cpu_freq_lines = f.readlines()\n+ if cpu_freq_lines:\n+ # Yay, hw.cpufrequency present\n+ results.update({\n+ 'cpu_freq': float(cpu_freq_lines[0].split(':')[1]) / 1000000,\n+ })\n+ else:\n+ # No hw.cpufrequency, assume Apple M1 CPU (all are 3.2 GHz currently)\n+ results['cpu_freq'] = 3.2\n results.update({'memsize': int(int(info['memsize']) / (1024 ** 2))})\n # add OS information\n results.update({'os': 'Mac OSX'})\n@@ -32,19 +44,19 @@\n with open(\"/proc/cpuinfo\") as f:\n for l in [l.split(':') for l in f.readlines()]:\n if (l[0].strip() == \"physical id\"):\n- sockets = np.maximum(sockets,int(l[1].strip())+1)\n+ sockets = np.maximum(sockets, int(l[1].strip()) + 1)\n if (l[0].strip() == \"cpu cores\"):\n cores_per_socket = int(l[1].strip())\n if (l[0].strip() == \"cpu MHz\"):\n frequency = float(l[1].strip()) / 1000.\n- results.update({'cpus': sockets * cores_per_socket})\n+ results.update({'cpus': int(sockets * cores_per_socket)})\n # get cpu frequency directly (bypasses freq scaling)\n try:\n with open(\n \"/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq\") as f:\n line = f.readlines()[0]\n frequency = float(line.strip('\\n')) / 1000000.\n- except:\n+ except Exception:\n pass\n results.update({'cpu_freq': frequency})\n \n@@ -78,7 +90,7 @@\n ncpus = 0\n for cpu in cpus:\n ncpus += int(cpu.Properties_['NumberOfCores'].Value)\n- except:\n+ except Exception:\n ncpus = int(multiprocessing.cpu_count())\n return {'os': 'Windows', 'cpus': ncpus}\n", "issue": "hardware_info fails (again) on M1 MacBook Pro running Big Sur 11.4\n**Describe the bug**\r\nqutip fails to import with the following error:\r\n```\r\nPython 3.9.6 (default, Jun 28 2021, 19:24:41) \r\n[Clang 12.0.5 (clang-1205.0.22.9)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import qutip\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/opt/homebrew/lib/python3.9/site-packages/qutip/__init__.py\", line 115, in <module>\r\n info = qutip.hardware_info.hardware_info()\r\n File \"/opt/homebrew/lib/python3.9/site-packages/qutip/hardware_info.py\", line 133, in hardware_info\r\n out = _mac_hardware_info()\r\n File \"/opt/homebrew/lib/python3.9/site-packages/qutip/hardware_info.py\", line 50, in _mac_hardware_info\r\n results.update({'cpu_freq': int(float(os.popen('sysctl hw.cpufrequency')\r\nIndexError: list index out of range\r\n```\r\n\r\nThis appears to be caused by Apple having removed hw.cpufrequency from the list of sysctl's (see below)\r\n\r\n**To Reproduce**\r\nInstalled qutip 
using homebrew/pip3 (after using the workaround of specifying OPENBLAS properly before building scipy etc... so that part is all sorted). Then python3 and import qutip.\r\n\r\n```python\r\nfrom qutip import identity\r\nprint(identity(2))\r\n```\r\nThe terminal output (aftrer I hacked up a workaround for this issue):\r\n```\r\nQuantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True\r\nQobj data =\r\n[[1. 0.]\r\n [0. 1.]]\r\n```\r\n\r\n**Expected behavior**\r\nqutip successfully initializes :-)\r\n\r\n**Your Environment**\r\nPlease use `qutip.about()` to get the information about your environment and paste it here.\r\n```\r\n>>> qutip.about()\r\n\r\nQuTiP: Quantum Toolbox in Python\r\n================================\r\nCopyright (c) QuTiP team 2011 and later.\r\nCurrent admin team: Alexander Pitchford, Nathan Shammah, Shahnawaz Ahmed, Neill Lambert, Eric Gigu\u00e8re, Boxi Li, Jake Lishman and Simon Cross.\r\nBoard members: Daniel Burgarth, Robert Johansson, Anton F. Kockum, Franco Nori and Will Zeng.\r\nOriginal developers: R. J. Johansson & P. D. Nation.\r\nPrevious lead developers: Chris Granade & A. Grimsmo.\r\nCurrently developed through wide collaboration. See https://github.com/qutip for details.\r\n\r\nQuTiP Version: 4.6.2\r\nNumpy Version: 1.21.0\r\nScipy Version: 1.7.0\r\nCython Version: 0.29.23\r\nMatplotlib Version: 3.4.2\r\nPython Version: 3.9.6\r\nNumber of CPUs: 8\r\nBLAS Info: OPENBLAS\r\nOPENMP Installed: False\r\nINTEL MKL Ext: False\r\nPlatform Info: Darwin (arm64)\r\nInstallation path: /opt/homebrew/lib/python3.9/site-packages/qutip\r\n================================================================================\r\nPlease cite QuTiP in your publication.\r\n================================================================================\r\n```\r\n\r\n**Additional context**\r\n No sysctl hw.cpufrequency at all on this machine, so it blows up (some error handling in that function would be good :-)\r\n\r\n```$ sysctl hw\r\nhw.ncpu: 8\r\nhw.byteorder: 1234\r\nhw.memsize: 17179869184\r\nhw.activecpu: 8\r\nhw.optional.amx_version: 2\r\nhw.optional.arm64: 1\r\nhw.optional.armv8_1_atomics: 1\r\nhw.optional.armv8_2_fhm: 1\r\nhw.optional.armv8_2_sha3: 1\r\nhw.optional.armv8_2_sha512: 1\r\nhw.optional.armv8_crc32: 1\r\nhw.optional.breakpoint: 6\r\nhw.optional.floatingpoint: 1\r\nhw.optional.neon: 1\r\nhw.optional.neon_fp16: 1\r\nhw.optional.neon_hpfp: 1\r\nhw.optional.ucnormal_mem: 1\r\nhw.optional.watchpoint: 4\r\nhw.cacheconfig: 8 1 1 0 0 0 0 0 0 0\r\nhw.cachelinesize: 128\r\nhw.cachesize: 3616980992 65536 4194304 0 0 0 0 0 0 0\r\nhw.cpu64bit_capable: 1\r\nhw.cpufamily: 458787763\r\nhw.cpusubfamily: 2\r\nhw.cpusubtype: 2\r\nhw.cputype: 16777228\r\nhw.ephemeral_storage: 0\r\nhw.l1dcachesize: 65536\r\nhw.l1icachesize: 131072\r\nhw.l2cachesize: 4194304\r\nhw.logicalcpu: 8\r\nhw.logicalcpu_max: 8\r\nhw.osenvironment: \r\nhw.packages: 1\r\nhw.pagesize: 16384\r\nhw.pagesize32: 16384\r\nhw.physicalcpu: 8\r\nhw.physicalcpu_max: 8\r\nhw.serialdebugmode: 0\r\nhw.tbfrequency: 24000000\r\nhw.use_kernelmanagerd: 1\r\nhw.use_recovery_securityd: 0\r\nhw.targettype: J293\r\n```\r\n\r\n\n", "before_files": [{"content": "__all__ = ['hardware_info']\n\nimport multiprocessing\nimport os\nimport sys\n\nimport numpy as np\n\n\ndef _mac_hardware_info():\n info = dict()\n results = dict()\n for l in [l.split(':') for l in os.popen('sysctl hw').readlines()[1:]]:\n info[l[0].strip(' \"').replace(' ', '_').lower().strip('hw.')] = \\\n l[1].strip('.\\n ')\n results.update({'cpus': 
int(info['physicalcpu'])})\n results.update({'cpu_freq': int(float(os.popen('sysctl hw.cpufrequency')\n .readlines()[0].split(':')[\n 1]) / 1000000)})\n results.update({'memsize': int(int(info['memsize']) / (1024 ** 2))})\n # add OS information\n results.update({'os': 'Mac OSX'})\n return results\n\n\ndef _linux_hardware_info():\n results = {}\n # get cpu number\n sockets = 0\n cores_per_socket = 0\n frequency = 0.0\n with open(\"/proc/cpuinfo\") as f:\n for l in [l.split(':') for l in f.readlines()]:\n if (l[0].strip() == \"physical id\"):\n sockets = np.maximum(sockets,int(l[1].strip())+1)\n if (l[0].strip() == \"cpu cores\"):\n cores_per_socket = int(l[1].strip())\n if (l[0].strip() == \"cpu MHz\"):\n frequency = float(l[1].strip()) / 1000.\n results.update({'cpus': sockets * cores_per_socket})\n # get cpu frequency directly (bypasses freq scaling)\n try:\n with open(\n \"/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq\") as f:\n line = f.readlines()[0]\n frequency = float(line.strip('\\n')) / 1000000.\n except:\n pass\n results.update({'cpu_freq': frequency})\n\n # get total amount of memory\n mem_info = dict()\n with open(\"/proc/meminfo\") as f:\n for l in [l.split(':') for l in f.readlines()]:\n mem_info[l[0]] = l[1].strip('.\\n ').strip('kB')\n results.update({'memsize': int(mem_info['MemTotal']) / 1024})\n # add OS information\n results.update({'os': 'Linux'})\n return results\n\n\ndef _freebsd_hardware_info():\n results = {}\n results.update({'cpus': int(os.popen('sysctl -n hw.ncpu').readlines()[0])})\n results.update(\n {'cpu_freq': int(os.popen('sysctl -n dev.cpu.0.freq').readlines()[0])})\n results.update({'memsize': int(\n os.popen('sysctl -n hw.realmem').readlines()[0]) / 1024})\n results.update({'os': 'FreeBSD'})\n return results\n\n\ndef _win_hardware_info():\n try:\n from comtypes.client import CoGetObject\n winmgmts_root = CoGetObject(r\"winmgmts:root\\cimv2\")\n cpus = winmgmts_root.ExecQuery(\"Select * from Win32_Processor\")\n ncpus = 0\n for cpu in cpus:\n ncpus += int(cpu.Properties_['NumberOfCores'].Value)\n except:\n ncpus = int(multiprocessing.cpu_count())\n return {'os': 'Windows', 'cpus': ncpus}\n\n\ndef hardware_info():\n \"\"\"\n Returns basic hardware information about the computer.\n\n Gives actual number of CPU's in the machine, even when hyperthreading is\n turned on.\n\n Returns\n -------\n info : dict\n Dictionary containing cpu and memory information.\n\n \"\"\"\n if sys.platform == 'darwin':\n out = _mac_hardware_info()\n elif sys.platform == 'win32':\n out = _win_hardware_info()\n elif sys.platform in ['linux', 'linux2']:\n out = _linux_hardware_info()\n elif sys.platform.startswith('freebsd'):\n out = _freebsd_hardware_info()\n else:\n out = {}\n return out\n\n\nif __name__ == '__main__':\n print(hardware_info())\n", "path": "qutip/hardware_info.py"}]} | 3,065 | 814 |
gh_patches_debug_3599 | rasdani/github-patches | git_diff | certbot__certbot-606 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nginx plugin destroys config
I have a config file called webp.conf in /etc/nginx/conf.d/ which works great.
After running letsencrypt -d example.org, the webp.conf is broken because it's missing a closing }
https://pastebin.mozilla.org/8837365
Line 18 gets removed.
</issue>
<code>
[start of letsencrypt-nginx/letsencrypt_nginx/nginxparser.py]
1 """Very low-level nginx config parser based on pyparsing."""
2 import string
3
4 from pyparsing import (
5 Literal, White, Word, alphanums, CharsNotIn, Forward, Group,
6 Optional, OneOrMore, Regex, ZeroOrMore)
7 from pyparsing import stringEnd
8 from pyparsing import restOfLine
9
10 class RawNginxParser(object):
11 # pylint: disable=expression-not-assigned
12 """A class that parses nginx configuration with pyparsing."""
13
14 # constants
15 left_bracket = Literal("{").suppress()
16 right_bracket = Literal("}").suppress()
17 semicolon = Literal(";").suppress()
18 space = White().suppress()
19 key = Word(alphanums + "_/")
20 # Matches anything that is not a special character AND any chars in single
21 # or double quotes
22 value = Regex(r"((\".*\")?(\'.*\')?[^\{\};,]?)+")
23 location = CharsNotIn("{};," + string.whitespace)
24 # modifier for location uri [ = | ~ | ~* | ^~ ]
25 modifier = Literal("=") | Literal("~*") | Literal("~") | Literal("^~")
26
27 # rules
28 comment = Literal('#') + restOfLine()
29 assignment = (key + Optional(space + value, default=None) + semicolon)
30 location_statement = Optional(space + modifier) + Optional(space + location)
31 if_statement = Literal("if") + space + Regex(r"\(.+\)") + space
32 block = Forward()
33
34 block << Group(
35 (Group(key + location_statement) ^ Group(if_statement))
36 + left_bracket
37 + Group(ZeroOrMore(Group(comment | assignment) | block))
38 + right_bracket)
39
40 script = OneOrMore(Group(comment | assignment) | block) + stringEnd
41
42 def __init__(self, source):
43 self.source = source
44
45 def parse(self):
46 """Returns the parsed tree."""
47 return self.script.parseString(self.source)
48
49 def as_list(self):
50 """Returns the parsed tree as a list."""
51 return self.parse().asList()
52
53
54 class RawNginxDumper(object):
55 # pylint: disable=too-few-public-methods
56 """A class that dumps nginx configuration from the provided tree."""
57 def __init__(self, blocks, indentation=4):
58 self.blocks = blocks
59 self.indentation = indentation
60
61 def __iter__(self, blocks=None, current_indent=0, spacer=' '):
62 """Iterates the dumped nginx content."""
63 blocks = blocks or self.blocks
64 for key, values in blocks:
65 indentation = spacer * current_indent
66 if isinstance(key, list):
67 if current_indent:
68 yield ''
69 yield indentation + spacer.join(key) + ' {'
70
71 for parameter in values:
72 dumped = self.__iter__([parameter], current_indent + self.indentation)
73 for line in dumped:
74 yield line
75
76 yield indentation + '}'
77 else:
78 if key == '#':
79 yield spacer * current_indent + key + values
80 else:
81 if values is None:
82 yield spacer * current_indent + key + ';'
83 else:
84 yield spacer * current_indent + key + spacer + values + ';'
85
86 def as_string(self):
87 """Return the parsed block as a string."""
88 return '\n'.join(self) + '\n'
89
90
91 # Shortcut functions to respect Python's serialization interface
92 # (like pyyaml, picker or json)
93
94 def loads(source):
95 """Parses from a string.
96
97 :param str souce: The string to parse
98 :returns: The parsed tree
99 :rtype: list
100
101 """
102 return RawNginxParser(source).as_list()
103
104
105 def load(_file):
106 """Parses from a file.
107
108 :param file _file: The file to parse
109 :returns: The parsed tree
110 :rtype: list
111
112 """
113 return loads(_file.read())
114
115
116 def dumps(blocks, indentation=4):
117 """Dump to a string.
118
119 :param list block: The parsed tree
120 :param int indentation: The number of spaces to indent
121 :rtype: str
122
123 """
124 return RawNginxDumper(blocks, indentation).as_string()
125
126
127 def dump(blocks, _file, indentation=4):
128 """Dump to a file.
129
130 :param list block: The parsed tree
131 :param file _file: The file to dump to
132 :param int indentation: The number of spaces to indent
133 :rtype: NoneType
134
135 """
136 return _file.write(dumps(blocks, indentation))
137
[end of letsencrypt-nginx/letsencrypt_nginx/nginxparser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py b/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py
--- a/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py
+++ b/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py
@@ -37,7 +37,7 @@
+ Group(ZeroOrMore(Group(comment | assignment) | block))
+ right_bracket)
- script = OneOrMore(Group(comment | assignment) | block) + stringEnd
+ script = OneOrMore(Group(comment | assignment) ^ block) + stringEnd
def __init__(self, source):
self.source = source
| {"golden_diff": "diff --git a/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py b/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py\n--- a/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py\n+++ b/letsencrypt-nginx/letsencrypt_nginx/nginxparser.py\n@@ -37,7 +37,7 @@\n + Group(ZeroOrMore(Group(comment | assignment) | block))\n + right_bracket)\n \n- script = OneOrMore(Group(comment | assignment) | block) + stringEnd\n+ script = OneOrMore(Group(comment | assignment) ^ block) + stringEnd\n \n def __init__(self, source):\n self.source = source\n", "issue": "nginx plugin destroys config\nI have a config file called webp.conf in /etc/nginx/conf.d/ which works great.\nAfter running letsencrypt -d example.org run the webp.conf is broken because it's missing a closing }\nhttps://pastebin.mozilla.org/8837365\nLine 18 gets removed.\n\n", "before_files": [{"content": "\"\"\"Very low-level nginx config parser based on pyparsing.\"\"\"\nimport string\n\nfrom pyparsing import (\n Literal, White, Word, alphanums, CharsNotIn, Forward, Group,\n Optional, OneOrMore, Regex, ZeroOrMore)\nfrom pyparsing import stringEnd\nfrom pyparsing import restOfLine\n\nclass RawNginxParser(object):\n # pylint: disable=expression-not-assigned\n \"\"\"A class that parses nginx configuration with pyparsing.\"\"\"\n\n # constants\n left_bracket = Literal(\"{\").suppress()\n right_bracket = Literal(\"}\").suppress()\n semicolon = Literal(\";\").suppress()\n space = White().suppress()\n key = Word(alphanums + \"_/\")\n # Matches anything that is not a special character AND any chars in single\n # or double quotes\n value = Regex(r\"((\\\".*\\\")?(\\'.*\\')?[^\\{\\};,]?)+\")\n location = CharsNotIn(\"{};,\" + string.whitespace)\n # modifier for location uri [ = | ~ | ~* | ^~ ]\n modifier = Literal(\"=\") | Literal(\"~*\") | Literal(\"~\") | Literal(\"^~\")\n\n # rules\n comment = Literal('#') + restOfLine()\n assignment = (key + Optional(space + value, default=None) + semicolon)\n location_statement = Optional(space + modifier) + Optional(space + location)\n if_statement = Literal(\"if\") + space + Regex(r\"\\(.+\\)\") + space\n block = Forward()\n\n block << Group(\n (Group(key + location_statement) ^ Group(if_statement))\n + left_bracket\n + Group(ZeroOrMore(Group(comment | assignment) | block))\n + right_bracket)\n\n script = OneOrMore(Group(comment | assignment) | block) + stringEnd\n\n def __init__(self, source):\n self.source = source\n\n def parse(self):\n \"\"\"Returns the parsed tree.\"\"\"\n return self.script.parseString(self.source)\n\n def as_list(self):\n \"\"\"Returns the parsed tree as a list.\"\"\"\n return self.parse().asList()\n\n\nclass RawNginxDumper(object):\n # pylint: disable=too-few-public-methods\n \"\"\"A class that dumps nginx configuration from the provided tree.\"\"\"\n def __init__(self, blocks, indentation=4):\n self.blocks = blocks\n self.indentation = indentation\n\n def __iter__(self, blocks=None, current_indent=0, spacer=' '):\n \"\"\"Iterates the dumped nginx content.\"\"\"\n blocks = blocks or self.blocks\n for key, values in blocks:\n indentation = spacer * current_indent\n if isinstance(key, list):\n if current_indent:\n yield ''\n yield indentation + spacer.join(key) + ' {'\n\n for parameter in values:\n dumped = self.__iter__([parameter], current_indent + self.indentation)\n for line in dumped:\n yield line\n\n yield indentation + '}'\n else:\n if key == '#':\n yield spacer * current_indent + key + values\n else:\n if values is None:\n yield spacer * current_indent + key + ';'\n 
else:\n yield spacer * current_indent + key + spacer + values + ';'\n\n def as_string(self):\n \"\"\"Return the parsed block as a string.\"\"\"\n return '\\n'.join(self) + '\\n'\n\n\n# Shortcut functions to respect Python's serialization interface\n# (like pyyaml, picker or json)\n\ndef loads(source):\n \"\"\"Parses from a string.\n\n :param str souce: The string to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return RawNginxParser(source).as_list()\n\n\ndef load(_file):\n \"\"\"Parses from a file.\n\n :param file _file: The file to parse\n :returns: The parsed tree\n :rtype: list\n\n \"\"\"\n return loads(_file.read())\n\n\ndef dumps(blocks, indentation=4):\n \"\"\"Dump to a string.\n\n :param list block: The parsed tree\n :param int indentation: The number of spaces to indent\n :rtype: str\n\n \"\"\"\n return RawNginxDumper(blocks, indentation).as_string()\n\n\ndef dump(blocks, _file, indentation=4):\n \"\"\"Dump to a file.\n\n :param list block: The parsed tree\n :param file _file: The file to dump to\n :param int indentation: The number of spaces to indent\n :rtype: NoneType\n\n \"\"\"\n return _file.write(dumps(blocks, indentation))\n", "path": "letsencrypt-nginx/letsencrypt_nginx/nginxparser.py"}]} | 1,921 | 151 |
gh_patches_debug_3934 | rasdani/github-patches | git_diff | DDMAL__CantusDB-771 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
should editors get automatic proofreader access to their own sources?
It seems weird to me that on sources you (an editor) create, you can always access the Volpiano editor (assuming the source is non-empty!) but you have to be added as an editor to access the proofreading form.
<img width="969" alt="image" src="https://user-images.githubusercontent.com/67451875/209043248-1a0a8e13-0196-498d-a835-081fddc3ee13.png">
<img width="666" alt="image" src="https://user-images.githubusercontent.com/67451875/209043188-94b4b649-c1e7-41cc-9692-fd2a6947c28c.png">
If there were something preventing you from proofreading your own source as an intentional part of the workflow, this might be sort of useful (?) but as it is it just adds an extra step to get permissions you should in theory already have.
New source created doesn't show up in home page
I just created a new source from my account. It doesn't show up immediately on the home page in the My Sources sidebar (even after refreshing the page and re-logging in). It does, however, appear in the database as well as on the my-sources page.
</issue>
<code>
[start of django/cantusdb_project/main_app/views/source.py]
1 from django.views.generic import DetailView, ListView, CreateView, UpdateView
2 from django.db.models import Q, Prefetch
3 from main_app.models import Source, Provenance, Century
4 from main_app.forms import SourceCreateForm, SourceEditForm
5 from django.contrib import messages
6 from django.urls import reverse
7 from django.contrib.auth.mixins import LoginRequiredMixin
8 from django.http import HttpResponseRedirect
9 from django.contrib.auth.mixins import UserPassesTestMixin
10 from django.core.exceptions import PermissionDenied
11 from django.shortcuts import get_object_or_404
12 from main_app.views.chant import get_feast_selector_options
13
14
15 class SourceDetailView(DetailView):
16 model = Source
17 context_object_name = "source"
18 template_name = "source_detail.html"
19
20 def get_context_data(self, **kwargs):
21 source = self.get_object()
22 display_unpublished = self.request.user.is_authenticated
23 if (source.published is False) and (not display_unpublished):
24 raise PermissionDenied()
25
26 context = super().get_context_data(**kwargs)
27
28 if source.segment and source.segment.id == 4064:
29 # if this is a sequence source
30 context["sequences"] = source.sequence_set.order_by("s_sequence")
31 context["folios"] = (
32 source.sequence_set.values_list("folio", flat=True)
33 .distinct()
34 .order_by("folio")
35 )
36 else:
37 # if this is a chant source
38 folios = (
39 source.chant_set.values_list("folio", flat=True)
40 .distinct()
41 .order_by("folio")
42 )
43 context["folios"] = folios
44 # the options for the feast selector on the right, only chant sources have this
45 context["feasts_with_folios"] = get_feast_selector_options(source, folios)
46 return context
47
48
49 class SourceListView(ListView):
50 paginate_by = 100
51 context_object_name = "sources"
52 template_name = "source_list.html"
53
54 def get_context_data(self, **kwargs):
55 context = super().get_context_data(**kwargs)
56 context["provenances"] = (
57 Provenance.objects.all().order_by("name").values("id", "name")
58 )
59 context["centuries"] = (
60 Century.objects.all().order_by("name").values("id", "name")
61 )
62 return context
63
64 def get_queryset(self):
65 # use select_related() for foreign keys to reduce DB queries
66 queryset = Source.objects.select_related(
67 "rism_siglum", "segment", "provenance"
68 ).order_by("siglum")
69
70 display_unpublished = self.request.user.is_authenticated
71 if display_unpublished:
72 q_obj_filter = Q()
73 else:
74 q_obj_filter = Q(published=True)
75
76 if self.request.GET.get("century"):
77 century_name = Century.objects.get(id=self.request.GET.get("century")).name
78 q_obj_filter &= Q(century__name__icontains=century_name)
79
80 if self.request.GET.get("provenance"):
81 provenance_id = int(self.request.GET.get("provenance"))
82 q_obj_filter &= Q(provenance__id=provenance_id)
83 if self.request.GET.get("segment"):
84 segment_id = int(self.request.GET.get("segment"))
85 q_obj_filter &= Q(segment__id=segment_id)
86 if self.request.GET.get("fullSource") in ["true", "false"]:
87 full_source_str = self.request.GET.get("fullSource")
88 if full_source_str == "true":
89 full_source_q = Q(full_source=True) | Q(full_source=None)
90 q_obj_filter &= full_source_q
91 else:
92 q_obj_filter &= Q(full_source=False)
93
94 if self.request.GET.get("general"):
95 # Strip spaces at the beginning and end. Then make list of terms split on spaces
96 general_search_terms = self.request.GET.get("general").strip(" ").split(" ")
97 # We need a Q Object for each field we're gonna look into
98 title_q = Q()
99 siglum_q = Q()
100 rism_siglum_q = Q()
101 description_q = Q()
102 # it seems that old cantus don't look into title and provenance for the general search terms
103 # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsmünster' in the general search field
104 # provenance_q = Q()
105 summary_q = Q()
106
107 # For each term, add it to the Q object of each field with an OR operation.
108 # We split the terms so that the words can be separated in the actual
109 # field, allowing for a more flexible search, and a field needs
110 # to match only one of the terms
111 for term in general_search_terms:
112 title_q |= Q(title__icontains=term)
113 siglum_q |= Q(siglum__icontains=term)
114 rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(
115 rism_siglum__description__icontains=term
116 )
117 description_q |= Q(description__icontains=term)
118 summary_q |= Q(summary__icontains=term)
119 # provenance_q |= Q(provenance__name__icontains=term)
120 # All the Q objects are put together with OR.
121 # The end result is that at least one term has to match in at least one
122 # field
123 # general_search_q = (
124 # title_q | siglum_q | rism_siglum_q | description_q | provenance_q
125 # )
126 general_search_q = (
127 title_q | siglum_q | rism_siglum_q | description_q | summary_q
128 )
129 q_obj_filter &= general_search_q
130
131 # For the indexing notes search we follow the same procedure as above but with
132 # different fields
133 if self.request.GET.get("indexing"):
134 # Make list of terms split on spaces
135 indexing_search_terms = self.request.GET.get("indexing").split(" ")
136 # We need a Q Object for each field we're gonna look into
137 inventoried_by_q = Q()
138 full_text_entered_by_q = Q()
139 melodies_entered_by_q = Q()
140 proofreaders_q = Q()
141 other_editors_q = Q()
142 indexing_notes_q = Q()
143 # For each term, add it to the Q object of each field with an OR operation.
144 # We split the terms so that the words can be separated in the actual
145 # field, allowing for a more flexible search, and a field needs
146 # to match only one of the terms
147 for term in indexing_search_terms:
148 inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)
149 full_text_entered_by_q |= Q(
150 full_text_entered_by__full_name__icontains=term
151 )
152 melodies_entered_by_q |= Q(
153 melodies_entered_by__full_name__icontains=term
154 )
155 proofreaders_q |= Q(proofreaders__full_name__icontains=term)
156 other_editors_q |= Q(other_editors__full_name__icontains=term)
157 indexing_notes_q |= Q(indexing_notes__icontains=term)
158 # All the Q objects are put together with OR.
159 # The end result is that at least one term has to match in at least one
160 # field
161 indexing_search_q = (
162 inventoried_by_q
163 | full_text_entered_by_q
164 | melodies_entered_by_q
165 | proofreaders_q
166 | other_editors_q
167 | indexing_notes_q
168 )
169 q_obj_filter &= indexing_search_q
170
171 return queryset.filter(q_obj_filter).prefetch_related(
172 Prefetch("century", queryset=Century.objects.all().order_by("id"))
173 )
174
175
176 class SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):
177 model = Source
178 template_name = "source_create_form.html"
179 form_class = SourceCreateForm
180
181 def test_func(self):
182 user = self.request.user
183 # checks if the user is allowed to create sources
184 is_authorized = user.groups.filter(
185 Q(name="project manager") | Q(name="editor") | Q(name="contributor")
186 ).exists()
187
188 if is_authorized:
189 return True
190 else:
191 return False
192
193 def get_success_url(self):
194 return reverse("source-create")
195
196 def form_valid(self, form):
197 form.instance.created_by = self.request.user
198 source = form.save()
199
200 # assign this source to the "current_editors"
201 current_editors = source.current_editors.all()
202
203 for editor in current_editors:
204 editor.sources_user_can_edit.add(source)
205
206 messages.success(
207 self.request,
208 "Source created successfully!",
209 )
210
211 return HttpResponseRedirect(self.get_success_url())
212
213
214 class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
215 template_name = "source_edit.html"
216 model = Source
217 form_class = SourceEditForm
218 pk_url_kwarg = "source_id"
219
220 def test_func(self):
221 user = self.request.user
222 source_id = self.kwargs.get(self.pk_url_kwarg)
223 source = get_object_or_404(Source, id=source_id)
224
225 assigned_to_source = user.sources_user_can_edit.filter(id=source_id)
226
227 # checks if the user is a project manager
228 is_project_manager = user.groups.filter(name="project manager").exists()
229 # checks if the user is an editor
230 is_editor = user.groups.filter(name="editor").exists()
231 # checks if the user is a contributor
232 is_contributor = user.groups.filter(name="contributor").exists()
233
234 if (
235 (is_project_manager)
236 or (is_editor and assigned_to_source)
237 or (is_editor and source.created_by == user)
238 or (is_contributor and source.created_by == user)
239 ):
240 return True
241 else:
242 return False
243
244 def form_valid(self, form):
245 form.instance.last_updated_by = self.request.user
246
247 # remove this source from the old "current_editors"
248 # assign this source to the new "current_editors"
249
250 old_current_editors = list(
251 Source.objects.get(id=form.instance.id).current_editors.all()
252 )
253 new_current_editors = form.cleaned_data["current_editors"]
254 source = form.save()
255
256 for old_editor in old_current_editors:
257 old_editor.sources_user_can_edit.remove(source)
258
259 for new_editor in new_current_editors:
260 new_editor.sources_user_can_edit.add(source)
261
262 return HttpResponseRedirect(self.get_success_url())
263
[end of django/cantusdb_project/main_app/views/source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/views/source.py b/django/cantusdb_project/main_app/views/source.py
--- a/django/cantusdb_project/main_app/views/source.py
+++ b/django/cantusdb_project/main_app/views/source.py
@@ -199,6 +199,7 @@
# assign this source to the "current_editors"
current_editors = source.current_editors.all()
+ self.request.user.sources_user_can_edit.add(source)
for editor in current_editors:
editor.sources_user_can_edit.add(source)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/source.py b/django/cantusdb_project/main_app/views/source.py\n--- a/django/cantusdb_project/main_app/views/source.py\n+++ b/django/cantusdb_project/main_app/views/source.py\n@@ -199,6 +199,7 @@\n \n # assign this source to the \"current_editors\"\n current_editors = source.current_editors.all()\n+ self.request.user.sources_user_can_edit.add(source)\n \n for editor in current_editors:\n editor.sources_user_can_edit.add(source)\n", "issue": "should editors get automatic proofreader access to their own sources?\nIt seems weird to me that on sources you (an editor) create, you can always access the Volpiano editor (assuming the source is non-empty!) but you have to be added as an editor to access the proofreading form. \r\n<img width=\"969\" alt=\"image\" src=\"https://user-images.githubusercontent.com/67451875/209043248-1a0a8e13-0196-498d-a835-081fddc3ee13.png\">\r\n<img width=\"666\" alt=\"image\" src=\"https://user-images.githubusercontent.com/67451875/209043188-94b4b649-c1e7-41cc-9692-fd2a6947c28c.png\">\r\n\r\nIf there were something preventing you from proofreading your own source as an intentional part of the workflow, this might be sort of useful (?) but as it is it just adds an extra step to get permissions you should in theory already have. \r\n\nNew source created doesn't show up in home page\nI just created a new source from my account. It doesn't show up immediately on the home page in the My Sources sidebar (even after refreshing the page and re-logging in). It does however appear in the database as well as in the my-sources page.\n", "before_files": [{"content": "from django.views.generic import DetailView, ListView, CreateView, UpdateView\nfrom django.db.models import Q, Prefetch\nfrom main_app.models import Source, Provenance, Century\nfrom main_app.forms import SourceCreateForm, SourceEditForm\nfrom django.contrib import messages\nfrom django.urls import reverse\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.http import HttpResponseRedirect\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404\nfrom main_app.views.chant import get_feast_selector_options\n\n\nclass SourceDetailView(DetailView):\n model = Source\n context_object_name = \"source\"\n template_name = \"source_detail.html\"\n\n def get_context_data(self, **kwargs):\n source = self.get_object()\n display_unpublished = self.request.user.is_authenticated\n if (source.published is False) and (not display_unpublished):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n\n if source.segment and source.segment.id == 4064:\n # if this is a sequence source\n context[\"sequences\"] = source.sequence_set.order_by(\"s_sequence\")\n context[\"folios\"] = (\n source.sequence_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n else:\n # if this is a chant source\n folios = (\n source.chant_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n context[\"folios\"] = folios\n # the options for the feast selector on the right, only chant sources have this\n context[\"feasts_with_folios\"] = get_feast_selector_options(source, folios)\n return context\n\n\nclass SourceListView(ListView):\n paginate_by = 100\n context_object_name = \"sources\"\n template_name = \"source_list.html\"\n\n def get_context_data(self, **kwargs):\n context = 
super().get_context_data(**kwargs)\n context[\"provenances\"] = (\n Provenance.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n context[\"centuries\"] = (\n Century.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n return context\n\n def get_queryset(self):\n # use select_related() for foreign keys to reduce DB queries\n queryset = Source.objects.select_related(\n \"rism_siglum\", \"segment\", \"provenance\"\n ).order_by(\"siglum\")\n\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n q_obj_filter = Q()\n else:\n q_obj_filter = Q(published=True)\n\n if self.request.GET.get(\"century\"):\n century_name = Century.objects.get(id=self.request.GET.get(\"century\")).name\n q_obj_filter &= Q(century__name__icontains=century_name)\n\n if self.request.GET.get(\"provenance\"):\n provenance_id = int(self.request.GET.get(\"provenance\"))\n q_obj_filter &= Q(provenance__id=provenance_id)\n if self.request.GET.get(\"segment\"):\n segment_id = int(self.request.GET.get(\"segment\"))\n q_obj_filter &= Q(segment__id=segment_id)\n if self.request.GET.get(\"fullSource\") in [\"true\", \"false\"]:\n full_source_str = self.request.GET.get(\"fullSource\")\n if full_source_str == \"true\":\n full_source_q = Q(full_source=True) | Q(full_source=None)\n q_obj_filter &= full_source_q\n else:\n q_obj_filter &= Q(full_source=False)\n\n if self.request.GET.get(\"general\"):\n # Strip spaces at the beginning and end. Then make list of terms split on spaces\n general_search_terms = self.request.GET.get(\"general\").strip(\" \").split(\" \")\n # We need a Q Object for each field we're gonna look into\n title_q = Q()\n siglum_q = Q()\n rism_siglum_q = Q()\n description_q = Q()\n # it seems that old cantus don't look into title and provenance for the general search terms\n # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsm\u00fcnster' in the general search field\n # provenance_q = Q()\n summary_q = Q()\n\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in general_search_terms:\n title_q |= Q(title__icontains=term)\n siglum_q |= Q(siglum__icontains=term)\n rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(\n rism_siglum__description__icontains=term\n )\n description_q |= Q(description__icontains=term)\n summary_q |= Q(summary__icontains=term)\n # provenance_q |= Q(provenance__name__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n # general_search_q = (\n # title_q | siglum_q | rism_siglum_q | description_q | provenance_q\n # )\n general_search_q = (\n title_q | siglum_q | rism_siglum_q | description_q | summary_q\n )\n q_obj_filter &= general_search_q\n\n # For the indexing notes search we follow the same procedure as above but with\n # different fields\n if self.request.GET.get(\"indexing\"):\n # Make list of terms split on spaces\n indexing_search_terms = self.request.GET.get(\"indexing\").split(\" \")\n # We need a Q Object for each field we're gonna look into\n inventoried_by_q = Q()\n full_text_entered_by_q = Q()\n melodies_entered_by_q = Q()\n proofreaders_q = Q()\n other_editors_q = Q()\n indexing_notes_q = Q()\n # For each term, add it to the Q object of each field with an OR operation.\n # 
We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in indexing_search_terms:\n inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)\n full_text_entered_by_q |= Q(\n full_text_entered_by__full_name__icontains=term\n )\n melodies_entered_by_q |= Q(\n melodies_entered_by__full_name__icontains=term\n )\n proofreaders_q |= Q(proofreaders__full_name__icontains=term)\n other_editors_q |= Q(other_editors__full_name__icontains=term)\n indexing_notes_q |= Q(indexing_notes__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n indexing_search_q = (\n inventoried_by_q\n | full_text_entered_by_q\n | melodies_entered_by_q\n | proofreaders_q\n | other_editors_q\n | indexing_notes_q\n )\n q_obj_filter &= indexing_search_q\n\n return queryset.filter(q_obj_filter).prefetch_related(\n Prefetch(\"century\", queryset=Century.objects.all().order_by(\"id\"))\n )\n\n\nclass SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):\n model = Source\n template_name = \"source_create_form.html\"\n form_class = SourceCreateForm\n\n def test_func(self):\n user = self.request.user\n # checks if the user is allowed to create sources\n is_authorized = user.groups.filter(\n Q(name=\"project manager\") | Q(name=\"editor\") | Q(name=\"contributor\")\n ).exists()\n\n if is_authorized:\n return True\n else:\n return False\n\n def get_success_url(self):\n return reverse(\"source-create\")\n\n def form_valid(self, form):\n form.instance.created_by = self.request.user\n source = form.save()\n\n # assign this source to the \"current_editors\"\n current_editors = source.current_editors.all()\n\n for editor in current_editors:\n editor.sources_user_can_edit.add(source)\n\n messages.success(\n self.request,\n \"Source created successfully!\",\n )\n\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n pk_url_kwarg = \"source_id\"\n\n def test_func(self):\n user = self.request.user\n source_id = self.kwargs.get(self.pk_url_kwarg)\n source = get_object_or_404(Source, id=source_id)\n\n assigned_to_source = user.sources_user_can_edit.filter(id=source_id)\n\n # checks if the user is a project manager\n is_project_manager = user.groups.filter(name=\"project manager\").exists()\n # checks if the user is an editor\n is_editor = user.groups.filter(name=\"editor\").exists()\n # checks if the user is a contributor\n is_contributor = user.groups.filter(name=\"contributor\").exists()\n\n if (\n (is_project_manager)\n or (is_editor and assigned_to_source)\n or (is_editor and source.created_by == user)\n or (is_contributor and source.created_by == user)\n ):\n return True\n else:\n return False\n\n def form_valid(self, form):\n form.instance.last_updated_by = self.request.user\n\n # remove this source from the old \"current_editors\"\n # assign this source to the new \"current_editors\"\n\n old_current_editors = list(\n Source.objects.get(id=form.instance.id).current_editors.all()\n )\n new_current_editors = form.cleaned_data[\"current_editors\"]\n source = form.save()\n\n for old_editor in old_current_editors:\n old_editor.sources_user_can_edit.remove(source)\n\n for new_editor in new_current_editors:\n 
new_editor.sources_user_can_edit.add(source)\n\n return HttpResponseRedirect(self.get_success_url())\n", "path": "django/cantusdb_project/main_app/views/source.py"}]} | 3,859 | 128 |
gh_patches_debug_34189 | rasdani/github-patches | git_diff | zulip__zulip-4356 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable enter to send in dev env
In the dev environment, every time we do a fresh login we have to set ```Enter to send``` to true. We should enable it by default for the dev env only, to increase testing speed.
```git grep enter_sends``` will help you.
</issue>
<code>
[start of zerver/lib/create_user.py]
1 from __future__ import absolute_import
2
3 from django.contrib.auth.models import UserManager
4 from django.utils import timezone
5 from zerver.models import UserProfile, Recipient, Subscription, Realm, Stream
6 import base64
7 import ujson
8 import os
9 import string
10 from six.moves import range
11
12 from typing import Optional, Text
13
14 def random_api_key():
15 # type: () -> Text
16 choices = string.ascii_letters + string.digits
17 altchars = ''.join([choices[ord(os.urandom(1)) % 62] for _ in range(2)]).encode("utf-8")
18 return base64.b64encode(os.urandom(24), altchars=altchars).decode("utf-8")
19
20 # create_user_profile is based on Django's User.objects.create_user,
21 # except that we don't save to the database so it can used in
22 # bulk_creates
23 #
24 # Only use this for bulk_create -- for normal usage one should use
25 # create_user (below) which will also make the Subscription and
26 # Recipient objects
27 def create_user_profile(realm, email, password, active, bot_type, full_name,
28 short_name, bot_owner, is_mirror_dummy, tos_version,
29 tutorial_status=UserProfile.TUTORIAL_WAITING):
30 # type: (Realm, Text, Optional[Text], bool, Optional[int], Text, Text, Optional[UserProfile], bool, Optional[Text], Optional[Text]) -> UserProfile
31 now = timezone.now()
32 email = UserManager.normalize_email(email)
33
34 user_profile = UserProfile(email=email, is_staff=False, is_active=active,
35 full_name=full_name, short_name=short_name,
36 last_login=now, date_joined=now, realm=realm,
37 pointer=-1, is_bot=bool(bot_type), bot_type=bot_type,
38 bot_owner=bot_owner, is_mirror_dummy=is_mirror_dummy,
39 tos_version=tos_version,
40 tutorial_status=tutorial_status,
41 onboarding_steps=ujson.dumps([]),
42 default_language=realm.default_language)
43
44 if bot_type or not active:
45 password = None
46
47 user_profile.set_password(password)
48
49 user_profile.api_key = random_api_key()
50 return user_profile
51
52 def create_user(email, password, realm, full_name, short_name,
53 active=True, bot_type=None, bot_owner=None, tos_version=None,
54 avatar_source=UserProfile.AVATAR_FROM_GRAVATAR,
55 is_mirror_dummy=False, default_sending_stream=None,
56 default_events_register_stream=None,
57 default_all_public_streams=None, user_profile_id=None):
58 # type: (Text, Text, Realm, Text, Text, bool, Optional[int], Optional[UserProfile], Optional[Text], Text, bool, Optional[Stream], Optional[Stream], Optional[bool], Optional[int]) -> UserProfile
59 user_profile = create_user_profile(realm, email, password, active, bot_type,
60 full_name, short_name, bot_owner,
61 is_mirror_dummy, tos_version)
62 user_profile.avatar_source = avatar_source
63 user_profile.default_sending_stream = default_sending_stream
64 user_profile.default_events_register_stream = default_events_register_stream
65 # Allow the ORM default to be used if not provided
66 if default_all_public_streams is not None:
67 user_profile.default_all_public_streams = default_all_public_streams
68
69 if user_profile_id is not None:
70 user_profile.id = user_profile_id
71
72 user_profile.save()
73 recipient = Recipient.objects.create(type_id=user_profile.id,
74 type=Recipient.PERSONAL)
75 Subscription.objects.create(user_profile=user_profile, recipient=recipient)
76 return user_profile
77
[end of zerver/lib/create_user.py]
[start of zerver/lib/bulk_create.py]
1 from __future__ import absolute_import
2 from typing import Any, Dict, Iterable, List, Mapping, Optional, Set, Tuple, Text
3
4 from zerver.lib.initial_password import initial_password
5 from zerver.models import Realm, Stream, UserProfile, Huddle, \
6 Subscription, Recipient, Client, RealmAuditLog, get_huddle_hash
7 from zerver.lib.create_user import create_user_profile
8
9 def bulk_create_users(realm, users_raw, bot_type=None, tos_version=None):
10 # type: (Realm, Set[Tuple[Text, Text, Text, bool]], Optional[int], Optional[Text]) -> None
11 """
12 Creates and saves a UserProfile with the given email.
13 Has some code based off of UserManage.create_user, but doesn't .save()
14 """
15 existing_users = frozenset(UserProfile.objects.values_list('email', flat=True))
16 users = sorted([user_raw for user_raw in users_raw if user_raw[0] not in existing_users])
17
18 # Now create user_profiles
19 profiles_to_create = [] # type: List[UserProfile]
20 for (email, full_name, short_name, active) in users:
21 profile = create_user_profile(realm, email,
22 initial_password(email), active, bot_type,
23 full_name, short_name, None, False, tos_version,
24 tutorial_status=UserProfile.TUTORIAL_FINISHED)
25 profiles_to_create.append(profile)
26 UserProfile.objects.bulk_create(profiles_to_create)
27
28 RealmAuditLog.objects.bulk_create(
29 [RealmAuditLog(realm=profile_.realm, modified_user=profile_,
30 event_type='user_created', event_time=profile_.date_joined)
31 for profile_ in profiles_to_create])
32
33 profiles_by_email = {} # type: Dict[Text, UserProfile]
34 profiles_by_id = {} # type: Dict[int, UserProfile]
35 for profile in UserProfile.objects.select_related().all():
36 profiles_by_email[profile.email] = profile
37 profiles_by_id[profile.id] = profile
38
39 recipients_to_create = [] # type: List[Recipient]
40 for (email, full_name, short_name, active) in users:
41 recipients_to_create.append(Recipient(type_id=profiles_by_email[email].id,
42 type=Recipient.PERSONAL))
43 Recipient.objects.bulk_create(recipients_to_create)
44
45 recipients_by_email = {} # type: Dict[Text, Recipient]
46 for recipient in Recipient.objects.filter(type=Recipient.PERSONAL):
47 recipients_by_email[profiles_by_id[recipient.type_id].email] = recipient
48
49 subscriptions_to_create = [] # type: List[Subscription]
50 for (email, full_name, short_name, active) in users:
51 subscriptions_to_create.append(
52 Subscription(user_profile_id=profiles_by_email[email].id,
53 recipient=recipients_by_email[email]))
54 Subscription.objects.bulk_create(subscriptions_to_create)
55
56 def bulk_create_streams(realm, stream_dict):
57 # type: (Realm, Dict[Text, Dict[Text, Any]]) -> None
58 existing_streams = frozenset([name.lower() for name in
59 Stream.objects.filter(realm=realm)
60 .values_list('name', flat=True)])
61 streams_to_create = [] # type: List[Stream]
62 for name, options in stream_dict.items():
63 if name.lower() not in existing_streams:
64 streams_to_create.append(
65 Stream(
66 realm=realm, name=name, description=options["description"],
67 invite_only=options["invite_only"]
68 )
69 )
70 Stream.objects.bulk_create(streams_to_create)
71
72 recipients_to_create = [] # type: List[Recipient]
73 for stream in Stream.objects.filter(realm=realm).values('id', 'name'):
74 if stream['name'].lower() not in existing_streams:
75 recipients_to_create.append(Recipient(type_id=stream['id'],
76 type=Recipient.STREAM))
77 Recipient.objects.bulk_create(recipients_to_create)
78
79 def bulk_create_clients(client_list):
80 # type: (Iterable[Text]) -> None
81 existing_clients = set(client.name for client in Client.objects.select_related().all()) # type: Set[Text]
82
83 clients_to_create = [] # type: List[Client]
84 for name in client_list:
85 if name not in existing_clients:
86 clients_to_create.append(Client(name=name))
87 existing_clients.add(name)
88 Client.objects.bulk_create(clients_to_create)
89
90 def bulk_create_huddles(users, huddle_user_list):
91 # type: (Dict[Text, UserProfile], Iterable[Iterable[Text]]) -> None
92 huddles = {} # type: Dict[Text, Huddle]
93 huddles_by_id = {} # type: Dict[int, Huddle]
94 huddle_set = set() # type: Set[Tuple[Text, Tuple[int, ...]]]
95 existing_huddles = set() # type: Set[Text]
96 for huddle in Huddle.objects.all():
97 existing_huddles.add(huddle.huddle_hash)
98 for huddle_users in huddle_user_list:
99 user_ids = [users[email].id for email in huddle_users] # type: List[int]
100 huddle_hash = get_huddle_hash(user_ids)
101 if huddle_hash in existing_huddles:
102 continue
103 huddle_set.add((huddle_hash, tuple(sorted(user_ids))))
104
105 huddles_to_create = [] # type: List[Huddle]
106 for (huddle_hash, _) in huddle_set:
107 huddles_to_create.append(Huddle(huddle_hash=huddle_hash))
108 Huddle.objects.bulk_create(huddles_to_create)
109
110 for huddle in Huddle.objects.all():
111 huddles[huddle.huddle_hash] = huddle
112 huddles_by_id[huddle.id] = huddle
113
114 recipients_to_create = [] # type: List[Recipient]
115 for (huddle_hash, _) in huddle_set:
116 recipients_to_create.append(Recipient(type_id=huddles[huddle_hash].id, type=Recipient.HUDDLE))
117 Recipient.objects.bulk_create(recipients_to_create)
118
119 huddle_recipients = {} # type: Dict[Text, Recipient]
120 for recipient in Recipient.objects.filter(type=Recipient.HUDDLE):
121 huddle_recipients[huddles_by_id[recipient.type_id].huddle_hash] = recipient
122
123 subscriptions_to_create = [] # type: List[Subscription]
124 for (huddle_hash, huddle_user_ids) in huddle_set:
125 for user_id in huddle_user_ids:
126 subscriptions_to_create.append(Subscription(active=True, user_profile_id=user_id,
127 recipient=huddle_recipients[huddle_hash]))
128 Subscription.objects.bulk_create(subscriptions_to_create)
129
[end of zerver/lib/bulk_create.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/lib/bulk_create.py b/zerver/lib/bulk_create.py
--- a/zerver/lib/bulk_create.py
+++ b/zerver/lib/bulk_create.py
@@ -21,7 +21,8 @@
profile = create_user_profile(realm, email,
initial_password(email), active, bot_type,
full_name, short_name, None, False, tos_version,
- tutorial_status=UserProfile.TUTORIAL_FINISHED)
+ tutorial_status=UserProfile.TUTORIAL_FINISHED,
+ enter_sends=True)
profiles_to_create.append(profile)
UserProfile.objects.bulk_create(profiles_to_create)
diff --git a/zerver/lib/create_user.py b/zerver/lib/create_user.py
--- a/zerver/lib/create_user.py
+++ b/zerver/lib/create_user.py
@@ -26,8 +26,9 @@
# Recipient objects
def create_user_profile(realm, email, password, active, bot_type, full_name,
short_name, bot_owner, is_mirror_dummy, tos_version,
- tutorial_status=UserProfile.TUTORIAL_WAITING):
- # type: (Realm, Text, Optional[Text], bool, Optional[int], Text, Text, Optional[UserProfile], bool, Optional[Text], Optional[Text]) -> UserProfile
+ tutorial_status=UserProfile.TUTORIAL_WAITING,
+ enter_sends=False):
+ # type: (Realm, Text, Optional[Text], bool, Optional[int], Text, Text, Optional[UserProfile], bool, Optional[Text], Optional[Text], bool) -> UserProfile
now = timezone.now()
email = UserManager.normalize_email(email)
@@ -38,6 +39,7 @@
bot_owner=bot_owner, is_mirror_dummy=is_mirror_dummy,
tos_version=tos_version,
tutorial_status=tutorial_status,
+ enter_sends=enter_sends,
onboarding_steps=ujson.dumps([]),
default_language=realm.default_language)
| {"golden_diff": "diff --git a/zerver/lib/bulk_create.py b/zerver/lib/bulk_create.py\n--- a/zerver/lib/bulk_create.py\n+++ b/zerver/lib/bulk_create.py\n@@ -21,7 +21,8 @@\n profile = create_user_profile(realm, email,\n initial_password(email), active, bot_type,\n full_name, short_name, None, False, tos_version,\n- tutorial_status=UserProfile.TUTORIAL_FINISHED)\n+ tutorial_status=UserProfile.TUTORIAL_FINISHED,\n+ enter_sends=True)\n profiles_to_create.append(profile)\n UserProfile.objects.bulk_create(profiles_to_create)\n \ndiff --git a/zerver/lib/create_user.py b/zerver/lib/create_user.py\n--- a/zerver/lib/create_user.py\n+++ b/zerver/lib/create_user.py\n@@ -26,8 +26,9 @@\n # Recipient objects\n def create_user_profile(realm, email, password, active, bot_type, full_name,\n short_name, bot_owner, is_mirror_dummy, tos_version,\n- tutorial_status=UserProfile.TUTORIAL_WAITING):\n- # type: (Realm, Text, Optional[Text], bool, Optional[int], Text, Text, Optional[UserProfile], bool, Optional[Text], Optional[Text]) -> UserProfile\n+ tutorial_status=UserProfile.TUTORIAL_WAITING,\n+ enter_sends=False):\n+ # type: (Realm, Text, Optional[Text], bool, Optional[int], Text, Text, Optional[UserProfile], bool, Optional[Text], Optional[Text], bool) -> UserProfile\n now = timezone.now()\n email = UserManager.normalize_email(email)\n \n@@ -38,6 +39,7 @@\n bot_owner=bot_owner, is_mirror_dummy=is_mirror_dummy,\n tos_version=tos_version,\n tutorial_status=tutorial_status,\n+ enter_sends=enter_sends,\n onboarding_steps=ujson.dumps([]),\n default_language=realm.default_language)\n", "issue": "Enable enter to send in dev env\nIn dev environment every time we fresh login we have to check ```Enter to send``` to true. We should enable it by default for dev env only to increase testing speed.\r\n\r\n```git grep enter_sends``` will help you.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom django.contrib.auth.models import UserManager\nfrom django.utils import timezone\nfrom zerver.models import UserProfile, Recipient, Subscription, Realm, Stream\nimport base64\nimport ujson\nimport os\nimport string\nfrom six.moves import range\n\nfrom typing import Optional, Text\n\ndef random_api_key():\n # type: () -> Text\n choices = string.ascii_letters + string.digits\n altchars = ''.join([choices[ord(os.urandom(1)) % 62] for _ in range(2)]).encode(\"utf-8\")\n return base64.b64encode(os.urandom(24), altchars=altchars).decode(\"utf-8\")\n\n# create_user_profile is based on Django's User.objects.create_user,\n# except that we don't save to the database so it can used in\n# bulk_creates\n#\n# Only use this for bulk_create -- for normal usage one should use\n# create_user (below) which will also make the Subscription and\n# Recipient objects\ndef create_user_profile(realm, email, password, active, bot_type, full_name,\n short_name, bot_owner, is_mirror_dummy, tos_version,\n tutorial_status=UserProfile.TUTORIAL_WAITING):\n # type: (Realm, Text, Optional[Text], bool, Optional[int], Text, Text, Optional[UserProfile], bool, Optional[Text], Optional[Text]) -> UserProfile\n now = timezone.now()\n email = UserManager.normalize_email(email)\n\n user_profile = UserProfile(email=email, is_staff=False, is_active=active,\n full_name=full_name, short_name=short_name,\n last_login=now, date_joined=now, realm=realm,\n pointer=-1, is_bot=bool(bot_type), bot_type=bot_type,\n bot_owner=bot_owner, is_mirror_dummy=is_mirror_dummy,\n tos_version=tos_version,\n tutorial_status=tutorial_status,\n 
onboarding_steps=ujson.dumps([]),\n default_language=realm.default_language)\n\n if bot_type or not active:\n password = None\n\n user_profile.set_password(password)\n\n user_profile.api_key = random_api_key()\n return user_profile\n\ndef create_user(email, password, realm, full_name, short_name,\n active=True, bot_type=None, bot_owner=None, tos_version=None,\n avatar_source=UserProfile.AVATAR_FROM_GRAVATAR,\n is_mirror_dummy=False, default_sending_stream=None,\n default_events_register_stream=None,\n default_all_public_streams=None, user_profile_id=None):\n # type: (Text, Text, Realm, Text, Text, bool, Optional[int], Optional[UserProfile], Optional[Text], Text, bool, Optional[Stream], Optional[Stream], Optional[bool], Optional[int]) -> UserProfile\n user_profile = create_user_profile(realm, email, password, active, bot_type,\n full_name, short_name, bot_owner,\n is_mirror_dummy, tos_version)\n user_profile.avatar_source = avatar_source\n user_profile.default_sending_stream = default_sending_stream\n user_profile.default_events_register_stream = default_events_register_stream\n # Allow the ORM default to be used if not provided\n if default_all_public_streams is not None:\n user_profile.default_all_public_streams = default_all_public_streams\n\n if user_profile_id is not None:\n user_profile.id = user_profile_id\n\n user_profile.save()\n recipient = Recipient.objects.create(type_id=user_profile.id,\n type=Recipient.PERSONAL)\n Subscription.objects.create(user_profile=user_profile, recipient=recipient)\n return user_profile\n", "path": "zerver/lib/create_user.py"}, {"content": "from __future__ import absolute_import\nfrom typing import Any, Dict, Iterable, List, Mapping, Optional, Set, Tuple, Text\n\nfrom zerver.lib.initial_password import initial_password\nfrom zerver.models import Realm, Stream, UserProfile, Huddle, \\\n Subscription, Recipient, Client, RealmAuditLog, get_huddle_hash\nfrom zerver.lib.create_user import create_user_profile\n\ndef bulk_create_users(realm, users_raw, bot_type=None, tos_version=None):\n # type: (Realm, Set[Tuple[Text, Text, Text, bool]], Optional[int], Optional[Text]) -> None\n \"\"\"\n Creates and saves a UserProfile with the given email.\n Has some code based off of UserManage.create_user, but doesn't .save()\n \"\"\"\n existing_users = frozenset(UserProfile.objects.values_list('email', flat=True))\n users = sorted([user_raw for user_raw in users_raw if user_raw[0] not in existing_users])\n\n # Now create user_profiles\n profiles_to_create = [] # type: List[UserProfile]\n for (email, full_name, short_name, active) in users:\n profile = create_user_profile(realm, email,\n initial_password(email), active, bot_type,\n full_name, short_name, None, False, tos_version,\n tutorial_status=UserProfile.TUTORIAL_FINISHED)\n profiles_to_create.append(profile)\n UserProfile.objects.bulk_create(profiles_to_create)\n\n RealmAuditLog.objects.bulk_create(\n [RealmAuditLog(realm=profile_.realm, modified_user=profile_,\n event_type='user_created', event_time=profile_.date_joined)\n for profile_ in profiles_to_create])\n\n profiles_by_email = {} # type: Dict[Text, UserProfile]\n profiles_by_id = {} # type: Dict[int, UserProfile]\n for profile in UserProfile.objects.select_related().all():\n profiles_by_email[profile.email] = profile\n profiles_by_id[profile.id] = profile\n\n recipients_to_create = [] # type: List[Recipient]\n for (email, full_name, short_name, active) in users:\n recipients_to_create.append(Recipient(type_id=profiles_by_email[email].id,\n 
type=Recipient.PERSONAL))\n Recipient.objects.bulk_create(recipients_to_create)\n\n recipients_by_email = {} # type: Dict[Text, Recipient]\n for recipient in Recipient.objects.filter(type=Recipient.PERSONAL):\n recipients_by_email[profiles_by_id[recipient.type_id].email] = recipient\n\n subscriptions_to_create = [] # type: List[Subscription]\n for (email, full_name, short_name, active) in users:\n subscriptions_to_create.append(\n Subscription(user_profile_id=profiles_by_email[email].id,\n recipient=recipients_by_email[email]))\n Subscription.objects.bulk_create(subscriptions_to_create)\n\ndef bulk_create_streams(realm, stream_dict):\n # type: (Realm, Dict[Text, Dict[Text, Any]]) -> None\n existing_streams = frozenset([name.lower() for name in\n Stream.objects.filter(realm=realm)\n .values_list('name', flat=True)])\n streams_to_create = [] # type: List[Stream]\n for name, options in stream_dict.items():\n if name.lower() not in existing_streams:\n streams_to_create.append(\n Stream(\n realm=realm, name=name, description=options[\"description\"],\n invite_only=options[\"invite_only\"]\n )\n )\n Stream.objects.bulk_create(streams_to_create)\n\n recipients_to_create = [] # type: List[Recipient]\n for stream in Stream.objects.filter(realm=realm).values('id', 'name'):\n if stream['name'].lower() not in existing_streams:\n recipients_to_create.append(Recipient(type_id=stream['id'],\n type=Recipient.STREAM))\n Recipient.objects.bulk_create(recipients_to_create)\n\ndef bulk_create_clients(client_list):\n # type: (Iterable[Text]) -> None\n existing_clients = set(client.name for client in Client.objects.select_related().all()) # type: Set[Text]\n\n clients_to_create = [] # type: List[Client]\n for name in client_list:\n if name not in existing_clients:\n clients_to_create.append(Client(name=name))\n existing_clients.add(name)\n Client.objects.bulk_create(clients_to_create)\n\ndef bulk_create_huddles(users, huddle_user_list):\n # type: (Dict[Text, UserProfile], Iterable[Iterable[Text]]) -> None\n huddles = {} # type: Dict[Text, Huddle]\n huddles_by_id = {} # type: Dict[int, Huddle]\n huddle_set = set() # type: Set[Tuple[Text, Tuple[int, ...]]]\n existing_huddles = set() # type: Set[Text]\n for huddle in Huddle.objects.all():\n existing_huddles.add(huddle.huddle_hash)\n for huddle_users in huddle_user_list:\n user_ids = [users[email].id for email in huddle_users] # type: List[int]\n huddle_hash = get_huddle_hash(user_ids)\n if huddle_hash in existing_huddles:\n continue\n huddle_set.add((huddle_hash, tuple(sorted(user_ids))))\n\n huddles_to_create = [] # type: List[Huddle]\n for (huddle_hash, _) in huddle_set:\n huddles_to_create.append(Huddle(huddle_hash=huddle_hash))\n Huddle.objects.bulk_create(huddles_to_create)\n\n for huddle in Huddle.objects.all():\n huddles[huddle.huddle_hash] = huddle\n huddles_by_id[huddle.id] = huddle\n\n recipients_to_create = [] # type: List[Recipient]\n for (huddle_hash, _) in huddle_set:\n recipients_to_create.append(Recipient(type_id=huddles[huddle_hash].id, type=Recipient.HUDDLE))\n Recipient.objects.bulk_create(recipients_to_create)\n\n huddle_recipients = {} # type: Dict[Text, Recipient]\n for recipient in Recipient.objects.filter(type=Recipient.HUDDLE):\n huddle_recipients[huddles_by_id[recipient.type_id].huddle_hash] = recipient\n\n subscriptions_to_create = [] # type: List[Subscription]\n for (huddle_hash, huddle_user_ids) in huddle_set:\n for user_id in huddle_user_ids:\n subscriptions_to_create.append(Subscription(active=True, user_profile_id=user_id,\n 
recipient=huddle_recipients[huddle_hash]))\n Subscription.objects.bulk_create(subscriptions_to_create)\n", "path": "zerver/lib/bulk_create.py"}]} | 3,226 | 420 |
gh_patches_debug_19447 | rasdani/github-patches | git_diff | pypa__setuptools-1704 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
build_meta breaks some setup.py scripts (e.g. numpy and pygame)
Setup:
* Setuptools 40.6.2
* Python 3.7.2
Some build scripts use `sys.argv[0]` to change the working directory to the parent directory of setup.py. For example, the setup.py script of numpy-1.15.4 contains the following code:
```
src_path = os.path.dirname(os.path.abspath(sys.argv[0]))
old_path = os.getcwd()
os.chdir(src_path)
```
However, `sys.argv[0]` is an empty string when setup.py is called from setuptools.build_meta. Then `os.path.abspath()` returns the working directory and `os.path.dirname()` returns its parent directory. This changes the current directory to the parent directory of the current path and breaks relative paths in the setup.py script.
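To make the failure mode concrete, here is a minimal sketch (an illustration, not part of the original report) of what the numpy snippet computes when `sys.argv[0]` is empty; the example path is made up:

```
import os

argv0 = ""  # what sys.argv[0] looks like under setuptools.build_meta
src_path = os.path.dirname(os.path.abspath(argv0))

# Assuming the hook runs from /home/user/numpy-1.15.4 (hypothetical):
#   os.path.abspath("")   -> "/home/user/numpy-1.15.4"   (the current directory)
#   os.path.dirname(...)  -> "/home/user"                 (its parent)
# os.chdir(src_path) therefore leaves the build one level above the sources,
# and relative paths such as "numpy/version.py" no longer resolve.
print(src_path)
```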
While manually running `python3 setup.py bdist_wheel` succeeds, calling `setuptools.build_meta.build_wheel()` in the Python REPL gives the following error:
```
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/setuptools/build_meta.py", line 158, in build_wheel
_run_setup()
File "/usr/local/lib/python3.7/site-packages/setuptools/build_meta.py", line 85, in _run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 403, in <module>
File "setup.py", line 349, in setup_package
File "setup.py", line 147, in write_version_py
FileNotFoundError: [Errno 2] No such file or directory: 'numpy/version.py'
```
A similar error occurs for pygame-1.9.4. Maybe I overlooked something, but it seems that the problem originates from the way setuptools.build_meta calls setup.py.
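For reference, the `run_setup` method in the listing below does not spawn a separate `python setup.py ...` process; it compiles and `exec()`s the script inside the calling interpreter, so `sys.argv[0]` keeps whatever value the frontend process had (an empty string in an interactive session). A stripped-down sketch of that behaviour:

```
import sys

setup_script = "setup.py"
with open(setup_script) as f:
    code = f.read()

# The script runs in-process: sys.argv[0] is whatever the frontend had,
# not the path to setup.py, which is exactly what trips up numpy's chdir logic.
print(repr(sys.argv[0]))
exec(compile(code, setup_script, "exec"),
     {"__file__": setup_script, "__name__": "__main__"})
```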
</issue>
<code>
[start of setuptools/build_meta.py]
1 """A PEP 517 interface to setuptools
2
3 Previously, when a user or a command line tool (let's call it a "frontend")
4 needed to make a request of setuptools to take a certain action, for
5 example, generating a list of installation requirements, the frontend would
6 would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line.
7
8 PEP 517 defines a different method of interfacing with setuptools. Rather
9 than calling "setup.py" directly, the frontend should:
10
11 1. Set the current directory to the directory with a setup.py file
12 2. Import this module into a safe python interpreter (one in which
13 setuptools can potentially set global variables or crash hard).
14 3. Call one of the functions defined in PEP 517.
15
16 What each function does is defined in PEP 517. However, here is a "casual"
17 definition of the functions (this definition should not be relied on for
18 bug reports or API stability):
19
20 - `build_wheel`: build a wheel in the folder and return the basename
21 - `get_requires_for_build_wheel`: get the `setup_requires` to build
22 - `prepare_metadata_for_build_wheel`: get the `install_requires`
23 - `build_sdist`: build an sdist in the folder and return the basename
24 - `get_requires_for_build_sdist`: get the `setup_requires` to build
25
26 Again, this is not a formal definition! Just a "taste" of the module.
27 """
28
29 import io
30 import os
31 import sys
32 import tokenize
33 import shutil
34 import contextlib
35
36 import setuptools
37 import distutils
38 from setuptools.py31compat import TemporaryDirectory
39
40 from pkg_resources import parse_requirements
41 from pkg_resources.py31compat import makedirs
42
43 __all__ = ['get_requires_for_build_sdist',
44 'get_requires_for_build_wheel',
45 'prepare_metadata_for_build_wheel',
46 'build_wheel',
47 'build_sdist',
48 '__legacy__',
49 'SetupRequirementsError']
50
51 class SetupRequirementsError(BaseException):
52 def __init__(self, specifiers):
53 self.specifiers = specifiers
54
55
56 class Distribution(setuptools.dist.Distribution):
57 def fetch_build_eggs(self, specifiers):
58 specifier_list = list(map(str, parse_requirements(specifiers)))
59
60 raise SetupRequirementsError(specifier_list)
61
62 @classmethod
63 @contextlib.contextmanager
64 def patch(cls):
65 """
66 Replace
67 distutils.dist.Distribution with this class
68 for the duration of this context.
69 """
70 orig = distutils.core.Distribution
71 distutils.core.Distribution = cls
72 try:
73 yield
74 finally:
75 distutils.core.Distribution = orig
76
77
78 def _to_str(s):
79 """
80 Convert a filename to a string (on Python 2, explicitly
81 a byte string, not Unicode) as distutils checks for the
82 exact type str.
83 """
84 if sys.version_info[0] == 2 and not isinstance(s, str):
85 # Assume it's Unicode, as that's what the PEP says
86 # should be provided.
87 return s.encode(sys.getfilesystemencoding())
88 return s
89
90
91 def _get_immediate_subdirectories(a_dir):
92 return [name for name in os.listdir(a_dir)
93 if os.path.isdir(os.path.join(a_dir, name))]
94
95
96 def _file_with_extension(directory, extension):
97 matching = (
98 f for f in os.listdir(directory)
99 if f.endswith(extension)
100 )
101 file, = matching
102 return file
103
104
105 def _open_setup_script(setup_script):
106 if not os.path.exists(setup_script):
107 # Supply a default setup.py
108 return io.StringIO(u"from setuptools import setup; setup()")
109
110 return getattr(tokenize, 'open', open)(setup_script)
111
112
113 class _BuildMetaBackend(object):
114
115 def _fix_config(self, config_settings):
116 config_settings = config_settings or {}
117 config_settings.setdefault('--global-option', [])
118 return config_settings
119
120 def _get_build_requires(self, config_settings, requirements):
121 config_settings = self._fix_config(config_settings)
122
123 sys.argv = sys.argv[:1] + ['egg_info'] + \
124 config_settings["--global-option"]
125 try:
126 with Distribution.patch():
127 self.run_setup()
128 except SetupRequirementsError as e:
129 requirements += e.specifiers
130
131 return requirements
132
133 def run_setup(self, setup_script='setup.py'):
134 # Note that we can reuse our build directory between calls
135 # Correctness comes first, then optimization later
136 __file__ = setup_script
137 __name__ = '__main__'
138
139 with _open_setup_script(__file__) as f:
140 code = f.read().replace(r'\r\n', r'\n')
141
142 exec(compile(code, __file__, 'exec'), locals())
143
144 def get_requires_for_build_wheel(self, config_settings=None):
145 config_settings = self._fix_config(config_settings)
146 return self._get_build_requires(config_settings, requirements=['wheel'])
147
148 def get_requires_for_build_sdist(self, config_settings=None):
149 config_settings = self._fix_config(config_settings)
150 return self._get_build_requires(config_settings, requirements=[])
151
152 def prepare_metadata_for_build_wheel(self, metadata_directory,
153 config_settings=None):
154 sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
155 _to_str(metadata_directory)]
156 self.run_setup()
157
158 dist_info_directory = metadata_directory
159 while True:
160 dist_infos = [f for f in os.listdir(dist_info_directory)
161 if f.endswith('.dist-info')]
162
163 if (len(dist_infos) == 0 and
164 len(_get_immediate_subdirectories(dist_info_directory)) == 1):
165
166 dist_info_directory = os.path.join(
167 dist_info_directory, os.listdir(dist_info_directory)[0])
168 continue
169
170 assert len(dist_infos) == 1
171 break
172
173 # PEP 517 requires that the .dist-info directory be placed in the
174 # metadata_directory. To comply, we MUST copy the directory to the root
175 if dist_info_directory != metadata_directory:
176 shutil.move(
177 os.path.join(dist_info_directory, dist_infos[0]),
178 metadata_directory)
179 shutil.rmtree(dist_info_directory, ignore_errors=True)
180
181 return dist_infos[0]
182
183 def _build_with_temp_dir(self, setup_command, result_extension,
184 result_directory, config_settings):
185 config_settings = self._fix_config(config_settings)
186 result_directory = os.path.abspath(result_directory)
187
188 # Build in a temporary directory, then copy to the target.
189 makedirs(result_directory, exist_ok=True)
190 with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:
191 sys.argv = (sys.argv[:1] + setup_command +
192 ['--dist-dir', tmp_dist_dir] +
193 config_settings["--global-option"])
194 self.run_setup()
195
196 result_basename = _file_with_extension(tmp_dist_dir, result_extension)
197 result_path = os.path.join(result_directory, result_basename)
198 if os.path.exists(result_path):
199 # os.rename will fail overwriting on non-Unix.
200 os.remove(result_path)
201 os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)
202
203 return result_basename
204
205
206 def build_wheel(self, wheel_directory, config_settings=None,
207 metadata_directory=None):
208 return self._build_with_temp_dir(['bdist_wheel'], '.whl',
209 wheel_directory, config_settings)
210
211 def build_sdist(self, sdist_directory, config_settings=None):
212 return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],
213 '.tar.gz', sdist_directory,
214 config_settings)
215
216
217 class _BuildMetaLegacyBackend(_BuildMetaBackend):
218 """Compatibility backend for setuptools
219
220 This is a version of setuptools.build_meta that endeavors to maintain backwards
221 compatibility with pre-PEP 517 modes of invocation. It exists as a temporary
222 bridge between the old packaging mechanism and the new packaging mechanism,
223 and will eventually be removed.
224 """
225 def run_setup(self, setup_script='setup.py'):
226 # In order to maintain compatibility with scripts assuming that
227 # the setup.py script is in a directory on the PYTHONPATH, inject
228 # '' into sys.path. (pypa/setuptools#1642)
229 sys_path = list(sys.path) # Save the original path
230
231 script_dir = os.path.dirname(os.path.abspath(setup_script))
232 if script_dir not in sys.path:
233 sys.path.insert(0, script_dir)
234
235 try:
236 super(_BuildMetaLegacyBackend,
237 self).run_setup(setup_script=setup_script)
238 finally:
239 # While PEP 517 frontends should be calling each hook in a fresh
240 # subprocess according to the standard (and thus it should not be
241 # strictly necessary to restore the old sys.path), we'll restore
242 # the original path so that the path manipulation does not persist
243 # within the hook after run_setup is called.
244 sys.path[:] = sys_path
245
246 # The primary backend
247 _BACKEND = _BuildMetaBackend()
248
249 get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel
250 get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist
251 prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel
252 build_wheel = _BACKEND.build_wheel
253 build_sdist = _BACKEND.build_sdist
254
255
256 # The legacy backend
257 __legacy__ = _BuildMetaLegacyBackend()
258
[end of setuptools/build_meta.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py
--- a/setuptools/build_meta.py
+++ b/setuptools/build_meta.py
@@ -232,6 +232,12 @@
if script_dir not in sys.path:
sys.path.insert(0, script_dir)
+ # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to
+ # get the directory of the source code. They expect it to refer to the
+ # setup.py script.
+ sys_argv_0 = sys.argv[0]
+ sys.argv[0] = setup_script
+
try:
super(_BuildMetaLegacyBackend,
self).run_setup(setup_script=setup_script)
@@ -242,6 +248,7 @@
# the original path so that the path manipulation does not persist
# within the hook after run_setup is called.
sys.path[:] = sys_path
+ sys.argv[0] = sys_argv_0
# The primary backend
_BACKEND = _BuildMetaBackend()
| {"golden_diff": "diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py\n--- a/setuptools/build_meta.py\n+++ b/setuptools/build_meta.py\n@@ -232,6 +232,12 @@\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n \n+ # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to\n+ # get the directory of the source code. They expect it to refer to the\n+ # setup.py script.\n+ sys_argv_0 = sys.argv[0]\n+ sys.argv[0] = setup_script\n+\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n@@ -242,6 +248,7 @@\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n+ sys.argv[0] = sys_argv_0\n \n # The primary backend\n _BACKEND = _BuildMetaBackend()\n", "issue": "build_meta breaks some setup.py scripts (e.g. numpy and pygame)\nSetup:\r\n* Setuptools 40.6.2\r\n* Python 3.7.2\r\n\r\nSome build scripts use `sys.argv[0]` to change the working directory to the parent directory of setup.py. For example, the setup.py script of numpy-1.15.4 contains the following code: \r\n\r\n```\r\nsrc_path = os.path.dirname(os.path.abspath(sys.argv[0]))\r\nold_path = os.getcwd()\r\nos.chdir(src_path)\r\n```\r\n\r\nHowever,`sys.argv[0]` is an empty string, setup.py is called from setuptools.build_meta. Then `os.path.abspath()`, is the working directory and `os.path.dirname()` its parent directory. This changes the current directory to the parent directory of the current path and breaks relative paths in the setup.py script.\r\n\r\nWhile manually running `python3 setup.py bdist_wheel` succedes, calling `setuptools.build_meta.build_wheel()` in the Python REPL then gives the following error:\r\n```\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.7/site-packages/setuptools/build_meta.py\", line 158, in build_wheel\r\n _run_setup()\r\n File \"/usr/local/lib/python3.7/site-packages/setuptools/build_meta.py\", line 85, in _run_setup\r\n exec(compile(code, __file__, 'exec'), locals())\r\n File \"setup.py\", line 403, in <module>\r\n File \"setup.py\", line 349, in setup_package\r\n File \"setup.py\", line 147, in write_version_py\r\nFileNotFoundError: [Errno 2] No such file or directory: 'numpy/version.py'\r\n``` \r\n\r\nA similar error occurs for pygame-1.9.4. Maybe I overlooked something, but it seems that the problem originates from the way setuptools.build_meta calls setup.py. \nbuild_meta breaks some setup.py scripts (e.g. numpy and pygame)\nSetup:\r\n* Setuptools 40.6.2\r\n* Python 3.7.2\r\n\r\nSome build scripts use `sys.argv[0]` to change the working directory to the parent directory of setup.py. For example, the setup.py script of numpy-1.15.4 contains the following code: \r\n\r\n```\r\nsrc_path = os.path.dirname(os.path.abspath(sys.argv[0]))\r\nold_path = os.getcwd()\r\nos.chdir(src_path)\r\n```\r\n\r\nHowever,`sys.argv[0]` is an empty string, setup.py is called from setuptools.build_meta. Then `os.path.abspath()`, is the working directory and `os.path.dirname()` its parent directory. 
This changes the current directory to the parent directory of the current path and breaks relative paths in the setup.py script.\r\n\r\nWhile manually running `python3 setup.py bdist_wheel` succedes, calling `setuptools.build_meta.build_wheel()` in the Python REPL then gives the following error:\r\n```\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.7/site-packages/setuptools/build_meta.py\", line 158, in build_wheel\r\n _run_setup()\r\n File \"/usr/local/lib/python3.7/site-packages/setuptools/build_meta.py\", line 85, in _run_setup\r\n exec(compile(code, __file__, 'exec'), locals())\r\n File \"setup.py\", line 403, in <module>\r\n File \"setup.py\", line 349, in setup_package\r\n File \"setup.py\", line 147, in write_version_py\r\nFileNotFoundError: [Errno 2] No such file or directory: 'numpy/version.py'\r\n``` \r\n\r\nA similar error occurs for pygame-1.9.4. Maybe I overlooked something, but it seems that the problem originates from the way setuptools.build_meta calls setup.py. \n", "before_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! 
Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\nfrom setuptools.py31compat import TemporaryDirectory\n\nfrom pkg_resources import parse_requirements\nfrom pkg_resources.py31compat import makedirs\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n 
dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def _build_with_temp_dir(self, setup_command, result_extension,\n result_directory, config_settings):\n config_settings = self._fix_config(config_settings)\n result_directory = os.path.abspath(result_directory)\n\n # Build in a temporary directory, then copy to the target.\n makedirs(result_directory, exist_ok=True)\n with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n self.run_setup()\n\n result_basename = _file_with_extension(tmp_dist_dir, result_extension)\n result_path = os.path.join(result_directory, result_basename)\n if os.path.exists(result_path):\n # os.rename will fail overwriting on non-Unix.\n os.remove(result_path)\n os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)\n\n return result_basename\n\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n wheel_directory, config_settings)\n\n def build_sdist(self, sdist_directory, config_settings=None):\n return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],\n '.tar.gz', sdist_directory,\n config_settings)\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It exists as a temporary\n bridge between the old packaging mechanism and the new packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. 
(pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}]} | 4,074 | 238 |
gh_patches_debug_57165 | rasdani/github-patches | git_diff | cal-itp__benefits-922 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Spanish translations on all error pages are not showing up.
## To Reproduce
Steps to reproduce the behavior:
1. Go to benefits.calitp.org/
2. Click on Spanish
3. Go to benefits.calitp.org/asfakljsfasdf
4. See error
<img width="705" alt="image" src="https://user-images.githubusercontent.com/3673236/190244616-0867bdbe-cd77-477f-9fd0-0cf8f3b8625a.png">
Happening for 404 and 500
## Expected behavior
All the text should be in Spanish, not just the Footer and the Button.
## Screenshots
<img width="705" alt="image" src="https://user-images.githubusercontent.com/3673236/190244616-0867bdbe-cd77-477f-9fd0-0cf8f3b8625a.png">
## Desktop (please complete the following information)
Both
## Smartphone (please complete the following information)
All
## Additional context
Fix translations on error pages (keyword default arguments are evaluated only once, at import time; use None defaults and check for None at call time instead)
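The parenthetical hint is the key point: defaults such as `title=_("core.pages.server_error.title")` are evaluated once, when the module is imported, before any request has selected Spanish. A small self-contained sketch of the pitfall (the tiny `_` stand-in and the catalog strings are illustrative, not Django's API); the patch further down takes the equivalent `gettext_lazy` route, which defers the lookup the same way:

```
active_language = "en"
CATALOG = {"es": {"Error": "Error (es)"}}

def _(msg):
    # Stand-in for gettext: translate with whatever language is active *now*.
    return CATALOG.get(active_language, {}).get(msg, msg)

def error_page(title=_("Error")):
    # Default was computed at definition time, while "en" was active.
    return title

def error_page_fixed(title=None):
    if title is None:
        title = _("Error")  # deferred: looked up per call, i.e. per request
    return title

active_language = "es"          # the visitor switches to Spanish later
print(error_page())             # -> "Error"       (stale English default)
print(error_page_fixed())       # -> "Error (es)"  (translated at request time)
```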
</issue>
<code>
[start of benefits/core/viewmodels.py]
1 """
2 The core application: view model definitions for the root of the webapp.
3 """
4 from django.utils.translation import pgettext, gettext as _
5 from django.urls import reverse
6
7 from benefits.core import models
8
9 from . import session
10
11
12 class Button:
13 """
14 Represents a clickable button as styled <a> element (with optional label, optional transparent fallback text):
15 * classes: str, str[]
16 * id: str
17 * fallback_text: str
18 * label: str
19 * text: str
20 * url: str
21 * target: str
22 * rel: str
23 """
24
25 def __init__(self, **kwargs):
26 classes = kwargs.get("classes", [])
27 if isinstance(classes, str):
28 classes = classes.split()
29
30 self.classes = ["btn", "btn-lg"]
31 self.classes.extend(classes)
32 self.id = kwargs.get("id")
33 self.fallback_text = kwargs.get("fallback_text")
34 self.label = kwargs.get("label")
35 self.text = kwargs.get("text", "Button")
36 self.url = kwargs.get("url")
37 self.target = kwargs.get("target")
38 self.rel = kwargs.get("rel")
39
40 @staticmethod
41 def agency_contact_links(agency):
42 """Create link buttons for agency contact information."""
43 return [
44 Button.link(classes="agency", label=agency.long_name, text=agency.phone, url=f"tel:{agency.phone}"),
45 Button.link(
46 classes="agency", text=agency.info_url, url=agency.info_url, target="_blank", rel="noopener noreferrer"
47 ),
48 ]
49
50 @staticmethod
51 def home(request, text=None):
52 """Create a button back to this session's origin."""
53 if text is None:
54 text = _("core.buttons.return_home")
55
56 return Button.primary(text=text, url=session.origin(request))
57
58 @staticmethod
59 def link(**kwargs):
60 classes = kwargs.pop("classes", [])
61 if isinstance(classes, str):
62 classes = classes.split(" ")
63 classes.insert(0, "btn-link")
64 return Button(classes=classes, **kwargs)
65
66 @staticmethod
67 def primary(**kwargs):
68 classes = kwargs.pop("classes", [])
69 if isinstance(classes, str):
70 classes = classes.split(" ")
71 classes.insert(0, "btn-primary")
72 return Button(classes=classes, **kwargs)
73
74 @staticmethod
75 def outline_primary(**kwargs):
76 classes = kwargs.pop("classes", [])
77 if isinstance(classes, str):
78 classes = classes.split(" ")
79 classes.insert(0, "btn-outline-primary")
80 return Button(classes=classes, **kwargs)
81
82 @staticmethod
83 def login(**kwargs):
84 """Create a login.gov button, with a login.gov logo and fallback text"""
85 btn = Button.primary(fallback_text="Login.gov", id="login", **kwargs)
86 return btn
87
88 @staticmethod
89 def logout(**kwargs):
90 """Create a button that logs user out, with a login.gov button, with a login.gov logo and fallback text"""
91 btn = Button.primary(fallback_text="Login.gov", id="login", url=reverse("oauth:logout"), text="", **kwargs)
92 return btn
93
94
95 class Icon:
96 """Represents an icon."""
97
98 def __init__(self, icon, alt):
99 self.src = f"img/icon/{icon}.svg"
100 self.alt = alt
101
102
103 class Page:
104 """
105 Represents a page of content:
106 * title: str
107 * noimage: bool
108 * icon: core.viewmodels.Icon
109 * content_title: str
110 * paragraphs: str[]
111 * form: django.forms.Form
112 * forms: django.forms.Form[]
113 * button: core.viewmodels.Button
114 * buttons: core.viewmodels.Button[]
115 * classes: str[]
116 """
117
118 def __init__(self, **kwargs):
119 self.title = kwargs.get("title")
120 if self.title is None:
121 self.title = _("core.pages.index.prefix")
122 else:
123 self.title = f"{_('core.pages.index.prefix')}: {self.title}"
124
125 self.noimage = kwargs.get("noimage", False)
126 self.icon = kwargs.get("icon")
127 self.content_title = kwargs.get("content_title")
128 self.paragraphs = kwargs.get("paragraphs", [])
129 self.steps = kwargs.get("steps")
130
131 self.forms = kwargs.get("forms", [])
132 if not isinstance(self.forms, list):
133 self.forms = [self.forms]
134 if "form" in kwargs:
135 self.forms.append(kwargs.get("form"))
136
137 self.buttons = kwargs.get("buttons", [])
138 if not isinstance(self.buttons, list):
139 self.buttons = [self.buttons]
140 if "button" in kwargs:
141 self.buttons.append(kwargs.get("button"))
142
143 self.classes = kwargs.get("classes", [])
144 if not isinstance(self.classes, list):
145 self.classes = self.classes.split(" ")
146 if not self.noimage:
147 self.classes.append("with-image")
148
149 def context_dict(self):
150 """Return a context dict for a Page."""
151 return {"page": self}
152
153
154 class ErrorPage(Page):
155 """
156 Represents an error page:
157 * title: str
158 * icon: core.viewmodels.Icon
159 * content_title: str
160 * paragraphs: str[]
161 * button: core.viewmodels.Button
162 """
163
164 def __init__(self, **kwargs):
165 super().__init__(
166 title=kwargs.get("title", _("core.pages.error.title")),
167 icon=kwargs.get("icon", Icon("sadbus", pgettext("image alt text", "core.icons.sadbus"))),
168 content_title=kwargs.get("content_title", _("core.pages.error.title")),
169 paragraphs=kwargs.get("paragraphs", [_("core.pages.server_error.content_title")]),
170 button=kwargs.get("button"),
171 )
172
173 @staticmethod
174 def error(
175 title=_("core.pages.server_error.title"),
176 content_title=_("core.pages.server_error.title"),
177 paragraphs=[_("core.pages.server_error.p[0]"), _("core.pages.server_error.p[1]")],
178 **kwargs,
179 ):
180 """Create a new core.viewmodels.ErrorPage instance with defaults for a generic error."""
181 return ErrorPage(title=title, content_title=content_title, paragraphs=paragraphs, **kwargs)
182
183 @staticmethod
184 def not_found(
185 title=_("core.pages.not_found.title"),
186 content_title=_("core.pages.not_found.content_title"),
187 paragraphs=[_("core.pages.not_found.p[0]")],
188 **kwargs,
189 ):
190 """Create a new core.viewmodels.ErrorPage with defaults for a 404."""
191 path = kwargs.pop("path", None)
192 if path and title:
193 title = f"{title}: {path}"
194 elif path and not title:
195 title = path
196 return ErrorPage(title=title, content_title=content_title, paragraphs=paragraphs, **kwargs)
197
198
199 class PaymentProcessor:
200 """
201 Represents a core.models.PaymentProcessor:
202 * model: core.models.PaymentProcessor
203 * access_token_url: str
204 * element_id: str
205 * color: str
206 * [name: str]
207 * [loading_text: str]
208 """
209
210 def __init__(self, model, access_token_url, element_id, color, name=None, loading_text=_("core.buttons.wait")):
211 if isinstance(model, models.PaymentProcessor):
212 self.access_token_url = access_token_url
213 self.element_id = element_id
214 self.color = color
215 self.name = name or model.name
216 self.loading_text = loading_text
217 self.card_tokenize_url = model.card_tokenize_url
218 self.card_tokenize_func = model.card_tokenize_func
219 self.card_tokenize_env = model.card_tokenize_env
220
221 def context_dict(self):
222 """Return a context dict for a PaymentProcessor."""
223 return {"payment_processor": self}
224
225
226 class TransitAgency:
227 """
228 Represents a core.models.TransitAgency:
229 * model: core.models.TransitAgency
230 """
231
232 def __init__(self, model):
233 if isinstance(model, models.TransitAgency):
234 self.slug = model.slug
235 self.short_name = model.short_name
236 self.long_name = model.long_name
237 self.agency_id = model.agency_id
238 self.merchant_id = model.merchant_id
239 self.info_url = model.info_url
240 self.phone = model.phone
241
242 def context_dict(self):
243 """Return a context dict for a TransitAgency."""
244 return {"agency": self}
245
[end of benefits/core/viewmodels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benefits/core/viewmodels.py b/benefits/core/viewmodels.py
--- a/benefits/core/viewmodels.py
+++ b/benefits/core/viewmodels.py
@@ -1,7 +1,7 @@
"""
The core application: view model definitions for the root of the webapp.
"""
-from django.utils.translation import pgettext, gettext as _
+from django.utils.translation import pgettext, gettext_lazy as _
from django.urls import reverse
from benefits.core import models
| {"golden_diff": "diff --git a/benefits/core/viewmodels.py b/benefits/core/viewmodels.py\n--- a/benefits/core/viewmodels.py\n+++ b/benefits/core/viewmodels.py\n@@ -1,7 +1,7 @@\n \"\"\"\n The core application: view model definitions for the root of the webapp.\n \"\"\"\n-from django.utils.translation import pgettext, gettext as _\n+from django.utils.translation import pgettext, gettext_lazy as _\n from django.urls import reverse\n \n from benefits.core import models\n", "issue": "Bug: Spanish translations on all error pages are not showing up.\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to benefits.calitp.org/\r\n2. Click on Spanish\r\n3. Go to benefits.calitp.org/asfakljsfasdf\r\n4. See error\r\n\r\n<img width=\"705\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/190244616-0867bdbe-cd77-477f-9fd0-0cf8f3b8625a.png\">\r\n\r\nHappening for 404 and 500\r\n\r\n## Expected behavior\r\n\r\nAll the text should be in Spanish, not just the Footer and the Button.\r\n\r\n## Screenshots\r\n\r\n<img width=\"705\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/190244616-0867bdbe-cd77-477f-9fd0-0cf8f3b8625a.png\">\r\n\r\n## Desktop (please complete the following information)\r\n\r\nBoth \r\n\r\n## Smartphone (please complete the following information)\r\n\r\nAll\r\n\r\n## Additional context\r\n\r\nFix translations on error pages (default arguments set once, need to use None and check for None instead)\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThe core application: view model definitions for the root of the webapp.\n\"\"\"\nfrom django.utils.translation import pgettext, gettext as _\nfrom django.urls import reverse\n\nfrom benefits.core import models\n\nfrom . import session\n\n\nclass Button:\n \"\"\"\n Represents a clickable button as styled <a> element (with optional label, optional transparent fallback text):\n * classes: str, str[]\n * id: str\n * fallback_text: str\n * label: str\n * text: str\n * url: str\n * target: str\n * rel: str\n \"\"\"\n\n def __init__(self, **kwargs):\n classes = kwargs.get(\"classes\", [])\n if isinstance(classes, str):\n classes = classes.split()\n\n self.classes = [\"btn\", \"btn-lg\"]\n self.classes.extend(classes)\n self.id = kwargs.get(\"id\")\n self.fallback_text = kwargs.get(\"fallback_text\")\n self.label = kwargs.get(\"label\")\n self.text = kwargs.get(\"text\", \"Button\")\n self.url = kwargs.get(\"url\")\n self.target = kwargs.get(\"target\")\n self.rel = kwargs.get(\"rel\")\n\n @staticmethod\n def agency_contact_links(agency):\n \"\"\"Create link buttons for agency contact information.\"\"\"\n return [\n Button.link(classes=\"agency\", label=agency.long_name, text=agency.phone, url=f\"tel:{agency.phone}\"),\n Button.link(\n classes=\"agency\", text=agency.info_url, url=agency.info_url, target=\"_blank\", rel=\"noopener noreferrer\"\n ),\n ]\n\n @staticmethod\n def home(request, text=None):\n \"\"\"Create a button back to this session's origin.\"\"\"\n if text is None:\n text = _(\"core.buttons.return_home\")\n\n return Button.primary(text=text, url=session.origin(request))\n\n @staticmethod\n def link(**kwargs):\n classes = kwargs.pop(\"classes\", [])\n if isinstance(classes, str):\n classes = classes.split(\" \")\n classes.insert(0, \"btn-link\")\n return Button(classes=classes, **kwargs)\n\n @staticmethod\n def primary(**kwargs):\n classes = kwargs.pop(\"classes\", [])\n if isinstance(classes, str):\n classes = classes.split(\" \")\n classes.insert(0, \"btn-primary\")\n return 
Button(classes=classes, **kwargs)\n\n @staticmethod\n def outline_primary(**kwargs):\n classes = kwargs.pop(\"classes\", [])\n if isinstance(classes, str):\n classes = classes.split(\" \")\n classes.insert(0, \"btn-outline-primary\")\n return Button(classes=classes, **kwargs)\n\n @staticmethod\n def login(**kwargs):\n \"\"\"Create a login.gov button, with a login.gov logo and fallback text\"\"\"\n btn = Button.primary(fallback_text=\"Login.gov\", id=\"login\", **kwargs)\n return btn\n\n @staticmethod\n def logout(**kwargs):\n \"\"\"Create a button that logs user out, with a login.gov button, with a login.gov logo and fallback text\"\"\"\n btn = Button.primary(fallback_text=\"Login.gov\", id=\"login\", url=reverse(\"oauth:logout\"), text=\"\", **kwargs)\n return btn\n\n\nclass Icon:\n \"\"\"Represents an icon.\"\"\"\n\n def __init__(self, icon, alt):\n self.src = f\"img/icon/{icon}.svg\"\n self.alt = alt\n\n\nclass Page:\n \"\"\"\n Represents a page of content:\n * title: str\n * noimage: bool\n * icon: core.viewmodels.Icon\n * content_title: str\n * paragraphs: str[]\n * form: django.forms.Form\n * forms: django.forms.Form[]\n * button: core.viewmodels.Button\n * buttons: core.viewmodels.Button[]\n * classes: str[]\n \"\"\"\n\n def __init__(self, **kwargs):\n self.title = kwargs.get(\"title\")\n if self.title is None:\n self.title = _(\"core.pages.index.prefix\")\n else:\n self.title = f\"{_('core.pages.index.prefix')}: {self.title}\"\n\n self.noimage = kwargs.get(\"noimage\", False)\n self.icon = kwargs.get(\"icon\")\n self.content_title = kwargs.get(\"content_title\")\n self.paragraphs = kwargs.get(\"paragraphs\", [])\n self.steps = kwargs.get(\"steps\")\n\n self.forms = kwargs.get(\"forms\", [])\n if not isinstance(self.forms, list):\n self.forms = [self.forms]\n if \"form\" in kwargs:\n self.forms.append(kwargs.get(\"form\"))\n\n self.buttons = kwargs.get(\"buttons\", [])\n if not isinstance(self.buttons, list):\n self.buttons = [self.buttons]\n if \"button\" in kwargs:\n self.buttons.append(kwargs.get(\"button\"))\n\n self.classes = kwargs.get(\"classes\", [])\n if not isinstance(self.classes, list):\n self.classes = self.classes.split(\" \")\n if not self.noimage:\n self.classes.append(\"with-image\")\n\n def context_dict(self):\n \"\"\"Return a context dict for a Page.\"\"\"\n return {\"page\": self}\n\n\nclass ErrorPage(Page):\n \"\"\"\n Represents an error page:\n * title: str\n * icon: core.viewmodels.Icon\n * content_title: str\n * paragraphs: str[]\n * button: core.viewmodels.Button\n \"\"\"\n\n def __init__(self, **kwargs):\n super().__init__(\n title=kwargs.get(\"title\", _(\"core.pages.error.title\")),\n icon=kwargs.get(\"icon\", Icon(\"sadbus\", pgettext(\"image alt text\", \"core.icons.sadbus\"))),\n content_title=kwargs.get(\"content_title\", _(\"core.pages.error.title\")),\n paragraphs=kwargs.get(\"paragraphs\", [_(\"core.pages.server_error.content_title\")]),\n button=kwargs.get(\"button\"),\n )\n\n @staticmethod\n def error(\n title=_(\"core.pages.server_error.title\"),\n content_title=_(\"core.pages.server_error.title\"),\n paragraphs=[_(\"core.pages.server_error.p[0]\"), _(\"core.pages.server_error.p[1]\")],\n **kwargs,\n ):\n \"\"\"Create a new core.viewmodels.ErrorPage instance with defaults for a generic error.\"\"\"\n return ErrorPage(title=title, content_title=content_title, paragraphs=paragraphs, **kwargs)\n\n @staticmethod\n def not_found(\n title=_(\"core.pages.not_found.title\"),\n content_title=_(\"core.pages.not_found.content_title\"),\n 
paragraphs=[_(\"core.pages.not_found.p[0]\")],\n **kwargs,\n ):\n \"\"\"Create a new core.viewmodels.ErrorPage with defaults for a 404.\"\"\"\n path = kwargs.pop(\"path\", None)\n if path and title:\n title = f\"{title}: {path}\"\n elif path and not title:\n title = path\n return ErrorPage(title=title, content_title=content_title, paragraphs=paragraphs, **kwargs)\n\n\nclass PaymentProcessor:\n \"\"\"\n Represents a core.models.PaymentProcessor:\n * model: core.models.PaymentProcessor\n * access_token_url: str\n * element_id: str\n * color: str\n * [name: str]\n * [loading_text: str]\n \"\"\"\n\n def __init__(self, model, access_token_url, element_id, color, name=None, loading_text=_(\"core.buttons.wait\")):\n if isinstance(model, models.PaymentProcessor):\n self.access_token_url = access_token_url\n self.element_id = element_id\n self.color = color\n self.name = name or model.name\n self.loading_text = loading_text\n self.card_tokenize_url = model.card_tokenize_url\n self.card_tokenize_func = model.card_tokenize_func\n self.card_tokenize_env = model.card_tokenize_env\n\n def context_dict(self):\n \"\"\"Return a context dict for a PaymentProcessor.\"\"\"\n return {\"payment_processor\": self}\n\n\nclass TransitAgency:\n \"\"\"\n Represents a core.models.TransitAgency:\n * model: core.models.TransitAgency\n \"\"\"\n\n def __init__(self, model):\n if isinstance(model, models.TransitAgency):\n self.slug = model.slug\n self.short_name = model.short_name\n self.long_name = model.long_name\n self.agency_id = model.agency_id\n self.merchant_id = model.merchant_id\n self.info_url = model.info_url\n self.phone = model.phone\n\n def context_dict(self):\n \"\"\"Return a context dict for a TransitAgency.\"\"\"\n return {\"agency\": self}\n", "path": "benefits/core/viewmodels.py"}]} | 3,249 | 103 |
gh_patches_debug_16124 | rasdani/github-patches | git_diff | getnikola__nikola-3159 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Responsive youtube embed
I'm trying to set an embedded youtube link to 100% width in a .rst file. Is this possible?
I've tried:
```
.. youtube:: 3XsQCkF1SrE
:align: center
:width: 100%
```
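
For context, the directive declares `width` with docutils' `directives.positive_int`, which only accepts bare integers, so a value such as `100%` is rejected outright. A minimal sketch of a more permissive validator — the helper name here is invented purely for illustration:

```python
from docutils.parsers.rst import directives

def width_or_percentage(argument):
    """Accept a positive integer or a percentage string such as '100%'."""
    value = argument.strip()
    if value.endswith("%"):
        int(value[:-1])  # raises ValueError unless the prefix is numeric
        return value
    return directives.positive_int(value)
```

With a validator along these lines (or a pass-through such as `directives.unchanged`), the percentage value would reach the generated `<iframe>` width attribute unchanged.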
</issue>
<code>
[start of nikola/plugins/compile/rest/youtube.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2018 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """YouTube directive for reStructuredText."""
28
29 from docutils import nodes
30 from docutils.parsers.rst import Directive, directives
31 from nikola.plugins.compile.rest import _align_choice, _align_options_base
32
33 from nikola.plugin_categories import RestExtension
34
35
36 class Plugin(RestExtension):
37 """Plugin for the youtube directive."""
38
39 name = "rest_youtube"
40
41 def set_site(self, site):
42 """Set Nikola site."""
43 self.site = site
44 directives.register_directive('youtube', Youtube)
45 return super(Plugin, self).set_site(site)
46
47
48 CODE = """\
49 <div class="youtube-video{align}">
50 <iframe width="{width}" height="{height}"
51 src="https://www.youtube-nocookie.com/embed/{yid}?rel=0&wmode=transparent"
52 frameborder="0" allow="encrypted-media" allowfullscreen
53 ></iframe>
54 </div>"""
55
56
57 class Youtube(Directive):
58 """reST extension for inserting youtube embedded videos.
59
60 Usage:
61 .. youtube:: lyViVmaBQDg
62 :height: 400
63 :width: 600
64
65 """
66
67 has_content = True
68 required_arguments = 1
69 option_spec = {
70 "width": directives.positive_int,
71 "height": directives.positive_int,
72 "align": _align_choice
73 }
74
75 def run(self):
76 """Run the youtube directive."""
77 self.check_content()
78 options = {
79 'yid': self.arguments[0],
80 'width': 560,
81 'height': 315,
82 }
83 options.update(self.options)
84 if self.options.get('align') in _align_options_base:
85 options['align'] = ' align-' + self.options['align']
86 else:
87 options['align'] = ''
88 return [nodes.raw('', CODE.format(**options), format='html')]
89
90 def check_content(self):
91 """Check if content exists."""
92 if self.content: # pragma: no cover
93 raise self.warning("This directive does not accept content. The "
94 "'key=value' format for options is deprecated, "
95 "use ':key: value' instead")
96
[end of nikola/plugins/compile/rest/youtube.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/compile/rest/youtube.py b/nikola/plugins/compile/rest/youtube.py
--- a/nikola/plugins/compile/rest/youtube.py
+++ b/nikola/plugins/compile/rest/youtube.py
@@ -67,8 +67,8 @@
has_content = True
required_arguments = 1
option_spec = {
- "width": directives.positive_int,
- "height": directives.positive_int,
+ "width": directives.unchanged,
+ "height": directives.unchanged,
"align": _align_choice
}
@@ -80,7 +80,7 @@
'width': 560,
'height': 315,
}
- options.update(self.options)
+ options.update({k: v for k, v in self.options.items() if v})
if self.options.get('align') in _align_options_base:
options['align'] = ' align-' + self.options['align']
else:
| {"golden_diff": "diff --git a/nikola/plugins/compile/rest/youtube.py b/nikola/plugins/compile/rest/youtube.py\n--- a/nikola/plugins/compile/rest/youtube.py\n+++ b/nikola/plugins/compile/rest/youtube.py\n@@ -67,8 +67,8 @@\n has_content = True\n required_arguments = 1\n option_spec = {\n- \"width\": directives.positive_int,\n- \"height\": directives.positive_int,\n+ \"width\": directives.unchanged,\n+ \"height\": directives.unchanged,\n \"align\": _align_choice\n }\n \n@@ -80,7 +80,7 @@\n 'width': 560,\n 'height': 315,\n }\n- options.update(self.options)\n+ options.update({k: v for k, v in self.options.items() if v})\n if self.options.get('align') in _align_options_base:\n options['align'] = ' align-' + self.options['align']\n else:\n", "issue": "Responsive youtube embed\nI'm trying to set an embedded youtube link to 100% width in a .rst file. Is this possible?\r\n\r\nI've tried:\r\n\r\n```\r\n.. youtube:: 3XsQCkF1SrE\r\n :align: center\r\n :width: 100%\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2018 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"YouTube directive for reStructuredText.\"\"\"\n\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive, directives\nfrom nikola.plugins.compile.rest import _align_choice, _align_options_base\n\nfrom nikola.plugin_categories import RestExtension\n\n\nclass Plugin(RestExtension):\n \"\"\"Plugin for the youtube directive.\"\"\"\n\n name = \"rest_youtube\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.site = site\n directives.register_directive('youtube', Youtube)\n return super(Plugin, self).set_site(site)\n\n\nCODE = \"\"\"\\\n<div class=\"youtube-video{align}\">\n<iframe width=\"{width}\" height=\"{height}\"\nsrc=\"https://www.youtube-nocookie.com/embed/{yid}?rel=0&wmode=transparent\"\nframeborder=\"0\" allow=\"encrypted-media\" allowfullscreen\n></iframe>\n</div>\"\"\"\n\n\nclass Youtube(Directive):\n \"\"\"reST extension for inserting youtube embedded videos.\n\n Usage:\n .. 
youtube:: lyViVmaBQDg\n :height: 400\n :width: 600\n\n \"\"\"\n\n has_content = True\n required_arguments = 1\n option_spec = {\n \"width\": directives.positive_int,\n \"height\": directives.positive_int,\n \"align\": _align_choice\n }\n\n def run(self):\n \"\"\"Run the youtube directive.\"\"\"\n self.check_content()\n options = {\n 'yid': self.arguments[0],\n 'width': 560,\n 'height': 315,\n }\n options.update(self.options)\n if self.options.get('align') in _align_options_base:\n options['align'] = ' align-' + self.options['align']\n else:\n options['align'] = ''\n return [nodes.raw('', CODE.format(**options), format='html')]\n\n def check_content(self):\n \"\"\"Check if content exists.\"\"\"\n if self.content: # pragma: no cover\n raise self.warning(\"This directive does not accept content. The \"\n \"'key=value' format for options is deprecated, \"\n \"use ':key: value' instead\")\n", "path": "nikola/plugins/compile/rest/youtube.py"}]} | 1,518 | 224 |
gh_patches_debug_8918 | rasdani/github-patches | git_diff | localstack__localstack-7373 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: java.lang.IllegalArgumentException: argument type mismatch with RequestHandler
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The following request handler:
```java
public class LegalDocPublisher implements RequestHandler<SQSEvent, Void> {
@Override
public Void handleRequest(final SQSEvent event, final Context context) {
return null;
}
}
```
causes
```
2022-10-10T06:38:23.362 INFO --- [ Thread-244] l.s.a.lambda_executors : Error executing Lambda "arn:aws:lambda:us-east-2:000000000000:function:LegalDocPublisher": InvocationException: Lambda process returned error status code: 1. Result: . Output:
Exception in thread "main" java.lang.IllegalArgumentException: argument type mismatch
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at cloud.localstack.LambdaExecutor.main(LambdaExecutor.java:117) File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 1423, in do_execute
execute_result = lambda_function_callable(inv_context.event, context)
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 579, in execute
result = lambda_executors.EXECUTOR_LOCAL.execute_java_lambda(
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 1532, in execute_java_lambda
invocation_result = self._execute_in_custom_runtime(cmd, lambda_function=lambda_function)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 1366, in _execute_in_custom_runtime
raise InvocationException(
```
when execution is triggered.
This works fine until LocalStack 1.0.4.
### Expected Behavior
No exceptions.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
LocalStack is started as part of integration tests run by Maven, via `docker-maven-plugin`.
### Environment
```markdown
- OS: 20.04
- LocalStack: 1.2.0
```
### Anything else?
AWS SDK version: 1.12.271
</issue>
<code>
[start of localstack/services/awslambda/packages.py]
1 import os
2 import platform
3 import stat
4 from typing import List
5
6 from localstack.packages import DownloadInstaller, InstallTarget, Package, PackageInstaller
7 from localstack.packages.core import ArchiveDownloadAndExtractInstaller, SystemNotSupportedException
8 from localstack.utils.platform import get_arch
9
10 LAMBDA_RUNTIME_INIT_URL = "https://github.com/localstack/lambda-runtime-init/releases/download/{version}/aws-lambda-rie-{arch}"
11
12 LAMBDA_RUNTIME_DEFAULT_VERSION = "v0.1.8-pre"
13
14 # GO Lambda runtime
15 GO_RUNTIME_VERSION = "0.4.0"
16 GO_RUNTIME_DOWNLOAD_URL_TEMPLATE = "https://github.com/localstack/awslamba-go-runtime/releases/download/v{version}/awslamba-go-runtime-{version}-{os}-{arch}.tar.gz"
17
18
19 class AWSLambdaRuntimePackage(Package):
20 def __init__(self, default_version: str = LAMBDA_RUNTIME_DEFAULT_VERSION):
21 super().__init__(name="AwsLambda", default_version=default_version)
22
23 def get_versions(self) -> List[str]:
24 return [
25 "v0.1.8-pre",
26 "v0.1.7-pre",
27 "v0.1.6-pre",
28 "v0.1.5-pre",
29 "v0.1.4-pre",
30 "v0.1.1-pre",
31 "v0.1-pre",
32 ]
33
34 def _get_installer(self, version: str) -> PackageInstaller:
35 return AWSLambdaRuntimePackageInstaller(name="awslambda-runtime", version=version)
36
37
38 class AWSLambdaRuntimePackageInstaller(DownloadInstaller):
39 def _get_download_url(self) -> str:
40 arch = get_arch()
41 arch = "x86_64" if arch == "amd64" else arch
42 return LAMBDA_RUNTIME_INIT_URL.format(version=self.version, arch=arch)
43
44 def _install(self, target: InstallTarget) -> None:
45 super()._install(target)
46 install_location = self.get_executable_path()
47 st = os.stat(install_location)
48 os.chmod(install_location, mode=st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
49
50
51 class AWSLambdaGoRuntimePackage(Package):
52 def __init__(self, default_version: str = GO_RUNTIME_VERSION):
53 super().__init__(name="AwsLambdaGo", default_version=default_version)
54
55 def get_versions(self) -> List[str]:
56 return [GO_RUNTIME_VERSION]
57
58 def _get_installer(self, version: str) -> PackageInstaller:
59 return AWSLambdaGoRuntimePackageInstaller(name="awslamba-go-runtime", version=version)
60
61
62 class AWSLambdaGoRuntimePackageInstaller(ArchiveDownloadAndExtractInstaller):
63 def _get_download_url(self) -> str:
64 system = platform.system().lower()
65 arch = get_arch()
66
67 if system not in ["linux"]:
68 raise SystemNotSupportedException(f"Unsupported os {system} for awslambda-go-runtime")
69 if arch not in ["amd64", "arm64"]:
70 raise SystemNotSupportedException(f"Unsupported arch {arch} for awslambda-go-runtime")
71
72 return GO_RUNTIME_DOWNLOAD_URL_TEMPLATE.format(
73 version=GO_RUNTIME_VERSION,
74 os=system,
75 arch=arch,
76 )
77
78 def _get_install_marker_path(self, install_dir: str) -> str:
79 return os.path.join(install_dir, "aws-lambda-mock")
80
81 def _install(self, target: InstallTarget) -> None:
82 super()._install(target)
83
84 install_dir = self._get_install_dir(target)
85 install_location = self._get_install_marker_path(install_dir)
86 st = os.stat(install_location)
87 os.chmod(install_location, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
88
89 go_lambda_mockserver = os.path.join(install_dir, "mockserver")
90 st = os.stat(go_lambda_mockserver)
91 os.chmod(go_lambda_mockserver, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
92
93
94 # version of the Maven dependency with Java utility code
95 LOCALSTACK_MAVEN_VERSION = "0.2.21"
96 MAVEN_REPO_URL = "https://repo1.maven.org/maven2"
97 URL_LOCALSTACK_FAT_JAR = (
98 "{mvn_repo}/cloud/localstack/localstack-utils/{ver}/localstack-utils-{ver}-fat.jar"
99 )
100
101
102 class AWSLambdaJavaPackage(Package):
103 def __init__(self):
104 super().__init__("LambdaJavaLibs", "0.2.21")
105
106 def get_versions(self) -> List[str]:
107 return ["0.2.21"]
108
109 def _get_installer(self, version: str) -> PackageInstaller:
110 return AWSLambdaJavaPackageInstaller("lambda-java-libs", version)
111
112
113 class AWSLambdaJavaPackageInstaller(DownloadInstaller):
114 def _get_download_url(self) -> str:
115 return URL_LOCALSTACK_FAT_JAR.format(ver=self.version, mvn_repo=MAVEN_REPO_URL)
116
117
118 awslambda_runtime_package = AWSLambdaRuntimePackage()
119 awslambda_go_runtime_package = AWSLambdaGoRuntimePackage()
120 lambda_java_libs_package = AWSLambdaJavaPackage()
121
[end of localstack/services/awslambda/packages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/localstack/services/awslambda/packages.py b/localstack/services/awslambda/packages.py
--- a/localstack/services/awslambda/packages.py
+++ b/localstack/services/awslambda/packages.py
@@ -101,10 +101,10 @@
class AWSLambdaJavaPackage(Package):
def __init__(self):
- super().__init__("LambdaJavaLibs", "0.2.21")
+ super().__init__("LambdaJavaLibs", "0.2.22")
def get_versions(self) -> List[str]:
- return ["0.2.21"]
+ return ["0.2.22", "0.2.21"]
def _get_installer(self, version: str) -> PackageInstaller:
return AWSLambdaJavaPackageInstaller("lambda-java-libs", version)
| {"golden_diff": "diff --git a/localstack/services/awslambda/packages.py b/localstack/services/awslambda/packages.py\n--- a/localstack/services/awslambda/packages.py\n+++ b/localstack/services/awslambda/packages.py\n@@ -101,10 +101,10 @@\n \n class AWSLambdaJavaPackage(Package):\n def __init__(self):\n- super().__init__(\"LambdaJavaLibs\", \"0.2.21\")\n+ super().__init__(\"LambdaJavaLibs\", \"0.2.22\")\n \n def get_versions(self) -> List[str]:\n- return [\"0.2.21\"]\n+ return [\"0.2.22\", \"0.2.21\"]\n \n def _get_installer(self, version: str) -> PackageInstaller:\n return AWSLambdaJavaPackageInstaller(\"lambda-java-libs\", version)\n", "issue": "bug: java.lang.IllegalArgumentException: argument type mismatch with RequestHandler\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nThe following request handler:\r\n\r\n```java\r\npublic class LegalDocPublisher implements RequestHandler<SQSEvent, Void> {\r\n @Override\r\n public Void handleRequest(final SQSEvent event, final Context context) {\r\n return null;\r\n }\r\n}\r\n```\r\n\r\ncauses \r\n\r\n```\r\n2022-10-10T06:38:23.362 INFO --- [ Thread-244] l.s.a.lambda_executors : Error executing Lambda \"arn:aws:lambda:us-east-2:000000000000:function:LegalDocPublisher\": InvocationException: Lambda process returned error status code: 1. Result: . Output:\r\nException in thread \"main\" java.lang.IllegalArgumentException: argument type mismatch\r\n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)\r\n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)\r\n at java.base/java.lang.reflect.Method.invoke(Unknown Source)\r\n at cloud.localstack.LambdaExecutor.main(LambdaExecutor.java:117) File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 1423, in do_execute\r\n execute_result = lambda_function_callable(inv_context.event, context)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_api.py\", line 579, in execute\r\n result = lambda_executors.EXECUTOR_LOCAL.execute_java_lambda(\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 1532, in execute_java_lambda\r\n invocation_result = self._execute_in_custom_runtime(cmd, lambda_function=lambda_function)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 1366, in _execute_in_custom_runtime\r\n raise InvocationException(\r\n```\r\n\r\nwhen execution is triggered.\r\n\r\nThis works fine until LocalStack 1.0.4.\r\n\r\n### Expected Behavior\r\n\r\nNo exceptions.\r\n\r\n### How are you starting LocalStack?\r\n\r\nCustom (please describe below)\r\n\r\n### Steps To Reproduce\r\n\r\nLocalStack is started as part of integration tests run by Maven, via `docker-maven-plugin`.\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: 20.04\r\n- LocalStack: 1.2.0\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nAWS SDK version: 1.12.271\n", "before_files": [{"content": "import os\nimport platform\nimport stat\nfrom typing import List\n\nfrom localstack.packages import DownloadInstaller, InstallTarget, Package, PackageInstaller\nfrom localstack.packages.core import ArchiveDownloadAndExtractInstaller, SystemNotSupportedException\nfrom localstack.utils.platform import get_arch\n\nLAMBDA_RUNTIME_INIT_URL = 
\"https://github.com/localstack/lambda-runtime-init/releases/download/{version}/aws-lambda-rie-{arch}\"\n\nLAMBDA_RUNTIME_DEFAULT_VERSION = \"v0.1.8-pre\"\n\n# GO Lambda runtime\nGO_RUNTIME_VERSION = \"0.4.0\"\nGO_RUNTIME_DOWNLOAD_URL_TEMPLATE = \"https://github.com/localstack/awslamba-go-runtime/releases/download/v{version}/awslamba-go-runtime-{version}-{os}-{arch}.tar.gz\"\n\n\nclass AWSLambdaRuntimePackage(Package):\n def __init__(self, default_version: str = LAMBDA_RUNTIME_DEFAULT_VERSION):\n super().__init__(name=\"AwsLambda\", default_version=default_version)\n\n def get_versions(self) -> List[str]:\n return [\n \"v0.1.8-pre\",\n \"v0.1.7-pre\",\n \"v0.1.6-pre\",\n \"v0.1.5-pre\",\n \"v0.1.4-pre\",\n \"v0.1.1-pre\",\n \"v0.1-pre\",\n ]\n\n def _get_installer(self, version: str) -> PackageInstaller:\n return AWSLambdaRuntimePackageInstaller(name=\"awslambda-runtime\", version=version)\n\n\nclass AWSLambdaRuntimePackageInstaller(DownloadInstaller):\n def _get_download_url(self) -> str:\n arch = get_arch()\n arch = \"x86_64\" if arch == \"amd64\" else arch\n return LAMBDA_RUNTIME_INIT_URL.format(version=self.version, arch=arch)\n\n def _install(self, target: InstallTarget) -> None:\n super()._install(target)\n install_location = self.get_executable_path()\n st = os.stat(install_location)\n os.chmod(install_location, mode=st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)\n\n\nclass AWSLambdaGoRuntimePackage(Package):\n def __init__(self, default_version: str = GO_RUNTIME_VERSION):\n super().__init__(name=\"AwsLambdaGo\", default_version=default_version)\n\n def get_versions(self) -> List[str]:\n return [GO_RUNTIME_VERSION]\n\n def _get_installer(self, version: str) -> PackageInstaller:\n return AWSLambdaGoRuntimePackageInstaller(name=\"awslamba-go-runtime\", version=version)\n\n\nclass AWSLambdaGoRuntimePackageInstaller(ArchiveDownloadAndExtractInstaller):\n def _get_download_url(self) -> str:\n system = platform.system().lower()\n arch = get_arch()\n\n if system not in [\"linux\"]:\n raise SystemNotSupportedException(f\"Unsupported os {system} for awslambda-go-runtime\")\n if arch not in [\"amd64\", \"arm64\"]:\n raise SystemNotSupportedException(f\"Unsupported arch {arch} for awslambda-go-runtime\")\n\n return GO_RUNTIME_DOWNLOAD_URL_TEMPLATE.format(\n version=GO_RUNTIME_VERSION,\n os=system,\n arch=arch,\n )\n\n def _get_install_marker_path(self, install_dir: str) -> str:\n return os.path.join(install_dir, \"aws-lambda-mock\")\n\n def _install(self, target: InstallTarget) -> None:\n super()._install(target)\n\n install_dir = self._get_install_dir(target)\n install_location = self._get_install_marker_path(install_dir)\n st = os.stat(install_location)\n os.chmod(install_location, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)\n\n go_lambda_mockserver = os.path.join(install_dir, \"mockserver\")\n st = os.stat(go_lambda_mockserver)\n os.chmod(go_lambda_mockserver, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)\n\n\n# version of the Maven dependency with Java utility code\nLOCALSTACK_MAVEN_VERSION = \"0.2.21\"\nMAVEN_REPO_URL = \"https://repo1.maven.org/maven2\"\nURL_LOCALSTACK_FAT_JAR = (\n \"{mvn_repo}/cloud/localstack/localstack-utils/{ver}/localstack-utils-{ver}-fat.jar\"\n)\n\n\nclass AWSLambdaJavaPackage(Package):\n def __init__(self):\n super().__init__(\"LambdaJavaLibs\", \"0.2.21\")\n\n def get_versions(self) -> List[str]:\n return [\"0.2.21\"]\n\n def _get_installer(self, version: str) -> PackageInstaller:\n return 
AWSLambdaJavaPackageInstaller(\"lambda-java-libs\", version)\n\n\nclass AWSLambdaJavaPackageInstaller(DownloadInstaller):\n def _get_download_url(self) -> str:\n return URL_LOCALSTACK_FAT_JAR.format(ver=self.version, mvn_repo=MAVEN_REPO_URL)\n\n\nawslambda_runtime_package = AWSLambdaRuntimePackage()\nawslambda_go_runtime_package = AWSLambdaGoRuntimePackage()\nlambda_java_libs_package = AWSLambdaJavaPackage()\n", "path": "localstack/services/awslambda/packages.py"}]} | 2,498 | 191 |
gh_patches_debug_334 | rasdani/github-patches | git_diff | searx__searx-2391 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SUGGESTION: Contacting the instance's maintainer(s)
Hello, so I use searx, but I personally think that there should be any way to contact the maintainer(s) of a public instance (email for example). It is harder to trust this awesome service if there is no way to contact the maintainer(s).
</issue>
<code>
[start of searx/brand.py]
1 GIT_URL = 'https://github.com/searx/searx'
2 GIT_BRANCH = 'master'
3 ISSUE_URL = 'https://github.com/searx/searx/issues'
4 SEARX_URL = 'https://searx.me'
5 DOCS_URL = 'https://searx.github.io/searx'
6 PUBLIC_INSTANCES = 'https://searx.space'
7
[end of searx/brand.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/brand.py b/searx/brand.py
--- a/searx/brand.py
+++ b/searx/brand.py
@@ -4,3 +4,4 @@
SEARX_URL = 'https://searx.me'
DOCS_URL = 'https://searx.github.io/searx'
PUBLIC_INSTANCES = 'https://searx.space'
+CONTACT_URL = 'mailto:[email protected]'
| {"golden_diff": "diff --git a/searx/brand.py b/searx/brand.py\n--- a/searx/brand.py\n+++ b/searx/brand.py\n@@ -4,3 +4,4 @@\n SEARX_URL = 'https://searx.me'\n DOCS_URL = 'https://searx.github.io/searx'\n PUBLIC_INSTANCES = 'https://searx.space'\n+CONTACT_URL = 'mailto:[email protected]'\n", "issue": "SUGGESTION: Contacting the instance's maintainer(s)\nHello, so I use searx, but I personally think that there should be any way to contact the maintainer(s) of a public instance (email for example). It is harder to trust this awesome service if there is no way to contact the maintainer(s). \r\n\n", "before_files": [{"content": "GIT_URL = 'https://github.com/searx/searx'\nGIT_BRANCH = 'master'\nISSUE_URL = 'https://github.com/searx/searx/issues'\nSEARX_URL = 'https://searx.me'\nDOCS_URL = 'https://searx.github.io/searx'\nPUBLIC_INSTANCES = 'https://searx.space'\n", "path": "searx/brand.py"}]} | 694 | 98 |
gh_patches_debug_33935 | rasdani/github-patches | git_diff | systemd__mkosi-1771 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support libarchive cpio
Hi,
please support libarchive cpio(bsdcpio), which does not have `--reproducible`
https://lists.gnu.org/archive/html/bug-cpio/2014-11/msg00000.html
https://github.com/systemd/mkosi/blob/2c45d0effb1871750a2e9f897510d2745cb6d6b9/mkosi/__init__.py#L3489
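
For illustration, a rough sketch of how the two cpio implementations could be accommodated — the probe below is not mkosi code and assumes GNU cpio advertises `--reproducible` in its `--help` text while bsdcpio does not:

```python
import shutil
import subprocess

def cpio_binary() -> str:
    # Prefer GNU cpio when it is installed under an alternate name (e.g. "gcpio"),
    # mirroring how GNU tar is preferred over BSD tar elsewhere in the codebase.
    return "gcpio" if shutil.which("gcpio") else "cpio"

def reproducible_flag(binary: str) -> list:
    # Only pass --reproducible when the selected binary understands it.
    proc = subprocess.run([binary, "--help"], capture_output=True, text=True, check=False)
    return ["--reproducible"] if "--reproducible" in (proc.stdout + proc.stderr) else []
```

`make_cpio()` could then splice `reproducible_flag(...)` into its argument list instead of hard-coding the flag.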
</issue>
<code>
[start of mkosi/archive.py]
1 # SPDX-License-Identifier: LGPL-2.1+
2
3 import os
4 from collections.abc import Iterable
5 from pathlib import Path
6 from typing import Optional
7
8 from mkosi.log import log_step
9 from mkosi.run import bwrap, finalize_passwd_mounts
10 from mkosi.util import tar_binary
11
12
13 def tar_exclude_apivfs_tmp() -> list[str]:
14 return [
15 "--exclude", "./dev/*",
16 "--exclude", "./proc/*",
17 "--exclude", "./sys/*",
18 "--exclude", "./tmp/*",
19 "--exclude", "./run/*",
20 "--exclude", "./var/tmp/*",
21 ]
22
23
24 def make_tar(src: Path, dst: Path) -> None:
25 log_step(f"Creating tar archive {dst}…")
26 bwrap(
27 [
28 tar_binary(),
29 "--create",
30 "--file", dst,
31 "--directory", src,
32 "--acls",
33 "--selinux",
34 "--xattrs",
35 "--sparse",
36 "--force-local",
37 *tar_exclude_apivfs_tmp(),
38 ".",
39 ],
40 # Make sure tar uses user/group information from the root directory instead of the host.
41 options=finalize_passwd_mounts(src) if (src / "etc/passwd").exists() else [],
42 )
43
44
45 def extract_tar(src: Path, dst: Path, log: bool = True) -> None:
46 if log:
47 log_step(f"Extracting tar archive {src}…")
48 bwrap(
49 [
50 tar_binary(),
51 "--extract",
52 "--file", src,
53 "--directory", dst,
54 "--keep-directory-symlink",
55 "--no-overwrite-dir",
56 "--same-permissions",
57 "--same-owner" if (dst / "etc/passwd").exists() else "--numeric-owner",
58 "--same-order",
59 "--acls",
60 "--selinux",
61 "--xattrs",
62 "--force-local",
63 *tar_exclude_apivfs_tmp(),
64 ],
65 # Make sure tar uses user/group information from the root directory instead of the host.
66 options=finalize_passwd_mounts(dst) if (dst / "etc/passwd").exists() else [],
67 )
68
69
70 def make_cpio(src: Path, dst: Path, files: Optional[Iterable[Path]] = None) -> None:
71 if not files:
72 files = src.rglob("*")
73
74 log_step(f"Creating cpio archive {dst}…")
75 bwrap(
76 [
77 "cpio",
78 "--create",
79 "--reproducible",
80 "--null",
81 "--format=newc",
82 "--quiet",
83 "--directory", src,
84 "-O", dst,
85 ],
86 input="\0".join(os.fspath(f.relative_to(src)) for f in files),
87 # Make sure tar uses user/group information from the root directory instead of the host.
88 options=finalize_passwd_mounts(dst),
89 )
90
[end of mkosi/archive.py]
[start of mkosi/util.py]
1 # SPDX-License-Identifier: LGPL-2.1+
2
3 import ast
4 import contextlib
5 import copy
6 import enum
7 import errno
8 import fcntl
9 import functools
10 import importlib
11 import itertools
12 import logging
13 import os
14 import pwd
15 import re
16 import resource
17 import shutil
18 import stat
19 import sys
20 import tempfile
21 from collections.abc import Iterable, Iterator, Mapping, Sequence
22 from pathlib import Path
23 from typing import Any, Callable, Optional, TypeVar
24
25 T = TypeVar("T")
26 V = TypeVar("V")
27
28
29 def dictify(f: Callable[..., Iterator[tuple[T, V]]]) -> Callable[..., dict[T, V]]:
30 def wrapper(*args: Any, **kwargs: Any) -> dict[T, V]:
31 return dict(f(*args, **kwargs))
32
33 return functools.update_wrapper(wrapper, f)
34
35
36 @dictify
37 def read_os_release() -> Iterator[tuple[str, str]]:
38 try:
39 filename = "/etc/os-release"
40 f = open(filename)
41 except FileNotFoundError:
42 filename = "/usr/lib/os-release"
43 f = open(filename)
44
45 with f:
46 for line_number, line in enumerate(f, start=1):
47 line = line.rstrip()
48 if not line or line.startswith("#"):
49 continue
50 if (m := re.match(r"([A-Z][A-Z_0-9]+)=(.*)", line)):
51 name, val = m.groups()
52 if val and val[0] in "\"'":
53 val = ast.literal_eval(val)
54 yield name, val
55 else:
56 print(f"{filename}:{line_number}: bad line {line!r}", file=sys.stderr)
57
58
59 def format_rlimit(rlimit: int) -> str:
60 limits = resource.getrlimit(rlimit)
61 soft = "infinity" if limits[0] == resource.RLIM_INFINITY else str(limits[0])
62 hard = "infinity" if limits[1] == resource.RLIM_INFINITY else str(limits[1])
63 return f"{soft}:{hard}"
64
65
66 def sort_packages(packages: Iterable[str]) -> list[str]:
67 """Sorts packages: normal first, paths second, conditional third"""
68
69 m = {"(": 2, "/": 1}
70 sort = lambda name: (m.get(name[0], 0), name)
71 return sorted(packages, key=sort)
72
73
74 def flatten(lists: Iterable[Iterable[T]]) -> list[T]:
75 """Flatten a sequence of sequences into a single list."""
76 return list(itertools.chain.from_iterable(lists))
77
78
79 class InvokingUser:
80 @staticmethod
81 def _uid_from_env() -> Optional[int]:
82 uid = os.getenv("SUDO_UID") or os.getenv("PKEXEC_UID")
83 return int(uid) if uid is not None else None
84
85 @classmethod
86 def uid(cls) -> int:
87 return cls._uid_from_env() or os.getuid()
88
89 @classmethod
90 def uid_gid(cls) -> tuple[int, int]:
91 if (uid := cls._uid_from_env()) is not None:
92 gid = int(os.getenv("SUDO_GID", pwd.getpwuid(uid).pw_gid))
93 return uid, gid
94 return os.getuid(), os.getgid()
95
96 @classmethod
97 def name(cls) -> str:
98 return pwd.getpwuid(cls.uid()).pw_name
99
100 @classmethod
101 def home(cls) -> Path:
102 return Path(f"~{cls.name()}").expanduser()
103
104 @classmethod
105 def is_running_user(cls) -> bool:
106 return cls.uid() == os.getuid()
107
108
109 @contextlib.contextmanager
110 def chdir(directory: Path) -> Iterator[None]:
111 old = Path.cwd()
112
113 if old == directory:
114 yield
115 return
116
117 try:
118 os.chdir(directory)
119 yield
120 finally:
121 os.chdir(old)
122
123
124 def qemu_check_kvm_support(log: bool) -> bool:
125 # some CI runners may present a non-working KVM device
126 try:
127 os.close(os.open("/dev/kvm", os.O_RDWR|os.O_CLOEXEC))
128 except OSError as e:
129 if e.errno == errno.ENOENT:
130 if log:
131 logging.warning("/dev/kvm not found. Not using KVM acceleration.")
132 return False
133 elif e.errno in (errno.EPERM, errno.EACCES):
134 if log:
135 logging.warning("Permission denied to access /dev/kvm. Not using KVM acceleration")
136 return False
137
138 raise e
139
140 return True
141
142
143 def qemu_check_vsock_support(log: bool) -> bool:
144 try:
145 os.close(os.open("/dev/vhost-vsock", os.O_RDWR|os.O_CLOEXEC))
146 except OSError as e:
147 if e.errno == errno.ENOENT:
148 if log:
149 logging.warning("/dev/vhost-vsock not found. Not adding a vsock device to the virtual machine.")
150 return False
151 elif e.errno in (errno.EPERM, errno.EACCES):
152 if log:
153 logging.warning("Permission denied to access /dev/vhost-vsock. Not adding a vsock device to the virtual machine.")
154 return False
155
156 raise e
157
158 return True
159
160
161 def format_bytes(num_bytes: int) -> str:
162 if num_bytes >= 1024**3:
163 return f"{num_bytes/1024**3 :0.1f}G"
164 if num_bytes >= 1024**2:
165 return f"{num_bytes/1024**2 :0.1f}M"
166 if num_bytes >= 1024:
167 return f"{num_bytes/1024 :0.1f}K"
168
169 return f"{num_bytes}B"
170
171
172 def make_executable(path: Path) -> None:
173 st = path.stat()
174 os.chmod(path, st.st_mode | stat.S_IEXEC)
175
176
177 def try_import(module: str) -> None:
178 try:
179 importlib.import_module(module)
180 except ModuleNotFoundError:
181 pass
182
183
184 @contextlib.contextmanager
185 def flock(path: Path) -> Iterator[int]:
186 fd = os.open(path, os.O_CLOEXEC|os.O_RDONLY)
187 try:
188 fcntl.fcntl(fd, fcntl.FD_CLOEXEC)
189 fcntl.flock(fd, fcntl.LOCK_EX)
190 yield fd
191 finally:
192 os.close(fd)
193
194
195 @contextlib.contextmanager
196 def scopedenv(env: Mapping[str, Any]) -> Iterator[None]:
197 old = copy.copy(os.environ)
198 os.environ |= env
199
200 # python caches the default temporary directory so when we might modify TMPDIR we have to make sure it
201 # gets recalculated (see https://docs.python.org/3/library/tempfile.html#tempfile.tempdir).
202 tempfile.tempdir = None
203
204 try:
205 yield
206 finally:
207 os.environ = old
208 tempfile.tempdir = None
209
210
211 class StrEnum(enum.Enum):
212 def __str__(self) -> str:
213 assert isinstance(self.value, str)
214 return self.value
215
216 # Used by enum.auto() to get the next value.
217 @staticmethod
218 def _generate_next_value_(name: str, start: int, count: int, last_values: Sequence[str]) -> str:
219 return name.replace("_", "-")
220
221 @classmethod
222 def values(cls) -> list[str]:
223 return list(map(str, cls))
224
225
226 def tar_binary() -> str:
227 # Some distros (Mandriva) install BSD tar as "tar", hence prefer
228 # "gtar" if it exists, which should be GNU tar wherever it exists.
229 # We are interested in exposing same behaviour everywhere hence
230 # it's preferable to use the same implementation of tar
231 # everywhere. In particular given the limited/different SELinux
232 # support in BSD tar and the different command line syntax
233 # compared to GNU tar.
234 return "gtar" if shutil.which("gtar") else "tar"
235
236
237 def one_zero(b: bool) -> str:
238 return "1" if b else "0"
239
240
241 @contextlib.contextmanager
242 def umask(mask: int) -> Iterator[None]:
243 old = os.umask(mask)
244 try:
245 yield
246 finally:
247 os.umask(old)
248
[end of mkosi/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkosi/archive.py b/mkosi/archive.py
--- a/mkosi/archive.py
+++ b/mkosi/archive.py
@@ -1,13 +1,28 @@
# SPDX-License-Identifier: LGPL-2.1+
import os
+import shutil
from collections.abc import Iterable
from pathlib import Path
from typing import Optional
from mkosi.log import log_step
from mkosi.run import bwrap, finalize_passwd_mounts
-from mkosi.util import tar_binary
+
+
+def tar_binary() -> str:
+ # Some distros (Mandriva) install BSD tar as "tar", hence prefer
+ # "gtar" if it exists, which should be GNU tar wherever it exists.
+ # We are interested in exposing same behaviour everywhere hence
+ # it's preferable to use the same implementation of tar
+ # everywhere. In particular given the limited/different SELinux
+ # support in BSD tar and the different command line syntax
+ # compared to GNU tar.
+ return "gtar" if shutil.which("gtar") else "tar"
+
+
+def cpio_binary() -> str:
+ return "gcpio" if shutil.which("gcpio") else "cpio"
def tar_exclude_apivfs_tmp() -> list[str]:
@@ -74,7 +89,7 @@
log_step(f"Creating cpio archive {dst}…")
bwrap(
[
- "cpio",
+ cpio_binary(),
"--create",
"--reproducible",
"--null",
diff --git a/mkosi/util.py b/mkosi/util.py
--- a/mkosi/util.py
+++ b/mkosi/util.py
@@ -14,7 +14,6 @@
import pwd
import re
import resource
-import shutil
import stat
import sys
import tempfile
@@ -223,17 +222,6 @@
return list(map(str, cls))
-def tar_binary() -> str:
- # Some distros (Mandriva) install BSD tar as "tar", hence prefer
- # "gtar" if it exists, which should be GNU tar wherever it exists.
- # We are interested in exposing same behaviour everywhere hence
- # it's preferable to use the same implementation of tar
- # everywhere. In particular given the limited/different SELinux
- # support in BSD tar and the different command line syntax
- # compared to GNU tar.
- return "gtar" if shutil.which("gtar") else "tar"
-
-
def one_zero(b: bool) -> str:
return "1" if b else "0"
| {"golden_diff": "diff --git a/mkosi/archive.py b/mkosi/archive.py\n--- a/mkosi/archive.py\n+++ b/mkosi/archive.py\n@@ -1,13 +1,28 @@\n # SPDX-License-Identifier: LGPL-2.1+\n \n import os\n+import shutil\n from collections.abc import Iterable\n from pathlib import Path\n from typing import Optional\n \n from mkosi.log import log_step\n from mkosi.run import bwrap, finalize_passwd_mounts\n-from mkosi.util import tar_binary\n+\n+\n+def tar_binary() -> str:\n+ # Some distros (Mandriva) install BSD tar as \"tar\", hence prefer\n+ # \"gtar\" if it exists, which should be GNU tar wherever it exists.\n+ # We are interested in exposing same behaviour everywhere hence\n+ # it's preferable to use the same implementation of tar\n+ # everywhere. In particular given the limited/different SELinux\n+ # support in BSD tar and the different command line syntax\n+ # compared to GNU tar.\n+ return \"gtar\" if shutil.which(\"gtar\") else \"tar\"\n+\n+\n+def cpio_binary() -> str:\n+ return \"gcpio\" if shutil.which(\"gcpio\") else \"cpio\"\n \n \n def tar_exclude_apivfs_tmp() -> list[str]:\n@@ -74,7 +89,7 @@\n log_step(f\"Creating cpio archive {dst}\u2026\")\n bwrap(\n [\n- \"cpio\",\n+ cpio_binary(),\n \"--create\",\n \"--reproducible\",\n \"--null\",\ndiff --git a/mkosi/util.py b/mkosi/util.py\n--- a/mkosi/util.py\n+++ b/mkosi/util.py\n@@ -14,7 +14,6 @@\n import pwd\n import re\n import resource\n-import shutil\n import stat\n import sys\n import tempfile\n@@ -223,17 +222,6 @@\n return list(map(str, cls))\n \n \n-def tar_binary() -> str:\n- # Some distros (Mandriva) install BSD tar as \"tar\", hence prefer\n- # \"gtar\" if it exists, which should be GNU tar wherever it exists.\n- # We are interested in exposing same behaviour everywhere hence\n- # it's preferable to use the same implementation of tar\n- # everywhere. 
In particular given the limited/different SELinux\n- # support in BSD tar and the different command line syntax\n- # compared to GNU tar.\n- return \"gtar\" if shutil.which(\"gtar\") else \"tar\"\n-\n-\n def one_zero(b: bool) -> str:\n return \"1\" if b else \"0\"\n", "issue": "Support libarchive cpio\nHi,\r\n\r\nplease support libarchive cpio(bsdcpio), which does not have `--reproducible`\r\n\r\nhttps://lists.gnu.org/archive/html/bug-cpio/2014-11/msg00000.html\r\n\r\nhttps://github.com/systemd/mkosi/blob/2c45d0effb1871750a2e9f897510d2745cb6d6b9/mkosi/__init__.py#L3489\n", "before_files": [{"content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport os\nfrom collections.abc import Iterable\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom mkosi.log import log_step\nfrom mkosi.run import bwrap, finalize_passwd_mounts\nfrom mkosi.util import tar_binary\n\n\ndef tar_exclude_apivfs_tmp() -> list[str]:\n return [\n \"--exclude\", \"./dev/*\",\n \"--exclude\", \"./proc/*\",\n \"--exclude\", \"./sys/*\",\n \"--exclude\", \"./tmp/*\",\n \"--exclude\", \"./run/*\",\n \"--exclude\", \"./var/tmp/*\",\n ]\n\n\ndef make_tar(src: Path, dst: Path) -> None:\n log_step(f\"Creating tar archive {dst}\u2026\")\n bwrap(\n [\n tar_binary(),\n \"--create\",\n \"--file\", dst,\n \"--directory\", src,\n \"--acls\",\n \"--selinux\",\n \"--xattrs\",\n \"--sparse\",\n \"--force-local\",\n *tar_exclude_apivfs_tmp(),\n \".\",\n ],\n # Make sure tar uses user/group information from the root directory instead of the host.\n options=finalize_passwd_mounts(src) if (src / \"etc/passwd\").exists() else [],\n )\n\n\ndef extract_tar(src: Path, dst: Path, log: bool = True) -> None:\n if log:\n log_step(f\"Extracting tar archive {src}\u2026\")\n bwrap(\n [\n tar_binary(),\n \"--extract\",\n \"--file\", src,\n \"--directory\", dst,\n \"--keep-directory-symlink\",\n \"--no-overwrite-dir\",\n \"--same-permissions\",\n \"--same-owner\" if (dst / \"etc/passwd\").exists() else \"--numeric-owner\",\n \"--same-order\",\n \"--acls\",\n \"--selinux\",\n \"--xattrs\",\n \"--force-local\",\n *tar_exclude_apivfs_tmp(),\n ],\n # Make sure tar uses user/group information from the root directory instead of the host.\n options=finalize_passwd_mounts(dst) if (dst / \"etc/passwd\").exists() else [],\n )\n\n\ndef make_cpio(src: Path, dst: Path, files: Optional[Iterable[Path]] = None) -> None:\n if not files:\n files = src.rglob(\"*\")\n\n log_step(f\"Creating cpio archive {dst}\u2026\")\n bwrap(\n [\n \"cpio\",\n \"--create\",\n \"--reproducible\",\n \"--null\",\n \"--format=newc\",\n \"--quiet\",\n \"--directory\", src,\n \"-O\", dst,\n ],\n input=\"\\0\".join(os.fspath(f.relative_to(src)) for f in files),\n # Make sure tar uses user/group information from the root directory instead of the host.\n options=finalize_passwd_mounts(dst),\n )\n", "path": "mkosi/archive.py"}, {"content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport ast\nimport contextlib\nimport copy\nimport enum\nimport errno\nimport fcntl\nimport functools\nimport importlib\nimport itertools\nimport logging\nimport os\nimport pwd\nimport re\nimport resource\nimport shutil\nimport stat\nimport sys\nimport tempfile\nfrom collections.abc import Iterable, Iterator, Mapping, Sequence\nfrom pathlib import Path\nfrom typing import Any, Callable, Optional, TypeVar\n\nT = TypeVar(\"T\")\nV = TypeVar(\"V\")\n\n\ndef dictify(f: Callable[..., Iterator[tuple[T, V]]]) -> Callable[..., dict[T, V]]:\n def wrapper(*args: Any, **kwargs: Any) -> dict[T, V]:\n return dict(f(*args, 
**kwargs))\n\n return functools.update_wrapper(wrapper, f)\n\n\n@dictify\ndef read_os_release() -> Iterator[tuple[str, str]]:\n try:\n filename = \"/etc/os-release\"\n f = open(filename)\n except FileNotFoundError:\n filename = \"/usr/lib/os-release\"\n f = open(filename)\n\n with f:\n for line_number, line in enumerate(f, start=1):\n line = line.rstrip()\n if not line or line.startswith(\"#\"):\n continue\n if (m := re.match(r\"([A-Z][A-Z_0-9]+)=(.*)\", line)):\n name, val = m.groups()\n if val and val[0] in \"\\\"'\":\n val = ast.literal_eval(val)\n yield name, val\n else:\n print(f\"{filename}:{line_number}: bad line {line!r}\", file=sys.stderr)\n\n\ndef format_rlimit(rlimit: int) -> str:\n limits = resource.getrlimit(rlimit)\n soft = \"infinity\" if limits[0] == resource.RLIM_INFINITY else str(limits[0])\n hard = \"infinity\" if limits[1] == resource.RLIM_INFINITY else str(limits[1])\n return f\"{soft}:{hard}\"\n\n\ndef sort_packages(packages: Iterable[str]) -> list[str]:\n \"\"\"Sorts packages: normal first, paths second, conditional third\"\"\"\n\n m = {\"(\": 2, \"/\": 1}\n sort = lambda name: (m.get(name[0], 0), name)\n return sorted(packages, key=sort)\n\n\ndef flatten(lists: Iterable[Iterable[T]]) -> list[T]:\n \"\"\"Flatten a sequence of sequences into a single list.\"\"\"\n return list(itertools.chain.from_iterable(lists))\n\n\nclass InvokingUser:\n @staticmethod\n def _uid_from_env() -> Optional[int]:\n uid = os.getenv(\"SUDO_UID\") or os.getenv(\"PKEXEC_UID\")\n return int(uid) if uid is not None else None\n\n @classmethod\n def uid(cls) -> int:\n return cls._uid_from_env() or os.getuid()\n\n @classmethod\n def uid_gid(cls) -> tuple[int, int]:\n if (uid := cls._uid_from_env()) is not None:\n gid = int(os.getenv(\"SUDO_GID\", pwd.getpwuid(uid).pw_gid))\n return uid, gid\n return os.getuid(), os.getgid()\n\n @classmethod\n def name(cls) -> str:\n return pwd.getpwuid(cls.uid()).pw_name\n\n @classmethod\n def home(cls) -> Path:\n return Path(f\"~{cls.name()}\").expanduser()\n\n @classmethod\n def is_running_user(cls) -> bool:\n return cls.uid() == os.getuid()\n\n\[email protected]\ndef chdir(directory: Path) -> Iterator[None]:\n old = Path.cwd()\n\n if old == directory:\n yield\n return\n\n try:\n os.chdir(directory)\n yield\n finally:\n os.chdir(old)\n\n\ndef qemu_check_kvm_support(log: bool) -> bool:\n # some CI runners may present a non-working KVM device\n try:\n os.close(os.open(\"/dev/kvm\", os.O_RDWR|os.O_CLOEXEC))\n except OSError as e:\n if e.errno == errno.ENOENT:\n if log:\n logging.warning(\"/dev/kvm not found. Not using KVM acceleration.\")\n return False\n elif e.errno in (errno.EPERM, errno.EACCES):\n if log:\n logging.warning(\"Permission denied to access /dev/kvm. Not using KVM acceleration\")\n return False\n\n raise e\n\n return True\n\n\ndef qemu_check_vsock_support(log: bool) -> bool:\n try:\n os.close(os.open(\"/dev/vhost-vsock\", os.O_RDWR|os.O_CLOEXEC))\n except OSError as e:\n if e.errno == errno.ENOENT:\n if log:\n logging.warning(\"/dev/vhost-vsock not found. Not adding a vsock device to the virtual machine.\")\n return False\n elif e.errno in (errno.EPERM, errno.EACCES):\n if log:\n logging.warning(\"Permission denied to access /dev/vhost-vsock. 
Not adding a vsock device to the virtual machine.\")\n return False\n\n raise e\n\n return True\n\n\ndef format_bytes(num_bytes: int) -> str:\n if num_bytes >= 1024**3:\n return f\"{num_bytes/1024**3 :0.1f}G\"\n if num_bytes >= 1024**2:\n return f\"{num_bytes/1024**2 :0.1f}M\"\n if num_bytes >= 1024:\n return f\"{num_bytes/1024 :0.1f}K\"\n\n return f\"{num_bytes}B\"\n\n\ndef make_executable(path: Path) -> None:\n st = path.stat()\n os.chmod(path, st.st_mode | stat.S_IEXEC)\n\n\ndef try_import(module: str) -> None:\n try:\n importlib.import_module(module)\n except ModuleNotFoundError:\n pass\n\n\[email protected]\ndef flock(path: Path) -> Iterator[int]:\n fd = os.open(path, os.O_CLOEXEC|os.O_RDONLY)\n try:\n fcntl.fcntl(fd, fcntl.FD_CLOEXEC)\n fcntl.flock(fd, fcntl.LOCK_EX)\n yield fd\n finally:\n os.close(fd)\n\n\[email protected]\ndef scopedenv(env: Mapping[str, Any]) -> Iterator[None]:\n old = copy.copy(os.environ)\n os.environ |= env\n\n # python caches the default temporary directory so when we might modify TMPDIR we have to make sure it\n # gets recalculated (see https://docs.python.org/3/library/tempfile.html#tempfile.tempdir).\n tempfile.tempdir = None\n\n try:\n yield\n finally:\n os.environ = old\n tempfile.tempdir = None\n\n\nclass StrEnum(enum.Enum):\n def __str__(self) -> str:\n assert isinstance(self.value, str)\n return self.value\n\n # Used by enum.auto() to get the next value.\n @staticmethod\n def _generate_next_value_(name: str, start: int, count: int, last_values: Sequence[str]) -> str:\n return name.replace(\"_\", \"-\")\n\n @classmethod\n def values(cls) -> list[str]:\n return list(map(str, cls))\n\n\ndef tar_binary() -> str:\n # Some distros (Mandriva) install BSD tar as \"tar\", hence prefer\n # \"gtar\" if it exists, which should be GNU tar wherever it exists.\n # We are interested in exposing same behaviour everywhere hence\n # it's preferable to use the same implementation of tar\n # everywhere. In particular given the limited/different SELinux\n # support in BSD tar and the different command line syntax\n # compared to GNU tar.\n return \"gtar\" if shutil.which(\"gtar\") else \"tar\"\n\n\ndef one_zero(b: bool) -> str:\n return \"1\" if b else \"0\"\n\n\[email protected]\ndef umask(mask: int) -> Iterator[None]:\n old = os.umask(mask)\n try:\n yield\n finally:\n os.umask(old)\n", "path": "mkosi/util.py"}]} | 3,879 | 587 |
gh_patches_debug_5972 | rasdani/github-patches | git_diff | ansible__ansible-40819 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ios_config incorrectly claims success when commands fail
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
ios_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
[ansible@localhost cmdauthz]$ ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
```
##### CONFIGURATION
<!---
If using Ansible 2.4 or above, paste the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ansible.cfg modification:
forks = 20
gathering = explicit
host_key_checking = false
timeout = 60
vault_password_file = ~/.ansible/vault-pass.txt
retry_files_enabled = false
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.
-->
AWS AMI CentOS7
```
[ansible@localhost cmdauthz]$ uname -a
Linux localhost.localdomain 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```
##### SUMMARY
<!--- Explain the problem briefly -->
There is no stdout from "ios_config". When you issue commands that are TACACS-unauthorized, ansible still reports "changed", comes back and reports success, and ignores the fact they were rejected. I assume the module is ignoring all CLI output and only looking for the next config prompt to claim success. This makes it difficult to validate that unauthorized commands were rejected and authorized commands were accepted. Such a playbook is useful as a AAA security posture checker.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
Task sub-list shown below.
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: "SYS >> Capture current username"
set_fact:
USER: "{{ PARAM_CREDS.username }}"
- name: "IOS >> {{ USER }}: Issue unauthorized conf commands"
ios_config:
provider: "{{ PARAM_CREDS }}"
match: none
commands:
- "{{ item }}"
register: OUTPUT
with_items: "{{ unauth_conf_cmds_t2 }}"
- debug:
var: OUTPUT
...
```
Relevant variables included.
```
---
unauth_conf_cmds_t2:
- "ip bgp new-format"
- "interface Loopback12345"
...
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
When `ios_config` determines that the commands being issued are new (a change), or if `match: none` is set, the module should attempt to issue the command in question. It should also collect any parser output, including `Command authorization failed.`, so that the playbook writer can perform checks against it. I would recommend returning `stdout` and `stdout_lines` in much the same way that ios_command works, for consistency.
Note that if you don't want to set up a TACACS server, using a `do` statement to run a show command from config mode would probably be a valid test to ensure output is being collected.
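
For illustration only, one way such rejections could be surfaced is for the IOS terminal plugin to treat the failure text as an error through its stderr patterns; the snippet below is a sketch under that assumption, not the plugin's actual pattern list:

```python
import re

# If the rejection text is matched, the task can fail loudly instead of
# silently reporting "changed".
terminal_stderr_patterns = [
    re.compile(rb"% ?Error"),
    re.compile(rb"command authorization failed", re.IGNORECASE),
]

def looks_like_failure(device_output: bytes) -> bool:
    return any(regex.search(device_output) for regex in terminal_stderr_patterns)
```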
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
`ios_config` returns correct fields per the documentation, but the lack of seeing CLI output makes it hard to discern why commands were rejected.
<!--- Paste verbatim command output between quotes below -->
```
[ansible@localhost cmdauthz]$ ansible-playbook cmdauthz-playbook.yml
PLAY [localhost] ************************************************************************************************************
TASK [SYS >> Define string match facts] *************************************************************************************
ok: [localhost]
PLAY [Verify AAA command authorization functionality] ***********************************************************************
TASK [SYS >> Capture current username] **************************************************************************************
ok: [APG_6010_PER]
TASK [IOS >> ansible2: Issue unauthorized conf commands] ********************************************************************
changed: [APG_6010_PER] => (item=ip bgp new-format)
changed: [APG_6010_PER] => (item=interface Loopback12345)
TASK [debug] ****************************************************************************************************************
ok: [APG_6010_PER] => {
"OUTPUT": {
"changed": true,
"msg": "All items completed",
"results": [
{
"_ansible_item_result": true,
"_ansible_no_log": false,
"_ansible_parsed": true,
"banners": {},
"changed": true,
"commands": [
"ip bgp new-format"
],
"invocation": {
"module_args": {
"after": null,
"auth_pass": null,
"authorize": null,
"backup": false,
"before": null,
"commands": [
"ip bgp new-format"
],
"config": null,
"defaults": false,
"force": false,
"host": null,
"lines": [
"ip bgp new-format"
],
"match": "none",
"multiline_delimiter": "@",
"parents": null,
"password": null,
"port": null,
"provider": {
"auth_pass": null,
"authorize": null,
"host": "APG_6010_PER",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 3022,
"ssh_keyfile": null,
"timeout": 30,
"username": "ansible2"
},
"replace": "line",
"save": false,
"src": null,
"ssh_keyfile": null,
"timeout": null,
"username": null
}
},
"item": "ip bgp new-format",
"updates": [
"ip bgp new-format"
]
},
{
"_ansible_item_result": true,
"_ansible_no_log": false,
"_ansible_parsed": true,
"banners": {},
"changed": true,
"commands": [
"interface Loopback12345"
],
"invocation": {
"module_args": {
"after": null,
"auth_pass": null,
"authorize": null,
"backup": false,
"before": null,
"commands": [
"interface Loopback12345"
],
"config": null,
"defaults": false,
"force": false,
"host": null,
"lines": [
"interface Loopback12345"
],
"match": "none",
"multiline_delimiter": "@",
"parents": null,
"password": null,
"port": null,
"provider": {
"auth_pass": null,
"authorize": null,
"host": "APG_6010_PER",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 3022,
"ssh_keyfile": null,
"timeout": 30,
"username": "ansible2"
},
"replace": "line",
"save": false,
"src": null,
"ssh_keyfile": null,
"timeout": null,
"username": null
}
},
"item": "interface Loopback12345",
"updates": [
"interface Loopback12345"
]
}
]
}
}
```
Manually logging into the router and issuing the commands is shown below. The commands are rejected and no changes are observed.
```
APG_6010_XPER#conf t
Enter configuration commands, one per line. End with CNTL/Z.
APG_6010_XPER(config)#ip bgp new-format
Command authorization failed.
APG_6010_XPER(config)#interface Loopback12345
Command authorization failed.
APG_6010_XPER(config)#end
APG_6010_XPER#show run | include Loopback12345|^ip_bgp
APG_6010_XPER#
APG_6010_XPER#show archive config differences nvram:startup-config
!Contextual Config Diffs:
!No changes were found
```
</issue>
<code>
[start of lib/ansible/plugins/terminal/ios.py]
1 #
2 # (c) 2016 Red Hat Inc.
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 #
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 import json
23 import re
24
25 from ansible.errors import AnsibleConnectionFailure
26 from ansible.module_utils._text import to_text, to_bytes
27 from ansible.plugins.terminal import TerminalBase
28
29
30 class TerminalModule(TerminalBase):
31
32 terminal_stdout_re = [
33 re.compile(br"[\r\n]?[\w\+\-\.:\/\[\]]+(?:\([^\)]+\)){0,3}(?:[>#]) ?$")
34 ]
35
36 terminal_stderr_re = [
37 re.compile(br"% ?Error"),
38 # re.compile(br"^% \w+", re.M),
39 re.compile(br"% ?Bad secret"),
40 re.compile(br"[\r\n%] Bad passwords"),
41 re.compile(br"invalid input", re.I),
42 re.compile(br"(?:incomplete|ambiguous) command", re.I),
43 re.compile(br"connection timed out", re.I),
44 re.compile(br"[^\r\n]+ not found"),
45 re.compile(br"'[^']' +returned error code: ?\d+"),
46 re.compile(br"Bad mask", re.I),
47 re.compile(br"% ?(\S+) ?overlaps with ?(\S+)", re.I),
48 re.compile(br"[%\S] ?Error: ?[\s]+", re.I),
49 re.compile(br"[%\S] ?Informational: ?[\s]+", re.I)
50 ]
51
52 def on_open_shell(self):
53 try:
54 for cmd in (b'terminal length 0', b'terminal width 512'):
55 self._exec_cli_command(cmd)
56 except AnsibleConnectionFailure:
57 raise AnsibleConnectionFailure('unable to set terminal parameters')
58
59 def on_become(self, passwd=None):
60 if self._get_prompt().endswith(b'#'):
61 return
62
63 cmd = {u'command': u'enable'}
64 if passwd:
65 # Note: python-3.5 cannot combine u"" and r"" together. Thus make
66 # an r string and use to_text to ensure it's text on both py2 and py3.
67 cmd[u'prompt'] = to_text(r"[\r\n]password: ?$", errors='surrogate_or_strict')
68 cmd[u'answer'] = passwd
69 cmd[u'prompt_retry_check'] = True
70 try:
71 self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))
72 prompt = self._get_prompt()
73 if prompt is None or not prompt.endswith(b'#'):
74 raise AnsibleConnectionFailure('failed to elevate privilege to enable mode still at prompt [%s]' % prompt)
75 except AnsibleConnectionFailure as e:
76 prompt = self._get_prompt()
77 raise AnsibleConnectionFailure('unable to elevate privilege to enable mode, at prompt [%s] with error: %s' % (prompt, e.message))
78
79 def on_unbecome(self):
80 prompt = self._get_prompt()
81 if prompt is None:
82 # if prompt is None most likely the terminal is hung up at a prompt
83 return
84
85 if b'(config' in prompt:
86 self._exec_cli_command(b'end')
87 self._exec_cli_command(b'disable')
88
89 elif prompt.endswith(b'#'):
90 self._exec_cli_command(b'disable')
91
[end of lib/ansible/plugins/terminal/ios.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/plugins/terminal/ios.py b/lib/ansible/plugins/terminal/ios.py
--- a/lib/ansible/plugins/terminal/ios.py
+++ b/lib/ansible/plugins/terminal/ios.py
@@ -46,7 +46,8 @@
re.compile(br"Bad mask", re.I),
re.compile(br"% ?(\S+) ?overlaps with ?(\S+)", re.I),
re.compile(br"[%\S] ?Error: ?[\s]+", re.I),
- re.compile(br"[%\S] ?Informational: ?[\s]+", re.I)
+ re.compile(br"[%\S] ?Informational: ?[\s]+", re.I),
+ re.compile(br"Command authorization failed")
]
def on_open_shell(self):
| {"golden_diff": "diff --git a/lib/ansible/plugins/terminal/ios.py b/lib/ansible/plugins/terminal/ios.py\n--- a/lib/ansible/plugins/terminal/ios.py\n+++ b/lib/ansible/plugins/terminal/ios.py\n@@ -46,7 +46,8 @@\n re.compile(br\"Bad mask\", re.I),\n re.compile(br\"% ?(\\S+) ?overlaps with ?(\\S+)\", re.I),\n re.compile(br\"[%\\S] ?Error: ?[\\s]+\", re.I),\n- re.compile(br\"[%\\S] ?Informational: ?[\\s]+\", re.I)\n+ re.compile(br\"[%\\S] ?Informational: ?[\\s]+\", re.I),\n+ re.compile(br\"Command authorization failed\")\n ]\n \n def on_open_shell(self):\n", "issue": "ios_config incorrectly claims success when commands fail\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Name of the module/plugin/task/feature -->\r\nios_config\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n[ansible@localhost cmdauthz]$ ansible --version\r\nansible 2.3.1.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!---\r\nIf using Ansible 2.4 or above, paste the results of \"ansible-config dump --only-changed\"\r\n\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).\r\n\r\n-->\r\nansible.cfg modification:\r\nforks = 20\r\ngathering = explicit\r\nhost_key_checking = false\r\ntimeout = 60\r\nvault_password_file = ~/.ansible/vault-pass.txt\r\nretry_files_enabled = false\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\nMention the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.\r\n-->\r\nAWS AMI CentOS7\r\n```\r\n[ansible@localhost cmdauthz]$ uname -a\r\nLinux localhost.localdomain 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\nThere is no stdout from \"ios_config\". When you issue commands that are TACACS-unauthorized, ansible still reports \"changed\", comes back and reports success, and ignores the fact they were rejected. I assume the module is ignoring all CLI output and only looking for the next config prompt to claim success. This makes it difficult to validate that unauthorized commands were rejected and authorized commands were accepted. 
Such a playbook is useful as a AAA security posture checker.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\nTask sub-list shown below.\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```\r\n---\r\n- name: \"SYS >> Capture current username\"\r\n set_fact:\r\n USER: \"{{ PARAM_CREDS.username }}\"\r\n\r\n- name: \"IOS >> {{ USER }}: Issue unauthorized conf commands\"\r\n ios_config:\r\n provider: \"{{ PARAM_CREDS }}\"\r\n match: none\r\n commands:\r\n - \"{{ item }}\"\r\n register: OUTPUT\r\n with_items: \"{{ unauth_conf_cmds_t2 }}\"\r\n\r\n- debug:\r\n var: OUTPUT\r\n...\r\n```\r\n\r\nRelevant variables included.\r\n\r\n```\r\n---\r\nunauth_conf_cmds_t2:\r\n- \"ip bgp new-format\"\r\n- \"interface Loopback12345\"\r\n...\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nWhen `ios_config` determines that the commands being issued are new (a change), of if, `match: none` is set, the module should attempt to issue command in question. It should also collect any parser output, to include `Command authorization failed.`, so that the playbook writer can perform checks against it. I would recommend returning `stdout` and `stdout_lines` in much the same way that ios_command works, for consistency.\r\n\r\nNote that if you don't want to set up a TACACS server, using a `do` statement to run a show command from config mode would probably be a valid test to ensure output is being collected.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\n`ios_config` returns correct fields per the documentation, but the lack of seeing CLI output makes it hard to discern why commands were rejected.\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\n[ansible@localhost cmdauthz]$ ansible-playbook cmdauthz-playbook.yml \r\n\r\nPLAY [localhost] ************************************************************************************************************\r\n\r\nTASK [SYS >> Define string match facts] *************************************************************************************\r\nok: [localhost]\r\n\r\nPLAY [Verify AAA command authorization functionality] ***********************************************************************\r\n\r\nTASK [SYS >> Capture current username] **************************************************************************************\r\nok: [APG_6010_PER]\r\n\r\nTASK [IOS >> ansible2: Issue unauthorized conf commands] ********************************************************************\r\nchanged: [APG_6010_PER] => (item=ip bgp new-format)\r\nchanged: [APG_6010_PER] => (item=interface Loopback12345)\r\n\r\nTASK [debug] ****************************************************************************************************************\r\nok: [APG_6010_PER] => {\r\n \"OUTPUT\": {\r\n \"changed\": true, \r\n \"msg\": \"All items completed\", \r\n \"results\": [\r\n {\r\n \"_ansible_item_result\": true, \r\n \"_ansible_no_log\": false, \r\n \"_ansible_parsed\": true, \r\n \"banners\": {}, \r\n \"changed\": true, \r\n \"commands\": [\r\n \"ip bgp new-format\"\r\n ], \r\n \"invocation\": {\r\n \"module_args\": {\r\n \"after\": null, \r\n \"auth_pass\": null, \r\n \"authorize\": null, \r\n \"backup\": false, \r\n 
\"before\": null, \r\n \"commands\": [\r\n \"ip bgp new-format\"\r\n ], \r\n \"config\": null, \r\n \"defaults\": false, \r\n \"force\": false, \r\n \"host\": null, \r\n \"lines\": [\r\n \"ip bgp new-format\"\r\n ], \r\n \"match\": \"none\", \r\n \"multiline_delimiter\": \"@\", \r\n \"parents\": null, \r\n \"password\": null, \r\n \"port\": null, \r\n \"provider\": {\r\n \"auth_pass\": null, \r\n \"authorize\": null, \r\n \"host\": \"APG_6010_PER\", \r\n \"password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \r\n \"port\": 3022, \r\n \"ssh_keyfile\": null, \r\n \"timeout\": 30, \r\n \"username\": \"ansible2\"\r\n }, \r\n \"replace\": \"line\", \r\n \"save\": false, \r\n \"src\": null, \r\n \"ssh_keyfile\": null, \r\n \"timeout\": null, \r\n \"username\": null\r\n }\r\n }, \r\n \"item\": \"ip bgp new-format\", \r\n \"updates\": [\r\n \"ip bgp new-format\"\r\n ]\r\n }, \r\n {\r\n \"_ansible_item_result\": true, \r\n \"_ansible_no_log\": false, \r\n \"_ansible_parsed\": true, \r\n \"banners\": {}, \r\n \"changed\": true, \r\n \"commands\": [\r\n \"interface Loopback12345\"\r\n ], \r\n \"invocation\": {\r\n \"module_args\": {\r\n \"after\": null, \r\n \"auth_pass\": null, \r\n \"authorize\": null, \r\n \"backup\": false, \r\n \"before\": null, \r\n \"commands\": [\r\n \"interface Loopback12345\"\r\n ], \r\n \"config\": null, \r\n \"defaults\": false, \r\n \"force\": false, \r\n \"host\": null, \r\n \"lines\": [\r\n \"interface Loopback12345\"\r\n ], \r\n \"match\": \"none\", \r\n \"multiline_delimiter\": \"@\", \r\n \"parents\": null, \r\n \"password\": null, \r\n \"port\": null, \r\n \"provider\": {\r\n \"auth_pass\": null, \r\n \"authorize\": null, \r\n \"host\": \"APG_6010_PER\", \r\n \"password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \r\n \"port\": 3022, \r\n \"ssh_keyfile\": null, \r\n \"timeout\": 30, \r\n \"username\": \"ansible2\"\r\n }, \r\n \"replace\": \"line\", \r\n \"save\": false, \r\n \"src\": null, \r\n \"ssh_keyfile\": null, \r\n \"timeout\": null, \r\n \"username\": null\r\n }\r\n }, \r\n \"item\": \"interface Loopback12345\", \r\n \"updates\": [\r\n \"interface Loopback12345\"\r\n ]\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\nManually logging into the router and issuing the commands is shown below. The commands are rejected and no changes are observed.\r\n\r\n```\r\nAPG_6010_XPER#conf t\r\nEnter configuration commands, one per line. End with CNTL/Z.\r\nAPG_6010_XPER(config)#ip bgp new-format\r\nCommand authorization failed.\r\n\r\nAPG_6010_XPER(config)#interface Loopback12345\r\nCommand authorization failed.\r\n\r\nAPG_6010_XPER(config)#end\r\n\r\nAPG_6010_XPER#show run | include Loopback12345|^ip_bgp\r\nAPG_6010_XPER#\r\nAPG_6010_XPER#show archive config differences nvram:startup-config\r\n\r\n!Contextual Config Diffs:\r\n!No changes were found\r\n```\n", "before_files": [{"content": "#\n# (c) 2016 Red Hat Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport json\nimport re\n\nfrom ansible.errors import AnsibleConnectionFailure\nfrom ansible.module_utils._text import to_text, to_bytes\nfrom ansible.plugins.terminal import TerminalBase\n\n\nclass TerminalModule(TerminalBase):\n\n terminal_stdout_re = [\n re.compile(br\"[\\r\\n]?[\\w\\+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){0,3}(?:[>#]) ?$\")\n ]\n\n terminal_stderr_re = [\n re.compile(br\"% ?Error\"),\n # re.compile(br\"^% \\w+\", re.M),\n re.compile(br\"% ?Bad secret\"),\n re.compile(br\"[\\r\\n%] Bad passwords\"),\n re.compile(br\"invalid input\", re.I),\n re.compile(br\"(?:incomplete|ambiguous) command\", re.I),\n re.compile(br\"connection timed out\", re.I),\n re.compile(br\"[^\\r\\n]+ not found\"),\n re.compile(br\"'[^']' +returned error code: ?\\d+\"),\n re.compile(br\"Bad mask\", re.I),\n re.compile(br\"% ?(\\S+) ?overlaps with ?(\\S+)\", re.I),\n re.compile(br\"[%\\S] ?Error: ?[\\s]+\", re.I),\n re.compile(br\"[%\\S] ?Informational: ?[\\s]+\", re.I)\n ]\n\n def on_open_shell(self):\n try:\n for cmd in (b'terminal length 0', b'terminal width 512'):\n self._exec_cli_command(cmd)\n except AnsibleConnectionFailure:\n raise AnsibleConnectionFailure('unable to set terminal parameters')\n\n def on_become(self, passwd=None):\n if self._get_prompt().endswith(b'#'):\n return\n\n cmd = {u'command': u'enable'}\n if passwd:\n # Note: python-3.5 cannot combine u\"\" and r\"\" together. Thus make\n # an r string and use to_text to ensure it's text on both py2 and py3.\n cmd[u'prompt'] = to_text(r\"[\\r\\n]password: ?$\", errors='surrogate_or_strict')\n cmd[u'answer'] = passwd\n cmd[u'prompt_retry_check'] = True\n try:\n self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))\n prompt = self._get_prompt()\n if prompt is None or not prompt.endswith(b'#'):\n raise AnsibleConnectionFailure('failed to elevate privilege to enable mode still at prompt [%s]' % prompt)\n except AnsibleConnectionFailure as e:\n prompt = self._get_prompt()\n raise AnsibleConnectionFailure('unable to elevate privilege to enable mode, at prompt [%s] with error: %s' % (prompt, e.message))\n\n def on_unbecome(self):\n prompt = self._get_prompt()\n if prompt is None:\n # if prompt is None most likely the terminal is hung up at a prompt\n return\n\n if b'(config' in prompt:\n self._exec_cli_command(b'end')\n self._exec_cli_command(b'disable')\n\n elif prompt.endswith(b'#'):\n self._exec_cli_command(b'disable')\n", "path": "lib/ansible/plugins/terminal/ios.py"}]} | 3,705 | 176 |
gh_patches_debug_5157 | rasdani/github-patches | git_diff | python__peps-2090 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"PEP numbers never change" verbiage is not in PEP 1
## Problem
The introduction to PEP 0 references PEP 1 as a source for
> PEP numbers are assigned by the PEP editors, and once assigned are never changed [1].
But PEP 1 doesn't say anything about PEP numbers never changing.
## Research
While skimming PEP 1, I found information about the PEP editor assigning a number:
> Once the PEP is ready for the repository, a PEP editor will:
Assign a PEP number (almost always just the next available number, but sometimes it's a special/joke number, like 666 or 3141). (Clarification: For Python 3, numbers in the 3000s were used for Py3k-specific proposals. But now that all new features go into Python 3 only, the process is back to using numbers in the 100s again. Remember that numbers below 100 are meta-PEPs.)
and
> The PEP editors are individuals responsible for managing the administrative and editorial aspects of the PEP workflow (e.g. assigning PEP numbers and changing their status). See PEP Editor Responsibilities & Workflow for details.
But I didn't find any reference to that number never changing.
## Proposal:
Can we change PEP 0's introduction so that the reference is specific to assigning numbers?
```
PEP numbers are assigned by the PEP editors[1], and once assigned are never changed.
```
## Link
https://github.com/python/peps/blob/40ef5625b7d42655f49090ffd2c0860ecf8d1d9f/pep0/constants.py#L22-L27
</issue>
<code>
[start of pep0/constants.py]
1 # -*- coding: utf-8 -*-
2 text_type = str
3 title_length = 55
4 author_length = 40
5 table_separator = "== ==== " + "="*title_length + " " + "="*author_length
6 column_format = (
7 '%(type)1s%(status)1s %(number)4s %(title)-{title_length}s %(authors)-s'
8 ).format(title_length=title_length)
9
10 header = """\
11 PEP: 0
12 Title: Index of Python Enhancement Proposals (PEPs)
13 Version: N/A
14 Last-Modified: %s
15 Author: python-dev <[email protected]>
16 Status: Active
17 Type: Informational
18 Content-Type: text/x-rst
19 Created: 13-Jul-2000
20 """
21
22 intro = """\
23 This PEP contains the index of all Python Enhancement Proposals,
24 known as PEPs. PEP numbers are assigned by the PEP editors, and
25 once assigned are never changed [1_]. The version control history [2_] of
26 the PEP texts represent their historical record.
27 """
28
29 references = """\
30 .. [1] PEP 1: PEP Purpose and Guidelines
31 .. [2] View PEP history online: https://github.com/python/peps
32 """
33
34 footer = """\
35 ..
36 Local Variables:
37 mode: indented-text
38 indent-tabs-mode: nil
39 sentence-end-double-space: t
40 fill-column: 70
41 coding: utf-8
42 End:\
43 """
44
[end of pep0/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pep0/constants.py b/pep0/constants.py
--- a/pep0/constants.py
+++ b/pep0/constants.py
@@ -21,8 +21,8 @@
intro = """\
This PEP contains the index of all Python Enhancement Proposals,
-known as PEPs. PEP numbers are assigned by the PEP editors, and
-once assigned are never changed [1_]. The version control history [2_] of
+known as PEPs. PEP numbers are assigned by the PEP editors[1_], and
+once assigned are never changed. The version control history [2_] of
the PEP texts represent their historical record.
"""
| {"golden_diff": "diff --git a/pep0/constants.py b/pep0/constants.py\n--- a/pep0/constants.py\n+++ b/pep0/constants.py\n@@ -21,8 +21,8 @@\n \n intro = \"\"\"\\\n This PEP contains the index of all Python Enhancement Proposals,\n-known as PEPs. PEP numbers are assigned by the PEP editors, and\n-once assigned are never changed [1_]. The version control history [2_] of\n+known as PEPs. PEP numbers are assigned by the PEP editors[1_], and\n+once assigned are never changed. The version control history [2_] of\n the PEP texts represent their historical record.\n \"\"\"\n", "issue": "\"PEP numbers never change\" verbiage is not in PEP 1\n## Problem\r\n\r\nThe introduction to PEP 0 references PEP 1 as a source for\r\n\r\n> PEP numbers are assigned by the PEP editors, and once assigned are never changed [1].\r\n\r\nBut PEP 1 doesn't say anything about PEP numbers never changing.\r\n\r\n## Research\r\n\r\nWhile skimming PEP 1, I found information about the PEP editor assigning a number:\r\n\r\n> Once the PEP is ready for the repository, a PEP editor will:\r\nAssign a PEP number (almost always just the next available number, but sometimes it's a special/joke number, like 666 or 3141). (Clarification: For Python 3, numbers in the 3000s were used for Py3k-specific proposals. But now that all new features go into Python 3 only, the process is back to using numbers in the 100s again. Remember that numbers below 100 are meta-PEPs.)\r\n\r\nand\r\n\r\n> The PEP editors are individuals responsible for managing the administrative and editorial aspects of the PEP workflow (e.g. assigning PEP numbers and changing their status). See PEP Editor Responsibilities & Workflow for details.\r\n\r\nBut I didn't find any reference to that number never changing. \r\n\r\n## Proposal:\r\n\r\nCan we change PEP 0's introduction so that the reference is specific to assigning numbers?\r\n\r\n```\r\nPEP numbers are assigned by the PEP editors[1], and once assigned are never changed.\r\n```\r\n\r\n## Link\r\n\r\nhttps://github.com/python/peps/blob/40ef5625b7d42655f49090ffd2c0860ecf8d1d9f/pep0/constants.py#L22-L27\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\ntext_type = str\ntitle_length = 55\nauthor_length = 40\ntable_separator = \"== ==== \" + \"=\"*title_length + \" \" + \"=\"*author_length\ncolumn_format = (\n '%(type)1s%(status)1s %(number)4s %(title)-{title_length}s %(authors)-s'\n).format(title_length=title_length)\n\nheader = \"\"\"\\\nPEP: 0\nTitle: Index of Python Enhancement Proposals (PEPs)\nVersion: N/A\nLast-Modified: %s\nAuthor: python-dev <[email protected]>\nStatus: Active\nType: Informational\nContent-Type: text/x-rst\nCreated: 13-Jul-2000\n\"\"\"\n\nintro = \"\"\"\\\nThis PEP contains the index of all Python Enhancement Proposals,\nknown as PEPs. PEP numbers are assigned by the PEP editors, and\nonce assigned are never changed [1_]. The version control history [2_] of\nthe PEP texts represent their historical record.\n\"\"\"\n\nreferences = \"\"\"\\\n.. [1] PEP 1: PEP Purpose and Guidelines\n.. [2] View PEP history online: https://github.com/python/peps\n\"\"\"\n\nfooter = \"\"\"\f\\\n..\n Local Variables:\n mode: indented-text\n indent-tabs-mode: nil\n sentence-end-double-space: t\n fill-column: 70\n coding: utf-8\n End:\\\n\"\"\"\n", "path": "pep0/constants.py"}]} | 1,329 | 157 |
gh_patches_debug_15177 | rasdani/github-patches | git_diff | qtile__qtile-2439 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bad icon scaling in Systray widget
# Issue description
1. I added the system tray widget to my qtile bar with an icon size of 20.
2. I currently use the icon set Papirus Dark.
3. I launched the `nm-applet` through the autostart script and I obtained a really small icon for the `nm-applet` as you can see in the following image:

The previous small icon also happened if I launched `nm-applet` through the terminal.
One way I found to correct it was to setting the **icon size for the system tray to 16**. With this icon size, I obtained the following good results:

# Qtile version
`qtile 0.17.0`
# Configuration
My configuration can be found in the following [dotfiles](https://github.com/juanscr/dotfiles/tree/master/.config/qtile)
</issue>
<code>
[start of libqtile/widget/systray.py]
1 # Copyright (c) 2010 Aldo Cortesi
2 # Copyright (c) 2010-2011 dequis
3 # Copyright (c) 2010, 2012 roger
4 # Copyright (c) 2011 Mounier Florian
5 # Copyright (c) 2011-2012, 2014 Tycho Andersen
6 # Copyright (c) 2012 dmpayton
7 # Copyright (c) 2012-2013 Craig Barnes
8 # Copyright (c) 2013 hbc
9 # Copyright (c) 2013 Tao Sauvage
10 # Copyright (c) 2014 Sean Vig
11 #
12 # Permission is hereby granted, free of charge, to any person obtaining a copy
13 # of this software and associated documentation files (the "Software"), to deal
14 # in the Software without restriction, including without limitation the rights
15 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
16 # copies of the Software, and to permit persons to whom the Software is
17 # furnished to do so, subject to the following conditions:
18 #
19 # The above copyright notice and this permission notice shall be included in
20 # all copies or substantial portions of the Software.
21 #
22 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
23 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
24 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
25 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
26 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
27 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
28 # SOFTWARE.
29 import xcffib
30 from xcffib.xproto import (
31 ClientMessageData,
32 ClientMessageEvent,
33 EventMask,
34 SetMode,
35 )
36
37 from libqtile import bar
38 from libqtile.backend.x11 import window
39 from libqtile.widget import base
40
41 XEMBED_PROTOCOL_VERSION = 0
42
43
44 class Icon(window._Window):
45 _window_mask = EventMask.StructureNotify | \
46 EventMask.PropertyChange | \
47 EventMask.Exposure
48
49 def __init__(self, win, qtile, systray):
50 window._Window.__init__(self, win, qtile)
51 self.systray = systray
52 self.update_size()
53
54 def update_size(self):
55 icon_size = self.systray.icon_size
56 self.update_hints()
57
58 try:
59 width = self.hints["min_width"]
60 height = self.hints["min_height"]
61 except KeyError:
62 width = icon_size
63 height = icon_size
64
65 if height > icon_size:
66 width = width * icon_size // height
67 height = icon_size
68 if height <= 0:
69 width = icon_size
70 height = icon_size
71
72 self.width = width
73 self.height = height
74 return False
75
76 def handle_PropertyNotify(self, e): # noqa: N802
77 name = self.qtile.core.conn.atoms.get_name(e.atom)
78 if name == "_XEMBED_INFO":
79 info = self.window.get_property('_XEMBED_INFO', unpack=int)
80 if info and info[1]:
81 self.systray.bar.draw()
82
83 return False
84
85 def handle_DestroyNotify(self, event): # noqa: N802
86 wid = event.window
87 del(self.qtile.windows_map[wid])
88 del(self.systray.icons[wid])
89 self.systray.bar.draw()
90 return False
91
92 handle_UnmapNotify = handle_DestroyNotify # noqa: N815
93
94
95 class Systray(window._Window, base._Widget):
96 """A widget that manages system tray"""
97
98 _window_mask = EventMask.StructureNotify | \
99 EventMask.Exposure
100
101 orientations = base.ORIENTATION_HORIZONTAL
102
103 defaults = [
104 ('icon_size', 20, 'Icon width'),
105 ('padding', 5, 'Padding between icons'),
106 ]
107
108 def __init__(self, **config):
109 base._Widget.__init__(self, bar.CALCULATED, **config)
110 self.add_defaults(Systray.defaults)
111 self.icons = {}
112 self.screen = 0
113
114 def calculate_length(self):
115 width = sum(i.width for i in self.icons.values())
116 width += self.padding * len(self.icons)
117 return width
118
119 def _configure(self, qtile, bar):
120 base._Widget._configure(self, qtile, bar)
121
122 if self.configured:
123 return
124
125 self.conn = conn = qtile.core.conn
126 win = conn.create_window(-1, -1, 1, 1)
127 window._Window.__init__(self, window.XWindow(conn, win.wid), qtile)
128 qtile.windows_map[win.wid] = self
129
130 # Even when we have multiple "Screen"s, we are setting up as the system
131 # tray on a particular X display, that is the screen we need to
132 # reference in the atom
133 if qtile.current_screen:
134 self.screen = qtile.current_screen.index
135 self.bar = bar
136 atoms = conn.atoms
137
138 conn.conn.core.SetSelectionOwner(
139 win.wid,
140 atoms['_NET_SYSTEM_TRAY_S{:d}'.format(self.screen)],
141 xcffib.CurrentTime
142 )
143 data = [
144 xcffib.CurrentTime,
145 atoms['_NET_SYSTEM_TRAY_S{:d}'.format(self.screen)],
146 win.wid, 0, 0
147 ]
148 union = ClientMessageData.synthetic(data, "I" * 5)
149 event = ClientMessageEvent.synthetic(
150 format=32,
151 window=qtile.core._root.wid,
152 type=atoms['MANAGER'],
153 data=union
154 )
155 qtile.core._root.send_event(event, mask=EventMask.StructureNotify)
156
157 def handle_ClientMessage(self, event): # noqa: N802
158 atoms = self.conn.atoms
159
160 opcode = event.type
161 data = event.data.data32
162 message = data[1]
163 wid = data[2]
164
165 parent = self.bar.window.window
166
167 if opcode == atoms['_NET_SYSTEM_TRAY_OPCODE'] and message == 0:
168 w = window.XWindow(self.conn, wid)
169 icon = Icon(w, self.qtile, self)
170 self.icons[wid] = icon
171 self.qtile.windows_map[wid] = icon
172
173 self.conn.conn.core.ChangeSaveSet(SetMode.Insert, wid)
174 self.conn.conn.core.ReparentWindow(wid, parent.wid, 0, 0)
175 self.conn.conn.flush()
176
177 info = icon.window.get_property('_XEMBED_INFO', unpack=int)
178
179 if not info:
180 self.bar.draw()
181 return False
182
183 if info[1]:
184 self.bar.draw()
185
186 return False
187
188 def draw(self):
189 xoffset = self.padding
190 self.drawer.clear(self.background or self.bar.background)
191 self.drawer.draw(offsetx=self.offset, width=self.length)
192 for pos, icon in enumerate(self.icons.values()):
193 icon.window.set_attribute(backpixmap=self.drawer.pixmap)
194 icon.place(
195 self.offset + xoffset,
196 self.bar.height // 2 - self.icon_size // 2,
197 icon.width, self.icon_size,
198 0,
199 None
200 )
201 if icon.hidden:
202 icon.unhide()
203 data = [
204 self.conn.atoms["_XEMBED_EMBEDDED_NOTIFY"],
205 xcffib.xproto.Time.CurrentTime,
206 0,
207 self.bar.window.wid,
208 XEMBED_PROTOCOL_VERSION
209 ]
210 u = xcffib.xproto.ClientMessageData.synthetic(data, "I" * 5)
211 event = xcffib.xproto.ClientMessageEvent.synthetic(
212 format=32,
213 window=icon.wid,
214 type=self.conn.atoms["_XEMBED"],
215 data=u
216 )
217 self.window.send_event(event)
218
219 xoffset += icon.width + self.padding
220
221 def finalize(self):
222 base._Widget.finalize(self)
223 atoms = self.conn.atoms
224 self.conn.conn.core.SetSelectionOwner(
225 0,
226 atoms['_NET_SYSTEM_TRAY_S{:d}'.format(self.screen)],
227 xcffib.CurrentTime,
228 )
229 self.hide()
230
[end of libqtile/widget/systray.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/widget/systray.py b/libqtile/widget/systray.py
--- a/libqtile/widget/systray.py
+++ b/libqtile/widget/systray.py
@@ -55,19 +55,15 @@
icon_size = self.systray.icon_size
self.update_hints()
- try:
- width = self.hints["min_width"]
- height = self.hints["min_height"]
- except KeyError:
- width = icon_size
- height = icon_size
+ width = self.hints.get("min_width", icon_size)
+ height = self.hints.get("min_height", icon_size)
+
+ width = max(width, icon_size)
+ height = max(height, icon_size)
if height > icon_size:
width = width * icon_size // height
height = icon_size
- if height <= 0:
- width = icon_size
- height = icon_size
self.width = width
self.height = height
| {"golden_diff": "diff --git a/libqtile/widget/systray.py b/libqtile/widget/systray.py\n--- a/libqtile/widget/systray.py\n+++ b/libqtile/widget/systray.py\n@@ -55,19 +55,15 @@\n icon_size = self.systray.icon_size\n self.update_hints()\n \n- try:\n- width = self.hints[\"min_width\"]\n- height = self.hints[\"min_height\"]\n- except KeyError:\n- width = icon_size\n- height = icon_size\n+ width = self.hints.get(\"min_width\", icon_size)\n+ height = self.hints.get(\"min_height\", icon_size)\n+\n+ width = max(width, icon_size)\n+ height = max(height, icon_size)\n \n if height > icon_size:\n width = width * icon_size // height\n height = icon_size\n- if height <= 0:\n- width = icon_size\n- height = icon_size\n \n self.width = width\n self.height = height\n", "issue": "Bad icon scaling in Systray widget\n# Issue description\r\n1. I added the system tray widget to my qtile bar with an icon size of 20.\r\n2. I currently use the icon set Papirus Dark.\r\n3. I launched the `nm-applet` through the autostart script and I obtained a really small icon for the `nm-applet` as you can see in the following image: \r\n\r\n\r\n\r\nThe previous small icon also happened if I launched `nm-applet` through the terminal. \r\n\r\nOne way I found to correct it was to setting the **icon size for the system tray to 16**. With this icon size, I obtained the following good results:\r\n\r\n\r\n\r\n# Qtile version\r\n`qtile 0.17.0`\r\n\r\n# Configuration\r\n\r\nMy configuration can be found in the following [dotfiles](https://github.com/juanscr/dotfiles/tree/master/.config/qtile)\r\n\n", "before_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2010-2011 dequis\n# Copyright (c) 2010, 2012 roger\n# Copyright (c) 2011 Mounier Florian\n# Copyright (c) 2011-2012, 2014 Tycho Andersen\n# Copyright (c) 2012 dmpayton\n# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2013 hbc\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014 Sean Vig\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nimport xcffib\nfrom xcffib.xproto import (\n ClientMessageData,\n ClientMessageEvent,\n EventMask,\n SetMode,\n)\n\nfrom libqtile import bar\nfrom libqtile.backend.x11 import window\nfrom libqtile.widget import base\n\nXEMBED_PROTOCOL_VERSION = 0\n\n\nclass Icon(window._Window):\n _window_mask = EventMask.StructureNotify | \\\n EventMask.PropertyChange | \\\n EventMask.Exposure\n\n def __init__(self, win, qtile, systray):\n window._Window.__init__(self, win, qtile)\n self.systray = systray\n self.update_size()\n\n def update_size(self):\n icon_size = self.systray.icon_size\n self.update_hints()\n\n try:\n width = self.hints[\"min_width\"]\n height = self.hints[\"min_height\"]\n except KeyError:\n width = icon_size\n height = icon_size\n\n if height > icon_size:\n width = width * icon_size // height\n height = icon_size\n if height <= 0:\n width = icon_size\n height = icon_size\n\n self.width = width\n self.height = height\n return False\n\n def handle_PropertyNotify(self, e): # noqa: N802\n name = self.qtile.core.conn.atoms.get_name(e.atom)\n if name == \"_XEMBED_INFO\":\n info = self.window.get_property('_XEMBED_INFO', unpack=int)\n if info and info[1]:\n self.systray.bar.draw()\n\n return False\n\n def handle_DestroyNotify(self, event): # noqa: N802\n wid = event.window\n del(self.qtile.windows_map[wid])\n del(self.systray.icons[wid])\n self.systray.bar.draw()\n return False\n\n handle_UnmapNotify = handle_DestroyNotify # noqa: N815\n\n\nclass Systray(window._Window, base._Widget):\n \"\"\"A widget that manages system tray\"\"\"\n\n _window_mask = EventMask.StructureNotify | \\\n EventMask.Exposure\n\n orientations = base.ORIENTATION_HORIZONTAL\n\n defaults = [\n ('icon_size', 20, 'Icon width'),\n ('padding', 5, 'Padding between icons'),\n ]\n\n def __init__(self, **config):\n base._Widget.__init__(self, bar.CALCULATED, **config)\n self.add_defaults(Systray.defaults)\n self.icons = {}\n self.screen = 0\n\n def calculate_length(self):\n width = sum(i.width for i in self.icons.values())\n width += self.padding * len(self.icons)\n return width\n\n def _configure(self, qtile, bar):\n base._Widget._configure(self, qtile, bar)\n\n if self.configured:\n return\n\n self.conn = conn = qtile.core.conn\n win = conn.create_window(-1, -1, 1, 1)\n window._Window.__init__(self, window.XWindow(conn, win.wid), qtile)\n qtile.windows_map[win.wid] = self\n\n # Even when we have multiple \"Screen\"s, we are setting up as the system\n # tray on a particular X display, that is the screen we need to\n # reference in the atom\n if qtile.current_screen:\n self.screen = qtile.current_screen.index\n self.bar = bar\n atoms = conn.atoms\n\n conn.conn.core.SetSelectionOwner(\n win.wid,\n atoms['_NET_SYSTEM_TRAY_S{:d}'.format(self.screen)],\n xcffib.CurrentTime\n )\n data = [\n xcffib.CurrentTime,\n atoms['_NET_SYSTEM_TRAY_S{:d}'.format(self.screen)],\n win.wid, 0, 0\n ]\n union = ClientMessageData.synthetic(data, \"I\" * 5)\n event = ClientMessageEvent.synthetic(\n format=32,\n window=qtile.core._root.wid,\n type=atoms['MANAGER'],\n data=union\n )\n qtile.core._root.send_event(event, mask=EventMask.StructureNotify)\n\n def handle_ClientMessage(self, event): # noqa: N802\n atoms = self.conn.atoms\n\n opcode = event.type\n data = event.data.data32\n 
message = data[1]\n wid = data[2]\n\n parent = self.bar.window.window\n\n if opcode == atoms['_NET_SYSTEM_TRAY_OPCODE'] and message == 0:\n w = window.XWindow(self.conn, wid)\n icon = Icon(w, self.qtile, self)\n self.icons[wid] = icon\n self.qtile.windows_map[wid] = icon\n\n self.conn.conn.core.ChangeSaveSet(SetMode.Insert, wid)\n self.conn.conn.core.ReparentWindow(wid, parent.wid, 0, 0)\n self.conn.conn.flush()\n\n info = icon.window.get_property('_XEMBED_INFO', unpack=int)\n\n if not info:\n self.bar.draw()\n return False\n\n if info[1]:\n self.bar.draw()\n\n return False\n\n def draw(self):\n xoffset = self.padding\n self.drawer.clear(self.background or self.bar.background)\n self.drawer.draw(offsetx=self.offset, width=self.length)\n for pos, icon in enumerate(self.icons.values()):\n icon.window.set_attribute(backpixmap=self.drawer.pixmap)\n icon.place(\n self.offset + xoffset,\n self.bar.height // 2 - self.icon_size // 2,\n icon.width, self.icon_size,\n 0,\n None\n )\n if icon.hidden:\n icon.unhide()\n data = [\n self.conn.atoms[\"_XEMBED_EMBEDDED_NOTIFY\"],\n xcffib.xproto.Time.CurrentTime,\n 0,\n self.bar.window.wid,\n XEMBED_PROTOCOL_VERSION\n ]\n u = xcffib.xproto.ClientMessageData.synthetic(data, \"I\" * 5)\n event = xcffib.xproto.ClientMessageEvent.synthetic(\n format=32,\n window=icon.wid,\n type=self.conn.atoms[\"_XEMBED\"],\n data=u\n )\n self.window.send_event(event)\n\n xoffset += icon.width + self.padding\n\n def finalize(self):\n base._Widget.finalize(self)\n atoms = self.conn.atoms\n self.conn.conn.core.SetSelectionOwner(\n 0,\n atoms['_NET_SYSTEM_TRAY_S{:d}'.format(self.screen)],\n xcffib.CurrentTime,\n )\n self.hide()\n", "path": "libqtile/widget/systray.py"}]} | 3,277 | 234 |
gh_patches_debug_23820 | rasdani/github-patches | git_diff | mesonbuild__meson-5602 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
windres module doesn't flatten arguments
```meson
import('windows').compile_resources('file.rc', args : [[-DFOO'], '-DBAR])
```
results in
```
ERROR: List item must be one of <class 'str'>
```
</issue>
<code>
[start of mesonbuild/modules/windows.py]
1 # Copyright 2015 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import enum
16 import os
17 import re
18
19 from .. import mlog
20 from .. import mesonlib, build
21 from ..mesonlib import MachineChoice, MesonException, extract_as_list
22 from . import get_include_args
23 from . import ModuleReturnValue
24 from . import ExtensionModule
25 from ..interpreter import CustomTargetHolder
26 from ..interpreterbase import permittedKwargs, FeatureNewKwargs
27 from ..dependencies import ExternalProgram
28
29 class ResourceCompilerType(enum.Enum):
30 windres = 1
31 rc = 2
32
33 class WindowsModule(ExtensionModule):
34
35 def detect_compiler(self, compilers):
36 for l in ('c', 'cpp'):
37 if l in compilers:
38 return compilers[l]
39 raise MesonException('Resource compilation requires a C or C++ compiler.')
40
41 def _find_resource_compiler(self, state):
42 # FIXME: Does not handle `native: true` executables, see
43 # See https://github.com/mesonbuild/meson/issues/1531
44 # Take a parameter instead of the hardcoded definition below
45 for_machine = MachineChoice.HOST
46
47 if hasattr(self, '_rescomp'):
48 return self._rescomp
49
50 # Will try cross / native file and then env var
51 rescomp = ExternalProgram.from_bin_list(state.environment.binaries[for_machine], 'windres')
52
53 if not rescomp or not rescomp.found():
54 comp = self.detect_compiler(state.environment.coredata.compilers[for_machine])
55 if comp.id in {'msvc', 'clang-cl', 'intel-cl'}:
56 rescomp = ExternalProgram('rc', silent=True)
57 else:
58 rescomp = ExternalProgram('windres', silent=True)
59
60 if not rescomp.found():
61 raise MesonException('Could not find Windows resource compiler')
62
63 for (arg, match, rc_type) in [
64 ('/?', '^.*Microsoft.*Resource Compiler.*$', ResourceCompilerType.rc),
65 ('--version', '^.*GNU windres.*$', ResourceCompilerType.windres),
66 ]:
67 p, o, e = mesonlib.Popen_safe(rescomp.get_command() + [arg])
68 m = re.search(match, o, re.MULTILINE)
69 if m:
70 mlog.log('Windows resource compiler: %s' % m.group())
71 self._rescomp = (rescomp, rc_type)
72 break
73 else:
74 raise MesonException('Could not determine type of Windows resource compiler')
75
76 return self._rescomp
77
78 @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files', 'depends'])
79 @permittedKwargs({'args', 'include_directories', 'depend_files', 'depends'})
80 def compile_resources(self, state, args, kwargs):
81 extra_args = mesonlib.stringlistify(kwargs.get('args', []))
82 wrc_depend_files = extract_as_list(kwargs, 'depend_files', pop = True)
83 wrc_depends = extract_as_list(kwargs, 'depends', pop = True)
84 for d in wrc_depends:
85 if isinstance(d, CustomTargetHolder):
86 extra_args += get_include_args([d.outdir_include()])
87 inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)
88 for incd in inc_dirs:
89 if not isinstance(incd.held_object, (str, build.IncludeDirs)):
90 raise MesonException('Resource include dirs should be include_directories().')
91 extra_args += get_include_args(inc_dirs)
92
93 rescomp, rescomp_type = self._find_resource_compiler(state)
94 if rescomp_type == ResourceCompilerType.rc:
95 # RC is used to generate .res files, a special binary resource
96 # format, which can be passed directly to LINK (apparently LINK uses
97 # CVTRES internally to convert this to a COFF object)
98 suffix = 'res'
99 res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']
100 else:
101 # ld only supports object files, so windres is used to generate a
102 # COFF object
103 suffix = 'o'
104 res_args = extra_args + ['@INPUT@', '@OUTPUT@']
105
106 m = 'Argument {!r} has a space which may not work with windres due to ' \
107 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'
108 for arg in extra_args:
109 if ' ' in arg:
110 mlog.warning(m.format(arg))
111
112 res_targets = []
113
114 def add_target(src):
115 if isinstance(src, list):
116 for subsrc in src:
117 add_target(subsrc)
118 return
119
120 if hasattr(src, 'held_object'):
121 src = src.held_object
122
123 if isinstance(src, str):
124 name_format = 'file {!r}'
125 name = os.path.join(state.subdir, src)
126 elif isinstance(src, mesonlib.File):
127 name_format = 'file {!r}'
128 name = src.relative_name()
129 elif isinstance(src, build.CustomTarget):
130 if len(src.get_outputs()) > 1:
131 raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')
132
133 name_format = 'target {!r}'
134 name = src.get_id()
135 else:
136 raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))
137
138 # Path separators are not allowed in target names
139 name = name.replace('/', '_').replace('\\', '_')
140
141 res_kwargs = {
142 'output': name + '_@BASENAME@.' + suffix,
143 'input': [src],
144 'command': [rescomp] + res_args,
145 'depend_files': wrc_depend_files,
146 'depends': wrc_depends,
147 }
148
149 # instruct binutils windres to generate a preprocessor depfile
150 if rescomp_type == ResourceCompilerType.windres:
151 res_kwargs['depfile'] = res_kwargs['output'] + '.d'
152 res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']
153
154 res_targets.append(build.CustomTarget('Windows resource for ' + name_format.format(name), state.subdir, state.subproject, res_kwargs))
155
156 add_target(args)
157
158 return ModuleReturnValue(res_targets, [res_targets])
159
160 def initialize(*args, **kwargs):
161 return WindowsModule(*args, **kwargs)
162
[end of mesonbuild/modules/windows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py
--- a/mesonbuild/modules/windows.py
+++ b/mesonbuild/modules/windows.py
@@ -23,7 +23,7 @@
from . import ModuleReturnValue
from . import ExtensionModule
from ..interpreter import CustomTargetHolder
-from ..interpreterbase import permittedKwargs, FeatureNewKwargs
+from ..interpreterbase import permittedKwargs, FeatureNewKwargs, flatten
from ..dependencies import ExternalProgram
class ResourceCompilerType(enum.Enum):
@@ -78,7 +78,7 @@
@FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files', 'depends'])
@permittedKwargs({'args', 'include_directories', 'depend_files', 'depends'})
def compile_resources(self, state, args, kwargs):
- extra_args = mesonlib.stringlistify(kwargs.get('args', []))
+ extra_args = mesonlib.stringlistify(flatten(kwargs.get('args', [])))
wrc_depend_files = extract_as_list(kwargs, 'depend_files', pop = True)
wrc_depends = extract_as_list(kwargs, 'depends', pop = True)
for d in wrc_depends:
| {"golden_diff": "diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py\n--- a/mesonbuild/modules/windows.py\n+++ b/mesonbuild/modules/windows.py\n@@ -23,7 +23,7 @@\n from . import ModuleReturnValue\n from . import ExtensionModule\n from ..interpreter import CustomTargetHolder\n-from ..interpreterbase import permittedKwargs, FeatureNewKwargs\n+from ..interpreterbase import permittedKwargs, FeatureNewKwargs, flatten\n from ..dependencies import ExternalProgram\n \n class ResourceCompilerType(enum.Enum):\n@@ -78,7 +78,7 @@\n @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files', 'depends'])\n @permittedKwargs({'args', 'include_directories', 'depend_files', 'depends'})\n def compile_resources(self, state, args, kwargs):\n- extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n+ extra_args = mesonlib.stringlistify(flatten(kwargs.get('args', [])))\n wrc_depend_files = extract_as_list(kwargs, 'depend_files', pop = True)\n wrc_depends = extract_as_list(kwargs, 'depends', pop = True)\n for d in wrc_depends:\n", "issue": "windres module doesn't flatten arguments\n```meson\r\nimport('windows').compile_resources('file.rc', args : [[-DFOO'], '-DBAR])\r\n```\r\nresults in\r\n```\r\nERROR: List item must be one of <class 'str'>\r\n```\n", "before_files": [{"content": "# Copyright 2015 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport enum\nimport os\nimport re\n\nfrom .. import mlog\nfrom .. import mesonlib, build\nfrom ..mesonlib import MachineChoice, MesonException, extract_as_list\nfrom . import get_include_args\nfrom . import ModuleReturnValue\nfrom . 
import ExtensionModule\nfrom ..interpreter import CustomTargetHolder\nfrom ..interpreterbase import permittedKwargs, FeatureNewKwargs\nfrom ..dependencies import ExternalProgram\n\nclass ResourceCompilerType(enum.Enum):\n windres = 1\n rc = 2\n\nclass WindowsModule(ExtensionModule):\n\n def detect_compiler(self, compilers):\n for l in ('c', 'cpp'):\n if l in compilers:\n return compilers[l]\n raise MesonException('Resource compilation requires a C or C++ compiler.')\n\n def _find_resource_compiler(self, state):\n # FIXME: Does not handle `native: true` executables, see\n # See https://github.com/mesonbuild/meson/issues/1531\n # Take a parameter instead of the hardcoded definition below\n for_machine = MachineChoice.HOST\n\n if hasattr(self, '_rescomp'):\n return self._rescomp\n\n # Will try cross / native file and then env var\n rescomp = ExternalProgram.from_bin_list(state.environment.binaries[for_machine], 'windres')\n\n if not rescomp or not rescomp.found():\n comp = self.detect_compiler(state.environment.coredata.compilers[for_machine])\n if comp.id in {'msvc', 'clang-cl', 'intel-cl'}:\n rescomp = ExternalProgram('rc', silent=True)\n else:\n rescomp = ExternalProgram('windres', silent=True)\n\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler')\n\n for (arg, match, rc_type) in [\n ('/?', '^.*Microsoft.*Resource Compiler.*$', ResourceCompilerType.rc),\n ('--version', '^.*GNU windres.*$', ResourceCompilerType.windres),\n ]:\n p, o, e = mesonlib.Popen_safe(rescomp.get_command() + [arg])\n m = re.search(match, o, re.MULTILINE)\n if m:\n mlog.log('Windows resource compiler: %s' % m.group())\n self._rescomp = (rescomp, rc_type)\n break\n else:\n raise MesonException('Could not determine type of Windows resource compiler')\n\n return self._rescomp\n\n @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files', 'depends'])\n @permittedKwargs({'args', 'include_directories', 'depend_files', 'depends'})\n def compile_resources(self, state, args, kwargs):\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n wrc_depend_files = extract_as_list(kwargs, 'depend_files', pop = True)\n wrc_depends = extract_as_list(kwargs, 'depends', pop = True)\n for d in wrc_depends:\n if isinstance(d, CustomTargetHolder):\n extra_args += get_include_args([d.outdir_include()])\n inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)\n for incd in inc_dirs:\n if not isinstance(incd.held_object, (str, build.IncludeDirs)):\n raise MesonException('Resource include dirs should be include_directories().')\n extra_args += get_include_args(inc_dirs)\n\n rescomp, rescomp_type = self._find_resource_compiler(state)\n if rescomp_type == ResourceCompilerType.rc:\n # RC is used to generate .res files, a special binary resource\n # format, which can be passed directly to LINK (apparently LINK uses\n # CVTRES internally to convert this to a COFF object)\n suffix = 'res'\n res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']\n else:\n # ld only supports object files, so windres is used to generate a\n # COFF object\n suffix = 'o'\n res_args = extra_args + ['@INPUT@', '@OUTPUT@']\n\n m = 'Argument {!r} has a space which may not work with windres due to ' \\\n 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'\n for arg in extra_args:\n if ' ' in arg:\n mlog.warning(m.format(arg))\n\n res_targets = []\n\n def add_target(src):\n if isinstance(src, list):\n for subsrc in src:\n add_target(subsrc)\n return\n\n if hasattr(src, 
'held_object'):\n src = src.held_object\n\n if isinstance(src, str):\n name_format = 'file {!r}'\n name = os.path.join(state.subdir, src)\n elif isinstance(src, mesonlib.File):\n name_format = 'file {!r}'\n name = src.relative_name()\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n\n name_format = 'target {!r}'\n name = src.get_id()\n else:\n raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))\n\n # Path separators are not allowed in target names\n name = name.replace('/', '_').replace('\\\\', '_')\n\n res_kwargs = {\n 'output': name + '_@BASENAME@.' + suffix,\n 'input': [src],\n 'command': [rescomp] + res_args,\n 'depend_files': wrc_depend_files,\n 'depends': wrc_depends,\n }\n\n # instruct binutils windres to generate a preprocessor depfile\n if rescomp_type == ResourceCompilerType.windres:\n res_kwargs['depfile'] = res_kwargs['output'] + '.d'\n res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']\n\n res_targets.append(build.CustomTarget('Windows resource for ' + name_format.format(name), state.subdir, state.subproject, res_kwargs))\n\n add_target(args)\n\n return ModuleReturnValue(res_targets, [res_targets])\n\ndef initialize(*args, **kwargs):\n return WindowsModule(*args, **kwargs)\n", "path": "mesonbuild/modules/windows.py"}]} | 2,510 | 268 |
gh_patches_debug_51474 | rasdani/github-patches | git_diff | kivy__kivy-1926 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SoundLoader can't determine file types for URLs with URL parameters in them.
Kivy currently can't load audio files from URLs that have URL parameters in them (for example `https://audio.example.com/get/test.wav?dl=true&token=9a8s76f9a876`).
</issue>
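A minimal standalone sketch of the failure mode described above (the URL is the one from the report; the splitting mirrors the loader's extension check below, and stripping the query string is one possible fix, not necessarily the project's final patch):

```python
filename = "https://audio.example.com/get/test.wav?dl=true&token=9a8s76f9a876"

# Naive extension detection, as in SoundLoader.load():
ext = filename.split('.')[-1].lower()
print(ext)  # 'wav?dl=true&token=9a8s76f9a876' -- no registered loader matches this

# Dropping everything after '?' recovers the real extension:
if '?' in ext:
    ext = ext.split('?')[0]
print(ext)  # 'wav'
```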
<code>
[start of kivy/core/audio/__init__.py]
1 '''
2 Audio
3 =====
4
5 Load an audio sound and play it with::
6
7 from kivy.core.audio import SoundLoader
8
9 sound = SoundLoader.load('mytest.wav')
10 if sound:
11 print("Sound found at %s" % sound.source)
12 print("Sound is %.3f seconds" % sound.length)
13 sound.play()
14
15 You should not use the Sound class directly. The class returned by
16 **SoundLoader.load** will be the best sound provider for that particular file
17 type, so it might return different Sound classes depending the file type.
18
19 .. versionchanged:: 1.8.0
20 There is now 2 distinct Gstreamer implementation: one using Gi/Gst working
21 for both Python 2+3 with Gstreamer 1.0, and one using PyGST working
22 only for Python 2 + Gstreamer 0.10.
23 If you have issue with GStreamer, have a look at
24 :ref:`gstreamer-compatibility`
25
26 .. note::
27
28 Recording audio is not supported.
29
30 '''
31
32 __all__ = ('Sound', 'SoundLoader')
33
34 from kivy.logger import Logger
35 from kivy.event import EventDispatcher
36 from kivy.core import core_register_libs
37 from kivy.compat import PY2
38 from kivy.resources import resource_find
39 from kivy.properties import StringProperty, NumericProperty, OptionProperty, \
40 AliasProperty, BooleanProperty
41
42
43 class SoundLoader:
44 '''Load a sound, using the best loader for the given file type.
45 '''
46
47 _classes = []
48
49 @staticmethod
50 def register(classobj):
51 '''Register a new class to load the sound.'''
52 Logger.debug('Audio: register %s' % classobj.__name__)
53 SoundLoader._classes.append(classobj)
54
55 @staticmethod
56 def load(filename):
57 '''Load a sound, and return a Sound() instance.'''
58 rfn = resource_find(filename)
59 if rfn is not None:
60 filename = rfn
61 ext = filename.split('.')[-1].lower()
62 for classobj in SoundLoader._classes:
63 if ext in classobj.extensions():
64 return classobj(source=filename)
65 Logger.warning('Audio: Unable to find a loader for <%s>' %
66 filename)
67 return None
68
69
70 class Sound(EventDispatcher):
71 '''Represents a sound to play. This class is abstract, and cannot be used
72 directly.
73
74 Use SoundLoader to load a sound.
75
76 :Events:
77 `on_play` : None
78 Fired when the sound is played.
79 `on_stop` : None
80 Fired when the sound is stopped.
81 '''
82
83 source = StringProperty(None)
84 '''Filename / source of your audio file.
85
86 .. versionadded:: 1.3.0
87
88 :attr:`source` is a :class:`~kivy.properties.StringProperty` that defaults
89 to None and is read-only. Use the :meth:`SoundLoader.load` for loading
90 audio.
91 '''
92
93 volume = NumericProperty(1.)
94 '''Volume, in the range 0-1. 1 means full volume, 0 means mute.
95
96 .. versionadded:: 1.3.0
97
98 :attr:`volume` is a :class:`~kivy.properties.NumericProperty` and defaults
99 to 1.
100 '''
101
102 state = OptionProperty('stop', options=('stop', 'play'))
103 '''State of the sound, one of 'stop' or 'play'.
104
105 .. versionadded:: 1.3.0
106
107 :attr:`state` is a read-only :class:`~kivy.properties.OptionProperty`.'''
108
109 loop = BooleanProperty(False)
110 '''Set to True if the sound should automatically loop when it finishes.
111
112 .. versionadded:: 1.8.0
113
114 :attr:`loop` is a :class:`~kivy.properties.BooleanProperty` and defaults to
115 False.'''
116
117 #
118 # deprecated
119 #
120 def _get_status(self):
121 return self.state
122 status = AliasProperty(_get_status, None, bind=('state', ))
123 '''
124 .. deprecated:: 1.3.0
125 Use :attr:`state` instead.
126 '''
127
128 def _get_filename(self):
129 return self.source
130 filename = AliasProperty(_get_filename, None, bind=('source', ))
131 '''
132 .. deprecated:: 1.3.0
133 Use :attr:`source` instead.
134 '''
135
136 __events__ = ('on_play', 'on_stop')
137
138 def on_source(self, instance, filename):
139 self.unload()
140 if filename is None:
141 return
142 self.load()
143
144 def get_pos(self):
145 '''
146 Returns the current position of the audio file.
147 Returns 0 if not playing.
148
149 .. versionadded:: 1.4.1
150 '''
151 return 0
152
153 def _get_length(self):
154 return 0
155
156 length = property(lambda self: self._get_length(),
157 doc='Get length of the sound (in seconds).')
158
159 def load(self):
160 '''Load the file into memory.'''
161 pass
162
163 def unload(self):
164 '''Unload the file from memory.'''
165 pass
166
167 def play(self):
168 '''Play the file.'''
169 self.state = 'play'
170 self.dispatch('on_play')
171
172 def stop(self):
173 '''Stop playback.'''
174 self.state = 'stop'
175 self.dispatch('on_stop')
176
177 def seek(self, position):
178 '''Go to the <position> (in seconds).'''
179 pass
180
181 def on_play(self):
182 pass
183
184 def on_stop(self):
185 pass
186
187
188 # Little trick here, don't activate gstreamer on window
189 # seem to have lot of crackle or something...
190 audio_libs = []
191
192 # from now on, prefer our gstplayer instead of gi/pygst.
193 try:
194 from kivy.lib.gstplayer import GstPlayer # NOQA
195 audio_libs += [('gstplayer', 'audio_gstplayer')]
196 except ImportError:
197 #audio_libs += [('gi', 'audio_gi')]
198 if PY2:
199 audio_libs += [('pygst', 'audio_pygst')]
200 audio_libs += [('sdl', 'audio_sdl')]
201 audio_libs += [('pygame', 'audio_pygame')]
202
203 core_register_libs('audio', audio_libs)
204
[end of kivy/core/audio/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/core/audio/__init__.py b/kivy/core/audio/__init__.py
--- a/kivy/core/audio/__init__.py
+++ b/kivy/core/audio/__init__.py
@@ -59,6 +59,8 @@
if rfn is not None:
filename = rfn
ext = filename.split('.')[-1].lower()
+ if '?' in ext:
+ ext = ext.split('?')[0]
for classobj in SoundLoader._classes:
if ext in classobj.extensions():
return classobj(source=filename)
| {"golden_diff": "diff --git a/kivy/core/audio/__init__.py b/kivy/core/audio/__init__.py\n--- a/kivy/core/audio/__init__.py\n+++ b/kivy/core/audio/__init__.py\n@@ -59,6 +59,8 @@\n if rfn is not None:\n filename = rfn\n ext = filename.split('.')[-1].lower()\n+ if '?' in ext:\n+ ext = ext.split('?')[0]\n for classobj in SoundLoader._classes:\n if ext in classobj.extensions():\n return classobj(source=filename)\n", "issue": "SoundLoader can't determine file types for URL's with URL parameters in them.\nKivy currently can't load audio files from URL's that have URL parameters in them (For example `https://audio.example.com/get/test.wav?dl=true&token=9a8s76f9a876`).\n\n", "before_files": [{"content": "'''\nAudio\n=====\n\nLoad an audio sound and play it with::\n\n from kivy.core.audio import SoundLoader\n\n sound = SoundLoader.load('mytest.wav')\n if sound:\n print(\"Sound found at %s\" % sound.source)\n print(\"Sound is %.3f seconds\" % sound.length)\n sound.play()\n\nYou should not use the Sound class directly. The class returned by\n**SoundLoader.load** will be the best sound provider for that particular file\ntype, so it might return different Sound classes depending the file type.\n\n.. versionchanged:: 1.8.0\n There is now 2 distinct Gstreamer implementation: one using Gi/Gst working\n for both Python 2+3 with Gstreamer 1.0, and one using PyGST working\n only for Python 2 + Gstreamer 0.10.\n If you have issue with GStreamer, have a look at\n :ref:`gstreamer-compatibility`\n\n.. note::\n\n Recording audio is not supported.\n\n'''\n\n__all__ = ('Sound', 'SoundLoader')\n\nfrom kivy.logger import Logger\nfrom kivy.event import EventDispatcher\nfrom kivy.core import core_register_libs\nfrom kivy.compat import PY2\nfrom kivy.resources import resource_find\nfrom kivy.properties import StringProperty, NumericProperty, OptionProperty, \\\n AliasProperty, BooleanProperty\n\n\nclass SoundLoader:\n '''Load a sound, using the best loader for the given file type.\n '''\n\n _classes = []\n\n @staticmethod\n def register(classobj):\n '''Register a new class to load the sound.'''\n Logger.debug('Audio: register %s' % classobj.__name__)\n SoundLoader._classes.append(classobj)\n\n @staticmethod\n def load(filename):\n '''Load a sound, and return a Sound() instance.'''\n rfn = resource_find(filename)\n if rfn is not None:\n filename = rfn\n ext = filename.split('.')[-1].lower()\n for classobj in SoundLoader._classes:\n if ext in classobj.extensions():\n return classobj(source=filename)\n Logger.warning('Audio: Unable to find a loader for <%s>' %\n filename)\n return None\n\n\nclass Sound(EventDispatcher):\n '''Represents a sound to play. This class is abstract, and cannot be used\n directly.\n\n Use SoundLoader to load a sound.\n\n :Events:\n `on_play` : None\n Fired when the sound is played.\n `on_stop` : None\n Fired when the sound is stopped.\n '''\n\n source = StringProperty(None)\n '''Filename / source of your audio file.\n\n .. versionadded:: 1.3.0\n\n :attr:`source` is a :class:`~kivy.properties.StringProperty` that defaults\n to None and is read-only. Use the :meth:`SoundLoader.load` for loading\n audio.\n '''\n\n volume = NumericProperty(1.)\n '''Volume, in the range 0-1. 1 means full volume, 0 means mute.\n\n .. versionadded:: 1.3.0\n\n :attr:`volume` is a :class:`~kivy.properties.NumericProperty` and defaults\n to 1.\n '''\n\n state = OptionProperty('stop', options=('stop', 'play'))\n '''State of the sound, one of 'stop' or 'play'.\n\n .. 
versionadded:: 1.3.0\n\n :attr:`state` is a read-only :class:`~kivy.properties.OptionProperty`.'''\n\n loop = BooleanProperty(False)\n '''Set to True if the sound should automatically loop when it finishes.\n\n .. versionadded:: 1.8.0\n\n :attr:`loop` is a :class:`~kivy.properties.BooleanProperty` and defaults to\n False.'''\n\n #\n # deprecated\n #\n def _get_status(self):\n return self.state\n status = AliasProperty(_get_status, None, bind=('state', ))\n '''\n .. deprecated:: 1.3.0\n Use :attr:`state` instead.\n '''\n\n def _get_filename(self):\n return self.source\n filename = AliasProperty(_get_filename, None, bind=('source', ))\n '''\n .. deprecated:: 1.3.0\n Use :attr:`source` instead.\n '''\n\n __events__ = ('on_play', 'on_stop')\n\n def on_source(self, instance, filename):\n self.unload()\n if filename is None:\n return\n self.load()\n\n def get_pos(self):\n '''\n Returns the current position of the audio file.\n Returns 0 if not playing.\n\n .. versionadded:: 1.4.1\n '''\n return 0\n\n def _get_length(self):\n return 0\n\n length = property(lambda self: self._get_length(),\n doc='Get length of the sound (in seconds).')\n\n def load(self):\n '''Load the file into memory.'''\n pass\n\n def unload(self):\n '''Unload the file from memory.'''\n pass\n\n def play(self):\n '''Play the file.'''\n self.state = 'play'\n self.dispatch('on_play')\n\n def stop(self):\n '''Stop playback.'''\n self.state = 'stop'\n self.dispatch('on_stop')\n\n def seek(self, position):\n '''Go to the <position> (in seconds).'''\n pass\n\n def on_play(self):\n pass\n\n def on_stop(self):\n pass\n\n\n# Little trick here, don't activate gstreamer on window\n# seem to have lot of crackle or something...\naudio_libs = []\n\n# from now on, prefer our gstplayer instead of gi/pygst.\ntry:\n from kivy.lib.gstplayer import GstPlayer # NOQA\n audio_libs += [('gstplayer', 'audio_gstplayer')]\nexcept ImportError:\n #audio_libs += [('gi', 'audio_gi')]\n if PY2:\n audio_libs += [('pygst', 'audio_pygst')]\naudio_libs += [('sdl', 'audio_sdl')]\naudio_libs += [('pygame', 'audio_pygame')]\n\ncore_register_libs('audio', audio_libs)\n", "path": "kivy/core/audio/__init__.py"}]} | 2,477 | 124 |
gh_patches_debug_16408 | rasdani/github-patches | git_diff | Mailu__Mailu-2569 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
not allowing POP3/IMAP leads to infinite loop in webmail
v1.9.32
I noticed a small bug. If both POP3 and IMAP are disabled for an account, webmail gets stuck in an infinite loop. I guess nobody ever tried it before, since both are checked by default.
Not very consequential, but I figured you might want to know. I'm not sure about the use case either; I unchecked them because there was no need for them on this particular account, and found the issue that way.
Cheers
</issue>
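A minimal sketch of the guard involved (a standalone function for illustration only; the condition mirrors the credential check in `core/admin/mailu/internal/nginx.py`, with ports 10143/10025 treated as the webmail's internal ports, which is an assumption taken from the comments in that file):

```python
WEBMAIL_PORTS = {'10143', '10025'}  # assumed internal webmail ports

def allowed(enabled, enable_imap, protocol, auth_port):
    """Return False when a disabled protocol should reject the login."""
    if not enabled:
        return False
    if protocol == "imap" and not enable_imap and auth_port not in WEBMAIL_PORTS:
        return False
    return True

print(allowed(True, False, "imap", "10143"))  # True: webmail can still log in
print(allowed(True, False, "imap", "143"))    # False: direct IMAP stays blocked
```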
<code>
[start of core/admin/mailu/internal/nginx.py]
1 from mailu import models, utils
2 from flask import current_app as app
3
4 import re
5 import urllib
6 import ipaddress
7 import socket
8 import sqlalchemy.exc
9 import tenacity
10
11 SUPPORTED_AUTH_METHODS = ["none", "plain"]
12
13
14 STATUSES = {
15 "authentication": ("Authentication credentials invalid", {
16 "imap": "AUTHENTICATIONFAILED",
17 "smtp": "535 5.7.8",
18 "pop3": "-ERR Authentication failed"
19 }),
20 "encryption": ("Must issue a STARTTLS command first", {
21 "smtp": "530 5.7.0"
22 }),
23 "ratelimit": ("Temporary authentication failure (rate-limit)", {
24 "imap": "LIMIT",
25 "smtp": "451 4.3.2",
26 "pop3": "-ERR [LOGIN-DELAY] Retry later"
27 }),
28 }
29
30 def check_credentials(user, password, ip, protocol=None, auth_port=None):
31 if not user or not user.enabled or (protocol == "imap" and not user.enable_imap) or (protocol == "pop3" and not user.enable_pop):
32 return False
33 is_ok = False
34 # webmails
35 if auth_port in ['10143', '10025'] and password.startswith('token-'):
36 if utils.verify_temp_token(user.get_id(), password):
37 is_ok = True
38 # All tokens are 32 characters hex lowercase
39 if not is_ok and len(password) == 32:
40 for token in user.tokens:
41 if (token.check_password(password) and
42 (not token.ip or token.ip == ip)):
43 is_ok = True
44 break
45 if not is_ok and user.check_password(password):
46 is_ok = True
47 return is_ok
48
49 def handle_authentication(headers):
50 """ Handle an HTTP nginx authentication request
51 See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
52 """
53 method = headers["Auth-Method"]
54 protocol = headers["Auth-Protocol"]
55 # Incoming mail, no authentication
56 if method == "none" and protocol == "smtp":
57 server, port = get_server(protocol, False)
58 if app.config["INBOUND_TLS_ENFORCE"]:
59 if "Auth-SSL" in headers and headers["Auth-SSL"] == "on":
60 return {
61 "Auth-Status": "OK",
62 "Auth-Server": server,
63 "Auth-Port": port
64 }
65 else:
66 status, code = get_status(protocol, "encryption")
67 return {
68 "Auth-Status": status,
69 "Auth-Error-Code" : code,
70 "Auth-Wait": 0
71 }
72 else:
73 return {
74 "Auth-Status": "OK",
75 "Auth-Server": server,
76 "Auth-Port": port
77 }
78 # Authenticated user
79 elif method == "plain":
80 is_valid_user = False
81 # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
82 # be ASCII and are generally considered ISO8859-1. However when passing
83 # the password, nginx does not transcode the input UTF string, thus
84 # we need to manually decode.
85 raw_user_email = urllib.parse.unquote(headers["Auth-User"])
86 raw_password = urllib.parse.unquote(headers["Auth-Pass"])
87 user_email = 'invalid'
88 try:
89 user_email = raw_user_email.encode("iso8859-1").decode("utf8")
90 password = raw_password.encode("iso8859-1").decode("utf8")
91 ip = urllib.parse.unquote(headers["Client-Ip"])
92 except:
93 app.logger.warn(f'Received undecodable user/password from nginx: {raw_user_email!r}/{raw_password!r}')
94 else:
95 try:
96 user = models.User.query.get(user_email) if '@' in user_email else None
97 except sqlalchemy.exc.StatementError as exc:
98 exc = str(exc).split('\n', 1)[0]
99 app.logger.warn(f'Invalid user {user_email!r}: {exc}')
100 else:
101 is_valid_user = user is not None
102 ip = urllib.parse.unquote(headers["Client-Ip"])
103 if check_credentials(user, password, ip, protocol, headers["Auth-Port"]):
104 server, port = get_server(headers["Auth-Protocol"], True)
105 return {
106 "Auth-Status": "OK",
107 "Auth-Server": server,
108 "Auth-User": user_email,
109 "Auth-User-Exists": is_valid_user,
110 "Auth-Port": port
111 }
112 status, code = get_status(protocol, "authentication")
113 return {
114 "Auth-Status": status,
115 "Auth-Error-Code": code,
116 "Auth-User": user_email,
117 "Auth-User-Exists": is_valid_user,
118 "Auth-Wait": 0
119 }
120 # Unexpected
121 return {}
122
123
124 def get_status(protocol, status):
125 """ Return the proper error code depending on the protocol
126 """
127 status, codes = STATUSES[status]
128 return status, codes[protocol]
129
130 def extract_host_port(host_and_port, default_port):
131 host, _, port = re.match('^(.*?)(:([0-9]*))?$', host_and_port).groups()
132 return host, int(port) if port else default_port
133
134 def get_server(protocol, authenticated=False):
135 if protocol == "imap":
136 hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)
137 elif protocol == "pop3":
138 hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)
139 elif protocol == "smtp":
140 if authenticated:
141 hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)
142 else:
143 hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)
144 try:
145 # test if hostname is already resolved to an ip adddress
146 ipaddress.ip_address(hostname)
147 except:
148 # hostname is not an ip address - so we need to resolve it
149 hostname = resolve_hostname(hostname)
150 return hostname, port
151
152 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
153 wait=tenacity.wait_random(min=2, max=5))
154 def resolve_hostname(hostname):
155 """ This function uses system DNS to resolve a hostname.
156 It is capable of retrying in case the host is not immediately available
157 """
158 return socket.gethostbyname(hostname)
159
[end of core/admin/mailu/internal/nginx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py
--- a/core/admin/mailu/internal/nginx.py
+++ b/core/admin/mailu/internal/nginx.py
@@ -27,12 +27,14 @@
}),
}
+WEBMAIL_PORTS = ['10143', '10025']
+
def check_credentials(user, password, ip, protocol=None, auth_port=None):
- if not user or not user.enabled or (protocol == "imap" and not user.enable_imap) or (protocol == "pop3" and not user.enable_pop):
+ if not user or not user.enabled or (protocol == "imap" and not user.enable_imap and not auth_port in WEBMAIL_PORTS) or (protocol == "pop3" and not user.enable_pop):
return False
is_ok = False
# webmails
- if auth_port in ['10143', '10025'] and password.startswith('token-'):
+ if auth_port in WEBMAIL_PORTS and password.startswith('token-'):
if utils.verify_temp_token(user.get_id(), password):
is_ok = True
# All tokens are 32 characters hex lowercase
| {"golden_diff": "diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py\n--- a/core/admin/mailu/internal/nginx.py\n+++ b/core/admin/mailu/internal/nginx.py\n@@ -27,12 +27,14 @@\n }),\n }\n \n+WEBMAIL_PORTS = ['10143', '10025']\n+\n def check_credentials(user, password, ip, protocol=None, auth_port=None):\n- if not user or not user.enabled or (protocol == \"imap\" and not user.enable_imap) or (protocol == \"pop3\" and not user.enable_pop):\n+ if not user or not user.enabled or (protocol == \"imap\" and not user.enable_imap and not auth_port in WEBMAIL_PORTS) or (protocol == \"pop3\" and not user.enable_pop):\n return False\n is_ok = False\n # webmails\n- if auth_port in ['10143', '10025'] and password.startswith('token-'):\n+ if auth_port in WEBMAIL_PORTS and password.startswith('token-'):\n if utils.verify_temp_token(user.get_id(), password):\n is_ok = True\n # All tokens are 32 characters hex lowercase\n", "issue": "not allowing POP3/IMAP leads to infinite loop in webmail\nv1.9.32\r\n\r\nI noticed a small bug. If both are disabled, webmail is stuck in an infinite loop. I guess nobody ever tried it before since both are checked by default.\r\n\r\nNot very consequential, but I figured you might want to know. Not sure about the use case either. I unchecked them because there was no need for this particular account and found it that way.\r\n\r\nCheers\n", "before_files": [{"content": "from mailu import models, utils\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport sqlalchemy.exc\nimport tenacity\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n \"encryption\": (\"Must issue a STARTTLS command first\", {\n \"smtp\": \"530 5.7.0\"\n }),\n \"ratelimit\": (\"Temporary authentication failure (rate-limit)\", {\n \"imap\": \"LIMIT\",\n \"smtp\": \"451 4.3.2\",\n \"pop3\": \"-ERR [LOGIN-DELAY] Retry later\"\n }),\n}\n\ndef check_credentials(user, password, ip, protocol=None, auth_port=None):\n if not user or not user.enabled or (protocol == \"imap\" and not user.enable_imap) or (protocol == \"pop3\" and not user.enable_pop):\n return False\n is_ok = False\n # webmails\n if auth_port in ['10143', '10025'] and password.startswith('token-'):\n if utils.verify_temp_token(user.get_id(), password):\n is_ok = True\n # All tokens are 32 characters hex lowercase\n if not is_ok and len(password) == 32:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n is_ok = True\n break\n if not is_ok and user.check_password(password):\n is_ok = True\n return is_ok\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(protocol, False)\n if app.config[\"INBOUND_TLS_ENFORCE\"]:\n if \"Auth-SSL\" in headers and headers[\"Auth-SSL\"] == \"on\":\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"encryption\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\" : code,\n \"Auth-Wait\": 
0\n }\n else:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n is_valid_user = False\n # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n # be ASCII and are generally considered ISO8859-1. However when passing\n # the password, nginx does not transcode the input UTF string, thus\n # we need to manually decode.\n raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n user_email = 'invalid'\n try:\n user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n except:\n app.logger.warn(f'Received undecodable user/password from nginx: {raw_user_email!r}/{raw_password!r}')\n else:\n try:\n user = models.User.query.get(user_email) if '@' in user_email else None\n except sqlalchemy.exc.StatementError as exc:\n exc = str(exc).split('\\n', 1)[0]\n app.logger.warn(f'Invalid user {user_email!r}: {exc}')\n else:\n is_valid_user = user is not None\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n if check_credentials(user, password, ip, protocol, headers[\"Auth-Port\"]):\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-User\": user_email,\n \"Auth-User-Exists\": is_valid_user,\n \"Auth-Port\": port\n }\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-User\": user_email,\n \"Auth-User-Exists\": is_valid_user,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*?)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}]} | 2,431 | 265 |
gh_patches_debug_13686 | rasdani/github-patches | git_diff | cobbler__cobbler-3649 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SELinux issue when service is restarted
This issue was initially reported at
https://bugzilla.redhat.com/show_bug.cgi?id=1754430
There is a SELinux denial when the cobblerd service is restarted because of the permissions of the web.ss and other webui_sessions files.
I'm not sure whether this is 3.0.x only or also exists in 2.8.x, but there is a need to understand why cobblerd (uid root) tries to read these files...
Of course it can probably be fixed by using 640 permissions on these files (to be tested), but that just works around the problem.
</issue>
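A small illustrative snippet of the workaround mentioned above (tighter permissions on the shared-secret file). The path and account name here are placeholders, and changing ownership to root plus the web server's group is shown commented out because it needs root privileges:

```python
import os
import pwd  # only needed for the commented-out chown below
import tempfile

http_user = "apache"  # assumed web-server account; "www-data" or "wwwrun" on other distros

fd, ssfile = tempfile.mkstemp()  # stand-in for /var/lib/cobbler/web.ss
os.close(fd)

os.chmod(ssfile, 0o640)  # rw for owner, read-only for group, nothing for others
print(oct(os.stat(ssfile).st_mode & 0o777))  # 0o640

# With privileges, ownership could be root plus the web server's primary group:
# os.chown(ssfile, 0, pwd.getpwnam(http_user).pw_gid)
```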
<code>
[start of cobbler/cobblerd.py]
1 """
2 Cobbler daemon for logging remote syslog traffic during automatic installation
3
4 Copyright 2007-2009, Red Hat, Inc and Others
5 Michael DeHaan <michael.dehaan AT gmail>
6
7 This program is free software; you can redistribute it and/or modify
8 it under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 2 of the License, or
10 (at your option) any later version.
11
12 This program is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 GNU General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with this program; if not, write to the Free Software
19 Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
20 02110-1301 USA
21 """
22
23 import binascii
24 import os
25 import pwd
26 import time
27
28 from cobbler import remote
29 from cobbler import utils
30 from cobbler.api import CobblerAPI
31
32
33 def core(cobbler_api: CobblerAPI):
34 """
35 Starts Cobbler.
36
37 :param cobbler_api: The cobbler_api instance which is used for this method.
38 """
39 settings = cobbler_api.settings()
40 xmlrpc_port = settings.xmlrpc_port
41
42 regen_ss_file()
43 do_xmlrpc_rw(cobbler_api, settings, xmlrpc_port)
44
45
46 def regen_ss_file():
47 """
48 This is only used for Kerberos auth at the moment. It identifies XMLRPC requests from Apache that have already been
49 cleared by Kerberos.
50 """
51 ssfile = "/var/lib/cobbler/web.ss"
52 with open("/dev/urandom", 'rb') as fd:
53 data = fd.read(512)
54
55 with open(ssfile, 'wb', 0o660) as fd:
56 fd.write(binascii.hexlify(data))
57
58 http_user = "apache"
59 family = utils.get_family()
60 if family == "debian":
61 http_user = "www-data"
62 elif family == "suse":
63 http_user = "wwwrun"
64 os.lchown("/var/lib/cobbler/web.ss", pwd.getpwnam(http_user)[2], -1)
65
66
67 def do_xmlrpc_rw(cobbler_api: CobblerAPI, settings, port):
68 """
69 This trys to bring up the Cobbler xmlrpc_api and restart it if it fails.
70
71 :param cobbler_api: The cobbler_api instance which is used for this method.
72 :param settings: The Cobbler settings instance which is used for this method.
73 :param port: The port where the xmlrpc api should run on.
74 """
75 xinterface = remote.ProxiedXMLRPCInterface(cobbler_api, remote.CobblerXMLRPCInterface)
76 server = remote.CobblerXMLRPCServer(('127.0.0.1', port))
77 server.logRequests = 0 # don't print stuff
78 xinterface.logger.debug("XMLRPC running on %s" % port)
79 server.register_instance(xinterface)
80
81 while True:
82 try:
83 print("SERVING!")
84 server.serve_forever()
85 except IOError:
86 # interrupted? try to serve again
87 time.sleep(0.5)
88
89
90 if __name__ == "__main__":
91 core(CobblerAPI())
92
[end of cobbler/cobblerd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cobbler/cobblerd.py b/cobbler/cobblerd.py
--- a/cobbler/cobblerd.py
+++ b/cobbler/cobblerd.py
@@ -52,7 +52,7 @@
with open("/dev/urandom", 'rb') as fd:
data = fd.read(512)
- with open(ssfile, 'wb', 0o660) as fd:
+ with open(ssfile, 'wb', 0o640) as fd:
fd.write(binascii.hexlify(data))
http_user = "apache"
@@ -61,7 +61,7 @@
http_user = "www-data"
elif family == "suse":
http_user = "wwwrun"
- os.lchown("/var/lib/cobbler/web.ss", pwd.getpwnam(http_user)[2], -1)
+ os.lchown(ssfile, 0, pwd.getpwnam(http_user)[3])
def do_xmlrpc_rw(cobbler_api: CobblerAPI, settings, port):
| {"golden_diff": "diff --git a/cobbler/cobblerd.py b/cobbler/cobblerd.py\n--- a/cobbler/cobblerd.py\n+++ b/cobbler/cobblerd.py\n@@ -52,7 +52,7 @@\n with open(\"/dev/urandom\", 'rb') as fd:\n data = fd.read(512)\n \n- with open(ssfile, 'wb', 0o660) as fd:\n+ with open(ssfile, 'wb', 0o640) as fd:\n fd.write(binascii.hexlify(data))\n \n http_user = \"apache\"\n@@ -61,7 +61,7 @@\n http_user = \"www-data\"\n elif family == \"suse\":\n http_user = \"wwwrun\"\n- os.lchown(\"/var/lib/cobbler/web.ss\", pwd.getpwnam(http_user)[2], -1)\n+ os.lchown(ssfile, 0, pwd.getpwnam(http_user)[3])\n \n \n def do_xmlrpc_rw(cobbler_api: CobblerAPI, settings, port):\n", "issue": "SELinux issue when service is restarted\nThis issue was initially reported at\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1754430\r\n\r\nThere is a SELinux denial when the cobblerd service is restarted because of the permissions of the web.ss and others webui_sessions files.\r\n\r\nI'm not sure to understand if this is 3.0.x only or also only exists in 2.8.x, but for me there is a need to understand why cobblerd (uid root) tries to read theses files...\r\nOf course it can probably be fixed by using 640 perm on theses files. (to be tested) but it just workaround the problem.\n", "before_files": [{"content": "\"\"\"\nCobbler daemon for logging remote syslog traffic during automatic installation\n\nCopyright 2007-2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\nimport binascii\nimport os\nimport pwd\nimport time\n\nfrom cobbler import remote\nfrom cobbler import utils\nfrom cobbler.api import CobblerAPI\n\n\ndef core(cobbler_api: CobblerAPI):\n \"\"\"\n Starts Cobbler.\n\n :param cobbler_api: The cobbler_api instance which is used for this method.\n \"\"\"\n settings = cobbler_api.settings()\n xmlrpc_port = settings.xmlrpc_port\n\n regen_ss_file()\n do_xmlrpc_rw(cobbler_api, settings, xmlrpc_port)\n\n\ndef regen_ss_file():\n \"\"\"\n This is only used for Kerberos auth at the moment. 
It identifies XMLRPC requests from Apache that have already been\n cleared by Kerberos.\n \"\"\"\n ssfile = \"/var/lib/cobbler/web.ss\"\n with open(\"/dev/urandom\", 'rb') as fd:\n data = fd.read(512)\n\n with open(ssfile, 'wb', 0o660) as fd:\n fd.write(binascii.hexlify(data))\n\n http_user = \"apache\"\n family = utils.get_family()\n if family == \"debian\":\n http_user = \"www-data\"\n elif family == \"suse\":\n http_user = \"wwwrun\"\n os.lchown(\"/var/lib/cobbler/web.ss\", pwd.getpwnam(http_user)[2], -1)\n\n\ndef do_xmlrpc_rw(cobbler_api: CobblerAPI, settings, port):\n \"\"\"\n This trys to bring up the Cobbler xmlrpc_api and restart it if it fails.\n\n :param cobbler_api: The cobbler_api instance which is used for this method.\n :param settings: The Cobbler settings instance which is used for this method.\n :param port: The port where the xmlrpc api should run on.\n \"\"\"\n xinterface = remote.ProxiedXMLRPCInterface(cobbler_api, remote.CobblerXMLRPCInterface)\n server = remote.CobblerXMLRPCServer(('127.0.0.1', port))\n server.logRequests = 0 # don't print stuff\n xinterface.logger.debug(\"XMLRPC running on %s\" % port)\n server.register_instance(xinterface)\n\n while True:\n try:\n print(\"SERVING!\")\n server.serve_forever()\n except IOError:\n # interrupted? try to serve again\n time.sleep(0.5)\n\n\nif __name__ == \"__main__\":\n core(CobblerAPI())\n", "path": "cobbler/cobblerd.py"}]} | 1,619 | 244 |
gh_patches_debug_25206 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-1049 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
About option of win_package
I found two issues.
I think the product ID parameter is not "product_id"; it is "productid".
Also, it seems the "required" column should be "yes".
```
fatal: [10.1.1.6]: FAILED! => {"changed": false, "failed": true, "msg": "Missing required argument: productid"
```
Therefore, the example below from the win_package documentation uses "ProductId" incorrectly:
```
# Playbook example
- name: Install the vc thingy
win_package:
name="Microsoft Visual C thingy"
path="http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe"
ProductId="{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}"
Arguments="/install /passive /norestart"
```
</issue>
<code>
[start of windows/win_package.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2014, Trond Hindenes <[email protected]>, and others
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 # this is a windows documentation stub. actual code lives in the .ps1
22 # file of the same name
23
24 DOCUMENTATION = '''
25 ---
26 module: win_package
27 version_added: "1.7"
28 short_description: Installs/Uninstalls a installable package, either from local file system or url
29 description:
30 - Installs or uninstalls a package
31 options:
32 path:
33 description:
34 - Location of the package to be installed (either on file system, network share or url)
35 required: true
36 default: null
37 aliases: []
38 name:
39 description:
40 - name of the package. Just for logging reasons, will use the value of path if name isn't specified
41 required: false
42 default: null
43 aliases: []
44 product_id:
45 description:
46 - product id of the installed package (used for checking if already installed)
47 required: false
48 default: null
49 aliases: []
50 arguments:
51 description:
52 - Any arguments the installer needs
53 default: null
54 aliases: []
55 state:
56 description:
57 - Install or Uninstall
58 choices:
59 - present
60 - absent
61 default: present
62 aliases: [ensure]
63 user_name:
64 description:
65 - Username of an account with access to the package if its located on a file share. Only needed if the winrm user doesn't have access to the package. Also specify user_password for this to function properly.
66 default: null
67 aliases: []
68 user_password:
69 description:
70 - Password of an account with access to the package if its located on a file share. Only needed if the winrm user doesn't have access to the package. Also specify user_name for this to function properly.
71 default: null
72 aliases: []
73 author: Trond Hindenes
74 '''
75
76 EXAMPLES = '''
77 # Playbook example
78 - name: Install the vc thingy
79 win_package:
80 name="Microsoft Visual C thingy"
81 path="http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe"
82 ProductId="{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}"
83 Arguments="/install /passive /norestart"
84
85
86 '''
87
88
[end of windows/win_package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/windows/win_package.py b/windows/win_package.py
--- a/windows/win_package.py
+++ b/windows/win_package.py
@@ -25,6 +25,7 @@
---
module: win_package
version_added: "1.7"
+author: Trond Hindenes
short_description: Installs/Uninstalls a installable package, either from local file system or url
description:
- Installs or uninstalls a package
@@ -44,9 +45,9 @@
product_id:
description:
- product id of the installed package (used for checking if already installed)
- required: false
+ required: true
default: null
- aliases: []
+ aliases: [productid]
arguments:
description:
- Any arguments the installer needs
@@ -79,7 +80,7 @@
win_package:
name="Microsoft Visual C thingy"
path="http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe"
- ProductId="{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}"
+ Product_Id="{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}"
Arguments="/install /passive /norestart"
| {"golden_diff": "diff --git a/windows/win_package.py b/windows/win_package.py\n--- a/windows/win_package.py\n+++ b/windows/win_package.py\n@@ -25,6 +25,7 @@\n ---\n module: win_package\n version_added: \"1.7\"\n+author: Trond Hindenes\n short_description: Installs/Uninstalls a installable package, either from local file system or url\n description:\n - Installs or uninstalls a package\n@@ -44,9 +45,9 @@\n product_id:\n description:\n - product id of the installed package (used for checking if already installed)\n- required: false\n+ required: true\n default: null\n- aliases: []\n+ aliases: [productid]\n arguments:\n description:\n - Any arguments the installer needs\n@@ -79,7 +80,7 @@\n win_package:\n name=\"Microsoft Visual C thingy\"\n path=\"http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe\"\n- ProductId=\"{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}\"\n+ Product_Id=\"{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}\"\n Arguments=\"/install /passive /norestart\"\n", "issue": "About option of win_package\nI found two issue.\n\nI think Product-ID parameter is not \"product_id\" , is it \"productid\"?\nAlso , it seems the required columns is \"yes\".\n\n```\nfatal: [10.1.1.6]: FAILED! => {\"changed\": false, \"failed\": true, \"msg\": \"Missing required argument: productid\"\n```\n\nTherefore , it take a mistake about \"ProductId\" below an example on document of win_package:\n\n```\n# Playbook example\n - name: Install the vc thingy\n win_package:\n name=\"Microsoft Visual C thingy\"\n path=\"http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe\"\n ProductId=\"{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}\"\n Arguments=\"/install /passive /norestart\"\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, Trond Hindenes <[email protected]>, and others\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# this is a windows documentation stub. actual code lives in the .ps1\n# file of the same name\n\nDOCUMENTATION = '''\n---\nmodule: win_package\nversion_added: \"1.7\"\nshort_description: Installs/Uninstalls a installable package, either from local file system or url\ndescription:\n - Installs or uninstalls a package\noptions:\n path:\n description:\n - Location of the package to be installed (either on file system, network share or url)\n required: true\n default: null\n aliases: []\n name:\n description:\n - name of the package. 
Just for logging reasons, will use the value of path if name isn't specified\n required: false\n default: null\n aliases: []\n product_id:\n description:\n - product id of the installed package (used for checking if already installed)\n required: false\n default: null\n aliases: []\n arguments:\n description:\n - Any arguments the installer needs\n default: null\n aliases: []\n state:\n description:\n - Install or Uninstall\n choices:\n - present\n - absent\n default: present\n aliases: [ensure]\n user_name:\n description:\n - Username of an account with access to the package if its located on a file share. Only needed if the winrm user doesn't have access to the package. Also specify user_password for this to function properly.\n default: null\n aliases: []\n user_password:\n description:\n - Password of an account with access to the package if its located on a file share. Only needed if the winrm user doesn't have access to the package. Also specify user_name for this to function properly.\n default: null\n aliases: []\nauthor: Trond Hindenes\n'''\n\nEXAMPLES = '''\n# Playbook example\n - name: Install the vc thingy\n win_package:\n name=\"Microsoft Visual C thingy\"\n path=\"http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe\"\n ProductId=\"{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}\"\n Arguments=\"/install /passive /norestart\"\n\n\n'''\n\n", "path": "windows/win_package.py"}]} | 1,663 | 345 |
gh_patches_debug_2029 | rasdani/github-patches | git_diff | netbox-community__netbox-15568 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in Tag model
### Deployment Type
Self-hosted
### NetBox Version
v3.7.4
### Python Version
3.8
### Steps to Reproduce
Typo in help_text where "this" is mistakenly repeated.
https://github.com/netbox-community/netbox/blob/69c0aac1051015660133b2ae3c86607dabd8084b/netbox/extras/models/tags.py#L40
### Expected Behavior
The object type(s) to which this tag can be applied.
### Observed Behavior
The object type(s) to which this this tag can be applied.
</issue>
<code>
[start of netbox/extras/models/tags.py]
1 from django.conf import settings
2 from django.db import models
3 from django.urls import reverse
4 from django.utils.text import slugify
5 from django.utils.translation import gettext_lazy as _
6 from taggit.models import TagBase, GenericTaggedItemBase
7
8 from netbox.models import ChangeLoggedModel
9 from netbox.models.features import CloningMixin, ExportTemplatesMixin
10 from utilities.choices import ColorChoices
11 from utilities.fields import ColorField
12
13 __all__ = (
14 'Tag',
15 'TaggedItem',
16 )
17
18
19 #
20 # Tags
21 #
22
23 class Tag(CloningMixin, ExportTemplatesMixin, ChangeLoggedModel, TagBase):
24 id = models.BigAutoField(
25 primary_key=True
26 )
27 color = ColorField(
28 verbose_name=_('color'),
29 default=ColorChoices.COLOR_GREY
30 )
31 description = models.CharField(
32 verbose_name=_('description'),
33 max_length=200,
34 blank=True,
35 )
36 object_types = models.ManyToManyField(
37 to='contenttypes.ContentType',
38 related_name='+',
39 blank=True,
40 help_text=_("The object type(s) to which this this tag can be applied.")
41 )
42
43 clone_fields = (
44 'color', 'description', 'object_types',
45 )
46
47 class Meta:
48 ordering = ['name']
49 verbose_name = _('tag')
50 verbose_name_plural = _('tags')
51
52 def get_absolute_url(self):
53 return reverse('extras:tag', args=[self.pk])
54
55 @property
56 def docs_url(self):
57 return f'{settings.STATIC_URL}docs/models/extras/tag/'
58
59 def slugify(self, tag, i=None):
60 # Allow Unicode in Tag slugs (avoids empty slugs for Tags with all-Unicode names)
61 slug = slugify(tag, allow_unicode=True)
62 if i is not None:
63 slug += "_%d" % i
64 return slug
65
66
67 class TaggedItem(GenericTaggedItemBase):
68 tag = models.ForeignKey(
69 to=Tag,
70 related_name="%(app_label)s_%(class)s_items",
71 on_delete=models.CASCADE
72 )
73
74 _netbox_private = True
75
76 class Meta:
77 indexes = [models.Index(fields=["content_type", "object_id"])]
78 verbose_name = _('tagged item')
79 verbose_name_plural = _('tagged items')
80
[end of netbox/extras/models/tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/extras/models/tags.py b/netbox/extras/models/tags.py
--- a/netbox/extras/models/tags.py
+++ b/netbox/extras/models/tags.py
@@ -37,7 +37,7 @@
to='contenttypes.ContentType',
related_name='+',
blank=True,
- help_text=_("The object type(s) to which this this tag can be applied.")
+ help_text=_("The object type(s) to which this tag can be applied.")
)
clone_fields = (
| {"golden_diff": "diff --git a/netbox/extras/models/tags.py b/netbox/extras/models/tags.py\n--- a/netbox/extras/models/tags.py\n+++ b/netbox/extras/models/tags.py\n@@ -37,7 +37,7 @@\n to='contenttypes.ContentType',\n related_name='+',\n blank=True,\n- help_text=_(\"The object type(s) to which this this tag can be applied.\")\n+ help_text=_(\"The object type(s) to which this tag can be applied.\")\n )\n \n clone_fields = (\n", "issue": "Typo in Tag model\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.4\n\n### Python Version\n\n3.8\n\n### Steps to Reproduce\n\nTypo in help_text where \"this\" is mistakenly repeated.\r\n\r\nhttps://github.com/netbox-community/netbox/blob/69c0aac1051015660133b2ae3c86607dabd8084b/netbox/extras/models/tags.py#L40\n\n### Expected Behavior\n\nThe object type(s) to which this tag can be applied.\n\n### Observed Behavior\n\nThe object type(s) to which this this tag can be applied.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext_lazy as _\nfrom taggit.models import TagBase, GenericTaggedItemBase\n\nfrom netbox.models import ChangeLoggedModel\nfrom netbox.models.features import CloningMixin, ExportTemplatesMixin\nfrom utilities.choices import ColorChoices\nfrom utilities.fields import ColorField\n\n__all__ = (\n 'Tag',\n 'TaggedItem',\n)\n\n\n#\n# Tags\n#\n\nclass Tag(CloningMixin, ExportTemplatesMixin, ChangeLoggedModel, TagBase):\n id = models.BigAutoField(\n primary_key=True\n )\n color = ColorField(\n verbose_name=_('color'),\n default=ColorChoices.COLOR_GREY\n )\n description = models.CharField(\n verbose_name=_('description'),\n max_length=200,\n blank=True,\n )\n object_types = models.ManyToManyField(\n to='contenttypes.ContentType',\n related_name='+',\n blank=True,\n help_text=_(\"The object type(s) to which this this tag can be applied.\")\n )\n\n clone_fields = (\n 'color', 'description', 'object_types',\n )\n\n class Meta:\n ordering = ['name']\n verbose_name = _('tag')\n verbose_name_plural = _('tags')\n\n def get_absolute_url(self):\n return reverse('extras:tag', args=[self.pk])\n\n @property\n def docs_url(self):\n return f'{settings.STATIC_URL}docs/models/extras/tag/'\n\n def slugify(self, tag, i=None):\n # Allow Unicode in Tag slugs (avoids empty slugs for Tags with all-Unicode names)\n slug = slugify(tag, allow_unicode=True)\n if i is not None:\n slug += \"_%d\" % i\n return slug\n\n\nclass TaggedItem(GenericTaggedItemBase):\n tag = models.ForeignKey(\n to=Tag,\n related_name=\"%(app_label)s_%(class)s_items\",\n on_delete=models.CASCADE\n )\n\n _netbox_private = True\n\n class Meta:\n indexes = [models.Index(fields=[\"content_type\", \"object_id\"])]\n verbose_name = _('tagged item')\n verbose_name_plural = _('tagged items')\n", "path": "netbox/extras/models/tags.py"}]} | 1,327 | 113 |
gh_patches_debug_15836 | rasdani/github-patches | git_diff | scverse__scanpy-1054 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sc.queries.enrich throws AssertionError with floats
<!-- Please give a clear and concise description of what the bug is: -->
I'm trying to run an enrichment analysis after filtering out certain genes via `sc.tl.filter_rank_genes_groups`, so I use `key='rank_genes_groups_filtered'` as an argument for `sc.queries.enrich`. Since the filtered values are replaced with `nan` I hoped they'd be ignored in the enrichment analysis, but it actually leads to an uninformative `AssertionError`.
My suggestion here is simply to filter `nan` values from the gene list around here and 2 lines later: https://github.com/theislab/scanpy/blob/249fc572471683357b86b8bbf41d3284118bc8f8/scanpy/queries/_queries.py#L296
I can make a little PR if we agree with this simple fix
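A minimal sketch of that filtering (assuming `de` is the dataframe returned by `rank_genes_groups_df` inside `_enrich_anndata`, and `gene_symbols` the optional column name):
```python
# drop genes blanked out by sc.tl.filter_rank_genes_groups (stored as NaN)
if gene_symbols is not None:
    gene_list = list(de[gene_symbols].dropna())
else:
    gene_list = list(de["names"].dropna())
```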
Note you can reproduce this very simply without an adata object (but of course the likely use case is with an adata object as outlined above):
<!-- Put a minimal reproducible example that reproduces the bug in the code block below: -->
```python
sc.queries.enrich([float('nan')])
```
Output:
<!-- Put your Error output in this code block (if applicable, else delete the block): -->
```pytb
AssertionError: query failed with error 500
```
#### Versions:
```
scanpy==1.4.5.post2 anndata==0.6.22.post1 umap==0.3.10 numpy==1.18.1 scipy==1.2.1 pandas==1.0.1 scikit-learn==0.22.1 statsmodels==0.11.0 python-igraph==0.8.0
```
</issue>
<code>
[start of scanpy/queries/_queries.py]
1 import collections.abc as cabc
2 from functools import singledispatch
3 from types import MappingProxyType
4 from typing import Any, Union, Optional, Iterable, Dict, Mapping
5
6 import pandas as pd
7 from anndata import AnnData
8
9 from ..get import rank_genes_groups_df
10 from .._utils import _doc_params
11
12
13 _doc_org = """\
14 org
15 Organism to query. Must be an organism in ensembl biomart. "hsapiens",
16 "mmusculus", "drerio", etc.\
17 """
18
19 _doc_host = """\
20 host
21 A valid BioMart host URL. Alternative values include archive urls (like
22 "grch37.ensembl.org") or regional mirrors (like "useast.ensembl.org").\
23 """
24
25 _doc_use_cache = """\
26 use_cache
27 Whether pybiomart should use a cache for requests. Will create a
28 `.pybiomart.sqlite` file in current directory if used.\
29 """
30
31
32 @_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)
33 def simple_query(
34 org: str,
35 attrs: Union[Iterable[str], str],
36 *,
37 filters: Optional[Dict[str, Any]] = None,
38 host: str = "www.ensembl.org",
39 use_cache: bool = False,
40 ) -> pd.DataFrame:
41 """\
42 A simple interface to biomart.
43
44 Params
45 ------
46 {doc_org}
47 attrs
48 What you want returned.
49 filters
50 What you want to pick out.
51 {doc_host}
52 {doc_use_cache}
53 """
54 if isinstance(attrs, str):
55 attrs = [attrs]
56 elif isinstance(attrs, cabc.Iterable):
57 attrs = list(attrs)
58 else:
59 raise TypeError(f"attrs must be of type list or str, was {type(attrs)}.")
60 try:
61 from pybiomart import Server
62 except ImportError:
63 raise ImportError(
64 "This method requires the `pybiomart` module to be installed."
65 )
66 server = Server(host, use_cache=use_cache)
67 dataset = server.marts["ENSEMBL_MART_ENSEMBL"].datasets[
68 "{}_gene_ensembl".format(org)
69 ]
70 res = dataset.query(attributes=attrs, filters=filters, use_attr_names=True)
71 return res
72
73
74 @_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)
75 def biomart_annotations(
76 org: str,
77 attrs: Iterable[str],
78 *,
79 host: str = "www.ensembl.org",
80 use_cache: bool = False,
81 ) -> pd.DataFrame:
82 """\
83 Retrieve gene annotations from ensembl biomart.
84
85 Parameters
86 ----------
87 {doc_org}
88 attrs
89 Attributes to query biomart for.
90 {doc_host}
91 {doc_use_cache}
92
93 Returns
94 -------
95 Dataframe containing annotations.
96
97 Examples
98 --------
99 Retrieve genes coordinates and chromosomes
100
101 >>> import scanpy as sc
102 >>> annot = sc.queries.biomart_annotations(
103 "hsapiens",
104 ["ensembl_gene_id", "start_position", "end_position", "chromosome_name"],
105 ).set_index("ensembl_gene_id")
106 >>> adata.var[annot.columns] = annot
107 """
108 return simple_query(org=org, attrs=attrs, host=host, use_cache=use_cache)
109
110
111 @_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)
112 def gene_coordinates(
113 org: str,
114 gene_name: str,
115 *,
116 gene_attr: str = "external_gene_name",
117 chr_exclude: Iterable[str] = (),
118 host: str = "www.ensembl.org",
119 use_cache: bool = False,
120 ) -> pd.DataFrame:
121 """\
122 Retrieve gene coordinates for specific organism through BioMart.
123
124 Parameters
125 ----------
126 {doc_org}
127 gene_name
128 The gene symbol (e.g. "hgnc_symbol" for human) for which to retrieve
129 coordinates.
130 gene_attr
131 The biomart attribute the gene symbol should show up for.
132 chr_exclude
133 A list of chromosomes to exclude from query.
134 {doc_host}
135 {doc_use_cache}
136
137 Returns
138 -------
139 Dataframe containing gene coordinates for the specified gene symbol.
140
141 Examples
142 --------
143 >>> import scanpy as sc
144 >>> sc.queries.gene_coordinates("hsapiens", "MT-TF")
145 """
146 res = simple_query(
147 org=org,
148 attrs=["chromosome_name", "start_position", "end_position"],
149 filters={gene_attr: gene_name},
150 host=host,
151 use_cache=use_cache,
152 )
153 return res[~res["chromosome_name"].isin(chr_exclude)]
154
155
156 @_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)
157 def mitochondrial_genes(
158 org: str,
159 *,
160 attrname: str = "external_gene_name",
161 host: str = "www.ensembl.org",
162 use_cache: bool = False,
163 chromosome: str = "MT",
164 ) -> pd.DataFrame:
165 """\
166 Mitochondrial gene symbols for specific organism through BioMart.
167
168 Parameters
169 ----------
170 {doc_org}
171 attrname
172 Biomart attribute field to return. Possible values include
173 "external_gene_name", "ensembl_gene_id", "hgnc_symbol", "mgi_symbol",
174 and "zfin_id_symbol".
175 {doc_host}
176 {doc_use_cache}
177 chromosome
178 Mitochrondrial chromosome name used in BioMart for organism.
179
180 Returns
181 -------
182 Dataframe containing identifiers for mitochondrial genes.
183
184 Examples
185 --------
186 >>> import scanpy as sc
187 >>> mito_gene_names = sc.queries.mitochondrial_genes("hsapiens")
188 >>> mito_ensembl_ids = sc.queries.mitochondrial_genes("hsapiens", attrname="ensembl_gene_id")
189 >>> mito_gene_names_fly = sc.queries.mitochondrial_genes("dmelanogaster", chromosome="mitochondrion_genome")
190 """
191 return simple_query(
192 org,
193 attrs=[attrname],
194 filters={"chromosome_name": [chromosome]},
195 host=host,
196 use_cache=use_cache,
197 )
198
199
200 @singledispatch
201 @_doc_params(doc_org=_doc_org)
202 def enrich(
203 container: Iterable[str],
204 *,
205 org: str = "hsapiens",
206 gprofiler_kwargs: Mapping[str, Any] = MappingProxyType({}),
207 ) -> pd.DataFrame:
208 """\
209 Get enrichment for DE results.
210
211 This is a thin convenience wrapper around the very useful gprofiler_.
212
213 This method dispatches on the first argument, leading to the following two
214 signatures::
215
216 enrich(container, ...)
217 enrich(adata: AnnData, group, key: str, ...)
218
219 Where::
220
221 enrich(adata, group, key, ...) = enrich(adata.uns[key]["names"][group], ...)
222
223 .. _gprofiler: https://pypi.org/project/gprofiler-official/#description
224
225 Parameters
226 ----------
227 container
228 Contains genes you'd like to search.
229 adata
230 AnnData object whose group will be looked for.
231 group
232 The group whose genes should be used for enrichment.
233 key
234 Key in `uns` to find group under.
235 {doc_org}
236 gprofiler_kwargs
237 Keyword arguments to pass to `GProfiler.profile`, see gprofiler_.
238
239 Returns
240 -------
241 Dataframe of enrichment results.
242
243 Examples
244 --------
245 Using `sc.queries.enrich` on a list of genes:
246
247 >>> import scanpy as sc
248 >>> sc.queries.enrich(['Klf4', 'Pax5', 'Sox2', 'Nanog'], org="hsapiens")
249
250 Using `sc.queries.enrich` on an :class:`anndata.AnnData` object:
251
252 >>> pbmcs = sc.datasets.pbmc68k_reduced()
253 >>> sc.tl.rank_genes_groups(pbmcs, "bulk_labels")
254 >>> sc.queries.enrich(pbmcs, "CD34+")
255 """
256 try:
257 from gprofiler import GProfiler
258 except ImportError:
259 raise ImportError(
260 "This method requires the `gprofiler-official` module to be installed."
261 )
262 gprofiler = GProfiler(user_agent="scanpy", return_dataframe=True)
263 gprofiler_kwargs = dict(gprofiler_kwargs)
264 for k in ["organism"]:
265 if gprofiler_kwargs.get(k) is not None:
266 raise ValueError(
267 f"Argument `{k}` should be passed directly through `enrich`, "
268 "not through `gprofiler_kwargs`"
269 )
270 return gprofiler.profile(list(container), organism=org, **gprofiler_kwargs)
271
272
273 @enrich.register(AnnData)
274 def _enrich_anndata(
275 adata: AnnData,
276 group: str,
277 *,
278 org: Optional[str] = "hsapiens",
279 key: str = "rank_genes_groups",
280 pval_cutoff: float = 0.05,
281 log2fc_min: Optional[float] = None,
282 log2fc_max: Optional[float] = None,
283 gene_symbols: Optional[str] = None,
284 gprofiler_kwargs: Mapping[str, Any] = MappingProxyType({}),
285 ) -> pd.DataFrame:
286 de = rank_genes_groups_df(
287 adata,
288 group=group,
289 key=key,
290 pval_cutoff=pval_cutoff,
291 log2fc_min=log2fc_min,
292 log2fc_max=log2fc_max,
293 gene_symbols=gene_symbols,
294 )
295 if gene_symbols is not None:
296 gene_list = list(de[gene_symbols])
297 else:
298 gene_list = list(de["names"])
299 return enrich(gene_list, org=org, gprofiler_kwargs=gprofiler_kwargs)
300
[end of scanpy/queries/_queries.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scanpy/queries/_queries.py b/scanpy/queries/_queries.py
--- a/scanpy/queries/_queries.py
+++ b/scanpy/queries/_queries.py
@@ -235,6 +235,9 @@
{doc_org}
gprofiler_kwargs
Keyword arguments to pass to `GProfiler.profile`, see gprofiler_.
+ **kwargs
+ All other keyword arguments are passed to `sc.get.rank_genes_groups_df`. E.g.
+ pval_cutoff, log2fc_min.
Returns
-------
@@ -293,7 +296,7 @@
gene_symbols=gene_symbols,
)
if gene_symbols is not None:
- gene_list = list(de[gene_symbols])
+ gene_list = list(de[gene_symbols].dropna())
else:
- gene_list = list(de["names"])
+ gene_list = list(de["names"].dropna())
return enrich(gene_list, org=org, gprofiler_kwargs=gprofiler_kwargs)
| {"golden_diff": "diff --git a/scanpy/queries/_queries.py b/scanpy/queries/_queries.py\n--- a/scanpy/queries/_queries.py\n+++ b/scanpy/queries/_queries.py\n@@ -235,6 +235,9 @@\n {doc_org}\n gprofiler_kwargs\n Keyword arguments to pass to `GProfiler.profile`, see gprofiler_.\n+ **kwargs\n+ All other keyword arguments are passed to `sc.get.rank_genes_groups_df`. E.g.\n+ pval_cutoff, log2fc_min.\n \n Returns\n -------\n@@ -293,7 +296,7 @@\n gene_symbols=gene_symbols,\n )\n if gene_symbols is not None:\n- gene_list = list(de[gene_symbols])\n+ gene_list = list(de[gene_symbols].dropna())\n else:\n- gene_list = list(de[\"names\"])\n+ gene_list = list(de[\"names\"].dropna())\n return enrich(gene_list, org=org, gprofiler_kwargs=gprofiler_kwargs)\n", "issue": "sc.queries.enrich throws AssertionError with floats\n<!-- Please give a clear and concise description of what the bug is: -->\r\nI'm trying to run an enrichment analysis after filtering out certain genes via `sc.tl.filter_rank_genes_groups`, so I use `key='rank_genes_groups_filtered'` as an argument for `sc.queries.enrich`. Since the filtered values are replaced with `nan` I hoped they'd by ignored in the enrichment analysis, but it actually leads to an uninformative `AssertionError`.\r\n\r\nMy suggestion here is simply to filter `nan` values from the gene list around here and 2 lines later: https://github.com/theislab/scanpy/blob/249fc572471683357b86b8bbf41d3284118bc8f8/scanpy/queries/_queries.py#L296\r\n\r\nI can make a little PR if we agree with this simple fix\r\n\r\nNote you can reproduce this very simply without an adata object (but of course the likely use case is with an adata object as outlined above):\r\n\r\n<!-- Put a minimal reproducible example that reproduces the bug in the code block below: -->\r\n```\r\nsc.queries.enrich([float('nan')])\r\n```\r\nOutput:\r\n<!-- Put your Error output in this code block (if applicable, else delete the block): -->\r\n```pytb\r\nAssertionError: query failed with error 500\r\n```\r\n\r\n#### Versions:\r\n```\r\nscanpy==1.4.5.post2 anndata==0.6.22.post1 umap==0.3.10 numpy==1.18.1 scipy==1.2.1 pandas==1.0.1 scikit-learn==0.22.1 statsmodels==0.11.0 python-igraph==0.8.0\r\n```\n", "before_files": [{"content": "import collections.abc as cabc\nfrom functools import singledispatch\nfrom types import MappingProxyType\nfrom typing import Any, Union, Optional, Iterable, Dict, Mapping\n\nimport pandas as pd\nfrom anndata import AnnData\n\nfrom ..get import rank_genes_groups_df\nfrom .._utils import _doc_params\n\n\n_doc_org = \"\"\"\\\norg\n Organism to query. Must be an organism in ensembl biomart. \"hsapiens\",\n \"mmusculus\", \"drerio\", etc.\\\n\"\"\"\n\n_doc_host = \"\"\"\\\nhost\n A valid BioMart host URL. Alternative values include archive urls (like\n \"grch37.ensembl.org\") or regional mirrors (like \"useast.ensembl.org\").\\\n\"\"\"\n\n_doc_use_cache = \"\"\"\\\nuse_cache\n Whether pybiomart should use a cache for requests. 
Will create a\n `.pybiomart.sqlite` file in current directory if used.\\\n\"\"\"\n\n\n@_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)\ndef simple_query(\n org: str,\n attrs: Union[Iterable[str], str],\n *,\n filters: Optional[Dict[str, Any]] = None,\n host: str = \"www.ensembl.org\",\n use_cache: bool = False,\n) -> pd.DataFrame:\n \"\"\"\\\n A simple interface to biomart.\n\n Params\n ------\n {doc_org}\n attrs\n What you want returned.\n filters\n What you want to pick out.\n {doc_host}\n {doc_use_cache}\n \"\"\"\n if isinstance(attrs, str):\n attrs = [attrs]\n elif isinstance(attrs, cabc.Iterable):\n attrs = list(attrs)\n else:\n raise TypeError(f\"attrs must be of type list or str, was {type(attrs)}.\")\n try:\n from pybiomart import Server\n except ImportError:\n raise ImportError(\n \"This method requires the `pybiomart` module to be installed.\"\n )\n server = Server(host, use_cache=use_cache)\n dataset = server.marts[\"ENSEMBL_MART_ENSEMBL\"].datasets[\n \"{}_gene_ensembl\".format(org)\n ]\n res = dataset.query(attributes=attrs, filters=filters, use_attr_names=True)\n return res\n\n\n@_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)\ndef biomart_annotations(\n org: str,\n attrs: Iterable[str],\n *,\n host: str = \"www.ensembl.org\",\n use_cache: bool = False,\n) -> pd.DataFrame:\n \"\"\"\\\n Retrieve gene annotations from ensembl biomart.\n\n Parameters\n ----------\n {doc_org}\n attrs\n Attributes to query biomart for.\n {doc_host}\n {doc_use_cache}\n\n Returns\n -------\n Dataframe containing annotations.\n\n Examples\n --------\n Retrieve genes coordinates and chromosomes\n\n >>> import scanpy as sc\n >>> annot = sc.queries.biomart_annotations(\n \"hsapiens\",\n [\"ensembl_gene_id\", \"start_position\", \"end_position\", \"chromosome_name\"],\n ).set_index(\"ensembl_gene_id\")\n >>> adata.var[annot.columns] = annot\n \"\"\"\n return simple_query(org=org, attrs=attrs, host=host, use_cache=use_cache)\n\n\n@_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)\ndef gene_coordinates(\n org: str,\n gene_name: str,\n *,\n gene_attr: str = \"external_gene_name\",\n chr_exclude: Iterable[str] = (),\n host: str = \"www.ensembl.org\",\n use_cache: bool = False,\n) -> pd.DataFrame:\n \"\"\"\\\n Retrieve gene coordinates for specific organism through BioMart.\n\n Parameters\n ----------\n {doc_org}\n gene_name\n The gene symbol (e.g. 
\"hgnc_symbol\" for human) for which to retrieve\n coordinates.\n gene_attr\n The biomart attribute the gene symbol should show up for.\n chr_exclude\n A list of chromosomes to exclude from query.\n {doc_host}\n {doc_use_cache}\n\n Returns\n -------\n Dataframe containing gene coordinates for the specified gene symbol.\n\n Examples\n --------\n >>> import scanpy as sc\n >>> sc.queries.gene_coordinates(\"hsapiens\", \"MT-TF\")\n \"\"\"\n res = simple_query(\n org=org,\n attrs=[\"chromosome_name\", \"start_position\", \"end_position\"],\n filters={gene_attr: gene_name},\n host=host,\n use_cache=use_cache,\n )\n return res[~res[\"chromosome_name\"].isin(chr_exclude)]\n\n\n@_doc_params(doc_org=_doc_org, doc_host=_doc_host, doc_use_cache=_doc_use_cache)\ndef mitochondrial_genes(\n org: str,\n *,\n attrname: str = \"external_gene_name\",\n host: str = \"www.ensembl.org\",\n use_cache: bool = False,\n chromosome: str = \"MT\",\n) -> pd.DataFrame:\n \"\"\"\\\n Mitochondrial gene symbols for specific organism through BioMart.\n\n Parameters\n ----------\n {doc_org}\n attrname\n Biomart attribute field to return. Possible values include\n \"external_gene_name\", \"ensembl_gene_id\", \"hgnc_symbol\", \"mgi_symbol\",\n and \"zfin_id_symbol\".\n {doc_host}\n {doc_use_cache}\n chromosome\n Mitochrondrial chromosome name used in BioMart for organism.\n\n Returns\n -------\n Dataframe containing identifiers for mitochondrial genes.\n\n Examples\n --------\n >>> import scanpy as sc\n >>> mito_gene_names = sc.queries.mitochondrial_genes(\"hsapiens\")\n >>> mito_ensembl_ids = sc.queries.mitochondrial_genes(\"hsapiens\", attrname=\"ensembl_gene_id\")\n >>> mito_gene_names_fly = sc.queries.mitochondrial_genes(\"dmelanogaster\", chromosome=\"mitochondrion_genome\")\n \"\"\"\n return simple_query(\n org,\n attrs=[attrname],\n filters={\"chromosome_name\": [chromosome]},\n host=host,\n use_cache=use_cache,\n )\n\n\n@singledispatch\n@_doc_params(doc_org=_doc_org)\ndef enrich(\n container: Iterable[str],\n *,\n org: str = \"hsapiens\",\n gprofiler_kwargs: Mapping[str, Any] = MappingProxyType({}),\n) -> pd.DataFrame:\n \"\"\"\\\n Get enrichment for DE results.\n\n This is a thin convenience wrapper around the very useful gprofiler_.\n\n This method dispatches on the first argument, leading to the following two\n signatures::\n\n enrich(container, ...)\n enrich(adata: AnnData, group, key: str, ...)\n\n Where::\n\n enrich(adata, group, key, ...) = enrich(adata.uns[key][\"names\"][group], ...)\n\n .. 
_gprofiler: https://pypi.org/project/gprofiler-official/#description\n\n Parameters\n ----------\n container\n Contains genes you'd like to search.\n adata\n AnnData object whose group will be looked for.\n group\n The group whose genes should be used for enrichment.\n key\n Key in `uns` to find group under.\n {doc_org}\n gprofiler_kwargs\n Keyword arguments to pass to `GProfiler.profile`, see gprofiler_.\n\n Returns\n -------\n Dataframe of enrichment results.\n\n Examples\n --------\n Using `sc.queries.enrich` on a list of genes:\n\n >>> import scanpy as sc\n >>> sc.queries.enrich(['Klf4', 'Pax5', 'Sox2', 'Nanog'], org=\"hsapiens\")\n\n Using `sc.queries.enrich` on an :class:`anndata.AnnData` object:\n\n >>> pbmcs = sc.datasets.pbmc68k_reduced()\n >>> sc.tl.rank_genes_groups(pbmcs, \"bulk_labels\")\n >>> sc.queries.enrich(pbmcs, \"CD34+\")\n \"\"\"\n try:\n from gprofiler import GProfiler\n except ImportError:\n raise ImportError(\n \"This method requires the `gprofiler-official` module to be installed.\"\n )\n gprofiler = GProfiler(user_agent=\"scanpy\", return_dataframe=True)\n gprofiler_kwargs = dict(gprofiler_kwargs)\n for k in [\"organism\"]:\n if gprofiler_kwargs.get(k) is not None:\n raise ValueError(\n f\"Argument `{k}` should be passed directly through `enrich`, \"\n \"not through `gprofiler_kwargs`\"\n )\n return gprofiler.profile(list(container), organism=org, **gprofiler_kwargs)\n\n\[email protected](AnnData)\ndef _enrich_anndata(\n adata: AnnData,\n group: str,\n *,\n org: Optional[str] = \"hsapiens\",\n key: str = \"rank_genes_groups\",\n pval_cutoff: float = 0.05,\n log2fc_min: Optional[float] = None,\n log2fc_max: Optional[float] = None,\n gene_symbols: Optional[str] = None,\n gprofiler_kwargs: Mapping[str, Any] = MappingProxyType({}),\n) -> pd.DataFrame:\n de = rank_genes_groups_df(\n adata,\n group=group,\n key=key,\n pval_cutoff=pval_cutoff,\n log2fc_min=log2fc_min,\n log2fc_max=log2fc_max,\n gene_symbols=gene_symbols,\n )\n if gene_symbols is not None:\n gene_list = list(de[gene_symbols])\n else:\n gene_list = list(de[\"names\"])\n return enrich(gene_list, org=org, gprofiler_kwargs=gprofiler_kwargs)\n", "path": "scanpy/queries/_queries.py"}]} | 3,888 | 230 |
gh_patches_debug_5047 | rasdani/github-patches | git_diff | ray-project__ray-3578 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix formatting of PyPI package description.
See https://pypi.org/project/ray/.
Note that we can test this out first at https://test.pypi.org/project/ray/.
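A sketch of one setuptools knob that affects how PyPI renders the description (shown purely as an illustration; the underlying problem may instead be invalid reStructuredText in the README itself):
```python
from setuptools import setup

setup(
    name="ray",
    long_description=open("../README.rst").read(),
    # declare the format explicitly so PyPI doesn't fall back to plain text
    long_description_content_type="text/x-rst",
    # ... remaining arguments unchanged ...
)
```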
</issue>
<code>
[start of python/setup.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6 import re
7 import shutil
8 import subprocess
9 import sys
10
11 from setuptools import setup, find_packages, Distribution
12 import setuptools.command.build_ext as _build_ext
13
14 # Ideally, we could include these files by putting them in a
15 # MANIFEST.in or using the package_data argument to setup, but the
16 # MANIFEST.in gets applied at the very beginning when setup.py runs
17 # before these files have been created, so we have to move the files
18 # manually.
19
20 # NOTE: The lists below must be kept in sync with ray/CMakeLists.txt.
21
22 ray_files = [
23 "ray/core/src/ray/thirdparty/redis/src/redis-server",
24 "ray/core/src/ray/gcs/redis_module/libray_redis_module.so",
25 "ray/core/src/plasma/plasma_store_server",
26 "ray/core/src/ray/raylet/libraylet_library_python.so",
27 "ray/core/src/ray/raylet/raylet_monitor", "ray/core/src/ray/raylet/raylet",
28 "ray/WebUI.ipynb"
29 ]
30
31 # These are the directories where automatically generated Python flatbuffer
32 # bindings are created.
33 generated_python_directories = [
34 "ray/core/generated", "ray/core/generated/ray",
35 "ray/core/generated/ray/protocol"
36 ]
37
38 optional_ray_files = []
39
40 ray_ui_files = [
41 "ray/core/src/catapult_files/index.html",
42 "ray/core/src/catapult_files/trace_viewer_full.html"
43 ]
44
45 ray_autoscaler_files = [
46 "ray/autoscaler/aws/example-full.yaml",
47 "ray/autoscaler/gcp/example-full.yaml",
48 "ray/autoscaler/local/example-full.yaml",
49 ]
50
51 if "RAY_USE_NEW_GCS" in os.environ and os.environ["RAY_USE_NEW_GCS"] == "on":
52 ray_files += [
53 "ray/core/src/credis/build/src/libmember.so",
54 "ray/core/src/credis/build/src/libmaster.so",
55 "ray/core/src/credis/redis/src/redis-server"
56 ]
57
58 # The UI files are mandatory if the INCLUDE_UI environment variable equals 1.
59 # Otherwise, they are optional.
60 if "INCLUDE_UI" in os.environ and os.environ["INCLUDE_UI"] == "1":
61 ray_files += ray_ui_files
62 else:
63 optional_ray_files += ray_ui_files
64
65 optional_ray_files += ray_autoscaler_files
66
67 extras = {
68 "rllib": ["pyyaml", "gym[atari]", "opencv-python", "lz4", "scipy"],
69 "debug": ["psutil", "setproctitle", "py-spy"],
70 }
71
72
73 class build_ext(_build_ext.build_ext):
74 def run(self):
75 # Note: We are passing in sys.executable so that we use the same
76 # version of Python to build pyarrow inside the build.sh script. Note
77 # that certain flags will not be passed along such as --user or sudo.
78 # TODO(rkn): Fix this.
79 subprocess.check_call(["../build.sh", "-p", sys.executable])
80
81 # We also need to install pyarrow along with Ray, so make sure that the
82 # relevant non-Python pyarrow files get copied.
83 pyarrow_files = []
84 for (root, dirs, filenames) in os.walk("./ray/pyarrow_files/pyarrow"):
85 for name in filenames:
86 pyarrow_files.append(os.path.join(root, name))
87
88 files_to_include = ray_files + pyarrow_files
89
90 # Copy over the autogenerated flatbuffer Python bindings.
91 for directory in generated_python_directories:
92 for filename in os.listdir(directory):
93 if filename[-3:] == ".py":
94 files_to_include.append(os.path.join(directory, filename))
95
96 for filename in files_to_include:
97 self.move_file(filename)
98
99 # Try to copy over the optional files.
100 for filename in optional_ray_files:
101 try:
102 self.move_file(filename)
103 except Exception:
104 print("Failed to copy optional file {}. This is ok."
105 .format(filename))
106
107 def move_file(self, filename):
108 # TODO(rkn): This feels very brittle. It may not handle all cases. See
109 # https://github.com/apache/arrow/blob/master/python/setup.py for an
110 # example.
111 source = filename
112 destination = os.path.join(self.build_lib, filename)
113 # Create the target directory if it doesn't already exist.
114 parent_directory = os.path.dirname(destination)
115 if not os.path.exists(parent_directory):
116 os.makedirs(parent_directory)
117 print("Copying {} to {}.".format(source, destination))
118 shutil.copy(source, destination)
119
120
121 class BinaryDistribution(Distribution):
122 def has_ext_modules(self):
123 return True
124
125
126 def find_version(*filepath):
127 # Extract version information from filepath
128 here = os.path.abspath(os.path.dirname(__file__))
129 with open(os.path.join(here, *filepath)) as fp:
130 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
131 fp.read(), re.M)
132 if version_match:
133 return version_match.group(1)
134 raise RuntimeError("Unable to find version string.")
135
136
137 requires = [
138 "numpy",
139 "filelock",
140 "funcsigs",
141 "click",
142 "colorama",
143 "pytest",
144 "pyyaml",
145 "redis",
146 # The six module is required by pyarrow.
147 "six >= 1.0.0",
148 "flatbuffers",
149 ]
150
151 if sys.version_info < (3, 0):
152 requires.append("faulthandler")
153
154 setup(
155 name="ray",
156 version=find_version("ray", "__init__.py"),
157 description=("A system for parallel and distributed Python that unifies "
158 "the ML ecosystem."),
159 long_description=open("../README.rst").read(),
160 url="https://github.com/ray-project/ray",
161 keywords=("ray distributed parallel machine-learning "
162 "reinforcement-learning deep-learning python"),
163 packages=find_packages(),
164 cmdclass={"build_ext": build_ext},
165 # The BinaryDistribution argument triggers build_ext.
166 distclass=BinaryDistribution,
167 install_requires=requires,
168 setup_requires=["cython >= 0.29"],
169 extras_require=extras,
170 entry_points={
171 "console_scripts": [
172 "ray=ray.scripts.scripts:main",
173 "rllib=ray.rllib.scripts:cli [rllib]"
174 ]
175 },
176 include_package_data=True,
177 zip_safe=False,
178 license="Apache 2.0")
179
[end of python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -154,6 +154,8 @@
setup(
name="ray",
version=find_version("ray", "__init__.py"),
+ author="Ray Team",
+ author_email="[email protected]",
description=("A system for parallel and distributed Python that unifies "
"the ML ecosystem."),
long_description=open("../README.rst").read(),
| {"golden_diff": "diff --git a/python/setup.py b/python/setup.py\n--- a/python/setup.py\n+++ b/python/setup.py\n@@ -154,6 +154,8 @@\n setup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n+ author=\"Ray Team\",\n+ author_email=\"[email protected]\",\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n", "issue": "Fix formatting of PyPI package description.\nSee https://pypi.org/project/ray/.\r\n\r\nNote that we can test this out first at https://test.pypi.org/project/ray/.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\n\n# NOTE: The lists below must be kept in sync with ray/CMakeLists.txt.\n\nray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n \"ray/core/src/ray/gcs/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store_server\",\n \"ray/core/src/ray/raylet/libraylet_library_python.so\",\n \"ray/core/src/ray/raylet/raylet_monitor\", \"ray/core/src/ray/raylet/raylet\",\n \"ray/WebUI.ipynb\"\n]\n\n# These are the directories where automatically generated Python flatbuffer\n# bindings are created.\ngenerated_python_directories = [\n \"ray/core/generated\", \"ray/core/generated/ray\",\n \"ray/core/generated/ray/protocol\"\n]\n\noptional_ray_files = []\n\nray_ui_files = [\n \"ray/core/src/catapult_files/index.html\",\n \"ray/core/src/catapult_files/trace_viewer_full.html\"\n]\n\nray_autoscaler_files = [\n \"ray/autoscaler/aws/example-full.yaml\",\n \"ray/autoscaler/gcp/example-full.yaml\",\n \"ray/autoscaler/local/example-full.yaml\",\n]\n\nif \"RAY_USE_NEW_GCS\" in os.environ and os.environ[\"RAY_USE_NEW_GCS\"] == \"on\":\n ray_files += [\n \"ray/core/src/credis/build/src/libmember.so\",\n \"ray/core/src/credis/build/src/libmaster.so\",\n \"ray/core/src/credis/redis/src/redis-server\"\n ]\n\n# The UI files are mandatory if the INCLUDE_UI environment variable equals 1.\n# Otherwise, they are optional.\nif \"INCLUDE_UI\" in os.environ and os.environ[\"INCLUDE_UI\"] == \"1\":\n ray_files += ray_ui_files\nelse:\n optional_ray_files += ray_ui_files\n\noptional_ray_files += ray_autoscaler_files\n\nextras = {\n \"rllib\": [\"pyyaml\", \"gym[atari]\", \"opencv-python\", \"lz4\", \"scipy\"],\n \"debug\": [\"psutil\", \"setproctitle\", \"py-spy\"],\n}\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build pyarrow inside the build.sh script. 
Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n subprocess.check_call([\"../build.sh\", \"-p\", sys.executable])\n\n # We also need to install pyarrow along with Ray, so make sure that the\n # relevant non-Python pyarrow files get copied.\n pyarrow_files = []\n for (root, dirs, filenames) in os.walk(\"./ray/pyarrow_files/pyarrow\"):\n for name in filenames:\n pyarrow_files.append(os.path.join(root, name))\n\n files_to_include = ray_files + pyarrow_files\n\n # Copy over the autogenerated flatbuffer Python bindings.\n for directory in generated_python_directories:\n for filename in os.listdir(directory):\n if filename[-3:] == \".py\":\n files_to_include.append(os.path.join(directory, filename))\n\n for filename in files_to_include:\n self.move_file(filename)\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\ndef find_version(*filepath):\n # Extract version information from filepath\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *filepath)) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n \"numpy\",\n \"filelock\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"pytest\",\n \"pyyaml\",\n \"redis\",\n # The six module is required by pyarrow.\n \"six >= 1.0.0\",\n \"flatbuffers\",\n]\n\nif sys.version_info < (3, 0):\n requires.append(\"faulthandler\")\n\nsetup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n url=\"https://github.com/ray-project/ray\",\n keywords=(\"ray distributed parallel machine-learning \"\n \"reinforcement-learning deep-learning python\"),\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=requires,\n setup_requires=[\"cython >= 0.29\"],\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"ray=ray.scripts.scripts:main\",\n \"rllib=ray.rllib.scripts:cli [rllib]\"\n ]\n },\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}]} | 2,405 | 108 |
gh_patches_debug_2293 | rasdani/github-patches | git_diff | inventree__InvenTree-4285 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Increase worker-timeout to account for install times
I might have another instance of the same worker-timeout-loop during startup to add to the issue. My docker production installation at InvenTree:latest is running on a Raspberry Pi 3B+.
The loop occurred after I had added the `inventree-brother-plugin` to `plugins.txt` - the initial plugin installation took too long during startup so that the worker processes timed out and were constantly restarted.
My "solution" was to increase the gunicorn timeout variable in the `.env` file to
```
# Options for gunicorn server
INVENTREE_GUNICORN_TIMEOUT=60
```
but maybe actions like pip installs should somehow generally not count against the worker timeout? (I'm not sure about the technical internals on this one at the moment...)
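A minimal sketch of simply raising the default in `docker/gunicorn.conf.py` (the value 90 is an arbitrary example and stays overridable through the `.env` file):
```python
import os

# give slow first-start work (e.g. plugin installs on a Raspberry Pi) more headroom
# before gunicorn considers a worker dead
timeout = int(os.environ.get("INVENTREE_GUNICORN_TIMEOUT", 90))
```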
_Originally posted by @simonkuehling in https://github.com/inventree/InvenTree/issues/4180#issuecomment-1410348943_
</issue>
<code>
[start of docker/gunicorn.conf.py]
1 """Gunicorn configuration for InvenTree."""
2
3 import logging
4 import multiprocessing
5 import os
6
7 # Logger configuration
8 logger = logging.getLogger('inventree')
9 accesslog = '-'
10 errorlog = '-'
11 loglevel = os.environ.get('INVENTREE_LOG_LEVEL', 'warning').lower()
12 capture_output = True
13
14 # Worker configuration
15 # TODO: Implement support for gevent
16 # worker_class = 'gevent' # Allow multi-threading support
17 worker_tmp_dir = '/dev/shm' # Write temp file to RAM (faster)
18 threads = 4
19
20
21 # Worker timeout (default = 30 seconds)
22 timeout = os.environ.get('INVENTREE_GUNICORN_TIMEOUT', 30)
23
24 # Number of worker processes
25 workers = os.environ.get('INVENTREE_GUNICORN_WORKERS', None)
26
27 if workers is not None:
28 try:
29 workers = int(workers)
30 except ValueError:
31 workers = None
32
33 if workers is None:
34 workers = multiprocessing.cpu_count() * 2 + 1
35
36 logger.info(f"Starting gunicorn server with {workers} workers")
37
38 max_requests = 1000
39 max_requests_jitter = 50
40
[end of docker/gunicorn.conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/gunicorn.conf.py b/docker/gunicorn.conf.py
--- a/docker/gunicorn.conf.py
+++ b/docker/gunicorn.conf.py
@@ -18,8 +18,8 @@
threads = 4
-# Worker timeout (default = 30 seconds)
-timeout = os.environ.get('INVENTREE_GUNICORN_TIMEOUT', 30)
+# Worker timeout (default = 90 seconds)
+timeout = os.environ.get('INVENTREE_GUNICORN_TIMEOUT', 90)
# Number of worker processes
workers = os.environ.get('INVENTREE_GUNICORN_WORKERS', None)
| {"golden_diff": "diff --git a/docker/gunicorn.conf.py b/docker/gunicorn.conf.py\n--- a/docker/gunicorn.conf.py\n+++ b/docker/gunicorn.conf.py\n@@ -18,8 +18,8 @@\n threads = 4\n \n \n-# Worker timeout (default = 30 seconds)\n-timeout = os.environ.get('INVENTREE_GUNICORN_TIMEOUT', 30)\n+# Worker timeout (default = 90 seconds)\n+timeout = os.environ.get('INVENTREE_GUNICORN_TIMEOUT', 90)\n \n # Number of worker processes\n workers = os.environ.get('INVENTREE_GUNICORN_WORKERS', None)\n", "issue": "[BUG] Increase worker-timeout to account for install times\n I might have another instance of the same worker-timeout-loop during startup to add to the issue. My docker production installation at InvenTree:latest is running on a Raspberry Pi 3B+.\r\nThe loop occured after I had added the `inventree-brother-plugin` to `plugins.txt` - the initial plugin installation took too long during startup so that the worker processes timed out and were constantly restartet.\r\n\r\nMy \"solution\" was to increase the gunicorn timeout variable in the `.env` file to\r\n```\r\n# Options for gunicorn server\r\nINVENTREE_GUNICORN_TIMEOUT=60\r\n```\r\nbut maybe actions like pip installs should somehow generally not count against the worker timeout? (I'm not sure about the technical internals on this one at the moment...)\r\n\r\n_Originally posted by @simonkuehling in https://github.com/inventree/InvenTree/issues/4180#issuecomment-1410348943_\r\n \n", "before_files": [{"content": "\"\"\"Gunicorn configuration for InvenTree.\"\"\"\n\nimport logging\nimport multiprocessing\nimport os\n\n# Logger configuration\nlogger = logging.getLogger('inventree')\naccesslog = '-'\nerrorlog = '-'\nloglevel = os.environ.get('INVENTREE_LOG_LEVEL', 'warning').lower()\ncapture_output = True\n\n# Worker configuration\n# TODO: Implement support for gevent\n# worker_class = 'gevent' # Allow multi-threading support\nworker_tmp_dir = '/dev/shm' # Write temp file to RAM (faster)\nthreads = 4\n\n\n# Worker timeout (default = 30 seconds)\ntimeout = os.environ.get('INVENTREE_GUNICORN_TIMEOUT', 30)\n\n# Number of worker processes\nworkers = os.environ.get('INVENTREE_GUNICORN_WORKERS', None)\n\nif workers is not None:\n try:\n workers = int(workers)\n except ValueError:\n workers = None\n\nif workers is None:\n workers = multiprocessing.cpu_count() * 2 + 1\n\nlogger.info(f\"Starting gunicorn server with {workers} workers\")\n\nmax_requests = 1000\nmax_requests_jitter = 50\n", "path": "docker/gunicorn.conf.py"}]} | 1,085 | 138 |
gh_patches_debug_35648 | rasdani/github-patches | git_diff | searxng__searxng-2747 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEATURE REQUEST] language filtering and safe search with odysee
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
We can use, for example, language=de in the search URL. en, de-DE, and en-US also seem to work. There is no list of supported languages afaik, we just need to try things out one by one.
for safe search Moderate/Strict we should use nsfw=false in the URL
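A rough sketch of how the existing `request()` in `searx/engines/odysee.py` could forward both settings (it reuses that module's `base_url`, `results_per_page` and `urlencode`; the `params['language']`/`params['safesearch']` keys and their mapping are assumptions, since Odysee's accepted values are undocumented):
```python
def request(query, params):
    query_params = {
        "s": query,
        "size": results_per_page,
        "from": (params["pageno"] - 1) * results_per_page,
        "include": "channel,thumbnail_url,title,description,duration,release_time",
        "mediaType": "video",
    }

    lang = params.get("language", "all")
    if lang != "all":
        query_params["language"] = lang  # e.g. "de", "de-DE", "en-US"
    if params.get("safesearch", 0) >= 1:  # Moderate or Strict
        query_params["nsfw"] = "false"

    params["url"] = f"{base_url}?{urlencode(query_params)}"
    return params
```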
**Additional context**
The information that you need for this is here: https://github.com/searx/searx/issues/2504
----
Related
- https://github.com/searxng/searxng/pull/2656
- https://github.com/searxng/searxng/issues/590
- [lbr command line](https://gitlab.com/gardenappl/lbt/-/blob/main/lbt?ref_type=heads)
- [LBRY SDK ](https://github.com/lbryio/lbry-sdk/)
</issue>
<code>
[start of searx/engines/odysee.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Odysee_ is a decentralised video hosting platform.
4
5 .. _Odysee: https://github.com/OdyseeTeam/odysee-frontend
6 """
7
8 import time
9 from urllib.parse import urlencode
10 from datetime import datetime
11
12 # Engine metadata
13 about = {
14 "website": "https://odysee.com/",
15 "wikidata_id": "Q102046570",
16 "official_api_documentation": None,
17 "use_official_api": False,
18 "require_api_key": False,
19 "results": "JSON",
20 }
21
22 # Engine configuration
23 paging = True
24 results_per_page = 20
25 categories = ['videos']
26
27 # Search URL (Note: lighthouse.lbry.com/search works too, and may be faster at times)
28 base_url = "https://lighthouse.odysee.tv/search"
29
30
31 def request(query, params):
32 start_index = (params["pageno"] - 1) * results_per_page
33 query_params = {
34 "s": query,
35 "size": results_per_page,
36 "from": start_index,
37 "include": "channel,thumbnail_url,title,description,duration,release_time",
38 "mediaType": "video",
39 }
40
41 params["url"] = f"{base_url}?{urlencode(query_params)}"
42 return params
43
44
45 # Format the video duration
46 def format_duration(duration):
47 seconds = int(duration)
48 length = time.gmtime(seconds)
49 if length.tm_hour:
50 return time.strftime("%H:%M:%S", length)
51 return time.strftime("%M:%S", length)
52
53
54 def response(resp):
55 data = resp.json()
56 results = []
57
58 for item in data:
59 name = item["name"]
60 claim_id = item["claimId"]
61 title = item["title"]
62 thumbnail_url = item["thumbnail_url"]
63 description = item["description"] or ""
64 channel = item["channel"]
65 release_time = item["release_time"]
66 duration = item["duration"]
67
68 release_date = datetime.strptime(release_time.split("T")[0], "%Y-%m-%d")
69 formatted_date = datetime.utcfromtimestamp(release_date.timestamp())
70
71 url = f"https://odysee.com/{name}:{claim_id}"
72 iframe_url = f"https://odysee.com/$/embed/{name}:{claim_id}"
73 odysee_thumbnail = f"https://thumbnails.odycdn.com/optimize/s:390:0/quality:85/plain/{thumbnail_url}"
74 formatted_duration = format_duration(duration)
75
76 results.append(
77 {
78 "title": title,
79 "url": url,
80 "content": description,
81 "author": channel,
82 "publishedDate": formatted_date,
83 "length": formatted_duration,
84 "thumbnail": odysee_thumbnail,
85 "iframe_src": iframe_url,
86 "template": "videos.html",
87 }
88 )
89
90 return results
91
[end of searx/engines/odysee.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/odysee.py b/searx/engines/odysee.py
--- a/searx/engines/odysee.py
+++ b/searx/engines/odysee.py
@@ -9,6 +9,14 @@
from urllib.parse import urlencode
from datetime import datetime
+import babel
+
+from searx.network import get
+from searx.locales import language_tag
+from searx.enginelib.traits import EngineTraits
+
+traits: EngineTraits
+
# Engine metadata
about = {
"website": "https://odysee.com/",
@@ -21,6 +29,7 @@
# Engine configuration
paging = True
+time_range_support = True
results_per_page = 20
categories = ['videos']
@@ -29,6 +38,13 @@
def request(query, params):
+ time_range_dict = {
+ "day": "today",
+ "week": "thisweek",
+ "month": "thismonth",
+ "year": "thisyear",
+ }
+
start_index = (params["pageno"] - 1) * results_per_page
query_params = {
"s": query,
@@ -38,6 +54,13 @@
"mediaType": "video",
}
+ lang = traits.get_language(params['searxng_locale'], None)
+ if lang is not None:
+ query_params['language'] = lang
+
+ if params['time_range'] in time_range_dict:
+ query_params['time_filter'] = time_range_dict[params['time_range']]
+
params["url"] = f"{base_url}?{urlencode(query_params)}"
return params
@@ -88,3 +111,35 @@
)
return results
+
+
+def fetch_traits(engine_traits: EngineTraits):
+ """
+ Fetch languages from Odysee's source code.
+ """
+
+ resp = get(
+ 'https://raw.githubusercontent.com/OdyseeTeam/odysee-frontend/master/ui/constants/supported_browser_languages.js', # pylint: disable=line-too-long
+ timeout=60,
+ )
+
+ if not resp.ok:
+ print("ERROR: can't determine languages from Odysee")
+ return
+
+ for line in resp.text.split("\n")[1:-4]:
+ lang_tag = line.strip().split(": ")[0].replace("'", "")
+
+ try:
+ sxng_tag = language_tag(babel.Locale.parse(lang_tag, sep="-"))
+ except babel.UnknownLocaleError:
+ print("ERROR: %s is unknown by babel" % lang_tag)
+ continue
+
+ conflict = engine_traits.languages.get(sxng_tag)
+ if conflict:
+ if conflict != lang_tag:
+ print("CONFLICT: babel %s --> %s, %s" % (sxng_tag, conflict, lang_tag))
+ continue
+
+ engine_traits.languages[sxng_tag] = lang_tag
| {"golden_diff": "diff --git a/searx/engines/odysee.py b/searx/engines/odysee.py\n--- a/searx/engines/odysee.py\n+++ b/searx/engines/odysee.py\n@@ -9,6 +9,14 @@\n from urllib.parse import urlencode\n from datetime import datetime\n \n+import babel\n+\n+from searx.network import get\n+from searx.locales import language_tag\n+from searx.enginelib.traits import EngineTraits\n+\n+traits: EngineTraits\n+\n # Engine metadata\n about = {\n \"website\": \"https://odysee.com/\",\n@@ -21,6 +29,7 @@\n \n # Engine configuration\n paging = True\n+time_range_support = True\n results_per_page = 20\n categories = ['videos']\n \n@@ -29,6 +38,13 @@\n \n \n def request(query, params):\n+ time_range_dict = {\n+ \"day\": \"today\",\n+ \"week\": \"thisweek\",\n+ \"month\": \"thismonth\",\n+ \"year\": \"thisyear\",\n+ }\n+\n start_index = (params[\"pageno\"] - 1) * results_per_page\n query_params = {\n \"s\": query,\n@@ -38,6 +54,13 @@\n \"mediaType\": \"video\",\n }\n \n+ lang = traits.get_language(params['searxng_locale'], None)\n+ if lang is not None:\n+ query_params['language'] = lang\n+\n+ if params['time_range'] in time_range_dict:\n+ query_params['time_filter'] = time_range_dict[params['time_range']]\n+\n params[\"url\"] = f\"{base_url}?{urlencode(query_params)}\"\n return params\n \n@@ -88,3 +111,35 @@\n )\n \n return results\n+\n+\n+def fetch_traits(engine_traits: EngineTraits):\n+ \"\"\"\n+ Fetch languages from Odysee's source code.\n+ \"\"\"\n+\n+ resp = get(\n+ 'https://raw.githubusercontent.com/OdyseeTeam/odysee-frontend/master/ui/constants/supported_browser_languages.js', # pylint: disable=line-too-long\n+ timeout=60,\n+ )\n+\n+ if not resp.ok:\n+ print(\"ERROR: can't determine languages from Odysee\")\n+ return\n+\n+ for line in resp.text.split(\"\\n\")[1:-4]:\n+ lang_tag = line.strip().split(\": \")[0].replace(\"'\", \"\")\n+\n+ try:\n+ sxng_tag = language_tag(babel.Locale.parse(lang_tag, sep=\"-\"))\n+ except babel.UnknownLocaleError:\n+ print(\"ERROR: %s is unknown by babel\" % lang_tag)\n+ continue\n+\n+ conflict = engine_traits.languages.get(sxng_tag)\n+ if conflict:\n+ if conflict != lang_tag:\n+ print(\"CONFLICT: babel %s --> %s, %s\" % (sxng_tag, conflict, lang_tag))\n+ continue\n+\n+ engine_traits.languages[sxng_tag] = lang_tag\n", "issue": "[FEATURE REQUEST] language filtering and safe search with odysee\n**Is your feature request related to a problem? Please describe.**\r\nNo\r\n\r\n**Describe the solution you'd like**\r\nWe can use, for example, language=de in the search URL. en, de-DE, and en-US also seem to work. There is no list of supported languages afaik, we just need to try things out one by one.\r\n\r\nfor safe search Moderate/Strict we should use nsfw=false in the URL\r\n\r\n**Additional context**\r\nThe information that you need for this is here: https://github.com/searx/searx/issues/2504\r\n\r\n\r\n----\r\nRelated\r\n\r\n- https://github.com/searxng/searxng/pull/2656\r\n- https://github.com/searxng/searxng/issues/590\r\n- [lbr command line](https://gitlab.com/gardenappl/lbt/-/blob/main/lbt?ref_type=heads)\r\n- [LBRY SDK ](https://github.com/lbryio/lbry-sdk/)\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Odysee_ is a decentralised video hosting platform.\n\n.. 
_Odysee: https://github.com/OdyseeTeam/odysee-frontend\n\"\"\"\n\nimport time\nfrom urllib.parse import urlencode\nfrom datetime import datetime\n\n# Engine metadata\nabout = {\n \"website\": \"https://odysee.com/\",\n \"wikidata_id\": \"Q102046570\",\n \"official_api_documentation\": None,\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": \"JSON\",\n}\n\n# Engine configuration\npaging = True\nresults_per_page = 20\ncategories = ['videos']\n\n# Search URL (Note: lighthouse.lbry.com/search works too, and may be faster at times)\nbase_url = \"https://lighthouse.odysee.tv/search\"\n\n\ndef request(query, params):\n start_index = (params[\"pageno\"] - 1) * results_per_page\n query_params = {\n \"s\": query,\n \"size\": results_per_page,\n \"from\": start_index,\n \"include\": \"channel,thumbnail_url,title,description,duration,release_time\",\n \"mediaType\": \"video\",\n }\n\n params[\"url\"] = f\"{base_url}?{urlencode(query_params)}\"\n return params\n\n\n# Format the video duration\ndef format_duration(duration):\n seconds = int(duration)\n length = time.gmtime(seconds)\n if length.tm_hour:\n return time.strftime(\"%H:%M:%S\", length)\n return time.strftime(\"%M:%S\", length)\n\n\ndef response(resp):\n data = resp.json()\n results = []\n\n for item in data:\n name = item[\"name\"]\n claim_id = item[\"claimId\"]\n title = item[\"title\"]\n thumbnail_url = item[\"thumbnail_url\"]\n description = item[\"description\"] or \"\"\n channel = item[\"channel\"]\n release_time = item[\"release_time\"]\n duration = item[\"duration\"]\n\n release_date = datetime.strptime(release_time.split(\"T\")[0], \"%Y-%m-%d\")\n formatted_date = datetime.utcfromtimestamp(release_date.timestamp())\n\n url = f\"https://odysee.com/{name}:{claim_id}\"\n iframe_url = f\"https://odysee.com/$/embed/{name}:{claim_id}\"\n odysee_thumbnail = f\"https://thumbnails.odycdn.com/optimize/s:390:0/quality:85/plain/{thumbnail_url}\"\n formatted_duration = format_duration(duration)\n\n results.append(\n {\n \"title\": title,\n \"url\": url,\n \"content\": description,\n \"author\": channel,\n \"publishedDate\": formatted_date,\n \"length\": formatted_duration,\n \"thumbnail\": odysee_thumbnail,\n \"iframe_src\": iframe_url,\n \"template\": \"videos.html\",\n }\n )\n\n return results\n", "path": "searx/engines/odysee.py"}]} | 1,587 | 677 |
gh_patches_debug_17292 | rasdani/github-patches | git_diff | beetbox__beets-2870 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use Artist Credits for tag data, but "actual" artist data for filenaming
Currently beets will always normalise [artist credit](https://musicbrainz.org/doc/Artist_Credit) data to the current artist name. However, when playing the music I want to see that, e.g., [Orgi-E](https://musicbrainz.org/artist/345fe3da-b2cb-4ad4-a1a5-43afc903663d) was credited as [Klamfyr](https://musicbrainz.org/release/d09b3568-e9cc-4458-bcf7-0c215cca75ce), but I still like the normalisation for file tree organisation purposes. This should probably be an option though, as other people will likely want to always normalise the name (and others might want to not normalise the name in the path as well).
(Somewhat related morituri issues: thomasvs/morituri#80, thomasvs/morituri#48)
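A rough sketch of how such an option could branch inside `apply_metadata()` in `beets/autotag/__init__.py` (the `artist_credit` option name is hypothetical):
```python
# hypothetical switch: tag with the per-release credited name when available,
# otherwise fall back to the canonical artist (current behaviour)
if config['artist_credit']:
    item.artist = (track_info.artist_credit or track_info.artist
                   or album_info.artist_credit or album_info.artist)
    item.albumartist = album_info.artist_credit or album_info.artist
else:
    item.artist = track_info.artist or album_info.artist
    item.albumartist = album_info.artist
```
Path formats could then reference whichever of `$artist`/`$artist_credit` they want, keeping the normalised name for the file tree.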
</issue>
<code>
[start of beets/autotag/__init__.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, Adrian Sampson.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Facilities for automatically determining files' correct metadata.
17 """
18
19 from __future__ import division, absolute_import, print_function
20
21 from beets import logging
22 from beets import config
23
24 # Parts of external interface.
25 from .hooks import AlbumInfo, TrackInfo, AlbumMatch, TrackMatch # noqa
26 from .match import tag_item, tag_album, Proposal # noqa
27 from .match import Recommendation # noqa
28
29 # Global logger.
30 log = logging.getLogger('beets')
31
32
33 # Additional utilities for the main interface.
34
35 def apply_item_metadata(item, track_info):
36 """Set an item's metadata from its matched TrackInfo object.
37 """
38 item.artist = track_info.artist
39 item.artist_sort = track_info.artist_sort
40 item.artist_credit = track_info.artist_credit
41 item.title = track_info.title
42 item.mb_trackid = track_info.track_id
43 if track_info.artist_id:
44 item.mb_artistid = track_info.artist_id
45 if track_info.data_source:
46 item.data_source = track_info.data_source
47
48 if track_info.lyricist is not None:
49 item.lyricist = track_info.lyricist
50 if track_info.composer is not None:
51 item.composer = track_info.composer
52 if track_info.composer_sort is not None:
53 item.composer_sort = track_info.composer_sort
54 if track_info.arranger is not None:
55 item.arranger = track_info.arranger
56
57 # At the moment, the other metadata is left intact (including album
58 # and track number). Perhaps these should be emptied?
59
60
61 def apply_metadata(album_info, mapping):
62 """Set the items' metadata to match an AlbumInfo object using a
63 mapping from Items to TrackInfo objects.
64 """
65 for item, track_info in mapping.items():
66 # Album, artist, track count.
67 if track_info.artist:
68 item.artist = track_info.artist
69 else:
70 item.artist = album_info.artist
71 item.albumartist = album_info.artist
72 item.album = album_info.album
73
74 # Artist sort and credit names.
75 item.artist_sort = track_info.artist_sort or album_info.artist_sort
76 item.artist_credit = (track_info.artist_credit or
77 album_info.artist_credit)
78 item.albumartist_sort = album_info.artist_sort
79 item.albumartist_credit = album_info.artist_credit
80
81 # Release date.
82 for prefix in '', 'original_':
83 if config['original_date'] and not prefix:
84 # Ignore specific release date.
85 continue
86
87 for suffix in 'year', 'month', 'day':
88 key = prefix + suffix
89 value = getattr(album_info, key) or 0
90
91 # If we don't even have a year, apply nothing.
92 if suffix == 'year' and not value:
93 break
94
95 # Otherwise, set the fetched value (or 0 for the month
96 # and day if not available).
97 item[key] = value
98
99 # If we're using original release date for both fields,
100 # also set item.year = info.original_year, etc.
101 if config['original_date']:
102 item[suffix] = value
103
104 # Title.
105 item.title = track_info.title
106
107 if config['per_disc_numbering']:
108 # We want to let the track number be zero, but if the medium index
109 # is not provided we need to fall back to the overall index.
110 if track_info.medium_index is not None:
111 item.track = track_info.medium_index
112 else:
113 item.track = track_info.index
114 item.tracktotal = track_info.medium_total or len(album_info.tracks)
115 else:
116 item.track = track_info.index
117 item.tracktotal = len(album_info.tracks)
118
119 # Disc and disc count.
120 item.disc = track_info.medium
121 item.disctotal = album_info.mediums
122
123 # MusicBrainz IDs.
124 item.mb_trackid = track_info.track_id
125 item.mb_albumid = album_info.album_id
126 if track_info.artist_id:
127 item.mb_artistid = track_info.artist_id
128 else:
129 item.mb_artistid = album_info.artist_id
130 item.mb_albumartistid = album_info.artist_id
131 item.mb_releasegroupid = album_info.releasegroup_id
132
133 # Compilation flag.
134 item.comp = album_info.va
135
136 # Miscellaneous metadata.
137 for field in ('albumtype',
138 'label',
139 'asin',
140 'catalognum',
141 'script',
142 'language',
143 'country',
144 'albumstatus',
145 'albumdisambig',
146 'data_source',):
147 value = getattr(album_info, field)
148 if value is not None:
149 item[field] = value
150 if track_info.disctitle is not None:
151 item.disctitle = track_info.disctitle
152
153 if track_info.media is not None:
154 item.media = track_info.media
155
156 if track_info.lyricist is not None:
157 item.lyricist = track_info.lyricist
158 if track_info.composer is not None:
159 item.composer = track_info.composer
160 if track_info.composer_sort is not None:
161 item.composer_sort = track_info.composer_sort
162 if track_info.arranger is not None:
163 item.arranger = track_info.arranger
164
165 item.track_alt = track_info.track_alt
166
[end of beets/autotag/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beets/autotag/__init__.py b/beets/autotag/__init__.py
--- a/beets/autotag/__init__.py
+++ b/beets/autotag/__init__.py
@@ -63,12 +63,19 @@
mapping from Items to TrackInfo objects.
"""
for item, track_info in mapping.items():
- # Album, artist, track count.
- if track_info.artist:
- item.artist = track_info.artist
+ # Artist or artist credit.
+ if config['artist_credit']:
+ item.artist = (track_info.artist_credit or
+ track_info.artist or
+ album_info.artist_credit or
+ album_info.artist)
+ item.albumartist = (album_info.artist_credit or
+ album_info.artist)
else:
- item.artist = album_info.artist
- item.albumartist = album_info.artist
+ item.artist = (track_info.artist or album_info.artist)
+ item.albumartist = album_info.artist
+
+ # Album.
item.album = album_info.album
# Artist sort and credit names.
| {"golden_diff": "diff --git a/beets/autotag/__init__.py b/beets/autotag/__init__.py\n--- a/beets/autotag/__init__.py\n+++ b/beets/autotag/__init__.py\n@@ -63,12 +63,19 @@\n mapping from Items to TrackInfo objects.\n \"\"\"\n for item, track_info in mapping.items():\n- # Album, artist, track count.\n- if track_info.artist:\n- item.artist = track_info.artist\n+ # Artist or artist credit.\n+ if config['artist_credit']:\n+ item.artist = (track_info.artist_credit or\n+ track_info.artist or\n+ album_info.artist_credit or\n+ album_info.artist)\n+ item.albumartist = (album_info.artist_credit or\n+ album_info.artist)\n else:\n- item.artist = album_info.artist\n- item.albumartist = album_info.artist\n+ item.artist = (track_info.artist or album_info.artist)\n+ item.albumartist = album_info.artist\n+\n+ # Album.\n item.album = album_info.album\n \n # Artist sort and credit names.\n", "issue": "Use Artist Credits for tag data, but \"actual\" artist data for filenaming\nCurrently beets will always normalise [artist credit](https://musicbrainz.org/doc/Artist_Credit) data to the current artist name. However, I want to see when playing the music when, e.g., [Orgi-E](https://musicbrainz.org/artist/345fe3da-b2cb-4ad4-a1a5-43afc903663d) was credited as [Klamfyr](https://musicbrainz.org/release/d09b3568-e9cc-4458-bcf7-0c215cca75ce), but I still like the normalisation for file tree organisation purposes. This should probably be an option though, as other people will likely want to always normalise the name (and others might want to not normalise the name in the path as well).\n\n(Somewhat related morituri issues: thomasvs/morituri#80, thomasvs/morituri#48)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Facilities for automatically determining files' correct metadata.\n\"\"\"\n\nfrom __future__ import division, absolute_import, print_function\n\nfrom beets import logging\nfrom beets import config\n\n# Parts of external interface.\nfrom .hooks import AlbumInfo, TrackInfo, AlbumMatch, TrackMatch # noqa\nfrom .match import tag_item, tag_album, Proposal # noqa\nfrom .match import Recommendation # noqa\n\n# Global logger.\nlog = logging.getLogger('beets')\n\n\n# Additional utilities for the main interface.\n\ndef apply_item_metadata(item, track_info):\n \"\"\"Set an item's metadata from its matched TrackInfo object.\n \"\"\"\n item.artist = track_info.artist\n item.artist_sort = track_info.artist_sort\n item.artist_credit = track_info.artist_credit\n item.title = track_info.title\n item.mb_trackid = track_info.track_id\n if track_info.artist_id:\n item.mb_artistid = track_info.artist_id\n if track_info.data_source:\n item.data_source = track_info.data_source\n\n if track_info.lyricist is not None:\n item.lyricist = track_info.lyricist\n if track_info.composer is not None:\n item.composer = track_info.composer\n if track_info.composer_sort is not 
None:\n item.composer_sort = track_info.composer_sort\n if track_info.arranger is not None:\n item.arranger = track_info.arranger\n\n # At the moment, the other metadata is left intact (including album\n # and track number). Perhaps these should be emptied?\n\n\ndef apply_metadata(album_info, mapping):\n \"\"\"Set the items' metadata to match an AlbumInfo object using a\n mapping from Items to TrackInfo objects.\n \"\"\"\n for item, track_info in mapping.items():\n # Album, artist, track count.\n if track_info.artist:\n item.artist = track_info.artist\n else:\n item.artist = album_info.artist\n item.albumartist = album_info.artist\n item.album = album_info.album\n\n # Artist sort and credit names.\n item.artist_sort = track_info.artist_sort or album_info.artist_sort\n item.artist_credit = (track_info.artist_credit or\n album_info.artist_credit)\n item.albumartist_sort = album_info.artist_sort\n item.albumartist_credit = album_info.artist_credit\n\n # Release date.\n for prefix in '', 'original_':\n if config['original_date'] and not prefix:\n # Ignore specific release date.\n continue\n\n for suffix in 'year', 'month', 'day':\n key = prefix + suffix\n value = getattr(album_info, key) or 0\n\n # If we don't even have a year, apply nothing.\n if suffix == 'year' and not value:\n break\n\n # Otherwise, set the fetched value (or 0 for the month\n # and day if not available).\n item[key] = value\n\n # If we're using original release date for both fields,\n # also set item.year = info.original_year, etc.\n if config['original_date']:\n item[suffix] = value\n\n # Title.\n item.title = track_info.title\n\n if config['per_disc_numbering']:\n # We want to let the track number be zero, but if the medium index\n # is not provided we need to fall back to the overall index.\n if track_info.medium_index is not None:\n item.track = track_info.medium_index\n else:\n item.track = track_info.index\n item.tracktotal = track_info.medium_total or len(album_info.tracks)\n else:\n item.track = track_info.index\n item.tracktotal = len(album_info.tracks)\n\n # Disc and disc count.\n item.disc = track_info.medium\n item.disctotal = album_info.mediums\n\n # MusicBrainz IDs.\n item.mb_trackid = track_info.track_id\n item.mb_albumid = album_info.album_id\n if track_info.artist_id:\n item.mb_artistid = track_info.artist_id\n else:\n item.mb_artistid = album_info.artist_id\n item.mb_albumartistid = album_info.artist_id\n item.mb_releasegroupid = album_info.releasegroup_id\n\n # Compilation flag.\n item.comp = album_info.va\n\n # Miscellaneous metadata.\n for field in ('albumtype',\n 'label',\n 'asin',\n 'catalognum',\n 'script',\n 'language',\n 'country',\n 'albumstatus',\n 'albumdisambig',\n 'data_source',):\n value = getattr(album_info, field)\n if value is not None:\n item[field] = value\n if track_info.disctitle is not None:\n item.disctitle = track_info.disctitle\n\n if track_info.media is not None:\n item.media = track_info.media\n\n if track_info.lyricist is not None:\n item.lyricist = track_info.lyricist\n if track_info.composer is not None:\n item.composer = track_info.composer\n if track_info.composer_sort is not None:\n item.composer_sort = track_info.composer_sort\n if track_info.arranger is not None:\n item.arranger = track_info.arranger\n\n item.track_alt = track_info.track_alt\n", "path": "beets/autotag/__init__.py"}]} | 2,458 | 249 |