| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.71k-18.9k | stringlengths 145-5.13k | stringlengths 465-23.6k | int64 556-4.1k | int64 47-1.02k |
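
Each row pairs a GitHub issue and the repository files it touches (the `prompt`) with the reference patch that resolves it (the `golden_diff`), plus a JSON `verification_info` bundle and token counts. A minimal sketch of loading and inspecting one row with the `datasets` library follows; the hub path and split name are assumptions (the `source` value shown in the rows is used as a stand-in), while the column names match the header above.

```python
# Sketch only: the dataset path and split are assumed, not taken from this page.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")  # hub path assumed

row = ds[0]
print(row["problem_id"], row["source"], row["task_type"], row["in_source_id"])
print(row["prompt"][:300])       # issue text, code listing, and patch-format instructions
print(row["golden_diff"][:300])  # reference unified diff
print(row["num_tokens_prompt"], row["num_tokens_diff"])
```
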
gh_patches_debug_22217 | rasdani/github-patches | git_diff | OCA__bank-payment-107 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test fails with Odoo, not OCB
https://travis-ci.org/OCA/bank-payment/builds/47806067
File "/home/travis/build/OCA/bank-payment/account_direct_debit/models/account_invoice.py", line 140, in __ init __
invoice_obj._columns['state'].selection.append(
KeyError: 'state'
</issue>
<code>
[start of account_direct_debit/models/account_invoice.py]
1 # -*- coding: utf-8 -*-
2 ##############################################################################
3 #
4 # Copyright (C) 2011 - 2013 Therp BV (<http://therp.nl>).
5 #
6 # All other contributions are (C) by their respective contributors
7 #
8 # All Rights Reserved
9 #
10 # This program is free software: you can redistribute it and/or modify
11 # it under the terms of the GNU Affero General Public License as
12 # published by the Free Software Foundation, either version 3 of the
13 # License, or (at your option) any later version.
14 #
15 # This program is distributed in the hope that it will be useful,
16 # but WITHOUT ANY WARRANTY; without even the implied warranty of
17 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
18 # GNU Affero General Public License for more details.
19 #
20 # You should have received a copy of the GNU Affero General Public License
21 # along with this program. If not, see <http://www.gnu.org/licenses/>.
22 #
23 ##############################################################################
24
25 """
26 This module adds support for Direct debit orders as applicable
27 in the Netherlands. Debit orders are advanced in total by the bank.
28 Amounts that cannot be debited or are canceled by account owners are
29 credited afterwards. Such a creditation is called a storno.
30
31 Invoice workflow:
32
33 1 the sale leads to
34 1300 Debtors 100
35 8000 Sales 100
36
37 Balance:
38 Debtors 2000 |
39 Sales | 2000
40
41 2 an external booking takes place
42 1100 Bank 100
43 1300 Debtors 100
44 This booking is reconciled with [1]
45 The invoice gets set to state 'paid', and 'reconciled' = True
46
47 Balance:
48 Debtors 1900 |
49 Bank 100 |
50 Sales | 2000
51
52 This module implements the following diversion:
53
54 2a the invoice is included in a direct debit order. When the order is
55 confirmed, a move is created per invoice:
56
57 2000 Transfer account 100 |
58 1300 Debtors | 100
59 Reconciliation takes place between 1 and 2a.
60 The invoice gets set to state 'paid', and 'reconciled' = True
61
62 Balance:
63 Debtors 0 |
64 Transfer account 2000 |
65 Bank 0 |
66 Sales | 2000
67
68 3a the direct debit order is booked on the bank account
69
70 Balance:
71 1100 Bank 2000 |
72 2000 Transfer account | 2000
73 Reconciliation takes place between 3a and 2a
74
75 Balance:
76 Debtors 0 |
77 Transfer account 0 |
78 Bank 2000 |
79 Sales | 2000
80
81 4 a storno from invoice [1] triggers a new booking on the bank account
82 1300 Debtors 100 |
83 1100 Bank | 100
84
85 Balance:
86 Debtors 100 |
87 Transfer account 0 |
88 Bank 1900 |
89 Sales | 2000
90
91 The reconciliation of 2a is undone. The booking of 2a is reconciled
92 with the booking of 4 instead.
93 The payment line attribute 'storno' is set to True and the invoice
94 state is no longer 'paid'.
95
96 Two cases need to be distinguisted:
97 1) If the storno is a manual storno from the partner, the invoice is set to
98 state 'debit_denied', with 'reconciled' = False
99 This module implements this option by allowing the bank module to call
100
101 netsvc.LocalService("workflow").trg_validate(
102 uid, 'account.invoice', ids, 'debit_denied', cr)
103
104 2) If the storno is an error generated by the bank (assumingly non-fatal),
105 the invoice is reopened for the next debit run. This is a call to
106 existing
107
108 netsvc.LocalService("workflow").trg_validate(
109 uid, 'account.invoice', ids, 'open_test', cr)
110
111 Should also be adding a log entry on the invoice for tracing purposes
112
113 self._log_event(cr, uid, ids, -1.0, 'Debit denied')
114
115 If not for that funny comment
116 "#TODO: implement messages system" in account/invoice.py
117
118 Repeating non-fatal fatal errors need to be dealt with manually by checking
119 open invoices with a matured invoice- or due date.
120 """
121
122 from openerp.osv import orm
123 from openerp.tools.translate import _
124
125
126 class AccountInvoice(orm.Model):
127 _inherit = "account.invoice"
128
129 def __init__(self, pool, cr):
130 """
131 Adding a state to the hardcoded state list of the inherited
132 model. The alternative is duplicating the field definition
133 in columns but only one module can do that!
134
135 Maybe apply a similar trick when overriding the buttons' 'states'
136 attributes in the form view, manipulating the xml in fields_view_get().
137 """
138 super(AccountInvoice, self).__init__(pool, cr)
139 invoice_obj = pool.get('account.invoice')
140 invoice_obj._columns['state'].selection.append(
141 ('debit_denied', 'Debit denied'))
142
143 def action_debit_denied(self, cr, uid, ids, context=None):
144 for invoice_id in ids:
145 if self.test_paid(cr, uid, [invoice_id], context):
146 number = self.read(
147 cr, uid, invoice_id, ['number'], context=context)['number']
148 raise orm.except_orm(
149 _('Error !'),
150 _("You cannot set invoice '%s' to state 'debit "
151 "denied', as it is still reconciled.") % number)
152 self.write(cr, uid, ids, {'state': 'debit_denied'}, context=context)
153 for inv_id, name in self.name_get(cr, uid, ids, context=context):
154 message = _("Invoice '%s': direct debit is denied.") % name
155 self.log(cr, uid, inv_id, message)
156 return True
157
158 def test_undo_debit_denied(self, cr, uid, ids, context=None):
159 """
160 Called from the workflow. Used to unset paid state on
161 invoices that were paid with bank transfers which are being cancelled
162 """
163 for invoice in self.read(cr, uid, ids, ['reconciled'], context):
164 if not invoice['reconciled']:
165 return False
166 return True
167
[end of account_direct_debit/models/account_invoice.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/account_direct_debit/models/account_invoice.py b/account_direct_debit/models/account_invoice.py
--- a/account_direct_debit/models/account_invoice.py
+++ b/account_direct_debit/models/account_invoice.py
@@ -126,7 +126,7 @@
class AccountInvoice(orm.Model):
_inherit = "account.invoice"
- def __init__(self, pool, cr):
+ def _register_hook(self, cr):
"""
Adding a state to the hardcoded state list of the inherited
model. The alternative is duplicating the field definition
@@ -135,9 +135,7 @@
Maybe apply a similar trick when overriding the buttons' 'states'
attributes in the form view, manipulating the xml in fields_view_get().
"""
- super(AccountInvoice, self).__init__(pool, cr)
- invoice_obj = pool.get('account.invoice')
- invoice_obj._columns['state'].selection.append(
+ self._columns['state'].selection.append(
('debit_denied', 'Debit denied'))
def action_debit_denied(self, cr, uid, ids, context=None):
| {"golden_diff": "diff --git a/account_direct_debit/models/account_invoice.py b/account_direct_debit/models/account_invoice.py\n--- a/account_direct_debit/models/account_invoice.py\n+++ b/account_direct_debit/models/account_invoice.py\n@@ -126,7 +126,7 @@\n class AccountInvoice(orm.Model):\n _inherit = \"account.invoice\"\n \n- def __init__(self, pool, cr):\n+ def _register_hook(self, cr):\n \"\"\"\n Adding a state to the hardcoded state list of the inherited\n model. The alternative is duplicating the field definition\n@@ -135,9 +135,7 @@\n Maybe apply a similar trick when overriding the buttons' 'states'\n attributes in the form view, manipulating the xml in fields_view_get().\n \"\"\"\n- super(AccountInvoice, self).__init__(pool, cr)\n- invoice_obj = pool.get('account.invoice')\n- invoice_obj._columns['state'].selection.append(\n+ self._columns['state'].selection.append(\n ('debit_denied', 'Debit denied'))\n \n def action_debit_denied(self, cr, uid, ids, context=None):\n", "issue": "Test fails with Odoo, not OCB\nhttps://travis-ci.org/OCA/bank-payment/builds/47806067\n\nFile \"/home/travis/build/OCA/bank-payment/account_direct_debit/models/account_invoice.py\", line 140, in __ init __\ninvoice_obj._columns['state'].selection.append(\nKeyError: 'state'\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# Copyright (C) 2011 - 2013 Therp BV (<http://therp.nl>).\n#\n# All other contributions are (C) by their respective contributors\n#\n# All Rights Reserved\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n##############################################################################\n\n\"\"\"\nThis module adds support for Direct debit orders as applicable\nin the Netherlands. Debit orders are advanced in total by the bank.\nAmounts that cannot be debited or are canceled by account owners are\ncredited afterwards. Such a creditation is called a storno.\n\nInvoice workflow:\n\n1 the sale leads to\n 1300 Debtors 100\n 8000 Sales 100\n\nBalance:\n Debtors 2000 |\n Sales | 2000\n\n2 an external booking takes place\n 1100 Bank 100\n 1300 Debtors 100\n This booking is reconciled with [1]\n The invoice gets set to state 'paid', and 'reconciled' = True\n\nBalance:\n Debtors 1900 |\n Bank 100 |\n Sales | 2000\n\nThis module implements the following diversion:\n\n2a the invoice is included in a direct debit order. 
When the order is\n confirmed, a move is created per invoice:\n\n 2000 Transfer account 100 |\n 1300 Debtors | 100\n Reconciliation takes place between 1 and 2a.\n The invoice gets set to state 'paid', and 'reconciled' = True\n\nBalance:\n Debtors 0 |\n Transfer account 2000 |\n Bank 0 |\n Sales | 2000\n\n3a the direct debit order is booked on the bank account\n\nBalance:\n 1100 Bank 2000 |\n 2000 Transfer account | 2000\n Reconciliation takes place between 3a and 2a\n\nBalance:\n Debtors 0 |\n Transfer account 0 |\n Bank 2000 |\n Sales | 2000\n\n4 a storno from invoice [1] triggers a new booking on the bank account\n 1300 Debtors 100 |\n 1100 Bank | 100\n\nBalance:\n Debtors 100 |\n Transfer account 0 |\n Bank 1900 |\n Sales | 2000\n\n The reconciliation of 2a is undone. The booking of 2a is reconciled\n with the booking of 4 instead.\n The payment line attribute 'storno' is set to True and the invoice\n state is no longer 'paid'.\n\nTwo cases need to be distinguisted:\n 1) If the storno is a manual storno from the partner, the invoice is set to\n state 'debit_denied', with 'reconciled' = False\n This module implements this option by allowing the bank module to call\n\n netsvc.LocalService(\"workflow\").trg_validate(\n uid, 'account.invoice', ids, 'debit_denied', cr)\n\n 2) If the storno is an error generated by the bank (assumingly non-fatal),\n the invoice is reopened for the next debit run. This is a call to\n existing\n\n netsvc.LocalService(\"workflow\").trg_validate(\n uid, 'account.invoice', ids, 'open_test', cr)\n\n Should also be adding a log entry on the invoice for tracing purposes\n\n self._log_event(cr, uid, ids, -1.0, 'Debit denied')\n\n If not for that funny comment\n \"#TODO: implement messages system\" in account/invoice.py\n\n Repeating non-fatal fatal errors need to be dealt with manually by checking\n open invoices with a matured invoice- or due date.\n\"\"\"\n\nfrom openerp.osv import orm\nfrom openerp.tools.translate import _\n\n\nclass AccountInvoice(orm.Model):\n _inherit = \"account.invoice\"\n\n def __init__(self, pool, cr):\n \"\"\"\n Adding a state to the hardcoded state list of the inherited\n model. The alternative is duplicating the field definition\n in columns but only one module can do that!\n\n Maybe apply a similar trick when overriding the buttons' 'states'\n attributes in the form view, manipulating the xml in fields_view_get().\n \"\"\"\n super(AccountInvoice, self).__init__(pool, cr)\n invoice_obj = pool.get('account.invoice')\n invoice_obj._columns['state'].selection.append(\n ('debit_denied', 'Debit denied'))\n\n def action_debit_denied(self, cr, uid, ids, context=None):\n for invoice_id in ids:\n if self.test_paid(cr, uid, [invoice_id], context):\n number = self.read(\n cr, uid, invoice_id, ['number'], context=context)['number']\n raise orm.except_orm(\n _('Error !'),\n _(\"You cannot set invoice '%s' to state 'debit \"\n \"denied', as it is still reconciled.\") % number)\n self.write(cr, uid, ids, {'state': 'debit_denied'}, context=context)\n for inv_id, name in self.name_get(cr, uid, ids, context=context):\n message = _(\"Invoice '%s': direct debit is denied.\") % name\n self.log(cr, uid, inv_id, message)\n return True\n\n def test_undo_debit_denied(self, cr, uid, ids, context=None):\n \"\"\"\n Called from the workflow. 
Used to unset paid state on\n invoices that were paid with bank transfers which are being cancelled\n \"\"\"\n for invoice in self.read(cr, uid, ids, ['reconciled'], context):\n if not invoice['reconciled']:\n return False\n return True\n", "path": "account_direct_debit/models/account_invoice.py"}]} | 2,495 | 246 |
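
The `verification_info` field at the end of the row above repeats the issue, the golden diff, and the pre-patch file contents (`before_files`) as JSON, which is enough to replay a row locally. A hedged sketch follows, assuming only the field names visible in that JSON; the scratch-directory handling is illustrative.

```python
# Sketch: write a row's before_files into a scratch directory and test whether the
# golden_diff applies cleanly with `git apply --check`. Field names follow the
# verification_info JSON shown above; everything else is illustrative.
import json
import pathlib
import subprocess
import tempfile


def golden_diff_applies(verification_info: str) -> bool:
    info = json.loads(verification_info)
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        for f in info["before_files"]:
            target = root / f["path"]
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(f["content"])
        # --check only verifies applicability; drop it to rewrite the files in place.
        result = subprocess.run(
            ["git", "apply", "--check", "-"],
            cwd=tmp,
            input=info["golden_diff"],
            text=True,
        )
    return result.returncode == 0
```
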
gh_patches_debug_20110 | rasdani/github-patches | git_diff | pytorch__ignite-2639 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Code formatting issue with latest flake8
https://github.com/pytorch/ignite/runs/7781175697?check_suite_focus=true#step:11:84
```
Collecting flake8
Downloading flake8-5.0.4-py2.py3-none-any.whl (61 kB)
+ flake8 ignite tests examples --config setup.cfg
ignite/metrics/psnr.py:12:121: E501 line too long (121 > 120 characters)
```
</issue>
<code>
[start of ignite/metrics/psnr.py]
1 from typing import Callable, Sequence, Union
2
3 import torch
4
5 from ignite.exceptions import NotComputableError
6 from ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce
7
8 __all__ = ["PSNR"]
9
10
11 class PSNR(Metric):
12 r"""Computes average `Peak signal-to-noise ratio (PSNR) <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.
13
14 .. math::
15 \text{PSNR}(I, J) = 10 * \log_{10}\left(\frac{ MAX_{I}^2 }{ \text{ MSE } }\right)
16
17 where :math:`\text{MSE}` is `mean squared error <https://en.wikipedia.org/wiki/Mean_squared_error>`_.
18
19 - `y_pred` and `y` **must** have (batch_size, ...) shape.
20 - `y_pred` and `y` **must** have same dtype and same shape.
21
22 Args:
23 data_range: The data range of the target image (distance between minimum
24 and maximum possible values).
25 For other data types, please set the data range, otherwise an exception will be raised.
26 output_transform: A callable that is used to transform the Engine’s
27 process_function’s output into the form expected by the metric.
28 device: specifies which device updates are accumulated on.
29 Setting the metric’s device to be the same as your update arguments ensures
30 the update method is non-blocking. By default, CPU.
31
32 Examples:
33 To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.
34 The output of the engine's ``process_function`` needs to be in format of
35 ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.
36
37 For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.
38
39 .. include:: defaults.rst
40 :start-after: :orphan:
41
42 .. testcode::
43
44 psnr = PSNR(data_range=1.0)
45 psnr.attach(default_evaluator, 'psnr')
46 preds = torch.rand([4, 3, 16, 16])
47 target = preds * 0.75
48 state = default_evaluator.run([[preds, target]])
49 print(state.metrics['psnr'])
50
51 .. testoutput::
52
53 16.8671405...
54
55 This metric by default accepts Grayscale or RGB images. But if you have YCbCr or YUV images, only
56 Y channel is needed for computing PSNR. And, this can be done with ``output_transform``. For instance,
57
58 .. testcode::
59
60 def get_y_channel(output):
61 y_pred, y = output
62 # y_pred and y are (B, 3, H, W) and YCbCr or YUV images
63 # let's select y channel
64 return y_pred[:, 0, ...], y[:, 0, ...]
65
66 psnr = PSNR(data_range=219, output_transform=get_y_channel)
67 psnr.attach(default_evaluator, 'psnr')
68 preds = 219 * torch.rand([4, 3, 16, 16])
69 target = preds * 0.75
70 state = default_evaluator.run([[preds, target]])
71 print(state.metrics['psnr'])
72
73 .. testoutput::
74
75 16.7027966...
76
77 .. versionadded:: 0.4.3
78 """
79
80 def __init__(
81 self,
82 data_range: Union[int, float],
83 output_transform: Callable = lambda x: x,
84 device: Union[str, torch.device] = torch.device("cpu"),
85 ):
86 super().__init__(output_transform=output_transform, device=device)
87 self.data_range = data_range
88
89 def _check_shape_dtype(self, output: Sequence[torch.Tensor]) -> None:
90 y_pred, y = output
91 if y_pred.dtype != y.dtype:
92 raise TypeError(
93 f"Expected y_pred and y to have the same data type. Got y_pred: {y_pred.dtype} and y: {y.dtype}."
94 )
95
96 if y_pred.shape != y.shape:
97 raise ValueError(
98 f"Expected y_pred and y to have the same shape. Got y_pred: {y_pred.shape} and y: {y.shape}."
99 )
100
101 @reinit__is_reduced
102 def reset(self) -> None:
103 self._sum_of_batchwise_psnr = torch.tensor(0.0, dtype=torch.float64, device=self._device)
104 self._num_examples = 0
105
106 @reinit__is_reduced
107 def update(self, output: Sequence[torch.Tensor]) -> None:
108 self._check_shape_dtype(output)
109 y_pred, y = output[0].detach(), output[1].detach()
110
111 dim = tuple(range(1, y.ndim))
112 mse_error = torch.pow(y_pred.double() - y.view_as(y_pred).double(), 2).mean(dim=dim)
113 self._sum_of_batchwise_psnr += torch.sum(10.0 * torch.log10(self.data_range ** 2 / (mse_error + 1e-10))).to(
114 device=self._device
115 )
116 self._num_examples += y.shape[0]
117
118 @sync_all_reduce("_sum_of_batchwise_psnr", "_num_examples")
119 def compute(self) -> torch.Tensor:
120 if self._num_examples == 0:
121 raise NotComputableError("PSNR must have at least one example before it can be computed.")
122 return self._sum_of_batchwise_psnr / self._num_examples
123
[end of ignite/metrics/psnr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/metrics/psnr.py b/ignite/metrics/psnr.py
--- a/ignite/metrics/psnr.py
+++ b/ignite/metrics/psnr.py
@@ -9,7 +9,8 @@
class PSNR(Metric):
- r"""Computes average `Peak signal-to-noise ratio (PSNR) <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.
+ r"""Computes average
+ `Peak signal-to-noise ratio (PSNR) <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.
.. math::
\text{PSNR}(I, J) = 10 * \log_{10}\left(\frac{ MAX_{I}^2 }{ \text{ MSE } }\right)
@@ -34,7 +35,8 @@
The output of the engine's ``process_function`` needs to be in format of
``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.
- For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.
+ For more information on how metric works with :class:`~ignite.engine.engine.Engine`,
+ visit :ref:`attach-engine`.
.. include:: defaults.rst
:start-after: :orphan:
| {"golden_diff": "diff --git a/ignite/metrics/psnr.py b/ignite/metrics/psnr.py\n--- a/ignite/metrics/psnr.py\n+++ b/ignite/metrics/psnr.py\n@@ -9,7 +9,8 @@\n \n \n class PSNR(Metric):\n- r\"\"\"Computes average `Peak signal-to-noise ratio (PSNR) <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.\n+ r\"\"\"Computes average\n+ `Peak signal-to-noise ratio (PSNR) <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.\n \n .. math::\n \\text{PSNR}(I, J) = 10 * \\log_{10}\\left(\\frac{ MAX_{I}^2 }{ \\text{ MSE } }\\right)\n@@ -34,7 +35,8 @@\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n \n- For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.\n+ For more information on how metric works with :class:`~ignite.engine.engine.Engine`,\n+ visit :ref:`attach-engine`.\n \n .. include:: defaults.rst\n :start-after: :orphan:\n", "issue": "Code formatting issue with latest flake8\n\r\nhttps://github.com/pytorch/ignite/runs/7781175697?check_suite_focus=true#step:11:84\r\n\r\n```\r\nCollecting flake8\r\n Downloading flake8-5.0.4-py2.py3-none-any.whl (61 kB)\r\n\r\n+ flake8 ignite tests examples --config setup.cfg\r\nignite/metrics/psnr.py:12:121: E501 line too long (121 > 120 characters)\r\n```\n", "before_files": [{"content": "from typing import Callable, Sequence, Union\n\nimport torch\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce\n\n__all__ = [\"PSNR\"]\n\n\nclass PSNR(Metric):\n r\"\"\"Computes average `Peak signal-to-noise ratio (PSNR) <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.\n\n .. math::\n \\text{PSNR}(I, J) = 10 * \\log_{10}\\left(\\frac{ MAX_{I}^2 }{ \\text{ MSE } }\\right)\n\n where :math:`\\text{MSE}` is `mean squared error <https://en.wikipedia.org/wiki/Mean_squared_error>`_.\n\n - `y_pred` and `y` **must** have (batch_size, ...) shape.\n - `y_pred` and `y` **must** have same dtype and same shape.\n\n Args:\n data_range: The data range of the target image (distance between minimum\n and maximum possible values).\n For other data types, please set the data range, otherwise an exception will be raised.\n output_transform: A callable that is used to transform the Engine\u2019s\n process_function\u2019s output into the form expected by the metric.\n device: specifies which device updates are accumulated on.\n Setting the metric\u2019s device to be the same as your update arguments ensures\n the update method is non-blocking. By default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n\n For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n psnr = PSNR(data_range=1.0)\n psnr.attach(default_evaluator, 'psnr')\n preds = torch.rand([4, 3, 16, 16])\n target = preds * 0.75\n state = default_evaluator.run([[preds, target]])\n print(state.metrics['psnr'])\n\n .. testoutput::\n\n 16.8671405...\n\n This metric by default accepts Grayscale or RGB images. But if you have YCbCr or YUV images, only\n Y channel is needed for computing PSNR. And, this can be done with ``output_transform``. For instance,\n\n .. 
testcode::\n\n def get_y_channel(output):\n y_pred, y = output\n # y_pred and y are (B, 3, H, W) and YCbCr or YUV images\n # let's select y channel\n return y_pred[:, 0, ...], y[:, 0, ...]\n\n psnr = PSNR(data_range=219, output_transform=get_y_channel)\n psnr.attach(default_evaluator, 'psnr')\n preds = 219 * torch.rand([4, 3, 16, 16])\n target = preds * 0.75\n state = default_evaluator.run([[preds, target]])\n print(state.metrics['psnr'])\n\n .. testoutput::\n\n 16.7027966...\n\n .. versionadded:: 0.4.3\n \"\"\"\n\n def __init__(\n self,\n data_range: Union[int, float],\n output_transform: Callable = lambda x: x,\n device: Union[str, torch.device] = torch.device(\"cpu\"),\n ):\n super().__init__(output_transform=output_transform, device=device)\n self.data_range = data_range\n\n def _check_shape_dtype(self, output: Sequence[torch.Tensor]) -> None:\n y_pred, y = output\n if y_pred.dtype != y.dtype:\n raise TypeError(\n f\"Expected y_pred and y to have the same data type. Got y_pred: {y_pred.dtype} and y: {y.dtype}.\"\n )\n\n if y_pred.shape != y.shape:\n raise ValueError(\n f\"Expected y_pred and y to have the same shape. Got y_pred: {y_pred.shape} and y: {y.shape}.\"\n )\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_batchwise_psnr = torch.tensor(0.0, dtype=torch.float64, device=self._device)\n self._num_examples = 0\n\n @reinit__is_reduced\n def update(self, output: Sequence[torch.Tensor]) -> None:\n self._check_shape_dtype(output)\n y_pred, y = output[0].detach(), output[1].detach()\n\n dim = tuple(range(1, y.ndim))\n mse_error = torch.pow(y_pred.double() - y.view_as(y_pred).double(), 2).mean(dim=dim)\n self._sum_of_batchwise_psnr += torch.sum(10.0 * torch.log10(self.data_range ** 2 / (mse_error + 1e-10))).to(\n device=self._device\n )\n self._num_examples += y.shape[0]\n\n @sync_all_reduce(\"_sum_of_batchwise_psnr\", \"_num_examples\")\n def compute(self) -> torch.Tensor:\n if self._num_examples == 0:\n raise NotComputableError(\"PSNR must have at least one example before it can be computed.\")\n return self._sum_of_batchwise_psnr / self._num_examples\n", "path": "ignite/metrics/psnr.py"}]} | 2,184 | 305 |
gh_patches_debug_36049 | rasdani/github-patches | git_diff | mozilla__pontoon-2716 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pretranslated Fluent string has the ID translated or modified
It happened for at least two strings.
```
remember-pw-link = Remember your password? Sign in
```
Became
```
Remember-pw-link = Ricordi la password? Accedi
```
No clue why it changed to uppercase.
On the other hand, for
```
plan-price-interval-year =
{ $intervalCount ->
[one] { $amount } all’anno
*[other] { $amount } ogni { $intervalCount } anni
}
.title =
{ $intervalCount ->
[one] { $amount } all’anno
*[other] { $amount } ogni { $intervalCount } anni
}
```
The id was translated to `piano-prezzo-intervallo-anno`(but the translation was good besides that).
</issue>
<code>
[start of pontoon/pretranslation/pretranslate.py]
1 import operator
2
3 from fluent.syntax import FluentSerializer
4 from functools import reduce
5
6 from django.db.models import CharField, Value as V
7 from django.db.models.functions import Concat
8
9 from pontoon.base.models import User, TranslatedResource
10 from pontoon.machinery.utils import (
11 get_google_translate_data,
12 get_translation_memory_data,
13 )
14
15 from pontoon.base.templatetags.helpers import (
16 as_simple_translation,
17 is_single_input_ftl_string,
18 get_reconstructed_message,
19 )
20
21
22 serializer = FluentSerializer()
23
24
25 def get_translations(entity, locale):
26 """
27 Get pretranslations for the entity-locale pair
28
29 :arg Entity entity: the Entity object
30 :arg Locale locale: the Locale object
31
32 :returns: a list of tuple with:
33 - a pretranslation of the entity
34 - plural form
35 - user - tm_user/gt_user
36 """
37 tm_user = User.objects.get(email="[email protected]")
38 gt_user = User.objects.get(email="[email protected]")
39
40 strings = []
41 plural_forms = range(0, locale.nplurals or 1)
42
43 entity_string = (
44 as_simple_translation(entity.string)
45 if is_single_input_ftl_string(entity.string)
46 else entity.string
47 )
48
49 # Try to get matches from translation_memory
50 tm_response = get_translation_memory_data(
51 text=entity_string,
52 locale=locale,
53 )
54
55 tm_response = [t for t in tm_response if int(t["quality"]) == 100]
56
57 if tm_response:
58 if entity.string_plural == "":
59 translation = tm_response[0]["target"]
60
61 if entity.string != entity_string:
62 translation = serializer.serialize_entry(
63 get_reconstructed_message(entity.string, translation)
64 )
65
66 strings = [(translation, None, tm_user)]
67 else:
68 for plural_form in plural_forms:
69 strings.append((tm_response[0]["target"], plural_form, tm_user))
70
71 # Else fetch from google translate
72 elif locale.google_translate_code:
73 gt_response = get_google_translate_data(
74 text=entity.string,
75 locale=locale,
76 )
77
78 if gt_response["status"]:
79 if entity.string_plural == "":
80 strings = [(gt_response["translation"], None, gt_user)]
81 else:
82 for plural_form in plural_forms:
83 strings.append((gt_response["translation"], plural_form, gt_user))
84 return strings
85
86
87 def update_changed_instances(tr_filter, tr_dict, translations):
88 """
89 Update the latest activity and stats for changed Locales, ProjectLocales
90 & TranslatedResources
91 """
92 tr_filter = tuple(tr_filter)
93 # Combine all generated filters with an OK operator.
94 # `operator.ior` is the '|' Python operator, which turns into a logical OR
95 # when used between django ORM query objects.
96 tr_query = reduce(operator.ior, tr_filter)
97
98 translatedresources = TranslatedResource.objects.filter(tr_query).annotate(
99 locale_resource=Concat(
100 "locale_id", V("-"), "resource_id", output_field=CharField()
101 )
102 )
103
104 translatedresources.update_stats()
105
106 for tr in translatedresources:
107 index = tr_dict[tr.locale_resource]
108 translation = translations[index]
109 translation.update_latest_translation()
110
[end of pontoon/pretranslation/pretranslate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pontoon/pretranslation/pretranslate.py b/pontoon/pretranslation/pretranslate.py
--- a/pontoon/pretranslation/pretranslate.py
+++ b/pontoon/pretranslation/pretranslate.py
@@ -18,6 +18,7 @@
get_reconstructed_message,
)
+UNTRANSLATABLE_KEY = "AIzaSyDX3R5Y1kxh_8lJ4OAO"
serializer = FluentSerializer()
@@ -40,7 +41,7 @@
strings = []
plural_forms = range(0, locale.nplurals or 1)
- entity_string = (
+ tm_input = (
as_simple_translation(entity.string)
if is_single_input_ftl_string(entity.string)
else entity.string
@@ -48,7 +49,7 @@
# Try to get matches from translation_memory
tm_response = get_translation_memory_data(
- text=entity_string,
+ text=tm_input,
locale=locale,
)
@@ -58,7 +59,7 @@
if entity.string_plural == "":
translation = tm_response[0]["target"]
- if entity.string != entity_string:
+ if entity.string != tm_input:
translation = serializer.serialize_entry(
get_reconstructed_message(entity.string, translation)
)
@@ -70,12 +71,23 @@
# Else fetch from google translate
elif locale.google_translate_code:
+ gt_input = (
+ entity.string.replace(entity.key, UNTRANSLATABLE_KEY, 1)
+ if entity.resource.format == "ftl"
+ else entity.string
+ )
+
gt_response = get_google_translate_data(
- text=entity.string,
+ text=gt_input,
locale=locale,
)
if gt_response["status"]:
+ if entity.string != gt_input:
+ gt_response["translation"] = gt_response["translation"].replace(
+ UNTRANSLATABLE_KEY, entity.key
+ )
+
if entity.string_plural == "":
strings = [(gt_response["translation"], None, gt_user)]
else:
| {"golden_diff": "diff --git a/pontoon/pretranslation/pretranslate.py b/pontoon/pretranslation/pretranslate.py\n--- a/pontoon/pretranslation/pretranslate.py\n+++ b/pontoon/pretranslation/pretranslate.py\n@@ -18,6 +18,7 @@\n get_reconstructed_message,\n )\n \n+UNTRANSLATABLE_KEY = \"AIzaSyDX3R5Y1kxh_8lJ4OAO\"\n \n serializer = FluentSerializer()\n \n@@ -40,7 +41,7 @@\n strings = []\n plural_forms = range(0, locale.nplurals or 1)\n \n- entity_string = (\n+ tm_input = (\n as_simple_translation(entity.string)\n if is_single_input_ftl_string(entity.string)\n else entity.string\n@@ -48,7 +49,7 @@\n \n # Try to get matches from translation_memory\n tm_response = get_translation_memory_data(\n- text=entity_string,\n+ text=tm_input,\n locale=locale,\n )\n \n@@ -58,7 +59,7 @@\n if entity.string_plural == \"\":\n translation = tm_response[0][\"target\"]\n \n- if entity.string != entity_string:\n+ if entity.string != tm_input:\n translation = serializer.serialize_entry(\n get_reconstructed_message(entity.string, translation)\n )\n@@ -70,12 +71,23 @@\n \n # Else fetch from google translate\n elif locale.google_translate_code:\n+ gt_input = (\n+ entity.string.replace(entity.key, UNTRANSLATABLE_KEY, 1)\n+ if entity.resource.format == \"ftl\"\n+ else entity.string\n+ )\n+\n gt_response = get_google_translate_data(\n- text=entity.string,\n+ text=gt_input,\n locale=locale,\n )\n \n if gt_response[\"status\"]:\n+ if entity.string != gt_input:\n+ gt_response[\"translation\"] = gt_response[\"translation\"].replace(\n+ UNTRANSLATABLE_KEY, entity.key\n+ )\n+\n if entity.string_plural == \"\":\n strings = [(gt_response[\"translation\"], None, gt_user)]\n else:\n", "issue": "Pretranslated Fluent string has the ID translated or modified\nIt happened for at least two strings.\r\n\r\n```\r\nremember-pw-link = Remember your password? Sign in\r\n```\r\n\r\nBecame\r\n\r\n```\r\nRemember-pw-link = Ricordi la password? 
Accedi\r\n```\r\n\r\nNo clue why it changed to uppercase.\r\n\r\nOn the other hand, for \r\n\r\n```\r\nplan-price-interval-year =\r\n { $intervalCount ->\r\n [one] { $amount } all\u2019anno\r\n *[other] { $amount } ogni { $intervalCount } anni\r\n }\r\n .title =\r\n { $intervalCount ->\r\n [one] { $amount } all\u2019anno\r\n *[other] { $amount } ogni { $intervalCount } anni\r\n }\r\n```\r\n\r\nThe id was translated to `piano-prezzo-intervallo-anno`(but the translation was good besides that).\r\n\n", "before_files": [{"content": "import operator\n\nfrom fluent.syntax import FluentSerializer\nfrom functools import reduce\n\nfrom django.db.models import CharField, Value as V\nfrom django.db.models.functions import Concat\n\nfrom pontoon.base.models import User, TranslatedResource\nfrom pontoon.machinery.utils import (\n get_google_translate_data,\n get_translation_memory_data,\n)\n\nfrom pontoon.base.templatetags.helpers import (\n as_simple_translation,\n is_single_input_ftl_string,\n get_reconstructed_message,\n)\n\n\nserializer = FluentSerializer()\n\n\ndef get_translations(entity, locale):\n \"\"\"\n Get pretranslations for the entity-locale pair\n\n :arg Entity entity: the Entity object\n :arg Locale locale: the Locale object\n\n :returns: a list of tuple with:\n - a pretranslation of the entity\n - plural form\n - user - tm_user/gt_user\n \"\"\"\n tm_user = User.objects.get(email=\"[email protected]\")\n gt_user = User.objects.get(email=\"[email protected]\")\n\n strings = []\n plural_forms = range(0, locale.nplurals or 1)\n\n entity_string = (\n as_simple_translation(entity.string)\n if is_single_input_ftl_string(entity.string)\n else entity.string\n )\n\n # Try to get matches from translation_memory\n tm_response = get_translation_memory_data(\n text=entity_string,\n locale=locale,\n )\n\n tm_response = [t for t in tm_response if int(t[\"quality\"]) == 100]\n\n if tm_response:\n if entity.string_plural == \"\":\n translation = tm_response[0][\"target\"]\n\n if entity.string != entity_string:\n translation = serializer.serialize_entry(\n get_reconstructed_message(entity.string, translation)\n )\n\n strings = [(translation, None, tm_user)]\n else:\n for plural_form in plural_forms:\n strings.append((tm_response[0][\"target\"], plural_form, tm_user))\n\n # Else fetch from google translate\n elif locale.google_translate_code:\n gt_response = get_google_translate_data(\n text=entity.string,\n locale=locale,\n )\n\n if gt_response[\"status\"]:\n if entity.string_plural == \"\":\n strings = [(gt_response[\"translation\"], None, gt_user)]\n else:\n for plural_form in plural_forms:\n strings.append((gt_response[\"translation\"], plural_form, gt_user))\n return strings\n\n\ndef update_changed_instances(tr_filter, tr_dict, translations):\n \"\"\"\n Update the latest activity and stats for changed Locales, ProjectLocales\n & TranslatedResources\n \"\"\"\n tr_filter = tuple(tr_filter)\n # Combine all generated filters with an OK operator.\n # `operator.ior` is the '|' Python operator, which turns into a logical OR\n # when used between django ORM query objects.\n tr_query = reduce(operator.ior, tr_filter)\n\n translatedresources = TranslatedResource.objects.filter(tr_query).annotate(\n locale_resource=Concat(\n \"locale_id\", V(\"-\"), \"resource_id\", output_field=CharField()\n )\n )\n\n translatedresources.update_stats()\n\n for tr in translatedresources:\n index = tr_dict[tr.locale_resource]\n translation = translations[index]\n translation.update_latest_translation()\n", "path": 
"pontoon/pretranslation/pretranslate.py"}]} | 1,651 | 467 |
gh_patches_debug_21595 | rasdani/github-patches | git_diff | matrix-org__synapse-3927 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
run_as_background_process doesn't catch & log exceptions
(plus if it does raise in a loopingcall, we throw away the exception)
</issue>
<code>
[start of synapse/metrics/background_process_metrics.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2018 New Vector Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import threading
17
18 import six
19
20 from prometheus_client.core import REGISTRY, Counter, GaugeMetricFamily
21
22 from twisted.internet import defer
23
24 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
25
26 _background_process_start_count = Counter(
27 "synapse_background_process_start_count",
28 "Number of background processes started",
29 ["name"],
30 )
31
32 # we set registry=None in all of these to stop them getting registered with
33 # the default registry. Instead we collect them all via the CustomCollector,
34 # which ensures that we can update them before they are collected.
35 #
36 _background_process_ru_utime = Counter(
37 "synapse_background_process_ru_utime_seconds",
38 "User CPU time used by background processes, in seconds",
39 ["name"],
40 registry=None,
41 )
42
43 _background_process_ru_stime = Counter(
44 "synapse_background_process_ru_stime_seconds",
45 "System CPU time used by background processes, in seconds",
46 ["name"],
47 registry=None,
48 )
49
50 _background_process_db_txn_count = Counter(
51 "synapse_background_process_db_txn_count",
52 "Number of database transactions done by background processes",
53 ["name"],
54 registry=None,
55 )
56
57 _background_process_db_txn_duration = Counter(
58 "synapse_background_process_db_txn_duration_seconds",
59 ("Seconds spent by background processes waiting for database "
60 "transactions, excluding scheduling time"),
61 ["name"],
62 registry=None,
63 )
64
65 _background_process_db_sched_duration = Counter(
66 "synapse_background_process_db_sched_duration_seconds",
67 "Seconds spent by background processes waiting for database connections",
68 ["name"],
69 registry=None,
70 )
71
72 # map from description to a counter, so that we can name our logcontexts
73 # incrementally. (It actually duplicates _background_process_start_count, but
74 # it's much simpler to do so than to try to combine them.)
75 _background_process_counts = dict() # type: dict[str, int]
76
77 # map from description to the currently running background processes.
78 #
79 # it's kept as a dict of sets rather than a big set so that we can keep track
80 # of process descriptions that no longer have any active processes.
81 _background_processes = dict() # type: dict[str, set[_BackgroundProcess]]
82
83 # A lock that covers the above dicts
84 _bg_metrics_lock = threading.Lock()
85
86
87 class _Collector(object):
88 """A custom metrics collector for the background process metrics.
89
90 Ensures that all of the metrics are up-to-date with any in-flight processes
91 before they are returned.
92 """
93 def collect(self):
94 background_process_in_flight_count = GaugeMetricFamily(
95 "synapse_background_process_in_flight_count",
96 "Number of background processes in flight",
97 labels=["name"],
98 )
99
100 # We copy the dict so that it doesn't change from underneath us
101 with _bg_metrics_lock:
102 _background_processes_copy = dict(_background_processes)
103
104 for desc, processes in six.iteritems(_background_processes_copy):
105 background_process_in_flight_count.add_metric(
106 (desc,), len(processes),
107 )
108 for process in processes:
109 process.update_metrics()
110
111 yield background_process_in_flight_count
112
113 # now we need to run collect() over each of the static Counters, and
114 # yield each metric they return.
115 for m in (
116 _background_process_ru_utime,
117 _background_process_ru_stime,
118 _background_process_db_txn_count,
119 _background_process_db_txn_duration,
120 _background_process_db_sched_duration,
121 ):
122 for r in m.collect():
123 yield r
124
125
126 REGISTRY.register(_Collector())
127
128
129 class _BackgroundProcess(object):
130 def __init__(self, desc, ctx):
131 self.desc = desc
132 self._context = ctx
133 self._reported_stats = None
134
135 def update_metrics(self):
136 """Updates the metrics with values from this process."""
137 new_stats = self._context.get_resource_usage()
138 if self._reported_stats is None:
139 diff = new_stats
140 else:
141 diff = new_stats - self._reported_stats
142 self._reported_stats = new_stats
143
144 _background_process_ru_utime.labels(self.desc).inc(diff.ru_utime)
145 _background_process_ru_stime.labels(self.desc).inc(diff.ru_stime)
146 _background_process_db_txn_count.labels(self.desc).inc(
147 diff.db_txn_count,
148 )
149 _background_process_db_txn_duration.labels(self.desc).inc(
150 diff.db_txn_duration_sec,
151 )
152 _background_process_db_sched_duration.labels(self.desc).inc(
153 diff.db_sched_duration_sec,
154 )
155
156
157 def run_as_background_process(desc, func, *args, **kwargs):
158 """Run the given function in its own logcontext, with resource metrics
159
160 This should be used to wrap processes which are fired off to run in the
161 background, instead of being associated with a particular request.
162
163 It returns a Deferred which completes when the function completes, but it doesn't
164 follow the synapse logcontext rules, which makes it appropriate for passing to
165 clock.looping_call and friends (or for firing-and-forgetting in the middle of a
166 normal synapse inlineCallbacks function).
167
168 Args:
169 desc (str): a description for this background process type
170 func: a function, which may return a Deferred
171 args: positional args for func
172 kwargs: keyword args for func
173
174 Returns: Deferred which returns the result of func, but note that it does not
175 follow the synapse logcontext rules.
176 """
177 @defer.inlineCallbacks
178 def run():
179 with _bg_metrics_lock:
180 count = _background_process_counts.get(desc, 0)
181 _background_process_counts[desc] = count + 1
182
183 _background_process_start_count.labels(desc).inc()
184
185 with LoggingContext(desc) as context:
186 context.request = "%s-%i" % (desc, count)
187 proc = _BackgroundProcess(desc, context)
188
189 with _bg_metrics_lock:
190 _background_processes.setdefault(desc, set()).add(proc)
191
192 try:
193 yield func(*args, **kwargs)
194 finally:
195 proc.update_metrics()
196
197 with _bg_metrics_lock:
198 _background_processes[desc].remove(proc)
199
200 with PreserveLoggingContext():
201 return run()
202
[end of synapse/metrics/background_process_metrics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/synapse/metrics/background_process_metrics.py b/synapse/metrics/background_process_metrics.py
--- a/synapse/metrics/background_process_metrics.py
+++ b/synapse/metrics/background_process_metrics.py
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import logging
import threading
import six
@@ -23,6 +24,9 @@
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
+logger = logging.getLogger(__name__)
+
+
_background_process_start_count = Counter(
"synapse_background_process_start_count",
"Number of background processes started",
@@ -191,6 +195,8 @@
try:
yield func(*args, **kwargs)
+ except Exception:
+ logger.exception("Background process '%s' threw an exception", desc)
finally:
proc.update_metrics()
| {"golden_diff": "diff --git a/synapse/metrics/background_process_metrics.py b/synapse/metrics/background_process_metrics.py\n--- a/synapse/metrics/background_process_metrics.py\n+++ b/synapse/metrics/background_process_metrics.py\n@@ -13,6 +13,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import logging\n import threading\n \n import six\n@@ -23,6 +24,9 @@\n \n from synapse.util.logcontext import LoggingContext, PreserveLoggingContext\n \n+logger = logging.getLogger(__name__)\n+\n+\n _background_process_start_count = Counter(\n \"synapse_background_process_start_count\",\n \"Number of background processes started\",\n@@ -191,6 +195,8 @@\n \n try:\n yield func(*args, **kwargs)\n+ except Exception:\n+ logger.exception(\"Background process '%s' threw an exception\", desc)\n finally:\n proc.update_metrics()\n", "issue": "run_as_background_process doesn't catch & log exceptions \n(plus if it does raise in a loopingcall, we throw away the exception)\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2018 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport threading\n\nimport six\n\nfrom prometheus_client.core import REGISTRY, Counter, GaugeMetricFamily\n\nfrom twisted.internet import defer\n\nfrom synapse.util.logcontext import LoggingContext, PreserveLoggingContext\n\n_background_process_start_count = Counter(\n \"synapse_background_process_start_count\",\n \"Number of background processes started\",\n [\"name\"],\n)\n\n# we set registry=None in all of these to stop them getting registered with\n# the default registry. Instead we collect them all via the CustomCollector,\n# which ensures that we can update them before they are collected.\n#\n_background_process_ru_utime = Counter(\n \"synapse_background_process_ru_utime_seconds\",\n \"User CPU time used by background processes, in seconds\",\n [\"name\"],\n registry=None,\n)\n\n_background_process_ru_stime = Counter(\n \"synapse_background_process_ru_stime_seconds\",\n \"System CPU time used by background processes, in seconds\",\n [\"name\"],\n registry=None,\n)\n\n_background_process_db_txn_count = Counter(\n \"synapse_background_process_db_txn_count\",\n \"Number of database transactions done by background processes\",\n [\"name\"],\n registry=None,\n)\n\n_background_process_db_txn_duration = Counter(\n \"synapse_background_process_db_txn_duration_seconds\",\n (\"Seconds spent by background processes waiting for database \"\n \"transactions, excluding scheduling time\"),\n [\"name\"],\n registry=None,\n)\n\n_background_process_db_sched_duration = Counter(\n \"synapse_background_process_db_sched_duration_seconds\",\n \"Seconds spent by background processes waiting for database connections\",\n [\"name\"],\n registry=None,\n)\n\n# map from description to a counter, so that we can name our logcontexts\n# incrementally. 
(It actually duplicates _background_process_start_count, but\n# it's much simpler to do so than to try to combine them.)\n_background_process_counts = dict() # type: dict[str, int]\n\n# map from description to the currently running background processes.\n#\n# it's kept as a dict of sets rather than a big set so that we can keep track\n# of process descriptions that no longer have any active processes.\n_background_processes = dict() # type: dict[str, set[_BackgroundProcess]]\n\n# A lock that covers the above dicts\n_bg_metrics_lock = threading.Lock()\n\n\nclass _Collector(object):\n \"\"\"A custom metrics collector for the background process metrics.\n\n Ensures that all of the metrics are up-to-date with any in-flight processes\n before they are returned.\n \"\"\"\n def collect(self):\n background_process_in_flight_count = GaugeMetricFamily(\n \"synapse_background_process_in_flight_count\",\n \"Number of background processes in flight\",\n labels=[\"name\"],\n )\n\n # We copy the dict so that it doesn't change from underneath us\n with _bg_metrics_lock:\n _background_processes_copy = dict(_background_processes)\n\n for desc, processes in six.iteritems(_background_processes_copy):\n background_process_in_flight_count.add_metric(\n (desc,), len(processes),\n )\n for process in processes:\n process.update_metrics()\n\n yield background_process_in_flight_count\n\n # now we need to run collect() over each of the static Counters, and\n # yield each metric they return.\n for m in (\n _background_process_ru_utime,\n _background_process_ru_stime,\n _background_process_db_txn_count,\n _background_process_db_txn_duration,\n _background_process_db_sched_duration,\n ):\n for r in m.collect():\n yield r\n\n\nREGISTRY.register(_Collector())\n\n\nclass _BackgroundProcess(object):\n def __init__(self, desc, ctx):\n self.desc = desc\n self._context = ctx\n self._reported_stats = None\n\n def update_metrics(self):\n \"\"\"Updates the metrics with values from this process.\"\"\"\n new_stats = self._context.get_resource_usage()\n if self._reported_stats is None:\n diff = new_stats\n else:\n diff = new_stats - self._reported_stats\n self._reported_stats = new_stats\n\n _background_process_ru_utime.labels(self.desc).inc(diff.ru_utime)\n _background_process_ru_stime.labels(self.desc).inc(diff.ru_stime)\n _background_process_db_txn_count.labels(self.desc).inc(\n diff.db_txn_count,\n )\n _background_process_db_txn_duration.labels(self.desc).inc(\n diff.db_txn_duration_sec,\n )\n _background_process_db_sched_duration.labels(self.desc).inc(\n diff.db_sched_duration_sec,\n )\n\n\ndef run_as_background_process(desc, func, *args, **kwargs):\n \"\"\"Run the given function in its own logcontext, with resource metrics\n\n This should be used to wrap processes which are fired off to run in the\n background, instead of being associated with a particular request.\n\n It returns a Deferred which completes when the function completes, but it doesn't\n follow the synapse logcontext rules, which makes it appropriate for passing to\n clock.looping_call and friends (or for firing-and-forgetting in the middle of a\n normal synapse inlineCallbacks function).\n\n Args:\n desc (str): a description for this background process type\n func: a function, which may return a Deferred\n args: positional args for func\n kwargs: keyword args for func\n\n Returns: Deferred which returns the result of func, but note that it does not\n follow the synapse logcontext rules.\n \"\"\"\n @defer.inlineCallbacks\n def run():\n with _bg_metrics_lock:\n 
count = _background_process_counts.get(desc, 0)\n _background_process_counts[desc] = count + 1\n\n _background_process_start_count.labels(desc).inc()\n\n with LoggingContext(desc) as context:\n context.request = \"%s-%i\" % (desc, count)\n proc = _BackgroundProcess(desc, context)\n\n with _bg_metrics_lock:\n _background_processes.setdefault(desc, set()).add(proc)\n\n try:\n yield func(*args, **kwargs)\n finally:\n proc.update_metrics()\n\n with _bg_metrics_lock:\n _background_processes[desc].remove(proc)\n\n with PreserveLoggingContext():\n return run()\n", "path": "synapse/metrics/background_process_metrics.py"}]} | 2,531 | 207 |
gh_patches_debug_38587 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1232 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor function-based view to class-based
### Idea summary
Rewrite function-based view to class-based and add docstring
### Further details
In the accounts app's view, there is a function-based view called user_profile:
```python
@login_required
@full_profile
def user_profile(request, username=None):
if request.method == "GET":
if not username:
return HttpResponseRedirect(f"/profile/{request.user}")
else:
is_owner = username == request.user.username
try:
user = get_user_model().objects.get(username=username)
except get_user_model().DoesNotExist:
return HttpResponseRedirect("/404")
form = ProfileEditForm(
initial={
"username": user.username,
"email": user.email,
"first_name": user.profile.first_name or None,
"last_name": user.profile.last_name or None,
"about_me": user.profile.about_me or None,
},
readonly=True,
)
data = {
"username": user,
"profile_image_form": UpdateProfileImage,
"form": form if is_owner else None,
"readonly": True,
}
return TemplateResponse(request, "account.html", data)
```
All views in this file are created using classes, so I think it would be great to rewrite this view to better match the code style. Moreover, this view has no docstring. I'm new to contributing to Open Source; I want to solve this issue, and I know how. (A minimal sketch of the class-based pattern follows this issue block.)
</issue>
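Before the repository code, here is a minimal sketch of the class-based pattern the issue asks for, mirroring the approach the patch later in this record takes: subclass `View`, replace `@login_required` with `LoginRequiredMixin`, and apply the project's `full_profile` decorator to `get()` via `method_decorator`. This is an illustrative sketch, not the project's final code; all names are taken from the issue's own snippet and the file below, and the form-building logic of the original view is omitted for brevity.

```python
from django.contrib.auth import get_user_model
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponseRedirect
from django.template.response import TemplateResponse
from django.utils.decorators import method_decorator
from django.views import View

from core.custom_decorators import full_profile  # decorator already used by the function-based view


class UserProfileView(LoginRequiredMixin, View):
    """Read-only profile page for authenticated users (class-based rewrite sketch)."""

    @method_decorator(full_profile)
    def get(self, request, username=None):
        if not username:
            return HttpResponseRedirect(f"/profile/{request.user}")
        try:
            user = get_user_model().objects.get(username=username)
        except get_user_model().DoesNotExist:
            return HttpResponseRedirect("/404")
        # Form handling from the original function-based view omitted here.
        data = {"username": user, "readonly": True}
        return TemplateResponse(request, "account.html", data)
```

The URL configuration can then point at `UserProfileView.as_view()` instead of the bare function, which is what the patch later in this record does.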
<code>
[start of project/accounts/urls/urls.py]
1 from django.urls import path
2 from django.contrib.auth import views as auth_views
3 from accounts.views import (
4 RegisterView,
5 SettingsView,
6 ProfileActivationView,
7 PasswordResetView,
8 PasswordResetDoneView,
9 PasswordResetConfirmView,
10 PasswordResetCompleteView,
11 ProfileSetupView,
12 user_profile,
13 )
14
15 urlpatterns = [
16 path(
17 "login/",
18 auth_views.LoginView.as_view(template_name="accounts/register/login.html"),
19 name="accounts_login",
20 ),
21 path("logout/", auth_views.LogoutView.as_view(), name="accounts_logout"),
22 path("register/", RegisterView.as_view(), name="accounts_register"),
23 path("settings/", SettingsView.as_view(), name="accounts_settings"),
24 path("setup/", ProfileSetupView.as_view(), name="accounts_profile_setup"),
25 path("profile/<str:username>/", user_profile, name="profile"),
26 path(
27 "activate_account/<uidb64>/<token>/",
28 ProfileActivationView.as_view(),
29 name="accounts_activate",
30 ),
31 path(
32 "accounts/password_reset/",
33 PasswordResetView.as_view(),
34 name="accounts_password_reset",
35 ),
36 path(
37 "accounts/password_reset_done/",
38 PasswordResetDoneView.as_view(),
39 name="accounts_password_reset_done",
40 ),
41 path(
42 "accounts/password_reset_confirm/<uidb64>/<token>/",
43 PasswordResetConfirmView.as_view(),
44 name="accounts_password_reset_confirm",
45 ),
46 path(
47 "accounts/password_reset_complete/",
48 PasswordResetCompleteView.as_view(),
49 name="accounts_password_reset_complete",
50 ),
51 ]
52
[end of project/accounts/urls/urls.py]
[start of project/accounts/views.py]
1 """
2 Class based views.
3
4 This module will include views for the accounts app.
5 """
6
7 from core.custom_decorators import full_profile, login_required
8 from django.conf import settings
9 from django.contrib.auth import get_user_model, login
10 from django.contrib.auth import views as auth_views
11 from django.contrib.auth.mixins import LoginRequiredMixin
12 from django.contrib.sites.shortcuts import get_current_site
13 from django.http import HttpResponseRedirect
14 from django.template.response import TemplateResponse
15 from django.urls import reverse_lazy
16 from django.utils.encoding import force_str
17 from django.utils.http import urlsafe_base64_decode
18 from django.views import View
19 from django.views.generic.edit import FormView, UpdateView
20
21 from accounts.authentication import account_activation_token, send_activation_email
22 from accounts.forms import ProfileEditForm, UpdateProfileImage, UserRegistrationForm
23 from accounts.models import Profile
24
25
26 class RegisterView(FormView):
27 """
28 A form view that handles user registration.
29 """
30
31 template_name = "accounts/register/register.html"
32 form_class = UserRegistrationForm
33 success_url = "/"
34
35 def _create_user(self, form):
36 username = form.cleaned_data["username"]
37 password = form.cleaned_data["password"]
38 email = form.cleaned_data["email"]
39 user = get_user_model().objects.create_user(username, email, password)
40 return user
41
42 def _send_email(self, user):
43 domain = get_current_site(self.request).domain
44 send_activation_email(user, domain)
45
46 def _login(self, user):
47 login(self.request, user)
48
49 def form_valid(self, form):
50 user = self._create_user(form)
51
52 self._send_email(user)
53 self._login(user)
54
55 return super(RegisterView, self).form_valid(form)
56
57
58 class PasswordResetView(auth_views.PasswordResetView):
59 template_name = "accounts/users/password_reset.html"
60 email_template_name = "accounts/users/password_reset_email.html"
61 subject_template_name = "accounts/users/password_reset_subject.txt"
62 from_email = settings.EMAIL_HOST_USER
63 success_url = reverse_lazy("accounts_password_reset_done")
64
65
66 class PasswordResetDoneView(auth_views.PasswordResetDoneView):
67 template_name = "accounts/users/password_reset_done.html"
68
69
70 class PasswordResetConfirmView(auth_views.PasswordResetConfirmView):
71 template_name = "accounts/users/password_reset_confirm.html"
72 success_url = reverse_lazy("accounts_password_reset_complete")
73
74
75 class PasswordResetCompleteView(auth_views.PasswordResetCompleteView):
76 template_name = "accounts/users/password_reset_complete.html"
77
78
79 class SettingsView(LoginRequiredMixin, UpdateView):
80 """A form view to edit Profile"""
81
82 login_url = "accounts_login"
83 form_class = ProfileEditForm
84 success_url = reverse_lazy("accounts_settings")
85 template_name = "accounts/update_settings.html"
86
87 def get_object(self, queryset=None):
88 return Profile.objects.get(user=self.request.user)
89
90 def get_initial(self):
91 profile = Profile.objects.get(user=self.request.user)
92 self.initial.update(
93 {
94 "username": profile.user.username,
95 "email": profile.user.email,
96 "first_name": profile.first_name or None,
97 "last_name": profile.last_name or None,
98 "about_me": profile.about_me or None,
99 }
100 )
101 return super(SettingsView, self).get_initial()
102
103
104 class ProfileActivationView(View):
105 """
106 This shows different views to the user when they are verifying
107 their account based on whether they are already verified or not.
108 """
109
110 def get(self, request, uidb64, token):
111
112 try:
113 uid = force_str(urlsafe_base64_decode(uidb64))
114 user = get_user_model().objects.get(pk=uid)
115
116 except (TypeError, ValueError, OverflowError, get_user_model().DoesNotExist):
117 user = None
118
119 if user is not None and account_activation_token.check_token(user, token):
120 profile = user.profile
121 if profile.is_verified:
122 redirect_link = {"href": "/", "label": "Back to Main"}
123 template_var = {
124 "title": "Email Already Verified",
125 "content": "You have already verified your email",
126 "link": redirect_link,
127 }
128 else:
129 profile.is_verified = True
130 profile.save()
131
132 redirect_link = {"href": "/", "label": "Back to Main"}
133 template_var = {
134 "title": "Email Verification Successful",
135 "content": "Thank you for verifying your email with CiviWiki",
136 "link": redirect_link,
137 }
138 else:
139 # invalid link
140 redirect_link = {"href": "/", "label": "Back to Main"}
141 template_var = {
142 "title": "Email Verification Error",
143 "content": "Email could not be verified",
144 "link": redirect_link,
145 }
146
147 return TemplateResponse(request, "general_message.html", template_var)
148
149
150 class ProfileSetupView(LoginRequiredMixin, View):
151 """A view to make the user profile full_profile"""
152
153 login_url = "accounts_login"
154
155 def get(self, request):
156 profile = Profile.objects.get(user=request.user)
157 if profile.full_profile:
158 return HttpResponseRedirect("/")
159 # start temp rep rendering TODO: REMOVE THIS
160 else:
161 data = {
162 "username": request.user.username,
163 "email": request.user.email,
164 }
165 return TemplateResponse(request, "accounts/user-setup.html", data)
166
167
168 @login_required
169 @full_profile
170 def user_profile(request, username=None):
171 if request.method == "GET":
172 if not username:
173 return HttpResponseRedirect(f"/profile/{request.user}")
174 else:
175 is_owner = username == request.user.username
176 try:
177 user = get_user_model().objects.get(username=username)
178 except get_user_model().DoesNotExist:
179 return HttpResponseRedirect("/404")
180
181 form = ProfileEditForm(
182 initial={
183 "username": user.username,
184 "email": user.email,
185 "first_name": user.profile.first_name or None,
186 "last_name": user.profile.last_name or None,
187 "about_me": user.profile.about_me or None,
188 },
189 readonly=True,
190 )
191 data = {
192 "username": user,
193 "profile_image_form": UpdateProfileImage,
194 "form": form if is_owner else None,
195 "readonly": True,
196 }
197 return TemplateResponse(request, "account.html", data)
198
[end of project/accounts/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/project/accounts/urls/urls.py b/project/accounts/urls/urls.py
--- a/project/accounts/urls/urls.py
+++ b/project/accounts/urls/urls.py
@@ -9,7 +9,7 @@
PasswordResetConfirmView,
PasswordResetCompleteView,
ProfileSetupView,
- user_profile,
+ UserProfileView,
)
urlpatterns = [
@@ -22,7 +22,7 @@
path("register/", RegisterView.as_view(), name="accounts_register"),
path("settings/", SettingsView.as_view(), name="accounts_settings"),
path("setup/", ProfileSetupView.as_view(), name="accounts_profile_setup"),
- path("profile/<str:username>/", user_profile, name="profile"),
+ path("profile/<str:username>/", UserProfileView.as_view(), name="profile"),
path(
"activate_account/<uidb64>/<token>/",
ProfileActivationView.as_view(),
diff --git a/project/accounts/views.py b/project/accounts/views.py
--- a/project/accounts/views.py
+++ b/project/accounts/views.py
@@ -4,7 +4,7 @@
This module will include views for the accounts app.
"""
-from core.custom_decorators import full_profile, login_required
+from core.custom_decorators import full_profile
from django.conf import settings
from django.contrib.auth import get_user_model, login
from django.contrib.auth import views as auth_views
@@ -15,6 +15,7 @@
from django.urls import reverse_lazy
from django.utils.encoding import force_str
from django.utils.http import urlsafe_base64_decode
+from django.utils.decorators import method_decorator
from django.views import View
from django.views.generic.edit import FormView, UpdateView
@@ -165,16 +166,18 @@
return TemplateResponse(request, "accounts/user-setup.html", data)
-@login_required
-@full_profile
-def user_profile(request, username=None):
- if request.method == "GET":
+class UserProfileView(LoginRequiredMixin, View):
+ """A view that shows profile for authorized users"""
+
+ @method_decorator(full_profile)
+ def get(self, request, username=None):
if not username:
return HttpResponseRedirect(f"/profile/{request.user}")
else:
is_owner = username == request.user.username
try:
user = get_user_model().objects.get(username=username)
+
except get_user_model().DoesNotExist:
return HttpResponseRedirect("/404")
| {"golden_diff": "diff --git a/project/accounts/urls/urls.py b/project/accounts/urls/urls.py\n--- a/project/accounts/urls/urls.py\n+++ b/project/accounts/urls/urls.py\n@@ -9,7 +9,7 @@\n PasswordResetConfirmView,\n PasswordResetCompleteView,\n ProfileSetupView,\n- user_profile,\n+ UserProfileView,\n )\n \n urlpatterns = [\n@@ -22,7 +22,7 @@\n path(\"register/\", RegisterView.as_view(), name=\"accounts_register\"),\n path(\"settings/\", SettingsView.as_view(), name=\"accounts_settings\"),\n path(\"setup/\", ProfileSetupView.as_view(), name=\"accounts_profile_setup\"),\n- path(\"profile/<str:username>/\", user_profile, name=\"profile\"),\n+ path(\"profile/<str:username>/\", UserProfileView.as_view(), name=\"profile\"),\n path(\n \"activate_account/<uidb64>/<token>/\",\n ProfileActivationView.as_view(),\ndiff --git a/project/accounts/views.py b/project/accounts/views.py\n--- a/project/accounts/views.py\n+++ b/project/accounts/views.py\n@@ -4,7 +4,7 @@\n This module will include views for the accounts app.\n \"\"\"\n \n-from core.custom_decorators import full_profile, login_required\n+from core.custom_decorators import full_profile\n from django.conf import settings\n from django.contrib.auth import get_user_model, login\n from django.contrib.auth import views as auth_views\n@@ -15,6 +15,7 @@\n from django.urls import reverse_lazy\n from django.utils.encoding import force_str\n from django.utils.http import urlsafe_base64_decode\n+from django.utils.decorators import method_decorator\n from django.views import View\n from django.views.generic.edit import FormView, UpdateView\n \n@@ -165,16 +166,18 @@\n return TemplateResponse(request, \"accounts/user-setup.html\", data)\n \n \n-@login_required\n-@full_profile\n-def user_profile(request, username=None):\n- if request.method == \"GET\":\n+class UserProfileView(LoginRequiredMixin, View):\n+ \"\"\"A view that shows profile for authorized users\"\"\"\n+\n+ @method_decorator(full_profile)\n+ def get(self, request, username=None):\n if not username:\n return HttpResponseRedirect(f\"/profile/{request.user}\")\n else:\n is_owner = username == request.user.username\n try:\n user = get_user_model().objects.get(username=username)\n+\n except get_user_model().DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n", "issue": "Refactor function-based view to class-based\n### Idea summary\n\nRewrite function-based view to class-based and add docstring\n\n### Further details\n\nIn the accounts app's view, there is a function-based view called user_profile: \r\n```python\r\n@login_required\r\n@full_profile\r\ndef user_profile(request, username=None):\r\n if request.method == \"GET\":\r\n if not username:\r\n return HttpResponseRedirect(f\"/profile/{request.user}\")\r\n else:\r\n is_owner = username == request.user.username\r\n try:\r\n user = get_user_model().objects.get(username=username)\r\n except get_user_model().DoesNotExist:\r\n return HttpResponseRedirect(\"/404\")\r\n\r\n form = ProfileEditForm(\r\n initial={\r\n \"username\": user.username,\r\n \"email\": user.email,\r\n \"first_name\": user.profile.first_name or None,\r\n \"last_name\": user.profile.last_name or None,\r\n \"about_me\": user.profile.about_me or None,\r\n },\r\n readonly=True,\r\n )\r\n data = {\r\n \"username\": user,\r\n \"profile_image_form\": UpdateProfileImage,\r\n \"form\": form if is_owner else None,\r\n \"readonly\": True,\r\n }\r\n return TemplateResponse(request, \"account.html\", data)\r\n```\r\nAll views in this file are created using classes, so I think it will be great to 
rewrite this view for better sticking to the code style. Moreover, this view has no docstring. I'm new to contributing to Open Source. I want to solve this issue, and I know-how. \n", "before_files": [{"content": "from django.urls import path\nfrom django.contrib.auth import views as auth_views\nfrom accounts.views import (\n RegisterView,\n SettingsView,\n ProfileActivationView,\n PasswordResetView,\n PasswordResetDoneView,\n PasswordResetConfirmView,\n PasswordResetCompleteView,\n ProfileSetupView,\n user_profile,\n)\n\nurlpatterns = [\n path(\n \"login/\",\n auth_views.LoginView.as_view(template_name=\"accounts/register/login.html\"),\n name=\"accounts_login\",\n ),\n path(\"logout/\", auth_views.LogoutView.as_view(), name=\"accounts_logout\"),\n path(\"register/\", RegisterView.as_view(), name=\"accounts_register\"),\n path(\"settings/\", SettingsView.as_view(), name=\"accounts_settings\"),\n path(\"setup/\", ProfileSetupView.as_view(), name=\"accounts_profile_setup\"),\n path(\"profile/<str:username>/\", user_profile, name=\"profile\"),\n path(\n \"activate_account/<uidb64>/<token>/\",\n ProfileActivationView.as_view(),\n name=\"accounts_activate\",\n ),\n path(\n \"accounts/password_reset/\",\n PasswordResetView.as_view(),\n name=\"accounts_password_reset\",\n ),\n path(\n \"accounts/password_reset_done/\",\n PasswordResetDoneView.as_view(),\n name=\"accounts_password_reset_done\",\n ),\n path(\n \"accounts/password_reset_confirm/<uidb64>/<token>/\",\n PasswordResetConfirmView.as_view(),\n name=\"accounts_password_reset_confirm\",\n ),\n path(\n \"accounts/password_reset_complete/\",\n PasswordResetCompleteView.as_view(),\n name=\"accounts_password_reset_complete\",\n ),\n]\n", "path": "project/accounts/urls/urls.py"}, {"content": "\"\"\"\nClass based views.\n\nThis module will include views for the accounts app.\n\"\"\"\n\nfrom core.custom_decorators import full_profile, login_required\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model, login\nfrom django.contrib.auth import views as auth_views\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.sites.shortcuts import get_current_site\nfrom django.http import HttpResponseRedirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse_lazy\nfrom django.utils.encoding import force_str\nfrom django.utils.http import urlsafe_base64_decode\nfrom django.views import View\nfrom django.views.generic.edit import FormView, UpdateView\n\nfrom accounts.authentication import account_activation_token, send_activation_email\nfrom accounts.forms import ProfileEditForm, UpdateProfileImage, UserRegistrationForm\nfrom accounts.models import Profile\n\n\nclass RegisterView(FormView):\n \"\"\"\n A form view that handles user registration.\n \"\"\"\n\n template_name = \"accounts/register/register.html\"\n form_class = UserRegistrationForm\n success_url = \"/\"\n\n def _create_user(self, form):\n username = form.cleaned_data[\"username\"]\n password = form.cleaned_data[\"password\"]\n email = form.cleaned_data[\"email\"]\n user = get_user_model().objects.create_user(username, email, password)\n return user\n\n def _send_email(self, user):\n domain = get_current_site(self.request).domain\n send_activation_email(user, domain)\n\n def _login(self, user):\n login(self.request, user)\n\n def form_valid(self, form):\n user = self._create_user(form)\n\n self._send_email(user)\n self._login(user)\n\n return super(RegisterView, self).form_valid(form)\n\n\nclass 
PasswordResetView(auth_views.PasswordResetView):\n template_name = \"accounts/users/password_reset.html\"\n email_template_name = \"accounts/users/password_reset_email.html\"\n subject_template_name = \"accounts/users/password_reset_subject.txt\"\n from_email = settings.EMAIL_HOST_USER\n success_url = reverse_lazy(\"accounts_password_reset_done\")\n\n\nclass PasswordResetDoneView(auth_views.PasswordResetDoneView):\n template_name = \"accounts/users/password_reset_done.html\"\n\n\nclass PasswordResetConfirmView(auth_views.PasswordResetConfirmView):\n template_name = \"accounts/users/password_reset_confirm.html\"\n success_url = reverse_lazy(\"accounts_password_reset_complete\")\n\n\nclass PasswordResetCompleteView(auth_views.PasswordResetCompleteView):\n template_name = \"accounts/users/password_reset_complete.html\"\n\n\nclass SettingsView(LoginRequiredMixin, UpdateView):\n \"\"\"A form view to edit Profile\"\"\"\n\n login_url = \"accounts_login\"\n form_class = ProfileEditForm\n success_url = reverse_lazy(\"accounts_settings\")\n template_name = \"accounts/update_settings.html\"\n\n def get_object(self, queryset=None):\n return Profile.objects.get(user=self.request.user)\n\n def get_initial(self):\n profile = Profile.objects.get(user=self.request.user)\n self.initial.update(\n {\n \"username\": profile.user.username,\n \"email\": profile.user.email,\n \"first_name\": profile.first_name or None,\n \"last_name\": profile.last_name or None,\n \"about_me\": profile.about_me or None,\n }\n )\n return super(SettingsView, self).get_initial()\n\n\nclass ProfileActivationView(View):\n \"\"\"\n This shows different views to the user when they are verifying\n their account based on whether they are already verified or not.\n \"\"\"\n\n def get(self, request, uidb64, token):\n\n try:\n uid = force_str(urlsafe_base64_decode(uidb64))\n user = get_user_model().objects.get(pk=uid)\n\n except (TypeError, ValueError, OverflowError, get_user_model().DoesNotExist):\n user = None\n\n if user is not None and account_activation_token.check_token(user, token):\n profile = user.profile\n if profile.is_verified:\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Already Verified\",\n \"content\": \"You have already verified your email\",\n \"link\": redirect_link,\n }\n else:\n profile.is_verified = True\n profile.save()\n\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Verification Successful\",\n \"content\": \"Thank you for verifying your email with CiviWiki\",\n \"link\": redirect_link,\n }\n else:\n # invalid link\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Verification Error\",\n \"content\": \"Email could not be verified\",\n \"link\": redirect_link,\n }\n\n return TemplateResponse(request, \"general_message.html\", template_var)\n\n\nclass ProfileSetupView(LoginRequiredMixin, View):\n \"\"\"A view to make the user profile full_profile\"\"\"\n\n login_url = \"accounts_login\"\n\n def get(self, request):\n profile = Profile.objects.get(user=request.user)\n if profile.full_profile:\n return HttpResponseRedirect(\"/\")\n # start temp rep rendering TODO: REMOVE THIS\n else:\n data = {\n \"username\": request.user.username,\n \"email\": request.user.email,\n }\n return TemplateResponse(request, \"accounts/user-setup.html\", data)\n\n\n@login_required\n@full_profile\ndef user_profile(request, username=None):\n if request.method == \"GET\":\n if 
not username:\n return HttpResponseRedirect(f\"/profile/{request.user}\")\n else:\n is_owner = username == request.user.username\n try:\n user = get_user_model().objects.get(username=username)\n except get_user_model().DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n\n form = ProfileEditForm(\n initial={\n \"username\": user.username,\n \"email\": user.email,\n \"first_name\": user.profile.first_name or None,\n \"last_name\": user.profile.last_name or None,\n \"about_me\": user.profile.about_me or None,\n },\n readonly=True,\n )\n data = {\n \"username\": user,\n \"profile_image_form\": UpdateProfileImage,\n \"form\": form if is_owner else None,\n \"readonly\": True,\n }\n return TemplateResponse(request, \"account.html\", data)\n", "path": "project/accounts/views.py"}]} | 3,121 | 527 |
gh_patches_debug_22239 | rasdani/github-patches | git_diff | dotkom__onlineweb4-2359 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Attendee list bug when adding or removing users
<!-- If this is a security issue or information leakage (having access to something you (probably) shouldn't), please send an email rather than opening a public issue. -->
## What kind of an issue is this?
- Bug report
## What is the expected behaviour?
Every box and number should sit under its corresponding column; the list should simply look normal. Even if we remove or add users through the dashboard, the layout should not change.

## What is the current behaviour?
If you remove or add a user through the dashboard menu on the attendee list, it will look like the screenshot above: two stray boxes appear, the x for removing users is pushed all the way to the right, and the text "none" occupies the remove column. If you refresh the site it goes back to the expected behaviour; the breakage only appears right after deleting or adding a user.

<!-- if this is a bug report -->
## How do you reproduce this problem?
Remove or add a user to the attendee list manually.
<!-- if this is a bug report -->
<!-- provide steps to reproduce this problem, preferably in a bullet point list -->
1. go to the attendee list
2. Add a user by writing their name OR remove a user from the list
## Other information
This might be a bug that I didn't catch when I added "year of study" to the attendee list. I'm not sure if this was an issue before, but since it hasn't been brought up, I will assume it comes from that pull request of mine. (A sketch of the fix idea follows this issue block.)
</issue>
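For orientation before the code: the broken layout comes from the add/remove AJAX handlers returning attendee rows with fewer fields than the attendee table renders, so the client-side rebuild shifts the columns. Below is a minimal sketch of the idea behind the fix, with field names taken from the patch later in this record; `user.year`, `get_payment_deadline()` and the payment/extras flags are the project's own attributes and helpers, not new API.

```python
def _serialize_attendee(number, attendee):
    """Build one table row with every column the attendee list template shows,
    so rows rebuilt from the AJAX response line up with the server-rendered page."""
    return {
        "number": number + 1,
        "id": attendee.id,
        "first_name": attendee.user.first_name,
        "last_name": attendee.user.last_name,
        "year_of_study": attendee.user.year,
        "paid": attendee.paid,
        "payment_deadline": attendee.get_payment_deadline(),
        "extras": str(attendee.extras),
        "attended": attendee.attended,
    }
```

The patch also has the event context report `is_payment_event` and `has_extras`, so the template can decide which of these columns to render at all.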
<code>
[start of apps/events/dashboard/utils.py]
1 # -*- coding: utf-8 -*-
2 from django.urls import reverse
3
4 from apps.authentication.models import OnlineUser as User
5 from apps.events.models import Attendee, Event
6
7
8 def _get_attendee(attendee_id):
9 try:
10 return Attendee.objects.get(pk=attendee_id)
11 except Attendee.DoesNotExist:
12 return None
13
14
15 def event_ajax_handler(event: Event, request):
16 action = request.POST.get('action')
17 administrating_user = request.user
18 attendee_id = request.POST.get('attendee_id')
19 user_id = request.POST.get('user_id')
20
21 if action == 'attended':
22 attendee = _get_attendee(attendee_id)
23 if not attendee:
24 return {'message': f'Fant ingen påmeldte med oppgitt ID ({attendee_id}).', 'status': 400}
25 return handle_attended(attendee)
26 elif action == 'paid':
27 attendee = _get_attendee(attendee_id)
28 if not attendee:
29 return {'message': f'Fant ingen påmeldte med oppgitt ID ({attendee_id}).', 'status': 400}
30 return handle_paid(attendee)
31 elif action == 'add_attendee':
32 return handle_add_attendee(event, user_id)
33 elif action == 'remove_attendee':
34 return handle_remove_attendee(event, attendee_id, administrating_user)
35 else:
36 raise NotImplementedError
37
38
39 def handle_attended(attendee: Attendee):
40 """
41 Toggle attending-status of an attendee between attending and not attending
42 """
43 attendee.attended = not attendee.attended
44 attendee.save()
45
46 return {'message': 'OK', 'status': 200}
47
48
49 def handle_paid(attendee: Attendee):
50 """
51 Toggle paid status of an attendee between paid and not paid
52 """
53 attendee.paid = not attendee.paid
54 attendee.save()
55
56 return {'message': 'OK', 'status': 200}
57
58
59 def _get_attendee_data(attendee_qs):
60 attendees = []
61
62 for number, a in enumerate(attendee_qs):
63 attendees.append({
64 'number': number + 1,
65 'id': a.id,
66 'first_name': a.user.first_name,
67 'last_name': a.user.last_name,
68 'paid': a.paid,
69 'extras': str(a.extras),
70 'attended': a.attended,
71 'link': reverse('dashboard_attendee_details', kwargs={'attendee_id': a.id})
72 })
73
74 return attendees
75
76
77 def _get_event_context(event: Event, response={}):
78 response['attendees'] = _get_attendee_data(event.attendance_event.attending_attendees_qs)
79 response['waitlist'] = _get_attendee_data(event.attendance_event.waitlist_qs)
80
81 return response
82
83
84 def handle_add_attendee(event: Event, user_id: int):
85 resp = _get_event_context(event)
86 if event.attendance_event.number_of_seats_taken >= event.attendance_event.max_capacity:
87 if not event.attendance_event.waitlist:
88 return {'message': f'Det er ingen ledige plasser på {event.title}.', 'status': 400, **resp}
89
90 user = User.objects.filter(pk=user_id)
91 if user.count() != 1:
92 return {'message': f'Fant ingen bruker med oppgitt ID ({user_id}).', 'status': 400, **resp}
93 user = user[0]
94 if Attendee.objects.filter(user=user, event=event.attendance_event).count() != 0:
95 return {'message': f'{user} er allerede påmeldt {event.title}.', 'status': 400, **resp}
96
97 attendee = Attendee(user=user, event=event.attendance_event)
98 attendee.save()
99
100 resp = _get_event_context(event, resp)
101 return {'message': f'{user} ble meldt på {event}', 'status': 200, **resp}
102
103
104 def handle_remove_attendee(event: Event, attendee_id: int, admin_user: User):
105 resp = _get_event_context(event)
106 attendee = Attendee.objects.filter(pk=attendee_id)
107 if attendee.count() != 1:
108 return {'message': f'Fant ingen påmeldte med oppgitt ID ({attendee_id}).', 'status': 400, **resp}
109 attendee = attendee[0]
110 attendee.unattend(admin_user)
111
112 resp = _get_event_context(event, resp)
113 return {'message': f'{attendee.user} ble fjernet fra {attendee.event}', 'status': 200, **resp}
114
[end of apps/events/dashboard/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/events/dashboard/utils.py b/apps/events/dashboard/utils.py
--- a/apps/events/dashboard/utils.py
+++ b/apps/events/dashboard/utils.py
@@ -65,7 +65,9 @@
'id': a.id,
'first_name': a.user.first_name,
'last_name': a.user.last_name,
+ 'year_of_study': a.user.year,
'paid': a.paid,
+ 'payment_deadline': a.get_payment_deadline(),
'extras': str(a.extras),
'attended': a.attended,
'link': reverse('dashboard_attendee_details', kwargs={'attendee_id': a.id})
@@ -77,6 +79,8 @@
def _get_event_context(event: Event, response={}):
response['attendees'] = _get_attendee_data(event.attendance_event.attending_attendees_qs)
response['waitlist'] = _get_attendee_data(event.attendance_event.waitlist_qs)
+ response['is_payment_event'] = bool(event.attendance_event.payment())
+ response['has_extras'] = event.attendance_event.has_extras
return response
| {"golden_diff": "diff --git a/apps/events/dashboard/utils.py b/apps/events/dashboard/utils.py\n--- a/apps/events/dashboard/utils.py\n+++ b/apps/events/dashboard/utils.py\n@@ -65,7 +65,9 @@\n 'id': a.id,\n 'first_name': a.user.first_name,\n 'last_name': a.user.last_name,\n+ 'year_of_study': a.user.year,\n 'paid': a.paid,\n+ 'payment_deadline': a.get_payment_deadline(),\n 'extras': str(a.extras),\n 'attended': a.attended,\n 'link': reverse('dashboard_attendee_details', kwargs={'attendee_id': a.id})\n@@ -77,6 +79,8 @@\n def _get_event_context(event: Event, response={}):\n response['attendees'] = _get_attendee_data(event.attendance_event.attending_attendees_qs)\n response['waitlist'] = _get_attendee_data(event.attendance_event.waitlist_qs)\n+ response['is_payment_event'] = bool(event.attendance_event.payment())\n+ response['has_extras'] = event.attendance_event.has_extras\n \n return response\n", "issue": "Atendee list bug when adding or removing users\n<!-- If this is a security issue or information leakage (having access to something you (probably) shouldn't), please send an email rather than opening a public issue. -->\r\n\r\n## What kind of an issue is this?\r\n\r\n- Bug report\r\n\r\n\r\n## What is the expected behaviour?\r\nIt should look normal where every box and number is under its corresponding column. It should just look normal. Even if we remove users or add users through the dashboard, nothing should change.\r\n\r\n\r\n\r\n\r\n## What is the current behaviour?\r\nIf you remove or add a user through the dashboard menu at the attendee list it will look like the screenshot above. We have randomly two boxes, the x for removing users is all the way to the right and the text \"none\" is occupying the remove column. If you refresh the site it will go back to expected behaviour, its only after deleteing/adding a user\r\n\r\n\r\n<!-- if this is a bug report -->\r\n\r\n\r\n## How do you reproduce this problem? \r\nRemove or add a user to the attendee list manually.\r\n<!-- if this is a bug report -->\r\n<!-- provide steps to reproduce this problem, preferably in a bullet point list -->\r\n1. go to the attendee list\r\n2. Add a user by writing their name OR remove a user from the list\r\n## Other information\r\n\r\nThis might be a bug which I didn't catch when I added \"year of study\" to the attendee list. 
I'm not sure if this was an issue before, but since it hasn't been brought up I will assume this is a bug from that pull request of mine\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom django.urls import reverse\n\nfrom apps.authentication.models import OnlineUser as User\nfrom apps.events.models import Attendee, Event\n\n\ndef _get_attendee(attendee_id):\n try:\n return Attendee.objects.get(pk=attendee_id)\n except Attendee.DoesNotExist:\n return None\n\n\ndef event_ajax_handler(event: Event, request):\n action = request.POST.get('action')\n administrating_user = request.user\n attendee_id = request.POST.get('attendee_id')\n user_id = request.POST.get('user_id')\n\n if action == 'attended':\n attendee = _get_attendee(attendee_id)\n if not attendee:\n return {'message': f'Fant ingen p\u00e5meldte med oppgitt ID ({attendee_id}).', 'status': 400}\n return handle_attended(attendee)\n elif action == 'paid':\n attendee = _get_attendee(attendee_id)\n if not attendee:\n return {'message': f'Fant ingen p\u00e5meldte med oppgitt ID ({attendee_id}).', 'status': 400}\n return handle_paid(attendee)\n elif action == 'add_attendee':\n return handle_add_attendee(event, user_id)\n elif action == 'remove_attendee':\n return handle_remove_attendee(event, attendee_id, administrating_user)\n else:\n raise NotImplementedError\n\n\ndef handle_attended(attendee: Attendee):\n \"\"\"\n Toggle attending-status of an attendee between attending and not attending\n \"\"\"\n attendee.attended = not attendee.attended\n attendee.save()\n\n return {'message': 'OK', 'status': 200}\n\n\ndef handle_paid(attendee: Attendee):\n \"\"\"\n Toggle paid status of an attendee between paid and not paid\n \"\"\"\n attendee.paid = not attendee.paid\n attendee.save()\n\n return {'message': 'OK', 'status': 200}\n\n\ndef _get_attendee_data(attendee_qs):\n attendees = []\n\n for number, a in enumerate(attendee_qs):\n attendees.append({\n 'number': number + 1,\n 'id': a.id,\n 'first_name': a.user.first_name,\n 'last_name': a.user.last_name,\n 'paid': a.paid,\n 'extras': str(a.extras),\n 'attended': a.attended,\n 'link': reverse('dashboard_attendee_details', kwargs={'attendee_id': a.id})\n })\n\n return attendees\n\n\ndef _get_event_context(event: Event, response={}):\n response['attendees'] = _get_attendee_data(event.attendance_event.attending_attendees_qs)\n response['waitlist'] = _get_attendee_data(event.attendance_event.waitlist_qs)\n\n return response\n\n\ndef handle_add_attendee(event: Event, user_id: int):\n resp = _get_event_context(event)\n if event.attendance_event.number_of_seats_taken >= event.attendance_event.max_capacity:\n if not event.attendance_event.waitlist:\n return {'message': f'Det er ingen ledige plasser p\u00e5 {event.title}.', 'status': 400, **resp}\n\n user = User.objects.filter(pk=user_id)\n if user.count() != 1:\n return {'message': f'Fant ingen bruker med oppgitt ID ({user_id}).', 'status': 400, **resp}\n user = user[0]\n if Attendee.objects.filter(user=user, event=event.attendance_event).count() != 0:\n return {'message': f'{user} er allerede p\u00e5meldt {event.title}.', 'status': 400, **resp}\n\n attendee = Attendee(user=user, event=event.attendance_event)\n attendee.save()\n\n resp = _get_event_context(event, resp)\n return {'message': f'{user} ble meldt p\u00e5 {event}', 'status': 200, **resp}\n\n\ndef handle_remove_attendee(event: Event, attendee_id: int, admin_user: User):\n resp = _get_event_context(event)\n attendee = Attendee.objects.filter(pk=attendee_id)\n if attendee.count() != 1:\n 
return {'message': f'Fant ingen p\u00e5meldte med oppgitt ID ({attendee_id}).', 'status': 400, **resp}\n attendee = attendee[0]\n attendee.unattend(admin_user)\n\n resp = _get_event_context(event, resp)\n return {'message': f'{attendee.user} ble fjernet fra {attendee.event}', 'status': 200, **resp}\n", "path": "apps/events/dashboard/utils.py"}]} | 2,282 | 247 |
gh_patches_debug_26033 | rasdani/github-patches | git_diff | cowrie__cowrie-1237 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ls -l` user/group names need justification
`ls -l` does not properly pad the user/group names
**To Reproduce**
Steps to reproduce the behaviour:
1. ssh into a cowrie instance
2. `ls -l` on a directory with more than one user/group
3. the user names and group names don't line up between files
**Expected behaviour**
Nice, justified columns of user and group names. (A short sketch of the padding approach follows this issue block.)
</issue>
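A minimal, standalone sketch of the usual fix for this kind of misalignment, and the approach the patch in this record takes: make a first pass over the listing to find the widest user name, group name, and size, then left-justify the names and right-justify the sizes to those widths. The tuple layout here is hypothetical and only illustrates the padding logic.

```python
def format_long_listing(rows):
    """rows: iterable of (perms, owner, group, size, mtime, name) tuples."""
    rows = list(rows)
    # First pass: measure the widest value in each variable-width column.
    owner_w = max((len(r[1]) for r in rows), default=0)
    group_w = max((len(r[2]) for r in rows), default=0)
    size_w = max((len(str(r[3])) for r in rows), default=0)
    # Second pass: pad every field to the measured width.
    lines = []
    for perms, owner, group, size, mtime, name in rows:
        lines.append("%s 1 %s %s %s %s %s" % (
            perms,
            owner.ljust(owner_w),
            group.ljust(group_w),
            str(size).rjust(size_w),
            mtime,
            name,
        ))
    return lines
```

In cowrie's `do_ls_l` this translates to computing the extents from `self.uid2name(...)` and `self.gid2name(...)` before the per-file loop, which is what the patch later in this record does.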
<code>
[start of src/cowrie/commands/ls.py]
1 # Copyright (c) 2009 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 from __future__ import absolute_import, division
5
6 import getopt
7 import os.path
8 import stat
9 import time
10
11 import cowrie.shell.fs as fs
12 from cowrie.shell.command import HoneyPotCommand
13 from cowrie.shell.pwd import Group, Passwd
14
15 commands = {}
16
17
18 class command_ls(HoneyPotCommand):
19
20 def uid2name(self, uid):
21 try:
22 return Passwd().getpwuid(uid)["pw_name"]
23 except Exception:
24 return str(uid)
25
26 def gid2name(self, gid):
27 try:
28 return Group().getgrgid(gid)["gr_name"]
29 except Exception:
30 return str(gid)
31
32 def call(self):
33 path = self.protocol.cwd
34 paths = []
35 self.showHidden = False
36 self.showDirectories = False
37 func = self.do_ls_normal
38
39 # Parse options or display no files
40 try:
41 opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',
42 ['help', 'version', 'param'])
43 except getopt.GetoptError as err:
44 self.write("ls: {}\n".format(err))
45 self.write("Try 'ls --help' for more information.\n")
46 return
47
48 for x, a in opts:
49 if x in ('-l'):
50 func = self.do_ls_l
51 if x in ('-a'):
52 self.showHidden = True
53 if x in ('-d'):
54 self.showDirectories = True
55
56 for arg in args:
57 paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))
58
59 if not paths:
60 func(path)
61 else:
62 for path in paths:
63 func(path)
64
65 def get_dir_files(self, path):
66 try:
67 if self.protocol.fs.isdir(path) and not self.showDirectories:
68 files = self.protocol.fs.get_path(path)[:]
69 if self.showHidden:
70 dot = self.protocol.fs.getfile(path)[:]
71 dot[fs.A_NAME] = '.'
72 files.append(dot)
73 dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]
74 if not dotdot:
75 dotdot = self.protocol.fs.getfile(path)[:]
76 dotdot[fs.A_NAME] = '..'
77 files.append(dotdot)
78 else:
79 files = [x for x in files if not x[fs.A_NAME].startswith('.')]
80 files.sort()
81 else:
82 files = (self.protocol.fs.getfile(path)[:],)
83 except Exception:
84 self.write(
85 'ls: cannot access %s: No such file or directory\n' % (path,))
86 return
87 return files
88
89 def do_ls_normal(self, path):
90 files = self.get_dir_files(path)
91
92 line = [x[fs.A_NAME] for x in files]
93 if not line:
94 return
95 count = 0
96 maxlen = max([len(x) for x in line])
97
98 try:
99 wincols = self.protocol.user.windowSize[1]
100 except AttributeError:
101 wincols = 80
102
103 perline = int(wincols / (maxlen + 1))
104 for f in line:
105 if count == perline:
106 count = 0
107 self.write('\n')
108 self.write(f.ljust(maxlen + 1))
109 count += 1
110 self.write('\n')
111
112 def do_ls_l(self, path):
113 files = self.get_dir_files(path)
114
115 largest = 0
116 if len(files):
117 largest = max([x[fs.A_SIZE] for x in files])
118
119 for file in files:
120 if file[fs.A_NAME].startswith('.') and not self.showHidden:
121 continue
122
123 perms = ['-'] * 10
124 if file[fs.A_MODE] & stat.S_IRUSR:
125 perms[1] = 'r'
126 if file[fs.A_MODE] & stat.S_IWUSR:
127 perms[2] = 'w'
128 if file[fs.A_MODE] & stat.S_IXUSR:
129 perms[3] = 'x'
130 if file[fs.A_MODE] & stat.S_ISUID:
131 perms[3] = 'S'
132 if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:
133 perms[3] = 's'
134
135 if file[fs.A_MODE] & stat.S_IRGRP:
136 perms[4] = 'r'
137 if file[fs.A_MODE] & stat.S_IWGRP:
138 perms[5] = 'w'
139 if file[fs.A_MODE] & stat.S_IXGRP:
140 perms[6] = 'x'
141 if file[fs.A_MODE] & stat.S_ISGID:
142 perms[6] = 'S'
143 if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:
144 perms[6] = 's'
145
146 if file[fs.A_MODE] & stat.S_IROTH:
147 perms[7] = 'r'
148 if file[fs.A_MODE] & stat.S_IWOTH:
149 perms[8] = 'w'
150 if file[fs.A_MODE] & stat.S_IXOTH:
151 perms[9] = 'x'
152 if file[fs.A_MODE] & stat.S_ISVTX:
153 perms[9] = 'T'
154 if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:
155 perms[9] = 't'
156
157 linktarget = ''
158
159 if file[fs.A_TYPE] == fs.T_DIR:
160 perms[0] = 'd'
161 elif file[fs.A_TYPE] == fs.T_LINK:
162 perms[0] = 'l'
163 linktarget = ' -> %s' % (file[fs.A_TARGET],)
164
165 perms = ''.join(perms)
166 ctime = time.localtime(file[fs.A_CTIME])
167
168 line = '%s 1 %s %s %s %s %s%s' % \
169 (perms,
170 self.uid2name(file[fs.A_UID]),
171 self.gid2name(file[fs.A_GID]),
172 str(file[fs.A_SIZE]).rjust(len(str(largest))),
173 time.strftime('%Y-%m-%d %H:%M', ctime),
174 file[fs.A_NAME],
175 linktarget)
176
177 self.write('{0}\n'.format(line))
178
179
180 commands['/bin/ls'] = command_ls
181 commands['ls'] = command_ls
182 commands['/bin/dir'] = command_ls
183 commands['dir'] = command_ls
184
[end of src/cowrie/commands/ls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cowrie/commands/ls.py b/src/cowrie/commands/ls.py
--- a/src/cowrie/commands/ls.py
+++ b/src/cowrie/commands/ls.py
@@ -112,9 +112,17 @@
def do_ls_l(self, path):
files = self.get_dir_files(path)
- largest = 0
+ filesize_str_extent = 0
if len(files):
- largest = max([x[fs.A_SIZE] for x in files])
+ filesize_str_extent = max([len(str(x[fs.A_SIZE])) for x in files])
+
+ user_name_str_extent = 0
+ if len(files):
+ user_name_str_extent = max([len(self.uid2name(x[fs.A_UID])) for x in files])
+
+ group_name_str_extent = 0
+ if len(files):
+ group_name_str_extent = max([len(self.gid2name(x[fs.A_GID])) for x in files])
for file in files:
if file[fs.A_NAME].startswith('.') and not self.showHidden:
@@ -167,9 +175,9 @@
line = '%s 1 %s %s %s %s %s%s' % \
(perms,
- self.uid2name(file[fs.A_UID]),
- self.gid2name(file[fs.A_GID]),
- str(file[fs.A_SIZE]).rjust(len(str(largest))),
+ self.uid2name(file[fs.A_UID]).ljust(user_name_str_extent),
+ self.gid2name(file[fs.A_GID]).ljust(group_name_str_extent),
+ str(file[fs.A_SIZE]).rjust(filesize_str_extent),
time.strftime('%Y-%m-%d %H:%M', ctime),
file[fs.A_NAME],
linktarget)
| {"golden_diff": "diff --git a/src/cowrie/commands/ls.py b/src/cowrie/commands/ls.py\n--- a/src/cowrie/commands/ls.py\n+++ b/src/cowrie/commands/ls.py\n@@ -112,9 +112,17 @@\n def do_ls_l(self, path):\n files = self.get_dir_files(path)\n \n- largest = 0\n+ filesize_str_extent = 0\n if len(files):\n- largest = max([x[fs.A_SIZE] for x in files])\n+ filesize_str_extent = max([len(str(x[fs.A_SIZE])) for x in files])\n+\n+ user_name_str_extent = 0\n+ if len(files):\n+ user_name_str_extent = max([len(self.uid2name(x[fs.A_UID])) for x in files])\n+\n+ group_name_str_extent = 0\n+ if len(files):\n+ group_name_str_extent = max([len(self.gid2name(x[fs.A_GID])) for x in files])\n \n for file in files:\n if file[fs.A_NAME].startswith('.') and not self.showHidden:\n@@ -167,9 +175,9 @@\n \n line = '%s 1 %s %s %s %s %s%s' % \\\n (perms,\n- self.uid2name(file[fs.A_UID]),\n- self.gid2name(file[fs.A_GID]),\n- str(file[fs.A_SIZE]).rjust(len(str(largest))),\n+ self.uid2name(file[fs.A_UID]).ljust(user_name_str_extent),\n+ self.gid2name(file[fs.A_GID]).ljust(group_name_str_extent),\n+ str(file[fs.A_SIZE]).rjust(filesize_str_extent),\n time.strftime('%Y-%m-%d %H:%M', ctime),\n file[fs.A_NAME],\n linktarget)\n", "issue": "`ls -l` user/group names need justification\n`ls -l` does not properly pad the user/group names\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behaviour:\r\n1. ssh into a cowrie instance\r\n2. `ls -l` on a directory with more than one user/group\r\n3. the user names and group names don't line up between files\r\n\r\n**Expected behaviour**\r\nNice justified columns of user/group names\r\n\n", "before_files": [{"content": "# Copyright (c) 2009 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\nfrom __future__ import absolute_import, division\n\nimport getopt\nimport os.path\nimport stat\nimport time\n\nimport cowrie.shell.fs as fs\nfrom cowrie.shell.command import HoneyPotCommand\nfrom cowrie.shell.pwd import Group, Passwd\n\ncommands = {}\n\n\nclass command_ls(HoneyPotCommand):\n\n def uid2name(self, uid):\n try:\n return Passwd().getpwuid(uid)[\"pw_name\"]\n except Exception:\n return str(uid)\n\n def gid2name(self, gid):\n try:\n return Group().getgrgid(gid)[\"gr_name\"]\n except Exception:\n return str(gid)\n\n def call(self):\n path = self.protocol.cwd\n paths = []\n self.showHidden = False\n self.showDirectories = False\n func = self.do_ls_normal\n\n # Parse options or display no files\n try:\n opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',\n ['help', 'version', 'param'])\n except getopt.GetoptError as err:\n self.write(\"ls: {}\\n\".format(err))\n self.write(\"Try 'ls --help' for more information.\\n\")\n return\n\n for x, a in opts:\n if x in ('-l'):\n func = self.do_ls_l\n if x in ('-a'):\n self.showHidden = True\n if x in ('-d'):\n self.showDirectories = True\n\n for arg in args:\n paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))\n\n if not paths:\n func(path)\n else:\n for path in paths:\n func(path)\n\n def get_dir_files(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n if self.showHidden:\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]\n if not dotdot:\n dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n files = [x for x in files if not 
x[fs.A_NAME].startswith('.')]\n files.sort()\n else:\n files = (self.protocol.fs.getfile(path)[:],)\n except Exception:\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n return files\n\n def do_ls_normal(self, path):\n files = self.get_dir_files(path)\n\n line = [x[fs.A_NAME] for x in files]\n if not line:\n return\n count = 0\n maxlen = max([len(x) for x in line])\n\n try:\n wincols = self.protocol.user.windowSize[1]\n except AttributeError:\n wincols = 80\n\n perline = int(wincols / (maxlen + 1))\n for f in line:\n if count == perline:\n count = 0\n self.write('\\n')\n self.write(f.ljust(maxlen + 1))\n count += 1\n self.write('\\n')\n\n def do_ls_l(self, path):\n files = self.get_dir_files(path)\n\n largest = 0\n if len(files):\n largest = max([x[fs.A_SIZE] for x in files])\n\n for file in files:\n if file[fs.A_NAME].startswith('.') and not self.showHidden:\n continue\n\n perms = ['-'] * 10\n if file[fs.A_MODE] & stat.S_IRUSR:\n perms[1] = 'r'\n if file[fs.A_MODE] & stat.S_IWUSR:\n perms[2] = 'w'\n if file[fs.A_MODE] & stat.S_IXUSR:\n perms[3] = 'x'\n if file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 'S'\n if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 's'\n\n if file[fs.A_MODE] & stat.S_IRGRP:\n perms[4] = 'r'\n if file[fs.A_MODE] & stat.S_IWGRP:\n perms[5] = 'w'\n if file[fs.A_MODE] & stat.S_IXGRP:\n perms[6] = 'x'\n if file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 'S'\n if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 's'\n\n if file[fs.A_MODE] & stat.S_IROTH:\n perms[7] = 'r'\n if file[fs.A_MODE] & stat.S_IWOTH:\n perms[8] = 'w'\n if file[fs.A_MODE] & stat.S_IXOTH:\n perms[9] = 'x'\n if file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 'T'\n if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 't'\n\n linktarget = ''\n\n if file[fs.A_TYPE] == fs.T_DIR:\n perms[0] = 'd'\n elif file[fs.A_TYPE] == fs.T_LINK:\n perms[0] = 'l'\n linktarget = ' -> %s' % (file[fs.A_TARGET],)\n\n perms = ''.join(perms)\n ctime = time.localtime(file[fs.A_CTIME])\n\n line = '%s 1 %s %s %s %s %s%s' % \\\n (perms,\n self.uid2name(file[fs.A_UID]),\n self.gid2name(file[fs.A_GID]),\n str(file[fs.A_SIZE]).rjust(len(str(largest))),\n time.strftime('%Y-%m-%d %H:%M', ctime),\n file[fs.A_NAME],\n linktarget)\n\n self.write('{0}\\n'.format(line))\n\n\ncommands['/bin/ls'] = command_ls\ncommands['ls'] = command_ls\ncommands['/bin/dir'] = command_ls\ncommands['dir'] = command_ls\n", "path": "src/cowrie/commands/ls.py"}]} | 2,548 | 416 |
gh_patches_debug_9897 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4931 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[1.1.0-rc4] "Unable to create virtualenv. Check network settings and try again"
(Tested on a Tails 3.16 Admin Workstation by checking out 1.1.0-rc4 tag, without updating my servers.)
As expected, running `securedrop-admin` commands triggered the "run setup" step. However, the `securedrop-admin setup` step itself did not complete successfully; it went pretty far along but finally failed with this error:
"Unable to create virtualenv. Check network settings and try again"
Tor seems to be working fine. This may be an intermittent issue, but it would be good to warn users about it and provide mitigation instructions, since it is likely to arise during updates. (A short sketch of the failing call pattern follows this issue block.)
</issue>
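For context before the code: on Tails the admin tooling wraps the `virtualenv` call in `torify` (see `envsetup()` in the file below), so a transient Tor or network hiccup during that single `subprocess` call is what surfaces as this error message. The following is a stripped-down sketch of that call pattern using only names visible in the file below; it is illustrative, not a proposed fix.

```python
import subprocess


def create_admin_venv(venv_dir, use_tor=True):
    """Create the admin virtualenv the same way bootstrap.py does on Tails."""
    cmd = (["torify"] if use_tor else []) + ["virtualenv", "--python=python3", venv_dir]
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError:
        # bootstrap.py logs "Unable to create virtualenv. Check network settings and try again."
        raise
```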
<code>
[start of admin/bootstrap.py]
1 # -*- mode: python; coding: utf-8 -*-
2 #
3 # Copyright (C) 2013-2018 Freedom of the Press Foundation & al
4 # Copyright (C) 2018 Loic Dachary <[email protected]>
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19
20 import argparse
21 import logging
22 import os
23 import shutil
24 import subprocess
25 import sys
26
27 sdlog = logging.getLogger(__name__)
28
29 DIR = os.path.dirname(os.path.realpath(__file__))
30 VENV_DIR = os.path.join(DIR, ".venv3")
31
32
33 def setup_logger(verbose=False):
34 """ Configure logging handler """
35 # Set default level on parent
36 sdlog.setLevel(logging.DEBUG)
37 level = logging.DEBUG if verbose else logging.INFO
38
39 stdout = logging.StreamHandler(sys.stdout)
40 stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
41 stdout.setLevel(level)
42 sdlog.addHandler(stdout)
43
44
45 def run_command(command):
46 """
47 Wrapper function to display stdout for running command,
48 similar to how shelling out in a Bash script displays rolling output.
49
50 Yields a list of the stdout from the `command`, and raises a
51 CalledProcessError if `command` returns non-zero.
52 """
53 popen = subprocess.Popen(command,
54 stdout=subprocess.PIPE,
55 stderr=subprocess.STDOUT)
56 for stdout_line in iter(popen.stdout.readline, b""):
57 yield stdout_line
58 popen.stdout.close()
59 return_code = popen.wait()
60 if return_code:
61 raise subprocess.CalledProcessError(return_code, command)
62
63
64 def is_tails():
65 try:
66 id = subprocess.check_output('lsb_release --id --short',
67 shell=True).strip()
68 except subprocess.CalledProcessError:
69 id = None
70
71 # dirty hack to unreliably detect Tails 4.0~beta2
72 if id == b'Debian':
73 if os.uname()[1] == 'amnesia':
74 id = 'Tails'
75
76 return id == 'Tails'
77
78
79 def clean_up_tails3_venv(virtualenv_dir=VENV_DIR):
80 """
81 Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is
82 based on Debian Buster and uses libpython3.7. This means that the Tails 3.x
83 virtualenv will not work under Tails 4.x, and will need to be destroyed and
84 rebuilt. We can detect if the version of libpython is 3.5 in the
85 admin/.venv3/ folder, and delete it if that's the case. This will ensure a
86 smooth upgrade from Tails 3.x to Tails 4.x.
87 """
88 if is_tails():
89 try:
90 dist = subprocess.check_output('lsb_release --codename --short',
91 shell=True).strip()
92 except subprocess.CalledProcessError:
93 dist = None
94
95 # tails4 is based on buster
96 if dist == b'buster':
97 python_lib_path = os.path.join(virtualenv_dir, "lib/python3.5")
98 if os.path.exists(os.path.join(python_lib_path)):
99 sdlog.info(
100 "Tails 3 Python 3 virtualenv detected. "
101 "Removing it."
102 )
103 shutil.rmtree(virtualenv_dir)
104 sdlog.info("Tails 3 Python 3 virtualenv deleted.")
105
106
107 def checkenv(args):
108 clean_up_tails3_venv(VENV_DIR)
109 if not os.path.exists(os.path.join(VENV_DIR, "bin/activate")):
110 sdlog.error('Please run "securedrop-admin setup".')
111 sys.exit(1)
112
113
114 def maybe_torify():
115 if is_tails():
116 return ['torify']
117 else:
118 return []
119
120
121 def install_apt_dependencies(args):
122 """
123 Install apt dependencies in Tails. In order to install Ansible in
124 a virtualenv, first there are a number of Python prerequisites.
125 """
126 sdlog.info("Installing SecureDrop Admin dependencies")
127 sdlog.info(("You'll be prompted for the temporary Tails admin password,"
128 " which was set on Tails login screen"))
129
130 apt_command = ['sudo', 'su', '-c',
131 "apt-get update && \
132 apt-get -q -o=Dpkg::Use-Pty=0 install -y \
133 python3-virtualenv \
134 python3-yaml \
135 python3-pip \
136 ccontrol \
137 virtualenv \
138 libffi-dev \
139 libssl-dev \
140 libpython3-dev",
141 ]
142
143 try:
144 # Print command results in real-time, to keep Admin apprised
145 # of progress during long-running command.
146 for output_line in run_command(apt_command):
147 print(output_line.decode('utf-8').rstrip())
148 except subprocess.CalledProcessError:
149 # Tails supports apt persistence, which was used by SecureDrop
150 # under Tails 2.x. If updates are being applied, don't try to pile
151 # on with more apt requests.
152 sdlog.error(("Failed to install apt dependencies. Check network"
153 " connection and try again."))
154 raise
155
156
157 def envsetup(args):
158 """Installs Admin tooling required for managing SecureDrop. Specifically:
159
160 * updates apt-cache
161 * installs apt packages for Python virtualenv
162 * creates virtualenv
163 * installs pip packages inside virtualenv
164
165 The virtualenv is created within the Persistence volume in Tails, so that
166 Ansible is available to the Admin on subsequent boots without requiring
167 installation of packages again.
168 """
169 # clean up tails 3.x venv when migrating to tails 4.x
170 clean_up_tails3_venv(VENV_DIR)
171
172 # virtualenv doesnt exist? Install dependencies and create
173 if not os.path.exists(VENV_DIR):
174
175 install_apt_dependencies(args)
176
177 # Technically you can create a virtualenv from within python
178 # but pip can only be run over tor on tails, and debugging that
179 # along with instaling a third-party dependency is not worth
180 # the effort here.
181 sdlog.info("Setting up virtualenv")
182 try:
183 sdlog.debug(subprocess.check_output(
184 maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],
185 stderr=subprocess.STDOUT))
186 except subprocess.CalledProcessError as e:
187 sdlog.debug(e.output)
188 sdlog.error(("Unable to create virtualenv. Check network settings"
189 " and try again."))
190 raise
191 else:
192 sdlog.info("Virtualenv already exists, not creating")
193
194 install_pip_dependencies(args)
195 if os.path.exists(os.path.join(DIR, 'setup.py')):
196 install_pip_self(args)
197
198 sdlog.info("Finished installing SecureDrop dependencies")
199
200
201 def install_pip_self(args):
202 pip_install_cmd = [
203 os.path.join(VENV_DIR, 'bin', 'pip3'),
204 'install', '-e', DIR
205 ]
206 try:
207 subprocess.check_output(maybe_torify() + pip_install_cmd,
208 stderr=subprocess.STDOUT)
209 except subprocess.CalledProcessError as e:
210 sdlog.debug(e.output)
211 sdlog.error("Unable to install self, run with -v for more information")
212 raise
213
214
215 def install_pip_dependencies(args, pip_install_cmd=[
216 os.path.join(VENV_DIR, 'bin', 'pip3'),
217 'install',
218 # Specify requirements file.
219 '-r', os.path.join(DIR, 'requirements.txt'),
220 '--require-hashes',
221 # Make sure to upgrade packages only if necessary.
222 '-U', '--upgrade-strategy', 'only-if-needed',
223 ]):
224 """
225 Install Python dependencies via pip into virtualenv.
226 """
227
228 sdlog.info("Checking Python dependencies for securedrop-admin")
229 try:
230 pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,
231 stderr=subprocess.STDOUT)
232 except subprocess.CalledProcessError as e:
233 sdlog.debug(e.output)
234 sdlog.error(("Failed to install pip dependencies. Check network"
235 " connection and try again."))
236 raise
237
238 sdlog.debug(pip_output)
239 if "Successfully installed" in str(pip_output):
240 sdlog.info("Python dependencies for securedrop-admin upgraded")
241 else:
242 sdlog.info("Python dependencies for securedrop-admin are up-to-date")
243
244
245 def parse_argv(argv):
246 parser = argparse.ArgumentParser()
247 parser.add_argument('-v', action='store_true', default=False,
248 help="Increase verbosity on output")
249 parser.set_defaults(func=envsetup)
250
251 subparsers = parser.add_subparsers()
252
253 envsetup_parser = subparsers.add_parser(
254 'envsetup',
255 help='Set up the admin virtualenv.'
256 )
257 envsetup_parser.set_defaults(func=envsetup)
258
259 checkenv_parser = subparsers.add_parser(
260 'checkenv',
261 help='Check that the admin virtualenv is properly set up.'
262 )
263 checkenv_parser.set_defaults(func=checkenv)
264
265 return parser.parse_args(argv)
266
267
268 if __name__ == "__main__":
269 args = parse_argv(sys.argv[1:])
270 setup_logger(args.v)
271
272 try:
273 args.func(args)
274 except Exception:
275 sys.exit(1)
276 else:
277 sys.exit(0)
278
[end of admin/bootstrap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/admin/bootstrap.py b/admin/bootstrap.py
--- a/admin/bootstrap.py
+++ b/admin/bootstrap.py
@@ -64,12 +64,12 @@
def is_tails():
try:
id = subprocess.check_output('lsb_release --id --short',
- shell=True).strip()
+ shell=True).decode('utf-8').strip()
except subprocess.CalledProcessError:
id = None
# dirty hack to unreliably detect Tails 4.0~beta2
- if id == b'Debian':
+ if id == 'Debian':
if os.uname()[1] == 'amnesia':
id = 'Tails'
| {"golden_diff": "diff --git a/admin/bootstrap.py b/admin/bootstrap.py\n--- a/admin/bootstrap.py\n+++ b/admin/bootstrap.py\n@@ -64,12 +64,12 @@\n def is_tails():\n try:\n id = subprocess.check_output('lsb_release --id --short',\n- shell=True).strip()\n+ shell=True).decode('utf-8').strip()\n except subprocess.CalledProcessError:\n id = None\n \n # dirty hack to unreliably detect Tails 4.0~beta2\n- if id == b'Debian':\n+ if id == 'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n", "issue": "[1.1.0-rc4] \"Unable to create virtualenv. Check network settings and try again\"\n(Tested on a Tails 3.16 Admin Workstation by checking out 1.1.0-rc4 tag, without updating my servers.)\r\n\r\nAs expected, running `securedrop-admin` commands triggered the \"run setup\" step. However, the `securedrop-admin setup` step itself did not complete successfully; it went pretty far along but finally failed with this error:\r\n\r\n\"Unable to create virtualenv. Check network settings and try again\"\r\n\r\nTor seems to be working fine. Possibly intermittent issues but good to warn users about and have mitigation instructions if it is likely to arise during updates.\n", "before_files": [{"content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n\nimport argparse\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nsdlog = logging.getLogger(__name__)\n\nDIR = os.path.dirname(os.path.realpath(__file__))\nVENV_DIR = os.path.join(DIR, \".venv3\")\n\n\ndef setup_logger(verbose=False):\n \"\"\" Configure logging handler \"\"\"\n # Set default level on parent\n sdlog.setLevel(logging.DEBUG)\n level = logging.DEBUG if verbose else logging.INFO\n\n stdout = logging.StreamHandler(sys.stdout)\n stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))\n stdout.setLevel(level)\n sdlog.addHandler(stdout)\n\n\ndef run_command(command):\n \"\"\"\n Wrapper function to display stdout for running command,\n similar to how shelling out in a Bash script displays rolling output.\n\n Yields a list of the stdout from the `command`, and raises a\n CalledProcessError if `command` returns non-zero.\n \"\"\"\n popen = subprocess.Popen(command,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n for stdout_line in iter(popen.stdout.readline, b\"\"):\n yield stdout_line\n popen.stdout.close()\n return_code = popen.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, command)\n\n\ndef is_tails():\n try:\n id = subprocess.check_output('lsb_release --id --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n id = None\n\n # dirty hack to unreliably detect Tails 4.0~beta2\n if id == b'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n\n return id == 'Tails'\n\n\ndef clean_up_tails3_venv(virtualenv_dir=VENV_DIR):\n \"\"\"\n Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is\n based on Debian Buster and uses libpython3.7. This means that the Tails 3.x\n virtualenv will not work under Tails 4.x, and will need to be destroyed and\n rebuilt. We can detect if the version of libpython is 3.5 in the\n admin/.venv3/ folder, and delete it if that's the case. This will ensure a\n smooth upgrade from Tails 3.x to Tails 4.x.\n \"\"\"\n if is_tails():\n try:\n dist = subprocess.check_output('lsb_release --codename --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n dist = None\n\n # tails4 is based on buster\n if dist == b'buster':\n python_lib_path = os.path.join(virtualenv_dir, \"lib/python3.5\")\n if os.path.exists(os.path.join(python_lib_path)):\n sdlog.info(\n \"Tails 3 Python 3 virtualenv detected. \"\n \"Removing it.\"\n )\n shutil.rmtree(virtualenv_dir)\n sdlog.info(\"Tails 3 Python 3 virtualenv deleted.\")\n\n\ndef checkenv(args):\n clean_up_tails3_venv(VENV_DIR)\n if not os.path.exists(os.path.join(VENV_DIR, \"bin/activate\")):\n sdlog.error('Please run \"securedrop-admin setup\".')\n sys.exit(1)\n\n\ndef maybe_torify():\n if is_tails():\n return ['torify']\n else:\n return []\n\n\ndef install_apt_dependencies(args):\n \"\"\"\n Install apt dependencies in Tails. 
In order to install Ansible in\n a virtualenv, first there are a number of Python prerequisites.\n \"\"\"\n sdlog.info(\"Installing SecureDrop Admin dependencies\")\n sdlog.info((\"You'll be prompted for the temporary Tails admin password,\"\n \" which was set on Tails login screen\"))\n\n apt_command = ['sudo', 'su', '-c',\n \"apt-get update && \\\n apt-get -q -o=Dpkg::Use-Pty=0 install -y \\\n python3-virtualenv \\\n python3-yaml \\\n python3-pip \\\n ccontrol \\\n virtualenv \\\n libffi-dev \\\n libssl-dev \\\n libpython3-dev\",\n ]\n\n try:\n # Print command results in real-time, to keep Admin apprised\n # of progress during long-running command.\n for output_line in run_command(apt_command):\n print(output_line.decode('utf-8').rstrip())\n except subprocess.CalledProcessError:\n # Tails supports apt persistence, which was used by SecureDrop\n # under Tails 2.x. If updates are being applied, don't try to pile\n # on with more apt requests.\n sdlog.error((\"Failed to install apt dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n\ndef envsetup(args):\n \"\"\"Installs Admin tooling required for managing SecureDrop. Specifically:\n\n * updates apt-cache\n * installs apt packages for Python virtualenv\n * creates virtualenv\n * installs pip packages inside virtualenv\n\n The virtualenv is created within the Persistence volume in Tails, so that\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n # clean up tails 3.x venv when migrating to tails 4.x\n clean_up_tails3_venv(VENV_DIR)\n\n # virtualenv doesnt exist? Install dependencies and create\n if not os.path.exists(VENV_DIR):\n\n install_apt_dependencies(args)\n\n # Technically you can create a virtualenv from within python\n # but pip can only be run over tor on tails, and debugging that\n # along with instaling a third-party dependency is not worth\n # the effort here.\n sdlog.info(\"Setting up virtualenv\")\n try:\n sdlog.debug(subprocess.check_output(\n maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],\n stderr=subprocess.STDOUT))\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Unable to create virtualenv. 
Check network settings\"\n \" and try again.\"))\n raise\n else:\n sdlog.info(\"Virtualenv already exists, not creating\")\n\n install_pip_dependencies(args)\n if os.path.exists(os.path.join(DIR, 'setup.py')):\n install_pip_self(args)\n\n sdlog.info(\"Finished installing SecureDrop dependencies\")\n\n\ndef install_pip_self(args):\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install', '-e', DIR\n ]\n try:\n subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error(\"Unable to install self, run with -v for more information\")\n raise\n\n\ndef install_pip_dependencies(args, pip_install_cmd=[\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install',\n # Specify requirements file.\n '-r', os.path.join(DIR, 'requirements.txt'),\n '--require-hashes',\n # Make sure to upgrade packages only if necessary.\n '-U', '--upgrade-strategy', 'only-if-needed',\n]):\n \"\"\"\n Install Python dependencies via pip into virtualenv.\n \"\"\"\n\n sdlog.info(\"Checking Python dependencies for securedrop-admin\")\n try:\n pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Failed to install pip dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n sdlog.debug(pip_output)\n if \"Successfully installed\" in str(pip_output):\n sdlog.info(\"Python dependencies for securedrop-admin upgraded\")\n else:\n sdlog.info(\"Python dependencies for securedrop-admin are up-to-date\")\n\n\ndef parse_argv(argv):\n parser = argparse.ArgumentParser()\n parser.add_argument('-v', action='store_true', default=False,\n help=\"Increase verbosity on output\")\n parser.set_defaults(func=envsetup)\n\n subparsers = parser.add_subparsers()\n\n envsetup_parser = subparsers.add_parser(\n 'envsetup',\n help='Set up the admin virtualenv.'\n )\n envsetup_parser.set_defaults(func=envsetup)\n\n checkenv_parser = subparsers.add_parser(\n 'checkenv',\n help='Check that the admin virtualenv is properly set up.'\n )\n checkenv_parser.set_defaults(func=checkenv)\n\n return parser.parse_args(argv)\n\n\nif __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n\n try:\n args.func(args)\n except Exception:\n sys.exit(1)\n else:\n sys.exit(0)\n", "path": "admin/bootstrap.py"}]} | 3,558 | 152 |
gh_patches_debug_2042 | rasdani/github-patches | git_diff | aws__aws-cli-357 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip install awscli fails
I tried `pip install awscli` from https://github.com/aws/aws-cli/blob/develop/README.rst and failed:
http://sprunge.us/NfbW
/home/hendry/.pip/pip.log = http://ix.io/7SC
Hilarious how bad Python packaging is. I'm running Archlinux with Python 3.3.2.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import sys
3
4 from setuptools import setup, find_packages
5
6 import awscli
7
8
9 requires = ['botocore>=0.16.0,<0.17.0',
10 'bcdoc>=0.9.0,<0.10.0',
11 'six>=1.1.0',
12 'colorama==0.2.5',
13 'docutils>=0.10',
14 'rsa==3.1.1']
15
16 if sys.version_info[:2] == (2, 6):
17 # For python2.6 we have to require argparse since it
18 # was not in stdlib until 2.7.
19 requires.append('argparse>=1.1')
20
21
22 setup_options = dict(
23 name='awscli',
24 version=awscli.__version__,
25 description='Universal Command Line Environment for AWS.',
26 long_description=open('README.rst').read(),
27 author='Mitch Garnaat',
28 author_email='[email protected]',
29 url='http://aws.amazon.com/cli/',
30 scripts=['bin/aws', 'bin/aws.cmd',
31 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],
32 packages=find_packages('.', exclude=['tests*']),
33 package_dir={'awscli': 'awscli'},
34 package_data={'awscli': ['data/*.json', 'examples/*/*']},
35 install_requires=requires,
36 license="Apache License 2.0",
37 classifiers=(
38 'Development Status :: 5 - Production/Stable',
39 'Intended Audience :: Developers',
40 'Intended Audience :: System Administrators',
41 'Natural Language :: English',
42 'License :: OSI Approved :: Apache Software License',
43 'Programming Language :: Python',
44 'Programming Language :: Python :: 2.6',
45 'Programming Language :: Python :: 2.7',
46 'Programming Language :: Python :: 3',
47 'Programming Language :: Python :: 3.3',
48 ),
49 )
50
51 if 'py2exe' in sys.argv:
52 # This will actually give us a py2exe command.
53 import py2exe
54 # And we have some py2exe specific options.
55 setup_options['options'] = {
56 'py2exe': {
57 'optimize': 0,
58 'skip_archive': True,
59 'includes': ['ConfigParser', 'urllib', 'httplib',
60 'docutils.readers.standalone',
61 'docutils.parsers.rst',
62 'docutils.languages.en',
63 'xml.etree.ElementTree', 'HTMLParser',
64 'awscli.handlers'],
65 }
66 }
67 setup_options['console'] = ['bin/aws']
68
69
70 setup(**setup_options)
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
'six>=1.1.0',
'colorama==0.2.5',
'docutils>=0.10',
- 'rsa==3.1.1']
+ 'rsa==3.1.2']
if sys.version_info[:2] == (2, 6):
# For python2.6 we have to require argparse since it
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n 'six>=1.1.0',\n 'colorama==0.2.5',\n 'docutils>=0.10',\n- 'rsa==3.1.1']\n+ 'rsa==3.1.2']\n \n if sys.version_info[:2] == (2, 6):\n # For python2.6 we have to require argparse since it\n", "issue": "pip install awscli fails\nI tried `pip install awscli` from https://github.com/aws/aws-cli/blob/develop/README.rst and failed:\n\nhttp://sprunge.us/NfbW\n/home/hendry/.pip/pip.log = http://ix.io/7SC\n\nHilarious how bad Python packaging is. I'm running Archlinux with Python 3.3.2.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport sys\n\nfrom setuptools import setup, find_packages\n\nimport awscli\n\n\nrequires = ['botocore>=0.16.0,<0.17.0',\n 'bcdoc>=0.9.0,<0.10.0',\n 'six>=1.1.0',\n 'colorama==0.2.5',\n 'docutils>=0.10',\n 'rsa==3.1.1']\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have to require argparse since it\n # was not in stdlib until 2.7.\n requires.append('argparse>=1.1')\n\n\nsetup_options = dict(\n name='awscli',\n version=awscli.__version__,\n description='Universal Command Line Environment for AWS.',\n long_description=open('README.rst').read(),\n author='Mitch Garnaat',\n author_email='[email protected]',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],\n packages=find_packages('.', exclude=['tests*']),\n package_dir={'awscli': 'awscli'},\n package_data={'awscli': ['data/*.json', 'examples/*/*']},\n install_requires=requires,\n license=\"Apache License 2.0\",\n classifiers=(\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n ),\n)\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'includes': ['ConfigParser', 'urllib', 'httplib',\n 'docutils.readers.standalone',\n 'docutils.parsers.rst',\n 'docutils.languages.en',\n 'xml.etree.ElementTree', 'HTMLParser',\n 'awscli.handlers'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]} | 1,316 | 115 |
gh_patches_debug_7073 | rasdani/github-patches | git_diff | pre-commit__pre-commit-287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make pre-commit consider a hook as "failed" if it modifies files and still (incorrectly?) exits 0
This would allow us to ditch autopep8-wrapper and support a bunch of hooks which refused to be scriptable (yapf, etc.)
</issue>
<code>
[start of pre_commit/commands/run.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import logging
5 import os
6 import sys
7
8 from pre_commit import color
9 from pre_commit import git
10 from pre_commit.logging_handler import LoggingHandler
11 from pre_commit.output import get_hook_message
12 from pre_commit.output import sys_stdout_write_wrapper
13 from pre_commit.staged_files_only import staged_files_only
14 from pre_commit.util import cmd_output
15 from pre_commit.util import noop_context
16
17
18 logger = logging.getLogger('pre_commit')
19
20
21 def _get_skips(environ):
22 skips = environ.get('SKIP', '')
23 return set(skip.strip() for skip in skips.split(',') if skip.strip())
24
25
26 def _hook_msg_start(hook, verbose):
27 return '{0}{1}'.format(
28 '[{0}] '.format(hook['id']) if verbose else '',
29 hook['name'],
30 )
31
32
33 def _print_no_files_skipped(hook, write, args):
34 write(get_hook_message(
35 _hook_msg_start(hook, args.verbose),
36 postfix='(no files to check) ',
37 end_msg='Skipped',
38 end_color=color.TURQUOISE,
39 use_color=args.color,
40 ))
41
42
43 def _print_user_skipped(hook, write, args):
44 write(get_hook_message(
45 _hook_msg_start(hook, args.verbose),
46 end_msg='Skipped',
47 end_color=color.YELLOW,
48 use_color=args.color,
49 ))
50
51
52 def get_changed_files(new, old):
53 return cmd_output(
54 'git', 'diff', '--name-only', '{0}..{1}'.format(old, new),
55 )[1].splitlines()
56
57
58 def get_filenames(args, include_expr, exclude_expr):
59 if args.origin and args.source:
60 getter = git.get_files_matching(
61 lambda: get_changed_files(args.origin, args.source),
62 )
63 elif args.files:
64 getter = git.get_files_matching(lambda: args.files)
65 elif args.all_files:
66 getter = git.get_all_files_matching
67 elif git.is_in_merge_conflict():
68 getter = git.get_conflicted_files_matching
69 else:
70 getter = git.get_staged_files_matching
71 return getter(include_expr, exclude_expr)
72
73
74 def _run_single_hook(hook, repo, args, write, skips=frozenset()):
75 filenames = get_filenames(args, hook['files'], hook['exclude'])
76 if hook['id'] in skips:
77 _print_user_skipped(hook, write, args)
78 return 0
79 elif not filenames:
80 _print_no_files_skipped(hook, write, args)
81 return 0
82
83 # Print the hook and the dots first in case the hook takes hella long to
84 # run.
85 write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
86 sys.stdout.flush()
87
88 retcode, stdout, stderr = repo.run_hook(hook, filenames)
89
90 if retcode:
91 retcode = 1
92 print_color = color.RED
93 pass_fail = 'Failed'
94 else:
95 retcode = 0
96 print_color = color.GREEN
97 pass_fail = 'Passed'
98
99 write(color.format_color(pass_fail, print_color, args.color) + '\n')
100
101 if (stdout or stderr) and (retcode or args.verbose):
102 write('hookid: {0}\n'.format(hook['id']))
103 write('\n')
104 for output in (stdout, stderr):
105 assert type(output) is bytes, type(output)
106 if output.strip():
107 write(output.strip() + b'\n')
108 write('\n')
109
110 return retcode
111
112
113 def _run_hooks(repo_hooks, args, write, environ):
114 """Actually run the hooks."""
115 skips = _get_skips(environ)
116 retval = 0
117 for repo, hook in repo_hooks:
118 retval |= _run_single_hook(hook, repo, args, write, skips)
119 return retval
120
121
122 def get_repo_hooks(runner):
123 for repo in runner.repositories:
124 for _, hook in repo.hooks:
125 yield (repo, hook)
126
127
128 def _has_unmerged_paths(runner):
129 _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])
130 return bool(stdout.strip())
131
132
133 def _has_unstaged_config(runner):
134 retcode, _, _ = runner.cmd_runner.run(
135 ('git', 'diff', '--exit-code', runner.config_file_path),
136 retcode=None,
137 )
138 # be explicit, other git errors don't mean it has an unstaged config.
139 return retcode == 1
140
141
142 def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
143 no_stash = args.no_stash or args.all_files or bool(args.files)
144 # Set up our logging handler
145 logger.addHandler(LoggingHandler(args.color, write=write))
146 logger.setLevel(logging.INFO)
147
148 # Check if we have unresolved merge conflict files and fail fast.
149 if _has_unmerged_paths(runner):
150 logger.error('Unmerged files. Resolve before committing.')
151 return 1
152 if bool(args.source) != bool(args.origin):
153 logger.error('Specify both --origin and --source.')
154 return 1
155 if _has_unstaged_config(runner) and not no_stash:
156 if args.allow_unstaged_config:
157 logger.warn(
158 'You have an unstaged config file and have specified the '
159 '--allow-unstaged-config option.\n'
160 'Note that your config will be stashed before the config is '
161 'parsed unless --no-stash is specified.',
162 )
163 else:
164 logger.error(
165 'Your .pre-commit-config.yaml is unstaged.\n'
166 '`git add .pre-commit-config.yaml` to fix this.\n'
167 'Run pre-commit with --allow-unstaged-config to silence this.'
168 )
169 return 1
170
171 if no_stash:
172 ctx = noop_context()
173 else:
174 ctx = staged_files_only(runner.cmd_runner)
175
176 with ctx:
177 repo_hooks = list(get_repo_hooks(runner))
178
179 if args.hook:
180 repo_hooks = [
181 (repo, hook) for repo, hook in repo_hooks
182 if hook['id'] == args.hook
183 ]
184 if not repo_hooks:
185 write('No hook with id `{0}`\n'.format(args.hook))
186 return 1
187
188 # Filter hooks for stages
189 repo_hooks = [
190 (repo, hook) for repo, hook in repo_hooks
191 if not hook['stages'] or args.hook_stage in hook['stages']
192 ]
193
194 return _run_hooks(repo_hooks, args, write, environ)
195
[end of pre_commit/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -85,7 +85,13 @@
write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
sys.stdout.flush()
+ diff_before = cmd_output('git', 'diff', retcode=None)
retcode, stdout, stderr = repo.run_hook(hook, filenames)
+ diff_after = cmd_output('git', 'diff', retcode=None)
+
+ # If the hook makes changes, fail the commit
+ if diff_before != diff_after:
+ retcode = 1
if retcode:
retcode = 1
| {"golden_diff": "diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py\n--- a/pre_commit/commands/run.py\n+++ b/pre_commit/commands/run.py\n@@ -85,7 +85,13 @@\n write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))\n sys.stdout.flush()\n \n+ diff_before = cmd_output('git', 'diff', retcode=None)\n retcode, stdout, stderr = repo.run_hook(hook, filenames)\n+ diff_after = cmd_output('git', 'diff', retcode=None)\n+\n+ # If the hook makes changes, fail the commit\n+ if diff_before != diff_after:\n+ retcode = 1\n \n if retcode:\n retcode = 1\n", "issue": "Make pre-commit consider a hook as \"failed\" if it modifies files and still (incorrectly?) exits 0\nThis would allow us to ditch autopep8-wrapper and support a bunch of hooks which refused to be scriptable (yapf, etc.)\n\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import git\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.output import sys_stdout_write_wrapper\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return set(skip.strip() for skip in skips.split(',') if skip.strip())\n\n\ndef _hook_msg_start(hook, verbose):\n return '{0}{1}'.format(\n '[{0}] '.format(hook['id']) if verbose else '',\n hook['name'],\n )\n\n\ndef _print_no_files_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix='(no files to check) ',\n end_msg='Skipped',\n end_color=color.TURQUOISE,\n use_color=args.color,\n ))\n\n\ndef _print_user_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg='Skipped',\n end_color=color.YELLOW,\n use_color=args.color,\n ))\n\n\ndef get_changed_files(new, old):\n return cmd_output(\n 'git', 'diff', '--name-only', '{0}..{1}'.format(old, new),\n )[1].splitlines()\n\n\ndef get_filenames(args, include_expr, exclude_expr):\n if args.origin and args.source:\n getter = git.get_files_matching(\n lambda: get_changed_files(args.origin, args.source),\n )\n elif args.files:\n getter = git.get_files_matching(lambda: args.files)\n elif args.all_files:\n getter = git.get_all_files_matching\n elif git.is_in_merge_conflict():\n getter = git.get_conflicted_files_matching\n else:\n getter = git.get_staged_files_matching\n return getter(include_expr, exclude_expr)\n\n\ndef _run_single_hook(hook, repo, args, write, skips=frozenset()):\n filenames = get_filenames(args, hook['files'], hook['exclude'])\n if hook['id'] in skips:\n _print_user_skipped(hook, write, args)\n return 0\n elif not filenames:\n _print_no_files_skipped(hook, write, args)\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))\n sys.stdout.flush()\n\n retcode, stdout, stderr = repo.run_hook(hook, filenames)\n\n if retcode:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n write(color.format_color(pass_fail, print_color, args.color) + '\\n')\n\n if (stdout or stderr) and (retcode or args.verbose):\n write('hookid: 
{0}\\n'.format(hook['id']))\n write('\\n')\n for output in (stdout, stderr):\n assert type(output) is bytes, type(output)\n if output.strip():\n write(output.strip() + b'\\n')\n write('\\n')\n\n return retcode\n\n\ndef _run_hooks(repo_hooks, args, write, environ):\n \"\"\"Actually run the hooks.\"\"\"\n skips = _get_skips(environ)\n retval = 0\n for repo, hook in repo_hooks:\n retval |= _run_single_hook(hook, repo, args, write, skips)\n return retval\n\n\ndef get_repo_hooks(runner):\n for repo in runner.repositories:\n for _, hook in repo.hooks:\n yield (repo, hook)\n\n\ndef _has_unmerged_paths(runner):\n _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])\n return bool(stdout.strip())\n\n\ndef _has_unstaged_config(runner):\n retcode, _, _ = runner.cmd_runner.run(\n ('git', 'diff', '--exit-code', runner.config_file_path),\n retcode=None,\n )\n # be explicit, other git errors don't mean it has an unstaged config.\n return retcode == 1\n\n\ndef run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):\n no_stash = args.no_stash or args.all_files or bool(args.files)\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths(runner):\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n if bool(args.source) != bool(args.origin):\n logger.error('Specify both --origin and --source.')\n return 1\n if _has_unstaged_config(runner) and not no_stash:\n if args.allow_unstaged_config:\n logger.warn(\n 'You have an unstaged config file and have specified the '\n '--allow-unstaged-config option.\\n'\n 'Note that your config will be stashed before the config is '\n 'parsed unless --no-stash is specified.',\n )\n else:\n logger.error(\n 'Your .pre-commit-config.yaml is unstaged.\\n'\n '`git add .pre-commit-config.yaml` to fix this.\\n'\n 'Run pre-commit with --allow-unstaged-config to silence this.'\n )\n return 1\n\n if no_stash:\n ctx = noop_context()\n else:\n ctx = staged_files_only(runner.cmd_runner)\n\n with ctx:\n repo_hooks = list(get_repo_hooks(runner))\n\n if args.hook:\n repo_hooks = [\n (repo, hook) for repo, hook in repo_hooks\n if hook['id'] == args.hook\n ]\n if not repo_hooks:\n write('No hook with id `{0}`\\n'.format(args.hook))\n return 1\n\n # Filter hooks for stages\n repo_hooks = [\n (repo, hook) for repo, hook in repo_hooks\n if not hook['stages'] or args.hook_stage in hook['stages']\n ]\n\n return _run_hooks(repo_hooks, args, write, environ)\n", "path": "pre_commit/commands/run.py"}]} | 2,507 | 168 |
gh_patches_debug_8933 | rasdani/github-patches | git_diff | akvo__akvo-rsr-1945 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Public projects filter in REST API not working correctly
## Test plan
All projects (and other objects) should be displayed in the REST API.
## Issue description
As a quick fix, just display all projects (public and private) in the API.
</issue>
<code>
[start of akvo/rest/viewsets.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from rest_framework import filters
8 from rest_framework import viewsets
9 from rest_framework.authentication import SessionAuthentication
10 from rest_framework.permissions import DjangoObjectPermissions
11
12 from .models import TastyTokenAuthentication
13
14 from akvo.rsr.models import Project
15
16
17 class BaseRSRViewSet(viewsets.ModelViewSet):
18 """
19 Base class used for the view sets for RSR models. Provides unified auth and perms settings.
20 Only public projects will be shown by filtering the queryset.
21 """
22 authentication_classes = (SessionAuthentication, TastyTokenAuthentication, )
23 permission_classes = (DjangoObjectPermissions, )
24 filter_backends = (filters.DjangoFilterBackend, filters.OrderingFilter, )
25 ordering_fields = '__all__'
26
27 def get_queryset(self):
28 """Filter out any private projects."""
29 for related_obj in self.queryset.model._meta.get_all_related_objects():
30 if related_obj.model == Project:
31 self.queryset = self.queryset.filter(project__is_public=True)
32 break
33 return super(BaseRSRViewSet, self).get_queryset()
34
[end of akvo/rest/viewsets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/viewsets.py b/akvo/rest/viewsets.py
--- a/akvo/rest/viewsets.py
+++ b/akvo/rest/viewsets.py
@@ -23,11 +23,3 @@
permission_classes = (DjangoObjectPermissions, )
filter_backends = (filters.DjangoFilterBackend, filters.OrderingFilter, )
ordering_fields = '__all__'
-
- def get_queryset(self):
- """Filter out any private projects."""
- for related_obj in self.queryset.model._meta.get_all_related_objects():
- if related_obj.model == Project:
- self.queryset = self.queryset.filter(project__is_public=True)
- break
- return super(BaseRSRViewSet, self).get_queryset()
| {"golden_diff": "diff --git a/akvo/rest/viewsets.py b/akvo/rest/viewsets.py\n--- a/akvo/rest/viewsets.py\n+++ b/akvo/rest/viewsets.py\n@@ -23,11 +23,3 @@\n permission_classes = (DjangoObjectPermissions, )\n filter_backends = (filters.DjangoFilterBackend, filters.OrderingFilter, )\n ordering_fields = '__all__'\n-\n- def get_queryset(self):\n- \"\"\"Filter out any private projects.\"\"\"\n- for related_obj in self.queryset.model._meta.get_all_related_objects():\n- if related_obj.model == Project:\n- self.queryset = self.queryset.filter(project__is_public=True)\n- break\n- return super(BaseRSRViewSet, self).get_queryset()\n", "issue": "Public projects filter in REST API not working correctly\n## Test plan\n\nAll projects (and other objects) should be displayed in the REST API.\n## Issue description\n\nAs a quick fix, just display all projects (public and private) in the API.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import filters\nfrom rest_framework import viewsets\nfrom rest_framework.authentication import SessionAuthentication\nfrom rest_framework.permissions import DjangoObjectPermissions\n\nfrom .models import TastyTokenAuthentication\n\nfrom akvo.rsr.models import Project\n\n\nclass BaseRSRViewSet(viewsets.ModelViewSet):\n \"\"\"\n Base class used for the view sets for RSR models. Provides unified auth and perms settings.\n Only public projects will be shown by filtering the queryset.\n \"\"\"\n authentication_classes = (SessionAuthentication, TastyTokenAuthentication, )\n permission_classes = (DjangoObjectPermissions, )\n filter_backends = (filters.DjangoFilterBackend, filters.OrderingFilter, )\n ordering_fields = '__all__'\n\n def get_queryset(self):\n \"\"\"Filter out any private projects.\"\"\"\n for related_obj in self.queryset.model._meta.get_all_related_objects():\n if related_obj.model == Project:\n self.queryset = self.queryset.filter(project__is_public=True)\n break\n return super(BaseRSRViewSet, self).get_queryset()\n", "path": "akvo/rest/viewsets.py"}]} | 935 | 167 |
gh_patches_debug_15907 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-2286 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PNG icons installed as scalable icons
The 1.1.11 pre-releases install various PNG icons in `/usr/share/icons/hicolor/scalable/apps/`, but the `scalable` hierarchy is reserved for scalable (SVG) icons.
</issue>
<code>
[start of lib/solaar/ui/icons.py]
1 # -*- python-mode -*-
2
3 ## Copyright (C) 2012-2013 Daniel Pavel
4 ##
5 ## This program is free software; you can redistribute it and/or modify
6 ## it under the terms of the GNU General Public License as published by
7 ## the Free Software Foundation; either version 2 of the License, or
8 ## (at your option) any later version.
9 ##
10 ## This program is distributed in the hope that it will be useful,
11 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
12 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 ## GNU General Public License for more details.
14 ##
15 ## You should have received a copy of the GNU General Public License along
16 ## with this program; if not, write to the Free Software Foundation, Inc.,
17 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
18
19 import logging
20
21 import solaar.gtk as gtk
22
23 from gi.repository import Gtk
24
25 logger = logging.getLogger(__name__)
26
27 #
28 #
29 #
30
31 _LARGE_SIZE = 64
32 Gtk.IconSize.LARGE = Gtk.icon_size_register('large', _LARGE_SIZE, _LARGE_SIZE)
33 # Gtk.IconSize.XLARGE = Gtk.icon_size_register('x-large', _LARGE_SIZE * 2, _LARGE_SIZE * 2)
34
35 TRAY_INIT = 'solaar-init'
36 TRAY_OKAY = 'solaar'
37 TRAY_ATTENTION = 'solaar-attention'
38
39 _default_theme = None
40
41
42 def _init_icon_paths():
43 global _default_theme
44 if _default_theme:
45 return
46
47 _default_theme = Gtk.IconTheme.get_default()
48 if logger.isEnabledFor(logging.DEBUG):
49 logger.debug('icon theme paths: %s', _default_theme.get_search_path())
50
51 if gtk.battery_icons_style == 'symbolic':
52 global TRAY_OKAY
53 TRAY_OKAY = TRAY_INIT # use monochrome tray icon
54 if not _default_theme.has_icon('battery-good-symbolic'):
55 logger.warning('failed to detect symbolic icons')
56 gtk.battery_icons_style = 'regular'
57 if gtk.battery_icons_style == 'regular':
58 if not _default_theme.has_icon('battery-good'):
59 logger.warning('failed to detect icons')
60 gtk.battery_icons_style = 'solaar'
61
62
63 #
64 #
65 #
66
67
68 def battery(level=None, charging=False):
69 icon_name = _battery_icon_name(level, charging)
70 if not _default_theme.has_icon(icon_name):
71 logger.warning('icon %s not found in current theme', icon_name)
72 return TRAY_OKAY # use Solaar icon if battery icon not available
73 elif logger.isEnabledFor(logging.DEBUG):
74 logger.debug('battery icon for %s:%s = %s', level, charging, icon_name)
75 return icon_name
76
77
78 # return first res where val >= guard
79 # _first_res(val,((guard,res),...))
80 def _first_res(val, pairs):
81 return next((res for guard, res in pairs if val >= guard), None)
82
83
84 def _battery_icon_name(level, charging):
85 _init_icon_paths()
86
87 if level is None or level < 0:
88 return 'battery-missing' + ('-symbolic' if gtk.battery_icons_style == 'symbolic' else '')
89
90 level_name = _first_res(level, ((90, 'full'), (30, 'good'), (20, 'low'), (5, 'caution'), (0, 'empty')))
91 return 'battery-%s%s%s' % (
92 level_name, '-charging' if charging else '', '-symbolic' if gtk.battery_icons_style == 'symbolic' else ''
93 )
94
95
96 #
97 #
98 #
99
100
101 def lux(level=None):
102 if level is None or level < 0:
103 return 'light_unknown'
104 return 'light_%03d' % (20 * ((level + 50) // 100))
105
106
107 #
108 #
109 #
110
111 _ICON_SETS = {}
112
113
114 def device_icon_set(name='_', kind=None):
115 icon_set = _ICON_SETS.get(name)
116 if icon_set is None:
117 icon_set = Gtk.IconSet.new()
118 _ICON_SETS[name] = icon_set
119
120 # names of possible icons, in reverse order of likelihood
121 # the theme will hopefully pick up the most appropriate
122 names = ['preferences-desktop-peripherals']
123 if kind:
124 if str(kind) == 'numpad':
125 names += ('input-keyboard', 'input-dialpad')
126 elif str(kind) == 'touchpad':
127 names += ('input-mouse', 'input-tablet')
128 elif str(kind) == 'trackball':
129 names += ('input-mouse', )
130 elif str(kind) == 'headset':
131 names += ('audio-headphones', 'audio-headset')
132 names += ('input-' + str(kind), )
133 # names += (name.replace(' ', '-'),)
134
135 source = Gtk.IconSource.new()
136 for n in names:
137 source.set_icon_name(n)
138 icon_set.add_source(source)
139 icon_set.names = names
140
141 return icon_set
142
143
144 def device_icon_file(name, kind=None, size=_LARGE_SIZE):
145 _init_icon_paths()
146
147 icon_set = device_icon_set(name, kind)
148 assert icon_set
149 for n in reversed(icon_set.names):
150 if _default_theme.has_icon(n):
151 return _default_theme.lookup_icon(n, size, 0).get_filename()
152
153
154 def device_icon_name(name, kind=None):
155 _init_icon_paths()
156
157 icon_set = device_icon_set(name, kind)
158 assert icon_set
159 for n in reversed(icon_set.names):
160 if _default_theme.has_icon(n):
161 return n
162
163
164 def icon_file(name, size=_LARGE_SIZE):
165 _init_icon_paths()
166
167 # has_icon() somehow returned False while lookup_icon returns non-None.
168 # I guess it happens because share/solaar/icons/ has no hicolor and
169 # resolution subdirs
170 theme_icon = _default_theme.lookup_icon(name, size, 0)
171 if theme_icon:
172 file_name = theme_icon.get_filename()
173 # if logger.isEnabledFor(logging.DEBUG):
174 # logger.debug("icon %s(%d) => %s", name, size, file_name)
175 return file_name
176
177 logger.warning('icon %s(%d) not found in current theme', name, size)
178
[end of lib/solaar/ui/icons.py]
[start of setup.py]
1 #!/usr/bin/env python3
2 import subprocess
3
4 from glob import glob as _glob
5
6 try:
7 from setuptools import setup
8 except ImportError:
9 from distutils.core import setup
10
11 NAME = 'Solaar'
12
13 with open('lib/solaar/version', 'r') as vfile:
14 version = vfile.read().strip()
15
16 try: # get commit from git describe
17 commit = subprocess.check_output(['git', 'describe', '--always'], stderr=subprocess.DEVNULL).strip().decode()
18 with open('lib/solaar/commit', 'w') as vfile:
19 vfile.write(f'{commit}\n')
20 except Exception: # get commit from Ubuntu dpkg-parsechangelog
21 try:
22 commit = subprocess.check_output(['dpkg-parsechangelog', '--show-field', 'Version'],
23 stderr=subprocess.DEVNULL).strip().decode()
24 commit = commit.split('~')
25 with open('lib/solaar/commit', 'w') as vfile:
26 vfile.write(f'{commit[0]}\n')
27 except Exception as e:
28 print('Exception using dpkg-parsechangelog', e)
29
30
31 def _data_files():
32 from os.path import dirname as _dirname
33
34 yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/solaar*.svg')
35 yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/light_*.png')
36
37 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
38 yield _dirname(mo), [mo]
39
40 yield 'share/applications', ['share/applications/solaar.desktop']
41 yield 'lib/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']
42 yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
43
44 del _dirname
45
46
47 setup(
48 name=NAME.lower(),
49 version=version,
50 description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',
51 long_description='''
52 Solaar is a Linux device manager for many Logitech peripherals that connect through
53 Unifying and other receivers or via USB or Bluetooth.
54 Solaar is able to pair/unpair devices with receivers and show and modify some of the
55 modifiable features of devices.
56 For instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),
57 author='Daniel Pavel',
58 license='GPLv2',
59 url='http://pwr-solaar.github.io/Solaar/',
60 classifiers=[
61 'Development Status :: 4 - Beta',
62 'Environment :: X11 Applications :: GTK',
63 'Environment :: Console',
64 'Intended Audience :: End Users/Desktop',
65 'License :: DFSG approved',
66 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
67 'Natural Language :: English',
68 'Programming Language :: Python :: 3 :: Only',
69 'Operating System :: POSIX :: Linux',
70 'Topic :: Utilities',
71 ],
72 platforms=['linux'],
73
74 # sudo apt install python-gi python3-gi \
75 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
76 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
77 python_requires='>=3.7',
78 install_requires=[
79 'evdev (>= 1.1.2) ; platform_system=="Linux"',
80 'pyudev (>= 0.13)',
81 'PyYAML (>= 3.12)',
82 'python-xlib (>= 0.27)',
83 'psutil (>= 5.4.3)',
84 'dbus-python ; platform_system=="Linux"',
85 ],
86 extras_require={
87 'report-descriptor': ['hid-parser'],
88 'desktop-notifications': ['Notify (>= 0.7)'],
89 'git-commit': ['python-git-info'],
90 'test': ['pytest'],
91 },
92 package_dir={'': 'lib'},
93 packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
94 data_files=list(_data_files()),
95 include_package_data=True,
96 scripts=_glob('bin/*'),
97 )
98
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/solaar/ui/icons.py b/lib/solaar/ui/icons.py
--- a/lib/solaar/ui/icons.py
+++ b/lib/solaar/ui/icons.py
@@ -101,7 +101,7 @@
def lux(level=None):
if level is None or level < 0:
return 'light_unknown'
- return 'light_%03d' % (20 * ((level + 50) // 100))
+ return 'solaar-light_%03d' % (20 * ((level + 50) // 100))
#
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -32,7 +32,7 @@
from os.path import dirname as _dirname
yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/solaar*.svg')
- yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/light_*.png')
+ yield 'share/icons/hicolor/32x32/apps', _glob('share/solaar/icons/light_*.png')
for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
yield _dirname(mo), [mo]
| {"golden_diff": "diff --git a/lib/solaar/ui/icons.py b/lib/solaar/ui/icons.py\n--- a/lib/solaar/ui/icons.py\n+++ b/lib/solaar/ui/icons.py\n@@ -101,7 +101,7 @@\n def lux(level=None):\n if level is None or level < 0:\n return 'light_unknown'\n- return 'light_%03d' % (20 * ((level + 50) // 100))\n+ return 'solaar-light_%03d' % (20 * ((level + 50) // 100))\n \n \n #\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -32,7 +32,7 @@\n from os.path import dirname as _dirname\n \n yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/solaar*.svg')\n- yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/light_*.png')\n+ yield 'share/icons/hicolor/32x32/apps', _glob('share/solaar/icons/light_*.png')\n \n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n", "issue": "PNG icons installed as scalable icons\nThe 1.1.11 pre-releases install various PNG icons in `/usr/share/icons/hicolor/scalable/apps/`, but the `scalable` hierarchy is reserved for scalable (SVG) icons.\n", "before_files": [{"content": "# -*- python-mode -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nimport logging\n\nimport solaar.gtk as gtk\n\nfrom gi.repository import Gtk\n\nlogger = logging.getLogger(__name__)\n\n#\n#\n#\n\n_LARGE_SIZE = 64\nGtk.IconSize.LARGE = Gtk.icon_size_register('large', _LARGE_SIZE, _LARGE_SIZE)\n# Gtk.IconSize.XLARGE = Gtk.icon_size_register('x-large', _LARGE_SIZE * 2, _LARGE_SIZE * 2)\n\nTRAY_INIT = 'solaar-init'\nTRAY_OKAY = 'solaar'\nTRAY_ATTENTION = 'solaar-attention'\n\n_default_theme = None\n\n\ndef _init_icon_paths():\n global _default_theme\n if _default_theme:\n return\n\n _default_theme = Gtk.IconTheme.get_default()\n if logger.isEnabledFor(logging.DEBUG):\n logger.debug('icon theme paths: %s', _default_theme.get_search_path())\n\n if gtk.battery_icons_style == 'symbolic':\n global TRAY_OKAY\n TRAY_OKAY = TRAY_INIT # use monochrome tray icon\n if not _default_theme.has_icon('battery-good-symbolic'):\n logger.warning('failed to detect symbolic icons')\n gtk.battery_icons_style = 'regular'\n if gtk.battery_icons_style == 'regular':\n if not _default_theme.has_icon('battery-good'):\n logger.warning('failed to detect icons')\n gtk.battery_icons_style = 'solaar'\n\n\n#\n#\n#\n\n\ndef battery(level=None, charging=False):\n icon_name = _battery_icon_name(level, charging)\n if not _default_theme.has_icon(icon_name):\n logger.warning('icon %s not found in current theme', icon_name)\n return TRAY_OKAY # use Solaar icon if battery icon not available\n elif logger.isEnabledFor(logging.DEBUG):\n logger.debug('battery icon for %s:%s = %s', level, charging, icon_name)\n return icon_name\n\n\n# return first res where val >= guard\n# _first_res(val,((guard,res),...))\ndef _first_res(val, pairs):\n return 
next((res for guard, res in pairs if val >= guard), None)\n\n\ndef _battery_icon_name(level, charging):\n _init_icon_paths()\n\n if level is None or level < 0:\n return 'battery-missing' + ('-symbolic' if gtk.battery_icons_style == 'symbolic' else '')\n\n level_name = _first_res(level, ((90, 'full'), (30, 'good'), (20, 'low'), (5, 'caution'), (0, 'empty')))\n return 'battery-%s%s%s' % (\n level_name, '-charging' if charging else '', '-symbolic' if gtk.battery_icons_style == 'symbolic' else ''\n )\n\n\n#\n#\n#\n\n\ndef lux(level=None):\n if level is None or level < 0:\n return 'light_unknown'\n return 'light_%03d' % (20 * ((level + 50) // 100))\n\n\n#\n#\n#\n\n_ICON_SETS = {}\n\n\ndef device_icon_set(name='_', kind=None):\n icon_set = _ICON_SETS.get(name)\n if icon_set is None:\n icon_set = Gtk.IconSet.new()\n _ICON_SETS[name] = icon_set\n\n # names of possible icons, in reverse order of likelihood\n # the theme will hopefully pick up the most appropriate\n names = ['preferences-desktop-peripherals']\n if kind:\n if str(kind) == 'numpad':\n names += ('input-keyboard', 'input-dialpad')\n elif str(kind) == 'touchpad':\n names += ('input-mouse', 'input-tablet')\n elif str(kind) == 'trackball':\n names += ('input-mouse', )\n elif str(kind) == 'headset':\n names += ('audio-headphones', 'audio-headset')\n names += ('input-' + str(kind), )\n # names += (name.replace(' ', '-'),)\n\n source = Gtk.IconSource.new()\n for n in names:\n source.set_icon_name(n)\n icon_set.add_source(source)\n icon_set.names = names\n\n return icon_set\n\n\ndef device_icon_file(name, kind=None, size=_LARGE_SIZE):\n _init_icon_paths()\n\n icon_set = device_icon_set(name, kind)\n assert icon_set\n for n in reversed(icon_set.names):\n if _default_theme.has_icon(n):\n return _default_theme.lookup_icon(n, size, 0).get_filename()\n\n\ndef device_icon_name(name, kind=None):\n _init_icon_paths()\n\n icon_set = device_icon_set(name, kind)\n assert icon_set\n for n in reversed(icon_set.names):\n if _default_theme.has_icon(n):\n return n\n\n\ndef icon_file(name, size=_LARGE_SIZE):\n _init_icon_paths()\n\n # has_icon() somehow returned False while lookup_icon returns non-None.\n # I guess it happens because share/solaar/icons/ has no hicolor and\n # resolution subdirs\n theme_icon = _default_theme.lookup_icon(name, size, 0)\n if theme_icon:\n file_name = theme_icon.get_filename()\n # if logger.isEnabledFor(logging.DEBUG):\n # logger.debug(\"icon %s(%d) => %s\", name, size, file_name)\n return file_name\n\n logger.warning('icon %s(%d) not found in current theme', name, size)\n", "path": "lib/solaar/ui/icons.py"}, {"content": "#!/usr/bin/env python3\nimport subprocess\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nNAME = 'Solaar'\n\nwith open('lib/solaar/version', 'r') as vfile:\n version = vfile.read().strip()\n\ntry: # get commit from git describe\n commit = subprocess.check_output(['git', 'describe', '--always'], stderr=subprocess.DEVNULL).strip().decode()\n with open('lib/solaar/commit', 'w') as vfile:\n vfile.write(f'{commit}\\n')\nexcept Exception: # get commit from Ubuntu dpkg-parsechangelog\n try:\n commit = subprocess.check_output(['dpkg-parsechangelog', '--show-field', 'Version'],\n stderr=subprocess.DEVNULL).strip().decode()\n commit = commit.split('~')\n with open('lib/solaar/commit', 'w') as vfile:\n vfile.write(f'{commit[0]}\\n')\n except Exception as e:\n print('Exception using dpkg-parsechangelog', e)\n\n\ndef _data_files():\n from 
os.path import dirname as _dirname\n\n yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/icons/hicolor/scalable/apps', _glob('share/solaar/icons/light_*.png')\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'lib/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=version,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2) ; platform_system==\"Linux\"',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n 'dbus-python ; platform_system==\"Linux\"',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n 'git-commit': ['python-git-info'],\n 'test': ['pytest'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n include_package_data=True,\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]} | 3,563 | 288 |
gh_patches_debug_725 | rasdani/github-patches | git_diff | rasterio__rasterio-1477 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python crashes while building overviews
After running the code below, Python crashes:
```python
import rasterio
from rasterio.enums import Resampling
factors = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
dst = rasterio.open('rasterio/tests/data/RGB.byte.tif', 'r+')
dst.build_overviews(factors, Resampling.average)
```
```
*** Error in `python': malloc(): memory corruption: 0x0000000002e0f9c0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7fbe1c3fd7e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8213e)[0x7fbe1c40813e]
/lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x54)[0x7fbe1c40a184]
/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(CPLMalloc+0x20)[0x7fbe19ab2700]
/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(CPLCalloc+0x1c)[0x7fbe19ab27ac]
/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(_ZN12GTiffDataset15IBuildOverviewsEPKciPiiS2_PFidS1_PvES3_+0x10f0)[0x7fbe19554bd0]
/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(_ZN11GDALDataset14BuildOverviewsEPKciPiiS2_PFidS1_PvES3_+0x38)[0x7fbe198059f8]
/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/_io.cpython-35m-x86_64-linux-gnu.so(+0x3613a)[0x7fbe0595713a]
python(PyCFunction_Call+0x77)[0x4e9ba7]
python(PyEval_EvalFrameEx+0x614)[0x5372f4]
python[0x540199]
python(PyEval_EvalCode+0x1f)[0x540e4f]
python[0x60c272]
python(PyRun_InteractiveOneObject+0x2b1)[0x46b89f]
python(PyRun_InteractiveLoopFlags+0xe8)[0x46ba48]
python[0x46cfa0]
python[0x4cf2bd]
python(main+0xe1)[0x4cfeb1]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fbe1c3a6830]
python(_start+0x29)[0x5d6049]
```
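
The factor list runs up to 4096 while `RGB.byte.tif` is only a few hundred pixels on each side; whether that mismatch is what triggers the corruption is not established here. A hedged debugging sketch that simply drops decimation factors larger than the raster (an assumption, not a confirmed fix):

```python
import rasterio
from rasterio.enums import Resampling

factors = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
with rasterio.open('rasterio/tests/data/RGB.byte.tif', 'r+') as dst:
    # Assumption for debugging only: keep factors smaller than the raster itself.
    safe_factors = [f for f in factors if f < min(dst.width, dst.height)]
    dst.build_overviews(safe_factors, Resampling.average)
```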
</issue>
<code>
[start of rasterio/errors.py]
1 """Errors and Warnings."""
2
3 from click import FileError
4
5
6 class RasterioError(Exception):
7 """Root exception class"""
8
9
10 class WindowError(RasterioError):
11 """Raised when errors occur during window operations"""
12
13
14 class CRSError(ValueError):
15 """Raised when a CRS string or mapping is invalid or cannot serve
16 to define a coordinate transformation."""
17
18
19 class EnvError(RasterioError):
20 """Raised when the state of GDAL/AWS environment cannot be created
21 or modified."""
22
23
24 class DriverRegistrationError(ValueError):
25 """Raised when a format driver is requested but is not registered."""
26
27
28 class FileOverwriteError(FileError):
29 """Raised when Rasterio's CLI refuses to clobber output files."""
30
31 def __init__(self, message):
32 """Raise FileOverwriteError with message as hint."""
33 super(FileOverwriteError, self).__init__('', hint=message)
34
35
36 class RasterioIOError(IOError):
37 """Raised when a dataset cannot be opened using one of the
38 registered format drivers."""
39
40
41 class NodataShadowWarning(UserWarning):
42 """Warn that a dataset's nodata attribute is shadowing its alpha band."""
43
44 def __str__(self):
45 return ("The dataset's nodata attribute is shadowing "
46 "the alpha band. All masks will be determined "
47 "by the nodata attribute")
48
49
50 class NotGeoreferencedWarning(UserWarning):
51 """Warn that a dataset isn't georeferenced."""
52
53
54 class GDALBehaviorChangeException(RuntimeError):
55 """Raised when GDAL's behavior differs from the given arguments. For
56 example, antimeridian cutting is always on as of GDAL 2.2.0. Users
57 expecting it to be off will be presented with a MultiPolygon when the
58 rest of their code expects a Polygon.
59
60 # Raises an exception on GDAL >= 2.2.0
61 rasterio.warp.transform_geometry(
62 src_crs, dst_crs, antimeridian_cutting=False)
63 """
64
65
66 class GDALOptionNotImplementedError(RasterioError):
67 """A dataset opening or dataset creation option can't be supported
68
69 This will be raised from Rasterio's shim modules. For example, when
70 a user passes arguments to open_dataset() that can't be evaluated
71 by GDAL 1.x.
72 """
73
74 class GDALVersionError(RasterioError):
75 """Raised if the runtime version of GDAL does not meet the required
76 version of GDAL."""
77
78
79 class WindowEvaluationError(ValueError):
80 """Raised when window evaluation fails"""
81
82
83 class RasterioDeprecationWarning(UserWarning):
84 """Rasterio module deprecations"""
85
86
87 class RasterBlockError(RasterioError):
88 """Raised when raster block access fails"""
89
90
91 class BandOverviewError(UserWarning):
92 """Raised when a band overview access fails."""
93
94
95 class WarpOptionsError(RasterioError):
96 """Raised when options for a warp operation are invalid"""
97
98
99 class UnsupportedOperation(RasterioError):
100 """Raised when reading from a file opened in 'w' mode"""
101
[end of rasterio/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/errors.py b/rasterio/errors.py
--- a/rasterio/errors.py
+++ b/rasterio/errors.py
@@ -98,3 +98,7 @@
class UnsupportedOperation(RasterioError):
"""Raised when reading from a file opened in 'w' mode"""
+
+
+class OverviewCreationError(RasterioError):
+ """Raised when creation of an overview fails"""
| {"golden_diff": "diff --git a/rasterio/errors.py b/rasterio/errors.py\n--- a/rasterio/errors.py\n+++ b/rasterio/errors.py\n@@ -98,3 +98,7 @@\n \n class UnsupportedOperation(RasterioError):\n \"\"\"Raised when reading from a file opened in 'w' mode\"\"\"\n+\n+\n+class OverviewCreationError(RasterioError):\n+ \"\"\"Raised when creation of an overview fails\"\"\"\n", "issue": "Python crashes while building overviews\nAfter performing the below code Python crashes:\r\n\r\n```python\r\nimport rasterio\r\nfrom rasterio.enums import Resampling\r\n\r\nfactors = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]\r\ndst = rasterio.open('rasterio/tests/data/RGB.byte.tif', 'r+')\r\ndst.build_overviews(factors, Resampling.average)\r\n```\r\n\r\n```\r\n*** Error in `python': malloc(): memory corruption: 0x0000000002e0f9c0 ***\r\n======= Backtrace: =========\r\n/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7fbe1c3fd7e5]\r\n/lib/x86_64-linux-gnu/libc.so.6(+0x8213e)[0x7fbe1c40813e]\r\n/lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x54)[0x7fbe1c40a184]\r\n/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(CPLMalloc+0x20)[0x7fbe19ab2700]\r\n/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(CPLCalloc+0x1c)[0x7fbe19ab27ac]\r\n/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(_ZN12GTiffDataset15IBuildOverviewsEPKciPiiS2_PFidS1_PvES3_+0x10f0)[0x7fbe19554bd0]\r\n/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/.libs/libgdal-acedaae2.so.20.3.1(_ZN11GDALDataset14BuildOverviewsEPKciPiiS2_PFidS1_PvES3_+0x38)[0x7fbe198059f8]\r\n/home/rykov/sandbox/env/lib/python3.5/site-packages/rasterio/_io.cpython-35m-x86_64-linux-gnu.so(+0x3613a)[0x7fbe0595713a]\r\npython(PyCFunction_Call+0x77)[0x4e9ba7]\r\npython(PyEval_EvalFrameEx+0x614)[0x5372f4]\r\npython[0x540199]\r\npython(PyEval_EvalCode+0x1f)[0x540e4f]\r\npython[0x60c272]\r\npython(PyRun_InteractiveOneObject+0x2b1)[0x46b89f]\r\npython(PyRun_InteractiveLoopFlags+0xe8)[0x46ba48]\r\npython[0x46cfa0]\r\npython[0x4cf2bd]\r\npython(main+0xe1)[0x4cfeb1]\r\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fbe1c3a6830]\r\npython(_start+0x29)[0x5d6049]\r\n```\n", "before_files": [{"content": "\"\"\"Errors and Warnings.\"\"\"\n\nfrom click import FileError\n\n\nclass RasterioError(Exception):\n \"\"\"Root exception class\"\"\"\n\n\nclass WindowError(RasterioError):\n \"\"\"Raised when errors occur during window operations\"\"\"\n\n\nclass CRSError(ValueError):\n \"\"\"Raised when a CRS string or mapping is invalid or cannot serve\n to define a coordinate transformation.\"\"\"\n\n\nclass EnvError(RasterioError):\n \"\"\"Raised when the state of GDAL/AWS environment cannot be created\n or modified.\"\"\"\n\n\nclass DriverRegistrationError(ValueError):\n \"\"\"Raised when a format driver is requested but is not registered.\"\"\"\n\n\nclass FileOverwriteError(FileError):\n \"\"\"Raised when Rasterio's CLI refuses to clobber output files.\"\"\"\n\n def __init__(self, message):\n \"\"\"Raise FileOverwriteError with message as hint.\"\"\"\n super(FileOverwriteError, self).__init__('', hint=message)\n\n\nclass RasterioIOError(IOError):\n \"\"\"Raised when a dataset cannot be opened using one of the\n registered format drivers.\"\"\"\n\n\nclass NodataShadowWarning(UserWarning):\n \"\"\"Warn that a dataset's nodata attribute is shadowing its alpha band.\"\"\"\n\n def __str__(self):\n return (\"The dataset's nodata attribute is shadowing 
\"\n \"the alpha band. All masks will be determined \"\n \"by the nodata attribute\")\n\n\nclass NotGeoreferencedWarning(UserWarning):\n \"\"\"Warn that a dataset isn't georeferenced.\"\"\"\n\n\nclass GDALBehaviorChangeException(RuntimeError):\n \"\"\"Raised when GDAL's behavior differs from the given arguments. For\n example, antimeridian cutting is always on as of GDAL 2.2.0. Users\n expecting it to be off will be presented with a MultiPolygon when the\n rest of their code expects a Polygon.\n\n # Raises an exception on GDAL >= 2.2.0\n rasterio.warp.transform_geometry(\n src_crs, dst_crs, antimeridian_cutting=False)\n \"\"\"\n\n\nclass GDALOptionNotImplementedError(RasterioError):\n \"\"\"A dataset opening or dataset creation option can't be supported\n\n This will be raised from Rasterio's shim modules. For example, when\n a user passes arguments to open_dataset() that can't be evaluated\n by GDAL 1.x.\n \"\"\"\n\nclass GDALVersionError(RasterioError):\n \"\"\"Raised if the runtime version of GDAL does not meet the required\n version of GDAL.\"\"\"\n\n\nclass WindowEvaluationError(ValueError):\n \"\"\"Raised when window evaluation fails\"\"\"\n\n\nclass RasterioDeprecationWarning(UserWarning):\n \"\"\"Rasterio module deprecations\"\"\"\n\n\nclass RasterBlockError(RasterioError):\n \"\"\"Raised when raster block access fails\"\"\"\n\n\nclass BandOverviewError(UserWarning):\n \"\"\"Raised when a band overview access fails.\"\"\"\n\n\nclass WarpOptionsError(RasterioError):\n \"\"\"Raised when options for a warp operation are invalid\"\"\"\n\n\nclass UnsupportedOperation(RasterioError):\n \"\"\"Raised when reading from a file opened in 'w' mode\"\"\"\n", "path": "rasterio/errors.py"}]} | 2,213 | 91 |
gh_patches_debug_28841 | rasdani/github-patches | git_diff | ManimCommunity__manim-2567 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Renderer only uses one rendering mode for all Scenes in a file
## Description of bug / unexpected behavior
<!-- Add a clear and concise description of the problem you encountered. -->
When running `manim animations.py -ql -a`, all of the Scenes are rendered as _either_ .png files _or_ .mp4 files.
For example, if the first Scene in 'animations.py' has no animation, then manim will decide to render that Scene to a .png.
However, if the next Scene has some animation, manim will not intelligently switch to rendering an .mp4, and will instead produce a .png containing only the last frame of the intended animation.
## Expected behavior
<!-- Add a clear and concise description of what you expected to happen. -->
If there are some Scenes with animations and some still Scenes in a file, when rendering all Scenes at once, manim should decide for each Scene whether to render to a .png or to an .mp4, based on whether there is animation or not.
## How to reproduce the issue
<!-- Provide a piece of code illustrating the undesired behavior. -->
<details><summary>Code for reproducing the problem</summary>
```py
Paste your code here.
```
</details>
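
A minimal file of this shape (hypothetical scene names; standard `manim` imports assumed) should exhibit the behavior described above — one still Scene followed by one animated Scene, rendered with `manim animations.py -ql -a`:

```py
from manim import Scene, Circle, Create

class StillScene(Scene):
    def construct(self):
        self.add(Circle())  # no animation: a single .png frame is the expected output

class AnimatedScene(Scene):
    def construct(self):
        self.play(Create(Circle()))  # has animation: an .mp4 is the expected output
```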
## Additional media files
<!-- Paste in the files manim produced on rendering the code above. -->
<details><summary>Images/GIFs</summary>
<!-- PASTE MEDIA HERE -->
</details>
## Logs
<details><summary>Terminal output</summary>
<!-- Add "-v DEBUG" when calling manim to generate more detailed logs -->
```
PASTE HERE OR PROVIDE LINK TO https://pastebin.com/ OR SIMILAR
```
<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->
</details>
## System specifications
<details><summary>System Details</summary>
- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)):
- RAM:
- Python version (`python/py/python3 --version`):
- Installed modules (provide output from `pip list`):
```
PASTE HERE
```
</details>
<details><summary>LaTeX details</summary>
+ LaTeX distribution (e.g. TeX Live 2020):
+ Installed LaTeX packages:
<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->
</details>
<details><summary>FFMPEG</summary>
Output of `ffmpeg -version`:
```
PASTE HERE
```
</details>
## Additional comments
<!-- Add further context that you think might be relevant for this issue here. -->
</issue>
<code>
[start of manim/cli/render/commands.py]
1 """Manim's default subcommand, render.
2
3 Manim's render subcommand is accessed in the command-line interface via
4 ``manim``, but can be more explicitly accessed with ``manim render``. Here you
5 can specify options, and arguments for the render command.
6
7 """
8 from __future__ import annotations
9
10 import json
11 import sys
12 from pathlib import Path
13
14 import click
15 import cloup
16 import requests
17
18 from ... import __version__, config, console, error_console, logger
19 from ...constants import EPILOG
20 from ...utils.module_ops import scene_classes_from_file
21 from .ease_of_access_options import ease_of_access_options
22 from .global_options import global_options
23 from .output_options import output_options
24 from .render_options import render_options
25
26
27 @cloup.command(
28 context_settings=None,
29 no_args_is_help=True,
30 epilog=EPILOG,
31 )
32 @click.argument("file", type=Path, required=True)
33 @click.argument("scene_names", required=False, nargs=-1)
34 @global_options
35 @output_options
36 @render_options # type: ignore
37 @ease_of_access_options
38 def render(
39 **args,
40 ):
41 """Render SCENE(S) from the input FILE.
42
43 FILE is the file path of the script or a config file.
44
45 SCENES is an optional list of scenes in the file.
46 """
47
48 if args["use_opengl_renderer"]:
49 logger.warning(
50 "--use_opengl_renderer is deprecated, please use --renderer=opengl instead!",
51 )
52 args["renderer"] = "opengl"
53
54 if args["save_as_gif"]:
55 logger.warning("--save_as_gif is deprecated, please use --format=gif instead!")
56 args["format"] = "gif"
57
58 if args["save_pngs"]:
59 logger.warning("--save_pngs is deprecated, please use --format=png instead!")
60 args["format"] = "png"
61
62 if args["show_in_file_browser"]:
63 logger.warning(
64 "The short form of show_in_file_browser is deprecated and will be moved to support --format.",
65 )
66
67 class ClickArgs:
68 def __init__(self, args):
69 for name in args:
70 setattr(self, name, args[name])
71
72 def _get_kwargs(self):
73 return list(self.__dict__.items())
74
75 def __eq__(self, other):
76 if not isinstance(other, ClickArgs):
77 return NotImplemented
78 return vars(self) == vars(other)
79
80 def __contains__(self, key):
81 return key in self.__dict__
82
83 def __repr__(self):
84 return str(self.__dict__)
85
86 click_args = ClickArgs(args)
87 if args["jupyter"]:
88 return click_args
89
90 config.digest_args(click_args)
91 file = Path(config.input_file)
92 if config.renderer == "opengl":
93 from manim.renderer.opengl_renderer import OpenGLRenderer
94
95 try:
96 renderer = OpenGLRenderer()
97 keep_running = True
98 while keep_running:
99 for SceneClass in scene_classes_from_file(file):
100 scene = SceneClass(renderer)
101 rerun = scene.render()
102 if rerun or config["write_all"]:
103 renderer.num_plays = 0
104 continue
105 else:
106 keep_running = False
107 break
108 if config["write_all"]:
109 keep_running = False
110
111 except Exception:
112 error_console.print_exception()
113 sys.exit(1)
114 else:
115 for SceneClass in scene_classes_from_file(file):
116 try:
117 scene = SceneClass()
118 scene.render()
119 except Exception:
120 error_console.print_exception()
121 sys.exit(1)
122
123 if config.notify_outdated_version:
124 manim_info_url = "https://pypi.org/pypi/manim/json"
125 warn_prompt = "Cannot check if latest release of manim is installed"
126 req_info = {}
127
128 try:
129 req_info = requests.get(manim_info_url)
130 req_info.raise_for_status()
131
132 stable = req_info.json()["info"]["version"]
133 if stable != __version__:
134 console.print(
135 f"You are using manim version [red]v{__version__}[/red], but version [green]v{stable}[/green] is available.",
136 )
137 console.print(
138 "You should consider upgrading via [yellow]pip install -U manim[/yellow]",
139 )
140 except requests.exceptions.HTTPError:
141 logger.debug(f"HTTP Error: {warn_prompt}")
142 except requests.exceptions.ConnectionError:
143 logger.debug(f"Connection Error: {warn_prompt}")
144 except requests.exceptions.Timeout:
145 logger.debug(f"Timed Out: {warn_prompt}")
146 except json.JSONDecodeError:
147 logger.debug(warn_prompt)
148 logger.debug(f"Error decoding JSON from {manim_info_url}")
149 except Exception:
150 logger.debug(f"Something went wrong: {warn_prompt}")
151
152 return args
153
[end of manim/cli/render/commands.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/manim/cli/render/commands.py b/manim/cli/render/commands.py
--- a/manim/cli/render/commands.py
+++ b/manim/cli/render/commands.py
@@ -16,6 +16,7 @@
import requests
from ... import __version__, config, console, error_console, logger
+from ..._config import tempconfig
from ...constants import EPILOG
from ...utils.module_ops import scene_classes_from_file
from .ease_of_access_options import ease_of_access_options
@@ -97,8 +98,9 @@
keep_running = True
while keep_running:
for SceneClass in scene_classes_from_file(file):
- scene = SceneClass(renderer)
- rerun = scene.render()
+ with tempconfig(config):
+ scene = SceneClass(renderer)
+ rerun = scene.render()
if rerun or config["write_all"]:
renderer.num_plays = 0
continue
@@ -114,8 +116,9 @@
else:
for SceneClass in scene_classes_from_file(file):
try:
- scene = SceneClass()
- scene.render()
+ with tempconfig(config):
+ scene = SceneClass()
+ scene.render()
except Exception:
error_console.print_exception()
sys.exit(1)
| {"golden_diff": "diff --git a/manim/cli/render/commands.py b/manim/cli/render/commands.py\n--- a/manim/cli/render/commands.py\n+++ b/manim/cli/render/commands.py\n@@ -16,6 +16,7 @@\n import requests\n \n from ... import __version__, config, console, error_console, logger\n+from ..._config import tempconfig\n from ...constants import EPILOG\n from ...utils.module_ops import scene_classes_from_file\n from .ease_of_access_options import ease_of_access_options\n@@ -97,8 +98,9 @@\n keep_running = True\n while keep_running:\n for SceneClass in scene_classes_from_file(file):\n- scene = SceneClass(renderer)\n- rerun = scene.render()\n+ with tempconfig(config):\n+ scene = SceneClass(renderer)\n+ rerun = scene.render()\n if rerun or config[\"write_all\"]:\n renderer.num_plays = 0\n continue\n@@ -114,8 +116,9 @@\n else:\n for SceneClass in scene_classes_from_file(file):\n try:\n- scene = SceneClass()\n- scene.render()\n+ with tempconfig(config):\n+ scene = SceneClass()\n+ scene.render()\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n", "issue": "Renderer only uses one rendering mode for all Scenes in a file\n## Description of bug / unexpected behavior\r\n<!-- Add a clear and concise description of the problem you encountered. -->\r\n\r\nWhen running `manim animations.py -ql -a`, all of the Scenes are rendered as _either_ .png files _or_ .mp4 files.\r\nFor example, if the first Scene in 'animations.py' has no animation, then manim will decide to render that Scene to a .png.\r\nHowever, then if the next Scene has some animation, then manim will not intelligently switch to rendering .mp4, and instead will produce a .png containing the last frame of the intended animation.\r\n\r\n\r\n## Expected behavior\r\n<!-- Add a clear and concise description of what you expected to happen. -->\r\n\r\nIf there are some Scenes with animations and some still Scenes in a file, when rendering all Scenes at once, manim should decide for each Scene whether to render to a .png or to an .mp4, based on whether there is animation or not.\r\n\r\n\r\n## How to reproduce the issue\r\n<!-- Provide a piece of code illustrating the undesired behavior. -->\r\n\r\n<details><summary>Code for reproducing the problem</summary>\r\n\r\n```py\r\nPaste your code here.\r\n```\r\n\r\n</details>\r\n\r\n\r\n## Additional media files\r\n<!-- Paste in the files manim produced on rendering the code above. -->\r\n\r\n<details><summary>Images/GIFs</summary>\r\n\r\n<!-- PASTE MEDIA HERE -->\r\n\r\n</details>\r\n\r\n\r\n## Logs\r\n<details><summary>Terminal output</summary>\r\n<!-- Add \"-v DEBUG\" when calling manim to generate more detailed logs -->\r\n\r\n```\r\nPASTE HERE OR PROVIDE LINK TO https://pastebin.com/ OR SIMILAR\r\n```\r\n\r\n<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->\r\n\r\n</details>\r\n\r\n\r\n## System specifications\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)):\r\n- RAM:\r\n- Python version (`python/py/python3 --version`):\r\n- Installed modules (provide output from `pip list`):\r\n```\r\nPASTE HERE\r\n```\r\n</details>\r\n\r\n<details><summary>LaTeX details</summary>\r\n\r\n+ LaTeX distribution (e.g. 
TeX Live 2020):\r\n+ Installed LaTeX packages:\r\n<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->\r\n</details>\r\n\r\n<details><summary>FFMPEG</summary>\r\n\r\nOutput of `ffmpeg -version`:\r\n\r\n```\r\nPASTE HERE\r\n```\r\n</details>\r\n\r\n## Additional comments\r\n<!-- Add further context that you think might be relevant for this issue here. -->\r\n\n", "before_files": [{"content": "\"\"\"Manim's default subcommand, render.\n\nManim's render subcommand is accessed in the command-line interface via\n``manim``, but can be more explicitly accessed with ``manim render``. Here you\ncan specify options, and arguments for the render command.\n\n\"\"\"\nfrom __future__ import annotations\n\nimport json\nimport sys\nfrom pathlib import Path\n\nimport click\nimport cloup\nimport requests\n\nfrom ... import __version__, config, console, error_console, logger\nfrom ...constants import EPILOG\nfrom ...utils.module_ops import scene_classes_from_file\nfrom .ease_of_access_options import ease_of_access_options\nfrom .global_options import global_options\nfrom .output_options import output_options\nfrom .render_options import render_options\n\n\[email protected](\n context_settings=None,\n no_args_is_help=True,\n epilog=EPILOG,\n)\[email protected](\"file\", type=Path, required=True)\[email protected](\"scene_names\", required=False, nargs=-1)\n@global_options\n@output_options\n@render_options # type: ignore\n@ease_of_access_options\ndef render(\n **args,\n):\n \"\"\"Render SCENE(S) from the input FILE.\n\n FILE is the file path of the script or a config file.\n\n SCENES is an optional list of scenes in the file.\n \"\"\"\n\n if args[\"use_opengl_renderer\"]:\n logger.warning(\n \"--use_opengl_renderer is deprecated, please use --renderer=opengl instead!\",\n )\n args[\"renderer\"] = \"opengl\"\n\n if args[\"save_as_gif\"]:\n logger.warning(\"--save_as_gif is deprecated, please use --format=gif instead!\")\n args[\"format\"] = \"gif\"\n\n if args[\"save_pngs\"]:\n logger.warning(\"--save_pngs is deprecated, please use --format=png instead!\")\n args[\"format\"] = \"png\"\n\n if args[\"show_in_file_browser\"]:\n logger.warning(\n \"The short form of show_in_file_browser is deprecated and will be moved to support --format.\",\n )\n\n class ClickArgs:\n def __init__(self, args):\n for name in args:\n setattr(self, name, args[name])\n\n def _get_kwargs(self):\n return list(self.__dict__.items())\n\n def __eq__(self, other):\n if not isinstance(other, ClickArgs):\n return NotImplemented\n return vars(self) == vars(other)\n\n def __contains__(self, key):\n return key in self.__dict__\n\n def __repr__(self):\n return str(self.__dict__)\n\n click_args = ClickArgs(args)\n if args[\"jupyter\"]:\n return click_args\n\n config.digest_args(click_args)\n file = Path(config.input_file)\n if config.renderer == \"opengl\":\n from manim.renderer.opengl_renderer import OpenGLRenderer\n\n try:\n renderer = OpenGLRenderer()\n keep_running = True\n while keep_running:\n for SceneClass in scene_classes_from_file(file):\n scene = SceneClass(renderer)\n rerun = scene.render()\n if rerun or config[\"write_all\"]:\n renderer.num_plays = 0\n continue\n else:\n keep_running = False\n break\n if config[\"write_all\"]:\n keep_running = False\n\n except Exception:\n error_console.print_exception()\n sys.exit(1)\n else:\n for SceneClass in scene_classes_from_file(file):\n try:\n scene = SceneClass()\n scene.render()\n except Exception:\n error_console.print_exception()\n 
sys.exit(1)\n\n if config.notify_outdated_version:\n manim_info_url = \"https://pypi.org/pypi/manim/json\"\n warn_prompt = \"Cannot check if latest release of manim is installed\"\n req_info = {}\n\n try:\n req_info = requests.get(manim_info_url)\n req_info.raise_for_status()\n\n stable = req_info.json()[\"info\"][\"version\"]\n if stable != __version__:\n console.print(\n f\"You are using manim version [red]v{__version__}[/red], but version [green]v{stable}[/green] is available.\",\n )\n console.print(\n \"You should consider upgrading via [yellow]pip install -U manim[/yellow]\",\n )\n except requests.exceptions.HTTPError:\n logger.debug(f\"HTTP Error: {warn_prompt}\")\n except requests.exceptions.ConnectionError:\n logger.debug(f\"Connection Error: {warn_prompt}\")\n except requests.exceptions.Timeout:\n logger.debug(f\"Timed Out: {warn_prompt}\")\n except json.JSONDecodeError:\n logger.debug(warn_prompt)\n logger.debug(f\"Error decoding JSON from {manim_info_url}\")\n except Exception:\n logger.debug(f\"Something went wrong: {warn_prompt}\")\n\n return args\n", "path": "manim/cli/render/commands.py"}]} | 2,489 | 283 |
gh_patches_debug_22804 | rasdani/github-patches | git_diff | pypi__warehouse-4184 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Expose project_urls in JSON API
https://packaging.python.org/tutorials/distributing-packages/#project-urls
Related to #3798 / #3820
I realized project_urls is not currently exposed by the JSON API. I propose adding it.
Though the keys in the project_urls dict can be anything, they're fairly standardized, enough to be useful when querying for them over an API. For example, [Flask's API response](https://pypi.org/pypi/Flask/json) lists its home_page as https://www.palletsprojects.com/p/flask/ (not its GitHub account, which is fairly typical), and puts its GitHub link in `project_urls['Code']`, which is not currently in the API response.
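
As a sketch of how a client might consume the proposed field — assuming `project_urls` is added to the `info` object as described, and using the `Code` key that Flask happens to define:

```python
import requests

info = requests.get("https://pypi.org/pypi/Flask/json").json()["info"]
project_urls = info.get("project_urls") or {}  # proposed field; not in the API today
print(project_urls.get("Code"))  # e.g. Flask's GitHub repository URL
```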
</issue>
<code>
[start of warehouse/legacy/api/json.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
14 from pyramid.view import view_config
15 from sqlalchemy.orm import Load
16 from sqlalchemy.orm.exc import NoResultFound
17
18 from warehouse.cache.http import cache_control
19 from warehouse.cache.origin import origin_cache
20 from warehouse.packaging.models import File, Release, Project
21
22
23 # Generate appropriate CORS headers for the JSON endpoint.
24 # We want to allow Cross-Origin requests here so that users can interact
25 # with these endpoints via XHR/Fetch APIs in the browser.
26 _CORS_HEADERS = {
27 "Access-Control-Allow-Origin": "*",
28 "Access-Control-Allow-Headers": ", ".join(
29 [
30 "Content-Type",
31 "If-Match",
32 "If-Modified-Since",
33 "If-None-Match",
34 "If-Unmodified-Since",
35 ]
36 ),
37 "Access-Control-Allow-Methods": "GET",
38 "Access-Control-Max-Age": "86400", # 1 day.
39 "Access-Control-Expose-Headers": ", ".join(["X-PyPI-Last-Serial"]),
40 }
41
42
43 @view_config(
44 route_name="legacy.api.json.project",
45 context=Project,
46 renderer="json",
47 decorator=[
48 cache_control(15 * 60), # 15 minutes
49 origin_cache(
50 1 * 24 * 60 * 60, # 1 day
51 stale_while_revalidate=5 * 60, # 5 minutes
52 stale_if_error=1 * 24 * 60 * 60, # 1 day
53 ),
54 ],
55 )
56 def json_project(project, request):
57 if project.name != request.matchdict.get("name", project.name):
58 return HTTPMovedPermanently(
59 request.current_route_path(name=project.name), headers=_CORS_HEADERS
60 )
61
62 try:
63 release = (
64 request.db.query(Release)
65 .filter(Release.project == project)
66 .order_by(Release.is_prerelease.nullslast(), Release._pypi_ordering.desc())
67 .limit(1)
68 .one()
69 )
70 except NoResultFound:
71 return HTTPNotFound(headers=_CORS_HEADERS)
72
73 return json_release(release, request)
74
75
76 @view_config(
77 route_name="legacy.api.json.release",
78 context=Release,
79 renderer="json",
80 decorator=[
81 cache_control(15 * 60), # 15 minutes
82 origin_cache(
83 1 * 24 * 60 * 60, # 1 day
84 stale_while_revalidate=5 * 60, # 5 minutes
85 stale_if_error=1 * 24 * 60 * 60, # 1 day
86 ),
87 ],
88 )
89 def json_release(release, request):
90 project = release.project
91
92 if project.name != request.matchdict.get("name", project.name):
93 return HTTPMovedPermanently(
94 request.current_route_path(name=project.name), headers=_CORS_HEADERS
95 )
96
97 # Apply CORS headers.
98 request.response.headers.update(_CORS_HEADERS)
99
100 # Get the latest serial number for this project.
101 request.response.headers["X-PyPI-Last-Serial"] = str(project.last_serial)
102
103 # Get all of the releases and files for this project.
104 release_files = (
105 request.db.query(Release, File)
106 .options(Load(Release).load_only("version"))
107 .outerjoin(File)
108 .filter(Release.project == project)
109 .order_by(Release._pypi_ordering.desc(), File.filename)
110 .all()
111 )
112
113 # Map our releases + files into a dictionary that maps each release to a
114 # list of all its files.
115 releases = {}
116 for r, file_ in release_files:
117 files = releases.setdefault(r, [])
118 if file_ is not None:
119 files.append(file_)
120
121 # Serialize our database objects to match the way that PyPI legacy
122 # presented this data.
123 releases = {
124 r.version: [
125 {
126 "filename": f.filename,
127 "packagetype": f.packagetype,
128 "python_version": f.python_version,
129 "has_sig": f.has_signature,
130 "comment_text": f.comment_text,
131 "md5_digest": f.md5_digest,
132 "digests": {"md5": f.md5_digest, "sha256": f.sha256_digest},
133 "size": f.size,
134 # TODO: Remove this once we've had a long enough time with it
135 # here to consider it no longer in use.
136 "downloads": -1,
137 "upload_time": f.upload_time.strftime("%Y-%m-%dT%H:%M:%S"),
138 "url": request.route_url("packaging.file", path=f.path),
139 }
140 for f in fs
141 ]
142 for r, fs in releases.items()
143 }
144
145 return {
146 "info": {
147 "name": project.name,
148 "version": release.version,
149 "summary": release.summary,
150 "description_content_type": release.description_content_type,
151 "description": release.description,
152 "keywords": release.keywords,
153 "license": release.license,
154 "classifiers": list(release.classifiers),
155 "author": release.author,
156 "author_email": release.author_email,
157 "maintainer": release.maintainer,
158 "maintainer_email": release.maintainer_email,
159 "requires_python": release.requires_python,
160 "platform": release.platform,
161 "downloads": {"last_day": -1, "last_week": -1, "last_month": -1},
162 "package_url": request.route_url("packaging.project", name=project.name),
163 "project_url": request.route_url("packaging.project", name=project.name),
164 "release_url": request.route_url(
165 "packaging.release", name=project.name, version=release.version
166 ),
167 "requires_dist": (
168 list(release.requires_dist) if release.requires_dist else None
169 ),
170 "docs_url": project.documentation_url,
171 "bugtrack_url": project.bugtrack_url,
172 "home_page": release.home_page,
173 "download_url": release.download_url,
174 },
175 "urls": releases[release.version],
176 "releases": releases,
177 "last_serial": project.last_serial,
178 }
179
[end of warehouse/legacy/api/json.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/legacy/api/json.py b/warehouse/legacy/api/json.py
--- a/warehouse/legacy/api/json.py
+++ b/warehouse/legacy/api/json.py
@@ -10,6 +10,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from collections import OrderedDict
+
from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
from pyramid.view import view_config
from sqlalchemy.orm import Load
@@ -161,6 +163,7 @@
"downloads": {"last_day": -1, "last_week": -1, "last_month": -1},
"package_url": request.route_url("packaging.project", name=project.name),
"project_url": request.route_url("packaging.project", name=project.name),
+ "project_urls": OrderedDict(release.urls) if release.urls else None,
"release_url": request.route_url(
"packaging.release", name=project.name, version=release.version
),
| {"golden_diff": "diff --git a/warehouse/legacy/api/json.py b/warehouse/legacy/api/json.py\n--- a/warehouse/legacy/api/json.py\n+++ b/warehouse/legacy/api/json.py\n@@ -10,6 +10,8 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from collections import OrderedDict\n+\n from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\n from pyramid.view import view_config\n from sqlalchemy.orm import Load\n@@ -161,6 +163,7 @@\n \"downloads\": {\"last_day\": -1, \"last_week\": -1, \"last_month\": -1},\n \"package_url\": request.route_url(\"packaging.project\", name=project.name),\n \"project_url\": request.route_url(\"packaging.project\", name=project.name),\n+ \"project_urls\": OrderedDict(release.urls) if release.urls else None,\n \"release_url\": request.route_url(\n \"packaging.release\", name=project.name, version=release.version\n ),\n", "issue": "Expose project_urls in JSON API\nhttps://packaging.python.org/tutorials/distributing-packages/#project-urls\r\n\r\nRelated to #3798 / #3820\r\n\r\nI realized project_urls is not currently exposed by the JSON API. I propose adding it.\r\n\r\nThough the keys in the project_urls dict can be anything, they're fairly standardized, enough to be useful when querying for them over and API. For example, [Flask's API response](https://pypi.org/pypi/Flask/json) lists its home_page as https://www.palletsprojects.com/p/flask/ (not it's github account which is fairly typical), and puts it's GitHub link in `project_urls['Code']`, which is not currently in the API response.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm import Load\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import File, Release, Project\n\n\n# Generate appropriate CORS headers for the JSON endpoint.\n# We want to allow Cross-Origin requests here so that users can interact\n# with these endpoints via XHR/Fetch APIs in the browser.\n_CORS_HEADERS = {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \", \".join(\n [\n \"Content-Type\",\n \"If-Match\",\n \"If-Modified-Since\",\n \"If-None-Match\",\n \"If-Unmodified-Since\",\n ]\n ),\n \"Access-Control-Allow-Methods\": \"GET\",\n \"Access-Control-Max-Age\": \"86400\", # 1 day.\n \"Access-Control-Expose-Headers\": \", \".join([\"X-PyPI-Last-Serial\"]),\n}\n\n\n@view_config(\n route_name=\"legacy.api.json.project\",\n context=Project,\n renderer=\"json\",\n decorator=[\n cache_control(15 * 60), # 15 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef json_project(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n 
return HTTPMovedPermanently(\n request.current_route_path(name=project.name), headers=_CORS_HEADERS\n )\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release.is_prerelease.nullslast(), Release._pypi_ordering.desc())\n .limit(1)\n .one()\n )\n except NoResultFound:\n return HTTPNotFound(headers=_CORS_HEADERS)\n\n return json_release(release, request)\n\n\n@view_config(\n route_name=\"legacy.api.json.release\",\n context=Release,\n renderer=\"json\",\n decorator=[\n cache_control(15 * 60), # 15 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef json_release(release, request):\n project = release.project\n\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name), headers=_CORS_HEADERS\n )\n\n # Apply CORS headers.\n request.response.headers.update(_CORS_HEADERS)\n\n # Get the latest serial number for this project.\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n\n # Get all of the releases and files for this project.\n release_files = (\n request.db.query(Release, File)\n .options(Load(Release).load_only(\"version\"))\n .outerjoin(File)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc(), File.filename)\n .all()\n )\n\n # Map our releases + files into a dictionary that maps each release to a\n # list of all its files.\n releases = {}\n for r, file_ in release_files:\n files = releases.setdefault(r, [])\n if file_ is not None:\n files.append(file_)\n\n # Serialize our database objects to match the way that PyPI legacy\n # presented this data.\n releases = {\n r.version: [\n {\n \"filename\": f.filename,\n \"packagetype\": f.packagetype,\n \"python_version\": f.python_version,\n \"has_sig\": f.has_signature,\n \"comment_text\": f.comment_text,\n \"md5_digest\": f.md5_digest,\n \"digests\": {\"md5\": f.md5_digest, \"sha256\": f.sha256_digest},\n \"size\": f.size,\n # TODO: Remove this once we've had a long enough time with it\n # here to consider it no longer in use.\n \"downloads\": -1,\n \"upload_time\": f.upload_time.strftime(\"%Y-%m-%dT%H:%M:%S\"),\n \"url\": request.route_url(\"packaging.file\", path=f.path),\n }\n for f in fs\n ]\n for r, fs in releases.items()\n }\n\n return {\n \"info\": {\n \"name\": project.name,\n \"version\": release.version,\n \"summary\": release.summary,\n \"description_content_type\": release.description_content_type,\n \"description\": release.description,\n \"keywords\": release.keywords,\n \"license\": release.license,\n \"classifiers\": list(release.classifiers),\n \"author\": release.author,\n \"author_email\": release.author_email,\n \"maintainer\": release.maintainer,\n \"maintainer_email\": release.maintainer_email,\n \"requires_python\": release.requires_python,\n \"platform\": release.platform,\n \"downloads\": {\"last_day\": -1, \"last_week\": -1, \"last_month\": -1},\n \"package_url\": request.route_url(\"packaging.project\", name=project.name),\n \"project_url\": request.route_url(\"packaging.project\", name=project.name),\n \"release_url\": request.route_url(\n \"packaging.release\", name=project.name, version=release.version\n ),\n \"requires_dist\": (\n list(release.requires_dist) if release.requires_dist else None\n ),\n \"docs_url\": project.documentation_url,\n \"bugtrack_url\": project.bugtrack_url,\n \"home_page\": release.home_page,\n 
\"download_url\": release.download_url,\n },\n \"urls\": releases[release.version],\n \"releases\": releases,\n \"last_serial\": project.last_serial,\n }\n", "path": "warehouse/legacy/api/json.py"}]} | 2,609 | 224 |
gh_patches_debug_8018 | rasdani/github-patches | git_diff | kymatio__kymatio-289 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC 3D benchmarks
Currently none are listed in the user's guide. We should probably include something here.
</issue>
<code>
[start of kymatio/scattering2d/scattering2d.py]
1 # Authors: Edouard Oyallon
2 # Scientific Ancestry: Edouard Oyallon, Laurent Sifre, Joan Bruna
3
4
5 __all__ = ['Scattering2D']
6
7 import torch
8 from .backend import cdgmm, Modulus, SubsampleFourier, fft, Pad, unpad
9 from .filter_bank import filter_bank
10 from .utils import compute_padding
11
12
13 class Scattering2D(object):
14 """Main module implementing the scattering transform in 2D.
15 The scattering transform computes two wavelet transform followed
16 by modulus non-linearity.
17 It can be summarized as::
18
19 S_J x = [S_J^0 x, S_J^1 x, S_J^2 x]
20
21 where::
22
23 S_J^0 x = x * phi_J
24 S_J^1 x = [|x * psi^1_lambda| * phi_J]_lambda
25 S_J^2 x = [||x * psi^1_lambda| * psi^2_mu| * phi_J]_{lambda, mu}
26
27 where * denotes the convolution (in space), phi_J is a low pass
28 filter, psi^1_lambda is a family of band pass
29 filters and psi^2_mu is another family of band pass filters.
30 Only Morlet filters are used in this implementation.
31 Convolutions are efficiently performed in the Fourier domain
32 with this implementation.
33
34 Example
35 -------
36 # 1) Define a Scattering object as:
37 s = Scattering2D(J, M, N)
38 # where (M, N) are the image sizes and 2**J the scale of the scattering
39 # 2) Forward on an input Variable x of shape B x 1 x M x N,
40 # where B is the batch size.
41 result_s = s(x)
42
43 Parameters
44 ----------
45 J : int
46 logscale of the scattering
47 shape : tuple of int
48 spatial support (M, N) of the input
49 L : int, optional
50 number of angles used for the wavelet transform
51 max_order : int, optional
52 The maximum order of scattering coefficients to compute. Must be either
53 `1` or `2`. Defaults to `2`.
54 pre_pad : boolean, optional
55 controls the padding: if set to False, a symmetric padding is applied
56 on the signal. If set to true, the software will assume the signal was
57 padded externally.
58
59 Attributes
60 ----------
61 J : int
62 logscale of the scattering
63 shape : tuple of int
64 spatial support (M, N) of the input
65 L : int, optional
66 number of angles used for the wavelet transform
67 max_order : int, optional
68 The maximum order of scattering coefficients to compute.
69 Must be either equal to `1` or `2`. Defaults to `2`.
70 pre_pad : boolean
71 controls the padding
72 Psi : dictionary
73 containing the wavelets filters at all resolutions. See
74 filter_bank.filter_bank for an exact description.
75 Phi : dictionary
76 containing the low-pass filters at all resolutions. See
77 filter_bank.filter_bank for an exact description.
78 M_padded, N_padded : int
79 spatial support of the padded input
80
81 Notes
82 -----
83 The design of the filters is optimized for the value L = 8
84
85 pre_pad is particularly useful when doing crops of a bigger
86 image because the padding is then extremely accurate. Defaults
87 to False.
88
89 """
90 def __init__(self, J, shape, L=8, max_order=2, pre_pad=False):
91 self.J, self.L = J, L
92 self.pre_pad = pre_pad
93 self.max_order = max_order
94 self.shape = shape
95
96 self.build()
97
98 def build(self):
99 self.M, self.N = self.shape
100 self.modulus = Modulus()
101 self.pad = Pad(2**self.J, pre_pad = self.pre_pad)
102 self.subsample_fourier = SubsampleFourier()
103 # Create the filters
104 self.M_padded, self.N_padded = compute_padding(self.M, self.N, self.J)
105 filters = filter_bank(self.M_padded, self.N_padded, self.J, self.L)
106 self.Psi = filters['psi']
107 self.Phi = [filters['phi'][j] for j in range(self.J)]
108
109 def _type(self, _type):
110 for key, item in enumerate(self.Psi):
111 for key2, item2 in self.Psi[key].items():
112 if torch.is_tensor(item2):
113 self.Psi[key][key2] = item2.type(_type)
114 self.Phi = [v.type(_type) for v in self.Phi]
115 self.pad.padding_module.type(_type)
116 return self
117
118 def cuda(self):
119 """
120 Moves the parameters of the scattering to the GPU
121 """
122 return self._type(torch.cuda.FloatTensor)
123
124 def cpu(self):
125 """
126 Moves the parameters of the scattering to the CPU
127 """
128 return self._type(torch.FloatTensor)
129
130 def forward(self, input):
131 """Forward pass of the scattering.
132
133 Parameters
134 ----------
135 input : tensor
136 tensor with 3 dimensions :math:`(B, C, M, N)` where :math:`(B, C)` are arbitrary.
137 :math:`B` typically is the batch size, whereas :math:`C` is the number of input channels.
138
139 Returns
140 -------
141 S : tensor
142 scattering of the input, a 4D tensor :math:`(B, C, D, Md, Nd)` where :math:`D` corresponds
143 to a new channel dimension and :math:`(Md, Nd)` are downsampled sizes by a factor :math:`2^J`.
144
145 """
146 if not torch.is_tensor(input):
147 raise(TypeError('The input should be a torch.cuda.FloatTensor, a torch.FloatTensor or a torch.DoubleTensor'))
148
149 if len(input.shape) < 2:
150 raise (RuntimeError('Input tensor must have at least two '
151 'dimensions'))
152
153 if (not input.is_contiguous()):
154 raise (RuntimeError('Tensor must be contiguous!'))
155
156 if((input.size(-1)!=self.N or input.size(-2)!=self.M) and not self.pre_pad):
157 raise (RuntimeError('Tensor must be of spatial size (%i,%i)!'%(self.M,self.N)))
158
159 if ((input.size(-1) != self.N_padded or input.size(-2) != self.M_padded) and self.pre_pad):
160 raise (RuntimeError('Padded tensor must be of spatial size (%i,%i)!' % (self.M_padded, self.N_padded)))
161
162 batch_shape = input.shape[:-2]
163 signal_shape = input.shape[-2:]
164
165 input = input.reshape((-1, 1) + signal_shape)
166
167 J = self.J
168 phi = self.Phi
169 psi = self.Psi
170
171 subsample_fourier = self.subsample_fourier
172 modulus = self.modulus
173 pad = self.pad
174 order0_size = 1
175 order1_size = self.L * J
176 order2_size = self.L ** 2 * J * (J - 1) // 2
177 output_size = order0_size + order1_size
178
179 if self.max_order == 2:
180 output_size += order2_size
181
182 S = input.new(input.size(0),
183 input.size(1),
184 output_size,
185 self.M_padded//(2**J)-2,
186 self.N_padded//(2**J)-2)
187 U_r = pad(input)
188 U_0_c = fft(U_r, 'C2C') # We trick here with U_r and U_2_c
189
190 # First low pass filter
191 U_1_c = subsample_fourier(cdgmm(U_0_c, phi[0]), k=2**J)
192
193 U_J_r = fft(U_1_c, 'C2R')
194
195 S[..., 0, :, :] = unpad(U_J_r)
196 n_order1 = 1
197 n_order2 = 1 + order1_size
198
199 for n1 in range(len(psi)):
200 j1 = psi[n1]['j']
201 U_1_c = cdgmm(U_0_c, psi[n1][0])
202 if(j1 > 0):
203 U_1_c = subsample_fourier(U_1_c, k=2 ** j1)
204 U_1_c = fft(U_1_c, 'C2C', inverse=True)
205 U_1_c = fft(modulus(U_1_c), 'C2C')
206
207 # Second low pass filter
208 U_2_c = subsample_fourier(cdgmm(U_1_c, phi[j1]), k=2**(J-j1))
209 U_J_r = fft(U_2_c, 'C2R')
210 S[..., n_order1, :, :] = unpad(U_J_r)
211 n_order1 += 1
212
213 if self.max_order == 2:
214 for n2 in range(len(psi)):
215 j2 = psi[n2]['j']
216 if(j1 < j2):
217 U_2_c = subsample_fourier(cdgmm(U_1_c, psi[n2][j1]), k=2 ** (j2-j1))
218 U_2_c = fft(U_2_c, 'C2C', inverse=True)
219 U_2_c = fft(modulus(U_2_c), 'C2C')
220
221 # Third low pass filter
222 U_2_c = subsample_fourier(cdgmm(U_2_c, phi[j2]), k=2 ** (J-j2))
223 U_J_r = fft(U_2_c, 'C2R')
224
225 S[..., n_order2, :, :] = unpad(U_J_r)
226 n_order2 += 1
227
228 scattering_shape = S.shape[-3:]
229 S = S.reshape(batch_shape + scattering_shape)
230
231 return S
232
233 def __call__(self, input):
234 return self.forward(input)
235
[end of kymatio/scattering2d/scattering2d.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kymatio/scattering2d/scattering2d.py b/kymatio/scattering2d/scattering2d.py
--- a/kymatio/scattering2d/scattering2d.py
+++ b/kymatio/scattering2d/scattering2d.py
@@ -34,9 +34,9 @@
Example
-------
# 1) Define a Scattering object as:
- s = Scattering2D(J, M, N)
+ s = Scattering2D(J, shape=(M, N))
# where (M, N) are the image sizes and 2**J the scale of the scattering
- # 2) Forward on an input Variable x of shape B x 1 x M x N,
+ # 2) Forward on an input Tensor x of shape B x M x N,
# where B is the batch size.
result_s = s(x)
| {"golden_diff": "diff --git a/kymatio/scattering2d/scattering2d.py b/kymatio/scattering2d/scattering2d.py\n--- a/kymatio/scattering2d/scattering2d.py\n+++ b/kymatio/scattering2d/scattering2d.py\n@@ -34,9 +34,9 @@\n Example\n -------\n # 1) Define a Scattering object as:\n- s = Scattering2D(J, M, N)\n+ s = Scattering2D(J, shape=(M, N))\n # where (M, N) are the image sizes and 2**J the scale of the scattering\n- # 2) Forward on an input Variable x of shape B x 1 x M x N,\n+ # 2) Forward on an input Tensor x of shape B x M x N,\n # where B is the batch size.\n result_s = s(x)\n", "issue": "DOC 3D benchmarks\nCurrently none are listed in the user's guide. We should probably include something here.\n", "before_files": [{"content": "# Authors: Edouard Oyallon\n# Scientific Ancestry: Edouard Oyallon, Laurent Sifre, Joan Bruna\n\n\n__all__ = ['Scattering2D']\n\nimport torch\nfrom .backend import cdgmm, Modulus, SubsampleFourier, fft, Pad, unpad\nfrom .filter_bank import filter_bank\nfrom .utils import compute_padding\n\n\nclass Scattering2D(object):\n \"\"\"Main module implementing the scattering transform in 2D.\n The scattering transform computes two wavelet transform followed\n by modulus non-linearity.\n It can be summarized as::\n\n S_J x = [S_J^0 x, S_J^1 x, S_J^2 x]\n\n where::\n\n S_J^0 x = x * phi_J\n S_J^1 x = [|x * psi^1_lambda| * phi_J]_lambda\n S_J^2 x = [||x * psi^1_lambda| * psi^2_mu| * phi_J]_{lambda, mu}\n\n where * denotes the convolution (in space), phi_J is a low pass\n filter, psi^1_lambda is a family of band pass\n filters and psi^2_mu is another family of band pass filters.\n Only Morlet filters are used in this implementation.\n Convolutions are efficiently performed in the Fourier domain\n with this implementation.\n\n Example\n -------\n # 1) Define a Scattering object as:\n s = Scattering2D(J, M, N)\n # where (M, N) are the image sizes and 2**J the scale of the scattering\n # 2) Forward on an input Variable x of shape B x 1 x M x N,\n # where B is the batch size.\n result_s = s(x)\n\n Parameters\n ----------\n J : int\n logscale of the scattering\n shape : tuple of int\n spatial support (M, N) of the input\n L : int, optional\n number of angles used for the wavelet transform\n max_order : int, optional\n The maximum order of scattering coefficients to compute. Must be either\n `1` or `2`. Defaults to `2`.\n pre_pad : boolean, optional\n controls the padding: if set to False, a symmetric padding is applied\n on the signal. If set to true, the software will assume the signal was\n padded externally.\n\n Attributes\n ----------\n J : int\n logscale of the scattering\n shape : tuple of int\n spatial support (M, N) of the input\n L : int, optional\n number of angles used for the wavelet transform\n max_order : int, optional\n The maximum order of scattering coefficients to compute.\n Must be either equal to `1` or `2`. Defaults to `2`.\n pre_pad : boolean\n controls the padding\n Psi : dictionary\n containing the wavelets filters at all resolutions. See\n filter_bank.filter_bank for an exact description.\n Phi : dictionary\n containing the low-pass filters at all resolutions. See\n filter_bank.filter_bank for an exact description.\n M_padded, N_padded : int\n spatial support of the padded input\n\n Notes\n -----\n The design of the filters is optimized for the value L = 8\n\n pre_pad is particularly useful when doing crops of a bigger\n image because the padding is then extremely accurate. 
Defaults\n to False.\n\n \"\"\"\n def __init__(self, J, shape, L=8, max_order=2, pre_pad=False):\n self.J, self.L = J, L\n self.pre_pad = pre_pad\n self.max_order = max_order\n self.shape = shape\n\n self.build()\n\n def build(self):\n self.M, self.N = self.shape\n self.modulus = Modulus()\n self.pad = Pad(2**self.J, pre_pad = self.pre_pad)\n self.subsample_fourier = SubsampleFourier()\n # Create the filters\n self.M_padded, self.N_padded = compute_padding(self.M, self.N, self.J)\n filters = filter_bank(self.M_padded, self.N_padded, self.J, self.L)\n self.Psi = filters['psi']\n self.Phi = [filters['phi'][j] for j in range(self.J)]\n\n def _type(self, _type):\n for key, item in enumerate(self.Psi):\n for key2, item2 in self.Psi[key].items():\n if torch.is_tensor(item2):\n self.Psi[key][key2] = item2.type(_type)\n self.Phi = [v.type(_type) for v in self.Phi]\n self.pad.padding_module.type(_type)\n return self\n\n def cuda(self):\n \"\"\"\n Moves the parameters of the scattering to the GPU\n \"\"\"\n return self._type(torch.cuda.FloatTensor)\n\n def cpu(self):\n \"\"\"\n Moves the parameters of the scattering to the CPU\n \"\"\"\n return self._type(torch.FloatTensor)\n\n def forward(self, input):\n \"\"\"Forward pass of the scattering.\n\n Parameters\n ----------\n input : tensor\n tensor with 3 dimensions :math:`(B, C, M, N)` where :math:`(B, C)` are arbitrary.\n :math:`B` typically is the batch size, whereas :math:`C` is the number of input channels.\n\n Returns\n -------\n S : tensor\n scattering of the input, a 4D tensor :math:`(B, C, D, Md, Nd)` where :math:`D` corresponds\n to a new channel dimension and :math:`(Md, Nd)` are downsampled sizes by a factor :math:`2^J`.\n\n \"\"\"\n if not torch.is_tensor(input):\n raise(TypeError('The input should be a torch.cuda.FloatTensor, a torch.FloatTensor or a torch.DoubleTensor'))\n\n if len(input.shape) < 2:\n raise (RuntimeError('Input tensor must have at least two '\n 'dimensions'))\n\n if (not input.is_contiguous()):\n raise (RuntimeError('Tensor must be contiguous!'))\n\n if((input.size(-1)!=self.N or input.size(-2)!=self.M) and not self.pre_pad):\n raise (RuntimeError('Tensor must be of spatial size (%i,%i)!'%(self.M,self.N)))\n\n if ((input.size(-1) != self.N_padded or input.size(-2) != self.M_padded) and self.pre_pad):\n raise (RuntimeError('Padded tensor must be of spatial size (%i,%i)!' 
% (self.M_padded, self.N_padded)))\n\n batch_shape = input.shape[:-2]\n signal_shape = input.shape[-2:]\n\n input = input.reshape((-1, 1) + signal_shape)\n\n J = self.J\n phi = self.Phi\n psi = self.Psi\n\n subsample_fourier = self.subsample_fourier\n modulus = self.modulus\n pad = self.pad\n order0_size = 1\n order1_size = self.L * J\n order2_size = self.L ** 2 * J * (J - 1) // 2\n output_size = order0_size + order1_size\n\n if self.max_order == 2:\n output_size += order2_size\n\n S = input.new(input.size(0),\n input.size(1),\n output_size,\n self.M_padded//(2**J)-2,\n self.N_padded//(2**J)-2)\n U_r = pad(input)\n U_0_c = fft(U_r, 'C2C') # We trick here with U_r and U_2_c\n\n # First low pass filter\n U_1_c = subsample_fourier(cdgmm(U_0_c, phi[0]), k=2**J)\n\n U_J_r = fft(U_1_c, 'C2R')\n\n S[..., 0, :, :] = unpad(U_J_r)\n n_order1 = 1\n n_order2 = 1 + order1_size\n\n for n1 in range(len(psi)):\n j1 = psi[n1]['j']\n U_1_c = cdgmm(U_0_c, psi[n1][0])\n if(j1 > 0):\n U_1_c = subsample_fourier(U_1_c, k=2 ** j1)\n U_1_c = fft(U_1_c, 'C2C', inverse=True)\n U_1_c = fft(modulus(U_1_c), 'C2C')\n\n # Second low pass filter\n U_2_c = subsample_fourier(cdgmm(U_1_c, phi[j1]), k=2**(J-j1))\n U_J_r = fft(U_2_c, 'C2R')\n S[..., n_order1, :, :] = unpad(U_J_r)\n n_order1 += 1\n\n if self.max_order == 2:\n for n2 in range(len(psi)):\n j2 = psi[n2]['j']\n if(j1 < j2):\n U_2_c = subsample_fourier(cdgmm(U_1_c, psi[n2][j1]), k=2 ** (j2-j1))\n U_2_c = fft(U_2_c, 'C2C', inverse=True)\n U_2_c = fft(modulus(U_2_c), 'C2C')\n \n # Third low pass filter\n U_2_c = subsample_fourier(cdgmm(U_2_c, phi[j2]), k=2 ** (J-j2))\n U_J_r = fft(U_2_c, 'C2R')\n \n S[..., n_order2, :, :] = unpad(U_J_r)\n n_order2 += 1\n\n scattering_shape = S.shape[-3:]\n S = S.reshape(batch_shape + scattering_shape)\n\n return S\n\n def __call__(self, input):\n return self.forward(input)\n", "path": "kymatio/scattering2d/scattering2d.py"}]} | 3,373 | 203 |
gh_patches_debug_31934 | rasdani/github-patches | git_diff | mlflow__mlflow-1800 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] HdfsArtifactRepository list_artifacts recursively lists all items under directory
Thank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
for information on what types of issues we address.
For help with debugging your code, please refer to [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow).
Please do not delete this template unless you are sure your issue is outside its scope.
### System information
- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: No
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Centos 7
- **MLflow installed from (source or binary)**: binary
- **MLflow version (run ``mlflow --version``)**: 1.1.0
- **Python version**: 3.6.8
- **npm version, if running the dev UI**: N/A
- **Exact command to reproduce**: mlflow artifacts list -r <run id for artifacts stored on hdfs>
### Describe the problem
list_artifacts of an artifact repository is expected to only list the files directly under the provided path (see https://github.com/mlflow/mlflow/blob/4b1868719837d1844f19b6242643222549ee2794/mlflow/store/cli.py#L74). HdfsArtifactRepository instead walks all files under the given path and returns them (see https://github.com/mlflow/mlflow/blob/4b1868719837d1844f19b6242643222549ee2794/mlflow/store/hdfs_artifact_repo.py#L89).
This behavior breaks the MLflow server, as it expects the behavior specified in the CLI file.
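For contrast with the recursive walk, here is a minimal sketch of a one-level listing. It assumes the pyarrow HDFS client exposes `ls(path, detail=True)` returning `name`/`kind`/`size` per direct child; the helper name and signature are illustrative rather than MLflow API, and fully-qualified entries (e.g. `hdfs://host/...`) still need their prefix stripped before building relative paths.

```python
import posixpath

from mlflow.entities import FileInfo


def list_direct_children(hdfs, hdfs_base_path, artifact_root):
    """Sketch: FileInfo entries for the direct children of hdfs_base_path only."""
    infos = []
    if hdfs.exists(hdfs_base_path):
        # ls() is not recursive: one entry per direct child, so nested files
        # are not flattened into the result the way walk() flattens them.
        for detail in hdfs.ls(hdfs_base_path, detail=True):
            name = detail.get("name")
            # Drop any scheme/host prefix that precedes the artifact root.
            name = name[name.index(artifact_root):]
            rel_path = posixpath.relpath(name, artifact_root)
            is_dir = detail.get("kind") == "directory"
            infos.append(FileInfo(rel_path, is_dir, detail.get("size")))
    return infos
```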
### Code to reproduce issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
### Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks,
please include the full traceback. Large logs and files should be attached.
</issue>
<code>
[start of mlflow/store/hdfs_artifact_repo.py]
1 import os
2 import posixpath
3 import tempfile
4 from contextlib import contextmanager
5
6 from six.moves import urllib
7
8 from mlflow.entities import FileInfo
9 from mlflow.exceptions import MlflowException
10 from mlflow.store.artifact_repo import ArtifactRepository
11 from mlflow.utils.file_utils import mkdir, relative_path_to_artifact_path
12
13
14 class HdfsArtifactRepository(ArtifactRepository):
15 """
16 Stores artifacts on HDFS.
17
18 This repository is used with URIs of the form ``hdfs:/<path>``. The repository can only be used
19 together with the RestStore.
20 """
21
22 def __init__(self, artifact_uri):
23 self.host, self.port, self.path = _resolve_connection_params(artifact_uri)
24 super(HdfsArtifactRepository, self).__init__(artifact_uri)
25
26 def log_artifact(self, local_file, artifact_path=None):
27 """
28 Log artifact in hdfs.
29 :param local_file: source file path
30 :param artifact_path: when specified will attempt to write under artifact_uri/artifact_path
31 """
32 hdfs_base_path = _resolve_base_path(self.path, artifact_path)
33
34 with hdfs_system(host=self.host, port=self.port) as hdfs:
35 _, file_name = os.path.split(local_file)
36 destination = posixpath.join(hdfs_base_path, file_name)
37 with hdfs.open(destination, 'wb') as output:
38 output.write(open(local_file, "rb").read())
39
40 def log_artifacts(self, local_dir, artifact_path=None):
41 """
42 Log artifacts in hdfs.
43 Missing remote sub-directories will be created if needed.
44 :param local_dir: source dir path
45 :param artifact_path: when specified will attempt to write under artifact_uri/artifact_path
46 """
47 hdfs_base_path = _resolve_base_path(self.path, artifact_path)
48
49 with hdfs_system(host=self.host, port=self.port) as hdfs:
50
51 if not hdfs.exists(hdfs_base_path):
52 hdfs.mkdir(hdfs_base_path)
53
54 for subdir_path, _, files in os.walk(local_dir):
55
56 relative_path = _relative_path_local(local_dir, subdir_path)
57
58 hdfs_subdir_path = posixpath.join(hdfs_base_path, relative_path) \
59 if relative_path else hdfs_base_path
60
61 if not hdfs.exists(hdfs_subdir_path):
62 hdfs.mkdir(hdfs_subdir_path)
63
64 for each_file in files:
65 source = os.path.join(subdir_path, each_file)
66 destination = posixpath.join(hdfs_subdir_path, each_file)
67 with hdfs.open(destination, 'wb') as output_stream:
68 output_stream.write(open(source, "rb").read())
69
70 def list_artifacts(self, path=None):
71 """
72 Lists files and directories under artifacts directory for the current run_id.
73 (self.path contains the base path - hdfs:/some/path/run_id/artifacts)
74
75 :param path: Relative source path. Possible subdirectory existing under
76 hdfs:/some/path/run_id/artifacts
77 :return: List of files and directories under given path -
78 example:
79 ['conda.yaml', 'MLmodel', 'model.pkl']
80 """
81 hdfs_base_path = _resolve_base_path(self.path, path)
82 base_path_len = len(hdfs_base_path) + 1
83
84 with hdfs_system(host=self.host, port=self.port) as hdfs:
85 paths = []
86 for path, is_dir, size in self._walk_path(hdfs, hdfs_base_path):
87 paths.append(FileInfo(path[base_path_len:], is_dir, size))
88 return sorted(paths, key=lambda f: paths)
89
90 def _walk_path(self, hdfs, hdfs_path):
91 if hdfs.exists(hdfs_path):
92 if hdfs.isdir(hdfs_path):
93 for subdir, _, files in hdfs.walk(hdfs_path):
94 if subdir != hdfs_path:
95 yield subdir, hdfs.isdir(subdir), hdfs.info(subdir).get("size")
96 for f in files:
97 file_path = posixpath.join(subdir, f)
98 yield file_path, hdfs.isdir(file_path), hdfs.info(file_path).get("size")
99 else:
100 yield hdfs_path, False, hdfs.info(hdfs_path).get("size")
101
102 def download_artifacts(self, artifact_path, dst_path=None):
103 """
104 Download an artifact file or directory to a local directory/file if applicable, and
105 return a local path for it.
106 The caller is responsible for managing the lifecycle of the downloaded artifacts.
107
108 (self.path contains the base path - hdfs:/some/path/run_id/artifacts)
109
110 :param artifact_path: Relative source path to the desired artifacts file or directory.
111 :param dst_path: Absolute path of the local filesystem destination directory to which
112 to download the specified artifacts. This directory must already
113 exist. If unspecified, the artifacts will be downloaded to a new,
114 uniquely-named
115 directory on the local filesystem.
116
117 :return: Absolute path of the local filesystem location containing the downloaded
118 artifacts - file/directory.
119 """
120
121 hdfs_base_path = _resolve_base_path(self.path, artifact_path)
122 local_dir = _tmp_dir(dst_path)
123
124 with hdfs_system(host=self.host, port=self.port) as hdfs:
125
126 if not hdfs.isdir(hdfs_base_path):
127 local_path = os.path.join(local_dir, os.path.normpath(artifact_path))
128 _download_hdfs_file(hdfs, hdfs_base_path, local_path)
129 return local_path
130
131 for path, is_dir, _ in self._walk_path(hdfs, hdfs_base_path):
132
133 relative_path = _relative_path_remote(hdfs_base_path, path)
134 local_path = os.path.join(local_dir, relative_path) \
135 if relative_path else local_dir
136
137 if is_dir:
138 mkdir(local_path)
139 else:
140 _download_hdfs_file(hdfs, path, local_path)
141 return local_dir
142
143 def _download_file(self, remote_file_path, local_path):
144 raise MlflowException('This is not implemented. Should never be called.')
145
146
147 @contextmanager
148 def hdfs_system(host, port):
149 """
150 hdfs system context - Attempt to establish the connection to hdfs
151 and yields HadoopFileSystem
152
153 :param host: hostname or when relaying on the core-site.xml config use 'default'
154 :param port: port or when relaying on the core-site.xml config use 0
155 """
156 import pyarrow as pa
157
158 driver = os.getenv('MLFLOW_HDFS_DRIVER') or 'libhdfs'
159 kerb_ticket = os.getenv('MLFLOW_KERBEROS_TICKET_CACHE')
160 kerberos_user = os.getenv('MLFLOW_KERBEROS_USER')
161 extra_conf = _parse_extra_conf(os.getenv('MLFLOW_PYARROW_EXTRA_CONF'))
162
163 connected = pa.hdfs.connect(host=host or 'default',
164 port=port or 0,
165 user=kerberos_user,
166 driver=driver,
167 kerb_ticket=kerb_ticket,
168 extra_conf=extra_conf)
169 yield connected
170 connected.close()
171
172
173 def _resolve_connection_params(artifact_uri):
174 parsed = urllib.parse.urlparse(artifact_uri)
175 return parsed.hostname, parsed.port, parsed.path
176
177
178 def _resolve_base_path(path, artifact_path):
179 if path == artifact_path:
180 return path
181 if artifact_path:
182 return posixpath.join(path, artifact_path)
183 return path
184
185
186 def _relative_path(base_dir, subdir_path, path_module):
187 relative_path = path_module.relpath(subdir_path, base_dir)
188 return relative_path if relative_path is not '.' else None
189
190
191 def _relative_path_local(base_dir, subdir_path):
192 rel_path = _relative_path(base_dir, subdir_path, os.path)
193 return relative_path_to_artifact_path(rel_path) if rel_path is not None else None
194
195
196 def _relative_path_remote(base_dir, subdir_path):
197 return _relative_path(base_dir, subdir_path, posixpath)
198
199
200 def _tmp_dir(local_path):
201 return os.path.abspath(tempfile.mkdtemp(dir=local_path))
202
203
204 def _download_hdfs_file(hdfs, remote_file_path, local_file_path):
205 with open(local_file_path, 'wb') as f:
206 f.write(hdfs.open(remote_file_path, 'rb').read())
207
208
209 def _parse_extra_conf(extra_conf):
210 if extra_conf:
211 def as_pair(config):
212 key, val = config.split('=')
213 return key, val
214
215 list_of_key_val = [as_pair(conf) for conf in extra_conf.split(',')]
216 return dict(list_of_key_val)
217 return None
218
[end of mlflow/store/hdfs_artifact_repo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/store/hdfs_artifact_repo.py b/mlflow/store/hdfs_artifact_repo.py
--- a/mlflow/store/hdfs_artifact_repo.py
+++ b/mlflow/store/hdfs_artifact_repo.py
@@ -74,17 +74,21 @@
:param path: Relative source path. Possible subdirectory existing under
hdfs:/some/path/run_id/artifacts
- :return: List of files and directories under given path -
- example:
- ['conda.yaml', 'MLmodel', 'model.pkl']
+ :return: List of FileInfos under given path
"""
hdfs_base_path = _resolve_base_path(self.path, path)
- base_path_len = len(hdfs_base_path) + 1
with hdfs_system(host=self.host, port=self.port) as hdfs:
paths = []
- for path, is_dir, size in self._walk_path(hdfs, hdfs_base_path):
- paths.append(FileInfo(path[base_path_len:], is_dir, size))
+ if hdfs.exists(hdfs_base_path):
+ for file_detail in hdfs.ls(hdfs_base_path, detail=True):
+ file_name = file_detail.get("name")
+ # Strip off anything that comes before the artifact root e.g. hdfs://name
+ offset = file_name.index(self.path)
+ rel_path = _relative_path_remote(self.path, file_name[offset:])
+ is_dir = file_detail.get("kind") == "directory"
+ size = file_detail.get("size")
+ paths.append(FileInfo(rel_path, is_dir, size))
return sorted(paths, key=lambda f: paths)
def _walk_path(self, hdfs, hdfs_path):
@@ -202,6 +206,9 @@
def _download_hdfs_file(hdfs, remote_file_path, local_file_path):
+ # Ensure all required directories exist. Without doing this nested files can't be downloaded.
+ dirs = os.path.dirname(local_file_path)
+ os.makedirs(dirs)
with open(local_file_path, 'wb') as f:
f.write(hdfs.open(remote_file_path, 'rb').read())
| {"golden_diff": "diff --git a/mlflow/store/hdfs_artifact_repo.py b/mlflow/store/hdfs_artifact_repo.py\n--- a/mlflow/store/hdfs_artifact_repo.py\n+++ b/mlflow/store/hdfs_artifact_repo.py\n@@ -74,17 +74,21 @@\n \n :param path: Relative source path. Possible subdirectory existing under\n hdfs:/some/path/run_id/artifacts\n- :return: List of files and directories under given path -\n- example:\n- ['conda.yaml', 'MLmodel', 'model.pkl']\n+ :return: List of FileInfos under given path\n \"\"\"\n hdfs_base_path = _resolve_base_path(self.path, path)\n- base_path_len = len(hdfs_base_path) + 1\n \n with hdfs_system(host=self.host, port=self.port) as hdfs:\n paths = []\n- for path, is_dir, size in self._walk_path(hdfs, hdfs_base_path):\n- paths.append(FileInfo(path[base_path_len:], is_dir, size))\n+ if hdfs.exists(hdfs_base_path):\n+ for file_detail in hdfs.ls(hdfs_base_path, detail=True):\n+ file_name = file_detail.get(\"name\")\n+ # Strip off anything that comes before the artifact root e.g. hdfs://name\n+ offset = file_name.index(self.path)\n+ rel_path = _relative_path_remote(self.path, file_name[offset:])\n+ is_dir = file_detail.get(\"kind\") == \"directory\"\n+ size = file_detail.get(\"size\")\n+ paths.append(FileInfo(rel_path, is_dir, size))\n return sorted(paths, key=lambda f: paths)\n \n def _walk_path(self, hdfs, hdfs_path):\n@@ -202,6 +206,9 @@\n \n \n def _download_hdfs_file(hdfs, remote_file_path, local_file_path):\n+ # Ensure all required directories exist. Without doing this nested files can't be downloaded.\n+ dirs = os.path.dirname(local_file_path)\n+ os.makedirs(dirs)\n with open(local_file_path, 'wb') as f:\n f.write(hdfs.open(remote_file_path, 'rb').read())\n", "issue": "[BUG] HdfsArtifactRepository list_artifacts recursively lists all items under directory\nThank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)\r\nfor information on what types of issues we address.\r\n\r\nFor help with debugging your code, please refer to [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow).\r\n\r\n \r\nPlease do not delete this template unless you are sure your issue is outside its scope.\r\n\r\n### System information\r\n- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: No\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Centos 7\r\n- **MLflow installed from (source or binary)**: binary\r\n- **MLflow version (run ``mlflow --version``)**: 1.1.0\r\n- **Python version**: 3.6.8\r\n- **npm version, if running the dev UI**: N/A\r\n- **Exact command to reproduce**: mlflow artifacts list -r <run id for artifacts stored on hdfs>\r\n\r\n### Describe the problem\r\nlist_artifacts of an artifact repository is expected to only list the files directly under the provided path (see https://github.com/mlflow/mlflow/blob/4b1868719837d1844f19b6242643222549ee2794/mlflow/store/cli.py#L74 ). HdfsArtifactRepository walks all files under the given path and returns them (see https://github.com/mlflow/mlflow/blob/4b1868719837d1844f19b6242643222549ee2794/mlflow/store/hdfs_artifact_repo.py#L89 ).\r\n\r\nThis behavior breaks the mflow server as it expects the behavior specified in the cli file.\r\n\r\n### Code to reproduce issue\r\nProvide a reproducible test case that is the bare minimum necessary to generate the problem.\r\n\r\n### Other info / logs\r\nInclude any logs or source code that would be helpful to diagnose the problem. 
If including tracebacks,\r\nplease include the full traceback. Large logs and files should be attached.\r\n\n", "before_files": [{"content": "import os\nimport posixpath\nimport tempfile\nfrom contextlib import contextmanager\n\nfrom six.moves import urllib\n\nfrom mlflow.entities import FileInfo\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.store.artifact_repo import ArtifactRepository\nfrom mlflow.utils.file_utils import mkdir, relative_path_to_artifact_path\n\n\nclass HdfsArtifactRepository(ArtifactRepository):\n \"\"\"\n Stores artifacts on HDFS.\n\n This repository is used with URIs of the form ``hdfs:/<path>``. The repository can only be used\n together with the RestStore.\n \"\"\"\n\n def __init__(self, artifact_uri):\n self.host, self.port, self.path = _resolve_connection_params(artifact_uri)\n super(HdfsArtifactRepository, self).__init__(artifact_uri)\n\n def log_artifact(self, local_file, artifact_path=None):\n \"\"\"\n Log artifact in hdfs.\n :param local_file: source file path\n :param artifact_path: when specified will attempt to write under artifact_uri/artifact_path\n \"\"\"\n hdfs_base_path = _resolve_base_path(self.path, artifact_path)\n\n with hdfs_system(host=self.host, port=self.port) as hdfs:\n _, file_name = os.path.split(local_file)\n destination = posixpath.join(hdfs_base_path, file_name)\n with hdfs.open(destination, 'wb') as output:\n output.write(open(local_file, \"rb\").read())\n\n def log_artifacts(self, local_dir, artifact_path=None):\n \"\"\"\n Log artifacts in hdfs.\n Missing remote sub-directories will be created if needed.\n :param local_dir: source dir path\n :param artifact_path: when specified will attempt to write under artifact_uri/artifact_path\n \"\"\"\n hdfs_base_path = _resolve_base_path(self.path, artifact_path)\n\n with hdfs_system(host=self.host, port=self.port) as hdfs:\n\n if not hdfs.exists(hdfs_base_path):\n hdfs.mkdir(hdfs_base_path)\n\n for subdir_path, _, files in os.walk(local_dir):\n\n relative_path = _relative_path_local(local_dir, subdir_path)\n\n hdfs_subdir_path = posixpath.join(hdfs_base_path, relative_path) \\\n if relative_path else hdfs_base_path\n\n if not hdfs.exists(hdfs_subdir_path):\n hdfs.mkdir(hdfs_subdir_path)\n\n for each_file in files:\n source = os.path.join(subdir_path, each_file)\n destination = posixpath.join(hdfs_subdir_path, each_file)\n with hdfs.open(destination, 'wb') as output_stream:\n output_stream.write(open(source, \"rb\").read())\n\n def list_artifacts(self, path=None):\n \"\"\"\n Lists files and directories under artifacts directory for the current run_id.\n (self.path contains the base path - hdfs:/some/path/run_id/artifacts)\n\n :param path: Relative source path. 
Possible subdirectory existing under\n hdfs:/some/path/run_id/artifacts\n :return: List of files and directories under given path -\n example:\n ['conda.yaml', 'MLmodel', 'model.pkl']\n \"\"\"\n hdfs_base_path = _resolve_base_path(self.path, path)\n base_path_len = len(hdfs_base_path) + 1\n\n with hdfs_system(host=self.host, port=self.port) as hdfs:\n paths = []\n for path, is_dir, size in self._walk_path(hdfs, hdfs_base_path):\n paths.append(FileInfo(path[base_path_len:], is_dir, size))\n return sorted(paths, key=lambda f: paths)\n\n def _walk_path(self, hdfs, hdfs_path):\n if hdfs.exists(hdfs_path):\n if hdfs.isdir(hdfs_path):\n for subdir, _, files in hdfs.walk(hdfs_path):\n if subdir != hdfs_path:\n yield subdir, hdfs.isdir(subdir), hdfs.info(subdir).get(\"size\")\n for f in files:\n file_path = posixpath.join(subdir, f)\n yield file_path, hdfs.isdir(file_path), hdfs.info(file_path).get(\"size\")\n else:\n yield hdfs_path, False, hdfs.info(hdfs_path).get(\"size\")\n\n def download_artifacts(self, artifact_path, dst_path=None):\n \"\"\"\n Download an artifact file or directory to a local directory/file if applicable, and\n return a local path for it.\n The caller is responsible for managing the lifecycle of the downloaded artifacts.\n\n (self.path contains the base path - hdfs:/some/path/run_id/artifacts)\n\n :param artifact_path: Relative source path to the desired artifacts file or directory.\n :param dst_path: Absolute path of the local filesystem destination directory to which\n to download the specified artifacts. This directory must already\n exist. If unspecified, the artifacts will be downloaded to a new,\n uniquely-named\n directory on the local filesystem.\n\n :return: Absolute path of the local filesystem location containing the downloaded\n artifacts - file/directory.\n \"\"\"\n\n hdfs_base_path = _resolve_base_path(self.path, artifact_path)\n local_dir = _tmp_dir(dst_path)\n\n with hdfs_system(host=self.host, port=self.port) as hdfs:\n\n if not hdfs.isdir(hdfs_base_path):\n local_path = os.path.join(local_dir, os.path.normpath(artifact_path))\n _download_hdfs_file(hdfs, hdfs_base_path, local_path)\n return local_path\n\n for path, is_dir, _ in self._walk_path(hdfs, hdfs_base_path):\n\n relative_path = _relative_path_remote(hdfs_base_path, path)\n local_path = os.path.join(local_dir, relative_path) \\\n if relative_path else local_dir\n\n if is_dir:\n mkdir(local_path)\n else:\n _download_hdfs_file(hdfs, path, local_path)\n return local_dir\n\n def _download_file(self, remote_file_path, local_path):\n raise MlflowException('This is not implemented. 
Should never be called.')\n\n\n@contextmanager\ndef hdfs_system(host, port):\n \"\"\"\n hdfs system context - Attempt to establish the connection to hdfs\n and yields HadoopFileSystem\n\n :param host: hostname or when relaying on the core-site.xml config use 'default'\n :param port: port or when relaying on the core-site.xml config use 0\n \"\"\"\n import pyarrow as pa\n\n driver = os.getenv('MLFLOW_HDFS_DRIVER') or 'libhdfs'\n kerb_ticket = os.getenv('MLFLOW_KERBEROS_TICKET_CACHE')\n kerberos_user = os.getenv('MLFLOW_KERBEROS_USER')\n extra_conf = _parse_extra_conf(os.getenv('MLFLOW_PYARROW_EXTRA_CONF'))\n\n connected = pa.hdfs.connect(host=host or 'default',\n port=port or 0,\n user=kerberos_user,\n driver=driver,\n kerb_ticket=kerb_ticket,\n extra_conf=extra_conf)\n yield connected\n connected.close()\n\n\ndef _resolve_connection_params(artifact_uri):\n parsed = urllib.parse.urlparse(artifact_uri)\n return parsed.hostname, parsed.port, parsed.path\n\n\ndef _resolve_base_path(path, artifact_path):\n if path == artifact_path:\n return path\n if artifact_path:\n return posixpath.join(path, artifact_path)\n return path\n\n\ndef _relative_path(base_dir, subdir_path, path_module):\n relative_path = path_module.relpath(subdir_path, base_dir)\n return relative_path if relative_path is not '.' else None\n\n\ndef _relative_path_local(base_dir, subdir_path):\n rel_path = _relative_path(base_dir, subdir_path, os.path)\n return relative_path_to_artifact_path(rel_path) if rel_path is not None else None\n\n\ndef _relative_path_remote(base_dir, subdir_path):\n return _relative_path(base_dir, subdir_path, posixpath)\n\n\ndef _tmp_dir(local_path):\n return os.path.abspath(tempfile.mkdtemp(dir=local_path))\n\n\ndef _download_hdfs_file(hdfs, remote_file_path, local_file_path):\n with open(local_file_path, 'wb') as f:\n f.write(hdfs.open(remote_file_path, 'rb').read())\n\n\ndef _parse_extra_conf(extra_conf):\n if extra_conf:\n def as_pair(config):\n key, val = config.split('=')\n return key, val\n\n list_of_key_val = [as_pair(conf) for conf in extra_conf.split(',')]\n return dict(list_of_key_val)\n return None\n", "path": "mlflow/store/hdfs_artifact_repo.py"}]} | 3,440 | 474 |
gh_patches_debug_13127 | rasdani/github-patches | git_diff | GoogleCloudPlatform__PerfKitBenchmarker-680 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
silo benchmark fails behind proxy
From @mateusz-blaszkowski in #475:
> silo - failed with Clone of 'git://github.com/kohler/masstree-beta.git' into submodule path 'masstree' failed. I ran the test behind a proxy and this is the case. I would have changed the path to the Git repository to https://, but it is hidden somewhere in 'dbtest' (look at the command which failed: cd /tmp/pkb/silo && MODE=perf DEBUG=0 CHECK_INVARIANTS=0 make -j80 dbtest). Oh, I found that the exact path is specified here: https://github.com/stephentu/silo/blob/cc11ca1ea949ef266ee12a9b1c310392519d9e3b/.gitmodules
We should switch it to `https://`.
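One low-touch way to do that, without forking the upstream `.gitmodules`, is to have git rewrite the scheme at clone time. A hedged sketch in the style of the installer module below; the `vm.RemoteCommand` helper already exists in PerfKitBenchmarker, but the helper name and exact placement are assumptions:

```python
def _force_https_for_git(vm):
    """Illustrative helper: make git:// submodule URLs clone over https://."""
    # Rewrites any git:// URL (including masstree-beta from silo's .gitmodules)
    # so the clone works behind an http(s) proxy.
    vm.RemoteCommand('git config --global url."https://".insteadOf git://')
```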
</issue>
<code>
[start of perfkitbenchmarker/linux_packages/silo.py]
1 # Copyright 2014 PerfKitBenchmarker Authors. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 """Module containing Silo installation and cleanup functions."""
17
18 from perfkitbenchmarker import vm_util
19
20 GIT_REPO = 'https://github.com/stephentu/silo.git'
21 GIT_TAG = '62d2d498984bf69d3b46a74e310e1fd12fd1f692'
22 SILO_DIR = '%s/silo' % vm_util.VM_TMP_DIR
23 APT_PACKAGES = ('libjemalloc-dev libnuma-dev libdb++-dev '
24 'libmysqld-dev libaio-dev libssl-dev')
25 YUM_PACKAGES = ('jemalloc-devel numactl-devel libdb-cxx-devel mysql-devel '
26 'libaio-devel openssl-devel')
27
28
29 def _Install(vm):
30 """Installs the Silo package on the VM."""
31 nthreads = vm.num_cpus * 2
32 vm.Install('build_tools')
33 vm.RemoteCommand('git clone {0} {1}'.format(GIT_REPO, SILO_DIR))
34 vm.RemoteCommand('cd {0} && git checkout {1}'.format(SILO_DIR,
35 GIT_TAG))
36 vm.RemoteCommand('cd {0} && MODE=perf DEBUG=0 CHECK_INVARIANTS=0 make\
37 -j{1} dbtest'.format(SILO_DIR, nthreads))
38
39
40 def YumInstall(vm):
41 """Installs the Silo package on the VM."""
42 vm.InstallPackages(YUM_PACKAGES)
43 _Install(vm)
44
45
46 def AptInstall(vm):
47 """Installs the Silo package on the VM."""
48 vm.InstallPackages(APT_PACKAGES)
49 _Install(vm)
50
[end of perfkitbenchmarker/linux_packages/silo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/perfkitbenchmarker/linux_packages/silo.py b/perfkitbenchmarker/linux_packages/silo.py
--- a/perfkitbenchmarker/linux_packages/silo.py
+++ b/perfkitbenchmarker/linux_packages/silo.py
@@ -33,6 +33,9 @@
vm.RemoteCommand('git clone {0} {1}'.format(GIT_REPO, SILO_DIR))
vm.RemoteCommand('cd {0} && git checkout {1}'.format(SILO_DIR,
GIT_TAG))
+ # This is due to a failing clone command when executing behind a proxy.
+ # Replacing the protocol to https instead of git fixes the issue.
+ vm.RemoteCommand('git config --global url."https://".insteadOf git://')
vm.RemoteCommand('cd {0} && MODE=perf DEBUG=0 CHECK_INVARIANTS=0 make\
-j{1} dbtest'.format(SILO_DIR, nthreads))
| {"golden_diff": "diff --git a/perfkitbenchmarker/linux_packages/silo.py b/perfkitbenchmarker/linux_packages/silo.py\n--- a/perfkitbenchmarker/linux_packages/silo.py\n+++ b/perfkitbenchmarker/linux_packages/silo.py\n@@ -33,6 +33,9 @@\n vm.RemoteCommand('git clone {0} {1}'.format(GIT_REPO, SILO_DIR))\n vm.RemoteCommand('cd {0} && git checkout {1}'.format(SILO_DIR,\n GIT_TAG))\n+ # This is due to a failing clone command when executing behind a proxy.\n+ # Replacing the protocol to https instead of git fixes the issue.\n+ vm.RemoteCommand('git config --global url.\"https://\".insteadOf git://')\n vm.RemoteCommand('cd {0} && MODE=perf DEBUG=0 CHECK_INVARIANTS=0 make\\\n -j{1} dbtest'.format(SILO_DIR, nthreads))\n", "issue": "silo benchmark fails behind proxy\nFrom @mateusz-blaszkowski in #475: \n\n> silo - failed with Clone of 'git://github.com/kohler/masstree-beta.git' into submodule path 'masstree' failed. I run the test behind the proxy and this is the case. I would have changed the path to Git repository to https:// but it is hidden somewhere in 'dbtest' (look a the command which failed: cd /tmp/pkb/silo && MODE=perf DEBUG=0 CHECK_INVARIANTS=0 make -j80 dbtest). Oh, i found that the exact path is specified here: https://github.com/stephentu/silo/blob/cc11ca1ea949ef266ee12a9b1c310392519d9e3b/.gitmodules\n\nWe should switch it to `https://`.\n\n", "before_files": [{"content": "# Copyright 2014 PerfKitBenchmarker Authors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"Module containing Silo installation and cleanup functions.\"\"\"\n\nfrom perfkitbenchmarker import vm_util\n\nGIT_REPO = 'https://github.com/stephentu/silo.git'\nGIT_TAG = '62d2d498984bf69d3b46a74e310e1fd12fd1f692'\nSILO_DIR = '%s/silo' % vm_util.VM_TMP_DIR\nAPT_PACKAGES = ('libjemalloc-dev libnuma-dev libdb++-dev '\n 'libmysqld-dev libaio-dev libssl-dev')\nYUM_PACKAGES = ('jemalloc-devel numactl-devel libdb-cxx-devel mysql-devel '\n 'libaio-devel openssl-devel')\n\n\ndef _Install(vm):\n \"\"\"Installs the Silo package on the VM.\"\"\"\n nthreads = vm.num_cpus * 2\n vm.Install('build_tools')\n vm.RemoteCommand('git clone {0} {1}'.format(GIT_REPO, SILO_DIR))\n vm.RemoteCommand('cd {0} && git checkout {1}'.format(SILO_DIR,\n GIT_TAG))\n vm.RemoteCommand('cd {0} && MODE=perf DEBUG=0 CHECK_INVARIANTS=0 make\\\n -j{1} dbtest'.format(SILO_DIR, nthreads))\n\n\ndef YumInstall(vm):\n \"\"\"Installs the Silo package on the VM.\"\"\"\n vm.InstallPackages(YUM_PACKAGES)\n _Install(vm)\n\n\ndef AptInstall(vm):\n \"\"\"Installs the Silo package on the VM.\"\"\"\n vm.InstallPackages(APT_PACKAGES)\n _Install(vm)\n", "path": "perfkitbenchmarker/linux_packages/silo.py"}]} | 1,331 | 206 |
gh_patches_debug_9322 | rasdani/github-patches | git_diff | cobbler__cobbler-3507 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cobbler import rsync fails from mounted ISO media with SELinux enabled with return code 23
### Describe the bug
When you mount an ISO image, the permissions of the mounted files are read-only:
```
# ls -la /mnt
total 38
dr-xr-xr-x. 1 root root 2048 Oct 29 22:06 ./
dr-xr-xr-x. 25 root root 4096 Oct 26 08:44 ../
dr-xr-xr-x. 1 root root 2048 Oct 29 22:06 AppStream/
dr-xr-xr-x. 1 root root 2048 Oct 29 22:06 BaseOS/
-r--r--r--. 1 root root 45 Oct 29 22:06 .discinfo
dr-xr-xr-x. 1 root root 2048 Oct 29 21:53 EFI/
-r--r--r--. 1 root root 299 Oct 29 22:06 EULA
-r--r--r--. 1 root root 745 Oct 29 22:06 extra_files.json
dr-xr-xr-x. 1 root root 2048 Oct 29 21:53 images/
dr-xr-xr-x. 1 root root 2048 Oct 29 21:53 isolinux/
-r--r--r--. 1 root root 18092 Oct 29 22:06 LICENSE
-r--r--r--. 1 root root 88 Oct 29 22:06 media.repo
-r--r--r--. 1 root root 1530 Oct 29 22:06 .treeinfo
```
When you run `cobbler import --path=/mnt` the rsync will fail:
```
running: ['rsync', '--archive', '--progress', '/mnt/', '/var/www/cobbler/distro_mirror/centos-stream-9']
received on stdout: sending incremental file list
Exception occurred: <class 'RuntimeError'>
Exception value: rsync import failed with return code 23!
Exception Info:
!!! TASK FAILED !!!
```
### Steps to reproduce
1. mount -o loop /path/to/ISO /mnt
2. cobbler import --path=/mnt
### Expected behavior
Distro is imported.
### Cobbler version
<!--- Paste output from `cobbler version` -->
````paste below
Cobbler 3.4.0
source: ?, ?
build time: Sat Nov 4 21:15:48 2023
````
### Operating system
CentOS Stream 8
### Cobbler log
<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->
````paste below
2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | importing from a network location, running rsync to fetch the files first
[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | running: ['rsync', '--archive', '--progress', '/tmp/Fedora-Server-x86_64-38-1.6.iso/', '/var/www/cobbler/distro_mirror/fedora-38']
[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | received on stdout: sending incremental file list
[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | Exception occurred: <class 'RuntimeError'>
[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | Exception value: rsync import failed with return code 23!
[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | Exception Info:
File "/usr/lib/python3.6/site-packages/cobbler/utils/thread.py", line 103, in run
return_code = self._run(self)
File "/usr/lib/python3.6/site-packages/cobbler/remote.py", line 398, in runner
self.options.get("os_version", None),
File "/usr/lib/python3.6/site-packages/cobbler/api.py", line 2327, in import_tree
os_version,
File "/usr/lib/python3.6/site-packages/cobbler/actions/importer.py", line 127, in run
f"rsync import failed with return code {rsync_return_code}!"
[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - ERROR | ### TASK FAILED ###
````
### Additional information
The problem is that the read-only permissions are copied to the destination, and then cobbler does not have permission to write to the destination without the `dac_override` permission which is not granted:
```
type=AVC msg=audit(1699229796.164:5238): avc: denied { dac_override } for pid=142026 comm="rsync" capability=1 scontext=system_u:system_r:cobblerd_t:s0 tcontext=system_u:system_r:cobblerd_t:s0 tclass=capability permissive=0
```
I will be submitting a pull request that changes the rsync options to grant write permissions on the destination, which is what we would want anyway.
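A hedged sketch of the direction such a change could take in `cobbler/actions/importer.py` (shown below); the exact flags are an assumption, chosen to keep everything `--archive` implies except the permission bits:

```python
# --archive expands to -rlptgoD; dropping -p stops rsync from copying the
# read-only ISO permission bits, and --chmod keeps the copy writable so
# cobbler does not need the dac_override capability.
rsync_cmd = ["rsync", "-rltgoD", "--chmod=ug=rwX"]
rsync_cmd.append("--progress")
```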
</issue>
<code>
[start of cobbler/actions/importer.py]
1 """
2 This module contains the logic that kicks of the ``cobbler import`` process. This is extracted logic from ``api.py``
3 that is essentially calling ``modules/mangers/import_signatures.py`` with some preparatory code.
4 """
5 import logging
6 import os
7 from typing import TYPE_CHECKING, Optional
8
9 from cobbler import utils
10 from cobbler.utils import filesystem_helpers
11
12 if TYPE_CHECKING:
13 from cobbler.api import CobblerAPI
14
15
16 class Importer:
17 """
18 Wrapper class to adhere to the style of all other actions.
19 """
20
21 def __init__(self, api: "CobblerAPI") -> None:
22 """
23 Constructor to initialize the class.
24
25 :param api: The CobblerAPI.
26 """
27 self.api = api
28 self.logger = logging.getLogger()
29
30 def run(
31 self,
32 mirror_url: str,
33 mirror_name: str,
34 network_root: Optional[str] = None,
35 autoinstall_file: Optional[str] = None,
36 rsync_flags: Optional[str] = None,
37 arch: Optional[str] = None,
38 breed: Optional[str] = None,
39 os_version: Optional[str] = None,
40 ) -> bool:
41 """
42 Automatically import a directory tree full of distribution files.
43
44 :param mirror_url: Can be a string that represents a path, a user@host syntax for SSH, or an rsync:// address.
45 If mirror_url is a filesystem path and mirroring is not desired, set network_root to
46 something like "nfs://path/to/mirror_url/root"
47 :param mirror_name: The name of the mirror.
48 :param network_root: the remote path (nfs/http/ftp) for the distro files
49 :param autoinstall_file: user-specified response file, which will override the default
50 :param rsync_flags: Additional flags that will be passed to the rsync call that will sync everything to the
51 Cobbler webroot.
52 :param arch: user-specified architecture
53 :param breed: user-specified breed
54 :param os_version: user-specified OS version
55 """
56 self.api.log(
57 "import_tree",
58 [mirror_url, mirror_name, network_root, autoinstall_file, rsync_flags],
59 )
60
61 # Both --path and --name are required arguments.
62 if mirror_url is None or not mirror_url:
63 self.logger.info("import failed. no --path specified")
64 return False
65 if not mirror_name:
66 self.logger.info("import failed. no --name specified")
67 return False
68
69 path = os.path.normpath(
70 f"{self.api.settings().webdir}/distro_mirror/{mirror_name}"
71 )
72 if arch is not None:
73 arch = arch.lower()
74 if arch == "x86":
75 # be consistent
76 arch = "i386"
77 if path.split("-")[-1] != arch:
78 path += f"-{arch}"
79
80 # We need to mirror (copy) the files.
81 self.logger.info(
82 "importing from a network location, running rsync to fetch the files first"
83 )
84
85 filesystem_helpers.mkdir(path)
86
87 # Prevent rsync from creating the directory name twice if we are copying via rsync.
88
89 if not mirror_url.endswith("/"):
90 mirror_url = f"{mirror_url}/"
91
92 if (
93 mirror_url.startswith("http://")
94 or mirror_url.startswith("https://")
95 or mirror_url.startswith("ftp://")
96 or mirror_url.startswith("nfs://")
97 ):
98 # HTTP mirrors are kind of primitive. rsync is better. That's why this isn't documented in the manpage and
99 # we don't support them.
100 # TODO: how about adding recursive FTP as an option?
101 self.logger.info("unsupported protocol")
102 return False
103
104 # Good, we're going to use rsync.. We don't use SSH for public mirrors and local files.
105 # Presence of user@host syntax means use SSH
106 spacer = ""
107 if not mirror_url.startswith("rsync://") and not mirror_url.startswith("/"):
108 spacer = ' -e "ssh" '
109 rsync_cmd = ["rsync", "--archive"]
110 if spacer != "":
111 rsync_cmd.append(spacer)
112 rsync_cmd.append("--progress")
113 if rsync_flags:
114 rsync_cmd.append(rsync_flags)
115
116 # If --available-as was specified, limit the files we pull down via rsync to just those that are critical
117 # to detecting what the distro is
118 if network_root is not None:
119 rsync_cmd.append("--include-from=/etc/cobbler/import_rsync_whitelist")
120
121 rsync_cmd += [mirror_url, path]
122
123 # kick off the rsync now
124 rsync_return_code = utils.subprocess_call(rsync_cmd, shell=False)
125 if rsync_return_code != 0:
126 raise RuntimeError(
127 f"rsync import failed with return code {rsync_return_code}!"
128 )
129
130 if network_root is not None:
131 # In addition to mirroring, we're going to assume the path is available over http, ftp, and nfs, perhaps on
132 # an external filer. Scanning still requires --mirror is a filesystem path, but --available-as marks the
133 # network path. This allows users to point the path at a directory containing just the network boot files
134 # while the rest of the distro files are available somewhere else.
135
136 # Find the filesystem part of the path, after the server bits, as each distro URL needs to be calculated
137 # relative to this.
138
139 if not network_root.endswith("/"):
140 network_root += "/"
141 valid_roots = ["nfs://", "ftp://", "http://", "https://"]
142 for valid_root in valid_roots:
143 if network_root.startswith(valid_root):
144 break
145 else:
146 self.logger.info(
147 "Network root given to --available-as must be nfs://, ftp://, http://, or https://"
148 )
149 return False
150
151 if network_root.startswith("nfs://"):
152 try:
153 (_, _, _) = network_root.split(":", 3)
154 except ValueError:
155 self.logger.info(
156 "Network root given to --available-as is missing a colon, please see the manpage example."
157 )
158 return False
159
160 import_module = self.api.get_module_by_name("managers.import_signatures")
161 if import_module is None:
162 raise ImportError("Could not retrieve import signatures module!")
163 import_manager = import_module.get_import_manager(self.api)
164 import_manager.run(
165 path, mirror_name, network_root, autoinstall_file, arch, breed, os_version
166 )
167 return True
168
[end of cobbler/actions/importer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cobbler/actions/importer.py b/cobbler/actions/importer.py
--- a/cobbler/actions/importer.py
+++ b/cobbler/actions/importer.py
@@ -106,7 +106,8 @@
spacer = ""
if not mirror_url.startswith("rsync://") and not mirror_url.startswith("/"):
spacer = ' -e "ssh" '
- rsync_cmd = ["rsync", "--archive"]
+ # --archive but without -p to avoid copying read-only ISO permissions and making sure we have write access
+ rsync_cmd = ["rsync", "-rltgoD", "--chmod=ug=rwX"]
if spacer != "":
rsync_cmd.append(spacer)
rsync_cmd.append("--progress")
| {"golden_diff": "diff --git a/cobbler/actions/importer.py b/cobbler/actions/importer.py\n--- a/cobbler/actions/importer.py\n+++ b/cobbler/actions/importer.py\n@@ -106,7 +106,8 @@\n spacer = \"\"\n if not mirror_url.startswith(\"rsync://\") and not mirror_url.startswith(\"/\"):\n spacer = ' -e \"ssh\" '\n- rsync_cmd = [\"rsync\", \"--archive\"]\n+ # --archive but without -p to avoid copying read-only ISO permissions and making sure we have write access\n+ rsync_cmd = [\"rsync\", \"-rltgoD\", \"--chmod=ug=rwX\"]\n if spacer != \"\":\n rsync_cmd.append(spacer)\n rsync_cmd.append(\"--progress\")\n", "issue": "cobbler import rsync fails from mounted ISO media with SELinux enabled with return code 23\n### Describe the bug\r\n\r\nWhen you mount an ISO image - the permissions of the mounted files are read only:\r\n```\r\n# ls -la /mnt\r\ntotal 38\r\ndr-xr-xr-x. 1 root root 2048 Oct 29 22:06 ./\r\ndr-xr-xr-x. 25 root root 4096 Oct 26 08:44 ../\r\ndr-xr-xr-x. 1 root root 2048 Oct 29 22:06 AppStream/\r\ndr-xr-xr-x. 1 root root 2048 Oct 29 22:06 BaseOS/\r\n-r--r--r--. 1 root root 45 Oct 29 22:06 .discinfo\r\ndr-xr-xr-x. 1 root root 2048 Oct 29 21:53 EFI/\r\n-r--r--r--. 1 root root 299 Oct 29 22:06 EULA\r\n-r--r--r--. 1 root root 745 Oct 29 22:06 extra_files.json\r\ndr-xr-xr-x. 1 root root 2048 Oct 29 21:53 images/\r\ndr-xr-xr-x. 1 root root 2048 Oct 29 21:53 isolinux/\r\n-r--r--r--. 1 root root 18092 Oct 29 22:06 LICENSE\r\n-r--r--r--. 1 root root 88 Oct 29 22:06 media.repo\r\n-r--r--r--. 1 root root 1530 Oct 29 22:06 .treeinfo\r\n```\r\nWhen you run `cobbler import --path=/mnt` the rsync will fail:\r\n```\r\nrunning: ['rsync', '--archive', '--progress', '/mnt/', '/var/www/cobbler/distro_mirror/centos-stream-9']\r\nreceived on stdout: sending incremental file list\r\nException occurred: <class 'RuntimeError'>\r\nException value: rsync import failed with return code 23!\r\nException Info:\r\n!!! TASK FAILED !!!\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. mount -o loop /path/to/ISO /mnt\r\n2. 
cobbler import --path=/mnt\r\n\r\n### Expected behavior\r\n\r\nDistro is imported.\r\n\r\n### Cobbler version\r\n\r\n<!--- Paste output from `cobbler version` -->\r\n````paste below\r\nCobbler 3.4.0\r\n source: ?, ?\r\n build time: Sat Nov 4 21:15:48 2023\r\n````\r\n\r\n### Operating system\r\n\r\nCentOS Stream 8\r\n\r\n### Cobbler log\r\n\r\n<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->\r\n````paste below\r\n2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | importing from a network location, running rsync to fetch the files first\r\n[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | running: ['rsync', '--archive', '--progress', '/tmp/Fedora-Server-x86_64-38-1.6.iso/', '/var/www/cobbler/distro_mirror/fedora-38']\r\n[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | received on stdout: sending incremental file list\r\n\r\n[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | Exception occurred: <class 'RuntimeError'>\r\n[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | Exception value: rsync import failed with return code 23!\r\n[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - INFO | Exception Info:\r\n File \"/usr/lib/python3.6/site-packages/cobbler/utils/thread.py\", line 103, in run\r\n return_code = self._run(self)\r\n\r\n File \"/usr/lib/python3.6/site-packages/cobbler/remote.py\", line 398, in runner\r\n self.options.get(\"os_version\", None),\r\n\r\n File \"/usr/lib/python3.6/site-packages/cobbler/api.py\", line 2327, in import_tree\r\n os_version,\r\n\r\n File \"/usr/lib/python3.6/site-packages/cobbler/actions/importer.py\", line 127, in run\r\n f\"rsync import failed with return code {rsync_return_code}!\"\r\n\r\n[2023-11-04_160526_Media import_8a52b82c5b54433587b15bc9e9d2e21f] 2023-11-04T16:05:26 - ERROR | ### TASK FAILED ###\r\n````\r\n\r\n### Additional information\r\n\r\nThe problem is that the read-only permissions are copied to the destination, and then cobbler does not have permission to write to the destination without the `dac_override` permission which is not granted:\r\n```\r\ntype=AVC msg=audit(1699229796.164:5238): avc: denied { dac_override } for pid=142026 comm=\"rsync\" capability=1 scontext=system_u:system_r:cobblerd_t:s0 tcontext=system_u:system_r:cobblerd_t:s0 tclass=capability permissive=0\r\n```\r\n\r\nI will be submitting a pull request the changes the rsync options to grant write permissions on the destination, which is what we would want anyway.\n", "before_files": [{"content": "\"\"\"\nThis module contains the logic that kicks of the ``cobbler import`` process. 
This is extracted logic from ``api.py``\nthat is essentially calling ``modules/mangers/import_signatures.py`` with some preparatory code.\n\"\"\"\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, Optional\n\nfrom cobbler import utils\nfrom cobbler.utils import filesystem_helpers\n\nif TYPE_CHECKING:\n from cobbler.api import CobblerAPI\n\n\nclass Importer:\n \"\"\"\n Wrapper class to adhere to the style of all other actions.\n \"\"\"\n\n def __init__(self, api: \"CobblerAPI\") -> None:\n \"\"\"\n Constructor to initialize the class.\n\n :param api: The CobblerAPI.\n \"\"\"\n self.api = api\n self.logger = logging.getLogger()\n\n def run(\n self,\n mirror_url: str,\n mirror_name: str,\n network_root: Optional[str] = None,\n autoinstall_file: Optional[str] = None,\n rsync_flags: Optional[str] = None,\n arch: Optional[str] = None,\n breed: Optional[str] = None,\n os_version: Optional[str] = None,\n ) -> bool:\n \"\"\"\n Automatically import a directory tree full of distribution files.\n\n :param mirror_url: Can be a string that represents a path, a user@host syntax for SSH, or an rsync:// address.\n If mirror_url is a filesystem path and mirroring is not desired, set network_root to\n something like \"nfs://path/to/mirror_url/root\"\n :param mirror_name: The name of the mirror.\n :param network_root: the remote path (nfs/http/ftp) for the distro files\n :param autoinstall_file: user-specified response file, which will override the default\n :param rsync_flags: Additional flags that will be passed to the rsync call that will sync everything to the\n Cobbler webroot.\n :param arch: user-specified architecture\n :param breed: user-specified breed\n :param os_version: user-specified OS version\n \"\"\"\n self.api.log(\n \"import_tree\",\n [mirror_url, mirror_name, network_root, autoinstall_file, rsync_flags],\n )\n\n # Both --path and --name are required arguments.\n if mirror_url is None or not mirror_url:\n self.logger.info(\"import failed. no --path specified\")\n return False\n if not mirror_name:\n self.logger.info(\"import failed. no --name specified\")\n return False\n\n path = os.path.normpath(\n f\"{self.api.settings().webdir}/distro_mirror/{mirror_name}\"\n )\n if arch is not None:\n arch = arch.lower()\n if arch == \"x86\":\n # be consistent\n arch = \"i386\"\n if path.split(\"-\")[-1] != arch:\n path += f\"-{arch}\"\n\n # We need to mirror (copy) the files.\n self.logger.info(\n \"importing from a network location, running rsync to fetch the files first\"\n )\n\n filesystem_helpers.mkdir(path)\n\n # Prevent rsync from creating the directory name twice if we are copying via rsync.\n\n if not mirror_url.endswith(\"/\"):\n mirror_url = f\"{mirror_url}/\"\n\n if (\n mirror_url.startswith(\"http://\")\n or mirror_url.startswith(\"https://\")\n or mirror_url.startswith(\"ftp://\")\n or mirror_url.startswith(\"nfs://\")\n ):\n # HTTP mirrors are kind of primitive. rsync is better. That's why this isn't documented in the manpage and\n # we don't support them.\n # TODO: how about adding recursive FTP as an option?\n self.logger.info(\"unsupported protocol\")\n return False\n\n # Good, we're going to use rsync.. 
We don't use SSH for public mirrors and local files.\n # Presence of user@host syntax means use SSH\n spacer = \"\"\n if not mirror_url.startswith(\"rsync://\") and not mirror_url.startswith(\"/\"):\n spacer = ' -e \"ssh\" '\n rsync_cmd = [\"rsync\", \"--archive\"]\n if spacer != \"\":\n rsync_cmd.append(spacer)\n rsync_cmd.append(\"--progress\")\n if rsync_flags:\n rsync_cmd.append(rsync_flags)\n\n # If --available-as was specified, limit the files we pull down via rsync to just those that are critical\n # to detecting what the distro is\n if network_root is not None:\n rsync_cmd.append(\"--include-from=/etc/cobbler/import_rsync_whitelist\")\n\n rsync_cmd += [mirror_url, path]\n\n # kick off the rsync now\n rsync_return_code = utils.subprocess_call(rsync_cmd, shell=False)\n if rsync_return_code != 0:\n raise RuntimeError(\n f\"rsync import failed with return code {rsync_return_code}!\"\n )\n\n if network_root is not None:\n # In addition to mirroring, we're going to assume the path is available over http, ftp, and nfs, perhaps on\n # an external filer. Scanning still requires --mirror is a filesystem path, but --available-as marks the\n # network path. This allows users to point the path at a directory containing just the network boot files\n # while the rest of the distro files are available somewhere else.\n\n # Find the filesystem part of the path, after the server bits, as each distro URL needs to be calculated\n # relative to this.\n\n if not network_root.endswith(\"/\"):\n network_root += \"/\"\n valid_roots = [\"nfs://\", \"ftp://\", \"http://\", \"https://\"]\n for valid_root in valid_roots:\n if network_root.startswith(valid_root):\n break\n else:\n self.logger.info(\n \"Network root given to --available-as must be nfs://, ftp://, http://, or https://\"\n )\n return False\n\n if network_root.startswith(\"nfs://\"):\n try:\n (_, _, _) = network_root.split(\":\", 3)\n except ValueError:\n self.logger.info(\n \"Network root given to --available-as is missing a colon, please see the manpage example.\"\n )\n return False\n\n import_module = self.api.get_module_by_name(\"managers.import_signatures\")\n if import_module is None:\n raise ImportError(\"Could not retrieve import signatures module!\")\n import_manager = import_module.get_import_manager(self.api)\n import_manager.run(\n path, mirror_name, network_root, autoinstall_file, arch, breed, os_version\n )\n return True\n", "path": "cobbler/actions/importer.py"}]} | 4,039 | 168 |
gh_patches_debug_791 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-753 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
All training is failing with a `timm` error
**Describe the bug**
Unable to train on current master.
**To Reproduce**
Steps to reproduce the behavior:
1. Try to start any segmentation training.
2. See error:
```python-traceback
Traceback (most recent call last):
File "/software/gandlf_personal/gandlf_run", line 11, in <module>
from GANDLF.cli import main_run, copyrightMessage
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/cli/__init__.py", line 2, in <module>
from .main_run import main_run
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/cli/main_run.py", line 4, in <module>
from GANDLF.training_manager import TrainingManager, TrainingManager_split
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/training_manager.py", line 6, in <module>
from GANDLF.compute import training_loop
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/__init__.py", line 1, in <module>
from .training_loop import training_loop
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/training_loop.py", line 30, in <module>
from .generic import create_pytorch_objects
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/generic.py", line 3, in <module>
from GANDLF.models import get_model
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/models/__init__.py", line 32, in <module>
from .imagenet_unet import imagenet_unet_wrapper
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/models/imagenet_unet.py", line 7, in <module>
from segmentation_models_pytorch.base import (
File "/software/gandlf_personal/venv11/lib/python3.11/site-packages/segmentation_models_pytorch/__init__.py", line 2, in <module>
from . import encoders
File "/software/gandlf_personal/venv11/lib/python3.11/site-packages/segmentation_models_pytorch/encoders/__init__.py", line 1, in <module>
import timm
File "/software/gandlf_personal/venv11/lib/python3.11/site-packages/timm/__init__.py", line 2, in <module>
from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \
File "/software/gandlf_personal/venv11/lib/python3.11/site-packages/timm/models/__init__.py", line 28, in <module>
from .maxxvit import *
File "/software/gandlf_personal/venv11/lib/python3.11/site-packages/timm/models/maxxvit.py", line 225, in <module>
@dataclass
^^^^^^^^^
File "/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'timm.models.maxxvit.MaxxVitConvCfg'> for field conv_cfg is not allowed: use default_factory
```
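
For context on the traceback above, the failure comes from Python 3.11's stricter dataclass check: any unhashable default value (which includes instances of a regular `@dataclass`) is treated as a "mutable default" and must be supplied via `default_factory`. The sketch below reproduces the mechanism with hypothetical stand-in classes — `ConvCfg`/`VitCfg` are illustrative only, not timm's real definitions:

```python
from dataclasses import dataclass, field


@dataclass
class ConvCfg:  # hypothetical stand-in for timm.models.maxxvit.MaxxVitConvCfg
    kernel_size: int = 3


# On Python 3.11 the commented-out version fails at class-creation time,
# because a plain dataclass instance is unhashable and therefore counts
# as a "mutable default":
#
# @dataclass
# class VitCfg:
#     conv_cfg: ConvCfg = ConvCfg()  # ValueError: mutable default ... use default_factory


@dataclass
class VitCfg:  # accepted pattern: defer construction with default_factory
    conv_cfg: ConvCfg = field(default_factory=ConvCfg)


print(VitCfg())  # VitCfg(conv_cfg=ConvCfg(kernel_size=3))
```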
**Expected behavior**
It should work.
**Screenshots**
N.A.
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.18-dev
**Desktop (please complete the following information):**
N.A.
**Additional context**
N.A.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import sys, re, os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 try:
13 with open("README.md") as readme_file:
14 readme = readme_file.read()
15 except Exception as error:
16 readme = "No README information found."
17 sys.stderr.write(
18 "Warning: Could not open '%s' due %s\n" % ("README.md", error)
19 )
20
21
22 class CustomInstallCommand(install):
23 def run(self):
24 install.run(self)
25
26
27 class CustomDevelopCommand(develop):
28 def run(self):
29 develop.run(self)
30
31
32 class CustomEggInfoCommand(egg_info):
33 def run(self):
34 egg_info.run(self)
35
36
37 try:
38 filepath = "GANDLF/version.py"
39 version_file = open(filepath)
40 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
41
42 except Exception as error:
43 __version__ = "0.0.1"
44 sys.stderr.write(
45 "Warning: Could not open '%s' due %s\n" % (filepath, error)
46 )
47
48 # Handle cases where specific files need to be bundled into the final package as installed via PyPI
49 dockerfiles = [
50 item
51 for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))
52 if (os.path.isfile(item) and item.startswith("Dockerfile-"))
53 ]
54 entrypoint_files = [
55 item
56 for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))
57 if (os.path.isfile(item) and item.startswith("gandlf_"))
58 ]
59 setup_files = ["setup.py", ".dockerignore", "pyproject.toml", "MANIFEST.in"]
60 all_extra_files = dockerfiles + entrypoint_files + setup_files
61 all_extra_files_pathcorrected = [
62 os.path.join("../", item) for item in all_extra_files
63 ]
64 # find_packages should only ever find these as subpackages of gandlf, not as top-level packages
65 # generate this dynamically?
66 # GANDLF.GANDLF is needed to prevent recursion madness in deployments
67 toplevel_package_excludes = [
68 "GANDLF.GANDLF",
69 "anonymize",
70 "cli",
71 "compute",
72 "data",
73 "grad_clipping",
74 "losses",
75 "metrics",
76 "models",
77 "optimizers",
78 "schedulers",
79 "utils",
80 ]
81
82
83 requirements = [
84 "torch==2.1.0",
85 "black==23.11.0",
86 "numpy==1.25.0",
87 "scipy",
88 "SimpleITK!=2.0.*",
89 "SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
90 "torchvision",
91 "tqdm",
92 "torchio==0.19.3",
93 "pandas>=2.0.0",
94 "scikit-learn>=0.23.2",
95 "scikit-image>=0.19.1",
96 "setuptools",
97 "seaborn",
98 "pyyaml",
99 "tiffslide",
100 "matplotlib",
101 "gdown",
102 "pytest",
103 "coverage",
104 "pytest-cov",
105 "psutil",
106 "medcam",
107 "opencv-python",
108 "torchmetrics==1.1.2",
109 "zarr==2.10.3",
110 "pydicom",
111 "onnx",
112 "torchinfo==1.7.0",
113 "segmentation-models-pytorch==0.3.2",
114 "ACSConv==0.1.1",
115 "docker",
116 "dicom-anonymizer",
117 "twine",
118 "zarr",
119 "keyring",
120 ]
121
122 if __name__ == "__main__":
123 setup(
124 name="GANDLF",
125 version=__version__,
126 author="MLCommons",
127 author_email="[email protected]",
128 python_requires=">3.8, <3.12",
129 packages=find_packages(
130 where=os.path.dirname(os.path.abspath(__file__)),
131 exclude=toplevel_package_excludes,
132 ),
133 cmdclass={
134 "install": CustomInstallCommand,
135 "develop": CustomDevelopCommand,
136 "egg_info": CustomEggInfoCommand,
137 },
138 scripts=[
139 "gandlf_run",
140 "gandlf_constructCSV",
141 "gandlf_collectStats",
142 "gandlf_patchMiner",
143 "gandlf_preprocess",
144 "gandlf_anonymizer",
145 "gandlf_verifyInstall",
146 "gandlf_configGenerator",
147 "gandlf_recoverConfig",
148 "gandlf_deploy",
149 "gandlf_optimizeModel",
150 "gandlf_generateMetrics",
151 ],
152 classifiers=[
153 "Development Status :: 3 - Alpha",
154 "Intended Audience :: Science/Research",
155 "License :: OSI Approved :: Apache Software License",
156 "Natural Language :: English",
157 "Operating System :: OS Independent",
158 "Programming Language :: Python :: 3.9",
159 "Programming Language :: Python :: 3.10",
160 "Programming Language :: Python :: 3.11",
161 "Topic :: Scientific/Engineering :: Medical Science Apps.",
162 ],
163 description=(
164 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
165 ),
166 install_requires=requirements,
167 license="Apache-2.0",
168 long_description=readme,
169 long_description_content_type="text/markdown",
170 include_package_data=True,
171 package_data={"GANDLF": all_extra_files_pathcorrected},
172 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
173 zip_safe=False,
174 )
175
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -110,7 +110,7 @@
"pydicom",
"onnx",
"torchinfo==1.7.0",
- "segmentation-models-pytorch==0.3.2",
+ "segmentation-models-pytorch==0.3.3",
"ACSConv==0.1.1",
"docker",
"dicom-anonymizer",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -110,7 +110,7 @@\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n- \"segmentation-models-pytorch==0.3.2\",\n+ \"segmentation-models-pytorch==0.3.3\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer\",\n", "issue": "All training is failing with a `timm` error\n**Describe the bug**\r\nUnable to train on current master.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Try to start any segmentation training.\r\n2. See error:\r\n```python-traceback\r\nTraceback (most recent call last):\r\n File \"/software/gandlf_personal/gandlf_run\", line 11, in <module>\r\n from GANDLF.cli import main_run, copyrightMessage\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/cli/__init__.py\", line 2, in <module>\r\n from .main_run import main_run\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/cli/main_run.py\", line 4, in <module>\r\n from GANDLF.training_manager import TrainingManager, TrainingManager_split\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/training_manager.py\", line 6, in <module>\r\n from GANDLF.compute import training_loop\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/__init__.py\", line 1, in <module>\r\n from .training_loop import training_loop\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/training_loop.py\", line 30, in <module>\r\n from .generic import create_pytorch_objects\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/generic.py\", line 3, in <module>\r\n from GANDLF.models import get_model\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/models/__init__.py\", line 32, in <module>\r\n from .imagenet_unet import imagenet_unet_wrapper\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/models/imagenet_unet.py\", line 7, in <module>\r\n from segmentation_models_pytorch.base import (\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/site-packages/segmentation_models_pytorch/__init__.py\", line 2, in <module>\r\n from . 
import encoders\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/site-packages/segmentation_models_pytorch/encoders/__init__.py\", line 1, in <module>\r\n import timm\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/site-packages/timm/__init__.py\", line 2, in <module>\r\n from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \\\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/site-packages/timm/models/__init__.py\", line 28, in <module>\r\n from .maxxvit import *\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/site-packages/timm/models/maxxvit.py\", line 225, in <module>\r\n @dataclass\r\n ^^^^^^^^^\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py\", line 1230, in dataclass\r\n return wrap(cls)\r\n ^^^^^^^^^\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py\", line 1220, in wrap\r\n return _process_class(cls, init, repr, eq, order, unsafe_hash,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py\", line 958, in _process_class\r\n cls_fields.append(_get_field(cls, name, type, kw_only))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/software/gandlf_personal/venv11/lib/python3.11/dataclasses.py\", line 815, in _get_field\r\n raise ValueError(f'mutable default {type(f.default)} for field '\r\nValueError: mutable default <class 'timm.models.maxxvit.MaxxVitConvCfg'> for field conv_cfg is not allowed: use default_factory\r\n```\r\n\r\n**Expected behavior**\r\nIt should work.\r\n\r\n**Screenshots**\r\nN.A.\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.18-dev\r\n\r\n**Desktop (please complete the following information):**\r\nN.A.\r\n\r\n**Additional context**\r\nN.A.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport sys, re, os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\ntry:\n with open(\"README.md\") as readme_file:\n readme = readme_file.read()\nexcept Exception as error:\n readme = \"No README information found.\"\n sys.stderr.write(\n \"Warning: Could not open '%s' due %s\\n\" % (\"README.md\", error)\n )\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\n \"Warning: Could not open '%s' due %s\\n\" % (filepath, error)\n )\n\n# Handle cases where specific files need to be bundled into the final package as installed via PyPI\ndockerfiles = [\n item\n for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))\n if (os.path.isfile(item) and item.startswith(\"Dockerfile-\"))\n]\nentrypoint_files = [\n item\n for item in os.listdir(os.path.dirname(os.path.abspath(__file__)))\n if (os.path.isfile(item) and item.startswith(\"gandlf_\"))\n]\nsetup_files = [\"setup.py\", \".dockerignore\", \"pyproject.toml\", \"MANIFEST.in\"]\nall_extra_files = dockerfiles + entrypoint_files + 
setup_files\nall_extra_files_pathcorrected = [\n os.path.join(\"../\", item) for item in all_extra_files\n]\n# find_packages should only ever find these as subpackages of gandlf, not as top-level packages\n# generate this dynamically?\n# GANDLF.GANDLF is needed to prevent recursion madness in deployments\ntoplevel_package_excludes = [\n \"GANDLF.GANDLF\",\n \"anonymize\",\n \"cli\",\n \"compute\",\n \"data\",\n \"grad_clipping\",\n \"losses\",\n \"metrics\",\n \"models\",\n \"optimizers\",\n \"schedulers\",\n \"utils\",\n]\n\n\nrequirements = [\n \"torch==2.1.0\",\n \"black==23.11.0\",\n \"numpy==1.25.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.19.3\",\n \"pandas>=2.0.0\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"gdown\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==1.1.2\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.2\",\n \"ACSConv==0.1.1\",\n \"docker\",\n \"dicom-anonymizer\",\n \"twine\",\n \"zarr\",\n \"keyring\",\n]\n\nif __name__ == \"__main__\":\n setup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">3.8, <3.12\",\n packages=find_packages(\n where=os.path.dirname(os.path.abspath(__file__)),\n exclude=toplevel_package_excludes,\n ),\n cmdclass={\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n \"gandlf_recoverConfig\",\n \"gandlf_deploy\",\n \"gandlf_optimizeModel\",\n \"gandlf_generateMetrics\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n package_data={\"GANDLF\": all_extra_files_pathcorrected},\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n )\n", "path": "setup.py"}]} | 3,337 | 113 |
gh_patches_debug_27124 | rasdani/github-patches | git_diff | chainer__chainer-6807 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
F.mean_absolute_error numerically unstable with float16 arrays
In #5053, float16 support has been enabled for [F.mean_absolute_error](https://github.com/chainer/chainer/blob/master/chainer/functions/loss/mean_absolute_error.py), but it seems to produce NaN values quite easily. Usually this happens when using big batch sizes and/or if the absolute error difference is large.
The calculation is done by summing over all the absolute differences, and then dividing by the number of elements in the array. However, it appears that the summing can produce large numbers outside the possible range for `float16`. The current implementation can be broken down as follows:
```python
def forward_cpu(self, inputs):
x0, x1 = inputs
self.diff = x0 - x1
diff = self.diff.ravel()
abs_diff = abs(diff)
summed_abs_diff = abs_diff.sum() # numerically unstable, can result in inf
mean_abs_error = np.array(summed_abs_diff / diff.size, dtype=diff.dtype)
return mean_abs_error
```
Code to reproduce error:
```python
import chainer.functions as F
import numpy as np
a = np.full(shape=(64,1,16,16), fill_value=2, dtype=np.float16)
b = np.full(shape=(64,1,16,16), fill_value=-2, dtype=np.float16)
loss = F.mean_absolute_error(a,b)
# /home/user/.local/share/virtualenvs/.../lib/python3.6/site-packages/numpy/core/_methods.py:36: RuntimeWarning: overflow encountered in reduce
# return umr_sum(a, axis, dtype, out, keepdims, initial)
# variable(inf)
loss = F.mean_absolute_error(a.astype("float32"), b.astype("float32"))
# variable(4.)
```
Note that the actual loss (4) would still be valid in the float16 range, it is just that summing over many values results in an `inf`, which cannot then be divided to get a proper number.
Workaround ideas:
I've noticed the new `mixed16` mode that was implemented in #6456, and was wondering if there might be a similar way to do the intermediate calculations in `float32`, and cast the result back into `float16`? Thoughts?
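
A minimal sketch of that idea (an assumption about one possible fix, not Chainer's actual implementation): keep the inputs and the returned loss in `float16`, but upcast only the intermediate reduction to `float32` so the sum cannot overflow.

```python
import numpy as np


def mean_absolute_error_fp16_safe(x0, x1):
    # Accumulate in float32 when the inputs are float16, then cast the
    # scalar result back to the original dtype.
    diff = (x0 - x1).ravel()
    acc_dtype = np.float32 if diff.dtype == np.float16 else diff.dtype
    mean = np.abs(diff.astype(acc_dtype)).sum() / diff.size
    return np.asarray(mean, dtype=diff.dtype)


a = np.full((64, 1, 16, 16), 2, dtype=np.float16)
b = np.full((64, 1, 16, 16), -2, dtype=np.float16)
print(mean_absolute_error_fp16_safe(a, b))  # 4.0 instead of inf
```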
System info:
```
Platform: Linux-4.15.0-46-generic-x86_64-with-debian-buster-sid
Chainer: 6.0.0b3
NumPy: 1.16.2
CuPy:
CuPy Version : 6.0.0b3
CUDA Root : /usr/local/cuda
CUDA Build Version : 10000
CUDA Driver Version : 10010
CUDA Runtime Version : 10000
cuDNN Build Version : 7402
cuDNN Version : 7402
NCCL Build Version : 2307
NCCL Runtime Version : 2307
iDeep: Not Available
```
</issue>
<code>
[start of chainer/functions/loss/mean_absolute_error.py]
1 import numpy
2
3 import chainer
4 from chainer import backend
5 from chainer import function_node
6 from chainer.utils import type_check
7
8
9 class MeanAbsoluteError(function_node.FunctionNode):
10
11 """Mean absolute error function."""
12
13 def check_type_forward(self, in_types):
14 type_check._argname(in_types, ('x0', 'x1'))
15 type_check.expect(
16 in_types[0].dtype.kind == 'f',
17 in_types[0].dtype == in_types[1].dtype,
18 in_types[0].shape == in_types[1].shape
19 )
20
21 def forward_cpu(self, inputs):
22 x0, x1 = inputs
23 self.diff = x0 - x1
24 diff = self.diff.ravel()
25 return numpy.array(abs(diff).sum() / diff.size, dtype=diff.dtype),
26
27 def forward_gpu(self, inputs):
28 x0, x1 = inputs
29 self.diff = x0 - x1
30 diff = self.diff.ravel()
31 return abs(diff).sum() / diff.dtype.type(diff.size),
32
33 def backward(self, indexes, grad_outputs):
34 gy, = grad_outputs
35 coeff = gy * gy.data.dtype.type(1. / self.diff.size)
36 coeff = chainer.functions.broadcast_to(coeff, self.diff.shape)
37 gx0 = coeff * backend.get_array_module(gy.data).sign(self.diff)
38 return gx0, -gx0
39
40
41 def mean_absolute_error(x0, x1):
42 """Mean absolute error function.
43
44 The function computes the mean absolute error between two variables. The
45 mean is taken over the minibatch. Args ``x0`` and ``x1`` must have the
46 same dimensions. This function first calculates the absolute value
47 differences between the corresponding elements in x0 and x1, and then
48 returns the mean of those differences.
49
50 Args:
51 x0 (:class:`~chainer.Variable` or :ref:`ndarray`): Input variable.
52 x1 (:class:`~chainer.Variable` or :ref:`ndarray`): Input variable.
53
54 Returns:
55 ~chainer.Variable:
56 A variable holding an array representing the mean absolute
57 error of two inputs.
58
59 .. admonition:: Example
60
61 1D array examples:
62
63 >>> x = np.array([1, 2, 3]).astype(np.float32)
64 >>> y = np.array([0, 0, 0]).astype(np.float32)
65 >>> F.mean_absolute_error(x, y)
66 variable(2.)
67 >>> x = np.array([1, 2, 3, 4, 5, 6]).astype(np.float32)
68 >>> y = np.array([7, 8, 9, 10, 11, 12]).astype(np.float32)
69 >>> F.mean_absolute_error(x, y)
70 variable(6.)
71
72 2D array example:
73
74 In this example, there are 4 elements, and thus 4 errors
75 >>> x = np.array([[1, 2], [3, 4]]).astype(np.float32)
76 >>> y = np.array([[8, 8], [8, 8]]).astype(np.float32)
77 >>> F.mean_absolute_error(x, y)
78 variable(5.5)
79
80 3D array example:
81
82 In this example, there are 8 elements, and thus 8 errors
83 >>> x = np.reshape(np.array([1, 2, 3, 4, 5, 6, 7, 8]), (2, 2, 2))
84 >>> y = np.reshape(np.array([8, 8, 8, 8, 8, 8, 8, 8]), (2, 2, 2))
85 >>> x = x.astype(np.float32)
86 >>> y = y.astype(np.float32)
87 >>> F.mean_absolute_error(x, y)
88 variable(3.5)
89
90 """
91 return MeanAbsoluteError().apply((x0, x1))[0]
92
[end of chainer/functions/loss/mean_absolute_error.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/loss/mean_absolute_error.py b/chainer/functions/loss/mean_absolute_error.py
--- a/chainer/functions/loss/mean_absolute_error.py
+++ b/chainer/functions/loss/mean_absolute_error.py
@@ -6,6 +6,15 @@
from chainer.utils import type_check
+def _get_intermediate_dtype(dtype):
+ # Returns the dtype for intermediate calculation.
+ # For float16 input, float32 is used.
+ # Otherwise the same dtype as the parameter is used.
+ if dtype == numpy.float16:
+ return numpy.float32
+ return dtype
+
+
class MeanAbsoluteError(function_node.FunctionNode):
"""Mean absolute error function."""
@@ -21,14 +30,19 @@
def forward_cpu(self, inputs):
x0, x1 = inputs
self.diff = x0 - x1
- diff = self.diff.ravel()
- return numpy.array(abs(diff).sum() / diff.size, dtype=diff.dtype),
+ orig_dtype = self.diff.dtype
+ dtype = _get_intermediate_dtype(orig_dtype)
+ diff = self.diff.ravel().astype(dtype, copy=False)
+ return numpy.array(abs(diff).sum() / diff.size, dtype=orig_dtype),
def forward_gpu(self, inputs):
x0, x1 = inputs
self.diff = x0 - x1
- diff = self.diff.ravel()
- return abs(diff).sum() / diff.dtype.type(diff.size),
+ orig_dtype = self.diff.dtype
+ dtype = _get_intermediate_dtype(orig_dtype)
+ diff = self.diff.ravel().astype(dtype, copy=False)
+ return (abs(diff).sum() / diff.dtype.type(diff.size)).astype(
+ orig_dtype, copy=False),
def backward(self, indexes, grad_outputs):
gy, = grad_outputs
| {"golden_diff": "diff --git a/chainer/functions/loss/mean_absolute_error.py b/chainer/functions/loss/mean_absolute_error.py\n--- a/chainer/functions/loss/mean_absolute_error.py\n+++ b/chainer/functions/loss/mean_absolute_error.py\n@@ -6,6 +6,15 @@\n from chainer.utils import type_check\n \n \n+def _get_intermediate_dtype(dtype):\n+ # Returns the dtype for intermediate calculation.\n+ # For float16 input, float32 is used.\n+ # Otherwise the same dtype as the parameter is used.\n+ if dtype == numpy.float16:\n+ return numpy.float32\n+ return dtype\n+\n+\n class MeanAbsoluteError(function_node.FunctionNode):\n \n \"\"\"Mean absolute error function.\"\"\"\n@@ -21,14 +30,19 @@\n def forward_cpu(self, inputs):\n x0, x1 = inputs\n self.diff = x0 - x1\n- diff = self.diff.ravel()\n- return numpy.array(abs(diff).sum() / diff.size, dtype=diff.dtype),\n+ orig_dtype = self.diff.dtype\n+ dtype = _get_intermediate_dtype(orig_dtype)\n+ diff = self.diff.ravel().astype(dtype, copy=False)\n+ return numpy.array(abs(diff).sum() / diff.size, dtype=orig_dtype),\n \n def forward_gpu(self, inputs):\n x0, x1 = inputs\n self.diff = x0 - x1\n- diff = self.diff.ravel()\n- return abs(diff).sum() / diff.dtype.type(diff.size),\n+ orig_dtype = self.diff.dtype\n+ dtype = _get_intermediate_dtype(orig_dtype)\n+ diff = self.diff.ravel().astype(dtype, copy=False)\n+ return (abs(diff).sum() / diff.dtype.type(diff.size)).astype(\n+ orig_dtype, copy=False),\n \n def backward(self, indexes, grad_outputs):\n gy, = grad_outputs\n", "issue": "F.mean_absolute_error numerically unstable with float16 arrays\nIn #5053, float16 support has been enabled for [F.mean_absolute_error](https://github.com/chainer/chainer/blob/master/chainer/functions/loss/mean_absolute_error.py), but it seems to produce NaN values quite easily. Usually this happens when using big batch sizes and/or if the absolute error difference is large.\r\n\r\nThe calculation is done by summing over all the absolute differences, and then dividing by the number of elements in the array. However, it appears that the summing can produce large numbers outside the possible range for `float16`. 
The current implementation can be broken down as follows:\r\n\r\n```python\r\ndef forward_cpu(self, inputs):\r\n x0, x1 = inputs\r\n self.diff = x0 - x1\r\n diff = self.diff.ravel()\r\n abs_diff = abs(diff)\r\n summed_abs_diff = abs_diff.sum() # numerically unstable, can result in inf\r\n mean_abs_error = np.array(summed_abs_diff / diff.size, dtype=diff.dtype)\r\n return mean_abs_error\r\n```\r\n\r\nCode to reproduce error:\r\n\r\n```python\r\nimport chainer.functions as F\r\nimport numpy as np\r\n\r\na = np.full(shape=(64,1,16,16), fill_value=2, dtype=np.float16)\r\nb = np.full(shape=(64,1,16,16), fill_value=-2, dtype=np.float16)\r\n\r\nloss = F.mean_absolute_error(a,b)\r\n# /home/user/.local/share/virtualenvs/.../lib/python3.6/site-packages/numpy/core/_methods.py:36: RuntimeWarning: overflow encountered in reduce\r\n# return umr_sum(a, axis, dtype, out, keepdims, initial)\r\n# variable(inf)\r\n\r\nloss = F.mean_absolute_error(a.astype(\"float32\"), b.astype(\"float32\"))\r\n# variable(4.)\r\n```\r\n\r\nNote that the actual loss (4) would still be valid in the float16 range, it is just that summing over many values results in an `inf`, which cannot then be divided to get a proper number.\r\n\r\nWorkaround ideas:\r\n\r\nI've noticed the new `mixed16` mode that was implemented in #6456, and was wondering if there might be a similar way to do the intermediate calculations in `float32`, and cast the result back into `float16`? Thoughts?\r\n\r\nSystem info:\r\n```\r\nPlatform: Linux-4.15.0-46-generic-x86_64-with-debian-buster-sid\r\nChainer: 6.0.0b3\r\nNumPy: 1.16.2\r\nCuPy:\r\n CuPy Version : 6.0.0b3\r\n CUDA Root : /usr/local/cuda\r\n CUDA Build Version : 10000\r\n CUDA Driver Version : 10010\r\n CUDA Runtime Version : 10000\r\n cuDNN Build Version : 7402\r\n cuDNN Version : 7402\r\n NCCL Build Version : 2307\r\n NCCL Runtime Version : 2307\r\niDeep: Not Available\r\n```\n", "before_files": [{"content": "import numpy\n\nimport chainer\nfrom chainer import backend\nfrom chainer import function_node\nfrom chainer.utils import type_check\n\n\nclass MeanAbsoluteError(function_node.FunctionNode):\n\n \"\"\"Mean absolute error function.\"\"\"\n\n def check_type_forward(self, in_types):\n type_check._argname(in_types, ('x0', 'x1'))\n type_check.expect(\n in_types[0].dtype.kind == 'f',\n in_types[0].dtype == in_types[1].dtype,\n in_types[0].shape == in_types[1].shape\n )\n\n def forward_cpu(self, inputs):\n x0, x1 = inputs\n self.diff = x0 - x1\n diff = self.diff.ravel()\n return numpy.array(abs(diff).sum() / diff.size, dtype=diff.dtype),\n\n def forward_gpu(self, inputs):\n x0, x1 = inputs\n self.diff = x0 - x1\n diff = self.diff.ravel()\n return abs(diff).sum() / diff.dtype.type(diff.size),\n\n def backward(self, indexes, grad_outputs):\n gy, = grad_outputs\n coeff = gy * gy.data.dtype.type(1. / self.diff.size)\n coeff = chainer.functions.broadcast_to(coeff, self.diff.shape)\n gx0 = coeff * backend.get_array_module(gy.data).sign(self.diff)\n return gx0, -gx0\n\n\ndef mean_absolute_error(x0, x1):\n \"\"\"Mean absolute error function.\n\n The function computes the mean absolute error between two variables. The\n mean is taken over the minibatch. Args ``x0`` and ``x1`` must have the\n same dimensions. 
This function first calculates the absolute value\n differences between the corresponding elements in x0 and x1, and then\n returns the mean of those differences.\n\n Args:\n x0 (:class:`~chainer.Variable` or :ref:`ndarray`): Input variable.\n x1 (:class:`~chainer.Variable` or :ref:`ndarray`): Input variable.\n\n Returns:\n ~chainer.Variable:\n A variable holding an array representing the mean absolute\n error of two inputs.\n\n .. admonition:: Example\n\n 1D array examples:\n\n >>> x = np.array([1, 2, 3]).astype(np.float32)\n >>> y = np.array([0, 0, 0]).astype(np.float32)\n >>> F.mean_absolute_error(x, y)\n variable(2.)\n >>> x = np.array([1, 2, 3, 4, 5, 6]).astype(np.float32)\n >>> y = np.array([7, 8, 9, 10, 11, 12]).astype(np.float32)\n >>> F.mean_absolute_error(x, y)\n variable(6.)\n\n 2D array example:\n\n In this example, there are 4 elements, and thus 4 errors\n >>> x = np.array([[1, 2], [3, 4]]).astype(np.float32)\n >>> y = np.array([[8, 8], [8, 8]]).astype(np.float32)\n >>> F.mean_absolute_error(x, y)\n variable(5.5)\n\n 3D array example:\n\n In this example, there are 8 elements, and thus 8 errors\n >>> x = np.reshape(np.array([1, 2, 3, 4, 5, 6, 7, 8]), (2, 2, 2))\n >>> y = np.reshape(np.array([8, 8, 8, 8, 8, 8, 8, 8]), (2, 2, 2))\n >>> x = x.astype(np.float32)\n >>> y = y.astype(np.float32)\n >>> F.mean_absolute_error(x, y)\n variable(3.5)\n\n \"\"\"\n return MeanAbsoluteError().apply((x0, x1))[0]\n", "path": "chainer/functions/loss/mean_absolute_error.py"}]} | 2,298 | 413 |
gh_patches_debug_32491 | rasdani/github-patches | git_diff | openai__gym-1573 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support wrapper transformations to VecEnv
@tristandeleu @pzhokhov @christopherhesse It might be rather convenient for the user experience to provide a list of wrapper transformations for the atomic environments when creating a vectorized environment, e.g.
```python
transforms = [AtariPreprocessing, SignReward, ...]
env = gym.vector.make('Pong-v0', 16, True, transforms=transforms)
```
For additional arguments, the user is required to use `partial()` to define them within the transform list, so that each internal environment is wrapped according to the transformation list.
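
A rough sketch of one way the vectorized `make` could honor such a list is shown below; the `wrappers` keyword and the `make_vec` helper are illustrative assumptions, not gym's actual API:

```python
import gym
from gym.vector.async_vector_env import AsyncVectorEnv
from gym.vector.sync_vector_env import SyncVectorEnv


def make_vec(id, num_envs=1, asynchronous=True, wrappers=None, **kwargs):
    """Build `num_envs` copies of `id`, applying each wrapper callable in order."""
    def _make_env():
        env = gym.make(id, **kwargs)
        for wrapper in (wrappers or []):
            env = wrapper(env)  # a wrapper class, or partial(SomeWrapper, arg=...)
        return env

    env_fns = [_make_env for _ in range(num_envs)]
    return AsyncVectorEnv(env_fns) if asynchronous else SyncVectorEnv(env_fns)
```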
</issue>
<code>
[start of gym/vector/__init__.py]
1 from gym.vector.async_vector_env import AsyncVectorEnv
2 from gym.vector.sync_vector_env import SyncVectorEnv
3 from gym.vector.vector_env import VectorEnv
4
5 __all__ = ['AsyncVectorEnv', 'SyncVectorEnv', 'VectorEnv', 'make']
6
7 def make(id, num_envs=1, asynchronous=True, **kwargs):
8 """Create a vectorized environment from multiple copies of an environment,
9 from its id
10
11 Parameters
12 ----------
13 id : str
14 The environment ID. This must be a valid ID from the registry.
15
16 num_envs : int
17 Number of copies of the environment.
18
19 asynchronous : bool (default: `True`)
20 If `True`, wraps the environments in an `AsyncVectorEnv` (which uses
21 `multiprocessing` to run the environments in parallel). If `False`,
22 wraps the environments in a `SyncVectorEnv`.
23
24 Returns
25 -------
26 env : `gym.vector.VectorEnv` instance
27 The vectorized environment.
28
29 Example
30 -------
31 >>> import gym
32 >>> env = gym.vector.make('CartPole-v1', 3)
33 >>> env.reset()
34 array([[-0.04456399, 0.04653909, 0.01326909, -0.02099827],
35 [ 0.03073904, 0.00145001, -0.03088818, -0.03131252],
36 [ 0.03468829, 0.01500225, 0.01230312, 0.01825218]],
37 dtype=float32)
38 """
39 from gym.envs import make as make_
40 def _make_env():
41 return make_(id, **kwargs)
42 env_fns = [_make_env for _ in range(num_envs)]
43 return AsyncVectorEnv(env_fns) if asynchronous else SyncVectorEnv(env_fns)
44
[end of gym/vector/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gym/vector/__init__.py b/gym/vector/__init__.py
--- a/gym/vector/__init__.py
+++ b/gym/vector/__init__.py
@@ -1,10 +1,15 @@
+try:
+ from collections.abc import Iterable
+except ImportError:
+ Iterable = (tuple, list)
+
from gym.vector.async_vector_env import AsyncVectorEnv
from gym.vector.sync_vector_env import SyncVectorEnv
from gym.vector.vector_env import VectorEnv
__all__ = ['AsyncVectorEnv', 'SyncVectorEnv', 'VectorEnv', 'make']
-def make(id, num_envs=1, asynchronous=True, **kwargs):
+def make(id, num_envs=1, asynchronous=True, wrappers=None, **kwargs):
"""Create a vectorized environment from multiple copies of an environment,
from its id
@@ -20,6 +25,10 @@
If `True`, wraps the environments in an `AsyncVectorEnv` (which uses
`multiprocessing` to run the environments in parallel). If `False`,
wraps the environments in a `SyncVectorEnv`.
+
+ wrappers : Callable or Iterable of Callables (default: `None`)
+ If not `None`, then apply the wrappers to each internal
+ environment during creation.
Returns
-------
@@ -38,6 +47,15 @@
"""
from gym.envs import make as make_
def _make_env():
- return make_(id, **kwargs)
+ env = make_(id, **kwargs)
+ if wrappers is not None:
+ if callable(wrappers):
+ env = wrappers(env)
+ elif isinstance(wrappers, Iterable) and all([callable(w) for w in wrappers]):
+ for wrapper in wrappers:
+ env = wrapper(env)
+ else:
+ raise NotImplementedError
+ return env
env_fns = [_make_env for _ in range(num_envs)]
return AsyncVectorEnv(env_fns) if asynchronous else SyncVectorEnv(env_fns)
| {"golden_diff": "diff --git a/gym/vector/__init__.py b/gym/vector/__init__.py\n--- a/gym/vector/__init__.py\n+++ b/gym/vector/__init__.py\n@@ -1,10 +1,15 @@\n+try:\n+ from collections.abc import Iterable\n+except ImportError:\n+ Iterable = (tuple, list)\n+\n from gym.vector.async_vector_env import AsyncVectorEnv\n from gym.vector.sync_vector_env import SyncVectorEnv\n from gym.vector.vector_env import VectorEnv\n \n __all__ = ['AsyncVectorEnv', 'SyncVectorEnv', 'VectorEnv', 'make']\n \n-def make(id, num_envs=1, asynchronous=True, **kwargs):\n+def make(id, num_envs=1, asynchronous=True, wrappers=None, **kwargs):\n \"\"\"Create a vectorized environment from multiple copies of an environment,\n from its id\n \n@@ -20,6 +25,10 @@\n If `True`, wraps the environments in an `AsyncVectorEnv` (which uses \n `multiprocessing` to run the environments in parallel). If `False`,\n wraps the environments in a `SyncVectorEnv`.\n+ \n+ wrappers : Callable or Iterable of Callables (default: `None`)\n+ If not `None`, then apply the wrappers to each internal \n+ environment during creation. \n \n Returns\n -------\n@@ -38,6 +47,15 @@\n \"\"\"\n from gym.envs import make as make_\n def _make_env():\n- return make_(id, **kwargs)\n+ env = make_(id, **kwargs)\n+ if wrappers is not None:\n+ if callable(wrappers):\n+ env = wrappers(env)\n+ elif isinstance(wrappers, Iterable) and all([callable(w) for w in wrappers]):\n+ for wrapper in wrappers:\n+ env = wrapper(env)\n+ else:\n+ raise NotImplementedError\n+ return env\n env_fns = [_make_env for _ in range(num_envs)]\n return AsyncVectorEnv(env_fns) if asynchronous else SyncVectorEnv(env_fns)\n", "issue": "Support wrapper transformations to VecEnv\n@tristandeleu @pzhokhov @christopherhesse It might be rather convenient for the user experience to provide a list of wrapper transformations for the atomic environments when creating vectorized environment, e.g.\r\n\r\n```python\r\ntransforms = [AtariPreprocessing, SignReward, ...]\r\nenv = gym.vector.make('Pong-v0', 16, True, transforms=transforms)\r\n```\r\nFor additional arguments, the user is required to use `partial()` to define them within the transform list. So that each internal environment is wrapped according to the transformation list. \n", "before_files": [{"content": "from gym.vector.async_vector_env import AsyncVectorEnv\nfrom gym.vector.sync_vector_env import SyncVectorEnv\nfrom gym.vector.vector_env import VectorEnv\n\n__all__ = ['AsyncVectorEnv', 'SyncVectorEnv', 'VectorEnv', 'make']\n\ndef make(id, num_envs=1, asynchronous=True, **kwargs):\n \"\"\"Create a vectorized environment from multiple copies of an environment,\n from its id\n\n Parameters\n ----------\n id : str\n The environment ID. This must be a valid ID from the registry.\n\n num_envs : int\n Number of copies of the environment. \n\n asynchronous : bool (default: `True`)\n If `True`, wraps the environments in an `AsyncVectorEnv` (which uses \n `multiprocessing` to run the environments in parallel). 
If `False`,\n wraps the environments in a `SyncVectorEnv`.\n\n Returns\n -------\n env : `gym.vector.VectorEnv` instance\n The vectorized environment.\n\n Example\n -------\n >>> import gym\n >>> env = gym.vector.make('CartPole-v1', 3)\n >>> env.reset()\n array([[-0.04456399, 0.04653909, 0.01326909, -0.02099827],\n [ 0.03073904, 0.00145001, -0.03088818, -0.03131252],\n [ 0.03468829, 0.01500225, 0.01230312, 0.01825218]],\n dtype=float32)\n \"\"\"\n from gym.envs import make as make_\n def _make_env():\n return make_(id, **kwargs)\n env_fns = [_make_env for _ in range(num_envs)]\n return AsyncVectorEnv(env_fns) if asynchronous else SyncVectorEnv(env_fns)\n", "path": "gym/vector/__init__.py"}]} | 1,214 | 449 |
gh_patches_debug_24975 | rasdani/github-patches | git_diff | streamlit__streamlit-7033 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Using help param causes use_container_width to be ignored with st.button for version 1.18
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Using both the `help` and `use_container_width` parameters with `st.button` in version 1.18 results in `use_container_width` being ignored
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-6161)
```Python
import streamlit as st
c1, c2, c3 = st.columns([1, 1, 1])
with c1:
st.button('button 1', use_container_width=True)
with c2:
st.button('button 2', use_container_width=True)
with c3:
st.button('button 3', use_container_width=True, help = 'example')
st.button("test", use_container_width=True, help='test')
```
### Steps To Reproduce
Run app that uses `help` and `use_container_width` parameters for `st.button` with version 1.18
### Expected Behavior
Expected behavior is that `use_container_width` impacts width of button widget
### Current Behavior
Current behavior:
<img width="631" alt="Screenshot 2023-02-21 at 11 48 14 AM" src="https://user-images.githubusercontent.com/16749069/220443951-e1ee3abc-0210-4a04-85b4-85b07ade9cc9.png">
`use_container_width` is ignored
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.18.0
- Python version:
- Operating System:
- Browser:
- Virtual environment:
### Additional Information
_No response_
### Are you willing to submit a PR?
- [ ] Yes, I am willing to submit a PR!
</issue>
<code>
[start of e2e/scripts/st_button.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16 from streamlit import runtime
17
18 # st.session_state can only be used in streamlit
19 if runtime.exists():
20
21 def on_click(x, y):
22 if "click_count" not in st.session_state:
23 st.session_state.click_count = 0
24
25 st.session_state.click_count += 1
26 st.session_state.x = x
27 st.session_state.y = y
28
29 i1 = st.button(
30 "button 1", key="button", on_click=on_click, args=(1,), kwargs={"y": 2}
31 )
32 st.write("value:", i1)
33 st.write("value from state:", st.session_state["button"])
34
35 button_was_clicked = "click_count" in st.session_state
36 st.write("Button was clicked:", button_was_clicked)
37
38 if button_was_clicked:
39 st.write("times clicked:", st.session_state.click_count)
40 st.write("arg value:", st.session_state.x)
41 st.write("kwarg value:", st.session_state.y)
42
43 i2 = st.checkbox("reset button return value")
44
45 i3 = st.button("button 2", disabled=True)
46 st.write("value 2:", i3)
47
48 i4 = st.button("button 3", type="primary")
49 st.write("value 3:", i4)
50
51 i5 = st.button("button 4", type="primary", disabled=True)
52 st.write("value 4:", i5)
53
54 st.button("button 5", use_container_width=True)
55
56 cols = st.columns(3)
57
58 # Order of conn_types matters to preserve the order in st_button.spec.js and the snapshot
59 conn_types = [
60 "snowflake",
61 "bigquery",
62 "huggingface",
63 "aws_s3",
64 "http_file",
65 "postgresql",
66 "gsheets",
67 "custom",
68 ]
69 for i in range(len(conn_types)):
70 cols[i % 3].button(conn_types[i], use_container_width=True)
71
[end of e2e/scripts/st_button.py]
[start of e2e/scripts/st_form_use_container_width_submit_button.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 with st.form("my_form"):
18 st.write("Inside the form")
19 slider_val = st.slider("Form slider")
20 checkbox_val = st.checkbox("Form checkbox")
21 submitted = st.form_submit_button("Submit", use_container_width=True)
22 if submitted:
23 st.write("slider", slider_val, "checkbox", checkbox_val)
24
[end of e2e/scripts/st_form_use_container_width_submit_button.py]
[start of e2e/scripts/st_download_button.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 st.download_button(
18 "Download button label",
19 data="Hello world!",
20 file_name="hello.txt",
21 )
22
23 st.download_button(
24 "Download button label",
25 data="Hello world!",
26 file_name="hello.txt",
27 key="disabled_dl_button",
28 disabled=True,
29 )
30
31 st.download_button(
32 "Download RAR archive file",
33 data=b"bytes",
34 file_name="archive.rar",
35 mime="application/vnd.rar",
36 )
37
38 st.download_button(
39 "Download button with use_container_width=True",
40 data="Hello world!",
41 file_name="hello.txt",
42 use_container_width=True,
43 )
44
[end of e2e/scripts/st_download_button.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/e2e/scripts/st_button.py b/e2e/scripts/st_button.py
--- a/e2e/scripts/st_button.py
+++ b/e2e/scripts/st_button.py
@@ -51,7 +51,11 @@
i5 = st.button("button 4", type="primary", disabled=True)
st.write("value 4:", i5)
-st.button("button 5", use_container_width=True)
+st.button("button 5 - containerWidth", use_container_width=True)
+
+st.button(
+ "button 6 - containerWidth + help", use_container_width=True, help="help text"
+)
cols = st.columns(3)
diff --git a/e2e/scripts/st_download_button.py b/e2e/scripts/st_download_button.py
--- a/e2e/scripts/st_download_button.py
+++ b/e2e/scripts/st_download_button.py
@@ -41,3 +41,11 @@
file_name="hello.txt",
use_container_width=True,
)
+
+st.download_button(
+ "Download button with help text and use_container_width=True",
+ data="Hello world!",
+ file_name="hello.txt",
+ use_container_width=True,
+ help="Example help text",
+)
diff --git a/e2e/scripts/st_form_use_container_width_submit_button.py b/e2e/scripts/st_form_use_container_width_submit_button.py
--- a/e2e/scripts/st_form_use_container_width_submit_button.py
+++ b/e2e/scripts/st_form_use_container_width_submit_button.py
@@ -21,3 +21,13 @@
submitted = st.form_submit_button("Submit", use_container_width=True)
if submitted:
st.write("slider", slider_val, "checkbox", checkbox_val)
+
+with st.form("my_form_2"):
+ st.write("Inside the second form")
+ slider_val = st.slider("Form slider 2")
+ checkbox_val = st.checkbox("Form checkbox 2")
+ submitted = st.form_submit_button(
+ "Submit", help="Submit by clicking", use_container_width=True
+ )
+ if submitted:
+ st.write("slider 2:", slider_val, "checkbox 2:", checkbox_val)
| {"golden_diff": "diff --git a/e2e/scripts/st_button.py b/e2e/scripts/st_button.py\n--- a/e2e/scripts/st_button.py\n+++ b/e2e/scripts/st_button.py\n@@ -51,7 +51,11 @@\n i5 = st.button(\"button 4\", type=\"primary\", disabled=True)\n st.write(\"value 4:\", i5)\n \n-st.button(\"button 5\", use_container_width=True)\n+st.button(\"button 5 - containerWidth\", use_container_width=True)\n+\n+st.button(\n+ \"button 6 - containerWidth + help\", use_container_width=True, help=\"help text\"\n+)\n \n cols = st.columns(3)\n \ndiff --git a/e2e/scripts/st_download_button.py b/e2e/scripts/st_download_button.py\n--- a/e2e/scripts/st_download_button.py\n+++ b/e2e/scripts/st_download_button.py\n@@ -41,3 +41,11 @@\n file_name=\"hello.txt\",\n use_container_width=True,\n )\n+\n+st.download_button(\n+ \"Download button with help text and use_container_width=True\",\n+ data=\"Hello world!\",\n+ file_name=\"hello.txt\",\n+ use_container_width=True,\n+ help=\"Example help text\",\n+)\ndiff --git a/e2e/scripts/st_form_use_container_width_submit_button.py b/e2e/scripts/st_form_use_container_width_submit_button.py\n--- a/e2e/scripts/st_form_use_container_width_submit_button.py\n+++ b/e2e/scripts/st_form_use_container_width_submit_button.py\n@@ -21,3 +21,13 @@\n submitted = st.form_submit_button(\"Submit\", use_container_width=True)\n if submitted:\n st.write(\"slider\", slider_val, \"checkbox\", checkbox_val)\n+\n+with st.form(\"my_form_2\"):\n+ st.write(\"Inside the second form\")\n+ slider_val = st.slider(\"Form slider 2\")\n+ checkbox_val = st.checkbox(\"Form checkbox 2\")\n+ submitted = st.form_submit_button(\n+ \"Submit\", help=\"Submit by clicking\", use_container_width=True\n+ )\n+ if submitted:\n+ st.write(\"slider 2:\", slider_val, \"checkbox 2:\", checkbox_val)\n", "issue": "Using help param causes use_container_width to be ignored with st.button for version 1.18\n### Checklist\r\n\r\n- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I have provided sufficient information below to help reproduce this issue.\r\n\r\n### Summary\r\n\r\nUsing both `help` and `use_container_width` parameters with `st.button` with version 1.18 results in `use_container_width` being ignored\r\n\r\n### Reproducible Code Example\r\n\r\n[](https://issues.streamlitapp.com/?issue=gh-6161)\r\n\r\n```Python\r\nimport streamlit as st\r\nc1, c2, c3 = st.columns([1, 1, 1])\r\n\r\nwith c1:\r\n st.button('button 1', use_container_width=True)\r\nwith c2:\r\n st.button('button 2', use_container_width=True)\r\nwith c3:\r\n st.button('button 3', use_container_width=True, help = 'example')\r\nst.button(\"test\", use_container_width=True, help='test')\r\n```\r\n\r\n\r\n### Steps To Reproduce\r\n\r\nRun app that uses `help` and `use_container_width` parameters for `st.button` with version 1.18\r\n\r\n### Expected Behavior\r\n\r\nExpected behavior is that `use_container_width` impacts width of button widget\r\n\r\n### Current Behavior\r\n\r\nCurrent behavior: \r\n<img width=\"631\" alt=\"Screenshot 2023-02-21 at 11 48 14 AM\" src=\"https://user-images.githubusercontent.com/16749069/220443951-e1ee3abc-0210-4a04-85b4-85b07ade9cc9.png\">\r\n\r\n`use_container_width` is ignored\r\n\r\n### Is this a regression?\r\n\r\n- [X] Yes, this used to work in a previous version.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.18.0\r\n- Python version:\r\n- Operating System:\r\n- Browser:\r\n- Virtual environment:\r\n\r\n\r\n### 
Additional Information\r\n\r\n_No response_\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [ ] Yes, I am willing to submit a PR!\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\nfrom streamlit import runtime\n\n# st.session_state can only be used in streamlit\nif runtime.exists():\n\n def on_click(x, y):\n if \"click_count\" not in st.session_state:\n st.session_state.click_count = 0\n\n st.session_state.click_count += 1\n st.session_state.x = x\n st.session_state.y = y\n\n i1 = st.button(\n \"button 1\", key=\"button\", on_click=on_click, args=(1,), kwargs={\"y\": 2}\n )\n st.write(\"value:\", i1)\n st.write(\"value from state:\", st.session_state[\"button\"])\n\n button_was_clicked = \"click_count\" in st.session_state\n st.write(\"Button was clicked:\", button_was_clicked)\n\n if button_was_clicked:\n st.write(\"times clicked:\", st.session_state.click_count)\n st.write(\"arg value:\", st.session_state.x)\n st.write(\"kwarg value:\", st.session_state.y)\n\ni2 = st.checkbox(\"reset button return value\")\n\ni3 = st.button(\"button 2\", disabled=True)\nst.write(\"value 2:\", i3)\n\ni4 = st.button(\"button 3\", type=\"primary\")\nst.write(\"value 3:\", i4)\n\ni5 = st.button(\"button 4\", type=\"primary\", disabled=True)\nst.write(\"value 4:\", i5)\n\nst.button(\"button 5\", use_container_width=True)\n\ncols = st.columns(3)\n\n# Order of conn_types matters to preserve the order in st_button.spec.js and the snapshot\nconn_types = [\n \"snowflake\",\n \"bigquery\",\n \"huggingface\",\n \"aws_s3\",\n \"http_file\",\n \"postgresql\",\n \"gsheets\",\n \"custom\",\n]\nfor i in range(len(conn_types)):\n cols[i % 3].button(conn_types[i], use_container_width=True)\n", "path": "e2e/scripts/st_button.py"}, {"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nwith st.form(\"my_form\"):\n st.write(\"Inside the form\")\n slider_val = st.slider(\"Form slider\")\n checkbox_val = st.checkbox(\"Form checkbox\")\n submitted = st.form_submit_button(\"Submit\", use_container_width=True)\n if submitted:\n st.write(\"slider\", slider_val, \"checkbox\", checkbox_val)\n", "path": "e2e/scripts/st_form_use_container_width_submit_button.py"}, {"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.download_button(\n \"Download button label\",\n data=\"Hello world!\",\n file_name=\"hello.txt\",\n)\n\nst.download_button(\n \"Download button label\",\n data=\"Hello world!\",\n file_name=\"hello.txt\",\n key=\"disabled_dl_button\",\n disabled=True,\n)\n\nst.download_button(\n \"Download RAR archive file\",\n data=b\"bytes\",\n file_name=\"archive.rar\",\n mime=\"application/vnd.rar\",\n)\n\nst.download_button(\n \"Download button with use_container_width=True\",\n data=\"Hello world!\",\n file_name=\"hello.txt\",\n use_container_width=True,\n)\n", "path": "e2e/scripts/st_download_button.py"}]} | 2,441 | 477 |
gh_patches_debug_17234 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1164 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TabularForecastingData must include predict_data_frame
## 📚 Documentation
The [NBEATS tutorial](https://lightning-flash.readthedocs.io/en/stable/notebooks/flash_tutorials/electricity_forecasting.html) uses `0.5.x`, which references `model.predict()` at the end to generate predictions. In `0.6.0`, `model.predict()` was deprecated in favor of `trainer.predict(model, datamodule=datamodule)`.
If you try to do this by passing the `datamodule` created via:
```python
datamodule = TabularForecastingData.from_data_frame(
time_idx="time_idx",
target="price actual",
group_ids=["constant"],
max_encoder_length=max_encoder_length,
max_prediction_length=max_prediction_length,
time_varying_unknown_reals=["price actual"],
train_data_frame=df_energy_daily[df_energy_daily["time_idx"] <= training_cutoff],
val_data_frame=df_energy_daily,
batch_size=256,
)
```
to `trainer.predict(...)`, you'll get the following error:
```python
MisconfigurationException: No `predict_dataloader()` method defined to run `Trainer.predict`.
```
The solution is to be found [here](https://lightning-flash.readthedocs.io/en/stable/reference/tabular_forecasting.html), which clearly shows how to make the prediction datamodule:
```python
# 4. Generate predictions
datamodule = TabularForecastingData.from_data_frame(predict_data_frame=data, parameters=datamodule.parameters)
predictions = trainer.predict(model, datamodule=datamodule)
```
__Suggestion:__
* Update the tutorial to use `0.6.0`
* Add a small section to the [API docs](https://lightning-flash.readthedocs.io/en/stable/api/generated/flash.tabular.forecasting.data.TabularForecastingData.html#flash.tabular.forecasting.data.TabularForecastingData.from_data_frame) explaining that `predict_data_frame` must be specified in order to make use of `trainer.predict(...)`
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import glob
14 import os
15 import shutil
16 import sys
17 import warnings
18 from importlib.util import module_from_spec, spec_from_file_location
19
20 import pt_lightning_sphinx_theme
21
22 _PATH_HERE = os.path.abspath(os.path.dirname(__file__))
23 _PATH_ROOT = os.path.join(_PATH_HERE, "..", "..")
24 _PATH_RAW_NB = os.path.join(_PATH_ROOT, "_notebooks")
25 sys.path.insert(0, os.path.abspath(_PATH_ROOT))
26 sys.path.append(os.path.join(_PATH_RAW_NB, ".actions"))
27
28 _SHOULD_COPY_NOTEBOOKS = True
29
30 try:
31 from helpers import HelperCLI
32 except Exception:
33 _SHOULD_COPY_NOTEBOOKS = False
34 warnings.warn("To build the code, please run: `git submodule update --init --recursive`", stacklevel=2)
35
36
37 def _load_py_module(fname, pkg="flash"):
38 spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname))
39 py = module_from_spec(spec)
40 spec.loader.exec_module(py)
41 return py
42
43
44 try:
45 from flash import __about__ as about
46 from flash.core.utilities import providers
47
48 except ModuleNotFoundError:
49
50 about = _load_py_module("__about__.py")
51 providers = _load_py_module("core/utilities/providers.py")
52
53 SPHINX_MOCK_REQUIREMENTS = int(os.environ.get("SPHINX_MOCK_REQUIREMENTS", True))
54
55 html_favicon = "_static/images/icon.svg"
56
57 # -- Project information -----------------------------------------------------
58
59 project = "Flash"
60 copyright = "2020-2021, PyTorch Lightning"
61 author = "PyTorch Lightning"
62
63 # -- Project documents -------------------------------------------------------
64 if _SHOULD_COPY_NOTEBOOKS:
65 HelperCLI.copy_notebooks(_PATH_RAW_NB, _PATH_HERE, "notebooks", patterns=["flash_tutorials"])
66
67
68 def _transform_changelog(path_in: str, path_out: str) -> None:
69 with open(path_in) as fp:
70 chlog_lines = fp.readlines()
71 # enrich short subsub-titles to be unique
72 chlog_ver = ""
73 for i, ln in enumerate(chlog_lines):
74 if ln.startswith("## "):
75 chlog_ver = ln[2:].split("-")[0].strip()
76 elif ln.startswith("### "):
77 ln = ln.replace("###", f"### {chlog_ver} -")
78 chlog_lines[i] = ln
79 with open(path_out, "w") as fp:
80 fp.writelines(chlog_lines)
81
82
83 generated_dir = os.path.join(_PATH_HERE, "generated")
84
85 os.makedirs(generated_dir, exist_ok=True)
86 # copy all documents from GH templates like contribution guide
87 for md in glob.glob(os.path.join(_PATH_ROOT, ".github", "*.md")):
88 shutil.copy(md, os.path.join(generated_dir, os.path.basename(md)))
89 # copy also the changelog
90 _transform_changelog(os.path.join(_PATH_ROOT, "CHANGELOG.md"), os.path.join(generated_dir, "CHANGELOG.md"))
91
92 # -- Generate providers ------------------------------------------------------
93
94 lines = []
95 for provider in providers.PROVIDERS:
96 lines.append(f"- {str(provider)}\n")
97
98 generated_dir = os.path.join("integrations", "generated")
99 os.makedirs(generated_dir, exist_ok=True)
100
101 with open(os.path.join(generated_dir, "providers.rst"), "w") as f:
102 f.writelines(sorted(lines, key=str.casefold))
103
104 # -- General configuration ---------------------------------------------------
105
106 # Add any Sphinx extension module names here, as strings. They can be
107 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
108 # ones.
109 extensions = [
110 "sphinx.ext.autodoc",
111 "sphinx.ext.doctest",
112 "sphinx.ext.intersphinx",
113 "sphinx.ext.todo",
114 "sphinx.ext.viewcode",
115 "sphinx.ext.autosummary",
116 "sphinx.ext.napoleon",
117 "sphinx.ext.imgmath",
118 "myst_parser",
119 "nbsphinx",
120 "sphinx_autodoc_typehints",
121 "sphinx_copybutton",
122 "sphinx_paramlinks",
123 "sphinx_togglebutton",
124 "pt_lightning_sphinx_theme.extensions.lightning_tutorials",
125 ]
126
127 # autodoc: Default to members and undoc-members
128 autodoc_default_options = {"members": True}
129
130 # autodoc: Don't inherit docstrings (e.g. for nn.Module.forward)
131 autodoc_inherit_docstrings = False
132
133 # Add any paths that contain templates here, relative to this directory.
134 templates_path = ["_templates"]
135
136 # https://berkeley-stat159-f17.github.io/stat159-f17/lectures/14-sphinx..html#conf.py-(cont.)
137 # https://stackoverflow.com/questions/38526888/embed-ipython-notebook-in-sphinx-document
138 # I execute the notebooks manually in advance. If notebooks test the code,
139 # they should be run at build time.
140 nbsphinx_execute = "never"
141 nbsphinx_allow_errors = True
142 nbsphinx_requirejs_path = ""
143
144 # List of patterns, relative to source directory, that match files and
145 # directories to ignore when looking for source files.
146 # This pattern also affects html_static_path and html_extra_path.
147 exclude_patterns = ["generated/PULL_REQUEST_TEMPLATE.md"]
148
149 # myst-parser, forcing to parse all html pages with mathjax
150 # https://github.com/executablebooks/MyST-Parser/issues/394
151 myst_update_mathjax = False
152
153 # The suffix(es) of source filenames.
154 # You can specify multiple suffix as a list of string:
155 #
156 source_parsers = {".rst": "restructuredtext", ".txt": "markdown", ".md": "markdown", ".ipynb": "nbsphinx"}
157
158 # The master toctree document.
159 master_doc = "index"
160
161 needs_sphinx = "4.0"
162
163 # -- Options for intersphinx extension ---------------------------------------
164
165 # Example configuration for intersphinx: refer to the Python standard library.
166 intersphinx_mapping = {
167 "python": ("https://docs.python.org/3", None),
168 "torch": ("https://pytorch.org/docs/stable/", None),
169 "numpy": ("https://numpy.org/doc/stable/", None),
170 "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
171 "pytorchvideo": ("https://pytorchvideo.readthedocs.io/en/latest/", None),
172 "pytorch_lightning": ("https://pytorch-lightning.readthedocs.io/en/stable/", None),
173 "fiftyone": ("https://voxel51.com/docs/fiftyone/", "fiftyone_objects.inv"),
174 }
175
176 # -- Options for HTML output -------------------------------------------------
177
178 # The theme to use for HTML and HTML Help pages. See the documentation for
179 # a list of builtin themes.
180 #
181 html_theme = "pt_lightning_sphinx_theme"
182 html_theme_path = [pt_lightning_sphinx_theme.get_html_theme_path()]
183
184 # Theme options are theme-specific and customize the look and feel of a theme
185 # further. For a list of options available for each theme, see the
186 # documentation.
187
188 html_theme_options = {
189 "pytorch_project": "https://pytorchlightning.ai",
190 "canonical_url": about.__docs_url__,
191 "collapse_navigation": False,
192 "display_version": True,
193 "logo_only": False,
194 }
195
196 # Add any paths that contain custom static files (such as style sheets) here,
197 # relative to this directory. They are copied after the builtin static files,
198 # so a file named "default.css" will overwrite the builtin "default.css".
199 html_static_path = ["_static"]
200
201 html_css_files = []
202
203
204 def setup(app):
205 # this is for hiding doctest decoration,
206 # see: http://z4r.github.io/python/2011/12/02/hides-the-prompts-and-output/
207 app.add_js_file("copybutton.js")
208 app.add_css_file("main.css")
209
210
211 # Ignoring Third-party packages
212 # https://stackoverflow.com/questions/15889621/sphinx-how-to-exclude-imports-in-automodule
213 def _package_list_from_file(pfile):
214 assert os.path.isfile(pfile)
215 with open(pfile) as fp:
216 lines = fp.readlines()
217 list_pkgs = []
218 for ln in lines:
219 found = [ln.index(ch) for ch in list(",=<>#@") if ch in ln]
220 pkg = ln[: min(found)] if found else ln
221 if pkg.strip():
222 list_pkgs.append(pkg.strip())
223 return list_pkgs
224
225
226 # define mapping from PyPI names to python imports
227 PACKAGE_MAPPING = {
228 "pytorch-lightning": "pytorch_lightning",
229 "scikit-learn": "sklearn",
230 "Pillow": "PIL",
231 "PyYAML": "yaml",
232 "rouge-score": "rouge_score",
233 "lightning-bolts": "pl_bolts",
234 "pytorch-tabnet": "pytorch_tabnet",
235 "pyDeprecate": "deprecate",
236 }
237 MOCK_PACKAGES = ["numpy", "PyYAML", "tqdm"]
238 if SPHINX_MOCK_REQUIREMENTS:
239 # mock also base packages when we are on RTD since we don't install them there
240 MOCK_PACKAGES += _package_list_from_file(os.path.join(_PATH_ROOT, "requirements.txt"))
241 # replace PyPI packages by importing ones
242 MOCK_PACKAGES = [PACKAGE_MAPPING.get(pkg, pkg) for pkg in MOCK_PACKAGES]
243
244 autodoc_mock_imports = MOCK_PACKAGES
245
246 # only run doctests marked with a ".. doctest::" directive
247 doctest_test_doctest_blocks = ""
248 doctest_global_setup = """
249 import torch
250 import pytorch_lightning as pl
251 import flash
252 """
253
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -28,7 +28,7 @@
_SHOULD_COPY_NOTEBOOKS = True
try:
- from helpers import HelperCLI
+ from assistant import AssistantCLI
except Exception:
_SHOULD_COPY_NOTEBOOKS = False
warnings.warn("To build the code, please run: `git submodule update --init --recursive`", stacklevel=2)
@@ -62,7 +62,7 @@
# -- Project documents -------------------------------------------------------
if _SHOULD_COPY_NOTEBOOKS:
- HelperCLI.copy_notebooks(_PATH_RAW_NB, _PATH_HERE, "notebooks", patterns=["flash_tutorials"])
+ AssistantCLI.copy_notebooks(_PATH_RAW_NB, _PATH_HERE, "notebooks", patterns=["flash_tutorials"])
def _transform_changelog(path_in: str, path_out: str) -> None:
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -28,7 +28,7 @@\n _SHOULD_COPY_NOTEBOOKS = True\n \n try:\n- from helpers import HelperCLI\n+ from assistant import AssistantCLI\n except Exception:\n _SHOULD_COPY_NOTEBOOKS = False\n warnings.warn(\"To build the code, please run: `git submodule update --init --recursive`\", stacklevel=2)\n@@ -62,7 +62,7 @@\n \n # -- Project documents -------------------------------------------------------\n if _SHOULD_COPY_NOTEBOOKS:\n- HelperCLI.copy_notebooks(_PATH_RAW_NB, _PATH_HERE, \"notebooks\", patterns=[\"flash_tutorials\"])\n+ AssistantCLI.copy_notebooks(_PATH_RAW_NB, _PATH_HERE, \"notebooks\", patterns=[\"flash_tutorials\"])\n \n \n def _transform_changelog(path_in: str, path_out: str) -> None:\n", "issue": "TabularForecastingData must include predict_data_frame\n## \ud83d\udcda Documentation\r\n\r\nThe [NBEATS tutorial](https://lightning-flash.readthedocs.io/en/stable/notebooks/flash_tutorials/electricity_forecasting.html) uses `0.5.x`, which references `model.predict()` at the end to generate predictions. In `0.6.0`, `model.predict()` was deprecated in favor of `trainer.predict(model, datamodule=datamodule)`.\r\n\r\nIf you try to do this by passing the `datamodule` created via:\r\n\r\n```python\r\ndatamodule = TabularForecastingData.from_data_frame(\r\n time_idx=\"time_idx\",\r\n target=\"price actual\",\r\n group_ids=[\"constant\"],\r\n max_encoder_length=max_encoder_length,\r\n max_prediction_length=max_prediction_length,\r\n time_varying_unknown_reals=[\"price actual\"],\r\n train_data_frame=df_energy_daily[df_energy_daily[\"time_idx\"] <= training_cutoff],\r\n val_data_frame=df_energy_daily,\r\n batch_size=256,\r\n)\r\n```\r\n\r\nto `trainer.predict(...)`, you'll get the following error:\r\n\r\n```python\r\nMisconfigurationException: No `predict_dataloader()` method defined to run `Trainer.predict`.\r\n```\r\n\r\nThe solution is to be found [here](https://lightning-flash.readthedocs.io/en/stable/reference/tabular_forecasting.html), which clearly shows how to make the prediction datamodule:\r\n\r\n```python\r\n# 4. Generate predictions\r\ndatamodule = TabularForecastingData.from_data_frame(predict_data_frame=data, parameters=datamodule.parameters)\r\npredictions = trainer.predict(model, datamodule=datamodule)\r\n```\r\n\r\n__Suggestion:__\r\n* Update the tutorial to use `0.6.0`\r\n* Add a small section to the [API docs](https://lightning-flash.readthedocs.io/en/stable/api/generated/flash.tabular.forecasting.data.TabularForecastingData.html#flash.tabular.forecasting.data.TabularForecastingData.from_data_frame) explaining that `predict_data_frame` must be specified in order to make use of `trainer.predict(...)` \n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport glob\nimport os\nimport shutil\nimport sys\nimport warnings\nfrom importlib.util import module_from_spec, spec_from_file_location\n\nimport pt_lightning_sphinx_theme\n\n_PATH_HERE = os.path.abspath(os.path.dirname(__file__))\n_PATH_ROOT = os.path.join(_PATH_HERE, \"..\", \"..\")\n_PATH_RAW_NB = os.path.join(_PATH_ROOT, \"_notebooks\")\nsys.path.insert(0, os.path.abspath(_PATH_ROOT))\nsys.path.append(os.path.join(_PATH_RAW_NB, \".actions\"))\n\n_SHOULD_COPY_NOTEBOOKS = True\n\ntry:\n from helpers import HelperCLI\nexcept Exception:\n _SHOULD_COPY_NOTEBOOKS = False\n warnings.warn(\"To build the code, please run: `git submodule update --init --recursive`\", stacklevel=2)\n\n\ndef _load_py_module(fname, pkg=\"flash\"):\n spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname))\n py = module_from_spec(spec)\n spec.loader.exec_module(py)\n return py\n\n\ntry:\n from flash import __about__ as about\n from flash.core.utilities import providers\n\nexcept ModuleNotFoundError:\n\n about = _load_py_module(\"__about__.py\")\n providers = _load_py_module(\"core/utilities/providers.py\")\n\nSPHINX_MOCK_REQUIREMENTS = int(os.environ.get(\"SPHINX_MOCK_REQUIREMENTS\", True))\n\nhtml_favicon = \"_static/images/icon.svg\"\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Flash\"\ncopyright = \"2020-2021, PyTorch Lightning\"\nauthor = \"PyTorch Lightning\"\n\n# -- Project documents -------------------------------------------------------\nif _SHOULD_COPY_NOTEBOOKS:\n HelperCLI.copy_notebooks(_PATH_RAW_NB, _PATH_HERE, \"notebooks\", patterns=[\"flash_tutorials\"])\n\n\ndef _transform_changelog(path_in: str, path_out: str) -> None:\n with open(path_in) as fp:\n chlog_lines = fp.readlines()\n # enrich short subsub-titles to be unique\n chlog_ver = \"\"\n for i, ln in enumerate(chlog_lines):\n if ln.startswith(\"## \"):\n chlog_ver = ln[2:].split(\"-\")[0].strip()\n elif ln.startswith(\"### \"):\n ln = ln.replace(\"###\", f\"### {chlog_ver} -\")\n chlog_lines[i] = ln\n with open(path_out, \"w\") as fp:\n fp.writelines(chlog_lines)\n\n\ngenerated_dir = os.path.join(_PATH_HERE, \"generated\")\n\nos.makedirs(generated_dir, exist_ok=True)\n# copy all documents from GH templates like contribution guide\nfor md in glob.glob(os.path.join(_PATH_ROOT, \".github\", \"*.md\")):\n shutil.copy(md, os.path.join(generated_dir, os.path.basename(md)))\n# copy also the changelog\n_transform_changelog(os.path.join(_PATH_ROOT, \"CHANGELOG.md\"), os.path.join(generated_dir, \"CHANGELOG.md\"))\n\n# -- Generate providers ------------------------------------------------------\n\nlines = []\nfor provider in providers.PROVIDERS:\n lines.append(f\"- {str(provider)}\\n\")\n\ngenerated_dir = os.path.join(\"integrations\", \"generated\")\nos.makedirs(generated_dir, exist_ok=True)\n\nwith open(os.path.join(generated_dir, \"providers.rst\"), \"w\") as f:\n f.writelines(sorted(lines, key=str.casefold))\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.imgmath\",\n \"myst_parser\",\n \"nbsphinx\",\n \"sphinx_autodoc_typehints\",\n \"sphinx_copybutton\",\n \"sphinx_paramlinks\",\n \"sphinx_togglebutton\",\n \"pt_lightning_sphinx_theme.extensions.lightning_tutorials\",\n]\n\n# autodoc: Default to members and undoc-members\nautodoc_default_options = {\"members\": True}\n\n# autodoc: Don't inherit docstrings (e.g. for nn.Module.forward)\nautodoc_inherit_docstrings = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# https://berkeley-stat159-f17.github.io/stat159-f17/lectures/14-sphinx..html#conf.py-(cont.)\n# https://stackoverflow.com/questions/38526888/embed-ipython-notebook-in-sphinx-document\n# I execute the notebooks manually in advance. If notebooks test the code,\n# they should be run at build time.\nnbsphinx_execute = \"never\"\nnbsphinx_allow_errors = True\nnbsphinx_requirejs_path = \"\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"generated/PULL_REQUEST_TEMPLATE.md\"]\n\n# myst-parser, forcing to parse all html pages with mathjax\n# https://github.com/executablebooks/MyST-Parser/issues/394\nmyst_update_mathjax = False\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_parsers = {\".rst\": \"restructuredtext\", \".txt\": \"markdown\", \".md\": \"markdown\", \".ipynb\": \"nbsphinx\"}\n\n# The master toctree document.\nmaster_doc = \"index\"\n\nneeds_sphinx = \"4.0\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"PIL\": (\"https://pillow.readthedocs.io/en/stable/\", None),\n \"pytorchvideo\": (\"https://pytorchvideo.readthedocs.io/en/latest/\", None),\n \"pytorch_lightning\": (\"https://pytorch-lightning.readthedocs.io/en/stable/\", None),\n \"fiftyone\": (\"https://voxel51.com/docs/fiftyone/\", \"fiftyone_objects.inv\"),\n}\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pt_lightning_sphinx_theme\"\nhtml_theme_path = [pt_lightning_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"pytorch_project\": \"https://pytorchlightning.ai\",\n \"canonical_url\": about.__docs_url__,\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": False,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_css_files = []\n\n\ndef setup(app):\n # this is for hiding doctest decoration,\n # see: http://z4r.github.io/python/2011/12/02/hides-the-prompts-and-output/\n app.add_js_file(\"copybutton.js\")\n app.add_css_file(\"main.css\")\n\n\n# Ignoring Third-party packages\n# https://stackoverflow.com/questions/15889621/sphinx-how-to-exclude-imports-in-automodule\ndef _package_list_from_file(pfile):\n assert os.path.isfile(pfile)\n with open(pfile) as fp:\n lines = fp.readlines()\n list_pkgs = []\n for ln in lines:\n found = [ln.index(ch) for ch in list(\",=<>#@\") if ch in ln]\n pkg = ln[: min(found)] if found else ln\n if pkg.strip():\n list_pkgs.append(pkg.strip())\n return list_pkgs\n\n\n# define mapping from PyPI names to python imports\nPACKAGE_MAPPING = {\n \"pytorch-lightning\": \"pytorch_lightning\",\n \"scikit-learn\": \"sklearn\",\n \"Pillow\": \"PIL\",\n \"PyYAML\": \"yaml\",\n \"rouge-score\": \"rouge_score\",\n \"lightning-bolts\": \"pl_bolts\",\n \"pytorch-tabnet\": \"pytorch_tabnet\",\n \"pyDeprecate\": \"deprecate\",\n}\nMOCK_PACKAGES = [\"numpy\", \"PyYAML\", \"tqdm\"]\nif SPHINX_MOCK_REQUIREMENTS:\n # mock also base packages when we are on RTD since we don't install them there\n MOCK_PACKAGES += _package_list_from_file(os.path.join(_PATH_ROOT, \"requirements.txt\"))\n# replace PyPI packages by importing ones\nMOCK_PACKAGES = [PACKAGE_MAPPING.get(pkg, pkg) for pkg in MOCK_PACKAGES]\n\nautodoc_mock_imports = MOCK_PACKAGES\n\n# only run doctests marked with a \".. doctest::\" directive\ndoctest_test_doctest_blocks = \"\"\ndoctest_global_setup = \"\"\"\nimport torch\nimport pytorch_lightning as pl\nimport flash\n\"\"\"\n", "path": "docs/source/conf.py"}]} | 3,860 | 208 |
gh_patches_debug_33159 | rasdani/github-patches | git_diff | kubeflow__pipelines-6054 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[v2compat] re-evaluate execution custom properties schema
* [ ] rename task execution `task_name` to `display_name`?
</issue>
<code>
[start of sdk/python/kfp/compiler/_default_transformers.py]
1 # Copyright 2019 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import warnings
16 from kubernetes import client as k8s_client
17 from typing import Callable, Dict, Optional, Text
18 from kfp.dsl._container_op import BaseOp, ContainerOp
19
20 def add_pod_env(op: BaseOp) -> BaseOp:
21 """Adds environment info if the Pod has the label `add-pod-env = true`.
22 """
23 if isinstance(
24 op, ContainerOp
25 ) and op.pod_labels and 'add-pod-env' in op.pod_labels and op.pod_labels[
26 'add-pod-env'] == 'true':
27 return add_kfp_pod_env(op)
28
29
30 def add_kfp_pod_env(op: BaseOp) -> BaseOp:
31 """Adds KFP pod environment info to the specified ContainerOp.
32 """
33 if not isinstance(op, ContainerOp):
34 warnings.warn(
35 'Trying to add default KFP environment variables to an Op that is '
36 'not a ContainerOp. Ignoring request.')
37 return op
38
39 op.container.add_env_variable(
40 k8s_client.V1EnvVar(name='KFP_POD_NAME',
41 value_from=k8s_client.V1EnvVarSource(
42 field_ref=k8s_client.V1ObjectFieldSelector(
43 field_path='metadata.name')))
44 ).add_env_variable(
45 k8s_client.V1EnvVar(name='KFP_NAMESPACE',
46 value_from=k8s_client.V1EnvVarSource(
47 field_ref=k8s_client.V1ObjectFieldSelector(
48 field_path='metadata.namespace')))
49 ).add_env_variable(
50 k8s_client.V1EnvVar(
51 name='WORKFLOW_ID',
52 value_from=k8s_client.
53 V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(
54 field_path="metadata.labels['workflows.argoproj.io/workflow']")))
55 ).add_env_variable(
56 k8s_client.V1EnvVar(
57 name='ENABLE_CACHING',
58 value_from=k8s_client.
59 V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(
60 field_path="metadata.labels['pipelines.kubeflow.org/enable_caching']")))
61 )
62 return op
63
64
65 def add_pod_labels(labels: Optional[Dict] = None) -> Callable:
66 """Adds provided pod labels to each pod."""
67
68 def _add_pod_labels(task):
69 for k, v in labels.items():
70 # Only append but not update.
71 # This is needed to bypass TFX pipelines/components.
72 if k not in task.pod_labels:
73 task.add_pod_label(k, v)
74 return task
75
76 return _add_pod_labels
77
[end of sdk/python/kfp/compiler/_default_transformers.py]
[start of sdk/python/kfp/compiler/v2_compat.py]
1 # Copyright 2021 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Utility functions for enabling v2-compatible pipelines in v1."""
15 import collections
16 import json
17 from typing import Optional
18
19 from kfp import dsl
20 from kfp.compiler import _default_transformers
21 from kfp.pipeline_spec import pipeline_spec_pb2
22 from kfp.v2 import compiler
23
24 from kubernetes import client as k8s_client
25
26 _DEFAULT_LAUNCHER_IMAGE = "gcr.io/ml-pipeline/kfp-launcher:1.6.4"
27
28
29 def update_op(op: dsl.ContainerOp,
30 pipeline_name: dsl.PipelineParam,
31 pipeline_root: dsl.PipelineParam,
32 launcher_image: Optional[str] = None) -> None:
33 """Updates the passed in Op for running in v2-compatible mode.
34
35 Args:
36 op: The Op to update.
37 pipeline_spec: The PipelineSpec for the pipeline under which `op`
38 runs.
39 pipeline_root: The root output directory for pipeline artifacts.
40 launcher_image: An optional launcher image. Useful for tests.
41 """
42 op.is_v2 = True
43 # Inject the launcher binary and overwrite the entrypoint.
44 image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE
45 launcher_container = dsl.UserContainer(name="kfp-launcher",
46 image=image_name,
47 command="/bin/mount_launcher.sh",
48 mirror_volume_mounts=True)
49
50 op.add_init_container(launcher_container)
51 op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))
52 op.add_volume_mount(
53 k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))
54
55 # op.command + op.args will have the following sections:
56 # 1. args passed to kfp-launcher
57 # 2. a separator "--"
58 # 3. parameters in format "key1=value1", "key2=value2", ...
59 # 4. a separator "--" as end of arguments passed to launcher
60 # 5. (start of op.args) arguments of the original user program command + args
61 #
62 # example:
63 # - command:
64 # - /kfp-launcher/launch
65 # - '--mlmd_server_address'
66 # - $(METADATA_GRPC_SERVICE_HOST)
67 # - '--mlmd_server_port'
68 # - $(METADATA_GRPC_SERVICE_PORT)
69 # - ... # more launcher params
70 # - '--pipeline_task_id'
71 # - $(KFP_POD_NAME)
72 # - '--pipeline_root'
73 # - ''
74 # - '--' # start of parameter values
75 # - first=first
76 # - second=second
77 # - '--' # start of user command and args
78 # args:
79 # - sh
80 # - '-ec'
81 # - |
82 # program_path=$(mktemp)
83 # printf "%s" "$0" > "$program_path"
84 # python3 -u "$program_path" "$@"
85 # - >
86 # import json
87 # import xxx
88 # ...
89 op.command = [
90 "/kfp-launcher/launch",
91 "--mlmd_server_address",
92 "$(METADATA_GRPC_SERVICE_HOST)",
93 "--mlmd_server_port",
94 "$(METADATA_GRPC_SERVICE_PORT)",
95 "--runtime_info_json",
96 "$(KFP_V2_RUNTIME_INFO)",
97 "--container_image",
98 "$(KFP_V2_IMAGE)",
99 "--task_name",
100 op.name,
101 "--pipeline_name",
102 pipeline_name,
103 "--pipeline_run_id",
104 "$(WORKFLOW_ID)",
105 "--pipeline_task_id",
106 "$(KFP_POD_NAME)",
107 "--pipeline_root",
108 pipeline_root,
109 "--enable_caching",
110 "$(ENABLE_CACHING)",
111 ]
112
113 # Mount necessary environment variables.
114 op.apply(_default_transformers.add_kfp_pod_env)
115 op.container.add_env_variable(
116 k8s_client.V1EnvVar(name="KFP_V2_IMAGE", value=op.container.image))
117
118 config_map_ref = k8s_client.V1ConfigMapEnvSource(
119 name='metadata-grpc-configmap', optional=True)
120 op.container.add_env_from(
121 k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))
122
123 op.arguments = list(op.container_spec.command) + list(op.container_spec.args)
124
125 runtime_info = {
126 "inputParameters": collections.OrderedDict(),
127 "inputArtifacts": collections.OrderedDict(),
128 "outputParameters": collections.OrderedDict(),
129 "outputArtifacts": collections.OrderedDict(),
130 }
131
132 op.command += ["--"]
133 component_spec = op.component_spec
134 for parameter, spec in sorted(
135 component_spec.input_definitions.parameters.items()):
136 parameter_info = {
137 "type":
138 pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),
139 }
140 op.command += [f"{parameter}={op._parameter_arguments[parameter]}"]
141 runtime_info["inputParameters"][parameter] = parameter_info
142 op.command += ["--"]
143
144 for artifact_name, spec in sorted(
145 component_spec.input_definitions.artifacts.items()):
146 artifact_info = {
147 "metadataPath": op.input_artifact_paths[artifact_name],
148 "schemaTitle": spec.artifact_type.schema_title,
149 "instanceSchema": spec.artifact_type.instance_schema,
150 }
151 runtime_info["inputArtifacts"][artifact_name] = artifact_info
152
153 for parameter, spec in sorted(
154 component_spec.output_definitions.parameters.items()):
155 parameter_info = {
156 "type":
157 pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),
158 "path":
159 op.file_outputs[parameter],
160 }
161 runtime_info["outputParameters"][parameter] = parameter_info
162
163 for artifact_name, spec in sorted(
164 component_spec.output_definitions.artifacts.items()):
165 # TODO: Assert instance_schema.
166 artifact_info = {
167 # Type used to register output artifacts.
168 "schemaTitle": spec.artifact_type.schema_title,
169 "instanceSchema": spec.artifact_type.instance_schema,
170 # File used to write out the registered artifact ID.
171 "metadataPath": op.file_outputs[artifact_name],
172 }
173 runtime_info["outputArtifacts"][artifact_name] = artifact_info
174
175 op.container.add_env_variable(
176 k8s_client.V1EnvVar(name="KFP_V2_RUNTIME_INFO",
177 value=json.dumps(runtime_info)))
178
179 op.pod_annotations['pipelines.kubeflow.org/v2_component'] = "true"
180 op.pod_labels['pipelines.kubeflow.org/v2_component']= "true"
181
[end of sdk/python/kfp/compiler/v2_compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/compiler/_default_transformers.py b/sdk/python/kfp/compiler/_default_transformers.py
--- a/sdk/python/kfp/compiler/_default_transformers.py
+++ b/sdk/python/kfp/compiler/_default_transformers.py
@@ -41,6 +41,11 @@
value_from=k8s_client.V1EnvVarSource(
field_ref=k8s_client.V1ObjectFieldSelector(
field_path='metadata.name')))
+ ).add_env_variable(
+ k8s_client.V1EnvVar(name='KFP_POD_UID',
+ value_from=k8s_client.V1EnvVarSource(
+ field_ref=k8s_client.V1ObjectFieldSelector(
+ field_path='metadata.uid')))
).add_env_variable(
k8s_client.V1EnvVar(name='KFP_NAMESPACE',
value_from=k8s_client.V1EnvVarSource(
@@ -52,6 +57,12 @@
value_from=k8s_client.
V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(
field_path="metadata.labels['workflows.argoproj.io/workflow']")))
+ ).add_env_variable(
+ k8s_client.V1EnvVar(
+ name='KFP_RUN_ID',
+ value_from=k8s_client.
+ V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(
+ field_path="metadata.labels['pipeline/runid']")))
).add_env_variable(
k8s_client.V1EnvVar(
name='ENABLE_CACHING',
diff --git a/sdk/python/kfp/compiler/v2_compat.py b/sdk/python/kfp/compiler/v2_compat.py
--- a/sdk/python/kfp/compiler/v2_compat.py
+++ b/sdk/python/kfp/compiler/v2_compat.py
@@ -100,10 +100,16 @@
op.name,
"--pipeline_name",
pipeline_name,
- "--pipeline_run_id",
- "$(WORKFLOW_ID)",
- "--pipeline_task_id",
+ "--run_id",
+ "$(KFP_RUN_ID)",
+ "--run_resource",
+ "workflows.argoproj.io/$(WORKFLOW_ID)",
+ "--namespace",
+ "$(KFP_NAMESPACE)",
+ "--pod_name",
"$(KFP_POD_NAME)",
+ "--pod_uid",
+ "$(KFP_POD_UID)",
"--pipeline_root",
pipeline_root,
"--enable_caching",
| {"golden_diff": "diff --git a/sdk/python/kfp/compiler/_default_transformers.py b/sdk/python/kfp/compiler/_default_transformers.py\n--- a/sdk/python/kfp/compiler/_default_transformers.py\n+++ b/sdk/python/kfp/compiler/_default_transformers.py\n@@ -41,6 +41,11 @@\n value_from=k8s_client.V1EnvVarSource(\n field_ref=k8s_client.V1ObjectFieldSelector(\n field_path='metadata.name')))\n+ ).add_env_variable(\n+ k8s_client.V1EnvVar(name='KFP_POD_UID',\n+ value_from=k8s_client.V1EnvVarSource(\n+ field_ref=k8s_client.V1ObjectFieldSelector(\n+ field_path='metadata.uid')))\n ).add_env_variable(\n k8s_client.V1EnvVar(name='KFP_NAMESPACE',\n value_from=k8s_client.V1EnvVarSource(\n@@ -52,6 +57,12 @@\n value_from=k8s_client.\n V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(\n field_path=\"metadata.labels['workflows.argoproj.io/workflow']\")))\n+ ).add_env_variable(\n+ k8s_client.V1EnvVar(\n+ name='KFP_RUN_ID',\n+ value_from=k8s_client.\n+ V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(\n+ field_path=\"metadata.labels['pipeline/runid']\")))\n ).add_env_variable(\n k8s_client.V1EnvVar(\n name='ENABLE_CACHING',\ndiff --git a/sdk/python/kfp/compiler/v2_compat.py b/sdk/python/kfp/compiler/v2_compat.py\n--- a/sdk/python/kfp/compiler/v2_compat.py\n+++ b/sdk/python/kfp/compiler/v2_compat.py\n@@ -100,10 +100,16 @@\n op.name,\n \"--pipeline_name\",\n pipeline_name,\n- \"--pipeline_run_id\",\n- \"$(WORKFLOW_ID)\",\n- \"--pipeline_task_id\",\n+ \"--run_id\",\n+ \"$(KFP_RUN_ID)\",\n+ \"--run_resource\",\n+ \"workflows.argoproj.io/$(WORKFLOW_ID)\",\n+ \"--namespace\",\n+ \"$(KFP_NAMESPACE)\",\n+ \"--pod_name\",\n \"$(KFP_POD_NAME)\",\n+ \"--pod_uid\",\n+ \"$(KFP_POD_UID)\",\n \"--pipeline_root\",\n pipeline_root,\n \"--enable_caching\",\n", "issue": "[v2compat] re-evaluate execution custom properties schema\n* [ ] rename task execution `task_name` to `display_name`?\n", "before_files": [{"content": "# Copyright 2019 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport warnings\nfrom kubernetes import client as k8s_client\nfrom typing import Callable, Dict, Optional, Text\nfrom kfp.dsl._container_op import BaseOp, ContainerOp\n\ndef add_pod_env(op: BaseOp) -> BaseOp:\n \"\"\"Adds environment info if the Pod has the label `add-pod-env = true`.\n \"\"\"\n if isinstance(\n op, ContainerOp\n ) and op.pod_labels and 'add-pod-env' in op.pod_labels and op.pod_labels[\n 'add-pod-env'] == 'true':\n return add_kfp_pod_env(op)\n\n\ndef add_kfp_pod_env(op: BaseOp) -> BaseOp:\n \"\"\"Adds KFP pod environment info to the specified ContainerOp.\n \"\"\"\n if not isinstance(op, ContainerOp):\n warnings.warn(\n 'Trying to add default KFP environment variables to an Op that is '\n 'not a ContainerOp. 
Ignoring request.')\n return op\n\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name='KFP_POD_NAME',\n value_from=k8s_client.V1EnvVarSource(\n field_ref=k8s_client.V1ObjectFieldSelector(\n field_path='metadata.name')))\n ).add_env_variable(\n k8s_client.V1EnvVar(name='KFP_NAMESPACE',\n value_from=k8s_client.V1EnvVarSource(\n field_ref=k8s_client.V1ObjectFieldSelector(\n field_path='metadata.namespace')))\n ).add_env_variable(\n k8s_client.V1EnvVar(\n name='WORKFLOW_ID',\n value_from=k8s_client.\n V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(\n field_path=\"metadata.labels['workflows.argoproj.io/workflow']\")))\n ).add_env_variable(\n k8s_client.V1EnvVar(\n name='ENABLE_CACHING',\n value_from=k8s_client.\n V1EnvVarSource(field_ref=k8s_client.V1ObjectFieldSelector(\n field_path=\"metadata.labels['pipelines.kubeflow.org/enable_caching']\")))\n )\n return op\n\n\ndef add_pod_labels(labels: Optional[Dict] = None) -> Callable:\n \"\"\"Adds provided pod labels to each pod.\"\"\"\n\n def _add_pod_labels(task):\n for k, v in labels.items():\n # Only append but not update.\n # This is needed to bypass TFX pipelines/components.\n if k not in task.pod_labels:\n task.add_pod_label(k, v)\n return task\n\n return _add_pod_labels\n", "path": "sdk/python/kfp/compiler/_default_transformers.py"}, {"content": "# Copyright 2021 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility functions for enabling v2-compatible pipelines in v1.\"\"\"\nimport collections\nimport json\nfrom typing import Optional\n\nfrom kfp import dsl\nfrom kfp.compiler import _default_transformers\nfrom kfp.pipeline_spec import pipeline_spec_pb2\nfrom kfp.v2 import compiler\n\nfrom kubernetes import client as k8s_client\n\n_DEFAULT_LAUNCHER_IMAGE = \"gcr.io/ml-pipeline/kfp-launcher:1.6.4\"\n\n\ndef update_op(op: dsl.ContainerOp,\n pipeline_name: dsl.PipelineParam,\n pipeline_root: dsl.PipelineParam,\n launcher_image: Optional[str] = None) -> None:\n \"\"\"Updates the passed in Op for running in v2-compatible mode.\n\n Args:\n op: The Op to update.\n pipeline_spec: The PipelineSpec for the pipeline under which `op`\n runs.\n pipeline_root: The root output directory for pipeline artifacts.\n launcher_image: An optional launcher image. Useful for tests.\n \"\"\"\n op.is_v2 = True\n # Inject the launcher binary and overwrite the entrypoint.\n image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE\n launcher_container = dsl.UserContainer(name=\"kfp-launcher\",\n image=image_name,\n command=\"/bin/mount_launcher.sh\",\n mirror_volume_mounts=True)\n\n op.add_init_container(launcher_container)\n op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))\n op.add_volume_mount(\n k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))\n\n # op.command + op.args will have the following sections:\n # 1. args passed to kfp-launcher\n # 2. a separator \"--\"\n # 3. parameters in format \"key1=value1\", \"key2=value2\", ...\n # 4. 
a separator \"--\" as end of arguments passed to launcher\n # 5. (start of op.args) arguments of the original user program command + args\n #\n # example:\n # - command:\n # - /kfp-launcher/launch\n # - '--mlmd_server_address'\n # - $(METADATA_GRPC_SERVICE_HOST)\n # - '--mlmd_server_port'\n # - $(METADATA_GRPC_SERVICE_PORT)\n # - ... # more launcher params\n # - '--pipeline_task_id'\n # - $(KFP_POD_NAME)\n # - '--pipeline_root'\n # - ''\n # - '--' # start of parameter values\n # - first=first\n # - second=second\n # - '--' # start of user command and args\n # args:\n # - sh\n # - '-ec'\n # - |\n # program_path=$(mktemp)\n # printf \"%s\" \"$0\" > \"$program_path\"\n # python3 -u \"$program_path\" \"$@\"\n # - >\n # import json\n # import xxx\n # ...\n op.command = [\n \"/kfp-launcher/launch\",\n \"--mlmd_server_address\",\n \"$(METADATA_GRPC_SERVICE_HOST)\",\n \"--mlmd_server_port\",\n \"$(METADATA_GRPC_SERVICE_PORT)\",\n \"--runtime_info_json\",\n \"$(KFP_V2_RUNTIME_INFO)\",\n \"--container_image\",\n \"$(KFP_V2_IMAGE)\",\n \"--task_name\",\n op.name,\n \"--pipeline_name\",\n pipeline_name,\n \"--pipeline_run_id\",\n \"$(WORKFLOW_ID)\",\n \"--pipeline_task_id\",\n \"$(KFP_POD_NAME)\",\n \"--pipeline_root\",\n pipeline_root,\n \"--enable_caching\",\n \"$(ENABLE_CACHING)\",\n ]\n\n # Mount necessary environment variables.\n op.apply(_default_transformers.add_kfp_pod_env)\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_IMAGE\", value=op.container.image))\n\n config_map_ref = k8s_client.V1ConfigMapEnvSource(\n name='metadata-grpc-configmap', optional=True)\n op.container.add_env_from(\n k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))\n\n op.arguments = list(op.container_spec.command) + list(op.container_spec.args)\n\n runtime_info = {\n \"inputParameters\": collections.OrderedDict(),\n \"inputArtifacts\": collections.OrderedDict(),\n \"outputParameters\": collections.OrderedDict(),\n \"outputArtifacts\": collections.OrderedDict(),\n }\n\n op.command += [\"--\"]\n component_spec = op.component_spec\n for parameter, spec in sorted(\n component_spec.input_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n }\n op.command += [f\"{parameter}={op._parameter_arguments[parameter]}\"]\n runtime_info[\"inputParameters\"][parameter] = parameter_info\n op.command += [\"--\"]\n\n for artifact_name, spec in sorted(\n component_spec.input_definitions.artifacts.items()):\n artifact_info = {\n \"metadataPath\": op.input_artifact_paths[artifact_name],\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n }\n runtime_info[\"inputArtifacts\"][artifact_name] = artifact_info\n\n for parameter, spec in sorted(\n component_spec.output_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n \"path\":\n op.file_outputs[parameter],\n }\n runtime_info[\"outputParameters\"][parameter] = parameter_info\n\n for artifact_name, spec in sorted(\n component_spec.output_definitions.artifacts.items()):\n # TODO: Assert instance_schema.\n artifact_info = {\n # Type used to register output artifacts.\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n # File used to write out the registered artifact ID.\n \"metadataPath\": op.file_outputs[artifact_name],\n }\n runtime_info[\"outputArtifacts\"][artifact_name] = 
artifact_info\n\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_RUNTIME_INFO\",\n value=json.dumps(runtime_info)))\n\n op.pod_annotations['pipelines.kubeflow.org/v2_component'] = \"true\"\n op.pod_labels['pipelines.kubeflow.org/v2_component']= \"true\"\n", "path": "sdk/python/kfp/compiler/v2_compat.py"}]} | 3,393 | 530 |
gh_patches_debug_12946 | rasdani/github-patches | git_diff | awslabs__gluonts-1884 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeepAR with NegativeBinomial cannot generate values above 1e6
## Description
A DeepAR model with NegativeBinomial output distribution cannot generate values significantly above 1e6.
## To Reproduce
I attach a Jupyter notebook where I generate an artificial time series with values between 0 and 1e8, train a model, and plot the forecast. I compressed the notebook with zip, as .ipynb files are not supported as attachments.
[1e6.ipynb.zip](https://github.com/awslabs/gluon-ts/files/8069187/1e6.ipynb.zip)
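
For readers without the notebook, the following is a minimal sketch of a reproduction (the data size, daily frequency, and hyperparameters are illustrative assumptions, not values taken from the attached notebook):

```python
import numpy as np
from gluonts.dataset.common import ListDataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.distribution import NegativeBinomialOutput
from gluonts.mx.trainer import Trainer

# Synthetic count series spanning 0 .. 1e8 (illustrative, not from the notebook),
# so a well-fitted model's forecasts should clearly exceed 1e6.
target = np.random.randint(0, int(1e8), size=730).astype(float)
train_ds = ListDataset([{"start": "2020-01-01", "target": target}], freq="D")

estimator = DeepAREstimator(
    freq="D",
    prediction_length=30,
    distr_output=NegativeBinomialOutput(),
    trainer=Trainer(epochs=5),
)
predictor = estimator.train(train_ds)

forecast = next(iter(predictor.predict(train_ds)))
# Observed: the sampled forecasts top out around 1e6 instead of reaching ~1e8.
print(forecast.samples.max())
```

The cap lines up with the `F.minimum(F.random.gamma(r, theta), 1e6)` clamp in `NegativeBinomial.sample` in the source listed below.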
## Error message or code output
Please see the attached notebook.

## Environment
- Operating system: Ubuntu 20.04, linux kernel 5.13.0-28-generic
- Python version: 3.8.10
- GluonTS version: 0.8.1
- MXNet version: 1.9.0
I vaguely recall that I observed this issue already in gluonts versions 0.4.x.
</issue>
<code>
[start of src/gluonts/mx/distribution/neg_binomial.py]
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 from typing import Dict, List, Optional, Tuple
15
16 import numpy as np
17
18 from gluonts.core.component import validated
19 from gluonts.mx import Tensor
20
21 from .deterministic import DeterministicOutput
22 from .distribution import Distribution, _sample_multiple, getF, softplus
23 from .distribution_output import DistributionOutput
24 from .mixture import MixtureDistributionOutput
25
26
27 class NegativeBinomial(Distribution):
28 r"""
29 Negative binomial distribution, i.e. the distribution of the number of
30 successes in a sequence of independent Bernoulli trials.
31
32 Parameters
33 ----------
34 mu
35 Tensor containing the means, of shape `(*batch_shape, *event_shape)`.
36 alpha
37 Tensor of the shape parameters, of shape `(*batch_shape, *event_shape)`.
38 F
39 """
40
41 is_reparameterizable = False
42
43 @validated()
44 def __init__(self, mu: Tensor, alpha: Tensor) -> None:
45 self.mu = mu
46 self.alpha = alpha
47
48 @property
49 def F(self):
50 return getF(self.mu)
51
52 @property
53 def batch_shape(self) -> Tuple:
54 return self.mu.shape
55
56 @property
57 def event_shape(self) -> Tuple:
58 return ()
59
60 @property
61 def event_dim(self) -> int:
62 return 0
63
64 def log_prob(self, x: Tensor) -> Tensor:
65 alphaInv = 1.0 / self.alpha
66 alpha_times_mu = self.alpha * self.mu
67 F = self.F
68 ll = (
69 x * F.log(alpha_times_mu / (1.0 + alpha_times_mu))
70 - alphaInv * F.log1p(alpha_times_mu)
71 + F.gammaln(x + alphaInv)
72 - F.gammaln(x + 1.0)
73 - F.gammaln(alphaInv)
74 )
75 return ll
76
77 @property
78 def mean(self) -> Tensor:
79 return self.mu
80
81 @property
82 def stddev(self) -> Tensor:
83 return self.F.sqrt(self.mu * (1.0 + self.mu * self.alpha))
84
85 def sample(
86 self, num_samples: Optional[int] = None, dtype=np.float32
87 ) -> Tensor:
88 def s(mu: Tensor, alpha: Tensor) -> Tensor:
89 F = self.F
90 tol = 1e-5
91 r = 1.0 / alpha
92 theta = alpha * mu
93 r = F.minimum(F.maximum(tol, r), 1e10)
94 theta = F.minimum(F.maximum(tol, theta), 1e10)
95 x = F.minimum(F.random.gamma(r, theta), 1e6)
96 return F.random.poisson(lam=x, dtype=dtype)
97
98 return _sample_multiple(
99 s, mu=self.mu, alpha=self.alpha, num_samples=num_samples
100 )
101
102 @property
103 def args(self) -> List:
104 return [self.mu, self.alpha]
105
106
107 class NegativeBinomialOutput(DistributionOutput):
108 args_dim: Dict[str, int] = {"mu": 1, "alpha": 1}
109 distr_cls: type = NegativeBinomial
110
111 @classmethod
112 def domain_map(cls, F, mu, alpha):
113 epsilon = np.finfo(cls._dtype).eps # machine epsilon
114
115 mu = softplus(F, mu) + epsilon
116 alpha = softplus(F, alpha) + epsilon
117 return mu.squeeze(axis=-1), alpha.squeeze(axis=-1)
118
119 # Overwrites the parent class method.
120 # We cannot scale using the affine transformation since negative binomial should return integers.
121 # Instead we scale the parameters.
122 def distribution(
123 self,
124 distr_args,
125 loc: Optional[Tensor] = None,
126 scale: Optional[Tensor] = None,
127 ) -> NegativeBinomial:
128 mu, alpha = distr_args
129 if scale is None:
130 return NegativeBinomial(mu, alpha)
131 else:
132 F = getF(mu)
133 mu = F.broadcast_mul(mu, scale)
134 return NegativeBinomial(mu, alpha, F)
135
136 @property
137 def event_shape(self) -> Tuple:
138 return ()
139
140
141 def ZeroInflatedNegativeBinomialOutput() -> MixtureDistributionOutput:
142 return MixtureDistributionOutput(
143 distr_outputs=[NegativeBinomialOutput(), DeterministicOutput(0)]
144 )
145
[end of src/gluonts/mx/distribution/neg_binomial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/gluonts/mx/distribution/neg_binomial.py b/src/gluonts/mx/distribution/neg_binomial.py
--- a/src/gluonts/mx/distribution/neg_binomial.py
+++ b/src/gluonts/mx/distribution/neg_binomial.py
@@ -87,13 +87,9 @@
) -> Tensor:
def s(mu: Tensor, alpha: Tensor) -> Tensor:
F = self.F
- tol = 1e-5
r = 1.0 / alpha
theta = alpha * mu
- r = F.minimum(F.maximum(tol, r), 1e10)
- theta = F.minimum(F.maximum(tol, theta), 1e10)
- x = F.minimum(F.random.gamma(r, theta), 1e6)
- return F.random.poisson(lam=x, dtype=dtype)
+ return F.random.poisson(lam=F.random.gamma(r, theta), dtype=dtype)
return _sample_multiple(
s, mu=self.mu, alpha=self.alpha, num_samples=num_samples
| {"golden_diff": "diff --git a/src/gluonts/mx/distribution/neg_binomial.py b/src/gluonts/mx/distribution/neg_binomial.py\n--- a/src/gluonts/mx/distribution/neg_binomial.py\n+++ b/src/gluonts/mx/distribution/neg_binomial.py\n@@ -87,13 +87,9 @@\n ) -> Tensor:\n def s(mu: Tensor, alpha: Tensor) -> Tensor:\n F = self.F\n- tol = 1e-5\n r = 1.0 / alpha\n theta = alpha * mu\n- r = F.minimum(F.maximum(tol, r), 1e10)\n- theta = F.minimum(F.maximum(tol, theta), 1e10)\n- x = F.minimum(F.random.gamma(r, theta), 1e6)\n- return F.random.poisson(lam=x, dtype=dtype)\n+ return F.random.poisson(lam=F.random.gamma(r, theta), dtype=dtype)\n \n return _sample_multiple(\n s, mu=self.mu, alpha=self.alpha, num_samples=num_samples\n", "issue": "DeepAR with NegativeBinomial cannot generate values above 1e6\n## Description\r\nA DeepAR model with NegativeBinomial output distribution cannot generate values significantly above 1e6.\r\n\r\n## To Reproduce\r\nI attach a jupyter notebook where I generate artificial timeseries with values between 0 and 1e8, train a model and plot the forecast. I compressed the notebook with zip as .ipynb files are not supported as attachments.\r\n\r\n[1e6.ipynb.zip](https://github.com/awslabs/gluon-ts/files/8069187/1e6.ipynb.zip)\r\n\r\n## Error message or code output\r\nPlease see the attached notebook.\r\n\r\n\r\n\r\n## Environment\r\n- Operating system: Ubuntu 20.04, linux kernel 5.13.0-28-generic\r\n- Python version: 3.8.10\r\n- GluonTS version: 0.8.1\r\n- MXNet version: 1.9.0\r\n\r\nI vaguely recall that \r\nI observed this issue alredy in gluonts versions 0.4.x.\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom typing import Dict, List, Optional, Tuple\n\nimport numpy as np\n\nfrom gluonts.core.component import validated\nfrom gluonts.mx import Tensor\n\nfrom .deterministic import DeterministicOutput\nfrom .distribution import Distribution, _sample_multiple, getF, softplus\nfrom .distribution_output import DistributionOutput\nfrom .mixture import MixtureDistributionOutput\n\n\nclass NegativeBinomial(Distribution):\n r\"\"\"\n Negative binomial distribution, i.e. 
the distribution of the number of\n successes in a sequence of independent Bernoulli trials.\n\n Parameters\n ----------\n mu\n Tensor containing the means, of shape `(*batch_shape, *event_shape)`.\n alpha\n Tensor of the shape parameters, of shape `(*batch_shape, *event_shape)`.\n F\n \"\"\"\n\n is_reparameterizable = False\n\n @validated()\n def __init__(self, mu: Tensor, alpha: Tensor) -> None:\n self.mu = mu\n self.alpha = alpha\n\n @property\n def F(self):\n return getF(self.mu)\n\n @property\n def batch_shape(self) -> Tuple:\n return self.mu.shape\n\n @property\n def event_shape(self) -> Tuple:\n return ()\n\n @property\n def event_dim(self) -> int:\n return 0\n\n def log_prob(self, x: Tensor) -> Tensor:\n alphaInv = 1.0 / self.alpha\n alpha_times_mu = self.alpha * self.mu\n F = self.F\n ll = (\n x * F.log(alpha_times_mu / (1.0 + alpha_times_mu))\n - alphaInv * F.log1p(alpha_times_mu)\n + F.gammaln(x + alphaInv)\n - F.gammaln(x + 1.0)\n - F.gammaln(alphaInv)\n )\n return ll\n\n @property\n def mean(self) -> Tensor:\n return self.mu\n\n @property\n def stddev(self) -> Tensor:\n return self.F.sqrt(self.mu * (1.0 + self.mu * self.alpha))\n\n def sample(\n self, num_samples: Optional[int] = None, dtype=np.float32\n ) -> Tensor:\n def s(mu: Tensor, alpha: Tensor) -> Tensor:\n F = self.F\n tol = 1e-5\n r = 1.0 / alpha\n theta = alpha * mu\n r = F.minimum(F.maximum(tol, r), 1e10)\n theta = F.minimum(F.maximum(tol, theta), 1e10)\n x = F.minimum(F.random.gamma(r, theta), 1e6)\n return F.random.poisson(lam=x, dtype=dtype)\n\n return _sample_multiple(\n s, mu=self.mu, alpha=self.alpha, num_samples=num_samples\n )\n\n @property\n def args(self) -> List:\n return [self.mu, self.alpha]\n\n\nclass NegativeBinomialOutput(DistributionOutput):\n args_dim: Dict[str, int] = {\"mu\": 1, \"alpha\": 1}\n distr_cls: type = NegativeBinomial\n\n @classmethod\n def domain_map(cls, F, mu, alpha):\n epsilon = np.finfo(cls._dtype).eps # machine epsilon\n\n mu = softplus(F, mu) + epsilon\n alpha = softplus(F, alpha) + epsilon\n return mu.squeeze(axis=-1), alpha.squeeze(axis=-1)\n\n # Overwrites the parent class method.\n # We cannot scale using the affine transformation since negative binomial should return integers.\n # Instead we scale the parameters.\n def distribution(\n self,\n distr_args,\n loc: Optional[Tensor] = None,\n scale: Optional[Tensor] = None,\n ) -> NegativeBinomial:\n mu, alpha = distr_args\n if scale is None:\n return NegativeBinomial(mu, alpha)\n else:\n F = getF(mu)\n mu = F.broadcast_mul(mu, scale)\n return NegativeBinomial(mu, alpha, F)\n\n @property\n def event_shape(self) -> Tuple:\n return ()\n\n\ndef ZeroInflatedNegativeBinomialOutput() -> MixtureDistributionOutput:\n return MixtureDistributionOutput(\n distr_outputs=[NegativeBinomialOutput(), DeterministicOutput(0)]\n )\n", "path": "src/gluonts/mx/distribution/neg_binomial.py"}]} | 2,263 | 247 |
gh_patches_debug_16813 | rasdani/github-patches | git_diff | nautobot__nautobot-5593 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugins not loaded with Gunicorn
### Environment
* Nautobot version (Docker tag too if applicable): 2.2.1
* Python version: 3.11
* Database platform, version: psql
* Middleware(s):
### Steps to Reproduce
1. Use systemd
2. With gunicorn 21.2.0 or 22.0.0

### Expected Behavior
All applications should show
### Observed Behavior
We attempted to upgrade our dev environment from 2.1.9 to 2.2.1 but are hitting a weird issue where our plugins are reported as missing. We are only loading 1 or 2 basic plugins right now while we work on updating all our other plugins for 2.x. Oddly we are only seeing this issue on 1 out of 3 identical servers with identical Nautobot installs.
This looks very much like this issue from 2021: [Plugin Load Failure · Issue #95 · nautobot/nautobot (github.com)](https://github.com/nautobot/nautobot/issues/95)
</issue>
<code>
[start of nautobot/core/wsgi.py]
1 import logging
2 import os
3
4 from django.core import cache
5 from django.core.wsgi import get_wsgi_application
6 from django.db import connections
7
8 os.environ["DJANGO_SETTINGS_MODULE"] = "nautobot_config"
9
10 # Use try/except because we might not be running uWSGI. If `settings.WEBSERVER_WARMUP` is `True`,
11 # will first call `get_internal_wsgi_application` which does not have `uwsgi` module loaded
12 # already. Therefore, `settings.WEBSERVER_WARMUP` to `False` for this code to be loaded.
13 try:
14 import uwsgidecorators
15
16 @uwsgidecorators.postfork
17 def fix_uwsgi():
18 import uwsgi
19
20 logging.getLogger(__name__).info(
21 f"Closing existing DB and cache connections on worker {uwsgi.worker_id()} after uWSGI forked ..."
22 )
23 connections.close_all()
24 cache.close_caches()
25
26 except ImportError:
27 pass
28
29 application = get_wsgi_application()
30
[end of nautobot/core/wsgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nautobot/core/wsgi.py b/nautobot/core/wsgi.py
--- a/nautobot/core/wsgi.py
+++ b/nautobot/core/wsgi.py
@@ -1,11 +1,18 @@
import logging
-import os
from django.core import cache
from django.core.wsgi import get_wsgi_application
from django.db import connections
-os.environ["DJANGO_SETTINGS_MODULE"] = "nautobot_config"
+import nautobot
+
+# This is the Django default left here for visibility on how the Nautobot pattern
+# differs.
+# os.environ.setdefault("DJANGO_SETTINGS_MODULE", "nautobot.core.settings")
+
+# Instead of just pointing to `DJANGO_SETTINGS_MODULE` and letting Django run with it,
+# we're using the custom Nautobot loader code to read environment or config path for us.
+nautobot.setup()
# Use try/except because we might not be running uWSGI. If `settings.WEBSERVER_WARMUP` is `True`,
# will first call `get_internal_wsgi_application` which does not have `uwsgi` module loaded
| {"golden_diff": "diff --git a/nautobot/core/wsgi.py b/nautobot/core/wsgi.py\n--- a/nautobot/core/wsgi.py\n+++ b/nautobot/core/wsgi.py\n@@ -1,11 +1,18 @@\n import logging\n-import os\n \n from django.core import cache\n from django.core.wsgi import get_wsgi_application\n from django.db import connections\n \n-os.environ[\"DJANGO_SETTINGS_MODULE\"] = \"nautobot_config\"\n+import nautobot\n+\n+# This is the Django default left here for visibility on how the Nautobot pattern\n+# differs.\n+# os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"nautobot.core.settings\")\n+\n+# Instead of just pointing to `DJANGO_SETTINGS_MODULE` and letting Django run with it,\n+# we're using the custom Nautobot loader code to read environment or config path for us.\n+nautobot.setup()\n \n # Use try/except because we might not be running uWSGI. If `settings.WEBSERVER_WARMUP` is `True`,\n # will first call `get_internal_wsgi_application` which does not have `uwsgi` module loaded\n", "issue": "Plugins not loaded with Gunicorn\n\r\n### Environment\r\n\r\n* Nautobot version (Docker tag too if applicable): 2.2.1\r\n* Python version: 3.11\r\n* Database platform, version: psql\r\n* Middleware(s):\r\n\r\n\r\n### Steps to Reproduce\r\n1. Use systemd\r\n2. With gunicorn 21.2.0 or 22.0.0\r\n\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nAll applications to show \r\n\r\n### Observed Behavior\r\n\r\nWe attempted to upgrade our dev environment from 2.1.9 to 2.2.1 but are hitting a weird issue where our plugins are reported as missing. We are only loading 1 or 2 basic plugins right now while we work on updating all our other plugins for 2.x. Oddly we are only seeing this issue on 1 out of 3 identical servers with identical Nautobot installs.\r\n\r\nThis looks very much like this issue from 2021: [Plugin Load Failure \u00b7 Issue #95 \u00b7 nautobot/nautobot (github.com)](https://github.com/nautobot/nautobot/issues/95)\n", "before_files": [{"content": "import logging\nimport os\n\nfrom django.core import cache\nfrom django.core.wsgi import get_wsgi_application\nfrom django.db import connections\n\nos.environ[\"DJANGO_SETTINGS_MODULE\"] = \"nautobot_config\"\n\n# Use try/except because we might not be running uWSGI. If `settings.WEBSERVER_WARMUP` is `True`,\n# will first call `get_internal_wsgi_application` which does not have `uwsgi` module loaded\n# already. Therefore, `settings.WEBSERVER_WARMUP` to `False` for this code to be loaded.\ntry:\n import uwsgidecorators\n\n @uwsgidecorators.postfork\n def fix_uwsgi():\n import uwsgi\n\n logging.getLogger(__name__).info(\n f\"Closing existing DB and cache connections on worker {uwsgi.worker_id()} after uWSGI forked ...\"\n )\n connections.close_all()\n cache.close_caches()\n\nexcept ImportError:\n pass\n\napplication = get_wsgi_application()\n", "path": "nautobot/core/wsgi.py"}]} | 1,057 | 239 |
gh_patches_debug_22558 | rasdani/github-patches | git_diff | sublimelsp__LSP-925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Forced-break in hover popup can break syntax highlighting
Using: OSX / typescript-language-server
A line break that LSP forces into the popup can cause syntax highlighting to break, for example when it splits a plain string in JS syntax.
<img width="512" alt="line-break" src="https://user-images.githubusercontent.com/153197/72525594-cfa7ff00-3864-11ea-9e8a-c183e07995a1.png">
Notice that in the screenshot the whole string should have a yellow color. Syntax highlighting breaks because a line break inside a string is not valid syntax.
</issue>
<code>
[start of plugin/hover.py]
1 import mdpopups
2 import sublime
3 import sublime_plugin
4 import webbrowser
5 import os
6 import textwrap
7 from html import escape
8 from .code_actions import actions_manager, run_code_action_or_command
9 from .code_actions import CodeActionOrCommand
10 from .core.configurations import is_supported_syntax
11 from .core.popups import popups
12 from .core.protocol import Request, DiagnosticSeverity, Diagnostic, DiagnosticRelatedInformation, Point
13 from .core.registry import session_for_view, LspTextCommand, windows
14 from .core.settings import client_configs, settings
15 from .core.typing import List, Optional, Any, Dict
16 from .core.views import text_document_position_params
17 from .diagnostics import filter_by_point, view_diagnostics
18
19
20 SUBLIME_WORD_MASK = 515
21
22
23 class HoverHandler(sublime_plugin.ViewEventListener):
24 def __init__(self, view: sublime.View) -> None:
25 self.view = view
26
27 @classmethod
28 def is_applicable(cls, view_settings: dict) -> bool:
29 if 'hover' in settings.disabled_capabilities:
30 return False
31 syntax = view_settings.get('syntax')
32 if syntax:
33 return is_supported_syntax(syntax, client_configs.all)
34 else:
35 return False
36
37 def on_hover(self, point: int, hover_zone: int) -> None:
38 if hover_zone != sublime.HOVER_TEXT or self.view.is_popup_visible():
39 return
40 self.view.run_command("lsp_hover", {"point": point})
41
42
43 _test_contents = [] # type: List[str]
44
45
46 class_for_severity = {
47 DiagnosticSeverity.Error: 'errors',
48 DiagnosticSeverity.Warning: 'warnings',
49 DiagnosticSeverity.Information: 'info',
50 DiagnosticSeverity.Hint: 'hints'
51 }
52
53
54 class GotoKind:
55
56 __slots__ = ("lsp_name", "label", "subl_cmd_name")
57
58 def __init__(self, lsp_name: str, label: str, subl_cmd_name: str) -> None:
59 self.lsp_name = lsp_name
60 self.label = label
61 self.subl_cmd_name = subl_cmd_name
62
63
64 goto_kinds = [
65 GotoKind("definition", "Definition", "definition"),
66 GotoKind("typeDefinition", "Type Definition", "type_definition"),
67 GotoKind("declaration", "Declaration", "declaration"),
68 GotoKind("implementation", "Implementation", "implementation")
69 ]
70
71
72 class LspHoverCommand(LspTextCommand):
73 def __init__(self, view: sublime.View) -> None:
74 super().__init__(view)
75 self._base_dir = None # type: Optional[str]
76
77 def is_likely_at_symbol(self, point: int) -> bool:
78 word_at_sel = self.view.classify(point)
79 return bool(word_at_sel & SUBLIME_WORD_MASK)
80
81 def run(self, edit: sublime.Edit, point: Optional[int] = None) -> None:
82 hover_point = point or self.view.sel()[0].begin()
83 self._base_dir = windows.lookup(self.view.window()).get_project_path(self.view.file_name() or "")
84
85 self._hover = None # type: Optional[Any]
86 self._actions_by_config = {} # type: Dict[str, List[CodeActionOrCommand]]
87 self._diagnostics_by_config = {} # type: Dict[str, List[Diagnostic]]
88
89 if self.is_likely_at_symbol(hover_point):
90 self.request_symbol_hover(hover_point)
91
92 self._diagnostics_by_config = filter_by_point(view_diagnostics(self.view),
93 Point(*self.view.rowcol(hover_point)))
94 if self._diagnostics_by_config:
95 self.request_code_actions(hover_point)
96 self.request_show_hover(hover_point)
97
98 def request_symbol_hover(self, point: int) -> None:
99 # todo: session_for_view looks up windowmanager twice (config and for sessions)
100 # can we memoize some part (eg. where no point is provided?)
101 session = session_for_view(self.view, 'hoverProvider', point)
102 if session:
103 document_position = text_document_position_params(self.view, point)
104 if session.client:
105 session.client.send_request(
106 Request.hover(document_position),
107 lambda response: self.handle_response(response, point))
108
109 def request_code_actions(self, point: int) -> None:
110 actions_manager.request(self.view, point, lambda response: self.handle_code_actions(response, point),
111 self._diagnostics_by_config)
112
113 def handle_code_actions(self, responses: Dict[str, List[CodeActionOrCommand]], point: int) -> None:
114 self._actions_by_config = responses
115 self.request_show_hover(point)
116
117 def handle_response(self, response: Optional[Any], point: int) -> None:
118 self._hover = response
119 self.request_show_hover(point)
120
121 def symbol_actions_content(self) -> str:
122 actions = []
123 for goto_kind in goto_kinds:
124 if self.has_client_with_capability(goto_kind.lsp_name + "Provider"):
125 actions.append("<a href='{}'>{}</a>".format(goto_kind.lsp_name, goto_kind.label))
126 if self.has_client_with_capability('referencesProvider'):
127 actions.append("<a href='{}'>{}</a>".format('references', 'References'))
128 if self.has_client_with_capability('renameProvider'):
129 actions.append("<a href='{}'>{}</a>".format('rename', 'Rename'))
130 return "<p>" + " | ".join(actions) + "</p>"
131
132 def format_diagnostic_related_info(self, info: DiagnosticRelatedInformation) -> str:
133 file_path = info.location.file_path
134 if self._base_dir and file_path.startswith(self._base_dir):
135 file_path = os.path.relpath(file_path, self._base_dir)
136 location = "{}:{}:{}".format(file_path, info.location.range.start.row+1, info.location.range.start.col+1)
137 return "<a href='location:{}'>{}</a>: {}".format(location, location, escape(info.message))
138
139 def format_diagnostic(self, diagnostic: 'Diagnostic') -> str:
140 diagnostic_message = escape(diagnostic.message, False).replace('\n', '<br>')
141 related_infos = [self.format_diagnostic_related_info(info) for info in diagnostic.related_info]
142 related_content = "<pre class='related_info'>" + "<br>".join(related_infos) + "</pre>" if related_infos else ""
143
144 if diagnostic.source:
145 return "<pre class=\"{}\">[{}] {}{}</pre>".format(class_for_severity[diagnostic.severity],
146 diagnostic.source, diagnostic_message, related_content)
147 else:
148 return "<pre class=\"{}\">{}{}</pre>".format(class_for_severity[diagnostic.severity], diagnostic_message,
149 related_content)
150
151 def diagnostics_content(self) -> str:
152 formatted = []
153 for config_name in self._diagnostics_by_config:
154 by_severity = {} # type: Dict[int, List[str]]
155 formatted.append("<div class='diagnostics'>")
156 for diagnostic in self._diagnostics_by_config[config_name]:
157 by_severity.setdefault(diagnostic.severity, []).append(self.format_diagnostic(diagnostic))
158
159 for severity, items in by_severity.items():
160 formatted.append("<div>")
161 formatted.extend(items)
162 formatted.append("</div>")
163
164 if config_name in self._actions_by_config:
165 action_count = len(self._actions_by_config[config_name])
166 if action_count > 0:
167 formatted.append("<div class=\"actions\"><a href='{}:{}'>{} ({})</a></div>".format(
168 'code-actions', config_name, 'Code Actions', action_count))
169
170 formatted.append("</div>")
171
172 return "".join(formatted)
173
174 def hover_content(self) -> str:
175 contents = [] # type: List[Any]
176 if isinstance(self._hover, dict):
177 response_content = self._hover.get('contents')
178 if response_content:
179 if isinstance(response_content, list):
180 contents = response_content
181 else:
182 contents = [response_content]
183
184 formatted = []
185 for item in contents:
186 value = ""
187 language = None
188 if isinstance(item, str):
189 value = item
190 else:
191 value = item.get("value")
192 language = item.get("language")
193
194 if '\n' not in value:
195 value = "\n".join(textwrap.wrap(value, 80))
196
197 if language:
198 formatted.append("```{}\n{}\n```\n".format(language, value))
199 else:
200 formatted.append(value)
201
202 if formatted:
203 return mdpopups.md2html(self.view, "\n".join(formatted))
204
205 return ""
206
207 def request_show_hover(self, point: int) -> None:
208 sublime.set_timeout(lambda: self.show_hover(point), 50)
209
210 def show_hover(self, point: int) -> None:
211 contents = self.diagnostics_content() + self.hover_content()
212 if contents and settings.show_symbol_action_links:
213 contents += self.symbol_actions_content()
214
215 _test_contents.clear()
216 _test_contents.append(contents) # for testing only
217
218 if contents:
219 mdpopups.show_popup(
220 self.view,
221 contents,
222 css=popups.stylesheet,
223 md=False,
224 flags=sublime.HIDE_ON_MOUSE_MOVE_AWAY,
225 location=point,
226 wrapper_class=popups.classname,
227 max_width=800,
228 on_navigate=lambda href: self.on_hover_navigate(href, point))
229
230 def on_hover_navigate(self, href: str, point: int) -> None:
231 for goto_kind in goto_kinds:
232 if href == goto_kind.lsp_name:
233 self.run_command_from_point(point, "lsp_symbol_" + goto_kind.subl_cmd_name)
234 return
235 if href == 'references':
236 self.run_command_from_point(point, "lsp_symbol_references")
237 elif href == 'rename':
238 self.run_command_from_point(point, "lsp_symbol_rename")
239 elif href.startswith('code-actions'):
240 _, config_name = href.split(":")
241 titles = [command["title"] for command in self._actions_by_config[config_name]]
242 sel = self.view.sel()
243 sel.clear()
244 sel.add(sublime.Region(point, point))
245
246 self.view.show_popup_menu(titles, lambda i: self.handle_code_action_select(config_name, i))
247 elif href.startswith('location'):
248 _, file_path, location = href.split(":", 2)
249 file_path = os.path.join(self._base_dir, file_path) if self._base_dir else file_path
250 window = self.view.window()
251 if window:
252 window.open_file(file_path + ":" + location, sublime.ENCODED_POSITION | sublime.TRANSIENT)
253 else:
254 webbrowser.open_new_tab(href)
255
256 def handle_code_action_select(self, config_name: str, index: int) -> None:
257 if index > -1:
258 selected = self._actions_by_config[config_name][index]
259 run_code_action_or_command(self.view, config_name, selected)
260
261 def run_command_from_point(self, point: int, command_name: str, args: Optional[Any] = None) -> None:
262 sel = self.view.sel()
263 sel.clear()
264 sel.add(sublime.Region(point, point))
265 self.view.run_command(command_name, args)
266
[end of plugin/hover.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/hover.py b/plugin/hover.py
--- a/plugin/hover.py
+++ b/plugin/hover.py
@@ -3,7 +3,6 @@
import sublime_plugin
import webbrowser
import os
-import textwrap
from html import escape
from .code_actions import actions_manager, run_code_action_or_command
from .code_actions import CodeActionOrCommand
@@ -191,16 +190,14 @@
value = item.get("value")
language = item.get("language")
- if '\n' not in value:
- value = "\n".join(textwrap.wrap(value, 80))
-
if language:
formatted.append("```{}\n{}\n```\n".format(language, value))
else:
formatted.append(value)
if formatted:
- return mdpopups.md2html(self.view, "\n".join(formatted))
+ frontmatter_config = mdpopups.format_frontmatter({'allow_code_wrap': True})
+ return mdpopups.md2html(self.view, frontmatter_config + "\n".join(formatted))
return ""
| {"golden_diff": "diff --git a/plugin/hover.py b/plugin/hover.py\n--- a/plugin/hover.py\n+++ b/plugin/hover.py\n@@ -3,7 +3,6 @@\n import sublime_plugin\n import webbrowser\n import os\n-import textwrap\n from html import escape\n from .code_actions import actions_manager, run_code_action_or_command\n from .code_actions import CodeActionOrCommand\n@@ -191,16 +190,14 @@\n value = item.get(\"value\")\n language = item.get(\"language\")\n \n- if '\\n' not in value:\n- value = \"\\n\".join(textwrap.wrap(value, 80))\n-\n if language:\n formatted.append(\"```{}\\n{}\\n```\\n\".format(language, value))\n else:\n formatted.append(value)\n \n if formatted:\n- return mdpopups.md2html(self.view, \"\\n\".join(formatted))\n+ frontmatter_config = mdpopups.format_frontmatter({'allow_code_wrap': True})\n+ return mdpopups.md2html(self.view, frontmatter_config + \"\\n\".join(formatted))\n \n return \"\"\n", "issue": "Forced-break in hover popup can break syntax highlighting\nUsing: OSX / typescript-language-server\r\n\r\nLine-break, that LSP forces in the popup, can cause syntax highlighting to break. For example, if breaking a plain string in JS syntax.\r\n\r\n<img width=\"512\" alt=\"line-break\" src=\"https://user-images.githubusercontent.com/153197/72525594-cfa7ff00-3864-11ea-9e8a-c183e07995a1.png\">\r\n\r\nNotice that in the screenshot the whole string should have a yellow color. Syntax highlighting breaks because line break within a string is not a valid syntax.\n", "before_files": [{"content": "import mdpopups\nimport sublime\nimport sublime_plugin\nimport webbrowser\nimport os\nimport textwrap\nfrom html import escape\nfrom .code_actions import actions_manager, run_code_action_or_command\nfrom .code_actions import CodeActionOrCommand\nfrom .core.configurations import is_supported_syntax\nfrom .core.popups import popups\nfrom .core.protocol import Request, DiagnosticSeverity, Diagnostic, DiagnosticRelatedInformation, Point\nfrom .core.registry import session_for_view, LspTextCommand, windows\nfrom .core.settings import client_configs, settings\nfrom .core.typing import List, Optional, Any, Dict\nfrom .core.views import text_document_position_params\nfrom .diagnostics import filter_by_point, view_diagnostics\n\n\nSUBLIME_WORD_MASK = 515\n\n\nclass HoverHandler(sublime_plugin.ViewEventListener):\n def __init__(self, view: sublime.View) -> None:\n self.view = view\n\n @classmethod\n def is_applicable(cls, view_settings: dict) -> bool:\n if 'hover' in settings.disabled_capabilities:\n return False\n syntax = view_settings.get('syntax')\n if syntax:\n return is_supported_syntax(syntax, client_configs.all)\n else:\n return False\n\n def on_hover(self, point: int, hover_zone: int) -> None:\n if hover_zone != sublime.HOVER_TEXT or self.view.is_popup_visible():\n return\n self.view.run_command(\"lsp_hover\", {\"point\": point})\n\n\n_test_contents = [] # type: List[str]\n\n\nclass_for_severity = {\n DiagnosticSeverity.Error: 'errors',\n DiagnosticSeverity.Warning: 'warnings',\n DiagnosticSeverity.Information: 'info',\n DiagnosticSeverity.Hint: 'hints'\n}\n\n\nclass GotoKind:\n\n __slots__ = (\"lsp_name\", \"label\", \"subl_cmd_name\")\n\n def __init__(self, lsp_name: str, label: str, subl_cmd_name: str) -> None:\n self.lsp_name = lsp_name\n self.label = label\n self.subl_cmd_name = subl_cmd_name\n\n\ngoto_kinds = [\n GotoKind(\"definition\", \"Definition\", \"definition\"),\n GotoKind(\"typeDefinition\", \"Type Definition\", \"type_definition\"),\n GotoKind(\"declaration\", \"Declaration\", 
\"declaration\"),\n GotoKind(\"implementation\", \"Implementation\", \"implementation\")\n]\n\n\nclass LspHoverCommand(LspTextCommand):\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self._base_dir = None # type: Optional[str]\n\n def is_likely_at_symbol(self, point: int) -> bool:\n word_at_sel = self.view.classify(point)\n return bool(word_at_sel & SUBLIME_WORD_MASK)\n\n def run(self, edit: sublime.Edit, point: Optional[int] = None) -> None:\n hover_point = point or self.view.sel()[0].begin()\n self._base_dir = windows.lookup(self.view.window()).get_project_path(self.view.file_name() or \"\")\n\n self._hover = None # type: Optional[Any]\n self._actions_by_config = {} # type: Dict[str, List[CodeActionOrCommand]]\n self._diagnostics_by_config = {} # type: Dict[str, List[Diagnostic]]\n\n if self.is_likely_at_symbol(hover_point):\n self.request_symbol_hover(hover_point)\n\n self._diagnostics_by_config = filter_by_point(view_diagnostics(self.view),\n Point(*self.view.rowcol(hover_point)))\n if self._diagnostics_by_config:\n self.request_code_actions(hover_point)\n self.request_show_hover(hover_point)\n\n def request_symbol_hover(self, point: int) -> None:\n # todo: session_for_view looks up windowmanager twice (config and for sessions)\n # can we memoize some part (eg. where no point is provided?)\n session = session_for_view(self.view, 'hoverProvider', point)\n if session:\n document_position = text_document_position_params(self.view, point)\n if session.client:\n session.client.send_request(\n Request.hover(document_position),\n lambda response: self.handle_response(response, point))\n\n def request_code_actions(self, point: int) -> None:\n actions_manager.request(self.view, point, lambda response: self.handle_code_actions(response, point),\n self._diagnostics_by_config)\n\n def handle_code_actions(self, responses: Dict[str, List[CodeActionOrCommand]], point: int) -> None:\n self._actions_by_config = responses\n self.request_show_hover(point)\n\n def handle_response(self, response: Optional[Any], point: int) -> None:\n self._hover = response\n self.request_show_hover(point)\n\n def symbol_actions_content(self) -> str:\n actions = []\n for goto_kind in goto_kinds:\n if self.has_client_with_capability(goto_kind.lsp_name + \"Provider\"):\n actions.append(\"<a href='{}'>{}</a>\".format(goto_kind.lsp_name, goto_kind.label))\n if self.has_client_with_capability('referencesProvider'):\n actions.append(\"<a href='{}'>{}</a>\".format('references', 'References'))\n if self.has_client_with_capability('renameProvider'):\n actions.append(\"<a href='{}'>{}</a>\".format('rename', 'Rename'))\n return \"<p>\" + \" | \".join(actions) + \"</p>\"\n\n def format_diagnostic_related_info(self, info: DiagnosticRelatedInformation) -> str:\n file_path = info.location.file_path\n if self._base_dir and file_path.startswith(self._base_dir):\n file_path = os.path.relpath(file_path, self._base_dir)\n location = \"{}:{}:{}\".format(file_path, info.location.range.start.row+1, info.location.range.start.col+1)\n return \"<a href='location:{}'>{}</a>: {}\".format(location, location, escape(info.message))\n\n def format_diagnostic(self, diagnostic: 'Diagnostic') -> str:\n diagnostic_message = escape(diagnostic.message, False).replace('\\n', '<br>')\n related_infos = [self.format_diagnostic_related_info(info) for info in diagnostic.related_info]\n related_content = \"<pre class='related_info'>\" + \"<br>\".join(related_infos) + \"</pre>\" if related_infos else \"\"\n\n if diagnostic.source:\n return 
\"<pre class=\\\"{}\\\">[{}] {}{}</pre>\".format(class_for_severity[diagnostic.severity],\n diagnostic.source, diagnostic_message, related_content)\n else:\n return \"<pre class=\\\"{}\\\">{}{}</pre>\".format(class_for_severity[diagnostic.severity], diagnostic_message,\n related_content)\n\n def diagnostics_content(self) -> str:\n formatted = []\n for config_name in self._diagnostics_by_config:\n by_severity = {} # type: Dict[int, List[str]]\n formatted.append(\"<div class='diagnostics'>\")\n for diagnostic in self._diagnostics_by_config[config_name]:\n by_severity.setdefault(diagnostic.severity, []).append(self.format_diagnostic(diagnostic))\n\n for severity, items in by_severity.items():\n formatted.append(\"<div>\")\n formatted.extend(items)\n formatted.append(\"</div>\")\n\n if config_name in self._actions_by_config:\n action_count = len(self._actions_by_config[config_name])\n if action_count > 0:\n formatted.append(\"<div class=\\\"actions\\\"><a href='{}:{}'>{} ({})</a></div>\".format(\n 'code-actions', config_name, 'Code Actions', action_count))\n\n formatted.append(\"</div>\")\n\n return \"\".join(formatted)\n\n def hover_content(self) -> str:\n contents = [] # type: List[Any]\n if isinstance(self._hover, dict):\n response_content = self._hover.get('contents')\n if response_content:\n if isinstance(response_content, list):\n contents = response_content\n else:\n contents = [response_content]\n\n formatted = []\n for item in contents:\n value = \"\"\n language = None\n if isinstance(item, str):\n value = item\n else:\n value = item.get(\"value\")\n language = item.get(\"language\")\n\n if '\\n' not in value:\n value = \"\\n\".join(textwrap.wrap(value, 80))\n\n if language:\n formatted.append(\"```{}\\n{}\\n```\\n\".format(language, value))\n else:\n formatted.append(value)\n\n if formatted:\n return mdpopups.md2html(self.view, \"\\n\".join(formatted))\n\n return \"\"\n\n def request_show_hover(self, point: int) -> None:\n sublime.set_timeout(lambda: self.show_hover(point), 50)\n\n def show_hover(self, point: int) -> None:\n contents = self.diagnostics_content() + self.hover_content()\n if contents and settings.show_symbol_action_links:\n contents += self.symbol_actions_content()\n\n _test_contents.clear()\n _test_contents.append(contents) # for testing only\n\n if contents:\n mdpopups.show_popup(\n self.view,\n contents,\n css=popups.stylesheet,\n md=False,\n flags=sublime.HIDE_ON_MOUSE_MOVE_AWAY,\n location=point,\n wrapper_class=popups.classname,\n max_width=800,\n on_navigate=lambda href: self.on_hover_navigate(href, point))\n\n def on_hover_navigate(self, href: str, point: int) -> None:\n for goto_kind in goto_kinds:\n if href == goto_kind.lsp_name:\n self.run_command_from_point(point, \"lsp_symbol_\" + goto_kind.subl_cmd_name)\n return\n if href == 'references':\n self.run_command_from_point(point, \"lsp_symbol_references\")\n elif href == 'rename':\n self.run_command_from_point(point, \"lsp_symbol_rename\")\n elif href.startswith('code-actions'):\n _, config_name = href.split(\":\")\n titles = [command[\"title\"] for command in self._actions_by_config[config_name]]\n sel = self.view.sel()\n sel.clear()\n sel.add(sublime.Region(point, point))\n\n self.view.show_popup_menu(titles, lambda i: self.handle_code_action_select(config_name, i))\n elif href.startswith('location'):\n _, file_path, location = href.split(\":\", 2)\n file_path = os.path.join(self._base_dir, file_path) if self._base_dir else file_path\n window = self.view.window()\n if window:\n window.open_file(file_path + 
\":\" + location, sublime.ENCODED_POSITION | sublime.TRANSIENT)\n else:\n webbrowser.open_new_tab(href)\n\n def handle_code_action_select(self, config_name: str, index: int) -> None:\n if index > -1:\n selected = self._actions_by_config[config_name][index]\n run_code_action_or_command(self.view, config_name, selected)\n\n def run_command_from_point(self, point: int, command_name: str, args: Optional[Any] = None) -> None:\n sel = self.view.sel()\n sel.clear()\n sel.add(sublime.Region(point, point))\n self.view.run_command(command_name, args)\n", "path": "plugin/hover.py"}]} | 3,782 | 243 |
gh_patches_debug_3475 | rasdani/github-patches | git_diff | ckan__ckan-7033 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IConfigurer plugin load order
**CKAN version**
(all)
**Describe the bug**
`update_config` runs through all IConfigurer plugins from first to last calling `plugin.update_config`. The pattern for other interfaces is that the "first plugin wins", but this is difficult to implement when later plugins override values from earlier ones in the list.
**Steps to reproduce**
Enable two plugins that set the same config value using IConfigurer
**Expected behavior**
First plugin value should win, like with other interfaces.
**Additional details**
ckanext-envvars recommends adding `envvars` last in the list of plugins, which makes sense, but if other plugins depend on or override values configured in envvars (e.g. ckanext-scheming), those values won't be available at `update_config` time.
</issue>
<code>
[start of ckan/config/environment.py]
1 # encoding: utf-8
2 '''CKAN environment configuration'''
3 from __future__ import annotations
4
5 import os
6 import logging
7 import warnings
8 import pytz
9
10 from typing import Union, cast
11
12 import sqlalchemy
13
14 import ckan.model as model
15 import ckan.plugins as p
16 import ckan.lib.plugins as lib_plugins
17 import ckan.lib.helpers as helpers
18 import ckan.lib.app_globals as app_globals
19 from ckan.lib.redis import is_redis_available
20 import ckan.lib.search as search
21 import ckan.logic as logic
22 import ckan.authz as authz
23 from ckan.lib.webassets_tools import webassets_init
24 from ckan.lib.i18n import build_js_translations
25
26 from ckan.common import CKANConfig, config, config_declaration
27 from ckan.exceptions import CkanConfigurationException
28 from ckan.types import Config
29
30 log = logging.getLogger(__name__)
31
32 # Suppress benign warning 'Unbuilt egg for setuptools'
33 warnings.simplefilter('ignore', UserWarning)
34
35
36 def load_environment(conf: Union[Config, CKANConfig]):
37 """
38 Configure the Pylons environment via the ``pylons.config`` object. This
39 code should only need to be run once.
40 """
41 os.environ['CKAN_CONFIG'] = cast(str, conf['__file__'])
42
43 valid_base_public_folder_names = ['public', 'public-bs3']
44 static_files = conf.get('ckan.base_public_folder', 'public')
45 conf['ckan.base_public_folder'] = static_files
46
47 if static_files not in valid_base_public_folder_names:
48 raise CkanConfigurationException(
49 'You provided an invalid value for ckan.base_public_folder. '
50 'Possible values are: "public" and "public-bs3".'
51 )
52
53 log.info('Loading static files from %s' % static_files)
54
55 # Initialize main CKAN config object
56 config.update(conf)
57
58 # Setup the SQLAlchemy database engine
59 # Suppress a couple of sqlalchemy warnings
60 msgs = ['^Unicode type received non-unicode bind param value',
61 "^Did not recognize type 'BIGINT' of column 'size'",
62 "^Did not recognize type 'tsvector' of column 'search_vector'"
63 ]
64 for msg in msgs:
65 warnings.filterwarnings('ignore', msg, sqlalchemy.exc.SAWarning)
66
67 # load all CKAN plugins
68 p.load_all()
69
70 # Check Redis availability
71 if not is_redis_available():
72 log.critical('Could not connect to Redis.')
73
74 app_globals.reset()
75
76 # Build JavaScript translations. Must be done after plugins have
77 # been loaded.
78 build_js_translations()
79
80
81 # A mapping of config settings that can be overridden by env vars.
82 # Note: Do not remove the following lines, they are used in the docs
83 # Start CONFIG_FROM_ENV_VARS
84 CONFIG_FROM_ENV_VARS: dict[str, str] = {
85 'sqlalchemy.url': 'CKAN_SQLALCHEMY_URL',
86 'ckan.datastore.write_url': 'CKAN_DATASTORE_WRITE_URL',
87 'ckan.datastore.read_url': 'CKAN_DATASTORE_READ_URL',
88 'ckan.redis.url': 'CKAN_REDIS_URL',
89 'solr_url': 'CKAN_SOLR_URL',
90 'solr_user': 'CKAN_SOLR_USER',
91 'solr_password': 'CKAN_SOLR_PASSWORD',
92 'ckan.site_id': 'CKAN_SITE_ID',
93 'ckan.site_url': 'CKAN_SITE_URL',
94 'ckan.storage_path': 'CKAN_STORAGE_PATH',
95 'ckan.datapusher.url': 'CKAN_DATAPUSHER_URL',
96 'smtp.server': 'CKAN_SMTP_SERVER',
97 'smtp.starttls': 'CKAN_SMTP_STARTTLS',
98 'smtp.user': 'CKAN_SMTP_USER',
99 'smtp.password': 'CKAN_SMTP_PASSWORD',
100 'smtp.mail_from': 'CKAN_SMTP_MAIL_FROM',
101 'ckan.max_resource_size': 'CKAN_MAX_UPLOAD_SIZE_MB'
102 }
103 # End CONFIG_FROM_ENV_VARS
104
105
106 def update_config() -> None:
107 ''' This code needs to be run when the config is changed to take those
108 changes into account. It is called whenever a plugin is loaded as the
109 plugin might have changed the config values (for instance it might
110 change ckan.site_url) '''
111
112 config_declaration.setup()
113 config_declaration.make_safe(config)
114 config_declaration.normalize(config)
115
116 webassets_init()
117
118 for plugin in p.PluginImplementations(p.IConfigurer):
119 # must do update in place as this does not work:
120 # config = plugin.update_config(config)
121 plugin.update_config(config)
122
123 for option in CONFIG_FROM_ENV_VARS:
124 from_env = os.environ.get(CONFIG_FROM_ENV_VARS[option], None)
125 if from_env:
126 config[option] = from_env
127
128 if config.get_value("config.mode") == "strict":
129 _, errors = config_declaration.validate(config)
130 if errors:
131 msg = "\n".join(
132 "{}: {}".format(key, "; ".join(issues))
133 for key, issues in errors.items()
134 )
135 raise CkanConfigurationException(msg)
136
137 root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
138
139 site_url = config.get_value('ckan.site_url')
140 if not site_url:
141 raise RuntimeError(
142 'ckan.site_url is not configured and it must have a value.'
143 ' Please amend your .ini file.')
144 if not site_url.lower().startswith('http'):
145 raise RuntimeError(
146 'ckan.site_url should be a full URL, including the schema '
147 '(http or https)')
148 # Remove backslash from site_url if present
149 config['ckan.site_url'] = site_url.rstrip('/')
150
151 display_timezone = config.get_value('ckan.display_timezone')
152 if (display_timezone and
153 display_timezone != 'server' and
154 display_timezone not in pytz.all_timezones):
155 raise CkanConfigurationException(
156 "ckan.display_timezone is not 'server' or a valid timezone"
157 )
158
159 # Init SOLR settings and check if the schema is compatible
160 # from ckan.lib.search import SolrSettings, check_solr_schema_version
161
162 # lib.search is imported here as we need the config enabled and parsed
163 search.SolrSettings.init(config.get_value('solr_url'),
164 config.get_value('solr_user'),
165 config.get_value('solr_password'))
166 search.check_solr_schema_version()
167
168 lib_plugins.reset_package_plugins()
169 lib_plugins.register_package_plugins()
170 lib_plugins.reset_group_plugins()
171 lib_plugins.register_group_plugins()
172
173 # initialise the globals
174 app_globals.app_globals._init()
175
176 helpers.load_plugin_helpers()
177
178 # Templates and CSS loading from configuration
179 valid_base_templates_folder_names = ['templates', 'templates-bs3']
180 templates = config.get('ckan.base_templates_folder', 'templates')
181 config['ckan.base_templates_folder'] = templates
182
183 if templates not in valid_base_templates_folder_names:
184 raise CkanConfigurationException(
185 'You provided an invalid value for ckan.base_templates_folder. '
186 'Possible values are: "templates" and "templates-bs3".'
187 )
188
189 jinja2_templates_path = os.path.join(root, templates)
190 log.info('Loading templates from %s' % jinja2_templates_path)
191 template_paths = [jinja2_templates_path]
192
193 extra_template_paths = config.get_value('extra_template_paths')
194 if extra_template_paths:
195 # must be first for them to override defaults
196 template_paths = extra_template_paths.split(',') + template_paths
197 config['computed_template_paths'] = template_paths
198
199 # Enable pessimistic disconnect handling (added in SQLAlchemy 1.2)
200 # to eliminate database errors due to stale pooled connections
201 config.setdefault('sqlalchemy.pool_pre_ping', True)
202 # Initialize SQLAlchemy
203 engine = sqlalchemy.engine_from_config(config)
204 model.init_model(engine)
205
206 for plugin in p.PluginImplementations(p.IConfigurable):
207 plugin.configure(config)
208
209 # clear other caches
210 logic.clear_actions_cache()
211 logic.clear_validators_cache()
212 authz.clear_auth_functions_cache()
213
214 # Here we create the site user if they are not already in the database
215 try:
216 logic.get_action('get_site_user')({'ignore_auth': True}, {})
217 except (sqlalchemy.exc.ProgrammingError, sqlalchemy.exc.OperationalError):
218 # The database is not yet initialised. It happens in `ckan db init`
219 pass
220 except sqlalchemy.exc.IntegrityError:
221 # Race condition, user already exists.
222 pass
223
224 # Close current session and open database connections to ensure a clean
225 # clean environment even if an error occurs later on
226 model.Session.remove()
227 model.Session.bind.dispose()
228
[end of ckan/config/environment.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckan/config/environment.py b/ckan/config/environment.py
--- a/ckan/config/environment.py
+++ b/ckan/config/environment.py
@@ -115,7 +115,7 @@
webassets_init()
- for plugin in p.PluginImplementations(p.IConfigurer):
+ for plugin in reversed(list(p.PluginImplementations(p.IConfigurer))):
# must do update in place as this does not work:
# config = plugin.update_config(config)
plugin.update_config(config)
| {"golden_diff": "diff --git a/ckan/config/environment.py b/ckan/config/environment.py\n--- a/ckan/config/environment.py\n+++ b/ckan/config/environment.py\n@@ -115,7 +115,7 @@\n \n webassets_init()\n \n- for plugin in p.PluginImplementations(p.IConfigurer):\n+ for plugin in reversed(list(p.PluginImplementations(p.IConfigurer))):\n # must do update in place as this does not work:\n # config = plugin.update_config(config)\n plugin.update_config(config)\n", "issue": "IConfigurer plugin load order\n**CKAN version**\r\n(all)\r\n\r\n**Describe the bug**\r\n`update_config` runs through all IConfigurer plugins from first to last calling `plugin.update_config`. The pattern for other interfaces is that the \"first plugin wins\", but this is difficult to implement when later plugins override values from earlier ones in the list.\r\n\r\n**Steps to reproduce**\r\nEnable two plugins that set the same config value using IConfigurer\r\n\r\n**Expected behavior**\r\nFirst plugin value should win, like with other interfaces.\r\n\r\n**Additional details**\r\nckanext-envvars recommends adding `envvars` last in the list of plugins, which makes sense but if other plugins depend on/override values configured in envvars (e.g. ckanext-scheming) they won't be available at `update_config` time.\n", "before_files": [{"content": "# encoding: utf-8\n'''CKAN environment configuration'''\nfrom __future__ import annotations\n\nimport os\nimport logging\nimport warnings\nimport pytz\n\nfrom typing import Union, cast\n\nimport sqlalchemy\n\nimport ckan.model as model\nimport ckan.plugins as p\nimport ckan.lib.plugins as lib_plugins\nimport ckan.lib.helpers as helpers\nimport ckan.lib.app_globals as app_globals\nfrom ckan.lib.redis import is_redis_available\nimport ckan.lib.search as search\nimport ckan.logic as logic\nimport ckan.authz as authz\nfrom ckan.lib.webassets_tools import webassets_init\nfrom ckan.lib.i18n import build_js_translations\n\nfrom ckan.common import CKANConfig, config, config_declaration\nfrom ckan.exceptions import CkanConfigurationException\nfrom ckan.types import Config\n\nlog = logging.getLogger(__name__)\n\n# Suppress benign warning 'Unbuilt egg for setuptools'\nwarnings.simplefilter('ignore', UserWarning)\n\n\ndef load_environment(conf: Union[Config, CKANConfig]):\n \"\"\"\n Configure the Pylons environment via the ``pylons.config`` object. This\n code should only need to be run once.\n \"\"\"\n os.environ['CKAN_CONFIG'] = cast(str, conf['__file__'])\n\n valid_base_public_folder_names = ['public', 'public-bs3']\n static_files = conf.get('ckan.base_public_folder', 'public')\n conf['ckan.base_public_folder'] = static_files\n\n if static_files not in valid_base_public_folder_names:\n raise CkanConfigurationException(\n 'You provided an invalid value for ckan.base_public_folder. 
'\n 'Possible values are: \"public\" and \"public-bs3\".'\n )\n\n log.info('Loading static files from %s' % static_files)\n\n # Initialize main CKAN config object\n config.update(conf)\n\n # Setup the SQLAlchemy database engine\n # Suppress a couple of sqlalchemy warnings\n msgs = ['^Unicode type received non-unicode bind param value',\n \"^Did not recognize type 'BIGINT' of column 'size'\",\n \"^Did not recognize type 'tsvector' of column 'search_vector'\"\n ]\n for msg in msgs:\n warnings.filterwarnings('ignore', msg, sqlalchemy.exc.SAWarning)\n\n # load all CKAN plugins\n p.load_all()\n\n # Check Redis availability\n if not is_redis_available():\n log.critical('Could not connect to Redis.')\n\n app_globals.reset()\n\n # Build JavaScript translations. Must be done after plugins have\n # been loaded.\n build_js_translations()\n\n\n# A mapping of config settings that can be overridden by env vars.\n# Note: Do not remove the following lines, they are used in the docs\n# Start CONFIG_FROM_ENV_VARS\nCONFIG_FROM_ENV_VARS: dict[str, str] = {\n 'sqlalchemy.url': 'CKAN_SQLALCHEMY_URL',\n 'ckan.datastore.write_url': 'CKAN_DATASTORE_WRITE_URL',\n 'ckan.datastore.read_url': 'CKAN_DATASTORE_READ_URL',\n 'ckan.redis.url': 'CKAN_REDIS_URL',\n 'solr_url': 'CKAN_SOLR_URL',\n 'solr_user': 'CKAN_SOLR_USER',\n 'solr_password': 'CKAN_SOLR_PASSWORD',\n 'ckan.site_id': 'CKAN_SITE_ID',\n 'ckan.site_url': 'CKAN_SITE_URL',\n 'ckan.storage_path': 'CKAN_STORAGE_PATH',\n 'ckan.datapusher.url': 'CKAN_DATAPUSHER_URL',\n 'smtp.server': 'CKAN_SMTP_SERVER',\n 'smtp.starttls': 'CKAN_SMTP_STARTTLS',\n 'smtp.user': 'CKAN_SMTP_USER',\n 'smtp.password': 'CKAN_SMTP_PASSWORD',\n 'smtp.mail_from': 'CKAN_SMTP_MAIL_FROM',\n 'ckan.max_resource_size': 'CKAN_MAX_UPLOAD_SIZE_MB'\n}\n# End CONFIG_FROM_ENV_VARS\n\n\ndef update_config() -> None:\n ''' This code needs to be run when the config is changed to take those\n changes into account. 
It is called whenever a plugin is loaded as the\n plugin might have changed the config values (for instance it might\n change ckan.site_url) '''\n\n config_declaration.setup()\n config_declaration.make_safe(config)\n config_declaration.normalize(config)\n\n webassets_init()\n\n for plugin in p.PluginImplementations(p.IConfigurer):\n # must do update in place as this does not work:\n # config = plugin.update_config(config)\n plugin.update_config(config)\n\n for option in CONFIG_FROM_ENV_VARS:\n from_env = os.environ.get(CONFIG_FROM_ENV_VARS[option], None)\n if from_env:\n config[option] = from_env\n\n if config.get_value(\"config.mode\") == \"strict\":\n _, errors = config_declaration.validate(config)\n if errors:\n msg = \"\\n\".join(\n \"{}: {}\".format(key, \"; \".join(issues))\n for key, issues in errors.items()\n )\n raise CkanConfigurationException(msg)\n\n root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n site_url = config.get_value('ckan.site_url')\n if not site_url:\n raise RuntimeError(\n 'ckan.site_url is not configured and it must have a value.'\n ' Please amend your .ini file.')\n if not site_url.lower().startswith('http'):\n raise RuntimeError(\n 'ckan.site_url should be a full URL, including the schema '\n '(http or https)')\n # Remove backslash from site_url if present\n config['ckan.site_url'] = site_url.rstrip('/')\n\n display_timezone = config.get_value('ckan.display_timezone')\n if (display_timezone and\n display_timezone != 'server' and\n display_timezone not in pytz.all_timezones):\n raise CkanConfigurationException(\n \"ckan.display_timezone is not 'server' or a valid timezone\"\n )\n\n # Init SOLR settings and check if the schema is compatible\n # from ckan.lib.search import SolrSettings, check_solr_schema_version\n\n # lib.search is imported here as we need the config enabled and parsed\n search.SolrSettings.init(config.get_value('solr_url'),\n config.get_value('solr_user'),\n config.get_value('solr_password'))\n search.check_solr_schema_version()\n\n lib_plugins.reset_package_plugins()\n lib_plugins.register_package_plugins()\n lib_plugins.reset_group_plugins()\n lib_plugins.register_group_plugins()\n\n # initialise the globals\n app_globals.app_globals._init()\n\n helpers.load_plugin_helpers()\n\n # Templates and CSS loading from configuration\n valid_base_templates_folder_names = ['templates', 'templates-bs3']\n templates = config.get('ckan.base_templates_folder', 'templates')\n config['ckan.base_templates_folder'] = templates\n\n if templates not in valid_base_templates_folder_names:\n raise CkanConfigurationException(\n 'You provided an invalid value for ckan.base_templates_folder. 
'\n 'Possible values are: \"templates\" and \"templates-bs3\".'\n )\n\n jinja2_templates_path = os.path.join(root, templates)\n log.info('Loading templates from %s' % jinja2_templates_path)\n template_paths = [jinja2_templates_path]\n\n extra_template_paths = config.get_value('extra_template_paths')\n if extra_template_paths:\n # must be first for them to override defaults\n template_paths = extra_template_paths.split(',') + template_paths\n config['computed_template_paths'] = template_paths\n\n # Enable pessimistic disconnect handling (added in SQLAlchemy 1.2)\n # to eliminate database errors due to stale pooled connections\n config.setdefault('sqlalchemy.pool_pre_ping', True)\n # Initialize SQLAlchemy\n engine = sqlalchemy.engine_from_config(config)\n model.init_model(engine)\n\n for plugin in p.PluginImplementations(p.IConfigurable):\n plugin.configure(config)\n\n # clear other caches\n logic.clear_actions_cache()\n logic.clear_validators_cache()\n authz.clear_auth_functions_cache()\n\n # Here we create the site user if they are not already in the database\n try:\n logic.get_action('get_site_user')({'ignore_auth': True}, {})\n except (sqlalchemy.exc.ProgrammingError, sqlalchemy.exc.OperationalError):\n # The database is not yet initialised. It happens in `ckan db init`\n pass\n except sqlalchemy.exc.IntegrityError:\n # Race condition, user already exists.\n pass\n\n # Close current session and open database connections to ensure a clean\n # clean environment even if an error occurs later on\n model.Session.remove()\n model.Session.bind.dispose()\n", "path": "ckan/config/environment.py"}]} | 3,141 | 113 |
gh_patches_debug_12067 | rasdani/github-patches | git_diff | sktime__sktime-1453 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] sktime.datatypes._panel._examples raises pandas.core.common.SettingWithCopyError
**Describe the bug**
Attempting to install [tsai](https://pypi.org/project/tsai/) as an upstream package also installs this package, but the install raises an error that traces to line 67 in "/opt/conda/lib/python3.8/site-packages/sktime/datatypes/_panel/_examples.py"
```
X.iloc[0][0] = pd.Series([1, 2, 3])
```
**To Reproduce**
Importing any code that executes the code starting at line 67 of /sktime/datatypes/_panel/_examples.py can raise a Pandas error, depending on the Pandas version that is installed.
**Expected behavior**
No error should be raised on install or import of sktime as a dependency.
**Versions**
System:
python: 3.9.1 (default, Sep 16 2021, 11:42:30) [Clang 12.0.5 (clang-1205.0.22.11)]
executable: /.../.pyenv/versions/3.9.1/bin/python
machine: macOS-11.6-x86_64-i386-64bit
Python dependencies:
pip: 21.2.4
setuptools: 49.2.1
sklearn: 1.0
sktime: 0.8.0
statsmodels: 0.12.2
numpy: 1.20.3
scipy: 1.7.1
Cython: None
pandas: 1.3.3
matplotlib: 3.4.3
joblib: 1.0.1
numba: 0.53.1
pmdarima: None
tsfresh: 0.18.0
</issue>
<code>
[start of sktime/datatypes/_panel/_examples.py]
1 # -*- coding: utf-8 -*-
2 """Example generation for testing.
3
4 Exports dict of examples, useful for testing as fixtures.
5
6 example_dict: dict indexed by triple
7 1st element = mtype - str
8 2nd element = considered as this scitype - str
9 3rd element = int - index of example
10 elements are data objects, considered examples for the mtype
11 all examples with same index are considered "same" on scitype content
12 if None, indicates that representation is not possible
13
14 example_lossy: dict of bool indexed by pairs of str
15 1st element = mtype - str
16 2nd element = considered as this scitype - str
17 3rd element = int - index of example
18 elements are bool, indicate whether representation has information removed
19 all examples with same index are considered "same" on scitype content
20
21 overall, conversions from non-lossy representations to any other ones
22 should yield the element exactly, identidally (given same index)
23 """
24
25 import pandas as pd
26 import numpy as np
27
28 example_dict = dict()
29 example_dict_lossy = dict()
30
31 ###
32
33
34 X = np.array(
35 [[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 55, 6]], [[1, 2, 3], [42, 5, 6]]],
36 dtype=np.int64,
37 )
38
39 example_dict[("numpy3D", "Panel", 0)] = X
40 example_dict_lossy[("numpy3D", "Panel", 0)] = False
41
42 cols = [f"var_{i}" for i in range(2)]
43 Xlist = [
44 pd.DataFrame([[1, 4], [2, 5], [3, 6]], columns=cols),
45 pd.DataFrame([[1, 4], [2, 55], [3, 6]], columns=cols),
46 pd.DataFrame([[1, 42], [2, 5], [3, 6]], columns=cols),
47 ]
48
49 example_dict[("df-list", "Panel", 0)] = Xlist
50 example_dict_lossy[("df-list", "Panel", 0)] = False
51
52 cols = ["instances", "timepoints"] + [f"var_{i}" for i in range(2)]
53
54 Xlist = [
55 pd.DataFrame([[0, 0, 1, 4], [0, 1, 2, 5], [0, 2, 3, 6]], columns=cols),
56 pd.DataFrame([[1, 0, 1, 4], [1, 1, 2, 55], [1, 2, 3, 6]], columns=cols),
57 pd.DataFrame([[2, 0, 1, 42], [2, 1, 2, 5], [2, 2, 3, 6]], columns=cols),
58 ]
59 X = pd.concat(Xlist)
60 X = X.set_index(["instances", "timepoints"])
61
62 example_dict[("pd-multiindex", "Panel", 0)] = X
63 example_dict_lossy[("pd-multiindex", "Panel", 0)] = False
64
65 cols = [f"var_{i}" for i in range(2)]
66 X = pd.DataFrame(columns=cols, index=[0, 1, 2])
67 X.iloc[0][0] = pd.Series([1, 2, 3])
68 X.iloc[0][1] = pd.Series([4, 5, 6])
69 X.iloc[1][0] = pd.Series([1, 2, 3])
70 X.iloc[1][1] = pd.Series([4, 55, 6])
71 X.iloc[2][0] = pd.Series([1, 2, 3])
72 X.iloc[2][1] = pd.Series([42, 5, 6])
73
74 example_dict[("nested_univ", "Panel", 0)] = X
75 example_dict_lossy[("nested_univ", "Panel", 0)] = False
76
[end of sktime/datatypes/_panel/_examples.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sktime/datatypes/_panel/_examples.py b/sktime/datatypes/_panel/_examples.py
--- a/sktime/datatypes/_panel/_examples.py
+++ b/sktime/datatypes/_panel/_examples.py
@@ -64,12 +64,13 @@
cols = [f"var_{i}" for i in range(2)]
X = pd.DataFrame(columns=cols, index=[0, 1, 2])
-X.iloc[0][0] = pd.Series([1, 2, 3])
-X.iloc[0][1] = pd.Series([4, 5, 6])
-X.iloc[1][0] = pd.Series([1, 2, 3])
-X.iloc[1][1] = pd.Series([4, 55, 6])
-X.iloc[2][0] = pd.Series([1, 2, 3])
-X.iloc[2][1] = pd.Series([42, 5, 6])
+X["var_0"] = pd.Series(
+ [pd.Series([1, 2, 3]), pd.Series([1, 2, 3]), pd.Series([1, 2, 3])]
+)
+
+X["var_1"] = pd.Series(
+ [pd.Series([4, 5, 6]), pd.Series([4, 55, 6]), pd.Series([42, 5, 6])]
+)
example_dict[("nested_univ", "Panel", 0)] = X
example_dict_lossy[("nested_univ", "Panel", 0)] = False
| {"golden_diff": "diff --git a/sktime/datatypes/_panel/_examples.py b/sktime/datatypes/_panel/_examples.py\n--- a/sktime/datatypes/_panel/_examples.py\n+++ b/sktime/datatypes/_panel/_examples.py\n@@ -64,12 +64,13 @@\n \n cols = [f\"var_{i}\" for i in range(2)]\n X = pd.DataFrame(columns=cols, index=[0, 1, 2])\n-X.iloc[0][0] = pd.Series([1, 2, 3])\n-X.iloc[0][1] = pd.Series([4, 5, 6])\n-X.iloc[1][0] = pd.Series([1, 2, 3])\n-X.iloc[1][1] = pd.Series([4, 55, 6])\n-X.iloc[2][0] = pd.Series([1, 2, 3])\n-X.iloc[2][1] = pd.Series([42, 5, 6])\n+X[\"var_0\"] = pd.Series(\n+ [pd.Series([1, 2, 3]), pd.Series([1, 2, 3]), pd.Series([1, 2, 3])]\n+)\n+\n+X[\"var_1\"] = pd.Series(\n+ [pd.Series([4, 5, 6]), pd.Series([4, 55, 6]), pd.Series([42, 5, 6])]\n+)\n \n example_dict[(\"nested_univ\", \"Panel\", 0)] = X\n example_dict_lossy[(\"nested_univ\", \"Panel\", 0)] = False\n", "issue": "[BUG] sktime.datatypes._panel._examples raises pandas.core.common.SettingWithCopyError\n**Describe the bug**\r\nAttempting to install [tsai](https://pypi.org/project/tsai/) as an upstream package also installs this package, but the install raises an error that traces to line 67 in \"/opt/conda/lib/python3.8/site-packages/sktime/datatypes/_panel/_examples.py\"\r\n```\r\n X.iloc[0][0] = pd.Series([1, 2, 3])\r\n```\r\n\r\n**To Reproduce**\r\nImporting any code that executes the code starting at line 67 of /sktime/datatypes/_panel/_examples.py can raise a Pandas error, depending on Pandas version that may be installed\r\n\r\n**Expected behavior**\r\nNo error should be raised on install or import of sktime as a dependency. \r\n\r\n**Additional context**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\n\r\n**Versions**\r\nSystem:\r\n python: 3.9.1 (default, Sep 16 2021, 11:42:30) [Clang 12.0.5 (clang-1205.0.22.11)]\r\nexecutable: /.../.pyenv/versions/3.9.1/bin/python\r\n machine: macOS-11.6-x86_64-i386-64bit\r\n\r\nPython dependencies:\r\n pip: 21.2.4\r\n setuptools: 49.2.1\r\n sklearn: 1.0\r\n sktime: 0.8.0\r\n statsmodels: 0.12.2\r\n numpy: 1.20.3\r\n scipy: 1.7.1\r\n Cython: None\r\n pandas: 1.3.3\r\n matplotlib: 3.4.3\r\n joblib: 1.0.1\r\n numba: 0.53.1\r\n pmdarima: None\r\n tsfresh: 0.18.0\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Example generation for testing.\n\nExports dict of examples, useful for testing as fixtures.\n\nexample_dict: dict indexed by triple\n 1st element = mtype - str\n 2nd element = considered as this scitype - str\n 3rd element = int - index of example\nelements are data objects, considered examples for the mtype\n all examples with same index are considered \"same\" on scitype content\n if None, indicates that representation is not possible\n\nexample_lossy: dict of bool indexed by pairs of str\n 1st element = mtype - str\n 2nd element = considered as this scitype - str\n 3rd element = int - index of example\nelements are bool, indicate whether representation has information removed\n all examples with same index are considered \"same\" on scitype content\n\noverall, conversions from non-lossy representations to any other ones\n should yield the element exactly, identidally (given same index)\n\"\"\"\n\nimport pandas as pd\nimport numpy as np\n\nexample_dict = dict()\nexample_dict_lossy = dict()\n\n###\n\n\nX = np.array(\n [[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 55, 6]], [[1, 2, 3], [42, 5, 6]]],\n dtype=np.int64,\n)\n\nexample_dict[(\"numpy3D\", \"Panel\", 0)] = X\nexample_dict_lossy[(\"numpy3D\", \"Panel\", 0)] = False\n\ncols = [f\"var_{i}\" for i in 
range(2)]\nXlist = [\n pd.DataFrame([[1, 4], [2, 5], [3, 6]], columns=cols),\n pd.DataFrame([[1, 4], [2, 55], [3, 6]], columns=cols),\n pd.DataFrame([[1, 42], [2, 5], [3, 6]], columns=cols),\n]\n\nexample_dict[(\"df-list\", \"Panel\", 0)] = Xlist\nexample_dict_lossy[(\"df-list\", \"Panel\", 0)] = False\n\ncols = [\"instances\", \"timepoints\"] + [f\"var_{i}\" for i in range(2)]\n\nXlist = [\n pd.DataFrame([[0, 0, 1, 4], [0, 1, 2, 5], [0, 2, 3, 6]], columns=cols),\n pd.DataFrame([[1, 0, 1, 4], [1, 1, 2, 55], [1, 2, 3, 6]], columns=cols),\n pd.DataFrame([[2, 0, 1, 42], [2, 1, 2, 5], [2, 2, 3, 6]], columns=cols),\n]\nX = pd.concat(Xlist)\nX = X.set_index([\"instances\", \"timepoints\"])\n\nexample_dict[(\"pd-multiindex\", \"Panel\", 0)] = X\nexample_dict_lossy[(\"pd-multiindex\", \"Panel\", 0)] = False\n\ncols = [f\"var_{i}\" for i in range(2)]\nX = pd.DataFrame(columns=cols, index=[0, 1, 2])\nX.iloc[0][0] = pd.Series([1, 2, 3])\nX.iloc[0][1] = pd.Series([4, 5, 6])\nX.iloc[1][0] = pd.Series([1, 2, 3])\nX.iloc[1][1] = pd.Series([4, 55, 6])\nX.iloc[2][0] = pd.Series([1, 2, 3])\nX.iloc[2][1] = pd.Series([42, 5, 6])\n\nexample_dict[(\"nested_univ\", \"Panel\", 0)] = X\nexample_dict_lossy[(\"nested_univ\", \"Panel\", 0)] = False\n", "path": "sktime/datatypes/_panel/_examples.py"}]} | 2,035 | 355 |
gh_patches_debug_319 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-730 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
better identification of Solaar versions
`git describe` produces `0.9.2-339-g39791be`; instead it should produce something based on 1.0.1.
`git describe --tags` produces `1.0.1-58-g39791be`, which is much better.
I think all that is required is to upgrade the 1.0.1 tag that already exists.
</issue>
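The two commands differ only in tag selection: plain `git describe` considers annotated tags only, while `--tags` also matches lightweight tags, which would explain why the first form skips the newer `1.0.1` tag here. A hypothetical helper (not Solaar's actual code, which hard-codes `__version__`) that derives a version string this way:
```python
import subprocess

def git_version(fallback: str = "1.0.1") -> str:
    """Return `git describe --tags` output, e.g. '1.0.1-58-g39791be'."""
    try:
        return subprocess.check_output(
            ["git", "describe", "--tags", "--dirty"], text=True
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return fallback  # not a git checkout, e.g. an installed package
```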
<code>
[start of lib/solaar/__init__.py]
1 # -*- python-mode -*-
2 # -*- coding: UTF-8 -*-
3
4 ## Copyright (C) 2012-2013 Daniel Pavel
5 ##
6 ## This program is free software; you can redistribute it and/or modify
7 ## it under the terms of the GNU General Public License as published by
8 ## the Free Software Foundation; either version 2 of the License, or
9 ## (at your option) any later version.
10 ##
11 ## This program is distributed in the hope that it will be useful,
12 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
13 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 ## GNU General Public License for more details.
15 ##
16 ## You should have received a copy of the GNU General Public License along
17 ## with this program; if not, write to the Free Software Foundation, Inc.,
18 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
19
20 from __future__ import absolute_import, division, print_function, unicode_literals
21
22 __version__ = '1.0.1'
23 NAME = 'Solaar'
24
[end of lib/solaar/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/solaar/__init__.py b/lib/solaar/__init__.py
--- a/lib/solaar/__init__.py
+++ b/lib/solaar/__init__.py
@@ -19,5 +19,5 @@
from __future__ import absolute_import, division, print_function, unicode_literals
-__version__ = '1.0.1'
+__version__ = '1.0.2-rc1'
NAME = 'Solaar'
| {"golden_diff": "diff --git a/lib/solaar/__init__.py b/lib/solaar/__init__.py\n--- a/lib/solaar/__init__.py\n+++ b/lib/solaar/__init__.py\n@@ -19,5 +19,5 @@\n \n from __future__ import absolute_import, division, print_function, unicode_literals\n \n-__version__ = '1.0.1'\n+__version__ = '1.0.2-rc1'\n NAME = 'Solaar'\n", "issue": "better identification of Solaar versions\n`git describe` produces\r\n0.9.2-339-g39791be\r\nInstead it should produce something based on 1.0.1\r\n`git describe --tags` produces\r\n1.0.1-58-g39791be\r\nwhich is much better.\r\n\r\nI think that all that is required is to upgrade the 1.0.1 tag that already exists.\n", "before_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\n__version__ = '1.0.1'\nNAME = 'Solaar'\n", "path": "lib/solaar/__init__.py"}]} | 911 | 107 |
gh_patches_debug_9857 | rasdani/github-patches | git_diff | saulpw__visidata-2160 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[parquet] can't load parquet directory anymore: `IsADirectoryError`
**Small description**
Hi @saulpw @anjakefala @takacsd - it seems that forcing the path to be opened as a file with `.open()` - introduced with #2133 - breaks the use case where multiple parquet files are stored in a directory and that directory is then read by visidata. This is common with Hive partitioning or when working with Spark. A simple fix would be to check whether the path is a directory with `os.path.is_dir()` and, if so, retain the old behavior of passing it as a string to `read_table()`; if it is not an existing directory, we move to the new way of opening it as a binary buffer.
I have already added this workaround to my clone of visidata, and it fixes my issue, but maybe you have better ideas for handling it than an `if-else` statement in `ParquetSheet`.
**Expected result**
```bash
vd -f parquet parquet_dir
```
should load a parquet into visidata
**Actual result with screenshot**

**Additional context**
```bash
# freshest develop
visidata@9fd728b72c115e50e99c24b455caaf020381b48e
pyarrow==12.0.0
python 3.10.2
```
</issue>
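The workaround described above amounts to branching on the path type before handing it to pyarrow, roughly as in this sketch (using `pathlib.Path` as a stand-in for visidata's own path object, which exposes similar `is_dir()`/`open()` methods):
```python
from pathlib import Path
import pyarrow.parquet as pq

def read_parquet_any(path: Path):
    """Read a single parquet file or a directory of parquet files."""
    if path.is_dir():
        # pyarrow accepts a directory path and loads every fragment inside it,
        # which covers Hive-partitioned or Spark-written datasets.
        return pq.read_table(str(path))
    with path.open("rb") as f:  # single file: keep the buffer-based path
        return pq.read_table(f)
```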
<code>
[start of visidata/loaders/parquet.py]
1 from visidata import Sheet, VisiData, TypedWrapper, anytype, date, vlen, Column, vd
2 from collections import defaultdict
3
4
5 @VisiData.api
6 def open_parquet(vd, p):
7 return ParquetSheet(p.name, source=p)
8
9
10 class ParquetColumn(Column):
11 def calcValue(self, row):
12 val = self.source[row["__rownum__"]]
13 if val.type == 'large_string':
14 return memoryview(val.as_buffer())[:2**20].tobytes().decode('utf-8')
15 else:
16 return val.as_py()
17
18
19 class ParquetSheet(Sheet):
20 # rowdef: {'__rownum__':int, parquet_col:overridden_value, ...}
21 def iterload(self):
22 pa = vd.importExternal("pyarrow", "pyarrow")
23 pq = vd.importExternal("pyarrow.parquet", "pyarrow")
24 from visidata.loaders.arrow import arrow_to_vdtype
25
26 with self.source.open('rb') as f:
27 self.tbl = pq.read_table(f)
28
29 self.columns = []
30 for colname, col in zip(self.tbl.column_names, self.tbl.columns):
31 c = ParquetColumn(colname,
32 type=arrow_to_vdtype(col.type),
33 source=col,
34 cache=(col.type.id == pa.lib.Type_LARGE_STRING))
35 self.addColumn(c)
36
37 for i in range(self.tbl.num_rows):
38 yield dict(__rownum__=i)
39
40
41 @VisiData.api
42 def save_parquet(vd, p, sheet):
43 pa = vd.importExternal("pyarrow")
44 pq = vd.importExternal("pyarrow.parquet", "pyarrow")
45
46 typemap = {
47 anytype: pa.string(),
48 int: pa.int64(),
49 vlen: pa.int64(),
50 float: pa.float64(),
51 str: pa.string(),
52 date: pa.date64(),
53 # list: pa.array(),
54 }
55
56 for t in vd.numericTypes:
57 if t not in typemap:
58 typemap[t] = pa.float64()
59
60 databycol = defaultdict(list) # col -> [values]
61
62 for typedvals in sheet.iterdispvals(format=False):
63 for col, val in typedvals.items():
64 if isinstance(val, TypedWrapper):
65 val = None
66
67 databycol[col].append(val)
68
69 data = [
70 pa.array(vals, type=typemap.get(col.type, pa.string()))
71 for col, vals in databycol.items()
72 ]
73
74 schema = pa.schema(
75 [(c.name, typemap.get(c.type, pa.string())) for c in sheet.visibleCols]
76 )
77 with p.open_bytes(mode="w") as outf:
78 with pq.ParquetWriter(outf, schema) as writer:
79 writer.write_batch(
80 pa.record_batch(data, names=[c.name for c in sheet.visibleCols])
81 )
82
[end of visidata/loaders/parquet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/loaders/parquet.py b/visidata/loaders/parquet.py
--- a/visidata/loaders/parquet.py
+++ b/visidata/loaders/parquet.py
@@ -23,8 +23,11 @@
pq = vd.importExternal("pyarrow.parquet", "pyarrow")
from visidata.loaders.arrow import arrow_to_vdtype
- with self.source.open('rb') as f:
- self.tbl = pq.read_table(f)
+ if self.source.is_dir():
+ self.tbl = pq.read_table(str(self.source))
+ else:
+ with self.source.open('rb') as f:
+ self.tbl = pq.read_table(f)
self.columns = []
for colname, col in zip(self.tbl.column_names, self.tbl.columns):
| {"golden_diff": "diff --git a/visidata/loaders/parquet.py b/visidata/loaders/parquet.py\n--- a/visidata/loaders/parquet.py\n+++ b/visidata/loaders/parquet.py\n@@ -23,8 +23,11 @@\n pq = vd.importExternal(\"pyarrow.parquet\", \"pyarrow\")\n from visidata.loaders.arrow import arrow_to_vdtype\n \n- with self.source.open('rb') as f:\n- self.tbl = pq.read_table(f)\n+ if self.source.is_dir():\n+ self.tbl = pq.read_table(str(self.source))\n+ else: \n+ with self.source.open('rb') as f:\n+ self.tbl = pq.read_table(f)\n \n self.columns = []\n for colname, col in zip(self.tbl.column_names, self.tbl.columns):\n", "issue": "[parquet] can't load parquet directory anymore: `IsADirectoryError`\n**Small description**\r\n\r\nHi @saulpw @anjakefala @takacsd - it seems that forcing opening the path as file with `.open()` - introduced with #2133 - breaks the use case where the multiple parquet files are stored in a directory, and this directory is then read by visidata. This is common with Hive partitioning or when working with spark. A simple fix would be to check if the path is a directory with `os.path.is_dir()` and then retaining old behavior of passing it as a string to `read_table()`. If it is not an existing directory, we move to the new way of opening as a binary buffer.\r\n\r\nI have already added this workaround to my clone of visidata, and it fixes my issue, but maybe you have some better ideas how to handle it instead of `if-else` statement in the `ParquetSheet`.\r\n\r\n**Expected result**\r\n\r\n```bash\r\nvd -f parquet parquet_dir\r\n```\r\nshould load a parquet into visidata\r\n\r\n**Actual result with screenshot**\r\n\r\n\r\n**Additional context**\r\n\r\n```bash\r\n# freshest develop\r\nvisidata@9fd728b72c115e50e99c24b455caaf020381b48e\r\n\r\npyarrow==12.0.0\r\npython 3.10.2\r\n```\r\n\n", "before_files": [{"content": "from visidata import Sheet, VisiData, TypedWrapper, anytype, date, vlen, Column, vd\nfrom collections import defaultdict\n\n\[email protected]\ndef open_parquet(vd, p):\n return ParquetSheet(p.name, source=p)\n\n\nclass ParquetColumn(Column):\n def calcValue(self, row):\n val = self.source[row[\"__rownum__\"]]\n if val.type == 'large_string':\n return memoryview(val.as_buffer())[:2**20].tobytes().decode('utf-8')\n else:\n return val.as_py()\n\n\nclass ParquetSheet(Sheet):\n # rowdef: {'__rownum__':int, parquet_col:overridden_value, ...}\n def iterload(self):\n pa = vd.importExternal(\"pyarrow\", \"pyarrow\")\n pq = vd.importExternal(\"pyarrow.parquet\", \"pyarrow\")\n from visidata.loaders.arrow import arrow_to_vdtype\n\n with self.source.open('rb') as f:\n self.tbl = pq.read_table(f)\n\n self.columns = []\n for colname, col in zip(self.tbl.column_names, self.tbl.columns):\n c = ParquetColumn(colname,\n type=arrow_to_vdtype(col.type),\n source=col,\n cache=(col.type.id == pa.lib.Type_LARGE_STRING))\n self.addColumn(c)\n\n for i in range(self.tbl.num_rows):\n yield dict(__rownum__=i)\n\n\[email protected]\ndef save_parquet(vd, p, sheet):\n pa = vd.importExternal(\"pyarrow\")\n pq = vd.importExternal(\"pyarrow.parquet\", \"pyarrow\")\n\n typemap = {\n anytype: pa.string(),\n int: pa.int64(),\n vlen: pa.int64(),\n float: pa.float64(),\n str: pa.string(),\n date: pa.date64(),\n # list: pa.array(),\n }\n\n for t in vd.numericTypes:\n if t not in typemap:\n typemap[t] = pa.float64()\n\n databycol = defaultdict(list) # col -> [values]\n\n for typedvals in sheet.iterdispvals(format=False):\n for col, val in typedvals.items():\n if isinstance(val, TypedWrapper):\n val = None\n\n 
databycol[col].append(val)\n\n data = [\n pa.array(vals, type=typemap.get(col.type, pa.string()))\n for col, vals in databycol.items()\n ]\n\n schema = pa.schema(\n [(c.name, typemap.get(c.type, pa.string())) for c in sheet.visibleCols]\n )\n with p.open_bytes(mode=\"w\") as outf:\n with pq.ParquetWriter(outf, schema) as writer:\n writer.write_batch(\n pa.record_batch(data, names=[c.name for c in sheet.visibleCols])\n )\n", "path": "visidata/loaders/parquet.py"}]} | 1,694 | 182 |
gh_patches_debug_33730 | rasdani/github-patches | git_diff | sonic-net__sonic-mgmt-1253 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker shell module failed in the latest sonic-mgmt
**Description**
The task with shell type docker always fails on the latest sonic-mgmt; for example:
```
- name: Gather information from lldp
  lldp:
  vars:
    ansible_shell_type: docker
    ansible_python_interpreter: docker exec -i lldp python
```
**Steps to reproduce the issue:**
1. run dip_sip or lag_2 CT
**Describe the results you received:**
```
TASK [test : Gathering peer VM information from lldp] **************************
task path: /var/user/jenkins/bfn-sonic-mgmt/ansible/roles/test/tasks/lag_2.yml:26
Thursday 28 November 2019 10:28:47 +0000 (0:00:00.263) 0:00:26.753 *****
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 145, in run
res = self._execute()
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 612, in _execute
self._set_connection_options(variables, templar)
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 1012, in _set_connection_options
self._set_plugin_options('shell', final_vars, templar, task_keys)
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 978, in _set_plugin_options
plugin.set_options(task_keys=task_keys, var_options=options)
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/shell/__init__.py", line 70, in set_options
env = self.get_option('environment')
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'Requested entry (plugin_type: shell plugin: docker setting: environment ) was not defined in configuration.'
fatal: [cab18-2-dut]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
**Describe the results you expected:**
**Additional information you deem important:**
**Output of `show version`:**
```
SONiC Software Version: SONiC.HEAD.488-dirty-20191127.082217
Distribution: Debian 9.11
Kernel: 4.9.0-9-2-amd64
Build commit: 7622a30d
Build date: Wed Nov 27 11:15:51 UTC 2019
Built by: johnar@jenkins-worker-11
```
</issue>
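The `KeyError` in the traceback comes from `get_option('environment')`: newer Ansible releases resolve a shell plugin's settings from its `DOCUMENTATION` block, so a custom plugin that never declares the shared `shell_common` fragment has no `environment` setting registered. A sketch of the missing module-level declaration, close to what the fix below adds:
```python
# Declared at module level in the custom shell plugin so Ansible registers the
# common shell settings (environment, remote_tmp, ...) for this plugin.
DOCUMENTATION = '''
name: docker
plugin_type: shell
short_description: docker shell plugin
description:
  - Execute modules through "docker exec" inside a container on the remote host
extends_documentation_fragment:
  - shell_common
'''
```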
<code>
[start of ansible/shell_plugins/docker.py]
1 from __future__ import (absolute_import, division)
2 __metaclass__ = type
3
4 import os
5 import re
6 import pipes
7 import ansible.constants as C
8 import time
9 import random
10 import shlex
11 import getopt
12 from ansible.module_utils.six import text_type
13 from ansible.plugins.shell.sh import ShellModule as sh
14 from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
15
16 class ShellModule(sh):
17
18 def __init__(self, *args, **kwargs):
19 super(ShellModule, self).__init__(*args, **kwargs)
20 self.dtemps = []
21
22 def join_path(self, *args):
23 ## HACK! HACK! HACK!
24 ## We observe the interactions between ShellModule and ActionModule, and
25 ## find the temporary directories Ansible created on remote machine. So we
26 ## collect them and copied to docker container in build_module_command
27 if len(args) >= 2 and (args[0].startswith('/home/') or args[0].startswith('/root/')) and args[1] == '':
28 self.dtemps.append(args[0])
29
30 return super(ShellModule, self).join_path(*args)
31
32 def build_module_command(self, env_string, shebang, cmd, arg_path=None, rm_tmp=None):
33 # assert(self.container_name)
34 argv = shlex.split(shebang.replace("#!", ""))
35 assert(argv[0] == 'docker')
36 assert(argv[1] == 'exec')
37 opts, args = getopt.getopt(argv[2:], 'i')
38 self.container_name = args[0]
39
40 # Inject environment variable before python in the shebang string
41 assert(args[1].endswith('python'))
42 args[1] = 'env {0} {1}'.format(env_string, args[1])
43 argv_env = argv[0:2] + [o for opt in opts for o in opt] + args
44 shebang_env = ' '.join(argv_env)
45
46 ## Note: Docker cp behavior
47 ## DEST_PATH exists and is a directory
48 ## SRC_PATH does end with /.
49 ## the content of the source directory is copied into this directory
50 ## Ref: https://docs.docker.com/engine/reference/commandline/cp/
51 pre = ''.join('docker exec {1} mkdir -p {0}; docker cp {0}/. {1}:{0}; '
52 .format(dtemp, self.container_name) for dtemp in self.dtemps)
53
54 if rm_tmp:
55 post = ''.join('docker exec {1} rm -rf {0}; '
56 .format(dtemp, self.container_name) for dtemp in self.dtemps)
57 else:
58 post = ''
59
60 return pre + super(ShellModule, self).build_module_command('', shebang_env, cmd, arg_path, rm_tmp) + '; ' + post
61
62 def checksum(self, path, python_interp):
63 """
64 Return the command to calculate the checksum for the file in ansible controlled machine
65 Arguments:
66 path:
67 the file path
68 python_interp:
69 the path for the python interpreter
70 Example:
71 path:
72 /zebra.conf
73 python_interp:
74 docker exec -i debian python
75 cmd:
76 rc=flag; [ -r /zebra.conf ] || rc=2; [ -f /zebra.conf ] || rc=1; [ -d /zebra.conf ] && rc=3; python -V 2>/dev/null || rc=4; [ x"$rc" != "xflag" ] && echo "${rc} "/zebra.conf && exit 0; (python -c '...' 2>/dev/null) || (echo '0 '/zebra.conf)
77 returns:
78 docker exec -i debian sh -c "rc=flag; [ -r /zebra.conf ] || rc=2; [ -f /zebra.conf ] || rc=1; [ -d /zebra.conf ] && rc=3; python -V 2>/dev/null || rc=4; [ x\"\$rc\" != \"xflag\" ] && echo \"\${rc} \"/zebra.conf && exit 0; (python -c '...' 2>/dev/null) || (echo '0 '/zebra.conf)"
79 """
80 ## Super class implements this function by sh commands and python scripts
81 ## If python_interp is modified to 'docker CONTAINER python', it will only influence the python
82 ## script part in super class. Instead we should influence both
83 simple_interp = 'python'
84 assert(python_interp.startswith('docker exec '))
85 assert(python_interp.endswith(' ' + simple_interp))
86
87 docker_prefix = re.sub(simple_interp, '', python_interp)
88 cmd = super(ShellModule, self).checksum(path, simple_interp)
89 ## Escape the cmd:
90 ## " --> \"
91 cmd_escaped = cmd.replace('"', '\\"')
92 ## $ --> \$
93 cmd_escaped = cmd_escaped.replace('$', '\\$')
94 return '%s sh -c "%s"' % (docker_prefix, cmd_escaped)
95
[end of ansible/shell_plugins/docker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ansible/shell_plugins/docker.py b/ansible/shell_plugins/docker.py
--- a/ansible/shell_plugins/docker.py
+++ b/ansible/shell_plugins/docker.py
@@ -1,6 +1,17 @@
from __future__ import (absolute_import, division)
__metaclass__ = type
+DOCUMENTATION = '''
+name: docker
+plugin_type: shell
+short_description: "docker shell plugin"
+version_added: historical
+description:
+ - This module allows you to execute commands directly in docker on the remote host
+extends_documentation_fragment:
+ - shell_common
+'''
+
import os
import re
import pipes
@@ -29,7 +40,24 @@
return super(ShellModule, self).join_path(*args)
- def build_module_command(self, env_string, shebang, cmd, arg_path=None, rm_tmp=None):
+ def remove(self, path, recurse=False):
+ argv = self.get_option('ansible_python_interpreter').split()
+ assert(argv[0] == 'docker')
+ assert(argv[1] == 'exec')
+ opts, args = getopt.getopt(argv[2:], 'i')
+ self.container_name = args[0]
+
+ remove_files_on_host_cmd = super(ShellModule, self).remove(path, recurse)
+
+ cmd = remove_files_on_host_cmd + "; docker exec -i "
+ cmd += self.container_name + " rm -f "
+ if recurse:
+ cmd += '-r '
+ cmd += " ".join(self.dtemps)
+
+ return cmd
+
+ def build_module_command(self, env_string, shebang, cmd, arg_path=None):
# assert(self.container_name)
argv = shlex.split(shebang.replace("#!", ""))
assert(argv[0] == 'docker')
@@ -51,13 +79,7 @@
pre = ''.join('docker exec {1} mkdir -p {0}; docker cp {0}/. {1}:{0}; '
.format(dtemp, self.container_name) for dtemp in self.dtemps)
- if rm_tmp:
- post = ''.join('docker exec {1} rm -rf {0}; '
- .format(dtemp, self.container_name) for dtemp in self.dtemps)
- else:
- post = ''
-
- return pre + super(ShellModule, self).build_module_command('', shebang_env, cmd, arg_path, rm_tmp) + '; ' + post
+ return pre + super(ShellModule, self).build_module_command('', shebang_env, cmd, arg_path)
def checksum(self, path, python_interp):
"""
| {"golden_diff": "diff --git a/ansible/shell_plugins/docker.py b/ansible/shell_plugins/docker.py\n--- a/ansible/shell_plugins/docker.py\n+++ b/ansible/shell_plugins/docker.py\n@@ -1,6 +1,17 @@\n from __future__ import (absolute_import, division)\n __metaclass__ = type\n \n+DOCUMENTATION = '''\n+name: docker\n+plugin_type: shell\n+short_description: \"docker shell plugin\"\n+version_added: historical\n+description:\n+ - This module allows you to execute commands directly in docker on the remote host\n+extends_documentation_fragment:\n+ - shell_common\n+'''\n+\n import os\n import re\n import pipes\n@@ -29,7 +40,24 @@\n \n return super(ShellModule, self).join_path(*args)\n \n- def build_module_command(self, env_string, shebang, cmd, arg_path=None, rm_tmp=None):\n+ def remove(self, path, recurse=False):\n+ argv = self.get_option('ansible_python_interpreter').split()\n+ assert(argv[0] == 'docker')\n+ assert(argv[1] == 'exec')\n+ opts, args = getopt.getopt(argv[2:], 'i')\n+ self.container_name = args[0]\n+\n+ remove_files_on_host_cmd = super(ShellModule, self).remove(path, recurse)\n+\n+ cmd = remove_files_on_host_cmd + \"; docker exec -i \"\n+ cmd += self.container_name + \" rm -f \"\n+ if recurse:\n+ cmd += '-r '\n+ cmd += \" \".join(self.dtemps)\n+\n+ return cmd\n+\n+ def build_module_command(self, env_string, shebang, cmd, arg_path=None):\n # assert(self.container_name)\n argv = shlex.split(shebang.replace(\"#!\", \"\"))\n assert(argv[0] == 'docker')\n@@ -51,13 +79,7 @@\n pre = ''.join('docker exec {1} mkdir -p {0}; docker cp {0}/. {1}:{0}; '\n .format(dtemp, self.container_name) for dtemp in self.dtemps)\n \n- if rm_tmp:\n- post = ''.join('docker exec {1} rm -rf {0}; '\n- .format(dtemp, self.container_name) for dtemp in self.dtemps)\n- else:\n- post = ''\n-\n- return pre + super(ShellModule, self).build_module_command('', shebang_env, cmd, arg_path, rm_tmp) + '; ' + post\n+ return pre + super(ShellModule, self).build_module_command('', shebang_env, cmd, arg_path)\n \n def checksum(self, path, python_interp):\n \"\"\"\n", "issue": "Docker shell module failed in the latest sonic-mgmt\n<!--\r\nIf you are reporting a new issue, make sure that we do not have any duplicates\r\nalready open. You can ensure this by searching the issue list for this\r\nrepository. If there is a duplicate, please close your issue and add a comment\r\nto the existing issue instead.\r\n\r\nIf you suspect your issue is a bug, please edit your issue description to\r\ninclude the BUG REPORT INFORMATION shown below. If you fail to provide this\r\ninformation within 7 days, we cannot debug your issue and will close it. 
We\r\nwill, however, reopen it if you later provide the information.\r\n\r\nFor more information about reporting issues, see\r\nhttps://github.com/Azure/SONiC/wiki#report-issues\r\n\r\n---------------------------------------------------\r\nGENERAL SUPPORT INFORMATION\r\n---------------------------------------------------\r\n\r\nThe GitHub issue tracker is for bug reports and feature requests.\r\nGeneral support can be found at the following locations:\r\n\r\n- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject\r\n\r\n---------------------------------------------------\r\nBUG REPORT INFORMATION\r\n---------------------------------------------------\r\nUse the commands below to provide key information from your environment:\r\nYou do NOT have to include this information if this is a FEATURE REQUEST\r\n-->\r\n\r\n**Description**\r\nThe task with sell type docker always fails on the latest sonic-mgmt, example:\r\n```\r\n- name: Gather information from lldp\r\n lldp:\r\n vars:\r\n ansible_shell_type: docker\r\n ansible_python_interpreter: docker exec -i lldp python\r\n```\r\n\r\n<!--\r\nBriefly describe the problem you are having in a few paragraphs.\r\n-->\r\n\r\n**Steps to reproduce the issue:**\r\n1. run dip_sip or lag_2 CT \r\n2.\r\n3.\r\n\r\n**Describe the results you received:**\r\n```\r\nTASK [test : Gathering peer VM information from lldp] **************************\r\ntask path: /var/user/jenkins/bfn-sonic-mgmt/ansible/roles/test/tasks/lag_2.yml:26\r\nThursday 28 November 2019 10:28:47 +0000 (0:00:00.263) 0:00:26.753 ***** \r\nThe full traceback is:\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py\", line 145, in run\r\n res = self._execute()\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py\", line 612, in _execute\r\n self._set_connection_options(variables, templar)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py\", line 1012, in _set_connection_options\r\n self._set_plugin_options('shell', final_vars, templar, task_keys)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_executor.py\", line 978, in _set_plugin_options\r\n plugin.set_options(task_keys=task_keys, var_options=options)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/plugins/shell/__init__.py\", line 70, in set_options\r\n env = self.get_option('environment')\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/plugins/__init__.py\", line 60, in get_option\r\n raise KeyError(to_native(e))\r\nKeyError: 'Requested entry (plugin_type: shell plugin: docker setting: environment ) was not defined in configuration.'\r\n\r\nfatal: [cab18-2-dut]: FAILED! 
=> {\r\n \"msg\": \"Unexpected failure during module execution.\", \r\n \"stdout\": \"\"\r\n}\r\n\r\n```\r\n\r\n**Describe the results you expected:**\r\n\r\n\r\n**Additional information you deem important:**\r\n<!--\r\nsoftware/ASIC/Hardware Flatform version and info\r\n-->\r\n **Output of `show version`:**\r\n\r\n ```\r\nSONiC Software Version: SONiC.HEAD.488-dirty-20191127.082217\r\nDistribution: Debian 9.11\r\nKernel: 4.9.0-9-2-amd64\r\nBuild commit: 7622a30d\r\nBuild date: Wed Nov 27 11:15:51 UTC 2019\r\nBuilt by: johnar@jenkins-worker-11\r\n\r\n ```\r\n\r\n **Attach debug file `sudo generate_dump`:**\r\n\r\n ```\r\n (paste your output here)\r\n ```\r\n\n", "before_files": [{"content": "from __future__ import (absolute_import, division)\n__metaclass__ = type\n\nimport os\nimport re\nimport pipes\nimport ansible.constants as C\nimport time\nimport random\nimport shlex\nimport getopt\nfrom ansible.module_utils.six import text_type\nfrom ansible.plugins.shell.sh import ShellModule as sh\nfrom ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound\n\nclass ShellModule(sh):\n\n def __init__(self, *args, **kwargs):\n super(ShellModule, self).__init__(*args, **kwargs)\n self.dtemps = []\n\n def join_path(self, *args):\n ## HACK! HACK! HACK!\n ## We observe the interactions between ShellModule and ActionModule, and\n ## find the temporary directories Ansible created on remote machine. So we\n ## collect them and copied to docker container in build_module_command\n if len(args) >= 2 and (args[0].startswith('/home/') or args[0].startswith('/root/')) and args[1] == '':\n self.dtemps.append(args[0])\n\n return super(ShellModule, self).join_path(*args)\n\n def build_module_command(self, env_string, shebang, cmd, arg_path=None, rm_tmp=None):\n # assert(self.container_name)\n argv = shlex.split(shebang.replace(\"#!\", \"\"))\n assert(argv[0] == 'docker')\n assert(argv[1] == 'exec')\n opts, args = getopt.getopt(argv[2:], 'i')\n self.container_name = args[0]\n\n # Inject environment variable before python in the shebang string\n assert(args[1].endswith('python'))\n args[1] = 'env {0} {1}'.format(env_string, args[1])\n argv_env = argv[0:2] + [o for opt in opts for o in opt] + args\n shebang_env = ' '.join(argv_env)\n\n ## Note: Docker cp behavior\n ## DEST_PATH exists and is a directory\n ## SRC_PATH does end with /.\n ## the content of the source directory is copied into this directory\n ## Ref: https://docs.docker.com/engine/reference/commandline/cp/\n pre = ''.join('docker exec {1} mkdir -p {0}; docker cp {0}/. {1}:{0}; '\n .format(dtemp, self.container_name) for dtemp in self.dtemps)\n\n if rm_tmp:\n post = ''.join('docker exec {1} rm -rf {0}; '\n .format(dtemp, self.container_name) for dtemp in self.dtemps)\n else:\n post = ''\n\n return pre + super(ShellModule, self).build_module_command('', shebang_env, cmd, arg_path, rm_tmp) + '; ' + post\n\n def checksum(self, path, python_interp):\n \"\"\"\n Return the command to calculate the checksum for the file in ansible controlled machine\n Arguments:\n path:\n the file path\n python_interp:\n the path for the python interpreter\n Example:\n path:\n /zebra.conf\n python_interp:\n docker exec -i debian python\n cmd:\n rc=flag; [ -r /zebra.conf ] || rc=2; [ -f /zebra.conf ] || rc=1; [ -d /zebra.conf ] && rc=3; python -V 2>/dev/null || rc=4; [ x\"$rc\" != \"xflag\" ] && echo \"${rc} \"/zebra.conf && exit 0; (python -c '...' 
2>/dev/null) || (echo '0 '/zebra.conf)\n returns:\n docker exec -i debian sh -c \"rc=flag; [ -r /zebra.conf ] || rc=2; [ -f /zebra.conf ] || rc=1; [ -d /zebra.conf ] && rc=3; python -V 2>/dev/null || rc=4; [ x\\\"\\$rc\\\" != \\\"xflag\\\" ] && echo \\\"\\${rc} \\\"/zebra.conf && exit 0; (python -c '...' 2>/dev/null) || (echo '0 '/zebra.conf)\"\n \"\"\"\n ## Super class implements this function by sh commands and python scripts\n ## If python_interp is modified to 'docker CONTAINER python', it will only influence the python\n ## script part in super class. Instead we should influence both\n simple_interp = 'python'\n assert(python_interp.startswith('docker exec '))\n assert(python_interp.endswith(' ' + simple_interp))\n\n docker_prefix = re.sub(simple_interp, '', python_interp)\n cmd = super(ShellModule, self).checksum(path, simple_interp)\n ## Escape the cmd:\n ## \" --> \\\"\n cmd_escaped = cmd.replace('\"', '\\\\\"')\n ## $ --> \\$\n cmd_escaped = cmd_escaped.replace('$', '\\\\$')\n return '%s sh -c \"%s\"' % (docker_prefix, cmd_escaped)\n", "path": "ansible/shell_plugins/docker.py"}]} | 2,780 | 590 |
gh_patches_debug_22540 | rasdani/github-patches | git_diff | Kinto__kinto-1087 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Port Kinto for Pyramid 1.8
Pyramid 1.8 is breaking a number of things:
- BasicAuth authn policy
- pserve http_port config.
Right now it is still an alpha release, but it gets installed from time to time.
</issue>
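For the BasicAuth part, the policy's reliance on Pyramid's private `_get_credentials` helper is what breaks; newer Pyramid exposes the parsing as a module-level `extract_http_basic_credentials(request)` helper that returns a `(username, password)` pair or `None`, which is the substitution the fix below makes. A sketch of the adapted lookup, assuming a Pyramid version that ships that helper:
```python
from pyramid import authentication as base_auth
from kinto.core import utils

def basicauth_userid(request, hmac_secret):
    """Derive the hmac-based userid from the Basic Auth header, if any."""
    credentials = base_auth.extract_http_basic_credentials(request)
    if credentials is None or not credentials[0]:
        return None
    return utils.hmac_digest(hmac_secret, "{}:{}".format(*credentials))
```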
<code>
[start of kinto/core/__init__.py]
1 """Main entry point
2 """
3 import pkg_resources
4 import tempfile
5
6 from cornice import Service as CorniceService
7 from pyramid.settings import aslist
8
9 from kinto.core import errors
10 from kinto.core import events
11 from kinto.core.initialization import ( # NOQA
12 initialize, install_middlewares,
13 load_default_settings)
14 from kinto.core.utils import (
15 follow_subrequest, current_service, current_resource_name,
16 prefixed_userid, prefixed_principals)
17 from kinto.core.logs import logger
18
19
20 # Module version, as defined in PEP-0396.
21 __version__ = pkg_resources.get_distribution('kinto').version # FIXME?
22
23
24 DEFAULT_SETTINGS = {
25 'backoff': None,
26 'batch_max_requests': 25,
27 'cache_backend': '',
28 'cache_url': '',
29 'cache_pool_size': 25,
30 'cache_prefix': '',
31 'cache_max_size_bytes': 524288,
32 'cors_origins': '*',
33 'cors_max_age_seconds': 3600,
34 'eos': None,
35 'eos_message': None,
36 'eos_url': None,
37 'error_info_link': 'https://github.com/Kinto/kinto/issues/',
38 'http_host': None,
39 'http_scheme': None,
40 'id_generator': 'kinto.core.storage.generators.UUID4',
41 'includes': '',
42 'initialization_sequence': (
43 'kinto.core.initialization.setup_request_bound_data',
44 'kinto.core.initialization.setup_json_serializer',
45 'kinto.core.initialization.setup_logging',
46 'kinto.core.initialization.setup_storage',
47 'kinto.core.initialization.setup_permission',
48 'kinto.core.initialization.setup_cache',
49 'kinto.core.initialization.setup_requests_scheme',
50 'kinto.core.initialization.setup_version_redirection',
51 'kinto.core.initialization.setup_deprecation',
52 'kinto.core.initialization.setup_authentication',
53 'kinto.core.initialization.setup_backoff',
54 'kinto.core.initialization.setup_statsd',
55 'kinto.core.initialization.setup_listeners',
56 'kinto.core.events.setup_transaction_hook',
57 ),
58 'event_listeners': '',
59 'heartbeat_timeout_seconds': 10,
60 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',
61 'newrelic_config': None,
62 'newrelic_env': 'dev',
63 'paginate_by': None,
64 'permission_backend': '',
65 'permission_url': '',
66 'permission_pool_size': 25,
67 'profiler_dir': tempfile.gettempdir(),
68 'profiler_enabled': False,
69 'project_docs': '',
70 'project_name': '',
71 'project_version': '',
72 'readonly': False,
73 'retry_after_seconds': 30,
74 'statsd_backend': 'kinto.core.statsd',
75 'statsd_prefix': 'kinto.core',
76 'statsd_url': None,
77 'storage_backend': '',
78 'storage_url': '',
79 'storage_max_fetch_size': 10000,
80 'storage_pool_size': 25,
81 'tm.annotate_user': False, # Do annotate transactions with the user-id.
82 'transaction_per_request': True,
83 'userid_hmac_secret': '',
84 'version_json_path': 'version.json',
85 'version_prefix_redirect_enabled': True,
86 'trailing_slash_redirect_enabled': True,
87 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',
88 'multiauth.policies': 'basicauth',
89 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'
90 'BasicAuthAuthenticationPolicy'),
91 'multiauth.authorization_policy': ('kinto.core.authorization.'
92 'AuthorizationPolicy'),
93 'swagger_file': 'swagger.yaml',
94 }
95
96
97 class Service(CorniceService):
98 """Subclass of the default cornice service.
99
100 This is useful in order to attach specific behaviours without monkey
101 patching the default cornice service (which would impact other uses of it)
102 """
103 default_cors_headers = ('Backoff', 'Retry-After', 'Alert',
104 'Content-Length')
105
106 def error_handler(self, request):
107 return errors.json_error_handler(request)
108
109 @classmethod
110 def init_from_settings(cls, settings):
111 cls.cors_origins = tuple(aslist(settings['cors_origins']))
112 cors_max_age = settings['cors_max_age_seconds']
113 cls.cors_max_age = int(cors_max_age) if cors_max_age else None
114
115
116 def includeme(config):
117 settings = config.get_settings()
118
119 # Heartbeat registry.
120 config.registry.heartbeats = {}
121
122 # Public settings registry.
123 config.registry.public_settings = {'batch_max_requests', 'readonly'}
124
125 # Directive to declare arbitrary API capabilities.
126 def add_api_capability(config, identifier, description="", url="", **kw):
127 existing = config.registry.api_capabilities.get(identifier)
128 if existing:
129 error_msg = "The '{}' API capability was already registered ({})."
130 raise ValueError(error_msg.format(identifier, existing))
131
132 capability = dict(description=description, url=url, **kw)
133 config.registry.api_capabilities[identifier] = capability
134
135 config.add_directive('add_api_capability', add_api_capability)
136 config.registry.api_capabilities = {}
137
138 # Resource events helpers.
139 config.add_request_method(events.get_resource_events,
140 name='get_resource_events')
141 config.add_request_method(events.notify_resource_event,
142 name='notify_resource_event')
143
144 # Setup cornice.
145 config.include("cornice")
146
147 # Per-request transaction.
148 config.include("pyramid_tm")
149
150 # Add CORS settings to the base kinto.core Service class.
151 Service.init_from_settings(settings)
152
153 # Setup components.
154 for step in aslist(settings['initialization_sequence']):
155 step_func = config.maybe_dotted(step)
156 step_func(config)
157
158 # Custom helpers.
159 config.add_request_method(follow_subrequest)
160 config.add_request_method(prefixed_userid, property=True)
161 config.add_request_method(prefixed_principals, reify=True)
162 config.add_request_method(lambda r: {
163 'id': r.prefixed_userid,
164 'principals': r.prefixed_principals},
165 name='get_user_info')
166 config.add_request_method(current_resource_name, reify=True)
167 config.add_request_method(current_service, reify=True)
168 config.commit()
169
170 # Include plugins after init, unlike pyramid includes.
171 includes = aslist(settings['includes'])
172 for app in includes:
173 config.include(app)
174
175 # # Show settings to output.
176 # for key, value in settings.items():
177 # logger.info('Using {} = {}'.format(key, value))
178
179 # Scan views.
180 config.scan("kinto.core.views")
181
182 # Give sign of life.
183 msg = "Running {project_name} {project_version}."
184 logger.info(msg.format_map(settings))
185
[end of kinto/core/__init__.py]
[start of kinto/core/authentication.py]
1 from pyramid import authentication as base_auth
2
3 from kinto.core import utils
4
5
6 class BasicAuthAuthenticationPolicy(base_auth.BasicAuthAuthenticationPolicy):
7 """Basic auth implementation.
8
9 Allow any user with any credentials (e.g. there is no need to create an
10 account).
11
12 """
13 def __init__(self, *args, **kwargs):
14 def noop_check(*a):
15 return []
16 super().__init__(noop_check, *args, **kwargs)
17
18 def effective_principals(self, request):
19 # Bypass default Pyramid construction of principals because
20 # Pyramid multiauth already adds userid, Authenticated and Everyone
21 # principals.
22 return []
23
24 def unauthenticated_userid(self, request):
25 settings = request.registry.settings
26
27 credentials = self._get_credentials(request)
28 if credentials:
29 username, password = credentials
30 if not username:
31 return
32
33 hmac_secret = settings['userid_hmac_secret']
34 credentials = '{}:{}'.format(*credentials)
35 userid = utils.hmac_digest(hmac_secret, credentials)
36 return userid
37
38
39 def includeme(config):
40 config.add_api_capability(
41 "basicauth",
42 description="Very basic authentication sessions. Not for production use.",
43 url="http://kinto.readthedocs.io/en/stable/api/1.x/authentication.html",
44 )
45
[end of kinto/core/authentication.py]
[start of setup.py]
1 import codecs
2 import os
3 from setuptools import setup, find_packages
4
5 here = os.path.abspath(os.path.dirname(__file__))
6
7
8 def read_file(filename):
9 """Open a related file and return its content."""
10 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
11 content = f.read()
12 return content
13
14
15 README = read_file('README.rst')
16 CHANGELOG = read_file('CHANGELOG.rst')
17 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
18
19 REQUIREMENTS = [
20 'colander >= 1.3.2',
21 'colorama',
22 'cornice >= 2.4',
23 'jsonschema',
24 'jsonpatch',
25 'python-dateutil',
26 'pyramid >1.7,<1.8',
27 'pyramid_multiauth >= 0.8', # User on policy selected event.
28 'ruamel.yaml',
29 'transaction',
30 'pyramid_tm',
31 'requests',
32 'structlog >= 16.1.0',
33 'enum34',
34 'waitress',
35 'ujson >= 1.35'
36 ]
37
38 POSTGRESQL_REQUIRES = [
39 'SQLAlchemy',
40 'psycopg2 > 2.5',
41 'zope.sqlalchemy',
42 ]
43
44 REDIS_REQUIRES = [
45 'kinto_redis'
46 ]
47
48 SETUP_REQUIRES = [
49 'pytest-runner'
50 ]
51
52 TEST_REQUIREMENTS = [
53 'bravado_core',
54 'pytest',
55 'WebTest'
56 ]
57
58 DEPENDENCY_LINKS = [
59 ]
60
61 MONITORING_REQUIRES = [
62 'raven',
63 'statsd',
64 'newrelic',
65 'werkzeug',
66 ]
67
68 ENTRY_POINTS = {
69 'paste.app_factory': [
70 'main = kinto:main',
71 ],
72 'console_scripts': [
73 'kinto = kinto.__main__:main'
74 ],
75 }
76
77
78 setup(name='kinto',
79 version='6.0.0.dev0',
80 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
81 long_description="{}\n\n{}\n\n{}".format(README, CHANGELOG, CONTRIBUTORS),
82 license='Apache License (2.0)',
83 classifiers=[
84 "Programming Language :: Python",
85 "Programming Language :: Python :: 3",
86 "Programming Language :: Python :: 3.5",
87 "Programming Language :: Python :: 3.6",
88 "Programming Language :: Python :: Implementation :: CPython",
89 "Topic :: Internet :: WWW/HTTP",
90 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
91 "License :: OSI Approved :: Apache Software License"
92 ],
93 keywords="web sync json storage services",
94 author='Mozilla Services',
95 author_email='[email protected]',
96 url='https://github.com/Kinto/kinto',
97 packages=find_packages(),
98 package_data={'': ['*.rst', '*.py', '*.yaml']},
99 include_package_data=True,
100 zip_safe=False,
101 setup_requires=SETUP_REQUIRES,
102 tests_require=TEST_REQUIREMENTS,
103 install_requires=REQUIREMENTS,
104 extras_require={
105 'redis': REDIS_REQUIRES,
106 'postgresql': POSTGRESQL_REQUIRES,
107 'monitoring': MONITORING_REQUIRES,
108 },
109 test_suite="tests",
110 dependency_links=DEPENDENCY_LINKS,
111 entry_points=ENTRY_POINTS)
112
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py
--- a/kinto/core/__init__.py
+++ b/kinto/core/__init__.py
@@ -20,7 +20,6 @@
# Module version, as defined in PEP-0396.
__version__ = pkg_resources.get_distribution('kinto').version # FIXME?
-
DEFAULT_SETTINGS = {
'backoff': None,
'batch_max_requests': 25,
diff --git a/kinto/core/authentication.py b/kinto/core/authentication.py
--- a/kinto/core/authentication.py
+++ b/kinto/core/authentication.py
@@ -24,7 +24,7 @@
def unauthenticated_userid(self, request):
settings = request.registry.settings
- credentials = self._get_credentials(request)
+ credentials = base_auth.extract_http_basic_credentials(request)
if credentials:
username, password = credentials
if not username:
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,7 +23,7 @@
'jsonschema',
'jsonpatch',
'python-dateutil',
- 'pyramid >1.7,<1.8',
+ 'pyramid > 1.8',
'pyramid_multiauth >= 0.8', # User on policy selected event.
'ruamel.yaml',
'transaction',
| {"golden_diff": "diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py\n--- a/kinto/core/__init__.py\n+++ b/kinto/core/__init__.py\n@@ -20,7 +20,6 @@\n # Module version, as defined in PEP-0396.\n __version__ = pkg_resources.get_distribution('kinto').version # FIXME?\n \n-\n DEFAULT_SETTINGS = {\n 'backoff': None,\n 'batch_max_requests': 25,\ndiff --git a/kinto/core/authentication.py b/kinto/core/authentication.py\n--- a/kinto/core/authentication.py\n+++ b/kinto/core/authentication.py\n@@ -24,7 +24,7 @@\n def unauthenticated_userid(self, request):\n settings = request.registry.settings\n \n- credentials = self._get_credentials(request)\n+ credentials = base_auth.extract_http_basic_credentials(request)\n if credentials:\n username, password = credentials\n if not username:\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,7 +23,7 @@\n 'jsonschema',\n 'jsonpatch',\n 'python-dateutil',\n- 'pyramid >1.7,<1.8',\n+ 'pyramid > 1.8',\n 'pyramid_multiauth >= 0.8', # User on policy selected event.\n 'ruamel.yaml',\n 'transaction',\n", "issue": "Port Kinto for Pyramid 1.8\nPyramid 1.8 is breaking a number of things:\r\n\r\n- BasicAuth authn policy\r\n- pserve http_port config.\r\n\r\nRight now it is still an alpha release but it gets installed on time to time.\n", "before_files": [{"content": "\"\"\"Main entry point\n\"\"\"\nimport pkg_resources\nimport tempfile\n\nfrom cornice import Service as CorniceService\nfrom pyramid.settings import aslist\n\nfrom kinto.core import errors\nfrom kinto.core import events\nfrom kinto.core.initialization import ( # NOQA\n initialize, install_middlewares,\n load_default_settings)\nfrom kinto.core.utils import (\n follow_subrequest, current_service, current_resource_name,\n prefixed_userid, prefixed_principals)\nfrom kinto.core.logs import logger\n\n\n# Module version, as defined in PEP-0396.\n__version__ = pkg_resources.get_distribution('kinto').version # FIXME?\n\n\nDEFAULT_SETTINGS = {\n 'backoff': None,\n 'batch_max_requests': 25,\n 'cache_backend': '',\n 'cache_url': '',\n 'cache_pool_size': 25,\n 'cache_prefix': '',\n 'cache_max_size_bytes': 524288,\n 'cors_origins': '*',\n 'cors_max_age_seconds': 3600,\n 'eos': None,\n 'eos_message': None,\n 'eos_url': None,\n 'error_info_link': 'https://github.com/Kinto/kinto/issues/',\n 'http_host': None,\n 'http_scheme': None,\n 'id_generator': 'kinto.core.storage.generators.UUID4',\n 'includes': '',\n 'initialization_sequence': (\n 'kinto.core.initialization.setup_request_bound_data',\n 'kinto.core.initialization.setup_json_serializer',\n 'kinto.core.initialization.setup_logging',\n 'kinto.core.initialization.setup_storage',\n 'kinto.core.initialization.setup_permission',\n 'kinto.core.initialization.setup_cache',\n 'kinto.core.initialization.setup_requests_scheme',\n 'kinto.core.initialization.setup_version_redirection',\n 'kinto.core.initialization.setup_deprecation',\n 'kinto.core.initialization.setup_authentication',\n 'kinto.core.initialization.setup_backoff',\n 'kinto.core.initialization.setup_statsd',\n 'kinto.core.initialization.setup_listeners',\n 'kinto.core.events.setup_transaction_hook',\n ),\n 'event_listeners': '',\n 'heartbeat_timeout_seconds': 10,\n 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',\n 'newrelic_config': None,\n 'newrelic_env': 'dev',\n 'paginate_by': None,\n 'permission_backend': '',\n 'permission_url': '',\n 'permission_pool_size': 25,\n 'profiler_dir': tempfile.gettempdir(),\n 'profiler_enabled': False,\n 'project_docs': '',\n 'project_name': '',\n 
'project_version': '',\n 'readonly': False,\n 'retry_after_seconds': 30,\n 'statsd_backend': 'kinto.core.statsd',\n 'statsd_prefix': 'kinto.core',\n 'statsd_url': None,\n 'storage_backend': '',\n 'storage_url': '',\n 'storage_max_fetch_size': 10000,\n 'storage_pool_size': 25,\n 'tm.annotate_user': False, # Do annotate transactions with the user-id.\n 'transaction_per_request': True,\n 'userid_hmac_secret': '',\n 'version_json_path': 'version.json',\n 'version_prefix_redirect_enabled': True,\n 'trailing_slash_redirect_enabled': True,\n 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',\n 'multiauth.policies': 'basicauth',\n 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'\n 'BasicAuthAuthenticationPolicy'),\n 'multiauth.authorization_policy': ('kinto.core.authorization.'\n 'AuthorizationPolicy'),\n 'swagger_file': 'swagger.yaml',\n}\n\n\nclass Service(CorniceService):\n \"\"\"Subclass of the default cornice service.\n\n This is useful in order to attach specific behaviours without monkey\n patching the default cornice service (which would impact other uses of it)\n \"\"\"\n default_cors_headers = ('Backoff', 'Retry-After', 'Alert',\n 'Content-Length')\n\n def error_handler(self, request):\n return errors.json_error_handler(request)\n\n @classmethod\n def init_from_settings(cls, settings):\n cls.cors_origins = tuple(aslist(settings['cors_origins']))\n cors_max_age = settings['cors_max_age_seconds']\n cls.cors_max_age = int(cors_max_age) if cors_max_age else None\n\n\ndef includeme(config):\n settings = config.get_settings()\n\n # Heartbeat registry.\n config.registry.heartbeats = {}\n\n # Public settings registry.\n config.registry.public_settings = {'batch_max_requests', 'readonly'}\n\n # Directive to declare arbitrary API capabilities.\n def add_api_capability(config, identifier, description=\"\", url=\"\", **kw):\n existing = config.registry.api_capabilities.get(identifier)\n if existing:\n error_msg = \"The '{}' API capability was already registered ({}).\"\n raise ValueError(error_msg.format(identifier, existing))\n\n capability = dict(description=description, url=url, **kw)\n config.registry.api_capabilities[identifier] = capability\n\n config.add_directive('add_api_capability', add_api_capability)\n config.registry.api_capabilities = {}\n\n # Resource events helpers.\n config.add_request_method(events.get_resource_events,\n name='get_resource_events')\n config.add_request_method(events.notify_resource_event,\n name='notify_resource_event')\n\n # Setup cornice.\n config.include(\"cornice\")\n\n # Per-request transaction.\n config.include(\"pyramid_tm\")\n\n # Add CORS settings to the base kinto.core Service class.\n Service.init_from_settings(settings)\n\n # Setup components.\n for step in aslist(settings['initialization_sequence']):\n step_func = config.maybe_dotted(step)\n step_func(config)\n\n # Custom helpers.\n config.add_request_method(follow_subrequest)\n config.add_request_method(prefixed_userid, property=True)\n config.add_request_method(prefixed_principals, reify=True)\n config.add_request_method(lambda r: {\n 'id': r.prefixed_userid,\n 'principals': r.prefixed_principals},\n name='get_user_info')\n config.add_request_method(current_resource_name, reify=True)\n config.add_request_method(current_service, reify=True)\n config.commit()\n\n # Include plugins after init, unlike pyramid includes.\n includes = aslist(settings['includes'])\n for app in includes:\n config.include(app)\n\n # # Show settings to output.\n # for key, value in 
settings.items():\n # logger.info('Using {} = {}'.format(key, value))\n\n # Scan views.\n config.scan(\"kinto.core.views\")\n\n # Give sign of life.\n msg = \"Running {project_name} {project_version}.\"\n logger.info(msg.format_map(settings))\n", "path": "kinto/core/__init__.py"}, {"content": "from pyramid import authentication as base_auth\n\nfrom kinto.core import utils\n\n\nclass BasicAuthAuthenticationPolicy(base_auth.BasicAuthAuthenticationPolicy):\n \"\"\"Basic auth implementation.\n\n Allow any user with any credentials (e.g. there is no need to create an\n account).\n\n \"\"\"\n def __init__(self, *args, **kwargs):\n def noop_check(*a):\n return []\n super().__init__(noop_check, *args, **kwargs)\n\n def effective_principals(self, request):\n # Bypass default Pyramid construction of principals because\n # Pyramid multiauth already adds userid, Authenticated and Everyone\n # principals.\n return []\n\n def unauthenticated_userid(self, request):\n settings = request.registry.settings\n\n credentials = self._get_credentials(request)\n if credentials:\n username, password = credentials\n if not username:\n return\n\n hmac_secret = settings['userid_hmac_secret']\n credentials = '{}:{}'.format(*credentials)\n userid = utils.hmac_digest(hmac_secret, credentials)\n return userid\n\n\ndef includeme(config):\n config.add_api_capability(\n \"basicauth\",\n description=\"Very basic authentication sessions. Not for production use.\",\n url=\"http://kinto.readthedocs.io/en/stable/api/1.x/authentication.html\",\n )\n", "path": "kinto/core/authentication.py"}, {"content": "import codecs\nimport os\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\nREQUIREMENTS = [\n 'colander >= 1.3.2',\n 'colorama',\n 'cornice >= 2.4',\n 'jsonschema',\n 'jsonpatch',\n 'python-dateutil',\n 'pyramid >1.7,<1.8',\n 'pyramid_multiauth >= 0.8', # User on policy selected event.\n 'ruamel.yaml',\n 'transaction',\n 'pyramid_tm',\n 'requests',\n 'structlog >= 16.1.0',\n 'enum34',\n 'waitress',\n 'ujson >= 1.35'\n]\n\nPOSTGRESQL_REQUIRES = [\n 'SQLAlchemy',\n 'psycopg2 > 2.5',\n 'zope.sqlalchemy',\n]\n\nREDIS_REQUIRES = [\n 'kinto_redis'\n]\n\nSETUP_REQUIRES = [\n 'pytest-runner'\n]\n\nTEST_REQUIREMENTS = [\n 'bravado_core',\n 'pytest',\n 'WebTest'\n]\n\nDEPENDENCY_LINKS = [\n]\n\nMONITORING_REQUIRES = [\n 'raven',\n 'statsd',\n 'newrelic',\n 'werkzeug',\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\n\nsetup(name='kinto',\n version='6.0.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=\"{}\\n\\n{}\\n\\n{}\".format(README, CHANGELOG, CONTRIBUTORS),\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json 
storage services\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n package_data={'': ['*.rst', '*.py', '*.yaml']},\n include_package_data=True,\n zip_safe=False,\n setup_requires=SETUP_REQUIRES,\n tests_require=TEST_REQUIREMENTS,\n install_requires=REQUIREMENTS,\n extras_require={\n 'redis': REDIS_REQUIRES,\n 'postgresql': POSTGRESQL_REQUIRES,\n 'monitoring': MONITORING_REQUIRES,\n },\n test_suite=\"tests\",\n dependency_links=DEPENDENCY_LINKS,\n entry_points=ENTRY_POINTS)\n", "path": "setup.py"}]} | 3,857 | 309 |
gh_patches_debug_40687 | rasdani/github-patches | git_diff | facebookresearch__hydra-1545 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support multirun partial failure in Nevergrad sweeper
Context here: https://github.com/facebookresearch/hydra/issues/1377
</issue>
<code>
[start of plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, Optional
4
5 from hydra.core.config_store import ConfigStore
6
7
8 @dataclass
9 class ScalarConfigSpec:
10 """Representation of all the options to define
11 a scalar.
12 """
13
14 # lower bound if any
15 lower: Optional[float] = None
16
17 # upper bound if any
18 upper: Optional[float] = None
19
20 # initial value
21 # default to the middle point if completely bounded
22 init: Optional[float] = None
23
24 # step size for an update
25 # defaults to 1 if unbounded
26 # or 1/6 of the range if completely bounded
27 step: Optional[float] = None
28
29 # cast to integer
30 integer: bool = False
31
32 # logarithmically distributed
33 log: bool = False
34
35
36 @dataclass
37 class OptimConf:
38
39 # name of the Nevergrad optimizer to use. Here is a sample:
40 # - "OnePlusOne" extremely simple and robust, especially at low budget, but
41 # tends to converge early.
42 # - "CMA" very good algorithm, but may require a significant budget (> 120)
43 # - "TwoPointsDE": an algorithm good in a wide range of settings, for significant
44 # budgets (> 120).
45 # - "NGOpt" an algorithm aiming at identifying the best optimizer given your input
46 # definition (updated regularly)
47 # find out more within nevergrad's documentation:
48 # https://github.com/facebookresearch/nevergrad/
49 optimizer: str = "NGOpt"
50
51 # total number of function evaluations to perform
52 budget: int = 80
53
54 # number of parallel workers for performing function evaluations
55 num_workers: int = 10
56
57 # set to true if the function evaluations are noisy
58 noisy: bool = False
59
60 # set to true for performing maximization instead of minimization
61 maximize: bool = False
62
63 # optimization seed, for reproducibility
64 seed: Optional[int] = None
65
66
67 @dataclass
68 class NevergradSweeperConf:
69 _target_: str = (
70 "hydra_plugins.hydra_nevergrad_sweeper.nevergrad_sweeper.NevergradSweeper"
71 )
72
73 # configuration of the optimizer
74 optim: OptimConf = OptimConf()
75
76 # default parametrization of the search space
77 # can be specified:
78 # - as a string, like commandline arguments
79 # - as a list, for categorical variables
80 # - as a full scalar specification
81 parametrization: Dict[str, Any] = field(default_factory=dict)
82
83
84 ConfigStore.instance().store(
85 group="hydra/sweeper",
86 name="nevergrad",
87 node=NevergradSweeperConf,
88 provider="nevergrad",
89 )
90
[end of plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py]
[start of plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import logging
3 from typing import (
4 Any,
5 Dict,
6 List,
7 MutableMapping,
8 MutableSequence,
9 Optional,
10 Tuple,
11 Union,
12 )
13
14 import nevergrad as ng
15 from hydra.core.override_parser.overrides_parser import OverridesParser
16 from hydra.core.override_parser.types import (
17 ChoiceSweep,
18 IntervalSweep,
19 Override,
20 Transformer,
21 )
22 from hydra.core.plugins import Plugins
23 from hydra.plugins.launcher import Launcher
24 from hydra.plugins.sweeper import Sweeper
25 from hydra.types import HydraContext, TaskFunction
26 from omegaconf import DictConfig, ListConfig, OmegaConf
27
28 from .config import OptimConf, ScalarConfigSpec
29
30 log = logging.getLogger(__name__)
31
32
33 def create_nevergrad_param_from_config(
34 config: Union[MutableSequence[Any], MutableMapping[str, Any]]
35 ) -> Any:
36 if isinstance(config, MutableSequence):
37 if isinstance(config, ListConfig):
38 config = OmegaConf.to_container(config, resolve=True) # type: ignore
39 return ng.p.Choice(config)
40 if isinstance(config, MutableMapping):
41 specs = ScalarConfigSpec(**config)
42 init = ["init", "lower", "upper"]
43 init_params = {x: getattr(specs, x) for x in init}
44 if not specs.log:
45 scalar = ng.p.Scalar(**init_params)
46 if specs.step is not None:
47 scalar.set_mutation(sigma=specs.step)
48 else:
49 if specs.step is not None:
50 init_params["exponent"] = specs.step
51 scalar = ng.p.Log(**init_params)
52 if specs.integer:
53 scalar.set_integer_casting()
54 return scalar
55 return config
56
57
58 def create_nevergrad_parameter_from_override(override: Override) -> Any:
59 val = override.value()
60 if not override.is_sweep_override():
61 return val
62 if override.is_choice_sweep():
63 assert isinstance(val, ChoiceSweep)
64 vals = [x for x in override.sweep_iterator(transformer=Transformer.encode)]
65 if "ordered" in val.tags:
66 return ng.p.TransitionChoice(vals)
67 else:
68 return ng.p.Choice(vals)
69 elif override.is_range_sweep():
70 vals = [x for x in override.sweep_iterator(transformer=Transformer.encode)]
71 return ng.p.Choice(vals)
72 elif override.is_interval_sweep():
73 assert isinstance(val, IntervalSweep)
74 if "log" in val.tags:
75 scalar = ng.p.Log(lower=val.start, upper=val.end)
76 else:
77 scalar = ng.p.Scalar(lower=val.start, upper=val.end) # type: ignore
78 if isinstance(val.start, int):
79 scalar.set_integer_casting()
80 return scalar
81
82
83 class NevergradSweeperImpl(Sweeper):
84 def __init__(
85 self,
86 optim: OptimConf,
87 parametrization: Optional[DictConfig],
88 ):
89 self.opt_config = optim
90 self.config: Optional[DictConfig] = None
91 self.launcher: Optional[Launcher] = None
92 self.hydra_context: Optional[HydraContext] = None
93 self.job_results = None
94 self.parametrization: Dict[str, Any] = {}
95 if parametrization is not None:
96 assert isinstance(parametrization, DictConfig)
97 self.parametrization = {
98 str(x): create_nevergrad_param_from_config(y)
99 for x, y in parametrization.items()
100 }
101 self.job_idx: Optional[int] = None
102
103 def setup(
104 self,
105 *,
106 hydra_context: HydraContext,
107 task_function: TaskFunction,
108 config: DictConfig,
109 ) -> None:
110 self.job_idx = 0
111 self.config = config
112 self.hydra_context = hydra_context
113 self.launcher = Plugins.instance().instantiate_launcher(
114 hydra_context=hydra_context, task_function=task_function, config=config
115 )
116
117 def sweep(self, arguments: List[str]) -> None:
118
119 assert self.config is not None
120 assert self.launcher is not None
121 assert self.job_idx is not None
122 direction = -1 if self.opt_config.maximize else 1
123 name = "maximization" if self.opt_config.maximize else "minimization"
124 # Override the parametrization from commandline
125 params = dict(self.parametrization)
126
127 parser = OverridesParser.create()
128 parsed = parser.parse_overrides(arguments)
129
130 for override in parsed:
131 params[
132 override.get_key_element()
133 ] = create_nevergrad_parameter_from_override(override)
134
135 parametrization = ng.p.Dict(**params)
136 parametrization.function.deterministic = not self.opt_config.noisy
137 parametrization.random_state.seed(self.opt_config.seed)
138 # log and build the optimizer
139 opt = self.opt_config.optimizer
140 remaining_budget = self.opt_config.budget
141 nw = self.opt_config.num_workers
142 log.info(
143 f"NevergradSweeper(optimizer={opt}, budget={remaining_budget}, "
144 f"num_workers={nw}) {name}"
145 )
146 log.info(f"with parametrization {parametrization}")
147 log.info(f"Sweep output dir: {self.config.hydra.sweep.dir}")
148 optimizer = ng.optimizers.registry[opt](parametrization, remaining_budget, nw)
149 # loop!
150 all_returns: List[Any] = []
151 best: Tuple[float, ng.p.Parameter] = (float("inf"), parametrization)
152 while remaining_budget > 0:
153 batch = min(nw, remaining_budget)
154 remaining_budget -= batch
155 candidates = [optimizer.ask() for _ in range(batch)]
156 overrides = list(
157 tuple(f"{x}={y}" for x, y in c.value.items()) for c in candidates
158 )
159 self.validate_batch_is_legal(overrides)
160 returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)
161 self.job_idx += len(returns)
162 # would have been nice to avoid waiting for all jobs to finish
163 # aka batch size Vs steady state (launching a new job whenever one is done)
164 for cand, ret in zip(candidates, returns):
165 loss = direction * ret.return_value
166 optimizer.tell(cand, loss)
167 if loss < best[0]:
168 best = (loss, cand)
169 all_returns.extend(returns)
170 recom = optimizer.provide_recommendation()
171 results_to_serialize = {
172 "name": "nevergrad",
173 "best_evaluated_params": best[1].value,
174 "best_evaluated_result": direction * best[0],
175 }
176 OmegaConf.save(
177 OmegaConf.create(results_to_serialize),
178 f"{self.config.hydra.sweep.dir}/optimization_results.yaml",
179 )
180 log.info(
181 "Best parameters: %s", " ".join(f"{x}={y}" for x, y in recom.value.items())
182 )
183
[end of plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py]
[start of plugins/hydra_nevergrad_sweeper/example/my_app.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import logging
3
4 import hydra
5 from omegaconf import DictConfig
6
7 log = logging.getLogger(__name__)
8
9
10 @hydra.main(config_path=".", config_name="config")
11 def dummy_training(cfg: DictConfig) -> float:
12 """A dummy function to minimize
13 Minimum is 0.0 at:
14 lr = 0.12, dropout=0.33, db=mnist, batch_size=4
15 """
16 do = cfg.dropout
17 bs = cfg.batch_size
18 out = float(
19 abs(do - 0.33) + int(cfg.db == "mnist") + abs(cfg.lr - 0.12) + abs(bs - 4)
20 )
21 log.info(
22 f"dummy_training(dropout={do:.3f}, lr={cfg.lr:.3f}, db={cfg.db}, batch_size={bs}) = {out:.3f}",
23 )
24 return out
25
26
27 if __name__ == "__main__":
28 dummy_training()
29
[end of plugins/hydra_nevergrad_sweeper/example/my_app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/hydra_nevergrad_sweeper/example/my_app.py b/plugins/hydra_nevergrad_sweeper/example/my_app.py
--- a/plugins/hydra_nevergrad_sweeper/example/my_app.py
+++ b/plugins/hydra_nevergrad_sweeper/example/my_app.py
@@ -21,6 +21,8 @@
log.info(
f"dummy_training(dropout={do:.3f}, lr={cfg.lr:.3f}, db={cfg.db}, batch_size={bs}) = {out:.3f}",
)
+ if cfg.error:
+ raise RuntimeError("cfg.error is True")
return out
diff --git a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py
--- a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py
+++ b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py
@@ -1,5 +1,6 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import logging
+import math
from typing import (
Any,
Dict,
@@ -12,6 +13,7 @@
)
import nevergrad as ng
+from hydra.core import utils
from hydra.core.override_parser.overrides_parser import OverridesParser
from hydra.core.override_parser.types import (
ChoiceSweep,
@@ -158,14 +160,32 @@
)
self.validate_batch_is_legal(overrides)
returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)
- self.job_idx += len(returns)
# would have been nice to avoid waiting for all jobs to finish
# aka batch size Vs steady state (launching a new job whenever one is done)
+ self.job_idx += len(returns)
+ # check job status and prepare losses
+ failures = 0
for cand, ret in zip(candidates, returns):
- loss = direction * ret.return_value
- optimizer.tell(cand, loss)
- if loss < best[0]:
- best = (loss, cand)
+ if ret.status == utils.JobStatus.COMPLETED:
+ rectified_loss = direction * ret.return_value
+ else:
+ rectified_loss = math.inf
+ failures += 1
+ try:
+ ret.return_value
+ except Exception as e:
+ log.warning(f"Returning infinity for failed experiment: {e}")
+ optimizer.tell(cand, rectified_loss)
+ if rectified_loss < best[0]:
+ best = (rectified_loss, cand)
+ # raise if too many failures
+ if failures / len(returns) > self.opt_config.max_failure_rate:
+ log.error(
+ f"Failed {failures} times out of {len(returns)} "
+ f"with max_failure_rate={self.opt_config.max_failure_rate}"
+ )
+ for ret in returns:
+ ret.return_value # delegate raising to JobReturn, with actual traceback
all_returns.extend(returns)
recom = optimizer.provide_recommendation()
results_to_serialize = {
diff --git a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py
--- a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py
+++ b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py
@@ -63,6 +63,9 @@
# optimization seed, for reproducibility
seed: Optional[int] = None
+ # maximum authorized failure rate for a batch of parameters
+ max_failure_rate: float = 0.0
+
@dataclass
class NevergradSweeperConf:
| {"golden_diff": "diff --git a/plugins/hydra_nevergrad_sweeper/example/my_app.py b/plugins/hydra_nevergrad_sweeper/example/my_app.py\n--- a/plugins/hydra_nevergrad_sweeper/example/my_app.py\n+++ b/plugins/hydra_nevergrad_sweeper/example/my_app.py\n@@ -21,6 +21,8 @@\n log.info(\n f\"dummy_training(dropout={do:.3f}, lr={cfg.lr:.3f}, db={cfg.db}, batch_size={bs}) = {out:.3f}\",\n )\n+ if cfg.error:\n+ raise RuntimeError(\"cfg.error is True\")\n return out\n \n \ndiff --git a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py\n--- a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py\n+++ b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py\n@@ -1,5 +1,6 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import logging\n+import math\n from typing import (\n Any,\n Dict,\n@@ -12,6 +13,7 @@\n )\n \n import nevergrad as ng\n+from hydra.core import utils\n from hydra.core.override_parser.overrides_parser import OverridesParser\n from hydra.core.override_parser.types import (\n ChoiceSweep,\n@@ -158,14 +160,32 @@\n )\n self.validate_batch_is_legal(overrides)\n returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)\n- self.job_idx += len(returns)\n # would have been nice to avoid waiting for all jobs to finish\n # aka batch size Vs steady state (launching a new job whenever one is done)\n+ self.job_idx += len(returns)\n+ # check job status and prepare losses\n+ failures = 0\n for cand, ret in zip(candidates, returns):\n- loss = direction * ret.return_value\n- optimizer.tell(cand, loss)\n- if loss < best[0]:\n- best = (loss, cand)\n+ if ret.status == utils.JobStatus.COMPLETED:\n+ rectified_loss = direction * ret.return_value\n+ else:\n+ rectified_loss = math.inf\n+ failures += 1\n+ try:\n+ ret.return_value\n+ except Exception as e:\n+ log.warning(f\"Returning infinity for failed experiment: {e}\")\n+ optimizer.tell(cand, rectified_loss)\n+ if rectified_loss < best[0]:\n+ best = (rectified_loss, cand)\n+ # raise if too many failures\n+ if failures / len(returns) > self.opt_config.max_failure_rate:\n+ log.error(\n+ f\"Failed {failures} times out of {len(returns)} \"\n+ f\"with max_failure_rate={self.opt_config.max_failure_rate}\"\n+ )\n+ for ret in returns:\n+ ret.return_value # delegate raising to JobReturn, with actual traceback\n all_returns.extend(returns)\n recom = optimizer.provide_recommendation()\n results_to_serialize = {\ndiff --git a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py\n--- a/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py\n+++ b/plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py\n@@ -63,6 +63,9 @@\n # optimization seed, for reproducibility\n seed: Optional[int] = None\n \n+ # maximum authorized failure rate for a batch of parameters\n+ max_failure_rate: float = 0.0\n+\n \n @dataclass\n class NevergradSweeperConf:\n", "issue": "Support multirun partial failure in Nevergrad sweeper\nContext here: https://github.com/facebookresearch/hydra/issues/1377\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass ScalarConfigSpec:\n \"\"\"Representation of all the options to define\n a scalar.\n \"\"\"\n\n # lower bound if any\n lower: Optional[float] = None\n\n # upper bound if any\n upper: Optional[float] = None\n\n # initial value\n # default to the middle point if completely bounded\n init: Optional[float] = None\n\n # step size for an update\n # defaults to 1 if unbounded\n # or 1/6 of the range if completely bounded\n step: Optional[float] = None\n\n # cast to integer\n integer: bool = False\n\n # logarithmically distributed\n log: bool = False\n\n\n@dataclass\nclass OptimConf:\n\n # name of the Nevergrad optimizer to use. Here is a sample:\n # - \"OnePlusOne\" extremely simple and robust, especially at low budget, but\n # tends to converge early.\n # - \"CMA\" very good algorithm, but may require a significant budget (> 120)\n # - \"TwoPointsDE\": an algorithm good in a wide range of settings, for significant\n # budgets (> 120).\n # - \"NGOpt\" an algorithm aiming at identifying the best optimizer given your input\n # definition (updated regularly)\n # find out more within nevergrad's documentation:\n # https://github.com/facebookresearch/nevergrad/\n optimizer: str = \"NGOpt\"\n\n # total number of function evaluations to perform\n budget: int = 80\n\n # number of parallel workers for performing function evaluations\n num_workers: int = 10\n\n # set to true if the function evaluations are noisy\n noisy: bool = False\n\n # set to true for performing maximization instead of minimization\n maximize: bool = False\n\n # optimization seed, for reproducibility\n seed: Optional[int] = None\n\n\n@dataclass\nclass NevergradSweeperConf:\n _target_: str = (\n \"hydra_plugins.hydra_nevergrad_sweeper.nevergrad_sweeper.NevergradSweeper\"\n )\n\n # configuration of the optimizer\n optim: OptimConf = OptimConf()\n\n # default parametrization of the search space\n # can be specified:\n # - as a string, like commandline arguments\n # - as a list, for categorical variables\n # - as a full scalar specification\n parametrization: Dict[str, Any] = field(default_factory=dict)\n\n\nConfigStore.instance().store(\n group=\"hydra/sweeper\",\n name=\"nevergrad\",\n node=NevergradSweeperConf,\n provider=\"nevergrad\",\n)\n", "path": "plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/config.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport logging\nfrom typing import (\n Any,\n Dict,\n List,\n MutableMapping,\n MutableSequence,\n Optional,\n Tuple,\n Union,\n)\n\nimport nevergrad as ng\nfrom hydra.core.override_parser.overrides_parser import OverridesParser\nfrom hydra.core.override_parser.types import (\n ChoiceSweep,\n IntervalSweep,\n Override,\n Transformer,\n)\nfrom hydra.core.plugins import Plugins\nfrom hydra.plugins.launcher import Launcher\nfrom hydra.plugins.sweeper import Sweeper\nfrom hydra.types import HydraContext, TaskFunction\nfrom omegaconf import DictConfig, ListConfig, OmegaConf\n\nfrom .config import OptimConf, ScalarConfigSpec\n\nlog = logging.getLogger(__name__)\n\n\ndef create_nevergrad_param_from_config(\n config: Union[MutableSequence[Any], MutableMapping[str, Any]]\n) -> Any:\n if isinstance(config, MutableSequence):\n if isinstance(config, ListConfig):\n config = OmegaConf.to_container(config, resolve=True) # type: ignore\n return ng.p.Choice(config)\n if isinstance(config, MutableMapping):\n specs = ScalarConfigSpec(**config)\n init = [\"init\", \"lower\", \"upper\"]\n init_params = {x: getattr(specs, x) for x in init}\n if not specs.log:\n scalar = ng.p.Scalar(**init_params)\n if specs.step is not None:\n scalar.set_mutation(sigma=specs.step)\n else:\n if specs.step is not None:\n init_params[\"exponent\"] = specs.step\n scalar = ng.p.Log(**init_params)\n if specs.integer:\n scalar.set_integer_casting()\n return scalar\n return config\n\n\ndef create_nevergrad_parameter_from_override(override: Override) -> Any:\n val = override.value()\n if not override.is_sweep_override():\n return val\n if override.is_choice_sweep():\n assert isinstance(val, ChoiceSweep)\n vals = [x for x in override.sweep_iterator(transformer=Transformer.encode)]\n if \"ordered\" in val.tags:\n return ng.p.TransitionChoice(vals)\n else:\n return ng.p.Choice(vals)\n elif override.is_range_sweep():\n vals = [x for x in override.sweep_iterator(transformer=Transformer.encode)]\n return ng.p.Choice(vals)\n elif override.is_interval_sweep():\n assert isinstance(val, IntervalSweep)\n if \"log\" in val.tags:\n scalar = ng.p.Log(lower=val.start, upper=val.end)\n else:\n scalar = ng.p.Scalar(lower=val.start, upper=val.end) # type: ignore\n if isinstance(val.start, int):\n scalar.set_integer_casting()\n return scalar\n\n\nclass NevergradSweeperImpl(Sweeper):\n def __init__(\n self,\n optim: OptimConf,\n parametrization: Optional[DictConfig],\n ):\n self.opt_config = optim\n self.config: Optional[DictConfig] = None\n self.launcher: Optional[Launcher] = None\n self.hydra_context: Optional[HydraContext] = None\n self.job_results = None\n self.parametrization: Dict[str, Any] = {}\n if parametrization is not None:\n assert isinstance(parametrization, DictConfig)\n self.parametrization = {\n str(x): create_nevergrad_param_from_config(y)\n for x, y in parametrization.items()\n }\n self.job_idx: Optional[int] = None\n\n def setup(\n self,\n *,\n hydra_context: HydraContext,\n task_function: TaskFunction,\n config: DictConfig,\n ) -> None:\n self.job_idx = 0\n self.config = config\n self.hydra_context = hydra_context\n self.launcher = Plugins.instance().instantiate_launcher(\n hydra_context=hydra_context, task_function=task_function, config=config\n )\n\n def sweep(self, arguments: List[str]) -> None:\n\n assert self.config is not None\n assert self.launcher is not None\n assert self.job_idx is not None\n direction = -1 if self.opt_config.maximize else 1\n name = \"maximization\" if self.opt_config.maximize else 
\"minimization\"\n # Override the parametrization from commandline\n params = dict(self.parametrization)\n\n parser = OverridesParser.create()\n parsed = parser.parse_overrides(arguments)\n\n for override in parsed:\n params[\n override.get_key_element()\n ] = create_nevergrad_parameter_from_override(override)\n\n parametrization = ng.p.Dict(**params)\n parametrization.function.deterministic = not self.opt_config.noisy\n parametrization.random_state.seed(self.opt_config.seed)\n # log and build the optimizer\n opt = self.opt_config.optimizer\n remaining_budget = self.opt_config.budget\n nw = self.opt_config.num_workers\n log.info(\n f\"NevergradSweeper(optimizer={opt}, budget={remaining_budget}, \"\n f\"num_workers={nw}) {name}\"\n )\n log.info(f\"with parametrization {parametrization}\")\n log.info(f\"Sweep output dir: {self.config.hydra.sweep.dir}\")\n optimizer = ng.optimizers.registry[opt](parametrization, remaining_budget, nw)\n # loop!\n all_returns: List[Any] = []\n best: Tuple[float, ng.p.Parameter] = (float(\"inf\"), parametrization)\n while remaining_budget > 0:\n batch = min(nw, remaining_budget)\n remaining_budget -= batch\n candidates = [optimizer.ask() for _ in range(batch)]\n overrides = list(\n tuple(f\"{x}={y}\" for x, y in c.value.items()) for c in candidates\n )\n self.validate_batch_is_legal(overrides)\n returns = self.launcher.launch(overrides, initial_job_idx=self.job_idx)\n self.job_idx += len(returns)\n # would have been nice to avoid waiting for all jobs to finish\n # aka batch size Vs steady state (launching a new job whenever one is done)\n for cand, ret in zip(candidates, returns):\n loss = direction * ret.return_value\n optimizer.tell(cand, loss)\n if loss < best[0]:\n best = (loss, cand)\n all_returns.extend(returns)\n recom = optimizer.provide_recommendation()\n results_to_serialize = {\n \"name\": \"nevergrad\",\n \"best_evaluated_params\": best[1].value,\n \"best_evaluated_result\": direction * best[0],\n }\n OmegaConf.save(\n OmegaConf.create(results_to_serialize),\n f\"{self.config.hydra.sweep.dir}/optimization_results.yaml\",\n )\n log.info(\n \"Best parameters: %s\", \" \".join(f\"{x}={y}\" for x, y in recom.value.items())\n )\n", "path": "plugins/hydra_nevergrad_sweeper/hydra_plugins/hydra_nevergrad_sweeper/_impl.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport logging\n\nimport hydra\nfrom omegaconf import DictConfig\n\nlog = logging.getLogger(__name__)\n\n\[email protected](config_path=\".\", config_name=\"config\")\ndef dummy_training(cfg: DictConfig) -> float:\n \"\"\"A dummy function to minimize\n Minimum is 0.0 at:\n lr = 0.12, dropout=0.33, db=mnist, batch_size=4\n \"\"\"\n do = cfg.dropout\n bs = cfg.batch_size\n out = float(\n abs(do - 0.33) + int(cfg.db == \"mnist\") + abs(cfg.lr - 0.12) + abs(bs - 4)\n )\n log.info(\n f\"dummy_training(dropout={do:.3f}, lr={cfg.lr:.3f}, db={cfg.db}, batch_size={bs}) = {out:.3f}\",\n )\n return out\n\n\nif __name__ == \"__main__\":\n dummy_training()\n", "path": "plugins/hydra_nevergrad_sweeper/example/my_app.py"}]} | 3,732 | 930 |
gh_patches_debug_31784 | rasdani/github-patches | git_diff | opentensor__bittensor-1231 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
new_hotkey is listed twice under 'btcli --help' menu
</issue>
<code>
[start of bittensor/_cli/__init__.py]
1 """
2 Create and init the CLI class, which handles the coldkey, hotkey and money transfer
3 """
4 # The MIT License (MIT)
5 # Copyright © 2021 Yuma Rao
6 # Copyright © 2022 Opentensor Foundation
7
8 # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
9 # documentation files (the “Software”), to deal in the Software without restriction, including without limitation
10 # the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
11 # and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
12
13 # The above copyright notice and this permission notice shall be included in all copies or substantial portions of
14 # the Software.
15
16 # THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
17 # THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
18 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
19 # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
20 # DEALINGS IN THE SOFTWARE.
21
22 import sys
23 import argparse
24 import bittensor
25 from . import cli_impl
26 from .commands import *
27 from typing import List, Optional
28 from .naka_cli_impl import CLI as naka_CLI
29 console = bittensor.__console__
30
31 # Turn off rich console locals trace.
32 from rich.traceback import install
33 install(show_locals=False)
34
35 class cli:
36 """
37 Create and init the CLI class, which handles the coldkey, hotkey and tao transfer
38 """
39 def __new__(
40 cls,
41 config: Optional['bittensor.Config'] = None,
42 args: Optional[List[str]] = None,
43 ) -> 'bittensor.CLI':
44 r""" Creates a new bittensor.cli from passed arguments.
45 Args:
46 config (:obj:`bittensor.Config`, `optional`):
47 bittensor.cli.config()
48 args (`List[str]`, `optional`):
49 The arguments to parse from the command line.
50 """
51 if config == None:
52 config = cli.config(args)
53 cli.check_config( config )
54 if config.subtensor:
55 network = config.subtensor.get('network', bittensor.defaults.subtensor.network)
56
57 if network == 'nakamoto':
58 # Use nakamoto version of the CLI
59 return naka_CLI(config=config)
60 else:
61 return cli_impl.CLI( config = config)
62
63 @staticmethod
64 def config(args: List[str]) -> 'bittensor.config':
65 """ From the argument parser, add config to bittensor.executor and local config
66 Return: bittensor.config object
67 """
68 parser = argparse.ArgumentParser(
69 description=f"bittensor cli v{bittensor.__version__}",
70 usage="btcli <command> <command args>",
71 add_help=True)
72
73 cmd_parsers = parser.add_subparsers(dest='command')
74 RunCommand.add_args( cmd_parsers )
75 HelpCommand.add_args( cmd_parsers )
76 ListCommand.add_args( cmd_parsers )
77 QueryCommand.add_args( cmd_parsers )
78 StakeCommand.add_args( cmd_parsers )
79 UpdateCommand.add_args( cmd_parsers )
80 InspectCommand.add_args( cmd_parsers )
81 WeightsCommand.add_args( cmd_parsers )
82 UnStakeCommand.add_args( cmd_parsers )
83 OverviewCommand.add_args( cmd_parsers )
84 RegisterCommand.add_args( cmd_parsers )
85 TransferCommand.add_args( cmd_parsers )
86 NominateCommand.add_args( cmd_parsers )
87 NewHotkeyCommand.add_args( cmd_parsers )
88 MetagraphCommand.add_args( cmd_parsers )
89 SetWeightsCommand.add_args( cmd_parsers )
90 NewColdkeyCommand.add_args( cmd_parsers )
91 NewHotkeyCommand.add_args( cmd_parsers )
92 MyDelegatesCommand.add_args( cmd_parsers )
93 ListSubnetsCommand.add_args( cmd_parsers )
94 RegenHotkeyCommand.add_args( cmd_parsers )
95 RegenColdkeyCommand.add_args( cmd_parsers )
96 DelegateStakeCommand.add_args( cmd_parsers )
97 DelegateUnstakeCommand.add_args( cmd_parsers )
98 ListDelegatesCommand.add_args( cmd_parsers )
99 RegenColdkeypubCommand.add_args( cmd_parsers )
100 RecycleRegisterCommand.add_args( cmd_parsers )
101
102 # If no arguments are passed, print help text.
103 if len(args) == 0:
104 parser.print_help()
105 sys.exit()
106
107 return bittensor.config( parser, args=args )
108
109 @staticmethod
110 def check_config (config: 'bittensor.Config'):
111 """ Check if the essential config exist under different command
112 """
113 if config.command == "run":
114 RunCommand.check_config( config )
115 elif config.command == "transfer":
116 TransferCommand.check_config( config )
117 elif config.command == "register":
118 RegisterCommand.check_config( config )
119 elif config.command == "unstake":
120 UnStakeCommand.check_config( config )
121 elif config.command == "stake":
122 StakeCommand.check_config( config )
123 elif config.command == "overview":
124 OverviewCommand.check_config( config )
125 elif config.command == "new_coldkey":
126 NewColdkeyCommand.check_config( config )
127 elif config.command == "new_hotkey":
128 NewHotkeyCommand.check_config( config )
129 elif config.command == "regen_coldkey":
130 RegenColdkeyCommand.check_config( config )
131 elif config.command == "regen_coldkeypub":
132 RegenColdkeypubCommand.check_config( config )
133 elif config.command == "regen_hotkey":
134 RegenHotkeyCommand.check_config( config )
135 elif config.command == "metagraph":
136 MetagraphCommand.check_config( config )
137 elif config.command == "weights":
138 WeightsCommand.check_config( config )
139 elif config.command == "set_weights":
140 SetWeightsCommand.check_config( config )
141 elif config.command == "list":
142 ListCommand.check_config( config )
143 elif config.command == "inspect":
144 InspectCommand.check_config( config )
145 elif config.command == "query":
146 QueryCommand.check_config( config )
147 elif config.command == "help":
148 HelpCommand.check_config( config )
149 elif config.command == "update":
150 UpdateCommand.check_config( config )
151 elif config.command == "nominate":
152 NominateCommand.check_config( config )
153 elif config.command == "list_delegates":
154 ListDelegatesCommand.check_config( config )
155 elif config.command == "list_subnets":
156 ListSubnetsCommand.check_config( config )
157 elif config.command == "delegate":
158 DelegateStakeCommand.check_config( config )
159 elif config.command == "undelegate":
160 DelegateUnstakeCommand.check_config( config )
161 elif config.command == "my_delegates":
162 MyDelegatesCommand.check_config( config )
163 elif config.command == "recycle_register":
164 RecycleRegisterCommand.check_config( config )
165 else:
166 console.print(":cross_mark:[red]Unknown command: {}[/red]".format(config.command))
167 sys.exit()
168
169
[end of bittensor/_cli/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bittensor/_cli/__init__.py b/bittensor/_cli/__init__.py
--- a/bittensor/_cli/__init__.py
+++ b/bittensor/_cli/__init__.py
@@ -59,11 +59,10 @@
return naka_CLI(config=config)
else:
return cli_impl.CLI( config = config)
-
- @staticmethod
- def config(args: List[str]) -> 'bittensor.config':
- """ From the argument parser, add config to bittensor.executor and local config
- Return: bittensor.config object
+
+ @staticmethod
+ def __create_parser__() -> 'argparse.ArgumentParser':
+ """ Creates the argument parser for the bittensor cli.
"""
parser = argparse.ArgumentParser(
description=f"bittensor cli v{bittensor.__version__}",
@@ -88,7 +87,6 @@
MetagraphCommand.add_args( cmd_parsers )
SetWeightsCommand.add_args( cmd_parsers )
NewColdkeyCommand.add_args( cmd_parsers )
- NewHotkeyCommand.add_args( cmd_parsers )
MyDelegatesCommand.add_args( cmd_parsers )
ListSubnetsCommand.add_args( cmd_parsers )
RegenHotkeyCommand.add_args( cmd_parsers )
@@ -99,6 +97,15 @@
RegenColdkeypubCommand.add_args( cmd_parsers )
RecycleRegisterCommand.add_args( cmd_parsers )
+ return parser
+
+ @staticmethod
+ def config(args: List[str]) -> 'bittensor.config':
+ """ From the argument parser, add config to bittensor.executor and local config
+ Return: bittensor.config object
+ """
+ parser = cli.__create_parser__()
+
# If no arguments are passed, print help text.
if len(args) == 0:
parser.print_help()
| {"golden_diff": "diff --git a/bittensor/_cli/__init__.py b/bittensor/_cli/__init__.py\n--- a/bittensor/_cli/__init__.py\n+++ b/bittensor/_cli/__init__.py\n@@ -59,11 +59,10 @@\n return naka_CLI(config=config)\n else:\n return cli_impl.CLI( config = config)\n-\n- @staticmethod \n- def config(args: List[str]) -> 'bittensor.config':\n- \"\"\" From the argument parser, add config to bittensor.executor and local config \n- Return: bittensor.config object\n+ \n+ @staticmethod\n+ def __create_parser__() -> 'argparse.ArgumentParser':\n+ \"\"\" Creates the argument parser for the bittensor cli.\n \"\"\"\n parser = argparse.ArgumentParser(\n description=f\"bittensor cli v{bittensor.__version__}\",\n@@ -88,7 +87,6 @@\n MetagraphCommand.add_args( cmd_parsers )\n SetWeightsCommand.add_args( cmd_parsers )\n NewColdkeyCommand.add_args( cmd_parsers )\n- NewHotkeyCommand.add_args( cmd_parsers )\n MyDelegatesCommand.add_args( cmd_parsers )\n ListSubnetsCommand.add_args( cmd_parsers )\n RegenHotkeyCommand.add_args( cmd_parsers )\n@@ -99,6 +97,15 @@\n RegenColdkeypubCommand.add_args( cmd_parsers )\n RecycleRegisterCommand.add_args( cmd_parsers )\n \n+ return parser\n+\n+ @staticmethod \n+ def config(args: List[str]) -> 'bittensor.config':\n+ \"\"\" From the argument parser, add config to bittensor.executor and local config \n+ Return: bittensor.config object\n+ \"\"\"\n+ parser = cli.__create_parser__()\n+\n # If no arguments are passed, print help text.\n if len(args) == 0:\n parser.print_help()\n", "issue": "new_hotkey is listed twice under 'btcli --help' menu\nnew_hotkey is listed twice under 'btcli --help' menu\n", "before_files": [{"content": "\"\"\"\nCreate and init the CLI class, which handles the coldkey, hotkey and money transfer \n\"\"\"\n# The MIT License (MIT)\n# Copyright \u00a9 2021 Yuma Rao\n# Copyright \u00a9 2022 Opentensor Foundation\n\n# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated \n# documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation \n# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, \n# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\n# The above copyright notice and this permission notice shall be included in all copies or substantial portions of \n# the Software.\n\n# THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO\n# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL \n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION \n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER \n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nimport argparse\nimport bittensor\nfrom . 
import cli_impl\nfrom .commands import *\nfrom typing import List, Optional\nfrom .naka_cli_impl import CLI as naka_CLI\nconsole = bittensor.__console__\n\n# Turn off rich console locals trace.\nfrom rich.traceback import install\ninstall(show_locals=False)\n\nclass cli:\n \"\"\"\n Create and init the CLI class, which handles the coldkey, hotkey and tao transfer \n \"\"\"\n def __new__(\n cls,\n config: Optional['bittensor.Config'] = None,\n args: Optional[List[str]] = None, \n ) -> 'bittensor.CLI':\n r\"\"\" Creates a new bittensor.cli from passed arguments.\n Args:\n config (:obj:`bittensor.Config`, `optional`): \n bittensor.cli.config()\n args (`List[str]`, `optional`): \n The arguments to parse from the command line.\n \"\"\"\n if config == None: \n config = cli.config(args)\n cli.check_config( config )\n if config.subtensor:\n network = config.subtensor.get('network', bittensor.defaults.subtensor.network)\n\n if network == 'nakamoto':\n # Use nakamoto version of the CLI\n return naka_CLI(config=config)\n else:\n return cli_impl.CLI( config = config)\n\n @staticmethod \n def config(args: List[str]) -> 'bittensor.config':\n \"\"\" From the argument parser, add config to bittensor.executor and local config \n Return: bittensor.config object\n \"\"\"\n parser = argparse.ArgumentParser(\n description=f\"bittensor cli v{bittensor.__version__}\",\n usage=\"btcli <command> <command args>\",\n add_help=True)\n\n cmd_parsers = parser.add_subparsers(dest='command')\n RunCommand.add_args( cmd_parsers )\n HelpCommand.add_args( cmd_parsers ) \n ListCommand.add_args( cmd_parsers )\n QueryCommand.add_args( cmd_parsers )\n StakeCommand.add_args( cmd_parsers )\n UpdateCommand.add_args( cmd_parsers )\n InspectCommand.add_args( cmd_parsers ) \n WeightsCommand.add_args( cmd_parsers )\n UnStakeCommand.add_args( cmd_parsers )\n OverviewCommand.add_args( cmd_parsers )\n RegisterCommand.add_args( cmd_parsers )\n TransferCommand.add_args( cmd_parsers )\n NominateCommand.add_args( cmd_parsers )\n NewHotkeyCommand.add_args( cmd_parsers )\n MetagraphCommand.add_args( cmd_parsers )\n SetWeightsCommand.add_args( cmd_parsers )\n NewColdkeyCommand.add_args( cmd_parsers )\n NewHotkeyCommand.add_args( cmd_parsers )\n MyDelegatesCommand.add_args( cmd_parsers )\n ListSubnetsCommand.add_args( cmd_parsers )\n RegenHotkeyCommand.add_args( cmd_parsers )\n RegenColdkeyCommand.add_args( cmd_parsers )\n DelegateStakeCommand.add_args( cmd_parsers )\n DelegateUnstakeCommand.add_args( cmd_parsers )\n ListDelegatesCommand.add_args( cmd_parsers )\n RegenColdkeypubCommand.add_args( cmd_parsers )\n RecycleRegisterCommand.add_args( cmd_parsers )\n\n # If no arguments are passed, print help text.\n if len(args) == 0:\n parser.print_help()\n sys.exit()\n\n return bittensor.config( parser, args=args )\n\n @staticmethod \n def check_config (config: 'bittensor.Config'):\n \"\"\" Check if the essential config exist under different command\n \"\"\"\n if config.command == \"run\":\n RunCommand.check_config( config )\n elif config.command == \"transfer\":\n TransferCommand.check_config( config )\n elif config.command == \"register\":\n RegisterCommand.check_config( config )\n elif config.command == \"unstake\":\n UnStakeCommand.check_config( config )\n elif config.command == \"stake\":\n StakeCommand.check_config( config )\n elif config.command == \"overview\":\n OverviewCommand.check_config( config )\n elif config.command == \"new_coldkey\":\n NewColdkeyCommand.check_config( config )\n elif config.command == \"new_hotkey\":\n 
NewHotkeyCommand.check_config( config )\n elif config.command == \"regen_coldkey\":\n RegenColdkeyCommand.check_config( config )\n elif config.command == \"regen_coldkeypub\":\n RegenColdkeypubCommand.check_config( config )\n elif config.command == \"regen_hotkey\":\n RegenHotkeyCommand.check_config( config )\n elif config.command == \"metagraph\":\n MetagraphCommand.check_config( config )\n elif config.command == \"weights\":\n WeightsCommand.check_config( config )\n elif config.command == \"set_weights\":\n SetWeightsCommand.check_config( config )\n elif config.command == \"list\":\n ListCommand.check_config( config )\n elif config.command == \"inspect\":\n InspectCommand.check_config( config )\n elif config.command == \"query\":\n QueryCommand.check_config( config )\n elif config.command == \"help\":\n HelpCommand.check_config( config )\n elif config.command == \"update\":\n UpdateCommand.check_config( config )\n elif config.command == \"nominate\":\n NominateCommand.check_config( config )\n elif config.command == \"list_delegates\":\n ListDelegatesCommand.check_config( config )\n elif config.command == \"list_subnets\":\n ListSubnetsCommand.check_config( config )\n elif config.command == \"delegate\":\n DelegateStakeCommand.check_config( config )\n elif config.command == \"undelegate\":\n DelegateUnstakeCommand.check_config( config )\n elif config.command == \"my_delegates\":\n MyDelegatesCommand.check_config( config )\n elif config.command == \"recycle_register\":\n RecycleRegisterCommand.check_config( config )\n else:\n console.print(\":cross_mark:[red]Unknown command: {}[/red]\".format(config.command))\n sys.exit()\n\n ", "path": "bittensor/_cli/__init__.py"}]} | 2,529 | 425 |
gh_patches_debug_17528 | rasdani/github-patches | git_diff | allegro__ralph-3222 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Table 'ralph_ng.transitions_transition' doesn't exist
When I follow the documentation to set up a development environment, I hit the error "default: django.db.utils.ProgrammingError: (1146, "Table 'ralph_ng.transitions_transition' doesn't exist")". I think this is because those tables do not exist in a freshly installed Ralph 3 development environment, yet Ralph 3 tries to migrate them (from Ralph 2). I am on a Mac and downloaded the default box manually; it is used during vagrant up.
</issue>
<code>
[start of src/ralph/lib/transitions/checks.py]
1 from django.core.checks import Error
2 from django.db.utils import OperationalError
3 from django.template.base import TemplateDoesNotExist
4 from django.template.loader import get_template
5
6
7 def check_transition_templates(transition_templates):
8 # to prevent AppRegistryNotReady
9 from ralph.lib.transitions.models import Transition
10
11 errors = []
12 if transition_templates:
13 if not isinstance(transition_templates, (list, tuple)):
14 errors.append(Error(
15 'TRANSITION_TEMPLATES must be a list or a tuple',
16 id='transitions.E001'
17 ))
18 else:
19 for index, item in enumerate(transition_templates):
20 try:
21 path, template = item
22 except (ValueError, TypeError):
23 errors.append(Error(
24 'Element #{} must be a two elements tuple'.format(
25 index
26 ),
27 id='transitions.E003'
28 ))
29 continue
30 try:
31 get_template(path)
32 except TemplateDoesNotExist:
33 errors.append(Error(
34 'Template {} ({}) doesn\'t exist'.format(
35 template, path
36 ),
37 hint='Check TRANSITION_TEMPLATES settings',
38 id='transitions.E002'
39 ))
40 excluded_templates = ['']
41 if transition_templates:
42 try:
43 excluded_templates.extend(
44 {template for template, _ in transition_templates}
45 )
46 except ValueError:
47 pass
48 transitions_with_custom_templates = Transition.objects.exclude(
49 template_name__in=excluded_templates
50 )
51 try:
52 for transition in transitions_with_custom_templates:
53 errors.append(Error(
54 'Template {} for {} transition is '
55 'defined only in transition'.format(
56 transition.template_name, transition
57 ),
58 hint=(
59 'Change your TRANSITION_TEMPLATES settings by adding'
60 ' ({}, "Your template name") and then '
61 'edit {} transition').format(
62 transition.template_name, transition
63 ),
64 id='transitions.E004'
65 ))
66 except OperationalError:
67 pass
68 return errors
69
[end of src/ralph/lib/transitions/checks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ralph/lib/transitions/checks.py b/src/ralph/lib/transitions/checks.py
--- a/src/ralph/lib/transitions/checks.py
+++ b/src/ralph/lib/transitions/checks.py
@@ -1,9 +1,14 @@
+import logging
+
from django.core.checks import Error
-from django.db.utils import OperationalError
+from django.db.utils import DatabaseError
from django.template.base import TemplateDoesNotExist
from django.template.loader import get_template
+logger = logging.getLogger(__name__)
+
+
def check_transition_templates(transition_templates):
# to prevent AppRegistryNotReady
from ralph.lib.transitions.models import Transition
@@ -63,6 +68,6 @@
),
id='transitions.E004'
))
- except OperationalError:
- pass
+ except DatabaseError as e:
+ logger.error(e)
return errors
| {"golden_diff": "diff --git a/src/ralph/lib/transitions/checks.py b/src/ralph/lib/transitions/checks.py\n--- a/src/ralph/lib/transitions/checks.py\n+++ b/src/ralph/lib/transitions/checks.py\n@@ -1,9 +1,14 @@\n+import logging\n+\n from django.core.checks import Error\n-from django.db.utils import OperationalError\n+from django.db.utils import DatabaseError\n from django.template.base import TemplateDoesNotExist\n from django.template.loader import get_template\n \n \n+logger = logging.getLogger(__name__)\n+\n+\n def check_transition_templates(transition_templates):\n # to prevent AppRegistryNotReady\n from ralph.lib.transitions.models import Transition\n@@ -63,6 +68,6 @@\n ),\n id='transitions.E004'\n ))\n- except OperationalError:\n- pass\n+ except DatabaseError as e:\n+ logger.error(e)\n return errors\n", "issue": "Table 'ralph_ng.transitions_transition' doesn't exist\nwhen I follow the document to setup a develop environment, I met the error\" default: django.db.utils.ProgrammingError: (1146, \"Table 'ralph_ng.transitions_transition' doesn't exist\") \". I think it is because there are no such tables when newly install ralph3 develop environment but ralph3 try to migrate them(from ralph2). I am on mac and have download the default box manually which will be used in vagrant up.\n", "before_files": [{"content": "from django.core.checks import Error\nfrom django.db.utils import OperationalError\nfrom django.template.base import TemplateDoesNotExist\nfrom django.template.loader import get_template\n\n\ndef check_transition_templates(transition_templates):\n # to prevent AppRegistryNotReady\n from ralph.lib.transitions.models import Transition\n\n errors = []\n if transition_templates:\n if not isinstance(transition_templates, (list, tuple)):\n errors.append(Error(\n 'TRANSITION_TEMPLATES must be a list or a tuple',\n id='transitions.E001'\n ))\n else:\n for index, item in enumerate(transition_templates):\n try:\n path, template = item\n except (ValueError, TypeError):\n errors.append(Error(\n 'Element #{} must be a two elements tuple'.format(\n index\n ),\n id='transitions.E003'\n ))\n continue\n try:\n get_template(path)\n except TemplateDoesNotExist:\n errors.append(Error(\n 'Template {} ({}) doesn\\'t exist'.format(\n template, path\n ),\n hint='Check TRANSITION_TEMPLATES settings',\n id='transitions.E002'\n ))\n excluded_templates = ['']\n if transition_templates:\n try:\n excluded_templates.extend(\n {template for template, _ in transition_templates}\n )\n except ValueError:\n pass\n transitions_with_custom_templates = Transition.objects.exclude(\n template_name__in=excluded_templates\n )\n try:\n for transition in transitions_with_custom_templates:\n errors.append(Error(\n 'Template {} for {} transition is '\n 'defined only in transition'.format(\n transition.template_name, transition\n ),\n hint=(\n 'Change your TRANSITION_TEMPLATES settings by adding'\n ' ({}, \"Your template name\") and then '\n 'edit {} transition').format(\n transition.template_name, transition\n ),\n id='transitions.E004'\n ))\n except OperationalError:\n pass\n return errors\n", "path": "src/ralph/lib/transitions/checks.py"}]} | 1,207 | 202 |
gh_patches_debug_33816 | rasdani/github-patches | git_diff | marshmallow-code__webargs-464 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RFC: Only accept delimited string in DelimitedList
`DelimitedList` accepts either a list or a delimited string (e.g. "foo,bar,baz").
I'd like to make it more strict by only accepting a delimited list. Rather than adding a `strict` parameter, I'm thinking of dropping the whole "also accept a list" feature.
Any reason to support both?
I understand it inherits from `List` because once the string is parsed, it can be deserialized as a normal list. But are there cases where you'd expect either a list or a delimited string?
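To make the trade-off concrete, here is a minimal sketch of the dual behaviour being discussed (the `QueryArgs` schema and `ids` field are made up for the example; it assumes marshmallow 3 semantics and the `DelimitedList` implementation shown in the code below):

```python
# Sketch of the current, lenient behaviour: DelimitedList accepts both forms.
from marshmallow import Schema
from webargs import fields


class QueryArgs(Schema):
    ids = fields.DelimitedList(fields.Int())


schema = QueryArgs()
print(schema.load({"ids": "1,2,3"}))    # delimited string -> {'ids': [1, 2, 3]}
print(schema.load({"ids": [1, 2, 3]}))  # a plain list is currently accepted as well
```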
</issue>
<code>
[start of src/webargs/fields.py]
1 """Field classes.
2
3 Includes all fields from `marshmallow.fields` in addition to a custom
4 `Nested` field and `DelimitedList`.
5
6 All fields can optionally take a special `location` keyword argument, which
7 tells webargs where to parse the request argument from.
8
9 .. code-block:: python
10
11 args = {
12 "active": fields.Bool(location="query"),
13 "content_type": fields.Str(data_key="Content-Type", location="headers"),
14 }
15
16 Note: `data_key` replaced `load_from` in marshmallow 3.
17 When using marshmallow 2, use `load_from`.
18 """
19 import marshmallow as ma
20
21 # Expose all fields from marshmallow.fields.
22 from marshmallow.fields import * # noqa: F40
23 from webargs.compat import MARSHMALLOW_VERSION_INFO
24 from webargs.dict2schema import dict2schema
25
26 __all__ = ["DelimitedList"] + ma.fields.__all__
27
28
29 class Nested(ma.fields.Nested):
30 """Same as `marshmallow.fields.Nested`, except can be passed a dictionary as
31 the first argument, which will be converted to a `marshmallow.Schema`.
32
33 .. note::
34
35 The schema class here will always be `marshmallow.Schema`, regardless
36 of whether a custom schema class is set on the parser. Pass an explicit schema
37 class if necessary.
38 """
39
40 def __init__(self, nested, *args, **kwargs):
41 if isinstance(nested, dict):
42 nested = dict2schema(nested)
43 super().__init__(nested, *args, **kwargs)
44
45
46 class DelimitedList(ma.fields.List):
47 """Same as `marshmallow.fields.List`, except can load from either a list or
48 a delimited string (e.g. "foo,bar,baz").
49
50 :param Field cls_or_instance: A field class or instance.
51 :param str delimiter: Delimiter between values.
52 :param bool as_string: Dump values to string.
53 """
54
55 delimiter = ","
56
57 def __init__(self, cls_or_instance, delimiter=None, as_string=False, **kwargs):
58 self.delimiter = delimiter or self.delimiter
59 self.as_string = as_string
60 super().__init__(cls_or_instance, **kwargs)
61
62 def _serialize(self, value, attr, obj):
63 ret = super()._serialize(value, attr, obj)
64 if self.as_string:
65 return self.delimiter.join(format(each) for each in ret)
66 return ret
67
68 def _deserialize(self, value, attr, data, **kwargs):
69 try:
70 ret = (
71 value
72 if ma.utils.is_iterable_but_not_string(value)
73 else value.split(self.delimiter)
74 )
75 except AttributeError:
76 if MARSHMALLOW_VERSION_INFO[0] < 3:
77 self.fail("invalid")
78 else:
79 raise self.make_error("invalid")
80 return super()._deserialize(ret, attr, data, **kwargs)
81
[end of src/webargs/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/webargs/fields.py b/src/webargs/fields.py
--- a/src/webargs/fields.py
+++ b/src/webargs/fields.py
@@ -44,37 +44,35 @@
class DelimitedList(ma.fields.List):
- """Same as `marshmallow.fields.List`, except can load from either a list or
- a delimited string (e.g. "foo,bar,baz").
+ """A field which is similar to a List, but takes its input as a delimited
+ string (e.g. "foo,bar,baz").
+
+ Like List, it can be given a nested field type which it will use to
+ de/serialize each element of the list.
:param Field cls_or_instance: A field class or instance.
:param str delimiter: Delimiter between values.
- :param bool as_string: Dump values to string.
"""
+ default_error_messages = {"invalid": "Not a valid delimited list."}
delimiter = ","
- def __init__(self, cls_or_instance, delimiter=None, as_string=False, **kwargs):
+ def __init__(self, cls_or_instance, delimiter=None, **kwargs):
self.delimiter = delimiter or self.delimiter
- self.as_string = as_string
super().__init__(cls_or_instance, **kwargs)
def _serialize(self, value, attr, obj):
- ret = super()._serialize(value, attr, obj)
- if self.as_string:
- return self.delimiter.join(format(each) for each in ret)
- return ret
+ # serializing will start with List serialization, so that we correctly
+ # output lists of non-primitive types, e.g. DelimitedList(DateTime)
+ return self.delimiter.join(
+ format(each) for each in super()._serialize(value, attr, obj)
+ )
def _deserialize(self, value, attr, data, **kwargs):
- try:
- ret = (
- value
- if ma.utils.is_iterable_but_not_string(value)
- else value.split(self.delimiter)
- )
- except AttributeError:
+ # attempting to deserialize from a non-string source is an error
+ if not isinstance(value, (str, bytes)):
if MARSHMALLOW_VERSION_INFO[0] < 3:
self.fail("invalid")
else:
raise self.make_error("invalid")
- return super()._deserialize(ret, attr, data, **kwargs)
+ return super()._deserialize(value.split(self.delimiter), attr, data, **kwargs)
| {"golden_diff": "diff --git a/src/webargs/fields.py b/src/webargs/fields.py\n--- a/src/webargs/fields.py\n+++ b/src/webargs/fields.py\n@@ -44,37 +44,35 @@\n \n \n class DelimitedList(ma.fields.List):\n- \"\"\"Same as `marshmallow.fields.List`, except can load from either a list or\n- a delimited string (e.g. \"foo,bar,baz\").\n+ \"\"\"A field which is similar to a List, but takes its input as a delimited\n+ string (e.g. \"foo,bar,baz\").\n+\n+ Like List, it can be given a nested field type which it will use to\n+ de/serialize each element of the list.\n \n :param Field cls_or_instance: A field class or instance.\n :param str delimiter: Delimiter between values.\n- :param bool as_string: Dump values to string.\n \"\"\"\n \n+ default_error_messages = {\"invalid\": \"Not a valid delimited list.\"}\n delimiter = \",\"\n \n- def __init__(self, cls_or_instance, delimiter=None, as_string=False, **kwargs):\n+ def __init__(self, cls_or_instance, delimiter=None, **kwargs):\n self.delimiter = delimiter or self.delimiter\n- self.as_string = as_string\n super().__init__(cls_or_instance, **kwargs)\n \n def _serialize(self, value, attr, obj):\n- ret = super()._serialize(value, attr, obj)\n- if self.as_string:\n- return self.delimiter.join(format(each) for each in ret)\n- return ret\n+ # serializing will start with List serialization, so that we correctly\n+ # output lists of non-primitive types, e.g. DelimitedList(DateTime)\n+ return self.delimiter.join(\n+ format(each) for each in super()._serialize(value, attr, obj)\n+ )\n \n def _deserialize(self, value, attr, data, **kwargs):\n- try:\n- ret = (\n- value\n- if ma.utils.is_iterable_but_not_string(value)\n- else value.split(self.delimiter)\n- )\n- except AttributeError:\n+ # attempting to deserialize from a non-string source is an error\n+ if not isinstance(value, (str, bytes)):\n if MARSHMALLOW_VERSION_INFO[0] < 3:\n self.fail(\"invalid\")\n else:\n raise self.make_error(\"invalid\")\n- return super()._deserialize(ret, attr, data, **kwargs)\n+ return super()._deserialize(value.split(self.delimiter), attr, data, **kwargs)\n", "issue": "RFC: Only accept delimited string in DelimitedList\n`DelimitedList` accepts either a list or a delimited string (e.g. \"foo,bar,baz\").\r\n\r\nI'd like to make it more strict by only accepting a delimited list. Rather than adding a `strict` parameter, I'm thinking of dropping the whole \"also accept a list\" feature.\r\n\r\nAny reason to support both?\r\n\r\nI understand it inherits from `List` because once the string is parsed, it can be deserialized as a normal list. But are there cases where you'd expect either a list or a delimited string?\n", "before_files": [{"content": "\"\"\"Field classes.\n\nIncludes all fields from `marshmallow.fields` in addition to a custom\n`Nested` field and `DelimitedList`.\n\nAll fields can optionally take a special `location` keyword argument, which\ntells webargs where to parse the request argument from.\n\n.. 
code-block:: python\n\n args = {\n \"active\": fields.Bool(location=\"query\"),\n \"content_type\": fields.Str(data_key=\"Content-Type\", location=\"headers\"),\n }\n\nNote: `data_key` replaced `load_from` in marshmallow 3.\nWhen using marshmallow 2, use `load_from`.\n\"\"\"\nimport marshmallow as ma\n\n# Expose all fields from marshmallow.fields.\nfrom marshmallow.fields import * # noqa: F40\nfrom webargs.compat import MARSHMALLOW_VERSION_INFO\nfrom webargs.dict2schema import dict2schema\n\n__all__ = [\"DelimitedList\"] + ma.fields.__all__\n\n\nclass Nested(ma.fields.Nested):\n \"\"\"Same as `marshmallow.fields.Nested`, except can be passed a dictionary as\n the first argument, which will be converted to a `marshmallow.Schema`.\n\n .. note::\n\n The schema class here will always be `marshmallow.Schema`, regardless\n of whether a custom schema class is set on the parser. Pass an explicit schema\n class if necessary.\n \"\"\"\n\n def __init__(self, nested, *args, **kwargs):\n if isinstance(nested, dict):\n nested = dict2schema(nested)\n super().__init__(nested, *args, **kwargs)\n\n\nclass DelimitedList(ma.fields.List):\n \"\"\"Same as `marshmallow.fields.List`, except can load from either a list or\n a delimited string (e.g. \"foo,bar,baz\").\n\n :param Field cls_or_instance: A field class or instance.\n :param str delimiter: Delimiter between values.\n :param bool as_string: Dump values to string.\n \"\"\"\n\n delimiter = \",\"\n\n def __init__(self, cls_or_instance, delimiter=None, as_string=False, **kwargs):\n self.delimiter = delimiter or self.delimiter\n self.as_string = as_string\n super().__init__(cls_or_instance, **kwargs)\n\n def _serialize(self, value, attr, obj):\n ret = super()._serialize(value, attr, obj)\n if self.as_string:\n return self.delimiter.join(format(each) for each in ret)\n return ret\n\n def _deserialize(self, value, attr, data, **kwargs):\n try:\n ret = (\n value\n if ma.utils.is_iterable_but_not_string(value)\n else value.split(self.delimiter)\n )\n except AttributeError:\n if MARSHMALLOW_VERSION_INFO[0] < 3:\n self.fail(\"invalid\")\n else:\n raise self.make_error(\"invalid\")\n return super()._deserialize(ret, attr, data, **kwargs)\n", "path": "src/webargs/fields.py"}]} | 1,439 | 565 |
gh_patches_debug_16085 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-508 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create canonical ordering for tables and return it by default
## Problem
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
We should have a canonical ordering for each table whenever possible. This will make infinite scroll easier to deal with.
## Proposed solution
<!-- A clear and concise description of your proposed solution or feature. -->
We should order by primary key by default if the table has one. Otherwise, we should use the entire row as a sorting key (it won't be possible to tell the difference if two identical rows "switch order"). We should always return rows in these orders when they are accessed unless the user specifies a different `ORDER BY`.
## Additional context
<!-- Add any other context or screenshots about the feature request here.-->
- Please see #361
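As a rough sketch of the proposal only (the helper name `default_order_by` is hypothetical, not part of the codebase), the fallback ordering could be derived from the SQLAlchemy table metadata and expressed in the same sort format that `_get_query` already passes to sqlalchemy-filters:

```python
# Hypothetical sketch: build a canonical ORDER BY when none is requested.
def default_order_by(table):
    # Prefer the primary key; if the table has none, order by every column
    # so that identical requests return rows in a stable order.
    columns = list(table.primary_key.columns) or list(table.columns)
    return [{'field': col, 'direction': 'asc'} for col in columns]
```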
</issue>
<code>
[start of db/records.py]
1 import logging
2 from sqlalchemy import delete, select, Column, func
3 from sqlalchemy.inspection import inspect
4 from sqlalchemy_filters import apply_filters, apply_sort
5 from sqlalchemy_filters.exceptions import FieldNotFound
6
7
8 logger = logging.getLogger(__name__)
9
10
11 # Grouping exceptions follow the sqlalchemy_filters exceptions patterns
12 class BadGroupFormat(Exception):
13 pass
14
15
16 class GroupFieldNotFound(FieldNotFound):
17 pass
18
19
20 def _get_primary_key_column(table):
21 primary_key_list = list(inspect(table).primary_key)
22 # We do not support getting by composite primary keys
23 assert len(primary_key_list) == 1
24 return primary_key_list[0]
25
26
27 def _create_col_objects(table, column_list):
28 return [
29 table.columns[col] if type(col) == str else table.columns[col.name]
30 for col in column_list
31 ]
32
33
34 def _get_query(table, limit, offset, order_by, filters):
35 query = select(table).limit(limit).offset(offset)
36 if order_by is not None:
37 query = apply_sort(query, order_by)
38 if filters is not None:
39 query = apply_filters(query, filters)
40 return query
41
42
43 def _execute_query(query, engine):
44 with engine.begin() as conn:
45 records = conn.execute(query).fetchall()
46 return records
47
48
49 def get_record(table, engine, id_value):
50 primary_key_column = _get_primary_key_column(table)
51 query = select(table).where(primary_key_column == id_value)
52 result = _execute_query(query, engine)
53 assert len(result) <= 1
54 return result[0] if result else None
55
56
57 def get_records(
58 table, engine, limit=None, offset=None, order_by=[], filters=[],
59 ):
60 """
61 Returns records from a table.
62
63 Args:
64 table: SQLAlchemy table object
65 engine: SQLAlchemy engine object
66 limit: int, gives number of rows to return
67 offset: int, gives number of rows to skip
68 order_by: list of dictionaries, where each dictionary has a 'field' and
69 'direction' field.
70 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
71 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
72 field, in addition to an 'value' field if appropriate.
73 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
74 """
75 query = _get_query(table, limit, offset, order_by, filters)
76 return _execute_query(query, engine)
77
78
79 def get_group_counts(
80 table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],
81 ):
82 """
83 Returns counts by specified groupings
84
85 Args:
86 table: SQLAlchemy table object
87 engine: SQLAlchemy engine object
88 limit: int, gives number of rows to return
89 offset: int, gives number of rows to skip
90 group_by: list or tuple of column names or column objects to group by
91 order_by: list of dictionaries, where each dictionary has a 'field' and
92 'direction' field.
93 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
94 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
95 field, in addition to an 'value' field if appropriate.
96 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
97 """
98 if type(group_by) not in (tuple, list):
99 raise BadGroupFormat(f"Group spec {group_by} must be list or tuple.")
100 for field in group_by:
101 if type(field) not in (str, Column):
102 raise BadGroupFormat(f"Group field {field} must be a string or Column.")
103 field_name = field if type(field) == str else field.name
104 if field_name not in table.c:
105 raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
106
107 table_columns = _create_col_objects(table, group_by)
108 count_query = (
109 select(*table_columns, func.count(table_columns[0]))
110 .group_by(*table_columns)
111 )
112 if filters is not None:
113 count_query = apply_filters(count_query, filters)
114 filtered_count_query = _get_filtered_group_by_count_query(
115 table, engine, group_by, limit, offset, order_by, filters, count_query
116 )
117 if filtered_count_query is not None:
118 records = _execute_query(filtered_count_query, engine)
119 # Last field is the count, preceding fields are the group by fields
120 counts = {(*record[:-1],): record[-1] for record in records}
121 else:
122 counts = {}
123 return counts
124
125
126 def _get_filtered_group_by_count_query(
127 table, engine, group_by, limit, offset, order_by, filters, count_query
128 ):
129 # Get the list of groups that we should count.
130 # We're considering limit and offset here so that we only count relevant groups
131 relevant_subtable_query = _get_query(table, limit, offset, order_by, filters)
132 relevant_subtable_cte = relevant_subtable_query.cte()
133 cte_columns = _create_col_objects(relevant_subtable_cte, group_by)
134 distinct_tuples = get_distinct_tuple_values(cte_columns, engine, output_table=table)
135 if distinct_tuples:
136 limited_filters = [
137 {
138 "or": [
139 distinct_tuples_to_filter(distinct_tuple_spec)
140 for distinct_tuple_spec in distinct_tuples
141 ]
142 }
143 ]
144 filtered_count_query = apply_filters(count_query, limited_filters)
145 else:
146 filtered_count_query = None
147 return filtered_count_query
148
149
150 def get_distinct_tuple_values(
151 column_list, engine, table=None, limit=None, offset=None, output_table=None
152 ):
153 """
154 Returns distinct tuples from a given list of columns.
155
156 Args:
157 column_list: list of column names or SQLAlchemy column objects
158 engine: SQLAlchemy engine object
159 table: SQLAlchemy table object
160 limit: int, gives number of rows to return
161 offset: int, gives number of rows to skip
162
163 If no table is given, the column_list must consist entirely of
164 SQLAlchemy column objects associated with a table.
165 """
166 if table is not None:
167 column_objects = _create_col_objects(table, column_list)
168 else:
169 column_objects = column_list
170 try:
171 assert all([type(col) == Column for col in column_objects])
172 except AssertionError as e:
173 logger.error("All columns must be str or sqlalchemy.Column type")
174 raise e
175
176 query = (
177 select(*column_objects)
178 .distinct()
179 .limit(limit)
180 .offset(offset)
181 )
182 result = _execute_query(query, engine)
183 if output_table is not None:
184 column_objects = [output_table.columns[col.name] for col in column_objects]
185 return [tuple(zip(column_objects, row)) for row in result]
186
187
188 def distinct_tuples_to_filter(distinct_tuples):
189 filters = []
190 for col, value in distinct_tuples:
191 filters.append({
192 "field": col,
193 "op": "==",
194 "value": value,
195 })
196 return filters
197
198
199 def create_record_or_records(table, engine, record_data):
200 """
201 record_data can be a dictionary, tuple, or list of dictionaries or tuples.
202 if record_data is a list, it creates multiple records.
203 """
204 id_value = None
205 with engine.begin() as connection:
206 result = connection.execute(table.insert(), record_data)
207 # If there was only a single record created, return the record.
208 if result.rowcount == 1:
209 # We need to manually commit insertion so that we can retrieve the record.
210 connection.commit()
211 id_value = result.inserted_primary_key[0]
212 if id_value is not None:
213 return get_record(table, engine, id_value)
214 # Do not return any records if multiple rows were added.
215 return None
216
217
218 def create_records_from_csv(
219 table,
220 engine,
221 csv_filename,
222 column_names,
223 header,
224 delimiter=None,
225 escape=None,
226 quote=None,
227 ):
228 with open(csv_filename, 'rb') as csv_file:
229 with engine.begin() as conn:
230 cursor = conn.connection.cursor()
231 relation = '.'.join('"{}"'.format(part) for part in (table.schema, table.name))
232 formatted_columns = '({})'.format(','.join([f'"{column_name}"' for column_name in column_names]))
233
234 copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'
235 if header:
236 copy_sql += " HEADER"
237 if delimiter:
238 copy_sql += f" DELIMITER E'{delimiter}'"
239 if escape:
240 copy_sql += f" ESCAPE '{escape}'"
241 if quote:
242 if quote == "'":
243 quote = "''"
244 copy_sql += f" QUOTE '{quote}'"
245
246 cursor.copy_expert(copy_sql, csv_file)
247
248
249 def update_record(table, engine, id_value, record_data):
250 primary_key_column = _get_primary_key_column(table)
251 with engine.begin() as connection:
252 connection.execute(
253 table.update().where(primary_key_column == id_value).values(record_data)
254 )
255 return get_record(table, engine, id_value)
256
257
258 def delete_record(table, engine, id_value):
259 primary_key_column = _get_primary_key_column(table)
260 query = delete(table).where(primary_key_column == id_value)
261 with engine.begin() as conn:
262 return conn.execute(query)
263
[end of db/records.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/records.py b/db/records.py
--- a/db/records.py
+++ b/db/records.py
@@ -72,6 +72,17 @@
field, in addition to an 'value' field if appropriate.
See: https://github.com/centerofci/sqlalchemy-filters#filters-format
"""
+ if not order_by:
+ # Set default ordering if none was requested
+ if len(table.primary_key.columns) > 0:
+ # If there are primary keys, order by all primary keys
+ order_by = [{'field': col, 'direction': 'asc'}
+ for col in table.primary_key.columns]
+ else:
+ # If there aren't primary keys, order by all columns
+ order_by = [{'field': col, 'direction': 'asc'}
+ for col in table.columns]
+
query = _get_query(table, limit, offset, order_by, filters)
return _execute_query(query, engine)
| {"golden_diff": "diff --git a/db/records.py b/db/records.py\n--- a/db/records.py\n+++ b/db/records.py\n@@ -72,6 +72,17 @@\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n+ if not order_by:\n+ # Set default ordering if none was requested\n+ if len(table.primary_key.columns) > 0:\n+ # If there are primary keys, order by all primary keys\n+ order_by = [{'field': col, 'direction': 'asc'}\n+ for col in table.primary_key.columns]\n+ else:\n+ # If there aren't primary keys, order by all columns\n+ order_by = [{'field': col, 'direction': 'asc'}\n+ for col in table.columns]\n+\n query = _get_query(table, limit, offset, order_by, filters)\n return _execute_query(query, engine)\n", "issue": "Create canonical ordering for tables and return it by default\n## Problem\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nWe should have a canonical ordering for each table whenever possible. This will make infinite scroll easier to deal with.\r\n\r\n## Proposed solution\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nWe should order by primary key by default if the table has one. Otherwise, we should use the entire row as a sorting key (it won't be possible to tell the difference if two identical rows \"switch order\"). We should always return rows in these orders when they are accessed unless the user specifies a different `ORDER BY`.\r\n\r\n## Additional context\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\n- Please see #361\n", "before_files": [{"content": "import logging\nfrom sqlalchemy import delete, select, Column, func\nfrom sqlalchemy.inspection import inspect\nfrom sqlalchemy_filters import apply_filters, apply_sort\nfrom sqlalchemy_filters.exceptions import FieldNotFound\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Grouping exceptions follow the sqlalchemy_filters exceptions patterns\nclass BadGroupFormat(Exception):\n pass\n\n\nclass GroupFieldNotFound(FieldNotFound):\n pass\n\n\ndef _get_primary_key_column(table):\n primary_key_list = list(inspect(table).primary_key)\n # We do not support getting by composite primary keys\n assert len(primary_key_list) == 1\n return primary_key_list[0]\n\n\ndef _create_col_objects(table, column_list):\n return [\n table.columns[col] if type(col) == str else table.columns[col.name]\n for col in column_list\n ]\n\n\ndef _get_query(table, limit, offset, order_by, filters):\n query = select(table).limit(limit).offset(offset)\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n return query\n\n\ndef _execute_query(query, engine):\n with engine.begin() as conn:\n records = conn.execute(query).fetchall()\n return records\n\n\ndef get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n result = _execute_query(query, engine)\n assert len(result) <= 1\n return result[0] if result else None\n\n\ndef get_records(\n table, engine, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns records from a table.\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 
'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n query = _get_query(table, limit, offset, order_by, filters)\n return _execute_query(query, engine)\n\n\ndef get_group_counts(\n table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns counts by specified groupings\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n group_by: list or tuple of column names or column objects to group by\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n if type(group_by) not in (tuple, list):\n raise BadGroupFormat(f\"Group spec {group_by} must be list or tuple.\")\n for field in group_by:\n if type(field) not in (str, Column):\n raise BadGroupFormat(f\"Group field {field} must be a string or Column.\")\n field_name = field if type(field) == str else field.name\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n\n table_columns = _create_col_objects(table, group_by)\n count_query = (\n select(*table_columns, func.count(table_columns[0]))\n .group_by(*table_columns)\n )\n if filters is not None:\n count_query = apply_filters(count_query, filters)\n filtered_count_query = _get_filtered_group_by_count_query(\n table, engine, group_by, limit, offset, order_by, filters, count_query\n )\n if filtered_count_query is not None:\n records = _execute_query(filtered_count_query, engine)\n # Last field is the count, preceding fields are the group by fields\n counts = {(*record[:-1],): record[-1] for record in records}\n else:\n counts = {}\n return counts\n\n\ndef _get_filtered_group_by_count_query(\n table, engine, group_by, limit, offset, order_by, filters, count_query\n):\n # Get the list of groups that we should count.\n # We're considering limit and offset here so that we only count relevant groups\n relevant_subtable_query = _get_query(table, limit, offset, order_by, filters)\n relevant_subtable_cte = relevant_subtable_query.cte()\n cte_columns = _create_col_objects(relevant_subtable_cte, group_by)\n distinct_tuples = get_distinct_tuple_values(cte_columns, engine, output_table=table)\n if distinct_tuples:\n limited_filters = [\n {\n \"or\": [\n distinct_tuples_to_filter(distinct_tuple_spec)\n for distinct_tuple_spec in distinct_tuples\n ]\n }\n ]\n filtered_count_query = apply_filters(count_query, limited_filters)\n else:\n filtered_count_query = None\n return filtered_count_query\n\n\ndef get_distinct_tuple_values(\n column_list, engine, table=None, limit=None, offset=None, output_table=None\n):\n \"\"\"\n Returns distinct tuples from a given list of columns.\n\n Args:\n column_list: list of column names or SQLAlchemy column objects\n engine: SQLAlchemy engine object\n table: SQLAlchemy table object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n\n If no table is given, the column_list must 
consist entirely of\n SQLAlchemy column objects associated with a table.\n \"\"\"\n if table is not None:\n column_objects = _create_col_objects(table, column_list)\n else:\n column_objects = column_list\n try:\n assert all([type(col) == Column for col in column_objects])\n except AssertionError as e:\n logger.error(\"All columns must be str or sqlalchemy.Column type\")\n raise e\n\n query = (\n select(*column_objects)\n .distinct()\n .limit(limit)\n .offset(offset)\n )\n result = _execute_query(query, engine)\n if output_table is not None:\n column_objects = [output_table.columns[col.name] for col in column_objects]\n return [tuple(zip(column_objects, row)) for row in result]\n\n\ndef distinct_tuples_to_filter(distinct_tuples):\n filters = []\n for col, value in distinct_tuples:\n filters.append({\n \"field\": col,\n \"op\": \"==\",\n \"value\": value,\n })\n return filters\n\n\ndef create_record_or_records(table, engine, record_data):\n \"\"\"\n record_data can be a dictionary, tuple, or list of dictionaries or tuples.\n if record_data is a list, it creates multiple records.\n \"\"\"\n id_value = None\n with engine.begin() as connection:\n result = connection.execute(table.insert(), record_data)\n # If there was only a single record created, return the record.\n if result.rowcount == 1:\n # We need to manually commit insertion so that we can retrieve the record.\n connection.commit()\n id_value = result.inserted_primary_key[0]\n if id_value is not None:\n return get_record(table, engine, id_value)\n # Do not return any records if multiple rows were added.\n return None\n\n\ndef create_records_from_csv(\n table,\n engine,\n csv_filename,\n column_names,\n header,\n delimiter=None,\n escape=None,\n quote=None,\n):\n with open(csv_filename, 'rb') as csv_file:\n with engine.begin() as conn:\n cursor = conn.connection.cursor()\n relation = '.'.join('\"{}\"'.format(part) for part in (table.schema, table.name))\n formatted_columns = '({})'.format(','.join([f'\"{column_name}\"' for column_name in column_names]))\n\n copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV'\n if header:\n copy_sql += \" HEADER\"\n if delimiter:\n copy_sql += f\" DELIMITER E'{delimiter}'\"\n if escape:\n copy_sql += f\" ESCAPE '{escape}'\"\n if quote:\n if quote == \"'\":\n quote = \"''\"\n copy_sql += f\" QUOTE '{quote}'\"\n\n cursor.copy_expert(copy_sql, csv_file)\n\n\ndef update_record(table, engine, id_value, record_data):\n primary_key_column = _get_primary_key_column(table)\n with engine.begin() as connection:\n connection.execute(\n table.update().where(primary_key_column == id_value).values(record_data)\n )\n return get_record(table, engine, id_value)\n\n\ndef delete_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = delete(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n return conn.execute(query)\n", "path": "db/records.py"}]} | 3,442 | 218 |
gh_patches_debug_49452 | rasdani/github-patches | git_diff | wagtail__wagtail-840 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Paginator and search pagination expect different parameters for page
The Paginator (as in `django.core.paginator`), which is used pretty much everywhere, uses `page` as the query parameter. The search view, however, [expects](https://github.com/torchbox/wagtail/blob/100797796df0bc8ca96035092f32a9275d2b3713/wagtail/wagtailsearch/views/queries.py#L28) a `p` query parameter for pagination.
While not a bug, it is a bit confusing and makes it less elegant to share a pagination include. Certainly made me scratch my head.
Worth a PR?
Cheers,
Dan
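For context, a minimal backwards-compatible reading of the page number could look like the sketch below (`get_page_number` is a hypothetical helper, not an actual Wagtail API; the eventual fix may well differ):

```python
# Sketch: accept ?page= (the Paginator convention) while keeping ?p= working.
def get_page_number(request, default=1):
    return request.GET.get('page', request.GET.get('p', default))
```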
</issue>
<code>
[start of wagtail/wagtailsearch/views/frontend.py]
1 import json
2
3 from django.conf import settings
4 from django.shortcuts import render
5 from django.http import HttpResponse
6 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
7
8 from wagtail.wagtailcore import models
9 from wagtail.wagtailsearch.models import Query
10
11
12 def search(
13 request,
14 template=None,
15 template_ajax=None,
16 results_per_page=10,
17 use_json=False,
18 json_attrs=['title', 'url'],
19 show_unpublished=False,
20 search_title_only=False,
21 extra_filters={},
22 path=None,
23 ):
24
25 # Get default templates
26 if template is None:
27 if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):
28 template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE
29 else:
30 template = 'wagtailsearch/search_results.html'
31
32 if template_ajax is None:
33 if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):
34 template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX
35 else:
36 template_ajax = template
37
38 # Get query string and page from GET paramters
39 query_string = request.GET.get('q', '')
40 page = request.GET.get('p', 1)
41
42 # Search
43 if query_string != '':
44 search_results = models.Page.search(
45 query_string,
46 show_unpublished=show_unpublished,
47 search_title_only=search_title_only,
48 extra_filters=extra_filters,
49 path=path if path else request.site.root_page.path
50 )
51
52 # Get query object
53 query = Query.get(query_string)
54
55 # Add hit
56 query.add_hit()
57
58 # Pagination
59 paginator = Paginator(search_results, results_per_page)
60 try:
61 search_results = paginator.page(page)
62 except PageNotAnInteger:
63 search_results = paginator.page(1)
64 except EmptyPage:
65 search_results = paginator.page(paginator.num_pages)
66 else:
67 query = None
68 search_results = None
69
70 if use_json: # Return a json response
71 if search_results:
72 search_results_json = []
73 for result in search_results:
74 result_specific = result.specific
75
76 search_results_json.append(dict(
77 (attr, getattr(result_specific, attr))
78 for attr in json_attrs
79 if hasattr(result_specific, attr)
80 ))
81
82 return HttpResponse(json.dumps(search_results_json))
83 else:
84 return HttpResponse('[]')
85 else: # Render a template
86 if request.is_ajax() and template_ajax:
87 template = template_ajax
88
89 return render(request, template, dict(
90 query_string=query_string,
91 search_results=search_results,
92 is_ajax=request.is_ajax(),
93 query=query
94 ))
95
[end of wagtail/wagtailsearch/views/frontend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailsearch/views/frontend.py b/wagtail/wagtailsearch/views/frontend.py
--- a/wagtail/wagtailsearch/views/frontend.py
+++ b/wagtail/wagtailsearch/views/frontend.py
@@ -37,7 +37,7 @@
# Get query string and page from GET paramters
query_string = request.GET.get('q', '')
- page = request.GET.get('p', 1)
+ page = request.GET.get('page', request.GET.get('p', 1))
# Search
if query_string != '':
| {"golden_diff": "diff --git a/wagtail/wagtailsearch/views/frontend.py b/wagtail/wagtailsearch/views/frontend.py\n--- a/wagtail/wagtailsearch/views/frontend.py\n+++ b/wagtail/wagtailsearch/views/frontend.py\n@@ -37,7 +37,7 @@\n \n # Get query string and page from GET paramters\n query_string = request.GET.get('q', '')\n- page = request.GET.get('p', 1)\n+ page = request.GET.get('page', request.GET.get('p', 1))\n \n # Search\n if query_string != '':\n", "issue": "Paginator and search pagination expect different parameters for page\nThe Paginator (as in `django.core.paginator`) used pretty much everywhere uses `page` as the query parameter. The search view, however, [expects](https://github.com/torchbox/wagtail/blob/100797796df0bc8ca96035092f32a9275d2b3713/wagtail/wagtailsearch/views/queries.py#L28) a `p` query parameter for pagination.\n\nWhile not a bug, it is a bit confusing and makes it less elegant to share a pagination include. Certainly made me scratch my head.\n\nWorth a PR?\n\nCheers,\nDan\n\n", "before_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.http import HttpResponse\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\n\nfrom wagtail.wagtailcore import models\nfrom wagtail.wagtailsearch.models import Query\n\n\ndef search(\n request,\n template=None,\n template_ajax=None,\n results_per_page=10,\n use_json=False,\n json_attrs=['title', 'url'],\n show_unpublished=False,\n search_title_only=False,\n extra_filters={},\n path=None,\n ):\n\n # Get default templates\n if template is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):\n template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE\n else:\n template = 'wagtailsearch/search_results.html'\n\n if template_ajax is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):\n template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX\n else:\n template_ajax = template\n\n # Get query string and page from GET paramters\n query_string = request.GET.get('q', '')\n page = request.GET.get('p', 1)\n\n # Search\n if query_string != '':\n search_results = models.Page.search(\n query_string,\n show_unpublished=show_unpublished,\n search_title_only=search_title_only,\n extra_filters=extra_filters,\n path=path if path else request.site.root_page.path\n )\n\n # Get query object\n query = Query.get(query_string)\n\n # Add hit\n query.add_hit()\n\n # Pagination\n paginator = Paginator(search_results, results_per_page)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n search_results = paginator.page(1)\n except EmptyPage:\n search_results = paginator.page(paginator.num_pages)\n else:\n query = None\n search_results = None\n\n if use_json: # Return a json response\n if search_results:\n search_results_json = []\n for result in search_results:\n result_specific = result.specific\n\n search_results_json.append(dict(\n (attr, getattr(result_specific, attr))\n for attr in json_attrs\n if hasattr(result_specific, attr)\n ))\n\n return HttpResponse(json.dumps(search_results_json))\n else:\n return HttpResponse('[]')\n else: # Render a template\n if request.is_ajax() and template_ajax:\n template = template_ajax\n\n return render(request, template, dict(\n query_string=query_string,\n search_results=search_results,\n is_ajax=request.is_ajax(),\n query=query\n ))\n", "path": "wagtail/wagtailsearch/views/frontend.py"}]} | 1,445 | 131 |
gh_patches_debug_10307 | rasdani/github-patches | git_diff | getnikola__nikola-2238 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deploy crashes with state system
Will investigate later.
``` pytb
Traceback (most recent call last):
File "/home/kwpolska/virtualenvs/nikola-py3/lib/python3.5/site-packages/doit/doit_cmd.py", line 168, in run
return command.parse_execute(args)
File "/home/kwpolska/virtualenvs/nikola-py3/lib/python3.5/site-packages/doit/cmd_base.py", line 122, in parse_execute
return self.execute(params, args)
File "/home/kwpolska/git/nikola/nikola/plugin_categories.py", line 124, in execute
return self._execute(options, args)
File "/home/kwpolska/git/nikola/nikola/plugins/command/deploy.py", line 135, in _execute
self.site.state.set('last_deploy', new_deploy.isoformat())
File "/home/kwpolska/git/nikola/nikola/state.py", line 64, in set
self._save()
File "/home/kwpolska/git/nikola/nikola/state.py", line 82, in _save
json.dump(self._local.data, outf, sort_keys=True, indent=2)
File "/usr/lib64/python3.5/json/__init__.py", line 179, in dump
fp.write(chunk)
File "/home/kwpolska/virtualenvs/nikola-py3/lib/python3.5/tempfile.py", line 483, in func_wrapper
return func(*args, **kwargs)
TypeError: a bytes-like object is required, not 'str'
```
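For what it's worth, the traceback points at `json.dump` writing `str` into a `NamedTemporaryFile` opened in its default binary mode. A standalone Python 3 sketch, independent of Nikola itself, that shows the constraint and one possible workaround (encoding to UTF-8 before writing):

```python
# Minimal reproduction/workaround sketch on Python 3.
import json
import tempfile

data = {'last_deploy': '2016-01-01T00:00:00'}

with tempfile.NamedTemporaryFile(delete=False) as outf:  # default mode is 'w+b' (bytes)
    payload = json.dumps(data, sort_keys=True, indent=2)
    # json.dump(data, outf) would raise TypeError: a bytes-like object is required
    outf.write(payload.encode('utf-8'))
    print(outf.name)
```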
</issue>
<code>
[start of nikola/state.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2016 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Persistent state implementation."""
28
29 import json
30 import os
31 import shutil
32 import tempfile
33 import threading
34
35 from . import utils
36
37
38 class Persistor():
39 """Persist stuff in a place.
40
41 This is an intentionally dumb implementation. It is *not* meant to be
42 fast, or useful for arbitrarily large data. Use lightly.
43
44 Intentionally it has no namespaces, sections, etc. Use as a
45 responsible adult.
46 """
47
48 def __init__(self, path):
49 """Where do you want it persisted."""
50 self._path = path
51 utils.makedirs(os.path.dirname(path))
52 self._local = threading.local()
53 self._local.data = {}
54
55 def get(self, key):
56 """Get data stored in key."""
57 self._read()
58 return self._local.data.get(key)
59
60 def set(self, key, value):
61 """Store value in key."""
62 self._read()
63 self._local.data[key] = value
64 self._save()
65
66 def delete(self, key):
67 """Delete key and the value it contains."""
68 self._read()
69 if key in self._local.data:
70 self._local.data.pop(key)
71 self._save()
72
73 def _read(self):
74 if os.path.isfile(self._path):
75 with open(self._path) as inf:
76 self._local.data = json.load(inf)
77
78 def _save(self):
79 dname = os.path.dirname(self._path)
80 with tempfile.NamedTemporaryFile(dir=dname, delete=False) as outf:
81 tname = outf.name
82 json.dump(self._local.data, outf, sort_keys=True, indent=2)
83 shutil.move(tname, self._path)
84
[end of nikola/state.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/state.py b/nikola/state.py
--- a/nikola/state.py
+++ b/nikola/state.py
@@ -78,6 +78,11 @@
def _save(self):
dname = os.path.dirname(self._path)
with tempfile.NamedTemporaryFile(dir=dname, delete=False) as outf:
+ # TODO replace with encoding='utf-8' and mode 'w+' in v8
tname = outf.name
- json.dump(self._local.data, outf, sort_keys=True, indent=2)
+ data = json.dumps(self._local.data, sort_keys=True, indent=2)
+ try:
+ outf.write(data)
+ except TypeError:
+ outf.write(data.encode('utf-8'))
shutil.move(tname, self._path)
| {"golden_diff": "diff --git a/nikola/state.py b/nikola/state.py\n--- a/nikola/state.py\n+++ b/nikola/state.py\n@@ -78,6 +78,11 @@\n def _save(self):\n dname = os.path.dirname(self._path)\n with tempfile.NamedTemporaryFile(dir=dname, delete=False) as outf:\n+ # TODO replace with encoding='utf-8' and mode 'w+' in v8\n tname = outf.name\n- json.dump(self._local.data, outf, sort_keys=True, indent=2)\n+ data = json.dumps(self._local.data, sort_keys=True, indent=2)\n+ try:\n+ outf.write(data)\n+ except TypeError:\n+ outf.write(data.encode('utf-8'))\n shutil.move(tname, self._path)\n", "issue": "deploy crashes with state system\nWill investigate later.\n\n``` pytb\nTraceback (most recent call last):\n File \"/home/kwpolska/virtualenvs/nikola-py3/lib/python3.5/site-packages/doit/doit_cmd.py\", line 168, in run\n return command.parse_execute(args)\n File \"/home/kwpolska/virtualenvs/nikola-py3/lib/python3.5/site-packages/doit/cmd_base.py\", line 122, in parse_execute\n return self.execute(params, args)\n File \"/home/kwpolska/git/nikola/nikola/plugin_categories.py\", line 124, in execute\n return self._execute(options, args)\n File \"/home/kwpolska/git/nikola/nikola/plugins/command/deploy.py\", line 135, in _execute\n self.site.state.set('last_deploy', new_deploy.isoformat())\n File \"/home/kwpolska/git/nikola/nikola/state.py\", line 64, in set\n self._save()\n File \"/home/kwpolska/git/nikola/nikola/state.py\", line 82, in _save\n json.dump(self._local.data, outf, sort_keys=True, indent=2)\n File \"/usr/lib64/python3.5/json/__init__.py\", line 179, in dump\n fp.write(chunk)\n File \"/home/kwpolska/virtualenvs/nikola-py3/lib/python3.5/tempfile.py\", line 483, in func_wrapper\n return func(*args, **kwargs)\nTypeError: a bytes-like object is required, not 'str'\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2016 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Persistent state implementation.\"\"\"\n\nimport json\nimport os\nimport shutil\nimport tempfile\nimport threading\n\nfrom . import utils\n\n\nclass Persistor():\n \"\"\"Persist stuff in a place.\n\n This is an intentionally dumb implementation. It is *not* meant to be\n fast, or useful for arbitrarily large data. Use lightly.\n\n Intentionally it has no namespaces, sections, etc. 
Use as a\n responsible adult.\n \"\"\"\n\n def __init__(self, path):\n \"\"\"Where do you want it persisted.\"\"\"\n self._path = path\n utils.makedirs(os.path.dirname(path))\n self._local = threading.local()\n self._local.data = {}\n\n def get(self, key):\n \"\"\"Get data stored in key.\"\"\"\n self._read()\n return self._local.data.get(key)\n\n def set(self, key, value):\n \"\"\"Store value in key.\"\"\"\n self._read()\n self._local.data[key] = value\n self._save()\n\n def delete(self, key):\n \"\"\"Delete key and the value it contains.\"\"\"\n self._read()\n if key in self._local.data:\n self._local.data.pop(key)\n self._save()\n\n def _read(self):\n if os.path.isfile(self._path):\n with open(self._path) as inf:\n self._local.data = json.load(inf)\n\n def _save(self):\n dname = os.path.dirname(self._path)\n with tempfile.NamedTemporaryFile(dir=dname, delete=False) as outf:\n tname = outf.name\n json.dump(self._local.data, outf, sort_keys=True, indent=2)\n shutil.move(tname, self._path)\n", "path": "nikola/state.py"}]} | 1,682 | 180 |
gh_patches_debug_29916 | rasdani/github-patches | git_diff | chainer__chainer-4738 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
F.bilinear requires huge GPU memory
The following code results in `cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 18014398509481984 bytes (total 18014399785863168 bytes)`:
```
import chainer
import cupy
b = chainer.links.Bilinear(256, 256, 256).to_gpu()
e1 = cupy.random.randn(64, 256).astype('f')
e2 = cupy.random.randn(64, 256).astype('f')
y = b(e1, e2)
print(y)
```
How to fix: merge cupy/cupy#1218 (or do not use `cupy.einsum`).
I confirmed the code runs in ~5 sec with:
- chainer: master(6bab773dec70f291108ab2575622805252f9a208)
- cupy: (Merge: cupy/cupy@6162f9a cupy/cupy@7f89bd0)
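For reference, the contraction `y_il = Σ_jk e1_ij · e2_ik · W_jkl` does not need a huge intermediate; the NumPy sketch below shows an einsum-free matmul formulation (illustrative only, not the actual Chainer/CuPy fix):

```python
# Illustrative sketch: the same bilinear contraction without einsum.
import numpy as np

def bilinear_forward(e1, e2, W):
    # e1: (batch, J), e2: (batch, K), W: (J, K, L) -> y: (batch, L)
    batch, j = e1.shape
    k, l = W.shape[1], W.shape[2]
    outer = (e1[:, :, None] * e2[:, None, :]).reshape(batch, j * k)  # (batch, J*K)
    return outer.dot(W.reshape(j * k, l))                            # (batch, L)

e1 = np.random.randn(64, 256).astype('f')
e2 = np.random.randn(64, 256).astype('f')
W = np.random.randn(256, 256, 256).astype('f')
y = bilinear_forward(e1, e2, W)
print(y.shape)  # (64, 256)
```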
</issue>
<code>
[start of chainer/functions/connection/bilinear.py]
1 import numpy
2
3 import chainer
4 from chainer.backends import cuda
5 from chainer import function_node
6 from chainer.utils import type_check
7
8
9 def _as_mat(x):
10 if x.ndim == 2:
11 return x
12 return x.reshape(len(x), -1)
13
14
15 def _ij_ik_il_to_jkl(a, b, c):
16 ab = chainer.functions.matmul(a[:, :, None], b[:, None, :]) # ijk
17 return chainer.functions.matmul(_as_mat(ab).T, c).reshape(
18 a.shape[1], b.shape[1], c.shape[1])
19
20
21 def _ij_ik_jkl_to_il(a, b, c):
22 ab = chainer.functions.matmul(a[:, :, None], b[:, None, :]) # ijk
23 c = c.reshape(-1, c.shape[-1]) # [jk]l
24 return chainer.functions.matmul(_as_mat(ab), c)
25
26
27 def _ij_il_jkl_to_ik(a, b, c):
28 return _ij_ik_jkl_to_il(a, b, chainer.functions.swapaxes(c, 1, 2))
29
30
31 def _ik_il_jkl_to_ij(a, b, c):
32 return _ij_ik_jkl_to_il(a, b, chainer.functions.rollaxis(c, 0, c.ndim))
33
34
35 class BilinearFunction(function_node.FunctionNode):
36 def check_type_forward(self, in_types):
37 n_in = type_check.eval(in_types.size())
38 if n_in != 3 and n_in != 6:
39 raise type_check.InvalidType(
40 '{0} or {1}'.format(
41 in_types.size() == 3, in_types.size() == 6),
42 '{0} == {1}'.format(in_types.size(), n_in))
43
44 e1_type, e2_type, W_type = in_types[:3]
45 type_check_prod = type_check.make_variable(numpy.prod, 'prod')
46 type_check.expect(
47 e1_type.dtype == numpy.float32,
48 e1_type.ndim >= 2,
49 e2_type.dtype == numpy.float32,
50 e2_type.ndim >= 2,
51 e1_type.shape[0] == e2_type.shape[0],
52 W_type.dtype == numpy.float32,
53 W_type.ndim == 3,
54 type_check_prod(e1_type.shape[1:]) == W_type.shape[0],
55 type_check_prod(e2_type.shape[1:]) == W_type.shape[1],
56 )
57
58 if n_in == 6:
59 out_size = W_type.shape[2]
60 V1_type, V2_type, b_type = in_types[3:]
61 type_check.expect(
62 V1_type.dtype == numpy.float32,
63 V1_type.ndim == 2,
64 V1_type.shape[0] == W_type.shape[0],
65 V1_type.shape[1] == out_size,
66 V2_type.dtype == numpy.float32,
67 V2_type.ndim == 2,
68 V2_type.shape[0] == W_type.shape[1],
69 V2_type.shape[1] == out_size,
70 b_type.dtype == numpy.float32,
71 b_type.ndim == 1,
72 b_type.shape[0] == out_size,
73 )
74
75 def forward(self, inputs):
76 self.retain_inputs(tuple(range(len(inputs))))
77
78 e1 = _as_mat(inputs[0])
79 e2 = _as_mat(inputs[1])
80 W = inputs[2]
81
82 xp = cuda.get_array_module(*inputs)
83 y = xp.einsum('ij,ik,jkl->il', e1, e2, W)
84
85 if len(inputs) == 6:
86 V1, V2, b = inputs[3:]
87 y += e1.dot(V1)
88 y += e2.dot(V2)
89 y += b
90 return y,
91
92 def backward(self, indexes, grad_outputs):
93 inputs = self.get_retained_inputs()
94 e1, e2, W = inputs[:3]
95 gy, = grad_outputs
96
97 if len(inputs) == 6:
98 V1, V2 = inputs[3], inputs[4]
99 return BilinearFunctionGrad().apply((e1, e2, W, V1, V2, gy))
100 return BilinearFunctionGrad().apply((e1, e2, W, gy))
101
102
103 class BilinearFunctionGrad(function_node.FunctionNode):
104
105 def forward(self, inputs):
106 self.retain_inputs(tuple(range(len(inputs))))
107
108 e1 = _as_mat(inputs[0])
109 e2 = _as_mat(inputs[1])
110 W, gy = inputs[2], inputs[-1]
111
112 xp = cuda.get_array_module(*inputs)
113 ge1 = xp.einsum('ik,jkl,il->ij', e2, W, gy)
114 ge2 = xp.einsum('ij,jkl,il->ik', e1, W, gy)
115 gW = xp.einsum('ij,ik,il->jkl', e1, e2, gy)
116
117 ret = ge1.reshape(inputs[0].shape), ge2.reshape(inputs[1].shape), gW
118
119 if len(inputs) == 6:
120 V1, V2 = inputs[3], inputs[4]
121 gV1 = e1.T.dot(gy)
122 gV2 = e2.T.dot(gy)
123 gb = gy.sum(0)
124 ge1 += gy.dot(V1.T)
125 ge2 += gy.dot(V2.T)
126 ret += gV1, gV2, gb
127
128 return ret
129
130 def backward(self, indexes, grad_outputs):
131 inputs = self.get_retained_inputs()
132
133 e1 = _as_mat(inputs[0])
134 e2 = _as_mat(inputs[1])
135 W, gy = inputs[2], inputs[-1]
136
137 gge1 = _as_mat(grad_outputs[0])
138 gge2 = _as_mat(grad_outputs[1])
139 ggW = grad_outputs[2]
140
141 dge1_de2 = _ij_il_jkl_to_ik(gge1, gy, W)
142 dge1_dW = _ij_ik_il_to_jkl(gge1, e2, gy)
143 dge1_dgy = _ij_ik_jkl_to_il(gge1, e2, W)
144
145 dge2_de1 = _ik_il_jkl_to_ij(gge2, gy, W)
146 dge2_dW = _ij_ik_il_to_jkl(e1, gge2, gy)
147 dge2_dgy = _ij_ik_jkl_to_il(e1, gge2, W)
148
149 dgW_de1 = _ik_il_jkl_to_ij(e2, gy, ggW)
150 dgW_de2 = _ij_il_jkl_to_ik(e1, gy, ggW)
151 dgW_dgy = _ij_ik_jkl_to_il(e1, e2, ggW)
152
153 ge1 = dgW_de1 + dge2_de1
154 ge2 = dgW_de2 + dge1_de2
155 gW = dge1_dW + dge2_dW
156 ggy = dgW_dgy + dge1_dgy + dge2_dgy
157
158 if len(inputs) == 6:
159 V1, V2 = inputs[3], inputs[4]
160 ggV1, ggV2, ggb = grad_outputs[3:]
161
162 gV1 = chainer.functions.matmul(gge1, gy, transa=True)
163 gV2 = chainer.functions.matmul(gge2, gy, transa=True)
164
165 ge1 += chainer.functions.matmul(gy, ggV1, transb=True)
166 ge2 += chainer.functions.matmul(gy, ggV2, transb=True)
167 ggy += chainer.functions.matmul(gge1, V1)
168 ggy += chainer.functions.matmul(gge2, V2)
169 ggy += chainer.functions.matmul(e1, ggV1)
170 ggy += chainer.functions.matmul(e2, ggV2)
171 ggy += chainer.functions.broadcast_to(ggb, ggy.shape)
172
173 ge1 = ge1.reshape(inputs[0].shape)
174 ge2 = ge2.reshape(inputs[1].shape)
175
176 if len(inputs) == 6:
177 return ge1, ge2, gW, gV1, gV2, ggy
178 return ge1, ge2, gW, ggy
179
180
181 def bilinear(e1, e2, W, V1=None, V2=None, b=None):
182 """Applies a bilinear function based on given parameters.
183
184 This is a building block of Neural Tensor Network (see the reference paper
185 below). It takes two input variables and one or four parameters, and
186 outputs one variable.
187
188 To be precise, denote six input arrays mathematically by
189 :math:`e^1\\in \\mathbb{R}^{I\\cdot J}`,
190 :math:`e^2\\in \\mathbb{R}^{I\\cdot K}`,
191 :math:`W\\in \\mathbb{R}^{J \\cdot K \\cdot L}`,
192 :math:`V^1\\in \\mathbb{R}^{J \\cdot L}`,
193 :math:`V^2\\in \\mathbb{R}^{K \\cdot L}`, and
194 :math:`b\\in \\mathbb{R}^{L}`,
195 where :math:`I` is mini-batch size.
196 In this document, we call :math:`V^1`, :math:`V^2`, and :math:`b` linear
197 parameters.
198
199 The output of forward propagation is calculated as
200
201 .. math::
202
203 y_{il} = \\sum_{jk} e^1_{ij} e^2_{ik} W_{jkl} + \\
204 \\sum_{j} e^1_{ij} V^1_{jl} + \\sum_{k} e^2_{ik} V^2_{kl} + b_{l}.
205
206 Note that V1, V2, b are optional. If these are not given, then this
207 function omits the last three terms in the above equation.
208
209 .. note::
210
211 This function accepts an input variable ``e1`` or ``e2`` of a non-matrix
212 array. In this case, the leading dimension is treated as the batch
213 dimension, and the other dimensions are reduced to one dimension.
214
215 .. note::
216
217 In the original paper, :math:`J` and :math:`K`
218 must be equal and the author denotes :math:`[V^1 V^2]`
219 (concatenation of matrices) by :math:`V`.
220
221 Args:
222 e1 (~chainer.Variable): Left input variable.
223 e2 (~chainer.Variable): Right input variable.
224 W (~chainer.Variable): Quadratic weight variable.
225 V1 (~chainer.Variable): Left coefficient variable.
226 V2 (~chainer.Variable): Right coefficient variable.
227 b (~chainer.Variable): Bias variable.
228
229 Returns:
230 ~chainer.Variable: Output variable.
231
232 See:
233 `Reasoning With Neural Tensor Networks for Knowledge Base Completion
234 <https://papers.nips.cc/paper/5028-reasoning-with-neural-tensor-
235 networks-for-knowledge-base-completion>`_ [Socher+, NIPS2013].
236
237 """
238 flags = [V1 is None, V2 is None, b is None]
239 if any(flags):
240 if not all(flags):
241 raise ValueError('All coefficients and bias for bilinear() must '
242 'be None, if at least one of them is None.')
243 return BilinearFunction().apply((e1, e2, W))[0]
244 return BilinearFunction().apply((e1, e2, W, V1, V2, b))[0]
245
[end of chainer/functions/connection/bilinear.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/connection/bilinear.py b/chainer/functions/connection/bilinear.py
--- a/chainer/functions/connection/bilinear.py
+++ b/chainer/functions/connection/bilinear.py
@@ -80,7 +80,19 @@
W = inputs[2]
xp = cuda.get_array_module(*inputs)
- y = xp.einsum('ij,ik,jkl->il', e1, e2, W)
+ if xp is numpy:
+ y = numpy.einsum('ij,ik,jkl->il', e1, e2, W)
+ else:
+ i_len, j_len = e1.shape
+ k_len = e2.shape[1]
+ # 'ij,ik->ijk'
+ e1e2 = e1[:, :, None] * e2[:, None, :]
+ # ijk->i[jk]
+ e1e2 = e1e2.reshape(i_len, j_len * k_len)
+ # jkl->[jk]l
+ W_mat = W.reshape(-1, W.shape[2])
+ # 'i[jk],[jk]l->il'
+ y = e1e2.dot(W_mat)
if len(inputs) == 6:
V1, V2, b = inputs[3:]
@@ -110,9 +122,23 @@
W, gy = inputs[2], inputs[-1]
xp = cuda.get_array_module(*inputs)
- ge1 = xp.einsum('ik,jkl,il->ij', e2, W, gy)
- ge2 = xp.einsum('ij,jkl,il->ik', e1, W, gy)
- gW = xp.einsum('ij,ik,il->jkl', e1, e2, gy)
+ if xp is numpy:
+ gW = numpy.einsum('ij,ik,il->jkl', e1, e2, gy)
+ ge1 = numpy.einsum('ik,jkl,il->ij', e2, W, gy)
+ ge2 = numpy.einsum('ij,jkl,il->ik', e1, W, gy)
+ else:
+ kern = cuda.reduce('T in0, T in1, T in2', 'T out',
+ 'in0 * in1 * in2', 'a + b', 'out = a', 0,
+ 'bilinear_product')
+
+ e1_b = e1[:, :, None, None] # ij
+ e2_b = e2[:, None, :, None] # ik
+ gy_b = gy[:, None, None, :] # il
+ W_b = W[None, :, :, :] # jkl
+
+ gW = kern(e1_b, e2_b, gy_b, axis=0) # 'ij,ik,il->jkl'
+ ge1 = kern(e2_b, W_b, gy_b, axis=(2, 3)) # 'ik,jkl,il->ij'
+ ge2 = kern(e1_b, W_b, gy_b, axis=(1, 3)) # 'ij,jkl,il->ik'
ret = ge1.reshape(inputs[0].shape), ge2.reshape(inputs[1].shape), gW
| {"golden_diff": "diff --git a/chainer/functions/connection/bilinear.py b/chainer/functions/connection/bilinear.py\n--- a/chainer/functions/connection/bilinear.py\n+++ b/chainer/functions/connection/bilinear.py\n@@ -80,7 +80,19 @@\n W = inputs[2]\n \n xp = cuda.get_array_module(*inputs)\n- y = xp.einsum('ij,ik,jkl->il', e1, e2, W)\n+ if xp is numpy:\n+ y = numpy.einsum('ij,ik,jkl->il', e1, e2, W)\n+ else:\n+ i_len, j_len = e1.shape\n+ k_len = e2.shape[1]\n+ # 'ij,ik->ijk'\n+ e1e2 = e1[:, :, None] * e2[:, None, :]\n+ # ijk->i[jk]\n+ e1e2 = e1e2.reshape(i_len, j_len * k_len)\n+ # jkl->[jk]l\n+ W_mat = W.reshape(-1, W.shape[2])\n+ # 'i[jk],[jk]l->il'\n+ y = e1e2.dot(W_mat)\n \n if len(inputs) == 6:\n V1, V2, b = inputs[3:]\n@@ -110,9 +122,23 @@\n W, gy = inputs[2], inputs[-1]\n \n xp = cuda.get_array_module(*inputs)\n- ge1 = xp.einsum('ik,jkl,il->ij', e2, W, gy)\n- ge2 = xp.einsum('ij,jkl,il->ik', e1, W, gy)\n- gW = xp.einsum('ij,ik,il->jkl', e1, e2, gy)\n+ if xp is numpy:\n+ gW = numpy.einsum('ij,ik,il->jkl', e1, e2, gy)\n+ ge1 = numpy.einsum('ik,jkl,il->ij', e2, W, gy)\n+ ge2 = numpy.einsum('ij,jkl,il->ik', e1, W, gy)\n+ else:\n+ kern = cuda.reduce('T in0, T in1, T in2', 'T out',\n+ 'in0 * in1 * in2', 'a + b', 'out = a', 0,\n+ 'bilinear_product')\n+\n+ e1_b = e1[:, :, None, None] # ij\n+ e2_b = e2[:, None, :, None] # ik\n+ gy_b = gy[:, None, None, :] # il\n+ W_b = W[None, :, :, :] # jkl\n+\n+ gW = kern(e1_b, e2_b, gy_b, axis=0) # 'ij,ik,il->jkl'\n+ ge1 = kern(e2_b, W_b, gy_b, axis=(2, 3)) # 'ik,jkl,il->ij'\n+ ge2 = kern(e1_b, W_b, gy_b, axis=(1, 3)) # 'ij,jkl,il->ik'\n \n ret = ge1.reshape(inputs[0].shape), ge2.reshape(inputs[1].shape), gW\n", "issue": "F.bilinear requires huge GPU memory\nThe following code results `cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 18014398509481984 bytes (total 18014399785863168 bytes)`\r\n```\r\nimport chainer\r\nimport cupy\r\nb = chainer.links.Bilinear(256, 256, 256).to_gpu()\r\ne1 = cupy.random.randn(64, 256).astype('f')\r\ne2 = cupy.random.randn(64, 256).astype('f')\r\ny = b(e1, e2)\r\nprint(y)\r\n```\r\n\r\nHow to fix: merge cupy/cupy#1218 (or do not use `cupy.einsum`).\r\nI confirmed the code run in ~5sec with\r\n- chainer: master(6bab773dec70f291108ab2575622805252f9a208)\r\n- cupy: (Merge: cupy/cupy@6162f9a cupy/cupy@7f89bd0)\n", "before_files": [{"content": "import numpy\n\nimport chainer\nfrom chainer.backends import cuda\nfrom chainer import function_node\nfrom chainer.utils import type_check\n\n\ndef _as_mat(x):\n if x.ndim == 2:\n return x\n return x.reshape(len(x), -1)\n\n\ndef _ij_ik_il_to_jkl(a, b, c):\n ab = chainer.functions.matmul(a[:, :, None], b[:, None, :]) # ijk\n return chainer.functions.matmul(_as_mat(ab).T, c).reshape(\n a.shape[1], b.shape[1], c.shape[1])\n\n\ndef _ij_ik_jkl_to_il(a, b, c):\n ab = chainer.functions.matmul(a[:, :, None], b[:, None, :]) # ijk\n c = c.reshape(-1, c.shape[-1]) # [jk]l\n return chainer.functions.matmul(_as_mat(ab), c)\n\n\ndef _ij_il_jkl_to_ik(a, b, c):\n return _ij_ik_jkl_to_il(a, b, chainer.functions.swapaxes(c, 1, 2))\n\n\ndef _ik_il_jkl_to_ij(a, b, c):\n return _ij_ik_jkl_to_il(a, b, chainer.functions.rollaxis(c, 0, c.ndim))\n\n\nclass BilinearFunction(function_node.FunctionNode):\n def check_type_forward(self, in_types):\n n_in = type_check.eval(in_types.size())\n if n_in != 3 and n_in != 6:\n raise type_check.InvalidType(\n '{0} or {1}'.format(\n in_types.size() == 3, in_types.size() == 6),\n '{0} == {1}'.format(in_types.size(), n_in))\n\n e1_type, e2_type, W_type = in_types[:3]\n 
type_check_prod = type_check.make_variable(numpy.prod, 'prod')\n type_check.expect(\n e1_type.dtype == numpy.float32,\n e1_type.ndim >= 2,\n e2_type.dtype == numpy.float32,\n e2_type.ndim >= 2,\n e1_type.shape[0] == e2_type.shape[0],\n W_type.dtype == numpy.float32,\n W_type.ndim == 3,\n type_check_prod(e1_type.shape[1:]) == W_type.shape[0],\n type_check_prod(e2_type.shape[1:]) == W_type.shape[1],\n )\n\n if n_in == 6:\n out_size = W_type.shape[2]\n V1_type, V2_type, b_type = in_types[3:]\n type_check.expect(\n V1_type.dtype == numpy.float32,\n V1_type.ndim == 2,\n V1_type.shape[0] == W_type.shape[0],\n V1_type.shape[1] == out_size,\n V2_type.dtype == numpy.float32,\n V2_type.ndim == 2,\n V2_type.shape[0] == W_type.shape[1],\n V2_type.shape[1] == out_size,\n b_type.dtype == numpy.float32,\n b_type.ndim == 1,\n b_type.shape[0] == out_size,\n )\n\n def forward(self, inputs):\n self.retain_inputs(tuple(range(len(inputs))))\n\n e1 = _as_mat(inputs[0])\n e2 = _as_mat(inputs[1])\n W = inputs[2]\n\n xp = cuda.get_array_module(*inputs)\n y = xp.einsum('ij,ik,jkl->il', e1, e2, W)\n\n if len(inputs) == 6:\n V1, V2, b = inputs[3:]\n y += e1.dot(V1)\n y += e2.dot(V2)\n y += b\n return y,\n\n def backward(self, indexes, grad_outputs):\n inputs = self.get_retained_inputs()\n e1, e2, W = inputs[:3]\n gy, = grad_outputs\n\n if len(inputs) == 6:\n V1, V2 = inputs[3], inputs[4]\n return BilinearFunctionGrad().apply((e1, e2, W, V1, V2, gy))\n return BilinearFunctionGrad().apply((e1, e2, W, gy))\n\n\nclass BilinearFunctionGrad(function_node.FunctionNode):\n\n def forward(self, inputs):\n self.retain_inputs(tuple(range(len(inputs))))\n\n e1 = _as_mat(inputs[0])\n e2 = _as_mat(inputs[1])\n W, gy = inputs[2], inputs[-1]\n\n xp = cuda.get_array_module(*inputs)\n ge1 = xp.einsum('ik,jkl,il->ij', e2, W, gy)\n ge2 = xp.einsum('ij,jkl,il->ik', e1, W, gy)\n gW = xp.einsum('ij,ik,il->jkl', e1, e2, gy)\n\n ret = ge1.reshape(inputs[0].shape), ge2.reshape(inputs[1].shape), gW\n\n if len(inputs) == 6:\n V1, V2 = inputs[3], inputs[4]\n gV1 = e1.T.dot(gy)\n gV2 = e2.T.dot(gy)\n gb = gy.sum(0)\n ge1 += gy.dot(V1.T)\n ge2 += gy.dot(V2.T)\n ret += gV1, gV2, gb\n\n return ret\n\n def backward(self, indexes, grad_outputs):\n inputs = self.get_retained_inputs()\n\n e1 = _as_mat(inputs[0])\n e2 = _as_mat(inputs[1])\n W, gy = inputs[2], inputs[-1]\n\n gge1 = _as_mat(grad_outputs[0])\n gge2 = _as_mat(grad_outputs[1])\n ggW = grad_outputs[2]\n\n dge1_de2 = _ij_il_jkl_to_ik(gge1, gy, W)\n dge1_dW = _ij_ik_il_to_jkl(gge1, e2, gy)\n dge1_dgy = _ij_ik_jkl_to_il(gge1, e2, W)\n\n dge2_de1 = _ik_il_jkl_to_ij(gge2, gy, W)\n dge2_dW = _ij_ik_il_to_jkl(e1, gge2, gy)\n dge2_dgy = _ij_ik_jkl_to_il(e1, gge2, W)\n\n dgW_de1 = _ik_il_jkl_to_ij(e2, gy, ggW)\n dgW_de2 = _ij_il_jkl_to_ik(e1, gy, ggW)\n dgW_dgy = _ij_ik_jkl_to_il(e1, e2, ggW)\n\n ge1 = dgW_de1 + dge2_de1\n ge2 = dgW_de2 + dge1_de2\n gW = dge1_dW + dge2_dW\n ggy = dgW_dgy + dge1_dgy + dge2_dgy\n\n if len(inputs) == 6:\n V1, V2 = inputs[3], inputs[4]\n ggV1, ggV2, ggb = grad_outputs[3:]\n\n gV1 = chainer.functions.matmul(gge1, gy, transa=True)\n gV2 = chainer.functions.matmul(gge2, gy, transa=True)\n\n ge1 += chainer.functions.matmul(gy, ggV1, transb=True)\n ge2 += chainer.functions.matmul(gy, ggV2, transb=True)\n ggy += chainer.functions.matmul(gge1, V1)\n ggy += chainer.functions.matmul(gge2, V2)\n ggy += chainer.functions.matmul(e1, ggV1)\n ggy += chainer.functions.matmul(e2, ggV2)\n ggy += chainer.functions.broadcast_to(ggb, ggy.shape)\n\n ge1 = ge1.reshape(inputs[0].shape)\n ge2 = 
ge2.reshape(inputs[1].shape)\n\n if len(inputs) == 6:\n return ge1, ge2, gW, gV1, gV2, ggy\n return ge1, ge2, gW, ggy\n\n\ndef bilinear(e1, e2, W, V1=None, V2=None, b=None):\n \"\"\"Applies a bilinear function based on given parameters.\n\n This is a building block of Neural Tensor Network (see the reference paper\n below). It takes two input variables and one or four parameters, and\n outputs one variable.\n\n To be precise, denote six input arrays mathematically by\n :math:`e^1\\\\in \\\\mathbb{R}^{I\\\\cdot J}`,\n :math:`e^2\\\\in \\\\mathbb{R}^{I\\\\cdot K}`,\n :math:`W\\\\in \\\\mathbb{R}^{J \\\\cdot K \\\\cdot L}`,\n :math:`V^1\\\\in \\\\mathbb{R}^{J \\\\cdot L}`,\n :math:`V^2\\\\in \\\\mathbb{R}^{K \\\\cdot L}`, and\n :math:`b\\\\in \\\\mathbb{R}^{L}`,\n where :math:`I` is mini-batch size.\n In this document, we call :math:`V^1`, :math:`V^2`, and :math:`b` linear\n parameters.\n\n The output of forward propagation is calculated as\n\n .. math::\n\n y_{il} = \\\\sum_{jk} e^1_{ij} e^2_{ik} W_{jkl} + \\\\\n \\\\sum_{j} e^1_{ij} V^1_{jl} + \\\\sum_{k} e^2_{ik} V^2_{kl} + b_{l}.\n\n Note that V1, V2, b are optional. If these are not given, then this\n function omits the last three terms in the above equation.\n\n .. note::\n\n This function accepts an input variable ``e1`` or ``e2`` of a non-matrix\n array. In this case, the leading dimension is treated as the batch\n dimension, and the other dimensions are reduced to one dimension.\n\n .. note::\n\n In the original paper, :math:`J` and :math:`K`\n must be equal and the author denotes :math:`[V^1 V^2]`\n (concatenation of matrices) by :math:`V`.\n\n Args:\n e1 (~chainer.Variable): Left input variable.\n e2 (~chainer.Variable): Right input variable.\n W (~chainer.Variable): Quadratic weight variable.\n V1 (~chainer.Variable): Left coefficient variable.\n V2 (~chainer.Variable): Right coefficient variable.\n b (~chainer.Variable): Bias variable.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n See:\n `Reasoning With Neural Tensor Networks for Knowledge Base Completion\n <https://papers.nips.cc/paper/5028-reasoning-with-neural-tensor-\n networks-for-knowledge-base-completion>`_ [Socher+, NIPS2013].\n\n \"\"\"\n flags = [V1 is None, V2 is None, b is None]\n if any(flags):\n if not all(flags):\n raise ValueError('All coefficients and bias for bilinear() must '\n 'be None, if at least one of them is None.')\n return BilinearFunction().apply((e1, e2, W))[0]\n return BilinearFunction().apply((e1, e2, W, V1, V2, b))[0]\n", "path": "chainer/functions/connection/bilinear.py"}]} | 4,042 | 746 |
gh_patches_debug_40992 | rasdani/github-patches | git_diff | modin-project__modin-2701 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement usecols parameter for read_csv with OmniSci backend
**Is your feature request related to a problem? Please describe.**
According to the pyarrow documentation, `pyarrow.read_csv` supports `include_columns` (https://arrow.apache.org/docs/python/generated/pyarrow.csv.ConvertOptions.html#pyarrow.csv.ConvertOptions), which can be used to implement the `usecols` parameter of `modin.read_csv` with the OmniSci backend.
</issue>
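A minimal standalone sketch of the pyarrow hook the issue points at; the in-memory CSV and column names below are illustrative only:

    import io
    from pyarrow import csv

    data = io.BytesIO(b"a,b,c\n1,2,3\n4,5,6\n")
    table = csv.read_csv(
        data,
        convert_options=csv.ConvertOptions(include_columns=["a", "c"]),
    )
    print(table.column_names)  # ['a', 'c'] -- column "b" is never materialized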
<code>
[start of modin/experimental/engines/omnisci_on_ray/io.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 from modin.experimental.backends.omnisci.query_compiler import DFAlgQueryCompiler
15 from modin.engines.ray.generic.io import RayIO
16 from modin.experimental.engines.omnisci_on_ray.frame.data import OmnisciOnRayFrame
17 from modin.error_message import ErrorMessage
18
19 from pyarrow.csv import read_csv, ParseOptions, ConvertOptions, ReadOptions
20 import pyarrow as pa
21
22
23 class OmnisciOnRayIO(RayIO):
24
25 frame_cls = OmnisciOnRayFrame
26 query_compiler_cls = DFAlgQueryCompiler
27
28 arg_keys = [
29 "filepath_or_buffer",
30 "sep",
31 "delimiter",
32 "header",
33 "names",
34 "index_col",
35 "usecols",
36 "squeeze",
37 "prefix",
38 "mangle_dupe_cols",
39 "dtype",
40 "engine",
41 "converters",
42 "true_values",
43 "false_values",
44 "skipinitialspace",
45 "skiprows",
46 "nrows",
47 "na_values",
48 "keep_default_na",
49 "na_filter",
50 "verbose",
51 "skip_blank_lines",
52 "parse_dates",
53 "infer_datetime_format",
54 "keep_date_col",
55 "date_parser",
56 "dayfirst",
57 "cache_dates",
58 "iterator",
59 "chunksize",
60 "compression",
61 "thousands",
62 "decimal",
63 "lineterminator",
64 "quotechar",
65 "quoting",
66 "escapechar",
67 "comment",
68 "encoding",
69 "dialect",
70 "error_bad_lines",
71 "warn_bad_lines",
72 "skipfooter",
73 "doublequote",
74 "delim_whitespace",
75 "low_memory",
76 "memory_map",
77 "float_precision",
78 ]
79
80 @classmethod
81 def read_csv(
82 cls,
83 filepath_or_buffer,
84 sep=",",
85 delimiter=None,
86 header="infer",
87 names=None,
88 index_col=None,
89 usecols=None,
90 squeeze=False,
91 prefix=None,
92 mangle_dupe_cols=True,
93 dtype=None,
94 engine=None,
95 converters=None,
96 true_values=None,
97 false_values=None,
98 skipinitialspace=False,
99 skiprows=None,
100 nrows=None,
101 na_values=None,
102 keep_default_na=True,
103 na_filter=True,
104 verbose=False,
105 skip_blank_lines=True,
106 parse_dates=False,
107 infer_datetime_format=False,
108 keep_date_col=False,
109 date_parser=None,
110 dayfirst=False,
111 cache_dates=True,
112 iterator=False,
113 chunksize=None,
114 compression="infer",
115 thousands=None,
116 decimal=b".",
117 lineterminator=None,
118 quotechar='"',
119 quoting=0,
120 escapechar=None,
121 comment=None,
122 encoding=None,
123 dialect=None,
124 error_bad_lines=True,
125 warn_bad_lines=True,
126 skipfooter=0,
127 doublequote=True,
128 delim_whitespace=False,
129 low_memory=True,
130 memory_map=False,
131 float_precision=None,
132 storage_options=None,
133 ):
134 items = locals().copy()
135 mykwargs = {k: items[k] for k in items if k in cls.arg_keys}
136 eng = str(engine).lower().strip()
137 try:
138 if eng in ["pandas", "c"]:
139 return cls._read(**mykwargs)
140
141 if isinstance(dtype, dict):
142 column_types = {c: cls._dtype_to_arrow(t) for c, t in dtype.items()}
143 else:
144 column_types = cls._dtype_to_arrow(dtype)
145
146 if (type(parse_dates) is list) and type(column_types) is dict:
147 for c in parse_dates:
148 column_types[c] = pa.timestamp("s")
149
150 if names:
151 if header == 0:
152 skiprows = skiprows + 1 if skiprows is not None else 1
153 elif header is None or header == "infer":
154 pass
155 else:
156 raise NotImplementedError(
157 "read_csv with 'arrow' engine and provided 'names' parameter supports only 0, None and 'infer' header values"
158 )
159 else:
160 if header == 0 or header == "infer":
161 pass
162 else:
163 raise NotImplementedError(
164 "read_csv with 'arrow' engine without 'names' parameter provided supports only 0 and 'infer' header values"
165 )
166
167 if delimiter is None:
168 delimiter = sep
169
170 if delim_whitespace and delimiter != ",":
171 raise ValueError(
172 "Specified a delimiter and delim_whitespace=True; you can only specify one."
173 )
174
175 po = ParseOptions(
176 delimiter="\\s+" if delim_whitespace else delimiter,
177 quote_char=quotechar,
178 double_quote=doublequote,
179 escape_char=escapechar,
180 newlines_in_values=False,
181 ignore_empty_lines=skip_blank_lines,
182 )
183 co = ConvertOptions(
184 check_utf8=None,
185 column_types=column_types,
186 null_values=None,
187 true_values=None,
188 false_values=None,
189 # timestamp fields should be handled as strings if parse_dates
190 # didn't passed explicitly as an array or a dict
191 timestamp_parsers=[""] if isinstance(parse_dates, bool) else None,
192 strings_can_be_null=None,
193 include_columns=None,
194 include_missing_columns=None,
195 auto_dict_encode=None,
196 auto_dict_max_cardinality=None,
197 )
198 ro = ReadOptions(
199 use_threads=True,
200 block_size=None,
201 skip_rows=skiprows,
202 column_names=names,
203 autogenerate_column_names=None,
204 )
205
206 at = read_csv(
207 filepath_or_buffer,
208 read_options=ro,
209 parse_options=po,
210 convert_options=co,
211 )
212
213 return cls.from_arrow(at)
214 except (pa.ArrowNotImplementedError, NotImplementedError):
215 if eng in ["arrow"]:
216 raise
217
218 ErrorMessage.default_to_pandas("`read_csv`")
219 return cls._read(**mykwargs)
220
221 @classmethod
222 def _dtype_to_arrow(cls, dtype):
223 if dtype is None:
224 return None
225 tname = dtype if isinstance(dtype, str) else dtype.name
226 if tname == "category":
227 return pa.dictionary(index_type=pa.int32(), value_type=pa.string())
228 elif tname == "string":
229 return pa.string()
230 else:
231 return pa.from_numpy_dtype(tname)
232
[end of modin/experimental/engines/omnisci_on_ray/io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modin/experimental/engines/omnisci_on_ray/io.py b/modin/experimental/engines/omnisci_on_ray/io.py
--- a/modin/experimental/engines/omnisci_on_ray/io.py
+++ b/modin/experimental/engines/omnisci_on_ray/io.py
@@ -19,6 +19,9 @@
from pyarrow.csv import read_csv, ParseOptions, ConvertOptions, ReadOptions
import pyarrow as pa
+import pandas
+from pandas.io.parsers import _validate_usecols_arg
+
class OmnisciOnRayIO(RayIO):
@@ -172,6 +175,8 @@
"Specified a delimiter and delim_whitespace=True; you can only specify one."
)
+ usecols_md = cls._prepare_pyarrow_usecols(mykwargs)
+
po = ParseOptions(
delimiter="\\s+" if delim_whitespace else delimiter,
quote_char=quotechar,
@@ -190,7 +195,7 @@
# didn't passed explicitly as an array or a dict
timestamp_parsers=[""] if isinstance(parse_dates, bool) else None,
strings_can_be_null=None,
- include_columns=None,
+ include_columns=usecols_md,
include_missing_columns=None,
auto_dict_encode=None,
auto_dict_max_cardinality=None,
@@ -229,3 +234,57 @@
return pa.string()
else:
return pa.from_numpy_dtype(tname)
+
+ @classmethod
+ def _prepare_pyarrow_usecols(cls, read_csv_kwargs):
+ """
+ Define `usecols` parameter in the way pyarrow can process it.
+ ----------
+ read_csv_kwargs:
+ read_csv function parameters.
+
+ Returns
+ -------
+ usecols_md: list
+ Redefined `usecols` parameter.
+ """
+ usecols = read_csv_kwargs.get("usecols", None)
+ engine = read_csv_kwargs.get("engine", None)
+ usecols_md, usecols_names_dtypes = _validate_usecols_arg(usecols)
+ if usecols_md:
+ empty_pd_df = pandas.read_csv(
+ **dict(
+ read_csv_kwargs,
+ nrows=0,
+ skipfooter=0,
+ usecols=None,
+ engine=None if engine == "arrow" else engine,
+ )
+ )
+ column_names = empty_pd_df.columns
+ if usecols_names_dtypes == "string":
+ if usecols_md.issubset(set(column_names)):
+ # columns should be sorted because pandas doesn't preserve columns order
+ usecols_md = [
+ col_name for col_name in column_names if col_name in usecols_md
+ ]
+ else:
+ raise NotImplementedError(
+ "values passed in the `usecols` parameter don't match columns names"
+ )
+ elif usecols_names_dtypes == "integer":
+ # columns should be sorted because pandas doesn't preserve columns order
+ usecols_md = sorted(usecols_md)
+ if len(column_names) < usecols_md[-1]:
+ raise NotImplementedError(
+ "max usecols value is higher than the number of columns"
+ )
+ usecols_md = [column_names[i] for i in usecols_md]
+ elif callable(usecols_md):
+ usecols_md = [
+ col_name for col_name in column_names if usecols_md(col_name)
+ ]
+ else:
+ raise NotImplementedError("unsupported `usecols` parameter")
+
+ return usecols_md
| {"golden_diff": "diff --git a/modin/experimental/engines/omnisci_on_ray/io.py b/modin/experimental/engines/omnisci_on_ray/io.py\n--- a/modin/experimental/engines/omnisci_on_ray/io.py\n+++ b/modin/experimental/engines/omnisci_on_ray/io.py\n@@ -19,6 +19,9 @@\n from pyarrow.csv import read_csv, ParseOptions, ConvertOptions, ReadOptions\n import pyarrow as pa\n \n+import pandas\n+from pandas.io.parsers import _validate_usecols_arg\n+\n \n class OmnisciOnRayIO(RayIO):\n \n@@ -172,6 +175,8 @@\n \"Specified a delimiter and delim_whitespace=True; you can only specify one.\"\n )\n \n+ usecols_md = cls._prepare_pyarrow_usecols(mykwargs)\n+\n po = ParseOptions(\n delimiter=\"\\\\s+\" if delim_whitespace else delimiter,\n quote_char=quotechar,\n@@ -190,7 +195,7 @@\n # didn't passed explicitly as an array or a dict\n timestamp_parsers=[\"\"] if isinstance(parse_dates, bool) else None,\n strings_can_be_null=None,\n- include_columns=None,\n+ include_columns=usecols_md,\n include_missing_columns=None,\n auto_dict_encode=None,\n auto_dict_max_cardinality=None,\n@@ -229,3 +234,57 @@\n return pa.string()\n else:\n return pa.from_numpy_dtype(tname)\n+\n+ @classmethod\n+ def _prepare_pyarrow_usecols(cls, read_csv_kwargs):\n+ \"\"\"\n+ Define `usecols` parameter in the way pyarrow can process it.\n+ ----------\n+ read_csv_kwargs:\n+ read_csv function parameters.\n+\n+ Returns\n+ -------\n+ usecols_md: list\n+ Redefined `usecols` parameter.\n+ \"\"\"\n+ usecols = read_csv_kwargs.get(\"usecols\", None)\n+ engine = read_csv_kwargs.get(\"engine\", None)\n+ usecols_md, usecols_names_dtypes = _validate_usecols_arg(usecols)\n+ if usecols_md:\n+ empty_pd_df = pandas.read_csv(\n+ **dict(\n+ read_csv_kwargs,\n+ nrows=0,\n+ skipfooter=0,\n+ usecols=None,\n+ engine=None if engine == \"arrow\" else engine,\n+ )\n+ )\n+ column_names = empty_pd_df.columns\n+ if usecols_names_dtypes == \"string\":\n+ if usecols_md.issubset(set(column_names)):\n+ # columns should be sorted because pandas doesn't preserve columns order\n+ usecols_md = [\n+ col_name for col_name in column_names if col_name in usecols_md\n+ ]\n+ else:\n+ raise NotImplementedError(\n+ \"values passed in the `usecols` parameter don't match columns names\"\n+ )\n+ elif usecols_names_dtypes == \"integer\":\n+ # columns should be sorted because pandas doesn't preserve columns order\n+ usecols_md = sorted(usecols_md)\n+ if len(column_names) < usecols_md[-1]:\n+ raise NotImplementedError(\n+ \"max usecols value is higher than the number of columns\"\n+ )\n+ usecols_md = [column_names[i] for i in usecols_md]\n+ elif callable(usecols_md):\n+ usecols_md = [\n+ col_name for col_name in column_names if usecols_md(col_name)\n+ ]\n+ else:\n+ raise NotImplementedError(\"unsupported `usecols` parameter\")\n+\n+ return usecols_md\n", "issue": "Implement usecols parameter for read_csv with OmniSci backend\n**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is. 
What kind of performance improvements would you like to see with this new API?\r\nAccording to pyarrow documentation `pyarrow.read_csv` supports `include_columns` (https://arrow.apache.org/docs/python/generated/pyarrow.csv.ConvertOptions.html#pyarrow.csv.ConvertOptions), it can be used for implementation of `usecols` parameter of `modin.read_csv` with OmniSci backend.\r\n\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom modin.experimental.backends.omnisci.query_compiler import DFAlgQueryCompiler\nfrom modin.engines.ray.generic.io import RayIO\nfrom modin.experimental.engines.omnisci_on_ray.frame.data import OmnisciOnRayFrame\nfrom modin.error_message import ErrorMessage\n\nfrom pyarrow.csv import read_csv, ParseOptions, ConvertOptions, ReadOptions\nimport pyarrow as pa\n\n\nclass OmnisciOnRayIO(RayIO):\n\n frame_cls = OmnisciOnRayFrame\n query_compiler_cls = DFAlgQueryCompiler\n\n arg_keys = [\n \"filepath_or_buffer\",\n \"sep\",\n \"delimiter\",\n \"header\",\n \"names\",\n \"index_col\",\n \"usecols\",\n \"squeeze\",\n \"prefix\",\n \"mangle_dupe_cols\",\n \"dtype\",\n \"engine\",\n \"converters\",\n \"true_values\",\n \"false_values\",\n \"skipinitialspace\",\n \"skiprows\",\n \"nrows\",\n \"na_values\",\n \"keep_default_na\",\n \"na_filter\",\n \"verbose\",\n \"skip_blank_lines\",\n \"parse_dates\",\n \"infer_datetime_format\",\n \"keep_date_col\",\n \"date_parser\",\n \"dayfirst\",\n \"cache_dates\",\n \"iterator\",\n \"chunksize\",\n \"compression\",\n \"thousands\",\n \"decimal\",\n \"lineterminator\",\n \"quotechar\",\n \"quoting\",\n \"escapechar\",\n \"comment\",\n \"encoding\",\n \"dialect\",\n \"error_bad_lines\",\n \"warn_bad_lines\",\n \"skipfooter\",\n \"doublequote\",\n \"delim_whitespace\",\n \"low_memory\",\n \"memory_map\",\n \"float_precision\",\n ]\n\n @classmethod\n def read_csv(\n cls,\n filepath_or_buffer,\n sep=\",\",\n delimiter=None,\n header=\"infer\",\n names=None,\n index_col=None,\n usecols=None,\n squeeze=False,\n prefix=None,\n mangle_dupe_cols=True,\n dtype=None,\n engine=None,\n converters=None,\n true_values=None,\n false_values=None,\n skipinitialspace=False,\n skiprows=None,\n nrows=None,\n na_values=None,\n keep_default_na=True,\n na_filter=True,\n verbose=False,\n skip_blank_lines=True,\n parse_dates=False,\n infer_datetime_format=False,\n keep_date_col=False,\n date_parser=None,\n dayfirst=False,\n cache_dates=True,\n iterator=False,\n chunksize=None,\n compression=\"infer\",\n thousands=None,\n decimal=b\".\",\n lineterminator=None,\n quotechar='\"',\n quoting=0,\n escapechar=None,\n comment=None,\n encoding=None,\n dialect=None,\n error_bad_lines=True,\n warn_bad_lines=True,\n skipfooter=0,\n doublequote=True,\n delim_whitespace=False,\n low_memory=True,\n memory_map=False,\n 
float_precision=None,\n storage_options=None,\n ):\n items = locals().copy()\n mykwargs = {k: items[k] for k in items if k in cls.arg_keys}\n eng = str(engine).lower().strip()\n try:\n if eng in [\"pandas\", \"c\"]:\n return cls._read(**mykwargs)\n\n if isinstance(dtype, dict):\n column_types = {c: cls._dtype_to_arrow(t) for c, t in dtype.items()}\n else:\n column_types = cls._dtype_to_arrow(dtype)\n\n if (type(parse_dates) is list) and type(column_types) is dict:\n for c in parse_dates:\n column_types[c] = pa.timestamp(\"s\")\n\n if names:\n if header == 0:\n skiprows = skiprows + 1 if skiprows is not None else 1\n elif header is None or header == \"infer\":\n pass\n else:\n raise NotImplementedError(\n \"read_csv with 'arrow' engine and provided 'names' parameter supports only 0, None and 'infer' header values\"\n )\n else:\n if header == 0 or header == \"infer\":\n pass\n else:\n raise NotImplementedError(\n \"read_csv with 'arrow' engine without 'names' parameter provided supports only 0 and 'infer' header values\"\n )\n\n if delimiter is None:\n delimiter = sep\n\n if delim_whitespace and delimiter != \",\":\n raise ValueError(\n \"Specified a delimiter and delim_whitespace=True; you can only specify one.\"\n )\n\n po = ParseOptions(\n delimiter=\"\\\\s+\" if delim_whitespace else delimiter,\n quote_char=quotechar,\n double_quote=doublequote,\n escape_char=escapechar,\n newlines_in_values=False,\n ignore_empty_lines=skip_blank_lines,\n )\n co = ConvertOptions(\n check_utf8=None,\n column_types=column_types,\n null_values=None,\n true_values=None,\n false_values=None,\n # timestamp fields should be handled as strings if parse_dates\n # didn't passed explicitly as an array or a dict\n timestamp_parsers=[\"\"] if isinstance(parse_dates, bool) else None,\n strings_can_be_null=None,\n include_columns=None,\n include_missing_columns=None,\n auto_dict_encode=None,\n auto_dict_max_cardinality=None,\n )\n ro = ReadOptions(\n use_threads=True,\n block_size=None,\n skip_rows=skiprows,\n column_names=names,\n autogenerate_column_names=None,\n )\n\n at = read_csv(\n filepath_or_buffer,\n read_options=ro,\n parse_options=po,\n convert_options=co,\n )\n\n return cls.from_arrow(at)\n except (pa.ArrowNotImplementedError, NotImplementedError):\n if eng in [\"arrow\"]:\n raise\n\n ErrorMessage.default_to_pandas(\"`read_csv`\")\n return cls._read(**mykwargs)\n\n @classmethod\n def _dtype_to_arrow(cls, dtype):\n if dtype is None:\n return None\n tname = dtype if isinstance(dtype, str) else dtype.name\n if tname == \"category\":\n return pa.dictionary(index_type=pa.int32(), value_type=pa.string())\n elif tname == \"string\":\n return pa.string()\n else:\n return pa.from_numpy_dtype(tname)\n", "path": "modin/experimental/engines/omnisci_on_ray/io.py"}]} | 2,755 | 790 |
gh_patches_debug_21053 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-6083 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: filter notifications by `is_read`
**Description of the need**
For the extension's needs, it would be useful to be able to filter notifications (URL `/api/notifications`) by their `is_read` property, so as to retrieve only the unread ones (the others being of no interest for this use case).
**Description of the solution**
Add a filter on `is_read` (boolean) to the `/api/notifications` URL.
**Description of alternatives**
At a minimum, being able to sort by this property (so that unread notifications come first).
**Additional context**
See the code at [notifier.js#64](https://github.com/zestedesavoir/extensions-notificateurs/blob/master/Universal/notifier.js#L64) for the use case in question (which would let me remove the `.filter()` at line 78 while still retrieving potentially old but unread notifications that are currently inaccessible).
</issue>
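A minimal sketch of the queryset handling such a filter implies (the helper name `filter_by_is_read` is hypothetical; the exact integration into the DRF view is up to the implementation):

    def filter_by_is_read(queryset, query_params):
        is_read = str(query_params.get("is_read", "")).lower()
        if is_read == "true":
            return queryset.filter(is_read=True)
        if is_read == "false":
            return queryset.filter(is_read=False)
        return queryset  # parameter absent or unrecognized: no filtering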
<code>
[start of zds/notification/api/views.py]
1 import datetime
2 from django.core.cache import cache
3 from django.db.models.signals import post_delete
4 from django.db.models.signals import post_save
5 from dry_rest_permissions.generics import DRYPermissions
6 from rest_framework import filters
7 from rest_framework.generics import ListAPIView
8 from rest_framework.permissions import IsAuthenticated
9 from rest_framework_extensions.cache.decorators import cache_response
10 from rest_framework_extensions.etag.decorators import etag
11 from rest_framework_extensions.key_constructor import bits
12 from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor
13
14 from zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit
15 from zds.notification.api.serializers import NotificationSerializer
16 from zds.notification.models import Notification
17
18
19 class PagingNotificationListKeyConstructor(DefaultKeyConstructor):
20 pagination = DJRF3xPaginationKeyBit()
21 search = bits.QueryParamsKeyBit(["search", "ordering", "type"])
22 list_sql_query = bits.ListSqlQueryKeyBit()
23 unique_view_id = bits.UniqueViewIdKeyBit()
24 user = bits.UserKeyBit()
25 updated_at = UpdatedAtKeyBit("api_updated_notification")
26
27
28 def change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):
29 cache.set("api_updated_notification", datetime.datetime.utcnow())
30
31
32 post_save.connect(receiver=change_api_notification_updated_at, sender=Notification)
33 post_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)
34
35
36 class NotificationListAPI(ListAPIView):
37 """
38 List of notification.
39 """
40
41 filter_backends = (filters.SearchFilter, filters.OrderingFilter)
42 search_fields = ("title",)
43 ordering_fields = (
44 "pubdate",
45 "title",
46 )
47 list_key_func = PagingNotificationListKeyConstructor()
48 serializer_class = NotificationSerializer
49 permission_classes = (
50 IsAuthenticated,
51 DRYPermissions,
52 )
53
54 @etag(list_key_func)
55 @cache_response(key_func=list_key_func)
56 def get(self, request, *args, **kwargs):
57 """
58 Lists all notifications of a user.
59 ---
60
61 parameters:
62 - name: Authorization
63 description: Bearer token to make an authenticated request.
64 required: true
65 paramType: header
66 - name: page
67 description: Restricts output to the given page number.
68 required: false
69 paramType: query
70 - name: page_size
71 description: Sets the number of notifications per page.
72 required: false
73 paramType: query
74 - name: search
75 description: Filters by title.
76 required: false
77 paramType: query
78 - name: ordering
79 description: Sorts the results. You can order by (-)pubdate or (-)title.
80 paramType: query
81 - name: type
82 description: Filters by notification type.
83 paramType: query
84 - name: subscription_type
85 description: Filters by subscription type.
86 paramType: query
87 - name: expand
88 description: Returns an object instead of an identifier representing the given field.
89 required: false
90 paramType: query
91 responseMessages:
92 - code: 401
93 message: Not Authenticated
94 - code: 404
95 message: Not Found
96 """
97 return self.list(request, *args, **kwargs)
98
99 def get_queryset(self):
100 queryset = Notification.objects.get_notifications_of(self.request.user)
101 subscription_type = self.request.query_params.get("subscription_type", None)
102 if subscription_type:
103 queryset = queryset.filter(subscription__content_type__model=subscription_type)
104 _type = self.request.query_params.get("type", None)
105 if _type:
106 queryset = queryset.filter(content_type__model=_type)
107 return queryset
108
[end of zds/notification/api/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/notification/api/views.py b/zds/notification/api/views.py
--- a/zds/notification/api/views.py
+++ b/zds/notification/api/views.py
@@ -84,6 +84,9 @@
- name: subscription_type
description: Filters by subscription type.
paramType: query
+ - name: is_read
+ description: Filters by read status.
+ paramType: query
- name: expand
description: Returns an object instead of an identifier representing the given field.
required: false
@@ -104,4 +107,9 @@
_type = self.request.query_params.get("type", None)
if _type:
queryset = queryset.filter(content_type__model=_type)
+ is_read = str(self.request.query_params.get("is_read", None)).lower()
+ if is_read == "true":
+ queryset = queryset.filter(is_read=True)
+ elif is_read == "false":
+ queryset = queryset.filter(is_read=False)
return queryset
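With this patch applied, a client can request only unread notifications; an illustrative call (the host, token, and use of the `requests` library are placeholders, not part of the patch):

    import requests

    r = requests.get(
        "https://example.org/api/notifications",
        params={"is_read": "false"},
        headers={"Authorization": "Bearer <token>"},
    )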
| {"golden_diff": "diff --git a/zds/notification/api/views.py b/zds/notification/api/views.py\n--- a/zds/notification/api/views.py\n+++ b/zds/notification/api/views.py\n@@ -84,6 +84,9 @@\n - name: subscription_type\n description: Filters by subscription type.\n paramType: query\n+ - name: is_read\n+ description: Filters by read status.\n+ paramType: query\n - name: expand\n description: Returns an object instead of an identifier representing the given field.\n required: false\n@@ -104,4 +107,9 @@\n _type = self.request.query_params.get(\"type\", None)\n if _type:\n queryset = queryset.filter(content_type__model=_type)\n+ is_read = str(self.request.query_params.get(\"is_read\", None)).lower()\n+ if is_read == \"true\":\n+ queryset = queryset.filter(is_read=True)\n+ elif is_read == \"false\":\n+ queryset = queryset.filter(is_read=False)\n return queryset\n", "issue": "API: filtrer les notifications par `is_read`\n**Description du besoin**\r\n\r\nPour les besoins de l'extension, il serait int\u00e9ressant de pouvoir filtrer les notifications (URL `/api/notifications`) selon leur propri\u00e9t\u00e9 `is_read` pour ne r\u00e9cup\u00e9rer que les non lues (les autres n'ayant pas d'int\u00e9r\u00eat pour ce cas d'usage).\r\n\r\n**Description de la solution**\r\n\r\nAjouter un filtre pour `is_read` (bool\u00e9en) sur l'URL `/api/notifications`\r\n\r\n**Description des alternatives**\r\n\r\nPouvoir trier selon cette propri\u00e9t\u00e9 (pour avoir les non-lues d'abord), _a minima_.\r\n\r\n**Contexte additionnel**\r\n\r\nVoir le code de [notifier.js#64](https://github.com/zestedesavoir/extensions-notificateurs/blob/master/Universal/notifier.js#L64) pour voir le cas d'usage en question (qui me permettrait de supprimer le `.filter()` ligne 78 tout en r\u00e9cup\u00e9rant des notifications potentiellement anciennes mais non lues qui sont actuellement inaccessibles).\r\n\n", "before_files": [{"content": "import datetime\nfrom django.core.cache import cache\nfrom django.db.models.signals import post_delete\nfrom django.db.models.signals import post_save\nfrom dry_rest_permissions.generics import DRYPermissions\nfrom rest_framework import filters\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework_extensions.cache.decorators import cache_response\nfrom rest_framework_extensions.etag.decorators import etag\nfrom rest_framework_extensions.key_constructor import bits\nfrom rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor\n\nfrom zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit\nfrom zds.notification.api.serializers import NotificationSerializer\nfrom zds.notification.models import Notification\n\n\nclass PagingNotificationListKeyConstructor(DefaultKeyConstructor):\n pagination = DJRF3xPaginationKeyBit()\n search = bits.QueryParamsKeyBit([\"search\", \"ordering\", \"type\"])\n list_sql_query = bits.ListSqlQueryKeyBit()\n unique_view_id = bits.UniqueViewIdKeyBit()\n user = bits.UserKeyBit()\n updated_at = UpdatedAtKeyBit(\"api_updated_notification\")\n\n\ndef change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):\n cache.set(\"api_updated_notification\", datetime.datetime.utcnow())\n\n\npost_save.connect(receiver=change_api_notification_updated_at, sender=Notification)\npost_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)\n\n\nclass NotificationListAPI(ListAPIView):\n \"\"\"\n List of notification.\n \"\"\"\n\n filter_backends = 
(filters.SearchFilter, filters.OrderingFilter)\n search_fields = (\"title\",)\n ordering_fields = (\n \"pubdate\",\n \"title\",\n )\n list_key_func = PagingNotificationListKeyConstructor()\n serializer_class = NotificationSerializer\n permission_classes = (\n IsAuthenticated,\n DRYPermissions,\n )\n\n @etag(list_key_func)\n @cache_response(key_func=list_key_func)\n def get(self, request, *args, **kwargs):\n \"\"\"\n Lists all notifications of a user.\n ---\n\n parameters:\n - name: Authorization\n description: Bearer token to make an authenticated request.\n required: true\n paramType: header\n - name: page\n description: Restricts output to the given page number.\n required: false\n paramType: query\n - name: page_size\n description: Sets the number of notifications per page.\n required: false\n paramType: query\n - name: search\n description: Filters by title.\n required: false\n paramType: query\n - name: ordering\n description: Sorts the results. You can order by (-)pubdate or (-)title.\n paramType: query\n - name: type\n description: Filters by notification type.\n paramType: query\n - name: subscription_type\n description: Filters by subscription type.\n paramType: query\n - name: expand\n description: Returns an object instead of an identifier representing the given field.\n required: false\n paramType: query\n responseMessages:\n - code: 401\n message: Not Authenticated\n - code: 404\n message: Not Found\n \"\"\"\n return self.list(request, *args, **kwargs)\n\n def get_queryset(self):\n queryset = Notification.objects.get_notifications_of(self.request.user)\n subscription_type = self.request.query_params.get(\"subscription_type\", None)\n if subscription_type:\n queryset = queryset.filter(subscription__content_type__model=subscription_type)\n _type = self.request.query_params.get(\"type\", None)\n if _type:\n queryset = queryset.filter(content_type__model=_type)\n return queryset\n", "path": "zds/notification/api/views.py"}]} | 1,780 | 225 |
gh_patches_debug_16446 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Handle print() statements for inline scripts
print() statements in inline scripts should be suppressed and turned into ctx.log.warn() calls instead.
</issue>
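A standalone sketch of the general redirection technique, using the stdlib logging module in place of mitmproxy's ctx.log (this is not mitmproxy's actual code):

    import contextlib
    import logging

    logging.basicConfig(level=logging.WARNING)
    log = logging.getLogger("script")

    class StreamLog:
        """File-like object that forwards writes to a log callable."""
        def __init__(self, log_func):
            self.log_func = log_func

        def write(self, buf):
            if buf.strip():  # skip the bare newlines print() emits
                self.log_func(buf)

    with contextlib.redirect_stdout(StreamLog(log.warning)):
        print("hello from an inline script")  # emitted as a warning, not to stdout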
<code>
[start of mitmproxy/addons/script.py]
1 import contextlib
2 import os
3 import shlex
4 import sys
5 import threading
6 import traceback
7 import types
8
9 from mitmproxy import exceptions
10 from mitmproxy import ctx
11 from mitmproxy import eventsequence
12
13
14 import watchdog.events
15 from watchdog.observers import polling
16
17
18 def parse_command(command):
19 """
20 Returns a (path, args) tuple.
21 """
22 if not command or not command.strip():
23 raise ValueError("Empty script command.")
24 # Windows: escape all backslashes in the path.
25 if os.name == "nt": # pragma: no cover
26 backslashes = shlex.split(command, posix=False)[0].count("\\")
27 command = command.replace("\\", "\\\\", backslashes)
28 args = shlex.split(command) # pragma: no cover
29 args[0] = os.path.expanduser(args[0])
30 if not os.path.exists(args[0]):
31 raise ValueError(
32 ("Script file not found: %s.\r\n"
33 "If your script path contains spaces, "
34 "make sure to wrap it in additional quotes, e.g. -s \"'./foo bar/baz.py' --args\".") %
35 args[0])
36 elif os.path.isdir(args[0]):
37 raise ValueError("Not a file: %s" % args[0])
38 return args[0], args[1:]
39
40
41 def cut_traceback(tb, func_name):
42 """
43 Cut off a traceback at the function with the given name.
44 The func_name's frame is excluded.
45
46 Args:
47 tb: traceback object, as returned by sys.exc_info()[2]
48 func_name: function name
49
50 Returns:
51 Reduced traceback.
52 """
53 tb_orig = tb
54
55 for _, _, fname, _ in traceback.extract_tb(tb):
56 tb = tb.tb_next
57 if fname == func_name:
58 break
59
60 if tb is None:
61 # We could not find the method, take the full stack trace.
62 # This may happen on some Python interpreters/flavors (e.g. PyInstaller).
63 return tb_orig
64 else:
65 return tb
66
67
68 @contextlib.contextmanager
69 def scriptenv(path, args):
70 oldargs = sys.argv
71 sys.argv = [path] + args
72 script_dir = os.path.dirname(os.path.abspath(path))
73 sys.path.append(script_dir)
74 try:
75 yield
76 except SystemExit as v:
77 ctx.log.error("Script exited with code %s" % v.code)
78 except Exception:
79 etype, value, tb = sys.exc_info()
80 tb = cut_traceback(tb, "scriptenv").tb_next
81 ctx.log.error(
82 "Script error: %s" % "".join(
83 traceback.format_exception(etype, value, tb)
84 )
85 )
86 finally:
87 sys.argv = oldargs
88 sys.path.pop()
89
90
91 def load_script(path, args):
92 with open(path, "rb") as f:
93 try:
94 code = compile(f.read(), path, 'exec')
95 except SyntaxError as e:
96 ctx.log.error(
97 "Script error: %s line %s: %s" % (
98 e.filename, e.lineno, e.msg
99 )
100 )
101 return
102 ns = {'__file__': os.path.abspath(path)}
103 with scriptenv(path, args):
104 exec(code, ns)
105 return types.SimpleNamespace(**ns)
106
107
108 class ReloadHandler(watchdog.events.FileSystemEventHandler):
109 def __init__(self, callback):
110 self.callback = callback
111
112 def filter(self, event):
113 """
114 Returns True only when .py file is changed
115 """
116 if event.is_directory:
117 return False
118 if os.path.basename(event.src_path).startswith("."):
119 return False
120 if event.src_path.endswith(".py"):
121 return True
122 return False
123
124 def on_modified(self, event):
125 if self.filter(event):
126 self.callback()
127
128 def on_created(self, event):
129 if self.filter(event):
130 self.callback()
131
132
133 class Script:
134 """
135 An addon that manages a single script.
136 """
137 def __init__(self, command):
138 self.name = command
139
140 self.command = command
141 self.path, self.args = parse_command(command)
142 self.ns = None
143 self.observer = None
144 self.dead = False
145
146 self.last_options = None
147 self.should_reload = threading.Event()
148
149 for i in eventsequence.Events:
150 if not hasattr(self, i):
151 def mkprox():
152 evt = i
153
154 def prox(*args, **kwargs):
155 self.run(evt, *args, **kwargs)
156 return prox
157 setattr(self, i, mkprox())
158
159 def run(self, name, *args, **kwargs):
160 # It's possible for ns to be un-initialised if we failed during
161 # configure
162 if self.ns is not None and not self.dead:
163 func = getattr(self.ns, name, None)
164 if func:
165 with scriptenv(self.path, self.args):
166 return func(*args, **kwargs)
167
168 def reload(self):
169 self.should_reload.set()
170
171 def load_script(self):
172 self.ns = load_script(self.path, self.args)
173 ret = self.run("start", self.last_options)
174 if ret:
175 self.ns = ret
176 self.run("start", self.last_options)
177
178 def tick(self):
179 if self.should_reload.is_set():
180 self.should_reload.clear()
181 ctx.log.info("Reloading script: %s" % self.name)
182 self.ns = load_script(self.path, self.args)
183 self.start(self.last_options)
184 self.configure(self.last_options, self.last_options.keys())
185 else:
186 self.run("tick")
187
188 def start(self, opts):
189 self.last_options = opts
190 self.load_script()
191
192 def configure(self, options, updated):
193 self.last_options = options
194 if not self.observer:
195 self.observer = polling.PollingObserver()
196 # Bind the handler to the real underlying master object
197 self.observer.schedule(
198 ReloadHandler(self.reload),
199 os.path.dirname(self.path) or "."
200 )
201 self.observer.start()
202 self.run("configure", options, updated)
203
204 def done(self):
205 self.run("done")
206 self.dead = True
207
208
209 class ScriptLoader:
210 """
211 An addon that manages loading scripts from options.
212 """
213 def __init__(self):
214 self.is_running = False
215
216 def running(self):
217 self.is_running = True
218
219 def run_once(self, command, flows):
220 try:
221 sc = Script(command)
222 except ValueError as e:
223 raise ValueError(str(e))
224 sc.load_script()
225 for f in flows:
226 for evt, o in eventsequence.iterate(f):
227 sc.run(evt, o)
228 sc.done()
229 return sc
230
231 def configure(self, options, updated):
232 if "scripts" in updated:
233 for s in options.scripts:
234 if options.scripts.count(s) > 1:
235 raise exceptions.OptionsError("Duplicate script: %s" % s)
236
237 for a in ctx.master.addons.chain[:]:
238 if isinstance(a, Script) and a.name not in options.scripts:
239 ctx.log.info("Un-loading script: %s" % a.name)
240 ctx.master.addons.remove(a)
241
242 # The machinations below are to ensure that:
243 # - Scripts remain in the same order
244 # - Scripts are listed directly after the script addon. This is
245 # needed to ensure that interactions with, for instance, flow
246 # serialization remains correct.
247 # - Scripts are not initialized un-necessarily. If only a
248 # script's order in the script list has changed, it should simply
249 # be moved.
250
251 current = {}
252 for a in ctx.master.addons.chain[:]:
253 if isinstance(a, Script):
254 current[a.name] = a
255 ctx.master.addons.chain.remove(a)
256
257 ordered = []
258 newscripts = []
259 for s in options.scripts:
260 if s in current:
261 ordered.append(current[s])
262 else:
263 ctx.log.info("Loading script: %s" % s)
264 try:
265 sc = Script(s)
266 except ValueError as e:
267 raise exceptions.OptionsError(str(e))
268 ordered.append(sc)
269 newscripts.append(sc)
270
271 ochain = ctx.master.addons.chain
272 pos = ochain.index(self)
273 ctx.master.addons.chain = ochain[:pos + 1] + ordered + ochain[pos + 1:]
274
275 for s in newscripts:
276 ctx.master.addons.invoke_addon(s, "start", options)
277 if self.is_running:
278 # If we're already running, we configure and tell the addon
279 # we're up and running.
280 ctx.master.addons.invoke_addon(
281 s, "configure", options, options.keys()
282 )
283 ctx.master.addons.invoke_addon(s, "running")
284
[end of mitmproxy/addons/script.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py
--- a/mitmproxy/addons/script.py
+++ b/mitmproxy/addons/script.py
@@ -65,14 +65,28 @@
return tb
+class StreamLog:
+ """
+ A class for redirecting output using contextlib.
+ """
+ def __init__(self, log):
+ self.log = log
+
+ def write(self, buf):
+ if buf.strip():
+ self.log(buf)
+
+
@contextlib.contextmanager
def scriptenv(path, args):
oldargs = sys.argv
sys.argv = [path] + args
script_dir = os.path.dirname(os.path.abspath(path))
sys.path.append(script_dir)
+ stdout_replacement = StreamLog(ctx.log.warn)
try:
- yield
+ with contextlib.redirect_stdout(stdout_replacement):
+ yield
except SystemExit as v:
ctx.log.error("Script exited with code %s" % v.code)
except Exception:
| {"golden_diff": "diff --git a/mitmproxy/addons/script.py b/mitmproxy/addons/script.py\n--- a/mitmproxy/addons/script.py\n+++ b/mitmproxy/addons/script.py\n@@ -65,14 +65,28 @@\n return tb\n \n \n+class StreamLog:\n+ \"\"\"\n+ A class for redirecting output using contextlib.\n+ \"\"\"\n+ def __init__(self, log):\n+ self.log = log\n+\n+ def write(self, buf):\n+ if buf.strip():\n+ self.log(buf)\n+\n+\n @contextlib.contextmanager\n def scriptenv(path, args):\n oldargs = sys.argv\n sys.argv = [path] + args\n script_dir = os.path.dirname(os.path.abspath(path))\n sys.path.append(script_dir)\n+ stdout_replacement = StreamLog(ctx.log.warn)\n try:\n- yield\n+ with contextlib.redirect_stdout(stdout_replacement):\n+ yield\n except SystemExit as v:\n ctx.log.error(\"Script exited with code %s\" % v.code)\n except Exception:\n", "issue": "Handle print() statements for inline scripts\nprint() statements in inline scripts should be suppressed, and produce into ctx.log.warn() calls instead. \n\n", "before_files": [{"content": "import contextlib\nimport os\nimport shlex\nimport sys\nimport threading\nimport traceback\nimport types\n\nfrom mitmproxy import exceptions\nfrom mitmproxy import ctx\nfrom mitmproxy import eventsequence\n\n\nimport watchdog.events\nfrom watchdog.observers import polling\n\n\ndef parse_command(command):\n \"\"\"\n Returns a (path, args) tuple.\n \"\"\"\n if not command or not command.strip():\n raise ValueError(\"Empty script command.\")\n # Windows: escape all backslashes in the path.\n if os.name == \"nt\": # pragma: no cover\n backslashes = shlex.split(command, posix=False)[0].count(\"\\\\\")\n command = command.replace(\"\\\\\", \"\\\\\\\\\", backslashes)\n args = shlex.split(command) # pragma: no cover\n args[0] = os.path.expanduser(args[0])\n if not os.path.exists(args[0]):\n raise ValueError(\n (\"Script file not found: %s.\\r\\n\"\n \"If your script path contains spaces, \"\n \"make sure to wrap it in additional quotes, e.g. -s \\\"'./foo bar/baz.py' --args\\\".\") %\n args[0])\n elif os.path.isdir(args[0]):\n raise ValueError(\"Not a file: %s\" % args[0])\n return args[0], args[1:]\n\n\ndef cut_traceback(tb, func_name):\n \"\"\"\n Cut off a traceback at the function with the given name.\n The func_name's frame is excluded.\n\n Args:\n tb: traceback object, as returned by sys.exc_info()[2]\n func_name: function name\n\n Returns:\n Reduced traceback.\n \"\"\"\n tb_orig = tb\n\n for _, _, fname, _ in traceback.extract_tb(tb):\n tb = tb.tb_next\n if fname == func_name:\n break\n\n if tb is None:\n # We could not find the method, take the full stack trace.\n # This may happen on some Python interpreters/flavors (e.g. 
PyInstaller).\n return tb_orig\n else:\n return tb\n\n\[email protected]\ndef scriptenv(path, args):\n oldargs = sys.argv\n sys.argv = [path] + args\n script_dir = os.path.dirname(os.path.abspath(path))\n sys.path.append(script_dir)\n try:\n yield\n except SystemExit as v:\n ctx.log.error(\"Script exited with code %s\" % v.code)\n except Exception:\n etype, value, tb = sys.exc_info()\n tb = cut_traceback(tb, \"scriptenv\").tb_next\n ctx.log.error(\n \"Script error: %s\" % \"\".join(\n traceback.format_exception(etype, value, tb)\n )\n )\n finally:\n sys.argv = oldargs\n sys.path.pop()\n\n\ndef load_script(path, args):\n with open(path, \"rb\") as f:\n try:\n code = compile(f.read(), path, 'exec')\n except SyntaxError as e:\n ctx.log.error(\n \"Script error: %s line %s: %s\" % (\n e.filename, e.lineno, e.msg\n )\n )\n return\n ns = {'__file__': os.path.abspath(path)}\n with scriptenv(path, args):\n exec(code, ns)\n return types.SimpleNamespace(**ns)\n\n\nclass ReloadHandler(watchdog.events.FileSystemEventHandler):\n def __init__(self, callback):\n self.callback = callback\n\n def filter(self, event):\n \"\"\"\n Returns True only when .py file is changed\n \"\"\"\n if event.is_directory:\n return False\n if os.path.basename(event.src_path).startswith(\".\"):\n return False\n if event.src_path.endswith(\".py\"):\n return True\n return False\n\n def on_modified(self, event):\n if self.filter(event):\n self.callback()\n\n def on_created(self, event):\n if self.filter(event):\n self.callback()\n\n\nclass Script:\n \"\"\"\n An addon that manages a single script.\n \"\"\"\n def __init__(self, command):\n self.name = command\n\n self.command = command\n self.path, self.args = parse_command(command)\n self.ns = None\n self.observer = None\n self.dead = False\n\n self.last_options = None\n self.should_reload = threading.Event()\n\n for i in eventsequence.Events:\n if not hasattr(self, i):\n def mkprox():\n evt = i\n\n def prox(*args, **kwargs):\n self.run(evt, *args, **kwargs)\n return prox\n setattr(self, i, mkprox())\n\n def run(self, name, *args, **kwargs):\n # It's possible for ns to be un-initialised if we failed during\n # configure\n if self.ns is not None and not self.dead:\n func = getattr(self.ns, name, None)\n if func:\n with scriptenv(self.path, self.args):\n return func(*args, **kwargs)\n\n def reload(self):\n self.should_reload.set()\n\n def load_script(self):\n self.ns = load_script(self.path, self.args)\n ret = self.run(\"start\", self.last_options)\n if ret:\n self.ns = ret\n self.run(\"start\", self.last_options)\n\n def tick(self):\n if self.should_reload.is_set():\n self.should_reload.clear()\n ctx.log.info(\"Reloading script: %s\" % self.name)\n self.ns = load_script(self.path, self.args)\n self.start(self.last_options)\n self.configure(self.last_options, self.last_options.keys())\n else:\n self.run(\"tick\")\n\n def start(self, opts):\n self.last_options = opts\n self.load_script()\n\n def configure(self, options, updated):\n self.last_options = options\n if not self.observer:\n self.observer = polling.PollingObserver()\n # Bind the handler to the real underlying master object\n self.observer.schedule(\n ReloadHandler(self.reload),\n os.path.dirname(self.path) or \".\"\n )\n self.observer.start()\n self.run(\"configure\", options, updated)\n\n def done(self):\n self.run(\"done\")\n self.dead = True\n\n\nclass ScriptLoader:\n \"\"\"\n An addon that manages loading scripts from options.\n \"\"\"\n def __init__(self):\n self.is_running = False\n\n def running(self):\n 
self.is_running = True\n\n def run_once(self, command, flows):\n try:\n sc = Script(command)\n except ValueError as e:\n raise ValueError(str(e))\n sc.load_script()\n for f in flows:\n for evt, o in eventsequence.iterate(f):\n sc.run(evt, o)\n sc.done()\n return sc\n\n def configure(self, options, updated):\n if \"scripts\" in updated:\n for s in options.scripts:\n if options.scripts.count(s) > 1:\n raise exceptions.OptionsError(\"Duplicate script: %s\" % s)\n\n for a in ctx.master.addons.chain[:]:\n if isinstance(a, Script) and a.name not in options.scripts:\n ctx.log.info(\"Un-loading script: %s\" % a.name)\n ctx.master.addons.remove(a)\n\n # The machinations below are to ensure that:\n # - Scripts remain in the same order\n # - Scripts are listed directly after the script addon. This is\n # needed to ensure that interactions with, for instance, flow\n # serialization remains correct.\n # - Scripts are not initialized un-necessarily. If only a\n # script's order in the script list has changed, it should simply\n # be moved.\n\n current = {}\n for a in ctx.master.addons.chain[:]:\n if isinstance(a, Script):\n current[a.name] = a\n ctx.master.addons.chain.remove(a)\n\n ordered = []\n newscripts = []\n for s in options.scripts:\n if s in current:\n ordered.append(current[s])\n else:\n ctx.log.info(\"Loading script: %s\" % s)\n try:\n sc = Script(s)\n except ValueError as e:\n raise exceptions.OptionsError(str(e))\n ordered.append(sc)\n newscripts.append(sc)\n\n ochain = ctx.master.addons.chain\n pos = ochain.index(self)\n ctx.master.addons.chain = ochain[:pos + 1] + ordered + ochain[pos + 1:]\n\n for s in newscripts:\n ctx.master.addons.invoke_addon(s, \"start\", options)\n if self.is_running:\n # If we're already running, we configure and tell the addon\n # we're up and running.\n ctx.master.addons.invoke_addon(\n s, \"configure\", options, options.keys()\n )\n ctx.master.addons.invoke_addon(s, \"running\")\n", "path": "mitmproxy/addons/script.py"}]} | 3,235 | 232 |
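The golden patch above hinges on swapping `sys.stdout` for a logger while a user script runs. A minimal, self-contained sketch of that pattern — using a generic `log` callable rather than mitmproxy's real `ctx.log` — looks like this:

```python
import contextlib


class StreamLog:
    """File-like shim that forwards non-empty writes to a logging callable."""

    def __init__(self, log):
        self.log = log

    def write(self, buf):
        # print() typically emits the trailing newline as a separate write; drop it.
        if buf.strip():
            self.log(buf)


def run_user_code(log):
    # Anything print()ed inside the block is routed to `log` instead of stdout.
    with contextlib.redirect_stdout(StreamLog(log)):
        print("hello from an inline script")


captured = []
run_user_code(captured.append)
assert any("inline script" in line for line in captured)
```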
gh_patches_debug_43858 | rasdani/github-patches | git_diff | litestar-org__litestar-3293 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: route not recognized in schemas for specific types
### Description
Currently some parameter types don't show in the docs. It seems `int` works fine, while `str` and `uuid` don't.
This is silent: no logs are shown, even when running in `debug` mode. The behavior is the same for both `/schema/elements` and `/schema/swagger`.
### URL to code causing the issue
_No response_
### MCVE
```python
# (Uncomment one at a time)
class BugRoutes(Controller):
tags = ["Bugs"]
path = "/bugs"
dependencies = dict(context=Provide(route_context))
@routes.get()
def get_bugs(self, context: AppContext) -> Response:
return Response({})
@routes.post()
def create_bug(self, context: AppContext, data: Any) -> Response:
return Response({})
# This works
# @routes.get("/{param:int}")
# def get_bug(self, context: AppContext, param: int) -> Response:
# return Response({})
# This doesn't work (not showing on docs)
# @routes.get("/{param:str}")
# def get_bug_str(self, context: AppContext, param: str) -> Response:
# return Response({})
# This doesn't work (not showing on docs)
# @routes.get("/{param:uuid}")
# def get_bug_uuid(self, context: AppContext, param: UUID) -> Response:
# return Response({})
@routes.patch("/{param:int}")
def update_bug(self, context: AppContext, param: int) -> Response:
return Response({})
@routes.delete("/{param:int}")
def delete_bug(self, context: AppContext, param: int) -> None:
return Response({})
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.3.2
### Platform
- [X] Mac
- [ ] Linux
- [ ] Windows
- [ ] Other (Please specify in the description above)
</issue>
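Stripped of the controller and dependency boilerplate, the collision can be reproduced with two plain handlers whose templates normalise to the same OpenAPI path. This is a hedged sketch assuming Litestar 2.x top-level decorators, not code taken from the report:

```python
from litestar import Litestar, delete, get


@get("/bugs/{param:str}")
async def get_bug(param: str) -> dict:
    return {"param": param}


@delete("/bugs/{param:int}")
async def delete_bug(param: int) -> None:
    return None


app = Litestar(route_handlers=[get_bug, delete_bug])
# Both handlers render as "/bugs/{param}" in the OpenAPI document; before the
# fix, one PathItem silently overwrote the other, so the GET appeared to vanish.
```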
<code>
[start of litestar/_openapi/path_item.py]
1 from __future__ import annotations
2
3 from inspect import cleandoc
4 from typing import TYPE_CHECKING
5
6 from litestar._openapi.parameters import create_parameters_for_handler
7 from litestar._openapi.request_body import create_request_body
8 from litestar._openapi.responses import create_responses_for_handler
9 from litestar._openapi.utils import SEPARATORS_CLEANUP_PATTERN
10 from litestar.enums import HttpMethod
11 from litestar.openapi.spec import Operation, PathItem
12 from litestar.utils.helpers import unwrap_partial
13
14 if TYPE_CHECKING:
15 from litestar._openapi.datastructures import OpenAPIContext
16 from litestar.handlers.http_handlers import HTTPRouteHandler
17 from litestar.routes import HTTPRoute
18
19 __all__ = ("create_path_item_for_route",)
20
21
22 class PathItemFactory:
23 """Factory for creating a PathItem instance for a given route."""
24
25 def __init__(self, openapi_context: OpenAPIContext, route: HTTPRoute) -> None:
26 self.context = openapi_context
27 self.route = route
28 self._path_item = PathItem()
29
30 def create_path_item(self) -> PathItem:
31 """Create a PathItem for the given route parsing all http_methods into Operation Models.
32
33 Returns:
34 A PathItem instance.
35 """
36 for http_method, handler_tuple in self.route.route_handler_map.items():
37 route_handler, _ = handler_tuple
38
39 if not route_handler.resolve_include_in_schema():
40 continue
41
42 operation = self.create_operation_for_handler_method(route_handler, HttpMethod(http_method))
43
44 setattr(self._path_item, http_method.lower(), operation)
45
46 return self._path_item
47
48 def create_operation_for_handler_method(
49 self, route_handler: HTTPRouteHandler, http_method: HttpMethod
50 ) -> Operation:
51 """Create an Operation instance for a given route handler and http method.
52
53 Args:
54 route_handler: A route handler instance.
55 http_method: An HttpMethod enum value.
56
57 Returns:
58 An Operation instance.
59 """
60 operation_id = self.create_operation_id(route_handler, http_method)
61 parameters = create_parameters_for_handler(self.context, route_handler, self.route.path_parameters)
62 signature_fields = route_handler.parsed_fn_signature.parameters
63
64 request_body = None
65 if data_field := signature_fields.get("data"):
66 request_body = create_request_body(
67 self.context, route_handler.handler_id, route_handler.resolve_data_dto(), data_field
68 )
69
70 raises_validation_error = bool(data_field or self._path_item.parameters or parameters)
71 responses = create_responses_for_handler(
72 self.context, route_handler, raises_validation_error=raises_validation_error
73 )
74
75 return route_handler.operation_class(
76 operation_id=operation_id,
77 tags=route_handler.resolve_tags() or None,
78 summary=route_handler.summary or SEPARATORS_CLEANUP_PATTERN.sub("", route_handler.handler_name.title()),
79 description=self.create_description_for_handler(route_handler),
80 deprecated=route_handler.deprecated,
81 responses=responses,
82 request_body=request_body,
83 parameters=parameters or None, # type: ignore[arg-type]
84 security=route_handler.resolve_security() or None,
85 )
86
87 def create_operation_id(self, route_handler: HTTPRouteHandler, http_method: HttpMethod) -> str:
88 """Create an operation id for a given route handler and http method.
89
90 Adds the operation id to the context's operation id set, where it is checked for uniqueness.
91
92 Args:
93 route_handler: A route handler instance.
94 http_method: An HttpMethod enum value.
95
96 Returns:
97 An operation id string.
98 """
99 if isinstance(route_handler.operation_id, str):
100 operation_id = route_handler.operation_id
101 elif callable(route_handler.operation_id):
102 operation_id = route_handler.operation_id(route_handler, http_method, self.route.path_components)
103 else:
104 operation_id = self.context.openapi_config.operation_id_creator(
105 route_handler, http_method, self.route.path_components
106 )
107 self.context.add_operation_id(operation_id)
108 return operation_id
109
110 def create_description_for_handler(self, route_handler: HTTPRouteHandler) -> str | None:
111 """Produce the operation description for a route handler.
112
113 Args:
114 route_handler: A route handler instance.
115
116 Returns:
117 An optional description string
118 """
119 handler_description = route_handler.description
120 if handler_description is None and self.context.openapi_config.use_handler_docstrings:
121 fn = unwrap_partial(route_handler.fn)
122 return cleandoc(fn.__doc__) if fn.__doc__ else None
123 return handler_description
124
125
126 def create_path_item_for_route(openapi_context: OpenAPIContext, route: HTTPRoute) -> PathItem:
127 """Create a PathItem for the given route parsing all http_methods into Operation Models.
128
129 Args:
130 openapi_context: The OpenAPIContext instance.
131 route: The route to create a PathItem for.
132
133 Returns:
134 A PathItem instance.
135 """
136 path_item_factory = PathItemFactory(openapi_context, route)
137 return path_item_factory.create_path_item()
138
[end of litestar/_openapi/path_item.py]
[start of litestar/_openapi/plugin.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from litestar._openapi.datastructures import OpenAPIContext
6 from litestar._openapi.path_item import create_path_item_for_route
7 from litestar.exceptions import ImproperlyConfiguredException
8 from litestar.plugins import InitPluginProtocol
9 from litestar.plugins.base import ReceiveRoutePlugin
10 from litestar.routes import HTTPRoute
11
12 if TYPE_CHECKING:
13 from litestar.app import Litestar
14 from litestar.config.app import AppConfig
15 from litestar.openapi.config import OpenAPIConfig
16 from litestar.openapi.spec import OpenAPI
17 from litestar.routes import BaseRoute
18
19
20 class OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):
21 __slots__ = (
22 "app",
23 "included_routes",
24 "_openapi_config",
25 "_openapi_schema",
26 )
27
28 def __init__(self, app: Litestar) -> None:
29 self.app = app
30 self.included_routes: dict[str, HTTPRoute] = {}
31 self._openapi_config: OpenAPIConfig | None = None
32 self._openapi_schema: OpenAPI | None = None
33
34 def _build_openapi_schema(self) -> OpenAPI:
35 openapi_config = self.openapi_config
36
37 if openapi_config.create_examples:
38 from litestar._openapi.schema_generation.examples import ExampleFactory
39
40 ExampleFactory.seed_random(openapi_config.random_seed)
41
42 openapi = openapi_config.to_openapi_schema()
43 context = OpenAPIContext(openapi_config=openapi_config, plugins=self.app.plugins.openapi)
44 openapi.paths = {
45 route.path_format or "/": create_path_item_for_route(context, route)
46 for route in self.included_routes.values()
47 }
48 openapi.components.schemas = context.schema_registry.generate_components_schemas()
49 return openapi
50
51 def provide_openapi(self) -> OpenAPI:
52 if not self._openapi_schema:
53 self._openapi_schema = self._build_openapi_schema()
54 return self._openapi_schema
55
56 def on_app_init(self, app_config: AppConfig) -> AppConfig:
57 if app_config.openapi_config:
58 self._openapi_config = app_config.openapi_config
59 app_config.route_handlers.append(self.openapi_config.openapi_controller)
60 return app_config
61
62 @property
63 def openapi_config(self) -> OpenAPIConfig:
64 if not self._openapi_config:
65 raise ImproperlyConfiguredException("OpenAPIConfig not initialized")
66 return self._openapi_config
67
68 def receive_route(self, route: BaseRoute) -> None:
69 if not isinstance(route, HTTPRoute):
70 return
71
72 if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):
73 # Force recompute the schema if a new route is added
74 self._openapi_schema = None
75 self.included_routes[route.path] = route
76
[end of litestar/_openapi/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/litestar/_openapi/path_item.py b/litestar/_openapi/path_item.py
--- a/litestar/_openapi/path_item.py
+++ b/litestar/_openapi/path_item.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import dataclasses
from inspect import cleandoc
from typing import TYPE_CHECKING
@@ -8,6 +9,7 @@
from litestar._openapi.responses import create_responses_for_handler
from litestar._openapi.utils import SEPARATORS_CLEANUP_PATTERN
from litestar.enums import HttpMethod
+from litestar.exceptions import ImproperlyConfiguredException
from litestar.openapi.spec import Operation, PathItem
from litestar.utils.helpers import unwrap_partial
@@ -16,7 +18,7 @@
from litestar.handlers.http_handlers import HTTPRouteHandler
from litestar.routes import HTTPRoute
-__all__ = ("create_path_item_for_route",)
+__all__ = ("create_path_item_for_route", "merge_path_item_operations")
class PathItemFactory:
@@ -135,3 +137,32 @@
"""
path_item_factory = PathItemFactory(openapi_context, route)
return path_item_factory.create_path_item()
+
+
+def merge_path_item_operations(source: PathItem, other: PathItem, for_path: str) -> PathItem:
+ """Merge operations from path items, creating a new path item that includes
+ operations from both.
+ """
+ attrs_to_merge = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}
+ fields = {f.name for f in dataclasses.fields(PathItem)} - attrs_to_merge
+ if any(getattr(source, attr) and getattr(other, attr) for attr in attrs_to_merge):
+ raise ValueError("Cannot merge operation for PathItem if operation is set on both items")
+
+ if differing_values := [
+ (value_a, value_b) for attr in fields if (value_a := getattr(source, attr)) != (value_b := getattr(other, attr))
+ ]:
+ raise ImproperlyConfiguredException(
+ f"Conflicting OpenAPI path configuration for {for_path!r}. "
+ f"{', '.join(f'{a} != {b}' for a, b in differing_values)}"
+ )
+
+ return dataclasses.replace(
+ source,
+ get=source.get or other.get,
+ post=source.post or other.post,
+ patch=source.patch or other.patch,
+ put=source.put or other.put,
+ delete=source.delete or other.delete,
+ options=source.options or other.options,
+ trace=source.trace or other.trace,
+ )
diff --git a/litestar/_openapi/plugin.py b/litestar/_openapi/plugin.py
--- a/litestar/_openapi/plugin.py
+++ b/litestar/_openapi/plugin.py
@@ -3,7 +3,7 @@
from typing import TYPE_CHECKING
from litestar._openapi.datastructures import OpenAPIContext
-from litestar._openapi.path_item import create_path_item_for_route
+from litestar._openapi.path_item import create_path_item_for_route, merge_path_item_operations
from litestar.exceptions import ImproperlyConfiguredException
from litestar.plugins import InitPluginProtocol
from litestar.plugins.base import ReceiveRoutePlugin
@@ -13,7 +13,7 @@
from litestar.app import Litestar
from litestar.config.app import AppConfig
from litestar.openapi.config import OpenAPIConfig
- from litestar.openapi.spec import OpenAPI
+ from litestar.openapi.spec import OpenAPI, PathItem
from litestar.routes import BaseRoute
@@ -41,10 +41,15 @@
openapi = openapi_config.to_openapi_schema()
context = OpenAPIContext(openapi_config=openapi_config, plugins=self.app.plugins.openapi)
- openapi.paths = {
- route.path_format or "/": create_path_item_for_route(context, route)
- for route in self.included_routes.values()
- }
+ path_items: dict[str, PathItem] = {}
+ for route in self.included_routes.values():
+ path = route.path_format or "/"
+ path_item = create_path_item_for_route(context, route)
+ if existing_path_item := path_items.get(path):
+ path_item = merge_path_item_operations(existing_path_item, path_item, for_path=path)
+ path_items[path] = path_item
+
+ openapi.paths = path_items
openapi.components.schemas = context.schema_registry.generate_components_schemas()
return openapi
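A rough illustration of what the new `merge_path_item_operations` helper does with two partially populated path items — the import path comes from the diff (a private module), and it is assumed here that the spec dataclasses accept keyword construction with defaults:

```python
from litestar._openapi.path_item import merge_path_item_operations
from litestar.openapi.spec import Operation, PathItem

item_a = PathItem(get=Operation(operation_id="GetBug"))
item_b = PathItem(delete=Operation(operation_id="DeleteBug"))

# Distinct verbs merge into one PathItem; the same verb set on both sides
# raises ValueError, and conflicting shared fields raise
# ImproperlyConfiguredException.
merged = merge_path_item_operations(item_a, item_b, for_path="/bugs/{param}")
assert merged.get is item_a.get and merged.delete is item_b.delete
```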
| {"golden_diff": "diff --git a/litestar/_openapi/path_item.py b/litestar/_openapi/path_item.py\n--- a/litestar/_openapi/path_item.py\n+++ b/litestar/_openapi/path_item.py\n@@ -1,5 +1,6 @@\n from __future__ import annotations\n \n+import dataclasses\n from inspect import cleandoc\n from typing import TYPE_CHECKING\n \n@@ -8,6 +9,7 @@\n from litestar._openapi.responses import create_responses_for_handler\n from litestar._openapi.utils import SEPARATORS_CLEANUP_PATTERN\n from litestar.enums import HttpMethod\n+from litestar.exceptions import ImproperlyConfiguredException\n from litestar.openapi.spec import Operation, PathItem\n from litestar.utils.helpers import unwrap_partial\n \n@@ -16,7 +18,7 @@\n from litestar.handlers.http_handlers import HTTPRouteHandler\n from litestar.routes import HTTPRoute\n \n-__all__ = (\"create_path_item_for_route\",)\n+__all__ = (\"create_path_item_for_route\", \"merge_path_item_operations\")\n \n \n class PathItemFactory:\n@@ -135,3 +137,32 @@\n \"\"\"\n path_item_factory = PathItemFactory(openapi_context, route)\n return path_item_factory.create_path_item()\n+\n+\n+def merge_path_item_operations(source: PathItem, other: PathItem, for_path: str) -> PathItem:\n+ \"\"\"Merge operations from path items, creating a new path item that includes\n+ operations from both.\n+ \"\"\"\n+ attrs_to_merge = {\"get\", \"put\", \"post\", \"delete\", \"options\", \"head\", \"patch\", \"trace\"}\n+ fields = {f.name for f in dataclasses.fields(PathItem)} - attrs_to_merge\n+ if any(getattr(source, attr) and getattr(other, attr) for attr in attrs_to_merge):\n+ raise ValueError(\"Cannot merge operation for PathItem if operation is set on both items\")\n+\n+ if differing_values := [\n+ (value_a, value_b) for attr in fields if (value_a := getattr(source, attr)) != (value_b := getattr(other, attr))\n+ ]:\n+ raise ImproperlyConfiguredException(\n+ f\"Conflicting OpenAPI path configuration for {for_path!r}. 
\"\n+ f\"{', '.join(f'{a} != {b}' for a, b in differing_values)}\"\n+ )\n+\n+ return dataclasses.replace(\n+ source,\n+ get=source.get or other.get,\n+ post=source.post or other.post,\n+ patch=source.patch or other.patch,\n+ put=source.put or other.put,\n+ delete=source.delete or other.delete,\n+ options=source.options or other.options,\n+ trace=source.trace or other.trace,\n+ )\ndiff --git a/litestar/_openapi/plugin.py b/litestar/_openapi/plugin.py\n--- a/litestar/_openapi/plugin.py\n+++ b/litestar/_openapi/plugin.py\n@@ -3,7 +3,7 @@\n from typing import TYPE_CHECKING\n \n from litestar._openapi.datastructures import OpenAPIContext\n-from litestar._openapi.path_item import create_path_item_for_route\n+from litestar._openapi.path_item import create_path_item_for_route, merge_path_item_operations\n from litestar.exceptions import ImproperlyConfiguredException\n from litestar.plugins import InitPluginProtocol\n from litestar.plugins.base import ReceiveRoutePlugin\n@@ -13,7 +13,7 @@\n from litestar.app import Litestar\n from litestar.config.app import AppConfig\n from litestar.openapi.config import OpenAPIConfig\n- from litestar.openapi.spec import OpenAPI\n+ from litestar.openapi.spec import OpenAPI, PathItem\n from litestar.routes import BaseRoute\n \n \n@@ -41,10 +41,15 @@\n \n openapi = openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=openapi_config, plugins=self.app.plugins.openapi)\n- openapi.paths = {\n- route.path_format or \"/\": create_path_item_for_route(context, route)\n- for route in self.included_routes.values()\n- }\n+ path_items: dict[str, PathItem] = {}\n+ for route in self.included_routes.values():\n+ path = route.path_format or \"/\"\n+ path_item = create_path_item_for_route(context, route)\n+ if existing_path_item := path_items.get(path):\n+ path_item = merge_path_item_operations(existing_path_item, path_item, for_path=path)\n+ path_items[path] = path_item\n+\n+ openapi.paths = path_items\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n", "issue": "Bug: route not recognized in schemas for specific types\n### Description\r\n\r\nCurrently some parameters types don't show in the docs. It seems `int` works fine, while `str` and `uuid` don't.\r\nThis is silent, no logs are shown, running `debug` mode. 
Same behavior for both `/schema/elements` and `/schema/swagger`.\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\n(Uncomment one at a time)\r\n\r\n\r\nclass BugRoutes(Controller):\r\n tags = [\"Bugs\"]\r\n path = \"/bugs\"\r\n dependencies = dict(context=Provide(route_context))\r\n\r\n @routes.get()\r\n def get_bugs(self, context: AppContext) -> Response:\r\n return Response({})\r\n\r\n @routes.post()\r\n def create_bug(self, context: AppContext, data: Any) -> Response:\r\n return Response({})\r\n\r\n # This works\r\n # @routes.get(\"/{param:int}\")\r\n # def get_bug(self, context: AppContext, param: int) -> Response:\r\n # return Response({})\r\n\r\n # This doesn't work (not showing on docs)\r\n # @routes.get(\"/{param:str}\")\r\n # def get_bug_str(self, context: AppContext, param: str) -> Response:\r\n # return Response({})\r\n\r\n # This doesn't work (not showing on docs)\r\n # @routes.get(\"/{param:uuid}\")\r\n # def get_bug_uuid(self, context: AppContext, param: UUID) -> Response:\r\n # return Response({})\r\n\r\n @routes.patch(\"/{param:int}\")\r\n def update_bug(self, context: AppContext, param: int) -> Response:\r\n return Response({})\r\n\r\n @routes.delete(\"/{param:int}\")\r\n def delete_bug(self, context: AppContext, param: int) -> None:\r\n return Response({})\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n_No response_\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\n2.3.2\r\n\r\n### Platform\r\n\r\n- [X] Mac\r\n- [ ] Linux\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2700\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2700/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2700/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom inspect import cleandoc\nfrom typing import TYPE_CHECKING\n\nfrom litestar._openapi.parameters import create_parameters_for_handler\nfrom litestar._openapi.request_body import create_request_body\nfrom litestar._openapi.responses import create_responses_for_handler\nfrom litestar._openapi.utils import SEPARATORS_CLEANUP_PATTERN\nfrom litestar.enums import HttpMethod\nfrom litestar.openapi.spec import Operation, PathItem\nfrom litestar.utils.helpers import unwrap_partial\n\nif TYPE_CHECKING:\n from litestar._openapi.datastructures import OpenAPIContext\n from litestar.handlers.http_handlers import HTTPRouteHandler\n from litestar.routes import HTTPRoute\n\n__all__ = 
(\"create_path_item_for_route\",)\n\n\nclass PathItemFactory:\n \"\"\"Factory for creating a PathItem instance for a given route.\"\"\"\n\n def __init__(self, openapi_context: OpenAPIContext, route: HTTPRoute) -> None:\n self.context = openapi_context\n self.route = route\n self._path_item = PathItem()\n\n def create_path_item(self) -> PathItem:\n \"\"\"Create a PathItem for the given route parsing all http_methods into Operation Models.\n\n Returns:\n A PathItem instance.\n \"\"\"\n for http_method, handler_tuple in self.route.route_handler_map.items():\n route_handler, _ = handler_tuple\n\n if not route_handler.resolve_include_in_schema():\n continue\n\n operation = self.create_operation_for_handler_method(route_handler, HttpMethod(http_method))\n\n setattr(self._path_item, http_method.lower(), operation)\n\n return self._path_item\n\n def create_operation_for_handler_method(\n self, route_handler: HTTPRouteHandler, http_method: HttpMethod\n ) -> Operation:\n \"\"\"Create an Operation instance for a given route handler and http method.\n\n Args:\n route_handler: A route handler instance.\n http_method: An HttpMethod enum value.\n\n Returns:\n An Operation instance.\n \"\"\"\n operation_id = self.create_operation_id(route_handler, http_method)\n parameters = create_parameters_for_handler(self.context, route_handler, self.route.path_parameters)\n signature_fields = route_handler.parsed_fn_signature.parameters\n\n request_body = None\n if data_field := signature_fields.get(\"data\"):\n request_body = create_request_body(\n self.context, route_handler.handler_id, route_handler.resolve_data_dto(), data_field\n )\n\n raises_validation_error = bool(data_field or self._path_item.parameters or parameters)\n responses = create_responses_for_handler(\n self.context, route_handler, raises_validation_error=raises_validation_error\n )\n\n return route_handler.operation_class(\n operation_id=operation_id,\n tags=route_handler.resolve_tags() or None,\n summary=route_handler.summary or SEPARATORS_CLEANUP_PATTERN.sub(\"\", route_handler.handler_name.title()),\n description=self.create_description_for_handler(route_handler),\n deprecated=route_handler.deprecated,\n responses=responses,\n request_body=request_body,\n parameters=parameters or None, # type: ignore[arg-type]\n security=route_handler.resolve_security() or None,\n )\n\n def create_operation_id(self, route_handler: HTTPRouteHandler, http_method: HttpMethod) -> str:\n \"\"\"Create an operation id for a given route handler and http method.\n\n Adds the operation id to the context's operation id set, where it is checked for uniqueness.\n\n Args:\n route_handler: A route handler instance.\n http_method: An HttpMethod enum value.\n\n Returns:\n An operation id string.\n \"\"\"\n if isinstance(route_handler.operation_id, str):\n operation_id = route_handler.operation_id\n elif callable(route_handler.operation_id):\n operation_id = route_handler.operation_id(route_handler, http_method, self.route.path_components)\n else:\n operation_id = self.context.openapi_config.operation_id_creator(\n route_handler, http_method, self.route.path_components\n )\n self.context.add_operation_id(operation_id)\n return operation_id\n\n def create_description_for_handler(self, route_handler: HTTPRouteHandler) -> str | None:\n \"\"\"Produce the operation description for a route handler.\n\n Args:\n route_handler: A route handler instance.\n\n Returns:\n An optional description string\n \"\"\"\n handler_description = route_handler.description\n if handler_description is None 
and self.context.openapi_config.use_handler_docstrings:\n fn = unwrap_partial(route_handler.fn)\n return cleandoc(fn.__doc__) if fn.__doc__ else None\n return handler_description\n\n\ndef create_path_item_for_route(openapi_context: OpenAPIContext, route: HTTPRoute) -> PathItem:\n \"\"\"Create a PathItem for the given route parsing all http_methods into Operation Models.\n\n Args:\n openapi_context: The OpenAPIContext instance.\n route: The route to create a PathItem for.\n\n Returns:\n A PathItem instance.\n \"\"\"\n path_item_factory = PathItemFactory(openapi_context, route)\n return path_item_factory.create_path_item()\n", "path": "litestar/_openapi/path_item.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom litestar._openapi.datastructures import OpenAPIContext\nfrom litestar._openapi.path_item import create_path_item_for_route\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.plugins import InitPluginProtocol\nfrom litestar.plugins.base import ReceiveRoutePlugin\nfrom litestar.routes import HTTPRoute\n\nif TYPE_CHECKING:\n from litestar.app import Litestar\n from litestar.config.app import AppConfig\n from litestar.openapi.config import OpenAPIConfig\n from litestar.openapi.spec import OpenAPI\n from litestar.routes import BaseRoute\n\n\nclass OpenAPIPlugin(InitPluginProtocol, ReceiveRoutePlugin):\n __slots__ = (\n \"app\",\n \"included_routes\",\n \"_openapi_config\",\n \"_openapi_schema\",\n )\n\n def __init__(self, app: Litestar) -> None:\n self.app = app\n self.included_routes: dict[str, HTTPRoute] = {}\n self._openapi_config: OpenAPIConfig | None = None\n self._openapi_schema: OpenAPI | None = None\n\n def _build_openapi_schema(self) -> OpenAPI:\n openapi_config = self.openapi_config\n\n if openapi_config.create_examples:\n from litestar._openapi.schema_generation.examples import ExampleFactory\n\n ExampleFactory.seed_random(openapi_config.random_seed)\n\n openapi = openapi_config.to_openapi_schema()\n context = OpenAPIContext(openapi_config=openapi_config, plugins=self.app.plugins.openapi)\n openapi.paths = {\n route.path_format or \"/\": create_path_item_for_route(context, route)\n for route in self.included_routes.values()\n }\n openapi.components.schemas = context.schema_registry.generate_components_schemas()\n return openapi\n\n def provide_openapi(self) -> OpenAPI:\n if not self._openapi_schema:\n self._openapi_schema = self._build_openapi_schema()\n return self._openapi_schema\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n if app_config.openapi_config:\n self._openapi_config = app_config.openapi_config\n app_config.route_handlers.append(self.openapi_config.openapi_controller)\n return app_config\n\n @property\n def openapi_config(self) -> OpenAPIConfig:\n if not self._openapi_config:\n raise ImproperlyConfiguredException(\"OpenAPIConfig not initialized\")\n return self._openapi_config\n\n def receive_route(self, route: BaseRoute) -> None:\n if not isinstance(route, HTTPRoute):\n return\n\n if any(route_handler.resolve_include_in_schema() for route_handler, _ in route.route_handler_map.values()):\n # Force recompute the schema if a new route is added\n self._openapi_schema = None\n self.included_routes[route.path] = route\n", "path": "litestar/_openapi/plugin.py"}]} | 3,426 | 1,022 |
gh_patches_debug_21633 | rasdani/github-patches | git_diff | PyGithub__PyGithub-2439 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
v1.58.0 TypeError: create_jwt() got an unexpected keyword argument 'expiration'
The `expiration` keyword argument was removed in v1.58.0. The interface defined in GithubIntegration.pyi is no longer accurate.
</issue>
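A minimal reproduction — the app ID and key below are placeholders, and the constructor signature matches the file shown underneath:

```python
from github import GithubIntegration

APP_ID = 123456                        # placeholder GitHub App id
PRIVATE_KEY = open("app.pem").read()   # placeholder PEM-encoded private key

integration = GithubIntegration(APP_ID, PRIVATE_KEY)
integration.create_jwt(expiration=120)
# Accepted by PyGithub 1.57.x; on 1.58.0 this raises:
# TypeError: create_jwt() got an unexpected keyword argument 'expiration'
```

The patch below restores the keyword as an optional argument, validated against the `MIN_JWT_EXPIRY`/`MAX_JWT_EXPIRY` bounds.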
<code>
[start of github/GithubIntegration.py]
1 import time
2
3 import deprecated
4 import jwt
5
6 from github import Consts
7 from github.GithubException import GithubException
8 from github.Installation import Installation
9 from github.InstallationAuthorization import InstallationAuthorization
10 from github.PaginatedList import PaginatedList
11 from github.Requester import Requester
12
13
14 class GithubIntegration:
15 """
16 Main class to obtain tokens for a GitHub integration.
17 """
18
19 def __init__(
20 self,
21 integration_id,
22 private_key,
23 base_url=Consts.DEFAULT_BASE_URL,
24 jwt_expiry=Consts.DEFAULT_JWT_EXPIRY,
25 jwt_issued_at=Consts.DEFAULT_JWT_ISSUED_AT,
26 ):
27 """
28 :param integration_id: int
29 :param private_key: string
30 :param base_url: string
31 :param jwt_expiry: int. Expiry of the JWT used to get the information about this integration.
32 The default expiration is in 5 minutes and is capped at 10 minutes according to GitHub documentation
33 https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#generating-a-json-web-token-jwt
34 :param jwt_issued_at: int. Number of seconds, relative to now, to set for the "iat" (issued at) parameter.
35 The default value is -60 to protect against clock drift
36 """
37 assert isinstance(integration_id, (int, str)), integration_id
38 assert isinstance(private_key, str), "supplied private key should be a string"
39 assert isinstance(base_url, str), base_url
40 assert isinstance(jwt_expiry, int), jwt_expiry
41 assert Consts.MIN_JWT_EXPIRY <= jwt_expiry <= Consts.MAX_JWT_EXPIRY, jwt_expiry
42 assert isinstance(jwt_issued_at, int)
43
44 self.base_url = base_url
45 self.integration_id = integration_id
46 self.private_key = private_key
47 self.jwt_expiry = jwt_expiry
48 self.jwt_issued_at = jwt_issued_at
49 self.__requester = Requester(
50 login_or_token=None,
51 password=None,
52 jwt=self.create_jwt(),
53 app_auth=None,
54 base_url=self.base_url,
55 timeout=Consts.DEFAULT_TIMEOUT,
56 user_agent="PyGithub/Python",
57 per_page=Consts.DEFAULT_PER_PAGE,
58 verify=True,
59 retry=None,
60 pool_size=None,
61 )
62
63 def _get_headers(self):
64 """
65 Get headers for the requests.
66
67 :return: dict
68 """
69 return {
70 "Authorization": f"Bearer {self.create_jwt()}",
71 "Accept": Consts.mediaTypeIntegrationPreview,
72 "User-Agent": "PyGithub/Python",
73 }
74
75 def _get_installed_app(self, url):
76 """
77 Get installation for the given URL.
78
79 :param url: str
80 :rtype: :class:`github.Installation.Installation`
81 """
82 headers, response = self.__requester.requestJsonAndCheck(
83 "GET", url, headers=self._get_headers()
84 )
85
86 return Installation(
87 requester=self.__requester,
88 headers=headers,
89 attributes=response,
90 completed=True,
91 )
92
93 def create_jwt(self):
94 """
95 Create a signed JWT
96 https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app
97
98 :return string:
99 """
100 now = int(time.time())
101 payload = {
102 "iat": now + self.jwt_issued_at,
103 "exp": now + self.jwt_expiry,
104 "iss": self.integration_id,
105 }
106 encrypted = jwt.encode(payload, key=self.private_key, algorithm="RS256")
107
108 if isinstance(encrypted, bytes):
109 encrypted = encrypted.decode("utf-8")
110
111 return encrypted
112
113 def get_access_token(self, installation_id, permissions=None):
114 """
115 :calls: `POST /app/installations/{installation_id}/access_tokens <https://docs.github.com/en/rest/apps/apps#create-an-installation-access-token-for-an-app>`
116 :param installation_id: int
117 :param permissions: dict
118 :return: :class:`github.InstallationAuthorization.InstallationAuthorization`
119 """
120 if permissions is None:
121 permissions = {}
122
123 if not isinstance(permissions, dict):
124 raise GithubException(
125 status=400, data={"message": "Invalid permissions"}, headers=None
126 )
127
128 body = {"permissions": permissions}
129 headers, response = self.__requester.requestJsonAndCheck(
130 "POST",
131 f"/app/installations/{installation_id}/access_tokens",
132 input=body,
133 )
134
135 return InstallationAuthorization(
136 requester=self.__requester,
137 headers=headers,
138 attributes=response,
139 completed=True,
140 )
141
142 @deprecated.deprecated("Use get_repo_installation")
143 def get_installation(self, owner, repo):
144 """
145 Deprecated by get_repo_installation
146
147 :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`
148 :param owner: str
149 :param repo: str
150 :rtype: :class:`github.Installation.Installation`
151 """
152 return self._get_installed_app(url=f"/repos/{owner}/{repo}/installation")
153
154 def get_installations(self):
155 """
156 :calls: GET /app/installations <https://docs.github.com/en/rest/reference/apps#list-installations-for-the-authenticated-app>
157 :rtype: :class:`github.PaginatedList.PaginatedList[github.Installation.Installation]`
158 """
159 return PaginatedList(
160 contentClass=Installation,
161 requester=self.__requester,
162 firstUrl="/app/installations",
163 firstParams=None,
164 headers=self._get_headers(),
165 list_item="installations",
166 )
167
168 def get_org_installation(self, org):
169 """
170 :calls: `GET /orgs/{org}/installation <https://docs.github.com/en/rest/apps/apps#get-an-organization-installation-for-the-authenticated-app>`
171 :param org: str
172 :rtype: :class:`github.Installation.Installation`
173 """
174 return self._get_installed_app(url=f"/orgs/{org}/installation")
175
176 def get_repo_installation(self, owner, repo):
177 """
178 :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`
179 :param owner: str
180 :param repo: str
181 :rtype: :class:`github.Installation.Installation`
182 """
183 return self._get_installed_app(url=f"/repos/{owner}/{repo}/installation")
184
185 def get_user_installation(self, username):
186 """
187 :calls: `GET /users/{username}/installation <https://docs.github.com/en/rest/apps/apps#get-a-user-installation-for-the-authenticated-app>`
188 :param username: str
189 :rtype: :class:`github.Installation.Installation`
190 """
191 return self._get_installed_app(url=f"/users/{username}/installation")
192
193 def get_app_installation(self, installation_id):
194 """
195 :calls: `GET /app/installations/{installation_id} <https://docs.github.com/en/rest/apps/apps#get-an-installation-for-the-authenticated-app>`
196 :param installation_id: int
197 :rtype: :class:`github.Installation.Installation`
198 """
199 return self._get_installed_app(url=f"/app/installations/{installation_id}")
200
[end of github/GithubIntegration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/github/GithubIntegration.py b/github/GithubIntegration.py
--- a/github/GithubIntegration.py
+++ b/github/GithubIntegration.py
@@ -90,17 +90,23 @@
completed=True,
)
- def create_jwt(self):
+ def create_jwt(self, expiration=None):
"""
Create a signed JWT
https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app
:return string:
"""
+ if expiration is not None:
+ assert isinstance(expiration, int), expiration
+ assert (
+ Consts.MIN_JWT_EXPIRY <= expiration <= Consts.MAX_JWT_EXPIRY
+ ), expiration
+
now = int(time.time())
payload = {
"iat": now + self.jwt_issued_at,
- "exp": now + self.jwt_expiry,
+ "exp": now + (expiration if expiration is not None else self.jwt_expiry),
"iss": self.integration_id,
}
encrypted = jwt.encode(payload, key=self.private_key, algorithm="RS256")
| {"golden_diff": "diff --git a/github/GithubIntegration.py b/github/GithubIntegration.py\n--- a/github/GithubIntegration.py\n+++ b/github/GithubIntegration.py\n@@ -90,17 +90,23 @@\n completed=True,\n )\n \n- def create_jwt(self):\n+ def create_jwt(self, expiration=None):\n \"\"\"\n Create a signed JWT\n https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app\n \n :return string:\n \"\"\"\n+ if expiration is not None:\n+ assert isinstance(expiration, int), expiration\n+ assert (\n+ Consts.MIN_JWT_EXPIRY <= expiration <= Consts.MAX_JWT_EXPIRY\n+ ), expiration\n+\n now = int(time.time())\n payload = {\n \"iat\": now + self.jwt_issued_at,\n- \"exp\": now + self.jwt_expiry,\n+ \"exp\": now + (expiration if expiration is not None else self.jwt_expiry),\n \"iss\": self.integration_id,\n }\n encrypted = jwt.encode(payload, key=self.private_key, algorithm=\"RS256\")\n", "issue": "v1.58.0 TypeError: create_jwt() got an unexpected keyword argument 'expiration'\nThe `expiration` keyword argument was removed in v1.58.0. The interface defined in GithubIntegration.pyi is no longer accurate. \n", "before_files": [{"content": "import time\n\nimport deprecated\nimport jwt\n\nfrom github import Consts\nfrom github.GithubException import GithubException\nfrom github.Installation import Installation\nfrom github.InstallationAuthorization import InstallationAuthorization\nfrom github.PaginatedList import PaginatedList\nfrom github.Requester import Requester\n\n\nclass GithubIntegration:\n \"\"\"\n Main class to obtain tokens for a GitHub integration.\n \"\"\"\n\n def __init__(\n self,\n integration_id,\n private_key,\n base_url=Consts.DEFAULT_BASE_URL,\n jwt_expiry=Consts.DEFAULT_JWT_EXPIRY,\n jwt_issued_at=Consts.DEFAULT_JWT_ISSUED_AT,\n ):\n \"\"\"\n :param integration_id: int\n :param private_key: string\n :param base_url: string\n :param jwt_expiry: int. Expiry of the JWT used to get the information about this integration.\n The default expiration is in 5 minutes and is capped at 10 minutes according to GitHub documentation\n https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#generating-a-json-web-token-jwt\n :param jwt_issued_at: int. 
Number of seconds, relative to now, to set for the \"iat\" (issued at) parameter.\n The default value is -60 to protect against clock drift\n \"\"\"\n assert isinstance(integration_id, (int, str)), integration_id\n assert isinstance(private_key, str), \"supplied private key should be a string\"\n assert isinstance(base_url, str), base_url\n assert isinstance(jwt_expiry, int), jwt_expiry\n assert Consts.MIN_JWT_EXPIRY <= jwt_expiry <= Consts.MAX_JWT_EXPIRY, jwt_expiry\n assert isinstance(jwt_issued_at, int)\n\n self.base_url = base_url\n self.integration_id = integration_id\n self.private_key = private_key\n self.jwt_expiry = jwt_expiry\n self.jwt_issued_at = jwt_issued_at\n self.__requester = Requester(\n login_or_token=None,\n password=None,\n jwt=self.create_jwt(),\n app_auth=None,\n base_url=self.base_url,\n timeout=Consts.DEFAULT_TIMEOUT,\n user_agent=\"PyGithub/Python\",\n per_page=Consts.DEFAULT_PER_PAGE,\n verify=True,\n retry=None,\n pool_size=None,\n )\n\n def _get_headers(self):\n \"\"\"\n Get headers for the requests.\n\n :return: dict\n \"\"\"\n return {\n \"Authorization\": f\"Bearer {self.create_jwt()}\",\n \"Accept\": Consts.mediaTypeIntegrationPreview,\n \"User-Agent\": \"PyGithub/Python\",\n }\n\n def _get_installed_app(self, url):\n \"\"\"\n Get installation for the given URL.\n\n :param url: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n headers, response = self.__requester.requestJsonAndCheck(\n \"GET\", url, headers=self._get_headers()\n )\n\n return Installation(\n requester=self.__requester,\n headers=headers,\n attributes=response,\n completed=True,\n )\n\n def create_jwt(self):\n \"\"\"\n Create a signed JWT\n https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app\n\n :return string:\n \"\"\"\n now = int(time.time())\n payload = {\n \"iat\": now + self.jwt_issued_at,\n \"exp\": now + self.jwt_expiry,\n \"iss\": self.integration_id,\n }\n encrypted = jwt.encode(payload, key=self.private_key, algorithm=\"RS256\")\n\n if isinstance(encrypted, bytes):\n encrypted = encrypted.decode(\"utf-8\")\n\n return encrypted\n\n def get_access_token(self, installation_id, permissions=None):\n \"\"\"\n :calls: `POST /app/installations/{installation_id}/access_tokens <https://docs.github.com/en/rest/apps/apps#create-an-installation-access-token-for-an-app>`\n :param installation_id: int\n :param permissions: dict\n :return: :class:`github.InstallationAuthorization.InstallationAuthorization`\n \"\"\"\n if permissions is None:\n permissions = {}\n\n if not isinstance(permissions, dict):\n raise GithubException(\n status=400, data={\"message\": \"Invalid permissions\"}, headers=None\n )\n\n body = {\"permissions\": permissions}\n headers, response = self.__requester.requestJsonAndCheck(\n \"POST\",\n f\"/app/installations/{installation_id}/access_tokens\",\n input=body,\n )\n\n return InstallationAuthorization(\n requester=self.__requester,\n headers=headers,\n attributes=response,\n completed=True,\n )\n\n @deprecated.deprecated(\"Use get_repo_installation\")\n def get_installation(self, owner, repo):\n \"\"\"\n Deprecated by get_repo_installation\n\n :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`\n :param owner: str\n :param repo: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/repos/{owner}/{repo}/installation\")\n\n def 
get_installations(self):\n \"\"\"\n :calls: GET /app/installations <https://docs.github.com/en/rest/reference/apps#list-installations-for-the-authenticated-app>\n :rtype: :class:`github.PaginatedList.PaginatedList[github.Installation.Installation]`\n \"\"\"\n return PaginatedList(\n contentClass=Installation,\n requester=self.__requester,\n firstUrl=\"/app/installations\",\n firstParams=None,\n headers=self._get_headers(),\n list_item=\"installations\",\n )\n\n def get_org_installation(self, org):\n \"\"\"\n :calls: `GET /orgs/{org}/installation <https://docs.github.com/en/rest/apps/apps#get-an-organization-installation-for-the-authenticated-app>`\n :param org: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/orgs/{org}/installation\")\n\n def get_repo_installation(self, owner, repo):\n \"\"\"\n :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`\n :param owner: str\n :param repo: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/repos/{owner}/{repo}/installation\")\n\n def get_user_installation(self, username):\n \"\"\"\n :calls: `GET /users/{username}/installation <https://docs.github.com/en/rest/apps/apps#get-a-user-installation-for-the-authenticated-app>`\n :param username: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/users/{username}/installation\")\n\n def get_app_installation(self, installation_id):\n \"\"\"\n :calls: `GET /app/installations/{installation_id} <https://docs.github.com/en/rest/apps/apps#get-an-installation-for-the-authenticated-app>`\n :param installation_id: int\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/app/installations/{installation_id}\")\n", "path": "github/GithubIntegration.py"}]} | 2,698 | 250 |
gh_patches_debug_8845 | rasdani/github-patches | git_diff | safe-global__safe-config-service-14 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include provider info in the serialized response of `GET /safe-apps/`
The `/safe-apps` endpoint should include data about the provider, if any.
</issue>
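For context, a plausible serialized item once a provider is attached — every value here is invented, and the field names follow the serializer below plus the fix:

```python
# Hypothetical /safe-apps/ response item after the change; values are made up.
expected_item = {
    "url": "https://apps.example.org/compound",
    "name": "Compound",
    "icon_url": "https://apps.example.org/compound/icon.svg",
    "description": "Lend and borrow crypto assets",
    "networks": [1, 4],
    "provider": {"url": "https://provider.example.org", "name": "Example Provider"},
}
```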
<code>
[start of src/safe_apps/serializers.py]
1 from rest_framework import serializers
2
3 from .models import SafeApp
4
5
6 class SafeAppsResponseSerializer(serializers.ModelSerializer):
7 class Meta:
8 model = SafeApp
9 fields = ['url', 'name', 'icon_url', 'description', 'networks']
10
[end of src/safe_apps/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/safe_apps/serializers.py b/src/safe_apps/serializers.py
--- a/src/safe_apps/serializers.py
+++ b/src/safe_apps/serializers.py
@@ -1,9 +1,17 @@
from rest_framework import serializers
-from .models import SafeApp
+from .models import SafeApp, Provider
+
+
+class ProviderSerializer(serializers.ModelSerializer):
+ class Meta:
+ model = Provider
+ fields = ['url', 'name']
class SafeAppsResponseSerializer(serializers.ModelSerializer):
+ provider = ProviderSerializer()
+
class Meta:
model = SafeApp
- fields = ['url', 'name', 'icon_url', 'description', 'networks']
+ fields = ['url', 'name', 'icon_url', 'description', 'networks', 'provider']
| {"golden_diff": "diff --git a/src/safe_apps/serializers.py b/src/safe_apps/serializers.py\n--- a/src/safe_apps/serializers.py\n+++ b/src/safe_apps/serializers.py\n@@ -1,9 +1,17 @@\n from rest_framework import serializers\n \n-from .models import SafeApp\n+from .models import SafeApp, Provider\n+\n+\n+class ProviderSerializer(serializers.ModelSerializer):\n+ class Meta:\n+ model = Provider\n+ fields = ['url', 'name']\n \n \n class SafeAppsResponseSerializer(serializers.ModelSerializer):\n+ provider = ProviderSerializer()\n+\n class Meta:\n model = SafeApp\n- fields = ['url', 'name', 'icon_url', 'description', 'networks']\n+ fields = ['url', 'name', 'icon_url', 'description', 'networks', 'provider']\n", "issue": "Include provider info in the serialized response of `GET /safe-apps/`\nThe `/safe-apps` endpoint should include data about the provider if any\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom .models import SafeApp\n\n\nclass SafeAppsResponseSerializer(serializers.ModelSerializer):\n class Meta:\n model = SafeApp\n fields = ['url', 'name', 'icon_url', 'description', 'networks']\n", "path": "src/safe_apps/serializers.py"}]} | 638 | 180 |
gh_patches_debug_29629 | rasdani/github-patches | git_diff | aio-libs__aiohttp-4556 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GET Requests to link-local IPv6 addresses don't work on Python 3.7+
🐞 **Describe the bug**
The aiohttp resolver loses information related to link-local IPv6 addresses on Python 3.7+ due to a change in the representation returned by `socket.getaddrinfo()`.
💡 **To Reproduce**
Try to get a URL like `http://[fe80::1%eth0]:8080/`; it will result in an OSError (Invalid argument) exception.
This seems to be due to the way that scope ids are handled in [resolver.py](https://github.com/aio-libs/aiohttp/blob/72c2acd4850b1cbc638b413a7c28d96882b4d7e8/aiohttp/resolver.py#L31-L37):
Run `socket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]` on python 3.6:
```python
>>> socket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]
('fe80::1%eth0', 8080, 0, 4)
```
Run it on python 3.7:
```python
>>> socket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]
('fe80::1', 8080, 0, 4)
```
The `address` element of the tuple no longer includes the textual representation of the scope id; it is only contained in the matching `scope_id` element of the tuple, and that information is then lost when `_loop.create_connection()` is later called.
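For reference, a minimal sketch of rebuilding the textual zone from the `scope_id` that `getaddrinfo()` still returns on Python 3.7+ (it assumes an interface named `eth0` exists on the machine):

```python
import socket

infos = socket.getaddrinfo('fe80::1%eth0', 8080,
                           family=socket.AF_INET6, proto=socket.IPPROTO_TCP)
family, _, proto, _, sockaddr = infos[0]
host, port, flowinfo, scope_id = sockaddr   # host is plain 'fe80::1' on 3.7+
if scope_id:
    # getnameinfo() re-attaches the zone index, e.g. 'fe80::1%eth0'
    host, _ = socket.getnameinfo(
        sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV)
print(host, port)
```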
💡 **Expected behavior**
The URL is successfully retrieved for link-local IPv6 addresses.
📋 **Logs/tracebacks**
```python-traceback (paste your traceback in the next line)
N/A
```
📋 **Your version of the Python**
```console
$ python3 --version
Python 3.6.6
$ python3.7 --version
Python 3.7.5
```
📋 **Your version of the aiohttp/yarl/multidict distributions**
```console
$ python -m pip show aiohttp
Name: aiohttp
Version: 3.6.2
```
```console
$ python -m pip show multidict
Name: multidict
Version: 4.7.4
```
```console
$ python -m pip show yarl
Name: yarl
Version: 1.4.2
```
📋 **Additional context**
OS: Centos7 Linux
Proxy Server: No
Related to: client
</issue>
<code>
[start of aiohttp/resolver.py]
1 import socket
2 from typing import Any, Dict, List
3
4 from .abc import AbstractResolver
5 from .helpers import get_running_loop
6
7 __all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')
8
9 try:
10 import aiodns
11
12 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
13 except ImportError: # pragma: no cover
14 aiodns = None
15
16 aiodns_default = False
17
18
19 class ThreadedResolver(AbstractResolver):
20 """Use Executor for synchronous getaddrinfo() calls, which defaults to
21 concurrent.futures.ThreadPoolExecutor.
22 """
23
24 def __init__(self) -> None:
25 self._loop = get_running_loop()
26
27 async def resolve(self, host: str, port: int=0,
28 family: int=socket.AF_INET) -> List[Dict[str, Any]]:
29 infos = await self._loop.getaddrinfo(
30 host, port, type=socket.SOCK_STREAM, family=family)
31
32 hosts = []
33 for family, _, proto, _, address in infos:
34 hosts.append(
35 {'hostname': host,
36 'host': address[0], 'port': address[1],
37 'family': family, 'proto': proto,
38 'flags': socket.AI_NUMERICHOST})
39
40 return hosts
41
42 async def close(self) -> None:
43 pass
44
45
46 class AsyncResolver(AbstractResolver):
47 """Use the `aiodns` package to make asynchronous DNS lookups"""
48
49 def __init__(self, *args: Any, **kwargs: Any) -> None:
50 if aiodns is None:
51 raise RuntimeError("Resolver requires aiodns library")
52
53 self._loop = get_running_loop()
54 self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)
55
56 async def resolve(self, host: str, port: int=0,
57 family: int=socket.AF_INET) -> List[Dict[str, Any]]:
58 try:
59 resp = await self._resolver.gethostbyname(host, family)
60 except aiodns.error.DNSError as exc:
61 msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
62 raise OSError(msg) from exc
63 hosts = []
64 for address in resp.addresses:
65 hosts.append(
66 {'hostname': host,
67 'host': address, 'port': port,
68 'family': family, 'proto': 0,
69 'flags': socket.AI_NUMERICHOST})
70
71 if not hosts:
72 raise OSError("DNS lookup failed")
73
74 return hosts
75
76 async def close(self) -> None:
77 return self._resolver.cancel()
78
79
80 DefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver
81
[end of aiohttp/resolver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -31,11 +31,23 @@
hosts = []
for family, _, proto, _, address in infos:
- hosts.append(
- {'hostname': host,
- 'host': address[0], 'port': address[1],
- 'family': family, 'proto': proto,
- 'flags': socket.AI_NUMERICHOST})
+ if family == socket.AF_INET6 and address[3]: # type: ignore
+ # This is essential for link-local IPv6 addresses.
+ # LL IPv6 is a VERY rare case. Strictly speaking, we should use
+ # getnameinfo() unconditionally, but performance makes sense.
+ host, _port = socket.getnameinfo(
+ address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV)
+ port = int(_port)
+ else:
+ host, port = address[:2]
+ hosts.append({
+ 'hostname': host,
+ 'host': host,
+ 'port': port,
+ 'family': family,
+ 'proto': proto,
+ 'flags': socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
+ })
return hosts
@@ -62,11 +74,14 @@
raise OSError(msg) from exc
hosts = []
for address in resp.addresses:
- hosts.append(
- {'hostname': host,
- 'host': address, 'port': port,
- 'family': family, 'proto': 0,
- 'flags': socket.AI_NUMERICHOST})
+ hosts.append({
+ 'hostname': host,
+ 'host': address,
+ 'port': port,
+ 'family': family,
+ 'proto': 0,
+ 'flags': socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
+ })
if not hosts:
raise OSError("DNS lookup failed")
| {"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -31,11 +31,23 @@\n \n hosts = []\n for family, _, proto, _, address in infos:\n- hosts.append(\n- {'hostname': host,\n- 'host': address[0], 'port': address[1],\n- 'family': family, 'proto': proto,\n- 'flags': socket.AI_NUMERICHOST})\n+ if family == socket.AF_INET6 and address[3]: # type: ignore\n+ # This is essential for link-local IPv6 addresses.\n+ # LL IPv6 is a VERY rare case. Strictly speaking, we should use\n+ # getnameinfo() unconditionally, but performance makes sense.\n+ host, _port = socket.getnameinfo(\n+ address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV)\n+ port = int(_port)\n+ else:\n+ host, port = address[:2]\n+ hosts.append({\n+ 'hostname': host,\n+ 'host': host,\n+ 'port': port,\n+ 'family': family,\n+ 'proto': proto,\n+ 'flags': socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n+ })\n \n return hosts\n \n@@ -62,11 +74,14 @@\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n- hosts.append(\n- {'hostname': host,\n- 'host': address, 'port': port,\n- 'family': family, 'proto': 0,\n- 'flags': socket.AI_NUMERICHOST})\n+ hosts.append({\n+ 'hostname': host,\n+ 'host': address,\n+ 'port': port,\n+ 'family': family,\n+ 'proto': 0,\n+ 'flags': socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n+ })\n \n if not hosts:\n raise OSError(\"DNS lookup failed\")\n", "issue": "GET Requests to link-local IPv6 addresses don't work on Python 3.7+\n\ud83d\udc1e **Describe the bug**\r\nThe aiohttp resolver loses information related to linklocal IPv6 addresses on Python 3.7+ due to a changes in the representation returned by `socket.getaddrinfo()`\r\n\r\n\ud83d\udca1 **To Reproduce**\r\nTry to get an URL like `http://[fe80::1%eth0]:8080/`, it will result in an OSError (Invalid argument) exception.\r\n\r\nThis seems to be due to the way that scopeid's are handled in [resolver.py](https://github.com/aio-libs/aiohttp/blob/72c2acd4850b1cbc638b413a7c28d96882b4d7e8/aiohttp/resolver.py#L31-L37):\r\n\r\nRun `socket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]` on python 3.6:\r\n```python\r\nsocket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]\r\n>>> socket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]\r\n('fe80::1%eth0', 8080, 0, 4)\r\n```\r\n\r\nRun it on python 3.7:\r\n```python\r\n>>> socket.getaddrinfo('fe80::1%eth0', 8080, family=socket.AF_INET6, proto=socket.IPPROTO_TCP)[0][4]\r\n('fe80::1', 8080, 0, 4)y\r\n```\r\n\r\nThe `address` element of the tuple no longer includes the textual representation of the scope id, it's only contained in the matching scope_id element of the tuple - which then is missing when later callings _loop.create_connection().\r\n\r\n\ud83d\udca1 **Expected behavior**\r\nThe URL is successfully retrieved for link local IPv6 addresses.\r\n\r\n\r\n\ud83d\udccb **Logs/tracebacks**\r\n```python-traceback (paste your traceback in the next line)\r\nN/A\r\n```\r\n\r\n\ud83d\udccb **Your version of the Python**\r\n```console\r\n$ python3 --version\r\nPython 3.6.6\r\n$ python3.7 --version\r\nPython 3.7.5\r\n```\r\n\r\n\ud83d\udccb **Your version of the aiohttp/yarl/multidict distributions**\r\n```console\r\n$ python -m pip show aiohttp\r\npython -m pip show aiohttp\r\nName: aiohttp\r\nVersion: 3.6.2\r\n```\r\n```console\r\n$ python -m pip show multidict\r\nName: multidict\r\nVersion: 
4.7.4\r\n```\r\n```console\r\n$ python -m pip show yarl\r\nName: yarl\r\nVersion: 1.4.2\r\n```\r\n\r\n\ud83d\udccb **Additional context**\r\nOS: Centos7 Linux\r\nProxy Server: No\r\nRelated to: client\n", "before_files": [{"content": "import socket\nfrom typing import Any, Dict, List\n\nfrom .abc import AbstractResolver\nfrom .helpers import get_running_loop\n\n__all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')\n\ntry:\n import aiodns\n\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self) -> None:\n self._loop = get_running_loop()\n\n async def resolve(self, host: str, port: int=0,\n family: int=socket.AF_INET) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family)\n\n hosts = []\n for family, _, proto, _, address in infos:\n hosts.append(\n {'hostname': host,\n 'host': address[0], 'port': address[1],\n 'family': family, 'proto': proto,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n async def close(self) -> None:\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = get_running_loop()\n self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)\n\n async def resolve(self, host: str, port: int=0,\n family: int=socket.AF_INET) -> List[Dict[str, Any]]:\n try:\n resp = await self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n async def close(self) -> None:\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]} | 1,958 | 458 |
gh_patches_debug_4277 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1086 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
send_photo file from disk doesn't seem to work on python2
### Steps to reproduce
1. https://github.com/python-telegram-bot/python-telegram-bot/wiki/Code-snippets#post-an-image-file-from-disk
2. I'm using the API and I'm getting this error:
```
'ascii' codec can't decode byte 0x89 in position 0: ordinal not in range(128)
2018-04-24 09:49:59,039 - telegram.ext.dispatcher - ERROR - An uncaught error was raised while processing the update
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\telegram\ext\dispatcher.py", line 279, in process_update
handler.handle_update(update, self)
File "C:\Python27\lib\site-packages\telegram\ext\commandhandler.py", line 173, in handle_update
return self.callback(dispatcher.bot, update, **optional_args)
File "bot_status.py", line 101, in graph_progress
bot.send_photo(chat_id, open(photo, 'rb'))
File "C:\Python27\lib\site-packages\telegram\bot.py", line 60, in decorator
result = func(self, *args, **kwargs)
File "C:\Python27\lib\site-packages\telegram\bot.py", line 85, in decorator
result = self._request.post(url, data, timeout=kwargs.get('timeout'))
File "C:\Python27\lib\site-packages\telegram\utils\request.py", line 270, in post
'POST', url, body=data.to_form(), headers=data.headers, **urlopen_kwargs)
File "C:\Python27\lib\site-packages\telegram\files\inputfile.py", line 127, in to_form
return self._parse(form)
File "C:\Python27\lib\site-packages\telegram\files\inputfile.py", line 141, in _parse
return '\r\n'.join(form)
UnicodeDecodeError: 'ascii' codec can't decode byte 0x89 in position 0: ordinal not in range(128)
```
3.
### Expected behaviour
I was supposed to get an image
### Actual behaviour
The bot raised an exception
I've tested the same code on Python 3 and it works correctly; it seems to be a Python 2-only issue.
In the `_parse` function it seems that element `form[5]` is unicode, which forces Python to treat everything as unicode, and the raw PNG content is not valid UTF-8 data.
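A minimal Python 2 reproduction of that failure mode (the boundary, filename and byte values below are made up for illustration):

```python
# -*- coding: utf-8 -*-
# Python 2 only: one unicode element promotes str.join() to unicode, so the
# raw PNG bytes are decoded with the ASCII codec and raise UnicodeDecodeError.
parts = [
    '--boundary',
    u'Content-Disposition: form-data; filename="chart.png"',  # unicode element
    '\x89PNG\r\n\x1a\n',                                       # raw image bytes
]
try:
    body = '\r\n'.join(parts)
except UnicodeDecodeError as exc:
    print('join failed: %s' % exc)
```

Keeping every element a byte string (for example by encoding the unicode filename to UTF-8 up front) avoids the promotion entirely.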
### Configuration
Windows 10 x64 1803
**Version of Python, python-telegram-bot & dependencies:**
``$ python -m telegram``
```
python-telegram-bot 10.0.2
certifi 2018.04.16
future 0.16.0
Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)]
```
</issue>
<code>
[start of telegram/files/inputfile.py]
1 #!/usr/bin/env python
2 # pylint: disable=W0622,E0611
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2018
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20 """This module contains an object that represents a Telegram InputFile."""
21
22 try:
23 # python 3
24 from email.generator import _make_boundary as choose_boundary
25 except ImportError:
26 # python 2
27 from mimetools import choose_boundary
28
29 import imghdr
30 import mimetypes
31 import os
32 import sys
33
34 from telegram import TelegramError
35
36 DEFAULT_MIME_TYPE = 'application/octet-stream'
37 USER_AGENT = 'Python Telegram Bot (https://github.com/python-telegram-bot/python-telegram-bot)'
38 FILE_TYPES = ('audio', 'document', 'photo', 'sticker', 'video', 'voice', 'certificate',
39 'video_note', 'png_sticker')
40
41
42 class InputFile(object):
43 """This object represents a Telegram InputFile.
44
45 Attributes:
46 data (:obj:`dict`): Data containing an inputfile.
47
48 Args:
49 data (:obj:`dict`): Data containing an inputfile.
50
51 Raises:
52 TelegramError
53
54 """
55
56 def __init__(self, data):
57 self.data = data
58 self.boundary = choose_boundary()
59
60 for t in FILE_TYPES:
61 if t in data:
62 self.input_name = t
63 self.input_file = data.pop(t)
64 break
65 else:
66 raise TelegramError('Unknown inputfile type')
67
68 if hasattr(self.input_file, 'read'):
69 self.filename = None
70 self.input_file_content = self.input_file.read()
71 if 'filename' in data:
72 self.filename = self.data.pop('filename')
73 elif hasattr(self.input_file, 'name'):
74 # on py2.7, pylint fails to understand this properly
75 # pylint: disable=E1101
76 self.filename = os.path.basename(self.input_file.name)
77
78 try:
79 self.mimetype = self.is_image(self.input_file_content)
80 if not self.filename or '.' not in self.filename:
81 self.filename = self.mimetype.replace('/', '.')
82 except TelegramError:
83 if self.filename:
84 self.mimetype = mimetypes.guess_type(
85 self.filename)[0] or DEFAULT_MIME_TYPE
86 else:
87 self.mimetype = DEFAULT_MIME_TYPE
88
89 @property
90 def headers(self):
91 """:obj:`dict`: Headers."""
92
93 return {'User-agent': USER_AGENT, 'Content-type': self.content_type}
94
95 @property
96 def content_type(self):
97 """:obj:`str`: Content type"""
98 return 'multipart/form-data; boundary=%s' % self.boundary
99
100 def to_form(self):
101 """Transform the inputfile to multipart/form data.
102
103 Returns:
104 :obj:`str`
105
106 """
107 form = []
108 form_boundary = '--' + self.boundary
109
110 # Add data fields
111 for name in iter(self.data):
112 value = self.data[name]
113 form.extend([
114 form_boundary, 'Content-Disposition: form-data; name="%s"' % name, '', str(value)
115 ])
116
117 # Add input_file to upload
118 form.extend([
119 form_boundary, 'Content-Disposition: form-data; name="%s"; filename="%s"' %
120 (self.input_name,
121 self.filename), 'Content-Type: %s' % self.mimetype, '', self.input_file_content
122 ])
123
124 form.append('--' + self.boundary + '--')
125 form.append('')
126
127 return self._parse(form)
128
129 @staticmethod
130 def _parse(form):
131 if sys.version_info > (3,):
132 # on Python 3 form needs to be byte encoded
133 encoded_form = []
134 for item in form:
135 try:
136 encoded_form.append(item.encode())
137 except AttributeError:
138 encoded_form.append(item)
139
140 return b'\r\n'.join(encoded_form)
141 return '\r\n'.join(form)
142
143 @staticmethod
144 def is_image(stream):
145 """Check if the content file is an image by analyzing its headers.
146
147 Args:
148 stream (:obj:`str`): A str representing the content of a file.
149
150 Returns:
151 :obj:`str`: The str mime-type of an image.
152
153 """
154 image = imghdr.what(None, stream)
155 if image:
156 return 'image/%s' % image
157
158 raise TelegramError('Could not parse file content')
159
160 @staticmethod
161 def is_inputfile(data):
162 """Check if the request is a file request.
163
164 Args:
165 data (Dict[:obj:`str`, :obj:`str`]): A dict of (str, str) key/value pairs.
166
167 Returns:
168 :obj:`bool`
169
170 """
171 if data:
172 file_type = [i for i in iter(data) if i in FILE_TYPES]
173
174 if file_type:
175 file_content = data[file_type[0]]
176
177 return hasattr(file_content, 'read')
178
179 return False
180
[end of telegram/files/inputfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/files/inputfile.py b/telegram/files/inputfile.py
--- a/telegram/files/inputfile.py
+++ b/telegram/files/inputfile.py
@@ -86,6 +86,10 @@
else:
self.mimetype = DEFAULT_MIME_TYPE
+ if sys.version_info < (3,):
+ if isinstance(self.filename, unicode): # flake8: noqa pylint: disable=E0602
+ self.filename = self.filename.encode('utf-8', 'replace')
+
@property
def headers(self):
""":obj:`dict`: Headers."""
| {"golden_diff": "diff --git a/telegram/files/inputfile.py b/telegram/files/inputfile.py\n--- a/telegram/files/inputfile.py\n+++ b/telegram/files/inputfile.py\n@@ -86,6 +86,10 @@\n else:\n self.mimetype = DEFAULT_MIME_TYPE\n \n+ if sys.version_info < (3,):\n+ if isinstance(self.filename, unicode): # flake8: noqa pylint: disable=E0602\n+ self.filename = self.filename.encode('utf-8', 'replace')\n+\n @property\n def headers(self):\n \"\"\":obj:`dict`: Headers.\"\"\"\n", "issue": "send_photo file from disk doesn't seem to work on python2\n### Steps to reproduce\r\n1. https://github.com/python-telegram-bot/python-telegram-bot/wiki/Code-snippets#post-an-image-file-from-disk\r\n\r\n2. I'm using the API and I'm getting this error:\r\n```\r\n'ascii' codec can't decode byte 0x89 in position 0: ordinal not in range(128)\r\n2018-04-24 09:49:59,039 - telegram.ext.dispatcher - ERROR - An uncaught error was raised while processing the update\r\nTraceback (most recent call last):\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\ext\\dispatcher.py\", line 279, in process_update\r\n handler.handle_update(update, self)\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\ext\\commandhandler.py\", line 173, in handle_update\r\n return self.callback(dispatcher.bot, update, **optional_args)\r\n File \"bot_status.py\", line 101, in graph_progress\r\n bot.send_photo(chat_id, open(photo, 'rb'))\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\bot.py\", line 60, in decorator\r\n result = func(self, *args, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\bot.py\", line 85, in decorator\r\n result = self._request.post(url, data, timeout=kwargs.get('timeout'))\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\utils\\request.py\", line 270, in post\r\n 'POST', url, body=data.to_form(), headers=data.headers, **urlopen_kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\files\\inputfile.py\", line 127, in to_form\r\n return self._parse(form)\r\n File \"C:\\Python27\\lib\\site-packages\\telegram\\files\\inputfile.py\", line 141, in _parse\r\n return '\\r\\n'.join(form)\r\nUnicodeDecodeError: 'ascii' codec can't decode byte 0x89 in position 0: ordinal not in range(128)\r\n```\r\n\r\n3.\r\n\r\n### Expected behaviour\r\nI was supposed to get an image\r\n\r\n### Actual behaviour\r\nThe bot raised an exception\r\n\r\nI've tested the same code on python3 and it works correctly, it seems to be a python2 only issue.\r\nIn the _parse function it seems that element form[5] is unicode which forces python to treat everything as unicode and PNG is not a valid utf8 data.\r\n\r\n### Configuration\r\nWindows 10 x64 1803\r\n\r\n\r\n**Version of Python, python-telegram-bot & dependencies:**\r\n\r\n``$ python -m telegram``\r\n```\r\npython-telegram-bot 10.0.2\r\ncertifi 2018.04.16\r\nfuture 0.16.0\r\nPython 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:25:58) [MSC v.1500 64 bit (AMD64)]\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=W0622,E0611\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the 
implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram InputFile.\"\"\"\n\ntry:\n # python 3\n from email.generator import _make_boundary as choose_boundary\nexcept ImportError:\n # python 2\n from mimetools import choose_boundary\n\nimport imghdr\nimport mimetypes\nimport os\nimport sys\n\nfrom telegram import TelegramError\n\nDEFAULT_MIME_TYPE = 'application/octet-stream'\nUSER_AGENT = 'Python Telegram Bot (https://github.com/python-telegram-bot/python-telegram-bot)'\nFILE_TYPES = ('audio', 'document', 'photo', 'sticker', 'video', 'voice', 'certificate',\n 'video_note', 'png_sticker')\n\n\nclass InputFile(object):\n \"\"\"This object represents a Telegram InputFile.\n\n Attributes:\n data (:obj:`dict`): Data containing an inputfile.\n\n Args:\n data (:obj:`dict`): Data containing an inputfile.\n\n Raises:\n TelegramError\n\n \"\"\"\n\n def __init__(self, data):\n self.data = data\n self.boundary = choose_boundary()\n\n for t in FILE_TYPES:\n if t in data:\n self.input_name = t\n self.input_file = data.pop(t)\n break\n else:\n raise TelegramError('Unknown inputfile type')\n\n if hasattr(self.input_file, 'read'):\n self.filename = None\n self.input_file_content = self.input_file.read()\n if 'filename' in data:\n self.filename = self.data.pop('filename')\n elif hasattr(self.input_file, 'name'):\n # on py2.7, pylint fails to understand this properly\n # pylint: disable=E1101\n self.filename = os.path.basename(self.input_file.name)\n\n try:\n self.mimetype = self.is_image(self.input_file_content)\n if not self.filename or '.' 
not in self.filename:\n self.filename = self.mimetype.replace('/', '.')\n except TelegramError:\n if self.filename:\n self.mimetype = mimetypes.guess_type(\n self.filename)[0] or DEFAULT_MIME_TYPE\n else:\n self.mimetype = DEFAULT_MIME_TYPE\n\n @property\n def headers(self):\n \"\"\":obj:`dict`: Headers.\"\"\"\n\n return {'User-agent': USER_AGENT, 'Content-type': self.content_type}\n\n @property\n def content_type(self):\n \"\"\":obj:`str`: Content type\"\"\"\n return 'multipart/form-data; boundary=%s' % self.boundary\n\n def to_form(self):\n \"\"\"Transform the inputfile to multipart/form data.\n\n Returns:\n :obj:`str`\n\n \"\"\"\n form = []\n form_boundary = '--' + self.boundary\n\n # Add data fields\n for name in iter(self.data):\n value = self.data[name]\n form.extend([\n form_boundary, 'Content-Disposition: form-data; name=\"%s\"' % name, '', str(value)\n ])\n\n # Add input_file to upload\n form.extend([\n form_boundary, 'Content-Disposition: form-data; name=\"%s\"; filename=\"%s\"' %\n (self.input_name,\n self.filename), 'Content-Type: %s' % self.mimetype, '', self.input_file_content\n ])\n\n form.append('--' + self.boundary + '--')\n form.append('')\n\n return self._parse(form)\n\n @staticmethod\n def _parse(form):\n if sys.version_info > (3,):\n # on Python 3 form needs to be byte encoded\n encoded_form = []\n for item in form:\n try:\n encoded_form.append(item.encode())\n except AttributeError:\n encoded_form.append(item)\n\n return b'\\r\\n'.join(encoded_form)\n return '\\r\\n'.join(form)\n\n @staticmethod\n def is_image(stream):\n \"\"\"Check if the content file is an image by analyzing its headers.\n\n Args:\n stream (:obj:`str`): A str representing the content of a file.\n\n Returns:\n :obj:`str`: The str mime-type of an image.\n\n \"\"\"\n image = imghdr.what(None, stream)\n if image:\n return 'image/%s' % image\n\n raise TelegramError('Could not parse file content')\n\n @staticmethod\n def is_inputfile(data):\n \"\"\"Check if the request is a file request.\n\n Args:\n data (Dict[:obj:`str`, :obj:`str`]): A dict of (str, str) key/value pairs.\n\n Returns:\n :obj:`bool`\n\n \"\"\"\n if data:\n file_type = [i for i in iter(data) if i in FILE_TYPES]\n\n if file_type:\n file_content = data[file_type[0]]\n\n return hasattr(file_content, 'read')\n\n return False\n", "path": "telegram/files/inputfile.py"}]} | 2,928 | 134 |
gh_patches_debug_29472 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Undesirable record grouping behaviours
## Description
Record grouping has a set of behaviours that are not desirable.
* It considers `order_by`, which leads to the formation of an incorrect query on the backend if we don't group by the sorted column.

* It considers `limit` and `offset`. These apply to the grouped result itself and are unrelated to the record limit & offset.


## Expected behavior
* It should not consider order_by.
* It should not consider limit and offset.
We could also probably have a dedicated API for this. It could also obtain the values for columns to filter the grouped results. Having it as part of the records API makes less sense, since the group count is not a reflection of the record results.
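As a rough sketch of the decoupling (table and column names below are invented): paginate, sort and filter the record query first, then compute the counts over that result as a subquery, so the outer `GROUP BY` never has to mention the sorted column and the record limit/offset is never re-applied to the grouped rows:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, func, select)

metadata = MetaData()
items = Table(
    'items', metadata,
    Column('id', Integer, primary_key=True),
    Column('category', String),
)
engine = create_engine('sqlite://')
metadata.create_all(engine)

# Record query: limit/offset/sort/filters belong here.
records = select(items).order_by(items.c.id).limit(50).offset(0)

# Group counts are computed over that page of records.
sub = records.subquery()
counts = select(sub.c.category, func.count(sub.c.id)).group_by(sub.c.category)

with engine.begin() as conn:
    print(conn.execute(counts).fetchall())
```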
</issue>
<code>
[start of mathesar/pagination.py]
1 from collections import OrderedDict
2
3 from rest_framework.pagination import LimitOffsetPagination
4 from rest_framework.response import Response
5
6
7 class DefaultLimitOffsetPagination(LimitOffsetPagination):
8 default_limit = 50
9 max_limit = 500
10
11 def get_paginated_response(self, data):
12 return Response(OrderedDict([
13 ('count', self.count),
14 ('results', data)
15 ]))
16
17
18 class ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):
19
20 def paginate_queryset(self, queryset, request, table_id):
21 self.limit = self.get_limit(request)
22 if self.limit is None:
23 self.limit = self.default_limit
24 self.offset = self.get_offset(request)
25 table = queryset.get(id=table_id)
26 self.count = len(table.sa_columns)
27 self.request = request
28 return list(table.sa_columns)[self.offset:self.offset + self.limit]
29
30
31 class TableLimitOffsetPagination(DefaultLimitOffsetPagination):
32
33 def paginate_queryset(self, queryset, request, table_id,
34 filters=[], order_by=[]):
35 self.limit = self.get_limit(request)
36 if self.limit is None:
37 self.limit = self.default_limit
38 self.offset = self.get_offset(request)
39 # TODO: Cache count value somewhere, since calculating it is expensive.
40 table = queryset.get(id=table_id)
41 self.count = table.sa_num_records(filters=filters)
42 self.request = request
43
44 return table.get_records(
45 self.limit, self.offset, filters=filters, order_by=order_by,
46 )
47
48
49 class TableLimitOffsetGroupPagination(TableLimitOffsetPagination):
50 def get_paginated_response(self, data):
51 return Response(OrderedDict([
52 ('count', self.count),
53 ('group_count', self.group_count),
54 ('results', data)
55 ]))
56
57 def paginate_queryset(self, queryset, request, table_id,
58 filters=[], order_by=[], group_count_by=[]):
59 records = super().paginate_queryset(
60 queryset, request, table_id, filters=filters, order_by=order_by
61 )
62
63 table = queryset.get(id=table_id)
64 if group_count_by:
65 group_count = table.get_group_counts(
66 group_count_by, self.limit, self.offset,
67 filters=filters, order_by=order_by
68 )
69 # Convert the tuple keys into strings so it can be converted to JSON
70 group_count = {','.join(k): v for k, v in group_count.items()}
71 self.group_count = {
72 'group_count_by': group_count_by,
73 'results': group_count,
74 }
75 else:
76 self.group_count = {
77 'group_count_by': None,
78 'results': None,
79 }
80
81 return records
82
[end of mathesar/pagination.py]
[start of db/records.py]
1 import logging
2 from sqlalchemy import delete, select, Column, func
3 from sqlalchemy.inspection import inspect
4 from sqlalchemy_filters import apply_filters, apply_sort
5 from sqlalchemy_filters.exceptions import FieldNotFound
6
7 from db.constants import ID
8
9 logger = logging.getLogger(__name__)
10
11
12 # Grouping exceptions follow the sqlalchemy_filters exceptions patterns
13 class BadGroupFormat(Exception):
14 pass
15
16
17 class GroupFieldNotFound(FieldNotFound):
18 pass
19
20
21 def _get_primary_key_column(table):
22 primary_key_list = list(inspect(table).primary_key)
23 # We do not support getting by composite primary keys
24 assert len(primary_key_list) == 1
25 return primary_key_list[0]
26
27
28 def _create_col_objects(table, column_list):
29 return [
30 table.columns[col] if type(col) == str else col
31 for col in column_list
32 ]
33
34
35 def get_record(table, engine, id_value):
36 primary_key_column = _get_primary_key_column(table)
37 query = select(table).where(primary_key_column == id_value)
38 with engine.begin() as conn:
39 result = conn.execute(query).fetchall()
40 assert len(result) <= 1
41 return result[0] if result else None
42
43
44 def get_records(
45 table, engine, limit=None, offset=None, order_by=[], filters=[],
46 ):
47 """
48 Returns records from a table.
49
50 Args:
51 table: SQLAlchemy table object
52 engine: SQLAlchemy engine object
53 limit: int, gives number of rows to return
54 offset: int, gives number of rows to skip
55 order_by: list of dictionaries, where each dictionary has a 'field' and
56 'direction' field.
57 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
58 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
59 field, in addition to an 'value' field if appropriate.
60 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
61 """
62 query = select(table).limit(limit).offset(offset)
63 if order_by is not None:
64 query = apply_sort(query, order_by)
65 if filters is not None:
66 query = apply_filters(query, filters)
67 with engine.begin() as conn:
68 return conn.execute(query).fetchall()
69
70
71 def get_group_counts(
72 table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],
73 ):
74 """
75 Returns counts by specified groupings
76
77 Args:
78 table: SQLAlchemy table object
79 engine: SQLAlchemy engine object
80 limit: int, gives number of rows to return
81 offset: int, gives number of rows to skip
82 group_by: list or tuple of column names or column objects to group by
83 order_by: list of dictionaries, where each dictionary has a 'field' and
84 'direction' field.
85 See: https://github.com/centerofci/sqlalchemy-filters#sort-format
86 filters: list of dictionaries, where each dictionary has a 'field' and 'op'
87 field, in addition to an 'value' field if appropriate.
88 See: https://github.com/centerofci/sqlalchemy-filters#filters-format
89 """
90 if type(group_by) not in (tuple, list):
91 raise BadGroupFormat(f"Group spec {group_by} must be list or tuple.")
92 for field in group_by:
93 if type(field) not in (str, Column):
94 raise BadGroupFormat(f"Group field {field} must be a string or Column.")
95 field_name = field if type(field) == str else field.name
96 if field_name not in table.c:
97 raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
98
99 group_by = _create_col_objects(table, group_by)
100 query = (
101 select(*group_by, func.count(table.c[ID]))
102 .group_by(*group_by)
103 .limit(limit)
104 .offset(offset)
105 )
106 if order_by is not None:
107 query = apply_sort(query, order_by)
108 if filters is not None:
109 query = apply_filters(query, filters)
110 with engine.begin() as conn:
111 records = conn.execute(query).fetchall()
112
113 # Last field is the count, preceding fields are the group by fields
114 counts = {
115 (*record[:-1],): record[-1]
116 for record in records
117 }
118 return counts
119
120
121 def get_distinct_tuple_values(
122 column_list, engine, table=None, limit=None, offset=None,
123 ):
124 """
125 Returns distinct tuples from a given list of columns.
126
127 Args:
128 column_list: list of column names or SQLAlchemy column objects
129 engine: SQLAlchemy engine object
130 table: SQLAlchemy table object
131 limit: int, gives number of rows to return
132 offset: int, gives number of rows to skip
133
134 If no table is given, the column_list must consist entirely of
135 SQLAlchemy column objects associated with a table.
136 """
137 if table is not None:
138 column_objects = _create_col_objects(table, column_list)
139 else:
140 column_objects = column_list
141 try:
142 assert all([type(col) == Column for col in column_objects])
143 except AssertionError as e:
144 logger.error("All columns must be str or sqlalchemy.Column type")
145 raise e
146
147 query = (
148 select(*column_objects)
149 .distinct()
150 .limit(limit)
151 .offset(offset)
152 )
153 with engine.begin() as conn:
154 res = conn.execute(query).fetchall()
155 return [tuple(zip(column_objects, row)) for row in res]
156
157
158 def distinct_tuples_to_filter(distinct_tuples):
159 filters = []
160 for col, value in distinct_tuples:
161 filters.append({
162 "field": col,
163 "op": "==",
164 "value": value,
165 })
166 return filters
167
168
169 def create_record_or_records(table, engine, record_data):
170 """
171 record_data can be a dictionary, tuple, or list of dictionaries or tuples.
172 if record_data is a list, it creates multiple records.
173 """
174 id_value = None
175 with engine.begin() as connection:
176 result = connection.execute(table.insert(), record_data)
177 # If there was only a single record created, return the record.
178 if result.rowcount == 1:
179 # We need to manually commit insertion so that we can retrieve the record.
180 connection.commit()
181 id_value = result.inserted_primary_key[0]
182 if id_value is not None:
183 return get_record(table, engine, id_value)
184 # Do not return any records if multiple rows were added.
185 return None
186
187
188 def create_records_from_csv(table, engine, csv_filename, column_names, delimiter=None,
189 escape=None, quote=None):
190 with open(csv_filename, 'rb') as csv_file:
191 with engine.begin() as conn:
192 cursor = conn.connection.cursor()
193 relation = '.'.join('"{}"'.format(part) for part in (table.schema, table.name))
194 formatted_columns = '({})'.format(','.join([f'"{column_name}"' for column_name in column_names]))
195
196 copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV HEADER'
197 if delimiter:
198 copy_sql += f" DELIMITER E'{delimiter}'"
199 if escape:
200 copy_sql += f" ESCAPE '{escape}'"
201 if quote:
202 if quote == "'":
203 quote = "''"
204 copy_sql += f" QUOTE '{quote}'"
205
206 cursor.copy_expert(copy_sql, csv_file)
207
208
209 def update_record(table, engine, id_value, record_data):
210 primary_key_column = _get_primary_key_column(table)
211 with engine.begin() as connection:
212 connection.execute(
213 table.update().where(primary_key_column == id_value).values(record_data)
214 )
215 return get_record(table, engine, id_value)
216
217
218 def delete_record(table, engine, id_value):
219 primary_key_column = _get_primary_key_column(table)
220 query = delete(table).where(primary_key_column == id_value)
221 with engine.begin() as conn:
222 return conn.execute(query)
223
[end of db/records.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/records.py b/db/records.py
--- a/db/records.py
+++ b/db/records.py
@@ -96,10 +96,8 @@
if field_name not in table.c:
raise GroupFieldNotFound(f"Group field {field} not found in {table}.")
- group_by = _create_col_objects(table, group_by)
query = (
- select(*group_by, func.count(table.c[ID]))
- .group_by(*group_by)
+ select(table)
.limit(limit)
.offset(offset)
)
@@ -107,6 +105,13 @@
query = apply_sort(query, order_by)
if filters is not None:
query = apply_filters(query, filters)
+ subquery = query.subquery()
+
+ group_by = [
+ subquery.columns[col] if type(col) == str else subquery.columns[col.name]
+ for col in group_by
+ ]
+ query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)
with engine.begin() as conn:
records = conn.execute(query).fetchall()
diff --git a/mathesar/pagination.py b/mathesar/pagination.py
--- a/mathesar/pagination.py
+++ b/mathesar/pagination.py
@@ -67,7 +67,8 @@
filters=filters, order_by=order_by
)
# Convert the tuple keys into strings so it can be converted to JSON
- group_count = {','.join(k): v for k, v in group_count.items()}
+ group_count = [{"values": list(cols), "count": count}
+ for cols, count in group_count.items()]
self.group_count = {
'group_count_by': group_count_by,
'results': group_count,
| {"golden_diff": "diff --git a/db/records.py b/db/records.py\n--- a/db/records.py\n+++ b/db/records.py\n@@ -96,10 +96,8 @@\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n \n- group_by = _create_col_objects(table, group_by)\n query = (\n- select(*group_by, func.count(table.c[ID]))\n- .group_by(*group_by)\n+ select(table)\n .limit(limit)\n .offset(offset)\n )\n@@ -107,6 +105,13 @@\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n+ subquery = query.subquery()\n+\n+ group_by = [\n+ subquery.columns[col] if type(col) == str else subquery.columns[col.name]\n+ for col in group_by\n+ ]\n+ query = select(*group_by, func.count(subquery.c[ID])).group_by(*group_by)\n with engine.begin() as conn:\n records = conn.execute(query).fetchall()\n \ndiff --git a/mathesar/pagination.py b/mathesar/pagination.py\n--- a/mathesar/pagination.py\n+++ b/mathesar/pagination.py\n@@ -67,7 +67,8 @@\n filters=filters, order_by=order_by\n )\n # Convert the tuple keys into strings so it can be converted to JSON\n- group_count = {','.join(k): v for k, v in group_count.items()}\n+ group_count = [{\"values\": list(cols), \"count\": count}\n+ for cols, count in group_count.items()]\n self.group_count = {\n 'group_count_by': group_count_by,\n 'results': group_count,\n", "issue": "Undesirable record grouping behaviours\n## Description\r\nRecord grouping has a set of behaviours, that are not desirable.\r\n* It considers order_by, which leads to formation of incorrect query on the backend, if we don't group by the sorted column.\r\n\r\n\r\n* It considers limit and offset. These apply on the grouped result itself, and is unrelated to the record limit & offset.\r\n\r\n\r\n\r\n\r\n## Expected behavior\r\n* It should not consider order_by.\r\n* It should not consider limit and offset.\r\n\r\nWe could also probably have a dedicated API for this. It could also obtain the values for columns, to filter the grouped results. 
Having it as part of records API makes less sense, since the group count is not a reflection of the record results.\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom rest_framework.pagination import LimitOffsetPagination\nfrom rest_framework.response import Response\n\n\nclass DefaultLimitOffsetPagination(LimitOffsetPagination):\n default_limit = 50\n max_limit = 500\n\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('results', data)\n ]))\n\n\nclass ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n table = queryset.get(id=table_id)\n self.count = len(table.sa_columns)\n self.request = request\n return list(table.sa_columns)[self.offset:self.offset + self.limit]\n\n\nclass TableLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[]):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n # TODO: Cache count value somewhere, since calculating it is expensive.\n table = queryset.get(id=table_id)\n self.count = table.sa_num_records(filters=filters)\n self.request = request\n\n return table.get_records(\n self.limit, self.offset, filters=filters, order_by=order_by,\n )\n\n\nclass TableLimitOffsetGroupPagination(TableLimitOffsetPagination):\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('group_count', self.group_count),\n ('results', data)\n ]))\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[], group_count_by=[]):\n records = super().paginate_queryset(\n queryset, request, table_id, filters=filters, order_by=order_by\n )\n\n table = queryset.get(id=table_id)\n if group_count_by:\n group_count = table.get_group_counts(\n group_count_by, self.limit, self.offset,\n filters=filters, order_by=order_by\n )\n # Convert the tuple keys into strings so it can be converted to JSON\n group_count = {','.join(k): v for k, v in group_count.items()}\n self.group_count = {\n 'group_count_by': group_count_by,\n 'results': group_count,\n }\n else:\n self.group_count = {\n 'group_count_by': None,\n 'results': None,\n }\n\n return records\n", "path": "mathesar/pagination.py"}, {"content": "import logging\nfrom sqlalchemy import delete, select, Column, func\nfrom sqlalchemy.inspection import inspect\nfrom sqlalchemy_filters import apply_filters, apply_sort\nfrom sqlalchemy_filters.exceptions import FieldNotFound\n\nfrom db.constants import ID\n\nlogger = logging.getLogger(__name__)\n\n\n# Grouping exceptions follow the sqlalchemy_filters exceptions patterns\nclass BadGroupFormat(Exception):\n pass\n\n\nclass GroupFieldNotFound(FieldNotFound):\n pass\n\n\ndef _get_primary_key_column(table):\n primary_key_list = list(inspect(table).primary_key)\n # We do not support getting by composite primary keys\n assert len(primary_key_list) == 1\n return primary_key_list[0]\n\n\ndef _create_col_objects(table, column_list):\n return [\n table.columns[col] if type(col) == str else col\n for col in column_list\n ]\n\n\ndef get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n with 
engine.begin() as conn:\n result = conn.execute(query).fetchall()\n assert len(result) <= 1\n return result[0] if result else None\n\n\ndef get_records(\n table, engine, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns records from a table.\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n query = select(table).limit(limit).offset(offset)\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n with engine.begin() as conn:\n return conn.execute(query).fetchall()\n\n\ndef get_group_counts(\n table, engine, group_by, limit=None, offset=None, order_by=[], filters=[],\n):\n \"\"\"\n Returns counts by specified groupings\n\n Args:\n table: SQLAlchemy table object\n engine: SQLAlchemy engine object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n group_by: list or tuple of column names or column objects to group by\n order_by: list of dictionaries, where each dictionary has a 'field' and\n 'direction' field.\n See: https://github.com/centerofci/sqlalchemy-filters#sort-format\n filters: list of dictionaries, where each dictionary has a 'field' and 'op'\n field, in addition to an 'value' field if appropriate.\n See: https://github.com/centerofci/sqlalchemy-filters#filters-format\n \"\"\"\n if type(group_by) not in (tuple, list):\n raise BadGroupFormat(f\"Group spec {group_by} must be list or tuple.\")\n for field in group_by:\n if type(field) not in (str, Column):\n raise BadGroupFormat(f\"Group field {field} must be a string or Column.\")\n field_name = field if type(field) == str else field.name\n if field_name not in table.c:\n raise GroupFieldNotFound(f\"Group field {field} not found in {table}.\")\n\n group_by = _create_col_objects(table, group_by)\n query = (\n select(*group_by, func.count(table.c[ID]))\n .group_by(*group_by)\n .limit(limit)\n .offset(offset)\n )\n if order_by is not None:\n query = apply_sort(query, order_by)\n if filters is not None:\n query = apply_filters(query, filters)\n with engine.begin() as conn:\n records = conn.execute(query).fetchall()\n\n # Last field is the count, preceding fields are the group by fields\n counts = {\n (*record[:-1],): record[-1]\n for record in records\n }\n return counts\n\n\ndef get_distinct_tuple_values(\n column_list, engine, table=None, limit=None, offset=None,\n):\n \"\"\"\n Returns distinct tuples from a given list of columns.\n\n Args:\n column_list: list of column names or SQLAlchemy column objects\n engine: SQLAlchemy engine object\n table: SQLAlchemy table object\n limit: int, gives number of rows to return\n offset: int, gives number of rows to skip\n\n If no table is given, the column_list must consist entirely of\n SQLAlchemy column objects associated with a table.\n \"\"\"\n if table is not None:\n column_objects = _create_col_objects(table, column_list)\n else:\n column_objects = column_list\n try:\n assert all([type(col) == Column for col in column_objects])\n except AssertionError as e:\n 
logger.error(\"All columns must be str or sqlalchemy.Column type\")\n raise e\n\n query = (\n select(*column_objects)\n .distinct()\n .limit(limit)\n .offset(offset)\n )\n with engine.begin() as conn:\n res = conn.execute(query).fetchall()\n return [tuple(zip(column_objects, row)) for row in res]\n\n\ndef distinct_tuples_to_filter(distinct_tuples):\n filters = []\n for col, value in distinct_tuples:\n filters.append({\n \"field\": col,\n \"op\": \"==\",\n \"value\": value,\n })\n return filters\n\n\ndef create_record_or_records(table, engine, record_data):\n \"\"\"\n record_data can be a dictionary, tuple, or list of dictionaries or tuples.\n if record_data is a list, it creates multiple records.\n \"\"\"\n id_value = None\n with engine.begin() as connection:\n result = connection.execute(table.insert(), record_data)\n # If there was only a single record created, return the record.\n if result.rowcount == 1:\n # We need to manually commit insertion so that we can retrieve the record.\n connection.commit()\n id_value = result.inserted_primary_key[0]\n if id_value is not None:\n return get_record(table, engine, id_value)\n # Do not return any records if multiple rows were added.\n return None\n\n\ndef create_records_from_csv(table, engine, csv_filename, column_names, delimiter=None,\n escape=None, quote=None):\n with open(csv_filename, 'rb') as csv_file:\n with engine.begin() as conn:\n cursor = conn.connection.cursor()\n relation = '.'.join('\"{}\"'.format(part) for part in (table.schema, table.name))\n formatted_columns = '({})'.format(','.join([f'\"{column_name}\"' for column_name in column_names]))\n\n copy_sql = f'COPY {relation} {formatted_columns} FROM STDIN CSV HEADER'\n if delimiter:\n copy_sql += f\" DELIMITER E'{delimiter}'\"\n if escape:\n copy_sql += f\" ESCAPE '{escape}'\"\n if quote:\n if quote == \"'\":\n quote = \"''\"\n copy_sql += f\" QUOTE '{quote}'\"\n\n cursor.copy_expert(copy_sql, csv_file)\n\n\ndef update_record(table, engine, id_value, record_data):\n primary_key_column = _get_primary_key_column(table)\n with engine.begin() as connection:\n connection.execute(\n table.update().where(primary_key_column == id_value).values(record_data)\n )\n return get_record(table, engine, id_value)\n\n\ndef delete_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = delete(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n return conn.execute(query)\n", "path": "db/records.py"}]} | 3,977 | 399 |
gh_patches_debug_22504 | rasdani/github-patches | git_diff | wright-group__WrightTools-360 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows Tempfile Error
On Windows, temporary files that h5py attempts to open cause errors.
I do not have the exact error message in front of me at present, but I believe it was a 'file already exists' flavor of problem.
We may need to remove the created tmpfile and just use its name...
</issue>
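The suggestion above (keep only the temporary file's name and manage the handle by hand) can be sketched with `tempfile.mkstemp`, which is also the route the patch further down takes. This is a minimal illustration, assuming h5py is installed, and it is not WrightTools code; `'w'` mode is used so h5py simply truncates the zero-byte placeholder file.

```python
import os
import tempfile

import h5py

# NamedTemporaryFile keeps its own handle open, which on Windows prevents
# h5py from reopening the same path. mkstemp only reserves a name and
# returns a plain descriptor that we can close straight away.
fd, path = tempfile.mkstemp(suffix='.wt5')
os.close(fd)                        # keep just the name
try:
    with h5py.File(path, 'w') as f:     # h5py now owns the file handle
        f.attrs['name'] = 'example'
finally:
    os.remove(path)                 # manual cleanup replaces delete-on-close
```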
<code>
[start of WrightTools/_base.py]
1 """WrightTools base classes and associated."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import shutil
8 import weakref
9 import tempfile
10 import posixpath
11
12 import numpy as np
13
14 import h5py
15
16
17 # --- define --------------------------------------------------------------------------------------
18
19
20 wt5_version = '0.0.0'
21
22
23 # --- dataset -------------------------------------------------------------------------------------
24
25
26 class Dataset(h5py.Dataset):
27 instances = {}
28
29
30 # --- group ---------------------------------------------------------------------------------------
31
32
33 class Group(h5py.Group):
34 instances = {}
35 class_name = 'Group'
36
37 def __init__(self, filepath=None, parent=None, name=None, **kwargs):
38 if filepath is None:
39 return
40 if parent == '':
41 parent = posixpath.sep
42 # file
43 self.filepath = filepath
44 path = parent + posixpath.sep + name
45 file = h5py.File(self.filepath, 'a')
46 file.require_group(parent)
47 file.require_group(path)
48 h5py.Group.__init__(self, bind=file[path].id)
49 self.__n = 0
50 self.fid = self.file.fid
51 if name is not None:
52 self.attrs['name'] = name
53 self.attrs.update(kwargs)
54 self.attrs['class'] = self.class_name
55 # load from file
56 self._items = []
57 for name in self.item_names:
58 self._items.append(self[name])
59 setattr(self, name, self[name])
60 # kwargs
61 self.attrs.update(kwargs)
62 # the following are populated if not already recorded
63 self.__version__
64 self.natural_name
65
66 def __new__(cls, *args, **kwargs):
67 # extract
68 filepath = args[0] if len(args) > 0 else kwargs.get('filepath', None)
69 parent = args[1] if len(args) > 1 else kwargs.get('parent', None)
70 name = args[2] if len(args) > 2 else kwargs.get('name', cls.class_name.lower())
71 edit_local = args[3] if len(args) > 3 else kwargs.get('edit_local', False)
72 # tempfile
73 tmpfile = None
74 if edit_local and filepath is None:
75 raise Exception # TODO: better exception
76 if not edit_local:
77 tmpfile = tempfile.NamedTemporaryFile(prefix='', suffix='.wt5')
78 p = tmpfile.name
79 if filepath:
80 shutil.copyfile(src=filepath, dst=p)
81 elif edit_local and filepath:
82 p = filepath
83 # construct fullpath
84 if parent is None:
85 parent = ''
86 name = '/'
87 fullpath = p + '::' + parent + name
88 # create and/or return
89 if fullpath not in cls.instances.keys():
90 kwargs['filepath'] = p
91 kwargs['parent'] = parent
92 kwargs['name'] = name
93 instance = super(Group, cls).__new__(cls)
94 cls.__init__(instance, **kwargs)
95 cls.instances[fullpath] = instance
96 if tmpfile:
97 setattr(instance, '_tmpfile', tmpfile)
98 weakref.finalize(instance, instance.close)
99 return instance
100 instance = cls.instances[fullpath]
101 return instance
102
103 @property
104 def __version__(self):
105 if '__version__' not in self.file.attrs.keys():
106 self.file.attrs['__version__'] = wt5_version
107 return self.file.attrs['__version__']
108
109 @property
110 def fullpath(self):
111 return self.filepath + '::' + self.name
112
113 @property
114 def item_names(self):
115 if 'item_names' not in self.attrs.keys():
116 self.attrs['item_names'] = np.array([], dtype='S')
117 return self.attrs['item_names']
118
119 @property
120 def natural_name(self):
121 if 'name' not in self.attrs.keys():
122 self.attrs['name'] = self.__class__.default_name
123 return self.attrs['name']
124
125 @property
126 def parent(self):
127 from .collection import Collection
128 group = super().parent
129 parent = group.parent.name
130 if parent == posixpath.sep:
131 parent = None
132 return Collection(self.filepath, parent=parent, name=group.attrs['name'])
133
134 def close(self):
135 if(self.fid.valid > 0):
136 self.__class__.instances.pop(self.fullpath)
137 self.file.flush()
138 self.file.close()
139 if hasattr(self, '_tmpfile'):
140 self._tmpfile.close()
141
142 def flush(self):
143 self.file.flush()
144
[end of WrightTools/_base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/WrightTools/_base.py b/WrightTools/_base.py
--- a/WrightTools/_base.py
+++ b/WrightTools/_base.py
@@ -5,6 +5,7 @@
import shutil
+import os
import weakref
import tempfile
import posixpath
@@ -74,8 +75,8 @@
if edit_local and filepath is None:
raise Exception # TODO: better exception
if not edit_local:
- tmpfile = tempfile.NamedTemporaryFile(prefix='', suffix='.wt5')
- p = tmpfile.name
+ tmpfile = tempfile.mkstemp(prefix='', suffix='.wt5')
+ p = tmpfile[1]
if filepath:
shutil.copyfile(src=filepath, dst=p)
elif edit_local and filepath:
@@ -137,7 +138,8 @@
self.file.flush()
self.file.close()
if hasattr(self, '_tmpfile'):
- self._tmpfile.close()
+ os.close(self._tmpfile[0])
+ os.remove(self._tmpfile[1])
def flush(self):
self.file.flush()
| {"golden_diff": "diff --git a/WrightTools/_base.py b/WrightTools/_base.py\n--- a/WrightTools/_base.py\n+++ b/WrightTools/_base.py\n@@ -5,6 +5,7 @@\n \n \n import shutil\n+import os\n import weakref\n import tempfile\n import posixpath\n@@ -74,8 +75,8 @@\n if edit_local and filepath is None:\n raise Exception # TODO: better exception\n if not edit_local:\n- tmpfile = tempfile.NamedTemporaryFile(prefix='', suffix='.wt5')\n- p = tmpfile.name\n+ tmpfile = tempfile.mkstemp(prefix='', suffix='.wt5')\n+ p = tmpfile[1]\n if filepath:\n shutil.copyfile(src=filepath, dst=p)\n elif edit_local and filepath:\n@@ -137,7 +138,8 @@\n self.file.flush()\n self.file.close()\n if hasattr(self, '_tmpfile'):\n- self._tmpfile.close()\n+ os.close(self._tmpfile[0])\n+ os.remove(self._tmpfile[1])\n \n def flush(self):\n self.file.flush()\n", "issue": "Windows Tempfile Error\nOn Windows, tempfiles attempted to be opened using h5py cause errors.\r\n\r\nI do not have the error message in front of me at present, but I believe it was a 'file already exists' flavor of problem. \r\n\r\nWe may need to remove the created tmpfile and just use the name....\n", "before_files": [{"content": "\"\"\"WrightTools base classes and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport shutil\nimport weakref\nimport tempfile\nimport posixpath\n\nimport numpy as np\n\nimport h5py\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\nwt5_version = '0.0.0'\n\n\n# --- dataset -------------------------------------------------------------------------------------\n\n\nclass Dataset(h5py.Dataset):\n instances = {}\n\n\n# --- group ---------------------------------------------------------------------------------------\n\n\nclass Group(h5py.Group):\n instances = {}\n class_name = 'Group'\n\n def __init__(self, filepath=None, parent=None, name=None, **kwargs):\n if filepath is None:\n return\n if parent == '':\n parent = posixpath.sep\n # file\n self.filepath = filepath\n path = parent + posixpath.sep + name\n file = h5py.File(self.filepath, 'a')\n file.require_group(parent)\n file.require_group(path)\n h5py.Group.__init__(self, bind=file[path].id)\n self.__n = 0\n self.fid = self.file.fid\n if name is not None:\n self.attrs['name'] = name\n self.attrs.update(kwargs)\n self.attrs['class'] = self.class_name\n # load from file\n self._items = []\n for name in self.item_names:\n self._items.append(self[name])\n setattr(self, name, self[name])\n # kwargs\n self.attrs.update(kwargs)\n # the following are populated if not already recorded\n self.__version__\n self.natural_name\n\n def __new__(cls, *args, **kwargs):\n # extract\n filepath = args[0] if len(args) > 0 else kwargs.get('filepath', None)\n parent = args[1] if len(args) > 1 else kwargs.get('parent', None)\n name = args[2] if len(args) > 2 else kwargs.get('name', cls.class_name.lower())\n edit_local = args[3] if len(args) > 3 else kwargs.get('edit_local', False)\n # tempfile\n tmpfile = None\n if edit_local and filepath is None:\n raise Exception # TODO: better exception\n if not edit_local:\n tmpfile = tempfile.NamedTemporaryFile(prefix='', suffix='.wt5')\n p = tmpfile.name\n if filepath:\n shutil.copyfile(src=filepath, dst=p)\n elif edit_local and filepath:\n p = filepath\n # construct fullpath\n if parent is None:\n parent = ''\n name = '/'\n fullpath = p + '::' + parent + name\n # create and/or return\n if fullpath not in cls.instances.keys():\n 
kwargs['filepath'] = p\n kwargs['parent'] = parent\n kwargs['name'] = name\n instance = super(Group, cls).__new__(cls)\n cls.__init__(instance, **kwargs)\n cls.instances[fullpath] = instance\n if tmpfile:\n setattr(instance, '_tmpfile', tmpfile)\n weakref.finalize(instance, instance.close)\n return instance\n instance = cls.instances[fullpath]\n return instance\n\n @property\n def __version__(self):\n if '__version__' not in self.file.attrs.keys():\n self.file.attrs['__version__'] = wt5_version\n return self.file.attrs['__version__']\n\n @property\n def fullpath(self):\n return self.filepath + '::' + self.name\n\n @property\n def item_names(self):\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return self.attrs['item_names']\n\n @property\n def natural_name(self):\n if 'name' not in self.attrs.keys():\n self.attrs['name'] = self.__class__.default_name\n return self.attrs['name']\n\n @property\n def parent(self):\n from .collection import Collection\n group = super().parent\n parent = group.parent.name\n if parent == posixpath.sep:\n parent = None\n return Collection(self.filepath, parent=parent, name=group.attrs['name'])\n\n def close(self):\n if(self.fid.valid > 0):\n self.__class__.instances.pop(self.fullpath)\n self.file.flush()\n self.file.close()\n if hasattr(self, '_tmpfile'):\n self._tmpfile.close()\n\n def flush(self):\n self.file.flush()\n", "path": "WrightTools/_base.py"}]} | 1,890 | 247 |
gh_patches_debug_25508 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-728 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UploadFile causes an ASGI application exception when it is in debug mode
Apparently, when the ASGI application runs in debug mode, it cannot print the values of variables that are not JSON serializable.
In my case, the issue came up when I tried the [file upload](https://strawberry.rocks/docs/features/file-upload) example in debug mode.
I believe it is because of this:
https://github.com/strawberry-graphql/strawberry/blob/de215370b247a417af8a8dd5fc382d71e305bcd7/strawberry/utils/debug.py#L26-L29
Perhaps converting variables to string might help.
</issue>
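A rough sketch of the idea of converting variables to strings: a `json.JSONEncoder` whose `default` falls back to `repr`, which is also the direction the patch further down takes. `FallbackEncoder` and `FakeUpload` are invented names used only for illustration, not strawberry API.

```python
import json


class FallbackEncoder(json.JSONEncoder):
    """Serialize anything json chokes on via its repr()."""

    def default(self, o):
        return repr(o)


class FakeUpload:
    """Stand-in for a non-JSON-serializable variable such as an uploaded file."""


variables = {"textFile": FakeUpload(), "folder": "docs"}

# json.dumps(variables, indent=4) would raise TypeError here;
# the fallback encoder prints a repr of the offending value instead.
print(json.dumps(variables, indent=4, cls=FallbackEncoder))
```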
<code>
[start of strawberry/utils/debug.py]
1 import datetime
2 import json
3 import typing
4
5 from pygments import highlight, lexers
6 from pygments.formatters import Terminal256Formatter
7
8 from .graphql_lexer import GraphQLLexer
9
10
11 def pretty_print_graphql_operation(
12 operation_name: str, query: str, variables: typing.Dict["str", typing.Any]
13 ): # pragma: no cover
14 """Pretty print a GraphQL operation using pygments.
15
16 Won't print introspection operation to prevent noise in the output."""
17
18 if operation_name == "IntrospectionQuery":
19 return
20
21 now = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
22
23 print(f"[{now}]: {operation_name or 'No operation name'}")
24 print(highlight(query, GraphQLLexer(), Terminal256Formatter()))
25
26 if variables:
27 variables_json = json.dumps(variables, indent=4)
28
29 print(highlight(variables_json, lexers.JsonLexer(), Terminal256Formatter()))
30
[end of strawberry/utils/debug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/utils/debug.py b/strawberry/utils/debug.py
--- a/strawberry/utils/debug.py
+++ b/strawberry/utils/debug.py
@@ -1,6 +1,7 @@
import datetime
import json
-import typing
+from json import JSONEncoder
+from typing import Any, Dict
from pygments import highlight, lexers
from pygments.formatters import Terminal256Formatter
@@ -8,9 +9,14 @@
from .graphql_lexer import GraphQLLexer
+class StrawberryJSONEncoder(JSONEncoder):
+ def default(self, o: Any) -> Any:
+ return repr(o)
+
+
def pretty_print_graphql_operation(
- operation_name: str, query: str, variables: typing.Dict["str", typing.Any]
-): # pragma: no cover
+ operation_name: str, query: str, variables: Dict["str", Any]
+):
"""Pretty print a GraphQL operation using pygments.
Won't print introspection operation to prevent noise in the output."""
@@ -24,6 +30,6 @@
print(highlight(query, GraphQLLexer(), Terminal256Formatter()))
if variables:
- variables_json = json.dumps(variables, indent=4)
+ variables_json = json.dumps(variables, indent=4, cls=StrawberryJSONEncoder)
print(highlight(variables_json, lexers.JsonLexer(), Terminal256Formatter()))
| {"golden_diff": "diff --git a/strawberry/utils/debug.py b/strawberry/utils/debug.py\n--- a/strawberry/utils/debug.py\n+++ b/strawberry/utils/debug.py\n@@ -1,6 +1,7 @@\n import datetime\n import json\n-import typing\n+from json import JSONEncoder\n+from typing import Any, Dict\n \n from pygments import highlight, lexers\n from pygments.formatters import Terminal256Formatter\n@@ -8,9 +9,14 @@\n from .graphql_lexer import GraphQLLexer\n \n \n+class StrawberryJSONEncoder(JSONEncoder):\n+ def default(self, o: Any) -> Any:\n+ return repr(o)\n+\n+\n def pretty_print_graphql_operation(\n- operation_name: str, query: str, variables: typing.Dict[\"str\", typing.Any]\n-): # pragma: no cover\n+ operation_name: str, query: str, variables: Dict[\"str\", Any]\n+):\n \"\"\"Pretty print a GraphQL operation using pygments.\n \n Won't print introspection operation to prevent noise in the output.\"\"\"\n@@ -24,6 +30,6 @@\n print(highlight(query, GraphQLLexer(), Terminal256Formatter()))\n \n if variables:\n- variables_json = json.dumps(variables, indent=4)\n+ variables_json = json.dumps(variables, indent=4, cls=StrawberryJSONEncoder)\n \n print(highlight(variables_json, lexers.JsonLexer(), Terminal256Formatter()))\n", "issue": "UploadFile causes ASGI application's exception when it is in debug mode\nApparently when we use the ASGI application in debug mode, it cannot print the value of variables if they are not JSON serializable.\r\n\r\nIn my use case, when I tried to use the [file upload](https://strawberry.rocks/docs/features/file-upload) example in debug mode this issue ended up happening.\r\n\r\nI believe it is because of this:\r\n\r\nhttps://github.com/strawberry-graphql/strawberry/blob/de215370b247a417af8a8dd5fc382d71e305bcd7/strawberry/utils/debug.py#L26-L29\r\n\r\nPerhaps converting variables to string might help.\n", "before_files": [{"content": "import datetime\nimport json\nimport typing\n\nfrom pygments import highlight, lexers\nfrom pygments.formatters import Terminal256Formatter\n\nfrom .graphql_lexer import GraphQLLexer\n\n\ndef pretty_print_graphql_operation(\n operation_name: str, query: str, variables: typing.Dict[\"str\", typing.Any]\n): # pragma: no cover\n \"\"\"Pretty print a GraphQL operation using pygments.\n\n Won't print introspection operation to prevent noise in the output.\"\"\"\n\n if operation_name == \"IntrospectionQuery\":\n return\n\n now = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n\n print(f\"[{now}]: {operation_name or 'No operation name'}\")\n print(highlight(query, GraphQLLexer(), Terminal256Formatter()))\n\n if variables:\n variables_json = json.dumps(variables, indent=4)\n\n print(highlight(variables_json, lexers.JsonLexer(), Terminal256Formatter()))\n", "path": "strawberry/utils/debug.py"}]} | 956 | 317 |
gh_patches_debug_4565 | rasdani/github-patches | git_diff | pypa__setuptools-1591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Impossible to install packages with setuptools v.40.6.x, if six isn't installed
Upgrade pip and setuptools to the latest versions
```
oleg$ pip install --upgrade pip setuptools
Collecting pip
Using cached https://files.pythonhosted.org/packages/c2/d7/90f34cb0d83a6c5631cf71dfe64cc1054598c843a92b400e55675cc2ac37/pip-18.1-py2.py3-none-any.whl
Collecting setuptools
Using cached https://files.pythonhosted.org/packages/4b/47/1417da90ed6f4c88465d08ea2461ff41c94cc6cc223f333d130d7a99199a/setuptools-40.6.1-py2.py3-none-any.whl
Installing collected packages: pip, setuptools
Found existing installation: pip 9.0.1
Uninstalling pip-9.0.1:
Successfully uninstalled pip-9.0.1
Found existing installation: setuptools 38.2.4
Uninstalling setuptools-38.2.4:
Successfully uninstalled setuptools-38.2.4
Successfully installed pip-18.1 setuptools-40.6.1
```
Try to install any package, d2to1 for example
```
oleg$ pip install d2to1
Collecting d2to1
Downloading https://files.pythonhosted.org/packages/dc/bd/eac45e4e77d76f6c0ae539819c40f1babb891d7855129663e37957a7c2df/d2to1-0.2.12.post1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/ns/71p61z5s3hsd2pv327gmdh1c0000gn/T/pip-install-2J_LhF/d2to1/setup.py", line 17, in <module>
setup(**cfg_to_args())
File "d2to1/util.py", line 204, in cfg_to_args
wrap_commands(kwargs)
File "d2to1/util.py", line 439, in wrap_commands
for cmd, _ in dist.get_command_list():
File "/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/setuptools/dist.py", line 724, in get_command_list
cmdclass = ep.resolve()
File "/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2352, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/setuptools/command/upload_docs.py", line 23, in <module>
from .upload import upload
File "/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/setuptools/command/upload.py", line 15, in <module>
from six.moves.urllib.request import urlopen, Request
ImportError: No module named six.moves.urllib.request
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/ns/71p61z5s3hsd2pv327gmdh1c0000gn/T/pip-install-2J_LhF/d2to1/
````
```
oleg$ pip list
Package Version
---------- ------------
d2to1 0.2.12.post1
pip 18.1
setuptools 40.6.1
wheel 0.30.0
```
Downgrade setuptools to v.40.5.0, and try installation again
```
oleg$ pip install --upgrade setuptools==40.5.0
Collecting setuptools==40.5.0
Downloading https://files.pythonhosted.org/packages/82/a1/ba6fb41367b375f5cb653d1317d8ca263c636cff6566e2da1b0da716069d/setuptools-40.5.0-py2.py3-none-any.whl (569kB)
100% |████████████████████████████████| 573kB 754kB/s
Installing collected packages: setuptools
Found existing installation: setuptools 40.6.1
Uninstalling setuptools-40.6.1:
Successfully uninstalled setuptools-40.6.1
Successfully installed setuptools-40.5.0
oleg$ pip install d2to1
Collecting d2to1
Using cached https://files.pythonhosted.org/packages/dc/bd/eac45e4e77d76f6c0ae539819c40f1babb891d7855129663e37957a7c2df/d2to1-0.2.12.post1.tar.gz
Requirement already satisfied: setuptools in /Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages (from d2to1) (40.5.0)
Building wheels for collected packages: d2to1
Running setup.py bdist_wheel for d2to1 ... done
Stored in directory: /Users/oleg/Library/Caches/pip/wheels/e6/1a/ed/11531583d510d72448e39bfc254147d0e7b2b2ad65722b3a6f
Successfully built d2to1
Installing collected packages: d2to1
Successfully installed d2to1-0.2.12.post1
```
</issue>
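The traceback shows setuptools' own `upload` command importing a top-level `six`, which is not guaranteed to be installed; setuptools of that era vendors six under `setuptools.extern`, the import path the patch further down switches to. The snippet below only illustrates where the names can be resolved from; it is not setuptools code, and recent setuptools releases no longer vendor six at all.

```python
# Resolve urlopen/Request without requiring a separately installed six.
try:
    # A 40.x-era setuptools bundles six here, so this path works even when
    # the environment has no top-level ``six`` package.
    from setuptools.extern.six.moves.urllib.request import Request, urlopen
    origin = "setuptools.extern.six (vendored copy)"
except ImportError:
    # On Python 3 the same names live in the standard library.
    from urllib.request import Request, urlopen
    origin = "urllib.request (standard library)"

print("Request and urlopen resolved from:", origin, Request, urlopen)
```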
<code>
[start of setuptools/command/upload.py]
1 import io
2 import os
3 import hashlib
4 import getpass
5 import platform
6
7 from base64 import standard_b64encode
8
9 from distutils import log
10 from distutils.command import upload as orig
11 from distutils.spawn import spawn
12
13 from distutils.errors import DistutilsError
14
15 from six.moves.urllib.request import urlopen, Request
16 from six.moves.urllib.error import HTTPError
17 from six.moves.urllib.parse import urlparse
18
19 class upload(orig.upload):
20 """
21 Override default upload behavior to obtain password
22 in a variety of different ways.
23 """
24 def run(self):
25 try:
26 orig.upload.run(self)
27 finally:
28 self.announce(
29 "WARNING: Uploading via this command is deprecated, use twine "
30 "to upload instead (https://pypi.org/p/twine/)",
31 log.WARN
32 )
33
34 def finalize_options(self):
35 orig.upload.finalize_options(self)
36 self.username = (
37 self.username or
38 getpass.getuser()
39 )
40 # Attempt to obtain password. Short circuit evaluation at the first
41 # sign of success.
42 self.password = (
43 self.password or
44 self._load_password_from_keyring() or
45 self._prompt_for_password()
46 )
47
48 def upload_file(self, command, pyversion, filename):
49 # Makes sure the repository URL is compliant
50 schema, netloc, url, params, query, fragments = \
51 urlparse(self.repository)
52 if params or query or fragments:
53 raise AssertionError("Incompatible url %s" % self.repository)
54
55 if schema not in ('http', 'https'):
56 raise AssertionError("unsupported schema " + schema)
57
58 # Sign if requested
59 if self.sign:
60 gpg_args = ["gpg", "--detach-sign", "-a", filename]
61 if self.identity:
62 gpg_args[2:2] = ["--local-user", self.identity]
63 spawn(gpg_args,
64 dry_run=self.dry_run)
65
66 # Fill in the data - send all the meta-data in case we need to
67 # register a new release
68 with open(filename, 'rb') as f:
69 content = f.read()
70
71 meta = self.distribution.metadata
72
73 data = {
74 # action
75 ':action': 'file_upload',
76 'protocol_version': '1',
77
78 # identify release
79 'name': meta.get_name(),
80 'version': meta.get_version(),
81
82 # file content
83 'content': (os.path.basename(filename),content),
84 'filetype': command,
85 'pyversion': pyversion,
86 'md5_digest': hashlib.md5(content).hexdigest(),
87
88 # additional meta-data
89 'metadata_version': str(meta.get_metadata_version()),
90 'summary': meta.get_description(),
91 'home_page': meta.get_url(),
92 'author': meta.get_contact(),
93 'author_email': meta.get_contact_email(),
94 'license': meta.get_licence(),
95 'description': meta.get_long_description(),
96 'keywords': meta.get_keywords(),
97 'platform': meta.get_platforms(),
98 'classifiers': meta.get_classifiers(),
99 'download_url': meta.get_download_url(),
100 # PEP 314
101 'provides': meta.get_provides(),
102 'requires': meta.get_requires(),
103 'obsoletes': meta.get_obsoletes(),
104 }
105
106 data['comment'] = ''
107
108 if self.sign:
109 data['gpg_signature'] = (os.path.basename(filename) + ".asc",
110 open(filename+".asc", "rb").read())
111
112 # set up the authentication
113 user_pass = (self.username + ":" + self.password).encode('ascii')
114 # The exact encoding of the authentication string is debated.
115 # Anyway PyPI only accepts ascii for both username or password.
116 auth = "Basic " + standard_b64encode(user_pass).decode('ascii')
117
118 # Build up the MIME payload for the POST data
119 boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'
120 sep_boundary = b'\r\n--' + boundary.encode('ascii')
121 end_boundary = sep_boundary + b'--\r\n'
122 body = io.BytesIO()
123 for key, value in data.items():
124 title = '\r\nContent-Disposition: form-data; name="%s"' % key
125 # handle multiple entries for the same name
126 if not isinstance(value, list):
127 value = [value]
128 for value in value:
129 if type(value) is tuple:
130 title += '; filename="%s"' % value[0]
131 value = value[1]
132 else:
133 value = str(value).encode('utf-8')
134 body.write(sep_boundary)
135 body.write(title.encode('utf-8'))
136 body.write(b"\r\n\r\n")
137 body.write(value)
138 body.write(end_boundary)
139 body = body.getvalue()
140
141 msg = "Submitting %s to %s" % (filename, self.repository)
142 self.announce(msg, log.INFO)
143
144 # build the Request
145 headers = {
146 'Content-type': 'multipart/form-data; boundary=%s' % boundary,
147 'Content-length': str(len(body)),
148 'Authorization': auth,
149 }
150
151 request = Request(self.repository, data=body,
152 headers=headers)
153 # send the data
154 try:
155 result = urlopen(request)
156 status = result.getcode()
157 reason = result.msg
158 except HTTPError as e:
159 status = e.code
160 reason = e.msg
161 except OSError as e:
162 self.announce(str(e), log.ERROR)
163 raise
164
165 if status == 200:
166 self.announce('Server response (%s): %s' % (status, reason),
167 log.INFO)
168 if self.show_response:
169 text = getattr(self, '_read_pypi_response',
170 lambda x: None)(result)
171 if text is not None:
172 msg = '\n'.join(('-' * 75, text, '-' * 75))
173 self.announce(msg, log.INFO)
174 else:
175 msg = 'Upload failed (%s): %s' % (status, reason)
176 self.announce(msg, log.ERROR)
177 raise DistutilsError(msg)
178
179 def _load_password_from_keyring(self):
180 """
181 Attempt to load password from keyring. Suppress Exceptions.
182 """
183 try:
184 keyring = __import__('keyring')
185 return keyring.get_password(self.repository, self.username)
186 except Exception:
187 pass
188
189 def _prompt_for_password(self):
190 """
191 Prompt for a password on the tty. Suppress Exceptions.
192 """
193 try:
194 return getpass.getpass()
195 except (Exception, KeyboardInterrupt):
196 pass
197
[end of setuptools/command/upload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/command/upload.py b/setuptools/command/upload.py
--- a/setuptools/command/upload.py
+++ b/setuptools/command/upload.py
@@ -12,9 +12,9 @@
from distutils.errors import DistutilsError
-from six.moves.urllib.request import urlopen, Request
-from six.moves.urllib.error import HTTPError
-from six.moves.urllib.parse import urlparse
+from setuptools.extern.six.moves.urllib.request import urlopen, Request
+from setuptools.extern.six.moves.urllib.error import HTTPError
+from setuptools.extern.six.moves.urllib.parse import urlparse
class upload(orig.upload):
"""
| {"golden_diff": "diff --git a/setuptools/command/upload.py b/setuptools/command/upload.py\n--- a/setuptools/command/upload.py\n+++ b/setuptools/command/upload.py\n@@ -12,9 +12,9 @@\n \n from distutils.errors import DistutilsError\n \n-from six.moves.urllib.request import urlopen, Request\n-from six.moves.urllib.error import HTTPError\n-from six.moves.urllib.parse import urlparse\n+from setuptools.extern.six.moves.urllib.request import urlopen, Request\n+from setuptools.extern.six.moves.urllib.error import HTTPError\n+from setuptools.extern.six.moves.urllib.parse import urlparse\n \n class upload(orig.upload):\n \"\"\"\n", "issue": "Impossible to install packages with setuptools v.40.6.x, if six isn't installed\nUpgrade pip and setuptools to latest versions\r\n\r\n```\r\noleg$ pip install --upgrade pip setuptools\r\nCollecting pip\r\n Using cached https://files.pythonhosted.org/packages/c2/d7/90f34cb0d83a6c5631cf71dfe64cc1054598c843a92b400e55675cc2ac37/pip-18.1-py2.py3-none-any.whl\r\nCollecting setuptools\r\n Using cached https://files.pythonhosted.org/packages/4b/47/1417da90ed6f4c88465d08ea2461ff41c94cc6cc223f333d130d7a99199a/setuptools-40.6.1-py2.py3-none-any.whl\r\nInstalling collected packages: pip, setuptools\r\n Found existing installation: pip 9.0.1\r\n Uninstalling pip-9.0.1:\r\n Successfully uninstalled pip-9.0.1\r\n Found existing installation: setuptools 38.2.4\r\n Uninstalling setuptools-38.2.4:\r\n Successfully uninstalled setuptools-38.2.4\r\nSuccessfully installed pip-18.1 setuptools-40.6.1\r\n```\r\n\r\nTry to install any package, d2to1 for example\r\n\r\n```\r\noleg$ pip install d2to1\r\nCollecting d2to1\r\n Downloading https://files.pythonhosted.org/packages/dc/bd/eac45e4e77d76f6c0ae539819c40f1babb891d7855129663e37957a7c2df/d2to1-0.2.12.post1.tar.gz\r\n Complete output from command python setup.py egg_info:\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/private/var/folders/ns/71p61z5s3hsd2pv327gmdh1c0000gn/T/pip-install-2J_LhF/d2to1/setup.py\", line 17, in <module>\r\n setup(**cfg_to_args())\r\n File \"d2to1/util.py\", line 204, in cfg_to_args\r\n wrap_commands(kwargs)\r\n File \"d2to1/util.py\", line 439, in wrap_commands\r\n for cmd, _ in dist.get_command_list():\r\n File \"/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/setuptools/dist.py\", line 724, in get_command_list\r\n cmdclass = ep.resolve()\r\n File \"/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/pkg_resources/__init__.py\", line 2352, in resolve\r\n module = __import__(self.module_name, fromlist=['__name__'], level=0)\r\n File \"/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/setuptools/command/upload_docs.py\", line 23, in <module>\r\n from .upload import upload\r\n File \"/Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages/setuptools/command/upload.py\", line 15, in <module>\r\n from six.moves.urllib.request import urlopen, Request\r\n ImportError: No module named six.moves.urllib.request\r\n \r\n ----------------------------------------\r\nCommand \"python setup.py egg_info\" failed with error code 1 in /private/var/folders/ns/71p61z5s3hsd2pv327gmdh1c0000gn/T/pip-install-2J_LhF/d2to1/\r\n````\r\n\r\n```\r\noleg$ pip list\r\nPackage Version \r\n---------- ------------\r\nd2to1 0.2.12.post1\r\npip 18.1 \r\nsetuptools 40.6.1 \r\nwheel 0.30.0\r\n```\r\n\r\nDowngrade setuptools to v.40.5.0, and try installation again\r\n\r\n```\r\noleg$ pip install --upgrade setuptools==40.5.0\r\nCollecting setuptools==40.5.0\r\n 
Downloading https://files.pythonhosted.org/packages/82/a1/ba6fb41367b375f5cb653d1317d8ca263c636cff6566e2da1b0da716069d/setuptools-40.5.0-py2.py3-none-any.whl (569kB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 573kB 754kB/s \r\nInstalling collected packages: setuptools\r\n Found existing installation: setuptools 40.6.1\r\n Uninstalling setuptools-40.6.1:\r\n Successfully uninstalled setuptools-40.6.1\r\nSuccessfully installed setuptools-40.5.0\r\n\r\noleg$ pip install d2to1\r\nCollecting d2to1\r\n Using cached https://files.pythonhosted.org/packages/dc/bd/eac45e4e77d76f6c0ae539819c40f1babb891d7855129663e37957a7c2df/d2to1-0.2.12.post1.tar.gz\r\nRequirement already satisfied: setuptools in /Users/oleg/.virtualenvs/yandex/lib/python2.7/site-packages (from d2to1) (40.5.0)\r\nBuilding wheels for collected packages: d2to1\r\n Running setup.py bdist_wheel for d2to1 ... done\r\n Stored in directory: /Users/oleg/Library/Caches/pip/wheels/e6/1a/ed/11531583d510d72448e39bfc254147d0e7b2b2ad65722b3a6f\r\nSuccessfully built d2to1\r\nInstalling collected packages: d2to1\r\nSuccessfully installed d2to1-0.2.12.post1\r\n```\n", "before_files": [{"content": "import io\nimport os\nimport hashlib\nimport getpass\nimport platform\n\nfrom base64 import standard_b64encode\n\nfrom distutils import log\nfrom distutils.command import upload as orig\nfrom distutils.spawn import spawn\n\nfrom distutils.errors import DistutilsError\n\nfrom six.moves.urllib.request import urlopen, Request\nfrom six.moves.urllib.error import HTTPError\nfrom six.moves.urllib.parse import urlparse\n\nclass upload(orig.upload):\n \"\"\"\n Override default upload behavior to obtain password\n in a variety of different ways.\n \"\"\"\n def run(self):\n try:\n orig.upload.run(self)\n finally:\n self.announce(\n \"WARNING: Uploading via this command is deprecated, use twine \"\n \"to upload instead (https://pypi.org/p/twine/)\",\n log.WARN\n )\n\n def finalize_options(self):\n orig.upload.finalize_options(self)\n self.username = (\n self.username or\n getpass.getuser()\n )\n # Attempt to obtain password. 
Short circuit evaluation at the first\n # sign of success.\n self.password = (\n self.password or\n self._load_password_from_keyring() or\n self._prompt_for_password()\n )\n\n def upload_file(self, command, pyversion, filename):\n # Makes sure the repository URL is compliant\n schema, netloc, url, params, query, fragments = \\\n urlparse(self.repository)\n if params or query or fragments:\n raise AssertionError(\"Incompatible url %s\" % self.repository)\n\n if schema not in ('http', 'https'):\n raise AssertionError(\"unsupported schema \" + schema)\n\n # Sign if requested\n if self.sign:\n gpg_args = [\"gpg\", \"--detach-sign\", \"-a\", filename]\n if self.identity:\n gpg_args[2:2] = [\"--local-user\", self.identity]\n spawn(gpg_args,\n dry_run=self.dry_run)\n\n # Fill in the data - send all the meta-data in case we need to\n # register a new release\n with open(filename, 'rb') as f:\n content = f.read()\n\n meta = self.distribution.metadata\n\n data = {\n # action\n ':action': 'file_upload',\n 'protocol_version': '1',\n\n # identify release\n 'name': meta.get_name(),\n 'version': meta.get_version(),\n\n # file content\n 'content': (os.path.basename(filename),content),\n 'filetype': command,\n 'pyversion': pyversion,\n 'md5_digest': hashlib.md5(content).hexdigest(),\n\n # additional meta-data\n 'metadata_version': str(meta.get_metadata_version()),\n 'summary': meta.get_description(),\n 'home_page': meta.get_url(),\n 'author': meta.get_contact(),\n 'author_email': meta.get_contact_email(),\n 'license': meta.get_licence(),\n 'description': meta.get_long_description(),\n 'keywords': meta.get_keywords(),\n 'platform': meta.get_platforms(),\n 'classifiers': meta.get_classifiers(),\n 'download_url': meta.get_download_url(),\n # PEP 314\n 'provides': meta.get_provides(),\n 'requires': meta.get_requires(),\n 'obsoletes': meta.get_obsoletes(),\n }\n\n data['comment'] = ''\n\n if self.sign:\n data['gpg_signature'] = (os.path.basename(filename) + \".asc\",\n open(filename+\".asc\", \"rb\").read())\n\n # set up the authentication\n user_pass = (self.username + \":\" + self.password).encode('ascii')\n # The exact encoding of the authentication string is debated.\n # Anyway PyPI only accepts ascii for both username or password.\n auth = \"Basic \" + standard_b64encode(user_pass).decode('ascii')\n\n # Build up the MIME payload for the POST data\n boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'\n sep_boundary = b'\\r\\n--' + boundary.encode('ascii')\n end_boundary = sep_boundary + b'--\\r\\n'\n body = io.BytesIO()\n for key, value in data.items():\n title = '\\r\\nContent-Disposition: form-data; name=\"%s\"' % key\n # handle multiple entries for the same name\n if not isinstance(value, list):\n value = [value]\n for value in value:\n if type(value) is tuple:\n title += '; filename=\"%s\"' % value[0]\n value = value[1]\n else:\n value = str(value).encode('utf-8')\n body.write(sep_boundary)\n body.write(title.encode('utf-8'))\n body.write(b\"\\r\\n\\r\\n\")\n body.write(value)\n body.write(end_boundary)\n body = body.getvalue()\n\n msg = \"Submitting %s to %s\" % (filename, self.repository)\n self.announce(msg, log.INFO)\n\n # build the Request\n headers = {\n 'Content-type': 'multipart/form-data; boundary=%s' % boundary,\n 'Content-length': str(len(body)),\n 'Authorization': auth,\n }\n\n request = Request(self.repository, data=body,\n headers=headers)\n # send the data\n try:\n result = urlopen(request)\n status = result.getcode()\n reason = result.msg\n except HTTPError as e:\n status = 
e.code\n reason = e.msg\n except OSError as e:\n self.announce(str(e), log.ERROR)\n raise\n\n if status == 200:\n self.announce('Server response (%s): %s' % (status, reason),\n log.INFO)\n if self.show_response:\n text = getattr(self, '_read_pypi_response',\n lambda x: None)(result)\n if text is not None:\n msg = '\\n'.join(('-' * 75, text, '-' * 75))\n self.announce(msg, log.INFO)\n else:\n msg = 'Upload failed (%s): %s' % (status, reason)\n self.announce(msg, log.ERROR)\n raise DistutilsError(msg)\n\n def _load_password_from_keyring(self):\n \"\"\"\n Attempt to load password from keyring. Suppress Exceptions.\n \"\"\"\n try:\n keyring = __import__('keyring')\n return keyring.get_password(self.repository, self.username)\n except Exception:\n pass\n\n def _prompt_for_password(self):\n \"\"\"\n Prompt for a password on the tty. Suppress Exceptions.\n \"\"\"\n try:\n return getpass.getpass()\n except (Exception, KeyboardInterrupt):\n pass\n", "path": "setuptools/command/upload.py"}]} | 3,907 | 135 |
gh_patches_debug_33721 | rasdani/github-patches | git_diff | docker__docker-py-1178 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support create network EnableIPv6 and Labels options
Check the remote API:
https://docs.docker.com/engine/reference/api/docker_remote_api_v1.23/#create-a-network
There are two missing JSON parameters:
```
EnableIPv6 - Enable IPv6 on the network
Labels - Labels to set on the network, specified as a map: {"key":"value" [,"key2":"value2"]}
```
</issue>
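For reference, the request body that `/networks/create` accepts once these options are wired through looks roughly like the payload below. The field names come from the v1.23 API documentation linked above; the commented-out client call is illustrative rather than docker-py's actual code.

```python
import json

payload = {
    "Name": "isolated_nw",
    "Driver": "bridge",
    "CheckDuplicate": True,
    "EnableIPv6": True,                               # missing option 1
    "Labels": {"key": "value", "key2": "value2"},     # missing option 2
}

# A client-side create_network(...) would then POST this payload, roughly:
#   self._post_json(self._url("/networks/create"), data=payload)
print(json.dumps(payload, indent=2))
```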
<code>
[start of docker/api/network.py]
1 import json
2
3 from ..errors import InvalidVersion
4 from ..utils import check_resource, minimum_version
5 from ..utils import version_lt
6
7
8 class NetworkApiMixin(object):
9 @minimum_version('1.21')
10 def networks(self, names=None, ids=None):
11 filters = {}
12 if names:
13 filters['name'] = names
14 if ids:
15 filters['id'] = ids
16
17 params = {'filters': json.dumps(filters)}
18
19 url = self._url("/networks")
20 res = self._get(url, params=params)
21 return self._result(res, json=True)
22
23 @minimum_version('1.21')
24 def create_network(self, name, driver=None, options=None, ipam=None,
25 check_duplicate=None, internal=False):
26 if options is not None and not isinstance(options, dict):
27 raise TypeError('options must be a dictionary')
28
29 data = {
30 'Name': name,
31 'Driver': driver,
32 'Options': options,
33 'IPAM': ipam,
34 'CheckDuplicate': check_duplicate
35 }
36
37 if internal:
38 if version_lt(self._version, '1.22'):
39 raise InvalidVersion('Internal networks are not '
40 'supported in API version < 1.22')
41 data['Internal'] = True
42
43 url = self._url("/networks/create")
44 res = self._post_json(url, data=data)
45 return self._result(res, json=True)
46
47 @minimum_version('1.21')
48 def remove_network(self, net_id):
49 url = self._url("/networks/{0}", net_id)
50 res = self._delete(url)
51 self._raise_for_status(res)
52
53 @minimum_version('1.21')
54 def inspect_network(self, net_id):
55 url = self._url("/networks/{0}", net_id)
56 res = self._get(url)
57 return self._result(res, json=True)
58
59 @check_resource
60 @minimum_version('1.21')
61 def connect_container_to_network(self, container, net_id,
62 ipv4_address=None, ipv6_address=None,
63 aliases=None, links=None,
64 link_local_ips=None):
65 data = {
66 "Container": container,
67 "EndpointConfig": self.create_endpoint_config(
68 aliases=aliases, links=links, ipv4_address=ipv4_address,
69 ipv6_address=ipv6_address, link_local_ips=link_local_ips
70 ),
71 }
72
73 url = self._url("/networks/{0}/connect", net_id)
74 res = self._post_json(url, data=data)
75 self._raise_for_status(res)
76
77 @check_resource
78 @minimum_version('1.21')
79 def disconnect_container_from_network(self, container, net_id):
80 data = {"container": container}
81 url = self._url("/networks/{0}/disconnect", net_id)
82 res = self._post_json(url, data=data)
83 self._raise_for_status(res)
84
[end of docker/api/network.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/api/network.py b/docker/api/network.py
--- a/docker/api/network.py
+++ b/docker/api/network.py
@@ -22,7 +22,8 @@
@minimum_version('1.21')
def create_network(self, name, driver=None, options=None, ipam=None,
- check_duplicate=None, internal=False):
+ check_duplicate=None, internal=False, labels=None,
+ enable_ipv6=False):
if options is not None and not isinstance(options, dict):
raise TypeError('options must be a dictionary')
@@ -34,6 +35,22 @@
'CheckDuplicate': check_duplicate
}
+ if labels is not None:
+ if version_lt(self._version, '1.23'):
+ raise InvalidVersion(
+ 'network labels were introduced in API 1.23'
+ )
+ if not isinstance(labels, dict):
+ raise TypeError('labels must be a dictionary')
+ data["Labels"] = labels
+
+ if enable_ipv6:
+ if version_lt(self._version, '1.23'):
+ raise InvalidVersion(
+ 'enable_ipv6 was introduced in API 1.23'
+ )
+ data['EnableIPv6'] = True
+
if internal:
if version_lt(self._version, '1.22'):
raise InvalidVersion('Internal networks are not '
@@ -76,8 +93,15 @@
@check_resource
@minimum_version('1.21')
- def disconnect_container_from_network(self, container, net_id):
- data = {"container": container}
+ def disconnect_container_from_network(self, container, net_id,
+ force=False):
+ data = {"Container": container}
+ if force:
+ if version_lt(self._version, '1.22'):
+ raise InvalidVersion(
+ 'Forced disconnect was introduced in API 1.22'
+ )
+ data['Force'] = force
url = self._url("/networks/{0}/disconnect", net_id)
res = self._post_json(url, data=data)
self._raise_for_status(res)
| {"golden_diff": "diff --git a/docker/api/network.py b/docker/api/network.py\n--- a/docker/api/network.py\n+++ b/docker/api/network.py\n@@ -22,7 +22,8 @@\n \n @minimum_version('1.21')\n def create_network(self, name, driver=None, options=None, ipam=None,\n- check_duplicate=None, internal=False):\n+ check_duplicate=None, internal=False, labels=None,\n+ enable_ipv6=False):\n if options is not None and not isinstance(options, dict):\n raise TypeError('options must be a dictionary')\n \n@@ -34,6 +35,22 @@\n 'CheckDuplicate': check_duplicate\n }\n \n+ if labels is not None:\n+ if version_lt(self._version, '1.23'):\n+ raise InvalidVersion(\n+ 'network labels were introduced in API 1.23'\n+ )\n+ if not isinstance(labels, dict):\n+ raise TypeError('labels must be a dictionary')\n+ data[\"Labels\"] = labels\n+\n+ if enable_ipv6:\n+ if version_lt(self._version, '1.23'):\n+ raise InvalidVersion(\n+ 'enable_ipv6 was introduced in API 1.23'\n+ )\n+ data['EnableIPv6'] = True\n+\n if internal:\n if version_lt(self._version, '1.22'):\n raise InvalidVersion('Internal networks are not '\n@@ -76,8 +93,15 @@\n \n @check_resource\n @minimum_version('1.21')\n- def disconnect_container_from_network(self, container, net_id):\n- data = {\"container\": container}\n+ def disconnect_container_from_network(self, container, net_id,\n+ force=False):\n+ data = {\"Container\": container}\n+ if force:\n+ if version_lt(self._version, '1.22'):\n+ raise InvalidVersion(\n+ 'Forced disconnect was introduced in API 1.22'\n+ )\n+ data['Force'] = force\n url = self._url(\"/networks/{0}/disconnect\", net_id)\n res = self._post_json(url, data=data)\n self._raise_for_status(res)\n", "issue": "Support create network EnableIPv6 and Labels options \nCheck the remote API:\nhttps://docs.docker.com/engine/reference/api/docker_remote_api_v1.23/#create-a-network\n\nThere are two missing JSON parameters:\n\n```\nEnableIPv6 - Enable IPv6 on the network\nLabels - Labels to set on the network, specified as a map: {\"key\":\"value\" [,\"key2\":\"value2\"]}\n```\n\n", "before_files": [{"content": "import json\n\nfrom ..errors import InvalidVersion\nfrom ..utils import check_resource, minimum_version\nfrom ..utils import version_lt\n\n\nclass NetworkApiMixin(object):\n @minimum_version('1.21')\n def networks(self, names=None, ids=None):\n filters = {}\n if names:\n filters['name'] = names\n if ids:\n filters['id'] = ids\n\n params = {'filters': json.dumps(filters)}\n\n url = self._url(\"/networks\")\n res = self._get(url, params=params)\n return self._result(res, json=True)\n\n @minimum_version('1.21')\n def create_network(self, name, driver=None, options=None, ipam=None,\n check_duplicate=None, internal=False):\n if options is not None and not isinstance(options, dict):\n raise TypeError('options must be a dictionary')\n\n data = {\n 'Name': name,\n 'Driver': driver,\n 'Options': options,\n 'IPAM': ipam,\n 'CheckDuplicate': check_duplicate\n }\n\n if internal:\n if version_lt(self._version, '1.22'):\n raise InvalidVersion('Internal networks are not '\n 'supported in API version < 1.22')\n data['Internal'] = True\n\n url = self._url(\"/networks/create\")\n res = self._post_json(url, data=data)\n return self._result(res, json=True)\n\n @minimum_version('1.21')\n def remove_network(self, net_id):\n url = self._url(\"/networks/{0}\", net_id)\n res = self._delete(url)\n self._raise_for_status(res)\n\n @minimum_version('1.21')\n def inspect_network(self, net_id):\n url = self._url(\"/networks/{0}\", net_id)\n res = self._get(url)\n return 
self._result(res, json=True)\n\n @check_resource\n @minimum_version('1.21')\n def connect_container_to_network(self, container, net_id,\n ipv4_address=None, ipv6_address=None,\n aliases=None, links=None,\n link_local_ips=None):\n data = {\n \"Container\": container,\n \"EndpointConfig\": self.create_endpoint_config(\n aliases=aliases, links=links, ipv4_address=ipv4_address,\n ipv6_address=ipv6_address, link_local_ips=link_local_ips\n ),\n }\n\n url = self._url(\"/networks/{0}/connect\", net_id)\n res = self._post_json(url, data=data)\n self._raise_for_status(res)\n\n @check_resource\n @minimum_version('1.21')\n def disconnect_container_from_network(self, container, net_id):\n data = {\"container\": container}\n url = self._url(\"/networks/{0}/disconnect\", net_id)\n res = self._post_json(url, data=data)\n self._raise_for_status(res)\n", "path": "docker/api/network.py"}]} | 1,417 | 480 |
gh_patches_debug_30024 | rasdani/github-patches | git_diff | vispy__vispy-2144 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add ability to pass "fpos" as a parameter to the ColorFilter
Hi all,
I am currently trying to use the ```ColorFilter``` (https://github.com/vispy/vispy/blob/main/vispy/visuals/filters/color.py) in a project along with several other filters, which I need to place in a specific order. However, right now ```fpos``` cannot be passed as a parameter to ```ColorFilter```, which always uses 8:
```
def __init__(self, filter=(1., 1., 1., 1.)):
super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)
self.filter = filter
```
Is it possible to change this so the user can specify any position for this filter?
Thanks so much,
Clare
</issue>
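Until `ColorFilter` itself accepts `fpos` (as the patch further down does), one stop-gap is a small subclass that bypasses the hard-coded value and hands the position straight to the base `Filter`. This is a sketch that assumes an existing vispy install; `OrderedColorFilter` is an invented name, not vispy API.

```python
from vispy.visuals.filters import ColorFilter


class OrderedColorFilter(ColorFilter):
    """ColorFilter whose shader hook position is chosen by the caller."""

    def __init__(self, filter=(1., 1., 1., 1.), fpos=8):
        # Skip ColorFilter.__init__ (which hard-codes fpos=8) and call the
        # base Filter directly with the requested position.
        super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos)
        self.filter = filter


# Place the colour tint earlier than the default position 8 so other
# attached filters can run after it.
tint = OrderedColorFilter(filter=(1., 0.5, 0.5, 1.), fpos=4)
print(tint.filter)
```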
<code>
[start of vispy/visuals/filters/color.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 from .base_filter import Filter
6 from ..shaders import Function, Varying
7 from ...color import colormap, Color
8
9
10 class IsolineFilter(Filter):
11 FRAG_SHADER = """
12 void isoline() {
13 if ($isolevel <= 0. || $isowidth <= 0.) {
14 return;
15 }
16
17 // function taken from glumpy/examples/isocurves.py
18 // and extended to have level, width, color and antialiasing
19 // as parameters
20
21 // Extract data value
22 // this accounts for perception,
23 // have to decide, which one to use or make this a uniform
24 const vec3 w = vec3(0.299, 0.587, 0.114);
25 //const vec3 w = vec3(0.2126, 0.7152, 0.0722);
26 float value = dot(gl_FragColor.rgb, w);
27
28 // setup lw, aa
29 float linewidth = $isowidth + $antialias;
30
31 // "middle" contour(s) dividing upper and lower half
32 // but only if isolevel is even
33 if( mod($isolevel,2.0) == 0.0 ) {
34 if( length(value - 0.5) < 0.5 / $isolevel)
35 linewidth = linewidth * 2;
36 }
37
38 // Trace contour isoline
39 float v = $isolevel * value - 0.5;
40 float dv = linewidth/2.0 * fwidth(v);
41 float f = abs(fract(v) - 0.5);
42 float d = smoothstep(-dv, +dv, f);
43 float t = linewidth/2.0 - $antialias;
44 d = abs(d)*linewidth/2.0 - t;
45
46 if( d < - linewidth ) {
47 d = 1.0;
48 } else {
49 d /= $antialias;
50 }
51
52 // setup foreground
53 vec4 fc = $isocolor;
54
55 // mix with background
56 if (d < 1.) {
57 gl_FragColor = mix(gl_FragColor, fc, 1-d);
58 }
59
60 }
61 """
62
63 def __init__(self, level=2., width=2.0, antialias=1.0, color='black'):
64 super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER)
65
66 self.level = level
67 self.width = width
68 self.color = color
69 self.antialias = antialias
70
71 @property
72 def level(self):
73 return self._level
74
75 @level.setter
76 def level(self, lev):
77 if lev <= 0:
78 lev = 0
79 self._level = lev
80 self.fshader['isolevel'] = float(lev)
81
82 @property
83 def width(self):
84 return self._width
85
86 @width.setter
87 def width(self, w):
88 self._width = w
89 self.fshader['isowidth'] = float(w)
90
91 @property
92 def color(self):
93 return self._color
94
95 @color.setter
96 def color(self, c):
97 self._color = c
98 self.fshader['isocolor'] = Color(c).rgba
99
100 @property
101 def antialias(self):
102 return self._antialias
103
104 @antialias.setter
105 def antialias(self, a):
106 self._antialias = a
107 self.fshader['antialias'] = float(a)
108
109
110 class Alpha(Filter):
111 FRAG_SHADER = """
112 void apply_alpha() {
113 gl_FragColor.a = gl_FragColor.a * $alpha;
114 }
115 """
116
117 def __init__(self, alpha=1.0):
118 super(Alpha, self).__init__(fcode=self.FRAG_SHADER)
119
120 self.alpha = alpha
121
122 @property
123 def alpha(self):
124 return self._alpha
125
126 @alpha.setter
127 def alpha(self, a):
128 self._alpha = a
129 self.fshader['alpha'] = float(a)
130
131
132 class ColorFilter(Filter):
133 FRAG_SHADER = """
134 void apply_color_filter() {
135 gl_FragColor = gl_FragColor * $filter;
136 }
137 """
138
139 def __init__(self, filter=(1., 1., 1., 1.)):
140 super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)
141
142 self.filter = filter
143
144 @property
145 def filter(self):
146 return self._filter
147
148 @filter.setter
149 def filter(self, f):
150 self._filter = tuple(f)
151 self.fshader['filter'] = self._filter
152
153
154 class ZColormapFilter(Filter):
155 FRAG_SHADER = """
156 void z_colormap_support() {
157 $zval = $position.z;
158 }
159 """
160 VERT_SHADER = """
161 void apply_z_colormap() {
162 gl_FragColor = $cmap(($zval - $zrange.x) /
163 ($zrange.y - $zrange.x));
164 }
165 """
166
167 def __init__(self, cmap, zrange=(0., 1.)):
168 super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=3,
169 vcode=self.VERT_SHADER, vpos=9)
170
171 if isinstance(cmap, str):
172 cmap = colormap.get_colormap(cmap)
173 self.cmap = Function(cmap.glsl_map)
174 self.fshader['cmap'] = self.cmap
175 self.fshader['zrange'] = zrange
176 self.vshader['zval'] = Varying('v_zval', dtype='float')
177 self.fshader['zval'] = self.vshader['zval']
178
179 def _attach(self, visual):
180 super(ZColormapFilter, self)._attach(visual)
181 self.vshader['position'] = visual.shared_program.vert['position']
182
[end of vispy/visuals/filters/color.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vispy/visuals/filters/color.py b/vispy/visuals/filters/color.py
--- a/vispy/visuals/filters/color.py
+++ b/vispy/visuals/filters/color.py
@@ -60,8 +60,8 @@
}
"""
- def __init__(self, level=2., width=2.0, antialias=1.0, color='black'):
- super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER)
+ def __init__(self, level=2., width=2.0, antialias=1.0, color='black', **kwargs):
+ super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER, **kwargs)
self.level = level
self.width = width
@@ -114,8 +114,8 @@
}
"""
- def __init__(self, alpha=1.0):
- super(Alpha, self).__init__(fcode=self.FRAG_SHADER)
+ def __init__(self, alpha=1.0, **kwargs):
+ super(Alpha, self).__init__(fcode=self.FRAG_SHADER, **kwargs)
self.alpha = alpha
@@ -136,8 +136,8 @@
}
"""
- def __init__(self, filter=(1., 1., 1., 1.)):
- super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)
+ def __init__(self, filter=(1., 1., 1., 1.), fpos=8, **kwargs):
+ super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos, **kwargs)
self.filter = filter
@@ -164,9 +164,9 @@
}
"""
- def __init__(self, cmap, zrange=(0., 1.)):
- super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=3,
- vcode=self.VERT_SHADER, vpos=9)
+ def __init__(self, cmap, zrange=(0., 1.), fpos=3, vpos=9, **kwargs):
+ super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos,
+ vcode=self.VERT_SHADER, vpos=vpos, **kwargs)
if isinstance(cmap, str):
cmap = colormap.get_colormap(cmap)
| {"golden_diff": "diff --git a/vispy/visuals/filters/color.py b/vispy/visuals/filters/color.py\n--- a/vispy/visuals/filters/color.py\n+++ b/vispy/visuals/filters/color.py\n@@ -60,8 +60,8 @@\n }\n \"\"\"\n \n- def __init__(self, level=2., width=2.0, antialias=1.0, color='black'):\n- super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER)\n+ def __init__(self, level=2., width=2.0, antialias=1.0, color='black', **kwargs):\n+ super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER, **kwargs)\n \n self.level = level\n self.width = width\n@@ -114,8 +114,8 @@\n }\n \"\"\"\n \n- def __init__(self, alpha=1.0):\n- super(Alpha, self).__init__(fcode=self.FRAG_SHADER)\n+ def __init__(self, alpha=1.0, **kwargs):\n+ super(Alpha, self).__init__(fcode=self.FRAG_SHADER, **kwargs)\n \n self.alpha = alpha\n \n@@ -136,8 +136,8 @@\n }\n \"\"\"\n \n- def __init__(self, filter=(1., 1., 1., 1.)):\n- super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)\n+ def __init__(self, filter=(1., 1., 1., 1.), fpos=8, **kwargs):\n+ super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos, **kwargs)\n \n self.filter = filter\n \n@@ -164,9 +164,9 @@\n }\n \"\"\"\n \n- def __init__(self, cmap, zrange=(0., 1.)):\n- super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=3,\n- vcode=self.VERT_SHADER, vpos=9)\n+ def __init__(self, cmap, zrange=(0., 1.), fpos=3, vpos=9, **kwargs):\n+ super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos,\n+ vcode=self.VERT_SHADER, vpos=vpos, **kwargs)\n \n if isinstance(cmap, str):\n cmap = colormap.get_colormap(cmap)\n", "issue": "Add ability to pass \"fpos\" as a parameter to the ColorFilter\nHi all,\r\nI am currently trying to use the ```ColorFilter``` (https://github.com/vispy/vispy/blob/main/vispy/visuals/filters/color.py) in a project along with several other filters, which I need to be placed in a specific order. However, right now, ```fpos``` cannot be passed as a parameter to ```ColorFilter```, which is always using 8:\r\n```\r\n def __init__(self, filter=(1., 1., 1., 1.)):\r\n super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)\r\n\r\n self.filter = filter\r\n```\r\n\r\nIs it possible to change this so the user can specify any position for this filter?\r\n\r\nThanks so much,\r\nClare\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\nfrom .base_filter import Filter\nfrom ..shaders import Function, Varying\nfrom ...color import colormap, Color\n\n\nclass IsolineFilter(Filter):\n FRAG_SHADER = \"\"\"\n void isoline() {\n if ($isolevel <= 0. || $isowidth <= 0.) 
{\n return;\n }\n\n // function taken from glumpy/examples/isocurves.py\n // and extended to have level, width, color and antialiasing\n // as parameters\n\n // Extract data value\n // this accounts for perception,\n // have to decide, which one to use or make this a uniform\n const vec3 w = vec3(0.299, 0.587, 0.114);\n //const vec3 w = vec3(0.2126, 0.7152, 0.0722);\n float value = dot(gl_FragColor.rgb, w);\n\n // setup lw, aa\n float linewidth = $isowidth + $antialias;\n\n // \"middle\" contour(s) dividing upper and lower half\n // but only if isolevel is even\n if( mod($isolevel,2.0) == 0.0 ) {\n if( length(value - 0.5) < 0.5 / $isolevel)\n linewidth = linewidth * 2;\n }\n\n // Trace contour isoline\n float v = $isolevel * value - 0.5;\n float dv = linewidth/2.0 * fwidth(v);\n float f = abs(fract(v) - 0.5);\n float d = smoothstep(-dv, +dv, f);\n float t = linewidth/2.0 - $antialias;\n d = abs(d)*linewidth/2.0 - t;\n\n if( d < - linewidth ) {\n d = 1.0;\n } else {\n d /= $antialias;\n }\n\n // setup foreground\n vec4 fc = $isocolor;\n\n // mix with background\n if (d < 1.) {\n gl_FragColor = mix(gl_FragColor, fc, 1-d);\n }\n\n }\n \"\"\"\n\n def __init__(self, level=2., width=2.0, antialias=1.0, color='black'):\n super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER)\n\n self.level = level\n self.width = width\n self.color = color\n self.antialias = antialias\n\n @property\n def level(self):\n return self._level\n\n @level.setter\n def level(self, lev):\n if lev <= 0:\n lev = 0\n self._level = lev\n self.fshader['isolevel'] = float(lev)\n\n @property\n def width(self):\n return self._width\n\n @width.setter\n def width(self, w):\n self._width = w\n self.fshader['isowidth'] = float(w)\n\n @property\n def color(self):\n return self._color\n\n @color.setter\n def color(self, c):\n self._color = c\n self.fshader['isocolor'] = Color(c).rgba\n\n @property\n def antialias(self):\n return self._antialias\n\n @antialias.setter\n def antialias(self, a):\n self._antialias = a\n self.fshader['antialias'] = float(a)\n\n\nclass Alpha(Filter):\n FRAG_SHADER = \"\"\"\n void apply_alpha() {\n gl_FragColor.a = gl_FragColor.a * $alpha;\n }\n \"\"\"\n\n def __init__(self, alpha=1.0):\n super(Alpha, self).__init__(fcode=self.FRAG_SHADER)\n\n self.alpha = alpha\n\n @property\n def alpha(self):\n return self._alpha\n\n @alpha.setter\n def alpha(self, a):\n self._alpha = a\n self.fshader['alpha'] = float(a)\n\n\nclass ColorFilter(Filter):\n FRAG_SHADER = \"\"\"\n void apply_color_filter() {\n gl_FragColor = gl_FragColor * $filter;\n }\n \"\"\"\n\n def __init__(self, filter=(1., 1., 1., 1.)):\n super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)\n\n self.filter = filter\n\n @property\n def filter(self):\n return self._filter\n\n @filter.setter\n def filter(self, f):\n self._filter = tuple(f)\n self.fshader['filter'] = self._filter\n\n\nclass ZColormapFilter(Filter):\n FRAG_SHADER = \"\"\"\n void z_colormap_support() {\n $zval = $position.z;\n }\n \"\"\"\n VERT_SHADER = \"\"\"\n void apply_z_colormap() {\n gl_FragColor = $cmap(($zval - $zrange.x) /\n ($zrange.y - $zrange.x));\n }\n \"\"\"\n\n def __init__(self, cmap, zrange=(0., 1.)):\n super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=3,\n vcode=self.VERT_SHADER, vpos=9)\n\n if isinstance(cmap, str):\n cmap = colormap.get_colormap(cmap)\n self.cmap = Function(cmap.glsl_map)\n self.fshader['cmap'] = self.cmap\n self.fshader['zrange'] = zrange\n self.vshader['zval'] = Varying('v_zval', dtype='float')\n self.fshader['zval'] = 
self.vshader['zval']\n\n def _attach(self, visual):\n super(ZColormapFilter, self)._attach(visual)\n self.vshader['position'] = visual.shared_program.vert['position']\n", "path": "vispy/visuals/filters/color.py"}]} | 2,513 | 567 |
gh_patches_debug_40239 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4914 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
securedrop-admin (setup) fails in Tails 4.0-rc1
## Description
Running `./securedrop-admin setup` in tails 4.0-rc1 (upgraded from 3.16) returns the following error:
```
INFO: Virtualenv already exists, not creating
INFO: Checking Python dependencies for securedrop-admin
ERROR: Failed to install pip dependencies. Check network connection and try again.
```
This was done in VMs, will try to reproduce on hardware.
## Steps to Reproduce
1. Upgrade Tails device from 3.16 to 4.0 (Boot into 4.0-rc1 drive and clone to upgrade 3.16 drive)
2. Boot into newly-upgraded tails drive
3. Verify that the internet is working (tor is bootstrapped, you can reach an external website over tor)
4. check out `1.1.0~rc2` tag
5. Run `./securedrop-admin setup` in ~/Persistent/securedrop
6. Observe error
## Expected Behavior
Securedrop-admin should run and the dependencies should be installed.
## Actual Behavior
Securedrop-admin fails and returns an error, the dependencies are not installed
## Comments
Per https://github.com/freedomofpress/securedrop/pull/4852/files#diff-b5e536cc161fcc0d62e661b4d6eae381R70-R73
When running the commands locally, I get
* `lsb_release --id --short` returns `Debian`
* `uname -a` returns `Linux amnesia 5.3.0-trunk-amd64 #1 SMF Debian 5.3.2-1~exp1 (2019-10-02) x86_64 GNU/Linux`
When i run ./securedrop-admin with no parameter, I get:
```
amnesia@amnesia:~/Persistent/securedrop$ ./securedrop-admin help
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'
Current thread 0x00007cf687450740 (most recent call first):
Aborted
```
</issue>
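One detail worth noting when reproducing the commands quoted above under Tails 4: `subprocess.check_output` returns `bytes` on Python 3, so a comparison against the str `'Debian'` (as done in `is_tails()` in `admin/bootstrap.py`) silently evaluates to False. A minimal sketch of the mismatch, assuming a Debian-based Tails 4 system:

```python
import subprocess

# check_output returns bytes under Python 3, not str.
release = subprocess.check_output('lsb_release --id --short', shell=True).strip()

print(release)                        # b'Debian'
print(release == 'Debian')            # False -- bytes never compare equal to str
print(release == b'Debian')           # True
print(release.decode() == 'Debian')   # True
```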
<code>
[start of admin/bootstrap.py]
1 # -*- mode: python; coding: utf-8 -*-
2 #
3 # Copyright (C) 2013-2018 Freedom of the Press Foundation & al
4 # Copyright (C) 2018 Loic Dachary <[email protected]>
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19
20 import argparse
21 import logging
22 import os
23 import subprocess
24 import sys
25
26 sdlog = logging.getLogger(__name__)
27
28 DIR = os.path.dirname(os.path.realpath(__file__))
29 VENV_DIR = os.path.join(DIR, ".venv3")
30
31
32 def setup_logger(verbose=False):
33 """ Configure logging handler """
34 # Set default level on parent
35 sdlog.setLevel(logging.DEBUG)
36 level = logging.DEBUG if verbose else logging.INFO
37
38 stdout = logging.StreamHandler(sys.stdout)
39 stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
40 stdout.setLevel(level)
41 sdlog.addHandler(stdout)
42
43
44 def run_command(command):
45 """
46 Wrapper function to display stdout for running command,
47 similar to how shelling out in a Bash script displays rolling output.
48
49 Yields a list of the stdout from the `command`, and raises a
50 CalledProcessError if `command` returns non-zero.
51 """
52 popen = subprocess.Popen(command,
53 stdout=subprocess.PIPE,
54 stderr=subprocess.STDOUT)
55 for stdout_line in iter(popen.stdout.readline, b""):
56 yield stdout_line
57 popen.stdout.close()
58 return_code = popen.wait()
59 if return_code:
60 raise subprocess.CalledProcessError(return_code, command)
61
62
63 def is_tails():
64 try:
65 id = subprocess.check_output('lsb_release --id --short',
66 shell=True).strip()
67 except subprocess.CalledProcessError:
68 id = None
69
70 # dirty hack to unreliably detect Tails 4.0~beta2
71 if id == 'Debian':
72 if os.uname()[1] == 'amnesia':
73 id = 'Tails'
74
75 return id == 'Tails'
76
77
78 def maybe_torify():
79 if is_tails():
80 return ['torify']
81 else:
82 return []
83
84
85 def install_apt_dependencies(args):
86 """
87 Install apt dependencies in Tails. In order to install Ansible in
88 a virtualenv, first there are a number of Python prerequisites.
89 """
90 sdlog.info("Installing SecureDrop Admin dependencies")
91 sdlog.info(("You'll be prompted for the temporary Tails admin password,"
92 " which was set on Tails login screen"))
93
94 apt_command = ['sudo', 'su', '-c',
95 "apt-get update && \
96 apt-get -q -o=Dpkg::Use-Pty=0 install -y \
97 python3-virtualenv \
98 python3-yaml \
99 python3-pip \
100 ccontrol \
101 virtualenv \
102 libffi-dev \
103 libssl-dev \
104 libpython3-dev",
105 ]
106
107 try:
108 # Print command results in real-time, to keep Admin apprised
109 # of progress during long-running command.
110 for output_line in run_command(apt_command):
111 print(output_line.decode('utf-8').rstrip())
112 except subprocess.CalledProcessError:
113 # Tails supports apt persistence, which was used by SecureDrop
114 # under Tails 2.x. If updates are being applied, don't try to pile
115 # on with more apt requests.
116 sdlog.error(("Failed to install apt dependencies. Check network"
117 " connection and try again."))
118 raise
119
120
121 def envsetup(args):
122 """Installs Admin tooling required for managing SecureDrop. Specifically:
123
124 * updates apt-cache
125 * installs apt packages for Python virtualenv
126 * creates virtualenv
127 * installs pip packages inside virtualenv
128
129 The virtualenv is created within the Persistence volume in Tails, so that
130 Ansible is available to the Admin on subsequent boots without requiring
131 installation of packages again.
132 """
133 # virtualenv doesnt exist? Install dependencies and create
134 if not os.path.exists(VENV_DIR):
135
136 install_apt_dependencies(args)
137
138 # Technically you can create a virtualenv from within python
139 # but pip can only be run over tor on tails, and debugging that
140 # along with instaling a third-party dependency is not worth
141 # the effort here.
142 sdlog.info("Setting up virtualenv")
143 try:
144 sdlog.debug(subprocess.check_output(
145 maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],
146 stderr=subprocess.STDOUT))
147 except subprocess.CalledProcessError as e:
148 sdlog.debug(e.output)
149 sdlog.error(("Unable to create virtualenv. Check network settings"
150 " and try again."))
151 raise
152 else:
153 sdlog.info("Virtualenv already exists, not creating")
154
155 install_pip_dependencies(args)
156 if os.path.exists(os.path.join(DIR, 'setup.py')):
157 install_pip_self(args)
158
159 sdlog.info("Finished installing SecureDrop dependencies")
160
161
162 def install_pip_self(args):
163 pip_install_cmd = [
164 os.path.join(VENV_DIR, 'bin', 'pip3'),
165 'install', '-e', DIR
166 ]
167 try:
168 subprocess.check_output(maybe_torify() + pip_install_cmd,
169 stderr=subprocess.STDOUT)
170 except subprocess.CalledProcessError as e:
171 sdlog.debug(e.output)
172 sdlog.error("Unable to install self, run with -v for more information")
173 raise
174
175
176 def install_pip_dependencies(args, pip_install_cmd=[
177 os.path.join(VENV_DIR, 'bin', 'pip3'),
178 'install',
179 # Specify requirements file.
180 '-r', os.path.join(DIR, 'requirements.txt'),
181 '--require-hashes',
182 # Make sure to upgrade packages only if necessary.
183 '-U', '--upgrade-strategy', 'only-if-needed',
184 ]):
185 """
186 Install Python dependencies via pip into virtualenv.
187 """
188
189 sdlog.info("Checking Python dependencies for securedrop-admin")
190 try:
191 pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,
192 stderr=subprocess.STDOUT)
193 except subprocess.CalledProcessError as e:
194 sdlog.debug(e.output)
195 sdlog.error(("Failed to install pip dependencies. Check network"
196 " connection and try again."))
197 raise
198
199 sdlog.debug(pip_output)
200 if "Successfully installed" in str(pip_output):
201 sdlog.info("Python dependencies for securedrop-admin upgraded")
202 else:
203 sdlog.info("Python dependencies for securedrop-admin are up-to-date")
204
205
206 def parse_argv(argv):
207 parser = argparse.ArgumentParser()
208 parser.add_argument('-v', action='store_true', default=False,
209 help="Increase verbosity on output")
210 parser.set_defaults(func=envsetup)
211
212 return parser.parse_args(argv)
213
214
215 if __name__ == "__main__":
216 args = parse_argv(sys.argv[1:])
217 setup_logger(args.v)
218 if args.v:
219 args.func(args)
220 else:
221 try:
222 args.func(args)
223 except Exception:
224 sys.exit(1)
225 else:
226 sys.exit(0)
227
[end of admin/bootstrap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/admin/bootstrap.py b/admin/bootstrap.py
--- a/admin/bootstrap.py
+++ b/admin/bootstrap.py
@@ -20,6 +20,7 @@
import argparse
import logging
import os
+import shutil
import subprocess
import sys
@@ -68,13 +69,48 @@
id = None
# dirty hack to unreliably detect Tails 4.0~beta2
- if id == 'Debian':
+ if id == b'Debian':
if os.uname()[1] == 'amnesia':
id = 'Tails'
return id == 'Tails'
+def clean_up_tails3_venv(virtualenv_dir=VENV_DIR):
+ """
+ Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is
+ based on Debian Buster and uses libpython3.7. This means that the Tails 3.x
+ virtualenv will not work under Tails 4.x, and will need to be destroyed and
+ rebuilt. We can detect if the version of libpython is 3.5 in the
+ admin/.venv3/ folder, and delete it if that's the case. This will ensure a
+ smooth upgrade from Tails 3.x to Tails 4.x.
+ """
+ if is_tails():
+ try:
+ dist = subprocess.check_output('lsb_release --codename --short',
+ shell=True).strip()
+ except subprocess.CalledProcessError:
+ dist = None
+
+ # tails4 is based on buster
+ if dist == b'buster':
+ python_lib_path = os.path.join(virtualenv_dir, "lib/python3.5")
+ if os.path.exists(os.path.join(python_lib_path)):
+ sdlog.info(
+ "Tails 3 Python 3 virtualenv detected. "
+ "Removing it."
+ )
+ shutil.rmtree(virtualenv_dir)
+ sdlog.info("Tails 3 Python 3 virtualenv deleted.")
+
+
+def checkenv(args):
+ clean_up_tails3_venv(VENV_DIR)
+ if not os.path.exists(os.path.join(VENV_DIR, "bin/activate")):
+ sdlog.error('Please run "securedrop-admin setup".')
+ sys.exit(1)
+
+
def maybe_torify():
if is_tails():
return ['torify']
@@ -130,6 +166,9 @@
Ansible is available to the Admin on subsequent boots without requiring
installation of packages again.
"""
+ # clean up tails 3.x venv when migrating to tails 4.x
+ clean_up_tails3_venv(VENV_DIR)
+
# virtualenv doesnt exist? Install dependencies and create
if not os.path.exists(VENV_DIR):
@@ -209,18 +248,30 @@
help="Increase verbosity on output")
parser.set_defaults(func=envsetup)
+ subparsers = parser.add_subparsers()
+
+ envsetup_parser = subparsers.add_parser(
+ 'envsetup',
+ help='Set up the admin virtualenv.'
+ )
+ envsetup_parser.set_defaults(func=envsetup)
+
+ checkenv_parser = subparsers.add_parser(
+ 'checkenv',
+ help='Check that the admin virtualenv is properly set up.'
+ )
+ checkenv_parser.set_defaults(func=checkenv)
+
return parser.parse_args(argv)
if __name__ == "__main__":
args = parse_argv(sys.argv[1:])
setup_logger(args.v)
- if args.v:
+
+ try:
args.func(args)
+ except Exception:
+ sys.exit(1)
else:
- try:
- args.func(args)
- except Exception:
- sys.exit(1)
- else:
- sys.exit(0)
+ sys.exit(0)
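With the patch applied, the bootstrapper exposes explicit `envsetup` and `checkenv` subcommands in addition to the old default behaviour. A hypothetical smoke test of that CLI surface (it assumes `admin/` is importable as a package from the repository root):

```python
# Hypothetical check of the argument parsing added by the patch.
from admin.bootstrap import checkenv, envsetup, parse_argv

assert parse_argv([]).func is envsetup            # bare invocation keeps the old default
assert parse_argv(['envsetup']).func is envsetup  # explicit subcommand
assert parse_argv(['checkenv']).func is checkenv  # venv sanity check used by securedrop-admin
```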
| {"golden_diff": "diff --git a/admin/bootstrap.py b/admin/bootstrap.py\n--- a/admin/bootstrap.py\n+++ b/admin/bootstrap.py\n@@ -20,6 +20,7 @@\n import argparse\n import logging\n import os\n+import shutil\n import subprocess\n import sys\n \n@@ -68,13 +69,48 @@\n id = None\n \n # dirty hack to unreliably detect Tails 4.0~beta2\n- if id == 'Debian':\n+ if id == b'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n \n return id == 'Tails'\n \n \n+def clean_up_tails3_venv(virtualenv_dir=VENV_DIR):\n+ \"\"\"\n+ Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is\n+ based on Debian Buster and uses libpython3.7. This means that the Tails 3.x\n+ virtualenv will not work under Tails 4.x, and will need to be destroyed and\n+ rebuilt. We can detect if the version of libpython is 3.5 in the\n+ admin/.venv3/ folder, and delete it if that's the case. This will ensure a\n+ smooth upgrade from Tails 3.x to Tails 4.x.\n+ \"\"\"\n+ if is_tails():\n+ try:\n+ dist = subprocess.check_output('lsb_release --codename --short',\n+ shell=True).strip()\n+ except subprocess.CalledProcessError:\n+ dist = None\n+\n+ # tails4 is based on buster\n+ if dist == b'buster':\n+ python_lib_path = os.path.join(virtualenv_dir, \"lib/python3.5\")\n+ if os.path.exists(os.path.join(python_lib_path)):\n+ sdlog.info(\n+ \"Tails 3 Python 3 virtualenv detected. \"\n+ \"Removing it.\"\n+ )\n+ shutil.rmtree(virtualenv_dir)\n+ sdlog.info(\"Tails 3 Python 3 virtualenv deleted.\")\n+\n+\n+def checkenv(args):\n+ clean_up_tails3_venv(VENV_DIR)\n+ if not os.path.exists(os.path.join(VENV_DIR, \"bin/activate\")):\n+ sdlog.error('Please run \"securedrop-admin setup\".')\n+ sys.exit(1)\n+\n+\n def maybe_torify():\n if is_tails():\n return ['torify']\n@@ -130,6 +166,9 @@\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n+ # clean up tails 3.x venv when migrating to tails 4.x\n+ clean_up_tails3_venv(VENV_DIR)\n+\n # virtualenv doesnt exist? Install dependencies and create\n if not os.path.exists(VENV_DIR):\n \n@@ -209,18 +248,30 @@\n help=\"Increase verbosity on output\")\n parser.set_defaults(func=envsetup)\n \n+ subparsers = parser.add_subparsers()\n+\n+ envsetup_parser = subparsers.add_parser(\n+ 'envsetup',\n+ help='Set up the admin virtualenv.'\n+ )\n+ envsetup_parser.set_defaults(func=envsetup)\n+\n+ checkenv_parser = subparsers.add_parser(\n+ 'checkenv',\n+ help='Check that the admin virtualenv is properly set up.'\n+ )\n+ checkenv_parser.set_defaults(func=checkenv)\n+\n return parser.parse_args(argv)\n \n \n if __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n- if args.v:\n+\n+ try:\n args.func(args)\n+ except Exception:\n+ sys.exit(1)\n else:\n- try:\n- args.func(args)\n- except Exception:\n- sys.exit(1)\n- else:\n- sys.exit(0)\n+ sys.exit(0)\n", "issue": "securedrop-admin (setup) fails in Tails 4.0-rc1\n## Description\r\n\r\nRunning `./securedrop-admin setup` in tails 4.0-rc1 (upgraded from 3.16) returns the following error:\r\n\r\n```\r\nINFO: Virtualenv already exists, not creating\r\nINFO: Checking Python dependencies for securedrop-admin\r\nERROR: Failed to install pip dependencies. Check network connection and try again.\r\n```\r\nThis was done in VMs, will try to reproduce on hardware.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Upgrade Tails device from 3.16 to 4.0 (Boot into 4.0-rc1 drive and clone to upgrade 3.16 drive)\r\n2. Boot into newly-upgraded tails drive\r\n3. 
Verify that the internet is working (tor is bootstrapped, you can reach an external website over tor)\r\n4. check out `1.1.0~rc2` tag\r\n5. Run `./securedrop-admin setup` in ~/Persistent/securedrop\r\n6. Observe error\r\n\r\n## Expected Behavior\r\n\r\nSecuredrop-admin should run and the dependencies should be installed.\r\n\r\n## Actual Behavior\r\n\r\nSecuredrop-admin fails and returns an error, the dependencies are not installed\r\n\r\n## Comments\r\n\r\nPer https://github.com/freedomofpress/securedrop/pull/4852/files#diff-b5e536cc161fcc0d62e661b4d6eae381R70-R73\r\n\r\nWhen running the commands locally, I get\r\n* `lsb_release --id --short` returns `Debian`\r\n* `uname -a` returns `Linux amnesia 5.3.0-trunk-amd64 #1 SMF Debian 5.3.2-1~exp1 (2019-10-02) x86_64 GNU/Linux`\r\n\r\nWhen i run ./securedrop-admin with no parameter, I get:\r\n```\r\namnesia@amnesia:~/Persistent/securedrop$ ./securedrop-admin help\r\nCould not find platform independent libraries <prefix>\r\nCould not find platform dependent libraries <exec_prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nFatal Python error: Py_Initialize: Unable to get the locale encoding\r\nImportError: No module named 'encodings'\r\n\r\nCurrent thread 0x00007cf687450740 (most recent call first):\r\nAborted\r\n```\nsecuredrop-admin (setup) fails in Tails 4.0-rc1\n## Description\r\n\r\nRunning `./securedrop-admin setup` in tails 4.0-rc1 (upgraded from 3.16) returns the following error:\r\n\r\n```\r\nINFO: Virtualenv already exists, not creating\r\nINFO: Checking Python dependencies for securedrop-admin\r\nERROR: Failed to install pip dependencies. Check network connection and try again.\r\n```\r\nThis was done in VMs, will try to reproduce on hardware.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Upgrade Tails device from 3.16 to 4.0 (Boot into 4.0-rc1 drive and clone to upgrade 3.16 drive)\r\n2. Boot into newly-upgraded tails drive\r\n3. Verify that the internet is working (tor is bootstrapped, you can reach an external website over tor)\r\n4. check out `1.1.0~rc2` tag\r\n5. Run `./securedrop-admin setup` in ~/Persistent/securedrop\r\n6. 
Observe error\r\n\r\n## Expected Behavior\r\n\r\nSecuredrop-admin should run and the dependencies should be installed.\r\n\r\n## Actual Behavior\r\n\r\nSecuredrop-admin fails and returns an error, the dependencies are not installed\r\n\r\n## Comments\r\n\r\nPer https://github.com/freedomofpress/securedrop/pull/4852/files#diff-b5e536cc161fcc0d62e661b4d6eae381R70-R73\r\n\r\nWhen running the commands locally, I get\r\n* `lsb_release --id --short` returns `Debian`\r\n* `uname -a` returns `Linux amnesia 5.3.0-trunk-amd64 #1 SMF Debian 5.3.2-1~exp1 (2019-10-02) x86_64 GNU/Linux`\r\n\r\nWhen i run ./securedrop-admin with no parameter, I get:\r\n```\r\namnesia@amnesia:~/Persistent/securedrop$ ./securedrop-admin help\r\nCould not find platform independent libraries <prefix>\r\nCould not find platform dependent libraries <exec_prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nFatal Python error: Py_Initialize: Unable to get the locale encoding\r\nImportError: No module named 'encodings'\r\n\r\nCurrent thread 0x00007cf687450740 (most recent call first):\r\nAborted\r\n```\n", "before_files": [{"content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n\nimport argparse\nimport logging\nimport os\nimport subprocess\nimport sys\n\nsdlog = logging.getLogger(__name__)\n\nDIR = os.path.dirname(os.path.realpath(__file__))\nVENV_DIR = os.path.join(DIR, \".venv3\")\n\n\ndef setup_logger(verbose=False):\n \"\"\" Configure logging handler \"\"\"\n # Set default level on parent\n sdlog.setLevel(logging.DEBUG)\n level = logging.DEBUG if verbose else logging.INFO\n\n stdout = logging.StreamHandler(sys.stdout)\n stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))\n stdout.setLevel(level)\n sdlog.addHandler(stdout)\n\n\ndef run_command(command):\n \"\"\"\n Wrapper function to display stdout for running command,\n similar to how shelling out in a Bash script displays rolling output.\n\n Yields a list of the stdout from the `command`, and raises a\n CalledProcessError if `command` returns non-zero.\n \"\"\"\n popen = subprocess.Popen(command,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n for stdout_line in iter(popen.stdout.readline, b\"\"):\n yield stdout_line\n popen.stdout.close()\n return_code = popen.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, command)\n\n\ndef is_tails():\n try:\n id = subprocess.check_output('lsb_release --id --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n id = None\n\n # dirty hack to unreliably detect Tails 4.0~beta2\n if id == 'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n\n return id == 'Tails'\n\n\ndef maybe_torify():\n if is_tails():\n return ['torify']\n else:\n return []\n\n\ndef install_apt_dependencies(args):\n \"\"\"\n Install apt dependencies in Tails. In order to install Ansible in\n a virtualenv, first there are a number of Python prerequisites.\n \"\"\"\n sdlog.info(\"Installing SecureDrop Admin dependencies\")\n sdlog.info((\"You'll be prompted for the temporary Tails admin password,\"\n \" which was set on Tails login screen\"))\n\n apt_command = ['sudo', 'su', '-c',\n \"apt-get update && \\\n apt-get -q -o=Dpkg::Use-Pty=0 install -y \\\n python3-virtualenv \\\n python3-yaml \\\n python3-pip \\\n ccontrol \\\n virtualenv \\\n libffi-dev \\\n libssl-dev \\\n libpython3-dev\",\n ]\n\n try:\n # Print command results in real-time, to keep Admin apprised\n # of progress during long-running command.\n for output_line in run_command(apt_command):\n print(output_line.decode('utf-8').rstrip())\n except subprocess.CalledProcessError:\n # Tails supports apt persistence, which was used by SecureDrop\n # under Tails 2.x. If updates are being applied, don't try to pile\n # on with more apt requests.\n sdlog.error((\"Failed to install apt dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n\ndef envsetup(args):\n \"\"\"Installs Admin tooling required for managing SecureDrop. Specifically:\n\n * updates apt-cache\n * installs apt packages for Python virtualenv\n * creates virtualenv\n * installs pip packages inside virtualenv\n\n The virtualenv is created within the Persistence volume in Tails, so that\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n # virtualenv doesnt exist? 
Install dependencies and create\n if not os.path.exists(VENV_DIR):\n\n install_apt_dependencies(args)\n\n # Technically you can create a virtualenv from within python\n # but pip can only be run over tor on tails, and debugging that\n # along with instaling a third-party dependency is not worth\n # the effort here.\n sdlog.info(\"Setting up virtualenv\")\n try:\n sdlog.debug(subprocess.check_output(\n maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],\n stderr=subprocess.STDOUT))\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Unable to create virtualenv. Check network settings\"\n \" and try again.\"))\n raise\n else:\n sdlog.info(\"Virtualenv already exists, not creating\")\n\n install_pip_dependencies(args)\n if os.path.exists(os.path.join(DIR, 'setup.py')):\n install_pip_self(args)\n\n sdlog.info(\"Finished installing SecureDrop dependencies\")\n\n\ndef install_pip_self(args):\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install', '-e', DIR\n ]\n try:\n subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error(\"Unable to install self, run with -v for more information\")\n raise\n\n\ndef install_pip_dependencies(args, pip_install_cmd=[\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install',\n # Specify requirements file.\n '-r', os.path.join(DIR, 'requirements.txt'),\n '--require-hashes',\n # Make sure to upgrade packages only if necessary.\n '-U', '--upgrade-strategy', 'only-if-needed',\n]):\n \"\"\"\n Install Python dependencies via pip into virtualenv.\n \"\"\"\n\n sdlog.info(\"Checking Python dependencies for securedrop-admin\")\n try:\n pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Failed to install pip dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n sdlog.debug(pip_output)\n if \"Successfully installed\" in str(pip_output):\n sdlog.info(\"Python dependencies for securedrop-admin upgraded\")\n else:\n sdlog.info(\"Python dependencies for securedrop-admin are up-to-date\")\n\n\ndef parse_argv(argv):\n parser = argparse.ArgumentParser()\n parser.add_argument('-v', action='store_true', default=False,\n help=\"Increase verbosity on output\")\n parser.set_defaults(func=envsetup)\n\n return parser.parse_args(argv)\n\n\nif __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n if args.v:\n args.func(args)\n else:\n try:\n args.func(args)\n except Exception:\n sys.exit(1)\n else:\n sys.exit(0)\n", "path": "admin/bootstrap.py"}]} | 3,846 | 881 |
gh_patches_debug_57041 | rasdani/github-patches | git_diff | espnet__espnet-3073 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Question on chunk shift in ChunkIterFactory.build_iter()
In the code, the shift width is calculated as a ratio of the utterance length as follows:
S = int(L * self.chunk_shift_ratio)
Shouldn't the shift width be calculated as a ratio of the chunk length instead, like below?
S = int(W * self.chunk_shift_ratio)
</issue>
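A quick numerical illustration of the question, reusing the variable names from `ChunkIterFactory.build_iter()` (the numbers are made up for the example): with an utterance of `L = 1000` frames, a chosen chunk width `W = 200` and `chunk_shift_ratio = 0.5`, the current code shifts by 500 frames and produces 2 chunks separated by a gap, while shifting relative to the chunk width gives 100-frame shifts and 9 half-overlapping chunks, which is what the ratio's name suggests.

```python
# Sketch of the two shift definitions, mirroring build_iter()'s variables.
L, W, chunk_shift_ratio = 1000, 200, 0.5

S_current = int(L * chunk_shift_ratio)    # 500: shift tied to utterance length
S_proposed = int(W * chunk_shift_ratio)   # 100: shift tied to chunk length

N_current = (L - W) // S_current + 1      # 2 chunks, with a gap between them
N_proposed = (L - W) // S_proposed + 1    # 9 chunks with 50% overlap
```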
<code>
[start of espnet2/iterators/chunk_iter_factory.py]
1 import logging
2 from typing import Any
3 from typing import Dict
4 from typing import Iterator
5 from typing import List
6 from typing import Sequence
7 from typing import Tuple
8 from typing import Union
9
10 import numpy as np
11 import torch
12 from typeguard import check_argument_types
13
14 from espnet2.iterators.abs_iter_factory import AbsIterFactory
15 from espnet2.iterators.sequence_iter_factory import SequenceIterFactory
16 from espnet2.samplers.abs_sampler import AbsSampler
17
18
19 class ChunkIterFactory(AbsIterFactory):
20 """Creates chunks from a sequence
21
22 Examples:
23 >>> batches = [["id1"], ["id2"], ...]
24 >>> batch_size = 128
25 >>> chunk_length = 1000
26 >>> iter_factory = ChunkIterFactory(dataset, batches, batch_size, chunk_length)
27 >>> it = iter_factory.build_iter(epoch)
28 >>> for ids, batch in it:
29 ... ...
30
31 - The number of mini-batches are varied in each epochs and
32 we can't get the number in advance
33 because IterFactory doesn't be given to the length information.
34 - Since the first reason, "num_iters_per_epoch" can't be implemented
35 for this iterator. Instead of it, "num_samples_per_epoch" is implemented.
36
37 """
38
39 def __init__(
40 self,
41 dataset,
42 batch_size: int,
43 batches: Union[AbsSampler, Sequence[Sequence[Any]]],
44 chunk_length: Union[int, str],
45 chunk_shift_ratio: float = 0.5,
46 num_cache_chunks: int = 1024,
47 num_samples_per_epoch: int = None,
48 seed: int = 0,
49 shuffle: bool = False,
50 num_workers: int = 0,
51 collate_fn=None,
52 pin_memory: bool = False,
53 ):
54 assert check_argument_types()
55 assert all(len(x) == 1 for x in batches), "batch-size must be 1"
56
57 self.per_sample_iter_factory = SequenceIterFactory(
58 dataset=dataset,
59 batches=batches,
60 num_iters_per_epoch=num_samples_per_epoch,
61 seed=seed,
62 shuffle=shuffle,
63 num_workers=num_workers,
64 collate_fn=collate_fn,
65 pin_memory=pin_memory,
66 )
67
68 self.num_cache_chunks = max(num_cache_chunks, batch_size)
69 if isinstance(chunk_length, str):
70 if len(chunk_length) == 0:
71 raise ValueError("e.g. 5,8 or 3-5: but got empty string")
72
73 self.chunk_lengths = []
74 for x in chunk_length.split(","):
75 try:
76 sps = list(map(int, x.split("-")))
77 except ValueError:
78 raise ValueError(f"e.g. 5,8 or 3-5: but got {chunk_length}")
79
80 if len(sps) > 2:
81 raise ValueError(f"e.g. 5,8 or 3-5: but got {chunk_length}")
82 elif len(sps) == 2:
83 # Append all numbers between the range into the candidates
84 self.chunk_lengths += list(range(sps[0], sps[1] + 1))
85 else:
86 self.chunk_lengths += [sps[0]]
87 else:
88 # Single candidates: Fixed chunk length
89 self.chunk_lengths = [chunk_length]
90
91 self.chunk_shift_ratio = chunk_shift_ratio
92 self.batch_size = batch_size
93 self.seed = seed
94 self.shuffle = shuffle
95
96 def build_iter(
97 self,
98 epoch: int,
99 shuffle: bool = None,
100 ) -> Iterator[Tuple[List[str], Dict[str, torch.Tensor]]]:
101 per_sample_loader = self.per_sample_iter_factory.build_iter(epoch, shuffle)
102
103 if shuffle is None:
104 shuffle = self.shuffle
105 state = np.random.RandomState(epoch + self.seed)
106
107 # NOTE(kamo):
108 # This iterator supports multiple chunk lengths and
109 # keep chunks for each lenghts here until collecting specified numbers
110 cache_chunks_dict = {}
111 cache_id_list_dict = {}
112 for ids, batch in per_sample_loader:
113 # Must be per-sample-loader
114 assert len(ids) == 1, f"Must be per-sample-loader: {len(ids)}"
115 assert all(len(x) == 1 for x in batch.values())
116
117 # Get keys of sequence data
118 sequence_keys = []
119 for key in batch:
120 if key + "_lengths" in batch:
121 sequence_keys.append(key)
122 # Remove lengths data and get the first sample
123 batch = {k: v[0] for k, v in batch.items() if not k.endswith("_lengths")}
124 id_ = ids[0]
125
126 for key in sequence_keys:
127 if len(batch[key]) != len(batch[sequence_keys[0]]):
128 raise RuntimeError(
129 f"All sequences must has same length: "
130 f"{len(batch[key])} != {len(batch[sequence_keys[0]])}"
131 )
132
133 L = len(batch[sequence_keys[0]])
134 # Select chunk length
135 chunk_lengths = [lg for lg in self.chunk_lengths if lg < L]
136 if len(chunk_lengths) == 0:
137 logging.warning(
138 f"The length of '{id_}' is {L}, but it is shorter than "
139 f"any candidates of chunk-length: {self.chunk_lengths}"
140 )
141 continue
142
143 W = int(state.choice(chunk_lengths, 1))
144 cache_id_list = cache_id_list_dict.setdefault(W, [])
145 cache_chunks = cache_chunks_dict.setdefault(W, {})
146
147 # Shift width to the next chunk
148 S = int(L * self.chunk_shift_ratio)
149 # Number of chunks
150 N = (L - W) // S + 1
151 if shuffle:
152 Z = state.randint(0, (L - W) % S + 1)
153 else:
154 Z = 0
155
156 # Split a sequence into chunks.
157 # Note that the marginal frames divided by chunk length are discarded
158 for k, v in batch.items():
159 if k not in cache_chunks:
160 cache_chunks[k] = []
161 if k in sequence_keys:
162 # Shift chunks with overlapped length for data augmentation
163 cache_chunks[k] += [v[Z + i * S : Z + i * S + W] for i in range(N)]
164 else:
165 # If not sequence, use whole data instead of chunk
166 cache_chunks[k] += [v for _ in range(N)]
167 cache_id_list += [id_ for _ in range(N)]
168
169 if len(cache_id_list) > self.num_cache_chunks:
170 cache_id_list, cache_chunks = yield from self._generate_mini_batches(
171 cache_id_list,
172 cache_chunks,
173 shuffle,
174 state,
175 )
176
177 cache_id_list_dict[W] = cache_id_list
178 cache_chunks_dict[W] = cache_chunks
179
180 else:
181 for W in cache_id_list_dict:
182 cache_id_list = cache_id_list_dict.setdefault(W, [])
183 cache_chunks = cache_chunks_dict.setdefault(W, {})
184
185 yield from self._generate_mini_batches(
186 cache_id_list,
187 cache_chunks,
188 shuffle,
189 state,
190 )
191
192 def _generate_mini_batches(
193 self,
194 id_list: List[str],
195 batches: Dict[str, List[torch.Tensor]],
196 shuffle: bool,
197 state: np.random.RandomState,
198 ):
199 if shuffle:
200 indices = np.arange(0, len(id_list))
201 state.shuffle(indices)
202 batches = {k: [v[i] for i in indices] for k, v in batches.items()}
203 id_list = [id_list[i] for i in indices]
204
205 bs = self.batch_size
206 while len(id_list) >= bs:
207 # Make mini-batch and yield
208 yield (
209 id_list[:bs],
210 {k: torch.stack(v[:bs], 0) for k, v in batches.items()},
211 )
212 id_list = id_list[bs:]
213 batches = {k: v[bs:] for k, v in batches.items()}
214
215 return id_list, batches
216
[end of espnet2/iterators/chunk_iter_factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/espnet2/iterators/chunk_iter_factory.py b/espnet2/iterators/chunk_iter_factory.py
--- a/espnet2/iterators/chunk_iter_factory.py
+++ b/espnet2/iterators/chunk_iter_factory.py
@@ -145,7 +145,7 @@
cache_chunks = cache_chunks_dict.setdefault(W, {})
# Shift width to the next chunk
- S = int(L * self.chunk_shift_ratio)
+ S = int(W * self.chunk_shift_ratio)
# Number of chunks
N = (L - W) // S + 1
if shuffle:
| {"golden_diff": "diff --git a/espnet2/iterators/chunk_iter_factory.py b/espnet2/iterators/chunk_iter_factory.py\n--- a/espnet2/iterators/chunk_iter_factory.py\n+++ b/espnet2/iterators/chunk_iter_factory.py\n@@ -145,7 +145,7 @@\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n \n # Shift width to the next chunk\n- S = int(L * self.chunk_shift_ratio)\n+ S = int(W * self.chunk_shift_ratio)\n # Number of chunks\n N = (L - W) // S + 1\n if shuffle:\n", "issue": "Question on chunk shift in ChunkIterFactory.build_iter()\nIn the code, shift width is calculated as a ratio of utterance length as follows:\r\nS = int(L * self.chunk_shift_ratio)\r\n\r\nShouldn't shift width be calculated as a ratio of chunk length like below ?\r\nS = int(W * self.chunk_shift_ratio)\r\n\n", "before_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Iterator\nfrom typing import List\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom typeguard import check_argument_types\n\nfrom espnet2.iterators.abs_iter_factory import AbsIterFactory\nfrom espnet2.iterators.sequence_iter_factory import SequenceIterFactory\nfrom espnet2.samplers.abs_sampler import AbsSampler\n\n\nclass ChunkIterFactory(AbsIterFactory):\n \"\"\"Creates chunks from a sequence\n\n Examples:\n >>> batches = [[\"id1\"], [\"id2\"], ...]\n >>> batch_size = 128\n >>> chunk_length = 1000\n >>> iter_factory = ChunkIterFactory(dataset, batches, batch_size, chunk_length)\n >>> it = iter_factory.build_iter(epoch)\n >>> for ids, batch in it:\n ... ...\n\n - The number of mini-batches are varied in each epochs and\n we can't get the number in advance\n because IterFactory doesn't be given to the length information.\n - Since the first reason, \"num_iters_per_epoch\" can't be implemented\n for this iterator. Instead of it, \"num_samples_per_epoch\" is implemented.\n\n \"\"\"\n\n def __init__(\n self,\n dataset,\n batch_size: int,\n batches: Union[AbsSampler, Sequence[Sequence[Any]]],\n chunk_length: Union[int, str],\n chunk_shift_ratio: float = 0.5,\n num_cache_chunks: int = 1024,\n num_samples_per_epoch: int = None,\n seed: int = 0,\n shuffle: bool = False,\n num_workers: int = 0,\n collate_fn=None,\n pin_memory: bool = False,\n ):\n assert check_argument_types()\n assert all(len(x) == 1 for x in batches), \"batch-size must be 1\"\n\n self.per_sample_iter_factory = SequenceIterFactory(\n dataset=dataset,\n batches=batches,\n num_iters_per_epoch=num_samples_per_epoch,\n seed=seed,\n shuffle=shuffle,\n num_workers=num_workers,\n collate_fn=collate_fn,\n pin_memory=pin_memory,\n )\n\n self.num_cache_chunks = max(num_cache_chunks, batch_size)\n if isinstance(chunk_length, str):\n if len(chunk_length) == 0:\n raise ValueError(\"e.g. 5,8 or 3-5: but got empty string\")\n\n self.chunk_lengths = []\n for x in chunk_length.split(\",\"):\n try:\n sps = list(map(int, x.split(\"-\")))\n except ValueError:\n raise ValueError(f\"e.g. 5,8 or 3-5: but got {chunk_length}\")\n\n if len(sps) > 2:\n raise ValueError(f\"e.g. 
5,8 or 3-5: but got {chunk_length}\")\n elif len(sps) == 2:\n # Append all numbers between the range into the candidates\n self.chunk_lengths += list(range(sps[0], sps[1] + 1))\n else:\n self.chunk_lengths += [sps[0]]\n else:\n # Single candidates: Fixed chunk length\n self.chunk_lengths = [chunk_length]\n\n self.chunk_shift_ratio = chunk_shift_ratio\n self.batch_size = batch_size\n self.seed = seed\n self.shuffle = shuffle\n\n def build_iter(\n self,\n epoch: int,\n shuffle: bool = None,\n ) -> Iterator[Tuple[List[str], Dict[str, torch.Tensor]]]:\n per_sample_loader = self.per_sample_iter_factory.build_iter(epoch, shuffle)\n\n if shuffle is None:\n shuffle = self.shuffle\n state = np.random.RandomState(epoch + self.seed)\n\n # NOTE(kamo):\n # This iterator supports multiple chunk lengths and\n # keep chunks for each lenghts here until collecting specified numbers\n cache_chunks_dict = {}\n cache_id_list_dict = {}\n for ids, batch in per_sample_loader:\n # Must be per-sample-loader\n assert len(ids) == 1, f\"Must be per-sample-loader: {len(ids)}\"\n assert all(len(x) == 1 for x in batch.values())\n\n # Get keys of sequence data\n sequence_keys = []\n for key in batch:\n if key + \"_lengths\" in batch:\n sequence_keys.append(key)\n # Remove lengths data and get the first sample\n batch = {k: v[0] for k, v in batch.items() if not k.endswith(\"_lengths\")}\n id_ = ids[0]\n\n for key in sequence_keys:\n if len(batch[key]) != len(batch[sequence_keys[0]]):\n raise RuntimeError(\n f\"All sequences must has same length: \"\n f\"{len(batch[key])} != {len(batch[sequence_keys[0]])}\"\n )\n\n L = len(batch[sequence_keys[0]])\n # Select chunk length\n chunk_lengths = [lg for lg in self.chunk_lengths if lg < L]\n if len(chunk_lengths) == 0:\n logging.warning(\n f\"The length of '{id_}' is {L}, but it is shorter than \"\n f\"any candidates of chunk-length: {self.chunk_lengths}\"\n )\n continue\n\n W = int(state.choice(chunk_lengths, 1))\n cache_id_list = cache_id_list_dict.setdefault(W, [])\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n\n # Shift width to the next chunk\n S = int(L * self.chunk_shift_ratio)\n # Number of chunks\n N = (L - W) // S + 1\n if shuffle:\n Z = state.randint(0, (L - W) % S + 1)\n else:\n Z = 0\n\n # Split a sequence into chunks.\n # Note that the marginal frames divided by chunk length are discarded\n for k, v in batch.items():\n if k not in cache_chunks:\n cache_chunks[k] = []\n if k in sequence_keys:\n # Shift chunks with overlapped length for data augmentation\n cache_chunks[k] += [v[Z + i * S : Z + i * S + W] for i in range(N)]\n else:\n # If not sequence, use whole data instead of chunk\n cache_chunks[k] += [v for _ in range(N)]\n cache_id_list += [id_ for _ in range(N)]\n\n if len(cache_id_list) > self.num_cache_chunks:\n cache_id_list, cache_chunks = yield from self._generate_mini_batches(\n cache_id_list,\n cache_chunks,\n shuffle,\n state,\n )\n\n cache_id_list_dict[W] = cache_id_list\n cache_chunks_dict[W] = cache_chunks\n\n else:\n for W in cache_id_list_dict:\n cache_id_list = cache_id_list_dict.setdefault(W, [])\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n\n yield from self._generate_mini_batches(\n cache_id_list,\n cache_chunks,\n shuffle,\n state,\n )\n\n def _generate_mini_batches(\n self,\n id_list: List[str],\n batches: Dict[str, List[torch.Tensor]],\n shuffle: bool,\n state: np.random.RandomState,\n ):\n if shuffle:\n indices = np.arange(0, len(id_list))\n state.shuffle(indices)\n batches = {k: [v[i] for i in indices] for k, v in 
batches.items()}\n id_list = [id_list[i] for i in indices]\n\n bs = self.batch_size\n while len(id_list) >= bs:\n # Make mini-batch and yield\n yield (\n id_list[:bs],\n {k: torch.stack(v[:bs], 0) for k, v in batches.items()},\n )\n id_list = id_list[bs:]\n batches = {k: v[bs:] for k, v in batches.items()}\n\n return id_list, batches\n", "path": "espnet2/iterators/chunk_iter_factory.py"}]} | 2,879 | 143 |
gh_patches_debug_20448 | rasdani/github-patches | git_diff | litestar-org__litestar-3454 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs: Document SSE
### Summary
The SSE documentation is currently lacking:
- Docs for `ServerSentEventMessage`
- Sending messages as dicts
</issue>
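For reference, a sketch of the dict- and message-object-based forms the docs should cover; it assumes `ServerSentEventMessage` accepts the SSE fields (`data`, `event`, `id`, `retry`, `comment`) as keyword arguments and that dict payloads use the same keys:

```python
# Sketch only -- field support is assumed, see the note above.
from litestar import get
from litestar.response import ServerSentEvent, ServerSentEventMessage


async def event_stream():
    yield "plain string payload"                               # bare str
    yield {"data": 42, "event": "progress", "retry": 1000}     # dict form
    yield ServerSentEventMessage(data="done", event="complete", comment="bye")


@get("/events")
async def sse_docs_handler() -> ServerSentEvent:
    return ServerSentEvent(event_stream())
```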
<code>
[start of docs/examples/responses/sse_responses.py]
1 from asyncio import sleep
2 from typing import AsyncGenerator
3
4 from litestar import Litestar, get
5 from litestar.response import ServerSentEvent
6
7
8 async def my_generator() -> AsyncGenerator[bytes, None]:
9 count = 0
10 while count < 10:
11 await sleep(0.01)
12 count += 1
13 yield str(count)
14
15
16 @get(path="/count", sync_to_thread=False)
17 def sse_handler() -> ServerSentEvent:
18 return ServerSentEvent(my_generator())
19
20
21 app = Litestar(route_handlers=[sse_handler])
22
[end of docs/examples/responses/sse_responses.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/examples/responses/sse_responses.py b/docs/examples/responses/sse_responses.py
--- a/docs/examples/responses/sse_responses.py
+++ b/docs/examples/responses/sse_responses.py
@@ -2,15 +2,28 @@
from typing import AsyncGenerator
from litestar import Litestar, get
-from litestar.response import ServerSentEvent
+from litestar.response import ServerSentEvent, ServerSentEventMessage
+from litestar.types import SSEData
-async def my_generator() -> AsyncGenerator[bytes, None]:
+async def my_generator() -> AsyncGenerator[SSEData, None]:
count = 0
while count < 10:
await sleep(0.01)
count += 1
+ # In the generator you can yield integers, strings, bytes, dictionaries, or ServerSentEventMessage objects
+ # dicts can have the following keys: data, event, id, retry, comment
+
+ # here we yield an integer
+ yield count
+ # here a string
yield str(count)
+ # here bytes
+ yield str(count).encode("utf-8")
+ # here a dictionary
+ yield {"data": 2 * count, "event": "event2", "retry": 10}
+ # here a ServerSentEventMessage object
+ yield ServerSentEventMessage(event="something-with-comment", retry=1000, comment="some comment")
@get(path="/count", sync_to_thread=False)
| {"golden_diff": "diff --git a/docs/examples/responses/sse_responses.py b/docs/examples/responses/sse_responses.py\n--- a/docs/examples/responses/sse_responses.py\n+++ b/docs/examples/responses/sse_responses.py\n@@ -2,15 +2,28 @@\n from typing import AsyncGenerator\n \n from litestar import Litestar, get\n-from litestar.response import ServerSentEvent\n+from litestar.response import ServerSentEvent, ServerSentEventMessage\n+from litestar.types import SSEData\n \n \n-async def my_generator() -> AsyncGenerator[bytes, None]:\n+async def my_generator() -> AsyncGenerator[SSEData, None]:\n count = 0\n while count < 10:\n await sleep(0.01)\n count += 1\n+ # In the generator you can yield integers, strings, bytes, dictionaries, or ServerSentEventMessage objects\n+ # dicts can have the following keys: data, event, id, retry, comment\n+\n+ # here we yield an integer\n+ yield count\n+ # here a string\n yield str(count)\n+ # here bytes\n+ yield str(count).encode(\"utf-8\")\n+ # here a dictionary\n+ yield {\"data\": 2 * count, \"event\": \"event2\", \"retry\": 10}\n+ # here a ServerSentEventMessage object\n+ yield ServerSentEventMessage(event=\"something-with-comment\", retry=1000, comment=\"some comment\")\n \n \n @get(path=\"/count\", sync_to_thread=False)\n", "issue": "Docs: Document SSE\n### Summary\n\nThe SSE documentation is currently lacking:\r\n\r\n- Docs for `ServerSentEventMessage`\r\n- Sending messages a dicts\r\n\r\n\n\n<!-- POLAR PLEDGE BADGE START -->\n---\n> [!NOTE] \n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\n>\n> Check out all issues funded or available for funding [on our Polar.sh dashboard](https://polar.sh/litestar-org)\n> * If you would like to see an issue prioritized, make a pledge towards it!\n> * We receive the pledge once the issue is completed & verified\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/3011\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/3011/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/3011/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from asyncio import sleep\nfrom typing import AsyncGenerator\n\nfrom litestar import Litestar, get\nfrom litestar.response import ServerSentEvent\n\n\nasync def my_generator() -> AsyncGenerator[bytes, None]:\n count = 0\n while count < 10:\n await sleep(0.01)\n count += 1\n yield str(count)\n\n\n@get(path=\"/count\", sync_to_thread=False)\ndef sse_handler() -> ServerSentEvent:\n return ServerSentEvent(my_generator())\n\n\napp = Litestar(route_handlers=[sse_handler])\n", "path": "docs/examples/responses/sse_responses.py"}]} | 1,018 | 332 |
gh_patches_debug_26741 | rasdani/github-patches | git_diff | pre-commit__pre-commit-893 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect shebang in .git/hooks/pre-commit for python3 only installations
The shebang for `.git/hooks/pre-commit` is `#!/usr/bin/env python`. I work with setups where `python3` is the only Python interpreter available in the environment.
Could the shebang use the installing Python instead, i.e. the interpreter recorded under `INSTALL_PYTHON = '/usr/bin/python3'`?
</issue>
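The issue above boils down to picking a shebang that matches the interpreter that ran the installer. As background, here is a minimal, hypothetical sketch of that idea; `hook_shebang` is an illustrative name rather than pre-commit's actual API, and the real fix may differ.

```python
# Hypothetical sketch: derive a hook shebang from the installing interpreter.
import sys


def hook_shebang():
    # sys.executable is the interpreter running the installer,
    # e.g. /usr/bin/python3 on a python3-only system.
    if sys.executable:
        return '#!{}'.format(sys.executable)
    # Fall back to a versioned env lookup instead of bare "python".
    return '#!/usr/bin/env python{}.{}'.format(*sys.version_info[:2])


print(hook_shebang())  # e.g. #!/usr/bin/python3
```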
<code>
[start of pre_commit/commands/install_uninstall.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import logging
6 import os.path
7 import sys
8
9 from pre_commit import git
10 from pre_commit import output
11 from pre_commit.repository import repositories
12 from pre_commit.util import cmd_output
13 from pre_commit.util import make_executable
14 from pre_commit.util import mkdirp
15 from pre_commit.util import resource_text
16
17
18 logger = logging.getLogger(__name__)
19
20 # This is used to identify the hook file we install
21 PRIOR_HASHES = (
22 '4d9958c90bc262f47553e2c073f14cfe',
23 'd8ee923c46731b42cd95cc869add4062',
24 '49fd668cb42069aa1b6048464be5d395',
25 '79f09a650522a87b0da915d0d983b2de',
26 'e358c9dae00eac5d06b38dfdb1e33a8c',
27 )
28 CURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'
29 TEMPLATE_START = '# start templated\n'
30 TEMPLATE_END = '# end templated\n'
31
32
33 def _hook_paths(git_root, hook_type):
34 pth = os.path.join(git.get_git_dir(git_root), 'hooks', hook_type)
35 return pth, '{}.legacy'.format(pth)
36
37
38 def is_our_script(filename):
39 if not os.path.exists(filename):
40 return False
41 with io.open(filename) as f:
42 contents = f.read()
43 return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)
44
45
46 def install(
47 runner, store, overwrite=False, hooks=False, hook_type='pre-commit',
48 skip_on_missing_conf=False,
49 ):
50 """Install the pre-commit hooks."""
51 if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():
52 logger.error(
53 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
54 'hint: `git config --unset-all core.hooksPath`',
55 )
56 return 1
57
58 hook_path, legacy_path = _hook_paths(runner.git_root, hook_type)
59
60 mkdirp(os.path.dirname(hook_path))
61
62 # If we have an existing hook, move it to pre-commit.legacy
63 if os.path.lexists(hook_path) and not is_our_script(hook_path):
64 os.rename(hook_path, legacy_path)
65
66 # If we specify overwrite, we simply delete the legacy file
67 if overwrite and os.path.exists(legacy_path):
68 os.remove(legacy_path)
69 elif os.path.exists(legacy_path):
70 output.write_line(
71 'Running in migration mode with existing hooks at {}\n'
72 'Use -f to use only pre-commit.'.format(legacy_path),
73 )
74
75 params = {
76 'CONFIG': runner.config_file,
77 'HOOK_TYPE': hook_type,
78 'INSTALL_PYTHON': sys.executable,
79 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,
80 }
81
82 with io.open(hook_path, 'w') as hook_file:
83 contents = resource_text('hook-tmpl')
84 before, rest = contents.split(TEMPLATE_START)
85 to_template, after = rest.split(TEMPLATE_END)
86
87 hook_file.write(before + TEMPLATE_START)
88 for line in to_template.splitlines():
89 var = line.split()[0]
90 hook_file.write('{} = {!r}\n'.format(var, params[var]))
91 hook_file.write(TEMPLATE_END + after)
92 make_executable(hook_path)
93
94 output.write_line('pre-commit installed at {}'.format(hook_path))
95
96 # If they requested we install all of the hooks, do so.
97 if hooks:
98 install_hooks(runner, store)
99
100 return 0
101
102
103 def install_hooks(runner, store):
104 for repository in repositories(runner.config, store):
105 repository.require_installed()
106
107
108 def uninstall(runner, hook_type='pre-commit'):
109 """Uninstall the pre-commit hooks."""
110 hook_path, legacy_path = _hook_paths(runner.git_root, hook_type)
111
112 # If our file doesn't exist or it isn't ours, gtfo.
113 if not os.path.exists(hook_path) or not is_our_script(hook_path):
114 return 0
115
116 os.remove(hook_path)
117 output.write_line('{} uninstalled'.format(hook_type))
118
119 if os.path.exists(legacy_path):
120 os.rename(legacy_path, hook_path)
121 output.write_line('Restored previous hooks to {}'.format(hook_path))
122
123 return 0
124
[end of pre_commit/commands/install_uninstall.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -8,6 +8,7 @@
from pre_commit import git
from pre_commit import output
+from pre_commit.languages import python
from pre_commit.repository import repositories
from pre_commit.util import cmd_output
from pre_commit.util import make_executable
@@ -43,6 +44,16 @@
return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)
+def shebang():
+ if sys.platform == 'win32':
+ py = 'python'
+ else:
+ py = python.get_default_version()
+ if py == 'default':
+ py = 'python'
+ return '#!/usr/bin/env {}'.format(py)
+
+
def install(
runner, store, overwrite=False, hooks=False, hook_type='pre-commit',
skip_on_missing_conf=False,
@@ -84,6 +95,8 @@
before, rest = contents.split(TEMPLATE_START)
to_template, after = rest.split(TEMPLATE_END)
+ before = before.replace('#!/usr/bin/env python', shebang())
+
hook_file.write(before + TEMPLATE_START)
for line in to_template.splitlines():
var = line.split()[0]
| {"golden_diff": "diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py\n--- a/pre_commit/commands/install_uninstall.py\n+++ b/pre_commit/commands/install_uninstall.py\n@@ -8,6 +8,7 @@\n \n from pre_commit import git\n from pre_commit import output\n+from pre_commit.languages import python\n from pre_commit.repository import repositories\n from pre_commit.util import cmd_output\n from pre_commit.util import make_executable\n@@ -43,6 +44,16 @@\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n \n \n+def shebang():\n+ if sys.platform == 'win32':\n+ py = 'python'\n+ else:\n+ py = python.get_default_version()\n+ if py == 'default':\n+ py = 'python'\n+ return '#!/usr/bin/env {}'.format(py)\n+\n+\n def install(\n runner, store, overwrite=False, hooks=False, hook_type='pre-commit',\n skip_on_missing_conf=False,\n@@ -84,6 +95,8 @@\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n \n+ before = before.replace('#!/usr/bin/env python', shebang())\n+\n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n var = line.split()[0]\n", "issue": "Incorrect shebang in .git/hooks/pre-commit for python3 only installations\nThe shebang for `.git/hooks/pre-commit` is `#!/usr/bin/env python`. I work with setups where `python3` is the only python in env.\r\n\r\nCould the shebang be the install python instead? I.e. the installation under `INSTALL_PYTHON = '/usr/bin/python3'`\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.repository import repositories\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import mkdirp\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n '79f09a650522a87b0da915d0d983b2de',\n 'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(git_root, hook_type):\n pth = os.path.join(git.get_git_dir(git_root), 'hooks', hook_type)\n return pth, '{}.legacy'.format(pth)\n\n\ndef is_our_script(filename):\n if not os.path.exists(filename):\n return False\n with io.open(filename) as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef install(\n runner, store, overwrite=False, hooks=False, hook_type='pre-commit',\n skip_on_missing_conf=False,\n):\n \"\"\"Install the pre-commit hooks.\"\"\"\n if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n hook_path, legacy_path = _hook_paths(runner.git_root, hook_type)\n\n mkdirp(os.path.dirname(hook_path))\n\n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n os.rename(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n 
elif os.path.exists(legacy_path):\n output.write_line(\n 'Running in migration mode with existing hooks at {}\\n'\n 'Use -f to use only pre-commit.'.format(legacy_path),\n )\n\n params = {\n 'CONFIG': runner.config_file,\n 'HOOK_TYPE': hook_type,\n 'INSTALL_PYTHON': sys.executable,\n 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,\n }\n\n with io.open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n\n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n var = line.split()[0]\n hook_file.write('{} = {!r}\\n'.format(var, params[var]))\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line('pre-commit installed at {}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n install_hooks(runner, store)\n\n return 0\n\n\ndef install_hooks(runner, store):\n for repository in repositories(runner.config, store):\n repository.require_installed()\n\n\ndef uninstall(runner, hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path, legacy_path = _hook_paths(runner.git_root, hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return 0\n\n os.remove(hook_path)\n output.write_line('{} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n output.write_line('Restored previous hooks to {}'.format(hook_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}]} | 1,947 | 303 |
gh_patches_debug_36938 | rasdani/github-patches | git_diff | ManimCommunity__manim-435 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LaTeX and Sphinx are not yet functioning together.
This example works when rendered with Sphinx:
```
.. manim:: DotScene1
:quality: low
:save_last_frame:
class DotScene1(Scene):
def construct(self):
dot = Dot().set_color(GREEN)
self.add(dot)
self.wait(1)
```
However, as soon as the scene contains anything TeX-related, it throws an error:
```
.. manim:: TextExample
:quality: medium
:save_last_frame:
class TextExample(Scene):
def construct(self):
t = TextMobject("Hello World")
self.add(t)
```
> Exception occurred:
File "/home/k/projects/manim-community/manim/utils/tex_file_writing.py", line 32, in generate_tex_file
with open(result, "w", encoding="utf-8") as outfile:
FileNotFoundError: [Errno 2] No such file or directory: 'media/Tex/7d1ec941f0e30957.tex'
The full traceback has been saved in /tmp/sphinx-err-4zdxhjgt.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [Makefile:26: html] Error 2
A similar error message appears, e.g., for this example:
```
.. manim:: Plot1
:quality: medium
:save_last_frame:
class Plot1(GraphScene):
def construct(self):
self.setup_axes()
my_func = lambda x: np.sin(x)
func_graph=self.get_graph(my_func)
self.add(func_graph)
```
</issue>
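The traceback above is the classic symptom of writing to a path whose parent directory has not been created yet: the Sphinx build writes the generated `.tex` file into `media/Tex/`, but nothing has created that folder. The snippet below is a standalone illustration of the failure mode and the usual remedy (creating the directory first); the paths are placeholders, not manim's actual configuration.

```python
# Standalone illustration: open(..., "w") fails if the parent directory is missing.
import os

tex_path = os.path.join("media", "Tex", "7d1ec941f0e30957.tex")

# Creating the directory tree up front avoids the FileNotFoundError.
os.makedirs(os.path.dirname(tex_path), exist_ok=True)

with open(tex_path, "w", encoding="utf-8") as outfile:
    outfile.write(r"\documentclass{article}")
```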
<code>
[start of docs/source/manim_directive.py]
1 r"""
2 A directive for including Manim videos in a Sphinx document
3 ===========================================================
4
5 When rendering the HTML documentation, the ``.. manim::`` directive
6 implemented here allows to include rendered videos.
7
8 Its basic usage that allows processing **inline content**
9 looks as follows::
10
11 .. manim:: MyScene
12
13 class MyScene(Scene):
14 def construct(self):
15 ...
16
17 It is required to pass the name of the class representing the
18 scene to be rendered to the directive.
19
20 As a second application, the directive can also be used to
21 render scenes that are defined within doctests, for example::
22
23 .. manim:: DirectiveDoctestExample
24
25 >>> dot = Dot(color=RED)
26 >>> dot.color
27 <Color #fc6255>
28 >>> class DirectiveDoctestExample(Scene):
29 ... def construct(self):
30 ... self.play(ShowCreation(dot))
31
32
33 Options
34 -------
35
36 Options can be passed as follows::
37
38 .. manim:: <Class name>
39 :<option name>: <value>
40
41 The following configuration options are supported by the
42 directive:
43
44 display_source
45 If this flag is present without argument,
46 the source code is displayed above the rendered video.
47
48 quality : {'low', 'medium', 'high', 'fourk'}
49 Controls render quality of the video, in analogy to
50 the corresponding command line flags.
51
52 save_as_gif
53 If this flag is present without argument,
54 the scene is rendered as a gif.
55
56 save_last_frame
57 If this flag is present without argument,
58 an image representing the last frame of the scene will
59 be rendered and displayed, instead of a video.
60
61 """
62 from docutils.parsers.rst import directives, Directive
63 from docutils.parsers.rst.directives.images import Image
64
65 import jinja2
66 import os
67 from os.path import relpath
68
69 import shutil
70
71 classnamedict = {}
72
73
74 class ManimDirective(Directive):
75 r"""The ``.. manim::`` directive.
76
77 See the module docstring for documentation.
78 """
79 has_content = True
80 required_arguments = 1
81 optional_arguments = 0
82 option_spec = {
83 "display_source": bool,
84 "quality": lambda arg: directives.choice(
85 arg, ("low", "medium", "high", "fourk")
86 ),
87 "save_as_gif": bool,
88 "save_last_frame": bool,
89 }
90 final_argument_whitespace = True
91
92 def run(self):
93 from manim import config
94
95 global classnamedict
96
97 clsname = self.arguments[0]
98 if clsname not in classnamedict:
99 classnamedict[clsname] = 1
100 else:
101 classnamedict[clsname] += 1
102
103 display_source = "display_source" in self.options
104 save_as_gif = "save_as_gif" in self.options
105 save_last_frame = "save_last_frame" in self.options
106 assert not (save_as_gif and save_last_frame)
107
108 frame_rate = config["frame_rate"]
109 pixel_height = config["pixel_height"]
110 pixel_width = config["pixel_width"]
111
112 if "quality" in self.options:
113 quality = self.options["quality"]
114 if quality == "low":
115 pixel_height = 480
116 pixel_width = 854
117 frame_rate = 15
118 elif quality == "medium":
119 pixel_height = 720
120 pixel_width = 1280
121 frame_rate = 30
122 elif quality == "high":
123 pixel_height = 1440
124 pixel_width = 2560
125 frame_rate = 60
126 elif quality == "fourk":
127 pixel_height = 2160
128 pixel_width = 3840
129 frame_rate = 60
130
131 qualitydir = f"{pixel_height}p{frame_rate}"
132
133 state_machine = self.state_machine
134 document = state_machine.document
135
136 source_file_name = document.attributes["source"]
137 source_rel_name = relpath(source_file_name, setup.confdir)
138 source_rel_dir = os.path.dirname(source_rel_name)
139 while source_rel_dir.startswith(os.path.sep):
140 source_rel_dir = source_rel_dir[1:]
141
142 dest_dir = os.path.abspath(
143 os.path.join(setup.app.builder.outdir, source_rel_dir)
144 )
145 if not os.path.exists(dest_dir):
146 os.makedirs(dest_dir)
147
148 source_block = [
149 ".. code-block:: python",
150 "",
151 *[" " + line for line in self.content],
152 ]
153 source_block = "\n".join(source_block)
154
155 media_dir = os.path.join("source", "media")
156 images_dir = os.path.join(media_dir, "images")
157 video_dir = os.path.join(media_dir, "videos")
158 output_file = f"{clsname}-{classnamedict[clsname]}"
159
160 file_writer_config_code = [
161 f'config["frame_rate"] = {frame_rate}',
162 f'config["pixel_height"] = {pixel_height}',
163 f'config["pixel_width"] = {pixel_width}',
164 f'file_writer_config["media_dir"] = "{media_dir}"',
165 f'file_writer_config["images_dir"] = "{images_dir}"',
166 f'file_writer_config["video_dir"] = "{video_dir}"',
167 f'file_writer_config["save_last_frame"] = {save_last_frame}',
168 f'file_writer_config["save_as_gif"] = {save_as_gif}',
169 f'file_writer_config["output_file"] = "{output_file}"',
170 ]
171
172 user_code = self.content
173 if user_code[0].startswith(">>> "): # check whether block comes from doctest
174 user_code = [
175 line[4:] for line in user_code if line.startswith((">>> ", "... "))
176 ]
177
178 code = [
179 "from manim import *",
180 *file_writer_config_code,
181 *user_code,
182 f"{clsname}()",
183 ]
184 exec("\n".join(code), globals())
185
186 # copy video file to output directory
187 if not (save_as_gif or save_last_frame):
188 filename = f"{output_file}.mp4"
189 filesrc = os.path.join(video_dir, qualitydir, filename)
190 destfile = os.path.join(dest_dir, filename)
191 shutil.copyfile(filesrc, destfile)
192 elif save_as_gif:
193 filename = f"{output_file}.gif"
194 filesrc = os.path.join(video_dir, qualitydir, filename)
195 elif save_last_frame:
196 filename = f"{output_file}.png"
197 filesrc = os.path.join(images_dir, filename)
198 else:
199 raise ValueError("Invalid combination of render flags received.")
200
201 rendered_template = jinja2.Template(TEMPLATE).render(
202 display_source=display_source,
203 filesrc=filesrc[6:],
204 output_file=output_file,
205 save_last_frame=save_last_frame,
206 save_as_gif=save_as_gif,
207 source_block=source_block,
208 )
209 state_machine.insert_input(
210 rendered_template.split("\n"), source=document.attributes["source"]
211 )
212
213 return []
214
215
216 def setup(app):
217 import manim
218
219 setup.app = app
220 setup.config = app.config
221 setup.confdir = app.confdir
222 app.add_directive("manim", ManimDirective)
223
224 metadata = {"parallel_read_safe": False, "parallel_write_safe": True}
225 return metadata
226
227
228 TEMPLATE = r"""
229 {% if display_source %}
230 .. raw:: html
231
232 <div class="manim-example">
233
234 {{ source_block }}
235 {% endif %}
236
237 {% if not (save_as_gif or save_last_frame) %}
238 .. raw:: html
239
240 <video class="manim-video" controls loop autoplay src="./{{ output_file }}.mp4"></video>
241 {% elif save_as_gif %}
242 .. image:: {{ filesrc }}
243 :align: center
244 {% elif save_last_frame %}
245 .. image:: {{ filesrc }}
246 :align: center
247 {% endif %}
248
249 {% if display_source %}
250 .. raw:: html
251
252 </div>
253 {% endif %}
254 """
255
[end of docs/source/manim_directive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/manim_directive.py b/docs/source/manim_directive.py
--- a/docs/source/manim_directive.py
+++ b/docs/source/manim_directive.py
@@ -152,8 +152,18 @@
]
source_block = "\n".join(source_block)
- media_dir = os.path.join("source", "media")
+ media_dir = os.path.join(setup.confdir, "media")
+ if not os.path.exists(media_dir):
+ os.mkdir(media_dir)
images_dir = os.path.join(media_dir, "images")
+ if not os.path.exists(images_dir):
+ os.mkdir(images_dir)
+ tex_dir = os.path.join(media_dir, "tex")
+ if not os.path.exists(tex_dir):
+ os.mkdir(tex_dir)
+ text_dir = os.path.join(media_dir, "text")
+ if not os.path.exists(text_dir):
+ os.mkdir(text_dir)
video_dir = os.path.join(media_dir, "videos")
output_file = f"{clsname}-{classnamedict[clsname]}"
@@ -163,6 +173,8 @@
f'config["pixel_width"] = {pixel_width}',
f'file_writer_config["media_dir"] = "{media_dir}"',
f'file_writer_config["images_dir"] = "{images_dir}"',
+ f'file_writer_config["tex_dir"] = "{tex_dir}"',
+ f'file_writer_config["text_dir"] = "{text_dir}"',
f'file_writer_config["video_dir"] = "{video_dir}"',
f'file_writer_config["save_last_frame"] = {save_last_frame}',
f'file_writer_config["save_as_gif"] = {save_as_gif}',
@@ -200,7 +212,7 @@
rendered_template = jinja2.Template(TEMPLATE).render(
display_source=display_source,
- filesrc=filesrc[6:],
+ filesrc_rel=os.path.relpath(filesrc, setup.confdir),
output_file=output_file,
save_last_frame=save_last_frame,
save_as_gif=save_as_gif,
@@ -239,10 +251,10 @@
<video class="manim-video" controls loop autoplay src="./{{ output_file }}.mp4"></video>
{% elif save_as_gif %}
-.. image:: {{ filesrc }}
+.. image:: /{{ filesrc_rel }}
:align: center
{% elif save_last_frame %}
-.. image:: {{ filesrc }}
+.. image:: /{{ filesrc_rel }}
:align: center
{% endif %}
| {"golden_diff": "diff --git a/docs/source/manim_directive.py b/docs/source/manim_directive.py\n--- a/docs/source/manim_directive.py\n+++ b/docs/source/manim_directive.py\n@@ -152,8 +152,18 @@\n ]\n source_block = \"\\n\".join(source_block)\n \n- media_dir = os.path.join(\"source\", \"media\")\n+ media_dir = os.path.join(setup.confdir, \"media\")\n+ if not os.path.exists(media_dir):\n+ os.mkdir(media_dir)\n images_dir = os.path.join(media_dir, \"images\")\n+ if not os.path.exists(images_dir):\n+ os.mkdir(images_dir)\n+ tex_dir = os.path.join(media_dir, \"tex\")\n+ if not os.path.exists(tex_dir):\n+ os.mkdir(tex_dir)\n+ text_dir = os.path.join(media_dir, \"text\")\n+ if not os.path.exists(text_dir):\n+ os.mkdir(text_dir)\n video_dir = os.path.join(media_dir, \"videos\")\n output_file = f\"{clsname}-{classnamedict[clsname]}\"\n \n@@ -163,6 +173,8 @@\n f'config[\"pixel_width\"] = {pixel_width}',\n f'file_writer_config[\"media_dir\"] = \"{media_dir}\"',\n f'file_writer_config[\"images_dir\"] = \"{images_dir}\"',\n+ f'file_writer_config[\"tex_dir\"] = \"{tex_dir}\"',\n+ f'file_writer_config[\"text_dir\"] = \"{text_dir}\"',\n f'file_writer_config[\"video_dir\"] = \"{video_dir}\"',\n f'file_writer_config[\"save_last_frame\"] = {save_last_frame}',\n f'file_writer_config[\"save_as_gif\"] = {save_as_gif}',\n@@ -200,7 +212,7 @@\n \n rendered_template = jinja2.Template(TEMPLATE).render(\n display_source=display_source,\n- filesrc=filesrc[6:],\n+ filesrc_rel=os.path.relpath(filesrc, setup.confdir),\n output_file=output_file,\n save_last_frame=save_last_frame,\n save_as_gif=save_as_gif,\n@@ -239,10 +251,10 @@\n \n <video class=\"manim-video\" controls loop autoplay src=\"./{{ output_file }}.mp4\"></video>\n {% elif save_as_gif %}\n-.. image:: {{ filesrc }}\n+.. image:: /{{ filesrc_rel }}\n :align: center\n {% elif save_last_frame %}\n-.. image:: {{ filesrc }}\n+.. image:: /{{ filesrc_rel }}\n :align: center\n {% endif %}\n", "issue": "Latex and Sphinx are not yet functioning together.\nThis example is working on sphinx:\r\n```\r\n.. manim:: DotScene1\r\n :quality: low\r\n :save_last_frame:\r\n\r\n class DotScene1(Scene):\r\n def construct(self):\r\n dot = Dot().set_color(GREEN)\r\n self.add(dot)\r\n self.wait(1)\r\n```\r\nHowever, when I have something tex related, it throws an error:\r\n```\r\n.. manim:: TextExample\r\n :quality: medium\r\n :save_last_frame:\r\n \r\n class TextExample(Scene):\r\n def construct(self):\r\n t = TextMobject(\"Hello World\")\r\n self.add(t)\r\n```\r\n> Exception occurred:\r\n File \"/home/k/projects/manim-community/manim/utils/tex_file_writing.py\", line 32, in generate_tex_file\r\n with open(result, \"w\", encoding=\"utf-8\") as outfile:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'media/Tex/7d1ec941f0e30957.tex'\r\nThe full traceback has been saved in /tmp/sphinx-err-4zdxhjgt.log, if you want to report the issue to the developers.\r\nPlease also report this if it was a user error, so that a better error message can be provided next time.\r\nA bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!\r\nmake: *** [Makefile:26: html] Error 2\r\n\r\nA similar error message comes e.g. for this example:\r\n```\r\n.. 
manim:: Plot1\r\n :quality: medium\r\n :save_last_frame:\r\n\r\n class Plot1(GraphScene):\r\n def construct(self):\r\n self.setup_axes()\r\n my_func = lambda x: np.sin(x)\r\n func_graph=self.get_graph(my_func)\r\n self.add(func_graph)\r\n```\n", "before_files": [{"content": "r\"\"\"\nA directive for including Manim videos in a Sphinx document\n===========================================================\n\nWhen rendering the HTML documentation, the ``.. manim::`` directive\nimplemented here allows to include rendered videos.\n\nIts basic usage that allows processing **inline content** \nlooks as follows::\n\n .. manim:: MyScene\n\n class MyScene(Scene):\n def construct(self):\n ...\n\nIt is required to pass the name of the class representing the\nscene to be rendered to the directive.\n\nAs a second application, the directive can also be used to\nrender scenes that are defined within doctests, for example::\n\n .. manim:: DirectiveDoctestExample\n\n >>> dot = Dot(color=RED)\n >>> dot.color\n <Color #fc6255>\n >>> class DirectiveDoctestExample(Scene):\n ... def construct(self):\n ... self.play(ShowCreation(dot))\n\n\nOptions\n-------\n\nOptions can be passed as follows::\n\n .. manim:: <Class name>\n :<option name>: <value>\n\nThe following configuration options are supported by the\ndirective:\n\n display_source\n If this flag is present without argument,\n the source code is displayed above the rendered video.\n\n quality : {'low', 'medium', 'high', 'fourk'}\n Controls render quality of the video, in analogy to\n the corresponding command line flags.\n\n save_as_gif\n If this flag is present without argument,\n the scene is rendered as a gif.\n\n save_last_frame\n If this flag is present without argument,\n an image representing the last frame of the scene will\n be rendered and displayed, instead of a video.\n\n\"\"\"\nfrom docutils.parsers.rst import directives, Directive\nfrom docutils.parsers.rst.directives.images import Image\n\nimport jinja2\nimport os\nfrom os.path import relpath\n\nimport shutil\n\nclassnamedict = {}\n\n\nclass ManimDirective(Directive):\n r\"\"\"The ``.. 
manim::`` directive.\n\n See the module docstring for documentation.\n \"\"\"\n has_content = True\n required_arguments = 1\n optional_arguments = 0\n option_spec = {\n \"display_source\": bool,\n \"quality\": lambda arg: directives.choice(\n arg, (\"low\", \"medium\", \"high\", \"fourk\")\n ),\n \"save_as_gif\": bool,\n \"save_last_frame\": bool,\n }\n final_argument_whitespace = True\n\n def run(self):\n from manim import config\n\n global classnamedict\n\n clsname = self.arguments[0]\n if clsname not in classnamedict:\n classnamedict[clsname] = 1\n else:\n classnamedict[clsname] += 1\n\n display_source = \"display_source\" in self.options\n save_as_gif = \"save_as_gif\" in self.options\n save_last_frame = \"save_last_frame\" in self.options\n assert not (save_as_gif and save_last_frame)\n\n frame_rate = config[\"frame_rate\"]\n pixel_height = config[\"pixel_height\"]\n pixel_width = config[\"pixel_width\"]\n\n if \"quality\" in self.options:\n quality = self.options[\"quality\"]\n if quality == \"low\":\n pixel_height = 480\n pixel_width = 854\n frame_rate = 15\n elif quality == \"medium\":\n pixel_height = 720\n pixel_width = 1280\n frame_rate = 30\n elif quality == \"high\":\n pixel_height = 1440\n pixel_width = 2560\n frame_rate = 60\n elif quality == \"fourk\":\n pixel_height = 2160\n pixel_width = 3840\n frame_rate = 60\n\n qualitydir = f\"{pixel_height}p{frame_rate}\"\n\n state_machine = self.state_machine\n document = state_machine.document\n\n source_file_name = document.attributes[\"source\"]\n source_rel_name = relpath(source_file_name, setup.confdir)\n source_rel_dir = os.path.dirname(source_rel_name)\n while source_rel_dir.startswith(os.path.sep):\n source_rel_dir = source_rel_dir[1:]\n\n dest_dir = os.path.abspath(\n os.path.join(setup.app.builder.outdir, source_rel_dir)\n )\n if not os.path.exists(dest_dir):\n os.makedirs(dest_dir)\n\n source_block = [\n \".. code-block:: python\",\n \"\",\n *[\" \" + line for line in self.content],\n ]\n source_block = \"\\n\".join(source_block)\n\n media_dir = os.path.join(\"source\", \"media\")\n images_dir = os.path.join(media_dir, \"images\")\n video_dir = os.path.join(media_dir, \"videos\")\n output_file = f\"{clsname}-{classnamedict[clsname]}\"\n\n file_writer_config_code = [\n f'config[\"frame_rate\"] = {frame_rate}',\n f'config[\"pixel_height\"] = {pixel_height}',\n f'config[\"pixel_width\"] = {pixel_width}',\n f'file_writer_config[\"media_dir\"] = \"{media_dir}\"',\n f'file_writer_config[\"images_dir\"] = \"{images_dir}\"',\n f'file_writer_config[\"video_dir\"] = \"{video_dir}\"',\n f'file_writer_config[\"save_last_frame\"] = {save_last_frame}',\n f'file_writer_config[\"save_as_gif\"] = {save_as_gif}',\n f'file_writer_config[\"output_file\"] = \"{output_file}\"',\n ]\n\n user_code = self.content\n if user_code[0].startswith(\">>> \"): # check whether block comes from doctest\n user_code = [\n line[4:] for line in user_code if line.startswith((\">>> \", \"... 
\"))\n ]\n\n code = [\n \"from manim import *\",\n *file_writer_config_code,\n *user_code,\n f\"{clsname}()\",\n ]\n exec(\"\\n\".join(code), globals())\n\n # copy video file to output directory\n if not (save_as_gif or save_last_frame):\n filename = f\"{output_file}.mp4\"\n filesrc = os.path.join(video_dir, qualitydir, filename)\n destfile = os.path.join(dest_dir, filename)\n shutil.copyfile(filesrc, destfile)\n elif save_as_gif:\n filename = f\"{output_file}.gif\"\n filesrc = os.path.join(video_dir, qualitydir, filename)\n elif save_last_frame:\n filename = f\"{output_file}.png\"\n filesrc = os.path.join(images_dir, filename)\n else:\n raise ValueError(\"Invalid combination of render flags received.\")\n\n rendered_template = jinja2.Template(TEMPLATE).render(\n display_source=display_source,\n filesrc=filesrc[6:],\n output_file=output_file,\n save_last_frame=save_last_frame,\n save_as_gif=save_as_gif,\n source_block=source_block,\n )\n state_machine.insert_input(\n rendered_template.split(\"\\n\"), source=document.attributes[\"source\"]\n )\n\n return []\n\n\ndef setup(app):\n import manim\n\n setup.app = app\n setup.config = app.config\n setup.confdir = app.confdir\n app.add_directive(\"manim\", ManimDirective)\n\n metadata = {\"parallel_read_safe\": False, \"parallel_write_safe\": True}\n return metadata\n\n\nTEMPLATE = r\"\"\"\n{% if display_source %}\n.. raw:: html\n\n <div class=\"manim-example\">\n\n{{ source_block }}\n{% endif %}\n\n{% if not (save_as_gif or save_last_frame) %}\n.. raw:: html\n\n <video class=\"manim-video\" controls loop autoplay src=\"./{{ output_file }}.mp4\"></video>\n{% elif save_as_gif %}\n.. image:: {{ filesrc }}\n :align: center\n{% elif save_last_frame %}\n.. image:: {{ filesrc }}\n :align: center\n{% endif %}\n\n{% if display_source %}\n.. raw:: html\n\n </div>\n{% endif %}\n\"\"\"\n", "path": "docs/source/manim_directive.py"}]} | 3,354 | 575 |
gh_patches_debug_38239 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-795 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Getting 'W2030:You must specify a valid Default value' in 0.17.1
*cfn-lint version: (`cfn-lint --version`)*
`cfn-lint 0.17.1`
*Description of issue.*
```
[cfn-lint] W2030:You must specify a valid Default value for DemoInstanceType (/Demo/DemoInstanceType).
Valid values are ['a1.2xlarge', 'a1.4xlarge', 'a1.large', 'a1.medium', 'a1.xlarge', 'c1.medium', 'c1.xlarge', 'c3.2xlarge', 'c3.4xlarge', 'c3.8xlarge', 'c3.large', 'c3.xlarge', 'c4.2xlarge', 'c4.4xlarge', 'c4.8xlarge', 'c4.large', 'c4.xlarge', 'c5.18xlarge', 'c5.2xlarge', 'c5.4xlarge', 'c5.9xlarge', 'c5.large', 'c5.xlarge', 'c5d.18xlarge', 'c5d.2xlarge', 'c5d.4xlarge', 'c5d.9xlarge', 'c5d.large', 'c5d.xlarge', 'c5n.18xlarge', 'c5n.2xlarge', 'c5n.4xlarge', 'c5n.9xlarge', 'c5n.large', 'c5n.xlarge', 'cc2.8xlarge', 'cr1.8xlarge', 'd2.2xlarge', 'd2.4xlarge', 'd2.8xlarge', 'd2.xlarge', 'f1.16xlarge', 'f1.2xlarge', 'f1.4xlarge', 'g2.2xlarge', 'g2.8xlarge', 'g3.16xlarge', 'g3.4xlarge', 'g3.8xlarge', 'g3s.xlarge', 'h1.16xlarge', 'h1.2xlarge', 'h1.4xlarge', 'h1.8xlarge', 'hs1.8xlarge', 'i2.2xlarge', 'i2.4xlarge', 'i2.8xlarge', 'i2.xlarge', 'i3.16xlarge', 'i3.2xlarge', 'i3.4xlarge', 'i3.8xlarge', 'i3.large', 'i3.xlarge', 'm1.large', 'm1.medium', 'm1.small', 'm1.xlarge', 'm2.2xlarge', 'm2.4xlarge', 'm2.xlarge', 'm3.2xlarge', 'm3.large', 'm3.medium', 'm3.xlarge', 'm4.10xlarge', 'm4.16xlarge', 'm4.2xlarge', 'm4.4xlarge', 'm4.large', 'm4.xlarge', 'm5.12xlarge', 'm5.24xlarge', 'm5.2xlarge', 'm5.4xlarge', 'm5.large', 'm5.metal', 'm5.xlarge', 'm5a.12xlarge', 'm5a.24xlarge', 'm5a.2xlarge', 'm5a.4xlarge', 'm5a.large', 'm5a.xlarge', 'm5ad.12xlarge', 'm5ad.24xlarge', 'm5ad.2xlarge', 'm5ad.4xlarge', 'm5ad.large', 'm5ad.xlarge', 'm5d.12xlarge', 'm5d.24xlarge', 'm5d.2xlarge', 'm5d.4xlarge', 'm5d.large', 'm5d.metal', 'm5d.xlarge', 'p2.16xlarge', 'p2.8xlarge', 'p2.xlarge', 'p3.16xlarge', 'p3.2xlarge', 'p3.8xlarge', 'p3dn.24xlarge', 'r3.2xlarge', 'r3.4xlarge', 'r3.8xlarge', 'r3.large', 'r3.xlarge', 'r4.16xlarge', 'r4.2xlarge', 'r4.4xlarge', 'r4.8xlarge', 'r4.large', 'r4.xlarge', 'r5.12xlarge', 'r5.24xlarge', 'r5.2xlarge', 'r5.4xlarge', 'r5.large', 'r5.xlarge', 'r5a.12xlarge', 'r5a.24xlarge', 'r5a.2xlarge', 'r5a.4xlarge', 'r5a.large', 'r5a.xlarge', 'r5ad.12xlarge', 'r5ad.24xlarge', 'r5ad.2xlarge', 'r5ad.4xlarge', 'r5ad.large', 'r5ad.xlarge', 'r5d.12xlarge', 'r5d.24xlarge', 'r5d.2xlarge', 'r5d.4xlarge', 'r5d.large', 'r5d.xlarge', 't1.micro', 't2.2xlarge', 't2.large', 't2.medium', 't2.micro', 't2.nano', 't2.small', 't2.xlarge', 't3.2xlarge', 't3.large', 't3.medium', 't3.micro', 't3.nano', 't3.small', 't3.xlarge', 'x1.16xlarge', 'x1.32xlarge', 'x1e.16xlarge', 'x1e.2xlarge', 'x1e.32xlarge', 'x1e.4xlarge', 'x1e.8xlarge', 'x1e.xlarge', 'z1d.12xlarge', 'z1d.2xlarge', 'z1d.3xlarge', 'z1d.6xlarge', 'z1d.large', 'z1d.xlarge']
```
The CloudFormation parameter is:
```
DemoInstanceType:
Default: /Demo/DemoInstanceType # Recommend t3.nano
Description: EC2 instance type to use to create the collector host
Type: AWS::SSM::Parameter::Value<String>
```
The value of the SSM parameter is `t3.nano`
I have an older project using the same pattern and the virtual environment still has cfn-lint version 0.12.0. It's not raising this complaint. I verified by updating to latest (0.17.1) and the problem cropped up. When I downgraded back to 0.12.0, the problem went away.
</issue>
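The warning above is a false positive: for a parameter of type `AWS::SSM::Parameter::Value<String>`, the `Default` field holds an SSM parameter path (here `/Demo/DemoInstanceType`), not a literal value, so comparing it against the list of EC2 instance types is meaningless. The sketch below shows the general idea of skipping such parameter types before validating defaults; it is an independent illustration, not cfn-lint's implementation.

```python
# Independent sketch: only validate Default against allowed values when the
# parameter type holds a literal scalar value.
def should_check_default(parameter_type):
    if not isinstance(parameter_type, str):
        return False
    if parameter_type.startswith("AWS::SSM::Parameter::Value<"):
        return False  # Default is an SSM path, resolved at deploy time
    if parameter_type.startswith("List<") or parameter_type == "CommaDelimitedList":
        return False  # Default is a comma-separated list, not a single value
    return True


assert not should_check_default("AWS::SSM::Parameter::Value<String>")
assert should_check_default("String")
```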
<code>
[start of src/cfnlint/rules/parameters/AllowedValue.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from cfnlint import CloudFormationLintRule
18 from cfnlint import RuleMatch
19
20 from cfnlint.helpers import RESOURCE_SPECS
21
22
23 class AllowedValue(CloudFormationLintRule):
24 """Check if parameters have a valid value"""
25 id = 'W2030'
26 shortdesc = 'Check if parameters have a valid value'
27 description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'
28 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
29 tags = ['resources', 'property', 'allowed value']
30
31 def initialize(self, cfn):
32 """Initialize the rule"""
33 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
34 self.resource_property_types.append(resource_type_spec)
35 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
36 self.resource_sub_property_types.append(property_type_spec)
37
38 def check_value_ref(self, value, **kwargs):
39 """Check Ref"""
40 matches = []
41
42 allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
43 cfn = kwargs.get('cfn')
44
45 if allowed_value_specs:
46 if value in cfn.template.get('Parameters', {}):
47 param = cfn.template.get('Parameters').get(value, {})
48 parameter_values = param.get('AllowedValues')
49 default_value = param.get('Default')
50
51 # Check Allowed Values
52 if parameter_values:
53 for index, allowed_value in enumerate(parameter_values):
54 if allowed_value not in allowed_value_specs:
55 param_path = ['Parameters', value, 'AllowedValues', index]
56 message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
57 matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
58 elif default_value:
59 # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
60 if default_value not in allowed_value_specs:
61 param_path = ['Parameters', value, 'Default']
62 message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
63 matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
64
65 return matches
66
67 def check(self, cfn, properties, value_specs, property_specs, path):
68 """Check itself"""
69 matches = list()
70 for p_value, p_path in properties.items_safe(path[:]):
71 for prop in p_value:
72 if prop in value_specs:
73 value = value_specs.get(prop).get('Value', {})
74 if value:
75 value_type = value.get('ValueType', '')
76 property_type = property_specs.get('Properties').get(prop).get('Type')
77 matches.extend(
78 cfn.check_value(
79 p_value, prop, p_path,
80 check_ref=self.check_value_ref,
81 value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
82 cfn=cfn, property_type=property_type, property_name=prop
83 )
84 )
85
86 return matches
87
88 def match_resource_sub_properties(self, properties, property_type, path, cfn):
89 """Match for sub properties"""
90 matches = list()
91
92 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
93 property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
94 matches.extend(self.check(cfn, properties, specs, property_specs, path))
95
96 return matches
97
98 def match_resource_properties(self, properties, resource_type, path, cfn):
99 """Check CloudFormation Properties"""
100 matches = list()
101
102 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
103 resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
104 matches.extend(self.check(cfn, properties, specs, resource_specs, path))
105
106 return matches
107
[end of src/cfnlint/rules/parameters/AllowedValue.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py
--- a/src/cfnlint/rules/parameters/AllowedValue.py
+++ b/src/cfnlint/rules/parameters/AllowedValue.py
@@ -14,6 +14,7 @@
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
+import six
from cfnlint import CloudFormationLintRule
from cfnlint import RuleMatch
@@ -47,20 +48,24 @@
param = cfn.template.get('Parameters').get(value, {})
parameter_values = param.get('AllowedValues')
default_value = param.get('Default')
-
- # Check Allowed Values
- if parameter_values:
- for index, allowed_value in enumerate(parameter_values):
- if allowed_value not in allowed_value_specs:
- param_path = ['Parameters', value, 'AllowedValues', index]
- message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
- matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
- elif default_value:
- # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
- if default_value not in allowed_value_specs:
- param_path = ['Parameters', value, 'Default']
- message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
- matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
+ parameter_type = param.get('Type')
+ if isinstance(parameter_type, six.string_types):
+ if ((not parameter_type.startswith('List<')) and
+ (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and
+ parameter_type not in ['CommaDelimitedList']):
+ # Check Allowed Values
+ if parameter_values:
+ for index, allowed_value in enumerate(parameter_values):
+ if allowed_value not in allowed_value_specs:
+ param_path = ['Parameters', value, 'AllowedValues', index]
+ message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
+ matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
+ elif default_value:
+ # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
+ if default_value not in allowed_value_specs:
+ param_path = ['Parameters', value, 'Default']
+ message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
+ matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py\n--- a/src/cfnlint/rules/parameters/AllowedValue.py\n+++ b/src/cfnlint/rules/parameters/AllowedValue.py\n@@ -14,6 +14,7 @@\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n \"\"\"\n+import six\n from cfnlint import CloudFormationLintRule\n from cfnlint import RuleMatch\n \n@@ -47,20 +48,24 @@\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n-\n- # Check Allowed Values\n- if parameter_values:\n- for index, allowed_value in enumerate(parameter_values):\n- if allowed_value not in allowed_value_specs:\n- param_path = ['Parameters', value, 'AllowedValues', index]\n- message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n- matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n- elif default_value:\n- # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n- if default_value not in allowed_value_specs:\n- param_path = ['Parameters', value, 'Default']\n- message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n- matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n+ parameter_type = param.get('Type')\n+ if isinstance(parameter_type, six.string_types):\n+ if ((not parameter_type.startswith('List<')) and\n+ (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n+ parameter_type not in ['CommaDelimitedList']):\n+ # Check Allowed Values\n+ if parameter_values:\n+ for index, allowed_value in enumerate(parameter_values):\n+ if allowed_value not in allowed_value_specs:\n+ param_path = ['Parameters', value, 'AllowedValues', index]\n+ message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n+ matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n+ elif default_value:\n+ # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n+ if default_value not in allowed_value_specs:\n+ param_path = ['Parameters', value, 'Default']\n+ message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n+ matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n \n return matches\n", "issue": "Getting 'W2030:You must specify a valid Default value' in 0.17.1\n*cfn-lint version: (`cfn-lint --version`)*\r\n`cfn-lint 0.17.1`\r\n\r\n*Description of issue.*\r\n```\r\n[cfn-lint] W2030:You must specify a valid Default value for DemoInstanceType (/Demo/DemoInstanceType).\r\nValid values are ['a1.2xlarge', 'a1.4xlarge', 'a1.large', 'a1.medium', 'a1.xlarge', 'c1.medium', 'c1.xlarge', 'c3.2xlarge', 'c3.4xlarge', 'c3.8xlarge', 'c3.large', 'c3.xlarge', 'c4.2xlarge', 'c4.4xlarge', 'c4.8xlarge', 'c4.large', 'c4.xlarge', 'c5.18xlarge', 'c5.2xlarge', 'c5.4xlarge', 'c5.9xlarge', 'c5.large', 'c5.xlarge', 'c5d.18xlarge', 'c5d.2xlarge', 'c5d.4xlarge', 'c5d.9xlarge', 'c5d.large', 'c5d.xlarge', 'c5n.18xlarge', 'c5n.2xlarge', 'c5n.4xlarge', 'c5n.9xlarge', 'c5n.large', 'c5n.xlarge', 'cc2.8xlarge', 'cr1.8xlarge', 'd2.2xlarge', 'd2.4xlarge', 'd2.8xlarge', 'd2.xlarge', 'f1.16xlarge', 'f1.2xlarge', 'f1.4xlarge', 'g2.2xlarge', 'g2.8xlarge', 'g3.16xlarge', 
'g3.4xlarge', 'g3.8xlarge', 'g3s.xlarge', 'h1.16xlarge', 'h1.2xlarge', 'h1.4xlarge', 'h1.8xlarge', 'hs1.8xlarge', 'i2.2xlarge', 'i2.4xlarge', 'i2.8xlarge', 'i2.xlarge', 'i3.16xlarge', 'i3.2xlarge', 'i3.4xlarge', 'i3.8xlarge', 'i3.large', 'i3.xlarge', 'm1.large', 'm1.medium', 'm1.small', 'm1.xlarge', 'm2.2xlarge', 'm2.4xlarge', 'm2.xlarge', 'm3.2xlarge', 'm3.large', 'm3.medium', 'm3.xlarge', 'm4.10xlarge', 'm4.16xlarge', 'm4.2xlarge', 'm4.4xlarge', 'm4.large', 'm4.xlarge', 'm5.12xlarge', 'm5.24xlarge', 'm5.2xlarge', 'm5.4xlarge', 'm5.large', 'm5.metal', 'm5.xlarge', 'm5a.12xlarge', 'm5a.24xlarge', 'm5a.2xlarge', 'm5a.4xlarge', 'm5a.large', 'm5a.xlarge', 'm5ad.12xlarge', 'm5ad.24xlarge', 'm5ad.2xlarge', 'm5ad.4xlarge', 'm5ad.large', 'm5ad.xlarge', 'm5d.12xlarge', 'm5d.24xlarge', 'm5d.2xlarge', 'm5d.4xlarge', 'm5d.large', 'm5d.metal', 'm5d.xlarge', 'p2.16xlarge', 'p2.8xlarge', 'p2.xlarge', 'p3.16xlarge', 'p3.2xlarge', 'p3.8xlarge', 'p3dn.24xlarge', 'r3.2xlarge', 'r3.4xlarge', 'r3.8xlarge', 'r3.large', 'r3.xlarge', 'r4.16xlarge', 'r4.2xlarge', 'r4.4xlarge', 'r4.8xlarge', 'r4.large', 'r4.xlarge', 'r5.12xlarge', 'r5.24xlarge', 'r5.2xlarge', 'r5.4xlarge', 'r5.large', 'r5.xlarge', 'r5a.12xlarge', 'r5a.24xlarge', 'r5a.2xlarge', 'r5a.4xlarge', 'r5a.large', 'r5a.xlarge', 'r5ad.12xlarge', 'r5ad.24xlarge', 'r5ad.2xlarge', 'r5ad.4xlarge', 'r5ad.large', 'r5ad.xlarge', 'r5d.12xlarge', 'r5d.24xlarge', 'r5d.2xlarge', 'r5d.4xlarge', 'r5d.large', 'r5d.xlarge', 't1.micro', 't2.2xlarge', 't2.large', 't2.medium', 't2.micro', 't2.nano', 't2.small', 't2.xlarge', 't3.2xlarge', 't3.large', 't3.medium', 't3.micro', 't3.nano', 't3.small', 't3.xlarge', 'x1.16xlarge', 'x1.32xlarge', 'x1e.16xlarge', 'x1e.2xlarge', 'x1e.32xlarge', 'x1e.4xlarge', 'x1e.8xlarge', 'x1e.xlarge', 'z1d.12xlarge', 'z1d.2xlarge', 'z1d.3xlarge', 'z1d.6xlarge', 'z1d.large', 'z1d.xlarge']\r\n```\r\n\r\nThe CloudFormation parameter is :\r\n```\r\n DemoInstanceType:\r\n Default: /Demo/DemoInstanceType # Recommend t3.nano\r\n Description: EC2 instance type to use to create the collector host\r\n Type: AWS::SSM::Parameter::Value<String>\r\n```\r\n\r\nThe value of the SSM parameter is `t3.nano`\r\n\r\nI have an older project using the same pattern and the virtual environment still has cfn-lint version 0.12.0. It's not raising this complaint. I verified by updating to latest (0.17.1) and the problem cropped up. When I downgraded back to 0.12.0, the problem went away.\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if allowed_value not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n elif default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if default_value not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n 
matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}]} | 3,489 | 653 |
gh_patches_debug_9753 | rasdani/github-patches | git_diff | bentoml__BentoML-1625 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deployment on remote Yatai server fails due to injection issue
**Describe the bug**
Attempting to deploy to SageMaker or Lambda fails with this error:
```
Error: sagemaker deploy failed: INTERNAL:<dependency_injector.wiring.Provide object at 0x11f748be0> has type Provide, but expected one of: bytes, unicode
```
**To Reproduce**
**This is based on the latest version of the code as of this writing**
- Start remote Yatai server
- Configure BentoML to use the remote Yatai server (e.g. by modifying `default_bentoml.yml`)
- Start a deployment to SageMaker or Lambda (without specifying a namespace through the `--namespace` option)
- The error message above is shown
**Expected behavior**
Deployment should proceed normally, and the error message should not be displayed.
**Environment:**
- BentoML version 0.12.1+24.g4019bac.dirty
**Additional context**
After some initial debugging, the error appears to originate from this line: https://github.com/bentoml/BentoML/blob/4019bac4af320bad73bf960f6bd2d617f3fd4a52/bentoml/yatai/yatai_service_impl.py#L106
`self.default_namespace` is not wired / injected properly and is instead left as a bare `Provide` marker object. This causes failures downstream where a string is expected. A workaround is to pass the namespace explicitly when deploying via the CLI (`--namespace`).
My hunch is that `YataiServiceImpl` does not get properly wired/injected because it is wrapped in the `get_yatai_service_impl` factory here: https://github.com/bentoml/BentoML/blob/4019bac4af320bad73bf960f6bd2d617f3fd4a52/bentoml/yatai/yatai_service_impl.py#L74
I have little experience with dependency injection so couldn't figure out _why_ it wouldn't get wired properly.
</issue>
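To make the wiring failure described above concrete, here is a minimal sketch of `dependency-injector` semantics using made-up container and class names (it is not BentoML code): `Provide[...]` defaults are only substituted in modules explicitly passed to `container.wire()`, so any module left off that list keeps the raw `Provide` marker — matching the `has type Provide, but expected one of: bytes, unicode` symptom.

```python
# Illustrative only: hypothetical container/service names, not BentoML internals.
import sys

from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, inject


class Container(containers.DeclarativeContainer):
    config = providers.Configuration()


class ServiceImpl:
    @inject
    def __init__(self, namespace: str = Provide[Container.config.namespace]):
        # If this module is never passed to container.wire(), `namespace` stays a
        # Provide marker object instead of becoming the configured string.
        self.namespace = namespace


if __name__ == "__main__":
    container = Container()
    container.config.from_dict({"namespace": "dev"})
    container.wire(modules=[sys.modules[__name__]])  # wiring only affects listed modules
    print(ServiceImpl().namespace)  # "dev" once wired; a Provide object otherwise
```

Under that reading, the fix is simply to ensure the module defining the `@inject`-decorated implementation is included in the `container.wire(modules=[...])` call, which is what the patch later in this row does.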
<code>
[start of bentoml/configuration/__init__.py]
1 # Copyright 2019 Atalaya Tech, Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import logging
17 from functools import lru_cache
18
19 from bentoml import __version__, _version as version_mod
20
21
22 # Note this file is loaded prior to logging being configured, thus logger is only
23 # used within functions in this file
24 logger = logging.getLogger(__name__)
25
26
27 DEBUG_ENV_VAR = "BENTOML_DEBUG"
28
29
30 def expand_env_var(env_var):
31 """Expands potentially nested env var by repeatedly applying `expandvars` and
32 `expanduser` until interpolation stops having any effect.
33 """
34 if not env_var:
35 return env_var
36 while True:
37 interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))
38 if interpolated == env_var:
39 return interpolated
40 else:
41 env_var = interpolated
42
43
44 # This is used as default for config('core', 'bentoml_deploy_version') - which is used
45 # for getting the BentoML PyPI version string or the URL to a BentoML sdist, indicating
46 # the BentoML module to be used when loading and using a saved BentoService bundle.
47 # This is useful when using customized BentoML fork/branch or when working with
48 # development branches of BentoML
49 BENTOML_VERSION = __version__
50 # e.g. from '0.4.2+5.g6cac97f.dirty' to '0.4.2'
51 LAST_PYPI_RELEASE_VERSION = __version__.split('+')[0]
52
53
54 def _is_pip_installed_bentoml():
55 is_installed_package = hasattr(version_mod, 'version_json')
56 is_tagged = not __version__.startswith('0+untagged')
57 is_clean = not version_mod.get_versions()['dirty']
58 return is_installed_package and is_tagged and is_clean
59
60
61 def get_local_config_file():
62 if "BENTOML_CONFIG" in os.environ:
63 # User local config file for customizing bentoml
64 return expand_env_var(os.environ.get("BENTOML_CONFIG"))
65 return None
66
67
68 @lru_cache(maxsize=1)
69 def get_bentoml_deploy_version(bentoml_deploy_version: str):
70 """
71 BentoML version to use for generated docker image or serverless function bundle to
72 be deployed, this can be changed to an url to your fork of BentoML on github, or an
73 url to your custom BentoML build, for example:
74
75 bentoml_deploy_version = git+https://github.com/{username}/bentoml.git@{branch}
76 """
77
78 if bentoml_deploy_version != LAST_PYPI_RELEASE_VERSION:
79 logger.info(f"Setting BentoML deploy version to '{bentoml_deploy_version}'")
80
81 if LAST_PYPI_RELEASE_VERSION != BENTOML_VERSION:
82 if _is_pip_installed_bentoml():
83 logger.warning(
84 "Using BentoML not from official PyPI release. In order to find the "
85 "same version of BentoML when deploying your BentoService, you must "
86 "set the 'core/bentoml_deploy_version' config to a http/git location "
87 "of your BentoML fork, e.g.: 'bentoml_deploy_version = "
88 "git+https://github.com/{username}/bentoml.git@{branch}'"
89 )
90 else:
91 logger.warning(
92 "Using BentoML installed in `editable` model, the local BentoML "
93 "repository including all code changes will be packaged together with "
94 "saved bundle created, under the './bundled_pip_dependencies' "
95 "directory of the saved bundle."
96 )
97 return bentoml_deploy_version
98
99
100 def set_debug_mode(enabled: bool):
101 os.environ[DEBUG_ENV_VAR] = str(enabled)
102
103 # reconfigure logging
104 from bentoml.utils.log import configure_logging
105
106 configure_logging()
107
108 logger.debug(
109 f"Setting debug mode: {'ON' if enabled else 'OFF'} for current session"
110 )
111
112
113 def get_debug_mode():
114 if DEBUG_ENV_VAR in os.environ:
115 return os.environ[DEBUG_ENV_VAR].lower() == "true"
116 return False
117
118
119 def inject_dependencies():
120 """Inject dependencies and configuration to BentoML packages"""
121
122 from timeit import default_timer as timer
123
124 start = timer()
125
126 logger.debug("Start dependency injection")
127
128 from bentoml.configuration.containers import BentoMLContainer, BentoMLConfiguration
129
130 config_file = get_local_config_file()
131 if config_file and config_file.endswith(".yml"):
132 configuration = BentoMLConfiguration(override_config_file=config_file)
133 else:
134 configuration = BentoMLConfiguration()
135
136 container = BentoMLContainer()
137 container.config.from_dict(configuration.as_dict())
138
139 from bentoml import (
140 marshal,
141 server,
142 tracing,
143 cli,
144 adapters,
145 saved_bundle,
146 service,
147 )
148 from bentoml.yatai import yatai_service
149 from bentoml.yatai.repository import s3_repository, gcs_repository
150
151 container.wire(
152 modules=[yatai_service, s3_repository, gcs_repository],
153 packages=[marshal, server, tracing, cli, adapters, saved_bundle, service],
154 )
155
156 end = timer()
157
158 logger.debug("Dependency injection completed in %.3f seconds", end - start)
159
[end of bentoml/configuration/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/bentoml/configuration/__init__.py b/bentoml/configuration/__init__.py
--- a/bentoml/configuration/__init__.py
+++ b/bentoml/configuration/__init__.py
@@ -146,10 +146,11 @@
service,
)
from bentoml.yatai import yatai_service
+ from bentoml.yatai import yatai_service_impl
from bentoml.yatai.repository import s3_repository, gcs_repository
container.wire(
- modules=[yatai_service, s3_repository, gcs_repository],
+ modules=[yatai_service, s3_repository, gcs_repository, yatai_service_impl],
packages=[marshal, server, tracing, cli, adapters, saved_bundle, service],
)
| {"golden_diff": "diff --git a/bentoml/configuration/__init__.py b/bentoml/configuration/__init__.py\n--- a/bentoml/configuration/__init__.py\n+++ b/bentoml/configuration/__init__.py\n@@ -146,10 +146,11 @@\n service,\n )\n from bentoml.yatai import yatai_service\n+ from bentoml.yatai import yatai_service_impl\n from bentoml.yatai.repository import s3_repository, gcs_repository\n \n container.wire(\n- modules=[yatai_service, s3_repository, gcs_repository],\n+ modules=[yatai_service, s3_repository, gcs_repository, yatai_service_impl],\n packages=[marshal, server, tracing, cli, adapters, saved_bundle, service],\n )\n", "issue": "Deployment on remote Yatai server fails due to injection issue\n**Describe the bug**\r\n\r\nAttempting to deploy to SageMaker or Lambda fails with this error:\r\n\r\n```\r\nError: sagemaker deploy failed: INTERNAL:<dependency_injector.wiring.Provide object at 0x11f748be0> has type Provide, but expected one of: bytes, unicode\r\n```\r\n\r\n**To Reproduce**\r\n\r\n**This is based on the latest version of the code as of this writing**\r\n\r\n- Start remote Yatai server\r\n- Configure BentoML to use the remote Yatai server (e.g. by modifying `default_bentoml.yml`\r\n- Start a deployment to SageMaker or Lambda (without specifying a namespace through the `--namespace` option\r\n- The error message above is shown\r\n\r\n**Expected behavior**\r\nDeployment should proceed normally, and the error message should not be displayed.\r\n\r\n**Environment:**\r\n- BentoML version 0.12.1+24.g4019bac.dirty\r\n\r\n\r\n**Additional context**\r\n\r\nAfter some initial debugging, the error appears to originate from this line: https://github.com/bentoml/BentoML/blob/4019bac4af320bad73bf960f6bd2d617f3fd4a52/bentoml/yatai/yatai_service_impl.py#L106\r\n\r\n`self.default_namespace` is not wired / injected properly, and will instead be a `Provide` object. This causes issues downstream as a string is expected. 
A workaround is to specify the environment when deploying via the CLI (`--namespace`).\r\n\r\nMy hunch is that `YataiServiceImpl` does not get properly wired/injected due to it being wrapped in the `get_yatai_service_impl` method here:https://github.com/bentoml/BentoML/blob/4019bac4af320bad73bf960f6bd2d617f3fd4a52/bentoml/yatai/yatai_service_impl.py#L74 \r\n\r\nI have little experience with dependency injection so couldn't figure out _why_ it wouldn't get wired properly.\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 Atalaya Tech, Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport logging\nfrom functools import lru_cache\n\nfrom bentoml import __version__, _version as version_mod\n\n\n# Note this file is loaded prior to logging being configured, thus logger is only\n# used within functions in this file\nlogger = logging.getLogger(__name__)\n\n\nDEBUG_ENV_VAR = \"BENTOML_DEBUG\"\n\n\ndef expand_env_var(env_var):\n \"\"\"Expands potentially nested env var by repeatedly applying `expandvars` and\n `expanduser` until interpolation stops having any effect.\n \"\"\"\n if not env_var:\n return env_var\n while True:\n interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))\n if interpolated == env_var:\n return interpolated\n else:\n env_var = interpolated\n\n\n# This is used as default for config('core', 'bentoml_deploy_version') - which is used\n# for getting the BentoML PyPI version string or the URL to a BentoML sdist, indicating\n# the BentoML module to be used when loading and using a saved BentoService bundle.\n# This is useful when using customized BentoML fork/branch or when working with\n# development branches of BentoML\nBENTOML_VERSION = __version__\n# e.g. from '0.4.2+5.g6cac97f.dirty' to '0.4.2'\nLAST_PYPI_RELEASE_VERSION = __version__.split('+')[0]\n\n\ndef _is_pip_installed_bentoml():\n is_installed_package = hasattr(version_mod, 'version_json')\n is_tagged = not __version__.startswith('0+untagged')\n is_clean = not version_mod.get_versions()['dirty']\n return is_installed_package and is_tagged and is_clean\n\n\ndef get_local_config_file():\n if \"BENTOML_CONFIG\" in os.environ:\n # User local config file for customizing bentoml\n return expand_env_var(os.environ.get(\"BENTOML_CONFIG\"))\n return None\n\n\n@lru_cache(maxsize=1)\ndef get_bentoml_deploy_version(bentoml_deploy_version: str):\n \"\"\"\n BentoML version to use for generated docker image or serverless function bundle to\n be deployed, this can be changed to an url to your fork of BentoML on github, or an\n url to your custom BentoML build, for example:\n\n bentoml_deploy_version = git+https://github.com/{username}/bentoml.git@{branch}\n \"\"\"\n\n if bentoml_deploy_version != LAST_PYPI_RELEASE_VERSION:\n logger.info(f\"Setting BentoML deploy version to '{bentoml_deploy_version}'\")\n\n if LAST_PYPI_RELEASE_VERSION != BENTOML_VERSION:\n if _is_pip_installed_bentoml():\n logger.warning(\n \"Using BentoML not from official PyPI release. 
In order to find the \"\n \"same version of BentoML when deploying your BentoService, you must \"\n \"set the 'core/bentoml_deploy_version' config to a http/git location \"\n \"of your BentoML fork, e.g.: 'bentoml_deploy_version = \"\n \"git+https://github.com/{username}/bentoml.git@{branch}'\"\n )\n else:\n logger.warning(\n \"Using BentoML installed in `editable` model, the local BentoML \"\n \"repository including all code changes will be packaged together with \"\n \"saved bundle created, under the './bundled_pip_dependencies' \"\n \"directory of the saved bundle.\"\n )\n return bentoml_deploy_version\n\n\ndef set_debug_mode(enabled: bool):\n os.environ[DEBUG_ENV_VAR] = str(enabled)\n\n # reconfigure logging\n from bentoml.utils.log import configure_logging\n\n configure_logging()\n\n logger.debug(\n f\"Setting debug mode: {'ON' if enabled else 'OFF'} for current session\"\n )\n\n\ndef get_debug_mode():\n if DEBUG_ENV_VAR in os.environ:\n return os.environ[DEBUG_ENV_VAR].lower() == \"true\"\n return False\n\n\ndef inject_dependencies():\n \"\"\"Inject dependencies and configuration to BentoML packages\"\"\"\n\n from timeit import default_timer as timer\n\n start = timer()\n\n logger.debug(\"Start dependency injection\")\n\n from bentoml.configuration.containers import BentoMLContainer, BentoMLConfiguration\n\n config_file = get_local_config_file()\n if config_file and config_file.endswith(\".yml\"):\n configuration = BentoMLConfiguration(override_config_file=config_file)\n else:\n configuration = BentoMLConfiguration()\n\n container = BentoMLContainer()\n container.config.from_dict(configuration.as_dict())\n\n from bentoml import (\n marshal,\n server,\n tracing,\n cli,\n adapters,\n saved_bundle,\n service,\n )\n from bentoml.yatai import yatai_service\n from bentoml.yatai.repository import s3_repository, gcs_repository\n\n container.wire(\n modules=[yatai_service, s3_repository, gcs_repository],\n packages=[marshal, server, tracing, cli, adapters, saved_bundle, service],\n )\n\n end = timer()\n\n logger.debug(\"Dependency injection completed in %.3f seconds\", end - start)\n", "path": "bentoml/configuration/__init__.py"}]} | 2,667 | 182 |
gh_patches_debug_3831 | rasdani/github-patches | git_diff | pantsbuild__pants-18678 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong version of Python used to build `pex_binary` targets in `2.16.0rc0`
**Describe the bug**
* Our CI image contains both Python 3.8 and 3.9.
* We set `[python].interpreter_constraints = ["==3.8.*"]` in `pants.toml`.
* At least one `pex_binary` depends on a version of `numpy` that (for reasons we haven't dug into) only works with Python 3.8, not Python 3.9
* We haven't investigated the build failures because we expect everything to run against Python 3.8 as configured by `[python].interpreter_constraints`
After upgrading to Pants 2.16.0rc0 we see failures building the `pex_binary` in CI, with errors that indicate the build process is trying to build a dependency (`numpy`) against Python 3.9 instead of the expected/configured Python 3.8
This is very concerning. We still run Python 3.8 everywhere in production, so I don't want Pexes to be building against Python 3.9. I've downgraded us back to 2.16.0a1 for now and confirmed this fixes the problem.
**Pants version**
2.16.0rc0
**OS**
Linux
**Additional info**
I am suspicious of https://github.com/pantsbuild/pants/commit/d3d325777952435186be42443fb28fde6771fae7 and https://github.com/pantsbuild/pants/commit/e8d387ba6b4d4502e3b6db5ae68ffe7beeeb10a7
</issue>
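As a diagnostic aid (not part of the report above), one way to confirm which interpreter a built PEX was actually resolved for is to read the `PEX-INFO` metadata stored inside the archive. The sketch below assumes the standard PEX zip layout; the output path is an example, not taken from the issue.

```python
# Hedged diagnostic sketch: a .pex file is a zip archive with a JSON PEX-INFO entry
# that records interpreter constraints and the resolved distributions (wheels).
import json
import zipfile


def read_pex_info(pex_path: str) -> dict:
    with zipfile.ZipFile(pex_path) as archive:
        return json.loads(archive.read("PEX-INFO"))


if __name__ == "__main__":
    info = read_pex_info("dist/example.pex")  # example path, not from the issue
    print(info.get("interpreter_constraints"))    # e.g. ["CPython==3.8.*"] if recorded
    print(sorted(info.get("distributions", {})))  # wheel names expose cp38 vs cp39 tags
```

Inspecting the wheel tags (cp38 vs cp39) makes it easy to tell whether the `[python].interpreter_constraints` setting was actually honored for a given binary.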
<code>
[start of src/python/pants/backend/python/util_rules/pex_cli.py]
1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 from dataclasses import dataclass
8 from typing import Iterable, List, Mapping, Optional, Tuple
9
10 from pants.backend.python.subsystems.python_native_code import PythonNativeCodeSubsystem
11 from pants.backend.python.subsystems.setup import PythonSetup
12 from pants.backend.python.util_rules import pex_environment
13 from pants.backend.python.util_rules.pex_environment import PexEnvironment, PexSubsystem
14 from pants.core.util_rules import adhoc_binaries, external_tool
15 from pants.core.util_rules.adhoc_binaries import PythonBuildStandaloneBinary
16 from pants.core.util_rules.external_tool import (
17 DownloadedExternalTool,
18 ExternalToolRequest,
19 TemplatedExternalTool,
20 )
21 from pants.engine.fs import CreateDigest, Digest, Directory, MergeDigests
22 from pants.engine.internals.selectors import MultiGet
23 from pants.engine.platform import Platform
24 from pants.engine.process import Process, ProcessCacheScope
25 from pants.engine.rules import Get, collect_rules, rule
26 from pants.option.global_options import GlobalOptions, ca_certs_path_to_file_content
27 from pants.util.frozendict import FrozenDict
28 from pants.util.logging import LogLevel
29 from pants.util.meta import classproperty
30 from pants.util.strutil import create_path_env_var
31
32
33 class PexCli(TemplatedExternalTool):
34 options_scope = "pex-cli"
35 name = "pex"
36 help = "The PEX (Python EXecutable) tool (https://github.com/pantsbuild/pex)."
37
38 default_version = "v2.1.131"
39 default_url_template = "https://github.com/pantsbuild/pex/releases/download/{version}/pex"
40 version_constraints = ">=2.1.124,<3.0"
41
42 @classproperty
43 def default_known_versions(cls):
44 return [
45 "|".join(
46 (
47 cls.default_version,
48 plat,
49 "28b9dfc7e2f5f49f1e189b79eba3dd79ca2186f765009ea02dd6095f5359bf59",
50 "4084520",
51 )
52 )
53 for plat in ["macos_arm64", "macos_x86_64", "linux_x86_64", "linux_arm64"]
54 ]
55
56
57 @dataclass(frozen=True)
58 class PexCliProcess:
59 subcommand: tuple[str, ...]
60 extra_args: tuple[str, ...]
61 description: str = dataclasses.field(compare=False)
62 additional_input_digest: Optional[Digest]
63 extra_env: Optional[FrozenDict[str, str]]
64 output_files: Optional[Tuple[str, ...]]
65 output_directories: Optional[Tuple[str, ...]]
66 level: LogLevel
67 concurrency_available: int
68 cache_scope: ProcessCacheScope
69
70 def __init__(
71 self,
72 *,
73 subcommand: Iterable[str],
74 extra_args: Iterable[str],
75 description: str,
76 additional_input_digest: Optional[Digest] = None,
77 extra_env: Optional[Mapping[str, str]] = None,
78 output_files: Optional[Iterable[str]] = None,
79 output_directories: Optional[Iterable[str]] = None,
80 level: LogLevel = LogLevel.INFO,
81 concurrency_available: int = 0,
82 cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,
83 ) -> None:
84 object.__setattr__(self, "subcommand", tuple(subcommand))
85 object.__setattr__(self, "extra_args", tuple(extra_args))
86 object.__setattr__(self, "description", description)
87 object.__setattr__(self, "additional_input_digest", additional_input_digest)
88 object.__setattr__(self, "extra_env", FrozenDict(extra_env) if extra_env else None)
89 object.__setattr__(self, "output_files", tuple(output_files) if output_files else None)
90 object.__setattr__(
91 self, "output_directories", tuple(output_directories) if output_directories else None
92 )
93 object.__setattr__(self, "level", level)
94 object.__setattr__(self, "concurrency_available", concurrency_available)
95 object.__setattr__(self, "cache_scope", cache_scope)
96
97 self.__post_init__()
98
99 def __post_init__(self) -> None:
100 if "--pex-root-path" in self.extra_args:
101 raise ValueError("`--pex-root` flag not allowed. We set its value for you.")
102
103
104 class PexPEX(DownloadedExternalTool):
105 """The Pex PEX binary."""
106
107
108 @rule
109 async def download_pex_pex(pex_cli: PexCli, platform: Platform) -> PexPEX:
110 pex_pex = await Get(DownloadedExternalTool, ExternalToolRequest, pex_cli.get_request(platform))
111 return PexPEX(digest=pex_pex.digest, exe=pex_pex.exe)
112
113
114 @rule
115 async def setup_pex_cli_process(
116 request: PexCliProcess,
117 pex_pex: PexPEX,
118 pex_env: PexEnvironment,
119 bootstrap_python: PythonBuildStandaloneBinary,
120 python_native_code: PythonNativeCodeSubsystem.EnvironmentAware,
121 global_options: GlobalOptions,
122 pex_subsystem: PexSubsystem,
123 python_setup: PythonSetup,
124 ) -> Process:
125 tmpdir = ".tmp"
126 gets: List[Get] = [Get(Digest, CreateDigest([Directory(tmpdir)]))]
127
128 cert_args = []
129 if global_options.ca_certs_path:
130 ca_certs_fc = ca_certs_path_to_file_content(global_options.ca_certs_path)
131 gets.append(Get(Digest, CreateDigest((ca_certs_fc,))))
132 cert_args = ["--cert", ca_certs_fc.path]
133
134 digests_to_merge = [pex_pex.digest]
135 digests_to_merge.extend(await MultiGet(gets))
136 if request.additional_input_digest:
137 digests_to_merge.append(request.additional_input_digest)
138 input_digest = await Get(Digest, MergeDigests(digests_to_merge))
139
140 global_args = [
141 # Ensure Pex and its subprocesses create temporary files in the the process execution
142 # sandbox. It may make sense to do this generally for Processes, but in the short term we
143 # have known use cases where /tmp is too small to hold large wheel downloads Pex is asked to
144 # perform. Making the TMPDIR local to the sandbox allows control via
145 # --local-execution-root-dir for the local case and should work well with remote cases where
146 # a remoting implementation has to allow for processes producing large binaries in a
147 # sandbox to support reasonable workloads. Communicating TMPDIR via --tmpdir instead of via
148 # environment variable allows Pex to absolutize the path ensuring subprocesses that change
149 # CWD can find the TMPDIR.
150 "--tmpdir",
151 tmpdir,
152 ]
153
154 if request.concurrency_available > 0:
155 global_args.extend(["--jobs", "{pants_concurrency}"])
156
157 verbosity_args = [f"-{'v' * pex_subsystem.verbosity}"] if pex_subsystem.verbosity > 0 else []
158
159 # NB: We should always pass `--python-path`, as that tells Pex where to look for interpreters
160 # when `--python` isn't an absolute path.
161 resolve_args = [
162 *cert_args,
163 "--python-path",
164 create_path_env_var(pex_env.interpreter_search_paths),
165 ]
166 # All old-style pex runs take the --pip-version flag, but only certain subcommands of the
167 # `pex3` console script do. So if invoked with a subcommand, the caller must selectively
168 # set --pip-version only on subcommands that take it.
169 pip_version_args = (
170 [] if request.subcommand else ["--pip-version", python_setup.pip_version.value]
171 )
172 args = [
173 *request.subcommand,
174 *global_args,
175 *verbosity_args,
176 *pip_version_args,
177 *resolve_args,
178 # NB: This comes at the end because it may use `--` passthrough args, # which must come at
179 # the end.
180 *request.extra_args,
181 ]
182
183 complete_pex_env = pex_env.in_sandbox(working_directory=None)
184 normalized_argv = complete_pex_env.create_argv(pex_pex.exe, *args)
185 env = {
186 **complete_pex_env.environment_dict(python=bootstrap_python),
187 **python_native_code.subprocess_env_vars,
188 **(request.extra_env or {}),
189 # If a subcommand is used, we need to use the `pex3` console script.
190 **({"PEX_SCRIPT": "pex3"} if request.subcommand else {}),
191 }
192
193 return Process(
194 normalized_argv,
195 description=request.description,
196 input_digest=input_digest,
197 env=env,
198 output_files=request.output_files,
199 output_directories=request.output_directories,
200 append_only_caches=complete_pex_env.append_only_caches,
201 immutable_input_digests=bootstrap_python.immutable_input_digests,
202 level=request.level,
203 concurrency_available=request.concurrency_available,
204 cache_scope=request.cache_scope,
205 )
206
207
208 def rules():
209 return [
210 *collect_rules(),
211 *external_tool.rules(),
212 *pex_environment.rules(),
213 *adhoc_binaries.rules(),
214 ]
215
[end of src/python/pants/backend/python/util_rules/pex_cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/src/python/pants/backend/python/util_rules/pex_cli.py b/src/python/pants/backend/python/util_rules/pex_cli.py
--- a/src/python/pants/backend/python/util_rules/pex_cli.py
+++ b/src/python/pants/backend/python/util_rules/pex_cli.py
@@ -37,7 +37,7 @@
default_version = "v2.1.131"
default_url_template = "https://github.com/pantsbuild/pex/releases/download/{version}/pex"
- version_constraints = ">=2.1.124,<3.0"
+ version_constraints = ">=2.1.129,<3.0"
@classproperty
def default_known_versions(cls):
| {"golden_diff": "diff --git a/src/python/pants/backend/python/util_rules/pex_cli.py b/src/python/pants/backend/python/util_rules/pex_cli.py\n--- a/src/python/pants/backend/python/util_rules/pex_cli.py\n+++ b/src/python/pants/backend/python/util_rules/pex_cli.py\n@@ -37,7 +37,7 @@\n \n default_version = \"v2.1.131\"\n default_url_template = \"https://github.com/pantsbuild/pex/releases/download/{version}/pex\"\n- version_constraints = \">=2.1.124,<3.0\"\n+ version_constraints = \">=2.1.129,<3.0\"\n \n @classproperty\n def default_known_versions(cls):\n", "issue": "Wrong version of Python used to build `pex_binary` targets in `2.16.0rc0`\n**Describe the bug**\r\n\r\n* Our CI image contains both Python 3.8 and 3.9.\r\n* We set `[python].interpreter_constraints = [\"==3.8.*\"]` in `pants.toml`.\r\n* At least one `pex_binary` depends on a version of `numpy` that (for reasons we haven't dug into) only works with Python 3.8, not Python 3.9\r\n * We haven't investigated the build failures because we expect everything to run against Python 3.8 as configured by `[python].interpreter_constraints`\r\n\r\nAfter upgrading to Pants 2.16.0rc0 we see failures building the `pex_binary` in CI, with errors that indicate the build process is trying to build a dependency (`numpy`) against Python 3.9 instead of the expected/configured Python 3.8\r\n\r\nThis is very concerning. We still run Python 3.8 everywhere in production, so I don't want Pexes to be building against Python 3.9. I've downgraded us back to 2.16.0a1 for now and confirmed this fixes the problem.\r\n\r\n**Pants version**\r\n\r\n2.16.0rc0\r\n\r\n**OS**\r\n\r\nLinux\r\n\r\n**Additional info**\r\n\r\nI am suspicious of https://github.com/pantsbuild/pants/commit/d3d325777952435186be42443fb28fde6771fae7 and https://github.com/pantsbuild/pants/commit/e8d387ba6b4d4502e3b6db5ae68ffe7beeeb10a7\n", "before_files": [{"content": "# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport dataclasses\nfrom dataclasses import dataclass\nfrom typing import Iterable, List, Mapping, Optional, Tuple\n\nfrom pants.backend.python.subsystems.python_native_code import PythonNativeCodeSubsystem\nfrom pants.backend.python.subsystems.setup import PythonSetup\nfrom pants.backend.python.util_rules import pex_environment\nfrom pants.backend.python.util_rules.pex_environment import PexEnvironment, PexSubsystem\nfrom pants.core.util_rules import adhoc_binaries, external_tool\nfrom pants.core.util_rules.adhoc_binaries import PythonBuildStandaloneBinary\nfrom pants.core.util_rules.external_tool import (\n DownloadedExternalTool,\n ExternalToolRequest,\n TemplatedExternalTool,\n)\nfrom pants.engine.fs import CreateDigest, Digest, Directory, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.platform import Platform\nfrom pants.engine.process import Process, ProcessCacheScope\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.option.global_options import GlobalOptions, ca_certs_path_to_file_content\nfrom pants.util.frozendict import FrozenDict\nfrom pants.util.logging import LogLevel\nfrom pants.util.meta import classproperty\nfrom pants.util.strutil import create_path_env_var\n\n\nclass PexCli(TemplatedExternalTool):\n options_scope = \"pex-cli\"\n name = \"pex\"\n help = \"The PEX (Python EXecutable) tool (https://github.com/pantsbuild/pex).\"\n\n default_version = \"v2.1.131\"\n default_url_template = 
\"https://github.com/pantsbuild/pex/releases/download/{version}/pex\"\n version_constraints = \">=2.1.124,<3.0\"\n\n @classproperty\n def default_known_versions(cls):\n return [\n \"|\".join(\n (\n cls.default_version,\n plat,\n \"28b9dfc7e2f5f49f1e189b79eba3dd79ca2186f765009ea02dd6095f5359bf59\",\n \"4084520\",\n )\n )\n for plat in [\"macos_arm64\", \"macos_x86_64\", \"linux_x86_64\", \"linux_arm64\"]\n ]\n\n\n@dataclass(frozen=True)\nclass PexCliProcess:\n subcommand: tuple[str, ...]\n extra_args: tuple[str, ...]\n description: str = dataclasses.field(compare=False)\n additional_input_digest: Optional[Digest]\n extra_env: Optional[FrozenDict[str, str]]\n output_files: Optional[Tuple[str, ...]]\n output_directories: Optional[Tuple[str, ...]]\n level: LogLevel\n concurrency_available: int\n cache_scope: ProcessCacheScope\n\n def __init__(\n self,\n *,\n subcommand: Iterable[str],\n extra_args: Iterable[str],\n description: str,\n additional_input_digest: Optional[Digest] = None,\n extra_env: Optional[Mapping[str, str]] = None,\n output_files: Optional[Iterable[str]] = None,\n output_directories: Optional[Iterable[str]] = None,\n level: LogLevel = LogLevel.INFO,\n concurrency_available: int = 0,\n cache_scope: ProcessCacheScope = ProcessCacheScope.SUCCESSFUL,\n ) -> None:\n object.__setattr__(self, \"subcommand\", tuple(subcommand))\n object.__setattr__(self, \"extra_args\", tuple(extra_args))\n object.__setattr__(self, \"description\", description)\n object.__setattr__(self, \"additional_input_digest\", additional_input_digest)\n object.__setattr__(self, \"extra_env\", FrozenDict(extra_env) if extra_env else None)\n object.__setattr__(self, \"output_files\", tuple(output_files) if output_files else None)\n object.__setattr__(\n self, \"output_directories\", tuple(output_directories) if output_directories else None\n )\n object.__setattr__(self, \"level\", level)\n object.__setattr__(self, \"concurrency_available\", concurrency_available)\n object.__setattr__(self, \"cache_scope\", cache_scope)\n\n self.__post_init__()\n\n def __post_init__(self) -> None:\n if \"--pex-root-path\" in self.extra_args:\n raise ValueError(\"`--pex-root` flag not allowed. 
We set its value for you.\")\n\n\nclass PexPEX(DownloadedExternalTool):\n \"\"\"The Pex PEX binary.\"\"\"\n\n\n@rule\nasync def download_pex_pex(pex_cli: PexCli, platform: Platform) -> PexPEX:\n pex_pex = await Get(DownloadedExternalTool, ExternalToolRequest, pex_cli.get_request(platform))\n return PexPEX(digest=pex_pex.digest, exe=pex_pex.exe)\n\n\n@rule\nasync def setup_pex_cli_process(\n request: PexCliProcess,\n pex_pex: PexPEX,\n pex_env: PexEnvironment,\n bootstrap_python: PythonBuildStandaloneBinary,\n python_native_code: PythonNativeCodeSubsystem.EnvironmentAware,\n global_options: GlobalOptions,\n pex_subsystem: PexSubsystem,\n python_setup: PythonSetup,\n) -> Process:\n tmpdir = \".tmp\"\n gets: List[Get] = [Get(Digest, CreateDigest([Directory(tmpdir)]))]\n\n cert_args = []\n if global_options.ca_certs_path:\n ca_certs_fc = ca_certs_path_to_file_content(global_options.ca_certs_path)\n gets.append(Get(Digest, CreateDigest((ca_certs_fc,))))\n cert_args = [\"--cert\", ca_certs_fc.path]\n\n digests_to_merge = [pex_pex.digest]\n digests_to_merge.extend(await MultiGet(gets))\n if request.additional_input_digest:\n digests_to_merge.append(request.additional_input_digest)\n input_digest = await Get(Digest, MergeDigests(digests_to_merge))\n\n global_args = [\n # Ensure Pex and its subprocesses create temporary files in the the process execution\n # sandbox. It may make sense to do this generally for Processes, but in the short term we\n # have known use cases where /tmp is too small to hold large wheel downloads Pex is asked to\n # perform. Making the TMPDIR local to the sandbox allows control via\n # --local-execution-root-dir for the local case and should work well with remote cases where\n # a remoting implementation has to allow for processes producing large binaries in a\n # sandbox to support reasonable workloads. Communicating TMPDIR via --tmpdir instead of via\n # environment variable allows Pex to absolutize the path ensuring subprocesses that change\n # CWD can find the TMPDIR.\n \"--tmpdir\",\n tmpdir,\n ]\n\n if request.concurrency_available > 0:\n global_args.extend([\"--jobs\", \"{pants_concurrency}\"])\n\n verbosity_args = [f\"-{'v' * pex_subsystem.verbosity}\"] if pex_subsystem.verbosity > 0 else []\n\n # NB: We should always pass `--python-path`, as that tells Pex where to look for interpreters\n # when `--python` isn't an absolute path.\n resolve_args = [\n *cert_args,\n \"--python-path\",\n create_path_env_var(pex_env.interpreter_search_paths),\n ]\n # All old-style pex runs take the --pip-version flag, but only certain subcommands of the\n # `pex3` console script do. 
So if invoked with a subcommand, the caller must selectively\n # set --pip-version only on subcommands that take it.\n pip_version_args = (\n [] if request.subcommand else [\"--pip-version\", python_setup.pip_version.value]\n )\n args = [\n *request.subcommand,\n *global_args,\n *verbosity_args,\n *pip_version_args,\n *resolve_args,\n # NB: This comes at the end because it may use `--` passthrough args, # which must come at\n # the end.\n *request.extra_args,\n ]\n\n complete_pex_env = pex_env.in_sandbox(working_directory=None)\n normalized_argv = complete_pex_env.create_argv(pex_pex.exe, *args)\n env = {\n **complete_pex_env.environment_dict(python=bootstrap_python),\n **python_native_code.subprocess_env_vars,\n **(request.extra_env or {}),\n # If a subcommand is used, we need to use the `pex3` console script.\n **({\"PEX_SCRIPT\": \"pex3\"} if request.subcommand else {}),\n }\n\n return Process(\n normalized_argv,\n description=request.description,\n input_digest=input_digest,\n env=env,\n output_files=request.output_files,\n output_directories=request.output_directories,\n append_only_caches=complete_pex_env.append_only_caches,\n immutable_input_digests=bootstrap_python.immutable_input_digests,\n level=request.level,\n concurrency_available=request.concurrency_available,\n cache_scope=request.cache_scope,\n )\n\n\ndef rules():\n return [\n *collect_rules(),\n *external_tool.rules(),\n *pex_environment.rules(),\n *adhoc_binaries.rules(),\n ]\n", "path": "src/python/pants/backend/python/util_rules/pex_cli.py"}]} | 3,513 | 161 |
gh_patches_debug_3064 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-1350 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in prostate_2D example
https://github.com/NVIDIA/NVFlare/blob/8f8f029eeecf58a85d9633357ce1ed4f8f39f655/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py#L171
`self.transform_valid` is not defined if `cache_rate=0`.
</issue>
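For clarity, here is a stripped-down sketch (an illustrative stand-in class, not the actual learner) of why the referenced line fails when `cache_rate=0`: the `else` branch reads an attribute that is never assigned anywhere, so Python raises `AttributeError`.

```python
# Minimal reproduction of the failure mode with a stand-in class.
class Learner:
    def train_config(self, cache_rate: float) -> None:
        self.transform = "shared transform chain"  # only self.transform is ever set
        if cache_rate > 0.0:
            valid_transform = self.transform
        else:
            valid_transform = self.transform_valid  # never assigned -> AttributeError
        print(valid_transform)


Learner().train_config(cache_rate=0.0)
# AttributeError: 'Learner' object has no attribute 'transform_valid'
```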
<code>
[start of examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17
18 import torch
19 import torch.optim as optim
20 from learners.supervised_learner import SupervisedLearner
21 from monai.data import CacheDataset, DataLoader, Dataset, load_decathlon_datalist
22 from monai.inferers import SimpleInferer
23 from monai.losses import DiceLoss
24 from monai.metrics import DiceMetric
25 from monai.networks.nets.unet import UNet
26 from monai.transforms import (
27 Activations,
28 AsDiscrete,
29 AsDiscreted,
30 Compose,
31 EnsureChannelFirstd,
32 EnsureType,
33 EnsureTyped,
34 LoadImaged,
35 Resized,
36 ScaleIntensityRanged,
37 )
38 from utils.custom_client_datalist_json_path import custom_client_datalist_json_path
39
40 from nvflare.apis.fl_context import FLContext
41 from nvflare.app_common.app_constant import AppConstants
42 from nvflare.app_common.pt.pt_fedproxloss import PTFedProxLoss
43
44
45 class SupervisedMonaiProstateLearner(SupervisedLearner):
46 def __init__(
47 self,
48 train_config_filename,
49 aggregation_epochs: int = 1,
50 train_task_name: str = AppConstants.TASK_TRAIN,
51 ):
52 """MONAI Learner for prostate segmentation task.
53 It inherits from SupervisedLearner.
54
55 Args:
56 train_config_filename: path for config file, this is an addition term for config loading
57 aggregation_epochs: the number of training epochs for a round.
58 train_task_name: name of the task to train the model.
59
60 Returns:
61 a Shareable with the updated local model after running `execute()`
62 """
63 super().__init__(
64 aggregation_epochs=aggregation_epochs,
65 train_task_name=train_task_name,
66 )
67 self.train_config_filename = train_config_filename
68 self.config_info = None
69
70 def train_config(self, fl_ctx: FLContext):
71 """MONAI traning configuration
72 Here, we use a json to specify the needed parameters
73 """
74
75 # Load training configurations json
76 engine = fl_ctx.get_engine()
77 ws = engine.get_workspace()
78 app_config_dir = ws.get_app_config_dir(fl_ctx.get_job_id())
79 train_config_file_path = os.path.join(app_config_dir, self.train_config_filename)
80 if not os.path.isfile(train_config_file_path):
81 self.log_error(
82 fl_ctx,
83 f"Training configuration file does not exist at {train_config_file_path}",
84 )
85 with open(train_config_file_path) as file:
86 self.config_info = json.load(file)
87
88 # Get the config_info
89 self.lr = self.config_info["learning_rate"]
90 self.fedproxloss_mu = self.config_info["fedproxloss_mu"]
91 cache_rate = self.config_info["cache_dataset"]
92 dataset_base_dir = self.config_info["dataset_base_dir"]
93 datalist_json_path = self.config_info["datalist_json_path"]
94
95 # Get datalist json
96 datalist_json_path = custom_client_datalist_json_path(datalist_json_path, self.client_id)
97
98 # Set datalist
99 train_list = load_decathlon_datalist(
100 data_list_file_path=datalist_json_path,
101 is_segmentation=True,
102 data_list_key="training",
103 base_dir=dataset_base_dir,
104 )
105 valid_list = load_decathlon_datalist(
106 data_list_file_path=datalist_json_path,
107 is_segmentation=True,
108 data_list_key="validation",
109 base_dir=dataset_base_dir,
110 )
111 self.log_info(
112 fl_ctx,
113 f"Training Size: {len(train_list)}, Validation Size: {len(valid_list)}",
114 )
115
116 # Set the training-related context
117 self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
118 self.model = UNet(
119 spatial_dims=2,
120 in_channels=1,
121 out_channels=1,
122 channels=(16, 32, 64, 128, 256),
123 strides=(2, 2, 2, 2),
124 num_res_units=2,
125 ).to(self.device)
126 self.optimizer = optim.Adam(self.model.parameters(), lr=self.lr)
127 self.criterion = DiceLoss(sigmoid=True)
128
129 if self.fedproxloss_mu > 0:
130 self.log_info(fl_ctx, f"using FedProx loss with mu {self.fedproxloss_mu}")
131 self.criterion_prox = PTFedProxLoss(mu=self.fedproxloss_mu)
132
133 self.transform = Compose(
134 [
135 LoadImaged(keys=["image", "label"]),
136 EnsureChannelFirstd(keys=["image", "label"]),
137 ScaleIntensityRanged(keys=["image", "label"], a_min=0, a_max=255, b_min=0.0, b_max=1.0),
138 Resized(
139 keys=["image", "label"],
140 spatial_size=(256, 256),
141 mode=("bilinear"),
142 align_corners=True,
143 ),
144 AsDiscreted(keys=["label"], threshold=0.5),
145 EnsureTyped(keys=["image", "label"]),
146 ]
147 )
148 self.transform_post = Compose([EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold=0.5)])
149
150 # Set dataset
151 if cache_rate > 0.0:
152 self.train_dataset = CacheDataset(
153 data=train_list,
154 transform=self.transform,
155 cache_rate=cache_rate,
156 num_workers=4,
157 )
158 self.valid_dataset = CacheDataset(
159 data=valid_list,
160 transform=self.transform,
161 cache_rate=cache_rate,
162 num_workers=4,
163 )
164 else:
165 self.train_dataset = Dataset(
166 data=train_list,
167 transform=self.transform,
168 )
169 self.valid_dataset = Dataset(
170 data=valid_list,
171 transform=self.transform_valid,
172 )
173
174 self.train_loader = DataLoader(
175 self.train_dataset,
176 batch_size=1,
177 shuffle=True,
178 num_workers=2,
179 )
180 self.valid_loader = DataLoader(
181 self.valid_dataset,
182 batch_size=1,
183 shuffle=False,
184 num_workers=2,
185 )
186
187 # Set inferer and evaluation metric
188 self.inferer = SimpleInferer()
189 self.valid_metric = DiceMetric(include_background=False, reduction="mean", get_not_nans=False)
190
[end of examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py b/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py
--- a/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py
+++ b/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py
@@ -168,7 +168,7 @@
)
self.valid_dataset = Dataset(
data=valid_list,
- transform=self.transform_valid,
+ transform=self.transform,
)
self.train_loader = DataLoader(
| {"golden_diff": "diff --git a/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py b/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py\n--- a/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py\n+++ b/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py\n@@ -168,7 +168,7 @@\n )\n self.valid_dataset = Dataset(\n data=valid_list,\n- transform=self.transform_valid,\n+ transform=self.transform,\n )\n \n self.train_loader = DataLoader(\n", "issue": "Bug in prostate_2D example\nhttps://github.com/NVIDIA/NVFlare/blob/8f8f029eeecf58a85d9633357ce1ed4f8f39f655/examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py#L171\r\n\r\n`self.transform_valid` is not defined if `cache_rate=0`.\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\nimport os\n\nimport torch\nimport torch.optim as optim\nfrom learners.supervised_learner import SupervisedLearner\nfrom monai.data import CacheDataset, DataLoader, Dataset, load_decathlon_datalist\nfrom monai.inferers import SimpleInferer\nfrom monai.losses import DiceLoss\nfrom monai.metrics import DiceMetric\nfrom monai.networks.nets.unet import UNet\nfrom monai.transforms import (\n Activations,\n AsDiscrete,\n AsDiscreted,\n Compose,\n EnsureChannelFirstd,\n EnsureType,\n EnsureTyped,\n LoadImaged,\n Resized,\n ScaleIntensityRanged,\n)\nfrom utils.custom_client_datalist_json_path import custom_client_datalist_json_path\n\nfrom nvflare.apis.fl_context import FLContext\nfrom nvflare.app_common.app_constant import AppConstants\nfrom nvflare.app_common.pt.pt_fedproxloss import PTFedProxLoss\n\n\nclass SupervisedMonaiProstateLearner(SupervisedLearner):\n def __init__(\n self,\n train_config_filename,\n aggregation_epochs: int = 1,\n train_task_name: str = AppConstants.TASK_TRAIN,\n ):\n \"\"\"MONAI Learner for prostate segmentation task.\n It inherits from SupervisedLearner.\n\n Args:\n train_config_filename: path for config file, this is an addition term for config loading\n aggregation_epochs: the number of training epochs for a round.\n train_task_name: name of the task to train the model.\n\n Returns:\n a Shareable with the updated local model after running `execute()`\n \"\"\"\n super().__init__(\n aggregation_epochs=aggregation_epochs,\n train_task_name=train_task_name,\n )\n self.train_config_filename = train_config_filename\n self.config_info = None\n\n def train_config(self, fl_ctx: FLContext):\n \"\"\"MONAI traning configuration\n Here, we use a json to specify the needed parameters\n \"\"\"\n\n # Load training configurations json\n engine = fl_ctx.get_engine()\n ws = engine.get_workspace()\n app_config_dir = ws.get_app_config_dir(fl_ctx.get_job_id())\n train_config_file_path = os.path.join(app_config_dir, self.train_config_filename)\n if not 
os.path.isfile(train_config_file_path):\n self.log_error(\n fl_ctx,\n f\"Training configuration file does not exist at {train_config_file_path}\",\n )\n with open(train_config_file_path) as file:\n self.config_info = json.load(file)\n\n # Get the config_info\n self.lr = self.config_info[\"learning_rate\"]\n self.fedproxloss_mu = self.config_info[\"fedproxloss_mu\"]\n cache_rate = self.config_info[\"cache_dataset\"]\n dataset_base_dir = self.config_info[\"dataset_base_dir\"]\n datalist_json_path = self.config_info[\"datalist_json_path\"]\n\n # Get datalist json\n datalist_json_path = custom_client_datalist_json_path(datalist_json_path, self.client_id)\n\n # Set datalist\n train_list = load_decathlon_datalist(\n data_list_file_path=datalist_json_path,\n is_segmentation=True,\n data_list_key=\"training\",\n base_dir=dataset_base_dir,\n )\n valid_list = load_decathlon_datalist(\n data_list_file_path=datalist_json_path,\n is_segmentation=True,\n data_list_key=\"validation\",\n base_dir=dataset_base_dir,\n )\n self.log_info(\n fl_ctx,\n f\"Training Size: {len(train_list)}, Validation Size: {len(valid_list)}\",\n )\n\n # Set the training-related context\n self.device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n self.model = UNet(\n spatial_dims=2,\n in_channels=1,\n out_channels=1,\n channels=(16, 32, 64, 128, 256),\n strides=(2, 2, 2, 2),\n num_res_units=2,\n ).to(self.device)\n self.optimizer = optim.Adam(self.model.parameters(), lr=self.lr)\n self.criterion = DiceLoss(sigmoid=True)\n\n if self.fedproxloss_mu > 0:\n self.log_info(fl_ctx, f\"using FedProx loss with mu {self.fedproxloss_mu}\")\n self.criterion_prox = PTFedProxLoss(mu=self.fedproxloss_mu)\n\n self.transform = Compose(\n [\n LoadImaged(keys=[\"image\", \"label\"]),\n EnsureChannelFirstd(keys=[\"image\", \"label\"]),\n ScaleIntensityRanged(keys=[\"image\", \"label\"], a_min=0, a_max=255, b_min=0.0, b_max=1.0),\n Resized(\n keys=[\"image\", \"label\"],\n spatial_size=(256, 256),\n mode=(\"bilinear\"),\n align_corners=True,\n ),\n AsDiscreted(keys=[\"label\"], threshold=0.5),\n EnsureTyped(keys=[\"image\", \"label\"]),\n ]\n )\n self.transform_post = Compose([EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold=0.5)])\n\n # Set dataset\n if cache_rate > 0.0:\n self.train_dataset = CacheDataset(\n data=train_list,\n transform=self.transform,\n cache_rate=cache_rate,\n num_workers=4,\n )\n self.valid_dataset = CacheDataset(\n data=valid_list,\n transform=self.transform,\n cache_rate=cache_rate,\n num_workers=4,\n )\n else:\n self.train_dataset = Dataset(\n data=train_list,\n transform=self.transform,\n )\n self.valid_dataset = Dataset(\n data=valid_list,\n transform=self.transform_valid,\n )\n\n self.train_loader = DataLoader(\n self.train_dataset,\n batch_size=1,\n shuffle=True,\n num_workers=2,\n )\n self.valid_loader = DataLoader(\n self.valid_dataset,\n batch_size=1,\n shuffle=False,\n num_workers=2,\n )\n\n # Set inferer and evaluation metric\n self.inferer = SimpleInferer()\n self.valid_metric = DiceMetric(include_background=False, reduction=\"mean\", get_not_nans=False)\n", "path": "examples/advanced/prostate/prostate_2D/custom/learners/supervised_monai_prostate_learner.py"}]} | 2,642 | 160 |
gh_patches_debug_1192 | rasdani/github-patches | git_diff | getredash__redash-4189 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JIRA setup: change password field name to "API Token"
While a password can be used there, it's not recommended and eventually will be deprecated.
</issue>
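For context on why the label matters: for Jira Cloud, Atlassian expects an API token (paired with the account email) in place of a password for HTTP basic auth. The snippet below is a generic illustration with placeholder URL and credentials; it is not Redash code.

```python
# Generic illustration: a Jira Cloud API token is used exactly like a password in
# HTTP basic auth, paired with the account email. All values below are placeholders.
import requests

JIRA_URL = "https://your-domain.atlassian.net"
EMAIL = "you@example.com"
API_TOKEN = "your-api-token"

response = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": "project = DEMO", "maxResults": 1},
    auth=(EMAIL, API_TOKEN),
)
response.raise_for_status()
print(response.json()["total"])
```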
<code>
[start of redash/query_runner/jql.py]
1 import re
2 from collections import OrderedDict
3
4 from redash.query_runner import *
5 from redash.utils import json_dumps, json_loads
6
7
8 # TODO: make this more general and move into __init__.py
9 class ResultSet(object):
10 def __init__(self):
11 self.columns = OrderedDict()
12 self.rows = []
13
14 def add_row(self, row):
15 for key in row.keys():
16 self.add_column(key)
17
18 self.rows.append(row)
19
20 def add_column(self, column, column_type=TYPE_STRING):
21 if column not in self.columns:
22 self.columns[column] = {'name': column, 'type': column_type, 'friendly_name': column}
23
24 def to_json(self):
25 return json_dumps({'rows': self.rows, 'columns': self.columns.values()})
26
27 def merge(self, set):
28 self.rows = self.rows + set.rows
29
30
31 def parse_issue(issue, field_mapping):
32 result = OrderedDict()
33 result['key'] = issue['key']
34
35 for k, v in issue['fields'].iteritems():#
36 output_name = field_mapping.get_output_field_name(k)
37 member_names = field_mapping.get_dict_members(k)
38
39 if isinstance(v, dict):
40 if len(member_names) > 0:
41 # if field mapping with dict member mappings defined get value of each member
42 for member_name in member_names:
43 if member_name in v:
44 result[field_mapping.get_dict_output_field_name(k, member_name)] = v[member_name]
45
46 else:
47 # these special mapping rules are kept for backwards compatibility
48 if 'key' in v:
49 result['{}_key'.format(output_name)] = v['key']
50 if 'name' in v:
51 result['{}_name'.format(output_name)] = v['name']
52
53 if k in v:
54 result[output_name] = v[k]
55
56 if 'watchCount' in v:
57 result[output_name] = v['watchCount']
58
59 elif isinstance(v, list):
60 if len(member_names) > 0:
61 # if field mapping with dict member mappings defined get value of each member
62 for member_name in member_names:
63 listValues = []
64 for listItem in v:
65 if isinstance(listItem, dict):
66 if member_name in listItem:
67 listValues.append(listItem[member_name])
68 if len(listValues) > 0:
69 result[field_mapping.get_dict_output_field_name(k, member_name)] = ','.join(listValues)
70
71 else:
72 # otherwise support list values only for non-dict items
73 listValues = []
74 for listItem in v:
75 if not isinstance(listItem, dict):
76 listValues.append(listItem)
77 if len(listValues) > 0:
78 result[output_name] = ','.join(listValues)
79
80 else:
81 result[output_name] = v
82
83 return result
84
85
86 def parse_issues(data, field_mapping):
87 results = ResultSet()
88
89 for issue in data['issues']:
90 results.add_row(parse_issue(issue, field_mapping))
91
92 return results
93
94
95 def parse_count(data):
96 results = ResultSet()
97 results.add_row({'count': data['total']})
98 return results
99
100
101 class FieldMapping:
102
103 def __init__(cls, query_field_mapping):
104 cls.mapping = []
105 for k, v in query_field_mapping.iteritems():
106 field_name = k
107 member_name = None
108
109 # check for member name contained in field name
110 member_parser = re.search('(\w+)\.(\w+)', k)
111 if (member_parser):
112 field_name = member_parser.group(1)
113 member_name = member_parser.group(2)
114
115 cls.mapping.append({
116 'field_name': field_name,
117 'member_name': member_name,
118 'output_field_name': v
119 })
120
121 def get_output_field_name(cls, field_name):
122 for item in cls.mapping:
123 if item['field_name'] == field_name and not item['member_name']:
124 return item['output_field_name']
125 return field_name
126
127 def get_dict_members(cls, field_name):
128 member_names = []
129 for item in cls.mapping:
130 if item['field_name'] == field_name and item['member_name']:
131 member_names.append(item['member_name'])
132 return member_names
133
134 def get_dict_output_field_name(cls, field_name, member_name):
135 for item in cls.mapping:
136 if item['field_name'] == field_name and item['member_name'] == member_name:
137 return item['output_field_name']
138 return None
139
140
141 class JiraJQL(BaseHTTPQueryRunner):
142 noop_query = '{"queryType": "count"}'
143 response_error = "JIRA returned unexpected status code"
144 requires_authentication = True
145 url_title = 'JIRA URL'
146 username_title = 'Username'
147 password_title = 'Password'
148
149 @classmethod
150 def name(cls):
151 return "JIRA (JQL)"
152
153 def __init__(self, configuration):
154 super(JiraJQL, self).__init__(configuration)
155 self.syntax = 'json'
156
157 def run_query(self, query, user):
158 jql_url = '{}/rest/api/2/search'.format(self.configuration["url"])
159
160 try:
161 query = json_loads(query)
162 query_type = query.pop('queryType', 'select')
163 field_mapping = FieldMapping(query.pop('fieldMapping', {}))
164
165 if query_type == 'count':
166 query['maxResults'] = 1
167 query['fields'] = ''
168 else:
169 query['maxResults'] = query.get('maxResults', 1000)
170
171 response, error = self.get_response(jql_url, params=query)
172 if error is not None:
173 return None, error
174
175 data = response.json()
176
177 if query_type == 'count':
178 results = parse_count(data)
179 else:
180 results = parse_issues(data, field_mapping)
181 index = data['startAt'] + data['maxResults']
182
183 while data['total'] > index:
184 query['startAt'] = index
185 response, error = self.get_response(jql_url, params=query)
186 if error is not None:
187 return None, error
188
189 data = response.json()
190 index = data['startAt'] + data['maxResults']
191
192 addl_results = parse_issues(data, field_mapping)
193 results.merge(addl_results)
194
195 return results.to_json(), None
196 except KeyboardInterrupt:
197 return None, "Query cancelled by user."
198
199
200 register(JiraJQL)
201
[end of redash/query_runner/jql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/query_runner/jql.py b/redash/query_runner/jql.py
--- a/redash/query_runner/jql.py
+++ b/redash/query_runner/jql.py
@@ -144,7 +144,7 @@
requires_authentication = True
url_title = 'JIRA URL'
username_title = 'Username'
- password_title = 'Password'
+ password_title = 'API Token'
@classmethod
def name(cls):
| {"golden_diff": "diff --git a/redash/query_runner/jql.py b/redash/query_runner/jql.py\n--- a/redash/query_runner/jql.py\n+++ b/redash/query_runner/jql.py\n@@ -144,7 +144,7 @@\n requires_authentication = True\n url_title = 'JIRA URL'\n username_title = 'Username'\n- password_title = 'Password'\n+ password_title = 'API Token'\n \n @classmethod\n def name(cls):\n", "issue": "JIRA setup: change password field name to \"API Token\"\nWhile a password can be used there, it's not recommended and eventually will be deprecated. \n", "before_files": [{"content": "import re\nfrom collections import OrderedDict\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\n\n# TODO: make this more general and move into __init__.py\nclass ResultSet(object):\n def __init__(self):\n self.columns = OrderedDict()\n self.rows = []\n\n def add_row(self, row):\n for key in row.keys():\n self.add_column(key)\n\n self.rows.append(row)\n\n def add_column(self, column, column_type=TYPE_STRING):\n if column not in self.columns:\n self.columns[column] = {'name': column, 'type': column_type, 'friendly_name': column}\n\n def to_json(self):\n return json_dumps({'rows': self.rows, 'columns': self.columns.values()})\n\n def merge(self, set):\n self.rows = self.rows + set.rows\n\n\ndef parse_issue(issue, field_mapping):\n result = OrderedDict()\n result['key'] = issue['key']\n\n for k, v in issue['fields'].iteritems():#\n output_name = field_mapping.get_output_field_name(k)\n member_names = field_mapping.get_dict_members(k)\n\n if isinstance(v, dict):\n if len(member_names) > 0:\n # if field mapping with dict member mappings defined get value of each member\n for member_name in member_names:\n if member_name in v:\n result[field_mapping.get_dict_output_field_name(k, member_name)] = v[member_name]\n\n else:\n # these special mapping rules are kept for backwards compatibility\n if 'key' in v:\n result['{}_key'.format(output_name)] = v['key']\n if 'name' in v:\n result['{}_name'.format(output_name)] = v['name']\n\n if k in v:\n result[output_name] = v[k]\n\n if 'watchCount' in v:\n result[output_name] = v['watchCount']\n\n elif isinstance(v, list):\n if len(member_names) > 0:\n # if field mapping with dict member mappings defined get value of each member\n for member_name in member_names:\n listValues = []\n for listItem in v:\n if isinstance(listItem, dict):\n if member_name in listItem:\n listValues.append(listItem[member_name])\n if len(listValues) > 0:\n result[field_mapping.get_dict_output_field_name(k, member_name)] = ','.join(listValues)\n\n else:\n # otherwise support list values only for non-dict items\n listValues = []\n for listItem in v:\n if not isinstance(listItem, dict):\n listValues.append(listItem)\n if len(listValues) > 0:\n result[output_name] = ','.join(listValues)\n\n else:\n result[output_name] = v\n\n return result\n\n\ndef parse_issues(data, field_mapping):\n results = ResultSet()\n\n for issue in data['issues']:\n results.add_row(parse_issue(issue, field_mapping))\n\n return results\n\n\ndef parse_count(data):\n results = ResultSet()\n results.add_row({'count': data['total']})\n return results\n\n\nclass FieldMapping:\n\n def __init__(cls, query_field_mapping):\n cls.mapping = []\n for k, v in query_field_mapping.iteritems():\n field_name = k\n member_name = None\n\n # check for member name contained in field name\n member_parser = re.search('(\\w+)\\.(\\w+)', k)\n if (member_parser):\n field_name = member_parser.group(1)\n member_name = member_parser.group(2)\n\n 
cls.mapping.append({\n 'field_name': field_name,\n 'member_name': member_name,\n 'output_field_name': v\n })\n\n def get_output_field_name(cls, field_name):\n for item in cls.mapping:\n if item['field_name'] == field_name and not item['member_name']:\n return item['output_field_name']\n return field_name\n\n def get_dict_members(cls, field_name):\n member_names = []\n for item in cls.mapping:\n if item['field_name'] == field_name and item['member_name']:\n member_names.append(item['member_name'])\n return member_names\n\n def get_dict_output_field_name(cls, field_name, member_name):\n for item in cls.mapping:\n if item['field_name'] == field_name and item['member_name'] == member_name:\n return item['output_field_name']\n return None\n\n\nclass JiraJQL(BaseHTTPQueryRunner):\n noop_query = '{\"queryType\": \"count\"}'\n response_error = \"JIRA returned unexpected status code\"\n requires_authentication = True\n url_title = 'JIRA URL'\n username_title = 'Username'\n password_title = 'Password'\n\n @classmethod\n def name(cls):\n return \"JIRA (JQL)\"\n\n def __init__(self, configuration):\n super(JiraJQL, self).__init__(configuration)\n self.syntax = 'json'\n\n def run_query(self, query, user):\n jql_url = '{}/rest/api/2/search'.format(self.configuration[\"url\"])\n\n try:\n query = json_loads(query)\n query_type = query.pop('queryType', 'select')\n field_mapping = FieldMapping(query.pop('fieldMapping', {}))\n\n if query_type == 'count':\n query['maxResults'] = 1\n query['fields'] = ''\n else:\n query['maxResults'] = query.get('maxResults', 1000)\n\n response, error = self.get_response(jql_url, params=query)\n if error is not None:\n return None, error\n\n data = response.json()\n\n if query_type == 'count':\n results = parse_count(data)\n else:\n results = parse_issues(data, field_mapping)\n index = data['startAt'] + data['maxResults']\n\n while data['total'] > index:\n query['startAt'] = index\n response, error = self.get_response(jql_url, params=query)\n if error is not None:\n return None, error\n\n data = response.json()\n index = data['startAt'] + data['maxResults']\n\n addl_results = parse_issues(data, field_mapping)\n results.merge(addl_results)\n\n return results.to_json(), None\n except KeyboardInterrupt:\n return None, \"Query cancelled by user.\"\n\n\nregister(JiraJQL)\n", "path": "redash/query_runner/jql.py"}]} | 2,464 | 103 |
gh_patches_debug_10217 | rasdani/github-patches | git_diff | sbi-dev__sbi-1155 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Density Estimator batched sample mixes up samples from different posteriors
**Describe the bug**
Given a batched observation, i.e., x1 and x2, the sampling method mixes up samples from different distributions.
**To Reproduce**
```python
import torch
from sbi import analysis as analysis
from sbi import utils as utils
from sbi.inference.base import infer
num_dim = 3
prior = utils.BoxUniform(low=-2 * torch.ones(num_dim), high=2 * torch.ones(num_dim))
def simulator(parameter_set):
return 1.0 + parameter_set + torch.randn(parameter_set.shape) * 0.1
posterior = infer(simulator, prior, method="SNPE", num_simulations=200)
observation = torch.stack([torch.zeros(3), torch.ones(3)])
posterior_samples = posterior.posterior_estimator.sample((1000,), condition=observation)
# Outputs a multimodal distribution, but it should be unimodal (it mixes up samples from the two different x_o's)
samples1 = posterior_samples[:,0].detach()
_ = analysis.pairplot([samples1], limits=[[-2, 2], [-2, 2], [-2, 2]], figsize=(6, 6))
```
**Additional context**
Likely a "reshaping" bug.
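A minimal sketch of the suspected reshaping problem, assuming (as in the pyknos/nflows convention) that `Flow.sample(num_samples, context)` returns a tensor of shape `(context_batch, num_samples, event_dim)`: reshaping that tensor directly to `(num_samples, context_batch, -1)` interleaves rows that belong to different conditions, while transposing the first two axes beforehand keeps each posterior's samples together.

```python
import torch

context_batch, num_samples, event_dim = 2, 3, 1
# Fake "samples": condition 1's values are offset by 100 so mixing is easy to spot
raw = torch.arange(float(context_batch * num_samples)).reshape(context_batch, num_samples, event_dim)
raw[1] += 100

wrong = raw.reshape(num_samples, context_batch, event_dim)                   # interleaves conditions
right = raw.transpose(0, 1).reshape(num_samples, context_batch, event_dim)  # keeps conditions separate

print(wrong[:, 0].flatten())  # tensor([  0.,   2., 104.]) -> a condition-1 value leaks into condition 0
print(right[:, 0].flatten())  # tensor([0., 1., 2.])       -> only condition-0 values
```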
</issue>
<code>
[start of sbi/neural_nets/density_estimators/nflows_flow.py]
1 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
2 # under the Apache License Version 2.0, see <https://www.apache.org/licenses/>
3
4 from typing import Tuple
5
6 import torch
7 from pyknos.nflows.flows import Flow
8 from torch import Tensor, nn
9
10 from sbi.neural_nets.density_estimators.base import DensityEstimator
11 from sbi.sbi_types import Shape
12
13
14 class NFlowsFlow(DensityEstimator):
15 r"""`nflows`- based normalizing flow density estimator.
16
17 Flow type objects already have a .log_prob() and .sample() method, so here we just
18 wrap them and add the .loss() method.
19 """
20
21 def __init__(
22 self, net: Flow, input_shape: torch.Size, condition_shape: torch.Size
23 ) -> None:
24 """Initialize density estimator which wraps flows from the `nflows` library.
25
26 Args:
27 net: The raw `nflows` flow.
28 input_shape: Event shape of the input at which the density is being
29 evaluated (and which is also the event_shape of samples).
30 condition_shape: Shape of the condition. If not provided, it will assume a
31 1D input.
32 """
33 super().__init__(net, input_shape=input_shape, condition_shape=condition_shape)
34 # TODO: Remove as soon as DensityEstimator becomes abstract
35 self.net: Flow
36
37 @property
38 def embedding_net(self) -> nn.Module:
39 r"""Return the embedding network."""
40 return self.net._embedding_net
41
42 def inverse_transform(self, input: Tensor, condition: Tensor) -> Tensor:
43 r"""Return the inverse flow-transform of the inputs given a condition.
44
45 The inverse transform is the transformation that maps the inputs back to the
46 base distribution (noise) space.
47
48 Args:
49 input: Inputs to evaluate the inverse transform on of shape
50 (*batch_shape1, input_size).
51 condition: Conditions of shape (*batch_shape2, *condition_shape).
52
53 Raises:
54 RuntimeError: If batch_shape1 and batch_shape2 are not broadcastable.
55
56 Returns:
57 noise: Transformed inputs.
58 """
59 self._check_condition_shape(condition)
60 condition_dims = len(self.condition_shape)
61
62 # PyTorch's automatic broadcasting
63 batch_shape_in = input.shape[:-1]
64 batch_shape_cond = condition.shape[:-condition_dims]
65 batch_shape = torch.broadcast_shapes(batch_shape_in, batch_shape_cond)
66 # Expand the input and condition to the same batch shape
67 input = input.expand(batch_shape + (input.shape[-1],))
68 condition = condition.expand(batch_shape + self.condition_shape)
69 # Flatten required by nflows, but now both have the same batch shape
70 input = input.reshape(-1, input.shape[-1])
71 condition = condition.reshape(-1, *self.condition_shape)
72
73 noise, _ = self.net._transorm(input, context=condition)
74 noise = noise.reshape(batch_shape)
75 return noise
76
77 def log_prob(self, input: Tensor, condition: Tensor) -> Tensor:
78 r"""Return the log probabilities of the inputs given a condition or multiple
79 i.e. batched conditions.
80
81 Args:
82 input: Inputs to evaluate the log probability on. Of shape
83 `(sample_dim, batch_dim, *event_shape)`.
84 condition: Conditions of shape `(sample_dim, batch_dim, *event_shape)`.
85
86 Raises:
87 AssertionError: If `input_batch_dim != condition_batch_dim`.
88
89 Returns:
90 Sample-wise log probabilities, shape `(input_sample_dim, input_batch_dim)`.
91 """
92 input_sample_dim = input.shape[0]
93 input_batch_dim = input.shape[1]
94 condition_batch_dim = condition.shape[0]
95 condition_event_dims = len(condition.shape[1:])
96
97 assert condition_batch_dim == input_batch_dim, (
98 f"Batch shape of condition {condition_batch_dim} and input "
99 f"{input_batch_dim} do not match."
100 )
101
102 # Nflows needs to have a single batch dimension for condition and input.
103 input = input.reshape((input_batch_dim * input_sample_dim, -1))
104
105 # Repeat the condition to match `input_batch_dim * input_sample_dim`.
106 ones_for_event_dims = (1,) * condition_event_dims # Tuple of 1s, e.g. (1, 1, 1)
107 condition = condition.repeat(input_sample_dim, *ones_for_event_dims)
108
109 log_probs = self.net.log_prob(input, context=condition)
110 return log_probs.reshape((input_sample_dim, input_batch_dim))
111
112 def loss(self, input: Tensor, condition: Tensor) -> Tensor:
113 r"""Return the negative log-probability for training the density estimator.
114
115 Args:
116 input: Inputs of shape `(batch_dim, *input_event_shape)`.
117 condition: Conditions of shape `(batch_dim, *condition_event_shape)`.
118
119 Returns:
120 Negative log-probability of shape `(batch_dim,)`.
121 """
122 return -self.log_prob(input.unsqueeze(0), condition)[0]
123
124 def sample(self, sample_shape: Shape, condition: Tensor) -> Tensor:
125 r"""Return samples from the density estimator.
126
127 Args:
128 sample_shape: Shape of the samples to return.
129 condition: Conditions of shape `(sample_dim, batch_dim, *event_shape)`.
130
131 Returns:
132 Samples of shape `(*sample_shape, condition_batch_dim)`.
133 """
134 condition_batch_dim = condition.shape[0]
135 num_samples = torch.Size(sample_shape).numel()
136
137 samples = self.net.sample(num_samples, context=condition)
138
139 return samples.reshape((
140 *sample_shape,
141 condition_batch_dim,
142 -1,
143 ))
144
145 def sample_and_log_prob(
146 self, sample_shape: torch.Size, condition: Tensor, **kwargs
147 ) -> Tuple[Tensor, Tensor]:
148 r"""Return samples and their density from the density estimator.
149
150 Args:
151 sample_shape: Shape of the samples to return.
152 condition: Conditions of shape (sample_dim, batch_dim, *event_shape).
153
154 Returns:
155 Samples of shape `(*sample_shape, condition_batch_dim, *input_event_shape)`
156 and associated log probs of shape `(*sample_shape, condition_batch_dim)`.
157 """
158 condition_batch_dim = condition.shape[0]
159 num_samples = torch.Size(sample_shape).numel()
160
161 samples, log_probs = self.net.sample_and_log_prob(
162 num_samples, context=condition
163 )
164 samples = samples.reshape((*sample_shape, condition_batch_dim, -1))
165 log_probs = log_probs.reshape((*sample_shape, -1))
166 return samples, log_probs
167
[end of sbi/neural_nets/density_estimators/nflows_flow.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sbi/neural_nets/density_estimators/nflows_flow.py b/sbi/neural_nets/density_estimators/nflows_flow.py
--- a/sbi/neural_nets/density_estimators/nflows_flow.py
+++ b/sbi/neural_nets/density_estimators/nflows_flow.py
@@ -135,12 +135,8 @@
num_samples = torch.Size(sample_shape).numel()
samples = self.net.sample(num_samples, context=condition)
-
- return samples.reshape((
- *sample_shape,
- condition_batch_dim,
- -1,
- ))
+ samples = samples.transpose(0, 1)
+ return samples.reshape((*sample_shape, condition_batch_dim, *self.input_shape))
def sample_and_log_prob(
self, sample_shape: torch.Size, condition: Tensor, **kwargs
| {"golden_diff": "diff --git a/sbi/neural_nets/density_estimators/nflows_flow.py b/sbi/neural_nets/density_estimators/nflows_flow.py\n--- a/sbi/neural_nets/density_estimators/nflows_flow.py\n+++ b/sbi/neural_nets/density_estimators/nflows_flow.py\n@@ -135,12 +135,8 @@\n num_samples = torch.Size(sample_shape).numel()\n \n samples = self.net.sample(num_samples, context=condition)\n-\n- return samples.reshape((\n- *sample_shape,\n- condition_batch_dim,\n- -1,\n- ))\n+ samples = samples.transpose(0, 1)\n+ return samples.reshape((*sample_shape, condition_batch_dim, *self.input_shape))\n \n def sample_and_log_prob(\n self, sample_shape: torch.Size, condition: Tensor, **kwargs\n", "issue": "Density Estimator batched sample mixes up samples from different posteriors\n**Describe the bug**\r\nGiven a batched observation, i.e., x1 and x2, the sampling method mixes up samples from different distributions.\r\n\r\n**To Reproduce**\r\n```python\r\nimport torch\r\n\r\nfrom sbi import analysis as analysis\r\nfrom sbi import utils as utils\r\nfrom sbi.inference.base import infer\r\n\r\nnum_dim = 3\r\nprior = utils.BoxUniform(low=-2 * torch.ones(num_dim), high=2 * torch.ones(num_dim))\r\n\r\ndef simulator(parameter_set):\r\n return 1.0 + parameter_set + torch.randn(parameter_set.shape) * 0.1\r\n\r\nposterior = infer(simulator, prior, method=\"SNPE\", num_simulations=200)\r\nobservation = torch.stack([torch.zeros(3), torch.ones(3)])\r\nposterior_samples = posterior.posterior_estimator.sample((1000,), condition=observation)\r\n\r\n# Outputs an multimodal distribution, but should be unimodal (mixes up samples from the two different x_os)\r\nsamples1 = posterior_samples[:,0].detach()\r\n_ = analysis.pairplot([samples1], limits=[[-2, 2], [-2, 2], [-2, 2]], figsize=(6, 6))\r\n```\r\n\r\n**Additional context**\r\n\r\nLikely a \"reshaping\" bug. \r\n\n", "before_files": [{"content": "# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Apache License Version 2.0, see <https://www.apache.org/licenses/>\n\nfrom typing import Tuple\n\nimport torch\nfrom pyknos.nflows.flows import Flow\nfrom torch import Tensor, nn\n\nfrom sbi.neural_nets.density_estimators.base import DensityEstimator\nfrom sbi.sbi_types import Shape\n\n\nclass NFlowsFlow(DensityEstimator):\n r\"\"\"`nflows`- based normalizing flow density estimator.\n\n Flow type objects already have a .log_prob() and .sample() method, so here we just\n wrap them and add the .loss() method.\n \"\"\"\n\n def __init__(\n self, net: Flow, input_shape: torch.Size, condition_shape: torch.Size\n ) -> None:\n \"\"\"Initialize density estimator which wraps flows from the `nflows` library.\n\n Args:\n net: The raw `nflows` flow.\n input_shape: Event shape of the input at which the density is being\n evaluated (and which is also the event_shape of samples).\n condition_shape: Shape of the condition. 
If not provided, it will assume a\n 1D input.\n \"\"\"\n super().__init__(net, input_shape=input_shape, condition_shape=condition_shape)\n # TODO: Remove as soon as DensityEstimator becomes abstract\n self.net: Flow\n\n @property\n def embedding_net(self) -> nn.Module:\n r\"\"\"Return the embedding network.\"\"\"\n return self.net._embedding_net\n\n def inverse_transform(self, input: Tensor, condition: Tensor) -> Tensor:\n r\"\"\"Return the inverse flow-transform of the inputs given a condition.\n\n The inverse transform is the transformation that maps the inputs back to the\n base distribution (noise) space.\n\n Args:\n input: Inputs to evaluate the inverse transform on of shape\n (*batch_shape1, input_size).\n condition: Conditions of shape (*batch_shape2, *condition_shape).\n\n Raises:\n RuntimeError: If batch_shape1 and batch_shape2 are not broadcastable.\n\n Returns:\n noise: Transformed inputs.\n \"\"\"\n self._check_condition_shape(condition)\n condition_dims = len(self.condition_shape)\n\n # PyTorch's automatic broadcasting\n batch_shape_in = input.shape[:-1]\n batch_shape_cond = condition.shape[:-condition_dims]\n batch_shape = torch.broadcast_shapes(batch_shape_in, batch_shape_cond)\n # Expand the input and condition to the same batch shape\n input = input.expand(batch_shape + (input.shape[-1],))\n condition = condition.expand(batch_shape + self.condition_shape)\n # Flatten required by nflows, but now both have the same batch shape\n input = input.reshape(-1, input.shape[-1])\n condition = condition.reshape(-1, *self.condition_shape)\n\n noise, _ = self.net._transorm(input, context=condition)\n noise = noise.reshape(batch_shape)\n return noise\n\n def log_prob(self, input: Tensor, condition: Tensor) -> Tensor:\n r\"\"\"Return the log probabilities of the inputs given a condition or multiple\n i.e. batched conditions.\n\n Args:\n input: Inputs to evaluate the log probability on. Of shape\n `(sample_dim, batch_dim, *event_shape)`.\n condition: Conditions of shape `(sample_dim, batch_dim, *event_shape)`.\n\n Raises:\n AssertionError: If `input_batch_dim != condition_batch_dim`.\n\n Returns:\n Sample-wise log probabilities, shape `(input_sample_dim, input_batch_dim)`.\n \"\"\"\n input_sample_dim = input.shape[0]\n input_batch_dim = input.shape[1]\n condition_batch_dim = condition.shape[0]\n condition_event_dims = len(condition.shape[1:])\n\n assert condition_batch_dim == input_batch_dim, (\n f\"Batch shape of condition {condition_batch_dim} and input \"\n f\"{input_batch_dim} do not match.\"\n )\n\n # Nflows needs to have a single batch dimension for condition and input.\n input = input.reshape((input_batch_dim * input_sample_dim, -1))\n\n # Repeat the condition to match `input_batch_dim * input_sample_dim`.\n ones_for_event_dims = (1,) * condition_event_dims # Tuple of 1s, e.g. 
(1, 1, 1)\n condition = condition.repeat(input_sample_dim, *ones_for_event_dims)\n\n log_probs = self.net.log_prob(input, context=condition)\n return log_probs.reshape((input_sample_dim, input_batch_dim))\n\n def loss(self, input: Tensor, condition: Tensor) -> Tensor:\n r\"\"\"Return the negative log-probability for training the density estimator.\n\n Args:\n input: Inputs of shape `(batch_dim, *input_event_shape)`.\n condition: Conditions of shape `(batch_dim, *condition_event_shape)`.\n\n Returns:\n Negative log-probability of shape `(batch_dim,)`.\n \"\"\"\n return -self.log_prob(input.unsqueeze(0), condition)[0]\n\n def sample(self, sample_shape: Shape, condition: Tensor) -> Tensor:\n r\"\"\"Return samples from the density estimator.\n\n Args:\n sample_shape: Shape of the samples to return.\n condition: Conditions of shape `(sample_dim, batch_dim, *event_shape)`.\n\n Returns:\n Samples of shape `(*sample_shape, condition_batch_dim)`.\n \"\"\"\n condition_batch_dim = condition.shape[0]\n num_samples = torch.Size(sample_shape).numel()\n\n samples = self.net.sample(num_samples, context=condition)\n\n return samples.reshape((\n *sample_shape,\n condition_batch_dim,\n -1,\n ))\n\n def sample_and_log_prob(\n self, sample_shape: torch.Size, condition: Tensor, **kwargs\n ) -> Tuple[Tensor, Tensor]:\n r\"\"\"Return samples and their density from the density estimator.\n\n Args:\n sample_shape: Shape of the samples to return.\n condition: Conditions of shape (sample_dim, batch_dim, *event_shape).\n\n Returns:\n Samples of shape `(*sample_shape, condition_batch_dim, *input_event_shape)`\n and associated log probs of shape `(*sample_shape, condition_batch_dim)`.\n \"\"\"\n condition_batch_dim = condition.shape[0]\n num_samples = torch.Size(sample_shape).numel()\n\n samples, log_probs = self.net.sample_and_log_prob(\n num_samples, context=condition\n )\n samples = samples.reshape((*sample_shape, condition_batch_dim, -1))\n log_probs = log_probs.reshape((*sample_shape, -1))\n return samples, log_probs\n", "path": "sbi/neural_nets/density_estimators/nflows_flow.py"}]} | 2,660 | 186 |
gh_patches_debug_18582 | rasdani/github-patches | git_diff | wagtail__wagtail-423 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test_post_reorder in editors' picks unit tests failing on SQLite
Running the unit tests under sqlite:
```
DATABASE_ENGINE=django.db.backends.sqlite3 ./runtests.py
```
results in this test failure:
```
FAIL: test_post_reorder (wagtail.wagtailsearch.tests.test_editorspicks.TestEditorsPicksEditView)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/vagrant/wagtail/wagtail/wagtailsearch/tests/test_editorspicks.py", line 222, in test_post_reorder
self.assertEqual(models.Query.get("Hello").editors_picks.all()[0], self.editors_pick_2)
AssertionError: <EditorsPick: EditorsPick object> != <EditorsPick: EditorsPick object>
----------------------------------------------------------------------
Ran 446 tests in 36.358s
FAILED (failures=1, skipped=9, expected failures=1)
Destroying test database for alias 'default'...
```
</issue>
<code>
[start of wagtail/wagtailsearch/views/editorspicks.py]
1 from django.shortcuts import render, redirect, get_object_or_404
2 from django.contrib.auth.decorators import permission_required
3 from django.contrib import messages
4
5 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
6 from django.utils.translation import ugettext as _
7 from django.views.decorators.vary import vary_on_headers
8
9 from wagtail.wagtailsearch import models, forms
10 from wagtail.wagtailadmin.forms import SearchForm
11
12
13 @permission_required('wagtailadmin.access_admin')
14 @vary_on_headers('X-Requested-With')
15 def index(request):
16 is_searching = False
17 page = request.GET.get('p', 1)
18 query_string = request.GET.get('q', "")
19
20 queries = models.Query.objects.filter(editors_picks__isnull=False).distinct()
21
22 # Search
23 if query_string:
24 queries = queries.filter(query_string__icontains=query_string)
25 is_searching = True
26
27 # Pagination
28 paginator = Paginator(queries, 20)
29 try:
30 queries = paginator.page(page)
31 except PageNotAnInteger:
32 queries = paginator.page(1)
33 except EmptyPage:
34 queries = paginator.page(paginator.num_pages)
35
36 if request.is_ajax():
37 return render(request, "wagtailsearch/editorspicks/results.html", {
38 'is_searching': is_searching,
39 'queries': queries,
40 'query_string': query_string,
41 })
42 else:
43 return render(request, 'wagtailsearch/editorspicks/index.html', {
44 'is_searching': is_searching,
45 'queries': queries,
46 'query_string': query_string,
47 'search_form': SearchForm(data=dict(q=query_string) if query_string else None, placeholder=_("Search editor's picks")),
48 })
49
50
51 def save_editorspicks(query, new_query, editors_pick_formset):
52 # Save
53 if editors_pick_formset.is_valid():
54 # Set sort_order
55 for i, form in enumerate(editors_pick_formset.ordered_forms):
56 form.instance.sort_order = i
57
58 editors_pick_formset.save()
59
60 # If query was changed, move all editors picks to the new query
61 if query != new_query:
62 editors_pick_formset.get_queryset().update(query=new_query)
63
64 return True
65 else:
66 return False
67
68
69 @permission_required('wagtailadmin.access_admin')
70 def add(request):
71 if request.POST:
72 # Get query
73 query_form = forms.QueryForm(request.POST)
74 if query_form.is_valid():
75 query = models.Query.get(query_form['query_string'].value())
76
77 # Save editors picks
78 editors_pick_formset = forms.EditorsPickFormSet(request.POST, instance=query)
79 if save_editorspicks(query, query, editors_pick_formset):
80 messages.success(request, _("Editor's picks for '{0}' created.").format(query))
81 return redirect('wagtailsearch_editorspicks_index')
82 else:
83 if len(editors_pick_formset.non_form_errors()):
84 messages.error(request, " ".join(error for error in editors_pick_formset.non_form_errors())) # formset level error (e.g. no forms submitted)
85 else:
86 messages.error(request, _("Recommendations have not been created due to errors")) # specific errors will be displayed within form fields
87 else:
88 editors_pick_formset = forms.EditorsPickFormSet()
89 else:
90 query_form = forms.QueryForm()
91 editors_pick_formset = forms.EditorsPickFormSet()
92
93 return render(request, 'wagtailsearch/editorspicks/add.html', {
94 'query_form': query_form,
95 'editors_pick_formset': editors_pick_formset,
96 })
97
98
99 @permission_required('wagtailadmin.access_admin')
100 def edit(request, query_id):
101 query = get_object_or_404(models.Query, id=query_id)
102
103 if request.POST:
104 # Get query
105 query_form = forms.QueryForm(request.POST)
106 # and the recommendations
107 editors_pick_formset = forms.EditorsPickFormSet(request.POST, instance=query)
108
109 if query_form.is_valid():
110 new_query = models.Query.get(query_form['query_string'].value())
111
112 # Save editors picks
113 if save_editorspicks(query, new_query, editors_pick_formset):
114 messages.success(request, _("Editor's picks for '{0}' updated.").format(new_query))
115 return redirect('wagtailsearch_editorspicks_index')
116 else:
117 if len(editors_pick_formset.non_form_errors()):
118 messages.error(request, " ".join(error for error in editors_pick_formset.non_form_errors())) # formset level error (e.g. no forms submitted)
119 else:
120 messages.error(request, _("Recommendations have not been saved due to errors")) # specific errors will be displayed within form fields
121
122 else:
123 query_form = forms.QueryForm(initial=dict(query_string=query.query_string))
124 editors_pick_formset = forms.EditorsPickFormSet(instance=query)
125
126 return render(request, 'wagtailsearch/editorspicks/edit.html', {
127 'query_form': query_form,
128 'editors_pick_formset': editors_pick_formset,
129 'query': query,
130 })
131
132
133 @permission_required('wagtailadmin.access_admin')
134 def delete(request, query_id):
135 query = get_object_or_404(models.Query, id=query_id)
136
137 if request.POST:
138 query.editors_picks.all().delete()
139 messages.success(request, _("Editor's picks deleted."))
140 return redirect('wagtailsearch_editorspicks_index')
141
142 return render(request, 'wagtailsearch/editorspicks/confirm_delete.html', {
143 'query': query,
144 })
145
[end of wagtail/wagtailsearch/views/editorspicks.py]
[start of wagtail/wagtailsearch/models.py]
1 import datetime
2
3 from django.db import models
4 from django.utils import timezone
5 from django.utils.encoding import python_2_unicode_compatible
6
7 from wagtail.wagtailsearch.indexed import Indexed
8 from wagtail.wagtailsearch.utils import normalise_query_string, MAX_QUERY_STRING_LENGTH
9
10
11 @python_2_unicode_compatible
12 class Query(models.Model):
13 query_string = models.CharField(max_length=MAX_QUERY_STRING_LENGTH, unique=True)
14
15 def save(self, *args, **kwargs):
16 # Normalise query string
17 self.query_string = normalise_query_string(self.query_string)
18
19 super(Query, self).save(*args, **kwargs)
20
21 def add_hit(self, date=None):
22 if date is None:
23 date = timezone.now().date()
24 daily_hits, created = QueryDailyHits.objects.get_or_create(query=self, date=date)
25 daily_hits.hits = models.F('hits') + 1
26 daily_hits.save()
27
28 def __str__(self):
29 return self.query_string
30
31 @property
32 def hits(self):
33 hits = self.daily_hits.aggregate(models.Sum('hits'))['hits__sum']
34 return hits if hits else 0
35
36 @classmethod
37 def garbage_collect(cls):
38 """
39 Deletes all Query records that have no daily hits or editors picks
40 """
41 cls.objects.filter(daily_hits__isnull=True, editors_picks__isnull=True).delete()
42
43 @classmethod
44 def get(cls, query_string):
45 return cls.objects.get_or_create(query_string=normalise_query_string(query_string))[0]
46
47 @classmethod
48 def get_most_popular(cls, date_since=None):
49 # TODO: Implement date_since
50 return cls.objects.filter(daily_hits__isnull=False).annotate(_hits=models.Sum('daily_hits__hits')).distinct().order_by('-_hits')
51
52
53 class QueryDailyHits(models.Model):
54 query = models.ForeignKey(Query, db_index=True, related_name='daily_hits')
55 date = models.DateField()
56 hits = models.IntegerField(default=0)
57
58 @classmethod
59 def garbage_collect(cls):
60 """
61 Deletes all QueryDailyHits records that are older than 7 days
62 """
63 min_date = timezone.now().date() - datetime.timedelta(days=7)
64
65 cls.objects.filter(date__lt=min_date).delete()
66
67 class Meta:
68 unique_together = (
69 ('query', 'date'),
70 )
71
72
73 class EditorsPick(models.Model):
74 query = models.ForeignKey(Query, db_index=True, related_name='editors_picks')
75 page = models.ForeignKey('wagtailcore.Page')
76 sort_order = models.IntegerField(null=True, blank=True, editable=False)
77 description = models.TextField(blank=True)
78
79 class Meta:
80 ordering = ('sort_order', )
81
82
83 # Used for tests
84
85 class SearchTest(models.Model, Indexed):
86 title = models.CharField(max_length=255)
87 content = models.TextField()
88 live = models.BooleanField(default=False)
89
90 indexed_fields = ("title", "content", "callable_indexed_field", "live")
91
92 def callable_indexed_field(self):
93 return "Callable"
94
95
96 class SearchTestChild(SearchTest):
97 extra_content = models.TextField()
98
99 indexed_fields = "extra_content"
100
[end of wagtail/wagtailsearch/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailsearch/models.py b/wagtail/wagtailsearch/models.py
--- a/wagtail/wagtailsearch/models.py
+++ b/wagtail/wagtailsearch/models.py
@@ -76,6 +76,9 @@
sort_order = models.IntegerField(null=True, blank=True, editable=False)
description = models.TextField(blank=True)
+ def __repr__(self):
+ return 'EditorsPick(query="' + self.query.query_string + '", page="' + self.page.title + '")'
+
class Meta:
ordering = ('sort_order', )
diff --git a/wagtail/wagtailsearch/views/editorspicks.py b/wagtail/wagtailsearch/views/editorspicks.py
--- a/wagtail/wagtailsearch/views/editorspicks.py
+++ b/wagtail/wagtailsearch/views/editorspicks.py
@@ -55,6 +55,9 @@
for i, form in enumerate(editors_pick_formset.ordered_forms):
form.instance.sort_order = i
+ # Make sure the form is marked as changed so it gets saved with the new order
+ form.has_changed = lambda: True
+
editors_pick_formset.save()
# If query was changed, move all editors picks to the new query
| {"golden_diff": "diff --git a/wagtail/wagtailsearch/models.py b/wagtail/wagtailsearch/models.py\n--- a/wagtail/wagtailsearch/models.py\n+++ b/wagtail/wagtailsearch/models.py\n@@ -76,6 +76,9 @@\n sort_order = models.IntegerField(null=True, blank=True, editable=False)\n description = models.TextField(blank=True)\n \n+ def __repr__(self):\n+ return 'EditorsPick(query=\"' + self.query.query_string + '\", page=\"' + self.page.title + '\")'\n+\n class Meta:\n ordering = ('sort_order', )\n \ndiff --git a/wagtail/wagtailsearch/views/editorspicks.py b/wagtail/wagtailsearch/views/editorspicks.py\n--- a/wagtail/wagtailsearch/views/editorspicks.py\n+++ b/wagtail/wagtailsearch/views/editorspicks.py\n@@ -55,6 +55,9 @@\n for i, form in enumerate(editors_pick_formset.ordered_forms):\n form.instance.sort_order = i\n \n+ # Make sure the form is marked as changed so it gets saved with the new order\n+ form.has_changed = lambda: True\n+\n editors_pick_formset.save()\n \n # If query was changed, move all editors picks to the new query\n", "issue": "test_post_reorder in editors picks unit tests failing on Sqlite\nRunning the unit tests under sqlite:\n\n```\nDATABASE_ENGINE=django.db.backends.sqlite3 ./runtests.py\n```\n\nresults in this test failure:\n\n```\nFAIL: test_post_reorder (wagtail.wagtailsearch.tests.test_editorspicks.TestEditorsPicksEditView)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"/home/vagrant/wagtail/wagtail/wagtailsearch/tests/test_editorspicks.py\", line 222, in test_post_reorder\n self.assertEqual(models.Query.get(\"Hello\").editors_picks.all()[0], self.editors_pick_2)\nAssertionError: <EditorsPick: EditorsPick object> != <EditorsPick: EditorsPick object>\n\n----------------------------------------------------------------------\nRan 446 tests in 36.358s\n\nFAILED (failures=1, skipped=9, expected failures=1)\nDestroying test database for alias 'default'...\n```\n\n", "before_files": [{"content": "from django.shortcuts import render, redirect, get_object_or_404\nfrom django.contrib.auth.decorators import permission_required\nfrom django.contrib import messages\n\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.vary import vary_on_headers\n\nfrom wagtail.wagtailsearch import models, forms\nfrom wagtail.wagtailadmin.forms import SearchForm\n\n\n@permission_required('wagtailadmin.access_admin')\n@vary_on_headers('X-Requested-With')\ndef index(request):\n is_searching = False\n page = request.GET.get('p', 1)\n query_string = request.GET.get('q', \"\")\n\n queries = models.Query.objects.filter(editors_picks__isnull=False).distinct()\n\n # Search\n if query_string:\n queries = queries.filter(query_string__icontains=query_string)\n is_searching = True\n\n # Pagination\n paginator = Paginator(queries, 20)\n try:\n queries = paginator.page(page)\n except PageNotAnInteger:\n queries = paginator.page(1)\n except EmptyPage:\n queries = paginator.page(paginator.num_pages)\n\n if request.is_ajax():\n return render(request, \"wagtailsearch/editorspicks/results.html\", {\n 'is_searching': is_searching,\n 'queries': queries,\n 'query_string': query_string,\n })\n else:\n return render(request, 'wagtailsearch/editorspicks/index.html', {\n 'is_searching': is_searching,\n 'queries': queries,\n 'query_string': query_string,\n 'search_form': SearchForm(data=dict(q=query_string) if query_string else None, placeholder=_(\"Search editor's 
picks\")),\n })\n\n\ndef save_editorspicks(query, new_query, editors_pick_formset):\n # Save\n if editors_pick_formset.is_valid():\n # Set sort_order\n for i, form in enumerate(editors_pick_formset.ordered_forms):\n form.instance.sort_order = i\n\n editors_pick_formset.save()\n\n # If query was changed, move all editors picks to the new query\n if query != new_query:\n editors_pick_formset.get_queryset().update(query=new_query)\n\n return True\n else:\n return False\n\n\n@permission_required('wagtailadmin.access_admin')\ndef add(request):\n if request.POST:\n # Get query\n query_form = forms.QueryForm(request.POST)\n if query_form.is_valid():\n query = models.Query.get(query_form['query_string'].value())\n\n # Save editors picks\n editors_pick_formset = forms.EditorsPickFormSet(request.POST, instance=query)\n if save_editorspicks(query, query, editors_pick_formset):\n messages.success(request, _(\"Editor's picks for '{0}' created.\").format(query))\n return redirect('wagtailsearch_editorspicks_index')\n else:\n if len(editors_pick_formset.non_form_errors()):\n messages.error(request, \" \".join(error for error in editors_pick_formset.non_form_errors())) # formset level error (e.g. no forms submitted)\n else:\n messages.error(request, _(\"Recommendations have not been created due to errors\")) # specific errors will be displayed within form fields\n else:\n editors_pick_formset = forms.EditorsPickFormSet()\n else:\n query_form = forms.QueryForm()\n editors_pick_formset = forms.EditorsPickFormSet()\n\n return render(request, 'wagtailsearch/editorspicks/add.html', {\n 'query_form': query_form,\n 'editors_pick_formset': editors_pick_formset,\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef edit(request, query_id):\n query = get_object_or_404(models.Query, id=query_id)\n\n if request.POST:\n # Get query\n query_form = forms.QueryForm(request.POST)\n # and the recommendations\n editors_pick_formset = forms.EditorsPickFormSet(request.POST, instance=query)\n\n if query_form.is_valid():\n new_query = models.Query.get(query_form['query_string'].value())\n\n # Save editors picks\n if save_editorspicks(query, new_query, editors_pick_formset):\n messages.success(request, _(\"Editor's picks for '{0}' updated.\").format(new_query))\n return redirect('wagtailsearch_editorspicks_index')\n else:\n if len(editors_pick_formset.non_form_errors()):\n messages.error(request, \" \".join(error for error in editors_pick_formset.non_form_errors())) # formset level error (e.g. 
no forms submitted)\n else:\n messages.error(request, _(\"Recommendations have not been saved due to errors\")) # specific errors will be displayed within form fields\n\n else:\n query_form = forms.QueryForm(initial=dict(query_string=query.query_string))\n editors_pick_formset = forms.EditorsPickFormSet(instance=query)\n\n return render(request, 'wagtailsearch/editorspicks/edit.html', {\n 'query_form': query_form,\n 'editors_pick_formset': editors_pick_formset,\n 'query': query,\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef delete(request, query_id):\n query = get_object_or_404(models.Query, id=query_id)\n\n if request.POST:\n query.editors_picks.all().delete()\n messages.success(request, _(\"Editor's picks deleted.\"))\n return redirect('wagtailsearch_editorspicks_index')\n\n return render(request, 'wagtailsearch/editorspicks/confirm_delete.html', {\n 'query': query,\n })\n", "path": "wagtail/wagtailsearch/views/editorspicks.py"}, {"content": "import datetime\n\nfrom django.db import models\nfrom django.utils import timezone\nfrom django.utils.encoding import python_2_unicode_compatible\n\nfrom wagtail.wagtailsearch.indexed import Indexed\nfrom wagtail.wagtailsearch.utils import normalise_query_string, MAX_QUERY_STRING_LENGTH\n\n\n@python_2_unicode_compatible\nclass Query(models.Model):\n query_string = models.CharField(max_length=MAX_QUERY_STRING_LENGTH, unique=True)\n\n def save(self, *args, **kwargs):\n # Normalise query string\n self.query_string = normalise_query_string(self.query_string)\n\n super(Query, self).save(*args, **kwargs)\n\n def add_hit(self, date=None):\n if date is None:\n date = timezone.now().date()\n daily_hits, created = QueryDailyHits.objects.get_or_create(query=self, date=date)\n daily_hits.hits = models.F('hits') + 1\n daily_hits.save()\n\n def __str__(self):\n return self.query_string\n\n @property\n def hits(self):\n hits = self.daily_hits.aggregate(models.Sum('hits'))['hits__sum']\n return hits if hits else 0\n\n @classmethod\n def garbage_collect(cls):\n \"\"\"\n Deletes all Query records that have no daily hits or editors picks\n \"\"\"\n cls.objects.filter(daily_hits__isnull=True, editors_picks__isnull=True).delete()\n\n @classmethod\n def get(cls, query_string):\n return cls.objects.get_or_create(query_string=normalise_query_string(query_string))[0]\n\n @classmethod\n def get_most_popular(cls, date_since=None):\n # TODO: Implement date_since\n return cls.objects.filter(daily_hits__isnull=False).annotate(_hits=models.Sum('daily_hits__hits')).distinct().order_by('-_hits')\n\n\nclass QueryDailyHits(models.Model):\n query = models.ForeignKey(Query, db_index=True, related_name='daily_hits')\n date = models.DateField()\n hits = models.IntegerField(default=0)\n\n @classmethod\n def garbage_collect(cls):\n \"\"\"\n Deletes all QueryDailyHits records that are older than 7 days\n \"\"\"\n min_date = timezone.now().date() - datetime.timedelta(days=7)\n\n cls.objects.filter(date__lt=min_date).delete()\n\n class Meta:\n unique_together = (\n ('query', 'date'),\n )\n\n\nclass EditorsPick(models.Model):\n query = models.ForeignKey(Query, db_index=True, related_name='editors_picks')\n page = models.ForeignKey('wagtailcore.Page')\n sort_order = models.IntegerField(null=True, blank=True, editable=False)\n description = models.TextField(blank=True)\n\n class Meta:\n ordering = ('sort_order', )\n\n\n# Used for tests\n\nclass SearchTest(models.Model, Indexed):\n title = models.CharField(max_length=255)\n content = models.TextField()\n live = 
models.BooleanField(default=False)\n\n indexed_fields = (\"title\", \"content\", \"callable_indexed_field\", \"live\")\n\n def callable_indexed_field(self):\n return \"Callable\"\n\n\nclass SearchTestChild(SearchTest):\n extra_content = models.TextField()\n\n indexed_fields = \"extra_content\"\n", "path": "wagtail/wagtailsearch/models.py"}]} | 3,202 | 286 |
gh_patches_debug_35746 | rasdani/github-patches | git_diff | vispy__vispy-1391 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SceneGraph: HowTo view single scene in different viewboxes
Using https://github.com/vispy/vispy/blob/master/examples/basics/scene/one_scene_four_cams.py to view a single scene in four different viewboxes doesn't work.
The scene is actually generated four times, not only once. There are reminders of multi-parenting commented out in the example, but that approach no longer works (multi-parenting has since been removed).
Is it possible to have one scene viewed from different angles (e.g. top view, front view and side view) without recreating the scene four times?
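A hedged sketch of the usual workaround, assuming the `vb1`/`vb2`/`vb3` ViewBoxes, the `im1` array and the `scene` module from the linked example: build one visual per ViewBox (the underlying data array is shared) and give each ViewBox its own camera orientation; the elevation/azimuth values below are only illustrative.

```python
# Same image data, three viewpoints: top, front and side
views = {
    vb1: dict(elevation=90, azimuth=0),   # top view
    vb2: dict(elevation=0,  azimuth=0),   # front view
    vb3: dict(elevation=0,  azimuth=90),  # side view
}
for vb, angles in views.items():
    scene.visuals.Image(im1, grid=(20, 20), parent=vb.scene)
    vb.camera = scene.TurntableCamera(**angles)
```

True shared-scene multi-views are a separate feature; the updated example shown later in this record points to vispy issue #1124 for that.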
</issue>
<code>
[start of examples/basics/scene/one_scene_four_cams.py]
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright (c) Vispy Development Team. All Rights Reserved.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 # -----------------------------------------------------------------------------
6 # vispy: gallery 2
7
8 """
9 Demonstrating a single scene that is shown in four different viewboxes,
10 each with a different camera.
11 """
12
13 # todo: the panzoom camera sometimes work, sometimes not. Not sure why.
14 # we should probably make iterating over children deterministic, so that
15 # an error like this becomes easier to reproduce ...
16
17 import sys
18
19 from vispy import app, scene, io
20
21 canvas = scene.SceneCanvas(keys='interactive')
22 canvas.size = 800, 600
23 canvas.show()
24
25 # Create two ViewBoxes, place side-by-side
26 vb1 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
27 vb2 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
28 vb3 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
29 vb4 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
30 scenes = vb1.scene, vb2.scene, vb3.scene, vb4.scene
31
32 # Put viewboxes in a grid
33 grid = canvas.central_widget.add_grid()
34 grid.padding = 6
35 grid.add_widget(vb1, 0, 0)
36 grid.add_widget(vb2, 0, 1)
37 grid.add_widget(vb3, 1, 0)
38 grid.add_widget(vb4, 1, 1)
39
40 # Create some visuals to show
41 # AK: Ideally, we could just create one visual that is present in all
42 # scenes, but that results in flicker for the PanZoomCamera, I suspect
43 # due to errors in transform caching.
44 im1 = io.load_crate().astype('float32') / 255
45 #image1 = scene.visuals.Image(im1, grid=(20, 20), parent=scenes)
46 for par in scenes:
47 image = scene.visuals.Image(im1, grid=(20, 20), parent=par)
48
49 #vol1 = np.load(io.load_data_file('volume/stent.npz'))['arr_0']
50 #volume1 = scene.visuals.Volume(vol1, parent=scenes)
51 #volume1.transform = scene.STTransform(translate=(0, 0, 10))
52
53 # Assign cameras
54 vb1.camera = scene.BaseCamera()
55 vb2.camera = scene.PanZoomCamera()
56 vb3.camera = scene.TurntableCamera()
57 vb4.camera = scene.FlyCamera()
58
59
60 # If True, show a cuboid at each camera
61 if False:
62 cube = scene.visuals.Cube((3, 3, 5))
63 cube.transform = scene.STTransform(translate=(0, 0, 6))
64 for vb in (vb1, vb2, vb3, vb4):
65 vb.camera.parents = scenes
66 cube.add_parent(vb.camera)
67
68 if __name__ == '__main__':
69 if sys.flags.interactive != 1:
70 app.run()
71
[end of examples/basics/scene/one_scene_four_cams.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/basics/scene/one_scene_four_cams.py b/examples/basics/scene/one_scene_four_cams.py
--- a/examples/basics/scene/one_scene_four_cams.py
+++ b/examples/basics/scene/one_scene_four_cams.py
@@ -8,11 +8,12 @@
"""
Demonstrating a single scene that is shown in four different viewboxes,
each with a different camera.
-"""
-# todo: the panzoom camera sometimes work, sometimes not. Not sure why.
-# we should probably make iterating over children deterministic, so that
-# an error like this becomes easier to reproduce ...
+Note:
+ This example just creates four scenes using the same visual.
+ Multiple views are currently not available. See #1124 how this could
+ be achieved.
+"""
import sys
@@ -22,7 +23,7 @@
canvas.size = 800, 600
canvas.show()
-# Create two ViewBoxes, place side-by-side
+# Create four ViewBoxes
vb1 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
vb2 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
vb3 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
@@ -38,33 +39,16 @@
grid.add_widget(vb4, 1, 1)
# Create some visuals to show
-# AK: Ideally, we could just create one visual that is present in all
-# scenes, but that results in flicker for the PanZoomCamera, I suspect
-# due to errors in transform caching.
im1 = io.load_crate().astype('float32') / 255
-#image1 = scene.visuals.Image(im1, grid=(20, 20), parent=scenes)
for par in scenes:
image = scene.visuals.Image(im1, grid=(20, 20), parent=par)
-#vol1 = np.load(io.load_data_file('volume/stent.npz'))['arr_0']
-#volume1 = scene.visuals.Volume(vol1, parent=scenes)
-#volume1.transform = scene.STTransform(translate=(0, 0, 10))
-
# Assign cameras
vb1.camera = scene.BaseCamera()
vb2.camera = scene.PanZoomCamera()
vb3.camera = scene.TurntableCamera()
vb4.camera = scene.FlyCamera()
-
-# If True, show a cuboid at each camera
-if False:
- cube = scene.visuals.Cube((3, 3, 5))
- cube.transform = scene.STTransform(translate=(0, 0, 6))
- for vb in (vb1, vb2, vb3, vb4):
- vb.camera.parents = scenes
- cube.add_parent(vb.camera)
-
if __name__ == '__main__':
if sys.flags.interactive != 1:
app.run()
| {"golden_diff": "diff --git a/examples/basics/scene/one_scene_four_cams.py b/examples/basics/scene/one_scene_four_cams.py\n--- a/examples/basics/scene/one_scene_four_cams.py\n+++ b/examples/basics/scene/one_scene_four_cams.py\n@@ -8,11 +8,12 @@\n \"\"\"\n Demonstrating a single scene that is shown in four different viewboxes,\n each with a different camera.\n-\"\"\"\n \n-# todo: the panzoom camera sometimes work, sometimes not. Not sure why.\n-# we should probably make iterating over children deterministic, so that\n-# an error like this becomes easier to reproduce ...\n+Note:\n+ This example just creates four scenes using the same visual.\n+ Multiple views are currently not available. See #1124 how this could\n+ be achieved.\n+\"\"\"\n \n import sys\n \n@@ -22,7 +23,7 @@\n canvas.size = 800, 600\n canvas.show()\n \n-# Create two ViewBoxes, place side-by-side\n+# Create four ViewBoxes\n vb1 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\n vb2 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\n vb3 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\n@@ -38,33 +39,16 @@\n grid.add_widget(vb4, 1, 1)\n \n # Create some visuals to show\n-# AK: Ideally, we could just create one visual that is present in all\n-# scenes, but that results in flicker for the PanZoomCamera, I suspect\n-# due to errors in transform caching.\n im1 = io.load_crate().astype('float32') / 255\n-#image1 = scene.visuals.Image(im1, grid=(20, 20), parent=scenes)\n for par in scenes:\n image = scene.visuals.Image(im1, grid=(20, 20), parent=par)\n \n-#vol1 = np.load(io.load_data_file('volume/stent.npz'))['arr_0']\n-#volume1 = scene.visuals.Volume(vol1, parent=scenes)\n-#volume1.transform = scene.STTransform(translate=(0, 0, 10))\n-\n # Assign cameras\n vb1.camera = scene.BaseCamera()\n vb2.camera = scene.PanZoomCamera()\n vb3.camera = scene.TurntableCamera()\n vb4.camera = scene.FlyCamera()\n \n-\n-# If True, show a cuboid at each camera\n-if False:\n- cube = scene.visuals.Cube((3, 3, 5))\n- cube.transform = scene.STTransform(translate=(0, 0, 6))\n- for vb in (vb1, vb2, vb3, vb4):\n- vb.camera.parents = scenes\n- cube.add_parent(vb.camera)\n-\n if __name__ == '__main__':\n if sys.flags.interactive != 1:\n app.run()\n", "issue": "SceneGraph: HowTo view single scene in different viewboxes\nUsing https://github.com/vispy/vispy/blob/master/examples/basics/scene/one_scene_four_cams.py to view a single scene in four different viewboxes doesn't work.\n\nThe scene is actually generated four times, not only once. There are reminders of multi-parenting commented out in the example, but this won't work any more (since removal of multi-parenting).\n\nIs it possible to have one scene viewed from different angels (eg. top view, front view and side view) without recreating the scene four times?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n# vispy: gallery 2\n\n\"\"\"\nDemonstrating a single scene that is shown in four different viewboxes,\neach with a different camera.\n\"\"\"\n\n# todo: the panzoom camera sometimes work, sometimes not. 
Not sure why.\n# we should probably make iterating over children deterministic, so that\n# an error like this becomes easier to reproduce ...\n\nimport sys\n\nfrom vispy import app, scene, io\n\ncanvas = scene.SceneCanvas(keys='interactive')\ncanvas.size = 800, 600\ncanvas.show()\n\n# Create two ViewBoxes, place side-by-side\nvb1 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\nvb2 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\nvb3 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\nvb4 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)\nscenes = vb1.scene, vb2.scene, vb3.scene, vb4.scene\n\n# Put viewboxes in a grid\ngrid = canvas.central_widget.add_grid()\ngrid.padding = 6\ngrid.add_widget(vb1, 0, 0)\ngrid.add_widget(vb2, 0, 1)\ngrid.add_widget(vb3, 1, 0)\ngrid.add_widget(vb4, 1, 1)\n\n# Create some visuals to show\n# AK: Ideally, we could just create one visual that is present in all\n# scenes, but that results in flicker for the PanZoomCamera, I suspect\n# due to errors in transform caching.\nim1 = io.load_crate().astype('float32') / 255\n#image1 = scene.visuals.Image(im1, grid=(20, 20), parent=scenes)\nfor par in scenes:\n image = scene.visuals.Image(im1, grid=(20, 20), parent=par)\n\n#vol1 = np.load(io.load_data_file('volume/stent.npz'))['arr_0']\n#volume1 = scene.visuals.Volume(vol1, parent=scenes)\n#volume1.transform = scene.STTransform(translate=(0, 0, 10))\n\n# Assign cameras\nvb1.camera = scene.BaseCamera()\nvb2.camera = scene.PanZoomCamera()\nvb3.camera = scene.TurntableCamera()\nvb4.camera = scene.FlyCamera()\n\n\n# If True, show a cuboid at each camera\nif False:\n cube = scene.visuals.Cube((3, 3, 5))\n cube.transform = scene.STTransform(translate=(0, 0, 6))\n for vb in (vb1, vb2, vb3, vb4):\n vb.camera.parents = scenes\n cube.add_parent(vb.camera)\n\nif __name__ == '__main__':\n if sys.flags.interactive != 1:\n app.run()\n", "path": "examples/basics/scene/one_scene_four_cams.py"}]} | 1,464 | 653 |
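For context on the vispy row above: the accepted diff drops the commented-out multi-parenting experiment and documents that a single visual cannot currently be shown in several views (see #1124 in the diff text), so the example simply instantiates one `Image` per ViewBox scene. A reduced sketch of that pattern, using only calls already present in the example (two viewboxes instead of four, to keep it short):

```python
import sys
from vispy import app, scene, io

canvas = scene.SceneCanvas(keys='interactive')
canvas.size = 800, 600
canvas.show()

# Two ViewBoxes on one canvas; each gets its own copy of the visual.
vb1 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
vb2 = scene.widgets.ViewBox(border_color='white', parent=canvas.scene)
grid = canvas.central_widget.add_grid()
grid.add_widget(vb1, 0, 0)
grid.add_widget(vb2, 0, 1)

im1 = io.load_crate().astype('float32') / 255
# A visual cannot have multiple parents, so the texture is instantiated
# once per ViewBox scene instead of being shared across scenes.
for par in (vb1.scene, vb2.scene):
    scene.visuals.Image(im1, grid=(20, 20), parent=par)

vb1.camera = scene.PanZoomCamera()
vb2.camera = scene.TurntableCamera()

if __name__ == '__main__' and sys.flags.interactive != 1:
    app.run()
```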
gh_patches_debug_26263 | rasdani/github-patches | git_diff | pypa__pip-2303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Selfcheck failure on Windows
I get this warning all the time:
```
There was an error checking the latest version of pip
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\pip\utils\outdated.py", line 115, in pip_version_check
state.save(pypi_version, current_time)
File "C:\Python27\lib\site-packages\pip\utils\outdated.py", line 62, in save
with open(self.statefile_path) as statefile:
IOError: [Errno 2] No such file or directory: u'C:\\Users\\ionel_000\\AppData\\Local\\pip\\Cache\\selfcheck.json'
```
If I create the file, it complains about invalid JSON. I've put `{}` inside and the warning has gone away, but this seems very wrong to me.
</issue>
<code>
[start of pip/utils/outdated.py]
1 from __future__ import absolute_import
2
3 import datetime
4 import json
5 import logging
6 import os.path
7 import sys
8
9 from pip._vendor import lockfile
10 from pip._vendor import pkg_resources
11
12 from pip.compat import total_seconds
13 from pip.index import PyPI
14 from pip.locations import USER_CACHE_DIR, running_under_virtualenv
15
16
17 SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"
18
19
20 logger = logging.getLogger(__name__)
21
22
23 class VirtualenvSelfCheckState(object):
24 def __init__(self):
25 self.statefile_path = os.path.join(sys.prefix, "pip-selfcheck.json")
26
27 # Load the existing state
28 try:
29 with open(self.statefile_path) as statefile:
30 self.state = json.load(statefile)
31 except (IOError, ValueError):
32 self.state = {}
33
34 def save(self, pypi_version, current_time):
35 # Attempt to write out our version check file
36 with open(self.statefile_path, "w") as statefile:
37 json.dump(
38 {
39 "last_check": current_time.strftime(SELFCHECK_DATE_FMT),
40 "pypi_version": pypi_version,
41 },
42 statefile,
43 sort_keys=True,
44 separators=(",", ":")
45 )
46
47
48 class GlobalSelfCheckState(object):
49 def __init__(self):
50 self.statefile_path = os.path.join(USER_CACHE_DIR, "selfcheck.json")
51
52 # Load the existing state
53 try:
54 with open(self.statefile_path) as statefile:
55 self.state = json.load(statefile)[sys.prefix]
56 except (IOError, ValueError, KeyError):
57 self.state = {}
58
59 def save(self, pypi_version, current_time):
60 # Attempt to write out our version check file
61 with lockfile.LockFile(self.statefile_path):
62 with open(self.statefile_path) as statefile:
63 state = json.load(statefile)
64
65 state[sys.prefix] = {
66 "last_check": current_time.strftime(SELFCHECK_DATE_FMT),
67 "pypi_version": pypi_version,
68 }
69
70 with open(self.statefile_path, "w") as statefile:
71 json.dump(state, statefile, sort_keys=True,
72 separators=(",", ":"))
73
74
75 def load_selfcheck_statefile():
76 if running_under_virtualenv():
77 return VirtualenvSelfCheckState()
78 else:
79 return GlobalSelfCheckState()
80
81
82 def pip_version_check(session):
83 """Check for an update for pip.
84
85 Limit the frequency of checks to once per week. State is stored either in
86 the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix
87 of the pip script path.
88 """
89 import pip # imported here to prevent circular imports
90 pypi_version = None
91
92 try:
93 state = load_selfcheck_statefile()
94
95 current_time = datetime.datetime.utcnow()
96 # Determine if we need to refresh the state
97 if "last_check" in state.state and "pypi_version" in state.state:
98 last_check = datetime.datetime.strptime(
99 state.state["last_check"],
100 SELFCHECK_DATE_FMT
101 )
102 if total_seconds(current_time - last_check) < 7 * 24 * 60 * 60:
103 pypi_version = state.state["pypi_version"]
104
105 # Refresh the version if we need to or just see if we need to warn
106 if pypi_version is None:
107 resp = session.get(
108 PyPI.pip_json_url,
109 headers={"Accept": "application/json"},
110 )
111 resp.raise_for_status()
112 pypi_version = resp.json()["info"]["version"]
113
114 # save that we've performed a check
115 state.save(pypi_version, current_time)
116
117 pip_version = pkg_resources.parse_version(pip.__version__)
118
119 # Determine if our pypi_version is older
120 if pip_version < pkg_resources.parse_version(pypi_version):
121 logger.warning(
122 "You are using pip version %s, however version %s is "
123 "available.\nYou should consider upgrading via the "
124 "'pip install --upgrade pip' command." % (pip.__version__,
125 pypi_version)
126 )
127
128 except Exception:
129 logger.debug(
130 "There was an error checking the latest version of pip",
131 exc_info=True,
132 )
133
[end of pip/utils/outdated.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pip/utils/outdated.py b/pip/utils/outdated.py
--- a/pip/utils/outdated.py
+++ b/pip/utils/outdated.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import
import datetime
+import errno
import json
import logging
import os.path
@@ -12,6 +13,7 @@
from pip.compat import total_seconds
from pip.index import PyPI
from pip.locations import USER_CACHE_DIR, running_under_virtualenv
+from pip.utils.filesystem import check_path_owner
SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"
@@ -57,6 +59,19 @@
self.state = {}
def save(self, pypi_version, current_time):
+ # Check to make sure that we own the directory
+ if not check_path_owner(
+ os.path.dirname(self.statefile_path), os.geteuid()):
+ return
+
+ # Now that we've ensured the directory is owned by this user, we'll go
+ # ahead and make sure that all our directories are created.
+ try:
+ os.makedirs(os.path.dirname(self.statefile_path))
+ except OSError as exc:
+ if exc.errno != errno.EEXIST:
+ raise
+
# Attempt to write out our version check file
with lockfile.LockFile(self.statefile_path):
with open(self.statefile_path) as statefile:
| {"golden_diff": "diff --git a/pip/utils/outdated.py b/pip/utils/outdated.py\n--- a/pip/utils/outdated.py\n+++ b/pip/utils/outdated.py\n@@ -1,6 +1,7 @@\n from __future__ import absolute_import\n \n import datetime\n+import errno\n import json\n import logging\n import os.path\n@@ -12,6 +13,7 @@\n from pip.compat import total_seconds\n from pip.index import PyPI\n from pip.locations import USER_CACHE_DIR, running_under_virtualenv\n+from pip.utils.filesystem import check_path_owner\n \n \n SELFCHECK_DATE_FMT = \"%Y-%m-%dT%H:%M:%SZ\"\n@@ -57,6 +59,19 @@\n self.state = {}\n \n def save(self, pypi_version, current_time):\n+ # Check to make sure that we own the directory\n+ if not check_path_owner(\n+ os.path.dirname(self.statefile_path), os.geteuid()):\n+ return\n+\n+ # Now that we've ensured the directory is owned by this user, we'll go\n+ # ahead and make sure that all our directories are created.\n+ try:\n+ os.makedirs(os.path.dirname(self.statefile_path))\n+ except OSError as exc:\n+ if exc.errno != errno.EEXIST:\n+ raise\n+\n # Attempt to write out our version check file\n with lockfile.LockFile(self.statefile_path):\n with open(self.statefile_path) as statefile:\n", "issue": "Selfcheck failure on Windows\nI get this warning all the time:\n\n```\nThere was an error checking the latest version of pip\nTraceback (most recent call last):\n File \"C:\\Python27\\lib\\site-packages\\pip\\utils\\outdated.py\", line 115, in pip_version_check\n state.save(pypi_version, current_time)\n File \"C:\\Python27\\lib\\site-packages\\pip\\utils\\outdated.py\", line 62, in save\n with open(self.statefile_path) as statefile:\nIOError: [Errno 2] No such file or directory: u'C:\\\\Users\\\\ionel_000\\\\AppData\\\\Local\\\\pip\\\\Cache\\\\selfcheck.json'\n```\n\nIf I create the file, it complains about invalid json. 
I've put `{}` inside, the warning has gone away, but this seems very wrong to me.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport datetime\nimport json\nimport logging\nimport os.path\nimport sys\n\nfrom pip._vendor import lockfile\nfrom pip._vendor import pkg_resources\n\nfrom pip.compat import total_seconds\nfrom pip.index import PyPI\nfrom pip.locations import USER_CACHE_DIR, running_under_virtualenv\n\n\nSELFCHECK_DATE_FMT = \"%Y-%m-%dT%H:%M:%SZ\"\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass VirtualenvSelfCheckState(object):\n def __init__(self):\n self.statefile_path = os.path.join(sys.prefix, \"pip-selfcheck.json\")\n\n # Load the existing state\n try:\n with open(self.statefile_path) as statefile:\n self.state = json.load(statefile)\n except (IOError, ValueError):\n self.state = {}\n\n def save(self, pypi_version, current_time):\n # Attempt to write out our version check file\n with open(self.statefile_path, \"w\") as statefile:\n json.dump(\n {\n \"last_check\": current_time.strftime(SELFCHECK_DATE_FMT),\n \"pypi_version\": pypi_version,\n },\n statefile,\n sort_keys=True,\n separators=(\",\", \":\")\n )\n\n\nclass GlobalSelfCheckState(object):\n def __init__(self):\n self.statefile_path = os.path.join(USER_CACHE_DIR, \"selfcheck.json\")\n\n # Load the existing state\n try:\n with open(self.statefile_path) as statefile:\n self.state = json.load(statefile)[sys.prefix]\n except (IOError, ValueError, KeyError):\n self.state = {}\n\n def save(self, pypi_version, current_time):\n # Attempt to write out our version check file\n with lockfile.LockFile(self.statefile_path):\n with open(self.statefile_path) as statefile:\n state = json.load(statefile)\n\n state[sys.prefix] = {\n \"last_check\": current_time.strftime(SELFCHECK_DATE_FMT),\n \"pypi_version\": pypi_version,\n }\n\n with open(self.statefile_path, \"w\") as statefile:\n json.dump(state, statefile, sort_keys=True,\n separators=(\",\", \":\"))\n\n\ndef load_selfcheck_statefile():\n if running_under_virtualenv():\n return VirtualenvSelfCheckState()\n else:\n return GlobalSelfCheckState()\n\n\ndef pip_version_check(session):\n \"\"\"Check for an update for pip.\n\n Limit the frequency of checks to once per week. 
State is stored either in\n the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix\n of the pip script path.\n \"\"\"\n import pip # imported here to prevent circular imports\n pypi_version = None\n\n try:\n state = load_selfcheck_statefile()\n\n current_time = datetime.datetime.utcnow()\n # Determine if we need to refresh the state\n if \"last_check\" in state.state and \"pypi_version\" in state.state:\n last_check = datetime.datetime.strptime(\n state.state[\"last_check\"],\n SELFCHECK_DATE_FMT\n )\n if total_seconds(current_time - last_check) < 7 * 24 * 60 * 60:\n pypi_version = state.state[\"pypi_version\"]\n\n # Refresh the version if we need to or just see if we need to warn\n if pypi_version is None:\n resp = session.get(\n PyPI.pip_json_url,\n headers={\"Accept\": \"application/json\"},\n )\n resp.raise_for_status()\n pypi_version = resp.json()[\"info\"][\"version\"]\n\n # save that we've performed a check\n state.save(pypi_version, current_time)\n\n pip_version = pkg_resources.parse_version(pip.__version__)\n\n # Determine if our pypi_version is older\n if pip_version < pkg_resources.parse_version(pypi_version):\n logger.warning(\n \"You are using pip version %s, however version %s is \"\n \"available.\\nYou should consider upgrading via the \"\n \"'pip install --upgrade pip' command.\" % (pip.__version__,\n pypi_version)\n )\n\n except Exception:\n logger.debug(\n \"There was an error checking the latest version of pip\",\n exc_info=True,\n )\n", "path": "pip/utils/outdated.py"}]} | 1,941 | 319 |
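For the pip row above, the accepted patch does two things in `GlobalSelfCheckState.save()`: it returns early when the cache directory is not owned by the current user (`check_path_owner`), and it creates the directory tree before opening `selfcheck.json`, tolerating a concurrent creator via `errno.EEXIST`. A standalone sketch of that directory-creation idiom (the path below is a placeholder, not pip's real cache location):

```python
import errno
import json
import os

def ensure_parent_dir(path):
    """Create the parent directory of *path*; ignore the race where
    another process created it first (pre-`exist_ok` idiom)."""
    try:
        os.makedirs(os.path.dirname(path))
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise

statefile_path = os.path.join('/tmp/example-cache', 'selfcheck.json')  # placeholder path
ensure_parent_dir(statefile_path)
with open(statefile_path, 'w') as statefile:
    json.dump({}, statefile)
```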
gh_patches_debug_3852 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1773 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Taiwan real-time data has stopped working
Taiwan seems to have been offline recently.
It used to work correctly; something may have changed in the data source.
Kibana error description [here](https://kibana.electricitymap.org/app/kibana#/discover/10af54f0-0c4a-11e9-85c1-1d63df8c862c?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(message,extra.key,level),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:level,negate:!f,params:(query:ERROR,type:phrase),type:phrase,value:ERROR),query:(match:(level:(query:ERROR,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:'@timestamp',negate:!f,params:(query:'2019-02-13T09:56:26.971Z',type:phrase),type:phrase,value:'February%2013th%202019,%2010:56:26.971'),query:(match:('@timestamp':(query:'2019-02-13T09:56:26.971Z',type:phrase))))),index:'96f67170-0c49-11e9-85c1-1d63df8c862c',interval:auto,query:(language:lucene,query:''),sort:!('@timestamp',asc)))
</issue>
<code>
[start of parsers/TW.py]
1 #!/usr/bin/env python3
2 import arrow
3 import requests
4 import pandas
5 import dateutil
6
7
8 def fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None):
9 if target_datetime:
10 raise NotImplementedError('This parser is not yet able to parse past dates')
11
12 url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'
13 response = requests.get(url)
14 data = response.json()
15
16 dumpDate = data['']
17 prodData = data['aaData']
18
19 tz = 'Asia/Taipei'
20 dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))
21
22 objData = pandas.DataFrame(prodData)
23
24 objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',
25 'additional']
26
27 objData['fueltype'] = objData.fueltype.str.split('(').str[1]
28 objData['fueltype'] = objData.fueltype.str.split(')').str[0]
29 objData.drop('additional', axis=1, inplace=True)
30 objData.drop('percentage', axis=1, inplace=True)
31
32 objData = objData.convert_objects(convert_numeric=True)
33 production = pandas.DataFrame(objData.groupby('fueltype').sum())
34 production.columns = ['capacity', 'output']
35
36 coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity
37 gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity
38 oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity
39
40 coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output
41 gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output
42 oil_production = production.ix['Oil'].output + production.ix['Diesel'].output
43
44 # For storage, note that load will be negative, and generation positive.
45 # We require the opposite
46
47 returndata = {
48 'zoneKey': zone_key,
49 'datetime': dumpDate.datetime,
50 'production': {
51 'coal': coal_production,
52 'gas': gas_production,
53 'oil': oil_production,
54 'hydro': production.ix['Hydro'].output,
55 'nuclear': production.ix['Nuclear'].output,
56 'solar': production.ix['Solar'].output,
57 'wind': production.ix['Wind'].output,
58 'unknown': production.ix['Co-Gen'].output
59 },
60 'capacity': {
61 'coal': coal_capacity,
62 'gas': gas_capacity,
63 'oil': oil_capacity,
64 'hydro': production.ix['Hydro'].capacity,
65 'hydro storage':production.ix['Pumping Gen'].capacity,
66 'nuclear': production.ix['Nuclear'].capacity,
67 'solar': production.ix['Solar'].capacity,
68 'wind': production.ix['Wind'].capacity,
69 'unknown': production.ix['Co-Gen'].capacity
70 },
71 'storage': {
72 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output
73 },
74 'source': 'taipower.com.tw'
75 }
76
77 return returndata
78
79
80 if __name__ == '__main__':
81 print(fetch_production())
82
[end of parsers/TW.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsers/TW.py b/parsers/TW.py
--- a/parsers/TW.py
+++ b/parsers/TW.py
@@ -9,7 +9,7 @@
if target_datetime:
raise NotImplementedError('This parser is not yet able to parse past dates')
- url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'
+ url = 'http://www.taipower.com.tw/d006/loadGraph/loadGraph/data/genary.txt'
response = requests.get(url)
data = response.json()
| {"golden_diff": "diff --git a/parsers/TW.py b/parsers/TW.py\n--- a/parsers/TW.py\n+++ b/parsers/TW.py\n@@ -9,7 +9,7 @@\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n \n- url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n+ url = 'http://www.taipower.com.tw/d006/loadGraph/loadGraph/data/genary.txt'\n response = requests.get(url)\n data = response.json()\n", "issue": "Taiwan real-time data has stopped working\nTaiwain seems to have been offline recently\r\nIt used to work correctly, something may have changed in the data source?\r\n\r\nKibana error description [here](https://kibana.electricitymap.org/app/kibana#/discover/10af54f0-0c4a-11e9-85c1-1d63df8c862c?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(message,extra.key,level),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:level,negate:!f,params:(query:ERROR,type:phrase),type:phrase,value:ERROR),query:(match:(level:(query:ERROR,type:phrase)))),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'96f67170-0c49-11e9-85c1-1d63df8c862c',key:'@timestamp',negate:!f,params:(query:'2019-02-13T09:56:26.971Z',type:phrase),type:phrase,value:'February%2013th%202019,%2010:56:26.971'),query:(match:('@timestamp':(query:'2019-02-13T09:56:26.971Z',type:phrase))))),index:'96f67170-0c49-11e9-85c1-1d63df8c862c',interval:auto,query:(language:lucene,query:''),sort:!('@timestamp',asc)))\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport arrow\nimport requests\nimport pandas\nimport dateutil\n\n\ndef fetch_production(zone_key='TW', session=None, target_datetime=None, logger=None):\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n url = 'http://data.taipower.com.tw/opendata01/apply/file/d006001/001.txt'\n response = requests.get(url)\n data = response.json()\n\n dumpDate = data['']\n prodData = data['aaData']\n\n tz = 'Asia/Taipei'\n dumpDate = arrow.get(dumpDate, 'YYYY-MM-DD HH:mm').replace(tzinfo=dateutil.tz.gettz(tz))\n\n objData = pandas.DataFrame(prodData)\n\n objData.columns = ['fueltype', 'name', 'capacity', 'output', 'percentage',\n 'additional']\n\n objData['fueltype'] = objData.fueltype.str.split('(').str[1]\n objData['fueltype'] = objData.fueltype.str.split(')').str[0]\n objData.drop('additional', axis=1, inplace=True)\n objData.drop('percentage', axis=1, inplace=True)\n\n objData = objData.convert_objects(convert_numeric=True)\n production = pandas.DataFrame(objData.groupby('fueltype').sum())\n production.columns = ['capacity', 'output']\n\n coal_capacity = production.ix['Coal'].capacity + production.ix['IPP-Coal'].capacity\n gas_capacity = production.ix['LNG'].capacity + production.ix['IPP-LNG'].capacity\n oil_capacity = production.ix['Oil'].capacity + production.ix['Diesel'].capacity\n\n coal_production = production.ix['Coal'].output + production.ix['IPP-Coal'].output\n gas_production = production.ix['LNG'].output + production.ix['IPP-LNG'].output\n oil_production = production.ix['Oil'].output + production.ix['Diesel'].output\n\n # For storage, note that load will be negative, and generation positive.\n # We require the opposite\n\n returndata = {\n 'zoneKey': zone_key,\n 'datetime': dumpDate.datetime,\n 'production': {\n 'coal': coal_production,\n 'gas': gas_production,\n 'oil': oil_production,\n 'hydro': production.ix['Hydro'].output,\n 'nuclear': 
production.ix['Nuclear'].output,\n 'solar': production.ix['Solar'].output,\n 'wind': production.ix['Wind'].output,\n 'unknown': production.ix['Co-Gen'].output\n },\n 'capacity': {\n 'coal': coal_capacity,\n 'gas': gas_capacity,\n 'oil': oil_capacity,\n 'hydro': production.ix['Hydro'].capacity,\n 'hydro storage':production.ix['Pumping Gen'].capacity,\n 'nuclear': production.ix['Nuclear'].capacity,\n 'solar': production.ix['Solar'].capacity,\n 'wind': production.ix['Wind'].capacity,\n 'unknown': production.ix['Co-Gen'].capacity\n },\n 'storage': {\n 'hydro': -1 * production.ix['Pumping Load'].output - production.ix['Pumping Gen'].output\n },\n 'source': 'taipower.com.tw'\n }\n\n return returndata\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/TW.py"}]} | 1,895 | 136 |
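The fix for the Taiwan parser above is a single URL swap: the retired opendata endpoint is replaced by Taipower's `genary.txt` feed, with the rest of the parser unchanged, which implies the feed keeps the same JSON shape (`''` holds the dump timestamp, `aaData` the per-unit rows). A minimal probe of the new endpoint, assuming it still serves that structure:

```python
import requests

URL = 'http://www.taipower.com.tw/d006/loadGraph/loadGraph/data/genary.txt'

response = requests.get(URL)
response.raise_for_status()
data = response.json()  # JSON content despite the .txt extension

print('dump time:', data[''])         # timestamp string, e.g. '2019-02-13 17:00'
print('rows:', len(data['aaData']))   # one entry per generating unit
```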
gh_patches_debug_495 | rasdani/github-patches | git_diff | mantl__mantl-1470 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace cisco logo with Mantl logo in documentation
Now that Mantl has its own logo (see http://mantl.io and the readme), should we replace [the Cisco logo](https://github.com/CiscoCloud/mantl/blob/master/docs/_static/cisco.png) we use in the docs?
- Ansible version (`ansible --version`): n/a
- Python version (`python --version`): n/a
- Git commit hash or branch: n/a
- Cloud Environment: n/a
- Terraform version (`terraform version`): n/a
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Mantl documentation build configuration file, created by sphinx-quickstart on
4 # Wed Feb 4 06:59:14 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17
18 # If extensions (or modules to document with autodoc) are in another directory,
19 # add these directories to sys.path here. If the directory is relative to the
20 # documentation root, use os.path.abspath to make it absolute, like shown here.
21 #sys.path.insert(0, os.path.abspath('.'))
22
23 # -- General configuration ------------------------------------------------
24
25 # If your documentation needs a minimal Sphinx version, state it here.
26 #needs_sphinx = '1.0'
27
28 # Add any Sphinx extension module names here, as strings. They can be
29 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
30 # ones.
31 extensions = [
32 'sphinx.ext.autodoc',
33 'sphinx.ext.intersphinx',
34 'sphinx.ext.viewcode',
35 'sphinx.ext.todo',
36 ]
37
38 # Add any paths that contain templates here, relative to this directory.
39 templates_path = ['_templates']
40
41 # The suffix of source filenames.
42 source_suffix = '.rst'
43
44 # The encoding of source files.
45 #source_encoding = 'utf-8-sig'
46
47 # The master toctree document.
48 master_doc = 'index'
49
50 # General information about the project.
51 project = u'Mantl'
52 copyright = u'2015, Cisco Systems, Incorporated'
53
54 # The version info for the project you're documenting, acts as replacement for
55 # |version| and |release|, also used in various other places throughout the
56 # built documents.
57 #
58 # The short X.Y version.
59 version = '1.0'
60 # The full version, including alpha/beta/rc tags.
61 release = '1.0.3'
62
63 # The language for content autogenerated by Sphinx. Refer to documentation
64 # for a list of supported languages.
65 #language = None
66
67 # There are two options for replacing |today|: either, you set today to some
68 # non-false value, then it is used:
69 #today = ''
70 # Else, today_fmt is used as the format for a strftime call.
71 #today_fmt = '%B %d, %Y'
72
73 # List of patterns, relative to source directory, that match files and
74 # directories to ignore when looking for source files.
75 exclude_patterns = ['_build']
76
77 # The reST default role (used for this markup: `text`) to use for all
78 # documents.
79 #default_role = None
80
81 # If true, '()' will be appended to :func: etc. cross-reference text.
82 #add_function_parentheses = True
83
84 # If true, the current module name will be prepended to all description
85 # unit titles (such as .. function::).
86 #add_module_names = True
87
88 # If true, sectionauthor and moduleauthor directives will be shown in the
89 # output. They are ignored by default.
90 #show_authors = False
91
92 # The name of the Pygments (syntax highlighting) style to use.
93 pygments_style = 'sphinx'
94
95 # A list of ignored prefixes for module index sorting.
96 #modindex_common_prefix = []
97
98 # If true, keep warnings as "system message" paragraphs in the built documents.
99 #keep_warnings = False
100
101
102 # -- Options for HTML output ----------------------------------------------
103
104 import alabaster
105
106 # The theme to use for HTML and HTML Help pages. See the documentation for
107 # a list of builtin themes.
108 html_theme = 'alabaster'
109
110 # Theme options are theme-specific and customize the look and feel of a theme
111 # further. For a list of options available for each theme, see the
112 # documentation.
113 extensions += ['alabaster']
114 html_theme_options = {
115 'github_user': 'ciscocloud',
116 'github_repo': 'mantl',
117 'logo': 'cisco.png',
118 'logo_name': True,
119 }
120
121 # Add any paths that contain custom themes here, relative to this directory.
122 html_theme_path = [alabaster.get_path()]
123
124 # The name for this set of Sphinx documents. If None, it defaults to
125 # "<project> v<release> documentation".
126 #html_title = None
127
128 # A shorter title for the navigation bar. Default is the same as html_title.
129 #html_short_title = None
130
131 # The name of an image file (relative to this directory) to place at the top
132 # of the sidebar.
133 # html_logo = None
134
135 # The name of an image file (within the static path) to use as favicon of the
136 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
137 # pixels large.
138 #html_favicon = None
139
140 # Add any paths that contain custom static files (such as style sheets) here,
141 # relative to this directory. They are copied after the builtin static files,
142 # so a file named "default.css" will overwrite the builtin "default.css".
143 html_static_path = ['_static']
144
145 # Add any extra paths that contain custom files (such as robots.txt or
146 # .htaccess) here, relative to this directory. These files are copied
147 # directly to the root of the documentation.
148 #html_extra_path = []
149
150 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
151 # using the given strftime format.
152 html_last_updated_fmt = '%b %d, %Y'
153
154 # If true, SmartyPants will be used to convert quotes and dashes to
155 # typographically correct entities.
156 #html_use_smartypants = True
157
158 # Custom sidebar templates, maps document names to template names.
159 html_sidebars = {
160 '**': [
161 'about.html', 'navigation.html', 'searchbox.html'
162 ]
163 }
164
165 # Additional templates that should be rendered to pages, maps page names to
166 # template names.
167 #html_additional_pages = {}
168
169 # If false, no module index is generated.
170 html_domain_indices = True
171
172 # If false, no index is generated.
173 html_use_index = True
174
175 # If true, the index is split into individual pages for each letter.
176 #html_split_index = False
177
178 # If true, links to the reST sources are added to the pages.
179 html_show_sourcelink = True
180
181 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
182 html_show_sphinx = False
183
184 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
185 #html_show_copyright = True
186
187 # If true, an OpenSearch description file will be output, and all pages will
188 # contain a <link> tag referring to it. The value of this option must be the
189 # base URL from which the finished HTML is served.
190 #html_use_opensearch = ''
191
192 # This is the file name suffix for HTML files (e.g. ".xhtml").
193 #html_file_suffix = None
194
195 # Output file base name for HTML help builder.
196 htmlhelp_basename = 'Mantldoc'
197
198
199 # -- Options for LaTeX output ---------------------------------------------
200
201 latex_elements = {
202 # The paper size ('letterpaper' or 'a4paper').
203 #'papersize': 'letterpaper',
204
205 # The font size ('10pt', '11pt' or '12pt').
206 #'pointsize': '10pt',
207
208 # Additional stuff for the LaTeX preamble.
209 #'preamble': '',
210 }
211
212 # Grouping the document tree into LaTeX files. List of tuples
213 # (source start file, target name, title,
214 # author, documentclass [howto, manual, or own class]).
215 latex_documents = [
216 ('index', 'Mantl.tex', u'Mantl Documentation',
217 u'Cisco Systems, Incorporated', 'manual'),
218 ]
219
220 # The name of an image file (relative to this directory) to place at the top of
221 # the title page.
222 #latex_logo = None
223
224 # For "manual" documents, if this is true, then toplevel headings are parts,
225 # not chapters.
226 #latex_use_parts = False
227
228 # If true, show page references after internal links.
229 #latex_show_pagerefs = False
230
231 # If true, show URL addresses after external links.
232 latex_show_urls = 'footnote'
233
234 # Documents to append as an appendix to all manuals.
235 #latex_appendices = []
236
237 # If false, no module index is generated.
238 #latex_domain_indices = True
239
240
241 # -- Options for manual page output ---------------------------------------
242
243 # One entry per manual page. List of tuples
244 # (source start file, name, description, authors, manual section).
245 man_pages = [
246 ('index', 'Mantl', u'Mantl Documentation',
247 [u'Cisco Systems, Incorporated'], 1)
248 ]
249
250 # If true, show URL addresses after external links.
251 #man_show_urls = False
252
253
254 # -- Options for Texinfo output -------------------------------------------
255
256 # Grouping the document tree into Texinfo files. List of tuples
257 # (source start file, target name, title, author,
258 # dir menu entry, description, category)
259 texinfo_documents = [
260 ('index', 'Mantl', u'Mantl Documentation',
261 u'Cisco Systems, Incorporated', 'Mantl', 'One line description of project.',
262 'Miscellaneous'),
263 ]
264
265 # Documents to append as an appendix to all manuals.
266 #texinfo_appendices = []
267
268 # If false, no module index is generated.
269 #texinfo_domain_indices = True
270
271 # How to display URL addresses: 'footnote', 'no', or 'inline'.
272 #texinfo_show_urls = 'footnote'
273
274 # If true, do not generate a @detailmenu in the "Top" node's menu.
275 #texinfo_no_detailmenu = False
276
277
278 # Example configuration for intersphinx: refer to the Python standard library.
279 intersphinx_mapping = {
280 'python': ('http://docs.python.org/', None),
281 'ansible': ('http://docs.ansible.com/', None),
282 }
283
284 # -- Options for todo ext ------------------------------------------------
285 todo_include_todos = os.getenv('INCLUDE_TODOS', '0') == '1' or version != release
286
287 # -- setup ---------------------------------------------------------------
288 def setup(app):
289 from sphinx.util.texescape import tex_replacements
290 tex_replacements.extend([
291 (u'☐', u'[ ]'),
292 (u'☑', u'[x]'),
293 ])
294
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -114,7 +114,7 @@
html_theme_options = {
'github_user': 'ciscocloud',
'github_repo': 'mantl',
- 'logo': 'cisco.png',
+ 'logo': 'mantl-logo.png',
'logo_name': True,
}
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -114,7 +114,7 @@\n html_theme_options = {\n 'github_user': 'ciscocloud',\n 'github_repo': 'mantl',\n- 'logo': 'cisco.png',\n+ 'logo': 'mantl-logo.png',\n 'logo_name': True,\n }\n", "issue": "Replace cisco logo with Mantl logo in documentation\nNow that Mantl has it's own logo (see http://mantl.io and the readme), should we replace [the Cisco logo](https://github.com/CiscoCloud/mantl/blob/master/docs/_static/cisco.png) we use in the docs? \n- Ansible version (`ansible --version`): n/a\n- Python version (`python --version`): n/a\n- Git commit hash or branch: n/a\n- Cloud Environment: n/a\n- Terraform version (`terraform version`): n/a\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Mantl documentation build configuration file, created by sphinx-quickstart on\n# Wed Feb 4 06:59:14 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.todo',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Mantl'\ncopyright = u'2015, Cisco Systems, Incorporated'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '1.0'\n# The full version, including alpha/beta/rc tags.\nrelease = '1.0.3'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. 
function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\nimport alabaster\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nextensions += ['alabaster']\nhtml_theme_options = {\n 'github_user': 'ciscocloud',\n 'github_repo': 'mantl',\n 'logo': 'cisco.png',\n 'logo_name': True,\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = [alabaster.get_path()]\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': [\n 'about.html', 'navigation.html', 'searchbox.html'\n ]\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\nhtml_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. 
The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Mantldoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Mantl.tex', u'Mantl Documentation',\n u'Cisco Systems, Incorporated', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\nlatex_show_urls = 'footnote'\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'Mantl', u'Mantl Documentation',\n [u'Cisco Systems, Incorporated'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Mantl', u'Mantl Documentation',\n u'Cisco Systems, Incorporated', 'Mantl', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('http://docs.python.org/', None),\n 'ansible': ('http://docs.ansible.com/', None),\n}\n\n# -- Options for todo ext ------------------------------------------------\ntodo_include_todos = os.getenv('INCLUDE_TODOS', '0') == '1' or version != release\n\n# -- setup ---------------------------------------------------------------\ndef setup(app):\n from sphinx.util.texescape import tex_replacements\n tex_replacements.extend([\n (u'\u2610', u'[ ]'),\n (u'\u2611', u'[x]'),\n ])\n", "path": "docs/conf.py"}]} | 3,716 | 92 |
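The Mantl change above is confined to the alabaster theme options in `docs/conf.py`; for it to take effect, the referenced image has to exist under the static path (`docs/_static/`), since alabaster resolves its `logo` option relative to that directory. The option block after the patch, for reference:

```python
html_theme_options = {
    'github_user': 'ciscocloud',
    'github_repo': 'mantl',
    'logo': 'mantl-logo.png',   # resolved against html_static_path, i.e. docs/_static/
    'logo_name': True,
}
```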
gh_patches_debug_3005 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3792 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
log spam during test
What's this `TypeError` about? Seems spurious ...
```
pid-13897 thread-4384100352 (Thread-1) Traceback (most recent call last):
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/cron.py", line 26, in f
pid-13897 thread-4384100352 (Thread-1) func()
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/main.py", line 82, in <lambda>
pid-13897 thread-4384100352 (Thread-1) cron(env.update_cta_every, lambda: utils.update_cta(website))
pid-13897 thread-4384100352 (Thread-1) File "/Users/whit537/personal/gratipay/gratipay.com/gratipay/utils/__init__.py", line 145, in update_cta
pid-13897 thread-4384100352 (Thread-1) website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0
pid-13897 thread-4384100352 (Thread-1) TypeError: unsupported operand type(s) for /: 'int' and 'tuple'
```
</issue>
<code>
[start of gratipay/utils/__init__.py]
1 # encoding: utf8
2
3 from __future__ import absolute_import, division, print_function, unicode_literals
4
5 from datetime import datetime, timedelta
6
7 from aspen import Response, json
8 from aspen.utils import to_rfc822, utcnow
9 from dependency_injection import resolve_dependencies
10 from postgres.cursors import SimpleCursorBase
11
12 import gratipay
13
14
15 BEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')
16
17 # Difference between current time and credit card expiring date when
18 # card is considered as expiring
19 EXPIRING_DELTA = timedelta(days = 30)
20
21
22 def dict_to_querystring(mapping):
23 if not mapping:
24 return u''
25
26 arguments = []
27 for key, values in mapping.iteritems():
28 for val in values:
29 arguments.append(u'='.join([key, val]))
30
31 return u'?' + u'&'.join(arguments)
32
33
34 def use_tildes_for_participants(website, request):
35 if request.path.raw.startswith('/~/'):
36 to = '/~' + request.path.raw[3:]
37 if request.qs.raw:
38 to += '?' + request.qs.raw
39 website.redirect(to)
40 elif request.path.raw.startswith('/~'):
41 request.path.__init__('/~/' + request.path.raw[2:])
42
43
44 def canonicalize(redirect, path, base, canonical, given, arguments=None):
45 if given != canonical:
46 assert canonical.lower() == given.lower() # sanity check
47 remainder = path[len(base + given):]
48
49 if arguments is not None:
50 arguments = dict_to_querystring(arguments)
51
52 newpath = base + canonical + remainder + arguments or ''
53 redirect(newpath)
54
55
56 def get_participant(state, restrict=True, resolve_unclaimed=True):
57 """Given a Request, raise Response or return Participant.
58
59 If restrict is True then we'll restrict access to owners and admins.
60
61 """
62 redirect = state['website'].redirect
63 request = state['request']
64 user = state['user']
65 slug = request.line.uri.path['username']
66 qs = request.line.uri.querystring
67 _ = state['_']
68
69 if restrict:
70 if user.ANON:
71 raise Response(403, _("You need to log in to access this page."))
72
73 from gratipay.models.participant import Participant # avoid circular import
74 participant = Participant.from_username(slug)
75
76 if participant is None:
77 raise Response(404)
78
79 canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)
80
81 if participant.is_closed:
82 if user.ADMIN:
83 return participant
84 raise Response(410)
85
86 if participant.claimed_time is None and resolve_unclaimed:
87 to = participant.resolve_unclaimed()
88 if to:
89 # This is a stub account (someone on another platform who hasn't
90 # actually registered with Gratipay yet)
91 redirect(to)
92 else:
93 # This is an archived account (result of take_over)
94 if user.ADMIN:
95 return participant
96 raise Response(404)
97
98 if restrict:
99 if participant != user.participant:
100 if not user.ADMIN:
101 raise Response(403, _("You are not authorized to access this page."))
102
103 return participant
104
105
106 def get_team(state):
107 """Given a Request, raise Response or return Team.
108 """
109 redirect = state['website'].redirect
110 request = state['request']
111 user = state['user']
112 slug = request.line.uri.path['team']
113 qs = request.line.uri.querystring
114
115 from gratipay.models.team import Team # avoid circular import
116 team = Team.from_slug(slug)
117
118 if team is None:
119 # Try to redirect to a Participant.
120 from gratipay.models.participant import Participant # avoid circular import
121 participant = Participant.from_username(slug)
122 if participant is not None:
123 qs = '?' + request.qs.raw if request.qs.raw else ''
124 redirect('/~' + request.path.raw[1:] + qs)
125 raise Response(404)
126
127 canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)
128
129 if team.is_closed and not user.ADMIN:
130 raise Response(410)
131
132 return team
133
134
135 def update_cta(website):
136 nusers = website.db.one("""
137 SELECT nusers FROM paydays
138 ORDER BY ts_end DESC LIMIT 1
139 """, default=(0.0, 0))
140 nreceiving_from = website.db.one("""
141 SELECT nreceiving_from
142 FROM teams
143 WHERE slug = 'Gratipay'
144 """, default=0)
145 website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0
146 if cur < 10: goal = 20
147 elif cur < 15: goal = 30
148 elif cur < 25: goal = 40
149 elif cur < 35: goal = 50
150 elif cur < 45: goal = 60
151 elif cur < 55: goal = 70
152 elif cur < 65: goal = 80
153 elif cur > 70: goal = None
154 website.support_goal = goal
155
156
157 def _execute(this, sql, params=[]):
158 print(sql.strip(), params)
159 super(SimpleCursorBase, this).execute(sql, params)
160
161 def log_cursor(f):
162 "Prints sql and params to stdout. Works globaly so watch for threaded use."
163 def wrapper(*a, **kw):
164 try:
165 SimpleCursorBase.execute = _execute
166 ret = f(*a, **kw)
167 finally:
168 del SimpleCursorBase.execute
169 return ret
170 return wrapper
171
172
173 def format_money(money):
174 format = '%.2f' if money < 1000 else '%.0f'
175 return format % money
176
177
178 def excerpt_intro(text, length=175, append=u'…'):
179 if not text:
180 return ''
181 if len(text) > length:
182 return text[:length] + append
183 return text
184
185
186 def is_card_expiring(expiration_year, expiration_month):
187 now = datetime.utcnow()
188 expiring_date = datetime(expiration_year, expiration_month, 1)
189 delta = expiring_date - now
190 return delta < EXPIRING_DELTA
191
192
193 def set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):
194 cookies[key] = value
195 cookie = cookies[key]
196 if expires:
197 if isinstance(expires, timedelta):
198 expires += utcnow()
199 if isinstance(expires, datetime):
200 expires = to_rfc822(expires).encode('ascii')
201 cookie[b'expires'] = expires
202 if httponly:
203 cookie[b'httponly'] = True
204 if path:
205 cookie[b'path'] = path
206 if gratipay.use_secure_cookies:
207 cookie[b'secure'] = True
208
209
210 def erase_cookie(cookies, key, **kw):
211 set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)
212
213
214 def filter_profile_nav(user, participant, pages):
215 out = []
216 for foo, bar, show_them, show_others in pages:
217 if (user.participant == participant and show_them) \
218 or (user.participant != participant and show_others) \
219 or user.ADMIN:
220 out.append((foo, bar, show_them, show_others))
221 return out
222
223
224 def to_javascript(obj):
225 """For when you want to inject an object into a <script> tag.
226 """
227 return json.dumps(obj).replace('</', '<\\/')
228
229
230 class LazyResponse(Response):
231
232 def __init__(self, code, lazy_body, **kw):
233 Response.__init__(self, code, '', **kw)
234 self.lazy_body = lazy_body
235
236 def render_body(self, state):
237 f = self.lazy_body
238 self.body = f(*resolve_dependencies(f, state).as_args)
239
[end of gratipay/utils/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/utils/__init__.py b/gratipay/utils/__init__.py
--- a/gratipay/utils/__init__.py
+++ b/gratipay/utils/__init__.py
@@ -136,7 +136,7 @@
nusers = website.db.one("""
SELECT nusers FROM paydays
ORDER BY ts_end DESC LIMIT 1
- """, default=(0.0, 0))
+ """, default=0)
nreceiving_from = website.db.one("""
SELECT nreceiving_from
FROM teams
| {"golden_diff": "diff --git a/gratipay/utils/__init__.py b/gratipay/utils/__init__.py\n--- a/gratipay/utils/__init__.py\n+++ b/gratipay/utils/__init__.py\n@@ -136,7 +136,7 @@\n nusers = website.db.one(\"\"\"\n SELECT nusers FROM paydays\n ORDER BY ts_end DESC LIMIT 1\n- \"\"\", default=(0.0, 0))\n+ \"\"\", default=0)\n nreceiving_from = website.db.one(\"\"\"\n SELECT nreceiving_from\n FROM teams\n", "issue": "log spam during test\nWhat's this `TypeError` about? Seems spurious ...\n\n```\npid-13897 thread-4384100352 (Thread-1) Traceback (most recent call last):\npid-13897 thread-4384100352 (Thread-1) File \"/Users/whit537/personal/gratipay/gratipay.com/gratipay/cron.py\", line 26, in f\npid-13897 thread-4384100352 (Thread-1) func()\npid-13897 thread-4384100352 (Thread-1) File \"/Users/whit537/personal/gratipay/gratipay.com/gratipay/main.py\", line 82, in <lambda>\npid-13897 thread-4384100352 (Thread-1) cron(env.update_cta_every, lambda: utils.update_cta(website))\npid-13897 thread-4384100352 (Thread-1) File \"/Users/whit537/personal/gratipay/gratipay.com/gratipay/utils/__init__.py\", line 145, in update_cta\npid-13897 thread-4384100352 (Thread-1) website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\npid-13897 thread-4384100352 (Thread-1) TypeError: unsupported operand type(s) for /: 'int' and 'tuple'\n```\n\nlog spam during test\nWhat's this `TypeError` about? Seems spurious ...\n\n```\npid-13897 thread-4384100352 (Thread-1) Traceback (most recent call last):\npid-13897 thread-4384100352 (Thread-1) File \"/Users/whit537/personal/gratipay/gratipay.com/gratipay/cron.py\", line 26, in f\npid-13897 thread-4384100352 (Thread-1) func()\npid-13897 thread-4384100352 (Thread-1) File \"/Users/whit537/personal/gratipay/gratipay.com/gratipay/main.py\", line 82, in <lambda>\npid-13897 thread-4384100352 (Thread-1) cron(env.update_cta_every, lambda: utils.update_cta(website))\npid-13897 thread-4384100352 (Thread-1) File \"/Users/whit537/personal/gratipay/gratipay.com/gratipay/utils/__init__.py\", line 145, in update_cta\npid-13897 thread-4384100352 (Thread-1) website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\npid-13897 thread-4384100352 (Thread-1) TypeError: unsupported operand type(s) for /: 'int' and 'tuple'\n```\n\n", "before_files": [{"content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cursors import SimpleCursorBase\n\nimport gratipay\n\n\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')\n\n# Difference between current time and credit card expiring date when\n# card is considered as expiring\nEXPIRING_DELTA = timedelta(days = 30)\n\n\ndef dict_to_querystring(mapping):\n if not mapping:\n return u''\n\n arguments = []\n for key, values in mapping.iteritems():\n for val in values:\n arguments.append(u'='.join([key, val]))\n\n return u'?' + u'&'.join(arguments)\n\n\ndef use_tildes_for_participants(website, request):\n if request.path.raw.startswith('/~/'):\n to = '/~' + request.path.raw[3:]\n if request.qs.raw:\n to += '?' 
+ request.qs.raw\n website.redirect(to)\n elif request.path.raw.startswith('/~'):\n request.path.__init__('/~/' + request.path.raw[2:])\n\n\ndef canonicalize(redirect, path, base, canonical, given, arguments=None):\n if given != canonical:\n assert canonical.lower() == given.lower() # sanity check\n remainder = path[len(base + given):]\n\n if arguments is not None:\n arguments = dict_to_querystring(arguments)\n\n newpath = base + canonical + remainder + arguments or ''\n redirect(newpath)\n\n\ndef get_participant(state, restrict=True, resolve_unclaimed=True):\n \"\"\"Given a Request, raise Response or return Participant.\n\n If restrict is True then we'll restrict access to owners and admins.\n\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['username']\n qs = request.line.uri.querystring\n _ = state['_']\n\n if restrict:\n if user.ANON:\n raise Response(403, _(\"You need to log in to access this page.\"))\n\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n\n if participant is None:\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)\n\n if participant.is_closed:\n if user.ADMIN:\n return participant\n raise Response(410)\n\n if participant.claimed_time is None and resolve_unclaimed:\n to = participant.resolve_unclaimed()\n if to:\n # This is a stub account (someone on another platform who hasn't\n # actually registered with Gratipay yet)\n redirect(to)\n else:\n # This is an archived account (result of take_over)\n if user.ADMIN:\n return participant\n raise Response(404)\n\n if restrict:\n if participant != user.participant:\n if not user.ADMIN:\n raise Response(403, _(\"You are not authorized to access this page.\"))\n\n return participant\n\n\ndef get_team(state):\n \"\"\"Given a Request, raise Response or return Team.\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['team']\n qs = request.line.uri.querystring\n\n from gratipay.models.team import Team # avoid circular import\n team = Team.from_slug(slug)\n\n if team is None:\n # Try to redirect to a Participant.\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n if participant is not None:\n qs = '?' + request.qs.raw if request.qs.raw else ''\n redirect('/~' + request.path.raw[1:] + qs)\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)\n\n if team.is_closed and not user.ADMIN:\n raise Response(410)\n\n return team\n\n\ndef update_cta(website):\n nusers = website.db.one(\"\"\"\n SELECT nusers FROM paydays\n ORDER BY ts_end DESC LIMIT 1\n \"\"\", default=(0.0, 0))\n nreceiving_from = website.db.one(\"\"\"\n SELECT nreceiving_from\n FROM teams\n WHERE slug = 'Gratipay'\n \"\"\", default=0)\n website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\n if cur < 10: goal = 20\n elif cur < 15: goal = 30\n elif cur < 25: goal = 40\n elif cur < 35: goal = 50\n elif cur < 45: goal = 60\n elif cur < 55: goal = 70\n elif cur < 65: goal = 80\n elif cur > 70: goal = None\n website.support_goal = goal\n\n\ndef _execute(this, sql, params=[]):\n print(sql.strip(), params)\n super(SimpleCursorBase, this).execute(sql, params)\n\ndef log_cursor(f):\n \"Prints sql and params to stdout. 
Works globaly so watch for threaded use.\"\n def wrapper(*a, **kw):\n try:\n SimpleCursorBase.execute = _execute\n ret = f(*a, **kw)\n finally:\n del SimpleCursorBase.execute\n return ret\n return wrapper\n\n\ndef format_money(money):\n format = '%.2f' if money < 1000 else '%.0f'\n return format % money\n\n\ndef excerpt_intro(text, length=175, append=u'\u2026'):\n if not text:\n return ''\n if len(text) > length:\n return text[:length] + append\n return text\n\n\ndef is_card_expiring(expiration_year, expiration_month):\n now = datetime.utcnow()\n expiring_date = datetime(expiration_year, expiration_month, 1)\n delta = expiring_date - now\n return delta < EXPIRING_DELTA\n\n\ndef set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):\n cookies[key] = value\n cookie = cookies[key]\n if expires:\n if isinstance(expires, timedelta):\n expires += utcnow()\n if isinstance(expires, datetime):\n expires = to_rfc822(expires).encode('ascii')\n cookie[b'expires'] = expires\n if httponly:\n cookie[b'httponly'] = True\n if path:\n cookie[b'path'] = path\n if gratipay.use_secure_cookies:\n cookie[b'secure'] = True\n\n\ndef erase_cookie(cookies, key, **kw):\n set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)\n\n\ndef filter_profile_nav(user, participant, pages):\n out = []\n for foo, bar, show_them, show_others in pages:\n if (user.participant == participant and show_them) \\\n or (user.participant != participant and show_others) \\\n or user.ADMIN:\n out.append((foo, bar, show_them, show_others))\n return out\n\n\ndef to_javascript(obj):\n \"\"\"For when you want to inject an object into a <script> tag.\n \"\"\"\n return json.dumps(obj).replace('</', '<\\\\/')\n\n\nclass LazyResponse(Response):\n\n def __init__(self, code, lazy_body, **kw):\n Response.__init__(self, code, '', **kw)\n self.lazy_body = lazy_body\n\n def render_body(self, state):\n f = self.lazy_body\n self.body = f(*resolve_dependencies(f, state).as_args)\n", "path": "gratipay/utils/__init__.py"}]} | 3,717 | 126 |
gh_patches_debug_25842 | rasdani/github-patches | git_diff | amundsen-io__amundsen-1303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug Report: Glue search_tables with Filters and result tables more than 100 items
<!--- Provide a general summary of the issue in the Title above -->
<!--- Look through existing open and closed issues to see if someone has reported the issue before -->
There is a bug when applying Filters to a database that contains more than 100 items: Glue returns 100 items per page, and to move to the next page we need to specify `NextToken`.
I have 138 tables, which means I will iterate over the result twice.
The filter:
```python
{
'Key': 'DatabaseName',
'Value': glue_database_name
}
```
Every time I run the code I get different results: the length of the list is always the same - 138. However, the length of the set is always different. It ranges from 1 to 30.
I ran my check over 10 times.
I took a look at the documentation and found a suitable parameter, `MaxResults`, for further checking. Since I know the desired table count precisely, I set it to 150 and the issue was completely gone.
## Expected Behavior
Get the exact same result for filtered tables.
## Current Behavior
Query result from [`self._glue.search_tables(**kwargs)`](https://github.com/amundsen-io/amundsen/blob/main/databuilder/databuilder/extractor/glue_extractor.py#L78) contains duplicates
## Possible Solution
I'm not sure, but I think for the next (second) iteration (page, which contains up to 100 items) we are using a new `NextToken` with previous filters. Maybe the problem lies here.
## Steps to Reproduce
1. Have more than 100 glue tables in a single DB in AWS
2. Query it using the abovementioned `DatabaseName` filter
3. Observe duplicates in the list
## Hot-fix
1. Add `MaxResults` to [`kwargs`](https://github.com/amundsen-io/amundsen/blob/main/databuilder/databuilder/extractor/glue_extractor.py#L80) with a value larger than your actual total number of tables (a minimal sketch follows this issue)
2. Observe a proper behavior
## Context
Q: How has this issue affected you?
A: It affects our production system
## Your Environment
```
amundsen-databuilder==4.5.3
amundsen-gremlin==0.0.9
Flask==1.1.4
gremlinpython==3.4.9
requests-aws4auth==1.1.1
typing-extensions==3.10.0
overrides==6.1.0
```
</issue>
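A minimal sketch of the hot-fix steps above, kept outside the extractor for clarity. The helper name, the hard-coded `DatabaseName` filter and the default of 500 are illustrative, not part of the project; `Filters`, `MaxResults` and `NextToken` are genuine `search_tables` parameters.

```python
# Sketch of the hot-fix: pass MaxResults together with the Filters so that
# paging with NextToken no longer yields overlapping/duplicated tables.
import boto3


def search_all_tables(glue_database_name, max_results=500):
    glue = boto3.client("glue")
    kwargs = {
        "Filters": [{"Key": "DatabaseName", "Value": glue_database_name}],
        "MaxResults": max_results,  # set above the real table count (138 here)
    }
    tables = []
    data = glue.search_tables(**kwargs)
    tables += data["TableList"]
    while "NextToken" in data:
        kwargs["NextToken"] = data["NextToken"]
        data = glue.search_tables(**kwargs)
        tables += data["TableList"]
    return tables
```

The golden diff further down takes the same route, exposing the value as a configurable `max_results` key on the extractor with a default of 500.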
<code>
[start of databuilder/databuilder/extractor/glue_extractor.py]
1 # Copyright Contributors to the Amundsen project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from typing import (
5 Any, Dict, Iterator, List, Union,
6 )
7
8 import boto3
9 from pyhocon import ConfigFactory, ConfigTree
10
11 from databuilder.extractor.base_extractor import Extractor
12 from databuilder.models.table_metadata import ColumnMetadata, TableMetadata
13
14
15 class GlueExtractor(Extractor):
16 """
17 Extracts tables and columns metadata from AWS Glue metastore
18 """
19
20 CLUSTER_KEY = 'cluster'
21 FILTER_KEY = 'filters'
22 DEFAULT_CONFIG = ConfigFactory.from_dict({CLUSTER_KEY: 'gold', FILTER_KEY: None})
23
24 def init(self, conf: ConfigTree) -> None:
25 conf = conf.with_fallback(GlueExtractor.DEFAULT_CONFIG)
26 self._cluster = conf.get_string(GlueExtractor.CLUSTER_KEY)
27 self._filters = conf.get(GlueExtractor.FILTER_KEY)
28 self._glue = boto3.client('glue')
29 self._extract_iter: Union[None, Iterator] = None
30
31 def extract(self) -> Union[TableMetadata, None]:
32 if not self._extract_iter:
33 self._extract_iter = self._get_extract_iter()
34 try:
35 return next(self._extract_iter)
36 except StopIteration:
37 return None
38
39 def get_scope(self) -> str:
40 return 'extractor.glue'
41
42 def _get_extract_iter(self) -> Iterator[TableMetadata]:
43 """
44 It gets all tables and yields TableMetadata
45 :return:
46 """
47 for row in self._get_raw_extract_iter():
48 columns, i = [], 0
49
50 for column in row['StorageDescriptor']['Columns'] \
51 + row.get('PartitionKeys', []):
52 columns.append(ColumnMetadata(
53 column['Name'],
54 column['Comment'] if 'Comment' in column else None,
55 column['Type'],
56 i
57 ))
58 i += 1
59
60 yield TableMetadata(
61 'glue',
62 self._cluster,
63 row['DatabaseName'],
64 row['Name'],
65 row.get('Description') or row.get('Parameters', {}).get('comment'),
66 columns,
67 row.get('TableType') == 'VIRTUAL_VIEW',
68 )
69
70 def _get_raw_extract_iter(self) -> Iterator[Dict[str, Any]]:
71 """
72 Provides iterator of results row from glue client
73 :return:
74 """
75 tables = self._search_tables()
76 return iter(tables)
77
78 def _search_tables(self) -> List[Dict[str, Any]]:
79 tables = []
80 kwargs = {}
81 if self._filters is not None:
82 kwargs['Filters'] = self._filters
83 data = self._glue.search_tables(**kwargs)
84 tables += data['TableList']
85 while 'NextToken' in data:
86 token = data['NextToken']
87 kwargs['NextToken'] = token
88 data = self._glue.search_tables(**kwargs)
89 tables += data['TableList']
90 return tables
91
[end of databuilder/databuilder/extractor/glue_extractor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/databuilder/databuilder/extractor/glue_extractor.py b/databuilder/databuilder/extractor/glue_extractor.py
--- a/databuilder/databuilder/extractor/glue_extractor.py
+++ b/databuilder/databuilder/extractor/glue_extractor.py
@@ -19,12 +19,14 @@
CLUSTER_KEY = 'cluster'
FILTER_KEY = 'filters'
- DEFAULT_CONFIG = ConfigFactory.from_dict({CLUSTER_KEY: 'gold', FILTER_KEY: None})
+ MAX_RESULTS_KEY = 'max_results'
+ DEFAULT_CONFIG = ConfigFactory.from_dict({CLUSTER_KEY: 'gold', FILTER_KEY: None, MAX_RESULTS_KEY: 500})
def init(self, conf: ConfigTree) -> None:
conf = conf.with_fallback(GlueExtractor.DEFAULT_CONFIG)
self._cluster = conf.get_string(GlueExtractor.CLUSTER_KEY)
self._filters = conf.get(GlueExtractor.FILTER_KEY)
+ self._max_results = conf.get(GlueExtractor.MAX_RESULTS_KEY)
self._glue = boto3.client('glue')
self._extract_iter: Union[None, Iterator] = None
@@ -80,6 +82,7 @@
kwargs = {}
if self._filters is not None:
kwargs['Filters'] = self._filters
+ kwargs['MaxResults'] = self._max_results
data = self._glue.search_tables(**kwargs)
tables += data['TableList']
while 'NextToken' in data:
| {"golden_diff": "diff --git a/databuilder/databuilder/extractor/glue_extractor.py b/databuilder/databuilder/extractor/glue_extractor.py\n--- a/databuilder/databuilder/extractor/glue_extractor.py\n+++ b/databuilder/databuilder/extractor/glue_extractor.py\n@@ -19,12 +19,14 @@\n \n CLUSTER_KEY = 'cluster'\n FILTER_KEY = 'filters'\n- DEFAULT_CONFIG = ConfigFactory.from_dict({CLUSTER_KEY: 'gold', FILTER_KEY: None})\n+ MAX_RESULTS_KEY = 'max_results'\n+ DEFAULT_CONFIG = ConfigFactory.from_dict({CLUSTER_KEY: 'gold', FILTER_KEY: None, MAX_RESULTS_KEY: 500})\n \n def init(self, conf: ConfigTree) -> None:\n conf = conf.with_fallback(GlueExtractor.DEFAULT_CONFIG)\n self._cluster = conf.get_string(GlueExtractor.CLUSTER_KEY)\n self._filters = conf.get(GlueExtractor.FILTER_KEY)\n+ self._max_results = conf.get(GlueExtractor.MAX_RESULTS_KEY)\n self._glue = boto3.client('glue')\n self._extract_iter: Union[None, Iterator] = None\n \n@@ -80,6 +82,7 @@\n kwargs = {}\n if self._filters is not None:\n kwargs['Filters'] = self._filters\n+ kwargs['MaxResults'] = self._max_results\n data = self._glue.search_tables(**kwargs)\n tables += data['TableList']\n while 'NextToken' in data:\n", "issue": "Bug Report: Glue search_tables with Filters and result tables more than 100 items\n<!--- Provide a general summary of the issue in the Title above -->\r\n<!--- Look through existing open and closed issues to see if someone has reported the issue before -->\r\n\r\nThere is a bug while applying Filters for the database which contains more than 100 items. Since glue returns 100 items per page and to move to the next page we need to specify `NextToken`.\r\nI have 138 tables, which means I will be iterating 2 times over the result.\r\n\r\nThe filter:\r\n```python\r\n{\r\n 'Key': 'DatabaseName',\r\n 'Value': glue_database_name\r\n}\r\n```\r\n\r\nEvery time I run the code I get different results: the length of the list is always the same - 138. However, the length of the set is always different. It ranges from 1 to 30.\r\nI run my check over 10 times.\r\n\r\nI took look at the documentation and found a proper parameter `MaxResults` for further checking. Since I know precisely desired table count, I put it as 150 and the issue has totally gone.\r\n\r\n## Expected Behavior\r\nGet the exact same result for filtered tables.\r\n\r\n## Current Behavior\r\nQuery result from [`self._glue.search_tables(**kwargs)`](https://github.com/amundsen-io/amundsen/blob/main/databuilder/databuilder/extractor/glue_extractor.py#L78) contains duplicates\r\n\r\n## Possible Solution\r\nI'm not sure, but I think for the next (second) iteration (page, which contains up to 100 items) we are using a new `NextToken` with previous filters. Maybe the problem lies here.\r\n\r\n## Steps to Reproduce\r\n1. Have more than 100 glue tables in a single DB in AWS\r\n2. Query it using the abovementioned `DatabaseName` filter\r\n3. Observe duplicates in the list\r\n\r\n## Hot-fix\r\n1. Add `MaxResults` to [`kwargs`](https://github.com/amundsen-io/amundsen/blob/main/databuilder/databuilder/extractor/glue_extractor.py#L80) that is more than your actual size of overall tables\r\n2. 
Observe a proper behavior\r\n\r\n## Context\r\nQ: How has this issue affected you?\r\nA: It affects our production system\r\n\r\n## Your Environment\r\n```\r\namundsen-databuilder==4.5.3\r\namundsen-gremlin==0.0.9\r\nFlask==1.1.4\r\ngremlinpython==3.4.9\r\nrequests-aws4auth==1.1.1\r\ntyping-extensions==3.10.0\r\noverrides==6.1.0\r\n```\n", "before_files": [{"content": "# Copyright Contributors to the Amundsen project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom typing import (\n Any, Dict, Iterator, List, Union,\n)\n\nimport boto3\nfrom pyhocon import ConfigFactory, ConfigTree\n\nfrom databuilder.extractor.base_extractor import Extractor\nfrom databuilder.models.table_metadata import ColumnMetadata, TableMetadata\n\n\nclass GlueExtractor(Extractor):\n \"\"\"\n Extracts tables and columns metadata from AWS Glue metastore\n \"\"\"\n\n CLUSTER_KEY = 'cluster'\n FILTER_KEY = 'filters'\n DEFAULT_CONFIG = ConfigFactory.from_dict({CLUSTER_KEY: 'gold', FILTER_KEY: None})\n\n def init(self, conf: ConfigTree) -> None:\n conf = conf.with_fallback(GlueExtractor.DEFAULT_CONFIG)\n self._cluster = conf.get_string(GlueExtractor.CLUSTER_KEY)\n self._filters = conf.get(GlueExtractor.FILTER_KEY)\n self._glue = boto3.client('glue')\n self._extract_iter: Union[None, Iterator] = None\n\n def extract(self) -> Union[TableMetadata, None]:\n if not self._extract_iter:\n self._extract_iter = self._get_extract_iter()\n try:\n return next(self._extract_iter)\n except StopIteration:\n return None\n\n def get_scope(self) -> str:\n return 'extractor.glue'\n\n def _get_extract_iter(self) -> Iterator[TableMetadata]:\n \"\"\"\n It gets all tables and yields TableMetadata\n :return:\n \"\"\"\n for row in self._get_raw_extract_iter():\n columns, i = [], 0\n\n for column in row['StorageDescriptor']['Columns'] \\\n + row.get('PartitionKeys', []):\n columns.append(ColumnMetadata(\n column['Name'],\n column['Comment'] if 'Comment' in column else None,\n column['Type'],\n i\n ))\n i += 1\n\n yield TableMetadata(\n 'glue',\n self._cluster,\n row['DatabaseName'],\n row['Name'],\n row.get('Description') or row.get('Parameters', {}).get('comment'),\n columns,\n row.get('TableType') == 'VIRTUAL_VIEW',\n )\n\n def _get_raw_extract_iter(self) -> Iterator[Dict[str, Any]]:\n \"\"\"\n Provides iterator of results row from glue client\n :return:\n \"\"\"\n tables = self._search_tables()\n return iter(tables)\n\n def _search_tables(self) -> List[Dict[str, Any]]:\n tables = []\n kwargs = {}\n if self._filters is not None:\n kwargs['Filters'] = self._filters\n data = self._glue.search_tables(**kwargs)\n tables += data['TableList']\n while 'NextToken' in data:\n token = data['NextToken']\n kwargs['NextToken'] = token\n data = self._glue.search_tables(**kwargs)\n tables += data['TableList']\n return tables\n", "path": "databuilder/databuilder/extractor/glue_extractor.py"}]} | 1,943 | 338 |
gh_patches_debug_23504 | rasdani/github-patches | git_diff | iterative__dvc-8197 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
installing from Ubuntu repo does not install s3 adapter
# Bug Report
DVC version 2.6.3

DVC version 2.21.1

</issue>
<code>
[start of scripts/pyinstaller/build.py]
1 import os
2 import pathlib
3 from subprocess import STDOUT, check_call
4
5 path = pathlib.Path(__file__).parent.absolute()
6 hooks = path / "hooks"
7 dvc = path.parent.parent / "dvc"
8 entry = dvc / "__main__.py"
9
10 check_call(
11 [
12 "pyinstaller",
13 "--additional-hooks-dir",
14 os.fspath(hooks),
15 "--name",
16 "dvc",
17 "-y",
18 os.fspath(entry),
19 ],
20 cwd=path,
21 stderr=STDOUT,
22 )
23
24 check_call(
25 [
26 path / "dist" / "dvc" / "dvc",
27 "doctor",
28 ],
29 stderr=STDOUT,
30 )
31
[end of scripts/pyinstaller/build.py]
[start of scripts/pyinstaller/hooks/hook-dvc.py]
1 from PyInstaller.utils.hooks import ( # pylint:disable=import-error
2 copy_metadata,
3 )
4
5 # needed for `dvc doctor` to show dep versions
6 datas = copy_metadata("adlfs", recursive=True)
7 datas += copy_metadata("knack")
8 datas += copy_metadata("gcsfs")
9 datas += copy_metadata("pyarrow")
10 datas += copy_metadata("pydrive2")
11 datas += copy_metadata("s3fs", recursive=True)
12 datas += copy_metadata("boto3")
13 datas += copy_metadata("ossfs")
14 datas += copy_metadata("sshfs")
15 datas += copy_metadata("webdav4")
16 datas += copy_metadata("aiohttp")
17 datas += copy_metadata("aiohttp_retry")
18
19 # https://github.com/pypa/setuptools/issues/1963
20 hiddenimports = ["pkg_resources.py2_warn"]
21
[end of scripts/pyinstaller/hooks/hook-dvc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/pyinstaller/build.py b/scripts/pyinstaller/build.py
--- a/scripts/pyinstaller/build.py
+++ b/scripts/pyinstaller/build.py
@@ -1,6 +1,6 @@
import os
import pathlib
-from subprocess import STDOUT, check_call
+from subprocess import STDOUT, check_call, check_output
path = pathlib.Path(__file__).parent.absolute()
hooks = path / "hooks"
@@ -21,10 +21,27 @@
stderr=STDOUT,
)
-check_call(
+out = check_output(
[
path / "dist" / "dvc" / "dvc",
"doctor",
],
stderr=STDOUT,
-)
+).decode()
+
+remotes = [
+ "s3",
+ "oss",
+ "gdrive",
+ "gs",
+ "hdfs",
+ "http",
+ "webhdfs",
+ "azure",
+ "ssh",
+ "webdav",
+]
+
+print(out)
+for remote in remotes:
+ assert f"\t{remote}" in out, f"Missing support for {remote}"
diff --git a/scripts/pyinstaller/hooks/hook-dvc.py b/scripts/pyinstaller/hooks/hook-dvc.py
--- a/scripts/pyinstaller/hooks/hook-dvc.py
+++ b/scripts/pyinstaller/hooks/hook-dvc.py
@@ -16,5 +16,15 @@
datas += copy_metadata("aiohttp")
datas += copy_metadata("aiohttp_retry")
-# https://github.com/pypa/setuptools/issues/1963
-hiddenimports = ["pkg_resources.py2_warn"]
+hiddenimports = [
+ "dvc_azure",
+ "dvc_gdrive",
+ "dvc_gs",
+ "dvc_hdfs",
+ "dvc_oss",
+ "dvc_s3",
+ "dvc_webdav",
+ "dvc_webhdfs",
+ # https://github.com/pypa/setuptools/issues/1963
+ "pkg_resources.py2_warn",
+]
| {"golden_diff": "diff --git a/scripts/pyinstaller/build.py b/scripts/pyinstaller/build.py\n--- a/scripts/pyinstaller/build.py\n+++ b/scripts/pyinstaller/build.py\n@@ -1,6 +1,6 @@\n import os\n import pathlib\n-from subprocess import STDOUT, check_call\n+from subprocess import STDOUT, check_call, check_output\n \n path = pathlib.Path(__file__).parent.absolute()\n hooks = path / \"hooks\"\n@@ -21,10 +21,27 @@\n stderr=STDOUT,\n )\n \n-check_call(\n+out = check_output(\n [\n path / \"dist\" / \"dvc\" / \"dvc\",\n \"doctor\",\n ],\n stderr=STDOUT,\n-)\n+).decode()\n+\n+remotes = [\n+ \"s3\",\n+ \"oss\",\n+ \"gdrive\",\n+ \"gs\",\n+ \"hdfs\",\n+ \"http\",\n+ \"webhdfs\",\n+ \"azure\",\n+ \"ssh\",\n+ \"webdav\",\n+]\n+\n+print(out)\n+for remote in remotes:\n+ assert f\"\\t{remote}\" in out, f\"Missing support for {remote}\"\ndiff --git a/scripts/pyinstaller/hooks/hook-dvc.py b/scripts/pyinstaller/hooks/hook-dvc.py\n--- a/scripts/pyinstaller/hooks/hook-dvc.py\n+++ b/scripts/pyinstaller/hooks/hook-dvc.py\n@@ -16,5 +16,15 @@\n datas += copy_metadata(\"aiohttp\")\n datas += copy_metadata(\"aiohttp_retry\")\n \n-# https://github.com/pypa/setuptools/issues/1963\n-hiddenimports = [\"pkg_resources.py2_warn\"]\n+hiddenimports = [\n+ \"dvc_azure\",\n+ \"dvc_gdrive\",\n+ \"dvc_gs\",\n+ \"dvc_hdfs\",\n+ \"dvc_oss\",\n+ \"dvc_s3\",\n+ \"dvc_webdav\",\n+ \"dvc_webhdfs\",\n+ # https://github.com/pypa/setuptools/issues/1963\n+ \"pkg_resources.py2_warn\",\n+]\n", "issue": "installing from Ubuntu repo does not install s3 adapter\n# Bug Report\r\n\r\nDVC version 2.6.3\r\n\r\n\r\n\r\nDVC version 2.21.1\r\n\r\n\r\n\n", "before_files": [{"content": "import os\nimport pathlib\nfrom subprocess import STDOUT, check_call\n\npath = pathlib.Path(__file__).parent.absolute()\nhooks = path / \"hooks\"\ndvc = path.parent.parent / \"dvc\"\nentry = dvc / \"__main__.py\"\n\ncheck_call(\n [\n \"pyinstaller\",\n \"--additional-hooks-dir\",\n os.fspath(hooks),\n \"--name\",\n \"dvc\",\n \"-y\",\n os.fspath(entry),\n ],\n cwd=path,\n stderr=STDOUT,\n)\n\ncheck_call(\n [\n path / \"dist\" / \"dvc\" / \"dvc\",\n \"doctor\",\n ],\n stderr=STDOUT,\n)\n", "path": "scripts/pyinstaller/build.py"}, {"content": "from PyInstaller.utils.hooks import ( # pylint:disable=import-error\n copy_metadata,\n)\n\n# needed for `dvc doctor` to show dep versions\ndatas = copy_metadata(\"adlfs\", recursive=True)\ndatas += copy_metadata(\"knack\")\ndatas += copy_metadata(\"gcsfs\")\ndatas += copy_metadata(\"pyarrow\")\ndatas += copy_metadata(\"pydrive2\")\ndatas += copy_metadata(\"s3fs\", recursive=True)\ndatas += copy_metadata(\"boto3\")\ndatas += copy_metadata(\"ossfs\")\ndatas += copy_metadata(\"sshfs\")\ndatas += copy_metadata(\"webdav4\")\ndatas += copy_metadata(\"aiohttp\")\ndatas += copy_metadata(\"aiohttp_retry\")\n\n# https://github.com/pypa/setuptools/issues/1963\nhiddenimports = [\"pkg_resources.py2_warn\"]\n", "path": "scripts/pyinstaller/hooks/hook-dvc.py"}]} | 1,118 | 456 |
gh_patches_debug_5606 | rasdani/github-patches | git_diff | ansible__ansible-lint-477 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False positive EANSIBLE0014 also flags vars in shell task
# Issue Type
- Bug report
# Ansible and Ansible Lint details
```
ansible --version
ansible 2.3.0.0
ansible-lint --version
ansible-lint 3.4.13
```
- ansible installation method: pip
- ansible-lint installation method: pip
# Desired Behaviour
EANSIBLE0014 should validate only command task, not shell.
# Actual Behaviour (Bug report only)
When ansible-lint validating playbook with shell tasks with env vars
```
- hosts: "localhost"
gather_facts: no
become: no
tasks:
- shell: 'MYVAR="$(date)" env | grep MY'
```
it fails and complains that env vars shouldn't be part of the command
```
test-play.yaml:5: [EANSIBLE0014] Environment variables don't work as part of command
```
</issue>
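The desired behaviour above — and the golden diff further down — amounts to letting the rule match only the `command` module, since `shell` does support `VAR=value cmd` prefixes. A minimal sketch of such a rule; the class name is illustrative, everything else mirrors the file below:

```python
# Sketch only: same rule as below, but shell tasks are no longer matched.
from ansiblelint import AnsibleLintRule
from ansiblelint.utils import LINE_NUMBER_KEY, FILENAME_KEY, get_first_cmd_arg


class EnvVarsInCommandOnlyRule(AnsibleLintRule):
    id = '304'
    shortdesc = "Environment variables don't work as part of command"
    severity = 'VERY_HIGH'
    tags = ['command-shell', 'bug']

    expected_args = ['chdir', 'creates', 'executable', 'removes', 'stdin', 'warn',
                     'cmd', '__ansible_module__', '__ansible_arguments__',
                     LINE_NUMBER_KEY, FILENAME_KEY]

    def matchtask(self, file, task):
        # Only `command` rejects VAR=value prefixes; `shell` handles them fine.
        if task["action"]["__ansible_module__"] != 'command':
            return False
        first_cmd_arg = get_first_cmd_arg(task)
        if not first_cmd_arg:
            return False
        if "=" in first_cmd_arg:
            return True
        return any(arg not in self.expected_args for arg in task['action'])
```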
<code>
[start of lib/ansiblelint/rules/EnvVarsInCommandRule.py]
1 # Copyright (c) 2016 Will Thames <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 from ansiblelint import AnsibleLintRule
22 from ansiblelint.utils import LINE_NUMBER_KEY, FILENAME_KEY, get_first_cmd_arg
23
24
25 class EnvVarsInCommandRule(AnsibleLintRule):
26 id = '304'
27 shortdesc = "Environment variables don't work as part of command"
28 description = (
29 'Environment variables should be passed to ``shell`` or ``command`` '
30 'through environment argument'
31 )
32 severity = 'VERY_HIGH'
33 tags = ['command-shell', 'bug', 'ANSIBLE0014']
34 version_added = 'historic'
35
36 expected_args = ['chdir', 'creates', 'executable', 'removes', 'stdin', 'warn',
37 'cmd', '__ansible_module__', '__ansible_arguments__',
38 LINE_NUMBER_KEY, FILENAME_KEY]
39
40 def matchtask(self, file, task):
41 if task["action"]["__ansible_module__"] in ['shell', 'command']:
42 first_cmd_arg = get_first_cmd_arg(task)
43 if not first_cmd_arg:
44 return
45
46 return any([arg not in self.expected_args for arg in task['action']] +
47 ["=" in first_cmd_arg])
48
[end of lib/ansiblelint/rules/EnvVarsInCommandRule.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansiblelint/rules/EnvVarsInCommandRule.py b/lib/ansiblelint/rules/EnvVarsInCommandRule.py
--- a/lib/ansiblelint/rules/EnvVarsInCommandRule.py
+++ b/lib/ansiblelint/rules/EnvVarsInCommandRule.py
@@ -38,7 +38,7 @@
LINE_NUMBER_KEY, FILENAME_KEY]
def matchtask(self, file, task):
- if task["action"]["__ansible_module__"] in ['shell', 'command']:
+ if task["action"]["__ansible_module__"] in ['command']:
first_cmd_arg = get_first_cmd_arg(task)
if not first_cmd_arg:
return
| {"golden_diff": "diff --git a/lib/ansiblelint/rules/EnvVarsInCommandRule.py b/lib/ansiblelint/rules/EnvVarsInCommandRule.py\n--- a/lib/ansiblelint/rules/EnvVarsInCommandRule.py\n+++ b/lib/ansiblelint/rules/EnvVarsInCommandRule.py\n@@ -38,7 +38,7 @@\n LINE_NUMBER_KEY, FILENAME_KEY]\n \n def matchtask(self, file, task):\n- if task[\"action\"][\"__ansible_module__\"] in ['shell', 'command']:\n+ if task[\"action\"][\"__ansible_module__\"] in ['command']:\n first_cmd_arg = get_first_cmd_arg(task)\n if not first_cmd_arg:\n return\n", "issue": "False positive EANSIBLE0014 also flags vars in shell task\n# Issue Type\r\n- Bug report\r\n\r\n# Ansible and Ansible Lint details\r\n```\r\nansible --version\r\nansible 2.3.0.0\r\nansible-lint --version\r\nansible-lint 3.4.13\r\n```\r\n\r\n- ansible installation method: pip\r\n- ansible-lint installation method: pip\r\n\r\n# Desired Behaviour\r\n\r\nEANSIBLE0014 should validate only command task, not shell.\r\n\r\n# Actual Behaviour (Bug report only)\r\n\r\nWhen ansible-lint validating playbook with shell tasks with env vars\r\n```\r\n- hosts: \"localhost\"\r\n gather_facts: no\r\n become: no\r\n tasks:\r\n - shell: 'MYVAR=\"$(date)\" env | grep MY'\r\n```\r\nit fails and complains about Env vars shouldn't be in command\r\n```\r\ntest-play.yaml:5: [EANSIBLE0014] Environment variables don't work as part of command\r\n```\r\n\n", "before_files": [{"content": "# Copyright (c) 2016 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nfrom ansiblelint.utils import LINE_NUMBER_KEY, FILENAME_KEY, get_first_cmd_arg\n\n\nclass EnvVarsInCommandRule(AnsibleLintRule):\n id = '304'\n shortdesc = \"Environment variables don't work as part of command\"\n description = (\n 'Environment variables should be passed to ``shell`` or ``command`` '\n 'through environment argument'\n )\n severity = 'VERY_HIGH'\n tags = ['command-shell', 'bug', 'ANSIBLE0014']\n version_added = 'historic'\n\n expected_args = ['chdir', 'creates', 'executable', 'removes', 'stdin', 'warn',\n 'cmd', '__ansible_module__', '__ansible_arguments__',\n LINE_NUMBER_KEY, FILENAME_KEY]\n\n def matchtask(self, file, task):\n if task[\"action\"][\"__ansible_module__\"] in ['shell', 'command']:\n first_cmd_arg = get_first_cmd_arg(task)\n if not first_cmd_arg:\n return\n\n return any([arg not in self.expected_args for arg in task['action']] +\n [\"=\" in first_cmd_arg])\n", "path": "lib/ansiblelint/rules/EnvVarsInCommandRule.py"}]} | 1,334 | 148 |
gh_patches_debug_29604 | rasdani/github-patches | git_diff | sublimelsp__LSP-1310 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[LSP-elm] Documentation popup is no longer visible when "More" link is clicked
* OS and language server - Ubuntu 20.04, LSP-elm, ST 4085
* How you installed LSP - git, latest st-4000-exploration
When clicking the `More` link in the AC popup,
I expect to see a documentation popup.
But I see nothing.
This behavior was introduced by commit 19df9e19afeb0f32064a8b7e3a11ebaa4254f63c.
If I check out the commit before 19df9e19afeb0f32064a8b7e3a11ebaa4254f63c, everything works as expected.
</issue>
<code>
[start of plugin/completion.py]
1 import mdpopups
2 import sublime
3 import sublime_plugin
4 import webbrowser
5 from .core.css import css
6 from .core.logging import debug
7 from .core.edit import parse_text_edit
8 from .core.protocol import Request, InsertTextFormat, Range
9 from .core.registry import LspTextCommand
10 from .core.typing import Any, List, Dict, Optional, Generator, Union
11 from .core.views import FORMAT_STRING, FORMAT_MARKUP_CONTENT, minihtml
12 from .core.views import range_to_region
13
14
15 class LspResolveDocsCommand(LspTextCommand):
16
17 completions = [] # type: List[Dict[str, Any]]
18
19 def run(self, edit: sublime.Edit, index: int, event: Optional[dict] = None) -> None:
20 item = self.completions[index]
21 detail = self.format_documentation(item.get('detail') or "")
22 documentation = self.format_documentation(item.get("documentation") or "")
23 # don't show the detail in the cooperate AC popup if it is already shown in the AC details filed.
24 self.is_detail_shown = bool(detail)
25 if not detail or not documentation:
26 # To make sure that the detail or documentation fields doesn't exist we need to resove the completion item.
27 # If those fields appear after the item is resolved we show them in the popup.
28 self.do_resolve(item)
29 else:
30 minihtml_content = self.get_content(documentation, detail)
31 self.show_popup(minihtml_content)
32
33 def format_documentation(self, content: Union[str, Dict[str, str]]) -> str:
34 return minihtml(self.view, content, allowed_formats=FORMAT_STRING | FORMAT_MARKUP_CONTENT)
35
36 def get_content(self, documentation: str, detail: str) -> str:
37 content = ""
38 if detail and not self.is_detail_shown:
39 content += "<div class='highlight'>{}</div>".format(detail)
40 if documentation:
41 content += "<div>{}</div>".format(documentation)
42 return content
43
44 def show_popup(self, minihtml_content: str) -> None:
45 viewport_width = self.view.viewport_extent()[0]
46 mdpopups.show_popup(
47 self.view,
48 minihtml_content,
49 flags=sublime.COOPERATE_WITH_AUTO_COMPLETE,
50 css=css().popups,
51 wrapper_class=css().popups_classname,
52 max_width=viewport_width,
53 on_navigate=self.on_navigate
54 )
55
56 def on_navigate(self, url: str) -> None:
57 webbrowser.open(url)
58
59 def do_resolve(self, item: dict) -> None:
60 session = self.best_session('completionProvider.resolveProvider')
61 if session:
62 session.send_request(
63 Request.resolveCompletionItem(item),
64 lambda res: self.handle_resolve_response(res))
65
66 def handle_resolve_response(self, item: Optional[dict]) -> None:
67 detail = ""
68 documentation = ""
69 if item:
70 detail = self.format_documentation(item.get('detail') or "")
71 documentation = self.format_documentation(item.get("documentation") or "")
72 if not documentation:
73 documentation = self.format_documentation({"kind": "markdown", "value": "*No documentation available.*"})
74 minihtml_content = self.get_content(documentation, detail)
75 show = self.update_popup if self.view.is_popup_visible() else self.show_popup
76 # NOTE: Update/show popups from the main thread, or else the popup might make the AC widget disappear.
77 sublime.set_timeout(lambda: show(minihtml_content))
78
79 def update_popup(self, minihtml_content: str) -> None:
80 mdpopups.update_popup(
81 self.view,
82 minihtml_content,
83 css=css().popups,
84 wrapper_class=css().popups_classname,
85 )
86
87
88 class LspCompleteCommand(sublime_plugin.TextCommand):
89
90 def epilogue(self, item: Dict[str, Any]) -> None:
91 additional_edits = item.get('additionalTextEdits')
92 if additional_edits:
93 edits = [parse_text_edit(additional_edit) for additional_edit in additional_edits]
94 self.view.run_command("lsp_apply_document_edit", {'changes': edits})
95 command = item.get("command")
96 if command:
97 debug('Running server command "{}" for view {}'.format(command, self.view.id()))
98 self.view.run_command("lsp_execute", {"command_name": command})
99
100
101 class LspCompleteInsertTextCommand(LspCompleteCommand):
102
103 def run(self, edit: sublime.Edit, **item: Any) -> None:
104 insert_text = item.get("insertText") or item["label"]
105 if item.get("insertTextFormat", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:
106 self.view.run_command("insert_snippet", {"contents": insert_text})
107 else:
108 self.view.run_command("insert", {"characters": insert_text})
109 self.epilogue(item)
110
111
112 class LspCompleteTextEditCommand(LspCompleteCommand):
113
114 def run(self, edit: sublime.Edit, **item: Any) -> None:
115 text_edit = item["textEdit"]
116 new_text = text_edit['newText']
117 edit_region = range_to_region(Range.from_lsp(text_edit['range']), self.view)
118 if item.get("insertTextFormat", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:
119 for region in self.translated_regions(edit_region):
120 self.view.erase(edit, region)
121 self.view.run_command("insert_snippet", {"contents": new_text})
122 else:
123 for region in self.translated_regions(edit_region):
124 # NOTE: Cannot do .replace, because ST will select the replacement.
125 self.view.erase(edit, region)
126 self.view.insert(edit, region.a, new_text)
127 self.epilogue(item)
128
129 def translated_regions(self, edit_region: sublime.Region) -> Generator[sublime.Region, None, None]:
130 selection = self.view.sel()
131 primary_cursor_position = selection[0].b
132 for region in reversed(selection):
133 # For each selection region, apply the same removal as for the "primary" region.
134 # To do that, translate, or offset, the LSP edit region into the non-"primary" regions.
135 # The concept of "primary" is our own, and there is no mention of it in the LSP spec.
136 translation = region.b - primary_cursor_position
137 translated_edit_region = sublime.Region(edit_region.a + translation, edit_region.b + translation)
138 yield translated_edit_region
139
140
141 def resolve(completion_list: sublime.CompletionList, items: List[sublime.CompletionItem], flags: int = 0) -> None:
142 # Resolve the promise on the main thread to prevent any sort of data race for _set_target (see sublime_plugin.py).
143 sublime.set_timeout(lambda: completion_list.set_completions(items, flags))
144
[end of plugin/completion.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/completion.py b/plugin/completion.py
--- a/plugin/completion.py
+++ b/plugin/completion.py
@@ -25,10 +25,12 @@
if not detail or not documentation:
# To make sure that the detail or documentation fields doesn't exist we need to resove the completion item.
# If those fields appear after the item is resolved we show them in the popup.
- self.do_resolve(item)
- else:
- minihtml_content = self.get_content(documentation, detail)
- self.show_popup(minihtml_content)
+ session = self.best_session('completionProvider.resolveProvider')
+ if session:
+ session.send_request(Request.resolveCompletionItem(item), self.handle_resolve_response)
+ return
+ minihtml_content = self.get_content(documentation, detail)
+ self.show_popup(minihtml_content)
def format_documentation(self, content: Union[str, Dict[str, str]]) -> str:
return minihtml(self.view, content, allowed_formats=FORMAT_STRING | FORMAT_MARKUP_CONTENT)
@@ -56,13 +58,6 @@
def on_navigate(self, url: str) -> None:
webbrowser.open(url)
- def do_resolve(self, item: dict) -> None:
- session = self.best_session('completionProvider.resolveProvider')
- if session:
- session.send_request(
- Request.resolveCompletionItem(item),
- lambda res: self.handle_resolve_response(res))
-
def handle_resolve_response(self, item: Optional[dict]) -> None:
detail = ""
documentation = ""
| {"golden_diff": "diff --git a/plugin/completion.py b/plugin/completion.py\n--- a/plugin/completion.py\n+++ b/plugin/completion.py\n@@ -25,10 +25,12 @@\n if not detail or not documentation:\n # To make sure that the detail or documentation fields doesn't exist we need to resove the completion item.\n # If those fields appear after the item is resolved we show them in the popup.\n- self.do_resolve(item)\n- else:\n- minihtml_content = self.get_content(documentation, detail)\n- self.show_popup(minihtml_content)\n+ session = self.best_session('completionProvider.resolveProvider')\n+ if session:\n+ session.send_request(Request.resolveCompletionItem(item), self.handle_resolve_response)\n+ return\n+ minihtml_content = self.get_content(documentation, detail)\n+ self.show_popup(minihtml_content)\n \n def format_documentation(self, content: Union[str, Dict[str, str]]) -> str:\n return minihtml(self.view, content, allowed_formats=FORMAT_STRING | FORMAT_MARKUP_CONTENT)\n@@ -56,13 +58,6 @@\n def on_navigate(self, url: str) -> None:\n webbrowser.open(url)\n \n- def do_resolve(self, item: dict) -> None:\n- session = self.best_session('completionProvider.resolveProvider')\n- if session:\n- session.send_request(\n- Request.resolveCompletionItem(item),\n- lambda res: self.handle_resolve_response(res))\n-\n def handle_resolve_response(self, item: Optional[dict]) -> None:\n detail = \"\"\n documentation = \"\"\n", "issue": "[LSP-elm] Documentation popup is no logner visible when \"More\" link is clicked\n* OS and language server - Ubunutu 20.04, LSP-elm, ST 4085 \r\n* How you installed LSP - git, latest st-4000-exploration\r\n\r\nWhen clicking the `More` link in the AC popup,\r\nI expect to see a documentation popup.\r\nBut I see nothing. \r\n\r\nThis commit introduced this behavior 19df9e19afeb0f32064a8b7e3a11ebaa4254f63c\r\nIf I checkout the commit before 19df9e19afeb0f32064a8b7e3a11ebaa4254f63c, everything works as expected.\n", "before_files": [{"content": "import mdpopups\nimport sublime\nimport sublime_plugin\nimport webbrowser\nfrom .core.css import css\nfrom .core.logging import debug\nfrom .core.edit import parse_text_edit\nfrom .core.protocol import Request, InsertTextFormat, Range\nfrom .core.registry import LspTextCommand\nfrom .core.typing import Any, List, Dict, Optional, Generator, Union\nfrom .core.views import FORMAT_STRING, FORMAT_MARKUP_CONTENT, minihtml\nfrom .core.views import range_to_region\n\n\nclass LspResolveDocsCommand(LspTextCommand):\n\n completions = [] # type: List[Dict[str, Any]]\n\n def run(self, edit: sublime.Edit, index: int, event: Optional[dict] = None) -> None:\n item = self.completions[index]\n detail = self.format_documentation(item.get('detail') or \"\")\n documentation = self.format_documentation(item.get(\"documentation\") or \"\")\n # don't show the detail in the cooperate AC popup if it is already shown in the AC details filed.\n self.is_detail_shown = bool(detail)\n if not detail or not documentation:\n # To make sure that the detail or documentation fields doesn't exist we need to resove the completion item.\n # If those fields appear after the item is resolved we show them in the popup.\n self.do_resolve(item)\n else:\n minihtml_content = self.get_content(documentation, detail)\n self.show_popup(minihtml_content)\n\n def format_documentation(self, content: Union[str, Dict[str, str]]) -> str:\n return minihtml(self.view, content, allowed_formats=FORMAT_STRING | FORMAT_MARKUP_CONTENT)\n\n def get_content(self, documentation: str, detail: str) -> str:\n content = 
\"\"\n if detail and not self.is_detail_shown:\n content += \"<div class='highlight'>{}</div>\".format(detail)\n if documentation:\n content += \"<div>{}</div>\".format(documentation)\n return content\n\n def show_popup(self, minihtml_content: str) -> None:\n viewport_width = self.view.viewport_extent()[0]\n mdpopups.show_popup(\n self.view,\n minihtml_content,\n flags=sublime.COOPERATE_WITH_AUTO_COMPLETE,\n css=css().popups,\n wrapper_class=css().popups_classname,\n max_width=viewport_width,\n on_navigate=self.on_navigate\n )\n\n def on_navigate(self, url: str) -> None:\n webbrowser.open(url)\n\n def do_resolve(self, item: dict) -> None:\n session = self.best_session('completionProvider.resolveProvider')\n if session:\n session.send_request(\n Request.resolveCompletionItem(item),\n lambda res: self.handle_resolve_response(res))\n\n def handle_resolve_response(self, item: Optional[dict]) -> None:\n detail = \"\"\n documentation = \"\"\n if item:\n detail = self.format_documentation(item.get('detail') or \"\")\n documentation = self.format_documentation(item.get(\"documentation\") or \"\")\n if not documentation:\n documentation = self.format_documentation({\"kind\": \"markdown\", \"value\": \"*No documentation available.*\"})\n minihtml_content = self.get_content(documentation, detail)\n show = self.update_popup if self.view.is_popup_visible() else self.show_popup\n # NOTE: Update/show popups from the main thread, or else the popup might make the AC widget disappear.\n sublime.set_timeout(lambda: show(minihtml_content))\n\n def update_popup(self, minihtml_content: str) -> None:\n mdpopups.update_popup(\n self.view,\n minihtml_content,\n css=css().popups,\n wrapper_class=css().popups_classname,\n )\n\n\nclass LspCompleteCommand(sublime_plugin.TextCommand):\n\n def epilogue(self, item: Dict[str, Any]) -> None:\n additional_edits = item.get('additionalTextEdits')\n if additional_edits:\n edits = [parse_text_edit(additional_edit) for additional_edit in additional_edits]\n self.view.run_command(\"lsp_apply_document_edit\", {'changes': edits})\n command = item.get(\"command\")\n if command:\n debug('Running server command \"{}\" for view {}'.format(command, self.view.id()))\n self.view.run_command(\"lsp_execute\", {\"command_name\": command})\n\n\nclass LspCompleteInsertTextCommand(LspCompleteCommand):\n\n def run(self, edit: sublime.Edit, **item: Any) -> None:\n insert_text = item.get(\"insertText\") or item[\"label\"]\n if item.get(\"insertTextFormat\", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:\n self.view.run_command(\"insert_snippet\", {\"contents\": insert_text})\n else:\n self.view.run_command(\"insert\", {\"characters\": insert_text})\n self.epilogue(item)\n\n\nclass LspCompleteTextEditCommand(LspCompleteCommand):\n\n def run(self, edit: sublime.Edit, **item: Any) -> None:\n text_edit = item[\"textEdit\"]\n new_text = text_edit['newText']\n edit_region = range_to_region(Range.from_lsp(text_edit['range']), self.view)\n if item.get(\"insertTextFormat\", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:\n for region in self.translated_regions(edit_region):\n self.view.erase(edit, region)\n self.view.run_command(\"insert_snippet\", {\"contents\": new_text})\n else:\n for region in self.translated_regions(edit_region):\n # NOTE: Cannot do .replace, because ST will select the replacement.\n self.view.erase(edit, region)\n self.view.insert(edit, region.a, new_text)\n self.epilogue(item)\n\n def translated_regions(self, edit_region: sublime.Region) -> 
Generator[sublime.Region, None, None]:\n selection = self.view.sel()\n primary_cursor_position = selection[0].b\n for region in reversed(selection):\n # For each selection region, apply the same removal as for the \"primary\" region.\n # To do that, translate, or offset, the LSP edit region into the non-\"primary\" regions.\n # The concept of \"primary\" is our own, and there is no mention of it in the LSP spec.\n translation = region.b - primary_cursor_position\n translated_edit_region = sublime.Region(edit_region.a + translation, edit_region.b + translation)\n yield translated_edit_region\n\n\ndef resolve(completion_list: sublime.CompletionList, items: List[sublime.CompletionItem], flags: int = 0) -> None:\n # Resolve the promise on the main thread to prevent any sort of data race for _set_target (see sublime_plugin.py).\n sublime.set_timeout(lambda: completion_list.set_completions(items, flags))\n", "path": "plugin/completion.py"}]} | 2,482 | 345 |
gh_patches_debug_20627 | rasdani/github-patches | git_diff | ciudadanointeligente__votainteligente-portal-electoral-283 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Candidate has_answered is always false
How do you stop showing the twitter link for candidates that already have all their answers?
How do you change "pídele" to "pedile"?
</issue>
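A minimal sketch of how a `has_answered` flag could be computed, which the template that renders the Twitter call-to-action could then check. It assumes candidator stores answers in a `TakenPosition` model with `person` and `topic` foreign keys — an assumption, since that model is not shown in the file below — and the helper name is illustrative, not the project's actual fix:

```python
# Illustrative sketch only -- not the repository's implementation.
# Assumes candidator.models.TakenPosition(person, topic) holds the answers.
from candidator.models import TakenPosition

from elections.models import Topic


def candidate_has_answered(candidate):
    """True when the candidate has a stored answer for every topic of their election."""
    categories = candidate.election.categories.all()
    topics = Topic.objects.filter(category__in=categories)
    answered = TakenPosition.objects.filter(person=candidate, topic__in=topics)
    return topics.exists() and answered.count() >= topics.count()
```

The "pídele"/"pedile" wording, by contrast, presumably lives in the templates' translatable strings rather than in the models shown here.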
<code>
[start of elections/models.py]
1 # coding=utf-8
2 from django.db import models
3 from autoslug import AutoSlugField
4 from taggit.managers import TaggableManager
5 from django.core.urlresolvers import reverse
6 from popolo.models import Person, Area
7 from django.utils.translation import ugettext_lazy as _
8 from markdown_deux.templatetags.markdown_deux_tags import markdown_allowed
9 from candidator.models import Category, Topic as CanTopic
10 from picklefield.fields import PickledObjectField
11 from django.conf import settings
12 from django.utils.encoding import python_2_unicode_compatible
13 from django.contrib.flatpages.models import FlatPage
14 import copy
15
16
17 class ExtraInfoMixin(models.Model):
18 extra_info = PickledObjectField(default={})
19
20 class Meta:
21 abstract = True
22
23 def __init__(self, *args, **kwargs):
24 super(ExtraInfoMixin, self).__init__(*args, **kwargs)
25 default_extra_info = copy.copy(self.default_extra_info)
26 default_extra_info.update(self.extra_info)
27 self.extra_info = default_extra_info
28
29
30 class Candidate(Person, ExtraInfoMixin):
31 election = models.ForeignKey('Election', related_name='candidates', null=True)
32
33 default_extra_info = settings.DEFAULT_CANDIDATE_EXTRA_INFO
34
35 @property
36 def twitter(self):
37 links = self.contact_details.filter(contact_type="TWITTER")
38 if links:
39 return links.first()
40
41 class Meta:
42 verbose_name = _("Candidato")
43 verbose_name_plural = _("Candidatos")
44
45
46 class CandidateFlatPage(FlatPage):
47 candidate = models.ForeignKey(Candidate, related_name='flatpages')
48
49 class Meta:
50 verbose_name = _(u"Página estáticas por candidato")
51 verbose_name_plural = _(u"Páginas estáticas por candidato")
52
53 def get_absolute_url(self):
54 return reverse('candidate_flatpage', kwargs={'election_slug': self.candidate.election.slug,
55 'slug': self.candidate.id,
56 'url': self.url
57 }
58 )
59
60
61 class PersonalData(models.Model):
62 candidate = models.ForeignKey('Candidate', related_name="personal_datas")
63 label = models.CharField(max_length=512)
64 value = models.CharField(max_length=1024)
65
66
67 class Topic(CanTopic):
68 class Meta:
69 proxy = True
70 verbose_name = _(u"Pregunta")
71 verbose_name_plural = _(u"Preguntas")
72
73 @property
74 def election(self):
75 category = QuestionCategory.objects.get(category_ptr=self.category)
76 return category.election
77
78
79 @python_2_unicode_compatible
80 class QuestionCategory(Category):
81 election = models.ForeignKey('Election', related_name='categories', null=True)
82
83 def __str__(self):
84 return u'<%s> in <%s>' % (self.name, self.election.name)
85
86 class Meta:
87 verbose_name = _(u"Categoría de pregunta")
88 verbose_name_plural = _(u"Categorías de pregunta")
89
90
91 class Election(ExtraInfoMixin, models.Model):
92 name = models.CharField(max_length=255)
93 slug = AutoSlugField(populate_from='name', unique=True)
94 description = models.TextField(blank=True)
95 tags = TaggableManager(blank=True)
96 searchable = models.BooleanField(default=True)
97 highlighted = models.BooleanField(default=False)
98 extra_info_title = models.CharField(max_length=50, blank=True, null=True)
99 extra_info_content = models.TextField(max_length=3000, blank=True, null=True, help_text=_("Puedes usar Markdown. <br/> ")
100 + markdown_allowed())
101 uses_preguntales = models.BooleanField(default=True, help_text=_(u"Esta elección debe usar preguntales?"))
102 uses_ranking = models.BooleanField(default=True, help_text=_(u"Esta elección debe usar ranking"))
103 uses_face_to_face = models.BooleanField(default=True, help_text=_(u"Esta elección debe usar frente a frente"))
104 uses_soul_mate = models.BooleanField(default=True, help_text=_(u"Esta elección debe usar 1/2 naranja"))
105 uses_questionary = models.BooleanField(default=True, help_text=_(u"Esta elección debe usar cuestionario"))
106
107 default_extra_info = settings.DEFAULT_ELECTION_EXTRA_INFO
108 area = models.ForeignKey(Area, null=True, related_name="elections")
109
110 def __unicode__(self):
111 return self.name
112
113 def get_absolute_url(self):
114 return reverse('election_view', kwargs={'slug': self.slug})
115
116 def get_extra_info_url(self):
117 return reverse('election_extra_info', kwargs={'slug': self.slug})
118
119 class Meta:
120 verbose_name = _(u'Mi Elección')
121 verbose_name_plural = _(u'Mis Elecciones')
122
[end of elections/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elections/models.py b/elections/models.py
--- a/elections/models.py
+++ b/elections/models.py
@@ -6,7 +6,7 @@
from popolo.models import Person, Area
from django.utils.translation import ugettext_lazy as _
from markdown_deux.templatetags.markdown_deux_tags import markdown_allowed
-from candidator.models import Category, Topic as CanTopic
+from candidator.models import Category, Topic as CanTopic, TakenPosition
from picklefield.fields import PickledObjectField
from django.conf import settings
from django.utils.encoding import python_2_unicode_compatible
@@ -38,6 +38,11 @@
if links:
return links.first()
+ @property
+ def has_answered(self):
+ are_there_answers = TakenPosition.objects.filter(person=self, position__isnull=False).exists()
+ return are_there_answers
+
class Meta:
verbose_name = _("Candidato")
verbose_name_plural = _("Candidatos")
| {"golden_diff": "diff --git a/elections/models.py b/elections/models.py\n--- a/elections/models.py\n+++ b/elections/models.py\n@@ -6,7 +6,7 @@\n from popolo.models import Person, Area\n from django.utils.translation import ugettext_lazy as _\n from markdown_deux.templatetags.markdown_deux_tags import markdown_allowed\n-from candidator.models import Category, Topic as CanTopic\n+from candidator.models import Category, Topic as CanTopic, TakenPosition\n from picklefield.fields import PickledObjectField\n from django.conf import settings\n from django.utils.encoding import python_2_unicode_compatible\n@@ -38,6 +38,11 @@\n if links:\n return links.first()\n \n+ @property\n+ def has_answered(self):\n+ are_there_answers = TakenPosition.objects.filter(person=self, position__isnull=False).exists()\n+ return are_there_answers\n+\n class Meta:\n verbose_name = _(\"Candidato\")\n verbose_name_plural = _(\"Candidatos\")\n", "issue": "Candidate has_answered siempre en false\n\u00bfC\u00f3mo se hace para que deje de mostrar el enlace a twitter para candidatos que tienen todas las respuestas?\n\u00bfC\u00f3mo se hace para cambiar \"p\u00eddele\" por \"pedile\"?\n\n", "before_files": [{"content": "# coding=utf-8\nfrom django.db import models\nfrom autoslug import AutoSlugField\nfrom taggit.managers import TaggableManager\nfrom django.core.urlresolvers import reverse\nfrom popolo.models import Person, Area\nfrom django.utils.translation import ugettext_lazy as _\nfrom markdown_deux.templatetags.markdown_deux_tags import markdown_allowed\nfrom candidator.models import Category, Topic as CanTopic\nfrom picklefield.fields import PickledObjectField\nfrom django.conf import settings\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.contrib.flatpages.models import FlatPage\nimport copy\n\n\nclass ExtraInfoMixin(models.Model):\n extra_info = PickledObjectField(default={})\n\n class Meta:\n abstract = True\n\n def __init__(self, *args, **kwargs):\n super(ExtraInfoMixin, self).__init__(*args, **kwargs)\n default_extra_info = copy.copy(self.default_extra_info)\n default_extra_info.update(self.extra_info)\n self.extra_info = default_extra_info\n\n\nclass Candidate(Person, ExtraInfoMixin):\n election = models.ForeignKey('Election', related_name='candidates', null=True)\n\n default_extra_info = settings.DEFAULT_CANDIDATE_EXTRA_INFO\n\n @property\n def twitter(self):\n links = self.contact_details.filter(contact_type=\"TWITTER\")\n if links:\n return links.first()\n\n class Meta:\n verbose_name = _(\"Candidato\")\n verbose_name_plural = _(\"Candidatos\")\n\n\nclass CandidateFlatPage(FlatPage):\n candidate = models.ForeignKey(Candidate, related_name='flatpages')\n\n class Meta:\n verbose_name = _(u\"P\u00e1gina est\u00e1ticas por candidato\")\n verbose_name_plural = _(u\"P\u00e1ginas est\u00e1ticas por candidato\")\n\n def get_absolute_url(self):\n return reverse('candidate_flatpage', kwargs={'election_slug': self.candidate.election.slug,\n 'slug': self.candidate.id,\n 'url': self.url\n }\n )\n\n\nclass PersonalData(models.Model):\n candidate = models.ForeignKey('Candidate', related_name=\"personal_datas\")\n label = models.CharField(max_length=512)\n value = models.CharField(max_length=1024)\n\n\nclass Topic(CanTopic):\n class Meta:\n proxy = True\n verbose_name = _(u\"Pregunta\")\n verbose_name_plural = _(u\"Preguntas\")\n\n @property\n def election(self):\n category = QuestionCategory.objects.get(category_ptr=self.category)\n return category.election\n\n\n@python_2_unicode_compatible\nclass 
QuestionCategory(Category):\n election = models.ForeignKey('Election', related_name='categories', null=True)\n\n def __str__(self):\n return u'<%s> in <%s>' % (self.name, self.election.name)\n\n class Meta:\n verbose_name = _(u\"Categor\u00eda de pregunta\")\n verbose_name_plural = _(u\"Categor\u00edas de pregunta\")\n\n\nclass Election(ExtraInfoMixin, models.Model):\n name = models.CharField(max_length=255)\n slug = AutoSlugField(populate_from='name', unique=True)\n description = models.TextField(blank=True)\n tags = TaggableManager(blank=True)\n searchable = models.BooleanField(default=True)\n highlighted = models.BooleanField(default=False)\n extra_info_title = models.CharField(max_length=50, blank=True, null=True)\n extra_info_content = models.TextField(max_length=3000, blank=True, null=True, help_text=_(\"Puedes usar Markdown. <br/> \")\n + markdown_allowed())\n uses_preguntales = models.BooleanField(default=True, help_text=_(u\"Esta elecci\u00f3n debe usar preguntales?\"))\n uses_ranking = models.BooleanField(default=True, help_text=_(u\"Esta elecci\u00f3n debe usar ranking\"))\n uses_face_to_face = models.BooleanField(default=True, help_text=_(u\"Esta elecci\u00f3n debe usar frente a frente\"))\n uses_soul_mate = models.BooleanField(default=True, help_text=_(u\"Esta elecci\u00f3n debe usar 1/2 naranja\"))\n uses_questionary = models.BooleanField(default=True, help_text=_(u\"Esta elecci\u00f3n debe usar cuestionario\"))\n\n default_extra_info = settings.DEFAULT_ELECTION_EXTRA_INFO\n area = models.ForeignKey(Area, null=True, related_name=\"elections\")\n\n def __unicode__(self):\n return self.name\n\n def get_absolute_url(self):\n return reverse('election_view', kwargs={'slug': self.slug})\n\n def get_extra_info_url(self):\n return reverse('election_extra_info', kwargs={'slug': self.slug})\n\n class Meta:\n verbose_name = _(u'Mi Elecci\u00f3n')\n verbose_name_plural = _(u'Mis Elecciones')\n", "path": "elections/models.py"}]} | 1,847 | 221 |
gh_patches_debug_6647 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1647 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[META 555] Add automated span type/subtype checking against shared spec
Spec PR: https://github.com/elastic/apm/pull/443
To start, we would just ensure that all span types/subtypes appear in the spec. In the future we will work on cross-agent alignment.
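As a rough illustration only — the spec file name, its JSON layout, and the `collect_used_subtypes()` helper below are assumptions, not the actual shared spec or agent code — such a check could look like:

```python
# Hypothetical sketch of the automated check described above.
import json

def collect_used_subtypes():
    # A real implementation would walk the instrumentation modules and gather
    # every span_subtype string they pass to capture_span().
    return {"postgresql"}

def test_span_subtypes_appear_in_shared_spec():
    with open("span_types.json") as f:
        allowed = set(json.load(f)["db"])
    assert collect_used_subtypes() <= allowed
```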
</issue>
<code>
[start of elasticapm/instrumentation/packages/asyncio/aiopg.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm.contrib.asyncio.traces import async_capture_span
32 from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule
33 from elasticapm.instrumentation.packages.dbapi2 import extract_signature
34
35
36 class AioPGInstrumentation(AsyncAbstractInstrumentedModule):
37 name = "aiopg"
38
39 instrument_list = [
40 ("aiopg.cursor", "Cursor.execute"),
41 ("aiopg.cursor", "Cursor.callproc"),
42 ("aiopg.connection", "Cursor.execute"),
43 ("aiopg.connection", "Cursor.callproc"),
44 ]
45
46 async def call(self, module, method, wrapped, instance, args, kwargs):
47 if method == "Cursor.execute":
48 query = args[0] if len(args) else kwargs["operation"]
49 query = _bake_sql(instance.raw, query)
50 name = extract_signature(query)
51 context = {"db": {"type": "sql", "statement": query}}
52 action = "query"
53 elif method == "Cursor.callproc":
54 func = args[0] if len(args) else kwargs["procname"]
55 name = func + "()"
56 context = None
57 action = "exec"
58 else:
59 raise AssertionError("call from uninstrumented method")
60 async with async_capture_span(
61 name, leaf=True, span_type="db", span_subtype="postgres", span_action=action, extra=context
62 ):
63 return await wrapped(*args, **kwargs)
64
65
66 def _bake_sql(cursor, sql):
67 # if this is a Composable object, use its `as_string` method
68 # see http://initd.org/psycopg/docs/sql.html
69 if hasattr(sql, "as_string"):
70 return sql.as_string(cursor)
71 return sql
72
[end of elasticapm/instrumentation/packages/asyncio/aiopg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/asyncio/aiopg.py b/elasticapm/instrumentation/packages/asyncio/aiopg.py
--- a/elasticapm/instrumentation/packages/asyncio/aiopg.py
+++ b/elasticapm/instrumentation/packages/asyncio/aiopg.py
@@ -58,7 +58,7 @@
else:
raise AssertionError("call from uninstrumented method")
async with async_capture_span(
- name, leaf=True, span_type="db", span_subtype="postgres", span_action=action, extra=context
+ name, leaf=True, span_type="db", span_subtype="postgresql", span_action=action, extra=context
):
return await wrapped(*args, **kwargs)
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/asyncio/aiopg.py b/elasticapm/instrumentation/packages/asyncio/aiopg.py\n--- a/elasticapm/instrumentation/packages/asyncio/aiopg.py\n+++ b/elasticapm/instrumentation/packages/asyncio/aiopg.py\n@@ -58,7 +58,7 @@\n else:\n raise AssertionError(\"call from uninstrumented method\")\n async with async_capture_span(\n- name, leaf=True, span_type=\"db\", span_subtype=\"postgres\", span_action=action, extra=context\n+ name, leaf=True, span_type=\"db\", span_subtype=\"postgresql\", span_action=action, extra=context\n ):\n return await wrapped(*args, **kwargs)\n", "issue": "[META 555] Add automated span type/subtype checking against shared spec\nSpec PR: https://github.com/elastic/apm/pull/443\r\n\r\nTo start, we would just ensure that all span types/subtypes appear in the spec. In the future we will work on cross-agent alignment.\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm.contrib.asyncio.traces import async_capture_span\nfrom elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\nfrom elasticapm.instrumentation.packages.dbapi2 import extract_signature\n\n\nclass AioPGInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"aiopg\"\n\n instrument_list = [\n (\"aiopg.cursor\", \"Cursor.execute\"),\n (\"aiopg.cursor\", \"Cursor.callproc\"),\n (\"aiopg.connection\", \"Cursor.execute\"),\n (\"aiopg.connection\", \"Cursor.callproc\"),\n ]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n if method == \"Cursor.execute\":\n query = args[0] if len(args) else kwargs[\"operation\"]\n query = _bake_sql(instance.raw, query)\n name = extract_signature(query)\n context = {\"db\": {\"type\": \"sql\", \"statement\": query}}\n action = \"query\"\n elif method == \"Cursor.callproc\":\n func = args[0] if len(args) else kwargs[\"procname\"]\n name = func + \"()\"\n context = None\n action = \"exec\"\n else:\n raise AssertionError(\"call from uninstrumented method\")\n async with async_capture_span(\n name, leaf=True, span_type=\"db\", span_subtype=\"postgres\", span_action=action, extra=context\n ):\n return await wrapped(*args, **kwargs)\n\n\ndef _bake_sql(cursor, sql):\n # if this is a Composable object, use its `as_string` method\n # see http://initd.org/psycopg/docs/sql.html\n if hasattr(sql, \"as_string\"):\n return sql.as_string(cursor)\n return sql\n", "path": "elasticapm/instrumentation/packages/asyncio/aiopg.py"}]} | 1,470 | 171 |
gh_patches_debug_18169 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-490 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ensure cutoff date is updated
Figure out when the cutoff date for ambiguous timestamps needs updating, and either calculate it dynamically or add a unit test that fails when it needs adjusting.
https://github.com/scoutapp/scout_apm_python/blob/cf2246e6ff0dc1b69ffff25e10cd83782895ee27/src/scout_apm/core/web_requests.py#L149-L173
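A dynamic version is straightforward; for example (mirroring the shape of the existing `time.mktime` call — the "ten years back from the current year" policy is only an illustration):

```python
# Sketch: recompute the cutoff from today's date instead of hard-coding 2009,
# so the boundary never silently goes stale.
import datetime as dt
import time

CUTOFF_EPOCH_S = time.mktime((dt.date.today().year - 10, 1, 1, 0, 0, 0, 0, 0, 0))
CUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0
CUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0
CUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0
```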
</issue>
<code>
[start of src/scout_apm/core/web_requests.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import time
5
6 from scout_apm.compat import datetime_to_timestamp, parse_qsl, urlencode
7 from scout_apm.core.config import scout_config
8
9 # Originally derived from:
10 # 1. Rails:
11 # https://github.com/rails/rails/blob/0196551e6039ca864d1eee1e01819fcae12c1dc9/railties/lib/rails/generators/rails/app/templates/config/initializers/filter_parameter_logging.rb.tt # noqa
12 # 2. Sentry server side scrubbing:
13 # https://docs.sentry.io/data-management/sensitive-data/#server-side-scrubbing
14 FILTER_PARAMETERS = frozenset(
15 [
16 "access",
17 "access_token",
18 "api_key",
19 "apikey",
20 "auth",
21 "auth_token",
22 "card[number]",
23 "certificate",
24 "credentials",
25 "crypt",
26 "key",
27 "mysql_pwd",
28 "otp",
29 "passwd",
30 "password",
31 "private",
32 "protected",
33 "salt",
34 "secret",
35 "ssn",
36 "stripetoken",
37 "token",
38 ]
39 )
40
41
42 def create_filtered_path(path, query_params):
43 if scout_config.value("uri_reporting") == "path":
44 return path
45 # Python 2 unicode compatibility: force all keys and values to bytes
46 filtered_params = sorted(
47 (
48 (
49 key.encode("utf-8"),
50 (
51 b"[FILTERED]"
52 if key.lower() in FILTER_PARAMETERS
53 else value.encode("utf-8")
54 ),
55 )
56 for key, value in query_params
57 )
58 )
59 if not filtered_params:
60 return path
61 return path + "?" + urlencode(filtered_params)
62
63
64 def ignore_path(path):
65 ignored_paths = scout_config.value("ignore")
66 for ignored in ignored_paths:
67 if path.startswith(ignored):
68 return True
69 return False
70
71
72 def track_request_queue_time(header_value, tracked_request):
73 if header_value.startswith("t="):
74 header_value = header_value[2:]
75
76 try:
77 first_char = header_value[0]
78 except IndexError:
79 return False
80
81 if not first_char.isdigit(): # filter out negatives, nan, inf, etc.
82 return False
83
84 try:
85 ambiguous_start_timestamp = float(header_value)
86 except ValueError:
87 return False
88
89 start_timestamp_ns = convert_ambiguous_timestamp_to_ns(ambiguous_start_timestamp)
90 if start_timestamp_ns == 0.0:
91 return False
92
93 tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9
94
95 # Ignore if in the future
96 if start_timestamp_ns > tr_start_timestamp_ns:
97 return False
98
99 queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)
100 tracked_request.tag("scout.queue_time_ns", queue_time_ns)
101 return True
102
103
104 def track_amazon_request_queue_time(header_value, tracked_request):
105 items = header_value.split(";")
106 found_item = None
107 for item in items:
108 if found_item is None and item.startswith("Root="):
109 found_item = item
110 elif item.startswith("Self="):
111 found_item = item
112
113 if found_item is None:
114 return False
115
116 pieces = found_item.split("-")
117 if len(pieces) != 3:
118 return False
119
120 timestamp_str = pieces[1]
121
122 try:
123 first_char = timestamp_str[0]
124 except IndexError:
125 return False
126
127 if not first_char.isdigit():
128 return False
129
130 try:
131 start_timestamp_ns = int(timestamp_str) * 1000000000.0
132 except ValueError:
133 return False
134
135 if start_timestamp_ns == 0:
136 return False
137
138 tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9
139
140 # Ignore if in the futuren
141 if start_timestamp_ns > tr_start_timestamp_ns:
142 return False
143
144 queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)
145 tracked_request.tag("scout.queue_time_ns", queue_time_ns)
146 return True
147
148
149 # Cutoff epoch is used for determining ambiguous timestamp boundaries, and is
150 # just over 10 years ago at time of writing
151 CUTOFF_EPOCH_S = time.mktime((2009, 6, 1, 0, 0, 0, 0, 0, 0))
152 CUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0
153 CUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0
154 CUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0
155
156
157 def convert_ambiguous_timestamp_to_ns(timestamp):
158 """
159 Convert an ambiguous float timestamp that could be in nanoseconds,
160 microseconds, milliseconds, or seconds to nanoseconds. Return 0.0 for
161 values in the more than 10 years ago.
162 """
163 if timestamp > CUTOFF_EPOCH_NS:
164 converted_timestamp = timestamp
165 elif timestamp > CUTOFF_EPOCH_US:
166 converted_timestamp = timestamp * 1000.0
167 elif timestamp > CUTOFF_EPOCH_MS:
168 converted_timestamp = timestamp * 1000000.0
169 elif timestamp > CUTOFF_EPOCH_S:
170 converted_timestamp = timestamp * 1000000000.0
171 else:
172 return 0.0
173 return converted_timestamp
174
175
176 def asgi_track_request_data(scope, tracked_request):
177 """
178 Track request data from an ASGI HTTP or Websocket scope.
179 """
180 path = scope.get("root_path", "") + scope["path"]
181 query_params = parse_qsl(scope.get("query_string", b"").decode("utf-8"))
182 tracked_request.tag("path", create_filtered_path(path, query_params))
183 if ignore_path(path):
184 tracked_request.tag("ignore_transaction", True)
185
186 # We only care about the last values of headers so don't care that we use
187 # a plain dict rather than a multi-value dict
188 headers = {k.lower(): v for k, v in scope.get("headers", ())}
189
190 user_ip = (
191 headers.get(b"x-forwarded-for", b"").decode("latin1").split(",")[0]
192 or headers.get(b"client-ip", b"").decode("latin1").split(",")[0]
193 or scope.get("client", ("",))[0]
194 )
195 tracked_request.tag("user_ip", user_ip)
196
197 queue_time = headers.get(b"x-queue-start", b"") or headers.get(
198 b"x-request-start", b""
199 )
200 tracked_queue_time = track_request_queue_time(
201 queue_time.decode("latin1"), tracked_request
202 )
203 if not tracked_queue_time:
204 amazon_queue_time = headers.get(b"x-amzn-trace-id", b"")
205 track_amazon_request_queue_time(
206 amazon_queue_time.decode("latin1"), tracked_request
207 )
208
209
210 def werkzeug_track_request_data(werkzeug_request, tracked_request):
211 """
212 Several integrations use Werkzeug requests, so share the code for
213 extracting common data here.
214 """
215 path = werkzeug_request.path
216 tracked_request.tag(
217 "path", create_filtered_path(path, werkzeug_request.args.items(multi=True))
218 )
219 if ignore_path(path):
220 tracked_request.tag("ignore_transaction", True)
221
222 # Determine a remote IP to associate with the request. The value is
223 # spoofable by the requester so this is not suitable to use in any
224 # security sensitive context.
225 user_ip = (
226 werkzeug_request.headers.get("x-forwarded-for", default="").split(",")[0]
227 or werkzeug_request.headers.get("client-ip", default="").split(",")[0]
228 or werkzeug_request.remote_addr
229 )
230 tracked_request.tag("user_ip", user_ip)
231
232 queue_time = werkzeug_request.headers.get(
233 "x-queue-start", default=""
234 ) or werkzeug_request.headers.get("x-request-start", default="")
235 tracked_queue_time = track_request_queue_time(queue_time, tracked_request)
236 if not tracked_queue_time:
237 amazon_queue_time = werkzeug_request.headers.get("x-amzn-trace-id", default="")
238 track_amazon_request_queue_time(amazon_queue_time, tracked_request)
239
[end of src/scout_apm/core/web_requests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/core/web_requests.py b/src/scout_apm/core/web_requests.py
--- a/src/scout_apm/core/web_requests.py
+++ b/src/scout_apm/core/web_requests.py
@@ -1,6 +1,7 @@
# coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
+import datetime as dt
import time
from scout_apm.compat import datetime_to_timestamp, parse_qsl, urlencode
@@ -146,9 +147,8 @@
return True
-# Cutoff epoch is used for determining ambiguous timestamp boundaries, and is
-# just over 10 years ago at time of writing
-CUTOFF_EPOCH_S = time.mktime((2009, 6, 1, 0, 0, 0, 0, 0, 0))
+# Cutoff epoch is used for determining ambiguous timestamp boundaries
+CUTOFF_EPOCH_S = time.mktime((dt.date.today().year - 10, 1, 1, 0, 0, 0, 0, 0, 0))
CUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0
CUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0
CUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0
| {"golden_diff": "diff --git a/src/scout_apm/core/web_requests.py b/src/scout_apm/core/web_requests.py\n--- a/src/scout_apm/core/web_requests.py\n+++ b/src/scout_apm/core/web_requests.py\n@@ -1,6 +1,7 @@\n # coding=utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+import datetime as dt\n import time\n \n from scout_apm.compat import datetime_to_timestamp, parse_qsl, urlencode\n@@ -146,9 +147,8 @@\n return True\n \n \n-# Cutoff epoch is used for determining ambiguous timestamp boundaries, and is\n-# just over 10 years ago at time of writing\n-CUTOFF_EPOCH_S = time.mktime((2009, 6, 1, 0, 0, 0, 0, 0, 0))\n+# Cutoff epoch is used for determining ambiguous timestamp boundaries\n+CUTOFF_EPOCH_S = time.mktime((dt.date.today().year - 10, 1, 1, 0, 0, 0, 0, 0, 0))\n CUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0\n CUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0\n CUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0\n", "issue": "Ensure cutoff date is updated\nfigure out when the cutoff date for ambiguous timestamps needs updating and either calculate it dynamically or add a unit test that fails when it needs adjusting.\r\n\r\nhttps://github.com/scoutapp/scout_apm_python/blob/cf2246e6ff0dc1b69ffff25e10cd83782895ee27/src/scout_apm/core/web_requests.py#L149-L173\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport time\n\nfrom scout_apm.compat import datetime_to_timestamp, parse_qsl, urlencode\nfrom scout_apm.core.config import scout_config\n\n# Originally derived from:\n# 1. Rails:\n# https://github.com/rails/rails/blob/0196551e6039ca864d1eee1e01819fcae12c1dc9/railties/lib/rails/generators/rails/app/templates/config/initializers/filter_parameter_logging.rb.tt # noqa\n# 2. 
Sentry server side scrubbing:\n# https://docs.sentry.io/data-management/sensitive-data/#server-side-scrubbing\nFILTER_PARAMETERS = frozenset(\n [\n \"access\",\n \"access_token\",\n \"api_key\",\n \"apikey\",\n \"auth\",\n \"auth_token\",\n \"card[number]\",\n \"certificate\",\n \"credentials\",\n \"crypt\",\n \"key\",\n \"mysql_pwd\",\n \"otp\",\n \"passwd\",\n \"password\",\n \"private\",\n \"protected\",\n \"salt\",\n \"secret\",\n \"ssn\",\n \"stripetoken\",\n \"token\",\n ]\n)\n\n\ndef create_filtered_path(path, query_params):\n if scout_config.value(\"uri_reporting\") == \"path\":\n return path\n # Python 2 unicode compatibility: force all keys and values to bytes\n filtered_params = sorted(\n (\n (\n key.encode(\"utf-8\"),\n (\n b\"[FILTERED]\"\n if key.lower() in FILTER_PARAMETERS\n else value.encode(\"utf-8\")\n ),\n )\n for key, value in query_params\n )\n )\n if not filtered_params:\n return path\n return path + \"?\" + urlencode(filtered_params)\n\n\ndef ignore_path(path):\n ignored_paths = scout_config.value(\"ignore\")\n for ignored in ignored_paths:\n if path.startswith(ignored):\n return True\n return False\n\n\ndef track_request_queue_time(header_value, tracked_request):\n if header_value.startswith(\"t=\"):\n header_value = header_value[2:]\n\n try:\n first_char = header_value[0]\n except IndexError:\n return False\n\n if not first_char.isdigit(): # filter out negatives, nan, inf, etc.\n return False\n\n try:\n ambiguous_start_timestamp = float(header_value)\n except ValueError:\n return False\n\n start_timestamp_ns = convert_ambiguous_timestamp_to_ns(ambiguous_start_timestamp)\n if start_timestamp_ns == 0.0:\n return False\n\n tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9\n\n # Ignore if in the future\n if start_timestamp_ns > tr_start_timestamp_ns:\n return False\n\n queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)\n tracked_request.tag(\"scout.queue_time_ns\", queue_time_ns)\n return True\n\n\ndef track_amazon_request_queue_time(header_value, tracked_request):\n items = header_value.split(\";\")\n found_item = None\n for item in items:\n if found_item is None and item.startswith(\"Root=\"):\n found_item = item\n elif item.startswith(\"Self=\"):\n found_item = item\n\n if found_item is None:\n return False\n\n pieces = found_item.split(\"-\")\n if len(pieces) != 3:\n return False\n\n timestamp_str = pieces[1]\n\n try:\n first_char = timestamp_str[0]\n except IndexError:\n return False\n\n if not first_char.isdigit():\n return False\n\n try:\n start_timestamp_ns = int(timestamp_str) * 1000000000.0\n except ValueError:\n return False\n\n if start_timestamp_ns == 0:\n return False\n\n tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9\n\n # Ignore if in the futuren\n if start_timestamp_ns > tr_start_timestamp_ns:\n return False\n\n queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)\n tracked_request.tag(\"scout.queue_time_ns\", queue_time_ns)\n return True\n\n\n# Cutoff epoch is used for determining ambiguous timestamp boundaries, and is\n# just over 10 years ago at time of writing\nCUTOFF_EPOCH_S = time.mktime((2009, 6, 1, 0, 0, 0, 0, 0, 0))\nCUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0\nCUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0\nCUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0\n\n\ndef convert_ambiguous_timestamp_to_ns(timestamp):\n \"\"\"\n Convert an ambiguous float timestamp that could be in nanoseconds,\n microseconds, milliseconds, or seconds to nanoseconds. 
Return 0.0 for\n values in the more than 10 years ago.\n \"\"\"\n if timestamp > CUTOFF_EPOCH_NS:\n converted_timestamp = timestamp\n elif timestamp > CUTOFF_EPOCH_US:\n converted_timestamp = timestamp * 1000.0\n elif timestamp > CUTOFF_EPOCH_MS:\n converted_timestamp = timestamp * 1000000.0\n elif timestamp > CUTOFF_EPOCH_S:\n converted_timestamp = timestamp * 1000000000.0\n else:\n return 0.0\n return converted_timestamp\n\n\ndef asgi_track_request_data(scope, tracked_request):\n \"\"\"\n Track request data from an ASGI HTTP or Websocket scope.\n \"\"\"\n path = scope.get(\"root_path\", \"\") + scope[\"path\"]\n query_params = parse_qsl(scope.get(\"query_string\", b\"\").decode(\"utf-8\"))\n tracked_request.tag(\"path\", create_filtered_path(path, query_params))\n if ignore_path(path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n # We only care about the last values of headers so don't care that we use\n # a plain dict rather than a multi-value dict\n headers = {k.lower(): v for k, v in scope.get(\"headers\", ())}\n\n user_ip = (\n headers.get(b\"x-forwarded-for\", b\"\").decode(\"latin1\").split(\",\")[0]\n or headers.get(b\"client-ip\", b\"\").decode(\"latin1\").split(\",\")[0]\n or scope.get(\"client\", (\"\",))[0]\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = headers.get(b\"x-queue-start\", b\"\") or headers.get(\n b\"x-request-start\", b\"\"\n )\n tracked_queue_time = track_request_queue_time(\n queue_time.decode(\"latin1\"), tracked_request\n )\n if not tracked_queue_time:\n amazon_queue_time = headers.get(b\"x-amzn-trace-id\", b\"\")\n track_amazon_request_queue_time(\n amazon_queue_time.decode(\"latin1\"), tracked_request\n )\n\n\ndef werkzeug_track_request_data(werkzeug_request, tracked_request):\n \"\"\"\n Several integrations use Werkzeug requests, so share the code for\n extracting common data here.\n \"\"\"\n path = werkzeug_request.path\n tracked_request.tag(\n \"path\", create_filtered_path(path, werkzeug_request.args.items(multi=True))\n )\n if ignore_path(path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n # Determine a remote IP to associate with the request. The value is\n # spoofable by the requester so this is not suitable to use in any\n # security sensitive context.\n user_ip = (\n werkzeug_request.headers.get(\"x-forwarded-for\", default=\"\").split(\",\")[0]\n or werkzeug_request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or werkzeug_request.remote_addr\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = werkzeug_request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or werkzeug_request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = werkzeug_request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n", "path": "src/scout_apm/core/web_requests.py"}]} | 3,110 | 326 |
gh_patches_debug_30933 | rasdani/github-patches | git_diff | fossasia__open-event-server-4770 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Modules API gives 404 at localhost
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-server
**Current behavior:**
<!-- Describe how the bug manifests. -->
Currently, neither GET nor PATCH is working in `/v1/modules`
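For context: `ModuleDetail.before_get` in the code below pins `kwargs['id'] = 1`, so a 404 here just means no `Module` row with id 1 has been seeded yet. A minimal seeding sketch, matching the accepted patch at the end of this record:

```python
# Sketch: ensure a default Module row exists during database population.
from app.api.helpers.db import get_or_create
from app.models.module import Module

def create_modules():
    get_or_create(Module, donation_include=False)
```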
**Expected behavior:**
<!-- Describe what the behavior would be without the bug. -->
It should work.
**Steps to reproduce:**
<!-- If you are able to illustrate the bug or feature request with an example, please provide steps to reproduce -->
**Related code:**
```
insert any relevant code here else remove this section
```
**Other information:**
<!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. -->
**System information:**
<!-- Add information about the system you're facing this bug on. If you think this is irrelevant or if it's a UI bug or a feature request, please remove this section -->
```
Your operating system
```
```
output of `python --version`
```
**Wanna work on this issue**
</issue>
<code>
[start of app/api/modules.py]
1 from flask_rest_jsonapi import ResourceDetail
2
3 from app.api.bootstrap import api
4 from app.api.schema.modules import ModuleSchema
5 from app.models import db
6 from app.models.module import Module
7
8
9 class ModuleDetail(ResourceDetail):
10 """
11 module detail by id
12 """
13 def before_get(self, args, kwargs):
14 """
15 before get method to get the resource id for fetching details
16 :param args:
17 :param kwargs:
18 :return:
19 """
20 kwargs['id'] = 1
21
22 decorators = (api.has_permission('is_admin', methods="PATCH", id="1"),)
23 methods = ['GET', 'PATCH']
24 schema = ModuleSchema
25 data_layer = {'session': db.session,
26 'model': Module}
27
[end of app/api/modules.py]
[start of populate_db.py]
1 from app import current_app
2 from app.models import db
3 from app.api.helpers.db import get_or_create # , save_to_db
4
5 # Admin message settings
6 from app.api.helpers.system_mails import MAILS
7 from app.models.message_setting import MessageSettings
8
9 # Event Role-Service Permissions
10 from app.models.role import Role
11 from app.models.service import Service
12 from app.models.permission import Permission
13
14 from app.models.track import Track
15 from app.models.session import Session
16 from app.models.speaker import Speaker
17 from app.models.sponsor import Sponsor
18 from app.models.microlocation import Microlocation
19
20 from app.models.user import ORGANIZER, COORGANIZER, TRACK_ORGANIZER, MODERATOR, ATTENDEE, REGISTRAR
21
22 # Admin Panel Permissions
23 from app.models.panel_permission import PanelPermission
24 from app.models.custom_system_role import CustomSysRole
25
26 from app.models.setting import Setting
27
28 # User Permissions
29 from app.models.user_permission import UserPermission
30 SALES = 'sales'
31
32
33 def create_roles():
34 get_or_create(Role, name=ORGANIZER, title_name='Organizer')
35 get_or_create(Role, name=COORGANIZER, title_name='Co-organizer')
36 get_or_create(Role, name=TRACK_ORGANIZER, title_name='Track Organizer')
37 get_or_create(Role, name=MODERATOR, title_name='Moderator')
38 get_or_create(Role, name=ATTENDEE, title_name='Attendee')
39 get_or_create(Role, name=REGISTRAR, title_name='Registrar')
40
41
42 def create_services():
43 track = Track.get_service_name()
44 session = Session.get_service_name()
45 speaker = Speaker.get_service_name()
46 sponsor = Sponsor.get_service_name()
47 microlocation = Microlocation.get_service_name()
48
49 get_or_create(Service, name=track)
50 get_or_create(Service, name=session)
51 get_or_create(Service, name=speaker)
52 get_or_create(Service, name=sponsor)
53 get_or_create(Service, name=microlocation)
54
55
56 def create_settings():
57 get_or_create(Setting, app_name='Open Event')
58
59
60 def create_permissions():
61 orgr = Role.query.get(1)
62 coorgr = Role.query.get(2)
63 track_orgr = Role.query.get(3)
64 mod = Role.query.get(4)
65
66 track = Service.query.get(1)
67 session = Service.query.get(2)
68 speaker = Service.query.get(3)
69 sponsor = Service.query.get(4)
70 microlocation = Service.query.get(5)
71
72 # For ORGANIZER
73 # All four permissions set to True
74 get_or_create(Permission, role=orgr, service=track)
75 get_or_create(Permission, role=orgr, service=session)
76 get_or_create(Permission, role=orgr, service=speaker)
77 get_or_create(Permission, role=orgr, service=sponsor)
78 get_or_create(Permission, role=orgr, service=microlocation)
79
80 # For COORGANIZER
81 perm, _ = get_or_create(Permission, role=coorgr, service=track)
82 perm.can_create, perm.can_delete = False, False
83 db.session.add(perm)
84
85 perm, _ = get_or_create(Permission, role=coorgr, service=session)
86 perm.can_create, perm.can_delete = False, False
87 db.session.add(perm)
88
89 perm, _ = get_or_create(Permission, role=coorgr, service=speaker)
90 perm.can_create, perm.can_delete = False, False
91 db.session.add(perm)
92
93 perm, _ = get_or_create(Permission, role=coorgr, service=sponsor)
94 perm.can_create, perm.can_delete = False, False
95 db.session.add(perm)
96
97 perm, _ = get_or_create(Permission, role=coorgr, service=microlocation)
98 perm.can_create, perm.can_delete = False, False
99 db.session.add(perm)
100
101 # For TRACK_ORGANIZER
102 perm, _ = get_or_create(Permission, role=track_orgr, service=track)
103 db.session.add(perm)
104
105 # For MODERATOR
106 perm, _ = get_or_create(Permission, role=mod, service=track)
107 perm.can_create, perm.can_update, perm.can_delete = False, False, False
108 db.session.add(perm)
109
110
111 def create_custom_sys_roles():
112 role, _ = get_or_create(CustomSysRole, name='Sales Admin')
113 db.session.add(role)
114 role, _ = get_or_create(CustomSysRole, name='Marketer')
115 db.session.add(role)
116
117
118 def create_panel_permissions():
119 sales_admin = CustomSysRole.query.filter_by(name='Sales Admin').first()
120 perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=sales_admin)
121 db.session.add(perm)
122 marketer = CustomSysRole.query.filter_by(name='Marketer').first()
123 perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=marketer)
124 db.session.add(perm)
125
126
127 def create_user_permissions():
128 # Publish Event
129 user_perm, _ = get_or_create(UserPermission, name='publish_event',
130 description='Publish event (make event live)')
131 user_perm.verified_user = True
132 db.session.add(user_perm)
133
134 # Create Event
135 user_perm, _ = get_or_create(UserPermission, name='create_event',
136 description='Create event')
137 user_perm.verified_user, user_perm.unverified_user = True, True
138 db.session.add(user_perm)
139
140
141 def create_admin_message_settings():
142 default_mails = ["Next Event",
143 "Session Schedule Change",
144 "User email",
145 "Invitation For Papers",
146 "After Event",
147 "Ticket(s) Purchased",
148 "Session Accept or Reject",
149 "Event Published",
150 "Event Export Failed",
151 "Event Exported",
152 "Event Role Invitation",
153 "New Session Proposal"]
154 for mail in MAILS:
155 if mail in default_mails:
156 get_or_create(MessageSettings, action=mail, mail_status=1, notification_status=1, user_control_status=1)
157 else:
158 get_or_create(MessageSettings, action=mail, mail_status=0, notification_status=0, user_control_status=0)
159
160
161 def populate():
162 """
163 Create defined Roles, Services and Permissions.
164 """
165 print('Creating roles...')
166 create_roles()
167 print('Creating services...')
168 create_services()
169 print('Creating permissions...')
170 create_permissions()
171 print('Creating custom system roles...')
172 create_custom_sys_roles()
173 print('Creating admin panel permissions...')
174 create_panel_permissions()
175 print('Creating user permissions...')
176 create_user_permissions()
177 print('Creating settings...')
178 create_settings()
179 print('Creating admin message settings...')
180 create_admin_message_settings()
181
182
183 def populate_without_print():
184 """
185 Create defined Roles, Services and Permissions.
186 """
187 create_roles()
188 create_services()
189 create_permissions()
190 create_custom_sys_roles()
191 create_panel_permissions()
192 create_user_permissions()
193 create_admin_message_settings()
194
195 db.session.commit()
196
197
198 if __name__ == '__main__':
199 with current_app.app_context():
200 populate()
201
[end of populate_db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/modules.py b/app/api/modules.py
--- a/app/api/modules.py
+++ b/app/api/modules.py
@@ -10,6 +10,7 @@
"""
module detail by id
"""
+
def before_get(self, args, kwargs):
"""
before get method to get the resource id for fetching details
@@ -19,7 +20,7 @@
"""
kwargs['id'] = 1
- decorators = (api.has_permission('is_admin', methods="PATCH", id="1"),)
+ decorators = (api.has_permission('is_admin', methods='PATCH', id='1'),)
methods = ['GET', 'PATCH']
schema = ModuleSchema
data_layer = {'session': db.session,
diff --git a/populate_db.py b/populate_db.py
--- a/populate_db.py
+++ b/populate_db.py
@@ -24,6 +24,7 @@
from app.models.custom_system_role import CustomSysRole
from app.models.setting import Setting
+from app.models.module import Module
# User Permissions
from app.models.user_permission import UserPermission
@@ -57,6 +58,10 @@
get_or_create(Setting, app_name='Open Event')
+def create_modules():
+ get_or_create(Module, donation_include=False)
+
+
def create_permissions():
orgr = Role.query.get(1)
coorgr = Role.query.get(2)
@@ -176,6 +181,8 @@
create_user_permissions()
print('Creating settings...')
create_settings()
+ print('Creating modules...')
+ create_modules()
print('Creating admin message settings...')
create_admin_message_settings()
| {"golden_diff": "diff --git a/app/api/modules.py b/app/api/modules.py\n--- a/app/api/modules.py\n+++ b/app/api/modules.py\n@@ -10,6 +10,7 @@\n \"\"\"\n module detail by id\n \"\"\"\n+\n def before_get(self, args, kwargs):\n \"\"\"\n before get method to get the resource id for fetching details\n@@ -19,7 +20,7 @@\n \"\"\"\n kwargs['id'] = 1\n \n- decorators = (api.has_permission('is_admin', methods=\"PATCH\", id=\"1\"),)\n+ decorators = (api.has_permission('is_admin', methods='PATCH', id='1'),)\n methods = ['GET', 'PATCH']\n schema = ModuleSchema\n data_layer = {'session': db.session,\ndiff --git a/populate_db.py b/populate_db.py\n--- a/populate_db.py\n+++ b/populate_db.py\n@@ -24,6 +24,7 @@\n from app.models.custom_system_role import CustomSysRole\n \n from app.models.setting import Setting\n+from app.models.module import Module\n \n # User Permissions\n from app.models.user_permission import UserPermission\n@@ -57,6 +58,10 @@\n get_or_create(Setting, app_name='Open Event')\n \n \n+def create_modules():\n+ get_or_create(Module, donation_include=False)\n+\n+\n def create_permissions():\n orgr = Role.query.get(1)\n coorgr = Role.query.get(2)\n@@ -176,6 +181,8 @@\n create_user_permissions()\n print('Creating settings...')\n create_settings()\n+ print('Creating modules...')\n+ create_modules()\n print('Creating admin message settings...')\n create_admin_message_settings()\n", "issue": "Modules API gives 404 at localhost\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-server\r\n\r\n**Current behavior:**\r\n<!-- Describe how the bug manifests. -->\r\n\r\nCurrently, neither GET nor PATCH is working in `/v1/modules`\r\n**Expected behavior:**\r\n<!-- Describe what the behavior would be without the bug. -->\r\n\r\nIt should work.\r\n**Steps to reproduce:**\r\n<!-- If you are able to illustrate the bug or feature request with an example, please provide steps to reproduce -->\r\n\r\n**Related code:**\r\n\r\n```\r\ninsert any relevant code here else remove this section\r\n```\r\n\r\n**Other information:**\r\n<!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. -->\r\n\r\n**System information:** \r\n\r\n<!-- Add information about the system your facing this bug on. 
If you think this is irrelevant or if it's a UI bug or a feature request, please remove this section -->\r\n\r\n```\r\nYour operating system\r\n```\r\n\r\n```\r\noutput of `python --version`\r\n```\r\n**Wanna work on this issue**\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail\n\nfrom app.api.bootstrap import api\nfrom app.api.schema.modules import ModuleSchema\nfrom app.models import db\nfrom app.models.module import Module\n\n\nclass ModuleDetail(ResourceDetail):\n \"\"\"\n module detail by id\n \"\"\"\n def before_get(self, args, kwargs):\n \"\"\"\n before get method to get the resource id for fetching details\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n kwargs['id'] = 1\n\n decorators = (api.has_permission('is_admin', methods=\"PATCH\", id=\"1\"),)\n methods = ['GET', 'PATCH']\n schema = ModuleSchema\n data_layer = {'session': db.session,\n 'model': Module}\n", "path": "app/api/modules.py"}, {"content": "from app import current_app\nfrom app.models import db\nfrom app.api.helpers.db import get_or_create # , save_to_db\n\n# Admin message settings\nfrom app.api.helpers.system_mails import MAILS\nfrom app.models.message_setting import MessageSettings\n\n# Event Role-Service Permissions\nfrom app.models.role import Role\nfrom app.models.service import Service\nfrom app.models.permission import Permission\n\nfrom app.models.track import Track\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.sponsor import Sponsor\nfrom app.models.microlocation import Microlocation\n\nfrom app.models.user import ORGANIZER, COORGANIZER, TRACK_ORGANIZER, MODERATOR, ATTENDEE, REGISTRAR\n\n# Admin Panel Permissions\nfrom app.models.panel_permission import PanelPermission\nfrom app.models.custom_system_role import CustomSysRole\n\nfrom app.models.setting import Setting\n\n# User Permissions\nfrom app.models.user_permission import UserPermission\nSALES = 'sales'\n\n\ndef create_roles():\n get_or_create(Role, name=ORGANIZER, title_name='Organizer')\n get_or_create(Role, name=COORGANIZER, title_name='Co-organizer')\n get_or_create(Role, name=TRACK_ORGANIZER, title_name='Track Organizer')\n get_or_create(Role, name=MODERATOR, title_name='Moderator')\n get_or_create(Role, name=ATTENDEE, title_name='Attendee')\n get_or_create(Role, name=REGISTRAR, title_name='Registrar')\n\n\ndef create_services():\n track = Track.get_service_name()\n session = Session.get_service_name()\n speaker = Speaker.get_service_name()\n sponsor = Sponsor.get_service_name()\n microlocation = Microlocation.get_service_name()\n\n get_or_create(Service, name=track)\n get_or_create(Service, name=session)\n get_or_create(Service, name=speaker)\n get_or_create(Service, name=sponsor)\n get_or_create(Service, name=microlocation)\n\n\ndef create_settings():\n get_or_create(Setting, app_name='Open Event')\n\n\ndef create_permissions():\n orgr = Role.query.get(1)\n coorgr = Role.query.get(2)\n track_orgr = Role.query.get(3)\n mod = Role.query.get(4)\n\n track = Service.query.get(1)\n session = Service.query.get(2)\n speaker = Service.query.get(3)\n sponsor = Service.query.get(4)\n microlocation = Service.query.get(5)\n\n # For ORGANIZER\n # All four permissions set to True\n get_or_create(Permission, role=orgr, service=track)\n get_or_create(Permission, role=orgr, service=session)\n get_or_create(Permission, role=orgr, service=speaker)\n get_or_create(Permission, role=orgr, service=sponsor)\n get_or_create(Permission, role=orgr, service=microlocation)\n\n # For 
COORGANIZER\n perm, _ = get_or_create(Permission, role=coorgr, service=track)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n perm, _ = get_or_create(Permission, role=coorgr, service=session)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n perm, _ = get_or_create(Permission, role=coorgr, service=speaker)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n perm, _ = get_or_create(Permission, role=coorgr, service=sponsor)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n perm, _ = get_or_create(Permission, role=coorgr, service=microlocation)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n # For TRACK_ORGANIZER\n perm, _ = get_or_create(Permission, role=track_orgr, service=track)\n db.session.add(perm)\n\n # For MODERATOR\n perm, _ = get_or_create(Permission, role=mod, service=track)\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n\ndef create_custom_sys_roles():\n role, _ = get_or_create(CustomSysRole, name='Sales Admin')\n db.session.add(role)\n role, _ = get_or_create(CustomSysRole, name='Marketer')\n db.session.add(role)\n\n\ndef create_panel_permissions():\n sales_admin = CustomSysRole.query.filter_by(name='Sales Admin').first()\n perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=sales_admin)\n db.session.add(perm)\n marketer = CustomSysRole.query.filter_by(name='Marketer').first()\n perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=marketer)\n db.session.add(perm)\n\n\ndef create_user_permissions():\n # Publish Event\n user_perm, _ = get_or_create(UserPermission, name='publish_event',\n description='Publish event (make event live)')\n user_perm.verified_user = True\n db.session.add(user_perm)\n\n # Create Event\n user_perm, _ = get_or_create(UserPermission, name='create_event',\n description='Create event')\n user_perm.verified_user, user_perm.unverified_user = True, True\n db.session.add(user_perm)\n\n\ndef create_admin_message_settings():\n default_mails = [\"Next Event\",\n \"Session Schedule Change\",\n \"User email\",\n \"Invitation For Papers\",\n \"After Event\",\n \"Ticket(s) Purchased\",\n \"Session Accept or Reject\",\n \"Event Published\",\n \"Event Export Failed\",\n \"Event Exported\",\n \"Event Role Invitation\",\n \"New Session Proposal\"]\n for mail in MAILS:\n if mail in default_mails:\n get_or_create(MessageSettings, action=mail, mail_status=1, notification_status=1, user_control_status=1)\n else:\n get_or_create(MessageSettings, action=mail, mail_status=0, notification_status=0, user_control_status=0)\n\n\ndef populate():\n \"\"\"\n Create defined Roles, Services and Permissions.\n \"\"\"\n print('Creating roles...')\n create_roles()\n print('Creating services...')\n create_services()\n print('Creating permissions...')\n create_permissions()\n print('Creating custom system roles...')\n create_custom_sys_roles()\n print('Creating admin panel permissions...')\n create_panel_permissions()\n print('Creating user permissions...')\n create_user_permissions()\n print('Creating settings...')\n create_settings()\n print('Creating admin message settings...')\n create_admin_message_settings()\n\n\ndef populate_without_print():\n \"\"\"\n Create defined Roles, Services and Permissions.\n \"\"\"\n create_roles()\n create_services()\n create_permissions()\n create_custom_sys_roles()\n create_panel_permissions()\n create_user_permissions()\n 
create_admin_message_settings()\n\n db.session.commit()\n\n\nif __name__ == '__main__':\n with current_app.app_context():\n populate()\n", "path": "populate_db.py"}]} | 3,077 | 370 |
gh_patches_debug_28632 | rasdani/github-patches | git_diff | Parsl__parsl-1951 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bash wrapper does not close log file, resulting in accumulation of open files
**Describe the bug**
This code in the bash remote wrapper:
```
set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)
```
opens a new log file per app. But it does not ever close that log file, so a worker running many bash apps will accumulate many open files.
This log file should be closed at the end of each bash app execution.
**To Reproduce**
Run two bash apps in one worker with a long delay. Use `lsof` to see which files are open for that worker.
**Expected behavior**
The log file should be closed at the end of bash app execution.
</issue>
<code>
[start of parsl/app/bash.py]
1 from functools import update_wrapper
2 from functools import partial
3 from inspect import signature, Parameter
4
5 from parsl.app.errors import wrap_error
6 from parsl.app.app import AppBase
7 from parsl.dataflow.dflow import DataFlowKernelLoader
8
9
10 def remote_side_bash_executor(func, *args, **kwargs):
11 """Executes the supplied function with *args and **kwargs to get a
12 command-line to run, and then run that command-line using bash.
13 """
14 import os
15 import time
16 import subprocess
17 import logging
18 import parsl.app.errors as pe
19 from parsl import set_file_logger
20 from parsl.utils import get_std_fname_mode
21
22 logbase = "/tmp"
23 format_string = "%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s"
24
25 # make this name unique per invocation so that each invocation can
26 # log to its own file. It would be better to include the task_id here
27 # but that is awkward to wire through at the moment as apps do not
28 # have access to that execution context.
29 t = time.time()
30
31 logname = __name__ + "." + str(t)
32 logger = logging.getLogger(logname)
33 set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)
34
35 func_name = func.__name__
36
37 executable = None
38
39 # Try to run the func to compose the commandline
40 try:
41 # Execute the func to get the commandline
42 executable = func(*args, **kwargs)
43
44 if not isinstance(executable, str):
45 raise ValueError(f"Expected a str for bash_app commandline, got {type(executable)}")
46
47 except AttributeError as e:
48 if executable is not None:
49 raise pe.AppBadFormatting("App formatting failed for app '{}' with AttributeError: {}".format(func_name, e))
50 else:
51 raise pe.BashAppNoReturn("Bash app '{}' did not return a value, or returned None - with this exception: {}".format(func_name, e))
52
53 except IndexError as e:
54 raise pe.AppBadFormatting("App formatting failed for app '{}' with IndexError: {}".format(func_name, e))
55 except Exception as e:
56 logger.error("Caught exception during formatting of app '{}': {}".format(func_name, e))
57 raise e
58
59 logger.debug("Executable: %s", executable)
60
61 # Updating stdout, stderr if values passed at call time.
62
63 def open_std_fd(fdname):
64 # fdname is 'stdout' or 'stderr'
65 stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)
66 if stdfspec is None:
67 return None
68
69 fname, mode = get_std_fname_mode(fdname, stdfspec)
70 try:
71 if os.path.dirname(fname):
72 os.makedirs(os.path.dirname(fname), exist_ok=True)
73 fd = open(fname, mode)
74 except Exception as e:
75 raise pe.BadStdStreamFile(fname, e)
76 return fd
77
78 std_out = open_std_fd('stdout')
79 std_err = open_std_fd('stderr')
80 timeout = kwargs.get('walltime')
81
82 if std_err is not None:
83 print('--> executable follows <--\n{}\n--> end executable <--'.format(executable), file=std_err, flush=True)
84
85 returncode = None
86 try:
87 proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')
88 proc.wait(timeout=timeout)
89 returncode = proc.returncode
90
91 except subprocess.TimeoutExpired:
92 raise pe.AppTimeout("[{}] App exceeded walltime: {}".format(func_name, timeout))
93
94 except Exception as e:
95 raise pe.AppException("[{}] App caught exception with returncode: {}".format(func_name, returncode), e)
96
97 if returncode != 0:
98 raise pe.BashExitFailure(func_name, proc.returncode)
99
100 # TODO : Add support for globs here
101
102 missing = []
103 for outputfile in kwargs.get('outputs', []):
104 fpath = outputfile.filepath
105
106 if not os.path.exists(fpath):
107 missing.extend([outputfile])
108
109 if missing:
110 raise pe.MissingOutputs("[{}] Missing outputs".format(func_name), missing)
111
112 return returncode
113
114
115 class BashApp(AppBase):
116
117 def __init__(self, func, data_flow_kernel=None, cache=False, executors='all', ignore_for_cache=None):
118 super().__init__(func, data_flow_kernel=data_flow_kernel, executors=executors, cache=cache, ignore_for_cache=ignore_for_cache)
119 self.kwargs = {}
120
121 # We duplicate the extraction of parameter defaults
122 # to self.kwargs to ensure availability at point of
123 # command string format. Refer: #349
124 sig = signature(func)
125
126 for s in sig.parameters:
127 if sig.parameters[s].default is not Parameter.empty:
128 self.kwargs[s] = sig.parameters[s].default
129
130 # update_wrapper allows remote_side_bash_executor to masquerade as self.func
131 # partial is used to attach the first arg the "func" to the remote_side_bash_executor
132 # this is done to avoid passing a function type in the args which parsl.serializer
133 # doesn't support
134 remote_fn = partial(update_wrapper(remote_side_bash_executor, self.func), self.func)
135 remote_fn.__name__ = self.func.__name__
136 self.wrapped_remote_function = wrap_error(remote_fn)
137
138 def __call__(self, *args, **kwargs):
139 """Handle the call to a Bash app.
140
141 Args:
142 - Arbitrary
143
144 Kwargs:
145 - Arbitrary
146
147 Returns:
148 App_fut
149
150 """
151 invocation_kwargs = {}
152 invocation_kwargs.update(self.kwargs)
153 invocation_kwargs.update(kwargs)
154
155 if self.data_flow_kernel is None:
156 dfk = DataFlowKernelLoader.dfk()
157 else:
158 dfk = self.data_flow_kernel
159
160 app_fut = dfk.submit(self.wrapped_remote_function,
161 app_args=args,
162 executors=self.executors,
163 cache=self.cache,
164 ignore_for_cache=self.ignore_for_cache,
165 app_kwargs=invocation_kwargs)
166
167 return app_fut
168
[end of parsl/app/bash.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsl/app/bash.py b/parsl/app/bash.py
--- a/parsl/app/bash.py
+++ b/parsl/app/bash.py
@@ -12,26 +12,10 @@
command-line to run, and then run that command-line using bash.
"""
import os
- import time
import subprocess
- import logging
import parsl.app.errors as pe
- from parsl import set_file_logger
from parsl.utils import get_std_fname_mode
- logbase = "/tmp"
- format_string = "%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s"
-
- # make this name unique per invocation so that each invocation can
- # log to its own file. It would be better to include the task_id here
- # but that is awkward to wire through at the moment as apps do not
- # have access to that execution context.
- t = time.time()
-
- logname = __name__ + "." + str(t)
- logger = logging.getLogger(logname)
- set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)
-
func_name = func.__name__
executable = None
@@ -53,11 +37,8 @@
except IndexError as e:
raise pe.AppBadFormatting("App formatting failed for app '{}' with IndexError: {}".format(func_name, e))
except Exception as e:
- logger.error("Caught exception during formatting of app '{}': {}".format(func_name, e))
raise e
- logger.debug("Executable: %s", executable)
-
# Updating stdout, stderr if values passed at call time.
def open_std_fd(fdname):
| {"golden_diff": "diff --git a/parsl/app/bash.py b/parsl/app/bash.py\n--- a/parsl/app/bash.py\n+++ b/parsl/app/bash.py\n@@ -12,26 +12,10 @@\n command-line to run, and then run that command-line using bash.\n \"\"\"\n import os\n- import time\n import subprocess\n- import logging\n import parsl.app.errors as pe\n- from parsl import set_file_logger\n from parsl.utils import get_std_fname_mode\n \n- logbase = \"/tmp\"\n- format_string = \"%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s\"\n-\n- # make this name unique per invocation so that each invocation can\n- # log to its own file. It would be better to include the task_id here\n- # but that is awkward to wire through at the moment as apps do not\n- # have access to that execution context.\n- t = time.time()\n-\n- logname = __name__ + \".\" + str(t)\n- logger = logging.getLogger(logname)\n- set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)\n-\n func_name = func.__name__\n \n executable = None\n@@ -53,11 +37,8 @@\n except IndexError as e:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with IndexError: {}\".format(func_name, e))\n except Exception as e:\n- logger.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n raise e\n \n- logger.debug(\"Executable: %s\", executable)\n-\n # Updating stdout, stderr if values passed at call time.\n \n def open_std_fd(fdname):\n", "issue": "bash wrapper does not close log file, resulting in accumulation of open files\n**Describe the bug**\r\nThis code in the bash remote wrapper:\r\n```\r\n set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)\r\n```\r\nopens a new log file per app. But it does not ever close that log file, so a worker running many bash apps will accumulate many open files.\r\n\r\nThis log file should be closed at the end of each bash app execution.\r\n\r\n**To Reproduce**\r\nRun two bash apps in one worker with a long delay. use `lsof` to see which files are open for that worker.\r\n\r\n**Expected behavior**\r\nlog file should be closed at end of bash app execution\r\n\n", "before_files": [{"content": "from functools import update_wrapper\nfrom functools import partial\nfrom inspect import signature, Parameter\n\nfrom parsl.app.errors import wrap_error\nfrom parsl.app.app import AppBase\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\n\ndef remote_side_bash_executor(func, *args, **kwargs):\n \"\"\"Executes the supplied function with *args and **kwargs to get a\n command-line to run, and then run that command-line using bash.\n \"\"\"\n import os\n import time\n import subprocess\n import logging\n import parsl.app.errors as pe\n from parsl import set_file_logger\n from parsl.utils import get_std_fname_mode\n\n logbase = \"/tmp\"\n format_string = \"%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s\"\n\n # make this name unique per invocation so that each invocation can\n # log to its own file. 
It would be better to include the task_id here\n # but that is awkward to wire through at the moment as apps do not\n # have access to that execution context.\n t = time.time()\n\n logname = __name__ + \".\" + str(t)\n logger = logging.getLogger(logname)\n set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)\n\n func_name = func.__name__\n\n executable = None\n\n # Try to run the func to compose the commandline\n try:\n # Execute the func to get the commandline\n executable = func(*args, **kwargs)\n\n if not isinstance(executable, str):\n raise ValueError(f\"Expected a str for bash_app commandline, got {type(executable)}\")\n\n except AttributeError as e:\n if executable is not None:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with AttributeError: {}\".format(func_name, e))\n else:\n raise pe.BashAppNoReturn(\"Bash app '{}' did not return a value, or returned None - with this exception: {}\".format(func_name, e))\n\n except IndexError as e:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with IndexError: {}\".format(func_name, e))\n except Exception as e:\n logger.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n raise e\n\n logger.debug(\"Executable: %s\", executable)\n\n # Updating stdout, stderr if values passed at call time.\n\n def open_std_fd(fdname):\n # fdname is 'stdout' or 'stderr'\n stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)\n if stdfspec is None:\n return None\n\n fname, mode = get_std_fname_mode(fdname, stdfspec)\n try:\n if os.path.dirname(fname):\n os.makedirs(os.path.dirname(fname), exist_ok=True)\n fd = open(fname, mode)\n except Exception as e:\n raise pe.BadStdStreamFile(fname, e)\n return fd\n\n std_out = open_std_fd('stdout')\n std_err = open_std_fd('stderr')\n timeout = kwargs.get('walltime')\n\n if std_err is not None:\n print('--> executable follows <--\\n{}\\n--> end executable <--'.format(executable), file=std_err, flush=True)\n\n returncode = None\n try:\n proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')\n proc.wait(timeout=timeout)\n returncode = proc.returncode\n\n except subprocess.TimeoutExpired:\n raise pe.AppTimeout(\"[{}] App exceeded walltime: {}\".format(func_name, timeout))\n\n except Exception as e:\n raise pe.AppException(\"[{}] App caught exception with returncode: {}\".format(func_name, returncode), e)\n\n if returncode != 0:\n raise pe.BashExitFailure(func_name, proc.returncode)\n\n # TODO : Add support for globs here\n\n missing = []\n for outputfile in kwargs.get('outputs', []):\n fpath = outputfile.filepath\n\n if not os.path.exists(fpath):\n missing.extend([outputfile])\n\n if missing:\n raise pe.MissingOutputs(\"[{}] Missing outputs\".format(func_name), missing)\n\n return returncode\n\n\nclass BashApp(AppBase):\n\n def __init__(self, func, data_flow_kernel=None, cache=False, executors='all', ignore_for_cache=None):\n super().__init__(func, data_flow_kernel=data_flow_kernel, executors=executors, cache=cache, ignore_for_cache=ignore_for_cache)\n self.kwargs = {}\n\n # We duplicate the extraction of parameter defaults\n # to self.kwargs to ensure availability at point of\n # command string format. 
Refer: #349\n sig = signature(func)\n\n for s in sig.parameters:\n if sig.parameters[s].default is not Parameter.empty:\n self.kwargs[s] = sig.parameters[s].default\n\n # update_wrapper allows remote_side_bash_executor to masquerade as self.func\n # partial is used to attach the first arg the \"func\" to the remote_side_bash_executor\n # this is done to avoid passing a function type in the args which parsl.serializer\n # doesn't support\n remote_fn = partial(update_wrapper(remote_side_bash_executor, self.func), self.func)\n remote_fn.__name__ = self.func.__name__\n self.wrapped_remote_function = wrap_error(remote_fn)\n\n def __call__(self, *args, **kwargs):\n \"\"\"Handle the call to a Bash app.\n\n Args:\n - Arbitrary\n\n Kwargs:\n - Arbitrary\n\n Returns:\n App_fut\n\n \"\"\"\n invocation_kwargs = {}\n invocation_kwargs.update(self.kwargs)\n invocation_kwargs.update(kwargs)\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n app_fut = dfk.submit(self.wrapped_remote_function,\n app_args=args,\n executors=self.executors,\n cache=self.cache,\n ignore_for_cache=self.ignore_for_cache,\n app_kwargs=invocation_kwargs)\n\n return app_fut\n", "path": "parsl/app/bash.py"}]} | 2,465 | 409 |
gh_patches_debug_34381 | rasdani/github-patches | git_diff | facebookresearch__hydra-1560 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[callbacks] call on_*_end events in reverse order
</issue>
<code>
[start of hydra/core/callbacks.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import warnings
3 from typing import Any
4
5 from omegaconf import DictConfig
6
7 from hydra.core.utils import JobReturn
8 from hydra.utils import instantiate
9
10
11 class Callbacks:
12 def __init__(self, config: DictConfig) -> None:
13 self.callbacks = []
14 for params in config.hydra.callbacks.values():
15 self.callbacks.append(instantiate(params))
16
17 def _notify(self, function_name: str, **kwargs: Any) -> None:
18 for c in self.callbacks:
19 try:
20 getattr(c, function_name)(**kwargs)
21 except Exception as e:
22 warnings.warn(
23 f"Callback {type(c).__name__}.{function_name} raised {type(e).__name__}: {e}"
24 )
25
26 def on_run_start(self, config: DictConfig, **kwargs: Any) -> None:
27 self._notify(function_name="on_run_start", config=config, **kwargs)
28
29 def on_run_end(self, config: DictConfig, **kwargs: Any) -> None:
30 self._notify(function_name="on_run_end", config=config, **kwargs)
31
32 def on_multirun_start(self, config: DictConfig, **kwargs: Any) -> None:
33 self._notify(function_name="on_multirun_start", config=config, **kwargs)
34
35 def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:
36 self._notify(function_name="on_multirun_end", config=config, **kwargs)
37
38 def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:
39 self._notify(function_name="on_job_start", config=config, **kwargs)
40
41 def on_job_end(
42 self, config: DictConfig, job_return: JobReturn, **kwargs: Any
43 ) -> None:
44 self._notify(
45 function_name="on_job_end", config=config, job_return=job_return, **kwargs
46 )
47
[end of hydra/core/callbacks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hydra/core/callbacks.py b/hydra/core/callbacks.py
--- a/hydra/core/callbacks.py
+++ b/hydra/core/callbacks.py
@@ -14,8 +14,9 @@
for params in config.hydra.callbacks.values():
self.callbacks.append(instantiate(params))
- def _notify(self, function_name: str, **kwargs: Any) -> None:
- for c in self.callbacks:
+ def _notify(self, function_name: str, reverse: bool = False, **kwargs: Any) -> None:
+ callbacks = reversed(self.callbacks) if reverse else self.callbacks
+ for c in callbacks:
try:
getattr(c, function_name)(**kwargs)
except Exception as e:
@@ -27,13 +28,15 @@
self._notify(function_name="on_run_start", config=config, **kwargs)
def on_run_end(self, config: DictConfig, **kwargs: Any) -> None:
- self._notify(function_name="on_run_end", config=config, **kwargs)
+ self._notify(function_name="on_run_end", config=config, reverse=True, **kwargs)
def on_multirun_start(self, config: DictConfig, **kwargs: Any) -> None:
self._notify(function_name="on_multirun_start", config=config, **kwargs)
def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:
- self._notify(function_name="on_multirun_end", config=config, **kwargs)
+ self._notify(
+ function_name="on_multirun_end", reverse=True, config=config, **kwargs
+ )
def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:
self._notify(function_name="on_job_start", config=config, **kwargs)
@@ -42,5 +45,9 @@
self, config: DictConfig, job_return: JobReturn, **kwargs: Any
) -> None:
self._notify(
- function_name="on_job_end", config=config, job_return=job_return, **kwargs
+ function_name="on_job_end",
+ config=config,
+ job_return=job_return,
+ reverse=True,
+ **kwargs,
)
| {"golden_diff": "diff --git a/hydra/core/callbacks.py b/hydra/core/callbacks.py\n--- a/hydra/core/callbacks.py\n+++ b/hydra/core/callbacks.py\n@@ -14,8 +14,9 @@\n for params in config.hydra.callbacks.values():\n self.callbacks.append(instantiate(params))\n \n- def _notify(self, function_name: str, **kwargs: Any) -> None:\n- for c in self.callbacks:\n+ def _notify(self, function_name: str, reverse: bool = False, **kwargs: Any) -> None:\n+ callbacks = reversed(self.callbacks) if reverse else self.callbacks\n+ for c in callbacks:\n try:\n getattr(c, function_name)(**kwargs)\n except Exception as e:\n@@ -27,13 +28,15 @@\n self._notify(function_name=\"on_run_start\", config=config, **kwargs)\n \n def on_run_end(self, config: DictConfig, **kwargs: Any) -> None:\n- self._notify(function_name=\"on_run_end\", config=config, **kwargs)\n+ self._notify(function_name=\"on_run_end\", config=config, reverse=True, **kwargs)\n \n def on_multirun_start(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_multirun_start\", config=config, **kwargs)\n \n def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:\n- self._notify(function_name=\"on_multirun_end\", config=config, **kwargs)\n+ self._notify(\n+ function_name=\"on_multirun_end\", reverse=True, config=config, **kwargs\n+ )\n \n def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_job_start\", config=config, **kwargs)\n@@ -42,5 +45,9 @@\n self, config: DictConfig, job_return: JobReturn, **kwargs: Any\n ) -> None:\n self._notify(\n- function_name=\"on_job_end\", config=config, job_return=job_return, **kwargs\n+ function_name=\"on_job_end\",\n+ config=config,\n+ job_return=job_return,\n+ reverse=True,\n+ **kwargs,\n )\n", "issue": "[callbacks] call on_*_end events in reverse order\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport warnings\nfrom typing import Any\n\nfrom omegaconf import DictConfig\n\nfrom hydra.core.utils import JobReturn\nfrom hydra.utils import instantiate\n\n\nclass Callbacks:\n def __init__(self, config: DictConfig) -> None:\n self.callbacks = []\n for params in config.hydra.callbacks.values():\n self.callbacks.append(instantiate(params))\n\n def _notify(self, function_name: str, **kwargs: Any) -> None:\n for c in self.callbacks:\n try:\n getattr(c, function_name)(**kwargs)\n except Exception as e:\n warnings.warn(\n f\"Callback {type(c).__name__}.{function_name} raised {type(e).__name__}: {e}\"\n )\n\n def on_run_start(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_run_start\", config=config, **kwargs)\n\n def on_run_end(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_run_end\", config=config, **kwargs)\n\n def on_multirun_start(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_multirun_start\", config=config, **kwargs)\n\n def on_multirun_end(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_multirun_end\", config=config, **kwargs)\n\n def on_job_start(self, config: DictConfig, **kwargs: Any) -> None:\n self._notify(function_name=\"on_job_start\", config=config, **kwargs)\n\n def on_job_end(\n self, config: DictConfig, job_return: JobReturn, **kwargs: Any\n ) -> None:\n self._notify(\n function_name=\"on_job_end\", config=config, job_return=job_return, **kwargs\n )\n", "path": "hydra/core/callbacks.py"}]} | 1,063 | 504 |
gh_patches_debug_6037 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MI failing since at least 2018-05-12
MI has been failing since 2018-05-12
Based on automated runs it appears that MI has not run successfully in 2 days (2018-05-12).
```
02:47:18 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-GG
02:47:19 INFO pupa: save bill HJR GG in 2017-2018 as bill_dcf34e60-5681-11e8-a8aa-029b97b45e2a.json
02:47:19 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2017-HJR-HH
02:47:20 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-HH
02:47:21 INFO pupa: save bill HJR HH in 2017-2018 as bill_de254248-5681-11e8-a8aa-029b97b45e2a.json
02:47:21 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2017-HJR-II
02:47:22 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-II
02:47:23 INFO pupa: save bill HJR II in 2017-2018 as bill_df57e738-5681-11e8-a8aa-029b97b45e2a.json
02:47:23 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2017-HJR-JJ
02:47:24 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-JJ
02:47:25 WARNING pupa: Cannot open bill page for HJR JJ; skipping
02:47:25 INFO scrapelib: GET - http://www.senate.michigan.gov/senatorinfo_list.html
02:47:25 INFO scrapelib: HEAD - http://www.senate.michigan.gov/_images/Booher.png
02:47:26 INFO scrapelib: GET - http://www.SenatorDarwinBooher.com/contact/
02:47:27 INFO pupa: save person Booher, Darwin L as person_e21c0558-5681-11e8-a8aa-029b97b45e2a.json
loaded Open States pupa settings...
mi (scrape, import)
bills: {}
people: {}
committees: {}
Traceback (most recent call last):
File "/opt/openstates/venv-pupa//bin/pupa", line 11, in <module>
load_entry_point('pupa', 'console_scripts', 'pupa')()
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py", line 68, in main
subcommands[args.subcommand].handle(args, other)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 116, in do_scrape
self.save_object(obj)
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 99, in save_object
raise ve
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 96, in save_object
obj.validate()
File "/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py", line 191, in validate
self.__class__.__name__, self._id, '\n\t'+'\n\t'.join(errors)
pupa.exceptions.ScrapeValueError: validation of Person e21c0558-5681-11e8-a8aa-029b97b45e2a failed:
'/booher' is not a 'uri'
Failed validating 'format' in schema['properties']['links']['items']['properties']['url']:
{'format': 'uri', 'type': 'string'}
On instance['links'][0]['url']:
'/booher'
'/booher' is not a 'uri'
Failed validating 'format' in schema['properties']['sources']['items']['properties']['url']:
{'format': 'uri', 'type': 'string'}
On instance['sources'][0]['url']:
'/booher'
```
Visit http://bobsled.openstates.org for more info.
</issue>
<code>
[start of openstates/mi/people.py]
1 import re
2 import requests
3
4 import lxml.html
5 import scrapelib
6 from pupa.scrape import Person, Scraper
7
8
9 abbr = {'D': 'Democratic', 'R': 'Republican'}
10
11
12 class MIPersonScraper(Scraper):
13 def scrape(self, chamber=None, session=None):
14 if chamber == 'upper':
15 yield from self.scrape_upper(chamber)
16 elif chamber == 'lower':
17 yield from self.scrape_lower(chamber)
18 else:
19 yield from self.scrape_upper(chamber)
20 yield from self.scrape_lower(chamber)
21
22 def scrape_lower(self, chamber):
23 url = 'http://www.house.mi.gov/mhrpublic/frmRepList.aspx'
24 table = [
25 "website",
26 "district",
27 "name",
28 "party",
29 "location",
30 "phone",
31 "email"
32 ]
33
34 data = self.get(url).text
35 doc = lxml.html.fromstring(data)
36
37 # skip two rows at top
38 for row in doc.xpath('//table[@id="grvRepInfo"]/*'):
39 tds = row.xpath('.//td')
40 if len(tds) == 0:
41 continue
42 metainf = {}
43 for i in range(0, len(table)):
44 metainf[table[i]] = tds[i]
45 district = str(int(metainf['district'].text_content().strip()))
46 party = metainf['party'].text_content().strip()
47 phone = metainf['phone'].text_content().strip()
48 email = metainf['email'].text_content().strip()
49 name = metainf['name'].text_content().strip()
50 if name == 'Vacant' or re.match(r'^District \d{1,3}$', name):
51 self.warning('District {} appears vacant, and will be skipped'.format(district))
52 continue
53 leg_url = metainf['website'].xpath("./a")[0].attrib['href']
54
55 office = metainf['location'].text_content().strip()
56 office = re.sub(
57 ' HOB',
58 ' Anderson House Office Building\n124 North Capitol Avenue\nLansing, MI 48933',
59 office
60 )
61 office = re.sub(
62 ' CB',
63 ' State Capitol Building\nLansing, MI 48909',
64 office
65 )
66
67 try:
68 photo_url = self.get_photo_url(leg_url)[0]
69 except (scrapelib.HTTPError, IndexError):
70 photo_url = ''
71 self.warning('no photo url for %s', name)
72
73 person = Person(name=name, district=district, party=abbr[party],
74 primary_org='lower', image=photo_url)
75
76 person.add_link(leg_url)
77 person.add_source(leg_url)
78
79 person.add_contact_detail(type='address', value=office, note='Capitol Office')
80 person.add_contact_detail(type='voice', value=phone, note='Capitol Office')
81 person.add_contact_detail(type='email', value=email, note='Capitol Office')
82
83 yield person
84
85 def scrape_upper(self, chamber):
86 url = 'http://www.senate.michigan.gov/senatorinfo_list.html'
87 url_to_append = 'http://www.senate.michigan.gov/_images/'
88 data = self.get(url).text
89 doc = lxml.html.fromstring(data)
90 for row in doc.xpath('//table[not(@class="calendar")]//tr')[3:]:
91 if len(row) != 7:
92 continue
93
94 # party, dist, member, office_phone, office_fax, office_loc
95 party, dist, member, contact, phone, fax, loc = row.getchildren()
96 if (party.text_content().strip() == "" or
97 'Lieutenant Governor' in member.text_content()):
98 continue
99
100 party = abbr[party.text]
101 district = dist.text_content().strip()
102 name = member.text_content().strip()
103 name = re.sub(r'\s+', " ", name)
104 surname = re.split(', | ', name)
105 surname[0] = re.sub('[\']', '', surname[0])
106 try:
107 self.head(url_to_append + surname[0] + '.png')
108 photo_url = url_to_append + surname[0] + '.png'
109 except scrapelib.HTTPError:
110 try:
111 self.head(url_to_append + surname[0] + '.jpg')
112 photo_url = url_to_append + surname[0] + '.jpg'
113 except scrapelib.HTTPError:
114 photo_url = None
115
116 if name == 'Vacant':
117 self.info('district %s is vacant', district)
118 continue
119
120 leg_url = member.xpath('a/@href')[0]
121 office_phone = phone.text
122 office_fax = fax.text
123
124 office_loc = loc.text
125 office_loc = re.sub(
126 ' Farnum Bldg',
127 ' Farnum Office Building\n125 West Allegan Street\nLansing, MI 48933',
128 office_loc
129 )
130 office_loc = re.sub(
131 ' Capitol Bldg',
132 ' State Capitol Building\nLansing, MI 48909',
133 office_loc
134 )
135
136 # email addresses aren't on the list page anymore but they
137 # are on the page linked off "Contact Me"
138
139 # data has a typo in a row
140 email = None
141 contact_url = [
142 a for a in row.xpath(".//a")
143 if a.text in ('Contact Me', 'Conact Me')][0].get('href')
144 try:
145 contact_html = self.get(contact_url).text
146 contact_doc = lxml.html.fromstring(contact_html)
147
148 header_email = contact_doc.xpath("//a[@class='header_email']")
149 if header_email:
150 email = header_email[0].text
151 else:
152 # not using the most common template, but maybe they
153 # dropped their email on the page somewhere
154 links = contact_doc.xpath('//a') or []
155 text_email = [a for a in links
156 if 'mailto:' in (a.get('href') or '')]
157 if text_email:
158 email = text_email[0].text
159 except requests.exceptions.TooManyRedirects:
160 self.warning("Contact Link Not Working for %s" % name)
161 person = Person(name=name, district=district, party=party,
162 primary_org='upper', image=photo_url)
163
164 person.add_link(leg_url)
165 person.add_source(leg_url)
166
167 person.add_contact_detail(type='address', value=office_loc, note='Capitol Office')
168 person.add_contact_detail(type='voice', value=office_phone, note='Capitol Office')
169 person.add_contact_detail(type='fax', value=office_fax, note='Capitol Office')
170 if email:
171 person.add_contact_detail(type='email', value=email, note='Capitol Office')
172
173 yield person
174
175 def get_photo_url(self, url):
176 data = self.get(url).text
177 doc = lxml.html.fromstring(data)
178 doc.make_links_absolute(url)
179 return (
180 doc.xpath('//div[contains(@class, "headshotTop")]//img/@src') + # housedems.com
181 doc.xpath('//div[contains(@class, "widget_sp_image")]//img/@src') # gophouse.org
182 )
183
[end of openstates/mi/people.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openstates/mi/people.py b/openstates/mi/people.py
--- a/openstates/mi/people.py
+++ b/openstates/mi/people.py
@@ -87,6 +87,7 @@
url_to_append = 'http://www.senate.michigan.gov/_images/'
data = self.get(url).text
doc = lxml.html.fromstring(data)
+ doc.make_links_absolute(url)
for row in doc.xpath('//table[not(@class="calendar")]//tr')[3:]:
if len(row) != 7:
continue
| {"golden_diff": "diff --git a/openstates/mi/people.py b/openstates/mi/people.py\n--- a/openstates/mi/people.py\n+++ b/openstates/mi/people.py\n@@ -87,6 +87,7 @@\n url_to_append = 'http://www.senate.michigan.gov/_images/'\n data = self.get(url).text\n doc = lxml.html.fromstring(data)\n+ doc.make_links_absolute(url)\n for row in doc.xpath('//table[not(@class=\"calendar\")]//tr')[3:]:\n if len(row) != 7:\n continue\n", "issue": "MI failing since at least 2018-05-12\nMI has been failing since 2018-05-12\n\nBased on automated runs it appears that MI has not run successfully in 2 days (2018-05-12).\n\n\n```\n 02:47:18 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-GG\n02:47:19 INFO pupa: save bill HJR GG in 2017-2018 as bill_dcf34e60-5681-11e8-a8aa-029b97b45e2a.json\n02:47:19 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2017-HJR-HH\n02:47:20 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-HH\n02:47:21 INFO pupa: save bill HJR HH in 2017-2018 as bill_de254248-5681-11e8-a8aa-029b97b45e2a.json\n02:47:21 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2017-HJR-II\n02:47:22 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-II\n02:47:23 INFO pupa: save bill HJR II in 2017-2018 as bill_df57e738-5681-11e8-a8aa-029b97b45e2a.json\n02:47:23 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2017-HJR-JJ\n02:47:24 INFO scrapelib: GET - http://legislature.mi.gov/doc.aspx?2018-HJR-JJ\n02:47:25 WARNING pupa: Cannot open bill page for HJR JJ; skipping\n02:47:25 INFO scrapelib: GET - http://www.senate.michigan.gov/senatorinfo_list.html\n02:47:25 INFO scrapelib: HEAD - http://www.senate.michigan.gov/_images/Booher.png\n02:47:26 INFO scrapelib: GET - http://www.SenatorDarwinBooher.com/contact/\n02:47:27 INFO pupa: save person Booher, Darwin L as person_e21c0558-5681-11e8-a8aa-029b97b45e2a.json\nloaded Open States pupa settings...\nmi (scrape, import)\n bills: {}\n people: {}\n committees: {}\nTraceback (most recent call last):\n File \"/opt/openstates/venv-pupa//bin/pupa\", line 11, in <module>\n load_entry_point('pupa', 'console_scripts', 'pupa')()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py\", line 68, in main\n subcommands[args.subcommand].handle(args, other)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 260, in handle\n return self.do_handle(args, other, juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 305, in do_handle\n report['scrape'] = self.do_scrape(juris, args, scrapers)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 173, in do_scrape\n report[scraper_name] = scraper.do_scrape(**scrape_args)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 116, in do_scrape\n self.save_object(obj)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 99, in save_object\n raise ve\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 96, in save_object\n obj.validate()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/scrape/base.py\", line 191, in validate\n self.__class__.__name__, self._id, '\\n\\t'+'\\n\\t'.join(errors)\npupa.exceptions.ScrapeValueError: validation of Person e21c0558-5681-11e8-a8aa-029b97b45e2a failed: \n\t'/booher' is not a 'uri'\nFailed validating 'format' in schema['properties']['links']['items']['properties']['url']:\n {'format': 'uri', 'type': 'string'}\nOn instance['links'][0]['url']:\n '/booher'\n\t'/booher' is not a 'uri'\nFailed 
validating 'format' in schema['properties']['sources']['items']['properties']['url']:\n {'format': 'uri', 'type': 'string'}\nOn instance['sources'][0]['url']:\n '/booher'\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "import re\nimport requests\n\nimport lxml.html\nimport scrapelib\nfrom pupa.scrape import Person, Scraper\n\n\nabbr = {'D': 'Democratic', 'R': 'Republican'}\n\n\nclass MIPersonScraper(Scraper):\n def scrape(self, chamber=None, session=None):\n if chamber == 'upper':\n yield from self.scrape_upper(chamber)\n elif chamber == 'lower':\n yield from self.scrape_lower(chamber)\n else:\n yield from self.scrape_upper(chamber)\n yield from self.scrape_lower(chamber)\n\n def scrape_lower(self, chamber):\n url = 'http://www.house.mi.gov/mhrpublic/frmRepList.aspx'\n table = [\n \"website\",\n \"district\",\n \"name\",\n \"party\",\n \"location\",\n \"phone\",\n \"email\"\n ]\n\n data = self.get(url).text\n doc = lxml.html.fromstring(data)\n\n # skip two rows at top\n for row in doc.xpath('//table[@id=\"grvRepInfo\"]/*'):\n tds = row.xpath('.//td')\n if len(tds) == 0:\n continue\n metainf = {}\n for i in range(0, len(table)):\n metainf[table[i]] = tds[i]\n district = str(int(metainf['district'].text_content().strip()))\n party = metainf['party'].text_content().strip()\n phone = metainf['phone'].text_content().strip()\n email = metainf['email'].text_content().strip()\n name = metainf['name'].text_content().strip()\n if name == 'Vacant' or re.match(r'^District \\d{1,3}$', name):\n self.warning('District {} appears vacant, and will be skipped'.format(district))\n continue\n leg_url = metainf['website'].xpath(\"./a\")[0].attrib['href']\n\n office = metainf['location'].text_content().strip()\n office = re.sub(\n ' HOB',\n ' Anderson House Office Building\\n124 North Capitol Avenue\\nLansing, MI 48933',\n office\n )\n office = re.sub(\n ' CB',\n ' State Capitol Building\\nLansing, MI 48909',\n office\n )\n\n try:\n photo_url = self.get_photo_url(leg_url)[0]\n except (scrapelib.HTTPError, IndexError):\n photo_url = ''\n self.warning('no photo url for %s', name)\n\n person = Person(name=name, district=district, party=abbr[party],\n primary_org='lower', image=photo_url)\n\n person.add_link(leg_url)\n person.add_source(leg_url)\n\n person.add_contact_detail(type='address', value=office, note='Capitol Office')\n person.add_contact_detail(type='voice', value=phone, note='Capitol Office')\n person.add_contact_detail(type='email', value=email, note='Capitol Office')\n\n yield person\n\n def scrape_upper(self, chamber):\n url = 'http://www.senate.michigan.gov/senatorinfo_list.html'\n url_to_append = 'http://www.senate.michigan.gov/_images/'\n data = self.get(url).text\n doc = lxml.html.fromstring(data)\n for row in doc.xpath('//table[not(@class=\"calendar\")]//tr')[3:]:\n if len(row) != 7:\n continue\n\n # party, dist, member, office_phone, office_fax, office_loc\n party, dist, member, contact, phone, fax, loc = row.getchildren()\n if (party.text_content().strip() == \"\" or\n 'Lieutenant Governor' in member.text_content()):\n continue\n\n party = abbr[party.text]\n district = dist.text_content().strip()\n name = member.text_content().strip()\n name = re.sub(r'\\s+', \" \", name)\n surname = re.split(', | ', name)\n surname[0] = re.sub('[\\']', '', surname[0])\n try:\n self.head(url_to_append + surname[0] + '.png')\n photo_url = url_to_append + surname[0] + '.png'\n except scrapelib.HTTPError:\n try:\n self.head(url_to_append + surname[0] + '.jpg')\n 
photo_url = url_to_append + surname[0] + '.jpg'\n except scrapelib.HTTPError:\n photo_url = None\n\n if name == 'Vacant':\n self.info('district %s is vacant', district)\n continue\n\n leg_url = member.xpath('a/@href')[0]\n office_phone = phone.text\n office_fax = fax.text\n\n office_loc = loc.text\n office_loc = re.sub(\n ' Farnum Bldg',\n ' Farnum Office Building\\n125 West Allegan Street\\nLansing, MI 48933',\n office_loc\n )\n office_loc = re.sub(\n ' Capitol Bldg',\n ' State Capitol Building\\nLansing, MI 48909',\n office_loc\n )\n\n # email addresses aren't on the list page anymore but they\n # are on the page linked off \"Contact Me\"\n\n # data has a typo in a row\n email = None\n contact_url = [\n a for a in row.xpath(\".//a\")\n if a.text in ('Contact Me', 'Conact Me')][0].get('href')\n try:\n contact_html = self.get(contact_url).text\n contact_doc = lxml.html.fromstring(contact_html)\n\n header_email = contact_doc.xpath(\"//a[@class='header_email']\")\n if header_email:\n email = header_email[0].text\n else:\n # not using the most common template, but maybe they\n # dropped their email on the page somewhere\n links = contact_doc.xpath('//a') or []\n text_email = [a for a in links\n if 'mailto:' in (a.get('href') or '')]\n if text_email:\n email = text_email[0].text\n except requests.exceptions.TooManyRedirects:\n self.warning(\"Contact Link Not Working for %s\" % name)\n person = Person(name=name, district=district, party=party,\n primary_org='upper', image=photo_url)\n\n person.add_link(leg_url)\n person.add_source(leg_url)\n\n person.add_contact_detail(type='address', value=office_loc, note='Capitol Office')\n person.add_contact_detail(type='voice', value=office_phone, note='Capitol Office')\n person.add_contact_detail(type='fax', value=office_fax, note='Capitol Office')\n if email:\n person.add_contact_detail(type='email', value=email, note='Capitol Office')\n\n yield person\n\n def get_photo_url(self, url):\n data = self.get(url).text\n doc = lxml.html.fromstring(data)\n doc.make_links_absolute(url)\n return (\n doc.xpath('//div[contains(@class, \"headshotTop\")]//img/@src') + # housedems.com\n doc.xpath('//div[contains(@class, \"widget_sp_image\")]//img/@src') # gophouse.org\n )\n", "path": "openstates/mi/people.py"}]} | 3,828 | 124 |
gh_patches_debug_23761 | rasdani/github-patches | git_diff | fossasia__open-event-server-5139 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add custom form for attendees
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Add custom form for attendees
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
**Working on it**
</issue>
<code>
[start of app/models/custom_form.py]
1 import json
2 from sqlalchemy.schema import UniqueConstraint
3
4 from app.models import db
5 from app.models.base import SoftDeletionModel
6
7 SESSION_FORM = {
8 "title": {"include": 1, "require": 1},
9 "subtitle": {"include": 0, "require": 0},
10 "short_abstract": {"include": 1, "require": 0},
11 "long_abstract": {"include": 0, "require": 0},
12 "comments": {"include": 1, "require": 0},
13 "track": {"include": 0, "require": 0},
14 "session_type": {"include": 0, "require": 0},
15 "language": {"include": 0, "require": 0},
16 "slides": {"include": 1, "require": 0},
17 "video": {"include": 0, "require": 0},
18 "audio": {"include": 0, "require": 0}
19 }
20
21 SPEAKER_FORM = {
22 "name": {"include": 1, "require": 1},
23 "email": {"include": 1, "require": 1},
24 "photo": {"include": 1, "require": 0},
25 "organisation": {"include": 1, "require": 0},
26 "position": {"include": 1, "require": 0},
27 "country": {"include": 1, "require": 0},
28 "short_biography": {"include": 1, "require": 0},
29 "long_biography": {"include": 0, "require": 0},
30 "mobile": {"include": 0, "require": 0},
31 "website": {"include": 1, "require": 0},
32 "facebook": {"include": 0, "require": 0},
33 "twitter": {"include": 1, "require": 0},
34 "github": {"include": 0, "require": 0},
35 "linkedin": {"include": 0, "require": 0}
36 }
37
38 session_form_str = json.dumps(SESSION_FORM, separators=(',', ':'))
39 speaker_form_str = json.dumps(SPEAKER_FORM, separators=(',', ':'))
40
41
42 class CustomForms(SoftDeletionModel):
43 """custom form model class"""
44 __tablename__ = 'custom_forms'
45 __table_args__ = (UniqueConstraint('event_id', 'field_identifier', 'form', name='custom_form_identifier'), )
46 id = db.Column(db.Integer, primary_key=True)
47 field_identifier = db.Column(db.String, nullable=False)
48 form = db.Column(db.String, nullable=False)
49 type = db.Column(db.String, nullable=False)
50 is_required = db.Column(db.Boolean)
51 is_included = db.Column(db.Boolean)
52 is_fixed = db.Column(db.Boolean)
53 event_id = db.Column(db.Integer, db.ForeignKey('events.id', ondelete='CASCADE'))
54
55 def __init__(self,
56 event_id=None,
57 field_identifier=None,
58 form=None,
59 type=None,
60 is_required=None,
61 is_included=None,
62 is_fixed=None,
63 deleted_at=None):
64 self.event_id = event_id
65 self.field_identifier = field_identifier,
66 self.form = form,
67 self.type = type,
68 self.is_required = is_required,
69 self.is_included = is_included,
70 self.is_fixed = is_fixed
71 self.deleted_at = deleted_at
72
73 def __repr__(self):
74 return '<CustomForm %r>' % self.id
75
76 def __str__(self):
77 return self.__repr__()
78
79 @property
80 def serialize(self):
81 """Return object data in easily serializable format"""
82
83 return {
84 'id': self.id,
85 'field_identifier': self.field_identifier,
86 'form': self.form,
87 'type': self.type,
88 'is_required': self.is_required,
89 'is_included': self.is_included,
90 'is_fixed': self.is_fixed
91 }
92
[end of app/models/custom_form.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/models/custom_form.py b/app/models/custom_form.py
--- a/app/models/custom_form.py
+++ b/app/models/custom_form.py
@@ -35,8 +35,34 @@
"linkedin": {"include": 0, "require": 0}
}
+ATTENDEE_FORM = {
+ "firstname": {"include": 1, "require": 1},
+ "lastname": {"include": 1, "require": 1},
+ "email": {"include": 1, "require": 0},
+ "address": {"include": 1, "require": 0},
+ "city": {"include": 1, "require": 0},
+ "state": {"include": 1, "require": 0},
+ "country": {"include": 1, "require": 0},
+ "job_title": {"include": 1, "require": 0},
+ "phone": {"include": 1, "require": 0},
+ "tax_business_info": {"include": 0, "require": 0},
+ "billing_address": {"include": 0, "require": 0},
+ "home_address": {"include": 0, "require": 0},
+ "shipping_address": {"include": 0, "require": 0},
+ "company": {"include": 0, "require": 0},
+ "work_address": {"include": 0, "require": 0},
+ "work_phone": {"include": 0, "require": 0},
+ "website": {"include": 1, "require": 0},
+ "blog": {"include": 0, "require": 0},
+ "twitter": {"include": 1, "require": 0},
+ "facebook": {"include": 0, "require": 0},
+ "github": {"include": 1, "require": 0},
+ "gender": {"include": 0, "require": 0},
+}
+
session_form_str = json.dumps(SESSION_FORM, separators=(',', ':'))
speaker_form_str = json.dumps(SPEAKER_FORM, separators=(',', ':'))
+attendee_form_str = json.dumps(ATTENDEE_FORM, separators=(',', ':'))
class CustomForms(SoftDeletionModel):
| {"golden_diff": "diff --git a/app/models/custom_form.py b/app/models/custom_form.py\n--- a/app/models/custom_form.py\n+++ b/app/models/custom_form.py\n@@ -35,8 +35,34 @@\n \"linkedin\": {\"include\": 0, \"require\": 0}\n }\n \n+ATTENDEE_FORM = {\n+ \"firstname\": {\"include\": 1, \"require\": 1},\n+ \"lastname\": {\"include\": 1, \"require\": 1},\n+ \"email\": {\"include\": 1, \"require\": 0},\n+ \"address\": {\"include\": 1, \"require\": 0},\n+ \"city\": {\"include\": 1, \"require\": 0},\n+ \"state\": {\"include\": 1, \"require\": 0},\n+ \"country\": {\"include\": 1, \"require\": 0},\n+ \"job_title\": {\"include\": 1, \"require\": 0},\n+ \"phone\": {\"include\": 1, \"require\": 0},\n+ \"tax_business_info\": {\"include\": 0, \"require\": 0},\n+ \"billing_address\": {\"include\": 0, \"require\": 0},\n+ \"home_address\": {\"include\": 0, \"require\": 0},\n+ \"shipping_address\": {\"include\": 0, \"require\": 0},\n+ \"company\": {\"include\": 0, \"require\": 0},\n+ \"work_address\": {\"include\": 0, \"require\": 0},\n+ \"work_phone\": {\"include\": 0, \"require\": 0},\n+ \"website\": {\"include\": 1, \"require\": 0},\n+ \"blog\": {\"include\": 0, \"require\": 0},\n+ \"twitter\": {\"include\": 1, \"require\": 0},\n+ \"facebook\": {\"include\": 0, \"require\": 0},\n+ \"github\": {\"include\": 1, \"require\": 0},\n+ \"gender\": {\"include\": 0, \"require\": 0},\n+}\n+\n session_form_str = json.dumps(SESSION_FORM, separators=(',', ':'))\n speaker_form_str = json.dumps(SPEAKER_FORM, separators=(',', ':'))\n+attendee_form_str = json.dumps(ATTENDEE_FORM, separators=(',', ':'))\n \n \n class CustomForms(SoftDeletionModel):\n", "issue": "Add custom form for attendees\n**Is your feature request related to a problem? Please describe.**\r\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\n\r\nAdd custom form for attendees\r\n\r\n**Describe the solution you'd like**\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n**Describe alternatives you've considered**\r\n<!-- A clear and concise description of any alternative solutions or features you've considered. -->\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n\r\n**Working on it**\n", "before_files": [{"content": "import json\nfrom sqlalchemy.schema import UniqueConstraint\n\nfrom app.models import db\nfrom app.models.base import SoftDeletionModel\n\nSESSION_FORM = {\n \"title\": {\"include\": 1, \"require\": 1},\n \"subtitle\": {\"include\": 0, \"require\": 0},\n \"short_abstract\": {\"include\": 1, \"require\": 0},\n \"long_abstract\": {\"include\": 0, \"require\": 0},\n \"comments\": {\"include\": 1, \"require\": 0},\n \"track\": {\"include\": 0, \"require\": 0},\n \"session_type\": {\"include\": 0, \"require\": 0},\n \"language\": {\"include\": 0, \"require\": 0},\n \"slides\": {\"include\": 1, \"require\": 0},\n \"video\": {\"include\": 0, \"require\": 0},\n \"audio\": {\"include\": 0, \"require\": 0}\n}\n\nSPEAKER_FORM = {\n \"name\": {\"include\": 1, \"require\": 1},\n \"email\": {\"include\": 1, \"require\": 1},\n \"photo\": {\"include\": 1, \"require\": 0},\n \"organisation\": {\"include\": 1, \"require\": 0},\n \"position\": {\"include\": 1, \"require\": 0},\n \"country\": {\"include\": 1, \"require\": 0},\n \"short_biography\": {\"include\": 1, \"require\": 0},\n \"long_biography\": {\"include\": 0, \"require\": 0},\n \"mobile\": {\"include\": 0, \"require\": 0},\n \"website\": {\"include\": 1, \"require\": 0},\n \"facebook\": {\"include\": 0, \"require\": 0},\n \"twitter\": {\"include\": 1, \"require\": 0},\n \"github\": {\"include\": 0, \"require\": 0},\n \"linkedin\": {\"include\": 0, \"require\": 0}\n}\n\nsession_form_str = json.dumps(SESSION_FORM, separators=(',', ':'))\nspeaker_form_str = json.dumps(SPEAKER_FORM, separators=(',', ':'))\n\n\nclass CustomForms(SoftDeletionModel):\n \"\"\"custom form model class\"\"\"\n __tablename__ = 'custom_forms'\n __table_args__ = (UniqueConstraint('event_id', 'field_identifier', 'form', name='custom_form_identifier'), )\n id = db.Column(db.Integer, primary_key=True)\n field_identifier = db.Column(db.String, nullable=False)\n form = db.Column(db.String, nullable=False)\n type = db.Column(db.String, nullable=False)\n is_required = db.Column(db.Boolean)\n is_included = db.Column(db.Boolean)\n is_fixed = db.Column(db.Boolean)\n event_id = db.Column(db.Integer, db.ForeignKey('events.id', ondelete='CASCADE'))\n\n def __init__(self,\n event_id=None,\n field_identifier=None,\n form=None,\n type=None,\n is_required=None,\n is_included=None,\n is_fixed=None,\n deleted_at=None):\n self.event_id = event_id\n self.field_identifier = field_identifier,\n self.form = form,\n self.type = type,\n self.is_required = is_required,\n self.is_included = is_included,\n self.is_fixed = is_fixed\n self.deleted_at = deleted_at\n\n def __repr__(self):\n return '<CustomForm %r>' % self.id\n\n def __str__(self):\n return self.__repr__()\n\n @property\n def serialize(self):\n \"\"\"Return object data in easily serializable format\"\"\"\n\n return {\n 'id': self.id,\n 'field_identifier': self.field_identifier,\n 'form': self.form,\n 'type': self.type,\n 'is_required': self.is_required,\n 'is_included': self.is_included,\n 'is_fixed': self.is_fixed\n }\n", "path": "app/models/custom_form.py"}]} | 1,682 | 517 |
gh_patches_debug_33913 | rasdani/github-patches | git_diff | cocotb__cocotb-1881 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
XGMII monitor crashes with AttributeError
`self._pkt` is initialized to `b""` (immutable), but we are using `.append()` to add payload data to it. This won't work. Maybe instead use a `bytearray()`?
_Originally posted by @LeChuck42 in https://github.com/cocotb/cocotb/pull/1545#issuecomment-635394899_
These lines exemplify the issue:
https://github.com/cocotb/cocotb/blob/924f35a3b7d39543118b7bfaed77dd4808e6612b/cocotb/monitors/xgmii.py#L107-L121
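As a quick standalone check in plain Python (no cocotb needed), the failure and the `bytearray` alternative look like this:

```python
# bytes objects are immutable and have no append(); bytearray is the mutable counterpart.
pkt = b""
try:
    pkt.append(0x55)
except AttributeError as exc:
    print(exc)  # e.g. 'bytes' object has no attribute 'append'

pkt = bytearray()
pkt.append(0x55)        # in-place append works on a bytearray
pkt += b"\x55\xd5"      # bytes-style operations still work
assert bytes(pkt) == b"\x55\x55\xd5"
```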
</issue>
<code>
[start of cocotb/monitors/xgmii.py]
1 # Copyright (c) 2013 Potential Ventures Ltd
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions are met:
6 # * Redistributions of source code must retain the above copyright
7 # notice, this list of conditions and the following disclaimer.
8 # * Redistributions in binary form must reproduce the above copyright
9 # notice, this list of conditions and the following disclaimer in the
10 # documentation and/or other materials provided with the distribution.
11 # * Neither the name of Potential Ventures Ltd nor the names of its
12 # contributors may be used to endorse or promote products derived from this
13 # software without specific prior written permission.
14 #
15 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
16 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
17 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
18 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
19 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
20 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
21 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
22 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
23 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
24 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
25
26 """Monitor for XGMII (10 Gigabit Media Independent Interface)."""
27
28 # By default cast to scapy packets, otherwise we pass the string of bytes
29 try:
30 from scapy.all import Ether
31 _have_scapy = True
32 except ImportError:
33 _have_scapy = False
34
35 import struct
36 import zlib
37
38 import cocotb
39 from cocotb.utils import hexdump
40 from cocotb.monitors import Monitor
41 from cocotb.triggers import RisingEdge
42
43 _XGMII_IDLE = 0x07 # noqa
44 _XGMII_START = 0xFB # noqa
45 _XGMII_TERMINATE = 0xFD # noqa
46
47 _PREAMBLE_SFD = b"\x55\x55\x55\x55\x55\x55\xD5"
48
49
50 class XGMII(Monitor):
51 """XGMII (10 Gigabit Media Independent Interface) Monitor.
52
53 Assumes a single vector, either 4 or 8 bytes plus control bit for each byte.
54
55 If interleaved is ``True`` then the control bits are adjacent to the bytes.
56
57 .. versionchanged:: 1.4.0
58 This now emits packets of type :class:`bytes` rather than :class:`str`,
59 which matches the behavior of :class:`cocotb.drivers.xgmii.XGMII`.
60 """
61
62 def __init__(self, signal, clock, interleaved=True, callback=None,
63 event=None):
64 """Args:
65 signal (SimHandle): The XGMII data bus.
66 clock (SimHandle): The associated clock (assumed to be
67 driven by another coroutine).
68 interleaved (bool, optional): Whether control bits are interleaved
69 with the data bytes or not.
70
71 If interleaved the bus is
72 byte0, byte0_control, byte1, byte1_control, ...
73
74 Otherwise expect
75 byte0, byte1, ..., byte0_control, byte1_control, ...
76 """
77 self.log = signal._log
78 self.clock = clock
79 self.signal = signal
80 self.bytes = len(self.signal) // 9
81 self.interleaved = interleaved
82 Monitor.__init__(self, callback=callback, event=event)
83
84 def _get_bytes(self):
85 """Take a value and extract the individual bytes and control bits.
86
87 Returns a tuple of lists.
88 """
89 value = self.signal.value.integer
90 bytes = []
91 ctrls = []
92 byte_shift = 8
93 ctrl_base = 8 * self.bytes
94 ctrl_inc = 1
95 if self.interleaved:
96 byte_shift += 1
97 ctrl_base = 8
98 ctrl_inc = 9
99
100 for i in range(self.bytes):
101 bytes.append((value >> (i * byte_shift)) & 0xff)
102 ctrls.append(bool(value & (1 << ctrl_base)))
103 ctrl_base += ctrl_inc
104
105 return ctrls, bytes
106
107 def _add_payload(self, ctrl, bytes):
108 """Take the payload and return true if more to come"""
109 for index, byte in enumerate(bytes):
110 if ctrl[index]:
111 if byte != _XGMII_TERMINATE:
112 self.log.error("Got control character in XGMII payload")
113 self.log.info("data = :" +
114 " ".join(["%02X" % b for b in bytes]))
115 self.log.info("ctrl = :" +
116 " ".join(["%s" % str(c) for c in ctrl]))
117 self._pkt = b""
118 return False
119
120 self._pkt.append(byte)
121 return True
122
123 @cocotb.coroutine
124 def _monitor_recv(self):
125 clk = RisingEdge(self.clock)
126 self._pkt = b""
127
128 while True:
129 yield clk
130 ctrl, bytes = self._get_bytes()
131
132 if ctrl[0] and bytes[0] == _XGMII_START:
133
134 ctrl, bytes = ctrl[1:], bytes[1:]
135
136 while self._add_payload(ctrl, bytes):
137 yield clk
138 ctrl, bytes = self._get_bytes()
139
140 elif self.bytes == 8 :
141 if ctrl[4] and bytes[4] == _XGMII_START:
142
143 ctrl, bytes = ctrl[5:], bytes[5:]
144
145 while self._add_payload(ctrl, bytes):
146 yield clk
147 ctrl, bytes = self._get_bytes()
148
149 if self._pkt:
150
151 self.log.debug("Received:\n%s" % (hexdump(self._pkt)))
152
153 if len(self._pkt) < 64 + 7:
154 self.log.error("Received a runt frame!")
155 if len(self._pkt) < 12:
156 self.log.error("No data to extract")
157 self._pkt = b""
158 continue
159
160 preamble_sfd = self._pkt[0:7]
161 crc32 = self._pkt[-4:]
162 payload = self._pkt[7:-4]
163
164 if preamble_sfd != _PREAMBLE_SFD:
165 self.log.error("Got a frame with unknown preamble/SFD")
166 self.log.error(hexdump(preamble_sfd))
167 self._pkt = b""
168 continue
169
170 expected_crc = struct.pack("<I",
171 (zlib.crc32(payload) & 0xFFFFFFFF))
172
173 if crc32 != expected_crc:
174 self.log.error("Incorrect CRC on received packet")
175 self.log.info("Expected: %s" % (hexdump(expected_crc)))
176 self.log.info("Received: %s" % (hexdump(crc32)))
177
178 # Use scapy to decode the packet
179 if _have_scapy:
180 p = Ether(payload)
181 self.log.debug("Received decoded packet:\n%s" % p.show2())
182 else:
183 p = payload
184
185 self._recv(p)
186 self._pkt = b""
187
[end of cocotb/monitors/xgmii.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cocotb/monitors/xgmii.py b/cocotb/monitors/xgmii.py
--- a/cocotb/monitors/xgmii.py
+++ b/cocotb/monitors/xgmii.py
@@ -114,7 +114,7 @@
" ".join(["%02X" % b for b in bytes]))
self.log.info("ctrl = :" +
" ".join(["%s" % str(c) for c in ctrl]))
- self._pkt = b""
+ self._pkt = bytearray()
return False
self._pkt.append(byte)
@@ -123,7 +123,7 @@
@cocotb.coroutine
def _monitor_recv(self):
clk = RisingEdge(self.clock)
- self._pkt = b""
+ self._pkt = bytearray()
while True:
yield clk
@@ -154,7 +154,7 @@
self.log.error("Received a runt frame!")
if len(self._pkt) < 12:
self.log.error("No data to extract")
- self._pkt = b""
+ self._pkt = bytearray()
continue
preamble_sfd = self._pkt[0:7]
@@ -164,7 +164,7 @@
if preamble_sfd != _PREAMBLE_SFD:
self.log.error("Got a frame with unknown preamble/SFD")
self.log.error(hexdump(preamble_sfd))
- self._pkt = b""
+ self._pkt = bytearray()
continue
expected_crc = struct.pack("<I",
@@ -183,4 +183,4 @@
p = payload
self._recv(p)
- self._pkt = b""
+ self._pkt = bytearray()
| {"golden_diff": "diff --git a/cocotb/monitors/xgmii.py b/cocotb/monitors/xgmii.py\n--- a/cocotb/monitors/xgmii.py\n+++ b/cocotb/monitors/xgmii.py\n@@ -114,7 +114,7 @@\n \" \".join([\"%02X\" % b for b in bytes]))\n self.log.info(\"ctrl = :\" +\n \" \".join([\"%s\" % str(c) for c in ctrl]))\n- self._pkt = b\"\"\n+ self._pkt = bytearray()\n return False\n \n self._pkt.append(byte)\n@@ -123,7 +123,7 @@\n @cocotb.coroutine\n def _monitor_recv(self):\n clk = RisingEdge(self.clock)\n- self._pkt = b\"\"\n+ self._pkt = bytearray()\n \n while True:\n yield clk\n@@ -154,7 +154,7 @@\n self.log.error(\"Received a runt frame!\")\n if len(self._pkt) < 12:\n self.log.error(\"No data to extract\")\n- self._pkt = b\"\"\n+ self._pkt = bytearray()\n continue\n \n preamble_sfd = self._pkt[0:7]\n@@ -164,7 +164,7 @@\n if preamble_sfd != _PREAMBLE_SFD:\n self.log.error(\"Got a frame with unknown preamble/SFD\")\n self.log.error(hexdump(preamble_sfd))\n- self._pkt = b\"\"\n+ self._pkt = bytearray()\n continue\n \n expected_crc = struct.pack(\"<I\",\n@@ -183,4 +183,4 @@\n p = payload\n \n self._recv(p)\n- self._pkt = b\"\"\n+ self._pkt = bytearray()\n", "issue": "XGMII monitor crashes with AttributeError\n`self._pkt` is initialized to `b\"\"` (immutable), but we are using `.append()` to add payload data to it. This won't work. Maybe instead use a `bytearray()`?\r\n\r\n_Originally posted by @LeChuck42 in https://github.com/cocotb/cocotb/pull/1545#issuecomment-635394899_\r\n\r\nThese lines exemplify the issue:\r\n\r\nhttps://github.com/cocotb/cocotb/blob/924f35a3b7d39543118b7bfaed77dd4808e6612b/cocotb/monitors/xgmii.py#L107-L121\n", "before_files": [{"content": "# Copyright (c) 2013 Potential Ventures Ltd\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd nor the names of its\n# contributors may be used to endorse or promote products derived from this\n# software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"Monitor for XGMII (10 Gigabit Media Independent Interface).\"\"\"\n\n# By default cast to scapy packets, otherwise we pass the string of bytes\ntry:\n from scapy.all import Ether\n _have_scapy = True\nexcept ImportError:\n _have_scapy = False\n\nimport struct\nimport zlib\n\nimport cocotb\nfrom cocotb.utils import hexdump\nfrom cocotb.monitors import Monitor\nfrom cocotb.triggers import RisingEdge\n\n_XGMII_IDLE = 0x07 # noqa\n_XGMII_START = 0xFB # noqa\n_XGMII_TERMINATE = 0xFD # noqa\n\n_PREAMBLE_SFD = b\"\\x55\\x55\\x55\\x55\\x55\\x55\\xD5\"\n\n\nclass XGMII(Monitor):\n \"\"\"XGMII (10 Gigabit Media Independent Interface) Monitor.\n\n Assumes a single vector, either 4 or 8 bytes plus control bit for each byte.\n\n If interleaved is ``True`` then the control bits are adjacent to the bytes.\n\n .. versionchanged:: 1.4.0\n This now emits packets of type :class:`bytes` rather than :class:`str`,\n which matches the behavior of :class:`cocotb.drivers.xgmii.XGMII`.\n \"\"\"\n\n def __init__(self, signal, clock, interleaved=True, callback=None,\n event=None):\n \"\"\"Args:\n signal (SimHandle): The XGMII data bus.\n clock (SimHandle): The associated clock (assumed to be\n driven by another coroutine).\n interleaved (bool, optional): Whether control bits are interleaved\n with the data bytes or not.\n\n If interleaved the bus is\n byte0, byte0_control, byte1, byte1_control, ...\n\n Otherwise expect\n byte0, byte1, ..., byte0_control, byte1_control, ...\n \"\"\"\n self.log = signal._log\n self.clock = clock\n self.signal = signal\n self.bytes = len(self.signal) // 9\n self.interleaved = interleaved\n Monitor.__init__(self, callback=callback, event=event)\n\n def _get_bytes(self):\n \"\"\"Take a value and extract the individual bytes and control bits.\n\n Returns a tuple of lists.\n \"\"\"\n value = self.signal.value.integer\n bytes = []\n ctrls = []\n byte_shift = 8\n ctrl_base = 8 * self.bytes\n ctrl_inc = 1\n if self.interleaved:\n byte_shift += 1\n ctrl_base = 8\n ctrl_inc = 9\n\n for i in range(self.bytes):\n bytes.append((value >> (i * byte_shift)) & 0xff)\n ctrls.append(bool(value & (1 << ctrl_base)))\n ctrl_base += ctrl_inc\n\n return ctrls, bytes\n\n def _add_payload(self, ctrl, bytes):\n \"\"\"Take the payload and return true if more to come\"\"\"\n for index, byte in enumerate(bytes):\n if ctrl[index]:\n if byte != _XGMII_TERMINATE:\n self.log.error(\"Got control character in XGMII payload\")\n self.log.info(\"data = :\" +\n \" \".join([\"%02X\" % b for b in bytes]))\n self.log.info(\"ctrl = :\" +\n \" \".join([\"%s\" % str(c) for c in ctrl]))\n self._pkt = b\"\"\n return False\n\n self._pkt.append(byte)\n return True\n\n @cocotb.coroutine\n def _monitor_recv(self):\n clk = RisingEdge(self.clock)\n self._pkt = b\"\"\n\n while True:\n yield clk\n ctrl, bytes = self._get_bytes()\n\n if ctrl[0] and bytes[0] == _XGMII_START:\n\n ctrl, bytes = ctrl[1:], bytes[1:]\n\n while self._add_payload(ctrl, bytes):\n yield clk\n ctrl, bytes = self._get_bytes()\n\n elif self.bytes == 8 
:\n if ctrl[4] and bytes[4] == _XGMII_START:\n\n ctrl, bytes = ctrl[5:], bytes[5:]\n\n while self._add_payload(ctrl, bytes):\n yield clk\n ctrl, bytes = self._get_bytes()\n\n if self._pkt:\n\n self.log.debug(\"Received:\\n%s\" % (hexdump(self._pkt)))\n\n if len(self._pkt) < 64 + 7:\n self.log.error(\"Received a runt frame!\")\n if len(self._pkt) < 12:\n self.log.error(\"No data to extract\")\n self._pkt = b\"\"\n continue\n\n preamble_sfd = self._pkt[0:7]\n crc32 = self._pkt[-4:]\n payload = self._pkt[7:-4]\n\n if preamble_sfd != _PREAMBLE_SFD:\n self.log.error(\"Got a frame with unknown preamble/SFD\")\n self.log.error(hexdump(preamble_sfd))\n self._pkt = b\"\"\n continue\n\n expected_crc = struct.pack(\"<I\",\n (zlib.crc32(payload) & 0xFFFFFFFF))\n\n if crc32 != expected_crc:\n self.log.error(\"Incorrect CRC on received packet\")\n self.log.info(\"Expected: %s\" % (hexdump(expected_crc)))\n self.log.info(\"Received: %s\" % (hexdump(crc32)))\n\n # Use scapy to decode the packet\n if _have_scapy:\n p = Ether(payload)\n self.log.debug(\"Received decoded packet:\\n%s\" % p.show2())\n else:\n p = payload\n\n self._recv(p)\n self._pkt = b\"\"\n", "path": "cocotb/monitors/xgmii.py"}]} | 2,760 | 402 |
gh_patches_debug_9114 | rasdani/github-patches | git_diff | UTNkar__moore-183 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Centre drive embeds
### Description
Centre the drive embeds. Currently they look a bit weird.

</issue>
<code>
[start of src/google/models.py]
1 from datetime import date
2
3 from django.db import models
4 from django.utils.translation import ugettext_lazy as _
5 from wagtail.wagtailadmin.edit_handlers import FieldPanel, StreamFieldPanel, \
6 TabbedInterface, ObjectList
7 from wagtail.wagtailcore import blocks
8 from wagtail.wagtailcore.fields import StreamField, RichTextField
9 from wagtail.wagtailcore.models import Page
10 from wagtail.wagtailsearch import index
11
12 from blocks.models import WAGTAIL_STATIC_BLOCKTYPES
13 from utils.translation import TranslatedField
14
15
16 class GoogleFormBlock(blocks.StructBlock):
17 form_id = blocks.CharBlock()
18 height = blocks.IntegerBlock()
19
20 class Meta:
21 label = _('Google Form')
22 icon = 'fa-check-square-o'
23 template = 'google/blocks/form.html'
24 group = _('Meta')
25
26
27 class GoogleFormIndex(Page):
28 title_sv = models.CharField(max_length=255)
29 translated_title = TranslatedField('title', 'title_sv')
30
31 description_en = RichTextField(
32 verbose_name=_('English description'),
33 blank=True,
34 )
35 description_sv = RichTextField(
36 verbose_name=_('Swedish description'),
37 blank=True,
38 )
39 description = TranslatedField('description_en', 'description_sv')
40
41 # Editor panels configuration
42 content_panels = Page.content_panels + [
43 FieldPanel('title_sv', classname="full title"),
44 FieldPanel('description_en'),
45 FieldPanel('description_sv'),
46 ]
47
48 # Sub-page type rules
49 subpage_types = ['google.GoogleFormPage']
50
51 def get_context(self, request, **kwargs):
52 context = super(GoogleFormIndex, self).get_context(request, **kwargs)
53
54 # Add extra variables and return the updated context
55 context['google_forms'] = GoogleFormPage.objects.child_of(self).live()\
56 .order_by('-deadline')
57 return context
58
59
60 class GoogleFormPage(Page):
61 title_sv = models.CharField(max_length=255)
62 translated_title = TranslatedField('title', 'title_sv')
63
64 # TODO: Limit to one form!
65 form_en = StreamField([('google_form', GoogleFormBlock())])
66 form_sv = StreamField([('google_form', GoogleFormBlock())])
67 form = TranslatedField('form_en', 'form_sv')
68
69 deadline = models.DateField(verbose_name=_('Form deadline'))
70
71 results_en = StreamField(
72 WAGTAIL_STATIC_BLOCKTYPES,
73 blank=True,
74 )
75 results_sv = StreamField(
76 WAGTAIL_STATIC_BLOCKTYPES,
77 blank=True,
78 )
79 results = TranslatedField('results_en', 'results_sv')
80
81 @property
82 def is_past_due(self) -> bool:
83 return date.today() > self.deadline
84
85 # Editor panels configuration
86 content_panels = Page.content_panels + [
87 FieldPanel('title_sv', classname="full title"),
88 FieldPanel('deadline'),
89 StreamFieldPanel('form_en'),
90 StreamFieldPanel('form_sv'),
91 ]
92
93 edit_handler = TabbedInterface([
94 ObjectList(content_panels, heading=_('Common')),
95 ObjectList([StreamFieldPanel('results_en')], heading=_('English')),
96 ObjectList([StreamFieldPanel('results_sv')], heading=_('Swedish')),
97 ObjectList(
98 Page.promote_panels + Page.settings_panels, heading=_('Settings')
99 ),
100 ])
101
102 # Search index configuration
103 search_fields = Page.search_fields + [
104 index.SearchField('title_sv'),
105 index.FilterField('results_en'),
106 index.FilterField('results_sv'),
107 index.FilterField('deadline'),
108 ]
109
110 # Parent page / subpage type rules
111 parent_page_types = ['google.GoogleFormIndex']
112 subpage_types = []
113
114
115 class GoogleDriveBlock(blocks.StructBlock):
116 folder_id = blocks.CharBlock()
117 view = blocks.ChoiceBlock(choices=[
118 ('list', _('List')),
119 ('grid', _('Grid')),
120 ])
121 height = blocks.IntegerBlock()
122
123 class Meta:
124 label = _('Google Drive')
125 icon = 'fa-folder-open'
126 template = 'google/blocks/drive.html'
127
[end of src/google/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/google/models.py b/src/google/models.py
--- a/src/google/models.py
+++ b/src/google/models.py
@@ -21,7 +21,7 @@
label = _('Google Form')
icon = 'fa-check-square-o'
template = 'google/blocks/form.html'
- group = _('Meta')
+ group = _('Embed')
class GoogleFormIndex(Page):
@@ -124,3 +124,4 @@
label = _('Google Drive')
icon = 'fa-folder-open'
template = 'google/blocks/drive.html'
+ group = _('Embed')
| {"golden_diff": "diff --git a/src/google/models.py b/src/google/models.py\n--- a/src/google/models.py\n+++ b/src/google/models.py\n@@ -21,7 +21,7 @@\n label = _('Google Form')\n icon = 'fa-check-square-o'\n template = 'google/blocks/form.html'\n- group = _('Meta')\n+ group = _('Embed')\n \n \n class GoogleFormIndex(Page):\n@@ -124,3 +124,4 @@\n label = _('Google Drive')\n icon = 'fa-folder-open'\n template = 'google/blocks/drive.html'\n+ group = _('Embed')\n", "issue": "Centre drive embeds\n<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->\r\n\r\n### Description\r\n\r\nCenter drive embeds. Currently it looks a bit weird.\r\n\r\n\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. [First Step]\r\n2. [Second Step]\r\n3. [and so on...]\r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\r\n\n", "before_files": [{"content": "from datetime import date\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom wagtail.wagtailadmin.edit_handlers import FieldPanel, StreamFieldPanel, \\\n TabbedInterface, ObjectList\nfrom wagtail.wagtailcore import blocks\nfrom wagtail.wagtailcore.fields import StreamField, RichTextField\nfrom wagtail.wagtailcore.models import Page\nfrom wagtail.wagtailsearch import index\n\nfrom blocks.models import WAGTAIL_STATIC_BLOCKTYPES\nfrom utils.translation import TranslatedField\n\n\nclass GoogleFormBlock(blocks.StructBlock):\n form_id = blocks.CharBlock()\n height = blocks.IntegerBlock()\n\n class Meta:\n label = _('Google Form')\n icon = 'fa-check-square-o'\n template = 'google/blocks/form.html'\n group = _('Meta')\n\n\nclass GoogleFormIndex(Page):\n title_sv = models.CharField(max_length=255)\n translated_title = TranslatedField('title', 'title_sv')\n\n description_en = RichTextField(\n verbose_name=_('English description'),\n blank=True,\n )\n description_sv = RichTextField(\n verbose_name=_('Swedish description'),\n blank=True,\n )\n description = TranslatedField('description_en', 'description_sv')\n\n # Editor panels configuration\n content_panels = Page.content_panels + [\n FieldPanel('title_sv', classname=\"full title\"),\n FieldPanel('description_en'),\n FieldPanel('description_sv'),\n ]\n\n # Sub-page type rules\n subpage_types = ['google.GoogleFormPage']\n\n def get_context(self, request, **kwargs):\n context = super(GoogleFormIndex, self).get_context(request, **kwargs)\n\n # Add extra variables and return the updated context\n context['google_forms'] = GoogleFormPage.objects.child_of(self).live()\\\n .order_by('-deadline')\n return context\n\n\nclass GoogleFormPage(Page):\n title_sv = models.CharField(max_length=255)\n translated_title = TranslatedField('title', 'title_sv')\n\n # TODO: Limit to one form!\n form_en = StreamField([('google_form', GoogleFormBlock())])\n form_sv = StreamField([('google_form', GoogleFormBlock())])\n form = TranslatedField('form_en', 'form_sv')\n\n deadline = models.DateField(verbose_name=_('Form deadline'))\n\n results_en = StreamField(\n WAGTAIL_STATIC_BLOCKTYPES,\n blank=True,\n )\n results_sv = StreamField(\n WAGTAIL_STATIC_BLOCKTYPES,\n blank=True,\n )\n results = TranslatedField('results_en', 'results_sv')\n\n @property\n def is_past_due(self) -> bool:\n return date.today() > self.deadline\n\n # Editor panels configuration\n content_panels = Page.content_panels + [\n FieldPanel('title_sv', classname=\"full title\"),\n FieldPanel('deadline'),\n StreamFieldPanel('form_en'),\n 
StreamFieldPanel('form_sv'),\n ]\n\n edit_handler = TabbedInterface([\n ObjectList(content_panels, heading=_('Common')),\n ObjectList([StreamFieldPanel('results_en')], heading=_('English')),\n ObjectList([StreamFieldPanel('results_sv')], heading=_('Swedish')),\n ObjectList(\n Page.promote_panels + Page.settings_panels, heading=_('Settings')\n ),\n ])\n\n # Search index configuration\n search_fields = Page.search_fields + [\n index.SearchField('title_sv'),\n index.FilterField('results_en'),\n index.FilterField('results_sv'),\n index.FilterField('deadline'),\n ]\n\n # Parent page / subpage type rules\n parent_page_types = ['google.GoogleFormIndex']\n subpage_types = []\n\n\nclass GoogleDriveBlock(blocks.StructBlock):\n folder_id = blocks.CharBlock()\n view = blocks.ChoiceBlock(choices=[\n ('list', _('List')),\n ('grid', _('Grid')),\n ])\n height = blocks.IntegerBlock()\n\n class Meta:\n label = _('Google Drive')\n icon = 'fa-folder-open'\n template = 'google/blocks/drive.html'\n", "path": "src/google/models.py"}]} | 1,834 | 134 |
gh_patches_debug_41589 | rasdani/github-patches | git_diff | getsentry__sentry-python-851 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django 3.1 async views do not work
When using sentry versions greater than 0.16.3 (tested on 0.18.0), Django 3.1 async views do not work.
```
log.py 224 ERROR Internal Server Error: /async_ok
Traceback (most recent call last):
File "/Users/williamchu/dev/sentry-python/.tox/py3.8-django-3.1/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/Users/williamchu/dev/sentry-python/.tox/py3.8-django-3.1/lib/python3.8/site-packages/django/core/handlers/base.py", line 186, in _get_response
self.check_response(response, callback)
File "/Users/williamchu/dev/sentry-python/.tox/py3.8-django-3.1/lib/python3.8/site-packages/django/core/handlers/base.py", line 312, in check_response
raise ValueError(
ValueError: The view tests.integrations.django.myapp.views.async_ok didn't return an HttpResponse object. It returned an unawaited coroutine instead. You may need to add an 'await' into your view.
```
I have made a branch with a test case to demonstrate this: https://github.com/uptickmetachu/sentry-python/tree/django3.1-test-async-view
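The failure mode is a general one: a plain `def` wrapper around an async view returns the coroutine object without awaiting it, which Django 3.1's handler then rejects with the `ValueError` above. A minimal sketch of the dispatch pattern (hypothetical helper names, not the actual sentry-sdk code):

```python
import functools
from asyncio import iscoroutinefunction

def instrument_view(callback, start_span):
    """Wrap a Django view callback in a span; async views need an async wrapper."""
    if iscoroutinefunction(callback):
        @functools.wraps(callback)
        async def wrapped(request, *args, **kwargs):
            with start_span("django.view"):
                return await callback(request, *args, **kwargs)  # awaited here
    else:
        @functools.wraps(callback)
        def wrapped(request, *args, **kwargs):
            with start_span("django.view"):
                return callback(request, *args, **kwargs)
    return wrapped
```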
</issue>
<code>
[start of sentry_sdk/integrations/django/views.py]
1 from sentry_sdk.hub import Hub
2 from sentry_sdk._types import MYPY
3 from sentry_sdk import _functools
4
5 if MYPY:
6 from typing import Any
7
8
9 def patch_views():
10 # type: () -> None
11
12 from django.core.handlers.base import BaseHandler
13 from sentry_sdk.integrations.django import DjangoIntegration
14
15 old_make_view_atomic = BaseHandler.make_view_atomic
16
17 @_functools.wraps(old_make_view_atomic)
18 def sentry_patched_make_view_atomic(self, *args, **kwargs):
19 # type: (Any, *Any, **Any) -> Any
20 callback = old_make_view_atomic(self, *args, **kwargs)
21
22 # XXX: The wrapper function is created for every request. Find more
23 # efficient way to wrap views (or build a cache?)
24
25 hub = Hub.current
26 integration = hub.get_integration(DjangoIntegration)
27
28 if integration is not None and integration.middleware_spans:
29
30 @_functools.wraps(callback)
31 def sentry_wrapped_callback(request, *args, **kwargs):
32 # type: (Any, *Any, **Any) -> Any
33 with hub.start_span(
34 op="django.view", description=request.resolver_match.view_name
35 ):
36 return callback(request, *args, **kwargs)
37
38 else:
39 sentry_wrapped_callback = callback
40
41 return sentry_wrapped_callback
42
43 BaseHandler.make_view_atomic = sentry_patched_make_view_atomic
44
[end of sentry_sdk/integrations/django/views.py]
[start of sentry_sdk/integrations/django/asgi.py]
1 """
2 Instrumentation for Django 3.0
3
4 Since this file contains `async def` it is conditionally imported in
5 `sentry_sdk.integrations.django` (depending on the existence of
6 `django.core.handlers.asgi`.
7 """
8
9 from sentry_sdk import Hub
10 from sentry_sdk._types import MYPY
11
12 from sentry_sdk.integrations.django import DjangoIntegration
13 from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
14
15 if MYPY:
16 from typing import Any
17 from typing import Union
18
19 from django.http.response import HttpResponse
20
21
22 def patch_django_asgi_handler_impl(cls):
23 # type: (Any) -> None
24 old_app = cls.__call__
25
26 async def sentry_patched_asgi_handler(self, scope, receive, send):
27 # type: (Any, Any, Any, Any) -> Any
28 if Hub.current.get_integration(DjangoIntegration) is None:
29 return await old_app(self, scope, receive, send)
30
31 middleware = SentryAsgiMiddleware(
32 old_app.__get__(self, cls), unsafe_context_data=True
33 )._run_asgi3
34 return await middleware(scope, receive, send)
35
36 cls.__call__ = sentry_patched_asgi_handler
37
38
39 def patch_get_response_async(cls, _before_get_response):
40 # type: (Any, Any) -> None
41 old_get_response_async = cls.get_response_async
42
43 async def sentry_patched_get_response_async(self, request):
44 # type: (Any, Any) -> Union[HttpResponse, BaseException]
45 _before_get_response(request)
46 return await old_get_response_async(self, request)
47
48 cls.get_response_async = sentry_patched_get_response_async
49
50
51 def patch_channels_asgi_handler_impl(cls):
52 # type: (Any) -> None
53 old_app = cls.__call__
54
55 async def sentry_patched_asgi_handler(self, receive, send):
56 # type: (Any, Any, Any) -> Any
57 if Hub.current.get_integration(DjangoIntegration) is None:
58 return await old_app(self, receive, send)
59
60 middleware = SentryAsgiMiddleware(
61 lambda _scope: old_app.__get__(self, cls), unsafe_context_data=True
62 )
63
64 return await middleware(self.scope)(receive, send)
65
66 cls.__call__ = sentry_patched_asgi_handler
67
[end of sentry_sdk/integrations/django/asgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/django/asgi.py b/sentry_sdk/integrations/django/asgi.py
--- a/sentry_sdk/integrations/django/asgi.py
+++ b/sentry_sdk/integrations/django/asgi.py
@@ -6,10 +6,9 @@
`django.core.handlers.asgi`.
"""
-from sentry_sdk import Hub
+from sentry_sdk import Hub, _functools
from sentry_sdk._types import MYPY
-from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
if MYPY:
@@ -21,6 +20,9 @@
def patch_django_asgi_handler_impl(cls):
# type: (Any) -> None
+
+ from sentry_sdk.integrations.django import DjangoIntegration
+
old_app = cls.__call__
async def sentry_patched_asgi_handler(self, scope, receive, send):
@@ -50,6 +52,9 @@
def patch_channels_asgi_handler_impl(cls):
# type: (Any) -> None
+
+ from sentry_sdk.integrations.django import DjangoIntegration
+
old_app = cls.__call__
async def sentry_patched_asgi_handler(self, receive, send):
@@ -64,3 +69,17 @@
return await middleware(self.scope)(receive, send)
cls.__call__ = sentry_patched_asgi_handler
+
+
+def wrap_async_view(hub, callback):
+ # type: (Hub, Any) -> Any
+ @_functools.wraps(callback)
+ async def sentry_wrapped_callback(request, *args, **kwargs):
+ # type: (Any, *Any, **Any) -> Any
+
+ with hub.start_span(
+ op="django.view", description=request.resolver_match.view_name
+ ):
+ return await callback(request, *args, **kwargs)
+
+ return sentry_wrapped_callback
diff --git a/sentry_sdk/integrations/django/views.py b/sentry_sdk/integrations/django/views.py
--- a/sentry_sdk/integrations/django/views.py
+++ b/sentry_sdk/integrations/django/views.py
@@ -6,6 +6,18 @@
from typing import Any
+try:
+ from asyncio import iscoroutinefunction
+except ImportError:
+ iscoroutinefunction = None # type: ignore
+
+
+try:
+ from sentry_sdk.integrations.django.asgi import wrap_async_view
+except (ImportError, SyntaxError):
+ wrap_async_view = None # type: ignore
+
+
def patch_views():
# type: () -> None
@@ -27,13 +39,14 @@
if integration is not None and integration.middleware_spans:
- @_functools.wraps(callback)
- def sentry_wrapped_callback(request, *args, **kwargs):
- # type: (Any, *Any, **Any) -> Any
- with hub.start_span(
- op="django.view", description=request.resolver_match.view_name
- ):
- return callback(request, *args, **kwargs)
+ if (
+ iscoroutinefunction is not None
+ and wrap_async_view is not None
+ and iscoroutinefunction(callback)
+ ):
+ sentry_wrapped_callback = wrap_async_view(hub, callback)
+ else:
+ sentry_wrapped_callback = _wrap_sync_view(hub, callback)
else:
sentry_wrapped_callback = callback
@@ -41,3 +54,16 @@
return sentry_wrapped_callback
BaseHandler.make_view_atomic = sentry_patched_make_view_atomic
+
+
+def _wrap_sync_view(hub, callback):
+ # type: (Hub, Any) -> Any
+ @_functools.wraps(callback)
+ def sentry_wrapped_callback(request, *args, **kwargs):
+ # type: (Any, *Any, **Any) -> Any
+ with hub.start_span(
+ op="django.view", description=request.resolver_match.view_name
+ ):
+ return callback(request, *args, **kwargs)
+
+ return sentry_wrapped_callback
| {"golden_diff": "diff --git a/sentry_sdk/integrations/django/asgi.py b/sentry_sdk/integrations/django/asgi.py\n--- a/sentry_sdk/integrations/django/asgi.py\n+++ b/sentry_sdk/integrations/django/asgi.py\n@@ -6,10 +6,9 @@\n `django.core.handlers.asgi`.\n \"\"\"\n \n-from sentry_sdk import Hub\n+from sentry_sdk import Hub, _functools\n from sentry_sdk._types import MYPY\n \n-from sentry_sdk.integrations.django import DjangoIntegration\n from sentry_sdk.integrations.asgi import SentryAsgiMiddleware\n \n if MYPY:\n@@ -21,6 +20,9 @@\n \n def patch_django_asgi_handler_impl(cls):\n # type: (Any) -> None\n+\n+ from sentry_sdk.integrations.django import DjangoIntegration\n+\n old_app = cls.__call__\n \n async def sentry_patched_asgi_handler(self, scope, receive, send):\n@@ -50,6 +52,9 @@\n \n def patch_channels_asgi_handler_impl(cls):\n # type: (Any) -> None\n+\n+ from sentry_sdk.integrations.django import DjangoIntegration\n+\n old_app = cls.__call__\n \n async def sentry_patched_asgi_handler(self, receive, send):\n@@ -64,3 +69,17 @@\n return await middleware(self.scope)(receive, send)\n \n cls.__call__ = sentry_patched_asgi_handler\n+\n+\n+def wrap_async_view(hub, callback):\n+ # type: (Hub, Any) -> Any\n+ @_functools.wraps(callback)\n+ async def sentry_wrapped_callback(request, *args, **kwargs):\n+ # type: (Any, *Any, **Any) -> Any\n+\n+ with hub.start_span(\n+ op=\"django.view\", description=request.resolver_match.view_name\n+ ):\n+ return await callback(request, *args, **kwargs)\n+\n+ return sentry_wrapped_callback\ndiff --git a/sentry_sdk/integrations/django/views.py b/sentry_sdk/integrations/django/views.py\n--- a/sentry_sdk/integrations/django/views.py\n+++ b/sentry_sdk/integrations/django/views.py\n@@ -6,6 +6,18 @@\n from typing import Any\n \n \n+try:\n+ from asyncio import iscoroutinefunction\n+except ImportError:\n+ iscoroutinefunction = None # type: ignore\n+\n+\n+try:\n+ from sentry_sdk.integrations.django.asgi import wrap_async_view\n+except (ImportError, SyntaxError):\n+ wrap_async_view = None # type: ignore\n+\n+\n def patch_views():\n # type: () -> None\n \n@@ -27,13 +39,14 @@\n \n if integration is not None and integration.middleware_spans:\n \n- @_functools.wraps(callback)\n- def sentry_wrapped_callback(request, *args, **kwargs):\n- # type: (Any, *Any, **Any) -> Any\n- with hub.start_span(\n- op=\"django.view\", description=request.resolver_match.view_name\n- ):\n- return callback(request, *args, **kwargs)\n+ if (\n+ iscoroutinefunction is not None\n+ and wrap_async_view is not None\n+ and iscoroutinefunction(callback)\n+ ):\n+ sentry_wrapped_callback = wrap_async_view(hub, callback)\n+ else:\n+ sentry_wrapped_callback = _wrap_sync_view(hub, callback)\n \n else:\n sentry_wrapped_callback = callback\n@@ -41,3 +54,16 @@\n return sentry_wrapped_callback\n \n BaseHandler.make_view_atomic = sentry_patched_make_view_atomic\n+\n+\n+def _wrap_sync_view(hub, callback):\n+ # type: (Hub, Any) -> Any\n+ @_functools.wraps(callback)\n+ def sentry_wrapped_callback(request, *args, **kwargs):\n+ # type: (Any, *Any, **Any) -> Any\n+ with hub.start_span(\n+ op=\"django.view\", description=request.resolver_match.view_name\n+ ):\n+ return callback(request, *args, **kwargs)\n+\n+ return sentry_wrapped_callback\n", "issue": "Django 3.1 async views do not work\nWhen using sentry versions greater than 0.16.3, (tested on 0.18.0), Django 3.1 aysnc views do not work.\r\n\r\n```\r\nlog.py 224 ERROR Internal Server Error: /async_ok\r\nTraceback (most recent call last):\r\n File 
\"/Users/williamchu/dev/sentry-python/.tox/py3.8-django-3.1/lib/python3.8/site-packages/django/core/handlers/exception.py\", line 47, in inner\r\n response = get_response(request)\r\n File \"/Users/williamchu/dev/sentry-python/.tox/py3.8-django-3.1/lib/python3.8/site-packages/django/core/handlers/base.py\", line 186, in _get_response\r\n self.check_response(response, callback)\r\n File \"/Users/williamchu/dev/sentry-python/.tox/py3.8-django-3.1/lib/python3.8/site-packages/django/core/handlers/base.py\", line 312, in check_response\r\n raise ValueError(\r\nValueError: The view tests.integrations.django.myapp.views.async_ok didn't return an HttpResponse object. It returned an unawaited coroutine instead. You may need to add an 'await' into your view.\r\n```\r\n\r\nI have made a branch with a test case to demonstrate this: https://github.com/uptickmetachu/sentry-python/tree/django3.1-test-async-view\r\n\r\n\n", "before_files": [{"content": "from sentry_sdk.hub import Hub\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk import _functools\n\nif MYPY:\n from typing import Any\n\n\ndef patch_views():\n # type: () -> None\n\n from django.core.handlers.base import BaseHandler\n from sentry_sdk.integrations.django import DjangoIntegration\n\n old_make_view_atomic = BaseHandler.make_view_atomic\n\n @_functools.wraps(old_make_view_atomic)\n def sentry_patched_make_view_atomic(self, *args, **kwargs):\n # type: (Any, *Any, **Any) -> Any\n callback = old_make_view_atomic(self, *args, **kwargs)\n\n # XXX: The wrapper function is created for every request. Find more\n # efficient way to wrap views (or build a cache?)\n\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n\n if integration is not None and integration.middleware_spans:\n\n @_functools.wraps(callback)\n def sentry_wrapped_callback(request, *args, **kwargs):\n # type: (Any, *Any, **Any) -> Any\n with hub.start_span(\n op=\"django.view\", description=request.resolver_match.view_name\n ):\n return callback(request, *args, **kwargs)\n\n else:\n sentry_wrapped_callback = callback\n\n return sentry_wrapped_callback\n\n BaseHandler.make_view_atomic = sentry_patched_make_view_atomic\n", "path": "sentry_sdk/integrations/django/views.py"}, {"content": "\"\"\"\nInstrumentation for Django 3.0\n\nSince this file contains `async def` it is conditionally imported in\n`sentry_sdk.integrations.django` (depending on the existence of\n`django.core.handlers.asgi`.\n\"\"\"\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._types import MYPY\n\nfrom sentry_sdk.integrations.django import DjangoIntegration\nfrom sentry_sdk.integrations.asgi import SentryAsgiMiddleware\n\nif MYPY:\n from typing import Any\n from typing import Union\n\n from django.http.response import HttpResponse\n\n\ndef patch_django_asgi_handler_impl(cls):\n # type: (Any) -> None\n old_app = cls.__call__\n\n async def sentry_patched_asgi_handler(self, scope, receive, send):\n # type: (Any, Any, Any, Any) -> Any\n if Hub.current.get_integration(DjangoIntegration) is None:\n return await old_app(self, scope, receive, send)\n\n middleware = SentryAsgiMiddleware(\n old_app.__get__(self, cls), unsafe_context_data=True\n )._run_asgi3\n return await middleware(scope, receive, send)\n\n cls.__call__ = sentry_patched_asgi_handler\n\n\ndef patch_get_response_async(cls, _before_get_response):\n # type: (Any, Any) -> None\n old_get_response_async = cls.get_response_async\n\n async def sentry_patched_get_response_async(self, request):\n # type: (Any, Any) -> Union[HttpResponse, 
BaseException]\n _before_get_response(request)\n return await old_get_response_async(self, request)\n\n cls.get_response_async = sentry_patched_get_response_async\n\n\ndef patch_channels_asgi_handler_impl(cls):\n # type: (Any) -> None\n old_app = cls.__call__\n\n async def sentry_patched_asgi_handler(self, receive, send):\n # type: (Any, Any, Any) -> Any\n if Hub.current.get_integration(DjangoIntegration) is None:\n return await old_app(self, receive, send)\n\n middleware = SentryAsgiMiddleware(\n lambda _scope: old_app.__get__(self, cls), unsafe_context_data=True\n )\n\n return await middleware(self.scope)(receive, send)\n\n cls.__call__ = sentry_patched_asgi_handler\n", "path": "sentry_sdk/integrations/django/asgi.py"}]} | 1,957 | 954 |
gh_patches_debug_13321 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1587 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PearsonCorrCoef UnboundLocalError using ddp
I want to use PearsonCorrCoef in my pytorch lightning code with ddp strategy.
My `num_outputs` is greater than 1.
In `x_step_end` I call `CorrCoef.update(preds, targets)`.
In `x_epoch_end` I call `CorrCoef.compute().mean()`.
I always receive:
```
File "/home/xx/Documents/anconda3/envs/xx/lib/python3.8/site-packages/torchmetrics/metric.py", line 531, in wrapped_func
value = compute(*args, **kwargs)
File "/home/xx/Documents/anconda3/envs/xx/lib/python3.8/site-packages/torchmetrics/regression/pearson.py", line 152, in compute
_, _, var_x, var_y, corr_xy, n_total = _final_aggregation(
File "/home/xx/Documents/anconda3/envs/xx/lib/python3.8/site-packages/torchmetrics/regression/pearson.py", line 63, in _final_aggregation
return mean_x, mean_y, var_x, var_y, corr_xy, nb
UnboundLocalError: local variable 'mean_x' referenced before assignment
```
But when I change to dp strategy, it's fine.
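The root cause is visible in `_final_aggregation`: `mean_x` (and the other returned names) are only assigned inside the `for i in range(1, len(means_x))` loop, so when only a single set of statistics reaches it, the loop body never runs and the `return` hits unbound names. A minimal sketch of that failure mode, with a stand-in for the real merge formula:

```python
def aggregate(means):
    # Mirrors _final_aggregation: `running` is seeded from element 0, but the
    # returned name `mean` is only ever bound inside the merge loop.
    running = means[0]
    for i in range(1, len(means)):
        mean = (running + means[i]) / 2  # stand-in for the real merge formula
        running = mean
    return mean  # UnboundLocalError when len(means) == 1: the loop never runs

print(aggregate([1.0, 2.0, 4.0]))  # fine: the loop runs at least once
try:
    aggregate([1.0])
except UnboundLocalError as exc:
    print(exc)  # e.g. local variable 'mean' referenced before assignment
```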
</issue>
<code>
[start of src/torchmetrics/regression/pearson.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, List, Tuple
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.functional.regression.pearson import _pearson_corrcoef_compute, _pearson_corrcoef_update
20 from torchmetrics.metric import Metric
21
22
23 def _final_aggregation(
24 means_x: Tensor,
25 means_y: Tensor,
26 vars_x: Tensor,
27 vars_y: Tensor,
28 corrs_xy: Tensor,
29 nbs: Tensor,
30 ) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:
31 """Aggregate the statistics from multiple devices.
32
33 Formula taken from here: `Aggregate the statistics from multiple devices`_
34 """
35 # assert len(means_x) > 1 and len(means_y) > 1 and len(vars_x) > 1 and len(vars_y) > 1 and len(corrs_xy) > 1
36 mx1, my1, vx1, vy1, cxy1, n1 = means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]
37 for i in range(1, len(means_x)):
38 mx2, my2, vx2, vy2, cxy2, n2 = means_x[i], means_y[i], vars_x[i], vars_y[i], corrs_xy[i], nbs[i]
39 nb = n1 + n2
40 mean_x = (n1 * mx1 + n2 * mx2) / nb
41 mean_y = (n1 * my1 + n2 * my2) / nb
42
43 # var_x
44 element_x1 = (n1 + 1) * mean_x - n1 * mx1
45 vx1 += (element_x1 - mx1) * (element_x1 - mean_x) - (element_x1 - mean_x) ** 2
46 element_x2 = (n2 + 1) * mean_x - n2 * mx2
47 vx2 += (element_x2 - mx2) * (element_x2 - mean_x) - (element_x2 - mean_x) ** 2
48 var_x = vx1 + vx2
49
50 # var_y
51 element_y1 = (n1 + 1) * mean_y - n1 * my1
52 vy1 += (element_y1 - my1) * (element_y1 - mean_y) - (element_y1 - mean_y) ** 2
53 element_y2 = (n2 + 1) * mean_y - n2 * my2
54 vy2 += (element_y2 - my2) * (element_y2 - mean_y) - (element_y2 - mean_y) ** 2
55 var_y = vy1 + vy2
56
57 # corr
58 cxy1 += (element_x1 - mx1) * (element_y1 - mean_y) - (element_x1 - mean_x) * (element_y1 - mean_y)
59 cxy2 += (element_x2 - mx2) * (element_y2 - mean_y) - (element_x2 - mean_x) * (element_y2 - mean_y)
60 corr_xy = cxy1 + cxy2
61
62 mx1, my1, vx1, vy1, cxy1, n1 = mean_x, mean_y, var_x, var_y, corr_xy, nb
63 return mean_x, mean_y, var_x, var_y, corr_xy, nb
64
65
66 class PearsonCorrCoef(Metric):
67 r"""Compute `Pearson Correlation Coefficient`_.
68
69 .. math::
70 P_{corr}(x,y) = \frac{cov(x,y)}{\sigma_x \sigma_y}
71
72 Where :math:`y` is a tensor of target values, and :math:`x` is a tensor of predictions.
73
74 As input to ``forward`` and ``update`` the metric accepts the following input:
75
76 - ``preds`` (:class:`~torch.Tensor`): either single output float tensor with shape ``(N,)``
77 or multioutput float tensor of shape ``(N,d)``
78 - ``target`` (:class:`~torch.Tensor`): either single output tensor with shape ``(N,)``
79 or multioutput tensor of shape ``(N,d)``
80
81 As output of ``forward`` and ``compute`` the metric returns the following output:
82
83 - ``pearson`` (:class:`~torch.Tensor`): A tensor with the Pearson Correlation Coefficient
84
85 Args:
86 num_outputs: Number of outputs in multioutput setting
87 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
88
89 Example (single output regression):
90 >>> from torchmetrics import PearsonCorrCoef
91 >>> target = torch.tensor([3, -0.5, 2, 7])
92 >>> preds = torch.tensor([2.5, 0.0, 2, 8])
93 >>> pearson = PearsonCorrCoef()
94 >>> pearson(preds, target)
95 tensor(0.9849)
96
97 Example (multi output regression):
98 >>> from torchmetrics import PearsonCorrCoef
99 >>> target = torch.tensor([[3, -0.5], [2, 7]])
100 >>> preds = torch.tensor([[2.5, 0.0], [2, 8]])
101 >>> pearson = PearsonCorrCoef(num_outputs=2)
102 >>> pearson(preds, target)
103 tensor([1., 1.])
104 """
105 is_differentiable = True
106 higher_is_better = None # both -1 and 1 are optimal
107 full_state_update: bool = True
108 preds: List[Tensor]
109 target: List[Tensor]
110 mean_x: Tensor
111 mean_y: Tensor
112 var_x: Tensor
113 var_y: Tensor
114 corr_xy: Tensor
115 n_total: Tensor
116
117 def __init__(
118 self,
119 num_outputs: int = 1,
120 **kwargs: Any,
121 ) -> None:
122 super().__init__(**kwargs)
123 if not isinstance(num_outputs, int) and num_outputs < 1:
124 raise ValueError("Expected argument `num_outputs` to be an int larger than 0, but got {num_outputs}")
125 self.num_outputs = num_outputs
126
127 self.add_state("mean_x", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)
128 self.add_state("mean_y", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)
129 self.add_state("var_x", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)
130 self.add_state("var_y", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)
131 self.add_state("corr_xy", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)
132 self.add_state("n_total", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)
133
134 def update(self, preds: Tensor, target: Tensor) -> None:
135 """Update state with predictions and targets."""
136 self.mean_x, self.mean_y, self.var_x, self.var_y, self.corr_xy, self.n_total = _pearson_corrcoef_update(
137 preds,
138 target,
139 self.mean_x,
140 self.mean_y,
141 self.var_x,
142 self.var_y,
143 self.corr_xy,
144 self.n_total,
145 self.num_outputs,
146 )
147
148 def compute(self) -> Tensor:
149 """Compute pearson correlation coefficient over state."""
150 if (self.num_outputs == 1 and self.mean_x.numel() > 1) or (self.num_outputs > 1 and self.mean_x.ndim > 1):
151 # multiple devices, need further reduction
152 _, _, var_x, var_y, corr_xy, n_total = _final_aggregation(
153 self.mean_x, self.mean_y, self.var_x, self.var_y, self.corr_xy, self.n_total
154 )
155 else:
156 var_x = self.var_x
157 var_y = self.var_y
158 corr_xy = self.corr_xy
159 n_total = self.n_total
160 return _pearson_corrcoef_compute(var_x, var_y, corr_xy, n_total)
161
[end of src/torchmetrics/regression/pearson.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/regression/pearson.py b/src/torchmetrics/regression/pearson.py
--- a/src/torchmetrics/regression/pearson.py
+++ b/src/torchmetrics/regression/pearson.py
@@ -32,7 +32,8 @@
Formula taken from here: `Aggregate the statistics from multiple devices`_
"""
- # assert len(means_x) > 1 and len(means_y) > 1 and len(vars_x) > 1 and len(vars_y) > 1 and len(corrs_xy) > 1
+ if len(means_x) == 1:
+ return means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]
mx1, my1, vx1, vy1, cxy1, n1 = means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]
for i in range(1, len(means_x)):
mx2, my2, vx2, vy2, cxy2, n2 = means_x[i], means_y[i], vars_x[i], vars_y[i], corrs_xy[i], nbs[i]
| {"golden_diff": "diff --git a/src/torchmetrics/regression/pearson.py b/src/torchmetrics/regression/pearson.py\n--- a/src/torchmetrics/regression/pearson.py\n+++ b/src/torchmetrics/regression/pearson.py\n@@ -32,7 +32,8 @@\n \n Formula taken from here: `Aggregate the statistics from multiple devices`_\n \"\"\"\n- # assert len(means_x) > 1 and len(means_y) > 1 and len(vars_x) > 1 and len(vars_y) > 1 and len(corrs_xy) > 1\n+ if len(means_x) == 1:\n+ return means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]\n mx1, my1, vx1, vy1, cxy1, n1 = means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]\n for i in range(1, len(means_x)):\n mx2, my2, vx2, vy2, cxy2, n2 = means_x[i], means_y[i], vars_x[i], vars_y[i], corrs_xy[i], nbs[i]\n", "issue": "PearsonCorrCoef UnboundLocalError using ddp\nI want to use PearsonCorrCoef in my pytorch lightning code with ddp strategy.\r\nMy num_outputs > 1\r\nIn x_step_end, I use CorrCoef.update(preds, targets)\r\nIn x_epoch_end, I use CorrCoef.compute().mean()\r\nAlways received\r\n\r\n```\r\n File \"/home/xx/Documents/anconda3/envs/xx/lib/python3.8/site-packages/torchmetrics/metric.py\", line 531, in wrapped_func\r\n value = compute(*args, **kwargs)\r\n File \"/home/xx/Documents/anconda3/envs/xx/lib/python3.8/site-packages/torchmetrics/regression/pearson.py\", line 152, in compute\r\n _, _, var_x, var_y, corr_xy, n_total = _final_aggregation(\r\n File \"/home/xx/Documents/anconda3/envs/xx/lib/python3.8/site-packages/torchmetrics/regression/pearson.py\", line 63, in _final_aggregation\r\n return mean_x, mean_y, var_x, var_y, corr_xy, nb\r\nUnboundLocalError: local variable 'mean_x' referenced before assignment\r\n```\r\n\r\nBut when I change to dp strategy, it's fine.\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, List, Tuple\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.regression.pearson import _pearson_corrcoef_compute, _pearson_corrcoef_update\nfrom torchmetrics.metric import Metric\n\n\ndef _final_aggregation(\n means_x: Tensor,\n means_y: Tensor,\n vars_x: Tensor,\n vars_y: Tensor,\n corrs_xy: Tensor,\n nbs: Tensor,\n) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:\n \"\"\"Aggregate the statistics from multiple devices.\n\n Formula taken from here: `Aggregate the statistics from multiple devices`_\n \"\"\"\n # assert len(means_x) > 1 and len(means_y) > 1 and len(vars_x) > 1 and len(vars_y) > 1 and len(corrs_xy) > 1\n mx1, my1, vx1, vy1, cxy1, n1 = means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]\n for i in range(1, len(means_x)):\n mx2, my2, vx2, vy2, cxy2, n2 = means_x[i], means_y[i], vars_x[i], vars_y[i], corrs_xy[i], nbs[i]\n nb = n1 + n2\n mean_x = (n1 * mx1 + n2 * mx2) / nb\n mean_y = (n1 * my1 + n2 * my2) / nb\n\n # var_x\n element_x1 = (n1 + 1) * mean_x - n1 * mx1\n vx1 += (element_x1 - mx1) * (element_x1 - mean_x) - (element_x1 - mean_x) ** 2\n element_x2 = (n2 + 1) * mean_x - 
n2 * mx2\n vx2 += (element_x2 - mx2) * (element_x2 - mean_x) - (element_x2 - mean_x) ** 2\n var_x = vx1 + vx2\n\n # var_y\n element_y1 = (n1 + 1) * mean_y - n1 * my1\n vy1 += (element_y1 - my1) * (element_y1 - mean_y) - (element_y1 - mean_y) ** 2\n element_y2 = (n2 + 1) * mean_y - n2 * my2\n vy2 += (element_y2 - my2) * (element_y2 - mean_y) - (element_y2 - mean_y) ** 2\n var_y = vy1 + vy2\n\n # corr\n cxy1 += (element_x1 - mx1) * (element_y1 - mean_y) - (element_x1 - mean_x) * (element_y1 - mean_y)\n cxy2 += (element_x2 - mx2) * (element_y2 - mean_y) - (element_x2 - mean_x) * (element_y2 - mean_y)\n corr_xy = cxy1 + cxy2\n\n mx1, my1, vx1, vy1, cxy1, n1 = mean_x, mean_y, var_x, var_y, corr_xy, nb\n return mean_x, mean_y, var_x, var_y, corr_xy, nb\n\n\nclass PearsonCorrCoef(Metric):\n r\"\"\"Compute `Pearson Correlation Coefficient`_.\n\n .. math::\n P_{corr}(x,y) = \\frac{cov(x,y)}{\\sigma_x \\sigma_y}\n\n Where :math:`y` is a tensor of target values, and :math:`x` is a tensor of predictions.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): either single output float tensor with shape ``(N,)``\n or multioutput float tensor of shape ``(N,d)``\n - ``target`` (:class:`~torch.Tensor`): either single output tensor with shape ``(N,)``\n or multioutput tensor of shape ``(N,d)``\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``pearson`` (:class:`~torch.Tensor`): A tensor with the Pearson Correlation Coefficient\n\n Args:\n num_outputs: Number of outputs in multioutput setting\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example (single output regression):\n >>> from torchmetrics import PearsonCorrCoef\n >>> target = torch.tensor([3, -0.5, 2, 7])\n >>> preds = torch.tensor([2.5, 0.0, 2, 8])\n >>> pearson = PearsonCorrCoef()\n >>> pearson(preds, target)\n tensor(0.9849)\n\n Example (multi output regression):\n >>> from torchmetrics import PearsonCorrCoef\n >>> target = torch.tensor([[3, -0.5], [2, 7]])\n >>> preds = torch.tensor([[2.5, 0.0], [2, 8]])\n >>> pearson = PearsonCorrCoef(num_outputs=2)\n >>> pearson(preds, target)\n tensor([1., 1.])\n \"\"\"\n is_differentiable = True\n higher_is_better = None # both -1 and 1 are optimal\n full_state_update: bool = True\n preds: List[Tensor]\n target: List[Tensor]\n mean_x: Tensor\n mean_y: Tensor\n var_x: Tensor\n var_y: Tensor\n corr_xy: Tensor\n n_total: Tensor\n\n def __init__(\n self,\n num_outputs: int = 1,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n if not isinstance(num_outputs, int) and num_outputs < 1:\n raise ValueError(\"Expected argument `num_outputs` to be an int larger than 0, but got {num_outputs}\")\n self.num_outputs = num_outputs\n\n self.add_state(\"mean_x\", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)\n self.add_state(\"mean_y\", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)\n self.add_state(\"var_x\", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)\n self.add_state(\"var_y\", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)\n self.add_state(\"corr_xy\", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)\n self.add_state(\"n_total\", default=torch.zeros(self.num_outputs), dist_reduce_fx=None)\n\n def update(self, preds: Tensor, target: Tensor) -> None:\n \"\"\"Update state with predictions and targets.\"\"\"\n self.mean_x, self.mean_y, self.var_x, self.var_y, self.corr_xy, self.n_total = 
_pearson_corrcoef_update(\n preds,\n target,\n self.mean_x,\n self.mean_y,\n self.var_x,\n self.var_y,\n self.corr_xy,\n self.n_total,\n self.num_outputs,\n )\n\n def compute(self) -> Tensor:\n \"\"\"Compute pearson correlation coefficient over state.\"\"\"\n if (self.num_outputs == 1 and self.mean_x.numel() > 1) or (self.num_outputs > 1 and self.mean_x.ndim > 1):\n # multiple devices, need further reduction\n _, _, var_x, var_y, corr_xy, n_total = _final_aggregation(\n self.mean_x, self.mean_y, self.var_x, self.var_y, self.corr_xy, self.n_total\n )\n else:\n var_x = self.var_x\n var_y = self.var_y\n corr_xy = self.corr_xy\n n_total = self.n_total\n return _pearson_corrcoef_compute(var_x, var_y, corr_xy, n_total)\n", "path": "src/torchmetrics/regression/pearson.py"}]} | 3,076 | 284 |
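For reference, the pearson.py patch above resolves the reported `UnboundLocalError` by short-circuiting `_final_aggregation` when only one process contributed statistics. A minimal standalone sketch of that guard (the function name is illustrative and the multi-device pairwise merge is elided):

```python
def aggregate_pearson_states(means_x, means_y, vars_x, vars_y, corrs_xy, nbs):
    # Single device/process: nothing to merge, so return the sole entry
    # instead of entering the pairwise loop whose locals (mean_x, ...)
    # would otherwise never be assigned.
    if len(means_x) == 1:
        return means_x[0], means_y[0], vars_x[0], vars_y[0], corrs_xy[0], nbs[0]
    # The pairwise merge over entries 1..n-1 from the patched function
    # would follow here; it is omitted in this sketch.
    raise NotImplementedError("multi-device merge elided")
```

Called with the per-device state lists gathered under DDP, this returns the reduced statistics directly when the lists have length one, which is exactly the single-process case that previously crashed.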
gh_patches_debug_35141 | rasdani/github-patches | git_diff | modin-project__modin-1137 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Parallel agents for TeamCity
Right now, TeamCity is taking >5 hours to complete. We should create parallel agents for non-dependent builds to run concurrently. The breakdown should be as follows:
1. `MODIN_ENGINE=ray`, `MODIN_ENGINE=dask`, `MODIN_ENGINE=python`
a. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameReduction_A`
b. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameReduction_B`
c. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameBinary`
d. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameMapMetadata`
e. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameUDF`
f. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameDefault`
g. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameWindow`
h. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameIndexing`
i. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameIter`
j. `python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameJoinSort`
k. `python -m pytest modin/pandas/test/test_groupby.py`
l. `python -m pytest modin/pandas/test/test_series.py modin/pandas/test/test_concat.py modin/pandas/test/test_reshape.py modin/pandas/test/test_general.py modin/pandas/test/test_io.py modin/pandas/test/test_io_exp.py`
2. `MODIN_ENGINE=ray MODIN_EXPERIMENTAL=True MODIN_BACKEND=pyarrow python -m pytest modin/pandas/test/test_io.py::test_from_csv`
In total, 37 agents. This does not include agents we will need for Windows and MacOS.
</issue>
<code>
[start of modin/pandas/__init__.py]
1 import pandas
2
3 __pandas_version__ = "1.0.1"
4
5 if pandas.__version__ != __pandas_version__:
6 import warnings
7
8 warnings.warn(
9 "The pandas version installed does not match the required pandas version in "
10 "Modin. This may cause undesired side effects!".format(__pandas_version__)
11 )
12
13 from pandas import (
14 eval,
15 unique,
16 value_counts,
17 cut,
18 to_numeric,
19 factorize,
20 test,
21 qcut,
22 date_range,
23 period_range,
24 Index,
25 MultiIndex,
26 CategoricalIndex,
27 bdate_range,
28 DatetimeIndex,
29 Timedelta,
30 Timestamp,
31 to_timedelta,
32 set_eng_float_format,
33 options,
34 set_option,
35 NaT,
36 PeriodIndex,
37 Categorical,
38 Interval,
39 UInt8Dtype,
40 UInt16Dtype,
41 UInt32Dtype,
42 UInt64Dtype,
43 SparseDtype,
44 Int8Dtype,
45 Int16Dtype,
46 Int32Dtype,
47 Int64Dtype,
48 StringDtype,
49 BooleanDtype,
50 CategoricalDtype,
51 DatetimeTZDtype,
52 IntervalDtype,
53 PeriodDtype,
54 RangeIndex,
55 Int64Index,
56 UInt64Index,
57 Float64Index,
58 TimedeltaIndex,
59 IntervalIndex,
60 IndexSlice,
61 Grouper,
62 array,
63 Period,
64 show_versions,
65 DateOffset,
66 timedelta_range,
67 infer_freq,
68 interval_range,
69 ExcelWriter,
70 datetime,
71 NamedAgg,
72 NA,
73 )
74 import threading
75 import os
76 import types
77 import sys
78
79 from .. import __version__
80 from .concat import concat
81 from .dataframe import DataFrame
82 from .datetimes import to_datetime
83 from .io import (
84 read_csv,
85 read_parquet,
86 read_json,
87 read_html,
88 read_clipboard,
89 read_excel,
90 read_hdf,
91 read_feather,
92 read_stata,
93 read_sas,
94 read_pickle,
95 read_sql,
96 read_gbq,
97 read_table,
98 read_fwf,
99 read_sql_table,
100 read_sql_query,
101 read_spss,
102 ExcelFile,
103 to_pickle,
104 HDFStore,
105 json_normalize,
106 read_orc,
107 )
108 from .reshape import get_dummies, melt, crosstab, lreshape, wide_to_long
109 from .series import Series
110 from .general import (
111 isna,
112 isnull,
113 merge,
114 merge_asof,
115 merge_ordered,
116 pivot_table,
117 notnull,
118 notna,
119 pivot,
120 )
121 from .plotting import Plotting as plotting
122 from .. import __execution_engine__ as execution_engine
123
124 # Set this so that Pandas doesn't try to multithread by itself
125 os.environ["OMP_NUM_THREADS"] = "1"
126 num_cpus = 1
127
128
129 def initialize_ray():
130 import ray
131
132 """Initializes ray based on environment variables and internal defaults."""
133 if threading.current_thread().name == "MainThread":
134 import secrets
135
136 plasma_directory = None
137 cluster = os.environ.get("MODIN_RAY_CLUSTER", None)
138 redis_address = os.environ.get("MODIN_REDIS_ADDRESS", None)
139 redis_password = secrets.token_hex(16)
140 if cluster == "True" and redis_address is not None:
141 # We only start ray in a cluster setting for the head node.
142 ray.init(
143 include_webui=False,
144 ignore_reinit_error=True,
145 redis_address=redis_address,
146 redis_password=redis_password,
147 logging_level=100,
148 )
149 elif cluster is None:
150 object_store_memory = os.environ.get("MODIN_MEMORY", None)
151 if os.environ.get("MODIN_OUT_OF_CORE", "False").title() == "True":
152 from tempfile import gettempdir
153
154 plasma_directory = gettempdir()
155 # We may have already set the memory from the environment variable, we don't
156 # want to overwrite that value if we have.
157 if object_store_memory is None:
158 # Round down to the nearest Gigabyte.
159 mem_bytes = ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9
160 # Default to 8x memory for out of core
161 object_store_memory = 8 * mem_bytes
162 # In case anything failed above, we can still improve the memory for Modin.
163 if object_store_memory is None:
164 # Round down to the nearest Gigabyte.
165 object_store_memory = int(
166 0.6 * ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9
167 )
168 # If the memory pool is smaller than 2GB, just use the default in ray.
169 if object_store_memory == 0:
170 object_store_memory = None
171 else:
172 object_store_memory = int(object_store_memory)
173 ray.init(
174 include_webui=False,
175 ignore_reinit_error=True,
176 plasma_directory=plasma_directory,
177 object_store_memory=object_store_memory,
178 redis_address=redis_address,
179 redis_password=redis_password,
180 logging_level=100,
181 memory=object_store_memory,
182 )
183 # Register custom serializer for method objects to avoid warning message.
184 # We serialize `MethodType` objects when we use AxisPartition operations.
185 ray.register_custom_serializer(types.MethodType, use_pickle=True)
186
187 # Register a fix import function to run on all_workers including the driver.
188 # This is a hack solution to fix #647, #746
189 def move_stdlib_ahead_of_site_packages(*args):
190 site_packages_path = None
191 site_packages_path_index = -1
192 for i, path in enumerate(sys.path):
193 if sys.exec_prefix in path and path.endswith("site-packages"):
194 site_packages_path = path
195 site_packages_path_index = i
196 # break on first found
197 break
198
199 if site_packages_path is not None:
200 # stdlib packages layout as follows:
201 # - python3.x
202 # - typing.py
203 # - site-packages/
204 # - pandas
205 # So extracting the dirname of the site_packages can point us
206 # to the directory containing standard libraries.
207 sys.path.insert(
208 site_packages_path_index, os.path.dirname(site_packages_path)
209 )
210
211 move_stdlib_ahead_of_site_packages()
212 ray.worker.global_worker.run_function_on_all_workers(
213 move_stdlib_ahead_of_site_packages
214 )
215
216
217 if execution_engine == "Ray":
218 import ray
219
220 initialize_ray()
221 num_cpus = ray.cluster_resources()["CPU"]
222 elif execution_engine == "Dask": # pragma: no cover
223 from distributed.client import get_client
224 import warnings
225
226 if threading.current_thread().name == "MainThread":
227 warnings.warn("The Dask Engine for Modin is experimental.")
228 try:
229 client = get_client()
230 except ValueError:
231 from distributed import Client
232 import multiprocessing
233
234 num_cpus = multiprocessing.cpu_count()
235 client = Client(n_workers=num_cpus)
236 elif execution_engine != "Python":
237 raise ImportError("Unrecognized execution engine: {}.".format(execution_engine))
238
239 DEFAULT_NPARTITIONS = max(4, int(num_cpus))
240
241 __all__ = [
242 "DataFrame",
243 "Series",
244 "read_csv",
245 "read_parquet",
246 "read_json",
247 "read_html",
248 "read_clipboard",
249 "read_excel",
250 "read_hdf",
251 "read_feather",
252 "read_stata",
253 "read_sas",
254 "read_pickle",
255 "read_sql",
256 "read_gbq",
257 "read_table",
258 "read_spss",
259 "read_orc",
260 "json_normalize",
261 "concat",
262 "eval",
263 "unique",
264 "value_counts",
265 "cut",
266 "to_numeric",
267 "factorize",
268 "test",
269 "qcut",
270 "to_datetime",
271 "get_dummies",
272 "isna",
273 "isnull",
274 "merge",
275 "pivot_table",
276 "date_range",
277 "Index",
278 "MultiIndex",
279 "Series",
280 "bdate_range",
281 "period_range",
282 "DatetimeIndex",
283 "to_timedelta",
284 "set_eng_float_format",
285 "options",
286 "set_option",
287 "CategoricalIndex",
288 "Timedelta",
289 "Timestamp",
290 "NaT",
291 "PeriodIndex",
292 "Categorical",
293 "__version__",
294 "melt",
295 "crosstab",
296 "plotting",
297 "Interval",
298 "UInt8Dtype",
299 "UInt16Dtype",
300 "UInt32Dtype",
301 "UInt64Dtype",
302 "SparseDtype",
303 "Int8Dtype",
304 "Int16Dtype",
305 "Int32Dtype",
306 "Int64Dtype",
307 "CategoricalDtype",
308 "DatetimeTZDtype",
309 "IntervalDtype",
310 "PeriodDtype",
311 "BooleanDtype",
312 "StringDtype",
313 "NA",
314 "RangeIndex",
315 "Int64Index",
316 "UInt64Index",
317 "Float64Index",
318 "TimedeltaIndex",
319 "IntervalIndex",
320 "IndexSlice",
321 "Grouper",
322 "array",
323 "Period",
324 "show_versions",
325 "DateOffset",
326 "timedelta_range",
327 "infer_freq",
328 "interval_range",
329 "ExcelWriter",
330 "read_fwf",
331 "read_sql_table",
332 "read_sql_query",
333 "ExcelFile",
334 "to_pickle",
335 "HDFStore",
336 "lreshape",
337 "wide_to_long",
338 "merge_asof",
339 "merge_ordered",
340 "notnull",
341 "notna",
342 "pivot",
343 "datetime",
344 "NamedAgg",
345 "DEFAULT_NPARTITIONS",
346 ]
347
348 del pandas
349
[end of modin/pandas/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modin/pandas/__init__.py b/modin/pandas/__init__.py
--- a/modin/pandas/__init__.py
+++ b/modin/pandas/__init__.py
@@ -75,6 +75,7 @@
import os
import types
import sys
+import multiprocessing
from .. import __version__
from .concat import concat
@@ -134,12 +135,14 @@
import secrets
plasma_directory = None
+ num_cpus = os.environ.get("MODIN_CPUS", None) or multiprocessing.cpu_count()
cluster = os.environ.get("MODIN_RAY_CLUSTER", None)
redis_address = os.environ.get("MODIN_REDIS_ADDRESS", None)
redis_password = secrets.token_hex(16)
if cluster == "True" and redis_address is not None:
# We only start ray in a cluster setting for the head node.
ray.init(
+ num_cpus=int(num_cpus),
include_webui=False,
ignore_reinit_error=True,
redis_address=redis_address,
@@ -171,6 +174,7 @@
else:
object_store_memory = int(object_store_memory)
ray.init(
+ num_cpus=int(num_cpus),
include_webui=False,
ignore_reinit_error=True,
plasma_directory=plasma_directory,
@@ -229,10 +233,9 @@
client = get_client()
except ValueError:
from distributed import Client
- import multiprocessing
- num_cpus = multiprocessing.cpu_count()
- client = Client(n_workers=num_cpus)
+ num_cpus = os.environ.get("MODIN_CPUS", None) or multiprocessing.cpu_count()
+ client = Client(n_workers=int(num_cpus))
elif execution_engine != "Python":
raise ImportError("Unrecognized execution engine: {}.".format(execution_engine))
| {"golden_diff": "diff --git a/modin/pandas/__init__.py b/modin/pandas/__init__.py\n--- a/modin/pandas/__init__.py\n+++ b/modin/pandas/__init__.py\n@@ -75,6 +75,7 @@\n import os\n import types\n import sys\n+import multiprocessing\n \n from .. import __version__\n from .concat import concat\n@@ -134,12 +135,14 @@\n import secrets\n \n plasma_directory = None\n+ num_cpus = os.environ.get(\"MODIN_CPUS\", None) or multiprocessing.cpu_count()\n cluster = os.environ.get(\"MODIN_RAY_CLUSTER\", None)\n redis_address = os.environ.get(\"MODIN_REDIS_ADDRESS\", None)\n redis_password = secrets.token_hex(16)\n if cluster == \"True\" and redis_address is not None:\n # We only start ray in a cluster setting for the head node.\n ray.init(\n+ num_cpus=int(num_cpus),\n include_webui=False,\n ignore_reinit_error=True,\n redis_address=redis_address,\n@@ -171,6 +174,7 @@\n else:\n object_store_memory = int(object_store_memory)\n ray.init(\n+ num_cpus=int(num_cpus),\n include_webui=False,\n ignore_reinit_error=True,\n plasma_directory=plasma_directory,\n@@ -229,10 +233,9 @@\n client = get_client()\n except ValueError:\n from distributed import Client\n- import multiprocessing\n \n- num_cpus = multiprocessing.cpu_count()\n- client = Client(n_workers=num_cpus)\n+ num_cpus = os.environ.get(\"MODIN_CPUS\", None) or multiprocessing.cpu_count()\n+ client = Client(n_workers=int(num_cpus))\n elif execution_engine != \"Python\":\n raise ImportError(\"Unrecognized execution engine: {}.\".format(execution_engine))\n", "issue": "Parallel agents for TeamCity\nRight now, TeamCity is taking >5 hours to complete. We should create parallel agents for non-dependent builds to run concurrently. The breakdown should be as follows:\r\n\r\n1.\t`MODIN_ENGINE=ray`, `MODIN_ENGINE=dask`, `MODIN_ENGINE=python`\r\na.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameReduction_A`\r\nb.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameReduction_B`\r\nc.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameBinary`\r\nd.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameMapMetadata`\r\ne.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameUDF`\r\nf.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameDefault`\r\ng.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameWindow`\r\nh.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameIndexing`\r\ni.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameIter`\r\nj.\t`python -m pytest modin/pandas/test/test_dataframe.py::TestDataFrameJoinSort`\r\nk.\t`python -m pytest modin/pandas/test/test_groupby.py`\r\nl.\t`python -m pytest modin/pandas/test/test_series.py modin/pandas/test/test_concat.py modin/pandas/test/test_reshape.py modin/pandas/test/test_general.py modin/pandas/test/test_io.py modin/pandas/test/test_io_exp.py`\r\n2.\t`MODIN_ENGINE=ray MODIN_EXPERIMENTAL=True MODIN_BACKEND=pyarrow python -m pytest modin/pandas/test/test_io.py::test_from_csv`\r\n\r\nIn total, 37 agents. This does not include agents we will need for Windows and MacOS. \r\n\n", "before_files": [{"content": "import pandas\n\n__pandas_version__ = \"1.0.1\"\n\nif pandas.__version__ != __pandas_version__:\n import warnings\n\n warnings.warn(\n \"The pandas version installed does not match the required pandas version in \"\n \"Modin. 
This may cause undesired side effects!\".format(__pandas_version__)\n )\n\nfrom pandas import (\n eval,\n unique,\n value_counts,\n cut,\n to_numeric,\n factorize,\n test,\n qcut,\n date_range,\n period_range,\n Index,\n MultiIndex,\n CategoricalIndex,\n bdate_range,\n DatetimeIndex,\n Timedelta,\n Timestamp,\n to_timedelta,\n set_eng_float_format,\n options,\n set_option,\n NaT,\n PeriodIndex,\n Categorical,\n Interval,\n UInt8Dtype,\n UInt16Dtype,\n UInt32Dtype,\n UInt64Dtype,\n SparseDtype,\n Int8Dtype,\n Int16Dtype,\n Int32Dtype,\n Int64Dtype,\n StringDtype,\n BooleanDtype,\n CategoricalDtype,\n DatetimeTZDtype,\n IntervalDtype,\n PeriodDtype,\n RangeIndex,\n Int64Index,\n UInt64Index,\n Float64Index,\n TimedeltaIndex,\n IntervalIndex,\n IndexSlice,\n Grouper,\n array,\n Period,\n show_versions,\n DateOffset,\n timedelta_range,\n infer_freq,\n interval_range,\n ExcelWriter,\n datetime,\n NamedAgg,\n NA,\n)\nimport threading\nimport os\nimport types\nimport sys\n\nfrom .. import __version__\nfrom .concat import concat\nfrom .dataframe import DataFrame\nfrom .datetimes import to_datetime\nfrom .io import (\n read_csv,\n read_parquet,\n read_json,\n read_html,\n read_clipboard,\n read_excel,\n read_hdf,\n read_feather,\n read_stata,\n read_sas,\n read_pickle,\n read_sql,\n read_gbq,\n read_table,\n read_fwf,\n read_sql_table,\n read_sql_query,\n read_spss,\n ExcelFile,\n to_pickle,\n HDFStore,\n json_normalize,\n read_orc,\n)\nfrom .reshape import get_dummies, melt, crosstab, lreshape, wide_to_long\nfrom .series import Series\nfrom .general import (\n isna,\n isnull,\n merge,\n merge_asof,\n merge_ordered,\n pivot_table,\n notnull,\n notna,\n pivot,\n)\nfrom .plotting import Plotting as plotting\nfrom .. import __execution_engine__ as execution_engine\n\n# Set this so that Pandas doesn't try to multithread by itself\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\nnum_cpus = 1\n\n\ndef initialize_ray():\n import ray\n\n \"\"\"Initializes ray based on environment variables and internal defaults.\"\"\"\n if threading.current_thread().name == \"MainThread\":\n import secrets\n\n plasma_directory = None\n cluster = os.environ.get(\"MODIN_RAY_CLUSTER\", None)\n redis_address = os.environ.get(\"MODIN_REDIS_ADDRESS\", None)\n redis_password = secrets.token_hex(16)\n if cluster == \"True\" and redis_address is not None:\n # We only start ray in a cluster setting for the head node.\n ray.init(\n include_webui=False,\n ignore_reinit_error=True,\n redis_address=redis_address,\n redis_password=redis_password,\n logging_level=100,\n )\n elif cluster is None:\n object_store_memory = os.environ.get(\"MODIN_MEMORY\", None)\n if os.environ.get(\"MODIN_OUT_OF_CORE\", \"False\").title() == \"True\":\n from tempfile import gettempdir\n\n plasma_directory = gettempdir()\n # We may have already set the memory from the environment variable, we don't\n # want to overwrite that value if we have.\n if object_store_memory is None:\n # Round down to the nearest Gigabyte.\n mem_bytes = ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9\n # Default to 8x memory for out of core\n object_store_memory = 8 * mem_bytes\n # In case anything failed above, we can still improve the memory for Modin.\n if object_store_memory is None:\n # Round down to the nearest Gigabyte.\n object_store_memory = int(\n 0.6 * ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9\n )\n # If the memory pool is smaller than 2GB, just use the default in ray.\n if object_store_memory == 0:\n object_store_memory = None\n else:\n object_store_memory = 
int(object_store_memory)\n ray.init(\n include_webui=False,\n ignore_reinit_error=True,\n plasma_directory=plasma_directory,\n object_store_memory=object_store_memory,\n redis_address=redis_address,\n redis_password=redis_password,\n logging_level=100,\n memory=object_store_memory,\n )\n # Register custom serializer for method objects to avoid warning message.\n # We serialize `MethodType` objects when we use AxisPartition operations.\n ray.register_custom_serializer(types.MethodType, use_pickle=True)\n\n # Register a fix import function to run on all_workers including the driver.\n # This is a hack solution to fix #647, #746\n def move_stdlib_ahead_of_site_packages(*args):\n site_packages_path = None\n site_packages_path_index = -1\n for i, path in enumerate(sys.path):\n if sys.exec_prefix in path and path.endswith(\"site-packages\"):\n site_packages_path = path\n site_packages_path_index = i\n # break on first found\n break\n\n if site_packages_path is not None:\n # stdlib packages layout as follows:\n # - python3.x\n # - typing.py\n # - site-packages/\n # - pandas\n # So extracting the dirname of the site_packages can point us\n # to the directory containing standard libraries.\n sys.path.insert(\n site_packages_path_index, os.path.dirname(site_packages_path)\n )\n\n move_stdlib_ahead_of_site_packages()\n ray.worker.global_worker.run_function_on_all_workers(\n move_stdlib_ahead_of_site_packages\n )\n\n\nif execution_engine == \"Ray\":\n import ray\n\n initialize_ray()\n num_cpus = ray.cluster_resources()[\"CPU\"]\nelif execution_engine == \"Dask\": # pragma: no cover\n from distributed.client import get_client\n import warnings\n\n if threading.current_thread().name == \"MainThread\":\n warnings.warn(\"The Dask Engine for Modin is experimental.\")\n try:\n client = get_client()\n except ValueError:\n from distributed import Client\n import multiprocessing\n\n num_cpus = multiprocessing.cpu_count()\n client = Client(n_workers=num_cpus)\nelif execution_engine != \"Python\":\n raise ImportError(\"Unrecognized execution engine: {}.\".format(execution_engine))\n\nDEFAULT_NPARTITIONS = max(4, int(num_cpus))\n\n__all__ = [\n \"DataFrame\",\n \"Series\",\n \"read_csv\",\n \"read_parquet\",\n \"read_json\",\n \"read_html\",\n \"read_clipboard\",\n \"read_excel\",\n \"read_hdf\",\n \"read_feather\",\n \"read_stata\",\n \"read_sas\",\n \"read_pickle\",\n \"read_sql\",\n \"read_gbq\",\n \"read_table\",\n \"read_spss\",\n \"read_orc\",\n \"json_normalize\",\n \"concat\",\n \"eval\",\n \"unique\",\n \"value_counts\",\n \"cut\",\n \"to_numeric\",\n \"factorize\",\n \"test\",\n \"qcut\",\n \"to_datetime\",\n \"get_dummies\",\n \"isna\",\n \"isnull\",\n \"merge\",\n \"pivot_table\",\n \"date_range\",\n \"Index\",\n \"MultiIndex\",\n \"Series\",\n \"bdate_range\",\n \"period_range\",\n \"DatetimeIndex\",\n \"to_timedelta\",\n \"set_eng_float_format\",\n \"options\",\n \"set_option\",\n \"CategoricalIndex\",\n \"Timedelta\",\n \"Timestamp\",\n \"NaT\",\n \"PeriodIndex\",\n \"Categorical\",\n \"__version__\",\n \"melt\",\n \"crosstab\",\n \"plotting\",\n \"Interval\",\n \"UInt8Dtype\",\n \"UInt16Dtype\",\n \"UInt32Dtype\",\n \"UInt64Dtype\",\n \"SparseDtype\",\n \"Int8Dtype\",\n \"Int16Dtype\",\n \"Int32Dtype\",\n \"Int64Dtype\",\n \"CategoricalDtype\",\n \"DatetimeTZDtype\",\n \"IntervalDtype\",\n \"PeriodDtype\",\n \"BooleanDtype\",\n \"StringDtype\",\n \"NA\",\n \"RangeIndex\",\n \"Int64Index\",\n \"UInt64Index\",\n \"Float64Index\",\n \"TimedeltaIndex\",\n \"IntervalIndex\",\n \"IndexSlice\",\n 
\"Grouper\",\n \"array\",\n \"Period\",\n \"show_versions\",\n \"DateOffset\",\n \"timedelta_range\",\n \"infer_freq\",\n \"interval_range\",\n \"ExcelWriter\",\n \"read_fwf\",\n \"read_sql_table\",\n \"read_sql_query\",\n \"ExcelFile\",\n \"to_pickle\",\n \"HDFStore\",\n \"lreshape\",\n \"wide_to_long\",\n \"merge_asof\",\n \"merge_ordered\",\n \"notnull\",\n \"notna\",\n \"pivot\",\n \"datetime\",\n \"NamedAgg\",\n \"DEFAULT_NPARTITIONS\",\n]\n\ndel pandas\n", "path": "modin/pandas/__init__.py"}]} | 4,043 | 401 |
gh_patches_debug_3080 | rasdani/github-patches | git_diff | google__turbinia-1099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plaso VSS option incorrect
https://github.com/log2timeline/plaso/blob/9cc50c972b257d6cbbea38fa8b39f0bf027e0960/plaso/cli/storage_media_tool.py#L581
^ option should be --no_vss in below location
https://github.com/google/turbinia/blob/86158a95a0b134978628c1680d0997667ec7c935/turbinia/workers/plaso.py#L43
Please check how this will work if recipes pass in the --vss_stores option
</issue>
<code>
[start of turbinia/workers/binary_extractor.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task to extract binary files from an evidence object provided."""
16
17 from __future__ import unicode_literals
18
19 import logging
20 import json
21 import os
22 import textwrap
23
24 from turbinia import TurbiniaException
25 from turbinia import config
26 from turbinia.evidence import EvidenceState as state
27 from turbinia.workers import TurbiniaTask
28 from turbinia.evidence import BinaryExtraction
29
30
31 class BinaryExtractorTask(TurbiniaTask):
32 """Extract binaries out of evidence and provide JSON file with hashes.
33
34 Attributes:
35 json_path(str): path to output JSON file.
36 binary_extraction_dir(str): path to extraction directory.
37 """
38
39 REQUIRED_STATES = [state.ATTACHED]
40
41 TASK_CONFIG = {
42 # This is an arbitrary path that will be put into a custom artifact
43 # definition so that the files at this path are extracted. See the path
44 # specification format in the ForensicArtifacts documentation:
45 # https://artifacts.readthedocs.io/en/latest/sources/Format-specification.html
46 'binary_extraction_path': None
47 }
48
49 def __init__(self, *args, **kwargs):
50 """Initializes BinaryExtractorTask."""
51 super(BinaryExtractorTask, self).__init__(*args, **kwargs)
52 self.json_path = None
53 self.binary_extraction_dir = None
54
55 def check_extraction(self):
56 """Checks counts for extracted binaries and hashes.
57
58 Returns:
59 Tuple(
60 binary_cnt(int): Number of extracted binaries.
61 hash_cnt(int): Number of extracted hashes.
62 )
63 """
64
65 # Check if hashes.json file was generated.
66 if not os.path.exists(self.json_path):
67 raise TurbiniaException(
68 'The file {0:s} was not found. Please ensure you '
69 'have Plaso version 20191203 or greater deployed'.format(
70 self.json_path))
71
72 with open(self.json_path) as json_file:
73 hashes = json.load(json_file)
74
75 binary_cnt = sum(
76 len(files) for _, _, files in os.walk(self.binary_extraction_dir)) - 1
77 hash_cnt = len(hashes)
78
79 return (binary_cnt, hash_cnt)
80
81 def run(self, evidence, result):
82 """Task that extracts binaries with image_export.py.
83
84 Args:
85 evidence (Evidence object): The evidence we will process.
86 result (TurbiniaTaskResult): The object to place task results into.
87
88 Returns:
89 TurbiniaTaskResult object.
90 """
91
92 config.LoadConfig()
93 binary_extraction_evidence = BinaryExtraction()
94
95 binary_extraction_evidence.local_path = self.output_dir
96 binary_extraction_evidence.uncompressed_directory = self.output_dir
97 image_export_log = os.path.join(self.output_dir, 'binary_extraction.log')
98 self.binary_extraction_dir = os.path.join(
99 self.output_dir, 'extracted_binaries')
100 self.json_path = os.path.join(self.binary_extraction_dir, 'hashes.json')
101
102 cmd = [
103 'image_export.py', '--partitions', 'all', '--volumes', 'all',
104 '--no_vss', '--unattended', '--logfile', image_export_log
105 ]
106
107 if self.task_config.get('binary_extraction_path'):
108 artifact_dir = os.path.join(self.tmp_dir, 'artifacts')
109 artifact_file = os.path.join(artifact_dir, 'artifacts.yaml')
110 os.mkdir(artifact_dir)
111 binary_extraction_path = self.task_config.get('binary_extraction_path')
112 result.log(
113 'Using custom artifact path {0:s}'.format(binary_extraction_path))
114
115 artifact_text = textwrap.dedent(
116 """
117 name: TurbiniaCustomArtifact
118 doc: Ad hoc artifact created for file extraction.
119 sources:
120 - type: FILE
121 attributes:
122 paths: ['{0:s}']
123 """)
124 artifact_text = artifact_text.format(binary_extraction_path)
125
126 with open(artifact_file, 'wb') as artifact:
127 artifact.write(artifact_text.encode('utf-8'))
128 cmd.extend([
129 '--custom_artifact_definitions', artifact_file, '--artifact_filters',
130 'TurbiniaCustomArtifact'
131 ])
132 else:
133 cmd.extend(['--signatures', 'elf,exe_mz'])
134
135 if evidence.credentials:
136 for credential_type, credential_data in evidence.credentials:
137 cmd.extend([
138 '--credential', '{0:s}:{1:s}'.format(
139 credential_type, credential_data)
140 ])
141
142 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
143 cmd.append('-d')
144 cmd.extend(['-w', self.binary_extraction_dir, evidence.local_path])
145
146 result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))
147 self.execute(
148 cmd, result, log_files=[image_export_log, self.json_path],
149 new_evidence=[binary_extraction_evidence])
150
151 try:
152 binary_cnt, hash_cnt = self.check_extraction()
153 except TurbiniaException as exception:
154 message = 'File extraction failed: {0!s}'.format(exception)
155 result.close(self, success=False, status=message)
156 return result
157
158 status = (
159 'Extracted {0:d} hashes and {1:d} files from the '
160 'evidence.'.format(hash_cnt, binary_cnt))
161
162 if hash_cnt != binary_cnt:
163 result.log(
164 'Number of extracted binaries is not equal to the number '
165 'of extracted hashes. This might indicate issues with '
166 'image_export.py. Check binary_extraction.log for more '
167 'details.', logging.WARNING)
168
169 binary_extraction_evidence.compress()
170 result.close(self, success=True, status=status)
171
172 return result
173
[end of turbinia/workers/binary_extractor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/workers/binary_extractor.py b/turbinia/workers/binary_extractor.py
--- a/turbinia/workers/binary_extractor.py
+++ b/turbinia/workers/binary_extractor.py
@@ -101,7 +101,7 @@
cmd = [
'image_export.py', '--partitions', 'all', '--volumes', 'all',
- '--no_vss', '--unattended', '--logfile', image_export_log
+ '--vss_stores', 'none', '--unattended', '--logfile', image_export_log
]
if self.task_config.get('binary_extraction_path'):
| {"golden_diff": "diff --git a/turbinia/workers/binary_extractor.py b/turbinia/workers/binary_extractor.py\n--- a/turbinia/workers/binary_extractor.py\n+++ b/turbinia/workers/binary_extractor.py\n@@ -101,7 +101,7 @@\n \n cmd = [\n 'image_export.py', '--partitions', 'all', '--volumes', 'all',\n- '--no_vss', '--unattended', '--logfile', image_export_log\n+ '--vss_stores', 'none', '--unattended', '--logfile', image_export_log\n ]\n \n if self.task_config.get('binary_extraction_path'):\n", "issue": "plaso VSS option incorrect\nhttps://github.com/log2timeline/plaso/blob/9cc50c972b257d6cbbea38fa8b39f0bf027e0960/plaso/cli/storage_media_tool.py#L581\r\n\r\n^ option should be --no_vss in below location\r\nhttps://github.com/google/turbinia/blob/86158a95a0b134978628c1680d0997667ec7c935/turbinia/workers/plaso.py#L43\r\n\r\nPlease check how this will work if recipes pass in the --vss_stores option\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task to extract binary files from an evidence object provided.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport json\nimport os\nimport textwrap\n\nfrom turbinia import TurbiniaException\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.evidence import BinaryExtraction\n\n\nclass BinaryExtractorTask(TurbiniaTask):\n \"\"\"Extract binaries out of evidence and provide JSON file with hashes.\n\n Attributes:\n json_path(str): path to output JSON file.\n binary_extraction_dir(str): path to extraction directory.\n \"\"\"\n\n REQUIRED_STATES = [state.ATTACHED]\n\n TASK_CONFIG = {\n # This is an arbitrary path that will be put into a custom artifact\n # definition so that the files at this path are extracted. See the path\n # specification format in the ForensicArtifacts documentation:\n # https://artifacts.readthedocs.io/en/latest/sources/Format-specification.html\n 'binary_extraction_path': None\n }\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initializes BinaryExtractorTask.\"\"\"\n super(BinaryExtractorTask, self).__init__(*args, **kwargs)\n self.json_path = None\n self.binary_extraction_dir = None\n\n def check_extraction(self):\n \"\"\"Checks counts for extracted binaries and hashes.\n\n Returns:\n Tuple(\n binary_cnt(int): Number of extracted binaries.\n hash_cnt(int): Number of extracted hashes.\n )\n \"\"\"\n\n # Check if hashes.json file was generated.\n if not os.path.exists(self.json_path):\n raise TurbiniaException(\n 'The file {0:s} was not found. 
Please ensure you '\n 'have Plaso version 20191203 or greater deployed'.format(\n self.json_path))\n\n with open(self.json_path) as json_file:\n hashes = json.load(json_file)\n\n binary_cnt = sum(\n len(files) for _, _, files in os.walk(self.binary_extraction_dir)) - 1\n hash_cnt = len(hashes)\n\n return (binary_cnt, hash_cnt)\n\n def run(self, evidence, result):\n \"\"\"Task that extracts binaries with image_export.py.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n binary_extraction_evidence = BinaryExtraction()\n\n binary_extraction_evidence.local_path = self.output_dir\n binary_extraction_evidence.uncompressed_directory = self.output_dir\n image_export_log = os.path.join(self.output_dir, 'binary_extraction.log')\n self.binary_extraction_dir = os.path.join(\n self.output_dir, 'extracted_binaries')\n self.json_path = os.path.join(self.binary_extraction_dir, 'hashes.json')\n\n cmd = [\n 'image_export.py', '--partitions', 'all', '--volumes', 'all',\n '--no_vss', '--unattended', '--logfile', image_export_log\n ]\n\n if self.task_config.get('binary_extraction_path'):\n artifact_dir = os.path.join(self.tmp_dir, 'artifacts')\n artifact_file = os.path.join(artifact_dir, 'artifacts.yaml')\n os.mkdir(artifact_dir)\n binary_extraction_path = self.task_config.get('binary_extraction_path')\n result.log(\n 'Using custom artifact path {0:s}'.format(binary_extraction_path))\n\n artifact_text = textwrap.dedent(\n \"\"\"\n name: TurbiniaCustomArtifact\n doc: Ad hoc artifact created for file extraction.\n sources:\n - type: FILE\n attributes:\n paths: ['{0:s}']\n \"\"\")\n artifact_text = artifact_text.format(binary_extraction_path)\n\n with open(artifact_file, 'wb') as artifact:\n artifact.write(artifact_text.encode('utf-8'))\n cmd.extend([\n '--custom_artifact_definitions', artifact_file, '--artifact_filters',\n 'TurbiniaCustomArtifact'\n ])\n else:\n cmd.extend(['--signatures', 'elf,exe_mz'])\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n cmd.extend(['-w', self.binary_extraction_dir, evidence.local_path])\n\n result.log('Running image_export as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[image_export_log, self.json_path],\n new_evidence=[binary_extraction_evidence])\n\n try:\n binary_cnt, hash_cnt = self.check_extraction()\n except TurbiniaException as exception:\n message = 'File extraction failed: {0!s}'.format(exception)\n result.close(self, success=False, status=message)\n return result\n\n status = (\n 'Extracted {0:d} hashes and {1:d} files from the '\n 'evidence.'.format(hash_cnt, binary_cnt))\n\n if hash_cnt != binary_cnt:\n result.log(\n 'Number of extracted binaries is not equal to the number '\n 'of extracted hashes. This might indicate issues with '\n 'image_export.py. Check binary_extraction.log for more '\n 'details.', logging.WARNING)\n\n binary_extraction_evidence.compress()\n result.close(self, success=True, status=status)\n\n return result\n", "path": "turbinia/workers/binary_extractor.py"}]} | 2,458 | 145 |
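The binary_extractor.py patch above swaps the unsupported `--no_vss` flag for `--vss_stores none`, matching the option plaso's image_export.py actually exposes while leaving room for recipes to pass specific stores. A sketch of assembling that command list (the helper name and parameters are illustrative):

```python
def build_image_export_cmd(log_path, output_dir, source_path, vss_stores="none"):
    # "none" disables VSS processing; a recipe could instead supply
    # specific store identifiers through vss_stores.
    return [
        "image_export.py",
        "--partitions", "all",
        "--volumes", "all",
        "--vss_stores", vss_stores,
        "--unattended",
        "--logfile", log_path,
        "-w", output_dir,
        source_path,
    ]
```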
gh_patches_debug_797 | rasdani/github-patches | git_diff | pre-commit__pre-commit-167 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
npmrc causes npm to install to home directory instead of nodeenv
Here is what happened when I tried to get eslint installed:
```
$ pre-commit run --all-files
eslint..............................................................................................................................................................................................................................................................................................................Failed
hookid: eslint
xargs: eslint: No such file or directory
```
Moving .npmrc to nope.npmrc fixed the issue.
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 setup(
6 name='pre_commit',
7 description=(
8 'A framework for managing and maintaining multi-language pre-commit '
9 'hooks.'
10 ),
11 url='https://github.com/pre-commit/pre-commit',
12 version='0.2.9',
13
14 author='Anthony Sottile',
15 author_email='[email protected]',
16
17 platforms='linux',
18 classifiers=[
19 'License :: OSI Approved :: MIT License',
20 'Programming Language :: Python :: 2',
21 'Programming Language :: Python :: 2.6',
22 'Programming Language :: Python :: 2.7',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.3',
25 'Programming Language :: Python :: 3.4',
26 'Programming Language :: Python :: Implementation :: CPython',
27 'Programming Language :: Python :: Implementation :: PyPy',
28 ],
29
30 packages=find_packages('.', exclude=('tests*', 'testing*')),
31 package_data={
32 'pre_commit': [
33 'resources/pre-commit-hook',
34 'resources/rbenv.tar.gz',
35 'resources/ruby-build.tar.gz',
36 'resources/ruby-download.tar.gz',
37 ]
38 },
39 install_requires=[
40 'argparse',
41 'aspy.yaml',
42 'cached-property',
43 'jsonschema',
44 'nodeenv>=0.9.4',
45 'ordereddict',
46 'plumbum',
47 'pyyaml',
48 'simplejson',
49 'virtualenv',
50 ],
51 entry_points={
52 'console_scripts': [
53 'pre-commit = pre_commit.main:main',
54 'validate-config = pre_commit.clientlib.validate_config:run',
55 'validate-manifest = pre_commit.clientlib.validate_manifest:run',
56 ],
57 },
58 )
59
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,7 +41,7 @@
'aspy.yaml',
'cached-property',
'jsonschema',
- 'nodeenv>=0.9.4',
+ 'nodeenv>=0.11.1',
'ordereddict',
'plumbum',
'pyyaml',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,7 +41,7 @@\n 'aspy.yaml',\n 'cached-property',\n 'jsonschema',\n- 'nodeenv>=0.9.4',\n+ 'nodeenv>=0.11.1',\n 'ordereddict',\n 'plumbum',\n 'pyyaml',\n", "issue": "npmrc causes npm to install to home directory instead of nodeenv\nHere is what happened when I tried to get eslint installed: \n\n```\n$ pre-commit run --all-files\neslint..............................................................................................................................................................................................................................................................................................................Failed\nhookid: eslint\n\nxargs: eslint: No such file or directory\n```\n\nMoving .npmrc to nope.npmrc fixed the issue.\n\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.2.9',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/pre-commit-hook',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.9.4',\n 'ordereddict',\n 'plumbum',\n 'pyyaml',\n 'simplejson',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'validate-config = pre_commit.clientlib.validate_config:run',\n 'validate-manifest = pre_commit.clientlib.validate_manifest:run',\n ],\n },\n)\n", "path": "setup.py"}]} | 1,096 | 89 |
gh_patches_debug_20336 | rasdani/github-patches | git_diff | numpy__numpy-12268 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
integrate content/images on broadcasting in docs
https://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc explains broadcasting well (including useful diagrams) and is linked to from https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html. It is the only link to https://scipy.github.io/old-wiki left.
This content should be integrated in the user guide. There's also useful code to draw such diagrams at https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html
</issue>
<code>
[start of numpy/doc/broadcasting.py]
1 """
2 ========================
3 Broadcasting over arrays
4 ========================
5
6 The term broadcasting describes how numpy treats arrays with different
7 shapes during arithmetic operations. Subject to certain constraints,
8 the smaller array is "broadcast" across the larger array so that they
9 have compatible shapes. Broadcasting provides a means of vectorizing
10 array operations so that looping occurs in C instead of Python. It does
11 this without making needless copies of data and usually leads to
12 efficient algorithm implementations. There are, however, cases where
13 broadcasting is a bad idea because it leads to inefficient use of memory
14 that slows computation.
15
16 NumPy operations are usually done on pairs of arrays on an
17 element-by-element basis. In the simplest case, the two arrays must
18 have exactly the same shape, as in the following example:
19
20 >>> a = np.array([1.0, 2.0, 3.0])
21 >>> b = np.array([2.0, 2.0, 2.0])
22 >>> a * b
23 array([ 2., 4., 6.])
24
25 NumPy's broadcasting rule relaxes this constraint when the arrays'
26 shapes meet certain constraints. The simplest broadcasting example occurs
27 when an array and a scalar value are combined in an operation:
28
29 >>> a = np.array([1.0, 2.0, 3.0])
30 >>> b = 2.0
31 >>> a * b
32 array([ 2., 4., 6.])
33
34 The result is equivalent to the previous example where ``b`` was an array.
35 We can think of the scalar ``b`` being *stretched* during the arithmetic
36 operation into an array with the same shape as ``a``. The new elements in
37 ``b`` are simply copies of the original scalar. The stretching analogy is
38 only conceptual. NumPy is smart enough to use the original scalar value
39 without actually making copies, so that broadcasting operations are as
40 memory and computationally efficient as possible.
41
42 The code in the second example is more efficient than that in the first
43 because broadcasting moves less memory around during the multiplication
44 (``b`` is a scalar rather than an array).
45
46 General Broadcasting Rules
47 ==========================
48 When operating on two arrays, NumPy compares their shapes element-wise.
49 It starts with the trailing dimensions, and works its way forward. Two
50 dimensions are compatible when
51
52 1) they are equal, or
53 2) one of them is 1
54
55 If these conditions are not met, a
56 ``ValueError: operands could not be broadcast together`` exception is
57 thrown, indicating that the arrays have incompatible shapes. The size of
58 the resulting array is the maximum size along each dimension of the input
59 arrays.
60
61 Arrays do not need to have the same *number* of dimensions. For example,
62 if you have a ``256x256x3`` array of RGB values, and you want to scale
63 each color in the image by a different value, you can multiply the image
64 by a one-dimensional array with 3 values. Lining up the sizes of the
65 trailing axes of these arrays according to the broadcast rules, shows that
66 they are compatible::
67
68 Image (3d array): 256 x 256 x 3
69 Scale (1d array): 3
70 Result (3d array): 256 x 256 x 3
71
72 When either of the dimensions compared is one, the other is
73 used. In other words, dimensions with size 1 are stretched or "copied"
74 to match the other.
75
76 In the following example, both the ``A`` and ``B`` arrays have axes with
77 length one that are expanded to a larger size during the broadcast
78 operation::
79
80 A (4d array): 8 x 1 x 6 x 1
81 B (3d array): 7 x 1 x 5
82 Result (4d array): 8 x 7 x 6 x 5
83
84 Here are some more examples::
85
86 A (2d array): 5 x 4
87 B (1d array): 1
88 Result (2d array): 5 x 4
89
90 A (2d array): 5 x 4
91 B (1d array): 4
92 Result (2d array): 5 x 4
93
94 A (3d array): 15 x 3 x 5
95 B (3d array): 15 x 1 x 5
96 Result (3d array): 15 x 3 x 5
97
98 A (3d array): 15 x 3 x 5
99 B (2d array): 3 x 5
100 Result (3d array): 15 x 3 x 5
101
102 A (3d array): 15 x 3 x 5
103 B (2d array): 3 x 1
104 Result (3d array): 15 x 3 x 5
105
106 Here are examples of shapes that do not broadcast::
107
108 A (1d array): 3
109 B (1d array): 4 # trailing dimensions do not match
110
111 A (2d array): 2 x 1
112 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched
113
114 An example of broadcasting in practice::
115
116 >>> x = np.arange(4)
117 >>> xx = x.reshape(4,1)
118 >>> y = np.ones(5)
119 >>> z = np.ones((3,4))
120
121 >>> x.shape
122 (4,)
123
124 >>> y.shape
125 (5,)
126
127 >>> x + y
128 ValueError: operands could not be broadcast together with shapes (4,) (5,)
129
130 >>> xx.shape
131 (4, 1)
132
133 >>> y.shape
134 (5,)
135
136 >>> (xx + y).shape
137 (4, 5)
138
139 >>> xx + y
140 array([[ 1., 1., 1., 1., 1.],
141 [ 2., 2., 2., 2., 2.],
142 [ 3., 3., 3., 3., 3.],
143 [ 4., 4., 4., 4., 4.]])
144
145 >>> x.shape
146 (4,)
147
148 >>> z.shape
149 (3, 4)
150
151 >>> (x + z).shape
152 (3, 4)
153
154 >>> x + z
155 array([[ 1., 2., 3., 4.],
156 [ 1., 2., 3., 4.],
157 [ 1., 2., 3., 4.]])
158
159 Broadcasting provides a convenient way of taking the outer product (or
160 any other outer operation) of two arrays. The following example shows an
161 outer addition operation of two 1-d arrays::
162
163 >>> a = np.array([0.0, 10.0, 20.0, 30.0])
164 >>> b = np.array([1.0, 2.0, 3.0])
165 >>> a[:, np.newaxis] + b
166 array([[ 1., 2., 3.],
167 [ 11., 12., 13.],
168 [ 21., 22., 23.],
169 [ 31., 32., 33.]])
170
171 Here the ``newaxis`` index operator inserts a new axis into ``a``,
172 making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
173 with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
174
175 See `this article <https://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc>`_
176 for illustrations of broadcasting concepts.
177
178 """
179 from __future__ import division, absolute_import, print_function
180
[end of numpy/doc/broadcasting.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/numpy/doc/broadcasting.py b/numpy/doc/broadcasting.py
--- a/numpy/doc/broadcasting.py
+++ b/numpy/doc/broadcasting.py
@@ -3,6 +3,12 @@
Broadcasting over arrays
========================
+.. note::
+ See `this article
+ <https://numpy.org/devdocs/user/theory.broadcasting.html>`_
+ for illustrations of broadcasting concepts.
+
+
The term broadcasting describes how numpy treats arrays with different
shapes during arithmetic operations. Subject to certain constraints,
the smaller array is "broadcast" across the larger array so that they
@@ -172,8 +178,5 @@
making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
-See `this article <https://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc>`_
-for illustrations of broadcasting concepts.
-
"""
from __future__ import division, absolute_import, print_function
| {"golden_diff": "diff --git a/numpy/doc/broadcasting.py b/numpy/doc/broadcasting.py\n--- a/numpy/doc/broadcasting.py\n+++ b/numpy/doc/broadcasting.py\n@@ -3,6 +3,12 @@\n Broadcasting over arrays\n ========================\n \n+.. note::\n+ See `this article\n+ <https://numpy.org/devdocs/user/theory.broadcasting.html>`_\n+ for illustrations of broadcasting concepts.\n+\n+\n The term broadcasting describes how numpy treats arrays with different\n shapes during arithmetic operations. Subject to certain constraints,\n the smaller array is \"broadcast\" across the larger array so that they\n@@ -172,8 +178,5 @@\n making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array\n with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.\n \n-See `this article <https://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc>`_\n-for illustrations of broadcasting concepts.\n-\n \"\"\"\n from __future__ import division, absolute_import, print_function\n", "issue": "integrate content/images on broadcasting in docs\nhttps://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc explains broadcasting well (including useful diagrams) and is linked to from https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html. It is the only link to https://scipy.github.io/old-wiki left.\r\n\r\nThis content should be integrated in the user guide. There's also useful code to draw such diagrams at https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html\n", "before_files": [{"content": "\"\"\"\n========================\nBroadcasting over arrays\n========================\n\nThe term broadcasting describes how numpy treats arrays with different\nshapes during arithmetic operations. Subject to certain constraints,\nthe smaller array is \"broadcast\" across the larger array so that they\nhave compatible shapes. Broadcasting provides a means of vectorizing\narray operations so that looping occurs in C instead of Python. It does\nthis without making needless copies of data and usually leads to\nefficient algorithm implementations. There are, however, cases where\nbroadcasting is a bad idea because it leads to inefficient use of memory\nthat slows computation.\n\nNumPy operations are usually done on pairs of arrays on an\nelement-by-element basis. In the simplest case, the two arrays must\nhave exactly the same shape, as in the following example:\n\n >>> a = np.array([1.0, 2.0, 3.0])\n >>> b = np.array([2.0, 2.0, 2.0])\n >>> a * b\n array([ 2., 4., 6.])\n\nNumPy's broadcasting rule relaxes this constraint when the arrays'\nshapes meet certain constraints. The simplest broadcasting example occurs\nwhen an array and a scalar value are combined in an operation:\n\n>>> a = np.array([1.0, 2.0, 3.0])\n>>> b = 2.0\n>>> a * b\narray([ 2., 4., 6.])\n\nThe result is equivalent to the previous example where ``b`` was an array.\nWe can think of the scalar ``b`` being *stretched* during the arithmetic\noperation into an array with the same shape as ``a``. The new elements in\n``b`` are simply copies of the original scalar. The stretching analogy is\nonly conceptual. 
NumPy is smart enough to use the original scalar value\nwithout actually making copies, so that broadcasting operations are as\nmemory and computationally efficient as possible.\n\nThe code in the second example is more efficient than that in the first\nbecause broadcasting moves less memory around during the multiplication\n(``b`` is a scalar rather than an array).\n\nGeneral Broadcasting Rules\n==========================\nWhen operating on two arrays, NumPy compares their shapes element-wise.\nIt starts with the trailing dimensions, and works its way forward. Two\ndimensions are compatible when\n\n1) they are equal, or\n2) one of them is 1\n\nIf these conditions are not met, a\n``ValueError: operands could not be broadcast together`` exception is \nthrown, indicating that the arrays have incompatible shapes. The size of \nthe resulting array is the maximum size along each dimension of the input \narrays.\n\nArrays do not need to have the same *number* of dimensions. For example,\nif you have a ``256x256x3`` array of RGB values, and you want to scale\neach color in the image by a different value, you can multiply the image\nby a one-dimensional array with 3 values. Lining up the sizes of the\ntrailing axes of these arrays according to the broadcast rules, shows that\nthey are compatible::\n\n Image (3d array): 256 x 256 x 3\n Scale (1d array): 3\n Result (3d array): 256 x 256 x 3\n\nWhen either of the dimensions compared is one, the other is\nused. In other words, dimensions with size 1 are stretched or \"copied\"\nto match the other.\n\nIn the following example, both the ``A`` and ``B`` arrays have axes with\nlength one that are expanded to a larger size during the broadcast\noperation::\n\n A (4d array): 8 x 1 x 6 x 1\n B (3d array): 7 x 1 x 5\n Result (4d array): 8 x 7 x 6 x 5\n\nHere are some more examples::\n\n A (2d array): 5 x 4\n B (1d array): 1\n Result (2d array): 5 x 4\n\n A (2d array): 5 x 4\n B (1d array): 4\n Result (2d array): 5 x 4\n\n A (3d array): 15 x 3 x 5\n B (3d array): 15 x 1 x 5\n Result (3d array): 15 x 3 x 5\n\n A (3d array): 15 x 3 x 5\n B (2d array): 3 x 5\n Result (3d array): 15 x 3 x 5\n\n A (3d array): 15 x 3 x 5\n B (2d array): 3 x 1\n Result (3d array): 15 x 3 x 5\n\nHere are examples of shapes that do not broadcast::\n\n A (1d array): 3\n B (1d array): 4 # trailing dimensions do not match\n\n A (2d array): 2 x 1\n B (3d array): 8 x 4 x 3 # second from last dimensions mismatched\n\nAn example of broadcasting in practice::\n\n >>> x = np.arange(4)\n >>> xx = x.reshape(4,1)\n >>> y = np.ones(5)\n >>> z = np.ones((3,4))\n\n >>> x.shape\n (4,)\n\n >>> y.shape\n (5,)\n\n >>> x + y\n ValueError: operands could not be broadcast together with shapes (4,) (5,)\n\n >>> xx.shape\n (4, 1)\n\n >>> y.shape\n (5,)\n\n >>> (xx + y).shape\n (4, 5)\n\n >>> xx + y\n array([[ 1., 1., 1., 1., 1.],\n [ 2., 2., 2., 2., 2.],\n [ 3., 3., 3., 3., 3.],\n [ 4., 4., 4., 4., 4.]])\n\n >>> x.shape\n (4,)\n\n >>> z.shape\n (3, 4)\n\n >>> (x + z).shape\n (3, 4)\n\n >>> x + z\n array([[ 1., 2., 3., 4.],\n [ 1., 2., 3., 4.],\n [ 1., 2., 3., 4.]])\n\nBroadcasting provides a convenient way of taking the outer product (or\nany other outer operation) of two arrays. 
The following example shows an\nouter addition operation of two 1-d arrays::\n\n >>> a = np.array([0.0, 10.0, 20.0, 30.0])\n >>> b = np.array([1.0, 2.0, 3.0])\n >>> a[:, np.newaxis] + b\n array([[ 1., 2., 3.],\n [ 11., 12., 13.],\n [ 21., 22., 23.],\n [ 31., 32., 33.]])\n\nHere the ``newaxis`` index operator inserts a new axis into ``a``,\nmaking it a two-dimensional ``4x1`` array. Combining the ``4x1`` array\nwith ``b``, which has shape ``(3,)``, yields a ``4x3`` array.\n\nSee `this article <https://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc>`_\nfor illustrations of broadcasting concepts.\n\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n", "path": "numpy/doc/broadcasting.py"}]} | 2,882 | 235 |
gh_patches_debug_20356 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-5615 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tools/c7n-org - azure subscription generation includes disabled subscriptions
per report on gitter.
ngibbondaimler - We used azuresubs.py from c7n-org to generate a list of our subscriptions; however, it's picking up disabled subscriptions, and c7n-org throws an exception when it tries to read from a disabled sub to apply policy. Is there a suggested workaround for this?
Stefan Gordon -
I believe the return from the subscription API list call includes a state attribute, something like "state": "Enabled" - So for your scenario perhaps you can just add a check on that value at https://github.com/cloud-custodian/cloud-custodian/blob/master/tools/c7n_org/scripts/azuresubs.py#L34
Additionally if you can file an issue with the error you are getting in c7n-org I would say that we should update it to handle this error properly. Generating a list without those is an easy workaround but it shouldn't fail on them.
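A minimal sketch of that check, assuming the serialized subscription dict exposes a top-level `state` key as described above (the sample data is made up):

```python
# Keep only enabled subscriptions before building the c7n-org config.
subs = [
    {'subscriptionId': 'sub-1', 'displayName': 'prod', 'state': 'Enabled'},
    {'subscriptionId': 'sub-2', 'displayName': 'legacy', 'state': 'Disabled'},
]
results = [
    {'subscription_id': s['subscriptionId'], 'name': s['displayName']}
    for s in subs
    if s.get('state') == 'Enabled'
]
print(results)  # only the enabled subscription is emitted
```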
</issue>
<code>
[start of tools/c7n_org/scripts/azuresubs.py]
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import click
16 from c7n_azure.session import Session
17 from c7n.utils import yaml_dump
18 from azure.mgmt.resource.subscriptions import SubscriptionClient
19
20
21 @click.command()
22 @click.option(
23 '-f', '--output', type=click.File('w'),
24 help="File to store the generated config (default stdout)")
25 def main(output):
26 """
27 Generate a c7n-org subscriptions config file
28 """
29
30 client = SubscriptionClient(Session().get_credentials())
31 subs = [sub.serialize(True) for sub in client.subscriptions.list()]
32 results = []
33 for sub in subs:
34 sub_info = {
35 'subscription_id': sub['subscriptionId'],
36 'name': sub['displayName']
37 }
38 results.append(sub_info)
39
40 print(yaml_dump({'subscriptions': results}), file=output)
41
42
43 if __name__ == '__main__':
44 main()
45
[end of tools/c7n_org/scripts/azuresubs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/c7n_org/scripts/azuresubs.py b/tools/c7n_org/scripts/azuresubs.py
--- a/tools/c7n_org/scripts/azuresubs.py
+++ b/tools/c7n_org/scripts/azuresubs.py
@@ -22,7 +22,12 @@
@click.option(
'-f', '--output', type=click.File('w'),
help="File to store the generated config (default stdout)")
-def main(output):
[email protected](
+ '-s', '--state', multiple=True, type=click.Choice(
+ ['Enabled', 'Warned', 'PastDue', 'Disabled', 'Deleted']),
+ default=('Enabled',),
+ help="File to store the generated config (default stdout)")
+def main(output, state):
"""
Generate a c7n-org subscriptions config file
"""
@@ -31,6 +36,8 @@
subs = [sub.serialize(True) for sub in client.subscriptions.list()]
results = []
for sub in subs:
+ if state and sub['state'] not in state:
+ continue
sub_info = {
'subscription_id': sub['subscriptionId'],
'name': sub['displayName']
| {"golden_diff": "diff --git a/tools/c7n_org/scripts/azuresubs.py b/tools/c7n_org/scripts/azuresubs.py\n--- a/tools/c7n_org/scripts/azuresubs.py\n+++ b/tools/c7n_org/scripts/azuresubs.py\n@@ -22,7 +22,12 @@\n @click.option(\n '-f', '--output', type=click.File('w'),\n help=\"File to store the generated config (default stdout)\")\n-def main(output):\[email protected](\n+ '-s', '--state', multiple=True, type=click.Choice(\n+ ['Enabled', 'Warned', 'PastDue', 'Disabled', 'Deleted']),\n+ default=('Enabled',),\n+ help=\"File to store the generated config (default stdout)\")\n+def main(output, state):\n \"\"\"\n Generate a c7n-org subscriptions config file\n \"\"\"\n@@ -31,6 +36,8 @@\n subs = [sub.serialize(True) for sub in client.subscriptions.list()]\n results = []\n for sub in subs:\n+ if state and sub['state'] not in state:\n+ continue\n sub_info = {\n 'subscription_id': sub['subscriptionId'],\n 'name': sub['displayName']\n", "issue": "tools/c7n-org - azure subscription generation includes disabled subscriptions\n\r\nper report on gitter.\r\n\r\nngibbondaimler - We used azuresubs.py from c7n-org to generate a list of our subscriptions, however it's picking up disabled subscriptions and c7n-org throws an exception when it tries to read from a disabled sub to apply policy. Is there a suggested workaround for this?\r\n\r\n\r\nStefan Gordon -\r\nI believe the return from the subscription API list call includes a state attribute, something like \"state\": \"Enabled\" - So for your scenario perhaps you can just add a check on that value at https://github.com/cloud-custodian/cloud-custodian/blob/master/tools/c7n_org/scripts/azuresubs.py#L34\r\nAdditionally if you can file an issue with the error you are getting in c7n-org I would say that we should update it to handle this error properly. Generating a list without those is an easy workaround but it shouldn't fail on them.\r\n\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport click\nfrom c7n_azure.session import Session\nfrom c7n.utils import yaml_dump\nfrom azure.mgmt.resource.subscriptions import SubscriptionClient\n\n\[email protected]()\[email protected](\n '-f', '--output', type=click.File('w'),\n help=\"File to store the generated config (default stdout)\")\ndef main(output):\n \"\"\"\n Generate a c7n-org subscriptions config file\n \"\"\"\n\n client = SubscriptionClient(Session().get_credentials())\n subs = [sub.serialize(True) for sub in client.subscriptions.list()]\n results = []\n for sub in subs:\n sub_info = {\n 'subscription_id': sub['subscriptionId'],\n 'name': sub['displayName']\n }\n results.append(sub_info)\n\n print(yaml_dump({'subscriptions': results}), file=output)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/c7n_org/scripts/azuresubs.py"}]} | 1,151 | 265 |
gh_patches_debug_1667 | rasdani/github-patches | git_diff | learningequality__kolibri-1464 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
hide not-recent learners on 'coach - recent activity' tab
See similar issue for channels: https://github.com/learningequality/kolibri/pull/1406
Now we need to do the same thing when you drill deeper and reach the learners list. For example, here we're showing all learners regardless of whether or not they've had recent activity:

</issue>
<code>
[start of kolibri/plugins/coach/serializers.py]
1 from dateutil.parser import parse
2
3 from django.db.models import Case, Count, F, IntegerField, Sum, Value as V, When
4 from django.db.models.functions import Coalesce
5 from kolibri.auth.models import FacilityUser
6 from kolibri.content.models import ContentNode
7 from kolibri.logger.models import ContentSummaryLog
8 from le_utils.constants import content_kinds
9 from rest_framework import serializers
10
11 from .utils.return_users import get_members_or_user
12
13
14 class UserReportSerializer(serializers.ModelSerializer):
15 progress = serializers.SerializerMethodField()
16 last_active = serializers.SerializerMethodField()
17
18 class Meta:
19 model = FacilityUser
20 fields = (
21 'pk', 'full_name', 'progress', 'last_active',
22 )
23
24 def get_progress(self, target_user):
25 content_node = ContentNode.objects.get(pk=self.context['view'].kwargs['content_node_id'])
26 # progress details for a topic node and everything under it
27 if content_node.kind == content_kinds.TOPIC:
28 kind_counts = content_node.get_descendant_kind_counts()
29 topic_details = ContentSummaryLog.objects \
30 .filter_by_topic(content_node) \
31 .filter(user=target_user) \
32 .values('kind') \
33 .annotate(total_progress=Sum('progress')) \
34 .annotate(log_count_total=Count('pk')) \
35 .annotate(log_count_complete=Sum(Case(When(progress=1, then=1), default=0, output_field=IntegerField())))
36 # evaluate queryset so we can add data for kinds that do not have logs
37 topic_details = list(topic_details)
38 for kind in topic_details:
39 del kind_counts[kind['kind']]
40 for key in kind_counts:
41 topic_details.append({'kind': key, 'total_progress': 0, 'log_count_total': 0, 'log_count_complete': 0})
42 return topic_details
43 else:
44 # progress details for a leaf node (exercise, video, etc.)
45 leaf_details = ContentSummaryLog.objects \
46 .filter(user=target_user) \
47 .filter(content_id=content_node.content_id) \
48 .annotate(total_progress=F('progress')) \
49 .values('kind', 'time_spent', 'total_progress')
50 return leaf_details if leaf_details else [{'kind': content_node.kind, 'time_spent': 0, 'total_progress': 0}]
51
52 def get_last_active(self, target_user):
53 content_node = ContentNode.objects.get(pk=self.context['view'].kwargs['content_node_id'])
54 try:
55 if content_node.kind == content_kinds.TOPIC:
56 return ContentSummaryLog.objects \
57 .filter_by_topic(content_node) \
58 .filter(user=target_user) \
59 .latest('end_timestamp').end_timestamp
60 else:
61 return ContentSummaryLog.objects \
62 .filter(user=target_user) \
63 .get(content_id=content_node.content_id).end_timestamp
64 except ContentSummaryLog.DoesNotExist:
65 return None
66
67
68 class ContentReportSerializer(serializers.ModelSerializer):
69 progress = serializers.SerializerMethodField()
70 last_active = serializers.SerializerMethodField()
71 parent = serializers.SerializerMethodField()
72
73 class Meta:
74 model = ContentNode
75 fields = (
76 'pk', 'content_id', 'title', 'progress', 'kind', 'last_active', 'parent',
77 )
78
79 def get_progress(self, target_node):
80 kwargs = self.context['view'].kwargs
81 if target_node.kind == content_kinds.TOPIC:
82 kind_counts = target_node.get_descendant_kind_counts()
83 # filter logs by each kind under target node, and sum progress over logs
84 progress_query = ContentSummaryLog.objects \
85 .filter_by_topic(target_node) \
86 .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id']))
87 if kwargs.get('last_active_time'):
88 progress_query.filter(end_timestamp__gte=parse(kwargs.get('last_active_time')))
89 progress = progress_query.values('kind') \
90 .annotate(total_progress=Sum('progress'))
91 # add kind counts under this node to progress dict
92 for kind in progress:
93 kind['node_count'] = kind_counts[kind['kind']]
94 del kind_counts[kind['kind']]
95 # evaluate queryset so we can add data for kinds that do not have logs
96 progress = list(progress)
97 for key in kind_counts:
98 progress.append({'kind': key, 'node_count': kind_counts[key], 'total_progress': 0})
99 return progress
100 else:
101 # filter logs by a specific leaf node and compute stats over queryset
102 leaf_node_stats_query = ContentSummaryLog.objects \
103 .filter(content_id=target_node.content_id) \
104 .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id']))
105 if kwargs.get('last_active_time'):
106 leaf_node_stats_query.filter(end_timestamp__gte=parse(kwargs.get('last_active_time')))
107 leaf_node_stats = leaf_node_stats_query.aggregate(
108 total_progress=Coalesce(Sum('progress'), V(0)),
109 log_count_total=Coalesce(Count('pk'), V(0)),
110 log_count_complete=Coalesce(Sum(Case(When(progress=1, then=1), default=0, output_field=IntegerField())), V(0)))
111 return [leaf_node_stats] # return as array for consistency in api
112
113 def get_last_active(self, target_node):
114 kwargs = self.context['view'].kwargs
115 try:
116 if target_node.kind == content_kinds.TOPIC:
117 return ContentSummaryLog.objects \
118 .filter_by_topic(target_node) \
119 .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id'])) \
120 .latest('end_timestamp').end_timestamp
121 else:
122 return ContentSummaryLog.objects \
123 .filter(content_id=target_node.content_id) \
124 .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id'])) \
125 .latest('end_timestamp').end_timestamp
126 except ContentSummaryLog.DoesNotExist:
127 return None
128
129 def get_parent(self, target_node):
130 # returns immediate parent
131 return target_node.get_ancestors().values('pk', 'title').last()
132
133
134 class ContentSummarySerializer(ContentReportSerializer):
135 ancestors = serializers.SerializerMethodField()
136 num_users = serializers.SerializerMethodField()
137
138 class Meta:
139 model = ContentNode
140 fields = (
141 'pk', 'content_id', 'title', 'progress', 'kind', 'last_active', 'ancestors', 'num_users',
142 )
143
144 def get_ancestors(self, target_node):
145 """
146 in descending order (root ancestor first, immediate parent last)
147 """
148 return target_node.get_ancestors().values('pk', 'title')
149
150 def get_num_users(self, target_node):
151 kwargs = self.context['view'].kwargs
152 return len(get_members_or_user(kwargs['collection_kind'], kwargs['collection_id']))
153
[end of kolibri/plugins/coach/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/plugins/coach/serializers.py b/kolibri/plugins/coach/serializers.py
--- a/kolibri/plugins/coach/serializers.py
+++ b/kolibri/plugins/coach/serializers.py
@@ -18,7 +18,7 @@
class Meta:
model = FacilityUser
fields = (
- 'pk', 'full_name', 'progress', 'last_active',
+ 'pk', 'username', 'full_name', 'progress', 'last_active',
)
def get_progress(self, target_user):
| {"golden_diff": "diff --git a/kolibri/plugins/coach/serializers.py b/kolibri/plugins/coach/serializers.py\n--- a/kolibri/plugins/coach/serializers.py\n+++ b/kolibri/plugins/coach/serializers.py\n@@ -18,7 +18,7 @@\n class Meta:\n model = FacilityUser\n fields = (\n- 'pk', 'full_name', 'progress', 'last_active',\n+ 'pk', 'username', 'full_name', 'progress', 'last_active',\n )\n \n def get_progress(self, target_user):\n", "issue": "hide not-recent learners on 'coach - recent activity' tab\nSee similar issue for channels: https://github.com/learningequality/kolibri/pull/1406\r\n\r\nNow we need to do the same thing for when you drill deeper and reach the learners list. For example here, we're showing all learners regardless of whether or not they've had recent activity:\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from dateutil.parser import parse\n\nfrom django.db.models import Case, Count, F, IntegerField, Sum, Value as V, When\nfrom django.db.models.functions import Coalesce\nfrom kolibri.auth.models import FacilityUser\nfrom kolibri.content.models import ContentNode\nfrom kolibri.logger.models import ContentSummaryLog\nfrom le_utils.constants import content_kinds\nfrom rest_framework import serializers\n\nfrom .utils.return_users import get_members_or_user\n\n\nclass UserReportSerializer(serializers.ModelSerializer):\n progress = serializers.SerializerMethodField()\n last_active = serializers.SerializerMethodField()\n\n class Meta:\n model = FacilityUser\n fields = (\n 'pk', 'full_name', 'progress', 'last_active',\n )\n\n def get_progress(self, target_user):\n content_node = ContentNode.objects.get(pk=self.context['view'].kwargs['content_node_id'])\n # progress details for a topic node and everything under it\n if content_node.kind == content_kinds.TOPIC:\n kind_counts = content_node.get_descendant_kind_counts()\n topic_details = ContentSummaryLog.objects \\\n .filter_by_topic(content_node) \\\n .filter(user=target_user) \\\n .values('kind') \\\n .annotate(total_progress=Sum('progress')) \\\n .annotate(log_count_total=Count('pk')) \\\n .annotate(log_count_complete=Sum(Case(When(progress=1, then=1), default=0, output_field=IntegerField())))\n # evaluate queryset so we can add data for kinds that do not have logs\n topic_details = list(topic_details)\n for kind in topic_details:\n del kind_counts[kind['kind']]\n for key in kind_counts:\n topic_details.append({'kind': key, 'total_progress': 0, 'log_count_total': 0, 'log_count_complete': 0})\n return topic_details\n else:\n # progress details for a leaf node (exercise, video, etc.)\n leaf_details = ContentSummaryLog.objects \\\n .filter(user=target_user) \\\n .filter(content_id=content_node.content_id) \\\n .annotate(total_progress=F('progress')) \\\n .values('kind', 'time_spent', 'total_progress')\n return leaf_details if leaf_details else [{'kind': content_node.kind, 'time_spent': 0, 'total_progress': 0}]\n\n def get_last_active(self, target_user):\n content_node = ContentNode.objects.get(pk=self.context['view'].kwargs['content_node_id'])\n try:\n if content_node.kind == content_kinds.TOPIC:\n return ContentSummaryLog.objects \\\n .filter_by_topic(content_node) \\\n .filter(user=target_user) \\\n .latest('end_timestamp').end_timestamp\n else:\n return ContentSummaryLog.objects \\\n .filter(user=target_user) \\\n .get(content_id=content_node.content_id).end_timestamp\n except ContentSummaryLog.DoesNotExist:\n return None\n\n\nclass ContentReportSerializer(serializers.ModelSerializer):\n progress = 
serializers.SerializerMethodField()\n last_active = serializers.SerializerMethodField()\n parent = serializers.SerializerMethodField()\n\n class Meta:\n model = ContentNode\n fields = (\n 'pk', 'content_id', 'title', 'progress', 'kind', 'last_active', 'parent',\n )\n\n def get_progress(self, target_node):\n kwargs = self.context['view'].kwargs\n if target_node.kind == content_kinds.TOPIC:\n kind_counts = target_node.get_descendant_kind_counts()\n # filter logs by each kind under target node, and sum progress over logs\n progress_query = ContentSummaryLog.objects \\\n .filter_by_topic(target_node) \\\n .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id']))\n if kwargs.get('last_active_time'):\n progress_query.filter(end_timestamp__gte=parse(kwargs.get('last_active_time')))\n progress = progress_query.values('kind') \\\n .annotate(total_progress=Sum('progress'))\n # add kind counts under this node to progress dict\n for kind in progress:\n kind['node_count'] = kind_counts[kind['kind']]\n del kind_counts[kind['kind']]\n # evaluate queryset so we can add data for kinds that do not have logs\n progress = list(progress)\n for key in kind_counts:\n progress.append({'kind': key, 'node_count': kind_counts[key], 'total_progress': 0})\n return progress\n else:\n # filter logs by a specific leaf node and compute stats over queryset\n leaf_node_stats_query = ContentSummaryLog.objects \\\n .filter(content_id=target_node.content_id) \\\n .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id']))\n if kwargs.get('last_active_time'):\n leaf_node_stats_query.filter(end_timestamp__gte=parse(kwargs.get('last_active_time')))\n leaf_node_stats = leaf_node_stats_query.aggregate(\n total_progress=Coalesce(Sum('progress'), V(0)),\n log_count_total=Coalesce(Count('pk'), V(0)),\n log_count_complete=Coalesce(Sum(Case(When(progress=1, then=1), default=0, output_field=IntegerField())), V(0)))\n return [leaf_node_stats] # return as array for consistency in api\n\n def get_last_active(self, target_node):\n kwargs = self.context['view'].kwargs\n try:\n if target_node.kind == content_kinds.TOPIC:\n return ContentSummaryLog.objects \\\n .filter_by_topic(target_node) \\\n .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id'])) \\\n .latest('end_timestamp').end_timestamp\n else:\n return ContentSummaryLog.objects \\\n .filter(content_id=target_node.content_id) \\\n .filter(user__in=get_members_or_user(kwargs['collection_kind'], kwargs['collection_id'])) \\\n .latest('end_timestamp').end_timestamp\n except ContentSummaryLog.DoesNotExist:\n return None\n\n def get_parent(self, target_node):\n # returns immediate parent\n return target_node.get_ancestors().values('pk', 'title').last()\n\n\nclass ContentSummarySerializer(ContentReportSerializer):\n ancestors = serializers.SerializerMethodField()\n num_users = serializers.SerializerMethodField()\n\n class Meta:\n model = ContentNode\n fields = (\n 'pk', 'content_id', 'title', 'progress', 'kind', 'last_active', 'ancestors', 'num_users',\n )\n\n def get_ancestors(self, target_node):\n \"\"\"\n in descending order (root ancestor first, immediate parent last)\n \"\"\"\n return target_node.get_ancestors().values('pk', 'title')\n\n def get_num_users(self, target_node):\n kwargs = self.context['view'].kwargs\n return len(get_members_or_user(kwargs['collection_kind'], kwargs['collection_id']))\n", "path": "kolibri/plugins/coach/serializers.py"}]} | 2,490 | 125 |
gh_patches_debug_13047 | rasdani/github-patches | git_diff | doccano__doccano-1558 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mutli-label text classification export issues: same classes but in different orders
How to reproduce the behaviour
---------
<!-- Before submitting an issue, make sure to check the docs and closed issues and FAQ to see if any of the solutions work for you. https://github.com/doccano/doccano/wiki/Frequently-Asked-Questions -->
We are two annotators on a multi-label classification project. When I export the annotations, for some examples my co-annotator and I have applied the same labels, but in the exported CSV they do not appear in the same order:
Annotator 1:
| text | labels |
|---|---|
| example 1 | label1#label2#label3 |
Annotator 2:
| text | labels |
|---|---|
| example 1 | label2#label3#label1 |
Since I use these CSVs to compare our annotations, this makes the comparison more difficult.
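A small workaround sketch for comparing the two exports in the meantime: normalize the label column so ordering no longer matters. The file names are placeholders, and the column names follow the exporter's `id`/`label` header shown in the code below:

```python
import csv

def normalized_labels(path):
    with open(path, newline='', encoding='utf-8') as f:
        return {
            row['id']: tuple(sorted(row['label'].split('#')))
            for row in csv.DictReader(f)
        }

same = normalized_labels('annotator1.csv') == normalized_labels('annotator2.csv')
```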
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
Your Environment
---------
<!-- Include details of your environment.-->
* Operating System: Debian
* Python Version Used: Don't know, I pulled the latest version from Docker Hub
* When you install doccano: 3 days ago
* How did you install doccano (Heroku button etc): Docker
</issue>
<code>
[start of backend/api/views/download/writer.py]
1 import abc
2 import csv
3 import itertools
4 import json
5 import os
6 import uuid
7 import zipfile
8 from collections import defaultdict
9 from typing import Dict, Iterable, Iterator, List
10
11 from .data import Record
12
13
14 class BaseWriter:
15
16 def __init__(self, tmpdir: str):
17 self.tmpdir = tmpdir
18
19 @abc.abstractmethod
20 def write(self, records: Iterator[Record]) -> str:
21 raise NotImplementedError()
22
23 def write_zip(self, filenames: Iterable):
24 save_file = '{}.zip'.format(os.path.join(self.tmpdir, str(uuid.uuid4())))
25 with zipfile.ZipFile(save_file, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
26 for file in filenames:
27 zf.write(filename=file, arcname=os.path.basename(file))
28 return save_file
29
30
31 class LineWriter(BaseWriter):
32 extension = 'txt'
33
34 def write(self, records: Iterator[Record]) -> str:
35 files = {}
36 for record in records:
37 filename = os.path.join(self.tmpdir, f'{record.user}.{self.extension}')
38 if filename not in files:
39 f = open(filename, mode='a')
40 files[filename] = f
41 f = files[filename]
42 line = self.create_line(record)
43 f.write(f'{line}\n')
44 for f in files.values():
45 f.close()
46 save_file = self.write_zip(files)
47 for file in files:
48 os.remove(file)
49 return save_file
50
51 @abc.abstractmethod
52 def create_line(self, record) -> str:
53 raise NotImplementedError()
54
55
56 class CsvWriter(BaseWriter):
57 extension = 'csv'
58
59 def write(self, records: Iterator[Record]) -> str:
60 writers = {}
61 file_handlers = set()
62 records = list(records)
63 header = self.create_header(records)
64 for record in records:
65 filename = os.path.join(self.tmpdir, f'{record.user}.{self.extension}')
66 if filename not in writers:
67 f = open(filename, mode='a', encoding='utf-8')
68 writer = csv.DictWriter(f, header)
69 writer.writeheader()
70 writers[filename] = writer
71 file_handlers.add(f)
72 writer = writers[filename]
73 line = self.create_line(record)
74 writer.writerow(line)
75
76 for f in file_handlers:
77 f.close()
78 save_file = self.write_zip(writers)
79 for file in writers:
80 os.remove(file)
81 return save_file
82
83 def create_line(self, record) -> Dict:
84 return {
85 'id': record.id,
86 'data': record.data,
87 'label': '#'.join(record.label),
88 **record.metadata
89 }
90
91 def create_header(self, records: List[Record]) -> Iterable[str]:
92 header = ['id', 'data', 'label']
93 header += sorted(set(itertools.chain(*[r.metadata.keys() for r in records])))
94 return header
95
96
97 class JSONWriter(BaseWriter):
98 extension = 'json'
99
100 def write(self, records: Iterator[Record]) -> str:
101 writers = {}
102 contents = defaultdict(list)
103 for record in records:
104 filename = os.path.join(self.tmpdir, f'{record.user}.{self.extension}')
105 if filename not in writers:
106 f = open(filename, mode='a', encoding='utf-8')
107 writers[filename] = f
108 line = self.create_line(record)
109 contents[filename].append(line)
110
111 for filename, f in writers.items():
112 content = contents[filename]
113 json.dump(content, f, ensure_ascii=False)
114 f.close()
115
116 save_file = self.write_zip(writers)
117 for file in writers:
118 os.remove(file)
119 return save_file
120
121 def create_line(self, record) -> Dict:
122 return {
123 'id': record.id,
124 'data': record.data,
125 'label': record.label,
126 **record.metadata
127 }
128
129
130 class JSONLWriter(LineWriter):
131 extension = 'jsonl'
132
133 def create_line(self, record):
134 return json.dumps({
135 'id': record.id,
136 'data': record.data,
137 'label': record.label,
138 **record.metadata
139 }, ensure_ascii=False)
140
141
142 class FastTextWriter(LineWriter):
143 extension = 'txt'
144
145 def create_line(self, record):
146 line = [f'__label__{label}' for label in record.label]
147 line.append(record.data)
148 line = ' '.join(line)
149 return line
150
[end of backend/api/views/download/writer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/api/views/download/writer.py b/backend/api/views/download/writer.py
--- a/backend/api/views/download/writer.py
+++ b/backend/api/views/download/writer.py
@@ -84,7 +84,7 @@
return {
'id': record.id,
'data': record.data,
- 'label': '#'.join(record.label),
+ 'label': '#'.join(sorted(record.label)),
**record.metadata
}
@@ -144,6 +144,7 @@
def create_line(self, record):
line = [f'__label__{label}' for label in record.label]
+ line.sort()
line.append(record.data)
line = ' '.join(line)
return line
| {"golden_diff": "diff --git a/backend/api/views/download/writer.py b/backend/api/views/download/writer.py\n--- a/backend/api/views/download/writer.py\n+++ b/backend/api/views/download/writer.py\n@@ -84,7 +84,7 @@\n return {\n 'id': record.id,\n 'data': record.data,\n- 'label': '#'.join(record.label),\n+ 'label': '#'.join(sorted(record.label)),\n **record.metadata\n }\n \n@@ -144,6 +144,7 @@\n \n def create_line(self, record):\n line = [f'__label__{label}' for label in record.label]\n+ line.sort()\n line.append(record.data)\n line = ' '.join(line)\n return line\n", "issue": "Mutli-label text classification export issues: same classes but in different orders\nHow to reproduce the behaviour\r\n---------\r\n<!-- Before submitting an issue, make sure to check the docs and closed issues and FAQ to see if any of the solutions work for you. https://github.com/doccano/doccano/wiki/Frequently-Asked-Questions -->\r\nWe are two annotators on a multi-label classification project. When I export the annotations, for some examples, me and my co-annotator have put the same labels, but on the exported CSV, they do not appear in the same order:\r\n\r\nAnnotator 1:\r\n\r\n| text | labels |\r\n| example 1 | label1#label2#label3 |\r\n\r\nAnnotator 2:\r\n\r\n| text | labels |\r\n| example 1 | label2#label3#label1 |\r\n\r\nAs I try to use these CSVs for comparing our annotations, this brings more difficulty.\r\n\r\n<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->\r\n\r\nYour Environment\r\n---------\r\n<!-- Include details of your environment.-->\r\n* Operating System: Debian\r\n* Python Version Used: Don't know, I pulled the latest version from Docker Hub\r\n* When you install doccano: 3 days ago\r\n* How did you install doccano (Heroku button etc): Docker\r\n\n", "before_files": [{"content": "import abc\nimport csv\nimport itertools\nimport json\nimport os\nimport uuid\nimport zipfile\nfrom collections import defaultdict\nfrom typing import Dict, Iterable, Iterator, List\n\nfrom .data import Record\n\n\nclass BaseWriter:\n\n def __init__(self, tmpdir: str):\n self.tmpdir = tmpdir\n\n @abc.abstractmethod\n def write(self, records: Iterator[Record]) -> str:\n raise NotImplementedError()\n\n def write_zip(self, filenames: Iterable):\n save_file = '{}.zip'.format(os.path.join(self.tmpdir, str(uuid.uuid4())))\n with zipfile.ZipFile(save_file, 'w', compression=zipfile.ZIP_DEFLATED) as zf:\n for file in filenames:\n zf.write(filename=file, arcname=os.path.basename(file))\n return save_file\n\n\nclass LineWriter(BaseWriter):\n extension = 'txt'\n\n def write(self, records: Iterator[Record]) -> str:\n files = {}\n for record in records:\n filename = os.path.join(self.tmpdir, f'{record.user}.{self.extension}')\n if filename not in files:\n f = open(filename, mode='a')\n files[filename] = f\n f = files[filename]\n line = self.create_line(record)\n f.write(f'{line}\\n')\n for f in files.values():\n f.close()\n save_file = self.write_zip(files)\n for file in files:\n os.remove(file)\n return save_file\n\n @abc.abstractmethod\n def create_line(self, record) -> str:\n raise NotImplementedError()\n\n\nclass CsvWriter(BaseWriter):\n extension = 'csv'\n\n def write(self, records: Iterator[Record]) -> str:\n writers = {}\n file_handlers = set()\n records = list(records)\n header = self.create_header(records)\n for record in records:\n filename = os.path.join(self.tmpdir, f'{record.user}.{self.extension}')\n if filename not in writers:\n f = open(filename, mode='a', 
encoding='utf-8')\n writer = csv.DictWriter(f, header)\n writer.writeheader()\n writers[filename] = writer\n file_handlers.add(f)\n writer = writers[filename]\n line = self.create_line(record)\n writer.writerow(line)\n\n for f in file_handlers:\n f.close()\n save_file = self.write_zip(writers)\n for file in writers:\n os.remove(file)\n return save_file\n\n def create_line(self, record) -> Dict:\n return {\n 'id': record.id,\n 'data': record.data,\n 'label': '#'.join(record.label),\n **record.metadata\n }\n\n def create_header(self, records: List[Record]) -> Iterable[str]:\n header = ['id', 'data', 'label']\n header += sorted(set(itertools.chain(*[r.metadata.keys() for r in records])))\n return header\n\n\nclass JSONWriter(BaseWriter):\n extension = 'json'\n\n def write(self, records: Iterator[Record]) -> str:\n writers = {}\n contents = defaultdict(list)\n for record in records:\n filename = os.path.join(self.tmpdir, f'{record.user}.{self.extension}')\n if filename not in writers:\n f = open(filename, mode='a', encoding='utf-8')\n writers[filename] = f\n line = self.create_line(record)\n contents[filename].append(line)\n\n for filename, f in writers.items():\n content = contents[filename]\n json.dump(content, f, ensure_ascii=False)\n f.close()\n\n save_file = self.write_zip(writers)\n for file in writers:\n os.remove(file)\n return save_file\n\n def create_line(self, record) -> Dict:\n return {\n 'id': record.id,\n 'data': record.data,\n 'label': record.label,\n **record.metadata\n }\n\n\nclass JSONLWriter(LineWriter):\n extension = 'jsonl'\n\n def create_line(self, record):\n return json.dumps({\n 'id': record.id,\n 'data': record.data,\n 'label': record.label,\n **record.metadata\n }, ensure_ascii=False)\n\n\nclass FastTextWriter(LineWriter):\n extension = 'txt'\n\n def create_line(self, record):\n line = [f'__label__{label}' for label in record.label]\n line.append(record.data)\n line = ' '.join(line)\n return line\n", "path": "backend/api/views/download/writer.py"}]} | 2,119 | 164 |
gh_patches_debug_33390 | rasdani/github-patches | git_diff | kivy__kivy-1947 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TextInput crashes while using clipboard bubble
After opening the clipboard bubble, the keyboard doesn't close anymore.
Moreover, when closing the application, it gives:
```
I/python ( 4932): [INFO ] [Clipboard ] Provider: dummy(['clipboard_android'] ignored)
I/python ( 4932): [INFO ] [Base ] Leaving application in progress...
I/python ( 4932): Python for android ended.
W/dalvikvm( 4932): threadid=10: thread exiting with uncaught exception (group=0x4001d560)
E/AndroidRuntime( 4932): FATAL EXCEPTION: Thread-11
E/AndroidRuntime( 4932): java.lang.NoClassDefFoundError: android.content.ClipData
E/AndroidRuntime( 4932): at org.renpy.android.SDLSurfaceView.nativeInit(Native Method)
E/AndroidRuntime( 4932): at org.renpy.android.SDLSurfaceView.run(SDLSurfaceView.java:725)
E/AndroidRuntime( 4932): at java.lang.Thread.run(Thread.java:1019)
E/AndroidRuntime( 4932): Caused by: java.lang.ClassNotFoundException: android.content.ClipData in loader dalvik.system.PathClassLoader[/data/app/org.emanuele.LyricsDL-2.apk]
E/AndroidRuntime( 4932): at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:240)
E/AndroidRuntime( 4932): at java.lang.ClassLoader.loadClass(ClassLoader.java:551)
E/AndroidRuntime( 4932): at java.lang.ClassLoader.loadClass(ClassLoader.java:511)
E/AndroidRuntime( 4932): ... 3 more
```
If specifying "use_bubble: False" it works correctly, but the clipboard is obviously disabled.
android sdk 14
kivy 1.8.0
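One way to avoid touching `android.content.ClipData` where it cannot be resolved is to look the class up lazily and gate on the device SDK level; a sketch using pyjnius (on-device only, and not necessarily the exact upstream fix):

```python
from jnius import autoclass

VER = autoclass('android.os.Build$VERSION')
AndroidString = autoclass('java.lang.String')

def put_text(clippy, data):
    if VER.SDK_INT < 11:
        clippy.setText(AndroidString(data))  # pre-Honeycomb text-only clipboard API
    else:
        ClipData = autoclass('android.content.ClipData')  # resolved only when supported
        clippy.setPrimaryClip(
            ClipData.newPlainText(AndroidString(""), AndroidString(data)))
```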
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/1436926-textinput-crashes-while-using-clipboard-bubble?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github).
</bountysource-plugin>
</issue>
<code>
[start of kivy/core/clipboard/clipboard_android.py]
1 '''
2 Clipboard Android
3 =================
4
5 Android implementation of Clipboard provider, using Pyjnius.
6 '''
7
8 __all__ = ('ClipboardAndroid', )
9
10 from kivy.core.clipboard import ClipboardBase
11 from jnius import autoclass
12 from android.runnable import run_on_ui_thread
13
14 AndroidString = autoclass('java.lang.String')
15 PythonActivity = autoclass('org.renpy.android.PythonActivity')
16 Context = autoclass('android.content.Context')
17 ClipData = autoclass('android.content.ClipData')
18 ClipDescription = autoclass('android.content.ClipDescription')
19
20
21 class ClipboardAndroid(ClipboardBase):
22
23 def __init__(self):
24 super(ClipboardAndroid, self).__init__()
25 self._clipboard = None
26 self._data = dict()
27 self._data['text/plain'] = None
28 self._data['application/data'] = None
29 PythonActivity._clipboard = None
30
31 def get(self, mimetype='text/plain'):
32 return self._get(mimetype)
33
34 def put(self, data, mimetype='text/plain'):
35 self._set(data, mimetype)
36
37 def get_types(self):
38 return list(self._data.keys())
39
40 @run_on_ui_thread
41 def _initialize_clipboard(self):
42 PythonActivity._clipboard = PythonActivity.getSystemService(
43 Context.CLIPBOARD_SERVICE)
44
45 def _get_clipboard(f):
46 def called(*args, **kargs):
47 self = args[0]
48 if not PythonActivity._clipboard:
49 self._initialize_clipboard()
50 import time
51 while not PythonActivity._clipboard:
52 time.sleep(.01)
53 return f(*args, **kargs)
54 return called
55
56 @_get_clipboard
57 def _get(self, mimetype='text/plain'):
58 clippy = PythonActivity._clipboard
59 primary_clip = clippy.getPrimaryClip()
60 if primary_clip and clippy.getPrimaryClipDescription().hasMimeType(
61 ClipDescription.MIMETYPE_TEXT_PLAIN):
62 data = primary_clip.getItemAt(0).getText().toString()
63 else:
64 # TODO: non text data types Not yet implemented
65 data = ''
66 return data
67
68 @_get_clipboard
69 def _set(self, data, mimetype):
70 clippy = PythonActivity._clipboard
71 new_clip = ClipData.newPlainText(AndroidString(""),
72 AndroidString(data))
73 # put text data onto clipboard
74 clippy.setPrimaryClip(new_clip)
75
[end of kivy/core/clipboard/clipboard_android.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kivy/core/clipboard/clipboard_android.py b/kivy/core/clipboard/clipboard_android.py
--- a/kivy/core/clipboard/clipboard_android.py
+++ b/kivy/core/clipboard/clipboard_android.py
@@ -14,8 +14,8 @@
AndroidString = autoclass('java.lang.String')
PythonActivity = autoclass('org.renpy.android.PythonActivity')
Context = autoclass('android.content.Context')
-ClipData = autoclass('android.content.ClipData')
-ClipDescription = autoclass('android.content.ClipDescription')
+VER = autoclass('android.os.Build$VERSION')
+sdk = VER.SDK_INT
class ClipboardAndroid(ClipboardBase):
@@ -56,19 +56,29 @@
@_get_clipboard
def _get(self, mimetype='text/plain'):
clippy = PythonActivity._clipboard
- primary_clip = clippy.getPrimaryClip()
- if primary_clip and clippy.getPrimaryClipDescription().hasMimeType(
- ClipDescription.MIMETYPE_TEXT_PLAIN):
- data = primary_clip.getItemAt(0).getText().toString()
+ if sdk < 11:
+ data = clippy.getText().toString()
else:
- # TODO: non text data types Not yet implemented
- data = ''
+ ClipDescription = autoclass('android.content.ClipDescription')
+ primary_clip = clippy.getPrimaryClip()
+ if primary_clip and clippy.getPrimaryClipDescription().hasMimeType(
+ ClipDescription.MIMETYPE_TEXT_PLAIN):
+ data = primary_clip.getItemAt(0).getText().toString()
+ else:
+ # TODO: non text data types Not yet implemented
+ data = ''
return data
@_get_clipboard
def _set(self, data, mimetype):
clippy = PythonActivity._clipboard
- new_clip = ClipData.newPlainText(AndroidString(""),
+
+ if sdk < 11:
+ #versions previous to honeycomb
+ clippy.setText(AndroidString(data))
+ else:
+ ClipData = autoclass('android.content.ClipData')
+ new_clip = ClipData.newPlainText(AndroidString(""),
AndroidString(data))
- # put text data onto clipboard
- clippy.setPrimaryClip(new_clip)
+ # put text data onto clipboard
+ clippy.setPrimaryClip(new_clip)
| {"golden_diff": "diff --git a/kivy/core/clipboard/clipboard_android.py b/kivy/core/clipboard/clipboard_android.py\n--- a/kivy/core/clipboard/clipboard_android.py\n+++ b/kivy/core/clipboard/clipboard_android.py\n@@ -14,8 +14,8 @@\n AndroidString = autoclass('java.lang.String')\n PythonActivity = autoclass('org.renpy.android.PythonActivity')\n Context = autoclass('android.content.Context')\n-ClipData = autoclass('android.content.ClipData')\n-ClipDescription = autoclass('android.content.ClipDescription')\n+VER = autoclass('android.os.Build$VERSION')\n+sdk = VER.SDK_INT\n \n \n class ClipboardAndroid(ClipboardBase):\n@@ -56,19 +56,29 @@\n @_get_clipboard\n def _get(self, mimetype='text/plain'):\n clippy = PythonActivity._clipboard\n- primary_clip = clippy.getPrimaryClip()\n- if primary_clip and clippy.getPrimaryClipDescription().hasMimeType(\n- ClipDescription.MIMETYPE_TEXT_PLAIN):\n- data = primary_clip.getItemAt(0).getText().toString()\n+ if sdk < 11:\n+ data = clippy.getText().toString()\n else:\n- # TODO: non text data types Not yet implemented\n- data = ''\n+ ClipDescription = autoclass('android.content.ClipDescription')\n+ primary_clip = clippy.getPrimaryClip()\n+ if primary_clip and clippy.getPrimaryClipDescription().hasMimeType(\n+ ClipDescription.MIMETYPE_TEXT_PLAIN):\n+ data = primary_clip.getItemAt(0).getText().toString()\n+ else:\n+ # TODO: non text data types Not yet implemented\n+ data = ''\n return data\n \n @_get_clipboard\n def _set(self, data, mimetype):\n clippy = PythonActivity._clipboard\n- new_clip = ClipData.newPlainText(AndroidString(\"\"),\n+\n+ if sdk < 11:\n+ #versions previous to honeycomb\n+ clippy.setText(AndroidString(data))\n+ else:\n+ ClipData = autoclass('android.content.ClipData')\n+ new_clip = ClipData.newPlainText(AndroidString(\"\"),\n AndroidString(data))\n- # put text data onto clipboard\n- clippy.setPrimaryClip(new_clip)\n+ # put text data onto clipboard\n+ clippy.setPrimaryClip(new_clip)\n", "issue": "TextInput crashes while using clipboard bubble\nAfter opening clipboard bubble, keyboard doesn't close anymore.\nMoreover, when closing application, it gives:\n\n```\nI/python ( 4932): [INFO ] [Clipboard ] Provider: dummy(['clipboard_android'] ignored)\nI/python ( 4932): [INFO ] [Base ] Leaving application in progress...\nI/python ( 4932): Python for android ended.\nW/dalvikvm( 4932): threadid=10: thread exiting with uncaught exception (group=0x4001d560)\nE/AndroidRuntime( 4932): FATAL EXCEPTION: Thread-11\nE/AndroidRuntime( 4932): java.lang.NoClassDefFoundError: android.content.ClipData\nE/AndroidRuntime( 4932): at org.renpy.android.SDLSurfaceView.nativeInit(Native Method)\nE/AndroidRuntime( 4932): at org.renpy.android.SDLSurfaceView.run(SDLSurfaceView.java:725)\nE/AndroidRuntime( 4932): at java.lang.Thread.run(Thread.java:1019)\nE/AndroidRuntime( 4932): Caused by: java.lang.ClassNotFoundException: android.content.ClipData in loader dalvik.system.PathClassLoader[/data/app/org.emanuele.LyricsDL-2.apk]\nE/AndroidRuntime( 4932): at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:240)\nE/AndroidRuntime( 4932): at java.lang.ClassLoader.loadClass(ClassLoader.java:551)\nE/AndroidRuntime( 4932): at java.lang.ClassLoader.loadClass(ClassLoader.java:511)\nE/AndroidRuntime( 4932): ... 3 more\n```\n\nIf specifing \"use_bubble: False\" it works correctly, but clipboard is obviously disabled.\n\nandroid sdk 14\nkivy 1.8.0\n## <bountysource-plugin>\n\nWant to back this issue? 
**[Post a bounty on it!](https://www.bountysource.com/issues/1436926-textinput-crashes-while-using-clipboard-bubble?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github).\n</bountysource-plugin>\n\n", "before_files": [{"content": "'''\nClipboard Android\n=================\n\nAndroid implementation of Clipboard provider, using Pyjnius.\n'''\n\n__all__ = ('ClipboardAndroid', )\n\nfrom kivy.core.clipboard import ClipboardBase\nfrom jnius import autoclass\nfrom android.runnable import run_on_ui_thread\n\nAndroidString = autoclass('java.lang.String')\nPythonActivity = autoclass('org.renpy.android.PythonActivity')\nContext = autoclass('android.content.Context')\nClipData = autoclass('android.content.ClipData')\nClipDescription = autoclass('android.content.ClipDescription')\n\n\nclass ClipboardAndroid(ClipboardBase):\n\n def __init__(self):\n super(ClipboardAndroid, self).__init__()\n self._clipboard = None\n self._data = dict()\n self._data['text/plain'] = None\n self._data['application/data'] = None\n PythonActivity._clipboard = None\n\n def get(self, mimetype='text/plain'):\n return self._get(mimetype)\n\n def put(self, data, mimetype='text/plain'):\n self._set(data, mimetype)\n\n def get_types(self):\n return list(self._data.keys())\n\n @run_on_ui_thread\n def _initialize_clipboard(self):\n PythonActivity._clipboard = PythonActivity.getSystemService(\n Context.CLIPBOARD_SERVICE)\n\n def _get_clipboard(f):\n def called(*args, **kargs):\n self = args[0]\n if not PythonActivity._clipboard:\n self._initialize_clipboard()\n import time\n while not PythonActivity._clipboard:\n time.sleep(.01)\n return f(*args, **kargs)\n return called\n\n @_get_clipboard\n def _get(self, mimetype='text/plain'):\n clippy = PythonActivity._clipboard\n primary_clip = clippy.getPrimaryClip()\n if primary_clip and clippy.getPrimaryClipDescription().hasMimeType(\n ClipDescription.MIMETYPE_TEXT_PLAIN):\n data = primary_clip.getItemAt(0).getText().toString()\n else:\n # TODO: non text data types Not yet implemented\n data = ''\n return data\n\n @_get_clipboard\n def _set(self, data, mimetype):\n clippy = PythonActivity._clipboard\n new_clip = ClipData.newPlainText(AndroidString(\"\"),\n AndroidString(data))\n # put text data onto clipboard\n clippy.setPrimaryClip(new_clip)\n", "path": "kivy/core/clipboard/clipboard_android.py"}]} | 1,776 | 514 |
gh_patches_debug_9553 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3064 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not really possible to override settings
## Description
I want to override the Mathesar settings in order to allow LDAP login using django-auth-ldap. I changed the `config/settings/production.py` file that mentions `# Override default settings` and added the needed configuration.
This worked fine; however, that file is under version control, so if it changes upstream I'll get a conflict and will also need to merge my changes. The usual way to implement this functionality is to add a *non-tracked* `local.py` file that contains any extra configuration for each environment (either dev or production) and import *that* file from the corresponding settings module, i.e. `production.py` would be changed to:
```python
# Override default settings
try:
from .local import *
except ImportError:
pass
```
This way, if the `local.py` file is there it will be used to override the config, but if it isn't there it will be ignored.
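For example, a `config/settings/local.py` kept out of version control could then hold the environment-specific bits (the LDAP values below are placeholders, not a complete django-auth-ldap setup):

```python
# config/settings/local.py -- not tracked by git
AUTHENTICATION_BACKENDS = [
    'django_auth_ldap.backend.LDAPBackend',
    'django.contrib.auth.backends.ModelBackend',
]
AUTH_LDAP_SERVER_URI = 'ldap://ldap.example.com'
```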
## Expected behavior
Being able to override django settings for my environment *without* keeping a fork.
## To Reproduce
Change the production.py file and you'll see that it's version controlled so it can't be easily changed!
## Environment
Not needed
## Additional context
I'd be happy to provide a PR implementing the functionality described here, i.e. allow an untracked `local.py` file to override Django settings for each *user/environment*.
</issue>
<code>
[start of config/settings/development.py]
1 from config.settings.common_settings import * # noqa
2
3 # Override default settings
4
[end of config/settings/development.py]
[start of config/settings/production.py]
1 from config.settings.common_settings import * # noqa
2
3 # Override default settings
4
[end of config/settings/production.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/config/settings/development.py b/config/settings/development.py
--- a/config/settings/development.py
+++ b/config/settings/development.py
@@ -1,3 +1,10 @@
from config.settings.common_settings import * # noqa
# Override default settings
+
+
+# Use a local.py module for settings that shouldn't be version tracked
+try:
+ from .local import * # noqa
+except ImportError:
+ pass
diff --git a/config/settings/production.py b/config/settings/production.py
--- a/config/settings/production.py
+++ b/config/settings/production.py
@@ -1,3 +1,10 @@
from config.settings.common_settings import * # noqa
# Override default settings
+
+
+# Use a local.py module for settings that shouldn't be version tracked
+try:
+ from .local import * # noqa
+except ImportError:
+ pass
| {"golden_diff": "diff --git a/config/settings/development.py b/config/settings/development.py\n--- a/config/settings/development.py\n+++ b/config/settings/development.py\n@@ -1,3 +1,10 @@\n from config.settings.common_settings import * # noqa\n \n # Override default settings\n+\n+\n+# Use a local.py module for settings that shouldn't be version tracked\n+try:\n+ from .local import * # noqa\n+except ImportError:\n+ pass\ndiff --git a/config/settings/production.py b/config/settings/production.py\n--- a/config/settings/production.py\n+++ b/config/settings/production.py\n@@ -1,3 +1,10 @@\n from config.settings.common_settings import * # noqa\n \n # Override default settings\n+\n+\n+# Use a local.py module for settings that shouldn't be version tracked\n+try:\n+ from .local import * # noqa \n+except ImportError:\n+ pass\n", "issue": "Not really possible to override settings\n## Description\r\nI want to override the mathsar settings in order to allow LDAP login using django-auth-ldap. I changed the `config/settings/production.py` file that mentions: `# Override default settings ` and added the needed configuration. \r\n\r\nThis worked fine however that file is under version control so if it is changed on the origin I'll get a conflict and would need to also merge my changes. The usual way to implement this functionality is to add a *non tracked* `local.py` file that would contain any extra configuration for each environment (either dev or production) and import *that* file from the corresponding file. I.e the production.py would be changed to:\r\n\r\n```python\r\n# Override default settings \r\n\r\ntry:\r\n from .local import *\r\nexcept ImportError:\r\n pass\r\n```\r\n\r\nThis way, if the local.py file is there it will be used to override the config but if it isnt' there it will be ignored. \r\n\r\n## Expected behavior\r\nBeing able to override django settings for my environment *without* keeping a fork.\r\n\r\n## To Reproduce\r\nChange the production.py file and you'll see that it's version controlled so it can't be easily changed!\r\n\r\n## Environment\r\nNot needed\r\n\r\n## Additional context\r\nI'd be happy to provide a PR implementing the functionality described here, i.e allow an untracked local.py file to override django settings for each *user/environment*.\n", "before_files": [{"content": "from config.settings.common_settings import * # noqa\n\n# Override default settings\n", "path": "config/settings/development.py"}, {"content": "from config.settings.common_settings import * # noqa\n\n# Override default settings\n", "path": "config/settings/production.py"}]} | 885 | 192 |
gh_patches_debug_4675 | rasdani/github-patches | git_diff | pypa__pip-5931 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip uses deprecated SafeConfigParser
* Pip version: 9.0.1
* Python version: 3.6.1
* Operating system: Mac OS X 10.12.4
### Description:
With `error::DeprecationWarning` in `PYTHONWARNINGS`:
```
pip uninstall -y faker
/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/pep425tags.py:260: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Exception:
Traceback (most recent call last):
File "/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/commands/uninstall.py", line 76, in run
requirement_set.uninstall(auto_confirm=options.yes)
File "/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/req/req_set.py", line 346, in uninstall
req.uninstall(auto_confirm=auto_confirm)
File "/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/req/req_install.py", line 732, in uninstall
config = configparser.SafeConfigParser(**options)
File "/Users/davidchudzicki/.cache/hypothesis-build-runtimes/versions/python3.6/lib/python3.6/configparser.py", line 1212, in __init__
DeprecationWarning, stacklevel=2
DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
```
</issue>
<code>
[start of src/pip/_internal/vcs/mercurial.py]
1 from __future__ import absolute_import
2
3 import logging
4 import os
5
6 from pip._vendor.six.moves import configparser
7
8 from pip._internal.download import path_to_url
9 from pip._internal.utils.misc import display_path, make_vcs_requirement_url
10 from pip._internal.utils.temp_dir import TempDirectory
11 from pip._internal.vcs import VersionControl, vcs
12
13 logger = logging.getLogger(__name__)
14
15
16 class Mercurial(VersionControl):
17 name = 'hg'
18 dirname = '.hg'
19 repo_name = 'clone'
20 schemes = ('hg', 'hg+http', 'hg+https', 'hg+ssh', 'hg+static-http')
21
22 def get_base_rev_args(self, rev):
23 return [rev]
24
25 def export(self, location):
26 """Export the Hg repository at the url to the destination location"""
27 with TempDirectory(kind="export") as temp_dir:
28 self.unpack(temp_dir.path)
29
30 self.run_command(
31 ['archive', location], show_stdout=False, cwd=temp_dir.path
32 )
33
34 def fetch_new(self, dest, url, rev_options):
35 rev_display = rev_options.to_display()
36 logger.info(
37 'Cloning hg %s%s to %s',
38 url,
39 rev_display,
40 display_path(dest),
41 )
42 self.run_command(['clone', '--noupdate', '-q', url, dest])
43 cmd_args = ['update', '-q'] + rev_options.to_args()
44 self.run_command(cmd_args, cwd=dest)
45
46 def switch(self, dest, url, rev_options):
47 repo_config = os.path.join(dest, self.dirname, 'hgrc')
48 config = configparser.SafeConfigParser()
49 try:
50 config.read(repo_config)
51 config.set('paths', 'default', url)
52 with open(repo_config, 'w') as config_file:
53 config.write(config_file)
54 except (OSError, configparser.NoSectionError) as exc:
55 logger.warning(
56 'Could not switch Mercurial repository to %s: %s', url, exc,
57 )
58 else:
59 cmd_args = ['update', '-q'] + rev_options.to_args()
60 self.run_command(cmd_args, cwd=dest)
61
62 def update(self, dest, url, rev_options):
63 self.run_command(['pull', '-q'], cwd=dest)
64 cmd_args = ['update', '-q'] + rev_options.to_args()
65 self.run_command(cmd_args, cwd=dest)
66
67 def get_url(self, location):
68 url = self.run_command(
69 ['showconfig', 'paths.default'],
70 show_stdout=False, cwd=location).strip()
71 if self._is_local_repository(url):
72 url = path_to_url(url)
73 return url.strip()
74
75 def get_revision(self, location):
76 current_revision = self.run_command(
77 ['parents', '--template={rev}'],
78 show_stdout=False, cwd=location).strip()
79 return current_revision
80
81 def get_revision_hash(self, location):
82 current_rev_hash = self.run_command(
83 ['parents', '--template={node}'],
84 show_stdout=False, cwd=location).strip()
85 return current_rev_hash
86
87 def get_src_requirement(self, dist, location):
88 repo = self.get_url(location)
89 if not repo.lower().startswith('hg:'):
90 repo = 'hg+' + repo
91 current_rev_hash = self.get_revision_hash(location)
92 egg_project_name = dist.egg_name().split('-', 1)[0]
93 return make_vcs_requirement_url(repo, current_rev_hash,
94 egg_project_name)
95
96 def is_commit_id_equal(self, dest, name):
97 """Always assume the versions don't match"""
98 return False
99
100
101 vcs.register(Mercurial)
102
[end of src/pip/_internal/vcs/mercurial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pip/_internal/vcs/mercurial.py b/src/pip/_internal/vcs/mercurial.py
--- a/src/pip/_internal/vcs/mercurial.py
+++ b/src/pip/_internal/vcs/mercurial.py
@@ -45,7 +45,7 @@
def switch(self, dest, url, rev_options):
repo_config = os.path.join(dest, self.dirname, 'hgrc')
- config = configparser.SafeConfigParser()
+ config = configparser.RawConfigParser()
try:
config.read(repo_config)
config.set('paths', 'default', url)
| {"golden_diff": "diff --git a/src/pip/_internal/vcs/mercurial.py b/src/pip/_internal/vcs/mercurial.py\n--- a/src/pip/_internal/vcs/mercurial.py\n+++ b/src/pip/_internal/vcs/mercurial.py\n@@ -45,7 +45,7 @@\n \n def switch(self, dest, url, rev_options):\n repo_config = os.path.join(dest, self.dirname, 'hgrc')\n- config = configparser.SafeConfigParser()\n+ config = configparser.RawConfigParser()\n try:\n config.read(repo_config)\n config.set('paths', 'default', url)\n", "issue": "pip uses deprecated SafeConfigParser\n* Pip version: 9.0.1\r\n* Python version: 3.6.1\r\n* Operating system: Mac OS X 10.12.4\r\n\r\n### Description:\r\n\r\nWith `error::DeprecationWarning` in `PYTHONWARNINGS`:\r\n\r\n```\r\npip uninstall -y faker\r\n/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/pep425tags.py:260: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\nException:\r\nTraceback (most recent call last):\r\n File \"/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/basecommand.py\", line 215, in main\r\n status = self.run(options, args)\r\n File \"/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/commands/uninstall.py\", line 76, in run\r\n requirement_set.uninstall(auto_confirm=options.yes)\r\n File \"/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/req/req_set.py\", line 346, in uninstall\r\n req.uninstall(auto_confirm=auto_confirm)\r\n File \"/Users/davidchudzicki/.cache/hypothesis-build-runtimes/.tox/py36-full/lib/python3.6/site-packages/pip/req/req_install.py\", line 732, in uninstall\r\n config = configparser.SafeConfigParser(**options)\r\n File \"/Users/davidchudzicki/.cache/hypothesis-build-runtimes/versions/python3.6/lib/python3.6/configparser.py\", line 1212, in __init__\r\n DeprecationWarning, stacklevel=2\r\nDeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. 
Use ConfigParser directly instead.\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport os\n\nfrom pip._vendor.six.moves import configparser\n\nfrom pip._internal.download import path_to_url\nfrom pip._internal.utils.misc import display_path, make_vcs_requirement_url\nfrom pip._internal.utils.temp_dir import TempDirectory\nfrom pip._internal.vcs import VersionControl, vcs\n\nlogger = logging.getLogger(__name__)\n\n\nclass Mercurial(VersionControl):\n name = 'hg'\n dirname = '.hg'\n repo_name = 'clone'\n schemes = ('hg', 'hg+http', 'hg+https', 'hg+ssh', 'hg+static-http')\n\n def get_base_rev_args(self, rev):\n return [rev]\n\n def export(self, location):\n \"\"\"Export the Hg repository at the url to the destination location\"\"\"\n with TempDirectory(kind=\"export\") as temp_dir:\n self.unpack(temp_dir.path)\n\n self.run_command(\n ['archive', location], show_stdout=False, cwd=temp_dir.path\n )\n\n def fetch_new(self, dest, url, rev_options):\n rev_display = rev_options.to_display()\n logger.info(\n 'Cloning hg %s%s to %s',\n url,\n rev_display,\n display_path(dest),\n )\n self.run_command(['clone', '--noupdate', '-q', url, dest])\n cmd_args = ['update', '-q'] + rev_options.to_args()\n self.run_command(cmd_args, cwd=dest)\n\n def switch(self, dest, url, rev_options):\n repo_config = os.path.join(dest, self.dirname, 'hgrc')\n config = configparser.SafeConfigParser()\n try:\n config.read(repo_config)\n config.set('paths', 'default', url)\n with open(repo_config, 'w') as config_file:\n config.write(config_file)\n except (OSError, configparser.NoSectionError) as exc:\n logger.warning(\n 'Could not switch Mercurial repository to %s: %s', url, exc,\n )\n else:\n cmd_args = ['update', '-q'] + rev_options.to_args()\n self.run_command(cmd_args, cwd=dest)\n\n def update(self, dest, url, rev_options):\n self.run_command(['pull', '-q'], cwd=dest)\n cmd_args = ['update', '-q'] + rev_options.to_args()\n self.run_command(cmd_args, cwd=dest)\n\n def get_url(self, location):\n url = self.run_command(\n ['showconfig', 'paths.default'],\n show_stdout=False, cwd=location).strip()\n if self._is_local_repository(url):\n url = path_to_url(url)\n return url.strip()\n\n def get_revision(self, location):\n current_revision = self.run_command(\n ['parents', '--template={rev}'],\n show_stdout=False, cwd=location).strip()\n return current_revision\n\n def get_revision_hash(self, location):\n current_rev_hash = self.run_command(\n ['parents', '--template={node}'],\n show_stdout=False, cwd=location).strip()\n return current_rev_hash\n\n def get_src_requirement(self, dist, location):\n repo = self.get_url(location)\n if not repo.lower().startswith('hg:'):\n repo = 'hg+' + repo\n current_rev_hash = self.get_revision_hash(location)\n egg_project_name = dist.egg_name().split('-', 1)[0]\n return make_vcs_requirement_url(repo, current_rev_hash,\n egg_project_name)\n\n def is_commit_id_equal(self, dest, name):\n \"\"\"Always assume the versions don't match\"\"\"\n return False\n\n\nvcs.register(Mercurial)\n", "path": "src/pip/_internal/vcs/mercurial.py"}]} | 2,039 | 140 |
gh_patches_debug_7285 | rasdani/github-patches | git_diff | spotify__luigi-700 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug on luigi.contrib.ftp.AtomicFtpfile
on luigi.contrib.ftp.AtomicFtpfile, self.path is accessed before being set.
```
luigi/contrib/ftp.py Line 170 Character 26
167: """
168:
169: def __init__(self, fs, path):
170: self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)
Access to member 'path' before its definition line 172
171: self._fs = fs
172: self.path = path
173: super(AtomicFtpfile, self).__init__(self.__tmp_path, 'w')
```
captured by [landscape.io](https://landscape.io/github/steenzout/luigi/4/messages/error)
</issue>
<code>
[start of luigi/contrib/ftp.py]
1 """
2 This library is a wrapper of ftplib.
3 It is convenient to move data from/to FTP.
4
5 There is an example on how to use it (example/ftp_experiment_outputs.py)
6
7 You can also find unittest for each class.
8
9 Be aware that normal ftp do not provide secure communication.
10 """
11 import datetime
12 import os
13 import random
14 import ftplib
15 import luigi
16 import luigi.target
17 import luigi.format
18 from luigi.format import FileWrapper
19
20
21 class RemoteFileSystem(luigi.target.FileSystem):
22
23 def __init__(self, host, username=None, password=None, port=21, tls=False):
24 self.host = host
25 self.username = username
26 self.password = password
27 self.port = port
28 self.tls = tls
29
30 def _connect(self):
31 """
32 Log in to ftp.
33 """
34 if self.tls:
35 self.ftpcon = ftplib.FTP_TLS()
36 else:
37 self.ftpcon = ftplib.FTP()
38 self.ftpcon.connect(self.host, self.port)
39 self.ftpcon.login(self.username, self.password)
40 if self.tls:
41 self.ftpcon.prot_p()
42
43 def exists(self, path, mtime=None):
44 """
45 Return `True` if file or directory at `path` exist, False otherwise.
46
47 Additional check on modified time when mtime is passed in.
48
49 Return False if the file's modified time is older mtime.
50 """
51 self._connect()
52 files = self.ftpcon.nlst(path)
53
54 result = False
55 if files:
56 if mtime:
57 mdtm = self.ftpcon.sendcmd('MDTM ' + path)
58 modified = datetime.datetime.strptime(mdtm[4:], "%Y%m%d%H%M%S")
59 result = modified > mtime
60 else:
61 result = True
62
63 self.ftpcon.quit()
64
65 return result
66
67 def _rm_recursive(self, ftp, path):
68 """
69 Recursively delete a directory tree on a remote server.
70
71 Source: https://gist.github.com/artlogic/2632647
72 """
73 wd = ftp.pwd()
74
75 try:
76 names = ftp.nlst(path)
77 except ftplib.all_errors as e:
78 # some FTP servers complain when you try and list non-existent paths
79 return
80
81 for name in names:
82 if os.path.split(name)[1] in ('.', '..'):
83 continue
84
85 try:
86 ftp.cwd(name) # if we can cwd to it, it's a folder
87 ftp.cwd(wd) # don't try a nuke a folder we're in
88 self._rm_recursive(ftp, name)
89 except ftplib.all_errors:
90 ftp.delete(name)
91
92 try:
93 ftp.rmd(path)
94 except ftplib.all_errors as e:
95 print('_rm_recursive: Could not remove {0}: {1}'.format(path, e))
96
97 def remove(self, path, recursive=True):
98 """
99 Remove file or directory at location ``path``.
100
101 :param path: a path within the FileSystem to remove.
102 :type path: str
103 :param recursive: if the path is a directory, recursively remove the directory and
104 all of its descendants. Defaults to ``True``.
105 :type recursive: bool
106 """
107 self._connect()
108
109 if recursive:
110 self._rm_recursive(self.ftpcon, path)
111 else:
112 try:
113 # try delete file
114 self.ftpcon.delete(path)
115 except ftplib.all_errors:
116 # it is a folder, delete it
117 self.ftpcon.rmd(path)
118
119 self.ftpcon.quit()
120
121 def put(self, local_path, path):
122 # create parent folder if not exists
123 self._connect()
124
125 normpath = os.path.normpath(path)
126 folder = os.path.dirname(normpath)
127
128 # create paths if do not exists
129 for subfolder in folder.split(os.sep):
130 if subfolder and subfolder not in self.ftpcon.nlst():
131 self.ftpcon.mkd(subfolder)
132
133 self.ftpcon.cwd(subfolder)
134
135 # go back to ftp root folder
136 self.ftpcon.cwd("/")
137
138 # random file name
139 tmp_path = folder + os.sep + 'luigi-tmp-%09d' % random.randrange(0, 1e10)
140
141 self.ftpcon.storbinary('STOR %s' % tmp_path, open(local_path, 'rb'))
142 self.ftpcon.rename(tmp_path, normpath)
143
144 self.ftpcon.quit()
145
146 def get(self, path, local_path):
147 # Create folder if it does not exist
148 normpath = os.path.normpath(local_path)
149 folder = os.path.dirname(normpath)
150 if folder and not os.path.exists(folder):
151 os.makedirs(folder)
152
153 tmp_local_path = local_path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)
154 # download file
155 self._connect()
156 self.ftpcon.retrbinary('RETR %s' % path, open(tmp_local_path, 'wb').write)
157 self.ftpcon.quit()
158
159 os.rename(tmp_local_path, local_path)
160
161
162 class AtomicFtpfile(file):
163 """
164 Simple class that writes to a temp file and upload to ftp on close().
165
166 Also cleans up the temp file if close is not invoked.
167 """
168
169 def __init__(self, fs, path):
170 self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)
171 self._fs = fs
172 self.path = path
173 super(AtomicFtpfile, self).__init__(self.__tmp_path, 'w')
174
175 def close(self):
176 # close and upload file to ftp
177 super(AtomicFtpfile, self).close()
178 self._fs.put(self.__tmp_path, self.path)
179 os.remove(self.__tmp_path)
180
181 def __del__(self):
182 if os.path.exists(self.__tmp_path):
183 os.remove(self.__tmp_path)
184
185 @property
186 def tmp_path(self):
187 return self.__tmp_path
188
189 @property
190 def fs(self):
191 return self._fs
192
193 def __exit__(self, exc_type, exc, traceback):
194 """
195 Close/commit the file if there are no exception
196
197 Upload file to ftp
198 """
199 if exc_type:
200 return
201 return file.__exit__(self, exc_type, exc, traceback)
202
203
204 class RemoteTarget(luigi.target.FileSystemTarget):
205 """
206 Target used for reading from remote files.
207
208 The target is implemented using ssh commands streaming data over the network.
209 """
210
211 def __init__(self, path, host, format=None, username=None, password=None, port=21, mtime=None, tls=False):
212 self.path = path
213 self.mtime = mtime
214 self.format = format
215 self.tls = tls
216 self._fs = RemoteFileSystem(host, username, password, port, tls)
217
218 @property
219 def fs(self):
220 return self._fs
221
222 def open(self, mode):
223 """
224 Open the FileSystem target.
225
226 This method returns a file-like object which can either be read from or written to depending
227 on the specified mode.
228
229 :param mode: the mode `r` opens the FileSystemTarget in read-only mode, whereas `w` will
230 open the FileSystemTarget in write mode. Subclasses can implement
231 additional options.
232 :type mode: str
233 """
234 if mode == 'w':
235 if self.format:
236 return self.format.pipe_writer(AtomicFtpfile(self._fs, self.path))
237 else:
238 return AtomicFtpfile(self._fs, self.path)
239
240 elif mode == 'r':
241 self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)
242 # download file to local
243 self._fs.get(self.path, self.__tmp_path)
244
245 # manage tmp file
246 fileobj = FileWrapper(open(self.__tmp_path, 'r'))
247 if self.format:
248 return self.format.pipe_reader(fileobj)
249 return fileobj
250 else:
251 raise Exception('mode must be r/w')
252
253 def exists(self):
254 return self.fs.exists(self.path, self.mtime)
255
256 def put(self, local_path):
257 self.fs.put(local_path, self.path)
258
259 def get(self, local_path):
260 self.fs.get(self.path, local_path)
261
[end of luigi/contrib/ftp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/luigi/contrib/ftp.py b/luigi/contrib/ftp.py
--- a/luigi/contrib/ftp.py
+++ b/luigi/contrib/ftp.py
@@ -167,7 +167,14 @@
"""
def __init__(self, fs, path):
- self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)
+ """
+ Initializes an AtomicFtpfile instance.
+
+ :param fs:
+ :param path:
+ :type path: str
+ """
+ self.__tmp_path = '%s-luigi-tmp-%09d' % (path, random.randrange(0, 1e10))
self._fs = fs
self.path = path
super(AtomicFtpfile, self).__init__(self.__tmp_path, 'w')
| {"golden_diff": "diff --git a/luigi/contrib/ftp.py b/luigi/contrib/ftp.py\n--- a/luigi/contrib/ftp.py\n+++ b/luigi/contrib/ftp.py\n@@ -167,7 +167,14 @@\n \"\"\"\n \n def __init__(self, fs, path):\n- self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)\n+ \"\"\"\n+ Initializes an AtomicFtpfile instance.\n+\n+ :param fs:\n+ :param path:\n+ :type path: str\n+ \"\"\"\n+ self.__tmp_path = '%s-luigi-tmp-%09d' % (path, random.randrange(0, 1e10))\n self._fs = fs\n self.path = path\n super(AtomicFtpfile, self).__init__(self.__tmp_path, 'w')\n", "issue": "bug on luigi.contrib.ftp.AtomicFtpfile\non luigi.contrib.ftp.AtomicFtpfile, self.path is accessed before being set.\n\n```\nluigi/contrib/ftp.py Line 170 Character 26\n167: \"\"\"\n168: \n169: def __init__(self, fs, path):\n170: self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)\n Access to member 'path' before its definition line 172\n171: self._fs = fs\n172: self.path = path\n173: super(AtomicFtpfile, self).__init__(self.__tmp_path, 'w')\n```\n\ncaptured by [landscape.io](https://landscape.io/github/steenzout/luigi/4/messages/error)\n\n", "before_files": [{"content": "\"\"\"\nThis library is a wrapper of ftplib.\nIt is convenient to move data from/to FTP.\n\nThere is an example on how to use it (example/ftp_experiment_outputs.py)\n\nYou can also find unittest for each class.\n\nBe aware that normal ftp do not provide secure communication.\n\"\"\"\nimport datetime\nimport os\nimport random\nimport ftplib\nimport luigi\nimport luigi.target\nimport luigi.format\nfrom luigi.format import FileWrapper\n\n\nclass RemoteFileSystem(luigi.target.FileSystem):\n\n def __init__(self, host, username=None, password=None, port=21, tls=False):\n self.host = host\n self.username = username\n self.password = password\n self.port = port\n self.tls = tls\n\n def _connect(self):\n \"\"\"\n Log in to ftp.\n \"\"\"\n if self.tls:\n self.ftpcon = ftplib.FTP_TLS()\n else:\n self.ftpcon = ftplib.FTP()\n self.ftpcon.connect(self.host, self.port)\n self.ftpcon.login(self.username, self.password)\n if self.tls:\n self.ftpcon.prot_p()\n\n def exists(self, path, mtime=None):\n \"\"\"\n Return `True` if file or directory at `path` exist, False otherwise.\n\n Additional check on modified time when mtime is passed in.\n\n Return False if the file's modified time is older mtime.\n \"\"\"\n self._connect()\n files = self.ftpcon.nlst(path)\n\n result = False\n if files:\n if mtime:\n mdtm = self.ftpcon.sendcmd('MDTM ' + path)\n modified = datetime.datetime.strptime(mdtm[4:], \"%Y%m%d%H%M%S\")\n result = modified > mtime\n else:\n result = True\n\n self.ftpcon.quit()\n\n return result\n\n def _rm_recursive(self, ftp, path):\n \"\"\"\n Recursively delete a directory tree on a remote server.\n\n Source: https://gist.github.com/artlogic/2632647\n \"\"\"\n wd = ftp.pwd()\n\n try:\n names = ftp.nlst(path)\n except ftplib.all_errors as e:\n # some FTP servers complain when you try and list non-existent paths\n return\n\n for name in names:\n if os.path.split(name)[1] in ('.', '..'):\n continue\n\n try:\n ftp.cwd(name) # if we can cwd to it, it's a folder\n ftp.cwd(wd) # don't try a nuke a folder we're in\n self._rm_recursive(ftp, name)\n except ftplib.all_errors:\n ftp.delete(name)\n\n try:\n ftp.rmd(path)\n except ftplib.all_errors as e:\n print('_rm_recursive: Could not remove {0}: {1}'.format(path, e))\n\n def remove(self, path, recursive=True):\n \"\"\"\n Remove file or directory at location ``path``.\n\n :param path: a path within the 
FileSystem to remove.\n :type path: str\n :param recursive: if the path is a directory, recursively remove the directory and\n all of its descendants. Defaults to ``True``.\n :type recursive: bool\n \"\"\"\n self._connect()\n\n if recursive:\n self._rm_recursive(self.ftpcon, path)\n else:\n try:\n # try delete file\n self.ftpcon.delete(path)\n except ftplib.all_errors:\n # it is a folder, delete it\n self.ftpcon.rmd(path)\n\n self.ftpcon.quit()\n\n def put(self, local_path, path):\n # create parent folder if not exists\n self._connect()\n\n normpath = os.path.normpath(path)\n folder = os.path.dirname(normpath)\n\n # create paths if do not exists\n for subfolder in folder.split(os.sep):\n if subfolder and subfolder not in self.ftpcon.nlst():\n self.ftpcon.mkd(subfolder)\n\n self.ftpcon.cwd(subfolder)\n\n # go back to ftp root folder\n self.ftpcon.cwd(\"/\")\n\n # random file name\n tmp_path = folder + os.sep + 'luigi-tmp-%09d' % random.randrange(0, 1e10)\n\n self.ftpcon.storbinary('STOR %s' % tmp_path, open(local_path, 'rb'))\n self.ftpcon.rename(tmp_path, normpath)\n\n self.ftpcon.quit()\n\n def get(self, path, local_path):\n # Create folder if it does not exist\n normpath = os.path.normpath(local_path)\n folder = os.path.dirname(normpath)\n if folder and not os.path.exists(folder):\n os.makedirs(folder)\n\n tmp_local_path = local_path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)\n # download file\n self._connect()\n self.ftpcon.retrbinary('RETR %s' % path, open(tmp_local_path, 'wb').write)\n self.ftpcon.quit()\n\n os.rename(tmp_local_path, local_path)\n\n\nclass AtomicFtpfile(file):\n \"\"\"\n Simple class that writes to a temp file and upload to ftp on close().\n\n Also cleans up the temp file if close is not invoked.\n \"\"\"\n\n def __init__(self, fs, path):\n self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)\n self._fs = fs\n self.path = path\n super(AtomicFtpfile, self).__init__(self.__tmp_path, 'w')\n\n def close(self):\n # close and upload file to ftp\n super(AtomicFtpfile, self).close()\n self._fs.put(self.__tmp_path, self.path)\n os.remove(self.__tmp_path)\n\n def __del__(self):\n if os.path.exists(self.__tmp_path):\n os.remove(self.__tmp_path)\n\n @property\n def tmp_path(self):\n return self.__tmp_path\n\n @property\n def fs(self):\n return self._fs\n\n def __exit__(self, exc_type, exc, traceback):\n \"\"\"\n Close/commit the file if there are no exception\n\n Upload file to ftp\n \"\"\"\n if exc_type:\n return\n return file.__exit__(self, exc_type, exc, traceback)\n\n\nclass RemoteTarget(luigi.target.FileSystemTarget):\n \"\"\"\n Target used for reading from remote files.\n\n The target is implemented using ssh commands streaming data over the network.\n \"\"\"\n\n def __init__(self, path, host, format=None, username=None, password=None, port=21, mtime=None, tls=False):\n self.path = path\n self.mtime = mtime\n self.format = format\n self.tls = tls\n self._fs = RemoteFileSystem(host, username, password, port, tls)\n\n @property\n def fs(self):\n return self._fs\n\n def open(self, mode):\n \"\"\"\n Open the FileSystem target.\n\n This method returns a file-like object which can either be read from or written to depending\n on the specified mode.\n\n :param mode: the mode `r` opens the FileSystemTarget in read-only mode, whereas `w` will\n open the FileSystemTarget in write mode. 
Subclasses can implement\n additional options.\n :type mode: str\n \"\"\"\n if mode == 'w':\n if self.format:\n return self.format.pipe_writer(AtomicFtpfile(self._fs, self.path))\n else:\n return AtomicFtpfile(self._fs, self.path)\n\n elif mode == 'r':\n self.__tmp_path = self.path + '-luigi-tmp-%09d' % random.randrange(0, 1e10)\n # download file to local\n self._fs.get(self.path, self.__tmp_path)\n\n # manage tmp file\n fileobj = FileWrapper(open(self.__tmp_path, 'r'))\n if self.format:\n return self.format.pipe_reader(fileobj)\n return fileobj\n else:\n raise Exception('mode must be r/w')\n\n def exists(self):\n return self.fs.exists(self.path, self.mtime)\n\n def put(self, local_path):\n self.fs.put(local_path, self.path)\n\n def get(self, local_path):\n self.fs.get(self.path, local_path)\n", "path": "luigi/contrib/ftp.py"}]} | 3,279 | 209 |