problem_id (string, 18-22 chars) | source (string, 1 distinct value) | task_type (string, 1 distinct value) | in_source_id (string, 13-58 chars) | prompt (string, 1.71k-18.9k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 465-23.6k chars) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
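Each row below is one raw record from this preview, with the nine cells above separated by `|`; the long text cells (prompt, golden_diff, verification_info) are shown inline and unmodified. For working with the records programmatically, the sketch below shows one way the data could be loaded and inspected with the Hugging Face `datasets` library. The repository id (taken from the `source` column), the `train` split name, and the assumption that `verification_info` is a JSON-encoded string are not confirmed by this preview, so treat them as placeholders and adjust to the actual dataset location and schema.

```python
# Minimal, hedged sketch for loading and inspecting these records.
# Assumptions: dataset id "rasdani/github-patches", a "train" split, and
# verification_info stored as a JSON string. Adjust as needed.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]

print(row["problem_id"])        # e.g. "gh_patches_debug_37821"
print(row["in_source_id"])      # upstream issue/PR reference, e.g. "sunpy__sunpy-3056"
print(row["num_tokens_prompt"], row["num_tokens_diff"])

# The prompt cell holds the full task text (issue + partial code base);
# golden_diff holds the reference patch in unified diff format.
print(row["prompt"][:300])
print(row["golden_diff"][:300])

# verification_info appears to bundle the golden diff, the issue text, and the
# pre-patch file contents ("before_files"). If it is a JSON string, decode it:
info = row["verification_info"]
if isinstance(info, str):
    info = json.loads(info)
for f in info.get("before_files", []):
    print(f["path"], len(f["content"]), "chars before the patch")
```

As the prompts in the rows themselves state, each record's golden_diff is intended to be applicable with `git apply` against the pre-patch files listed in `before_files`.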
gh_patches_debug_37821 | rasdani/github-patches | git_diff | sunpy__sunpy-3056 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AIA FITS headers have inaccurate HGS coordinates
AIA FITS headers apparently have inaccurate Heliographic Stonyhurst (HGS) coordinates (`HGLN_OBS` and `HGLT_OBS`). For example, the distance from Earth center does not match the orbit radius. We currently use these keywords to generate the `observer_coordinate` for an AIA map, so we provide an inaccurate observer location.
The headers also have Heliographic Aries Ecliptic (HAE) coordinates (`HAEX_OBS`, `HAEY_OBS`, and `HAEZ_OBS`), and the HAE coordinates are inconsistent with the HGS coordinates in the same header. We have previously verified the accuracy of SunPy's transformation from HAE to HGS (e.g., https://github.com/sunpy/sunpy/issues/2445#issuecomment-364531159). The HAE coordinates appear to be credible, and likely should be trusted over the HGS coordinates.
My recommendation is for `AIAMap` to override the generation of `observer_coordinate` to use the HAE coordinates. Discuss.
</issue>
<code>
[start of sunpy/map/sources/sdo.py]
1 """SDO Map subclass definitions"""
2 #pylint: disable=W0221,W0222,E1101,E1121
3
4 __author__ = "Keith Hughitt"
5 __email__ = "[email protected]"
6
7 import matplotlib.pyplot as plt
8
9 from astropy.visualization.mpl_normalize import ImageNormalize
10 from astropy.visualization import AsinhStretch
11
12 from sunpy.map import GenericMap
13 from sunpy.map.sources.source_type import source_stretch
14
15 __all__ = ['AIAMap', 'HMIMap']
16
17
18 class AIAMap(GenericMap):
19 """AIA Image Map.
20
21 The Atmospheric Imaging Assembly is a set of four telescopes that employ
22 normal-incidence, multi-layer coated optics to provide narrow-band imaging
23 of the Sun. It provides high resolution full-disk images of the corona and
24 transition region up to 0.5 solar radii above the solar limb with 1.5
25 arcsecond angular resolution and 12-second temporal resolution. It observes
26 the Sun in the following seven extreme ultraviolet bandpasses: 94 A
27 (Fe XVIII), 131 A (Fe VIII, XXI), 171 A (Fe IX), 193 A (Fe XII, XXIV),
28 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes
29 in the visible 1600 A (C IV) and the nearby continuun (1700 A).
30
31 References
32 ----------
33 * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_
34 * `Instrument Page <https://aia.lmsal.com>`_
35 * `Fits Header keywords <http://jsoc.stanford.edu/doc/keywords/AIA/AIA02840_A_AIA-SDO_FITS_Keyword_Documents.pdf>`_
36 * `Analysis Guide <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/>`_
37 * `Instrument Paper <https://doi.org/10.1007/s11207-011-9776-8>`_
38 * `wavelengths and temperature response reference <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/figures/aia_tel_resp.png>`_
39 """
40
41 def __init__(self, data, header, **kwargs):
42 GenericMap.__init__(self, data, header, **kwargs)
43
44 # Fill in some missing info
45 self.meta['detector'] = "AIA"
46 self._nickname = self.detector
47 self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())
48 self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))
49
50 @property
51 def observatory(self):
52 """
53 Returns the observatory.
54 """
55 return self.meta['telescop'].split('/')[0]
56
57 @classmethod
58 def is_datasource_for(cls, data, header, **kwargs):
59 """Determines if header corresponds to an AIA image"""
60 return header.get('instrume', '').startswith('AIA')
61
62
63 class HMIMap(GenericMap):
64 """HMI Image Map.
65
66 HMI consists of a refracting telescope, a polarization selector,
67 an image stabilization system, a narrow band tunable filter
68 and two 4096 pixel CCD cameras. It observes the full solar disk in the Fe I
69 absorption line at 6173 Angstrom with a resolution of 1 arc-second.
70 HMI takes images in a sequence of tuning and polarizations at a 4-second
71 cadence for each camera. One camera is dedicated to a 45 s Doppler and
72 line-of-sight field sequence while the other to a 90 s vector field
73 sequence.
74
75 References
76 ----------
77 * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_
78 * `Instrument Page <http://hmi.stanford.edu>`_
79 * `Analysis Guide <http://hmi.stanford.edu/doc/magnetic/guide.pdf>`_
80 """
81 def __init__(self, data, header, **kwargs):
82
83 GenericMap.__init__(self, data, header, **kwargs)
84
85 self.meta['detector'] = "HMI"
86 # self.meta['instrme'] = "HMI"
87 # self.meta['obsrvtry'] = "SDO"
88 self._nickname = self.detector
89
90 @property
91 def measurement(self):
92 """
93 Returns the measurement type.
94 """
95 return self.meta['content'].split(" ")[0].lower()
96
97 @property
98 def observatory(self):
99 """
100 Returns the observatory.
101 """
102 return self.meta['telescop'].split('/')[0]
103
104 @classmethod
105 def is_datasource_for(cls, data, header, **kwargs):
106 """Determines if header corresponds to an HMI image"""
107 return header.get('instrume', '').startswith('HMI')
108
[end of sunpy/map/sources/sdo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sunpy/map/sources/sdo.py b/sunpy/map/sources/sdo.py
--- a/sunpy/map/sources/sdo.py
+++ b/sunpy/map/sources/sdo.py
@@ -6,6 +6,14 @@
import matplotlib.pyplot as plt
+from astropy.coordinates import CartesianRepresentation, SkyCoord
+# Versions of Astropy that do not have HeliocentricMeanEcliptic have the same frame
+# with the incorrect name HeliocentricTrueEcliptic
+try:
+ from astropy.coordinates import HeliocentricMeanEcliptic
+except ImportError:
+ from astropy.coordinates import HeliocentricTrueEcliptic as HeliocentricMeanEcliptic
+import astropy.units as u
from astropy.visualization.mpl_normalize import ImageNormalize
from astropy.visualization import AsinhStretch
@@ -28,6 +36,13 @@
211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes
in the visible 1600 A (C IV) and the nearby continuun (1700 A).
+ Notes
+ -----
+ Observer location: The standard AIA FITS header provides the spacecraft location in multiple
+ coordinate systems, including Heliocentric Aries Ecliptic (HAE) and Heliographic Stonyhurst
+ (HGS). SunPy uses the provided HAE coordinates due to accuracy concerns with the provided
+ HGS coordinates, but other software packages may make different choices.
+
References
----------
* `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_
@@ -47,6 +62,45 @@
self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())
self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))
+ @property
+ def observer_coordinate(self):
+ """
+ The Heliographic Stonyhurst Coordinate of the observer.
+
+ This coordinate is determined using the Heliocentric Aries Ecliptic (HAE) coordinates
+ in the header.
+ """
+ vector = CartesianRepresentation(self.meta['haex_obs'],
+ self.meta['haey_obs'],
+ self.meta['haez_obs'])
+ coord = SkyCoord(vector * u.m, frame=HeliocentricMeanEcliptic, obstime=self.date)
+ return coord.heliographic_stonyhurst
+
+ @property
+ def heliographic_latitude(self):
+ """Heliographic latitude."""
+ return self.observer_coordinate.lat
+
+ @property
+ def heliographic_longitude(self):
+ """Heliographic longitude."""
+ return self.observer_coordinate.lon
+
+ @property
+ def carrington_latitude(self):
+ """Carrington latitude."""
+ return self.observer_coordinate.heliographic_carrington.lat
+
+ @property
+ def carrington_longitude(self):
+ """Carrington longitude."""
+ return self.observer_coordinate.heliographic_carrington.lon
+
+ @property
+ def dsun(self):
+ """The observer distance from the Sun."""
+ return self.observer_coordinate.radius.to('m')
+
@property
def observatory(self):
"""
| {"golden_diff": "diff --git a/sunpy/map/sources/sdo.py b/sunpy/map/sources/sdo.py\n--- a/sunpy/map/sources/sdo.py\n+++ b/sunpy/map/sources/sdo.py\n@@ -6,6 +6,14 @@\n \n import matplotlib.pyplot as plt\n \n+from astropy.coordinates import CartesianRepresentation, SkyCoord\n+# Versions of Astropy that do not have HeliocentricMeanEcliptic have the same frame\n+# with the incorrect name HeliocentricTrueEcliptic\n+try:\n+ from astropy.coordinates import HeliocentricMeanEcliptic\n+except ImportError:\n+ from astropy.coordinates import HeliocentricTrueEcliptic as HeliocentricMeanEcliptic\n+import astropy.units as u\n from astropy.visualization.mpl_normalize import ImageNormalize\n from astropy.visualization import AsinhStretch\n \n@@ -28,6 +36,13 @@\n 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes\n in the visible 1600 A (C IV) and the nearby continuun (1700 A).\n \n+ Notes\n+ -----\n+ Observer location: The standard AIA FITS header provides the spacecraft location in multiple\n+ coordinate systems, including Heliocentric Aries Ecliptic (HAE) and Heliographic Stonyhurst\n+ (HGS). SunPy uses the provided HAE coordinates due to accuracy concerns with the provided\n+ HGS coordinates, but other software packages may make different choices.\n+\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n@@ -47,6 +62,45 @@\n self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())\n self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))\n \n+ @property\n+ def observer_coordinate(self):\n+ \"\"\"\n+ The Heliographic Stonyhurst Coordinate of the observer.\n+\n+ This coordinate is determined using the Heliocentric Aries Ecliptic (HAE) coordinates\n+ in the header.\n+ \"\"\"\n+ vector = CartesianRepresentation(self.meta['haex_obs'],\n+ self.meta['haey_obs'],\n+ self.meta['haez_obs'])\n+ coord = SkyCoord(vector * u.m, frame=HeliocentricMeanEcliptic, obstime=self.date)\n+ return coord.heliographic_stonyhurst\n+\n+ @property\n+ def heliographic_latitude(self):\n+ \"\"\"Heliographic latitude.\"\"\"\n+ return self.observer_coordinate.lat\n+\n+ @property\n+ def heliographic_longitude(self):\n+ \"\"\"Heliographic longitude.\"\"\"\n+ return self.observer_coordinate.lon\n+\n+ @property\n+ def carrington_latitude(self):\n+ \"\"\"Carrington latitude.\"\"\"\n+ return self.observer_coordinate.heliographic_carrington.lat\n+\n+ @property\n+ def carrington_longitude(self):\n+ \"\"\"Carrington longitude.\"\"\"\n+ return self.observer_coordinate.heliographic_carrington.lon\n+\n+ @property\n+ def dsun(self):\n+ \"\"\"The observer distance from the Sun.\"\"\"\n+ return self.observer_coordinate.radius.to('m')\n+\n @property\n def observatory(self):\n \"\"\"\n", "issue": "AIA FITS headers have inaccurate HGS coordinates\nAIA FITS headers apparently have inaccurate Heliographic Stonyhurst (HGS) coordinates (`HGLN_OBS` and `HGLT_OBS`). For example, the distance from Earth center does not match the orbit radius. We currently use these keywords to generate the `observer_coordinate` for an AIA map, so we provide an inaccurate observer location.\r\n\r\nThe headers also have Heliographic Aries Ecliptic (HAE) coordinates (`HAEX_OBS`, `HAEY_OBS`, and `HAEZ_OBS`), and the HAE coordinates are inconsistent with the HGS coordinates in the same header. We have previously verified the accuracy of SunPy's transformation from HAE to HGS (e.g., https://github.com/sunpy/sunpy/issues/2445#issuecomment-364531159). 
The HAE coordinates appear to be credible, and likely should be trusted over the HGS coordinates.\r\n\r\nMy recommendation is for `AIAMap` to override the generation of `observer_coordinate` to use the HAE coordinates. Discuss.\n", "before_files": [{"content": "\"\"\"SDO Map subclass definitions\"\"\"\n#pylint: disable=W0221,W0222,E1101,E1121\n\n__author__ = \"Keith Hughitt\"\n__email__ = \"[email protected]\"\n\nimport matplotlib.pyplot as plt\n\nfrom astropy.visualization.mpl_normalize import ImageNormalize\nfrom astropy.visualization import AsinhStretch\n\nfrom sunpy.map import GenericMap\nfrom sunpy.map.sources.source_type import source_stretch\n\n__all__ = ['AIAMap', 'HMIMap']\n\n\nclass AIAMap(GenericMap):\n \"\"\"AIA Image Map.\n\n The Atmospheric Imaging Assembly is a set of four telescopes that employ\n normal-incidence, multi-layer coated optics to provide narrow-band imaging\n of the Sun. It provides high resolution full-disk images of the corona and\n transition region up to 0.5 solar radii above the solar limb with 1.5\n arcsecond angular resolution and 12-second temporal resolution. It observes\n the Sun in the following seven extreme ultraviolet bandpasses: 94 A\n (Fe XVIII), 131 A (Fe VIII, XXI), 171 A (Fe IX), 193 A (Fe XII, XXIV),\n 211 A (Fe XIV), 304 A (He II), 335 A (Fe XVI). One telescope observes\n in the visible 1600 A (C IV) and the nearby continuun (1700 A).\n\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n * `Instrument Page <https://aia.lmsal.com>`_\n * `Fits Header keywords <http://jsoc.stanford.edu/doc/keywords/AIA/AIA02840_A_AIA-SDO_FITS_Keyword_Documents.pdf>`_\n * `Analysis Guide <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/>`_\n * `Instrument Paper <https://doi.org/10.1007/s11207-011-9776-8>`_\n * `wavelengths and temperature response reference <https://www.lmsal.com/sdodocs/doc/dcur/SDOD0060.zip/zip/entry/figures/aia_tel_resp.png>`_\n \"\"\"\n\n def __init__(self, data, header, **kwargs):\n GenericMap.__init__(self, data, header, **kwargs)\n\n # Fill in some missing info\n self.meta['detector'] = \"AIA\"\n self._nickname = self.detector\n self.plot_settings['cmap'] = plt.get_cmap(self._get_cmap_name())\n self.plot_settings['norm'] = ImageNormalize(stretch=source_stretch(self.meta, AsinhStretch(0.01)))\n\n @property\n def observatory(self):\n \"\"\"\n Returns the observatory.\n \"\"\"\n return self.meta['telescop'].split('/')[0]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an AIA image\"\"\"\n return header.get('instrume', '').startswith('AIA')\n\n\nclass HMIMap(GenericMap):\n \"\"\"HMI Image Map.\n\n HMI consists of a refracting telescope, a polarization selector,\n an image stabilization system, a narrow band tunable filter\n and two 4096 pixel CCD cameras. It observes the full solar disk in the Fe I\n absorption line at 6173 Angstrom with a resolution of 1 arc-second.\n HMI takes images in a sequence of tuning and polarizations at a 4-second\n cadence for each camera. 
One camera is dedicated to a 45 s Doppler and\n line-of-sight field sequence while the other to a 90 s vector field\n sequence.\n\n References\n ----------\n * `SDO Mission Page <https://sdo.gsfc.nasa.gov/>`_\n * `Instrument Page <http://hmi.stanford.edu>`_\n * `Analysis Guide <http://hmi.stanford.edu/doc/magnetic/guide.pdf>`_\n \"\"\"\n def __init__(self, data, header, **kwargs):\n\n GenericMap.__init__(self, data, header, **kwargs)\n\n self.meta['detector'] = \"HMI\"\n# self.meta['instrme'] = \"HMI\"\n# self.meta['obsrvtry'] = \"SDO\"\n self._nickname = self.detector\n\n @property\n def measurement(self):\n \"\"\"\n Returns the measurement type.\n \"\"\"\n return self.meta['content'].split(\" \")[0].lower()\n\n @property\n def observatory(self):\n \"\"\"\n Returns the observatory.\n \"\"\"\n return self.meta['telescop'].split('/')[0]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an HMI image\"\"\"\n return header.get('instrume', '').startswith('HMI')\n", "path": "sunpy/map/sources/sdo.py"}]} | 2,136 | 761 |
gh_patches_debug_35532 | rasdani/github-patches | git_diff | getsentry__sentry-53789 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Slack alert message doesn't match up with why alert was actually triggered
### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
Hi Team,
A customer reported to us that the messages in their Slack alerts doesn't match up with the real reason the alert was triggered (displayed correctly once you click into the alert and go to the alert page). I was able to replicate this by setting up a similar type of alert.
1. Set up a metric alert to trigger when `Number of errors is X% higher in Y minutes compared to same time one week ago`
2. Alert action is to be notified via Slack, specifically a Slack channel
3. Receive Slack alert
<img width="1269" alt="Screenshot 2023-05-04 at 3 45 03 PM" src="https://user-images.githubusercontent.com/98921471/236560084-f64cf7cf-7856-4921-8c3d-ca70433afda8.png">
### Expected Result
When the alert triggers, I expect the message in Slack to say: "157 events in the last 10 minutes". This is verified by the alert details of why the alert triggered.
Alternatively, it can also be expected to say: "Number of errors is 89.16% higher in 10 minutes compared to the same time a week ago".
<img width="1277" alt="Screenshot 2023-05-04 at 3 45 14 PM" src="https://user-images.githubusercontent.com/98921471/236560515-38e4361c-0c8e-4c53-8986-fcd5a67f40fe.png">
### Actual Result
The Slack alert message says "168.478260.. events in the last 10 minutes". This doesn't match up with the 157 events that I see in the alert details.
For the customer, it said "400.0 events in the last 60 minutes" when it should've said "4 events in the last 60 minutes", which is 2 orders of magnitudes off.
<img width="544" alt="Screenshot 2023-05-04 at 3 44 30 PM" src="https://user-images.githubusercontent.com/98921471/236560823-894e8da5-6918-416c-8f65-f182456424d7.png">
### Product Area
Alerts
### Link
_No response_
### DSN
_No response_
### Version
_No response_
┆Issue is synchronized with this [Jira Improvement](https://getsentry.atlassian.net/browse/FEEDBACK-1825) by [Unito](https://www.unito.io)
</issue>
<code>
[start of src/sentry/integrations/metric_alerts.py]
1 from datetime import timedelta
2 from typing import Optional
3 from urllib import parse
4
5 from django.db.models import Max
6 from django.urls import reverse
7 from django.utils.translation import gettext as _
8
9 from sentry.constants import CRASH_RATE_ALERT_AGGREGATE_ALIAS
10 from sentry.incidents.logic import get_incident_aggregates
11 from sentry.incidents.models import (
12 INCIDENT_STATUS,
13 AlertRule,
14 Incident,
15 IncidentStatus,
16 IncidentTrigger,
17 )
18 from sentry.utils.assets import get_asset_url
19 from sentry.utils.http import absolute_uri
20
21 QUERY_AGGREGATION_DISPLAY = {
22 "count()": "events",
23 "count_unique(tags[sentry:user])": "users affected",
24 "percentage(sessions_crashed, sessions)": "% sessions crash free rate",
25 "percentage(users_crashed, users)": "% users crash free rate",
26 }
27 LOGO_URL = absolute_uri(get_asset_url("sentry", "images/sentry-email-avatar.png"))
28
29
30 def get_metric_count_from_incident(incident: Incident) -> str:
31 """Returns the current or last count of an incident aggregate."""
32 incident_trigger = (
33 IncidentTrigger.objects.filter(incident=incident).order_by("-date_modified").first()
34 )
35 if incident_trigger:
36 alert_rule_trigger = incident_trigger.alert_rule_trigger
37 # TODO: If we're relying on this and expecting possible delays between a
38 # trigger fired and this function running, then this could actually be
39 # incorrect if they changed the trigger's time window in this time period.
40 # Should we store it?
41 start = incident_trigger.date_modified - timedelta(
42 seconds=alert_rule_trigger.alert_rule.snuba_query.time_window
43 )
44 end = incident_trigger.date_modified
45 else:
46 start, end = None, None
47
48 return get_incident_aggregates(incident=incident, start=start, end=end).get("count")
49
50
51 def get_incident_status_text(alert_rule: AlertRule, metric_value: str) -> str:
52 """Returns a human readable current status of an incident"""
53 agg_display_key = alert_rule.snuba_query.aggregate
54
55 if CRASH_RATE_ALERT_AGGREGATE_ALIAS in alert_rule.snuba_query.aggregate:
56 agg_display_key = agg_display_key.split(f"AS {CRASH_RATE_ALERT_AGGREGATE_ALIAS}")[0].strip()
57
58 agg_text = QUERY_AGGREGATION_DISPLAY.get(agg_display_key, alert_rule.snuba_query.aggregate)
59
60 if agg_text.startswith("%"):
61 if metric_value is not None:
62 metric_and_agg_text = f"{metric_value}{agg_text}"
63 else:
64 metric_and_agg_text = f"No{agg_text[1:]}"
65 else:
66 metric_and_agg_text = f"{metric_value} {agg_text}"
67
68 time_window = alert_rule.snuba_query.time_window // 60
69 interval = "minute" if time_window == 1 else "minutes"
70 text = _("%(metric_and_agg_text)s in the last %(time_window)d %(interval)s") % {
71 "metric_and_agg_text": metric_and_agg_text,
72 "time_window": time_window,
73 "interval": interval,
74 }
75
76 return text
77
78
79 def incident_attachment_info(incident, new_status: IncidentStatus, metric_value=None):
80 alert_rule = incident.alert_rule
81
82 status = INCIDENT_STATUS[new_status]
83
84 if metric_value is None:
85 metric_value = get_metric_count_from_incident(incident)
86
87 text = get_incident_status_text(alert_rule, metric_value)
88 title = f"{status}: {alert_rule.name}"
89
90 title_link = alert_rule.organization.absolute_url(
91 reverse(
92 "sentry-metric-alert-details",
93 kwargs={
94 "organization_slug": alert_rule.organization.slug,
95 "alert_rule_id": alert_rule.id,
96 },
97 ),
98 query=parse.urlencode({"alert": str(incident.identifier)}),
99 )
100
101 return {
102 "title": title,
103 "text": text,
104 "logo_url": LOGO_URL,
105 "status": status,
106 "ts": incident.date_started,
107 "title_link": title_link,
108 }
109
110
111 def metric_alert_attachment_info(
112 alert_rule: AlertRule,
113 selected_incident: Optional[Incident] = None,
114 new_status: Optional[IncidentStatus] = None,
115 metric_value: Optional[str] = None,
116 ):
117 latest_incident = None
118 if selected_incident is None:
119 try:
120 # Use .get() instead of .first() to avoid sorting table by id
121 latest_incident = Incident.objects.filter(
122 id__in=Incident.objects.filter(alert_rule=alert_rule)
123 .values("alert_rule_id")
124 .annotate(incident_id=Max("id"))
125 .values("incident_id")
126 ).get()
127 except Incident.DoesNotExist:
128 latest_incident = None
129
130 if new_status:
131 status = INCIDENT_STATUS[new_status]
132 elif selected_incident:
133 status = INCIDENT_STATUS[IncidentStatus(selected_incident.status)]
134 elif latest_incident:
135 status = INCIDENT_STATUS[IncidentStatus(latest_incident.status)]
136 else:
137 status = INCIDENT_STATUS[IncidentStatus.CLOSED]
138
139 query = None
140 if selected_incident:
141 query = parse.urlencode({"alert": str(selected_incident.identifier)})
142 title = f"{status}: {alert_rule.name}"
143 title_link = alert_rule.organization.absolute_url(
144 reverse(
145 "sentry-metric-alert-details",
146 kwargs={
147 "organization_slug": alert_rule.organization.slug,
148 "alert_rule_id": alert_rule.id,
149 },
150 ),
151 query=query,
152 )
153
154 if metric_value is None:
155 if (
156 selected_incident is None
157 and latest_incident
158 and latest_incident.status != IncidentStatus.CLOSED
159 ):
160 # Without a selected incident, use latest incident if it is not resolved
161 incident_info = latest_incident
162 else:
163 incident_info = selected_incident
164
165 if incident_info:
166 metric_value = get_metric_count_from_incident(incident_info)
167
168 text = ""
169 if metric_value is not None and status != INCIDENT_STATUS[IncidentStatus.CLOSED]:
170 text = get_incident_status_text(alert_rule, metric_value)
171
172 date_started = None
173 if selected_incident:
174 date_started = selected_incident.date_started
175
176 last_triggered_date = None
177 if latest_incident:
178 last_triggered_date = latest_incident.date_started
179
180 return {
181 "title": title,
182 "text": text,
183 "logo_url": LOGO_URL,
184 "status": status,
185 "date_started": date_started,
186 "last_triggered_date": last_triggered_date,
187 "title_link": title_link,
188 }
189
[end of src/sentry/integrations/metric_alerts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/integrations/metric_alerts.py b/src/sentry/integrations/metric_alerts.py
--- a/src/sentry/integrations/metric_alerts.py
+++ b/src/sentry/integrations/metric_alerts.py
@@ -11,6 +11,7 @@
from sentry.incidents.models import (
INCIDENT_STATUS,
AlertRule,
+ AlertRuleThresholdType,
Incident,
IncidentStatus,
IncidentTrigger,
@@ -25,6 +26,16 @@
"percentage(users_crashed, users)": "% users crash free rate",
}
LOGO_URL = absolute_uri(get_asset_url("sentry", "images/sentry-email-avatar.png"))
+# These should be the same as the options in the frontend
+# COMPARISON_DELTA_OPTIONS
+TEXT_COMPARISON_DELTA = {
+ 5: ("same time 5 minutes ago"), # 5 minutes
+ 15: ("same time 15 minutes ago"), # 15 minutes
+ 60: ("same time one hour ago"), # one hour
+ 1440: ("same time one day ago"), # one day
+ 10080: ("same time one week ago"), # one week
+ 43200: ("same time one month ago"), # 30 days
+}
def get_metric_count_from_incident(incident: Incident) -> str:
@@ -67,14 +78,27 @@
time_window = alert_rule.snuba_query.time_window // 60
interval = "minute" if time_window == 1 else "minutes"
- text = _("%(metric_and_agg_text)s in the last %(time_window)d %(interval)s") % {
+ # % change alerts have a comparison delta
+ if alert_rule.comparison_delta:
+ metric_and_agg_text = f"{agg_text.capitalize()} {int(metric_value)}%"
+ higher_or_lower = (
+ "higher" if alert_rule.threshold_type == AlertRuleThresholdType.ABOVE.value else "lower"
+ )
+ comparison_delta_minutes = alert_rule.comparison_delta // 60
+ comparison_string = TEXT_COMPARISON_DELTA.get(
+ comparison_delta_minutes, f"same time {comparison_delta_minutes} minutes ago"
+ )
+ return _(
+ f"{metric_and_agg_text} {higher_or_lower} in the last {time_window} {interval} "
+ f"compared to the {comparison_string}"
+ )
+
+ return _("%(metric_and_agg_text)s in the last %(time_window)d %(interval)s") % {
"metric_and_agg_text": metric_and_agg_text,
"time_window": time_window,
"interval": interval,
}
- return text
-
def incident_attachment_info(incident, new_status: IncidentStatus, metric_value=None):
alert_rule = incident.alert_rule
| {"golden_diff": "diff --git a/src/sentry/integrations/metric_alerts.py b/src/sentry/integrations/metric_alerts.py\n--- a/src/sentry/integrations/metric_alerts.py\n+++ b/src/sentry/integrations/metric_alerts.py\n@@ -11,6 +11,7 @@\n from sentry.incidents.models import (\n INCIDENT_STATUS,\n AlertRule,\n+ AlertRuleThresholdType,\n Incident,\n IncidentStatus,\n IncidentTrigger,\n@@ -25,6 +26,16 @@\n \"percentage(users_crashed, users)\": \"% users crash free rate\",\n }\n LOGO_URL = absolute_uri(get_asset_url(\"sentry\", \"images/sentry-email-avatar.png\"))\n+# These should be the same as the options in the frontend\n+# COMPARISON_DELTA_OPTIONS\n+TEXT_COMPARISON_DELTA = {\n+ 5: (\"same time 5 minutes ago\"), # 5 minutes\n+ 15: (\"same time 15 minutes ago\"), # 15 minutes\n+ 60: (\"same time one hour ago\"), # one hour\n+ 1440: (\"same time one day ago\"), # one day\n+ 10080: (\"same time one week ago\"), # one week\n+ 43200: (\"same time one month ago\"), # 30 days\n+}\n \n \n def get_metric_count_from_incident(incident: Incident) -> str:\n@@ -67,14 +78,27 @@\n \n time_window = alert_rule.snuba_query.time_window // 60\n interval = \"minute\" if time_window == 1 else \"minutes\"\n- text = _(\"%(metric_and_agg_text)s in the last %(time_window)d %(interval)s\") % {\n+ # % change alerts have a comparison delta\n+ if alert_rule.comparison_delta:\n+ metric_and_agg_text = f\"{agg_text.capitalize()} {int(metric_value)}%\"\n+ higher_or_lower = (\n+ \"higher\" if alert_rule.threshold_type == AlertRuleThresholdType.ABOVE.value else \"lower\"\n+ )\n+ comparison_delta_minutes = alert_rule.comparison_delta // 60\n+ comparison_string = TEXT_COMPARISON_DELTA.get(\n+ comparison_delta_minutes, f\"same time {comparison_delta_minutes} minutes ago\"\n+ )\n+ return _(\n+ f\"{metric_and_agg_text} {higher_or_lower} in the last {time_window} {interval} \"\n+ f\"compared to the {comparison_string}\"\n+ )\n+\n+ return _(\"%(metric_and_agg_text)s in the last %(time_window)d %(interval)s\") % {\n \"metric_and_agg_text\": metric_and_agg_text,\n \"time_window\": time_window,\n \"interval\": interval,\n }\n \n- return text\n-\n \n def incident_attachment_info(incident, new_status: IncidentStatus, metric_value=None):\n alert_rule = incident.alert_rule\n", "issue": "Slack alert message doesn't match up with why alert was actually triggered\n### Environment\n\nSaaS (https://sentry.io/)\n\n### Steps to Reproduce\n\nHi Team,\n\nA customer reported to us that the messages in their Slack alerts doesn't match up with the real reason the alert was triggered (displayed correctly once you click into the alert and go to the alert page). I was able to replicate this by setting up a similar type of alert.\n1. Set up a metric alert to trigger when `Number of errors is X% higher in Y minutes compared to same time one week ago`\n2. Alert action is to be notified via Slack, specifically a Slack channel\n3. Receive Slack alert\n<img width=\"1269\" alt=\"Screenshot 2023-05-04 at 3 45 03 PM\" src=\"https://user-images.githubusercontent.com/98921471/236560084-f64cf7cf-7856-4921-8c3d-ca70433afda8.png\">\n\n\n### Expected Result\n\nWhen the alert triggers, I expect the message in Slack to say: \"157 events in the last 10 minutes\". 
This is verified by the alert details of why the alert triggered.\n\nAlternatively, it can also be expected to say: \"Number of errors is 89.16% higher in 10 minutes compared to the same time a week ago\".\n<img width=\"1277\" alt=\"Screenshot 2023-05-04 at 3 45 14 PM\" src=\"https://user-images.githubusercontent.com/98921471/236560515-38e4361c-0c8e-4c53-8986-fcd5a67f40fe.png\">\n\n\n### Actual Result\n\nThe Slack alert message says \"168.478260.. events in the last 10 minutes\". This doesn't match up with the 157 events that I see in the alert details.\n\nFor the customer, it said \"400.0 events in the last 60 minutes\" when it should've said \"4 events in the last 60 minutes\", which is 2 orders of magnitudes off.\n<img width=\"544\" alt=\"Screenshot 2023-05-04 at 3 44 30 PM\" src=\"https://user-images.githubusercontent.com/98921471/236560823-894e8da5-6918-416c-8f65-f182456424d7.png\">\n\n\n### Product Area\n\nAlerts\n\n### Link\n\n_No response_\n\n### DSN\n\n_No response_\n\n### Version\n\n_No response_\n\n\u2506Issue is synchronized with this [Jira Improvement](https://getsentry.atlassian.net/browse/FEEDBACK-1825) by [Unito](https://www.unito.io)\n\n", "before_files": [{"content": "from datetime import timedelta\nfrom typing import Optional\nfrom urllib import parse\n\nfrom django.db.models import Max\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom sentry.constants import CRASH_RATE_ALERT_AGGREGATE_ALIAS\nfrom sentry.incidents.logic import get_incident_aggregates\nfrom sentry.incidents.models import (\n INCIDENT_STATUS,\n AlertRule,\n Incident,\n IncidentStatus,\n IncidentTrigger,\n)\nfrom sentry.utils.assets import get_asset_url\nfrom sentry.utils.http import absolute_uri\n\nQUERY_AGGREGATION_DISPLAY = {\n \"count()\": \"events\",\n \"count_unique(tags[sentry:user])\": \"users affected\",\n \"percentage(sessions_crashed, sessions)\": \"% sessions crash free rate\",\n \"percentage(users_crashed, users)\": \"% users crash free rate\",\n}\nLOGO_URL = absolute_uri(get_asset_url(\"sentry\", \"images/sentry-email-avatar.png\"))\n\n\ndef get_metric_count_from_incident(incident: Incident) -> str:\n \"\"\"Returns the current or last count of an incident aggregate.\"\"\"\n incident_trigger = (\n IncidentTrigger.objects.filter(incident=incident).order_by(\"-date_modified\").first()\n )\n if incident_trigger:\n alert_rule_trigger = incident_trigger.alert_rule_trigger\n # TODO: If we're relying on this and expecting possible delays between a\n # trigger fired and this function running, then this could actually be\n # incorrect if they changed the trigger's time window in this time period.\n # Should we store it?\n start = incident_trigger.date_modified - timedelta(\n seconds=alert_rule_trigger.alert_rule.snuba_query.time_window\n )\n end = incident_trigger.date_modified\n else:\n start, end = None, None\n\n return get_incident_aggregates(incident=incident, start=start, end=end).get(\"count\")\n\n\ndef get_incident_status_text(alert_rule: AlertRule, metric_value: str) -> str:\n \"\"\"Returns a human readable current status of an incident\"\"\"\n agg_display_key = alert_rule.snuba_query.aggregate\n\n if CRASH_RATE_ALERT_AGGREGATE_ALIAS in alert_rule.snuba_query.aggregate:\n agg_display_key = agg_display_key.split(f\"AS {CRASH_RATE_ALERT_AGGREGATE_ALIAS}\")[0].strip()\n\n agg_text = QUERY_AGGREGATION_DISPLAY.get(agg_display_key, alert_rule.snuba_query.aggregate)\n\n if agg_text.startswith(\"%\"):\n if metric_value is not None:\n metric_and_agg_text = 
f\"{metric_value}{agg_text}\"\n else:\n metric_and_agg_text = f\"No{agg_text[1:]}\"\n else:\n metric_and_agg_text = f\"{metric_value} {agg_text}\"\n\n time_window = alert_rule.snuba_query.time_window // 60\n interval = \"minute\" if time_window == 1 else \"minutes\"\n text = _(\"%(metric_and_agg_text)s in the last %(time_window)d %(interval)s\") % {\n \"metric_and_agg_text\": metric_and_agg_text,\n \"time_window\": time_window,\n \"interval\": interval,\n }\n\n return text\n\n\ndef incident_attachment_info(incident, new_status: IncidentStatus, metric_value=None):\n alert_rule = incident.alert_rule\n\n status = INCIDENT_STATUS[new_status]\n\n if metric_value is None:\n metric_value = get_metric_count_from_incident(incident)\n\n text = get_incident_status_text(alert_rule, metric_value)\n title = f\"{status}: {alert_rule.name}\"\n\n title_link = alert_rule.organization.absolute_url(\n reverse(\n \"sentry-metric-alert-details\",\n kwargs={\n \"organization_slug\": alert_rule.organization.slug,\n \"alert_rule_id\": alert_rule.id,\n },\n ),\n query=parse.urlencode({\"alert\": str(incident.identifier)}),\n )\n\n return {\n \"title\": title,\n \"text\": text,\n \"logo_url\": LOGO_URL,\n \"status\": status,\n \"ts\": incident.date_started,\n \"title_link\": title_link,\n }\n\n\ndef metric_alert_attachment_info(\n alert_rule: AlertRule,\n selected_incident: Optional[Incident] = None,\n new_status: Optional[IncidentStatus] = None,\n metric_value: Optional[str] = None,\n):\n latest_incident = None\n if selected_incident is None:\n try:\n # Use .get() instead of .first() to avoid sorting table by id\n latest_incident = Incident.objects.filter(\n id__in=Incident.objects.filter(alert_rule=alert_rule)\n .values(\"alert_rule_id\")\n .annotate(incident_id=Max(\"id\"))\n .values(\"incident_id\")\n ).get()\n except Incident.DoesNotExist:\n latest_incident = None\n\n if new_status:\n status = INCIDENT_STATUS[new_status]\n elif selected_incident:\n status = INCIDENT_STATUS[IncidentStatus(selected_incident.status)]\n elif latest_incident:\n status = INCIDENT_STATUS[IncidentStatus(latest_incident.status)]\n else:\n status = INCIDENT_STATUS[IncidentStatus.CLOSED]\n\n query = None\n if selected_incident:\n query = parse.urlencode({\"alert\": str(selected_incident.identifier)})\n title = f\"{status}: {alert_rule.name}\"\n title_link = alert_rule.organization.absolute_url(\n reverse(\n \"sentry-metric-alert-details\",\n kwargs={\n \"organization_slug\": alert_rule.organization.slug,\n \"alert_rule_id\": alert_rule.id,\n },\n ),\n query=query,\n )\n\n if metric_value is None:\n if (\n selected_incident is None\n and latest_incident\n and latest_incident.status != IncidentStatus.CLOSED\n ):\n # Without a selected incident, use latest incident if it is not resolved\n incident_info = latest_incident\n else:\n incident_info = selected_incident\n\n if incident_info:\n metric_value = get_metric_count_from_incident(incident_info)\n\n text = \"\"\n if metric_value is not None and status != INCIDENT_STATUS[IncidentStatus.CLOSED]:\n text = get_incident_status_text(alert_rule, metric_value)\n\n date_started = None\n if selected_incident:\n date_started = selected_incident.date_started\n\n last_triggered_date = None\n if latest_incident:\n last_triggered_date = latest_incident.date_started\n\n return {\n \"title\": title,\n \"text\": text,\n \"logo_url\": LOGO_URL,\n \"status\": status,\n \"date_started\": date_started,\n \"last_triggered_date\": last_triggered_date,\n \"title_link\": title_link,\n }\n", "path": 
"src/sentry/integrations/metric_alerts.py"}]} | 3,107 | 644 |
gh_patches_debug_17639 | rasdani/github-patches | git_diff | wagtail__wagtail-715 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
search fields can't be overridden
In the past, you were able to override a search field of a parent class by redefining it. This functionality appears to be broken in Wagtail 0.7
</issue>
<code>
[start of wagtail/wagtailsearch/index.py]
1 import warnings
2
3 from six import string_types
4
5 from django.db import models
6
7
8 class Indexed(object):
9 @classmethod
10 def indexed_get_parent(cls, require_model=True):
11 for base in cls.__bases__:
12 if issubclass(base, Indexed) and (issubclass(base, models.Model) or require_model is False):
13 return base
14
15 @classmethod
16 def indexed_get_content_type(cls):
17 # Work out content type
18 content_type = (cls._meta.app_label + '_' + cls.__name__).lower()
19
20 # Get parent content type
21 parent = cls.indexed_get_parent()
22 if parent:
23 parent_content_type = parent.indexed_get_content_type()
24 return parent_content_type + '_' + content_type
25 else:
26 return content_type
27
28 @classmethod
29 def indexed_get_toplevel_content_type(cls):
30 # Get parent content type
31 parent = cls.indexed_get_parent()
32 if parent:
33 return parent.indexed_get_content_type()
34 else:
35 # At toplevel, return this content type
36 return (cls._meta.app_label + '_' + cls.__name__).lower()
37
38 @classmethod
39 def get_search_fields(cls):
40 return cls.search_fields
41
42 @classmethod
43 def get_searchable_search_fields(cls):
44 return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())
45
46 @classmethod
47 def get_filterable_search_fields(cls):
48 return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())
49
50 @classmethod
51 def get_indexed_objects(cls):
52 return cls.objects.all()
53
54 search_fields = ()
55
56
57 class BaseField(object):
58 suffix = ''
59
60 def __init__(self, field_name, **kwargs):
61 self.field_name = field_name
62 self.kwargs = kwargs
63
64 def get_field(self, cls):
65 return cls._meta.get_field_by_name(self.field_name)[0]
66
67 def get_attname(self, cls):
68 try:
69 field = self.get_field(cls)
70 return field.attname
71 except models.fields.FieldDoesNotExist:
72 return self.field_name
73
74 def get_index_name(self, cls):
75 return self.get_attname(cls) + self.suffix
76
77 def get_type(self, cls):
78 if 'type' in self.kwargs:
79 return self.kwargs['type']
80
81 try:
82 field = self.get_field(cls)
83 return field.get_internal_type()
84 except models.fields.FieldDoesNotExist:
85 return 'CharField'
86
87 def get_value(self, obj):
88 try:
89 field = self.get_field(obj.__class__)
90 return field._get_val_from_obj(obj)
91 except models.fields.FieldDoesNotExist:
92 value = getattr(obj, self.field_name, None)
93 if hasattr(value, '__call__'):
94 value = value()
95 return value
96
97 def __repr__(self):
98 return '<%s: %s>' % (self.__class__.__name__, self.field_name)
99
100
101 class SearchField(BaseField):
102 def __init__(self, field_name, boost=None, partial_match=False, **kwargs):
103 super(SearchField, self).__init__(field_name, **kwargs)
104 self.boost = boost
105 self.partial_match = partial_match
106
107
108 class FilterField(BaseField):
109 suffix = '_filter'
110
111
[end of wagtail/wagtailsearch/index.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailsearch/index.py b/wagtail/wagtailsearch/index.py
--- a/wagtail/wagtailsearch/index.py
+++ b/wagtail/wagtailsearch/index.py
@@ -37,15 +37,26 @@
@classmethod
def get_search_fields(cls):
- return cls.search_fields
+ search_fields = {}
+
+ for field in cls.search_fields:
+ search_fields[(type(field), field.field_name)] = field
+
+ return list(search_fields.values())
@classmethod
def get_searchable_search_fields(cls):
- return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())
+ return [
+ field for field in cls.get_search_fields()
+ if isinstance(field, SearchField)
+ ]
@classmethod
def get_filterable_search_fields(cls):
- return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())
+ return [
+ field for field in cls.get_search_fields()
+ if isinstance(field, FilterField)
+ ]
@classmethod
def get_indexed_objects(cls):
| {"golden_diff": "diff --git a/wagtail/wagtailsearch/index.py b/wagtail/wagtailsearch/index.py\n--- a/wagtail/wagtailsearch/index.py\n+++ b/wagtail/wagtailsearch/index.py\n@@ -37,15 +37,26 @@\n \n @classmethod\n def get_search_fields(cls):\n- return cls.search_fields\n+ search_fields = {}\n+\n+ for field in cls.search_fields:\n+ search_fields[(type(field), field.field_name)] = field\n+\n+ return list(search_fields.values())\n \n @classmethod\n def get_searchable_search_fields(cls):\n- return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())\n+ return [\n+ field for field in cls.get_search_fields()\n+ if isinstance(field, SearchField)\n+ ]\n \n @classmethod\n def get_filterable_search_fields(cls):\n- return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())\n+ return [\n+ field for field in cls.get_search_fields()\n+ if isinstance(field, FilterField)\n+ ]\n \n @classmethod\n def get_indexed_objects(cls):\n", "issue": "search fields can't be overridden\nIn the past, you were able to override a search field of a parent class by redefining it. This functionality appears to be broken in Wagtail 0.7\n\n", "before_files": [{"content": "import warnings\n\nfrom six import string_types\n\nfrom django.db import models\n\n\nclass Indexed(object):\n @classmethod\n def indexed_get_parent(cls, require_model=True):\n for base in cls.__bases__:\n if issubclass(base, Indexed) and (issubclass(base, models.Model) or require_model is False):\n return base\n\n @classmethod\n def indexed_get_content_type(cls):\n # Work out content type\n content_type = (cls._meta.app_label + '_' + cls.__name__).lower()\n\n # Get parent content type\n parent = cls.indexed_get_parent()\n if parent:\n parent_content_type = parent.indexed_get_content_type()\n return parent_content_type + '_' + content_type\n else:\n return content_type\n\n @classmethod\n def indexed_get_toplevel_content_type(cls):\n # Get parent content type\n parent = cls.indexed_get_parent()\n if parent:\n return parent.indexed_get_content_type()\n else:\n # At toplevel, return this content type\n return (cls._meta.app_label + '_' + cls.__name__).lower()\n\n @classmethod\n def get_search_fields(cls):\n return cls.search_fields\n\n @classmethod\n def get_searchable_search_fields(cls):\n return filter(lambda field: isinstance(field, SearchField), cls.get_search_fields())\n\n @classmethod\n def get_filterable_search_fields(cls):\n return filter(lambda field: isinstance(field, FilterField), cls.get_search_fields())\n\n @classmethod\n def get_indexed_objects(cls):\n return cls.objects.all()\n\n search_fields = ()\n\n\nclass BaseField(object):\n suffix = ''\n\n def __init__(self, field_name, **kwargs):\n self.field_name = field_name\n self.kwargs = kwargs\n\n def get_field(self, cls):\n return cls._meta.get_field_by_name(self.field_name)[0]\n\n def get_attname(self, cls):\n try:\n field = self.get_field(cls)\n return field.attname\n except models.fields.FieldDoesNotExist:\n return self.field_name\n\n def get_index_name(self, cls):\n return self.get_attname(cls) + self.suffix\n\n def get_type(self, cls):\n if 'type' in self.kwargs:\n return self.kwargs['type']\n\n try:\n field = self.get_field(cls)\n return field.get_internal_type()\n except models.fields.FieldDoesNotExist:\n return 'CharField'\n\n def get_value(self, obj):\n try:\n field = self.get_field(obj.__class__)\n return field._get_val_from_obj(obj)\n except models.fields.FieldDoesNotExist:\n value = getattr(obj, self.field_name, None)\n if hasattr(value, 
'__call__'):\n value = value()\n return value\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.field_name)\n\n\nclass SearchField(BaseField):\n def __init__(self, field_name, boost=None, partial_match=False, **kwargs):\n super(SearchField, self).__init__(field_name, **kwargs)\n self.boost = boost\n self.partial_match = partial_match\n\n\nclass FilterField(BaseField):\n suffix = '_filter'\n\n", "path": "wagtail/wagtailsearch/index.py"}]} | 1,511 | 252 |
gh_patches_debug_20508 | rasdani/github-patches | git_diff | oobabooga__text-generation-webui-5794 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Save UI defaults to settings.yaml Not working
### Describe the bug
When I try to activate few options and try using Save UI defaults to settings.yaml it save empty settings.yaml
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
I check few options like in the image below:

and press Save UI defaults to settings.yaml
close the termenal and start again and it goes back as if I did not set those options
Add to that when I check settings.yaml it is totally empty file
### Screenshot
_No response_
### Logs
```shell
it does not show any log
```
### System Info
```shell
Windows 11
I9 13900
Nvidia 4090
128GB RAM
```
</issue>
<code>
[start of modules/ui.py]
1 import copy
2 from pathlib import Path
3
4 import gradio as gr
5 import torch
6 import yaml
7 from transformers import is_torch_xpu_available
8
9 import extensions
10 from modules import shared
11
12 with open(Path(__file__).resolve().parent / '../css/NotoSans/stylesheet.css', 'r') as f:
13 css = f.read()
14 with open(Path(__file__).resolve().parent / '../css/main.css', 'r') as f:
15 css += f.read()
16 with open(Path(__file__).resolve().parent / '../js/main.js', 'r') as f:
17 js = f.read()
18 with open(Path(__file__).resolve().parent / '../js/save_files.js', 'r') as f:
19 save_files_js = f.read()
20 with open(Path(__file__).resolve().parent / '../js/switch_tabs.js', 'r') as f:
21 switch_tabs_js = f.read()
22 with open(Path(__file__).resolve().parent / '../js/show_controls.js', 'r') as f:
23 show_controls_js = f.read()
24 with open(Path(__file__).resolve().parent / '../js/update_big_picture.js', 'r') as f:
25 update_big_picture_js = f.read()
26
27 refresh_symbol = '🔄'
28 delete_symbol = '🗑️'
29 save_symbol = '💾'
30
31 theme = gr.themes.Default(
32 font=['Noto Sans', 'Helvetica', 'ui-sans-serif', 'system-ui', 'sans-serif'],
33 font_mono=['IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'],
34 ).set(
35 border_color_primary='#c5c5d2',
36 button_large_padding='6px 12px',
37 body_text_color_subdued='#484848',
38 background_fill_secondary='#eaeaea',
39 background_fill_primary='#fafafa',
40 )
41
42 if Path("notification.mp3").exists():
43 audio_notification_js = "document.querySelector('#audio_notification audio')?.play();"
44 else:
45 audio_notification_js = ""
46
47
48 def list_model_elements():
49 elements = [
50 'loader',
51 'filter_by_loader',
52 'cpu_memory',
53 'auto_devices',
54 'disk',
55 'cpu',
56 'bf16',
57 'load_in_8bit',
58 'trust_remote_code',
59 'no_use_fast',
60 'use_flash_attention_2',
61 'load_in_4bit',
62 'compute_dtype',
63 'quant_type',
64 'use_double_quant',
65 'wbits',
66 'groupsize',
67 'model_type',
68 'pre_layer',
69 'triton',
70 'desc_act',
71 'no_inject_fused_attention',
72 'no_inject_fused_mlp',
73 'no_use_cuda_fp16',
74 'disable_exllama',
75 'disable_exllamav2',
76 'cfg_cache',
77 'no_flash_attn',
78 'num_experts_per_token',
79 'cache_8bit',
80 'cache_4bit',
81 'autosplit',
82 'threads',
83 'threads_batch',
84 'n_batch',
85 'no_mmap',
86 'mlock',
87 'no_mul_mat_q',
88 'n_gpu_layers',
89 'tensor_split',
90 'n_ctx',
91 'gpu_split',
92 'max_seq_len',
93 'compress_pos_emb',
94 'alpha_value',
95 'rope_freq_base',
96 'numa',
97 'logits_all',
98 'no_offload_kqv',
99 'row_split',
100 'tensorcores',
101 'streaming_llm',
102 'attention_sink_size',
103 'hqq_backend',
104 ]
105 if is_torch_xpu_available():
106 for i in range(torch.xpu.device_count()):
107 elements.append(f'gpu_memory_{i}')
108 else:
109 for i in range(torch.cuda.device_count()):
110 elements.append(f'gpu_memory_{i}')
111
112 return elements
113
114
115 def list_interface_input_elements():
116 elements = [
117 'max_new_tokens',
118 'auto_max_new_tokens',
119 'max_tokens_second',
120 'prompt_lookup_num_tokens',
121 'seed',
122 'temperature',
123 'temperature_last',
124 'dynamic_temperature',
125 'dynatemp_low',
126 'dynatemp_high',
127 'dynatemp_exponent',
128 'smoothing_factor',
129 'smoothing_curve',
130 'top_p',
131 'min_p',
132 'top_k',
133 'typical_p',
134 'epsilon_cutoff',
135 'eta_cutoff',
136 'repetition_penalty',
137 'presence_penalty',
138 'frequency_penalty',
139 'repetition_penalty_range',
140 'encoder_repetition_penalty',
141 'no_repeat_ngram_size',
142 'do_sample',
143 'penalty_alpha',
144 'mirostat_mode',
145 'mirostat_tau',
146 'mirostat_eta',
147 'grammar_string',
148 'negative_prompt',
149 'guidance_scale',
150 'add_bos_token',
151 'ban_eos_token',
152 'custom_token_bans',
153 'sampler_priority',
154 'truncation_length',
155 'custom_stopping_strings',
156 'skip_special_tokens',
157 'stream',
158 'tfs',
159 'top_a',
160 ]
161
162 # Chat elements
163 elements += [
164 'textbox',
165 'start_with',
166 'character_menu',
167 'history',
168 'name1',
169 'user_bio',
170 'name2',
171 'greeting',
172 'context',
173 'mode',
174 'custom_system_message',
175 'instruction_template_str',
176 'chat_template_str',
177 'chat_style',
178 'chat-instruct_command',
179 ]
180
181 # Notebook/default elements
182 elements += [
183 'textbox-notebook',
184 'textbox-default',
185 'output_textbox',
186 'prompt_menu-default',
187 'prompt_menu-notebook',
188 ]
189
190 # Model elements
191 elements += list_model_elements()
192
193 return elements
194
195
196 def gather_interface_values(*args):
197 output = {}
198 for i, element in enumerate(list_interface_input_elements()):
199 output[element] = args[i]
200
201 if not shared.args.multi_user:
202 shared.persistent_interface_state = output
203
204 return output
205
206
207 def apply_interface_values(state, use_persistent=False):
208 if use_persistent:
209 state = shared.persistent_interface_state
210
211 elements = list_interface_input_elements()
212 if len(state) == 0:
213 return [gr.update() for k in elements] # Dummy, do nothing
214 else:
215 return [state[k] if k in state else gr.update() for k in elements]
216
217
218 def save_settings(state, preset, extensions_list, show_controls, theme_state):
219 output = copy.deepcopy(shared.settings)
220 exclude = ['name2', 'greeting', 'context', 'turn_template', 'truncation_length']
221 for k in state:
222 if k in shared.settings and k not in exclude:
223 output[k] = state[k]
224
225 output['preset'] = preset
226 output['prompt-default'] = state['prompt_menu-default']
227 output['prompt-notebook'] = state['prompt_menu-notebook']
228 output['character'] = state['character_menu']
229 output['default_extensions'] = extensions_list
230 output['seed'] = int(output['seed'])
231 output['show_controls'] = show_controls
232 output['dark_theme'] = True if theme_state == 'dark' else False
233
234 # Save extension values in the UI
235 for extension_name in extensions_list:
236 extension = getattr(extensions, extension_name).script
237 if hasattr(extension, 'params'):
238 params = getattr(extension, 'params')
239 for param in params:
240 _id = f"{extension_name}-{param}"
241 # Only save if different from default value
242 if param not in shared.default_settings or params[param] != shared.default_settings[param]:
243 output[_id] = params[param]
244
245 # Do not save unchanged settings
246 for key in list(output.keys()):
247 if key in shared.default_settings and output[key] == shared.default_settings[key]:
248 output.pop(key)
249
250 return yaml.dump(output, sort_keys=False, width=float("inf"))
251
252
253 def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class, interactive=True):
254 """
255 Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui
256 """
257 def refresh():
258 refresh_method()
259 args = refreshed_args() if callable(refreshed_args) else refreshed_args
260
261 return gr.update(**(args or {}))
262
263 refresh_button = gr.Button(refresh_symbol, elem_classes=elem_class, interactive=interactive)
264 refresh_button.click(
265 fn=lambda: {k: tuple(v) if type(k) is list else v for k, v in refresh().items()},
266 inputs=[],
267 outputs=[refresh_component]
268 )
269
270 return refresh_button
271
[end of modules/ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/modules/ui.py b/modules/ui.py
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -233,14 +233,16 @@
# Save extension values in the UI
for extension_name in extensions_list:
- extension = getattr(extensions, extension_name).script
- if hasattr(extension, 'params'):
- params = getattr(extension, 'params')
- for param in params:
- _id = f"{extension_name}-{param}"
- # Only save if different from default value
- if param not in shared.default_settings or params[param] != shared.default_settings[param]:
- output[_id] = params[param]
+ extension = getattr(extensions, extension_name, None)
+ if extension:
+ extension = extension.script
+ if hasattr(extension, 'params'):
+ params = getattr(extension, 'params')
+ for param in params:
+ _id = f"{extension_name}-{param}"
+ # Only save if different from default value
+ if param not in shared.default_settings or params[param] != shared.default_settings[param]:
+ output[_id] = params[param]
# Do not save unchanged settings
for key in list(output.keys()):
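
For illustration, a minimal self-contained sketch of the guarded lookup this diff introduces: an extension that is listed in the UI but was never imported is now skipped instead of raising `AttributeError` and aborting the save. The extension names and `params` values below are invented stand-ins, not taken from the repository.

```
from types import SimpleNamespace

# Invented stand-in for the real `extensions` package: only "gallery" was
# actually imported, but the UI still lists "openai" as active.
extensions = SimpleNamespace(
    gallery=SimpleNamespace(script=SimpleNamespace(params={"items": 4}))
)
extensions_list = ["gallery", "openai"]

output = {}
for extension_name in extensions_list:
    extension = getattr(extensions, extension_name, None)  # None instead of AttributeError
    if extension:
        params = getattr(extension.script, "params", {})
        for param, value in params.items():
            output[f"{extension_name}-{param}"] = value

print(output)  # {'gallery-items': 4}
```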
| {"golden_diff": "diff --git a/modules/ui.py b/modules/ui.py\n--- a/modules/ui.py\n+++ b/modules/ui.py\n@@ -233,14 +233,16 @@\n \n # Save extension values in the UI\n for extension_name in extensions_list:\n- extension = getattr(extensions, extension_name).script\n- if hasattr(extension, 'params'):\n- params = getattr(extension, 'params')\n- for param in params:\n- _id = f\"{extension_name}-{param}\"\n- # Only save if different from default value\n- if param not in shared.default_settings or params[param] != shared.default_settings[param]:\n- output[_id] = params[param]\n+ extension = getattr(extensions, extension_name, None)\n+ if extension:\n+ extension = extension.script\n+ if hasattr(extension, 'params'):\n+ params = getattr(extension, 'params')\n+ for param in params:\n+ _id = f\"{extension_name}-{param}\"\n+ # Only save if different from default value\n+ if param not in shared.default_settings or params[param] != shared.default_settings[param]:\n+ output[_id] = params[param]\n \n # Do not save unchanged settings\n for key in list(output.keys()):\n", "issue": "Save UI defaults to settings.yaml Not working\n### Describe the bug\n\nWhen I try to activate few options and try using Save UI defaults to settings.yaml it save empty settings.yaml\r\n\r\n\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nI check few options like in the image below:\r\n\r\n\r\n\r\nand press Save UI defaults to settings.yaml\r\n\r\nclose the termenal and start again and it goes back as if I did not set those options\r\nAdd to that when I check settings.yaml it is totally empty file\r\n\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nit does not show any log\n```\n\n\n### System Info\n\n```shell\nWindows 11\r\nI9 13900\r\nNvidia 4090\r\n128GB RAM\n```\n\n", "before_files": [{"content": "import copy\nfrom pathlib import Path\n\nimport gradio as gr\nimport torch\nimport yaml\nfrom transformers import is_torch_xpu_available\n\nimport extensions\nfrom modules import shared\n\nwith open(Path(__file__).resolve().parent / '../css/NotoSans/stylesheet.css', 'r') as f:\n css = f.read()\nwith open(Path(__file__).resolve().parent / '../css/main.css', 'r') as f:\n css += f.read()\nwith open(Path(__file__).resolve().parent / '../js/main.js', 'r') as f:\n js = f.read()\nwith open(Path(__file__).resolve().parent / '../js/save_files.js', 'r') as f:\n save_files_js = f.read()\nwith open(Path(__file__).resolve().parent / '../js/switch_tabs.js', 'r') as f:\n switch_tabs_js = f.read()\nwith open(Path(__file__).resolve().parent / '../js/show_controls.js', 'r') as f:\n show_controls_js = f.read()\nwith open(Path(__file__).resolve().parent / '../js/update_big_picture.js', 'r') as f:\n update_big_picture_js = f.read()\n\nrefresh_symbol = '\ud83d\udd04'\ndelete_symbol = '\ud83d\uddd1\ufe0f'\nsave_symbol = '\ud83d\udcbe'\n\ntheme = gr.themes.Default(\n font=['Noto Sans', 'Helvetica', 'ui-sans-serif', 'system-ui', 'sans-serif'],\n font_mono=['IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'],\n).set(\n border_color_primary='#c5c5d2',\n button_large_padding='6px 12px',\n body_text_color_subdued='#484848',\n background_fill_secondary='#eaeaea',\n background_fill_primary='#fafafa',\n)\n\nif Path(\"notification.mp3\").exists():\n audio_notification_js = \"document.querySelector('#audio_notification audio')?.play();\"\nelse:\n audio_notification_js = \"\"\n\n\ndef list_model_elements():\n elements = [\n 'loader',\n 'filter_by_loader',\n 'cpu_memory',\n 
'auto_devices',\n 'disk',\n 'cpu',\n 'bf16',\n 'load_in_8bit',\n 'trust_remote_code',\n 'no_use_fast',\n 'use_flash_attention_2',\n 'load_in_4bit',\n 'compute_dtype',\n 'quant_type',\n 'use_double_quant',\n 'wbits',\n 'groupsize',\n 'model_type',\n 'pre_layer',\n 'triton',\n 'desc_act',\n 'no_inject_fused_attention',\n 'no_inject_fused_mlp',\n 'no_use_cuda_fp16',\n 'disable_exllama',\n 'disable_exllamav2',\n 'cfg_cache',\n 'no_flash_attn',\n 'num_experts_per_token',\n 'cache_8bit',\n 'cache_4bit',\n 'autosplit',\n 'threads',\n 'threads_batch',\n 'n_batch',\n 'no_mmap',\n 'mlock',\n 'no_mul_mat_q',\n 'n_gpu_layers',\n 'tensor_split',\n 'n_ctx',\n 'gpu_split',\n 'max_seq_len',\n 'compress_pos_emb',\n 'alpha_value',\n 'rope_freq_base',\n 'numa',\n 'logits_all',\n 'no_offload_kqv',\n 'row_split',\n 'tensorcores',\n 'streaming_llm',\n 'attention_sink_size',\n 'hqq_backend',\n ]\n if is_torch_xpu_available():\n for i in range(torch.xpu.device_count()):\n elements.append(f'gpu_memory_{i}')\n else:\n for i in range(torch.cuda.device_count()):\n elements.append(f'gpu_memory_{i}')\n\n return elements\n\n\ndef list_interface_input_elements():\n elements = [\n 'max_new_tokens',\n 'auto_max_new_tokens',\n 'max_tokens_second',\n 'prompt_lookup_num_tokens',\n 'seed',\n 'temperature',\n 'temperature_last',\n 'dynamic_temperature',\n 'dynatemp_low',\n 'dynatemp_high',\n 'dynatemp_exponent',\n 'smoothing_factor',\n 'smoothing_curve',\n 'top_p',\n 'min_p',\n 'top_k',\n 'typical_p',\n 'epsilon_cutoff',\n 'eta_cutoff',\n 'repetition_penalty',\n 'presence_penalty',\n 'frequency_penalty',\n 'repetition_penalty_range',\n 'encoder_repetition_penalty',\n 'no_repeat_ngram_size',\n 'do_sample',\n 'penalty_alpha',\n 'mirostat_mode',\n 'mirostat_tau',\n 'mirostat_eta',\n 'grammar_string',\n 'negative_prompt',\n 'guidance_scale',\n 'add_bos_token',\n 'ban_eos_token',\n 'custom_token_bans',\n 'sampler_priority',\n 'truncation_length',\n 'custom_stopping_strings',\n 'skip_special_tokens',\n 'stream',\n 'tfs',\n 'top_a',\n ]\n\n # Chat elements\n elements += [\n 'textbox',\n 'start_with',\n 'character_menu',\n 'history',\n 'name1',\n 'user_bio',\n 'name2',\n 'greeting',\n 'context',\n 'mode',\n 'custom_system_message',\n 'instruction_template_str',\n 'chat_template_str',\n 'chat_style',\n 'chat-instruct_command',\n ]\n\n # Notebook/default elements\n elements += [\n 'textbox-notebook',\n 'textbox-default',\n 'output_textbox',\n 'prompt_menu-default',\n 'prompt_menu-notebook',\n ]\n\n # Model elements\n elements += list_model_elements()\n\n return elements\n\n\ndef gather_interface_values(*args):\n output = {}\n for i, element in enumerate(list_interface_input_elements()):\n output[element] = args[i]\n\n if not shared.args.multi_user:\n shared.persistent_interface_state = output\n\n return output\n\n\ndef apply_interface_values(state, use_persistent=False):\n if use_persistent:\n state = shared.persistent_interface_state\n\n elements = list_interface_input_elements()\n if len(state) == 0:\n return [gr.update() for k in elements] # Dummy, do nothing\n else:\n return [state[k] if k in state else gr.update() for k in elements]\n\n\ndef save_settings(state, preset, extensions_list, show_controls, theme_state):\n output = copy.deepcopy(shared.settings)\n exclude = ['name2', 'greeting', 'context', 'turn_template', 'truncation_length']\n for k in state:\n if k in shared.settings and k not in exclude:\n output[k] = state[k]\n\n output['preset'] = preset\n output['prompt-default'] = state['prompt_menu-default']\n 
output['prompt-notebook'] = state['prompt_menu-notebook']\n output['character'] = state['character_menu']\n output['default_extensions'] = extensions_list\n output['seed'] = int(output['seed'])\n output['show_controls'] = show_controls\n output['dark_theme'] = True if theme_state == 'dark' else False\n\n # Save extension values in the UI\n for extension_name in extensions_list:\n extension = getattr(extensions, extension_name).script\n if hasattr(extension, 'params'):\n params = getattr(extension, 'params')\n for param in params:\n _id = f\"{extension_name}-{param}\"\n # Only save if different from default value\n if param not in shared.default_settings or params[param] != shared.default_settings[param]:\n output[_id] = params[param]\n\n # Do not save unchanged settings\n for key in list(output.keys()):\n if key in shared.default_settings and output[key] == shared.default_settings[key]:\n output.pop(key)\n\n return yaml.dump(output, sort_keys=False, width=float(\"inf\"))\n\n\ndef create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class, interactive=True):\n \"\"\"\n Copied from https://github.com/AUTOMATIC1111/stable-diffusion-webui\n \"\"\"\n def refresh():\n refresh_method()\n args = refreshed_args() if callable(refreshed_args) else refreshed_args\n\n return gr.update(**(args or {}))\n\n refresh_button = gr.Button(refresh_symbol, elem_classes=elem_class, interactive=interactive)\n refresh_button.click(\n fn=lambda: {k: tuple(v) if type(k) is list else v for k, v in refresh().items()},\n inputs=[],\n outputs=[refresh_component]\n )\n\n return refresh_button\n", "path": "modules/ui.py"}]} | 3,324 | 269 |
gh_patches_debug_24770 | rasdani/github-patches | git_diff | dask__dask-1121 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
compute node names within delayed functions
I love using `dask.imperative.delayed` as a decorator to hide the daskyness from the driver code, but the names generated for `dask.dot.dot_graph` leave something to be desired. It's often easy to compute unique names within the delayed function, e.g.:
```
@delayed
def run(config):
__task_name = "run_" + config["job_name"]
res = do_stuff(config)
return res
```
which can be done with a wrapper:
```
def run(config):
task_name = "run_" + config["job_name"]
return delayed(_run, name=task_name)(config)
def _run(config)
res = do_stuff(config)
return res
```
Any way to get rid of the wrapper step?
</issue>
<code>
[start of dask/dot.py]
1 from __future__ import absolute_import, division, print_function
2
3 import re
4 from functools import partial
5
6 from graphviz import Digraph
7
8 from .core import istask, get_dependencies, ishashable
9 from .utils import funcname
10
11
12 def task_label(task):
13 """Label for a task on a dot graph.
14
15 Examples
16 --------
17 >>> from operator import add
18 >>> task_label((add, 1, 2))
19 'add'
20 >>> task_label((add, (add, 1, 2), 3))
21 'add(...)'
22 """
23 func = task[0]
24 if hasattr(func, 'funcs'):
25 if len(func.funcs) > 1:
26 return '{0}(...)'.format(funcname(func.funcs[0]))
27 else:
28 head = funcname(func.funcs[0])
29 else:
30 head = funcname(task[0])
31 if any(has_sub_tasks(i) for i in task[1:]):
32 return '{0}(...)'.format(head)
33 else:
34 return head
35
36
37 def has_sub_tasks(task):
38 """Returns True if the task has sub tasks"""
39 if istask(task):
40 return True
41 elif isinstance(task, list):
42 return any(has_sub_tasks(i) for i in task)
43 else:
44 return False
45
46
47 def name(x):
48 try:
49 return str(hash(x))
50 except TypeError:
51 return str(hash(str(x)))
52
53
54 _HASHPAT = re.compile('([0-9a-z]{32})')
55
56
57 def label(x, cache=None):
58 """
59
60 >>> label('x')
61 'x'
62
63 >>> label(('x', 1))
64 "('x', 1)"
65
66 >>> from hashlib import md5
67 >>> x = 'x-%s-hello' % md5(b'1234').hexdigest()
68 >>> x
69 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'
70
71 >>> label(x)
72 'x-#-hello'
73 """
74 s = str(x)
75 m = re.search(_HASHPAT, s)
76 if m is not None:
77 for h in m.groups():
78 if cache is not None:
79 n = cache.get(h, len(cache))
80 label = '#{0}'.format(n)
81 # cache will be overwritten destructively
82 cache[h] = n
83 else:
84 label = '#'
85 s = s.replace(h, label)
86 return s
87
88
89 def to_graphviz(dsk, data_attributes=None, function_attributes=None, **kwargs):
90 if data_attributes is None:
91 data_attributes = {}
92 if function_attributes is None:
93 function_attributes = {}
94
95 attributes = {'rankdir': 'BT'}
96 attributes.update(kwargs)
97 g = Digraph(graph_attr=attributes)
98
99 seen = set()
100 cache = {}
101
102 for k, v in dsk.items():
103 k_name = name(k)
104 if k_name not in seen:
105 seen.add(k_name)
106 g.node(k_name, label=label(k, cache=cache), shape='box',
107 **data_attributes.get(k, {}))
108
109 if istask(v):
110 func_name = name((k, 'function'))
111 if func_name not in seen:
112 seen.add(func_name)
113 g.node(func_name, label=task_label(v), shape='circle',
114 **function_attributes.get(k, {}))
115 g.edge(func_name, k_name)
116
117 for dep in get_dependencies(dsk, k):
118 dep_name = name(dep)
119 if dep_name not in seen:
120 seen.add(dep_name)
121 g.node(dep_name, label=label(dep, cache=cache), shape='box',
122 **data_attributes.get(dep, {}))
123 g.edge(dep_name, func_name)
124 elif ishashable(v) and v in dsk:
125 g.edge(name(v), k_name)
126 return g
127
128
129 IPYTHON_IMAGE_FORMATS = frozenset(['jpeg', 'png'])
130 IPYTHON_NO_DISPLAY_FORMATS = frozenset(['dot', 'pdf'])
131
132
133 def _get_display_cls(format):
134 """
135 Get the appropriate IPython display class for `format`.
136
137 Returns `IPython.display.SVG` if format=='svg', otherwise
138 `IPython.display.Image`.
139
140 If IPython is not importable, return dummy function that swallows its
141 arguments and returns None.
142 """
143 dummy = lambda *args, **kwargs: None
144 try:
145 import IPython.display as display
146 except ImportError:
147 # Can't return a display object if no IPython.
148 return dummy
149
150 if format in IPYTHON_NO_DISPLAY_FORMATS:
151 # IPython can't display this format natively, so just return None.
152 return dummy
153 elif format in IPYTHON_IMAGE_FORMATS:
154 # Partially apply `format` so that `Image` and `SVG` supply a uniform
155 # interface to the caller.
156 return partial(display.Image, format=format)
157 elif format == 'svg':
158 return display.SVG
159 else:
160 raise ValueError("Unknown format '%s' passed to `dot_graph`" % format)
161
162
163 def dot_graph(dsk, filename='mydask', format=None, **kwargs):
164 """
165 Render a task graph using dot.
166
167 If `filename` is not None, write a file to disk with that name in the
168 format specified by `format`. `filename` should not include an extension.
169
170 Parameters
171 ----------
172 dsk : dict
173 The graph to display.
174 filename : str or None, optional
175 The name (without an extension) of the file to write to disk. If
176 `filename` is None, no file will be written, and we communicate with
177 dot using only pipes. Default is 'mydask'.
178 format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
179 Format in which to write output file. Default is 'png'.
180 **kwargs
181 Additional keyword arguments to forward to `to_graphviz`.
182
183 Returns
184 -------
185 result : None or IPython.display.Image or IPython.display.SVG (See below.)
186
187 Notes
188 -----
189 If IPython is installed, we return an IPython.display object in the
190 requested format. If IPython is not installed, we just return None.
191
192 We always return None if format is 'pdf' or 'dot', because IPython can't
193 display these formats natively. Passing these formats with filename=None
194 will not produce any useful output.
195
196 See Also
197 --------
198 dask.dot.to_graphviz
199 """
200 g = to_graphviz(dsk, **kwargs)
201
202 fmts = ['.png', '.pdf', '.dot', '.svg', '.jpeg', '.jpg']
203 if format is None and any(filename.lower().endswith(fmt) for fmt in fmts):
204 format = filename.lower().split('.')[-1]
205 filename = filename.rsplit('.')[0]
206
207 if format is None:
208 format = 'png'
209
210 data = g.pipe(format=format)
211 if not data:
212 raise RuntimeError("Graphviz failed to properly produce an image. "
213 "This probably means your installation of graphviz "
214 "is missing png support. See: "
215 "https://github.com/ContinuumIO/anaconda-issues/"
216 "issues/485 for more information.")
217
218 display_cls = _get_display_cls(format)
219
220 if not filename:
221 return display_cls(data=data)
222
223 full_filename = '.'.join([filename, format])
224 with open(full_filename, 'wb') as f:
225 f.write(data)
226
227 return display_cls(filename=full_filename)
228
[end of dask/dot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/dask/dot.py b/dask/dot.py
--- a/dask/dot.py
+++ b/dask/dot.py
@@ -52,6 +52,7 @@
_HASHPAT = re.compile('([0-9a-z]{32})')
+_UUIDPAT = re.compile('([0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12})')
def label(x, cache=None):
@@ -68,21 +69,30 @@
>>> x
'x-81dc9bdb52d04dc20036dbd8313ed055-hello'
+ >>> label(x)
+ 'x-#-hello'
+
+ >>> from uuid import uuid1
+ >>> x = 'x-%s-hello' % uuid1()
+ >>> x # doctest: +SKIP
+ 'x-4c1a3d7e-0b45-11e6-8334-54ee75105593-hello'
+
>>> label(x)
'x-#-hello'
"""
s = str(x)
- m = re.search(_HASHPAT, s)
- if m is not None:
- for h in m.groups():
- if cache is not None:
- n = cache.get(h, len(cache))
- label = '#{0}'.format(n)
- # cache will be overwritten destructively
- cache[h] = n
- else:
- label = '#'
- s = s.replace(h, label)
+ for pattern in (_HASHPAT, _UUIDPAT):
+ m = re.search(pattern, s)
+ if m is not None:
+ for h in m.groups():
+ if cache is not None:
+ n = cache.get(h, len(cache))
+ label = '#{0}'.format(n)
+ # cache will be overwritten destructively
+ cache[h] = n
+ else:
+ label = '#'
+ s = s.replace(h, label)
return s
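
For illustration, a hedged standalone sketch of the label shortening the patched `label` performs, which now covers uuid1-style suffixes as well as md5 hashes. The helper name `shorten` and the sample keys are invented for this sketch.

```
import re
from uuid import uuid1

_HASHPAT = re.compile('([0-9a-z]{32})')
_UUIDPAT = re.compile('([0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12})')

def shorten(key, cache):
    """Replace md5- and uuid-style substrings with short '#n' labels."""
    s = str(key)
    for pattern in (_HASHPAT, _UUIDPAT):
        m = re.search(pattern, s)
        if m is not None:
            for h in m.groups():
                n = cache.setdefault(h, len(cache))  # reuse the number for repeated keys
                s = s.replace(h, '#{0}'.format(n))
    return s

cache = {}
print(shorten('x-%s-hello' % uuid1(), cache))  # 'x-#0-hello'
print(shorten('y-%s' % ('a' * 32), cache))     # 'y-#1'
```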
| {"golden_diff": "diff --git a/dask/dot.py b/dask/dot.py\n--- a/dask/dot.py\n+++ b/dask/dot.py\n@@ -52,6 +52,7 @@\n \n \n _HASHPAT = re.compile('([0-9a-z]{32})')\n+_UUIDPAT = re.compile('([0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12})')\n \n \n def label(x, cache=None):\n@@ -68,21 +69,30 @@\n >>> x\n 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'\n \n+ >>> label(x)\n+ 'x-#-hello'\n+\n+ >>> from uuid import uuid1\n+ >>> x = 'x-%s-hello' % uuid1()\n+ >>> x # doctest: +SKIP\n+ 'x-4c1a3d7e-0b45-11e6-8334-54ee75105593-hello'\n+\n >>> label(x)\n 'x-#-hello'\n \"\"\"\n s = str(x)\n- m = re.search(_HASHPAT, s)\n- if m is not None:\n- for h in m.groups():\n- if cache is not None:\n- n = cache.get(h, len(cache))\n- label = '#{0}'.format(n)\n- # cache will be overwritten destructively\n- cache[h] = n\n- else:\n- label = '#'\n- s = s.replace(h, label)\n+ for pattern in (_HASHPAT, _UUIDPAT):\n+ m = re.search(pattern, s)\n+ if m is not None:\n+ for h in m.groups():\n+ if cache is not None:\n+ n = cache.get(h, len(cache))\n+ label = '#{0}'.format(n)\n+ # cache will be overwritten destructively\n+ cache[h] = n\n+ else:\n+ label = '#'\n+ s = s.replace(h, label)\n return s\n", "issue": "compute node names within delayed functions\nI love using `dask.imperative.delayed` as a decorator to hide the daskyness from the driver code, but the names generated for `dask.dot.dot_graph` leave something to be desired. Its often easy to compute unique names within the delayed function, e.g.:\n\n```\n@delayed\ndef run(config):\n __task_name = \"run_\" + config[\"job_name\"]\n res = do_stuff(config)\n return res\n```\n\nwhich can be done with a wrapper:\n\n```\ndef run(config):\n task_name = \"run_\" + config[\"job_name\"]\n return delayed(_run, name=task_name)(config)\n\ndef _run(config)\n res = do_stuff(config)\n return res\n```\n\nAny way to get rid of the wrapper step?\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport re\nfrom functools import partial\n\nfrom graphviz import Digraph\n\nfrom .core import istask, get_dependencies, ishashable\nfrom .utils import funcname\n\n\ndef task_label(task):\n \"\"\"Label for a task on a dot graph.\n\n Examples\n --------\n >>> from operator import add\n >>> task_label((add, 1, 2))\n 'add'\n >>> task_label((add, (add, 1, 2), 3))\n 'add(...)'\n \"\"\"\n func = task[0]\n if hasattr(func, 'funcs'):\n if len(func.funcs) > 1:\n return '{0}(...)'.format(funcname(func.funcs[0]))\n else:\n head = funcname(func.funcs[0])\n else:\n head = funcname(task[0])\n if any(has_sub_tasks(i) for i in task[1:]):\n return '{0}(...)'.format(head)\n else:\n return head\n\n\ndef has_sub_tasks(task):\n \"\"\"Returns True if the task has sub tasks\"\"\"\n if istask(task):\n return True\n elif isinstance(task, list):\n return any(has_sub_tasks(i) for i in task)\n else:\n return False\n\n\ndef name(x):\n try:\n return str(hash(x))\n except TypeError:\n return str(hash(str(x)))\n\n\n_HASHPAT = re.compile('([0-9a-z]{32})')\n\n\ndef label(x, cache=None):\n \"\"\"\n\n >>> label('x')\n 'x'\n\n >>> label(('x', 1))\n \"('x', 1)\"\n\n >>> from hashlib import md5\n >>> x = 'x-%s-hello' % md5(b'1234').hexdigest()\n >>> x\n 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'\n\n >>> label(x)\n 'x-#-hello'\n \"\"\"\n s = str(x)\n m = re.search(_HASHPAT, s)\n if m is not None:\n for h in m.groups():\n if cache is not None:\n n = cache.get(h, len(cache))\n label = '#{0}'.format(n)\n # cache will be overwritten destructively\n cache[h] = n\n else:\n label = '#'\n 
s = s.replace(h, label)\n return s\n\n\ndef to_graphviz(dsk, data_attributes=None, function_attributes=None, **kwargs):\n if data_attributes is None:\n data_attributes = {}\n if function_attributes is None:\n function_attributes = {}\n\n attributes = {'rankdir': 'BT'}\n attributes.update(kwargs)\n g = Digraph(graph_attr=attributes)\n\n seen = set()\n cache = {}\n\n for k, v in dsk.items():\n k_name = name(k)\n if k_name not in seen:\n seen.add(k_name)\n g.node(k_name, label=label(k, cache=cache), shape='box',\n **data_attributes.get(k, {}))\n\n if istask(v):\n func_name = name((k, 'function'))\n if func_name not in seen:\n seen.add(func_name)\n g.node(func_name, label=task_label(v), shape='circle',\n **function_attributes.get(k, {}))\n g.edge(func_name, k_name)\n\n for dep in get_dependencies(dsk, k):\n dep_name = name(dep)\n if dep_name not in seen:\n seen.add(dep_name)\n g.node(dep_name, label=label(dep, cache=cache), shape='box',\n **data_attributes.get(dep, {}))\n g.edge(dep_name, func_name)\n elif ishashable(v) and v in dsk:\n g.edge(name(v), k_name)\n return g\n\n\nIPYTHON_IMAGE_FORMATS = frozenset(['jpeg', 'png'])\nIPYTHON_NO_DISPLAY_FORMATS = frozenset(['dot', 'pdf'])\n\n\ndef _get_display_cls(format):\n \"\"\"\n Get the appropriate IPython display class for `format`.\n\n Returns `IPython.display.SVG` if format=='svg', otherwise\n `IPython.display.Image`.\n\n If IPython is not importable, return dummy function that swallows its\n arguments and returns None.\n \"\"\"\n dummy = lambda *args, **kwargs: None\n try:\n import IPython.display as display\n except ImportError:\n # Can't return a display object if no IPython.\n return dummy\n\n if format in IPYTHON_NO_DISPLAY_FORMATS:\n # IPython can't display this format natively, so just return None.\n return dummy\n elif format in IPYTHON_IMAGE_FORMATS:\n # Partially apply `format` so that `Image` and `SVG` supply a uniform\n # interface to the caller.\n return partial(display.Image, format=format)\n elif format == 'svg':\n return display.SVG\n else:\n raise ValueError(\"Unknown format '%s' passed to `dot_graph`\" % format)\n\n\ndef dot_graph(dsk, filename='mydask', format=None, **kwargs):\n \"\"\"\n Render a task graph using dot.\n\n If `filename` is not None, write a file to disk with that name in the\n format specified by `format`. `filename` should not include an extension.\n\n Parameters\n ----------\n dsk : dict\n The graph to display.\n filename : str or None, optional\n The name (without an extension) of the file to write to disk. If\n `filename` is None, no file will be written, and we communicate with\n dot using only pipes. Default is 'mydask'.\n format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional\n Format in which to write output file. Default is 'png'.\n **kwargs\n Additional keyword arguments to forward to `to_graphviz`.\n\n Returns\n -------\n result : None or IPython.display.Image or IPython.display.SVG (See below.)\n\n Notes\n -----\n If IPython is installed, we return an IPython.display object in the\n requested format. If IPython is not installed, we just return None.\n\n We always return None if format is 'pdf' or 'dot', because IPython can't\n display these formats natively. 
Passing these formats with filename=None\n will not produce any useful output.\n\n See Also\n --------\n dask.dot.to_graphviz\n \"\"\"\n g = to_graphviz(dsk, **kwargs)\n\n fmts = ['.png', '.pdf', '.dot', '.svg', '.jpeg', '.jpg']\n if format is None and any(filename.lower().endswith(fmt) for fmt in fmts):\n format = filename.lower().split('.')[-1]\n filename = filename.rsplit('.')[0]\n\n if format is None:\n format = 'png'\n\n data = g.pipe(format=format)\n if not data:\n raise RuntimeError(\"Graphviz failed to properly produce an image. \"\n \"This probably means your installation of graphviz \"\n \"is missing png support. See: \"\n \"https://github.com/ContinuumIO/anaconda-issues/\"\n \"issues/485 for more information.\")\n\n display_cls = _get_display_cls(format)\n\n if not filename:\n return display_cls(data=data)\n\n full_filename = '.'.join([filename, format])\n with open(full_filename, 'wb') as f:\n f.write(data)\n\n return display_cls(filename=full_filename)\n", "path": "dask/dot.py"}]} | 2,958 | 503 |
gh_patches_debug_14214 | rasdani/github-patches | git_diff | sublimelsp__LSP-2376 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide a way to save all modified files after applying workspace edits
**Is your feature request related to a problem? Please describe.**
When applying refactorings (like renames) it's often the case that multiple files are modified. It's a chore to then have to find and save all those modified files.
**Describe the solution you'd like**
We could provide a dialog after more than one file was modified asking the user whether all modified files should be saved. Note that some people don't like dialogs (#1922).
Or maybe even show a "tree view" in a sheet, showing all modified files and allowing the user to navigate to them and save all at once.
**Describe alternatives you've considered**
- Saving all edited files one by one.
- Using "save all" command but that fails when there is some unsaved buffer in the window (for example I like to have some to keep notes in them)
</issue>
<code>
[start of plugin/save_command.py]
1 from .core.registry import LspTextCommand
2 from .core.settings import userprefs
3 from .core.typing import Callable, List, Type
4 from abc import ABCMeta, abstractmethod
5 import sublime
6 import sublime_plugin
7
8
9 class SaveTask(metaclass=ABCMeta):
10 """
11 Base class for tasks that run on save.
12
13 Note: The whole task runs on the async thread.
14 """
15
16 @classmethod
17 @abstractmethod
18 def is_applicable(cls, view: sublime.View) -> bool:
19 pass
20
21 def __init__(self, task_runner: LspTextCommand, on_done: Callable[[], None]):
22 self._task_runner = task_runner
23 self._on_done = on_done
24 self._completed = False
25 self._cancelled = False
26 self._status_key = type(self).__name__
27
28 def run_async(self) -> None:
29 self._erase_view_status()
30 sublime.set_timeout_async(self._on_timeout, userprefs().on_save_task_timeout_ms)
31
32 def _on_timeout(self) -> None:
33 if not self._completed and not self._cancelled:
34 self._set_view_status('LSP: Timeout processing {}'.format(self.__class__.__name__))
35 self._cancelled = True
36 self._on_done()
37
38 def cancel(self) -> None:
39 self._cancelled = True
40
41 def _set_view_status(self, text: str) -> None:
42 self._task_runner.view.set_status(self._status_key, text)
43 sublime.set_timeout_async(self._erase_view_status, 5000)
44
45 def _erase_view_status(self) -> None:
46 self._task_runner.view.erase_status(self._status_key)
47
48 def _on_complete(self) -> None:
49 assert not self._completed
50 self._completed = True
51 if not self._cancelled:
52 self._on_done()
53
54 def _purge_changes_async(self) -> None:
55 # Supermassive hack that will go away later.
56 listeners = sublime_plugin.view_event_listeners.get(self._task_runner.view.id(), [])
57 for listener in listeners:
58 if listener.__class__.__name__ == 'DocumentSyncListener':
59 listener.purge_changes_async() # type: ignore
60 break
61
62
63 class LspSaveCommand(LspTextCommand):
64 """
65 A command used as a substitute for native save command. Runs code actions and document
66 formatting before triggering the native save command.
67 """
68 _tasks = [] # type: List[Type[SaveTask]]
69
70 @classmethod
71 def register_task(cls, task: Type[SaveTask]) -> None:
72 assert task not in cls._tasks
73 cls._tasks.append(task)
74
75 def __init__(self, view: sublime.View) -> None:
76 super().__init__(view)
77 self._pending_tasks = [] # type: List[SaveTask]
78
79 def run(self, edit: sublime.Edit) -> None:
80 if self._pending_tasks:
81 for task in self._pending_tasks:
82 task.cancel()
83 self._pending_tasks = []
84 sublime.set_timeout_async(self._trigger_on_pre_save_async)
85 for Task in self._tasks:
86 if Task.is_applicable(self.view):
87 self._pending_tasks.append(Task(self, self._on_task_completed_async))
88 if self._pending_tasks:
89 sublime.set_timeout_async(self._run_next_task_async)
90 else:
91 self._trigger_native_save()
92
93 def _trigger_on_pre_save_async(self) -> None:
94 # Supermassive hack that will go away later.
95 listeners = sublime_plugin.view_event_listeners.get(self.view.id(), [])
96 for listener in listeners:
97 if listener.__class__.__name__ == 'DocumentSyncListener':
98 listener.trigger_on_pre_save_async() # type: ignore
99 break
100
101 def _run_next_task_async(self) -> None:
102 current_task = self._pending_tasks[0]
103 current_task.run_async()
104
105 def _on_task_completed_async(self) -> None:
106 self._pending_tasks.pop(0)
107 if self._pending_tasks:
108 # Even though we are on the async thread already, we want to give ST a chance to notify us about
109 # potential document changes.
110 sublime.set_timeout_async(self._run_next_task_async)
111 else:
112 self._trigger_native_save()
113
114 def _trigger_native_save(self) -> None:
115 # Triggered from set_timeout to preserve original semantics of on_pre_save handling
116 sublime.set_timeout(lambda: self.view.run_command('save', {"async": True}))
117
118
119 class LspSaveAllCommand(sublime_plugin.WindowCommand):
120 def run(self) -> None:
121 done = set()
122 for view in self.window.views():
123 buffer_id = view.buffer_id()
124 if buffer_id in done:
125 continue
126 if not view.is_dirty():
127 continue
128 done.add(buffer_id)
129 view.run_command("lsp_save", None)
130
[end of plugin/save_command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/plugin/save_command.py b/plugin/save_command.py
--- a/plugin/save_command.py
+++ b/plugin/save_command.py
@@ -117,7 +117,7 @@
class LspSaveAllCommand(sublime_plugin.WindowCommand):
- def run(self) -> None:
+ def run(self, only_files: bool = False) -> None:
done = set()
for view in self.window.views():
buffer_id = view.buffer_id()
@@ -125,5 +125,7 @@
continue
if not view.is_dirty():
continue
+ if only_files and view.file_name() is None:
+ continue
done.add(buffer_id)
view.run_command("lsp_save", None)
| {"golden_diff": "diff --git a/plugin/save_command.py b/plugin/save_command.py\n--- a/plugin/save_command.py\n+++ b/plugin/save_command.py\n@@ -117,7 +117,7 @@\n \n \n class LspSaveAllCommand(sublime_plugin.WindowCommand):\n- def run(self) -> None:\n+ def run(self, only_files: bool = False) -> None:\n done = set()\n for view in self.window.views():\n buffer_id = view.buffer_id()\n@@ -125,5 +125,7 @@\n continue\n if not view.is_dirty():\n continue\n+ if only_files and view.file_name() is None:\n+ continue\n done.add(buffer_id)\n view.run_command(\"lsp_save\", None)\n", "issue": "Provide a way to save all modified files after applying workspace edits\n**Is your feature request related to a problem? Please describe.**\r\n\r\nWhen applying refactorings (like renames) it's often the case that multiple files are modified. It's a chore to then have to find and save all those modified files.\r\n\r\n**Describe the solution you'd like**\r\n\r\nWe could provide a dialog after more than one file was modified asking the user whether all modified files should be saved. Note that some people don't like dialogs (#1922).\r\n\r\nOr maybe even show a \"tree view\" in a sheet, showing all modified files and allowing the user to navigate to them and save all at once.\r\n\r\n**Describe alternatives you've considered**\r\n\r\n- Saving all edited files one by one.\r\n- Using \"save all\" command but that fails when there is some unsaved buffer in the window (for example I like to have some to keep notes in them)\r\n\n", "before_files": [{"content": "from .core.registry import LspTextCommand\nfrom .core.settings import userprefs\nfrom .core.typing import Callable, List, Type\nfrom abc import ABCMeta, abstractmethod\nimport sublime\nimport sublime_plugin\n\n\nclass SaveTask(metaclass=ABCMeta):\n \"\"\"\n Base class for tasks that run on save.\n\n Note: The whole task runs on the async thread.\n \"\"\"\n\n @classmethod\n @abstractmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n pass\n\n def __init__(self, task_runner: LspTextCommand, on_done: Callable[[], None]):\n self._task_runner = task_runner\n self._on_done = on_done\n self._completed = False\n self._cancelled = False\n self._status_key = type(self).__name__\n\n def run_async(self) -> None:\n self._erase_view_status()\n sublime.set_timeout_async(self._on_timeout, userprefs().on_save_task_timeout_ms)\n\n def _on_timeout(self) -> None:\n if not self._completed and not self._cancelled:\n self._set_view_status('LSP: Timeout processing {}'.format(self.__class__.__name__))\n self._cancelled = True\n self._on_done()\n\n def cancel(self) -> None:\n self._cancelled = True\n\n def _set_view_status(self, text: str) -> None:\n self._task_runner.view.set_status(self._status_key, text)\n sublime.set_timeout_async(self._erase_view_status, 5000)\n\n def _erase_view_status(self) -> None:\n self._task_runner.view.erase_status(self._status_key)\n\n def _on_complete(self) -> None:\n assert not self._completed\n self._completed = True\n if not self._cancelled:\n self._on_done()\n\n def _purge_changes_async(self) -> None:\n # Supermassive hack that will go away later.\n listeners = sublime_plugin.view_event_listeners.get(self._task_runner.view.id(), [])\n for listener in listeners:\n if listener.__class__.__name__ == 'DocumentSyncListener':\n listener.purge_changes_async() # type: ignore\n break\n\n\nclass LspSaveCommand(LspTextCommand):\n \"\"\"\n A command used as a substitute for native save command. 
Runs code actions and document\n formatting before triggering the native save command.\n \"\"\"\n _tasks = [] # type: List[Type[SaveTask]]\n\n @classmethod\n def register_task(cls, task: Type[SaveTask]) -> None:\n assert task not in cls._tasks\n cls._tasks.append(task)\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self._pending_tasks = [] # type: List[SaveTask]\n\n def run(self, edit: sublime.Edit) -> None:\n if self._pending_tasks:\n for task in self._pending_tasks:\n task.cancel()\n self._pending_tasks = []\n sublime.set_timeout_async(self._trigger_on_pre_save_async)\n for Task in self._tasks:\n if Task.is_applicable(self.view):\n self._pending_tasks.append(Task(self, self._on_task_completed_async))\n if self._pending_tasks:\n sublime.set_timeout_async(self._run_next_task_async)\n else:\n self._trigger_native_save()\n\n def _trigger_on_pre_save_async(self) -> None:\n # Supermassive hack that will go away later.\n listeners = sublime_plugin.view_event_listeners.get(self.view.id(), [])\n for listener in listeners:\n if listener.__class__.__name__ == 'DocumentSyncListener':\n listener.trigger_on_pre_save_async() # type: ignore\n break\n\n def _run_next_task_async(self) -> None:\n current_task = self._pending_tasks[0]\n current_task.run_async()\n\n def _on_task_completed_async(self) -> None:\n self._pending_tasks.pop(0)\n if self._pending_tasks:\n # Even though we are on the async thread already, we want to give ST a chance to notify us about\n # potential document changes.\n sublime.set_timeout_async(self._run_next_task_async)\n else:\n self._trigger_native_save()\n\n def _trigger_native_save(self) -> None:\n # Triggered from set_timeout to preserve original semantics of on_pre_save handling\n sublime.set_timeout(lambda: self.view.run_command('save', {\"async\": True}))\n\n\nclass LspSaveAllCommand(sublime_plugin.WindowCommand):\n def run(self) -> None:\n done = set()\n for view in self.window.views():\n buffer_id = view.buffer_id()\n if buffer_id in done:\n continue\n if not view.is_dirty():\n continue\n done.add(buffer_id)\n view.run_command(\"lsp_save\", None)\n", "path": "plugin/save_command.py"}]} | 2,055 | 162 |
gh_patches_debug_23396 | rasdani/github-patches | git_diff | mozilla__bugbug-188 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Author and reviewer experience should ignore backouts
Author and reviewer experience should not be increased for commits that are backed out.
</issue>
<code>
[start of bugbug/repository.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import argparse
7 import concurrent.futures
8 import multiprocessing
9 import os
10 from collections import defaultdict
11 from collections import namedtuple
12 from datetime import datetime
13
14 import hglib
15 import requests
16 from dateutil.relativedelta import relativedelta
17 from parsepatch.patch import Patch
18 from tqdm import tqdm
19
20 from bugbug import db
21
22 COMMITS_DB = 'data/commits.json'
23 db.register(COMMITS_DB, 'https://www.dropbox.com/s/mz3afgncx0siijc/commits.json.xz?dl=1')
24
25 COMPONENTS = {}
26
27 Commit = namedtuple('Commit', ['node', 'author', 'desc', 'date', 'bug', 'ever_backedout', 'author_email'])
28
29 author_experience = {}
30 author_experience_90_days = {}
31
32
33 def get_commits():
34 return db.read(COMMITS_DB)
35
36
37 def _init(repo_dir):
38 global HG
39 HG = hglib.open(repo_dir)
40
41
42 def _transform(commit):
43 desc = commit.desc.decode('utf-8')
44
45 obj = {
46 'author': commit.author.decode('utf-8'),
47 'desc': desc,
48 'date': str(commit.date),
49 'bug_id': commit.bug.decode('utf-8'),
50 'ever_backedout': commit.ever_backedout,
51 'added': 0,
52 'deleted': 0,
53 'files_modified_num': 0,
54 'types': set(),
55 'components': list(),
56 'author_experience': author_experience[commit],
57 'author_experience_90_days': author_experience_90_days[commit],
58 'author_email': commit.author_email.decode('utf-8'),
59 }
60
61 patch = HG.export(revs=[commit.node], git=True)
62 patch_data = Patch.parse_patch(patch.decode('utf-8', 'ignore'), skip_comments=False, add_lines_for_new=True)
63 for path, stats in patch_data.items():
64 if 'added' not in stats:
65 # Must be a binary file
66 obj['types'].add('binary')
67 continue
68
69 obj['added'] += len(stats['added']) + len(stats['touched'])
70 obj['deleted'] += len(stats['deleted']) + len(stats['touched'])
71 ext = os.path.splitext(path)[1]
72 if ext in ['.js', '.jsm']:
73 type_ = 'JavaScript'
74 elif ext in ['.c', '.cpp', '.h']:
75 type_ = 'C/C++'
76 elif ext in ['.java']:
77 type_ = 'Java'
78 elif ext in ['.py']:
79 type_ = 'Python'
80 else:
81 type_ = ext
82 obj['types'].add(type_)
83
84 obj['files_modified_num'] = len(patch_data)
85
86 # Covert to a list, as a set is not JSON-serializable.
87 obj['types'] = list(obj['types'])
88
89 obj['components'] = list(set('::'.join(COMPONENTS[fl]) for fl in patch_data.keys() if COMPONENTS.get(fl)))
90
91 return obj
92
93
94 def hg_log(hg, first_rev):
95 template = '{node}\\0{author}\\0{desc}\\0{date}\\0{bug}\\0{backedoutby}\\0{author|email}\\0'
96
97 args = hglib.util.cmdbuilder(b'log', template=template, no_merges=True, rev=f'{first_rev}:tip')
98 x = hg.rawcommand(args)
99 out = x.split(b'\x00')[:-1]
100
101 revs = []
102 for rev in hglib.util.grouper(template.count('\\0'), out):
103 posixtime = float(rev[3].split(b'.', 1)[0])
104 dt = datetime.fromtimestamp(posixtime)
105
106 revs.append(Commit(
107 node=rev[0],
108 author=rev[1],
109 desc=rev[2],
110 date=dt,
111 bug=rev[4],
112 ever_backedout=(rev[5] != b''),
113 author_email=rev[6],
114 ))
115
116 return revs
117
118
119 def get_rev(hg, date):
120 return hg.log(date=date.strftime('%Y-%m-%d'), limit=1)[0].node.decode('utf-8')
121
122
123 def download_commits(repo_dir, date_from):
124 hg = hglib.open(repo_dir)
125
126 first_rev = get_rev(hg, date_from)
127
128 commits = hg_log(hg, first_rev)
129 commits_num = len(commits)
130
131 hg.close()
132
133 # Total previous number of commits by the author.
134 total_commits_by_author = defaultdict(int)
135 # Previous commits by the author, in a 90 days window.
136 commits_by_author = defaultdict(list)
137
138 global author_experience
139 global author_experience_90_days
140 for commit in commits:
141 author_experience[commit] = total_commits_by_author[commit.author]
142 total_commits_by_author[commit.author] += 1
143
144 # Keep only the previous commits from a window of 90 days in the commits_by_author map.
145 cut = None
146
147 for i, prev_commit in enumerate(commits_by_author[commit.author]):
148 if (commit.date - prev_commit.date).days <= 90:
149 break
150
151 cut = i
152
153 if cut is not None:
154 commits_by_author[commit.author] = commits_by_author[commit.author][cut + 1:]
155
156 author_experience_90_days[commit] = len(commits_by_author[commit.author])
157
158 commits_by_author[commit.author].append(commit)
159
160 global COMPONENTS
161 r = requests.get('https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.source.source-bugzilla-info/artifacts/public/components.json')
162 r.raise_for_status()
163 COMPONENTS = r.json()
164
165 print(f'Mining commits using {multiprocessing.cpu_count()} processes...')
166
167 with concurrent.futures.ProcessPoolExecutor(initializer=_init, initargs=(repo_dir,)) as executor:
168 commits = executor.map(_transform, commits, chunksize=64)
169 commits = tqdm(commits, total=commits_num)
170 db.write(COMMITS_DB, commits)
171
172
173 def get_commit_map():
174 commit_map = {}
175
176 for commit in get_commits():
177 bug_id = commit['bug_id']
178
179 if not bug_id:
180 continue
181
182 if bug_id not in commit_map:
183 commit_map[bug_id] = []
184
185 commit_map[bug_id].append(commit)
186
187 return commit_map
188
189
190 if __name__ == '__main__':
191 parser = argparse.ArgumentParser()
192 parser.add_argument('repository_dir', help='Path to the repository', action='store')
193 args = parser.parse_args()
194
195 two_years_and_six_months_ago = datetime.utcnow() - relativedelta(years=2, months=6)
196
197 download_commits(args.repository_dir, two_years_and_six_months_ago)
198
[end of bugbug/repository.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

golden_diff:

diff --git a/bugbug/repository.py b/bugbug/repository.py
--- a/bugbug/repository.py
+++ b/bugbug/repository.py
@@ -139,7 +139,9 @@
global author_experience_90_days
for commit in commits:
author_experience[commit] = total_commits_by_author[commit.author]
- total_commits_by_author[commit.author] += 1
+ # We don't want to consider backed out commits when calculating author/reviewer experience.
+ if not commit.ever_backedout:
+ total_commits_by_author[commit.author] += 1
# Keep only the previous commits from a window of 90 days in the commits_by_author map.
cut = None
@@ -155,7 +157,8 @@
author_experience_90_days[commit] = len(commits_by_author[commit.author])
- commits_by_author[commit.author].append(commit)
+ if not commit.ever_backedout:
+ commits_by_author[commit.author].append(commit)
global COMPONENTS
r = requests.get('https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.source.source-bugzilla-info/artifacts/public/components.json')
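
For illustration, a toy run of the counting change above with invented commits, showing that a backed-out commit no longer raises the author's running experience total.

```
from collections import defaultdict, namedtuple

Commit = namedtuple('Commit', ['author', 'ever_backedout'])

commits = [
    Commit('alice', False),
    Commit('alice', True),   # backed out: still recorded, but no longer adds to the total
    Commit('alice', False),
]

total_commits_by_author = defaultdict(int)
author_experience = []
for commit in commits:
    author_experience.append(total_commits_by_author[commit.author])
    if not commit.ever_backedout:
        total_commits_by_author[commit.author] += 1

print(author_experience)  # [0, 1, 1] instead of [0, 1, 2]
```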
| {"golden_diff": "diff --git a/bugbug/repository.py b/bugbug/repository.py\n--- a/bugbug/repository.py\n+++ b/bugbug/repository.py\n@@ -139,7 +139,9 @@\n global author_experience_90_days\n for commit in commits:\n author_experience[commit] = total_commits_by_author[commit.author]\n- total_commits_by_author[commit.author] += 1\n+ # We don't want to consider backed out commits when calculating author/reviewer experience.\n+ if not commit.ever_backedout:\n+ total_commits_by_author[commit.author] += 1\n \n # Keep only the previous commits from a window of 90 days in the commits_by_author map.\n cut = None\n@@ -155,7 +157,8 @@\n \n author_experience_90_days[commit] = len(commits_by_author[commit.author])\n \n- commits_by_author[commit.author].append(commit)\n+ if not commit.ever_backedout:\n+ commits_by_author[commit.author].append(commit)\n \n global COMPONENTS\n r = requests.get('https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.source.source-bugzilla-info/artifacts/public/components.json')\n", "issue": "Author and reviewer experience should ignore backouts\nAuthor and reviewer experience should not be increased for commits that are backed out.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport concurrent.futures\nimport multiprocessing\nimport os\nfrom collections import defaultdict\nfrom collections import namedtuple\nfrom datetime import datetime\n\nimport hglib\nimport requests\nfrom dateutil.relativedelta import relativedelta\nfrom parsepatch.patch import Patch\nfrom tqdm import tqdm\n\nfrom bugbug import db\n\nCOMMITS_DB = 'data/commits.json'\ndb.register(COMMITS_DB, 'https://www.dropbox.com/s/mz3afgncx0siijc/commits.json.xz?dl=1')\n\nCOMPONENTS = {}\n\nCommit = namedtuple('Commit', ['node', 'author', 'desc', 'date', 'bug', 'ever_backedout', 'author_email'])\n\nauthor_experience = {}\nauthor_experience_90_days = {}\n\n\ndef get_commits():\n return db.read(COMMITS_DB)\n\n\ndef _init(repo_dir):\n global HG\n HG = hglib.open(repo_dir)\n\n\ndef _transform(commit):\n desc = commit.desc.decode('utf-8')\n\n obj = {\n 'author': commit.author.decode('utf-8'),\n 'desc': desc,\n 'date': str(commit.date),\n 'bug_id': commit.bug.decode('utf-8'),\n 'ever_backedout': commit.ever_backedout,\n 'added': 0,\n 'deleted': 0,\n 'files_modified_num': 0,\n 'types': set(),\n 'components': list(),\n 'author_experience': author_experience[commit],\n 'author_experience_90_days': author_experience_90_days[commit],\n 'author_email': commit.author_email.decode('utf-8'),\n }\n\n patch = HG.export(revs=[commit.node], git=True)\n patch_data = Patch.parse_patch(patch.decode('utf-8', 'ignore'), skip_comments=False, add_lines_for_new=True)\n for path, stats in patch_data.items():\n if 'added' not in stats:\n # Must be a binary file\n obj['types'].add('binary')\n continue\n\n obj['added'] += len(stats['added']) + len(stats['touched'])\n obj['deleted'] += len(stats['deleted']) + len(stats['touched'])\n ext = os.path.splitext(path)[1]\n if ext in ['.js', '.jsm']:\n type_ = 'JavaScript'\n elif ext in ['.c', '.cpp', '.h']:\n type_ = 'C/C++'\n elif ext in ['.java']:\n type_ = 'Java'\n elif ext in ['.py']:\n type_ = 'Python'\n else:\n type_ = ext\n obj['types'].add(type_)\n\n obj['files_modified_num'] = len(patch_data)\n\n # Covert to a list, as a set is not 
JSON-serializable.\n obj['types'] = list(obj['types'])\n\n obj['components'] = list(set('::'.join(COMPONENTS[fl]) for fl in patch_data.keys() if COMPONENTS.get(fl)))\n\n return obj\n\n\ndef hg_log(hg, first_rev):\n template = '{node}\\\\0{author}\\\\0{desc}\\\\0{date}\\\\0{bug}\\\\0{backedoutby}\\\\0{author|email}\\\\0'\n\n args = hglib.util.cmdbuilder(b'log', template=template, no_merges=True, rev=f'{first_rev}:tip')\n x = hg.rawcommand(args)\n out = x.split(b'\\x00')[:-1]\n\n revs = []\n for rev in hglib.util.grouper(template.count('\\\\0'), out):\n posixtime = float(rev[3].split(b'.', 1)[0])\n dt = datetime.fromtimestamp(posixtime)\n\n revs.append(Commit(\n node=rev[0],\n author=rev[1],\n desc=rev[2],\n date=dt,\n bug=rev[4],\n ever_backedout=(rev[5] != b''),\n author_email=rev[6],\n ))\n\n return revs\n\n\ndef get_rev(hg, date):\n return hg.log(date=date.strftime('%Y-%m-%d'), limit=1)[0].node.decode('utf-8')\n\n\ndef download_commits(repo_dir, date_from):\n hg = hglib.open(repo_dir)\n\n first_rev = get_rev(hg, date_from)\n\n commits = hg_log(hg, first_rev)\n commits_num = len(commits)\n\n hg.close()\n\n # Total previous number of commits by the author.\n total_commits_by_author = defaultdict(int)\n # Previous commits by the author, in a 90 days window.\n commits_by_author = defaultdict(list)\n\n global author_experience\n global author_experience_90_days\n for commit in commits:\n author_experience[commit] = total_commits_by_author[commit.author]\n total_commits_by_author[commit.author] += 1\n\n # Keep only the previous commits from a window of 90 days in the commits_by_author map.\n cut = None\n\n for i, prev_commit in enumerate(commits_by_author[commit.author]):\n if (commit.date - prev_commit.date).days <= 90:\n break\n\n cut = i\n\n if cut is not None:\n commits_by_author[commit.author] = commits_by_author[commit.author][cut + 1:]\n\n author_experience_90_days[commit] = len(commits_by_author[commit.author])\n\n commits_by_author[commit.author].append(commit)\n\n global COMPONENTS\n r = requests.get('https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.source.source-bugzilla-info/artifacts/public/components.json')\n r.raise_for_status()\n COMPONENTS = r.json()\n\n print(f'Mining commits using {multiprocessing.cpu_count()} processes...')\n\n with concurrent.futures.ProcessPoolExecutor(initializer=_init, initargs=(repo_dir,)) as executor:\n commits = executor.map(_transform, commits, chunksize=64)\n commits = tqdm(commits, total=commits_num)\n db.write(COMMITS_DB, commits)\n\n\ndef get_commit_map():\n commit_map = {}\n\n for commit in get_commits():\n bug_id = commit['bug_id']\n\n if not bug_id:\n continue\n\n if bug_id not in commit_map:\n commit_map[bug_id] = []\n\n commit_map[bug_id].append(commit)\n\n return commit_map\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('repository_dir', help='Path to the repository', action='store')\n args = parser.parse_args()\n\n two_years_and_six_months_ago = datetime.utcnow() - relativedelta(years=2, months=6)\n\n download_commits(args.repository_dir, two_years_and_six_months_ago)\n", "path": "bugbug/repository.py"}]} | 2,605 | 275 |
gh_patches_debug_29535 | rasdani/github-patches | git_diff | conan-io__conan-center-index-7032 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] imgui/1.84.1: Shared library does not automatically import global data symbols
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **imgui/1.84.1**
* Operating System+version: **Windows 10 21H1 Build 19043.1165**
* Compiler+version: **Visual Studio 16 (2019)**
* Docker image: **N/A**
* Conan version: **conan 1.39.0**
* Python version: **Python 3.9.6**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
[settings]
os=Windows
os_build=Windows
arch=x86_64
arch_build=x86_64
compiler=Visual Studio
compiler.version=16
build_type=Release
[options]
[conf]
[build_requires]
[env]
```
### Steps to reproduce (Include if Applicable)
Try to reference any code that uses global data symbols since those need to use `__declspec(dllimport)` when using [`WINDOWS_EXPORT_ALL_SYMBOLS`](https://cmake.org/cmake/help/latest/prop_tgt/WINDOWS_EXPORT_ALL_SYMBOLS.html#windows-export-all-symbols). One example could be using [`ImGuiTextBuffer`](https://github.com/ocornut/imgui/blob/v1.84.1/imgui.h#L2078) (which has `IMGUI_API static char EmptyString[1];`).
The following diff is for ImGui's [`test_package.cpp`](https://github.com/conan-io/conan-center-index/blob/master/recipes/imgui/all/test_package/test_package.cpp) and can reproduce this issue.
```
--- a/recipes/imgui/all/test_package/test_package.cpp
+++ b/recipes/imgui/all/test_package/test_package.cpp
@@ -5,6 +5,9 @@ int main(int, char**)
{
ImGuiContext* context =ImGui::CreateContext();
ImGuiIO& io = ImGui::GetIO();
+
+ ImGuiTextBuffer textBuffer;
+ textBuffer.append("Hello, ImGui");
// Build atlas
unsigned char* tex_pixels = NULL;
@@ -20,6 +23,7 @@ int main(int, char**)
static float f = 0.0f;
ImGui::Text("Hello, world!");
+ ImGui::Text(textBuffer.begin());
ImGui::SliderFloat("float", &f, 0.0f, 1.0f);
ImGui::Text("Application average %.3f ms/frame (%.1f FPS)", 1000.0f / io.Framerate, io.Framerate);
ImGui::ShowDemoWindow(NULL);
```
Then, try to create the package with `conan create . imgui/1.84.1@ -o imgui:shared=True`.
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
test_package.obj : error LNK2019: unresolved external symbol "public: static char * ImGuiTextBuffer::EmptyString" (?Emp
tyString@ImGuiTextBuffer@@2PADA) referenced in function main
```
</details>
I think the simplest solution would be to add something like this
```
if self.options.shared and self.settings.os == "Windows":
self.cpp_info.defines.append("IMGUI_API=__declspec(dllimport)")
```
I'd be happy to open a PR with this change.
</issue>
<code>
[start of recipes/imgui/all/conanfile.py]
1 from conans import ConanFile, CMake, tools
2 import os
3
4 required_conan_version = ">=1.33.0"
5
6
7 class IMGUIConan(ConanFile):
8 name = "imgui"
9 url = "https://github.com/conan-io/conan-center-index"
10 homepage = "https://github.com/ocornut/imgui"
11 description = "Bloat-free Immediate Mode Graphical User interface for C++ with minimal dependencies"
12 topics = ("conan", "imgui", "gui", "graphical")
13 license = "MIT"
14
15 exports_sources = ["CMakeLists.txt"]
16 generators = "cmake"
17
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False]
22 }
23 default_options = {
24 "shared": False,
25 "fPIC": True
26 }
27
28 _cmake = None
29
30 @property
31 def _source_subfolder(self):
32 return "source_subfolder"
33
34 def config_options(self):
35 if self.settings.os == "Windows":
36 del self.options.fPIC
37
38 def configure(self):
39 if self.options.shared:
40 del self.options.fPIC
41
42 def source(self):
43 tools.get(**self.conan_data["sources"][self.version],
44 destination=self._source_subfolder, strip_root=True)
45
46 def _configure_cmake(self):
47 if self._cmake:
48 return self._cmake
49 self._cmake = CMake(self)
50 self._cmake.configure()
51 return self._cmake
52
53 def build(self):
54 cmake = self._configure_cmake()
55 cmake.build()
56
57 def package(self):
58 self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
59 backends_folder = src=os.path.join(
60 self._source_subfolder,
61 "backends" if tools.Version(self.version) >= "1.80" else "examples"
62 )
63 self.copy(pattern="imgui_impl_*",
64 dst=os.path.join("res", "bindings"),
65 src=backends_folder)
66 cmake = self._configure_cmake()
67 cmake.install()
68
69 def package_info(self):
70 self.cpp_info.libs = ["imgui"]
71 if self.settings.os == "Linux":
72 self.cpp_info.system_libs.append("m")
73 self.cpp_info.srcdirs = [os.path.join("res", "bindings")]
74
75 bin_path = os.path.join(self.package_folder, "bin")
76 self.output.info("Appending PATH env var with : {}".format(bin_path))
77 self.env_info.PATH.append(bin_path)
78
[end of recipes/imgui/all/conanfile.py]
</code>
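A side note on the listing above, before the patch instructions: the `backends_folder = src=os.path.join(...)` line in `package()` is not a syntax error, because Python parses it as chained assignment; it only binds a stray `src` name, which the accepted patch later removes. A tiny standalone illustration (not part of the recipe):

```python
# Chained assignment: both names end up bound to the same object, so the
# extra `src=` in the recipe is a silent stray binding rather than an error.
import os

backends_folder = src = os.path.join("source_subfolder", "backends")
print(backends_folder == src)  # True -- the throwaway `src` name is the only side effect
```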
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/imgui/all/conanfile.py b/recipes/imgui/all/conanfile.py
--- a/recipes/imgui/all/conanfile.py
+++ b/recipes/imgui/all/conanfile.py
@@ -12,19 +12,18 @@
topics = ("conan", "imgui", "gui", "graphical")
license = "MIT"
- exports_sources = ["CMakeLists.txt"]
- generators = "cmake"
-
settings = "os", "arch", "compiler", "build_type"
options = {
"shared": [True, False],
- "fPIC": [True, False]
+ "fPIC": [True, False],
}
default_options = {
"shared": False,
- "fPIC": True
+ "fPIC": True,
}
+ exports_sources = "CMakeLists.txt"
+ generators = "cmake"
_cmake = None
@property
@@ -56,7 +55,7 @@
def package(self):
self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
- backends_folder = src=os.path.join(
+ backends_folder = os.path.join(
self._source_subfolder,
"backends" if tools.Version(self.version) >= "1.80" else "examples"
)
@@ -68,6 +67,7 @@
def package_info(self):
self.cpp_info.libs = ["imgui"]
+ self.cpp_info.defines.append("IMGUI_USER_CONFIG=\"imgui_user_config.h\"")
if self.settings.os == "Linux":
self.cpp_info.system_libs.append("m")
self.cpp_info.srcdirs = [os.path.join("res", "bindings")]
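The merged change above takes a different route from the issue's proposal: rather than hard-coding `__declspec(dllimport)`, it points consumers at an `imgui_user_config.h` header through `IMGUI_USER_CONFIG` (presumably installed by the package and defining `IMGUI_API` appropriately). For comparison, a sketch of the variant the issue proposed, including the shared/Windows guard it suggested; this is illustrative, not the merged code:

```python
# Sketch of the issue's proposed alternative package_info() (not the accepted fix):
# only a shared Windows build needs consumers to dllimport data symbols.
def package_info(self):
    self.cpp_info.libs = ["imgui"]
    if self.options.shared and self.settings.os == "Windows":
        # Data symbols such as ImGuiTextBuffer::EmptyString are not covered by
        # CMake's WINDOWS_EXPORT_ALL_SYMBOLS, hence the explicit dllimport define.
        self.cpp_info.defines.append("IMGUI_API=__declspec(dllimport)")
```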
| {"golden_diff": "diff --git a/recipes/imgui/all/conanfile.py b/recipes/imgui/all/conanfile.py\n--- a/recipes/imgui/all/conanfile.py\n+++ b/recipes/imgui/all/conanfile.py\n@@ -12,19 +12,18 @@\n topics = (\"conan\", \"imgui\", \"gui\", \"graphical\")\n license = \"MIT\"\n \n- exports_sources = [\"CMakeLists.txt\"]\n- generators = \"cmake\"\n-\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n- \"fPIC\": [True, False]\n+ \"fPIC\": [True, False],\n }\n default_options = {\n \"shared\": False,\n- \"fPIC\": True\n+ \"fPIC\": True,\n }\n \n+ exports_sources = \"CMakeLists.txt\"\n+ generators = \"cmake\"\n _cmake = None\n \n @property\n@@ -56,7 +55,7 @@\n \n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n- backends_folder = src=os.path.join(\n+ backends_folder = os.path.join(\n self._source_subfolder,\n \"backends\" if tools.Version(self.version) >= \"1.80\" else \"examples\"\n )\n@@ -68,6 +67,7 @@\n \n def package_info(self):\n self.cpp_info.libs = [\"imgui\"]\n+ self.cpp_info.defines.append(\"IMGUI_USER_CONFIG=\\\"imgui_user_config.h\\\"\")\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"m\")\n self.cpp_info.srcdirs = [os.path.join(\"res\", \"bindings\")]\n", "issue": "[package] imgui/1.84.1: Shared library does not automatically import global data symbols\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **imgui/1.84.1**\r\n * Operating System+version: **Windows 10 21H1 Build 19043.1165**\r\n * Compiler+version: **Visual Studio 16 (2019)**\r\n * Docker image: **N/A**\r\n * Conan version: **conan 1.39.0**\r\n * Python version: **Python 3.9.6**\r\n\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\n[settings]\r\nos=Windows\r\nos_build=Windows\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=Visual Studio\r\ncompiler.version=16\r\nbuild_type=Release\r\n[options]\r\n[conf]\r\n[build_requires]\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nTry to reference any code that uses global data symbols since those need to use `__declspec(dllimport)` when using [`WINDOWS_EXPORT_ALL_SYMBOLS`](https://cmake.org/cmake/help/latest/prop_tgt/WINDOWS_EXPORT_ALL_SYMBOLS.html#windows-export-all-symbols). 
One example could be using [`ImGuiTextBuffer`](https://github.com/ocornut/imgui/blob/v1.84.1/imgui.h#L2078) (which has `IMGUI_API static char EmptyString[1];`).\r\nThe following diff is for ImGui's [`test_package.cpp`](https://github.com/conan-io/conan-center-index/blob/master/recipes/imgui/all/test_package/test_package.cpp) and can reproduce this issue.\r\n\r\n```\r\n--- a/recipes/imgui/all/test_package/test_package.cpp\r\n+++ b/recipes/imgui/all/test_package/test_package.cpp\r\n@@ -5,6 +5,9 @@ int main(int, char**)\r\n {\r\n ImGuiContext* context =ImGui::CreateContext();\r\n ImGuiIO& io = ImGui::GetIO();\r\n+\t\r\n+ ImGuiTextBuffer textBuffer;\r\n+ textBuffer.append(\"Hello, ImGui\");\r\n \r\n // Build atlas\r\n unsigned char* tex_pixels = NULL;\r\n@@ -20,6 +23,7 @@ int main(int, char**)\r\n \r\n static float f = 0.0f;\r\n ImGui::Text(\"Hello, world!\");\r\n+ ImGui::Text(textBuffer.begin());\r\n ImGui::SliderFloat(\"float\", &f, 0.0f, 1.0f);\r\n ImGui::Text(\"Application average %.3f ms/frame (%.1f FPS)\", 1000.0f / io.Framerate, io.Framerate);\r\n ImGui::ShowDemoWindow(NULL);\r\n```\r\n\r\nThen, try to create the package with `conan create . imgui/1.84.1@ -o imgui:shared=True`.\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\ntest_package.obj : error LNK2019: unresolved external symbol \"public: static char * ImGuiTextBuffer::EmptyString\" (?Emp\r\ntyString@ImGuiTextBuffer@@2PADA) referenced in function main\r\n```\r\n\r\n</details>\r\n\r\nI think the simplest solution would be to add something like this\r\n\r\n```\r\nif self.options.shared and self.settings.os == \"Windows\":\r\n self.cpp_info.defines.append(\"IMGUI_API=__declspec(dllimport)\")\r\n```\r\n\r\nI'd be happy to open a PR with this change.\n", "before_files": [{"content": "from conans import ConanFile, CMake, tools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass IMGUIConan(ConanFile):\n name = \"imgui\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/ocornut/imgui\"\n description = \"Bloat-free Immediate Mode Graphical User interface for C++ with minimal dependencies\"\n topics = (\"conan\", \"imgui\", \"gui\", \"graphical\")\n license = \"MIT\"\n\n exports_sources = [\"CMakeLists.txt\"]\n generators = \"cmake\"\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n backends_folder = src=os.path.join(\n self._source_subfolder,\n \"backends\" if tools.Version(self.version) >= \"1.80\" else \"examples\"\n )\n self.copy(pattern=\"imgui_impl_*\",\n dst=os.path.join(\"res\", \"bindings\"),\n src=backends_folder)\n cmake = self._configure_cmake()\n 
cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"imgui\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"m\")\n self.cpp_info.srcdirs = [os.path.join(\"res\", \"bindings\")]\n\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH env var with : {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/imgui/all/conanfile.py"}]} | 2,046 | 396 |
gh_patches_debug_36505 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2690 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/gemini/gemini_context.py]
1 from enum import EnumMeta
2
3
4 class GeminiMemoryManager(object):
5
6 def __init__(self, states_cls: EnumMeta):
7 super().__init__()
8 self.states_cls = states_cls
9 self._cnter = 0 # the counter of instances
10
11 self.total_mem = dict()
12 self.state_mem = dict()
13 self.state_mem['cpu'] = dict()
14 self.state_mem['cuda'] = dict()
15
16 self.reset()
17
18 @property
19 def total_number(self):
20 return self._cnter
21
22 def reset(self):
23 self._cnter = 0 # the counter of instances
24
25 self.total_mem['cpu'] = 0 # memory occupation of instances in cpu
26 self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda
27
28 # memory conditions for all states
29 for state in self.states_cls:
30 self.state_mem['cpu'][state] = 0
31 self.state_mem['cuda'][state] = 0
32
33 def register_new_instance(self):
34 self._cnter += 1
35
36 def delete_instance(self):
37 self._cnter -= 1
38
39 def print_info(self):
40 print(f"Total number: {self.total_number}",
41 f"Total CPU memory occupation: {self.total_mem['cpu']}",
42 f"Total CUDA memory occupation: {self.total_mem['cuda']}\n",
43 sep='\n')
44
45 for state in self.states_cls:
46 print(f"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}",
47 f"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\n",
48 sep='\n')
49
[end of colossalai/gemini/gemini_context.py]
</code>
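The accepted patch shown after the instructions appears to leave the class logically untouched: in the row's metadata the removed lines carry `\r\n` endings while the added lines use `\n`, so it reads as a CRLF-to-LF normalization. For readers skimming the API itself, a short hypothetical usage sketch; the `TensorState` enum below is illustrative, not part of the repository, and `GeminiMemoryManager` from the listing above is assumed to be in scope:

```python
from enum import Enum

class TensorState(Enum):      # illustrative states only, not from the repo
    FREE = 0
    COMPUTE = 1

mgr = GeminiMemoryManager(TensorState)   # assumes the class from the listing above
mgr.register_new_instance()                            # one tracked tensor
mgr.total_mem['cuda'] += 4 * 1024 * 1024               # pretend a 4 MB tensor lives on GPU
mgr.state_mem['cuda'][TensorState.COMPUTE] += 4 * 1024 * 1024
mgr.print_info()                                       # totals plus per-state breakdown
```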
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/gemini/gemini_context.py b/colossalai/gemini/gemini_context.py
--- a/colossalai/gemini/gemini_context.py
+++ b/colossalai/gemini/gemini_context.py
@@ -1,48 +1,48 @@
-from enum import EnumMeta
-
-
-class GeminiMemoryManager(object):
-
- def __init__(self, states_cls: EnumMeta):
- super().__init__()
- self.states_cls = states_cls
- self._cnter = 0 # the counter of instances
-
- self.total_mem = dict()
- self.state_mem = dict()
- self.state_mem['cpu'] = dict()
- self.state_mem['cuda'] = dict()
-
- self.reset()
-
- @property
- def total_number(self):
- return self._cnter
-
- def reset(self):
- self._cnter = 0 # the counter of instances
-
- self.total_mem['cpu'] = 0 # memory occupation of instances in cpu
- self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda
-
- # memory conditions for all states
- for state in self.states_cls:
- self.state_mem['cpu'][state] = 0
- self.state_mem['cuda'][state] = 0
-
- def register_new_instance(self):
- self._cnter += 1
-
- def delete_instance(self):
- self._cnter -= 1
-
- def print_info(self):
- print(f"Total number: {self.total_number}",
- f"Total CPU memory occupation: {self.total_mem['cpu']}",
- f"Total CUDA memory occupation: {self.total_mem['cuda']}\n",
- sep='\n')
-
- for state in self.states_cls:
- print(f"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}",
- f"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\n",
- sep='\n')
+from enum import EnumMeta
+
+
+class GeminiMemoryManager(object):
+
+ def __init__(self, states_cls: EnumMeta):
+ super().__init__()
+ self.states_cls = states_cls
+ self._cnter = 0 # the counter of instances
+
+ self.total_mem = dict()
+ self.state_mem = dict()
+ self.state_mem['cpu'] = dict()
+ self.state_mem['cuda'] = dict()
+
+ self.reset()
+
+ @property
+ def total_number(self):
+ return self._cnter
+
+ def reset(self):
+ self._cnter = 0 # the counter of instances
+
+ self.total_mem['cpu'] = 0 # memory occupation of instances in cpu
+ self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda
+
+ # memory conditions for all states
+ for state in self.states_cls:
+ self.state_mem['cpu'][state] = 0
+ self.state_mem['cuda'][state] = 0
+
+ def register_new_instance(self):
+ self._cnter += 1
+
+ def delete_instance(self):
+ self._cnter -= 1
+
+ def print_info(self):
+ print(f"Total number: {self.total_number}",
+ f"Total CPU memory occupation: {self.total_mem['cpu']}",
+ f"Total CUDA memory occupation: {self.total_mem['cuda']}\n",
+ sep='\n')
+
+ for state in self.states_cls:
+ print(f"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}",
+ f"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\n",
+ sep='\n')
| {"golden_diff": "diff --git a/colossalai/gemini/gemini_context.py b/colossalai/gemini/gemini_context.py\n--- a/colossalai/gemini/gemini_context.py\n+++ b/colossalai/gemini/gemini_context.py\n@@ -1,48 +1,48 @@\n-from enum import EnumMeta\r\n-\r\n-\r\n-class GeminiMemoryManager(object):\r\n-\r\n- def __init__(self, states_cls: EnumMeta):\r\n- super().__init__()\r\n- self.states_cls = states_cls\r\n- self._cnter = 0 # the counter of instances\r\n-\r\n- self.total_mem = dict()\r\n- self.state_mem = dict()\r\n- self.state_mem['cpu'] = dict()\r\n- self.state_mem['cuda'] = dict()\r\n-\r\n- self.reset()\r\n-\r\n- @property\r\n- def total_number(self):\r\n- return self._cnter\r\n-\r\n- def reset(self):\r\n- self._cnter = 0 # the counter of instances\r\n-\r\n- self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\r\n- self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\r\n-\r\n- # memory conditions for all states\r\n- for state in self.states_cls:\r\n- self.state_mem['cpu'][state] = 0\r\n- self.state_mem['cuda'][state] = 0\r\n-\r\n- def register_new_instance(self):\r\n- self._cnter += 1\r\n-\r\n- def delete_instance(self):\r\n- self._cnter -= 1\r\n-\r\n- def print_info(self):\r\n- print(f\"Total number: {self.total_number}\",\r\n- f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\r\n- f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\r\n- sep='\\n')\r\n-\r\n- for state in self.states_cls:\r\n- print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\r\n- f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\r\n- sep='\\n')\r\n+from enum import EnumMeta\n+\n+\n+class GeminiMemoryManager(object):\n+\n+ def __init__(self, states_cls: EnumMeta):\n+ super().__init__()\n+ self.states_cls = states_cls\n+ self._cnter = 0 # the counter of instances\n+\n+ self.total_mem = dict()\n+ self.state_mem = dict()\n+ self.state_mem['cpu'] = dict()\n+ self.state_mem['cuda'] = dict()\n+\n+ self.reset()\n+\n+ @property\n+ def total_number(self):\n+ return self._cnter\n+\n+ def reset(self):\n+ self._cnter = 0 # the counter of instances\n+\n+ self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\n+ self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\n+\n+ # memory conditions for all states\n+ for state in self.states_cls:\n+ self.state_mem['cpu'][state] = 0\n+ self.state_mem['cuda'][state] = 0\n+\n+ def register_new_instance(self):\n+ self._cnter += 1\n+\n+ def delete_instance(self):\n+ self._cnter -= 1\n+\n+ def print_info(self):\n+ print(f\"Total number: {self.total_number}\",\n+ f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\n+ f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\n+ sep='\\n')\n+\n+ for state in self.states_cls:\n+ print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\n+ f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\n+ sep='\\n')\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from enum import EnumMeta\r\n\r\n\r\nclass GeminiMemoryManager(object):\r\n\r\n def __init__(self, states_cls: EnumMeta):\r\n super().__init__()\r\n self.states_cls = states_cls\r\n self._cnter = 0 # the counter of instances\r\n\r\n self.total_mem = dict()\r\n self.state_mem = dict()\r\n self.state_mem['cpu'] = dict()\r\n self.state_mem['cuda'] = dict()\r\n\r\n self.reset()\r\n\r\n @property\r\n def total_number(self):\r\n return 
self._cnter\r\n\r\n def reset(self):\r\n self._cnter = 0 # the counter of instances\r\n\r\n self.total_mem['cpu'] = 0 # memory occupation of instances in cpu\r\n self.total_mem['cuda'] = 0 # memory of occupation of instances in cuda\r\n\r\n # memory conditions for all states\r\n for state in self.states_cls:\r\n self.state_mem['cpu'][state] = 0\r\n self.state_mem['cuda'][state] = 0\r\n\r\n def register_new_instance(self):\r\n self._cnter += 1\r\n\r\n def delete_instance(self):\r\n self._cnter -= 1\r\n\r\n def print_info(self):\r\n print(f\"Total number: {self.total_number}\",\r\n f\"Total CPU memory occupation: {self.total_mem['cpu']}\",\r\n f\"Total CUDA memory occupation: {self.total_mem['cuda']}\\n\",\r\n sep='\\n')\r\n\r\n for state in self.states_cls:\r\n print(f\"{state}: CPU memory occupation: {self.state_mem['cpu'][state]}\",\r\n f\"{state}: CUDA memory occupation: {self.state_mem['cuda'][state]}\\n\",\r\n sep='\\n')\r\n", "path": "colossalai/gemini/gemini_context.py"}]} | 1,024 | 870 |
gh_patches_debug_27273 | rasdani/github-patches | git_diff | ManimCommunity__manim-652 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Making test for SVGMobject
Here, I have some tests and a folder structure for SVGMobject and ImageMobject.
The images and SVGs for the tests can be found in a new folder called img_svg_resources.
Here, I have some tests and a folder structure for SVGMobject and ImageMobject.
Depends on #650
</issue>
<code>
[start of manim/mobject/types/image_mobject.py]
1 """Mobjects representing raster images."""
2
3 __all__ = ["AbstractImageMobject", "ImageMobject", "ImageMobjectFromCamera"]
4
5 import pathlib
6
7 import numpy as np
8
9 from PIL import Image
10
11 from ... import config
12 from ...constants import *
13 from ...mobject.mobject import Mobject
14 from ...mobject.shape_matchers import SurroundingRectangle
15 from ...utils.bezier import interpolate
16 from ...utils.color import color_to_int_rgb, WHITE
17 from ...utils.config_ops import digest_config
18 from ...utils.images import get_full_raster_image_path
19 from manim.constants import QUALITIES, DEFAULT_QUALITY
20
21
22 class AbstractImageMobject(Mobject):
23 """
24 Automatically filters out black pixels
25
26 Parameters
27 ----------
28 scale_to_resolution : :class:`int`
29 At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.
30 This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.
31 Calculated by `height = image_height / scale_to_resolution * config["frame_height"]`
32 """
33
34 CONFIG = {
35 "pixel_array_dtype": "uint8",
36 }
37
38 def __init__(self, scale_to_resolution, **kwargs):
39 digest_config(self, kwargs)
40 self.scale_to_resolution = scale_to_resolution
41
42 Mobject.__init__(self, **kwargs)
43
44 def get_pixel_array(self):
45 raise NotImplementedError()
46
47 def set_color(self):
48 # Likely to be implemented in subclasses, but no obgligation
49 pass
50
51 def reset_points(self):
52 # Corresponding corners of image are fixed to these 3 points
53 self.points = np.array(
54 [
55 UP + LEFT,
56 UP + RIGHT,
57 DOWN + LEFT,
58 ]
59 )
60 self.center()
61 h, w = self.get_pixel_array().shape[:2]
62 if self.scale_to_resolution:
63 self.height = h / self.scale_to_resolution * config["frame_height"]
64 else:
65 self.height = 3 ## this is the case for ImageMobjectFromCamera
66 self.stretch_to_fit_height(self.height)
67 self.stretch_to_fit_width(self.height * w / h)
68
69
70 class ImageMobject(AbstractImageMobject):
71 """Displays an Image from a numpy array or a file.
72
73 Parameters
74 ----------
75 scale_to_resolution : :class:`int`
76 At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.
77 This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.
78 Calculated by `height = image_height / scale_to_resolution * config["frame_height"]`
79
80
81
82 Example
83 -------
84 .. manim:: ImageFromArray
85 :save_last_frame:
86
87 class ImageFromArray(Scene):
88 def construct(self):
89 image = ImageMobject(np.uint8([[0, 100, 30, 200],
90 [255, 0, 5, 33]]))
91 image.set_height(7)
92 self.add(image)
93
94 """
95
96 CONFIG = {
97 "invert": False,
98 "image_mode": "RGBA",
99 }
100
101 def __init__(
102 self,
103 filename_or_array,
104 scale_to_resolution=QUALITIES[DEFAULT_QUALITY]["pixel_height"],
105 **kwargs,
106 ):
107 digest_config(self, kwargs)
108 if isinstance(filename_or_array, (str, pathlib.PurePath)):
109 path = get_full_raster_image_path(filename_or_array)
110 image = Image.open(path).convert(self.image_mode)
111 self.pixel_array = np.array(image)
112 else:
113 self.pixel_array = np.array(filename_or_array)
114 self.change_to_rgba_array()
115 if self.invert:
116 self.pixel_array[:, :, :3] = 255 - self.pixel_array[:, :, :3]
117 AbstractImageMobject.__init__(self, scale_to_resolution, **kwargs)
118
119 def change_to_rgba_array(self):
120 pa = self.pixel_array
121 if len(pa.shape) == 2:
122 pa = pa.reshape(list(pa.shape) + [1])
123 if pa.shape[2] == 1:
124 pa = pa.repeat(3, axis=2)
125 if pa.shape[2] == 3:
126 alphas = 255 * np.ones(
127 list(pa.shape[:2]) + [1], dtype=self.pixel_array_dtype
128 )
129 pa = np.append(pa, alphas, axis=2)
130 self.pixel_array = pa
131
132 def get_pixel_array(self):
133 return self.pixel_array
134
135 def set_color(self, color, alpha=None, family=True):
136 rgb = color_to_int_rgb(color)
137 self.pixel_array[:, :, :3] = rgb
138 if alpha is not None:
139 self.pixel_array[:, :, 3] = int(255 * alpha)
140 for submob in self.submobjects:
141 submob.set_color(color, alpha, family)
142 self.color = color
143 return self
144
145 def set_opacity(self, alpha):
146 self.pixel_array[:, :, 3] = int(255 * alpha)
147 return self
148
149 def fade(self, darkness=0.5, family=True):
150 self.set_opacity(1 - darkness)
151 super().fade(darkness, family)
152 return self
153
154 def interpolate_color(self, mobject1, mobject2, alpha):
155 assert mobject1.pixel_array.shape == mobject2.pixel_array.shape, (
156 f"Mobject pixel array shapes incompatible for interpolation.\n"
157 f"Mobject 1 ({mobject1}) : {mobject1.pixel_array.shape}\n"
158 f"Mobject 2 ({mobject2}) : {mobject1.pixel_array.shape}"
159 )
160 self.pixel_array = interpolate(
161 mobject1.pixel_array, mobject2.pixel_array, alpha
162 ).astype(self.pixel_array_dtype)
163
164
165 # TODO, add the ability to have the dimensions/orientation of this
166 # mobject more strongly tied to the frame of the camera it contains,
167 # in the case where that's a MovingCamera
168
169
170 class ImageMobjectFromCamera(AbstractImageMobject):
171 CONFIG = {
172 "default_display_frame_config": {
173 "stroke_width": 3,
174 "stroke_color": WHITE,
175 "buff": 0,
176 }
177 }
178
179 def __init__(self, camera, **kwargs):
180 self.camera = camera
181 self.pixel_array = self.camera.pixel_array
182 AbstractImageMobject.__init__(self, scale_to_resolution=False, **kwargs)
183
184 # TODO: Get rid of this.
185 def get_pixel_array(self):
186 self.pixel_array = self.camera.pixel_array
187 return self.pixel_array
188
189 def add_display_frame(self, **kwargs):
190 config = dict(self.default_display_frame_config)
191 config.update(kwargs)
192 self.display_frame = SurroundingRectangle(self, **config)
193 self.add(self.display_frame)
194 return self
195
196 def interpolate_color(self, mobject1, mobject2, alpha):
197 assert mobject1.pixel_array.shape == mobject2.pixel_array.shape, (
198 f"Mobject pixel array shapes incompatible for interpolation.\n"
199 f"Mobject 1 ({mobject1}) : {mobject1.pixel_array.shape}\n"
200 f"Mobject 2 ({mobject2}) : {mobject1.pixel_array.shape}"
201 )
202 self.pixel_array = interpolate(
203 mobject1.pixel_array, mobject2.pixel_array, alpha
204 ).astype(self.pixel_array_dtype)
205
[end of manim/mobject/types/image_mobject.py]
</code>
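The docstrings above describe `scale_to_resolution` with the formula `height = image_height / scale_to_resolution * config["frame_height"]` (see `reset_points` in the listing). A worked example with illustrative numbers; the 1080 and 8.0 defaults below are assumptions, not values read from the listing:

```python
# Illustrative check of the docstring formula; the constants are assumed defaults.
image_height_px = 540                 # pixel height of the loaded image
scale_to_resolution = 1080            # e.g. QUALITIES[DEFAULT_QUALITY]["pixel_height"]
frame_height = 8.0                    # e.g. config["frame_height"]

height = image_height_px / scale_to_resolution * frame_height
print(height)                         # 4.0 -> the image spans half the frame height
```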
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/manim/mobject/types/image_mobject.py b/manim/mobject/types/image_mobject.py
--- a/manim/mobject/types/image_mobject.py
+++ b/manim/mobject/types/image_mobject.py
@@ -27,8 +27,7 @@
----------
scale_to_resolution : :class:`int`
At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.
- This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.
- Calculated by `height = image_height / scale_to_resolution * config["frame_height"]`
+ This is a custom parameter of ImageMobject so that rendering a scene with e.g. the ``--quality low`` or ``--quality medium`` flag for faster rendering won't effect the position of the image on the screen.
"""
CONFIG = {
@@ -74,8 +73,7 @@
----------
scale_to_resolution : :class:`int`
At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.
- This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.
- Calculated by `height = image_height / scale_to_resolution * config["frame_height"]`
+ This is a custom parameter of ImageMobject so that rendering a scene with e.g. the ``--quality low`` or ``--quality medium`` flag for faster rendering won't effect the position of the image on the screen.
| {"golden_diff": "diff --git a/manim/mobject/types/image_mobject.py b/manim/mobject/types/image_mobject.py\n--- a/manim/mobject/types/image_mobject.py\n+++ b/manim/mobject/types/image_mobject.py\n@@ -27,8 +27,7 @@\n ----------\n scale_to_resolution : :class:`int`\n At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.\n- This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.\n- Calculated by `height = image_height / scale_to_resolution * config[\"frame_height\"]`\n+ This is a custom parameter of ImageMobject so that rendering a scene with e.g. the ``--quality low`` or ``--quality medium`` flag for faster rendering won't effect the position of the image on the screen.\n \"\"\"\n \n CONFIG = {\n@@ -74,8 +73,7 @@\n ----------\n scale_to_resolution : :class:`int`\n At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.\n- This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.\n- Calculated by `height = image_height / scale_to_resolution * config[\"frame_height\"]`\n+ This is a custom parameter of ImageMobject so that rendering a scene with e.g. the ``--quality low`` or ``--quality medium`` flag for faster rendering won't effect the position of the image on the screen.\n", "issue": "Making test for SVGMobject\nHere, I have some test and folder structure for SVGMobject and ImageMobject.\r\nThe images and svgs for the test can be found in a new folder called img_svg_resources\r\n\r\nDepends on #650 \n", "before_files": [{"content": "\"\"\"Mobjects representing raster images.\"\"\"\n\n__all__ = [\"AbstractImageMobject\", \"ImageMobject\", \"ImageMobjectFromCamera\"]\n\nimport pathlib\n\nimport numpy as np\n\nfrom PIL import Image\n\nfrom ... 
import config\nfrom ...constants import *\nfrom ...mobject.mobject import Mobject\nfrom ...mobject.shape_matchers import SurroundingRectangle\nfrom ...utils.bezier import interpolate\nfrom ...utils.color import color_to_int_rgb, WHITE\nfrom ...utils.config_ops import digest_config\nfrom ...utils.images import get_full_raster_image_path\nfrom manim.constants import QUALITIES, DEFAULT_QUALITY\n\n\nclass AbstractImageMobject(Mobject):\n \"\"\"\n Automatically filters out black pixels\n\n Parameters\n ----------\n scale_to_resolution : :class:`int`\n At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.\n This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.\n Calculated by `height = image_height / scale_to_resolution * config[\"frame_height\"]`\n \"\"\"\n\n CONFIG = {\n \"pixel_array_dtype\": \"uint8\",\n }\n\n def __init__(self, scale_to_resolution, **kwargs):\n digest_config(self, kwargs)\n self.scale_to_resolution = scale_to_resolution\n\n Mobject.__init__(self, **kwargs)\n\n def get_pixel_array(self):\n raise NotImplementedError()\n\n def set_color(self):\n # Likely to be implemented in subclasses, but no obgligation\n pass\n\n def reset_points(self):\n # Corresponding corners of image are fixed to these 3 points\n self.points = np.array(\n [\n UP + LEFT,\n UP + RIGHT,\n DOWN + LEFT,\n ]\n )\n self.center()\n h, w = self.get_pixel_array().shape[:2]\n if self.scale_to_resolution:\n self.height = h / self.scale_to_resolution * config[\"frame_height\"]\n else:\n self.height = 3 ## this is the case for ImageMobjectFromCamera\n self.stretch_to_fit_height(self.height)\n self.stretch_to_fit_width(self.height * w / h)\n\n\nclass ImageMobject(AbstractImageMobject):\n \"\"\"Displays an Image from a numpy array or a file.\n\n Parameters\n ----------\n scale_to_resolution : :class:`int`\n At this resolution the image is placed pixel by pixel onto the screen, so it will look the sharpest and best.\n This is a custom parameter of ImageMobject so that rendering a scene with the `--quality low` flag for faster rendering and testing won't effect the position of the image on the screen.\n Calculated by `height = image_height / scale_to_resolution * config[\"frame_height\"]`\n\n\n\n Example\n -------\n .. 
manim:: ImageFromArray\n :save_last_frame:\n\n class ImageFromArray(Scene):\n def construct(self):\n image = ImageMobject(np.uint8([[0, 100, 30, 200],\n [255, 0, 5, 33]]))\n image.set_height(7)\n self.add(image)\n\n \"\"\"\n\n CONFIG = {\n \"invert\": False,\n \"image_mode\": \"RGBA\",\n }\n\n def __init__(\n self,\n filename_or_array,\n scale_to_resolution=QUALITIES[DEFAULT_QUALITY][\"pixel_height\"],\n **kwargs,\n ):\n digest_config(self, kwargs)\n if isinstance(filename_or_array, (str, pathlib.PurePath)):\n path = get_full_raster_image_path(filename_or_array)\n image = Image.open(path).convert(self.image_mode)\n self.pixel_array = np.array(image)\n else:\n self.pixel_array = np.array(filename_or_array)\n self.change_to_rgba_array()\n if self.invert:\n self.pixel_array[:, :, :3] = 255 - self.pixel_array[:, :, :3]\n AbstractImageMobject.__init__(self, scale_to_resolution, **kwargs)\n\n def change_to_rgba_array(self):\n pa = self.pixel_array\n if len(pa.shape) == 2:\n pa = pa.reshape(list(pa.shape) + [1])\n if pa.shape[2] == 1:\n pa = pa.repeat(3, axis=2)\n if pa.shape[2] == 3:\n alphas = 255 * np.ones(\n list(pa.shape[:2]) + [1], dtype=self.pixel_array_dtype\n )\n pa = np.append(pa, alphas, axis=2)\n self.pixel_array = pa\n\n def get_pixel_array(self):\n return self.pixel_array\n\n def set_color(self, color, alpha=None, family=True):\n rgb = color_to_int_rgb(color)\n self.pixel_array[:, :, :3] = rgb\n if alpha is not None:\n self.pixel_array[:, :, 3] = int(255 * alpha)\n for submob in self.submobjects:\n submob.set_color(color, alpha, family)\n self.color = color\n return self\n\n def set_opacity(self, alpha):\n self.pixel_array[:, :, 3] = int(255 * alpha)\n return self\n\n def fade(self, darkness=0.5, family=True):\n self.set_opacity(1 - darkness)\n super().fade(darkness, family)\n return self\n\n def interpolate_color(self, mobject1, mobject2, alpha):\n assert mobject1.pixel_array.shape == mobject2.pixel_array.shape, (\n f\"Mobject pixel array shapes incompatible for interpolation.\\n\"\n f\"Mobject 1 ({mobject1}) : {mobject1.pixel_array.shape}\\n\"\n f\"Mobject 2 ({mobject2}) : {mobject1.pixel_array.shape}\"\n )\n self.pixel_array = interpolate(\n mobject1.pixel_array, mobject2.pixel_array, alpha\n ).astype(self.pixel_array_dtype)\n\n\n# TODO, add the ability to have the dimensions/orientation of this\n# mobject more strongly tied to the frame of the camera it contains,\n# in the case where that's a MovingCamera\n\n\nclass ImageMobjectFromCamera(AbstractImageMobject):\n CONFIG = {\n \"default_display_frame_config\": {\n \"stroke_width\": 3,\n \"stroke_color\": WHITE,\n \"buff\": 0,\n }\n }\n\n def __init__(self, camera, **kwargs):\n self.camera = camera\n self.pixel_array = self.camera.pixel_array\n AbstractImageMobject.__init__(self, scale_to_resolution=False, **kwargs)\n\n # TODO: Get rid of this.\n def get_pixel_array(self):\n self.pixel_array = self.camera.pixel_array\n return self.pixel_array\n\n def add_display_frame(self, **kwargs):\n config = dict(self.default_display_frame_config)\n config.update(kwargs)\n self.display_frame = SurroundingRectangle(self, **config)\n self.add(self.display_frame)\n return self\n\n def interpolate_color(self, mobject1, mobject2, alpha):\n assert mobject1.pixel_array.shape == mobject2.pixel_array.shape, (\n f\"Mobject pixel array shapes incompatible for interpolation.\\n\"\n f\"Mobject 1 ({mobject1}) : {mobject1.pixel_array.shape}\\n\"\n f\"Mobject 2 ({mobject2}) : {mobject1.pixel_array.shape}\"\n )\n self.pixel_array = interpolate(\n 
mobject1.pixel_array, mobject2.pixel_array, alpha\n ).astype(self.pixel_array_dtype)\n", "path": "manim/mobject/types/image_mobject.py"}]} | 2,755 | 373 |
gh_patches_debug_363 | rasdani/github-patches | git_diff | mozilla__bugbug-3921 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[model:regressor] AttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'
https://community-tc.services.mozilla.com/tasks/HncpjvKKRcSnxL_GJ8PV9A/runs/0/logs/public/logs/live.log
```
Traceback (most recent call last):
File "/usr/local/bin/bugbug-train", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/site-packages/scripts/trainer.py", line 141, in main
retriever.go(args)
File "/usr/local/lib/python3.10/site-packages/scripts/trainer.py", line 41, in go
metrics = model_obj.train(limit=args.limit)
File "/usr/local/lib/python3.10/site-packages/bugbug/model.py", line 418, in train
logger.info("Number of features: %d", self.clf.steps[-1][1].n_features_in_)
AttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'
```
</issue>
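The failing line in the traceback reads `n_features_in_` from the last pipeline step; scikit-learn estimators only gain that attribute once `fit` has seen the data, which is why a wrapper that never exposes it raises `AttributeError`. A minimal, generic illustration (plain scikit-learn, unrelated to bugbug's real pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
print(hasattr(clf, "n_features_in_"))          # False before fitting
clf.fit(np.random.rand(50, 3), np.array([0, 1] * 25))
print(clf.n_features_in_)                      # 3, recorded during fit
```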
<code>
[start of bugbug/model_calibration.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 from sklearn.base import BaseEstimator, ClassifierMixin
7 from sklearn.calibration import CalibratedClassifierCV
8 from sklearn.model_selection import train_test_split
9
10
11 class IsotonicRegressionCalibrator(BaseEstimator, ClassifierMixin):
12 def __init__(self, base_clf):
13 self.base_clf = base_clf
14 self.calibrated_clf = CalibratedClassifierCV(
15 base_clf, cv="prefit", method="isotonic"
16 )
17
18 def fit(self, X_train, y_train):
19 X_train, X_val, y_train, y_val = train_test_split(
20 X_train, y_train, test_size=0.2, random_state=42
21 )
22 self.base_clf.fit(X_train, y_train)
23 self.calibrated_clf.fit(X_val, y_val)
24
25 def predict(self, X):
26 return self.calibrated_clf.predict(X)
27
28 def predict_proba(self, X):
29 return self.calibrated_clf.predict_proba(X)
30
[end of bugbug/model_calibration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bugbug/model_calibration.py b/bugbug/model_calibration.py
--- a/bugbug/model_calibration.py
+++ b/bugbug/model_calibration.py
@@ -27,3 +27,7 @@
def predict_proba(self, X):
return self.calibrated_clf.predict_proba(X)
+
+ @property
+ def n_features_in_(self):
+ return self.base_clf.n_features_in_
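With the forwarding property above applied, the logging line in `model.py` can read the feature count straight off the wrapper. A hedged end-to-end check; it assumes the patched `IsotonicRegressionCalibrator` is importable from `bugbug.model_calibration`, and the tiny random dataset is purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from bugbug.model_calibration import IsotonicRegressionCalibrator  # patched version assumed

X = np.random.rand(200, 5)
y = np.array([0, 1] * 100)

calibrated = IsotonicRegressionCalibrator(LogisticRegression())
calibrated.fit(X, y)                 # fits base_clf on one split, calibrates on the other
print(calibrated.n_features_in_)     # 5, forwarded from the fitted base classifier
```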
| {"golden_diff": "diff --git a/bugbug/model_calibration.py b/bugbug/model_calibration.py\n--- a/bugbug/model_calibration.py\n+++ b/bugbug/model_calibration.py\n@@ -27,3 +27,7 @@\n \n def predict_proba(self, X):\n return self.calibrated_clf.predict_proba(X)\n+\n+ @property\n+ def n_features_in_(self):\n+ return self.base_clf.n_features_in_\n", "issue": "[model:regressor] AttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'\nhttps://community-tc.services.mozilla.com/tasks/HncpjvKKRcSnxL_GJ8PV9A/runs/0/logs/public/logs/live.log\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/bugbug-train\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/site-packages/scripts/trainer.py\", line 141, in main\r\n retriever.go(args)\r\n File \"/usr/local/lib/python3.10/site-packages/scripts/trainer.py\", line 41, in go\r\n metrics = model_obj.train(limit=args.limit)\r\n File \"/usr/local/lib/python3.10/site-packages/bugbug/model.py\", line 418, in train\r\n logger.info(\"Number of features: %d\", self.clf.steps[-1][1].n_features_in_)\r\nAttributeError: 'IsotonicRegressionCalibrator' object has no attribute 'n_features_in_'\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nfrom sklearn.base import BaseEstimator, ClassifierMixin\nfrom sklearn.calibration import CalibratedClassifierCV\nfrom sklearn.model_selection import train_test_split\n\n\nclass IsotonicRegressionCalibrator(BaseEstimator, ClassifierMixin):\n def __init__(self, base_clf):\n self.base_clf = base_clf\n self.calibrated_clf = CalibratedClassifierCV(\n base_clf, cv=\"prefit\", method=\"isotonic\"\n )\n\n def fit(self, X_train, y_train):\n X_train, X_val, y_train, y_val = train_test_split(\n X_train, y_train, test_size=0.2, random_state=42\n )\n self.base_clf.fit(X_train, y_train)\n self.calibrated_clf.fit(X_val, y_val)\n\n def predict(self, X):\n return self.calibrated_clf.predict(X)\n\n def predict_proba(self, X):\n return self.calibrated_clf.predict_proba(X)\n", "path": "bugbug/model_calibration.py"}]} | 1,099 | 94 |
gh_patches_debug_20408 | rasdani/github-patches | git_diff | dmlc__dgl-2796 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HGT implementation bug?
https://github.com/dmlc/dgl/blob/master/examples/pytorch/hgt/model.py#L81
The implemented HGT sets the node value feature multiple times while computing attention for different edge types, which causes the `nodes["v"]` written for an earlier etype to be overwritten by a later one.
</issue>
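The overwrite the report describes comes from every relation writing its transformed values to the same `srcdata['v']` key before a single `multi_update_all` call. A plain-dict analogy (no DGL required) of the buggy pattern and of the per-etype keys used in the patch further below:

```python
# Stand-in for one node type's srcdata: a plain dict shared across edge types.
srcdata = {}
messages_per_etype = {"writes": [1.0, 1.0], "cites": [2.0, 2.0]}

for etype, v in messages_per_etype.items():
    srcdata["v"] = v                  # same key each iteration -> earlier values lost
print(srcdata["v"])                   # [2.0, 2.0]; the "writes" messages are gone

# The accepted fix keys the feature by edge-type id instead:
for e_id, (etype, v) in enumerate(messages_per_etype.items()):
    srcdata[f"v_{e_id}"] = v          # 'v_0', 'v_1', ... nothing overwritten
print(sorted(srcdata))                # ['v', 'v_0', 'v_1']
```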
<code>
[start of examples/pytorch/hgt/model.py]
1 import dgl
2 import math
3 import torch
4 import torch.nn as nn
5 import torch.nn.functional as F
6 import dgl.function as fn
7 from dgl.nn.functional import edge_softmax
8
9 class HGTLayer(nn.Module):
10 def __init__(self,
11 in_dim,
12 out_dim,
13 node_dict,
14 edge_dict,
15 n_heads,
16 dropout = 0.2,
17 use_norm = False):
18 super(HGTLayer, self).__init__()
19
20 self.in_dim = in_dim
21 self.out_dim = out_dim
22 self.node_dict = node_dict
23 self.edge_dict = edge_dict
24 self.num_types = len(node_dict)
25 self.num_relations = len(edge_dict)
26 self.total_rel = self.num_types * self.num_relations * self.num_types
27 self.n_heads = n_heads
28 self.d_k = out_dim // n_heads
29 self.sqrt_dk = math.sqrt(self.d_k)
30 self.att = None
31
32 self.k_linears = nn.ModuleList()
33 self.q_linears = nn.ModuleList()
34 self.v_linears = nn.ModuleList()
35 self.a_linears = nn.ModuleList()
36 self.norms = nn.ModuleList()
37 self.use_norm = use_norm
38
39 for t in range(self.num_types):
40 self.k_linears.append(nn.Linear(in_dim, out_dim))
41 self.q_linears.append(nn.Linear(in_dim, out_dim))
42 self.v_linears.append(nn.Linear(in_dim, out_dim))
43 self.a_linears.append(nn.Linear(out_dim, out_dim))
44 if use_norm:
45 self.norms.append(nn.LayerNorm(out_dim))
46
47 self.relation_pri = nn.Parameter(torch.ones(self.num_relations, self.n_heads))
48 self.relation_att = nn.Parameter(torch.Tensor(self.num_relations, n_heads, self.d_k, self.d_k))
49 self.relation_msg = nn.Parameter(torch.Tensor(self.num_relations, n_heads, self.d_k, self.d_k))
50 self.skip = nn.Parameter(torch.ones(self.num_types))
51 self.drop = nn.Dropout(dropout)
52
53 nn.init.xavier_uniform_(self.relation_att)
54 nn.init.xavier_uniform_(self.relation_msg)
55
56 def forward(self, G, h):
57 with G.local_scope():
58 node_dict, edge_dict = self.node_dict, self.edge_dict
59 for srctype, etype, dsttype in G.canonical_etypes:
60 sub_graph = G[srctype, etype, dsttype]
61
62 k_linear = self.k_linears[node_dict[srctype]]
63 v_linear = self.v_linears[node_dict[srctype]]
64 q_linear = self.q_linears[node_dict[dsttype]]
65
66 k = k_linear(h[srctype]).view(-1, self.n_heads, self.d_k)
67 v = v_linear(h[srctype]).view(-1, self.n_heads, self.d_k)
68 q = q_linear(h[dsttype]).view(-1, self.n_heads, self.d_k)
69
70 e_id = self.edge_dict[etype]
71
72 relation_att = self.relation_att[e_id]
73 relation_pri = self.relation_pri[e_id]
74 relation_msg = self.relation_msg[e_id]
75
76 k = torch.einsum("bij,ijk->bik", k, relation_att)
77 v = torch.einsum("bij,ijk->bik", v, relation_msg)
78
79 sub_graph.srcdata['k'] = k
80 sub_graph.dstdata['q'] = q
81 sub_graph.srcdata['v'] = v
82
83 sub_graph.apply_edges(fn.v_dot_u('q', 'k', 't'))
84 attn_score = sub_graph.edata.pop('t').sum(-1) * relation_pri / self.sqrt_dk
85 attn_score = edge_softmax(sub_graph, attn_score, norm_by='dst')
86
87 sub_graph.edata['t'] = attn_score.unsqueeze(-1)
88
89 G.multi_update_all({etype : (fn.u_mul_e('v', 't', 'm'), fn.sum('m', 't')) \
90 for etype in edge_dict}, cross_reducer = 'mean')
91
92 new_h = {}
93 for ntype in G.ntypes:
94 '''
95 Step 3: Target-specific Aggregation
96 x = norm( W[node_type] * gelu( Agg(x) ) + x )
97 '''
98 n_id = node_dict[ntype]
99 alpha = torch.sigmoid(self.skip[n_id])
100 t = G.nodes[ntype].data['t'].view(-1, self.out_dim)
101 trans_out = self.drop(self.a_linears[n_id](t))
102 trans_out = trans_out * alpha + h[ntype] * (1-alpha)
103 if self.use_norm:
104 new_h[ntype] = self.norms[n_id](trans_out)
105 else:
106 new_h[ntype] = trans_out
107 return new_h
108
109 class HGT(nn.Module):
110 def __init__(self, G, node_dict, edge_dict, n_inp, n_hid, n_out, n_layers, n_heads, use_norm = True):
111 super(HGT, self).__init__()
112 self.node_dict = node_dict
113 self.edge_dict = edge_dict
114 self.gcs = nn.ModuleList()
115 self.n_inp = n_inp
116 self.n_hid = n_hid
117 self.n_out = n_out
118 self.n_layers = n_layers
119 self.adapt_ws = nn.ModuleList()
120 for t in range(len(node_dict)):
121 self.adapt_ws.append(nn.Linear(n_inp, n_hid))
122 for _ in range(n_layers):
123 self.gcs.append(HGTLayer(n_hid, n_hid, node_dict, edge_dict, n_heads, use_norm = use_norm))
124 self.out = nn.Linear(n_hid, n_out)
125
126 def forward(self, G, out_key):
127 h = {}
128 for ntype in G.ntypes:
129 n_id = self.node_dict[ntype]
130 h[ntype] = F.gelu(self.adapt_ws[n_id](G.nodes[ntype].data['inp']))
131 for i in range(self.n_layers):
132 h = self.gcs[i](G, h)
133 return self.out(h[out_key])
134
135 class HeteroRGCNLayer(nn.Module):
136 def __init__(self, in_size, out_size, etypes):
137 super(HeteroRGCNLayer, self).__init__()
138 # W_r for each relation
139 self.weight = nn.ModuleDict({
140 name : nn.Linear(in_size, out_size) for name in etypes
141 })
142
143 def forward(self, G, feat_dict):
144 # The input is a dictionary of node features for each type
145 funcs = {}
146 for srctype, etype, dsttype in G.canonical_etypes:
147 # Compute W_r * h
148 Wh = self.weight[etype](feat_dict[srctype])
149 # Save it in graph for message passing
150 G.nodes[srctype].data['Wh_%s' % etype] = Wh
151 # Specify per-relation message passing functions: (message_func, reduce_func).
152 # Note that the results are saved to the same destination feature 'h', which
153 # hints the type wise reducer for aggregation.
154 funcs[etype] = (fn.copy_u('Wh_%s' % etype, 'm'), fn.mean('m', 'h'))
155 # Trigger message passing of multiple types.
156 # The first argument is the message passing functions for each relation.
157 # The second one is the type wise reducer, could be "sum", "max",
158 # "min", "mean", "stack"
159 G.multi_update_all(funcs, 'sum')
160 # return the updated node feature dictionary
161 return {ntype : G.nodes[ntype].data['h'] for ntype in G.ntypes}
162
163
164 class HeteroRGCN(nn.Module):
165 def __init__(self, G, in_size, hidden_size, out_size):
166 super(HeteroRGCN, self).__init__()
167 # create layers
168 self.layer1 = HeteroRGCNLayer(in_size, hidden_size, G.etypes)
169 self.layer2 = HeteroRGCNLayer(hidden_size, out_size, G.etypes)
170
171 def forward(self, G, out_key):
172 input_dict = {ntype : G.nodes[ntype].data['inp'] for ntype in G.ntypes}
173 h_dict = self.layer1(G, input_dict)
174 h_dict = {k : F.leaky_relu(h) for k, h in h_dict.items()}
175 h_dict = self.layer2(G, h_dict)
176 # get paper logits
177 return h_dict[out_key]
178
[end of examples/pytorch/hgt/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/pytorch/hgt/model.py b/examples/pytorch/hgt/model.py
--- a/examples/pytorch/hgt/model.py
+++ b/examples/pytorch/hgt/model.py
@@ -78,7 +78,7 @@
sub_graph.srcdata['k'] = k
sub_graph.dstdata['q'] = q
- sub_graph.srcdata['v'] = v
+ sub_graph.srcdata['v_%d' % e_id] = v
sub_graph.apply_edges(fn.v_dot_u('q', 'k', 't'))
attn_score = sub_graph.edata.pop('t').sum(-1) * relation_pri / self.sqrt_dk
@@ -86,8 +86,8 @@
sub_graph.edata['t'] = attn_score.unsqueeze(-1)
- G.multi_update_all({etype : (fn.u_mul_e('v', 't', 'm'), fn.sum('m', 't')) \
- for etype in edge_dict}, cross_reducer = 'mean')
+ G.multi_update_all({etype : (fn.u_mul_e('v_%d' % e_id, 't', 'm'), fn.sum('m', 't')) \
+ for etype, e_id in edge_dict.items()}, cross_reducer = 'mean')
new_h = {}
for ntype in G.ntypes:
| {"golden_diff": "diff --git a/examples/pytorch/hgt/model.py b/examples/pytorch/hgt/model.py\n--- a/examples/pytorch/hgt/model.py\n+++ b/examples/pytorch/hgt/model.py\n@@ -78,7 +78,7 @@\n \n sub_graph.srcdata['k'] = k\n sub_graph.dstdata['q'] = q\n- sub_graph.srcdata['v'] = v\n+ sub_graph.srcdata['v_%d' % e_id] = v\n \n sub_graph.apply_edges(fn.v_dot_u('q', 'k', 't'))\n attn_score = sub_graph.edata.pop('t').sum(-1) * relation_pri / self.sqrt_dk\n@@ -86,8 +86,8 @@\n \n sub_graph.edata['t'] = attn_score.unsqueeze(-1)\n \n- G.multi_update_all({etype : (fn.u_mul_e('v', 't', 'm'), fn.sum('m', 't')) \\\n- for etype in edge_dict}, cross_reducer = 'mean')\n+ G.multi_update_all({etype : (fn.u_mul_e('v_%d' % e_id, 't', 'm'), fn.sum('m', 't')) \\\n+ for etype, e_id in edge_dict.items()}, cross_reducer = 'mean')\n \n new_h = {}\n for ntype in G.ntypes:\n", "issue": "HGT implementation bug?\nhttps://github.com/dmlc/dgl/blob/master/examples/pytorch/hgt/model.py#L81\r\n\r\nThe implemented hgt set value of nodes multiple times when calculating by different edge attention, which will cause the `nodes[\"v\"]` of earlier etype be overwritten by later one. \n", "before_files": [{"content": "import dgl\nimport math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport dgl.function as fn\nfrom dgl.nn.functional import edge_softmax\n\nclass HGTLayer(nn.Module):\n def __init__(self,\n in_dim,\n out_dim,\n node_dict,\n edge_dict,\n n_heads,\n dropout = 0.2,\n use_norm = False):\n super(HGTLayer, self).__init__()\n\n self.in_dim = in_dim\n self.out_dim = out_dim\n self.node_dict = node_dict\n self.edge_dict = edge_dict\n self.num_types = len(node_dict)\n self.num_relations = len(edge_dict)\n self.total_rel = self.num_types * self.num_relations * self.num_types\n self.n_heads = n_heads\n self.d_k = out_dim // n_heads\n self.sqrt_dk = math.sqrt(self.d_k)\n self.att = None\n\n self.k_linears = nn.ModuleList()\n self.q_linears = nn.ModuleList()\n self.v_linears = nn.ModuleList()\n self.a_linears = nn.ModuleList()\n self.norms = nn.ModuleList()\n self.use_norm = use_norm\n\n for t in range(self.num_types):\n self.k_linears.append(nn.Linear(in_dim, out_dim))\n self.q_linears.append(nn.Linear(in_dim, out_dim))\n self.v_linears.append(nn.Linear(in_dim, out_dim))\n self.a_linears.append(nn.Linear(out_dim, out_dim))\n if use_norm:\n self.norms.append(nn.LayerNorm(out_dim))\n\n self.relation_pri = nn.Parameter(torch.ones(self.num_relations, self.n_heads))\n self.relation_att = nn.Parameter(torch.Tensor(self.num_relations, n_heads, self.d_k, self.d_k))\n self.relation_msg = nn.Parameter(torch.Tensor(self.num_relations, n_heads, self.d_k, self.d_k))\n self.skip = nn.Parameter(torch.ones(self.num_types))\n self.drop = nn.Dropout(dropout)\n\n nn.init.xavier_uniform_(self.relation_att)\n nn.init.xavier_uniform_(self.relation_msg)\n\n def forward(self, G, h):\n with G.local_scope():\n node_dict, edge_dict = self.node_dict, self.edge_dict\n for srctype, etype, dsttype in G.canonical_etypes:\n sub_graph = G[srctype, etype, dsttype]\n\n k_linear = self.k_linears[node_dict[srctype]]\n v_linear = self.v_linears[node_dict[srctype]]\n q_linear = self.q_linears[node_dict[dsttype]]\n\n k = k_linear(h[srctype]).view(-1, self.n_heads, self.d_k)\n v = v_linear(h[srctype]).view(-1, self.n_heads, self.d_k)\n q = q_linear(h[dsttype]).view(-1, self.n_heads, self.d_k)\n\n e_id = self.edge_dict[etype]\n\n relation_att = self.relation_att[e_id]\n relation_pri = self.relation_pri[e_id]\n relation_msg = 
self.relation_msg[e_id]\n\n k = torch.einsum(\"bij,ijk->bik\", k, relation_att)\n v = torch.einsum(\"bij,ijk->bik\", v, relation_msg)\n\n sub_graph.srcdata['k'] = k\n sub_graph.dstdata['q'] = q\n sub_graph.srcdata['v'] = v\n\n sub_graph.apply_edges(fn.v_dot_u('q', 'k', 't'))\n attn_score = sub_graph.edata.pop('t').sum(-1) * relation_pri / self.sqrt_dk\n attn_score = edge_softmax(sub_graph, attn_score, norm_by='dst')\n\n sub_graph.edata['t'] = attn_score.unsqueeze(-1)\n\n G.multi_update_all({etype : (fn.u_mul_e('v', 't', 'm'), fn.sum('m', 't')) \\\n for etype in edge_dict}, cross_reducer = 'mean')\n\n new_h = {}\n for ntype in G.ntypes:\n '''\n Step 3: Target-specific Aggregation\n x = norm( W[node_type] * gelu( Agg(x) ) + x )\n '''\n n_id = node_dict[ntype]\n alpha = torch.sigmoid(self.skip[n_id])\n t = G.nodes[ntype].data['t'].view(-1, self.out_dim)\n trans_out = self.drop(self.a_linears[n_id](t))\n trans_out = trans_out * alpha + h[ntype] * (1-alpha)\n if self.use_norm:\n new_h[ntype] = self.norms[n_id](trans_out)\n else:\n new_h[ntype] = trans_out\n return new_h\n\nclass HGT(nn.Module):\n def __init__(self, G, node_dict, edge_dict, n_inp, n_hid, n_out, n_layers, n_heads, use_norm = True):\n super(HGT, self).__init__()\n self.node_dict = node_dict\n self.edge_dict = edge_dict\n self.gcs = nn.ModuleList()\n self.n_inp = n_inp\n self.n_hid = n_hid\n self.n_out = n_out\n self.n_layers = n_layers\n self.adapt_ws = nn.ModuleList()\n for t in range(len(node_dict)):\n self.adapt_ws.append(nn.Linear(n_inp, n_hid))\n for _ in range(n_layers):\n self.gcs.append(HGTLayer(n_hid, n_hid, node_dict, edge_dict, n_heads, use_norm = use_norm))\n self.out = nn.Linear(n_hid, n_out)\n\n def forward(self, G, out_key):\n h = {}\n for ntype in G.ntypes:\n n_id = self.node_dict[ntype]\n h[ntype] = F.gelu(self.adapt_ws[n_id](G.nodes[ntype].data['inp']))\n for i in range(self.n_layers):\n h = self.gcs[i](G, h)\n return self.out(h[out_key])\n\nclass HeteroRGCNLayer(nn.Module):\n def __init__(self, in_size, out_size, etypes):\n super(HeteroRGCNLayer, self).__init__()\n # W_r for each relation\n self.weight = nn.ModuleDict({\n name : nn.Linear(in_size, out_size) for name in etypes\n })\n\n def forward(self, G, feat_dict):\n # The input is a dictionary of node features for each type\n funcs = {}\n for srctype, etype, dsttype in G.canonical_etypes:\n # Compute W_r * h\n Wh = self.weight[etype](feat_dict[srctype])\n # Save it in graph for message passing\n G.nodes[srctype].data['Wh_%s' % etype] = Wh\n # Specify per-relation message passing functions: (message_func, reduce_func).\n # Note that the results are saved to the same destination feature 'h', which\n # hints the type wise reducer for aggregation.\n funcs[etype] = (fn.copy_u('Wh_%s' % etype, 'm'), fn.mean('m', 'h'))\n # Trigger message passing of multiple types.\n # The first argument is the message passing functions for each relation.\n # The second one is the type wise reducer, could be \"sum\", \"max\",\n # \"min\", \"mean\", \"stack\"\n G.multi_update_all(funcs, 'sum')\n # return the updated node feature dictionary\n return {ntype : G.nodes[ntype].data['h'] for ntype in G.ntypes}\n\n\nclass HeteroRGCN(nn.Module):\n def __init__(self, G, in_size, hidden_size, out_size):\n super(HeteroRGCN, self).__init__()\n # create layers\n self.layer1 = HeteroRGCNLayer(in_size, hidden_size, G.etypes)\n self.layer2 = HeteroRGCNLayer(hidden_size, out_size, G.etypes)\n\n def forward(self, G, out_key):\n input_dict = {ntype : G.nodes[ntype].data['inp'] for ntype in G.ntypes}\n 
h_dict = self.layer1(G, input_dict)\n h_dict = {k : F.leaky_relu(h) for k, h in h_dict.items()}\n h_dict = self.layer2(G, h_dict)\n # get paper logits\n return h_dict[out_key]\n", "path": "examples/pytorch/hgt/model.py"}]} | 2,916 | 298 |
gh_patches_debug_30439 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-38 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
support blocking pull in PS so that the client doesn't need to retry in a loop
</issue>
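A blocking pull of this kind is usually built on a `threading.Condition`: `push` advances a step counter and notifies, while `pull` waits until the counter reaches the requested step or a timeout expires. The sketch below only illustrates that pattern; the class name and the `min_step`/`timeout` parameters are assumptions for illustration, not the project's actual API, though the error message mirrors the non-blocking check in the `ps.py` shown below.

```python
import threading


class BlockingParameterStore(object):
    """Illustrative sketch: pull() blocks until the requested step is reached."""

    def __init__(self):
        self._step = 0
        self._params = {}
        self._cv = threading.Condition()

    def push(self, updated_params):
        with self._cv:
            self._params.update(updated_params)
            self._step += 1
            self._cv.notify_all()  # wake every pull() waiting for a newer step

    def pull(self, min_step=0, timeout=None):
        with self._cv:
            ready = self._cv.wait_for(lambda: self._step >= min_step,
                                      timeout=timeout)
            if not ready:
                raise LookupError('Required step is not ready yet: %s' % min_step)
            return self._step, dict(self._params)
```

With this shape a client calls `pull(min_step=n)` once and is woken by the next `push`, instead of polling in a retry loop.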
<code>
[start of tensorflow/ps/ps.py]
1 import tensorflow as tf
2 tf.enable_eager_execution()
3 import tensorflow.contrib.eager as tfe
4 import numpy as np
5 import queue
6 import threading
7
8
9 class ParameterServer(object):
10 def __init__(self, optimizer, vars):
11 self._opt = optimizer
12 self._vars = {}
13 for k, v in vars.items():
14 if (not isinstance(v, np.ndarray)
15 or v.dtype not in (np.float32, np.float64)):
16 raise ValueError(
17 'Initial value for variable %s is not of float type ndarray' %
18 k)
19 self._vars[k] = tfe.Variable(v, name=k)
20 self._step = 0
21 self._grad_q = queue.Queue()
22 self._lock = threading.Lock()
23 self._runner = threading.Thread(target=self._run, name='ps-runner')
24 self._exiting = False
25
26 def pull(self, min_step=0, names=None):
27 with self._lock:
28 if min_step > self._step:
29 raise LookupError('Required step is not ready yet: %s' % min_step)
30 if names:
31 res = {k: self._vars[k].numpy() for k in names}
32 else:
33 res = {k: v.numpy() for k, v in self._vars.items()}
34 return self._step, res
35
36 def push(self, base_step, sub_step, grads):
37 with self._lock:
38 if base_step > self._step:
39 raise ValueError(
40 'Illegal base step %s, parameter server step is %s' %
41 (base_step, self._step))
42
43 if sub_step < 0:
44 raise ValueError('Illegal sub step %s' % sub_step)
45
46 for k, g in grads.items():
47 v = self._vars[k]
48 if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:
49 raise ValueError('Incompatible gradient for variable %s' % k)
50 # TODO(l.zou): use @dataclass when python 3.7 is available.
51 self._grad_q.put((base_step, sub_step, grads))
52
53 def _compute(self, grads):
54 grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
55 with self._lock:
56 self._opt.apply_gradients(grads_vars)
57 self._step += 1
58
59 def _run(self):
60 while not self._exiting:
61 # TODO(l.zou): How to properly accumulate and decay grads?
62 try:
63 base_step, sub_step, grads = self._grad_q.get(timeout=1.0)
64 self._compute(grads)
65 except queue.Empty:
66 pass
67
68 def start(self):
69 self._runner.start()
70
71 def join(self):
72 self._exiting = True
73 self._runner.join()
74
[end of tensorflow/ps/ps.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensorflow/ps/ps.py b/tensorflow/ps/ps.py
--- a/tensorflow/ps/ps.py
+++ b/tensorflow/ps/ps.py
@@ -1,9 +1,9 @@
+import threading
+import queue
+import numpy as np
+import tensorflow.contrib.eager as tfe
import tensorflow as tf
tf.enable_eager_execution()
-import tensorflow.contrib.eager as tfe
-import numpy as np
-import queue
-import threading
class ParameterServer(object):
@@ -22,11 +22,18 @@
self._lock = threading.Lock()
self._runner = threading.Thread(target=self._run, name='ps-runner')
self._exiting = False
+ self._min_step_cv = threading.Condition()
- def pull(self, min_step=0, names=None):
+ def pull(self, names=None, min_step=0, blocking=True, timeout=None):
+ with self._min_step_cv:
+ self._min_step_cv.wait_for(
+ lambda: not blocking or min_step <= self._step,
+ timeout=timeout)
with self._lock:
if min_step > self._step:
- raise LookupError('Required step is not ready yet: %s' % min_step)
+ raise LookupError(
+ 'Required step is not ready yet: %s' %
+ min_step)
if names:
res = {k: self._vars[k].numpy() for k in names}
else:
@@ -54,7 +61,9 @@
grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
with self._lock:
self._opt.apply_gradients(grads_vars)
+ with self._min_step_cv:
self._step += 1
+ self._min_step_cv.notify_all()
def _run(self):
while not self._exiting:
| {"golden_diff": "diff --git a/tensorflow/ps/ps.py b/tensorflow/ps/ps.py\n--- a/tensorflow/ps/ps.py\n+++ b/tensorflow/ps/ps.py\n@@ -1,9 +1,9 @@\n+import threading\n+import queue\n+import numpy as np\n+import tensorflow.contrib.eager as tfe\n import tensorflow as tf\n tf.enable_eager_execution()\n-import tensorflow.contrib.eager as tfe\n-import numpy as np\n-import queue\n-import threading\n \n \n class ParameterServer(object):\n@@ -22,11 +22,18 @@\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n+ self._min_step_cv = threading.Condition()\n \n- def pull(self, min_step=0, names=None):\n+ def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n+ with self._min_step_cv:\n+ self._min_step_cv.wait_for(\n+ lambda: not blocking or min_step <= self._step,\n+ timeout=timeout)\n with self._lock:\n if min_step > self._step:\n- raise LookupError('Required step is not ready yet: %s' % min_step)\n+ raise LookupError(\n+ 'Required step is not ready yet: %s' %\n+ min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n@@ -54,7 +61,9 @@\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n+ with self._min_step_cv:\n self._step += 1\n+ self._min_step_cv.notify_all()\n \n def _run(self):\n while not self._exiting:\n", "issue": "support blocking pull in PS so client don't need to retry in a loop\n\n", "before_files": [{"content": "import tensorflow as tf\ntf.enable_eager_execution()\nimport tensorflow.contrib.eager as tfe\nimport numpy as np\nimport queue\nimport threading\n\n\nclass ParameterServer(object):\n def __init__(self, optimizer, vars):\n self._opt = optimizer\n self._vars = {}\n for k, v in vars.items():\n if (not isinstance(v, np.ndarray)\n or v.dtype not in (np.float32, np.float64)):\n raise ValueError(\n 'Initial value for variable %s is not of float type ndarray' %\n k)\n self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n\n def pull(self, min_step=0, names=None):\n with self._lock:\n if min_step > self._step:\n raise LookupError('Required step is not ready yet: %s' % min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n res = {k: v.numpy() for k, v in self._vars.items()}\n return self._step, res\n\n def push(self, base_step, sub_step, grads):\n with self._lock:\n if base_step > self._step:\n raise ValueError(\n 'Illegal base step %s, parameter server step is %s' %\n (base_step, self._step))\n\n if sub_step < 0:\n raise ValueError('Illegal sub step %s' % sub_step)\n\n for k, g in grads.items():\n v = self._vars[k]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n\n def _compute(self, grads):\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n self._step += 1\n\n def _run(self):\n while not self._exiting:\n # TODO(l.zou): How to properly accumulate and decay grads?\n try:\n base_step, sub_step, grads = self._grad_q.get(timeout=1.0)\n self._compute(grads)\n except queue.Empty:\n pass\n\n def start(self):\n self._runner.start()\n\n def join(self):\n self._exiting = 
True\n self._runner.join()\n", "path": "tensorflow/ps/ps.py"}]} | 1,297 | 422 |
gh_patches_debug_9871 | rasdani/github-patches | git_diff | OCA__social-623 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[13.0] [BUG] "base_search_mail_content" module > Getting bug with "hr" (Employees) module
module: base_search_mail_content
version: 13.0
**Context:**
OCB 13.0 Odoo Server, up to date (08/30/2020)
Virgin database, to reproduce the issue faced on my test environment.
The same bug also appears on runbot: http://3437172-13-0-56e0a2.runbot2-2.odoo-community.org
**Steps to reproduce**
- Install the "base_search_mail_content" and "hr" (Employees) native Odoo modules together, then try to access "hr" (Employees)
**Current behavior** (model=hr.employee&view_type=kanban, or tree)
When I try to access the "Employees" menu, the following message appears:
> Something went wrong !
Only types ['many2one'] are supported for category (found type text)
**Current resolution**
I uninstall "base_search_mail_content" to retrieve access to "hr" (Employees).
-----------------------------------------------------------------------------------------------------------------------


</issue>
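The error is consistent with the module's `fields_view_get` override (shown below) injecting its `message_content` field via the XPath `//field[last()]`, which matches the last `<field>` of every parent in the arch, including fields nested inside the `<searchpanel>` that Odoo 13 search views such as `hr.employee` use, where only many2one fields are accepted. Anchoring the path at the root, `/search/field[last()]`, only touches top-level search fields. A small lxml-only illustration of the difference; the arch here is a simplified stand-in, not the real hr view:

```python
from lxml import etree

arch = etree.XML(
    """<search>
         <field name="name"/>
         <searchpanel>
           <field name="department_id"/>
         </searchpanel>
       </search>"""
)

# Selects every <field> that is the last field child of its parent,
# including the one inside <searchpanel>:
print([f.get('name') for f in arch.xpath("//field[last()]")])        # ['name', 'department_id']

# Selects only direct <field> children of the root <search> element:
print([f.get('name') for f in arch.xpath("/search/field[last()]")])  # ['name']
```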
<code>
[start of base_search_mail_content/models/mail_thread.py]
1 # Copyright 2016-17 Eficent Business and IT Consulting Services S.L.
2 # (http://www.eficent.com)
3 # Copyright 2016 Serpent Consulting Services Pvt. Ltd.
4 # (<http://www.serpentcs.com>)
5 # License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl.html).
6
7 from lxml import etree
8
9 from odoo import _, api, fields, models
10 from odoo.osv import expression
11
12
13 class MailThread(models.AbstractModel):
14 _inherit = "mail.thread"
15
16 def _search_message_content(self, operator, value):
17 model_domain = [("model", "=", self._name)]
18 if operator not in expression.NEGATIVE_TERM_OPERATORS:
19 model_domain += ["|"] * 4
20 model_domain += [
21 ("record_name", operator, value),
22 ("subject", operator, value),
23 ("body", operator, value),
24 ("email_from", operator, value),
25 ("reply_to", operator, value),
26 ]
27 recs = self.env["mail.message"].search(model_domain)
28 return [("id", "in", recs.mapped("res_id"))]
29
30 message_content = fields.Text(
31 string="Message Content",
32 help="Message content, to be used only in searches",
33 compute=lambda self: False,
34 search="_search_message_content",
35 )
36
37 @api.model
38 def fields_view_get(
39 self, view_id=None, view_type="form", toolbar=False, submenu=False
40 ):
41 """
42 Override to add message_content field in all the objects
43 that inherits mail.thread
44 """
45 res = super(MailThread, self).fields_view_get(
46 view_id=view_id, view_type=view_type, toolbar=toolbar, submenu=submenu
47 )
48 if view_type == "search" and self._fields.get("message_content"):
49 doc = etree.XML(res["arch"])
50 res["fields"].update(
51 {"message_content": {"type": "char", "string": _("Message Content")}}
52 )
53
54 for node in doc.xpath("//field[last()]"):
55 # Add message_content in search view
56 elem = etree.Element("field", {"name": "message_content"})
57 node.addnext(elem)
58 res["arch"] = etree.tostring(doc)
59 return res
60
[end of base_search_mail_content/models/mail_thread.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/base_search_mail_content/models/mail_thread.py b/base_search_mail_content/models/mail_thread.py
--- a/base_search_mail_content/models/mail_thread.py
+++ b/base_search_mail_content/models/mail_thread.py
@@ -50,8 +50,7 @@
res["fields"].update(
{"message_content": {"type": "char", "string": _("Message Content")}}
)
-
- for node in doc.xpath("//field[last()]"):
+ for node in doc.xpath("/search/field[last()]"):
# Add message_content in search view
elem = etree.Element("field", {"name": "message_content"})
node.addnext(elem)
| {"golden_diff": "diff --git a/base_search_mail_content/models/mail_thread.py b/base_search_mail_content/models/mail_thread.py\n--- a/base_search_mail_content/models/mail_thread.py\n+++ b/base_search_mail_content/models/mail_thread.py\n@@ -50,8 +50,7 @@\n res[\"fields\"].update(\n {\"message_content\": {\"type\": \"char\", \"string\": _(\"Message Content\")}}\n )\n-\n- for node in doc.xpath(\"//field[last()]\"):\n+ for node in doc.xpath(\"/search/field[last()]\"):\n # Add message_content in search view\n elem = etree.Element(\"field\", {\"name\": \"message_content\"})\n node.addnext(elem)\n", "issue": "[13.0] [BUG]\"base_search_mail_content\" module > Getting bug with \"hr\" (Employees) module\nmodule: base_search_mail_content\r\nversion: 13.0\r\n\r\n**Context :**\r\nOCB 13.0 Odoo Server up to date [(08/30/2020),]\r\nVirgin database , to reproduce issue faced on my test environnement.\r\nAlso !! >> Get same bug on runbot : http://3437172-13-0-56e0a2.runbot2-2.odoo-community.org\r\n\r\n**Steps to reproduce**\r\n- Install together \"base_search_mail_content\" & \"hr\" (Employees) native odoo module, and try to access to : hr\" (Employees)\r\n\r\n**Current behavior** (model=hr.employee&view_type=kanban, or tree)\r\nWhen i try to access to menu \"Employees\"There is this following message : \r\n> Something went wrong !\r\nOnly types ['many2one'] are supported for category (found type text)\r\n\r\n**Current resolution**\r\ni uninstall \"base_search_mail_content\" to retreive access to hr\" (Employees)\r\n\r\n-----------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2016-17 Eficent Business and IT Consulting Services S.L.\n# (http://www.eficent.com)\n# Copyright 2016 Serpent Consulting Services Pvt. 
Ltd.\n# (<http://www.serpentcs.com>)\n# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl.html).\n\nfrom lxml import etree\n\nfrom odoo import _, api, fields, models\nfrom odoo.osv import expression\n\n\nclass MailThread(models.AbstractModel):\n _inherit = \"mail.thread\"\n\n def _search_message_content(self, operator, value):\n model_domain = [(\"model\", \"=\", self._name)]\n if operator not in expression.NEGATIVE_TERM_OPERATORS:\n model_domain += [\"|\"] * 4\n model_domain += [\n (\"record_name\", operator, value),\n (\"subject\", operator, value),\n (\"body\", operator, value),\n (\"email_from\", operator, value),\n (\"reply_to\", operator, value),\n ]\n recs = self.env[\"mail.message\"].search(model_domain)\n return [(\"id\", \"in\", recs.mapped(\"res_id\"))]\n\n message_content = fields.Text(\n string=\"Message Content\",\n help=\"Message content, to be used only in searches\",\n compute=lambda self: False,\n search=\"_search_message_content\",\n )\n\n @api.model\n def fields_view_get(\n self, view_id=None, view_type=\"form\", toolbar=False, submenu=False\n ):\n \"\"\"\n Override to add message_content field in all the objects\n that inherits mail.thread\n \"\"\"\n res = super(MailThread, self).fields_view_get(\n view_id=view_id, view_type=view_type, toolbar=toolbar, submenu=submenu\n )\n if view_type == \"search\" and self._fields.get(\"message_content\"):\n doc = etree.XML(res[\"arch\"])\n res[\"fields\"].update(\n {\"message_content\": {\"type\": \"char\", \"string\": _(\"Message Content\")}}\n )\n\n for node in doc.xpath(\"//field[last()]\"):\n # Add message_content in search view\n elem = etree.Element(\"field\", {\"name\": \"message_content\"})\n node.addnext(elem)\n res[\"arch\"] = etree.tostring(doc)\n return res\n", "path": "base_search_mail_content/models/mail_thread.py"}]} | 1,531 | 141 |
gh_patches_debug_13356 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2791 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Modify page routing to allow for any database name
## Current behavior
- Many of our pages have URLs that begin with the database name.
- We also have routes that begin with things like `administration` and `auth`.
- Those routing rules produce an ambiguous routing grammar making it impossible to use Mathesar with a database named "administration" (for example).
## Desired behavior
We should change `/<db_name>/` to `/db/<db_name>`
</issue>
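In Django's routing syntax the requested change amounts to anchoring the database-scoped patterns under a literal `db/` segment, so they can no longer swallow fixed top-level routes such as `administration/` or `auth/`. A rough sketch follows, reusing the view names from the `urls.py` shown below; whether any extra routes (for example a bare `db/` index) are added is not settled by the issue text:

```python
from django.urls import path, re_path

from mathesar import views  # as imported in the existing urls module

urlpatterns = [
    # The literal 'db/' prefix removes the ambiguity with reserved
    # top-level paths like 'administration/' and 'auth/'.
    path('db/<db_name>/', views.schemas, name='schemas'),
    re_path(
        r'^db/(?P<db_name>\w+)/(?P<schema_id>\w+)/',
        views.schema_home,
        name='schema_home',
    ),
]
```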
<code>
[start of mathesar/urls.py]
1 from django.contrib.auth.views import LoginView
2 from django.urls import include, path, re_path
3 from rest_framework_nested import routers
4
5 from mathesar import views
6 from mathesar.api.db import viewsets as db_viewsets
7 from mathesar.api.ui import viewsets as ui_viewsets
8 from mathesar.users.password_reset import MathesarPasswordResetConfirmView
9
10 db_router = routers.DefaultRouter()
11 db_router.register(r'tables', db_viewsets.TableViewSet, basename='table')
12 db_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')
13 db_router.register(r'links', db_viewsets.LinkViewSet, basename='links')
14 db_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')
15 db_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')
16 db_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')
17
18 db_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
19 db_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')
20 db_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')
21 db_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')
22 db_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')
23
24 ui_router = routers.DefaultRouter()
25 ui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')
26 ui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')
27 ui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')
28 ui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')
29 ui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')
30
31 ui_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
32 ui_table_router.register(r'records', ui_viewsets.RecordViewSet, basename='table-record')
33
34 urlpatterns = [
35 path('api/db/v0/', include(db_router.urls)),
36 path('api/db/v0/', include(db_table_router.urls)),
37 path('api/ui/v0/', include(ui_router.urls)),
38 path('api/ui/v0/', include(ui_table_router.urls)),
39 path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),
40 path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),
41 path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),
42 path('auth/', include('django.contrib.auth.urls')),
43 path('', views.home, name='home'),
44 path('profile/', views.profile, name='profile'),
45 path('administration/', views.admin_home, name='admin_home'),
46 path('administration/users/', views.admin_home, name='admin_users_home'),
47 path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
48 path('administration/update/', views.admin_home, name='admin_update'),
49 path('<db_name>/', views.schemas, name='schemas'),
50 re_path(
51 r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
52 views.schema_home,
53 name='schema_home'
54 ),
55 ]
56
[end of mathesar/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -46,9 +46,10 @@
path('administration/users/', views.admin_home, name='admin_users_home'),
path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
path('administration/update/', views.admin_home, name='admin_update'),
- path('<db_name>/', views.schemas, name='schemas'),
+ path('db/', views.home, name='db_home'),
+ path('db/<db_name>/', views.schemas, name='schemas'),
re_path(
- r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
+ r'^db/(?P<db_name>\w+)/(?P<schema_id>\w+)/',
views.schema_home,
name='schema_home'
),
| {"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -46,9 +46,10 @@\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/update/', views.admin_home, name='admin_update'),\n- path('<db_name>/', views.schemas, name='schemas'),\n+ path('db/', views.home, name='db_home'),\n+ path('db/<db_name>/', views.schemas, name='schemas'),\n re_path(\n- r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n+ r'^db/(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n", "issue": "Modify page routing to allow for any database name\n## Current behavior\r\n\r\n- Many of our pages have URLs that begin with the database name.\r\n- We also have routes that begin with things like `administration` and `auth`.\r\n- Those routing rules produce an ambiguous routing grammar making it impossible to use Mathesar with a database named \"administration\" (for example).\r\n\r\n## Desired behavior\r\n\r\nWe should change `/<db_name>/` to `/db/<db_name>`\r\n\r\n\n", "before_files": [{"content": "from django.contrib.auth.views import LoginView\nfrom django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\nfrom mathesar.users.password_reset import MathesarPasswordResetConfirmView\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nui_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\nui_table_router.register(r'records', ui_viewsets.RecordViewSet, basename='table-record')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/', include(ui_table_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), 
name='password_reset_confirm'),\n path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n path('profile/', views.profile, name='profile'),\n path('administration/', views.admin_home, name='admin_home'),\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/update/', views.admin_home, name='admin_update'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}]} | 1,436 | 204 |
gh_patches_debug_7778 | rasdani/github-patches | git_diff | nipy__nipype-2096 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
interfaces.camino.convert.FSL2Scheme does not show up in documentation
Diagram on front page of docs contains a typo
"Idiosynchratic" should be "Idiosyncratic"
</issue>
<code>
[start of tools/build_interface_docs.py]
1 #!/usr/bin/env python
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """Script to auto-generate interface docs.
5 """
6 from __future__ import print_function, unicode_literals
7 # stdlib imports
8 import os
9 import sys
10
11 # *****************************************************************************
12 if __name__ == '__main__':
13 nipypepath = os.path.abspath('..')
14 sys.path.insert(1, nipypepath)
15 # local imports
16 from interfacedocgen import InterfaceHelpWriter
17 package = 'nipype'
18 outdir = os.path.join('interfaces', 'generated')
19 docwriter = InterfaceHelpWriter(package)
20 # Packages that should not be included in generated API docs.
21 docwriter.package_skip_patterns += ['\.external$',
22 '\.fixes$',
23 '\.utils$',
24 '\.pipeline',
25 '\.testing',
26 '\.caching',
27 '\.scripts',
28 ]
29 # Modules that should not be included in generated API docs.
30 docwriter.module_skip_patterns += ['\.version$',
31 '\.interfaces\.base$',
32 '\.interfaces\.matlab$',
33 '\.interfaces\.rest$',
34 '\.interfaces\.pymvpa$',
35 '\.interfaces\.slicer\.generate_classes$',
36 '\.interfaces\.spm\.base$',
37 '\.interfaces\.traits',
38 '\.pipeline\.alloy$',
39 '\.pipeline\.s3_node_wrapper$',
40 '\.testing',
41 '\.scripts',
42 ]
43 docwriter.class_skip_patterns += ['AFNICommand',
44 'ANTS',
45 'FSL',
46 'FS',
47 'Info',
48 '^SPM',
49 'Tester',
50 'Spec$',
51 'Numpy'
52 # NipypeTester raises an
53 # exception when instantiated in
54 # InterfaceHelpWriter.generate_api_doc
55 'NipypeTester',
56 ]
57 docwriter.write_api_docs(outdir)
58 docwriter.write_index(outdir, 'gen', relative_to='interfaces')
59 print('%d files written' % len(docwriter.written_modules))
60
[end of tools/build_interface_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/build_interface_docs.py b/tools/build_interface_docs.py
--- a/tools/build_interface_docs.py
+++ b/tools/build_interface_docs.py
@@ -42,7 +42,7 @@
]
docwriter.class_skip_patterns += ['AFNICommand',
'ANTS',
- 'FSL',
+ 'FSLCommand',
'FS',
'Info',
'^SPM',
| {"golden_diff": "diff --git a/tools/build_interface_docs.py b/tools/build_interface_docs.py\n--- a/tools/build_interface_docs.py\n+++ b/tools/build_interface_docs.py\n@@ -42,7 +42,7 @@\n ]\n docwriter.class_skip_patterns += ['AFNICommand',\n 'ANTS',\n- 'FSL',\n+ 'FSLCommand',\n 'FS',\n 'Info',\n '^SPM',\n", "issue": "interfaces.camino.convert.FSL2Scheme does not show up in documentation\n\nDiagram on front page of docs contains a typo\n\"Idiosynchratic\" should be \"Idiosyncratic\"\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Script to auto-generate interface docs.\n\"\"\"\nfrom __future__ import print_function, unicode_literals\n# stdlib imports\nimport os\nimport sys\n\n# *****************************************************************************\nif __name__ == '__main__':\n nipypepath = os.path.abspath('..')\n sys.path.insert(1, nipypepath)\n # local imports\n from interfacedocgen import InterfaceHelpWriter\n package = 'nipype'\n outdir = os.path.join('interfaces', 'generated')\n docwriter = InterfaceHelpWriter(package)\n # Packages that should not be included in generated API docs.\n docwriter.package_skip_patterns += ['\\.external$',\n '\\.fixes$',\n '\\.utils$',\n '\\.pipeline',\n '\\.testing',\n '\\.caching',\n '\\.scripts',\n ]\n # Modules that should not be included in generated API docs.\n docwriter.module_skip_patterns += ['\\.version$',\n '\\.interfaces\\.base$',\n '\\.interfaces\\.matlab$',\n '\\.interfaces\\.rest$',\n '\\.interfaces\\.pymvpa$',\n '\\.interfaces\\.slicer\\.generate_classes$',\n '\\.interfaces\\.spm\\.base$',\n '\\.interfaces\\.traits',\n '\\.pipeline\\.alloy$',\n '\\.pipeline\\.s3_node_wrapper$',\n '\\.testing',\n '\\.scripts',\n ]\n docwriter.class_skip_patterns += ['AFNICommand',\n 'ANTS',\n 'FSL',\n 'FS',\n 'Info',\n '^SPM',\n 'Tester',\n 'Spec$',\n 'Numpy'\n # NipypeTester raises an\n # exception when instantiated in\n # InterfaceHelpWriter.generate_api_doc\n 'NipypeTester',\n ]\n docwriter.write_api_docs(outdir)\n docwriter.write_index(outdir, 'gen', relative_to='interfaces')\n print('%d files written' % len(docwriter.written_modules))\n", "path": "tools/build_interface_docs.py"}]} | 1,149 | 88 |
gh_patches_debug_9682 | rasdani/github-patches | git_diff | pypa__setuptools-781 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError: cannot import name monkey
```
user@hostname:~/project$ venv/bin/pip install -U setuptools==27.1.0
Collecting setuptools==27.1.0
Downloading setuptools-27.1.0-py2.py3-none-any.whl (464kB)
100% |████████████████████████████████| 471kB 1.6MB/s
Installing collected packages: setuptools
Found existing installation: setuptools 27.1.1
Uninstalling setuptools-27.1.1:
Successfully uninstalled setuptools-27.1.1
Successfully installed setuptools-27.1.0
user@hostname:~/project$ venv/bin/python -c "import setuptools"
user@hostname:~/project$ venv/bin/pip install -U setuptools==27.1.1
Collecting setuptools==27.1.1
Using cached setuptools-27.1.1-py2.py3-none-any.whl
Installing collected packages: setuptools
Found existing installation: setuptools 27.1.0
Uninstalling setuptools-27.1.0:
Successfully uninstalled setuptools-27.1.0
Successfully installed setuptools-27.1.1
user@hostname:~/project$ venv/bin/python -c "import setuptools"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/__init__.py", line 13, in <module>
from setuptools.extension import Extension
File "/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/extension.py", line 9, in <module>
from .monkey import get_unpatched
File "/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/monkey.py", line 10, in <module>
from . import msvc
File "/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/msvc.py", line 27, in <module>
from . import monkey
ImportError: cannot import name monkey
user@hostname:~/project$
```
</issue>
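The traceback describes an import cycle: importing `setuptools` pulls in `setuptools.monkey`, whose module-level `from . import msvc` runs `setuptools.msvc`, which in turn tries `from . import monkey` while `monkey` is still only partially initialised. A common way to break such a cycle is to defer one of the imports to the point of use. The two-module sketch below uses invented names (`pkg.alpha`, `pkg.beta`) purely to illustrate the pattern, not the actual setuptools layout:

```python
# pkg/alpha.py
# No module-level "from . import beta" here; importing it eagerly would
# re-enter alpha before alpha finished initialising.

def patch_everything():
    from . import beta  # deferred: by now both modules can import cleanly
    beta.apply_patches()


# pkg/beta.py
from . import alpha  # safe, because alpha no longer imports beta at import time


def apply_patches():
    print("patched")
```

Applied to the file shown below, the same idea would mean moving `from . import msvc` out of module scope and into the function that actually needs it.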
<code>
[start of setuptools/monkey.py]
1 """
2 Monkey patching of distutils.
3 """
4
5 import sys
6 import distutils.filelist
7 import platform
8
9 import setuptools
10 from . import msvc
11
12
13 __all__ = []
14 """
15 Everything is private. Contact the project team
16 if you think you need this functionality.
17 """
18
19
20 def get_unpatched(cls):
21 """Protect against re-patching the distutils if reloaded
22
23 Also ensures that no other distutils extension monkeypatched the distutils
24 first.
25 """
26 while cls.__module__.startswith('setuptools'):
27 cls, = cls.__bases__
28 if not cls.__module__.startswith('distutils'):
29 raise AssertionError(
30 "distutils has already been patched by %r" % cls
31 )
32 return cls
33
34
35 def patch_all():
36 # we can't patch distutils.cmd, alas
37 distutils.core.Command = setuptools.Command
38
39 has_issue_12885 = (
40 sys.version_info < (3, 4, 6)
41 or
42 (3, 5) < sys.version_info <= (3, 5, 3)
43 or
44 (3, 6) < sys.version_info
45 )
46
47 if has_issue_12885:
48 # fix findall bug in distutils (http://bugs.python.org/issue12885)
49 distutils.filelist.findall = setuptools.findall
50
51 needs_warehouse = (
52 sys.version_info < (2, 7, 13)
53 or
54 (3, 0) < sys.version_info < (3, 3, 7)
55 or
56 (3, 4) < sys.version_info < (3, 4, 6)
57 or
58 (3, 5) < sys.version_info <= (3, 5, 3)
59 or
60 (3, 6) < sys.version_info
61 )
62
63 if needs_warehouse:
64 warehouse = 'https://upload.pypi.org/legacy/'
65 distutils.config.PyPIRCCommand.DEFAULT_REPOSITORY = warehouse
66
67 _patch_distribution_metadata_write_pkg_file()
68 _patch_distribution_metadata_write_pkg_info()
69
70 # Install Distribution throughout the distutils
71 for module in distutils.dist, distutils.core, distutils.cmd:
72 module.Distribution = setuptools.dist.Distribution
73
74 # Install the patched Extension
75 distutils.core.Extension = setuptools.extension.Extension
76 distutils.extension.Extension = setuptools.extension.Extension
77 if 'distutils.command.build_ext' in sys.modules:
78 sys.modules['distutils.command.build_ext'].Extension = (
79 setuptools.extension.Extension
80 )
81
82 patch_for_msvc_specialized_compiler()
83
84
85 def _patch_distribution_metadata_write_pkg_file():
86 """Patch write_pkg_file to also write Requires-Python/Requires-External"""
87 distutils.dist.DistributionMetadata.write_pkg_file = (
88 setuptools.dist.write_pkg_file
89 )
90
91
92 def _patch_distribution_metadata_write_pkg_info():
93 """
94 Workaround issue #197 - Python 3 prior to 3.2.2 uses an environment-local
95 encoding to save the pkg_info. Monkey-patch its write_pkg_info method to
96 correct this undesirable behavior.
97 """
98 environment_local = (3,) <= sys.version_info[:3] < (3, 2, 2)
99 if not environment_local:
100 return
101
102 distutils.dist.DistributionMetadata.write_pkg_info = (
103 setuptools.dist.write_pkg_info
104 )
105
106
107 unpatched = dict()
108
109
110 def patch_for_msvc_specialized_compiler():
111 """
112 Patch functions in distutils to use standalone Microsoft Visual C++
113 compilers.
114 """
115 try:
116 # Distutil file for MSVC++ 9.0 and upper (Python 2.7 to 3.4)
117 import distutils.msvc9compiler as msvc9compiler
118 except ImportError:
119 pass
120
121 try:
122 # Distutil file for MSVC++ 14.0 and upper (Python 3.5+)
123 import distutils._msvccompiler as msvc14compiler
124 except ImportError:
125 pass
126
127 if platform.system() != 'Windows':
128 # Compilers only availables on Microsoft Windows
129 return
130
131 if unpatched:
132 # Already patched
133 return
134
135 try:
136 # Patch distutils.msvc9compiler
137 unpatched['msvc9_find_vcvarsall'] = msvc9compiler.find_vcvarsall
138 msvc9compiler.find_vcvarsall = msvc.msvc9_find_vcvarsall
139 unpatched['msvc9_query_vcvarsall'] = msvc9compiler.query_vcvarsall
140 msvc9compiler.query_vcvarsall = msvc.msvc9_query_vcvarsall
141 except NameError:
142 pass
143
144 try:
145 # Patch distutils._msvccompiler._get_vc_env
146 unpatched['msvc14_get_vc_env'] = msvc14compiler._get_vc_env
147 msvc14compiler._get_vc_env = msvc.msvc14_get_vc_env
148 except NameError:
149 pass
150
151 try:
152 # Patch distutils._msvccompiler.gen_lib_options for Numpy
153 unpatched['msvc14_gen_lib_options'] = msvc14compiler.gen_lib_options
154 msvc14compiler.gen_lib_options = msvc.msvc14_gen_lib_options
155 except NameError:
156 pass
157
[end of setuptools/monkey.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/monkey.py b/setuptools/monkey.py
--- a/setuptools/monkey.py
+++ b/setuptools/monkey.py
@@ -7,7 +7,6 @@
import platform
import setuptools
-from . import msvc
__all__ = []
@@ -112,6 +111,8 @@
Patch functions in distutils to use standalone Microsoft Visual C++
compilers.
"""
+ from . import msvc
+
try:
# Distutil file for MSVC++ 9.0 and upper (Python 2.7 to 3.4)
import distutils.msvc9compiler as msvc9compiler
| {"golden_diff": "diff --git a/setuptools/monkey.py b/setuptools/monkey.py\n--- a/setuptools/monkey.py\n+++ b/setuptools/monkey.py\n@@ -7,7 +7,6 @@\n import platform\n \n import setuptools\n-from . import msvc\n \n \n __all__ = []\n@@ -112,6 +111,8 @@\n Patch functions in distutils to use standalone Microsoft Visual C++\n compilers.\n \"\"\"\n+ from . import msvc\n+\n try:\n # Distutil file for MSVC++ 9.0 and upper (Python 2.7 to 3.4)\n import distutils.msvc9compiler as msvc9compiler\n", "issue": "ImportError: cannot import name monkey\n```\nuser@hostname:~/project$ venv/bin/pip install -U setuptools==27.1.0\nCollecting setuptools==27.1.0\n Downloading setuptools-27.1.0-py2.py3-none-any.whl (464kB)\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 471kB 1.6MB/s \nInstalling collected packages: setuptools\n Found existing installation: setuptools 27.1.1\n Uninstalling setuptools-27.1.1:\n Successfully uninstalled setuptools-27.1.1\nSuccessfully installed setuptools-27.1.0\nuser@hostname:~/project$ venv/bin/python -c \"import setuptools\"\nuser@hostname:~/project$ venv/bin/pip install -U setuptools==27.1.1\nCollecting setuptools==27.1.1\n Using cached setuptools-27.1.1-py2.py3-none-any.whl\nInstalling collected packages: setuptools\n Found existing installation: setuptools 27.1.0\n Uninstalling setuptools-27.1.0:\n Successfully uninstalled setuptools-27.1.0\nSuccessfully installed setuptools-27.1.1\nuser@hostname:~/project$ venv/bin/python -c \"import setuptools\"\nTraceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/__init__.py\", line 13, in <module>\n from setuptools.extension import Extension\n File \"/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/extension.py\", line 9, in <module>\n from .monkey import get_unpatched\n File \"/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/monkey.py\", line 10, in <module>\n from . import msvc\n File \"/home/user/project/venv/local/lib/python2.7/site-packages/setuptools/msvc.py\", line 27, in <module>\n from . import monkey\nImportError: cannot import name monkey\nuser@hostname:~/project$ \n```\n\n", "before_files": [{"content": "\"\"\"\nMonkey patching of distutils.\n\"\"\"\n\nimport sys\nimport distutils.filelist\nimport platform\n\nimport setuptools\nfrom . import msvc\n\n\n__all__ = []\n\"\"\"\nEverything is private. 
Contact the project team\nif you think you need this functionality.\n\"\"\"\n\n\ndef get_unpatched(cls):\n \"\"\"Protect against re-patching the distutils if reloaded\n\n Also ensures that no other distutils extension monkeypatched the distutils\n first.\n \"\"\"\n while cls.__module__.startswith('setuptools'):\n cls, = cls.__bases__\n if not cls.__module__.startswith('distutils'):\n raise AssertionError(\n \"distutils has already been patched by %r\" % cls\n )\n return cls\n\n\ndef patch_all():\n # we can't patch distutils.cmd, alas\n distutils.core.Command = setuptools.Command\n\n has_issue_12885 = (\n sys.version_info < (3, 4, 6)\n or\n (3, 5) < sys.version_info <= (3, 5, 3)\n or\n (3, 6) < sys.version_info\n )\n\n if has_issue_12885:\n # fix findall bug in distutils (http://bugs.python.org/issue12885)\n distutils.filelist.findall = setuptools.findall\n\n needs_warehouse = (\n sys.version_info < (2, 7, 13)\n or\n (3, 0) < sys.version_info < (3, 3, 7)\n or\n (3, 4) < sys.version_info < (3, 4, 6)\n or\n (3, 5) < sys.version_info <= (3, 5, 3)\n or\n (3, 6) < sys.version_info\n )\n\n if needs_warehouse:\n warehouse = 'https://upload.pypi.org/legacy/'\n distutils.config.PyPIRCCommand.DEFAULT_REPOSITORY = warehouse\n\n _patch_distribution_metadata_write_pkg_file()\n _patch_distribution_metadata_write_pkg_info()\n\n # Install Distribution throughout the distutils\n for module in distutils.dist, distutils.core, distutils.cmd:\n module.Distribution = setuptools.dist.Distribution\n\n # Install the patched Extension\n distutils.core.Extension = setuptools.extension.Extension\n distutils.extension.Extension = setuptools.extension.Extension\n if 'distutils.command.build_ext' in sys.modules:\n sys.modules['distutils.command.build_ext'].Extension = (\n setuptools.extension.Extension\n )\n\n patch_for_msvc_specialized_compiler()\n\n\ndef _patch_distribution_metadata_write_pkg_file():\n \"\"\"Patch write_pkg_file to also write Requires-Python/Requires-External\"\"\"\n distutils.dist.DistributionMetadata.write_pkg_file = (\n setuptools.dist.write_pkg_file\n )\n\n\ndef _patch_distribution_metadata_write_pkg_info():\n \"\"\"\n Workaround issue #197 - Python 3 prior to 3.2.2 uses an environment-local\n encoding to save the pkg_info. 
Monkey-patch its write_pkg_info method to\n correct this undesirable behavior.\n \"\"\"\n environment_local = (3,) <= sys.version_info[:3] < (3, 2, 2)\n if not environment_local:\n return\n\n distutils.dist.DistributionMetadata.write_pkg_info = (\n setuptools.dist.write_pkg_info\n )\n\n\nunpatched = dict()\n\n\ndef patch_for_msvc_specialized_compiler():\n \"\"\"\n Patch functions in distutils to use standalone Microsoft Visual C++\n compilers.\n \"\"\"\n try:\n # Distutil file for MSVC++ 9.0 and upper (Python 2.7 to 3.4)\n import distutils.msvc9compiler as msvc9compiler\n except ImportError:\n pass\n\n try:\n # Distutil file for MSVC++ 14.0 and upper (Python 3.5+)\n import distutils._msvccompiler as msvc14compiler\n except ImportError:\n pass\n\n if platform.system() != 'Windows':\n # Compilers only availables on Microsoft Windows\n return\n\n if unpatched:\n # Already patched\n return\n\n try:\n # Patch distutils.msvc9compiler\n unpatched['msvc9_find_vcvarsall'] = msvc9compiler.find_vcvarsall\n msvc9compiler.find_vcvarsall = msvc.msvc9_find_vcvarsall\n unpatched['msvc9_query_vcvarsall'] = msvc9compiler.query_vcvarsall\n msvc9compiler.query_vcvarsall = msvc.msvc9_query_vcvarsall\n except NameError:\n pass\n\n try:\n # Patch distutils._msvccompiler._get_vc_env\n unpatched['msvc14_get_vc_env'] = msvc14compiler._get_vc_env\n msvc14compiler._get_vc_env = msvc.msvc14_get_vc_env\n except NameError:\n pass\n\n try:\n # Patch distutils._msvccompiler.gen_lib_options for Numpy\n unpatched['msvc14_gen_lib_options'] = msvc14compiler.gen_lib_options\n msvc14compiler.gen_lib_options = msvc.msvc14_gen_lib_options\n except NameError:\n pass\n", "path": "setuptools/monkey.py"}]} | 2,566 | 145 |
gh_patches_debug_2400 | rasdani/github-patches | git_diff | dask__distributed-2975 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dask.distributed.progress no longer callable in 2.3.0?
We've used the progress() function from dask.distributed a bunch in the past to display a progress bar in JupyterLab, but it seems to have stopped working after upgrading to Dask 2.3.0:
```
from dask.distributed import Client, progress
import dask.dataframe as dd
df = dd.demo.make_timeseries('2010', '2016',
{'value': float, 'name': str, 'id': int},
freq='10s', partition_freq='7d', seed=1)
df = df.persist()
progress(df)
```
Executing this in a single cell in JupyterLab (with an existing Dask cluster already running) results in:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-16af814d7204> in <module>
7
8 df = df.persist()
----> 9 progress(df)
TypeError: 'module' object is not callable
```
Let me know if I can provide any more info. Thanks!
</issue>
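The `'module' object is not callable` message indicates that `progress` resolved to a submodule rather than the function: when a package's `__init__` no longer re-exports a name, `from package import name` falls back to importing a submodule of that name, and the corrected import in this entry's fix pulls the function from `distributed.diagnostics.progressbar` instead. A self-contained reproduction of that import behaviour, with an invented `pkg/` layout:

```python
# pkg/__init__.py   -- intentionally empty: nothing is re-exported

# pkg/progress.py
def progress(obj):
    return "progress bar for %r" % (obj,)

# main.py
from pkg import progress      # binds the SUBMODULE pkg.progress, not the function
progress("df")                 # TypeError: 'module' object is not callable

from pkg.progress import progress as progress_fn
progress_fn("df")              # works: this is the function itself
```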
<code>
[start of distributed/__init__.py]
1 from . import config
2 from dask.config import config
3 from .actor import Actor, ActorFuture
4 from .core import connect, rpc
5 from .deploy import LocalCluster, Adaptive, SpecCluster
6 from .diagnostics import progress
7 from .client import (
8 Client,
9 Executor,
10 CompatibleExecutor,
11 wait,
12 as_completed,
13 default_client,
14 fire_and_forget,
15 Future,
16 futures_of,
17 get_task_stream,
18 )
19 from .lock import Lock
20 from .nanny import Nanny
21 from .pubsub import Pub, Sub
22 from .queues import Queue
23 from .scheduler import Scheduler
24 from .threadpoolexecutor import rejoin
25 from .utils import sync
26 from .variable import Variable
27 from .worker import Worker, get_worker, get_client, secede, Reschedule
28 from .worker_client import local_client, worker_client
29
30 from tornado.gen import TimeoutError
31
32 from ._version import get_versions
33
34 versions = get_versions()
35 __version__ = versions["version"]
36 __git_revision__ = versions["full-revisionid"]
37 del get_versions, versions
38
[end of distributed/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/distributed/__init__.py b/distributed/__init__.py
--- a/distributed/__init__.py
+++ b/distributed/__init__.py
@@ -3,7 +3,7 @@
from .actor import Actor, ActorFuture
from .core import connect, rpc
from .deploy import LocalCluster, Adaptive, SpecCluster
-from .diagnostics import progress
+from .diagnostics.progressbar import progress
from .client import (
Client,
Executor,
| {"golden_diff": "diff --git a/distributed/__init__.py b/distributed/__init__.py\n--- a/distributed/__init__.py\n+++ b/distributed/__init__.py\n@@ -3,7 +3,7 @@\n from .actor import Actor, ActorFuture\n from .core import connect, rpc\n from .deploy import LocalCluster, Adaptive, SpecCluster\n-from .diagnostics import progress\n+from .diagnostics.progressbar import progress\n from .client import (\n Client,\n Executor,\n", "issue": "dask.distributed.progress no longer callable in 2.3.0?\nWe've used the progress() function from dask.distributed a bunch in the past to display a progress bar in JupyterLab, but it seems to have stopped working after upgrading to Dask 2.3.0:\r\n\r\n```\r\nfrom dask.distributed import Client, progress\r\nimport dask.dataframe as dd\r\n\r\ndf = dd.demo.make_timeseries('2010', '2016',\r\n {'value': float, 'name': str, 'id': int},\r\n freq='10s', partition_freq='7d', seed=1)\r\n\r\ndf = df.persist()\r\nprogress(df)\r\n```\r\n\r\nExecuting this in a single cell in JupyterLab (with an existing Dask cluster already running) results in:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-16af814d7204> in <module>\r\n 7 \r\n 8 df = df.persist()\r\n----> 9 progress(df)\r\n\r\nTypeError: 'module' object is not callable\r\n```\r\n\r\nLet me know if I can provide any more info. Thanks!\n", "before_files": [{"content": "from . import config\nfrom dask.config import config\nfrom .actor import Actor, ActorFuture\nfrom .core import connect, rpc\nfrom .deploy import LocalCluster, Adaptive, SpecCluster\nfrom .diagnostics import progress\nfrom .client import (\n Client,\n Executor,\n CompatibleExecutor,\n wait,\n as_completed,\n default_client,\n fire_and_forget,\n Future,\n futures_of,\n get_task_stream,\n)\nfrom .lock import Lock\nfrom .nanny import Nanny\nfrom .pubsub import Pub, Sub\nfrom .queues import Queue\nfrom .scheduler import Scheduler\nfrom .threadpoolexecutor import rejoin\nfrom .utils import sync\nfrom .variable import Variable\nfrom .worker import Worker, get_worker, get_client, secede, Reschedule\nfrom .worker_client import local_client, worker_client\n\nfrom tornado.gen import TimeoutError\n\nfrom ._version import get_versions\n\nversions = get_versions()\n__version__ = versions[\"version\"]\n__git_revision__ = versions[\"full-revisionid\"]\ndel get_versions, versions\n", "path": "distributed/__init__.py"}]} | 1,078 | 103 |
gh_patches_debug_31673 | rasdani/github-patches | git_diff | getnikola__nikola-2541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugin dependencies
Plugins (from the plugins repo) should be able to depend on each other.
</issue>
<code>
[start of nikola/plugins/command/plugin.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2016 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Manage plugins."""
28
29 from __future__ import print_function
30 import io
31 import os
32 import sys
33 import shutil
34 import subprocess
35 import time
36 import requests
37
38 import pygments
39 from pygments.lexers import PythonLexer
40 from pygments.formatters import TerminalFormatter
41
42 from nikola.plugin_categories import Command
43 from nikola import utils
44
45 LOGGER = utils.get_logger('plugin', utils.STDERR_HANDLER)
46
47
48 class CommandPlugin(Command):
49 """Manage plugins."""
50
51 json = None
52 name = "plugin"
53 doc_usage = "[-u url] [--user] [-i name] [-r name] [--upgrade] [-l] [--list-installed]"
54 doc_purpose = "manage plugins"
55 output_dir = None
56 needs_config = False
57 cmd_options = [
58 {
59 'name': 'install',
60 'short': 'i',
61 'long': 'install',
62 'type': str,
63 'default': '',
64 'help': 'Install a plugin.',
65 },
66 {
67 'name': 'uninstall',
68 'long': 'uninstall',
69 'short': 'r',
70 'type': str,
71 'default': '',
72 'help': 'Uninstall a plugin.'
73 },
74 {
75 'name': 'list',
76 'short': 'l',
77 'long': 'list',
78 'type': bool,
79 'default': False,
80 'help': 'Show list of available plugins.'
81 },
82 {
83 'name': 'url',
84 'short': 'u',
85 'long': 'url',
86 'type': str,
87 'help': "URL for the plugin repository (default: "
88 "https://plugins.getnikola.com/v7/plugins.json)",
89 'default': 'https://plugins.getnikola.com/v7/plugins.json'
90 },
91 {
92 'name': 'user',
93 'long': 'user',
94 'type': bool,
95 'help': "Install user-wide, available for all sites.",
96 'default': False
97 },
98 {
99 'name': 'upgrade',
100 'long': 'upgrade',
101 'type': bool,
102 'help': "Upgrade all installed plugins.",
103 'default': False
104 },
105 {
106 'name': 'list_installed',
107 'long': 'list-installed',
108 'type': bool,
109 'help': "List the installed plugins with their location.",
110 'default': False
111 },
112 ]
113
114 def _execute(self, options, args):
115 """Install plugin into current site."""
116 url = options['url']
117 user_mode = options['user']
118
119 # See the "mode" we need to operate in
120 install = options.get('install')
121 uninstall = options.get('uninstall')
122 upgrade = options.get('upgrade')
123 list_available = options.get('list')
124 list_installed = options.get('list_installed')
125 show_install_notes = options.get('show_install_notes', True)
126 command_count = [bool(x) for x in (
127 install,
128 uninstall,
129 upgrade,
130 list_available,
131 list_installed)].count(True)
132 if command_count > 1 or command_count == 0:
133 print(self.help())
134 return 2
135
136 if options.get('output_dir') is not None:
137 self.output_dir = options.get('output_dir')
138 else:
139 if not self.site.configured and not user_mode and install:
140 LOGGER.notice('No site found, assuming --user')
141 user_mode = True
142
143 if user_mode:
144 self.output_dir = os.path.expanduser('~/.nikola/plugins')
145 else:
146 self.output_dir = 'plugins'
147
148 if list_available:
149 return self.list_available(url)
150 elif list_installed:
151 return self.list_installed()
152 elif upgrade:
153 return self.do_upgrade(url)
154 elif uninstall:
155 return self.do_uninstall(uninstall)
156 elif install:
157 return self.do_install(url, install, show_install_notes)
158
159 def list_available(self, url):
160 """List all available plugins."""
161 data = self.get_json(url)
162 print("Available Plugins:")
163 print("------------------")
164 for plugin in sorted(data.keys()):
165 print(plugin)
166 return 0
167
168 def list_installed(self):
169 """List installed plugins."""
170 plugins = []
171 for plugin in self.site.plugin_manager.getAllPlugins():
172 p = plugin.path
173 if os.path.isdir(p):
174 p = p + os.sep
175 else:
176 p = p + '.py'
177 plugins.append([plugin.name, p])
178
179 plugins.sort()
180 print('Installed Plugins:')
181 print('------------------')
182 for name, path in plugins:
183 print('{0} at {1}'.format(name, path))
184 print('\n\nAlso, you have disabled these plugins: {}'.format(self.site.config['DISABLED_PLUGINS']))
185 return 0
186
187 def do_upgrade(self, url):
188 """Upgrade all installed plugins."""
189 LOGGER.warning('This is not very smart, it just reinstalls some plugins and hopes for the best')
190 data = self.get_json(url)
191 plugins = []
192 for plugin in self.site.plugin_manager.getAllPlugins():
193 p = plugin.path
194 if os.path.isdir(p):
195 p = p + os.sep
196 else:
197 p = p + '.py'
198 if plugin.name in data:
199 plugins.append([plugin.name, p])
200 print('Will upgrade {0} plugins: {1}'.format(len(plugins), ', '.join(n for n, _ in plugins)))
201 for name, path in plugins:
202 print('Upgrading {0}'.format(name))
203 p = path
204 while True:
205 tail, head = os.path.split(path)
206 if head == 'plugins':
207 self.output_dir = path
208 break
209 elif tail == '':
210 LOGGER.error("Can't find the plugins folder for path: {0}".format(p))
211 return 1
212 else:
213 path = tail
214 self.do_install(url, name)
215 return 0
216
217 def do_install(self, url, name, show_install_notes=True):
218 """Download and install a plugin."""
219 data = self.get_json(url)
220 if name in data:
221 utils.makedirs(self.output_dir)
222 url = data[name]
223 LOGGER.info("Downloading '{0}'".format(url))
224 try:
225 zip_data = requests.get(url).content
226 except requests.exceptions.SSLError:
227 LOGGER.warning("SSL error, using http instead of https (press ^C to abort)")
228 time.sleep(1)
229 url = url.replace('https', 'http', 1)
230 zip_data = requests.get(url).content
231
232 zip_file = io.BytesIO()
233 zip_file.write(zip_data)
234 LOGGER.info('Extracting: {0} into {1}/'.format(name, self.output_dir))
235 utils.extract_all(zip_file, self.output_dir)
236 dest_path = os.path.join(self.output_dir, name)
237 else:
238 try:
239 plugin_path = utils.get_plugin_path(name)
240 except:
241 LOGGER.error("Can't find plugin " + name)
242 return 1
243
244 utils.makedirs(self.output_dir)
245 dest_path = os.path.join(self.output_dir, name)
246 if os.path.exists(dest_path):
247 LOGGER.error("{0} is already installed".format(name))
248 return 1
249
250 LOGGER.info('Copying {0} into plugins'.format(plugin_path))
251 shutil.copytree(plugin_path, dest_path)
252
253 reqpath = os.path.join(dest_path, 'requirements.txt')
254 if os.path.exists(reqpath):
255 LOGGER.notice('This plugin has Python dependencies.')
256 LOGGER.info('Installing dependencies with pip...')
257 try:
258 subprocess.check_call((sys.executable, '-m', 'pip', 'install', '-r', reqpath))
259 except subprocess.CalledProcessError:
260 LOGGER.error('Could not install the dependencies.')
261 print('Contents of the requirements.txt file:\n')
262 with io.open(reqpath, 'r', encoding='utf-8') as fh:
263 print(utils.indent(fh.read(), 4 * ' '))
264 print('You have to install those yourself or through a '
265 'package manager.')
266 else:
267 LOGGER.info('Dependency installation succeeded.')
268 reqnpypath = os.path.join(dest_path, 'requirements-nonpy.txt')
269 if os.path.exists(reqnpypath):
270 LOGGER.notice('This plugin has third-party '
271 'dependencies you need to install '
272 'manually.')
273 print('Contents of the requirements-nonpy.txt file:\n')
274 with io.open(reqnpypath, 'r', encoding='utf-8') as fh:
275 for l in fh.readlines():
276 i, j = l.split('::')
277 print(utils.indent(i.strip(), 4 * ' '))
278 print(utils.indent(j.strip(), 8 * ' '))
279 print()
280
281 print('You have to install those yourself or through a package '
282 'manager.')
283 confpypath = os.path.join(dest_path, 'conf.py.sample')
284 if os.path.exists(confpypath) and show_install_notes:
285 LOGGER.notice('This plugin has a sample config file. Integrate it with yours in order to make this plugin work!')
286 print('Contents of the conf.py.sample file:\n')
287 with io.open(confpypath, 'r', encoding='utf-8') as fh:
288 if self.site.colorful:
289 print(utils.indent(pygments.highlight(
290 fh.read(), PythonLexer(), TerminalFormatter()),
291 4 * ' '))
292 else:
293 print(utils.indent(fh.read(), 4 * ' '))
294 return 0
295
296 def do_uninstall(self, name):
297 """Uninstall a plugin."""
298 for plugin in self.site.plugin_manager.getAllPlugins(): # FIXME: this is repeated thrice
299 if name == plugin.name: # Uninstall this one
300 p = plugin.path
301 if os.path.isdir(p):
302 # Plugins that have a package in them need to delete parent
303 # Issue #2356
304 p = p + os.sep
305 p = os.path.abspath(os.path.join(p, os.pardir))
306 else:
307 p = os.path.dirname(p)
308 LOGGER.warning('About to uninstall plugin: {0}'.format(name))
309 LOGGER.warning('This will delete {0}'.format(p))
310 sure = utils.ask_yesno('Are you sure?')
311 if sure:
312 LOGGER.warning('Removing {0}'.format(p))
313 shutil.rmtree(p)
314 return 0
315 return 1
316 LOGGER.error('Unknown plugin: {0}'.format(name))
317 return 1
318
319 def get_json(self, url):
320 """Download the JSON file with all plugins."""
321 if self.json is None:
322 try:
323 self.json = requests.get(url).json()
324 except requests.exceptions.SSLError:
325 LOGGER.warning("SSL error, using http instead of https (press ^C to abort)")
326 time.sleep(1)
327 url = url.replace('https', 'http', 1)
328 self.json = requests.get(url).json()
329 return self.json
330
[end of nikola/plugins/command/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/command/plugin.py b/nikola/plugins/command/plugin.py
--- a/nikola/plugins/command/plugin.py
+++ b/nikola/plugins/command/plugin.py
@@ -265,6 +265,7 @@
'package manager.')
else:
LOGGER.info('Dependency installation succeeded.')
+
reqnpypath = os.path.join(dest_path, 'requirements-nonpy.txt')
if os.path.exists(reqnpypath):
LOGGER.notice('This plugin has third-party '
@@ -280,6 +281,25 @@
print('You have to install those yourself or through a package '
'manager.')
+
+ req_plug_path = os.path.join(dest_path, 'requirements-plugins.txt')
+ if os.path.exists(req_plug_path):
+ LOGGER.notice('This plugin requires other Nikola plugins.')
+ LOGGER.info('Installing plugins using Nikola...')
+ print('Contents of the requirements-nonpy.txt file:\n')
+ try:
+ with io.open(req_plug_path, 'r', encoding='utf-8') as inf:
+ for plugname in inf.readlines():
+ self.do_install(url, plugname, show_install_notes)
+ except subprocess.CalledProcessError:
+ LOGGER.error('Could not install a plugin.')
+ print('Contents of the requirements-plugins.txt file:\n')
+ with io.open(req_plug_path, 'r', encoding='utf-8') as fh:
+ print(utils.indent(fh.read(), 4 * ' '))
+ print('You have to install those yourself manually.')
+ else:
+ LOGGER.info('Dependency installation succeeded.')
+
confpypath = os.path.join(dest_path, 'conf.py.sample')
if os.path.exists(confpypath) and show_install_notes:
LOGGER.notice('This plugin has a sample config file. Integrate it with yours in order to make this plugin work!')
| {"golden_diff": "diff --git a/nikola/plugins/command/plugin.py b/nikola/plugins/command/plugin.py\n--- a/nikola/plugins/command/plugin.py\n+++ b/nikola/plugins/command/plugin.py\n@@ -265,6 +265,7 @@\n 'package manager.')\n else:\n LOGGER.info('Dependency installation succeeded.')\n+\n reqnpypath = os.path.join(dest_path, 'requirements-nonpy.txt')\n if os.path.exists(reqnpypath):\n LOGGER.notice('This plugin has third-party '\n@@ -280,6 +281,25 @@\n \n print('You have to install those yourself or through a package '\n 'manager.')\n+\n+ req_plug_path = os.path.join(dest_path, 'requirements-plugins.txt')\n+ if os.path.exists(req_plug_path):\n+ LOGGER.notice('This plugin requires other Nikola plugins.')\n+ LOGGER.info('Installing plugins using Nikola...')\n+ print('Contents of the requirements-nonpy.txt file:\\n')\n+ try:\n+ with io.open(req_plug_path, 'r', encoding='utf-8') as inf:\n+ for plugname in inf.readlines():\n+ self.do_install(url, plugname, show_install_notes)\n+ except subprocess.CalledProcessError:\n+ LOGGER.error('Could not install a plugin.')\n+ print('Contents of the requirements-plugins.txt file:\\n')\n+ with io.open(req_plug_path, 'r', encoding='utf-8') as fh:\n+ print(utils.indent(fh.read(), 4 * ' '))\n+ print('You have to install those yourself manually.')\n+ else:\n+ LOGGER.info('Dependency installation succeeded.')\n+\n confpypath = os.path.join(dest_path, 'conf.py.sample')\n if os.path.exists(confpypath) and show_install_notes:\n LOGGER.notice('This plugin has a sample config file. Integrate it with yours in order to make this plugin work!')\n", "issue": "Plugin dependencies\nPlugins (from the plugins repo) should be able to depend on each other.\n\nPlugin dependencies\nPlugins (from the plugins repo) should be able to depend on each other.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2016 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Manage plugins.\"\"\"\n\nfrom __future__ import print_function\nimport io\nimport os\nimport sys\nimport shutil\nimport subprocess\nimport time\nimport requests\n\nimport pygments\nfrom pygments.lexers import PythonLexer\nfrom pygments.formatters import TerminalFormatter\n\nfrom nikola.plugin_categories import Command\nfrom nikola import utils\n\nLOGGER = utils.get_logger('plugin', utils.STDERR_HANDLER)\n\n\nclass CommandPlugin(Command):\n \"\"\"Manage plugins.\"\"\"\n\n json = None\n name = \"plugin\"\n doc_usage = \"[-u url] [--user] [-i name] [-r name] [--upgrade] [-l] [--list-installed]\"\n doc_purpose = \"manage plugins\"\n output_dir = None\n needs_config = False\n cmd_options = [\n {\n 'name': 'install',\n 'short': 'i',\n 'long': 'install',\n 'type': str,\n 'default': '',\n 'help': 'Install a plugin.',\n },\n {\n 'name': 'uninstall',\n 'long': 'uninstall',\n 'short': 'r',\n 'type': str,\n 'default': '',\n 'help': 'Uninstall a plugin.'\n },\n {\n 'name': 'list',\n 'short': 'l',\n 'long': 'list',\n 'type': bool,\n 'default': False,\n 'help': 'Show list of available plugins.'\n },\n {\n 'name': 'url',\n 'short': 'u',\n 'long': 'url',\n 'type': str,\n 'help': \"URL for the plugin repository (default: \"\n \"https://plugins.getnikola.com/v7/plugins.json)\",\n 'default': 'https://plugins.getnikola.com/v7/plugins.json'\n },\n {\n 'name': 'user',\n 'long': 'user',\n 'type': bool,\n 'help': \"Install user-wide, available for all sites.\",\n 'default': False\n },\n {\n 'name': 'upgrade',\n 'long': 'upgrade',\n 'type': bool,\n 'help': \"Upgrade all installed plugins.\",\n 'default': False\n },\n {\n 'name': 'list_installed',\n 'long': 'list-installed',\n 'type': bool,\n 'help': \"List the installed plugins with their location.\",\n 'default': False\n },\n ]\n\n def _execute(self, options, args):\n \"\"\"Install plugin into current site.\"\"\"\n url = options['url']\n user_mode = options['user']\n\n # See the \"mode\" we need to operate in\n install = options.get('install')\n uninstall = options.get('uninstall')\n upgrade = options.get('upgrade')\n list_available = options.get('list')\n list_installed = options.get('list_installed')\n show_install_notes = options.get('show_install_notes', True)\n command_count = [bool(x) for x in (\n install,\n uninstall,\n upgrade,\n list_available,\n list_installed)].count(True)\n if command_count > 1 or command_count == 0:\n print(self.help())\n return 2\n\n if options.get('output_dir') is not None:\n self.output_dir = options.get('output_dir')\n else:\n if not self.site.configured and not user_mode and install:\n LOGGER.notice('No site found, assuming --user')\n user_mode = True\n\n if user_mode:\n self.output_dir = os.path.expanduser('~/.nikola/plugins')\n else:\n self.output_dir = 'plugins'\n\n if list_available:\n return self.list_available(url)\n elif list_installed:\n return self.list_installed()\n elif upgrade:\n return self.do_upgrade(url)\n elif uninstall:\n return self.do_uninstall(uninstall)\n elif install:\n return self.do_install(url, install, show_install_notes)\n\n def list_available(self, url):\n \"\"\"List all available plugins.\"\"\"\n data = self.get_json(url)\n print(\"Available Plugins:\")\n print(\"------------------\")\n for plugin in sorted(data.keys()):\n 
print(plugin)\n return 0\n\n def list_installed(self):\n \"\"\"List installed plugins.\"\"\"\n plugins = []\n for plugin in self.site.plugin_manager.getAllPlugins():\n p = plugin.path\n if os.path.isdir(p):\n p = p + os.sep\n else:\n p = p + '.py'\n plugins.append([plugin.name, p])\n\n plugins.sort()\n print('Installed Plugins:')\n print('------------------')\n for name, path in plugins:\n print('{0} at {1}'.format(name, path))\n print('\\n\\nAlso, you have disabled these plugins: {}'.format(self.site.config['DISABLED_PLUGINS']))\n return 0\n\n def do_upgrade(self, url):\n \"\"\"Upgrade all installed plugins.\"\"\"\n LOGGER.warning('This is not very smart, it just reinstalls some plugins and hopes for the best')\n data = self.get_json(url)\n plugins = []\n for plugin in self.site.plugin_manager.getAllPlugins():\n p = plugin.path\n if os.path.isdir(p):\n p = p + os.sep\n else:\n p = p + '.py'\n if plugin.name in data:\n plugins.append([plugin.name, p])\n print('Will upgrade {0} plugins: {1}'.format(len(plugins), ', '.join(n for n, _ in plugins)))\n for name, path in plugins:\n print('Upgrading {0}'.format(name))\n p = path\n while True:\n tail, head = os.path.split(path)\n if head == 'plugins':\n self.output_dir = path\n break\n elif tail == '':\n LOGGER.error(\"Can't find the plugins folder for path: {0}\".format(p))\n return 1\n else:\n path = tail\n self.do_install(url, name)\n return 0\n\n def do_install(self, url, name, show_install_notes=True):\n \"\"\"Download and install a plugin.\"\"\"\n data = self.get_json(url)\n if name in data:\n utils.makedirs(self.output_dir)\n url = data[name]\n LOGGER.info(\"Downloading '{0}'\".format(url))\n try:\n zip_data = requests.get(url).content\n except requests.exceptions.SSLError:\n LOGGER.warning(\"SSL error, using http instead of https (press ^C to abort)\")\n time.sleep(1)\n url = url.replace('https', 'http', 1)\n zip_data = requests.get(url).content\n\n zip_file = io.BytesIO()\n zip_file.write(zip_data)\n LOGGER.info('Extracting: {0} into {1}/'.format(name, self.output_dir))\n utils.extract_all(zip_file, self.output_dir)\n dest_path = os.path.join(self.output_dir, name)\n else:\n try:\n plugin_path = utils.get_plugin_path(name)\n except:\n LOGGER.error(\"Can't find plugin \" + name)\n return 1\n\n utils.makedirs(self.output_dir)\n dest_path = os.path.join(self.output_dir, name)\n if os.path.exists(dest_path):\n LOGGER.error(\"{0} is already installed\".format(name))\n return 1\n\n LOGGER.info('Copying {0} into plugins'.format(plugin_path))\n shutil.copytree(plugin_path, dest_path)\n\n reqpath = os.path.join(dest_path, 'requirements.txt')\n if os.path.exists(reqpath):\n LOGGER.notice('This plugin has Python dependencies.')\n LOGGER.info('Installing dependencies with pip...')\n try:\n subprocess.check_call((sys.executable, '-m', 'pip', 'install', '-r', reqpath))\n except subprocess.CalledProcessError:\n LOGGER.error('Could not install the dependencies.')\n print('Contents of the requirements.txt file:\\n')\n with io.open(reqpath, 'r', encoding='utf-8') as fh:\n print(utils.indent(fh.read(), 4 * ' '))\n print('You have to install those yourself or through a '\n 'package manager.')\n else:\n LOGGER.info('Dependency installation succeeded.')\n reqnpypath = os.path.join(dest_path, 'requirements-nonpy.txt')\n if os.path.exists(reqnpypath):\n LOGGER.notice('This plugin has third-party '\n 'dependencies you need to install '\n 'manually.')\n print('Contents of the requirements-nonpy.txt file:\\n')\n with io.open(reqnpypath, 'r', encoding='utf-8') as 
fh:\n for l in fh.readlines():\n i, j = l.split('::')\n print(utils.indent(i.strip(), 4 * ' '))\n print(utils.indent(j.strip(), 8 * ' '))\n print()\n\n print('You have to install those yourself or through a package '\n 'manager.')\n confpypath = os.path.join(dest_path, 'conf.py.sample')\n if os.path.exists(confpypath) and show_install_notes:\n LOGGER.notice('This plugin has a sample config file. Integrate it with yours in order to make this plugin work!')\n print('Contents of the conf.py.sample file:\\n')\n with io.open(confpypath, 'r', encoding='utf-8') as fh:\n if self.site.colorful:\n print(utils.indent(pygments.highlight(\n fh.read(), PythonLexer(), TerminalFormatter()),\n 4 * ' '))\n else:\n print(utils.indent(fh.read(), 4 * ' '))\n return 0\n\n def do_uninstall(self, name):\n \"\"\"Uninstall a plugin.\"\"\"\n for plugin in self.site.plugin_manager.getAllPlugins(): # FIXME: this is repeated thrice\n if name == plugin.name: # Uninstall this one\n p = plugin.path\n if os.path.isdir(p):\n # Plugins that have a package in them need to delete parent\n # Issue #2356\n p = p + os.sep\n p = os.path.abspath(os.path.join(p, os.pardir))\n else:\n p = os.path.dirname(p)\n LOGGER.warning('About to uninstall plugin: {0}'.format(name))\n LOGGER.warning('This will delete {0}'.format(p))\n sure = utils.ask_yesno('Are you sure?')\n if sure:\n LOGGER.warning('Removing {0}'.format(p))\n shutil.rmtree(p)\n return 0\n return 1\n LOGGER.error('Unknown plugin: {0}'.format(name))\n return 1\n\n def get_json(self, url):\n \"\"\"Download the JSON file with all plugins.\"\"\"\n if self.json is None:\n try:\n self.json = requests.get(url).json()\n except requests.exceptions.SSLError:\n LOGGER.warning(\"SSL error, using http instead of https (press ^C to abort)\")\n time.sleep(1)\n url = url.replace('https', 'http', 1)\n self.json = requests.get(url).json()\n return self.json\n", "path": "nikola/plugins/command/plugin.py"}]} | 4,050 | 412 |
gh_patches_debug_18781 | rasdani/github-patches | git_diff | ivy-llc__ivy-15979 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
selu
</issue>
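For context: `selu` here refers to the scaled exponential linear unit activation that this frontend module is missing. A minimal scalar sketch (constants truncated from the values used in the reference patch further down; the actual frontend operates on ivy arrays rather than Python floats) could look like:

```python
import math

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # scale * x for positive inputs, scale * alpha * (exp(x) - 1) otherwise
    if x > 0:
        return scale * x
    return scale * alpha * (math.exp(x) - 1.0)

print(selu(1.0), selu(-1.0))  # ~1.0507, ~-1.1113
```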
<code>
[start of ivy/functional/frontends/paddle/nn/functional/activation.py]
1 # local
2 from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh
3 from ivy.functional.frontends.paddle.tensor.math import (
4 log_softmax as paddle_log_softmax,
5 )
6
7
8 tanh = paddle_tanh
9 log_softmax = paddle_log_softmax
10
[end of ivy/functional/frontends/paddle/nn/functional/activation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/nn/functional/activation.py b/ivy/functional/frontends/paddle/nn/functional/activation.py
--- a/ivy/functional/frontends/paddle/nn/functional/activation.py
+++ b/ivy/functional/frontends/paddle/nn/functional/activation.py
@@ -1,9 +1,33 @@
# local
+import ivy
+from ivy.func_wrapper import with_supported_dtypes
+from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh
from ivy.functional.frontends.paddle.tensor.math import (
log_softmax as paddle_log_softmax,
)
+@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
+@to_ivy_arrays_and_back
+def selu(
+ x,
+ /,
+ *,
+ alpha=1.6732632423543772848170429916717,
+ scale=1.0507009873554804934193349852946,
+ name=None,
+):
+ if scale <= 1.0:
+ raise ValueError(f"The scale must be greater than 1.0. Received: {scale}.")
+
+ if alpha < 0:
+ raise ValueError(f"The alpha must be no less than zero. Received: {alpha}.")
+
+ ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))
+ arr = scale * ret
+ return ivy.astype(arr, x.dtype)
+
+
tanh = paddle_tanh
log_softmax = paddle_log_softmax
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/nn/functional/activation.py b/ivy/functional/frontends/paddle/nn/functional/activation.py\n--- a/ivy/functional/frontends/paddle/nn/functional/activation.py\n+++ b/ivy/functional/frontends/paddle/nn/functional/activation.py\n@@ -1,9 +1,33 @@\n # local\n+import ivy\n+from ivy.func_wrapper import with_supported_dtypes\n+from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh\n from ivy.functional.frontends.paddle.tensor.math import (\n log_softmax as paddle_log_softmax,\n )\n \n \n+@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def selu(\n+ x,\n+ /,\n+ *,\n+ alpha=1.6732632423543772848170429916717,\n+ scale=1.0507009873554804934193349852946,\n+ name=None,\n+):\n+ if scale <= 1.0:\n+ raise ValueError(f\"The scale must be greater than 1.0. Received: {scale}.\")\n+\n+ if alpha < 0:\n+ raise ValueError(f\"The alpha must be no less than zero. Received: {alpha}.\")\n+\n+ ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))\n+ arr = scale * ret\n+ return ivy.astype(arr, x.dtype)\n+\n+\n tanh = paddle_tanh\n log_softmax = paddle_log_softmax\n", "issue": "selu\n\n", "before_files": [{"content": "# local\nfrom ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh\nfrom ivy.functional.frontends.paddle.tensor.math import (\n log_softmax as paddle_log_softmax,\n)\n\n\ntanh = paddle_tanh\nlog_softmax = paddle_log_softmax\n", "path": "ivy/functional/frontends/paddle/nn/functional/activation.py"}]} | 627 | 420 |
gh_patches_debug_22439 | rasdani/github-patches | git_diff | scrapy__scrapy-5722 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add logging functionality to `memusage` extension
## Summary
Add logging functionality to the `memusage` extension.
## Motivation
Scrapy jobs with `MEMUSAGE_ENABLED: True` and a defined `MEMUSAGE_LIMIT_MB` (all jobs on Scrapy Cloud) can be stopped early due to overuse of RAM and receive the `memusage_exceeded` outcome.
The first thing required to debug a RAM memory leak is to identify the pattern of RAM usage.
Is RAM usage continuously increasing at a high rate during the run?
Or is RAM usage rapidly pushed over the limit in the last several minutes, after hours or even days of stable runtime performance?
Each case requires a different approach to debugging the leak.
It would be much easier to debug this if the value of `self.get_virtual_size()` were added to the log in the `_check_limit` method of the `memusage` extension:
https://github.com/scrapy/scrapy/blob/6ded3cf4cd134b615239babe28bb28c3ff524b05/scrapy/extensions/memusage.py#L77-L89
## Describe alternatives you've considered
Applying the `MEMUSAGE_WARNING_MB` setting at ~80-90% of `MEMUSAGE_LIMIT_MB` - the current implementation of the `memusage` extension warns only once, so that does not provide enough data for this.
Manually subclassing the `memusage` extension with similar changes - like any other option, this requires rescheduling the job, which may not be suitable for jobs with several days (or more) of total runtime. For that reason it is preferable that the change be applied in Scrapy itself, with this logging enabled by default.
## Additional context
Similar functionality previously requested here https://github.com/scrapy/scrapy/issues/2173
</issue>
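A minimal sketch of the kind of periodic log line being asked for above (the names below are illustrative, not the actual Scrapy API; the real change would live inside `_check_limit` of the extension shown next):

```python
import logging

logger = logging.getLogger(__name__)

def log_memory_usage(current_bytes, limit_bytes):
    # Report usage at every periodic check so the growth pattern is visible:
    # a steady climb and a sudden spike call for different debugging approaches.
    logger.info("Current memory usage is %(size)dM of %(limit)dM",
                {'size': current_bytes / 1024 / 1024,
                 'limit': limit_bytes / 1024 / 1024})
```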
<code>
[start of scrapy/extensions/memusage.py]
1 """
2 MemoryUsage extension
3
4 See documentation in docs/topics/extensions.rst
5 """
6 import sys
7 import socket
8 import logging
9 from pprint import pformat
10 from importlib import import_module
11
12 from twisted.internet import task
13
14 from scrapy import signals
15 from scrapy.exceptions import NotConfigured
16 from scrapy.mail import MailSender
17 from scrapy.utils.engine import get_engine_status
18
19 logger = logging.getLogger(__name__)
20
21
22 class MemoryUsage:
23
24 def __init__(self, crawler):
25 if not crawler.settings.getbool('MEMUSAGE_ENABLED'):
26 raise NotConfigured
27 try:
28 # stdlib's resource module is only available on unix platforms.
29 self.resource = import_module('resource')
30 except ImportError:
31 raise NotConfigured
32
33 self.crawler = crawler
34 self.warned = False
35 self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')
36 self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB') * 1024 * 1024
37 self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB') * 1024 * 1024
38 self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')
39 self.mail = MailSender.from_settings(crawler.settings)
40 crawler.signals.connect(self.engine_started, signal=signals.engine_started)
41 crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)
42
43 @classmethod
44 def from_crawler(cls, crawler):
45 return cls(crawler)
46
47 def get_virtual_size(self):
48 size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss
49 if sys.platform != 'darwin':
50 # on macOS ru_maxrss is in bytes, on Linux it is in KB
51 size *= 1024
52 return size
53
54 def engine_started(self):
55 self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())
56 self.tasks = []
57 tsk = task.LoopingCall(self.update)
58 self.tasks.append(tsk)
59 tsk.start(self.check_interval, now=True)
60 if self.limit:
61 tsk = task.LoopingCall(self._check_limit)
62 self.tasks.append(tsk)
63 tsk.start(self.check_interval, now=True)
64 if self.warning:
65 tsk = task.LoopingCall(self._check_warning)
66 self.tasks.append(tsk)
67 tsk.start(self.check_interval, now=True)
68
69 def engine_stopped(self):
70 for tsk in self.tasks:
71 if tsk.running:
72 tsk.stop()
73
74 def update(self):
75 self.crawler.stats.max_value('memusage/max', self.get_virtual_size())
76
77 def _check_limit(self):
78 if self.get_virtual_size() > self.limit:
79 self.crawler.stats.set_value('memusage/limit_reached', 1)
80 mem = self.limit / 1024 / 1024
81 logger.error("Memory usage exceeded %(memusage)dM. Shutting down Scrapy...",
82 {'memusage': mem}, extra={'crawler': self.crawler})
83 if self.notify_mails:
84 subj = (
85 f"{self.crawler.settings['BOT_NAME']} terminated: "
86 f"memory usage exceeded {mem}M at {socket.gethostname()}"
87 )
88 self._send_report(self.notify_mails, subj)
89 self.crawler.stats.set_value('memusage/limit_notified', 1)
90
91 if self.crawler.engine.spider is not None:
92 self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')
93 else:
94 self.crawler.stop()
95
96 def _check_warning(self):
97 if self.warned: # warn only once
98 return
99 if self.get_virtual_size() > self.warning:
100 self.crawler.stats.set_value('memusage/warning_reached', 1)
101 mem = self.warning / 1024 / 1024
102 logger.warning("Memory usage reached %(memusage)dM",
103 {'memusage': mem}, extra={'crawler': self.crawler})
104 if self.notify_mails:
105 subj = (
106 f"{self.crawler.settings['BOT_NAME']} warning: "
107 f"memory usage reached {mem}M at {socket.gethostname()}"
108 )
109 self._send_report(self.notify_mails, subj)
110 self.crawler.stats.set_value('memusage/warning_notified', 1)
111 self.warned = True
112
113 def _send_report(self, rcpts, subject):
114 """send notification mail with some additional useful info"""
115 stats = self.crawler.stats
116 s = f"Memory usage at engine startup : {stats.get_value('memusage/startup')/1024/1024}M\r\n"
117 s += f"Maximum memory usage : {stats.get_value('memusage/max')/1024/1024}M\r\n"
118 s += f"Current memory usage : {self.get_virtual_size()/1024/1024}M\r\n"
119
120 s += "ENGINE STATUS ------------------------------------------------------- \r\n"
121 s += "\r\n"
122 s += pformat(get_engine_status(self.crawler.engine))
123 s += "\r\n"
124 self.mail.send(rcpts, subject, s)
125
[end of scrapy/extensions/memusage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/extensions/memusage.py b/scrapy/extensions/memusage.py
--- a/scrapy/extensions/memusage.py
+++ b/scrapy/extensions/memusage.py
@@ -75,7 +75,8 @@
self.crawler.stats.max_value('memusage/max', self.get_virtual_size())
def _check_limit(self):
- if self.get_virtual_size() > self.limit:
+ current_mem_usage = self.get_virtual_size()
+ if current_mem_usage > self.limit:
self.crawler.stats.set_value('memusage/limit_reached', 1)
mem = self.limit / 1024 / 1024
logger.error("Memory usage exceeded %(memusage)dM. Shutting down Scrapy...",
@@ -92,6 +93,8 @@
self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')
else:
self.crawler.stop()
+ else:
+ logger.info("Current memory usage is %(virtualsize)dM", {'virtualsize': current_mem_usage / 1024 / 1024})
def _check_warning(self):
if self.warned: # warn only once
| {"golden_diff": "diff --git a/scrapy/extensions/memusage.py b/scrapy/extensions/memusage.py\n--- a/scrapy/extensions/memusage.py\n+++ b/scrapy/extensions/memusage.py\n@@ -75,7 +75,8 @@\n self.crawler.stats.max_value('memusage/max', self.get_virtual_size())\n \n def _check_limit(self):\n- if self.get_virtual_size() > self.limit:\n+ current_mem_usage = self.get_virtual_size()\n+ if current_mem_usage > self.limit:\n self.crawler.stats.set_value('memusage/limit_reached', 1)\n mem = self.limit / 1024 / 1024\n logger.error(\"Memory usage exceeded %(memusage)dM. Shutting down Scrapy...\",\n@@ -92,6 +93,8 @@\n self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')\n else:\n self.crawler.stop()\n+ else:\n+ logger.info(\"Current memory usage is %(virtualsize)dM\", {'virtualsize': current_mem_usage / 1024 / 1024})\n \n def _check_warning(self):\n if self.warned: # warn only once\n", "issue": "Add logging functionality to `memusage` extension\n\r\n\r\n## Summary\r\n\r\nTo add logging functionality to memusage extension.\r\n\r\n## Motivation\r\n\r\nScrapy jobs with `MEMUSAGE_ENABLED : True` and defined `MEMUSAGE_LIMIT_MB` (all jobs on scrapy cloud) can be stopped early due to overuse of RAM memory and receive `memusage_exceeded` outcome.\r\n\r\nFirst thing required to debug RAM memory leaks - is to identify.. pattern of RAM memory usage.\r\nIs RAM usage continuously increased at higher rates during runtime?\r\nor Is RAM usage rapidly increased over limit in last several minutes after hours or even days of stable runtime performance?\r\nEach reason require different approaches to debug RAM memory leaks.\r\n\r\nIt will be much easier to debug this if value of `self.get_virtual_size()` will be added to log in `_check_limit` method of `memusage` extension\r\nhttps://github.com/scrapy/scrapy/blob/6ded3cf4cd134b615239babe28bb28c3ff524b05/scrapy/extensions/memusage.py#L77-L89\r\n\r\n## Describe alternatives you've considered\r\n\r\nApplying `MEMUSAGE_WARNING_MB` setting to ~80-90% of `MEMUSAGE_LIMIT_MB` - current implementation of `memusage` extension warns only 1 time so it is not enough data for this.\r\n\r\nManually subclass `memusage` extension with similar changes - as well as any other option it will require to reschedule job. It may be not siutable for jobs with several days(and more) total runtime. 
So from this side it is preferable that it will be applied in scrapy itself and with enabled this loggin by default.\r\n \r\n## Additional context\r\n\r\nSimilar functionality previously requested here https://github.com/scrapy/scrapy/issues/2173\r\n\n", "before_files": [{"content": "\"\"\"\nMemoryUsage extension\n\nSee documentation in docs/topics/extensions.rst\n\"\"\"\nimport sys\nimport socket\nimport logging\nfrom pprint import pformat\nfrom importlib import import_module\n\nfrom twisted.internet import task\n\nfrom scrapy import signals\nfrom scrapy.exceptions import NotConfigured\nfrom scrapy.mail import MailSender\nfrom scrapy.utils.engine import get_engine_status\n\nlogger = logging.getLogger(__name__)\n\n\nclass MemoryUsage:\n\n def __init__(self, crawler):\n if not crawler.settings.getbool('MEMUSAGE_ENABLED'):\n raise NotConfigured\n try:\n # stdlib's resource module is only available on unix platforms.\n self.resource = import_module('resource')\n except ImportError:\n raise NotConfigured\n\n self.crawler = crawler\n self.warned = False\n self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')\n self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB') * 1024 * 1024\n self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB') * 1024 * 1024\n self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')\n self.mail = MailSender.from_settings(crawler.settings)\n crawler.signals.connect(self.engine_started, signal=signals.engine_started)\n crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler)\n\n def get_virtual_size(self):\n size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss\n if sys.platform != 'darwin':\n # on macOS ru_maxrss is in bytes, on Linux it is in KB\n size *= 1024\n return size\n\n def engine_started(self):\n self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())\n self.tasks = []\n tsk = task.LoopingCall(self.update)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n if self.limit:\n tsk = task.LoopingCall(self._check_limit)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n if self.warning:\n tsk = task.LoopingCall(self._check_warning)\n self.tasks.append(tsk)\n tsk.start(self.check_interval, now=True)\n\n def engine_stopped(self):\n for tsk in self.tasks:\n if tsk.running:\n tsk.stop()\n\n def update(self):\n self.crawler.stats.max_value('memusage/max', self.get_virtual_size())\n\n def _check_limit(self):\n if self.get_virtual_size() > self.limit:\n self.crawler.stats.set_value('memusage/limit_reached', 1)\n mem = self.limit / 1024 / 1024\n logger.error(\"Memory usage exceeded %(memusage)dM. 
Shutting down Scrapy...\",\n {'memusage': mem}, extra={'crawler': self.crawler})\n if self.notify_mails:\n subj = (\n f\"{self.crawler.settings['BOT_NAME']} terminated: \"\n f\"memory usage exceeded {mem}M at {socket.gethostname()}\"\n )\n self._send_report(self.notify_mails, subj)\n self.crawler.stats.set_value('memusage/limit_notified', 1)\n\n if self.crawler.engine.spider is not None:\n self.crawler.engine.close_spider(self.crawler.engine.spider, 'memusage_exceeded')\n else:\n self.crawler.stop()\n\n def _check_warning(self):\n if self.warned: # warn only once\n return\n if self.get_virtual_size() > self.warning:\n self.crawler.stats.set_value('memusage/warning_reached', 1)\n mem = self.warning / 1024 / 1024\n logger.warning(\"Memory usage reached %(memusage)dM\",\n {'memusage': mem}, extra={'crawler': self.crawler})\n if self.notify_mails:\n subj = (\n f\"{self.crawler.settings['BOT_NAME']} warning: \"\n f\"memory usage reached {mem}M at {socket.gethostname()}\"\n )\n self._send_report(self.notify_mails, subj)\n self.crawler.stats.set_value('memusage/warning_notified', 1)\n self.warned = True\n\n def _send_report(self, rcpts, subject):\n \"\"\"send notification mail with some additional useful info\"\"\"\n stats = self.crawler.stats\n s = f\"Memory usage at engine startup : {stats.get_value('memusage/startup')/1024/1024}M\\r\\n\"\n s += f\"Maximum memory usage : {stats.get_value('memusage/max')/1024/1024}M\\r\\n\"\n s += f\"Current memory usage : {self.get_virtual_size()/1024/1024}M\\r\\n\"\n\n s += \"ENGINE STATUS ------------------------------------------------------- \\r\\n\"\n s += \"\\r\\n\"\n s += pformat(get_engine_status(self.crawler.engine))\n s += \"\\r\\n\"\n self.mail.send(rcpts, subject, s)\n", "path": "scrapy/extensions/memusage.py"}]} | 2,314 | 266 |
gh_patches_debug_41505 | rasdani/github-patches | git_diff | great-expectations__great_expectations-3279 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
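For context, a minimal illustration of the two approaches named above, assuming Python 2 semantics where `/` between integers truncates by default:

```python
from __future__ import division  # preferred: '/' becomes true division module-wide

x, y = 3, 2
print(1. * x / y)  # 1.5 -- the old workaround the issue wants to avoid
print(x / y)       # 1.5 with the __future__ import (would be 1 without it)
print(x // y)      # 1  -- explicit floor division when truncation is intended
```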
<code>
[start of great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py]
1 from dateutil.parser import parse
2
3 from great_expectations.execution_engine import (
4 PandasExecutionEngine,
5 SqlAlchemyExecutionEngine,
6 )
7 from great_expectations.expectations.metrics.import_manager import sa
8 from great_expectations.expectations.metrics.map_metric_provider import (
9 ColumnPairMapMetricProvider,
10 column_pair_condition_partial,
11 )
12
13
14 class ColumnPairValuesEqual(ColumnPairMapMetricProvider):
15 condition_metric_name = "column_pair_values.equal"
16 condition_domain_keys = (
17 "batch_id",
18 "table",
19 "column_A",
20 "column_B",
21 "row_condition",
22 "condition_parser",
23 "ignore_row_if",
24 )
25 condition_value_keys = ()
26
27 # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>
28 @column_pair_condition_partial(engine=PandasExecutionEngine)
29 def _pandas(cls, column_A, column_B, **kwargs):
30 return column_A == column_B
31
32 @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)
33 def _sqlalchemy(cls, column_A, column_B, **kwargs):
34 return sa.case((column_A == column_B, True), else_=False)
35
[end of great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py]
[start of great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py]
1 from dateutil.parser import parse
2
3 from great_expectations.execution_engine import (
4 PandasExecutionEngine,
5 SqlAlchemyExecutionEngine,
6 )
7 from great_expectations.expectations.metrics.import_manager import sa
8 from great_expectations.expectations.metrics.map_metric_provider import (
9 ColumnPairMapMetricProvider,
10 column_pair_condition_partial,
11 )
12
13
14 class ColumnPairValuesAGreaterThanB(ColumnPairMapMetricProvider):
15 condition_metric_name = "column_pair_values.a_greater_than_b"
16 condition_domain_keys = (
17 "batch_id",
18 "table",
19 "column_A",
20 "column_B",
21 "row_condition",
22 "condition_parser",
23 "ignore_row_if",
24 )
25 condition_value_keys = (
26 "or_equal",
27 "parse_strings_as_datetimes",
28 "allow_cross_type_comparisons",
29 )
30
31 # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>
32 # noinspection PyPep8Naming
33 @column_pair_condition_partial(engine=PandasExecutionEngine)
34 def _pandas(cls, column_A, column_B, **kwargs):
35 allow_cross_type_comparisons = kwargs.get("allow_cross_type_comparisons")
36 if allow_cross_type_comparisons:
37 raise NotImplementedError
38
39 parse_strings_as_datetimes = kwargs.get("parse_strings_as_datetimes")
40 if parse_strings_as_datetimes:
41 # noinspection PyPep8Naming
42 temp_column_A = column_A.map(parse)
43 # noinspection PyPep8Naming
44 temp_column_B = column_B.map(parse)
45 else:
46 temp_column_A = column_A
47 temp_column_B = column_B
48
49 or_equal = kwargs.get("or_equal")
50 if or_equal:
51 return temp_column_A >= temp_column_B
52 else:
53 return temp_column_A > temp_column_B
54
55 # noinspection PyPep8Naming
56 @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)
57 def _sqlalchemy(cls, column_A, column_B, **kwargs):
58 allow_cross_type_comparisons = kwargs.get("allow_cross_type_comparisons")
59 if allow_cross_type_comparisons:
60 raise NotImplementedError
61
62 parse_strings_as_datetimes = kwargs.get("parse_strings_as_datetimes")
63 if parse_strings_as_datetimes:
64 raise NotImplementedError
65
66 or_equal = kwargs.get("or_equal")
67 if or_equal:
68 return sa.case((column_A >= column_B, True), else_=False)
69 else:
70 return sa.case((column_A > column_B, True), else_=False)
71
[end of great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py]
[start of great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py]
1 from great_expectations.execution_engine import PandasExecutionEngine
2 from great_expectations.expectations.metrics.map_metric_provider import (
3 MulticolumnMapMetricProvider,
4 multicolumn_condition_partial,
5 )
6
7
8 class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):
9 condition_metric_name = "select_column_values.unique.within_record"
10 condition_domain_keys = (
11 "batch_id",
12 "table",
13 "column_list",
14 "row_condition",
15 "condition_parser",
16 "ignore_row_if",
17 )
18
19 # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>
20 @multicolumn_condition_partial(engine=PandasExecutionEngine)
21 def _pandas(cls, column_list, **kwargs):
22 num_columns = len(column_list.columns)
23 row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns
24 return row_wise_cond
25
[end of great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py
--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py
+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py
@@ -1,5 +1,3 @@
-from dateutil.parser import parse
-
from great_expectations.execution_engine import (
PandasExecutionEngine,
SqlAlchemyExecutionEngine,
diff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py
--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py
+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py
@@ -38,9 +38,7 @@
parse_strings_as_datetimes = kwargs.get("parse_strings_as_datetimes")
if parse_strings_as_datetimes:
- # noinspection PyPep8Naming
temp_column_A = column_A.map(parse)
- # noinspection PyPep8Naming
temp_column_B = column_B.map(parse)
else:
temp_column_A = column_A
diff --git a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
--- a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
+++ b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py
@@ -1,9 +1,17 @@
-from great_expectations.execution_engine import PandasExecutionEngine
+import logging
+
+from great_expectations.execution_engine import (
+ PandasExecutionEngine,
+ SqlAlchemyExecutionEngine,
+)
+from great_expectations.expectations.metrics.import_manager import sa
from great_expectations.expectations.metrics.map_metric_provider import (
MulticolumnMapMetricProvider,
multicolumn_condition_partial,
)
+logger = logging.getLogger(__name__)
+
class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):
condition_metric_name = "select_column_values.unique.within_record"
@@ -16,9 +24,37 @@
"ignore_row_if",
)
- # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>
+ # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>
@multicolumn_condition_partial(engine=PandasExecutionEngine)
def _pandas(cls, column_list, **kwargs):
num_columns = len(column_list.columns)
row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns
return row_wise_cond
+
+ @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)
+ def _sqlalchemy(cls, column_list, **kwargs):
+ """
+ The present approach relies on an inefficient query condition construction implementation, whose computational
+ cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is
+ available, this is the only feasible mechanism under the current architecture, where map metric providers must
+ return a condition. Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).
+ """
+ num_columns = len(column_list)
+
+ # An arbitrary "num_columns" value used for issuing an explanatory message as a warning.
+ if num_columns > 100:
+ logger.warning(
+ f"""Batch data with {num_columns} columns is detected. Computing the "{cls.condition_metric_name}" \
+metric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.
+"""
+ )
+
+ condition = sa.or_()
+ for idx_src in range(num_columns - 1):
+ for idx_dest in range(idx_src + 1, num_columns):
+ condition = sa.or_(
+ condition, (column_list[idx_src] == column_list[idx_dest])
+ )
+
+ condition = sa.not_(condition)
+ return sa.case((condition, True), else_=False)
| {"golden_diff": "diff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py\n--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py\n+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py\n@@ -1,5 +1,3 @@\n-from dateutil.parser import parse\n-\n from great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\ndiff --git a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py\n--- a/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py\n+++ b/great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py\n@@ -38,9 +38,7 @@\n \n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n- # noinspection PyPep8Naming\n temp_column_A = column_A.map(parse)\n- # noinspection PyPep8Naming\n temp_column_B = column_B.map(parse)\n else:\n temp_column_A = column_A\ndiff --git a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n--- a/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n+++ b/great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py\n@@ -1,9 +1,17 @@\n-from great_expectations.execution_engine import PandasExecutionEngine\n+import logging\n+\n+from great_expectations.execution_engine import (\n+ PandasExecutionEngine,\n+ SqlAlchemyExecutionEngine,\n+)\n+from great_expectations.expectations.metrics.import_manager import sa\n from great_expectations.expectations.metrics.map_metric_provider import (\n MulticolumnMapMetricProvider,\n multicolumn_condition_partial,\n )\n \n+logger = logging.getLogger(__name__)\n+\n \n class SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):\n condition_metric_name = \"select_column_values.unique.within_record\"\n@@ -16,9 +24,37 @@\n \"ignore_row_if\",\n )\n \n- # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>\n+ # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n @multicolumn_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_list, **kwargs):\n num_columns = len(column_list.columns)\n row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns\n return row_wise_cond\n+\n+ @multicolumn_condition_partial(engine=SqlAlchemyExecutionEngine)\n+ def _sqlalchemy(cls, column_list, **kwargs):\n+ \"\"\"\n+ The present approach relies on an inefficient query condition construction implementation, whose computational\n+ cost is O(num_columns^2). However, until a more efficient implementation compatible with SQLAlchemy is\n+ available, this is the only feasible mechanism under the current architecture, where map metric providers must\n+ return a condition. 
Nevertheless, SQL query length limit is 1GB (sufficient for most practical scenarios).\n+ \"\"\"\n+ num_columns = len(column_list)\n+\n+ # An arbitrary \"num_columns\" value used for issuing an explanatory message as a warning.\n+ if num_columns > 100:\n+ logger.warning(\n+ f\"\"\"Batch data with {num_columns} columns is detected. Computing the \"{cls.condition_metric_name}\" \\\n+metric for wide tables using SQLAlchemy leads to long WHERE clauses for the underlying database engine to process.\n+\"\"\"\n+ )\n+\n+ condition = sa.or_()\n+ for idx_src in range(num_columns - 1):\n+ for idx_dest in range(idx_src + 1, num_columns):\n+ condition = sa.or_(\n+ condition, (column_list[idx_src] == column_list[idx_dest])\n+ )\n+\n+ condition = sa.not_(condition)\n+ return sa.case((condition, True), else_=False)\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from dateutil.parser import parse\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n ColumnPairMapMetricProvider,\n column_pair_condition_partial,\n)\n\n\nclass ColumnPairValuesEqual(ColumnPairMapMetricProvider):\n condition_metric_name = \"column_pair_values.equal\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_A\",\n \"column_B\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n condition_value_keys = ()\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n @column_pair_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_A, column_B, **kwargs):\n return column_A == column_B\n\n @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_A, column_B, **kwargs):\n return sa.case((column_A == column_B, True), else_=False)\n", "path": "great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_equal.py"}, {"content": "from dateutil.parser import parse\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.metrics.import_manager import sa\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n ColumnPairMapMetricProvider,\n column_pair_condition_partial,\n)\n\n\nclass ColumnPairValuesAGreaterThanB(ColumnPairMapMetricProvider):\n condition_metric_name = \"column_pair_values.a_greater_than_b\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_A\",\n \"column_B\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n condition_value_keys = (\n \"or_equal\",\n \"parse_strings_as_datetimes\",\n \"allow_cross_type_comparisons\",\n )\n\n # TODO: <Alex>ALEX -- temporarily only Pandas and SQL Alchemy implementations are provided (Spark to follow).</Alex>\n # noinspection PyPep8Naming\n @column_pair_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_A, column_B, **kwargs):\n allow_cross_type_comparisons = kwargs.get(\"allow_cross_type_comparisons\")\n if allow_cross_type_comparisons:\n raise NotImplementedError\n\n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n # noinspection PyPep8Naming\n temp_column_A = 
column_A.map(parse)\n # noinspection PyPep8Naming\n temp_column_B = column_B.map(parse)\n else:\n temp_column_A = column_A\n temp_column_B = column_B\n\n or_equal = kwargs.get(\"or_equal\")\n if or_equal:\n return temp_column_A >= temp_column_B\n else:\n return temp_column_A > temp_column_B\n\n # noinspection PyPep8Naming\n @column_pair_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column_A, column_B, **kwargs):\n allow_cross_type_comparisons = kwargs.get(\"allow_cross_type_comparisons\")\n if allow_cross_type_comparisons:\n raise NotImplementedError\n\n parse_strings_as_datetimes = kwargs.get(\"parse_strings_as_datetimes\")\n if parse_strings_as_datetimes:\n raise NotImplementedError\n\n or_equal = kwargs.get(\"or_equal\")\n if or_equal:\n return sa.case((column_A >= column_B, True), else_=False)\n else:\n return sa.case((column_A > column_B, True), else_=False)\n", "path": "great_expectations/expectations/metrics/column_pair_map_metrics/column_pair_values_greater.py"}, {"content": "from great_expectations.execution_engine import PandasExecutionEngine\nfrom great_expectations.expectations.metrics.map_metric_provider import (\n MulticolumnMapMetricProvider,\n multicolumn_condition_partial,\n)\n\n\nclass SelectColumnValuesUniqueWithinRecord(MulticolumnMapMetricProvider):\n condition_metric_name = \"select_column_values.unique.within_record\"\n condition_domain_keys = (\n \"batch_id\",\n \"table\",\n \"column_list\",\n \"row_condition\",\n \"condition_parser\",\n \"ignore_row_if\",\n )\n\n # TODO: <Alex>ALEX -- temporarily only a Pandas implementation is provided (others to follow).</Alex>\n @multicolumn_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column_list, **kwargs):\n num_columns = len(column_list.columns)\n row_wise_cond = column_list.nunique(dropna=False, axis=1) >= num_columns\n return row_wise_cond\n", "path": "great_expectations/expectations/metrics/multicolumn_map_metrics/select_column_values_unique_within_record.py"}]} | 1,906 | 974 |
gh_patches_debug_8219 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-1411 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError in plans list
If a project connected to the plans list adds a module but no phase dates, the start date can become `None` and the whole list will not be shown.
</issue>
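A minimal sketch of the failure mode (not code from the repository): when a module is added but its phase has no start date, `_get_status_string` can end up comparing `None` against an already collected date, which raises a `TypeError` on Python 3:

```python
# Values are invented; only the comparison matters.
import datetime

future_phase = datetime.datetime(2018, 7, 1)
date = None  # the new module's phase has no start date

if date < future_phase:  # TypeError: '<' not supported between instances of 'NoneType' and 'datetime.datetime'
    future_phase = date
```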
<code>
[start of meinberlin/apps/plans/views.py]
1 import json
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.core.exceptions import ObjectDoesNotExist
6 from django.urls import reverse
7 from django.utils import timezone
8 from django.utils.translation import ugettext
9 from django.utils.translation import ugettext_lazy as _
10 from django.views import generic
11
12 from adhocracy4.dashboard import mixins as a4dashboard_mixins
13 from adhocracy4.exports import mixins as export_mixins
14 from adhocracy4.exports import unescape_and_strip_html
15 from adhocracy4.exports import views as export_views
16 from adhocracy4.rules import mixins as rules_mixins
17 from meinberlin.apps.contrib.views import CanonicalURLDetailView
18 from meinberlin.apps.maps.models import MapPreset
19 from meinberlin.apps.plans.forms import PlanForm
20 from meinberlin.apps.plans.models import Plan
21
22 from . import models
23
24
25 class PlanDetailView(rules_mixins.PermissionRequiredMixin,
26 CanonicalURLDetailView):
27 model = models.Plan
28 template_name = 'meinberlin_plans/plan_detail.html'
29 permission_required = 'meinberlin_plans.view_plan'
30
31 def get_context_data(self, **kwargs):
32 context = super().get_context_data(**kwargs)
33 context['berlin_polygon'] = settings.BERLIN_POLYGON
34 return context
35
36
37 class PlanListView(rules_mixins.PermissionRequiredMixin,
38 generic.ListView):
39 model = models.Plan
40 template_name = 'meinberlin_plans/plan_list.html'
41 permission_required = 'meinberlin_plans.list_plan'
42
43 def get_districts(self):
44 try:
45 return MapPreset.objects.filter(
46 category__name='Bezirke - Berlin')
47 except ObjectDoesNotExist:
48 return []
49
50 def _get_status_string(self, projects):
51
52 future_phase = None
53 for project in projects:
54 phases = project.phases
55 if phases.active_phases():
56 return ugettext('running')
57 if phases.future_phases():
58 date = phases.future_phases().first().start_date
59 if not future_phase:
60 future_phase = date
61 else:
62 if date < future_phase:
63 future_phase = date
64
65 if future_phase:
66 return ugettext('starts at {}').format(future_phase.date())
67
68 def _get_participation_status(self, item):
69 projects = item.projects.all()\
70 .filter(is_draft=False,
71 is_archived=False,
72 is_public=True)
73 if not projects:
74 return item.get_participation_display(), False
75 else:
76 status_string = self._get_status_string(projects)
77 if status_string:
78 return status_string, True
79 else:
80 return item.get_participation_display(), False
81
82 def get_context_data(self, **kwargs):
83 context = super().get_context_data(**kwargs)
84
85 districts = self.get_districts()
86
87 district_list = json.dumps([district.polygon
88 for district in districts])
89 district_names = json.dumps([district.name
90 for district in districts])
91 context['districts'] = district_list
92 context['district_names'] = district_names
93
94 items = sorted(context['object_list'],
95 key=lambda x: x.modified or x.created,
96 reverse=True)
97
98 result = []
99
100 for item in items:
101 participation_string, active = self._get_participation_status(item)
102 result.append({
103 'title': item.title,
104 'url': item.get_absolute_url(),
105 'organisation': item.organisation.name,
106 'point': item.point,
107 'point_label': item.point_label,
108 'cost': item.cost,
109 'district': item.district.name,
110 'category': item.category,
111 'status': item.status,
112 'status_display': item.get_status_display(),
113 'participation_string': participation_string,
114 'participation_active': active,
115 'participation': item.participation,
116 'participation_display': item.get_participation_display(),
117 })
118
119 context['items'] = json.dumps(result)
120 context['baseurl'] = settings.A4_MAP_BASEURL
121 context['attribution'] = settings.A4_MAP_ATTRIBUTION
122 context['bounds'] = json.dumps(settings.A4_MAP_BOUNDING_BOX)
123
124 return context
125
126
127 class PlanExportView(rules_mixins.PermissionRequiredMixin,
128 export_mixins.ItemExportWithLinkMixin,
129 export_mixins.ExportModelFieldsMixin,
130 export_mixins.ItemExportWithLocationMixin,
131 export_views.BaseExport,
132 export_views.AbstractXlsxExportView):
133
134 permission_required = 'meinberlin_plans.list_plan'
135 model = models.Plan
136 fields = ['title', 'organisation', 'contact', 'district', 'cost',
137 'description', 'category', 'status', 'participation']
138 html_fields = ['description']
139
140 def get_object_list(self):
141 return models.Plan.objects.all()
142
143 def get_base_filename(self):
144 return 'plans_%s' % timezone.now().strftime('%Y%m%dT%H%M%S')
145
146 def get_virtual_fields(self, virtual):
147 virtual = super().get_virtual_fields(virtual)
148 virtual['projects'] = ugettext('Projects')
149 virtual['projects_links'] = ugettext('Project Links')
150 return virtual
151
152 def get_organisation_data(self, item):
153 return item.organisation.name
154
155 def get_district_data(self, item):
156 return item.district.name
157
158 def get_contact_data(self, item):
159 return unescape_and_strip_html(item.contact)
160
161 def get_status_data(self, item):
162 return item.get_status_display()
163
164 def get_participation_data(self, item):
165 return item.get_participation_display()
166
167 def get_description_data(self, item):
168 return unescape_and_strip_html(item.description)
169
170 def get_projects_data(self, item):
171 if item.projects.all():
172 return ', \n'.join(
173 [project.name
174 for project in item.projects.all()]
175 )
176 return ''
177
178 def get_projects_links_data(self, item):
179 if item.projects.all():
180 return str([self.request.build_absolute_uri(
181 project.get_absolute_url())
182 for project in item.projects.all()
183 ])
184 return ''
185
186
187 class DashboardPlanListView(a4dashboard_mixins.DashboardBaseMixin,
188 generic.ListView):
189 model = Plan
190 template_name = 'meinberlin_plans/plan_dashboard_list.html'
191 permission_required = 'meinberlin_plans.add_plan'
192 menu_item = 'project'
193
194 def get_permission_object(self):
195 return self.organisation
196
197 def get_queryset(self):
198 return super().get_queryset().filter(organisation=self.organisation)
199
200
201 class DashboardPlanCreateView(a4dashboard_mixins.DashboardBaseMixin,
202 generic.CreateView):
203 model = Plan
204 form_class = PlanForm
205 permission_required = 'meinberlin_plans.add_plan'
206 template_name = 'meinberlin_plans/plan_create_form.html'
207 menu_item = 'project'
208
209 def form_valid(self, form):
210 form.instance.creator = self.request.user
211 form.instance.organisation = self.organisation
212 return super().form_valid(form)
213
214 def get_permission_object(self):
215 return self.organisation
216
217 def get_success_url(self):
218 return reverse(
219 'a4dashboard:plan-list',
220 kwargs={'organisation_slug': self.organisation.slug})
221
222
223 class DashboardPlanUpdateView(a4dashboard_mixins.DashboardBaseMixin,
224 generic.UpdateView):
225 model = Plan
226 form_class = PlanForm
227 permission_required = 'meinberlin_plans.change_plan'
228 template_name = 'meinberlin_plans/plan_update_form.html'
229 menu_item = 'project'
230
231 def get_permission_object(self):
232 return self.organisation
233
234 def get_success_url(self):
235 return reverse(
236 'a4dashboard:plan-list',
237 kwargs={'organisation_slug': self.organisation.slug})
238
239
240 class DashboardPlanDeleteView(a4dashboard_mixins.DashboardBaseMixin,
241 generic.DeleteView):
242 model = Plan
243 success_message = _('The plan has been deleted')
244 permission_required = 'meinberlin_plans.change_plan'
245 template_name = 'meinberlin_plans/plan_confirm_delete.html'
246 menu_item = 'project'
247
248 def delete(self, request, *args, **kwargs):
249 messages.success(self.request, self.success_message)
250 return super().delete(request, *args, **kwargs)
251
252 def get_permission_object(self):
253 return self.organisation
254
255 def get_success_url(self):
256 return reverse(
257 'a4dashboard:plan-list',
258 kwargs={'organisation_slug': self.organisation.slug})
259
[end of meinberlin/apps/plans/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/plans/views.py b/meinberlin/apps/plans/views.py
--- a/meinberlin/apps/plans/views.py
+++ b/meinberlin/apps/plans/views.py
@@ -54,7 +54,8 @@
phases = project.phases
if phases.active_phases():
return ugettext('running')
- if phases.future_phases():
+ if phases.future_phases() and \
+ phases.future_phases().first().start_date:
date = phases.future_phases().first().start_date
if not future_phase:
future_phase = date
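The fix guards on both the queryset and the first future phase's `start_date`, so undated modules fall back to the plain participation display instead of crashing. A sketch of the same guard factored into a helper, with an invented name:

```python
# Sketch only; 'phases' is assumed to offer future_phases() as in the view above.
def first_future_start_date(phases):
    """Return the start date of the earliest future phase, or None if it is unknown."""
    future = phases.future_phases()
    first = future.first() if future else None
    return first.start_date if first is not None else None
```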
| {"golden_diff": "diff --git a/meinberlin/apps/plans/views.py b/meinberlin/apps/plans/views.py\n--- a/meinberlin/apps/plans/views.py\n+++ b/meinberlin/apps/plans/views.py\n@@ -54,7 +54,8 @@\n phases = project.phases\n if phases.active_phases():\n return ugettext('running')\n- if phases.future_phases():\n+ if phases.future_phases() and \\\n+ phases.future_phases().first().start_date:\n date = phases.future_phases().first().start_date\n if not future_phase:\n future_phase = date\n", "issue": "Type Error in plans list\nif a project connected to the plans list adds a module, but no phase dates, the date might become none and the whole list will not be shown\n", "before_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views import generic\n\nfrom adhocracy4.dashboard import mixins as a4dashboard_mixins\nfrom adhocracy4.exports import mixins as export_mixins\nfrom adhocracy4.exports import unescape_and_strip_html\nfrom adhocracy4.exports import views as export_views\nfrom adhocracy4.rules import mixins as rules_mixins\nfrom meinberlin.apps.contrib.views import CanonicalURLDetailView\nfrom meinberlin.apps.maps.models import MapPreset\nfrom meinberlin.apps.plans.forms import PlanForm\nfrom meinberlin.apps.plans.models import Plan\n\nfrom . import models\n\n\nclass PlanDetailView(rules_mixins.PermissionRequiredMixin,\n CanonicalURLDetailView):\n model = models.Plan\n template_name = 'meinberlin_plans/plan_detail.html'\n permission_required = 'meinberlin_plans.view_plan'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['berlin_polygon'] = settings.BERLIN_POLYGON\n return context\n\n\nclass PlanListView(rules_mixins.PermissionRequiredMixin,\n generic.ListView):\n model = models.Plan\n template_name = 'meinberlin_plans/plan_list.html'\n permission_required = 'meinberlin_plans.list_plan'\n\n def get_districts(self):\n try:\n return MapPreset.objects.filter(\n category__name='Bezirke - Berlin')\n except ObjectDoesNotExist:\n return []\n\n def _get_status_string(self, projects):\n\n future_phase = None\n for project in projects:\n phases = project.phases\n if phases.active_phases():\n return ugettext('running')\n if phases.future_phases():\n date = phases.future_phases().first().start_date\n if not future_phase:\n future_phase = date\n else:\n if date < future_phase:\n future_phase = date\n\n if future_phase:\n return ugettext('starts at {}').format(future_phase.date())\n\n def _get_participation_status(self, item):\n projects = item.projects.all()\\\n .filter(is_draft=False,\n is_archived=False,\n is_public=True)\n if not projects:\n return item.get_participation_display(), False\n else:\n status_string = self._get_status_string(projects)\n if status_string:\n return status_string, True\n else:\n return item.get_participation_display(), False\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n districts = self.get_districts()\n\n district_list = json.dumps([district.polygon\n for district in districts])\n district_names = json.dumps([district.name\n for district in districts])\n context['districts'] = district_list\n context['district_names'] = district_names\n\n items = sorted(context['object_list'],\n key=lambda x: 
x.modified or x.created,\n reverse=True)\n\n result = []\n\n for item in items:\n participation_string, active = self._get_participation_status(item)\n result.append({\n 'title': item.title,\n 'url': item.get_absolute_url(),\n 'organisation': item.organisation.name,\n 'point': item.point,\n 'point_label': item.point_label,\n 'cost': item.cost,\n 'district': item.district.name,\n 'category': item.category,\n 'status': item.status,\n 'status_display': item.get_status_display(),\n 'participation_string': participation_string,\n 'participation_active': active,\n 'participation': item.participation,\n 'participation_display': item.get_participation_display(),\n })\n\n context['items'] = json.dumps(result)\n context['baseurl'] = settings.A4_MAP_BASEURL\n context['attribution'] = settings.A4_MAP_ATTRIBUTION\n context['bounds'] = json.dumps(settings.A4_MAP_BOUNDING_BOX)\n\n return context\n\n\nclass PlanExportView(rules_mixins.PermissionRequiredMixin,\n export_mixins.ItemExportWithLinkMixin,\n export_mixins.ExportModelFieldsMixin,\n export_mixins.ItemExportWithLocationMixin,\n export_views.BaseExport,\n export_views.AbstractXlsxExportView):\n\n permission_required = 'meinberlin_plans.list_plan'\n model = models.Plan\n fields = ['title', 'organisation', 'contact', 'district', 'cost',\n 'description', 'category', 'status', 'participation']\n html_fields = ['description']\n\n def get_object_list(self):\n return models.Plan.objects.all()\n\n def get_base_filename(self):\n return 'plans_%s' % timezone.now().strftime('%Y%m%dT%H%M%S')\n\n def get_virtual_fields(self, virtual):\n virtual = super().get_virtual_fields(virtual)\n virtual['projects'] = ugettext('Projects')\n virtual['projects_links'] = ugettext('Project Links')\n return virtual\n\n def get_organisation_data(self, item):\n return item.organisation.name\n\n def get_district_data(self, item):\n return item.district.name\n\n def get_contact_data(self, item):\n return unescape_and_strip_html(item.contact)\n\n def get_status_data(self, item):\n return item.get_status_display()\n\n def get_participation_data(self, item):\n return item.get_participation_display()\n\n def get_description_data(self, item):\n return unescape_and_strip_html(item.description)\n\n def get_projects_data(self, item):\n if item.projects.all():\n return ', \\n'.join(\n [project.name\n for project in item.projects.all()]\n )\n return ''\n\n def get_projects_links_data(self, item):\n if item.projects.all():\n return str([self.request.build_absolute_uri(\n project.get_absolute_url())\n for project in item.projects.all()\n ])\n return ''\n\n\nclass DashboardPlanListView(a4dashboard_mixins.DashboardBaseMixin,\n generic.ListView):\n model = Plan\n template_name = 'meinberlin_plans/plan_dashboard_list.html'\n permission_required = 'meinberlin_plans.add_plan'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n def get_queryset(self):\n return super().get_queryset().filter(organisation=self.organisation)\n\n\nclass DashboardPlanCreateView(a4dashboard_mixins.DashboardBaseMixin,\n generic.CreateView):\n model = Plan\n form_class = PlanForm\n permission_required = 'meinberlin_plans.add_plan'\n template_name = 'meinberlin_plans/plan_create_form.html'\n menu_item = 'project'\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.organisation = self.organisation\n return super().form_valid(form)\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse(\n 
'a4dashboard:plan-list',\n kwargs={'organisation_slug': self.organisation.slug})\n\n\nclass DashboardPlanUpdateView(a4dashboard_mixins.DashboardBaseMixin,\n generic.UpdateView):\n model = Plan\n form_class = PlanForm\n permission_required = 'meinberlin_plans.change_plan'\n template_name = 'meinberlin_plans/plan_update_form.html'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse(\n 'a4dashboard:plan-list',\n kwargs={'organisation_slug': self.organisation.slug})\n\n\nclass DashboardPlanDeleteView(a4dashboard_mixins.DashboardBaseMixin,\n generic.DeleteView):\n model = Plan\n success_message = _('The plan has been deleted')\n permission_required = 'meinberlin_plans.change_plan'\n template_name = 'meinberlin_plans/plan_confirm_delete.html'\n menu_item = 'project'\n\n def delete(self, request, *args, **kwargs):\n messages.success(self.request, self.success_message)\n return super().delete(request, *args, **kwargs)\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse(\n 'a4dashboard:plan-list',\n kwargs={'organisation_slug': self.organisation.slug})\n", "path": "meinberlin/apps/plans/views.py"}]} | 3,082 | 136 |
gh_patches_debug_6723 | rasdani/github-patches | git_diff | pre-commit__pre-commit-427 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
setup.cfg prevents pre-commit from installing
For some reason I have a setup.cfg file in the root directory of my repo for my app where the parameter **install_scripts** is set to **/usr/sbin**. This prevents pre-commit from setting up its environment and makes it crash.
Here is a repro in a fresh git repository containing only a **setup.cfg** file and a **.pre-commit-config.yaml** (from the [install guide](http://pre-commit.com/#install)):
<pre>
$ mkdir repro; cd repro
$ git init
Dépôt Git vide initialisé dans /home/wilfried/repro/.git/
$ pre-commit clean
Cleaned /home/wilfried/.pre-commit.
$ pre-commit install
pre-commit installed at /home/wilfried/repro/.git/hooks/pre-commit
$ cat setup.cfg
[install]
install_scripts=/usr/sbin
$ cat .pre-commit-config.yaml
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: v0.4.2
hooks:
- id: trailing-whitespace
</pre>
Now, with those two files set up, I try a simple `pre-commit run`, which tries to initialize the virtualenv.
<pre>
$ pre-commit run --all-files
[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/python', '-m', 'virtualenv', '/home/wilfried/.pre-commit/repoaXLSIv/py_env-default', '-p', '/usr/bin/python2.7')
Return code: 1
Expected return code: 0
Output:
New python executable in /home/wilfried/.pre-commit/repoaXLSIv/py_env-default/bin/python2.7
Also creating executable in /home/wilfried/.pre-commit/repoaXLSIv/py_env-default/bin/python
Installing setuptools, pip, wheel...
Complete output from command /home/wilfried/.pre-...efault/bin/python2.7 - setuptools pip wheel:
...Installing setuptools, pip, wheel...done.
Running virtualenv with interpreter /usr/bin/python2.7
Errors:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2327, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 711, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 944, in create_environment
download=download,
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 900, in install_wheel
call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 792, in call_subprocess
logger.notify('\n'.join(all_output) + '\n----------------------------------------')
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 199, in notify
self.log(self.NOTIFY, msg, *args, **kw)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 231, in log
consumer.write(rendered+'\n')
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 2254: ordinal not in range(128)
Check the log at ~/.pre-commit/pre-commit.log
</pre>
You'll find the content on pre-commit.log on [pastebin](http://pastebin.com/Ls61EQDj).
Now, if I comment out the install_scripts parameter, everything works fine
<pre>
$ cat setup.cfg
[install]
#install_scripts=/usr/sbin
$ pre-commit clean
Cleaned /home/wilfried/.pre-commit.
$ pre-commit run --all-files
[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
Trim Trailing Whitespace.............................(no files to check)Skipped
</pre>
I'm running on Linux Mint 18, with Python 2.7.12 and pre-commit 0.9.2
<pre>
$ python --version
Python 2.7.12
$ pre-commit --version
pre-commit 0.9.2
</pre>
Let me know if you need anything else.
</issue>
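A plausible mechanism for the failure, stated as an assumption rather than a confirmed diagnosis: distutils reads `setup.cfg` from the current working directory, so bootstrapping the virtualenv from the project root lets the repro's `[install] install_scripts` setting leak into that step. A quick sketch to observe the pickup (run it from the repro directory, on a Python that still ships distutils):

```python
# Run from the directory that contains the setup.cfg shown above.
from distutils.dist import Distribution

dist = Distribution()
dist.parse_config_files()               # reads ./setup.cfg among other config files
print(dist.get_option_dict("install"))  # expected: {'install_scripts': ('setup.cfg', '/usr/sbin')}
```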
<code>
[start of pre_commit/languages/python.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6 import sys
7
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import UNSET
10 from pre_commit.envcontext import Var
11 from pre_commit.languages import helpers
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.xargs import xargs
14
15
16 ENVIRONMENT_DIR = 'py_env'
17
18
19 def bin_dir(venv):
20 """On windows there's a different directory for the virtualenv"""
21 bin_part = 'Scripts' if os.name == 'nt' else 'bin'
22 return os.path.join(venv, bin_part)
23
24
25 def get_env_patch(venv):
26 return (
27 ('PYTHONHOME', UNSET),
28 ('VIRTUAL_ENV', venv),
29 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
30 )
31
32
33 @contextlib.contextmanager
34 def in_env(repo_cmd_runner, language_version):
35 envdir = os.path.join(
36 repo_cmd_runner.prefix_dir,
37 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
38 )
39 with envcontext(get_env_patch(envdir)):
40 yield
41
42
43 def norm_version(version):
44 if os.name == 'nt': # pragma: no cover (windows)
45 # Try looking up by name
46 if distutils.spawn.find_executable(version):
47 return version
48
49 # If it is in the form pythonx.x search in the default
50 # place on windows
51 if version.startswith('python'):
52 return r'C:\{}\python.exe'.format(version.replace('.', ''))
53
54 # Otherwise assume it is a path
55 return os.path.expanduser(version)
56
57
58 def install_environment(
59 repo_cmd_runner,
60 version='default',
61 additional_dependencies=(),
62 ):
63 additional_dependencies = tuple(additional_dependencies)
64 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
65
66 # Install a virtualenv
67 with clean_path_on_failure(repo_cmd_runner.path(directory)):
68 venv_cmd = [
69 sys.executable, '-m', 'virtualenv',
70 '{{prefix}}{}'.format(directory)
71 ]
72 if version != 'default':
73 venv_cmd.extend(['-p', norm_version(version)])
74 else:
75 venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
76 repo_cmd_runner.run(venv_cmd)
77 with in_env(repo_cmd_runner, version):
78 helpers.run_setup_cmd(
79 repo_cmd_runner,
80 ('pip', 'install', '.') + additional_dependencies,
81 )
82
83
84 def run_hook(repo_cmd_runner, hook, file_args):
85 with in_env(repo_cmd_runner, hook['language_version']):
86 return xargs((hook['entry'],) + tuple(hook['args']), file_args)
87
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -73,7 +73,7 @@
venv_cmd.extend(['-p', norm_version(version)])
else:
venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
- repo_cmd_runner.run(venv_cmd)
+ repo_cmd_runner.run(venv_cmd, cwd='/')
with in_env(repo_cmd_runner, version):
helpers.run_setup_cmd(
repo_cmd_runner,
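The design choice here is deliberately narrow: only the virtualenv bootstrap is moved to a neutral working directory, while the later `pip install .` still runs against the hook repository. In plain `subprocess` terms the distinction looks roughly like this (the helper and paths are invented for illustration):

```python
import subprocess

def run(cmd, cwd=None):
    # stand-in for repo_cmd_runner.run(); invented for this sketch
    subprocess.check_call(cmd, cwd=cwd)

run(["python", "-m", "virtualenv", "py_env-default"], cwd="/")  # bootstrap from a neutral cwd
run(["pip", "install", "."], cwd="path/to/hook-repo")           # hook install unchanged
```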
| {"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -73,7 +73,7 @@\n venv_cmd.extend(['-p', norm_version(version)])\n else:\n venv_cmd.extend(['-p', os.path.realpath(sys.executable)])\n- repo_cmd_runner.run(venv_cmd)\n+ repo_cmd_runner.run(venv_cmd, cwd='/')\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n repo_cmd_runner,\n", "issue": "setup.cfg prevent pre-commit to install\nFor some reason I have a setup.cfg file in the root directory of my repo for my app where the parameter **install_scripts** is set to **/usr/sbin**. This prevent pre-commit to set up and crash.\r\nHere is a repro in a fresh git repository containing only **setup.cfg** file and **.pre-commit-config.yaml** (for the [install guide](http://pre-commit.com/#install)\r\n\r\n<pre>\r\n$ mkdir repro; cd repro\r\n$ git init\r\nD\u00e9p\u00f4t Git vide initialis\u00e9 dans /home/wilfried/repro/.git/\r\n$ pre-commit clean\r\nCleaned /home/wilfried/.pre-commit.\r\n$ pre-commit install\r\npre-commit installed at /home/wilfried/repro/.git/hooks/pre-commit\r\n$ cat setup.cfg \r\n[install]\r\ninstall_scripts=/usr/sbin\r\n$ cat .pre-commit-config.yaml \r\n- repo: git://github.com/pre-commit/pre-commit-hooks\r\n sha: v0.4.2\r\n hooks:\r\n - id: trailing-whitespace\r\n</pre>\r\n\r\nNow, with those two files setup, I try to run a simple pre-commit run which try to initiate the virtualenv.\r\n<pre>\r\n$ pre-commit run --all-files\r\n[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\nAn unexpected error has occurred: CalledProcessError: Command: ('/usr/bin/python', '-m', 'virtualenv', '/home/wilfried/.pre-commit/repoaXLSIv/py_env-default', '-p', '/usr/bin/python2.7')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: \r\n New python executable in /home/wilfried/.pre-commit/repoaXLSIv/py_env-default/bin/python2.7\r\n Also creating executable in /home/wilfried/.pre-commit/repoaXLSIv/py_env-default/bin/python\r\n Installing setuptools, pip, wheel...\r\n Complete output from command /home/wilfried/.pre-...efault/bin/python2.7 - setuptools pip wheel:\r\n ...Installing setuptools, pip, wheel...done.\r\n Running virtualenv with interpreter /usr/bin/python2.7\r\n \r\nErrors: \r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 2327, in <module>\r\n main()\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 711, in main\r\n symlink=options.symlink)\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 944, in create_environment\r\n download=download,\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 900, in install_wheel\r\n call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT)\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 792, in call_subprocess\r\n logger.notify('\\n'.join(all_output) + '\\n----------------------------------------')\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 199, in notify\r\n self.log(self.NOTIFY, msg, *args, **kw)\r\n File \"/usr/local/lib/python2.7/dist-packages/virtualenv.py\", line 231, in log\r\n consumer.write(rendered+'\\n')\r\n UnicodeEncodeError: 'ascii' codec can't encode 
character u'\\xe9' in position 2254: ordinal not in range(128)\r\n \r\n\r\nCheck the log at ~/.pre-commit/pre-commit.log\r\n</pre>\r\n\r\nYou'll find the content on pre-commit.log on [pastebin](http://pastebin.com/Ls61EQDj).\r\n\r\nNow the if I comment out the install_scripts parameter, everything works fine\r\n\r\n<pre>\r\n$ cat setup.cfg \r\n[install]\r\n#install_scripts=/usr/sbin\r\n$ pre-commit clean\r\nCleaned /home/wilfried/.pre-commit.\r\n$ pre-commit run --all-files\r\n[INFO] Initializing environment for git://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\nTrim Trailing Whitespace.............................(no files to check)Skipped\r\n</pre>\r\n\r\nI'm running on a linux mint 18, with python 2.7.12 and pre-commit 0.9.2\r\n<pre>\r\n$ python --version\r\nPython 2.7.12\r\n$ pre-commit --version\r\npre-commit 0.9.2\r\n</pre>\r\n\r\nLet my know if you need anything else.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv):\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv):\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n else:\n venv_cmd.extend(['-p', os.path.realpath(sys.executable)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('pip', 'install', '.') + additional_dependencies,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/python.py"}]} | 2,330 | 126 |
gh_patches_debug_16285 | rasdani/github-patches | git_diff | statsmodels__statsmodels-4197 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue in removing NaNs from the trend variable
Hi,
I am trying to use your code given in the link: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/seasonal.py
to remove the NaNs which occur due to the moving averages. When I set the variable extrapolate_trend > 0, I get a ValueError saying:
ValueError: could not broadcast input array from shape (26,1) into shape (26)
I want to extrapolate the trend values for a time series at a week level
</issue>
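A minimal reconstruction of the reported error (the 26-point length is taken from the message above): when `trend` is one-dimensional, the extrapolated block comes back with shape `(n, 1)` and cannot be assigned into a length-`n` slice:

```python
import numpy as np

trend = np.full(26, np.nan)                           # 1-D trend, as for a single series
extra = (np.arange(26) * np.c_[1.0] + np.c_[0.0]).T   # extrapolation comes back as (26, 1)

trend[:] = extra  # ValueError: could not broadcast input array from shape (26,1) into shape (26)
```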
<code>
[start of statsmodels/tsa/seasonal.py]
1 """
2 Seasonal Decomposition by Moving Averages
3 """
4 from statsmodels.compat.python import lmap, range, iteritems
5 import numpy as np
6 from pandas.core.nanops import nanmean as pd_nanmean
7 from .filters._utils import (_maybe_get_pandas_wrapper_freq,
8 _maybe_get_pandas_wrapper)
9 from .filters.filtertools import convolution_filter
10 from statsmodels.tsa.tsatools import freq_to_period
11
12
13 def seasonal_mean(x, freq):
14 """
15 Return means for each period in x. freq is an int that gives the
16 number of periods per cycle. E.g., 12 for monthly. NaNs are ignored
17 in the mean.
18 """
19 return np.array([pd_nanmean(x[i::freq], axis=0) for i in range(freq)])
20
21
22 def _extrapolate_trend(trend, npoints):
23 """
24 Replace nan values on trend's end-points with least-squares extrapolated
25 values with regression considering npoints closest defined points.
26 """
27 front = next(i for i, vals in enumerate(trend)
28 if not np.any(np.isnan(vals)))
29 back = trend.shape[0] - 1 - next(i for i, vals in enumerate(trend[::-1])
30 if not np.any(np.isnan(vals)))
31 front_last = min(front + npoints, back)
32 back_first = max(front, back - npoints)
33
34 k, n = np.linalg.lstsq(
35 np.c_[np.arange(front, front_last), np.ones(front_last - front)],
36 trend[front:front_last])[0]
37 trend[:front] = (np.arange(0, front) * np.c_[k] + np.c_[n]).T
38
39 k, n = np.linalg.lstsq(
40 np.c_[np.arange(back_first, back), np.ones(back - back_first)],
41 trend[back_first:back])[0]
42 trend[back + 1:] = (np.arange(back + 1, trend.shape[0]) * np.c_[k] + np.c_[n]).T
43
44 return trend
45
46
47 def seasonal_decompose(x, model="additive", filt=None, freq=None, two_sided=True,
48 extrapolate_trend=0):
49 """
50 Seasonal decomposition using moving averages
51
52 Parameters
53 ----------
54 x : array-like
55 Time series. If 2d, individual series are in columns.
56 model : str {"additive", "multiplicative"}
57 Type of seasonal component. Abbreviations are accepted.
58 filt : array-like
59 The filter coefficients for filtering out the seasonal component.
60 The concrete moving average method used in filtering is determined by two_sided.
61 freq : int, optional
62 Frequency of the series. Must be used if x is not a pandas object.
63 Overrides default periodicity of x if x is a pandas
64 object with a timeseries index.
65 two_sided : bool
66 The moving average method used in filtering.
67 If True (default), a centered moving average is computed using the filt.
68 If False, the filter coefficients are for past values only.
69 extrapolate_trend : int or 'freq', optional
70 If set to > 0, the trend resulting from the convolution is
71 linear least-squares extrapolated on both ends (or the single one
72 if two_sided is False) considering this many (+1) closest points.
73 If set to 'freq', use `freq` closest points. Setting this parameter
74 results in no NaN values in trend or resid components.
75
76 Returns
77 -------
78 results : obj
79 A object with seasonal, trend, and resid attributes.
80
81 Notes
82 -----
83 This is a naive decomposition. More sophisticated methods should
84 be preferred.
85
86 The additive model is Y[t] = T[t] + S[t] + e[t]
87
88 The multiplicative model is Y[t] = T[t] * S[t] * e[t]
89
90 The seasonal component is first removed by applying a convolution
91 filter to the data. The average of this smoothed series for each
92 period is the returned seasonal component.
93
94 See Also
95 --------
96 statsmodels.tsa.filters.bk_filter.bkfilter
97 statsmodels.tsa.filters.cf_filter.xffilter
98 statsmodels.tsa.filters.hp_filter.hpfilter
99 statsmodels.tsa.filters.convolution_filter
100 """
101 if freq is None:
102 _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)
103 else:
104 _pandas_wrapper = _maybe_get_pandas_wrapper(x)
105 pfreq = None
106 x = np.asanyarray(x).squeeze()
107 nobs = len(x)
108
109 if not np.all(np.isfinite(x)):
110 raise ValueError("This function does not handle missing values")
111 if model.startswith('m'):
112 if np.any(x <= 0):
113 raise ValueError("Multiplicative seasonality is not appropriate "
114 "for zero and negative values")
115
116 if freq is None:
117 if pfreq is not None:
118 pfreq = freq_to_period(pfreq)
119 freq = pfreq
120 else:
121 raise ValueError("You must specify a freq or x must be a "
122 "pandas object with a timeseries index with "
123 "a freq not set to None")
124
125 if filt is None:
126 if freq % 2 == 0: # split weights at ends
127 filt = np.array([.5] + [1] * (freq - 1) + [.5]) / freq
128 else:
129 filt = np.repeat(1./freq, freq)
130
131 nsides = int(two_sided) + 1
132 trend = convolution_filter(x, filt, nsides)
133
134 if extrapolate_trend == 'freq':
135 extrapolate_trend = freq - 1
136
137 if extrapolate_trend > 0:
138 trend = _extrapolate_trend(trend, extrapolate_trend + 1)
139
140 if model.startswith('m'):
141 detrended = x / trend
142 else:
143 detrended = x - trend
144
145 period_averages = seasonal_mean(detrended, freq)
146
147 if model.startswith('m'):
148 period_averages /= np.mean(period_averages, axis=0)
149 else:
150 period_averages -= np.mean(period_averages, axis=0)
151
152 seasonal = np.tile(period_averages.T, nobs // freq + 1).T[:nobs]
153
154 if model.startswith('m'):
155 resid = x / seasonal / trend
156 else:
157 resid = detrended - seasonal
158
159 results = lmap(_pandas_wrapper, [seasonal, trend, resid, x])
160 return DecomposeResult(seasonal=results[0], trend=results[1],
161 resid=results[2], observed=results[3])
162
163
164 class DecomposeResult(object):
165 def __init__(self, **kwargs):
166 for key, value in iteritems(kwargs):
167 setattr(self, key, value)
168 self.nobs = len(self.observed)
169
170 def plot(self):
171 from statsmodels.graphics.utils import _import_mpl
172 plt = _import_mpl()
173 fig, axes = plt.subplots(4, 1, sharex=True)
174 if hasattr(self.observed, 'plot'): # got pandas use it
175 self.observed.plot(ax=axes[0], legend=False)
176 axes[0].set_ylabel('Observed')
177 self.trend.plot(ax=axes[1], legend=False)
178 axes[1].set_ylabel('Trend')
179 self.seasonal.plot(ax=axes[2], legend=False)
180 axes[2].set_ylabel('Seasonal')
181 self.resid.plot(ax=axes[3], legend=False)
182 axes[3].set_ylabel('Residual')
183 else:
184 axes[0].plot(self.observed)
185 axes[0].set_ylabel('Observed')
186 axes[1].plot(self.trend)
187 axes[1].set_ylabel('Trend')
188 axes[2].plot(self.seasonal)
189 axes[2].set_ylabel('Seasonal')
190 axes[3].plot(self.resid)
191 axes[3].set_ylabel('Residual')
192 axes[3].set_xlabel('Time')
193 axes[3].set_xlim(0, self.nobs)
194
195 fig.tight_layout()
196 return fig
197
198
199 if __name__ == "__main__":
200 x = np.array([-50, 175, 149, 214, 247, 237, 225, 329, 729, 809,
201 530, 489, 540, 457, 195, 176, 337, 239, 128, 102,
202 232, 429, 3, 98, 43, -141, -77, -13, 125, 361, -45, 184])
203 results = seasonal_decompose(x, freq=4)
204
205 from pandas import DataFrame, DatetimeIndex
206 data = DataFrame(x, DatetimeIndex(start='1/1/1951',
207 periods=len(x),
208 freq='Q'))
209
210 res = seasonal_decompose(data)
211
212
[end of statsmodels/tsa/seasonal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/statsmodels/tsa/seasonal.py b/statsmodels/tsa/seasonal.py
--- a/statsmodels/tsa/seasonal.py
+++ b/statsmodels/tsa/seasonal.py
@@ -34,12 +34,18 @@
k, n = np.linalg.lstsq(
np.c_[np.arange(front, front_last), np.ones(front_last - front)],
trend[front:front_last])[0]
- trend[:front] = (np.arange(0, front) * np.c_[k] + np.c_[n]).T
+ extra = (np.arange(0, front) * np.c_[k] + np.c_[n]).T
+ if trend.ndim == 1:
+ extra = extra.squeeze()
+ trend[:front] = extra
k, n = np.linalg.lstsq(
np.c_[np.arange(back_first, back), np.ones(back - back_first)],
trend[back_first:back])[0]
- trend[back + 1:] = (np.arange(back + 1, trend.shape[0]) * np.c_[k] + np.c_[n]).T
+ extra = (np.arange(back + 1, trend.shape[0]) * np.c_[k] + np.c_[n]).T
+ if trend.ndim == 1:
+ extra = extra.squeeze()
+ trend[back + 1:] = extra
return trend
| {"golden_diff": "diff --git a/statsmodels/tsa/seasonal.py b/statsmodels/tsa/seasonal.py\n--- a/statsmodels/tsa/seasonal.py\n+++ b/statsmodels/tsa/seasonal.py\n@@ -34,12 +34,18 @@\n k, n = np.linalg.lstsq(\n np.c_[np.arange(front, front_last), np.ones(front_last - front)],\n trend[front:front_last])[0]\n- trend[:front] = (np.arange(0, front) * np.c_[k] + np.c_[n]).T\n+ extra = (np.arange(0, front) * np.c_[k] + np.c_[n]).T\n+ if trend.ndim == 1:\n+ extra = extra.squeeze()\n+ trend[:front] = extra\n \n k, n = np.linalg.lstsq(\n np.c_[np.arange(back_first, back), np.ones(back - back_first)],\n trend[back_first:back])[0]\n- trend[back + 1:] = (np.arange(back + 1, trend.shape[0]) * np.c_[k] + np.c_[n]).T\n+ extra = (np.arange(back + 1, trend.shape[0]) * np.c_[k] + np.c_[n]).T\n+ if trend.ndim == 1:\n+ extra = extra.squeeze()\n+ trend[back + 1:] = extra\n \n return trend\n", "issue": "Issue in removing nan's from the trend variable\nHi,\r\nI am trying to use your code given in the link: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/seasonal.py\r\nto remove the nan's which occur due to the moving averages. When i put the variable extrapolate_trend >0 then i get a value error saying: \r\nValueError: could not broadcast input array from shape (26,1) into shape (26)\r\nI want to extrapolate the trend values for a time series at a week level\n", "before_files": [{"content": "\"\"\"\nSeasonal Decomposition by Moving Averages\n\"\"\"\nfrom statsmodels.compat.python import lmap, range, iteritems\nimport numpy as np\nfrom pandas.core.nanops import nanmean as pd_nanmean\nfrom .filters._utils import (_maybe_get_pandas_wrapper_freq,\n _maybe_get_pandas_wrapper)\nfrom .filters.filtertools import convolution_filter\nfrom statsmodels.tsa.tsatools import freq_to_period\n\n\ndef seasonal_mean(x, freq):\n \"\"\"\n Return means for each period in x. freq is an int that gives the\n number of periods per cycle. E.g., 12 for monthly. NaNs are ignored\n in the mean.\n \"\"\"\n return np.array([pd_nanmean(x[i::freq], axis=0) for i in range(freq)])\n\n\ndef _extrapolate_trend(trend, npoints):\n \"\"\"\n Replace nan values on trend's end-points with least-squares extrapolated\n values with regression considering npoints closest defined points.\n \"\"\"\n front = next(i for i, vals in enumerate(trend)\n if not np.any(np.isnan(vals)))\n back = trend.shape[0] - 1 - next(i for i, vals in enumerate(trend[::-1])\n if not np.any(np.isnan(vals)))\n front_last = min(front + npoints, back)\n back_first = max(front, back - npoints)\n\n k, n = np.linalg.lstsq(\n np.c_[np.arange(front, front_last), np.ones(front_last - front)],\n trend[front:front_last])[0]\n trend[:front] = (np.arange(0, front) * np.c_[k] + np.c_[n]).T\n\n k, n = np.linalg.lstsq(\n np.c_[np.arange(back_first, back), np.ones(back - back_first)],\n trend[back_first:back])[0]\n trend[back + 1:] = (np.arange(back + 1, trend.shape[0]) * np.c_[k] + np.c_[n]).T\n\n return trend\n\n\ndef seasonal_decompose(x, model=\"additive\", filt=None, freq=None, two_sided=True,\n extrapolate_trend=0):\n \"\"\"\n Seasonal decomposition using moving averages\n\n Parameters\n ----------\n x : array-like\n Time series. If 2d, individual series are in columns.\n model : str {\"additive\", \"multiplicative\"}\n Type of seasonal component. 
Abbreviations are accepted.\n filt : array-like\n The filter coefficients for filtering out the seasonal component.\n The concrete moving average method used in filtering is determined by two_sided.\n freq : int, optional\n Frequency of the series. Must be used if x is not a pandas object.\n Overrides default periodicity of x if x is a pandas\n object with a timeseries index.\n two_sided : bool\n The moving average method used in filtering.\n If True (default), a centered moving average is computed using the filt.\n If False, the filter coefficients are for past values only.\n extrapolate_trend : int or 'freq', optional\n If set to > 0, the trend resulting from the convolution is\n linear least-squares extrapolated on both ends (or the single one\n if two_sided is False) considering this many (+1) closest points.\n If set to 'freq', use `freq` closest points. Setting this parameter\n results in no NaN values in trend or resid components.\n\n Returns\n -------\n results : obj\n A object with seasonal, trend, and resid attributes.\n\n Notes\n -----\n This is a naive decomposition. More sophisticated methods should\n be preferred.\n\n The additive model is Y[t] = T[t] + S[t] + e[t]\n\n The multiplicative model is Y[t] = T[t] * S[t] * e[t]\n\n The seasonal component is first removed by applying a convolution\n filter to the data. The average of this smoothed series for each\n period is the returned seasonal component.\n\n See Also\n --------\n statsmodels.tsa.filters.bk_filter.bkfilter\n statsmodels.tsa.filters.cf_filter.xffilter\n statsmodels.tsa.filters.hp_filter.hpfilter\n statsmodels.tsa.filters.convolution_filter\n \"\"\"\n if freq is None:\n _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)\n else:\n _pandas_wrapper = _maybe_get_pandas_wrapper(x)\n pfreq = None\n x = np.asanyarray(x).squeeze()\n nobs = len(x)\n\n if not np.all(np.isfinite(x)):\n raise ValueError(\"This function does not handle missing values\")\n if model.startswith('m'):\n if np.any(x <= 0):\n raise ValueError(\"Multiplicative seasonality is not appropriate \"\n \"for zero and negative values\")\n\n if freq is None:\n if pfreq is not None:\n pfreq = freq_to_period(pfreq)\n freq = pfreq\n else:\n raise ValueError(\"You must specify a freq or x must be a \"\n \"pandas object with a timeseries index with \"\n \"a freq not set to None\")\n\n if filt is None:\n if freq % 2 == 0: # split weights at ends\n filt = np.array([.5] + [1] * (freq - 1) + [.5]) / freq\n else:\n filt = np.repeat(1./freq, freq)\n\n nsides = int(two_sided) + 1\n trend = convolution_filter(x, filt, nsides)\n\n if extrapolate_trend == 'freq':\n extrapolate_trend = freq - 1\n\n if extrapolate_trend > 0:\n trend = _extrapolate_trend(trend, extrapolate_trend + 1)\n\n if model.startswith('m'):\n detrended = x / trend\n else:\n detrended = x - trend\n\n period_averages = seasonal_mean(detrended, freq)\n\n if model.startswith('m'):\n period_averages /= np.mean(period_averages, axis=0)\n else:\n period_averages -= np.mean(period_averages, axis=0)\n\n seasonal = np.tile(period_averages.T, nobs // freq + 1).T[:nobs]\n\n if model.startswith('m'):\n resid = x / seasonal / trend\n else:\n resid = detrended - seasonal\n\n results = lmap(_pandas_wrapper, [seasonal, trend, resid, x])\n return DecomposeResult(seasonal=results[0], trend=results[1],\n resid=results[2], observed=results[3])\n\n\nclass DecomposeResult(object):\n def __init__(self, **kwargs):\n for key, value in iteritems(kwargs):\n setattr(self, key, value)\n self.nobs = len(self.observed)\n\n def 
plot(self):\n from statsmodels.graphics.utils import _import_mpl\n plt = _import_mpl()\n fig, axes = plt.subplots(4, 1, sharex=True)\n if hasattr(self.observed, 'plot'): # got pandas use it\n self.observed.plot(ax=axes[0], legend=False)\n axes[0].set_ylabel('Observed')\n self.trend.plot(ax=axes[1], legend=False)\n axes[1].set_ylabel('Trend')\n self.seasonal.plot(ax=axes[2], legend=False)\n axes[2].set_ylabel('Seasonal')\n self.resid.plot(ax=axes[3], legend=False)\n axes[3].set_ylabel('Residual')\n else:\n axes[0].plot(self.observed)\n axes[0].set_ylabel('Observed')\n axes[1].plot(self.trend)\n axes[1].set_ylabel('Trend')\n axes[2].plot(self.seasonal)\n axes[2].set_ylabel('Seasonal')\n axes[3].plot(self.resid)\n axes[3].set_ylabel('Residual')\n axes[3].set_xlabel('Time')\n axes[3].set_xlim(0, self.nobs)\n\n fig.tight_layout()\n return fig\n\n\nif __name__ == \"__main__\":\n x = np.array([-50, 175, 149, 214, 247, 237, 225, 329, 729, 809,\n 530, 489, 540, 457, 195, 176, 337, 239, 128, 102,\n 232, 429, 3, 98, 43, -141, -77, -13, 125, 361, -45, 184])\n results = seasonal_decompose(x, freq=4)\n\n from pandas import DataFrame, DatetimeIndex\n data = DataFrame(x, DatetimeIndex(start='1/1/1951',\n periods=len(x),\n freq='Q'))\n\n res = seasonal_decompose(data)\n\n", "path": "statsmodels/tsa/seasonal.py"}]} | 3,208 | 314 |
gh_patches_debug_16228 | rasdani/github-patches | git_diff | mesonbuild__meson-6606 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
.hgtags included in dist archive
**Describe the bug**
When running `ninja dist` the resulting `tar.xz` includes the `.hgtags` files that distros (namely debian) will mark as a packaging error.
**To Reproduce**
Run `ninja dist` on a Mercurial repository that has at least 1 tag.
**Expected behavior**
That the archive does not have the `.hgtags` file in it. I just did an `hg archive` and unfortunately saw that `.hgtags` was there as well.
**system parameters**
* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? Native
* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) Linux, Debian Unstable
* what Python version are you using e.g. 3.8.0: 3.7.6
* what `meson --version`: 0.53.1
* what `ninja --version` if it's a Ninja build: 1.9.0
</issue>
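As context for the fix direction (a sketch, not the project's actual implementation): Mercurial's `archive` command accepts `-X`/`--exclude` glob patterns, so the bookkeeping files can be filtered out at archive time. The helper name below is made up; note that a relative `-X` pattern is resolved against the current working directory rather than the repository root, hence the absolute pattern.

```python
# Illustrative sketch: produce the dist tarball while excluding Mercurial
# bookkeeping files such as .hgtags and .hgignore, but keep .hg_archival.txt
# (it records which revision the archive was produced from).
import os
import subprocess

def hg_archive_clean(src_root: str, tarname: str) -> None:
    # .hg[a-z]* matches .hgtags/.hgignore but not .hg_archival.txt; the pattern
    # is made absolute because -X resolves relative paths against the cwd.
    exclude = os.path.join(src_root, '.hg[a-z]*')
    subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar',
                           '-X', exclude, tarname])
```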
<code>
[start of mesonbuild/mdist.py]
1 # Copyright 2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import gzip
17 import os
18 import sys
19 import shutil
20 import subprocess
21 import hashlib
22 import json
23 from glob import glob
24 from pathlib import Path
25 from mesonbuild.environment import detect_ninja
26 from mesonbuild.mesonlib import windows_proof_rmtree, MesonException
27 from mesonbuild.wrap import wrap
28 from mesonbuild import mlog, build
29
30 archive_choices = ['gztar', 'xztar', 'zip']
31 archive_extension = {'gztar': '.tar.gz',
32 'xztar': '.tar.xz',
33 'zip': '.zip'}
34
35 def add_arguments(parser):
36 parser.add_argument('-C', default='.', dest='wd',
37 help='directory to cd into before running')
38 parser.add_argument('--formats', default='xztar',
39 help='Comma separated list of archive types to create.')
40 parser.add_argument('--include-subprojects', action='store_true',
41 help='Include source code of subprojects that have been used for the build.')
42
43
44 def create_hash(fname):
45 hashname = fname + '.sha256sum'
46 m = hashlib.sha256()
47 m.update(open(fname, 'rb').read())
48 with open(hashname, 'w') as f:
49 f.write('%s %s\n' % (m.hexdigest(), os.path.basename(fname)))
50
51
52 def del_gitfiles(dirname):
53 for f in glob(os.path.join(dirname, '.git*')):
54 if os.path.isdir(f) and not os.path.islink(f):
55 windows_proof_rmtree(f)
56 else:
57 os.unlink(f)
58
59 def process_submodules(dirname):
60 module_file = os.path.join(dirname, '.gitmodules')
61 if not os.path.exists(module_file):
62 return
63 subprocess.check_call(['git', 'submodule', 'update', '--init', '--recursive'], cwd=dirname)
64 for line in open(module_file):
65 line = line.strip()
66 if '=' not in line:
67 continue
68 k, v = line.split('=', 1)
69 k = k.strip()
70 v = v.strip()
71 if k != 'path':
72 continue
73 del_gitfiles(os.path.join(dirname, v))
74
75
76 def run_dist_scripts(src_root, bld_root, dist_root, dist_scripts):
77 assert(os.path.isabs(dist_root))
78 env = os.environ.copy()
79 env['MESON_DIST_ROOT'] = dist_root
80 env['MESON_SOURCE_ROOT'] = src_root
81 env['MESON_BUILD_ROOT'] = bld_root
82 for d in dist_scripts:
83 script = d['exe']
84 args = d['args']
85 name = ' '.join(script + args)
86 print('Running custom dist script {!r}'.format(name))
87 try:
88 rc = subprocess.call(script + args, env=env)
89 if rc != 0:
90 sys.exit('Dist script errored out')
91 except OSError:
92 print('Failed to run dist script {!r}'.format(name))
93 sys.exit(1)
94
95 def is_git(src_root):
96 _git = os.path.join(src_root, '.git')
97 return os.path.isdir(_git) or os.path.isfile(_git)
98
99 def git_have_dirty_index(src_root):
100 '''Check whether there are uncommitted changes in git'''
101 ret = subprocess.call(['git', '-C', src_root, 'diff-index', '--quiet', 'HEAD'])
102 return ret == 1
103
104 def git_clone(src_root, distdir):
105 if git_have_dirty_index(src_root):
106 mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')
107 if os.path.exists(distdir):
108 shutil.rmtree(distdir)
109 os.makedirs(distdir)
110 subprocess.check_call(['git', 'clone', '--shared', src_root, distdir])
111 process_submodules(distdir)
112 del_gitfiles(distdir)
113
114 def create_dist_git(dist_name, archives, src_root, bld_root, dist_sub, dist_scripts, subprojects):
115 distdir = os.path.join(dist_sub, dist_name)
116 git_clone(src_root, distdir)
117 for path in subprojects:
118 sub_src_root = os.path.join(src_root, path)
119 sub_distdir = os.path.join(distdir, path)
120 if os.path.exists(sub_distdir):
121 continue
122 if is_git(sub_src_root):
123 git_clone(sub_src_root, sub_distdir)
124 else:
125 shutil.copytree(sub_src_root, sub_distdir)
126 run_dist_scripts(src_root, bld_root, distdir, dist_scripts)
127 output_names = []
128 for a in archives:
129 compressed_name = distdir + archive_extension[a]
130 shutil.make_archive(distdir, a, root_dir=dist_sub, base_dir=dist_name)
131 output_names.append(compressed_name)
132 shutil.rmtree(distdir)
133 return output_names
134
135 def is_hg(src_root):
136 return os.path.isdir(os.path.join(src_root, '.hg'))
137
138 def hg_have_dirty_index(src_root):
139 '''Check whether there are uncommitted changes in hg'''
140 out = subprocess.check_output(['hg', '-R', src_root, 'summary'])
141 return b'commit: (clean)' not in out
142
143 def create_dist_hg(dist_name, archives, src_root, bld_root, dist_sub, dist_scripts):
144 if hg_have_dirty_index(src_root):
145 mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')
146 if dist_scripts:
147 mlog.warning('dist scripts are not supported in Mercurial projects')
148
149 os.makedirs(dist_sub, exist_ok=True)
150 tarname = os.path.join(dist_sub, dist_name + '.tar')
151 xzname = tarname + '.xz'
152 gzname = tarname + '.gz'
153 zipname = os.path.join(dist_sub, dist_name + '.zip')
154 subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])
155 output_names = []
156 if 'xztar' in archives:
157 import lzma
158 with lzma.open(xzname, 'wb') as xf, open(tarname, 'rb') as tf:
159 shutil.copyfileobj(tf, xf)
160 output_names.append(xzname)
161 if 'gztar' in archives:
162 with gzip.open(gzname, 'wb') as zf, open(tarname, 'rb') as tf:
163 shutil.copyfileobj(tf, zf)
164 output_names.append(gzname)
165 os.unlink(tarname)
166 if 'zip' in archives:
167 subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'zip', zipname])
168 output_names.append(zipname)
169 return output_names
170
171
172 def check_dist(packagename, meson_command, extra_meson_args, bld_root, privdir):
173 print('Testing distribution package %s' % packagename)
174 unpackdir = os.path.join(privdir, 'dist-unpack')
175 builddir = os.path.join(privdir, 'dist-build')
176 installdir = os.path.join(privdir, 'dist-install')
177 for p in (unpackdir, builddir, installdir):
178 if os.path.exists(p):
179 shutil.rmtree(p)
180 os.mkdir(p)
181 ninja_bin = detect_ninja()
182 try:
183 shutil.unpack_archive(packagename, unpackdir)
184 unpacked_files = glob(os.path.join(unpackdir, '*'))
185 assert(len(unpacked_files) == 1)
186 unpacked_src_dir = unpacked_files[0]
187 with open(os.path.join(bld_root, 'meson-info', 'intro-buildoptions.json')) as boptions:
188 meson_command += ['-D{name}={value}'.format(**o) for o in json.load(boptions)
189 if o['name'] not in ['backend', 'install_umask']]
190 meson_command += extra_meson_args
191 if subprocess.call(meson_command + ['--backend=ninja', unpacked_src_dir, builddir]) != 0:
192 print('Running Meson on distribution package failed')
193 return 1
194 if subprocess.call([ninja_bin], cwd=builddir) != 0:
195 print('Compiling the distribution package failed')
196 return 1
197 if subprocess.call([ninja_bin, 'test'], cwd=builddir) != 0:
198 print('Running unit tests on the distribution package failed')
199 return 1
200 myenv = os.environ.copy()
201 myenv['DESTDIR'] = installdir
202 if subprocess.call([ninja_bin, 'install'], cwd=builddir, env=myenv) != 0:
203 print('Installing the distribution package failed')
204 return 1
205 finally:
206 shutil.rmtree(unpackdir)
207 shutil.rmtree(builddir)
208 shutil.rmtree(installdir)
209 print('Distribution package %s tested' % packagename)
210 return 0
211
212 def determine_archives_to_generate(options):
213 result = []
214 for i in options.formats.split(','):
215 if i not in archive_choices:
216 sys.exit('Value "{}" not one of permitted values {}.'.format(i, archive_choices))
217 result.append(i)
218 if len(i) == 0:
219 sys.exit('No archive types specified.')
220 return result
221
222 def run(options):
223 options.wd = os.path.abspath(options.wd)
224 buildfile = Path(options.wd) / 'meson-private' / 'build.dat'
225 if not buildfile.is_file():
226 raise MesonException('Directory {!r} does not seem to be a Meson build directory.'.format(options.wd))
227 b = build.load(options.wd)
228 # This import must be load delayed, otherwise it will get the default
229 # value of None.
230 from mesonbuild.mesonlib import meson_command
231 src_root = b.environment.source_dir
232 bld_root = b.environment.build_dir
233 priv_dir = os.path.join(bld_root, 'meson-private')
234 dist_sub = os.path.join(bld_root, 'meson-dist')
235
236 dist_name = b.project_name + '-' + b.project_version
237
238 archives = determine_archives_to_generate(options)
239
240 subprojects = []
241 extra_meson_args = []
242 if options.include_subprojects:
243 subproject_dir = os.path.join(src_root, b.subproject_dir)
244 for sub in b.subprojects:
245 _, directory = wrap.get_directory(subproject_dir, sub)
246 subprojects.append(os.path.join(b.subproject_dir, directory))
247 extra_meson_args.append('-Dwrap_mode=nodownload')
248
249 if is_git(src_root):
250 names = create_dist_git(dist_name, archives, src_root, bld_root, dist_sub, b.dist_scripts, subprojects)
251 elif is_hg(src_root):
252 if subprojects:
253 print('--include-subprojects option currently not supported with Mercurial')
254 return 1
255 names = create_dist_hg(dist_name, archives, src_root, bld_root, dist_sub, b.dist_scripts)
256 else:
257 print('Dist currently only works with Git or Mercurial repos')
258 return 1
259 if names is None:
260 return 1
261 # Check only one.
262 rc = check_dist(names[0], meson_command, extra_meson_args, bld_root, priv_dir)
263 if rc == 0:
264 for name in names:
265 create_hash(name)
266 return rc
267
[end of mesonbuild/mdist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesonbuild/mdist.py b/mesonbuild/mdist.py
--- a/mesonbuild/mdist.py
+++ b/mesonbuild/mdist.py
@@ -151,7 +151,16 @@
xzname = tarname + '.xz'
gzname = tarname + '.gz'
zipname = os.path.join(dist_sub, dist_name + '.zip')
- subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])
+ # Note that -X interprets relative paths using the current working
+ # directory, not the repository root, so this must be an absolute path:
+ # https://bz.mercurial-scm.org/show_bug.cgi?id=6267
+ #
+ # .hg[a-z]* is used instead of .hg* to keep .hg_archival.txt, which may
+ # be useful to link the tarball to the Mercurial revision for either
+ # manual inspection or in case any code interprets it for a --version or
+ # similar.
+ subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar',
+ '-X', src_root + '/.hg[a-z]*', tarname])
output_names = []
if 'xztar' in archives:
import lzma
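
A quick, hypothetical way to sanity-check what the patch above changes (the tarball name is made up; only `tarfile` from the standard library is used):

```python
# Hypothetical check on a freshly generated dist tarball: .hgtags must be gone,
# while .hg_archival.txt should still be shipped.
import tarfile

with tarfile.open('meson-dist/project-1.0.tar') as tf:
    names = tf.getnames()
    assert not any(name.endswith('/.hgtags') for name in names)
    assert any(name.endswith('/.hg_archival.txt') for name in names)
```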
| {"golden_diff": "diff --git a/mesonbuild/mdist.py b/mesonbuild/mdist.py\n--- a/mesonbuild/mdist.py\n+++ b/mesonbuild/mdist.py\n@@ -151,7 +151,16 @@\n xzname = tarname + '.xz'\n gzname = tarname + '.gz'\n zipname = os.path.join(dist_sub, dist_name + '.zip')\n- subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])\n+ # Note that -X interprets relative paths using the current working\n+ # directory, not the repository root, so this must be an absolute path:\n+ # https://bz.mercurial-scm.org/show_bug.cgi?id=6267\n+ #\n+ # .hg[a-z]* is used instead of .hg* to keep .hg_archival.txt, which may\n+ # be useful to link the tarball to the Mercurial revision for either\n+ # manual inspection or in case any code interprets it for a --version or\n+ # similar.\n+ subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar',\n+ '-X', src_root + '/.hg[a-z]*', tarname])\n output_names = []\n if 'xztar' in archives:\n import lzma\n", "issue": ".hgtags included in dist archive\n**Describe the bug**\r\n\r\nWhen running `ninja dist` the resulting `tar.xz` includes the `.hgtags` files that distros (namely debian) will mark as a packaging error.\r\n\r\n**To Reproduce**\r\n\r\nRun `ninja dist` on a mercurial repository that has at least 1 tag.\r\n\r\n**Expected behavior**\r\nThat the archive does not have the `.hgtags` file in it. I just did an `hg archive` and unfortunately saw that `.hgtags` was there as well.\r\n\r\n**system parameters**\r\n* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? Native\r\n* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) Linux, Debian Unstable\r\n* what Python version are you using e.g. 
3.8.0: 3.7.6\r\n* what `meson --version`: 0.53.1\r\n* what `ninja --version` if it's a Ninja build: 1.9.0\r\n\n", "before_files": [{"content": "# Copyright 2017 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport gzip\nimport os\nimport sys\nimport shutil\nimport subprocess\nimport hashlib\nimport json\nfrom glob import glob\nfrom pathlib import Path\nfrom mesonbuild.environment import detect_ninja\nfrom mesonbuild.mesonlib import windows_proof_rmtree, MesonException\nfrom mesonbuild.wrap import wrap\nfrom mesonbuild import mlog, build\n\narchive_choices = ['gztar', 'xztar', 'zip']\narchive_extension = {'gztar': '.tar.gz',\n 'xztar': '.tar.xz',\n 'zip': '.zip'}\n\ndef add_arguments(parser):\n parser.add_argument('-C', default='.', dest='wd',\n help='directory to cd into before running')\n parser.add_argument('--formats', default='xztar',\n help='Comma separated list of archive types to create.')\n parser.add_argument('--include-subprojects', action='store_true',\n help='Include source code of subprojects that have been used for the build.')\n\n\ndef create_hash(fname):\n hashname = fname + '.sha256sum'\n m = hashlib.sha256()\n m.update(open(fname, 'rb').read())\n with open(hashname, 'w') as f:\n f.write('%s %s\\n' % (m.hexdigest(), os.path.basename(fname)))\n\n\ndef del_gitfiles(dirname):\n for f in glob(os.path.join(dirname, '.git*')):\n if os.path.isdir(f) and not os.path.islink(f):\n windows_proof_rmtree(f)\n else:\n os.unlink(f)\n\ndef process_submodules(dirname):\n module_file = os.path.join(dirname, '.gitmodules')\n if not os.path.exists(module_file):\n return\n subprocess.check_call(['git', 'submodule', 'update', '--init', '--recursive'], cwd=dirname)\n for line in open(module_file):\n line = line.strip()\n if '=' not in line:\n continue\n k, v = line.split('=', 1)\n k = k.strip()\n v = v.strip()\n if k != 'path':\n continue\n del_gitfiles(os.path.join(dirname, v))\n\n\ndef run_dist_scripts(src_root, bld_root, dist_root, dist_scripts):\n assert(os.path.isabs(dist_root))\n env = os.environ.copy()\n env['MESON_DIST_ROOT'] = dist_root\n env['MESON_SOURCE_ROOT'] = src_root\n env['MESON_BUILD_ROOT'] = bld_root\n for d in dist_scripts:\n script = d['exe']\n args = d['args']\n name = ' '.join(script + args)\n print('Running custom dist script {!r}'.format(name))\n try:\n rc = subprocess.call(script + args, env=env)\n if rc != 0:\n sys.exit('Dist script errored out')\n except OSError:\n print('Failed to run dist script {!r}'.format(name))\n sys.exit(1)\n\ndef is_git(src_root):\n _git = os.path.join(src_root, '.git')\n return os.path.isdir(_git) or os.path.isfile(_git)\n\ndef git_have_dirty_index(src_root):\n '''Check whether there are uncommitted changes in git'''\n ret = subprocess.call(['git', '-C', src_root, 'diff-index', '--quiet', 'HEAD'])\n return ret == 1\n\ndef git_clone(src_root, distdir):\n if git_have_dirty_index(src_root):\n mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')\n if 
os.path.exists(distdir):\n shutil.rmtree(distdir)\n os.makedirs(distdir)\n subprocess.check_call(['git', 'clone', '--shared', src_root, distdir])\n process_submodules(distdir)\n del_gitfiles(distdir)\n\ndef create_dist_git(dist_name, archives, src_root, bld_root, dist_sub, dist_scripts, subprojects):\n distdir = os.path.join(dist_sub, dist_name)\n git_clone(src_root, distdir)\n for path in subprojects:\n sub_src_root = os.path.join(src_root, path)\n sub_distdir = os.path.join(distdir, path)\n if os.path.exists(sub_distdir):\n continue\n if is_git(sub_src_root):\n git_clone(sub_src_root, sub_distdir)\n else:\n shutil.copytree(sub_src_root, sub_distdir)\n run_dist_scripts(src_root, bld_root, distdir, dist_scripts)\n output_names = []\n for a in archives:\n compressed_name = distdir + archive_extension[a]\n shutil.make_archive(distdir, a, root_dir=dist_sub, base_dir=dist_name)\n output_names.append(compressed_name)\n shutil.rmtree(distdir)\n return output_names\n\ndef is_hg(src_root):\n return os.path.isdir(os.path.join(src_root, '.hg'))\n\ndef hg_have_dirty_index(src_root):\n '''Check whether there are uncommitted changes in hg'''\n out = subprocess.check_output(['hg', '-R', src_root, 'summary'])\n return b'commit: (clean)' not in out\n\ndef create_dist_hg(dist_name, archives, src_root, bld_root, dist_sub, dist_scripts):\n if hg_have_dirty_index(src_root):\n mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')\n if dist_scripts:\n mlog.warning('dist scripts are not supported in Mercurial projects')\n\n os.makedirs(dist_sub, exist_ok=True)\n tarname = os.path.join(dist_sub, dist_name + '.tar')\n xzname = tarname + '.xz'\n gzname = tarname + '.gz'\n zipname = os.path.join(dist_sub, dist_name + '.zip')\n subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])\n output_names = []\n if 'xztar' in archives:\n import lzma\n with lzma.open(xzname, 'wb') as xf, open(tarname, 'rb') as tf:\n shutil.copyfileobj(tf, xf)\n output_names.append(xzname)\n if 'gztar' in archives:\n with gzip.open(gzname, 'wb') as zf, open(tarname, 'rb') as tf:\n shutil.copyfileobj(tf, zf)\n output_names.append(gzname)\n os.unlink(tarname)\n if 'zip' in archives:\n subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'zip', zipname])\n output_names.append(zipname)\n return output_names\n\n\ndef check_dist(packagename, meson_command, extra_meson_args, bld_root, privdir):\n print('Testing distribution package %s' % packagename)\n unpackdir = os.path.join(privdir, 'dist-unpack')\n builddir = os.path.join(privdir, 'dist-build')\n installdir = os.path.join(privdir, 'dist-install')\n for p in (unpackdir, builddir, installdir):\n if os.path.exists(p):\n shutil.rmtree(p)\n os.mkdir(p)\n ninja_bin = detect_ninja()\n try:\n shutil.unpack_archive(packagename, unpackdir)\n unpacked_files = glob(os.path.join(unpackdir, '*'))\n assert(len(unpacked_files) == 1)\n unpacked_src_dir = unpacked_files[0]\n with open(os.path.join(bld_root, 'meson-info', 'intro-buildoptions.json')) as boptions:\n meson_command += ['-D{name}={value}'.format(**o) for o in json.load(boptions)\n if o['name'] not in ['backend', 'install_umask']]\n meson_command += extra_meson_args\n if subprocess.call(meson_command + ['--backend=ninja', unpacked_src_dir, builddir]) != 0:\n print('Running Meson on distribution package failed')\n return 1\n if subprocess.call([ninja_bin], cwd=builddir) != 0:\n print('Compiling the distribution package failed')\n return 1\n if 
subprocess.call([ninja_bin, 'test'], cwd=builddir) != 0:\n print('Running unit tests on the distribution package failed')\n return 1\n myenv = os.environ.copy()\n myenv['DESTDIR'] = installdir\n if subprocess.call([ninja_bin, 'install'], cwd=builddir, env=myenv) != 0:\n print('Installing the distribution package failed')\n return 1\n finally:\n shutil.rmtree(unpackdir)\n shutil.rmtree(builddir)\n shutil.rmtree(installdir)\n print('Distribution package %s tested' % packagename)\n return 0\n\ndef determine_archives_to_generate(options):\n result = []\n for i in options.formats.split(','):\n if i not in archive_choices:\n sys.exit('Value \"{}\" not one of permitted values {}.'.format(i, archive_choices))\n result.append(i)\n if len(i) == 0:\n sys.exit('No archive types specified.')\n return result\n\ndef run(options):\n options.wd = os.path.abspath(options.wd)\n buildfile = Path(options.wd) / 'meson-private' / 'build.dat'\n if not buildfile.is_file():\n raise MesonException('Directory {!r} does not seem to be a Meson build directory.'.format(options.wd))\n b = build.load(options.wd)\n # This import must be load delayed, otherwise it will get the default\n # value of None.\n from mesonbuild.mesonlib import meson_command\n src_root = b.environment.source_dir\n bld_root = b.environment.build_dir\n priv_dir = os.path.join(bld_root, 'meson-private')\n dist_sub = os.path.join(bld_root, 'meson-dist')\n\n dist_name = b.project_name + '-' + b.project_version\n\n archives = determine_archives_to_generate(options)\n\n subprojects = []\n extra_meson_args = []\n if options.include_subprojects:\n subproject_dir = os.path.join(src_root, b.subproject_dir)\n for sub in b.subprojects:\n _, directory = wrap.get_directory(subproject_dir, sub)\n subprojects.append(os.path.join(b.subproject_dir, directory))\n extra_meson_args.append('-Dwrap_mode=nodownload')\n\n if is_git(src_root):\n names = create_dist_git(dist_name, archives, src_root, bld_root, dist_sub, b.dist_scripts, subprojects)\n elif is_hg(src_root):\n if subprojects:\n print('--include-subprojects option currently not supported with Mercurial')\n return 1\n names = create_dist_hg(dist_name, archives, src_root, bld_root, dist_sub, b.dist_scripts)\n else:\n print('Dist currently only works with Git or Mercurial repos')\n return 1\n if names is None:\n return 1\n # Check only one.\n rc = check_dist(names[0], meson_command, extra_meson_args, bld_root, priv_dir)\n if rc == 0:\n for name in names:\n create_hash(name)\n return rc\n", "path": "mesonbuild/mdist.py"}]} | 4,071 | 310 |
gh_patches_debug_28711 | rasdani/github-patches | git_diff | Mailu__Mailu-877 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
command documentation uses `_` where `-` should be used
https://mailu.io/1.6/cli.html shows commands
```
alias
alias_delete
user
user_import
user_delete
config_update
```
where `flask mailu` shows the following subcommands:
```
Commands:
admin Create an admin user
advertise Advertise this server against statistic services.
alias Create an alias
alias-delete delete alias
config-update sync configuration with data from YAML-formatted stdin
domain
setlimits Set domain limits
setmanager Make a user manager of a domain
user Create a user
user-delete delete user
user-import Import a user along with password hash.
```
I think the documentation must be updated to use `-` rather than `_`.
</issue>
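The dash/underscore mismatch comes from Click itself: since Click 7.0, a command created from a function name gets its underscores replaced with dashes unless an explicit name is passed to the decorator, and the Mailu source below defines `user_import`, `alias_delete`, etc. without explicit names. A minimal, self-contained illustration (group and function names here are only for demonstration):

```python
# Minimal illustration of Click's default command naming (Click 7.0+).
import click

@click.group()
def mailu():
    """Example command group."""

@mailu.command()                     # exposed on the CLI as "user-import"
def user_import():
    """Import a user."""

@mailu.command(name="user_import")   # an explicit name would keep the underscore
def user_import_explicit():
    """Hypothetical variant registered under the old spelling."""

if __name__ == "__main__":
    mailu()
```

So updating the documentation to the dashed spellings simply matches what Click registers.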
<code>
[start of core/admin/mailu/manage.py]
1 from mailu import models
2
3 from flask import current_app as app
4 from flask import cli as flask_cli
5
6 import flask
7 import os
8 import socket
9 import uuid
10 import click
11
12
13 db = models.db
14
15
16 @click.group()
17 def mailu(cls=flask_cli.FlaskGroup):
18 """ Mailu command line
19 """
20
21
22 @mailu.command()
23 @flask_cli.with_appcontext
24 def advertise():
25 """ Advertise this server against statistic services.
26 """
27 if os.path.isfile(app.config["INSTANCE_ID_PATH"]):
28 with open(app.config["INSTANCE_ID_PATH"], "r") as handle:
29 instance_id = handle.read()
30 else:
31 instance_id = str(uuid.uuid4())
32 with open(app.config["INSTANCE_ID_PATH"], "w") as handle:
33 handle.write(instance_id)
34 if not app.config["DISABLE_STATISTICS"]:
35 try:
36 socket.gethostbyname(app.config["STATS_ENDPOINT"].format(instance_id))
37 except:
38 pass
39
40
41 @mailu.command()
42 @click.argument('localpart')
43 @click.argument('domain_name')
44 @click.argument('password')
45 @flask_cli.with_appcontext
46 def admin(localpart, domain_name, password):
47 """ Create an admin user
48 """
49 domain = models.Domain.query.get(domain_name)
50 if not domain:
51 domain = models.Domain(name=domain_name)
52 db.session.add(domain)
53 user = models.User(
54 localpart=localpart,
55 domain=domain,
56 global_admin=True
57 )
58 user.set_password(password)
59 db.session.add(user)
60 db.session.commit()
61
62
63 @mailu.command()
64 @click.argument('localpart')
65 @click.argument('domain_name')
66 @click.argument('password')
67 @click.argument('hash_scheme')
68 @flask_cli.with_appcontext
69 def user(localpart, domain_name, password, hash_scheme=None):
70 """ Create a user
71 """
72 if hash_scheme is None:
73 hash_scheme = app.config['PASSWORD_SCHEME']
74 domain = models.Domain.query.get(domain_name)
75 if not domain:
76 domain = models.Domain(name=domain_name)
77 db.session.add(domain)
78 user = models.User(
79 localpart=localpart,
80 domain=domain,
81 global_admin=False
82 )
83 user.set_password(password, hash_scheme=hash_scheme)
84 db.session.add(user)
85 db.session.commit()
86
87
88 @mailu.command()
89 @click.option('-n', '--domain_name')
90 @click.option('-u', '--max_users')
91 @click.option('-a', '--max_aliases')
92 @click.option('-q', '--max_quota_bytes')
93 @flask_cli.with_appcontext
94 def domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):
95 domain = models.Domain.query.get(domain_name)
96 if not domain:
97 domain = models.Domain(name=domain_name)
98 db.session.add(domain)
99 db.session.commit()
100
101
102 @mailu.command()
103 @click.argument('localpart')
104 @click.argument('domain_name')
105 @click.argument('password_hash')
106 @click.argument('hash_scheme')
107 @flask_cli.with_appcontext
108 def user_import(localpart, domain_name, password_hash, hash_scheme = None):
109 """ Import a user along with password hash.
110 """
111 if hash_scheme is None:
112 hash_scheme = app.config['PASSWORD_SCHEME']
113 domain = models.Domain.query.get(domain_name)
114 if not domain:
115 domain = models.Domain(name=domain_name)
116 db.session.add(domain)
117 user = models.User(
118 localpart=localpart,
119 domain=domain,
120 global_admin=False
121 )
122 user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)
123 db.session.add(user)
124 db.session.commit()
125
126
127 @mailu.command()
128 @click.option('-v', '--verbose')
129 @click.option('-d', '--delete_objects')
130 @flask_cli.with_appcontext
131 def config_update(verbose=False, delete_objects=False):
132 """sync configuration with data from YAML-formatted stdin"""
133 import yaml
134 import sys
135 new_config = yaml.load(sys.stdin)
136 # print new_config
137 domains = new_config.get('domains', [])
138 tracked_domains = set()
139 for domain_config in domains:
140 if verbose:
141 print(str(domain_config))
142 domain_name = domain_config['name']
143 max_users = domain_config.get('max_users', -1)
144 max_aliases = domain_config.get('max_aliases', -1)
145 max_quota_bytes = domain_config.get('max_quota_bytes', 0)
146 tracked_domains.add(domain_name)
147 domain = models.Domain.query.get(domain_name)
148 if not domain:
149 domain = models.Domain(name=domain_name,
150 max_users=max_users,
151 max_aliases=max_aliases,
152 max_quota_bytes=max_quota_bytes)
153 db.session.add(domain)
154 print("Added " + str(domain_config))
155 else:
156 domain.max_users = max_users
157 domain.max_aliases = max_aliases
158 domain.max_quota_bytes = max_quota_bytes
159 db.session.add(domain)
160 print("Updated " + str(domain_config))
161
162 users = new_config.get('users', [])
163 tracked_users = set()
164 user_optional_params = ('comment', 'quota_bytes', 'global_admin',
165 'enable_imap', 'enable_pop', 'forward_enabled',
166 'forward_destination', 'reply_enabled',
167 'reply_subject', 'reply_body', 'displayed_name',
168 'spam_enabled', 'email', 'spam_threshold')
169 for user_config in users:
170 if verbose:
171 print(str(user_config))
172 localpart = user_config['localpart']
173 domain_name = user_config['domain']
174 password_hash = user_config.get('password_hash', None)
175 hash_scheme = user_config.get('hash_scheme', None)
176 domain = models.Domain.query.get(domain_name)
177 email = '{0}@{1}'.format(localpart, domain_name)
178 optional_params = {}
179 for k in user_optional_params:
180 if k in user_config:
181 optional_params[k] = user_config[k]
182 if not domain:
183 domain = models.Domain(name=domain_name)
184 db.session.add(domain)
185 user = models.User.query.get(email)
186 tracked_users.add(email)
187 tracked_domains.add(domain_name)
188 if not user:
189 user = models.User(
190 localpart=localpart,
191 domain=domain,
192 **optional_params
193 )
194 else:
195 for k in optional_params:
196 setattr(user, k, optional_params[k])
197 user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)
198 db.session.add(user)
199
200 aliases = new_config.get('aliases', [])
201 tracked_aliases = set()
202 for alias_config in aliases:
203 if verbose:
204 print(str(alias_config))
205 localpart = alias_config['localpart']
206 domain_name = alias_config['domain']
207 if type(alias_config['destination']) is str:
208 destination = alias_config['destination'].split(',')
209 else:
210 destination = alias_config['destination']
211 wildcard = alias_config.get('wildcard', False)
212 domain = models.Domain.query.get(domain_name)
213 email = '{0}@{1}'.format(localpart, domain_name)
214 if not domain:
215 domain = models.Domain(name=domain_name)
216 db.session.add(domain)
217 alias = models.Alias.query.get(email)
218 tracked_aliases.add(email)
219 tracked_domains.add(domain_name)
220 if not alias:
221 alias = models.Alias(
222 localpart=localpart,
223 domain=domain,
224 wildcard=wildcard,
225 destination=destination,
226 email=email
227 )
228 else:
229 alias.destination = destination
230 alias.wildcard = wildcard
231 db.session.add(alias)
232
233 db.session.commit()
234
235 managers = new_config.get('managers', [])
236 # tracked_managers=set()
237 for manager_config in managers:
238 if verbose:
239 print(str(manager_config))
240 domain_name = manager_config['domain']
241 user_name = manager_config['user']
242 domain = models.Domain.query.get(domain_name)
243 manageruser = models.User.query.get(user_name + '@' + domain_name)
244 if manageruser not in domain.managers:
245 domain.managers.append(manageruser)
246 db.session.add(domain)
247
248 db.session.commit()
249
250 if delete_objects:
251 for user in db.session.query(models.User).all():
252 if not (user.email in tracked_users):
253 if verbose:
254 print("Deleting user: " + str(user.email))
255 db.session.delete(user)
256 for alias in db.session.query(models.Alias).all():
257 if not (alias.email in tracked_aliases):
258 if verbose:
259 print("Deleting alias: " + str(alias.email))
260 db.session.delete(alias)
261 for domain in db.session.query(models.Domain).all():
262 if not (domain.name in tracked_domains):
263 if verbose:
264 print("Deleting domain: " + str(domain.name))
265 db.session.delete(domain)
266 db.session.commit()
267
268
269 @mailu.command()
270 @click.argument('email')
271 @flask_cli.with_appcontext
272 def user_delete(email):
273 """delete user"""
274 user = models.User.query.get(email)
275 if user:
276 db.session.delete(user)
277 db.session.commit()
278
279
280 @mailu.command()
281 @click.argument('email')
282 @flask_cli.with_appcontext
283 def alias_delete(email):
284 """delete alias"""
285 alias = models.Alias.query.get(email)
286 if alias:
287 db.session.delete(alias)
288 db.session.commit()
289
290
291 @mailu.command()
292 @click.argument('localpart')
293 @click.argument('domain_name')
294 @click.argument('destination')
295 @click.option('-w', '--wildcard', is_flag=True)
296 @flask_cli.with_appcontext
297 def alias(localpart, domain_name, destination, wildcard=False):
298 """ Create an alias
299 """
300 domain = models.Domain.query.get(domain_name)
301 if not domain:
302 domain = models.Domain(name=domain_name)
303 db.session.add(domain)
304 alias = models.Alias(
305 localpart=localpart,
306 domain=domain,
307 wildcard=wildcard,
308 destination=destination.split(','),
309 email="%s@%s" % (localpart, domain_name)
310 )
311 db.session.add(alias)
312 db.session.commit()
313
314
315 @mailu.command()
316 @click.argument('domain_name')
317 @click.argument('max_users')
318 @click.argument('max_aliases')
319 @click.argument('max_quota_bytes')
320 @flask_cli.with_appcontext
321 def setlimits(domain_name, max_users, max_aliases, max_quota_bytes):
322 """ Set domain limits
323 """
324 domain = models.Domain.query.get(domain_name)
325 domain.max_users = max_users
326 domain.max_aliases = max_aliases
327 domain.max_quota_bytes = max_quota_bytes
328 db.session.add(domain)
329 db.session.commit()
330
331
332 @mailu.command()
333 @click.argument('domain_name')
334 @click.argument('user_name')
335 @flask_cli.with_appcontext
336 def setmanager(domain_name, user_name='manager'):
337 """ Make a user manager of a domain
338 """
339 domain = models.Domain.query.get(domain_name)
340 manageruser = models.User.query.get(user_name + '@' + domain_name)
341 domain.managers.append(manageruser)
342 db.session.add(domain)
343 db.session.commit()
344
345
346 if __name__ == '__main__':
347 cli()
348
[end of core/admin/mailu/manage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/manage.py b/core/admin/mailu/manage.py
--- a/core/admin/mailu/manage.py
+++ b/core/admin/mailu/manage.py
@@ -64,7 +64,7 @@
@click.argument('localpart')
@click.argument('domain_name')
@click.argument('password')
[email protected]('hash_scheme')
[email protected]('hash_scheme', required=False)
@flask_cli.with_appcontext
def user(localpart, domain_name, password, hash_scheme=None):
""" Create a user
@@ -86,12 +86,14 @@
@mailu.command()
[email protected]('-n', '--domain_name')
[email protected]('-u', '--max_users')
[email protected]('-a', '--max_aliases')
[email protected]('-q', '--max_quota_bytes')
[email protected]('domain_name')
[email protected]('-u', '--max-users')
[email protected]('-a', '--max-aliases')
[email protected]('-q', '--max-quota-bytes')
@flask_cli.with_appcontext
def domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):
+ """ Create a domain
+ """
domain = models.Domain.query.get(domain_name)
if not domain:
domain = models.Domain(name=domain_name)
@@ -126,7 +128,7 @@
@mailu.command()
@click.option('-v', '--verbose')
[email protected]('-d', '--delete_objects')
[email protected]('-d', '--delete-objects')
@flask_cli.with_appcontext
def config_update(verbose=False, delete_objects=False):
"""sync configuration with data from YAML-formatted stdin"""
| {"golden_diff": "diff --git a/core/admin/mailu/manage.py b/core/admin/mailu/manage.py\n--- a/core/admin/mailu/manage.py\n+++ b/core/admin/mailu/manage.py\n@@ -64,7 +64,7 @@\n @click.argument('localpart')\n @click.argument('domain_name')\n @click.argument('password')\[email protected]('hash_scheme')\[email protected]('hash_scheme', required=False)\n @flask_cli.with_appcontext\n def user(localpart, domain_name, password, hash_scheme=None):\n \"\"\" Create a user\n@@ -86,12 +86,14 @@\n \n \n @mailu.command()\[email protected]('-n', '--domain_name')\[email protected]('-u', '--max_users')\[email protected]('-a', '--max_aliases')\[email protected]('-q', '--max_quota_bytes')\[email protected]('domain_name')\[email protected]('-u', '--max-users')\[email protected]('-a', '--max-aliases')\[email protected]('-q', '--max-quota-bytes')\n @flask_cli.with_appcontext\n def domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):\n+ \"\"\" Create a domain\n+ \"\"\"\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n@@ -126,7 +128,7 @@\n \n @mailu.command()\n @click.option('-v', '--verbose')\[email protected]('-d', '--delete_objects')\[email protected]('-d', '--delete-objects')\n @flask_cli.with_appcontext\n def config_update(verbose=False, delete_objects=False):\n \"\"\"sync configuration with data from YAML-formatted stdin\"\"\"\n", "issue": "command documentation uses `_` where `-` should be used\nhttps://mailu.io/1.6/cli.html shows commands\r\n\r\n```\r\nalias\r\nalias_delete\r\nuser\r\nuser_import\r\nuser_delete\r\nconfig_update\r\n```\r\n\r\nwhere `flask mailu` shows the following subcommands:\r\n\r\n```\r\nCommands:\r\n admin Create an admin user\r\n advertise Advertise this server against statistic services.\r\n alias Create an alias\r\n alias-delete delete alias\r\n config-update sync configuration with data from YAML-formatted stdin\r\n domain\r\n setlimits Set domain limits\r\n setmanager Make a user manager of a domain\r\n user Create a user\r\n user-delete delete user\r\n user-import Import a user along with password hash.\r\n```\r\n\r\nI think the documentation must be updated to use `-` rather than `_`.\n", "before_files": [{"content": "from mailu import models\n\nfrom flask import current_app as app\nfrom flask import cli as flask_cli\n\nimport flask\nimport os\nimport socket\nimport uuid\nimport click\n\n\ndb = models.db\n\n\[email protected]()\ndef mailu(cls=flask_cli.FlaskGroup):\n \"\"\" Mailu command line\n \"\"\"\n\n\[email protected]()\n@flask_cli.with_appcontext\ndef advertise():\n \"\"\" Advertise this server against statistic services.\n \"\"\"\n if os.path.isfile(app.config[\"INSTANCE_ID_PATH\"]):\n with open(app.config[\"INSTANCE_ID_PATH\"], \"r\") as handle:\n instance_id = handle.read()\n else:\n instance_id = str(uuid.uuid4())\n with open(app.config[\"INSTANCE_ID_PATH\"], \"w\") as handle:\n handle.write(instance_id)\n if not app.config[\"DISABLE_STATISTICS\"]:\n try:\n socket.gethostbyname(app.config[\"STATS_ENDPOINT\"].format(instance_id))\n except:\n pass\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('password')\n@flask_cli.with_appcontext\ndef admin(localpart, domain_name, password):\n \"\"\" Create an admin user\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User(\n localpart=localpart,\n domain=domain,\n global_admin=True\n )\n 
user.set_password(password)\n db.session.add(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('password')\[email protected]('hash_scheme')\n@flask_cli.with_appcontext\ndef user(localpart, domain_name, password, hash_scheme=None):\n \"\"\" Create a user\n \"\"\"\n if hash_scheme is None:\n hash_scheme = app.config['PASSWORD_SCHEME']\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User(\n localpart=localpart,\n domain=domain,\n global_admin=False\n )\n user.set_password(password, hash_scheme=hash_scheme)\n db.session.add(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('-n', '--domain_name')\[email protected]('-u', '--max_users')\[email protected]('-a', '--max_aliases')\[email protected]('-q', '--max_quota_bytes')\n@flask_cli.with_appcontext\ndef domain(domain_name, max_users=-1, max_aliases=-1, max_quota_bytes=0):\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n db.session.commit()\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('password_hash')\[email protected]('hash_scheme')\n@flask_cli.with_appcontext\ndef user_import(localpart, domain_name, password_hash, hash_scheme = None):\n \"\"\" Import a user along with password hash.\n \"\"\"\n if hash_scheme is None:\n hash_scheme = app.config['PASSWORD_SCHEME']\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User(\n localpart=localpart,\n domain=domain,\n global_admin=False\n )\n user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)\n db.session.add(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('-v', '--verbose')\[email protected]('-d', '--delete_objects')\n@flask_cli.with_appcontext\ndef config_update(verbose=False, delete_objects=False):\n \"\"\"sync configuration with data from YAML-formatted stdin\"\"\"\n import yaml\n import sys\n new_config = yaml.load(sys.stdin)\n # print new_config\n domains = new_config.get('domains', [])\n tracked_domains = set()\n for domain_config in domains:\n if verbose:\n print(str(domain_config))\n domain_name = domain_config['name']\n max_users = domain_config.get('max_users', -1)\n max_aliases = domain_config.get('max_aliases', -1)\n max_quota_bytes = domain_config.get('max_quota_bytes', 0)\n tracked_domains.add(domain_name)\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name,\n max_users=max_users,\n max_aliases=max_aliases,\n max_quota_bytes=max_quota_bytes)\n db.session.add(domain)\n print(\"Added \" + str(domain_config))\n else:\n domain.max_users = max_users\n domain.max_aliases = max_aliases\n domain.max_quota_bytes = max_quota_bytes\n db.session.add(domain)\n print(\"Updated \" + str(domain_config))\n\n users = new_config.get('users', [])\n tracked_users = set()\n user_optional_params = ('comment', 'quota_bytes', 'global_admin',\n 'enable_imap', 'enable_pop', 'forward_enabled',\n 'forward_destination', 'reply_enabled',\n 'reply_subject', 'reply_body', 'displayed_name',\n 'spam_enabled', 'email', 'spam_threshold')\n for user_config in users:\n if verbose:\n print(str(user_config))\n localpart = user_config['localpart']\n domain_name = user_config['domain']\n 
password_hash = user_config.get('password_hash', None)\n hash_scheme = user_config.get('hash_scheme', None)\n domain = models.Domain.query.get(domain_name)\n email = '{0}@{1}'.format(localpart, domain_name)\n optional_params = {}\n for k in user_optional_params:\n if k in user_config:\n optional_params[k] = user_config[k]\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n user = models.User.query.get(email)\n tracked_users.add(email)\n tracked_domains.add(domain_name)\n if not user:\n user = models.User(\n localpart=localpart,\n domain=domain,\n **optional_params\n )\n else:\n for k in optional_params:\n setattr(user, k, optional_params[k])\n user.set_password(password_hash, hash_scheme=hash_scheme, raw=True)\n db.session.add(user)\n\n aliases = new_config.get('aliases', [])\n tracked_aliases = set()\n for alias_config in aliases:\n if verbose:\n print(str(alias_config))\n localpart = alias_config['localpart']\n domain_name = alias_config['domain']\n if type(alias_config['destination']) is str:\n destination = alias_config['destination'].split(',')\n else:\n destination = alias_config['destination']\n wildcard = alias_config.get('wildcard', False)\n domain = models.Domain.query.get(domain_name)\n email = '{0}@{1}'.format(localpart, domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n alias = models.Alias.query.get(email)\n tracked_aliases.add(email)\n tracked_domains.add(domain_name)\n if not alias:\n alias = models.Alias(\n localpart=localpart,\n domain=domain,\n wildcard=wildcard,\n destination=destination,\n email=email\n )\n else:\n alias.destination = destination\n alias.wildcard = wildcard\n db.session.add(alias)\n\n db.session.commit()\n\n managers = new_config.get('managers', [])\n # tracked_managers=set()\n for manager_config in managers:\n if verbose:\n print(str(manager_config))\n domain_name = manager_config['domain']\n user_name = manager_config['user']\n domain = models.Domain.query.get(domain_name)\n manageruser = models.User.query.get(user_name + '@' + domain_name)\n if manageruser not in domain.managers:\n domain.managers.append(manageruser)\n db.session.add(domain)\n\n db.session.commit()\n\n if delete_objects:\n for user in db.session.query(models.User).all():\n if not (user.email in tracked_users):\n if verbose:\n print(\"Deleting user: \" + str(user.email))\n db.session.delete(user)\n for alias in db.session.query(models.Alias).all():\n if not (alias.email in tracked_aliases):\n if verbose:\n print(\"Deleting alias: \" + str(alias.email))\n db.session.delete(alias)\n for domain in db.session.query(models.Domain).all():\n if not (domain.name in tracked_domains):\n if verbose:\n print(\"Deleting domain: \" + str(domain.name))\n db.session.delete(domain)\n db.session.commit()\n\n\[email protected]()\[email protected]('email')\n@flask_cli.with_appcontext\ndef user_delete(email):\n \"\"\"delete user\"\"\"\n user = models.User.query.get(email)\n if user:\n db.session.delete(user)\n db.session.commit()\n\n\[email protected]()\[email protected]('email')\n@flask_cli.with_appcontext\ndef alias_delete(email):\n \"\"\"delete alias\"\"\"\n alias = models.Alias.query.get(email)\n if alias:\n db.session.delete(alias)\n db.session.commit()\n\n\[email protected]()\[email protected]('localpart')\[email protected]('domain_name')\[email protected]('destination')\[email protected]('-w', '--wildcard', is_flag=True)\n@flask_cli.with_appcontext\ndef alias(localpart, domain_name, destination, wildcard=False):\n 
\"\"\" Create an alias\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n if not domain:\n domain = models.Domain(name=domain_name)\n db.session.add(domain)\n alias = models.Alias(\n localpart=localpart,\n domain=domain,\n wildcard=wildcard,\n destination=destination.split(','),\n email=\"%s@%s\" % (localpart, domain_name)\n )\n db.session.add(alias)\n db.session.commit()\n\n\[email protected]()\[email protected]('domain_name')\[email protected]('max_users')\[email protected]('max_aliases')\[email protected]('max_quota_bytes')\n@flask_cli.with_appcontext\ndef setlimits(domain_name, max_users, max_aliases, max_quota_bytes):\n \"\"\" Set domain limits\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n domain.max_users = max_users\n domain.max_aliases = max_aliases\n domain.max_quota_bytes = max_quota_bytes\n db.session.add(domain)\n db.session.commit()\n\n\[email protected]()\[email protected]('domain_name')\[email protected]('user_name')\n@flask_cli.with_appcontext\ndef setmanager(domain_name, user_name='manager'):\n \"\"\" Make a user manager of a domain\n \"\"\"\n domain = models.Domain.query.get(domain_name)\n manageruser = models.User.query.get(user_name + '@' + domain_name)\n domain.managers.append(manageruser)\n db.session.add(domain)\n db.session.commit()\n\n\nif __name__ == '__main__':\n cli()\n", "path": "core/admin/mailu/manage.py"}]} | 4,015 | 371 |
gh_patches_debug_29365 | rasdani/github-patches | git_diff | archlinux__archinstall-1840 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The parallel downloads page is shown half in English, half in another language if a language other than English is selected
I checked every language that has this page translated and it happens on every single one.
### Example:

### Another one:

</issue>
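The root cause, which the patch below also reflects, is that these prompts wrap f-strings in `_()`: the interpolation happens before the gettext lookup, so the runtime string no longer matches the msgid extracted from the source and those lines fall back to English, while the placeholder-free lines still get translated. A minimal sketch of the difference using plain `gettext` and a made-up message (archinstall converts translations with `str()` elsewhere in this file, which the fix follows):

```python
# Minimal sketch, assuming a translation catalog keyed on the literal source string.
from gettext import gettext as _

max_downloads = 5

# Broken: the f-string is rendered first, so _() is asked to translate
# "Enter a value between 1 to 5", which is not a msgid in the catalog,
# and the English text comes back unchanged.
print(_(f"Enter a value between 1 to {max_downloads}"))

# Works: the literal "Enter a value between 1 to {}" is both what the extractor
# saw and what is looked up at runtime; the number is substituted afterwards.
print(str(_("Enter a value between 1 to {}")).format(max_downloads))
```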
<code>
[start of archinstall/lib/interactions/general_conf.py]
1 from __future__ import annotations
2
3 import pathlib
4 from typing import List, Any, Optional, TYPE_CHECKING
5
6 from ..locale import list_timezones, list_keyboard_languages
7 from ..menu import MenuSelectionType, Menu, TextInput
8 from ..output import warn
9 from ..packages.packages import validate_package_list
10 from ..storage import storage
11 from ..translationhandler import Language
12
13 if TYPE_CHECKING:
14 _: Any
15
16
17 def ask_ntp(preset: bool = True) -> bool:
18 prompt = str(_('Would you like to use automatic time synchronization (NTP) with the default time servers?\n'))
19 prompt += str(_('Hardware time and other post-configuration steps might be required in order for NTP to work.\nFor more information, please check the Arch wiki'))
20 if preset:
21 preset_val = Menu.yes()
22 else:
23 preset_val = Menu.no()
24 choice = Menu(prompt, Menu.yes_no(), skip=False, preset_values=preset_val, default_option=Menu.yes()).run()
25
26 return False if choice.value == Menu.no() else True
27
28
29 def ask_hostname(preset: str = '') -> str:
30 while True:
31 hostname = TextInput(
32 str(_('Desired hostname for the installation: ')),
33 preset
34 ).run().strip()
35
36 if hostname:
37 return hostname
38
39
40 def ask_for_a_timezone(preset: Optional[str] = None) -> Optional[str]:
41 timezones = list_timezones()
42 default = 'UTC'
43
44 choice = Menu(
45 _('Select a timezone'),
46 list(timezones),
47 preset_values=preset,
48 default_option=default
49 ).run()
50
51 match choice.type_:
52 case MenuSelectionType.Skip: return preset
53 case MenuSelectionType.Selection: return choice.single_value
54
55 return None
56
57
58 def ask_for_audio_selection(desktop: bool = True, preset: Optional[str] = None) -> Optional[str]:
59 no_audio = str(_('No audio server'))
60 choices = ['pipewire', 'pulseaudio'] if desktop else ['pipewire', 'pulseaudio', no_audio]
61 default = 'pipewire' if desktop else no_audio
62
63 choice = Menu(_('Choose an audio server'), choices, preset_values=preset, default_option=default).run()
64
65 match choice.type_:
66 case MenuSelectionType.Skip: return preset
67 case MenuSelectionType.Selection: return choice.single_value
68
69 return None
70
71
72 def select_language(preset: Optional[str] = None) -> Optional[str]:
73 """
74 Asks the user to select a language
75 Usually this is combined with :ref:`archinstall.list_keyboard_languages`.
76
77 :return: The language/dictionary key of the selected language
78 :rtype: str
79 """
80 kb_lang = list_keyboard_languages()
81 # sort alphabetically and then by length
82 sorted_kb_lang = sorted(sorted(list(kb_lang)), key=len)
83
84 choice = Menu(
85 _('Select keyboard layout'),
86 sorted_kb_lang,
87 preset_values=preset,
88 sort=False
89 ).run()
90
91 match choice.type_:
92 case MenuSelectionType.Skip: return preset
93 case MenuSelectionType.Selection: return choice.single_value
94
95 return None
96
97
98 def select_archinstall_language(languages: List[Language], preset: Language) -> Language:
99 # these are the displayed language names which can either be
100 # the english name of a language or, if present, the
101 # name of the language in its own language
102 options = {lang.display_name: lang for lang in languages}
103
104 title = 'NOTE: If a language can not displayed properly, a proper font must be set manually in the console.\n'
105 title += 'All available fonts can be found in "/usr/share/kbd/consolefonts"\n'
106 title += 'e.g. setfont LatGrkCyr-8x16 (to display latin/greek/cyrillic characters)\n'
107
108 choice = Menu(
109 title,
110 list(options.keys()),
111 default_option=preset.display_name,
112 preview_size=0.5
113 ).run()
114
115 match choice.type_:
116 case MenuSelectionType.Skip: return preset
117 case MenuSelectionType.Selection: return options[choice.single_value]
118
119 raise ValueError('Language selection not handled')
120
121
122 def ask_additional_packages_to_install(preset: List[str] = []) -> List[str]:
123 # Additional packages (with some light weight error handling for invalid package names)
124 print(_('Only packages such as base, base-devel, linux, linux-firmware, efibootmgr and optional profile packages are installed.'))
125 print(_('If you desire a web browser, such as firefox or chromium, you may specify it in the following prompt.'))
126
127 def read_packages(p: List = []) -> list:
128 display = ' '.join(p)
129 input_packages = TextInput(_('Write additional packages to install (space separated, leave blank to skip): '), display).run().strip()
130 return input_packages.split() if input_packages else []
131
132 preset = preset if preset else []
133 packages = read_packages(preset)
134
135 if not storage['arguments']['offline'] and not storage['arguments']['no_pkg_lookups']:
136 while True:
137 if len(packages):
138 # Verify packages that were given
139 print(_("Verifying that additional packages exist (this might take a few seconds)"))
140 valid, invalid = validate_package_list(packages)
141
142 if invalid:
143 warn(f"Some packages could not be found in the repository: {invalid}")
144 packages = read_packages(valid)
145 continue
146 break
147
148 return packages
149
150
151 def add_number_of_parrallel_downloads(input_number :Optional[int] = None) -> Optional[int]:
152 max_downloads = 5
153 print(_(f"This option enables the number of parallel downloads that can occur during installation"))
154 print(_(f"Enter the number of parallel downloads to be enabled.\n (Enter a value between 1 to {max_downloads})\nNote:"))
155 print(_(f" - Maximum value : {max_downloads} ( Allows {max_downloads} parallel downloads, allows {max_downloads+1} downloads at a time )"))
156 print(_(f" - Minimum value : 1 ( Allows 1 parallel download, allows 2 downloads at a time )"))
157 print(_(f" - Disable/Default : 0 ( Disables parallel downloading, allows only 1 download at a time )"))
158
159 while True:
160 try:
161 input_number = int(TextInput(_("[Default value: 0] > ")).run().strip() or 0)
162 if input_number <= 0:
163 input_number = 0
164 elif input_number > max_downloads:
165 input_number = max_downloads
166 break
167 except:
168 print(_(f"Invalid input! Try again with a valid input [1 to {max_downloads}, or 0 to disable]"))
169
170 pacman_conf_path = pathlib.Path("/etc/pacman.conf")
171 with pacman_conf_path.open() as f:
172 pacman_conf = f.read().split("\n")
173
174 with pacman_conf_path.open("w") as fwrite:
175 for line in pacman_conf:
176 if "ParallelDownloads" in line:
177 fwrite.write(f"ParallelDownloads = {input_number+1}\n") if not input_number == 0 else fwrite.write("#ParallelDownloads = 0\n")
178 else:
179 fwrite.write(f"{line}\n")
180
181 return input_number
182
183
184 def select_additional_repositories(preset: List[str]) -> List[str]:
185 """
186 Allows the user to select additional repositories (multilib, and testing) if desired.
187
188 :return: The string as a selected repository
189 :rtype: string
190 """
191
192 repositories = ["multilib", "testing"]
193
194 choice = Menu(
195 _('Choose which optional additional repositories to enable'),
196 repositories,
197 sort=False,
198 multi=True,
199 preset_values=preset,
200 allow_reset=True
201 ).run()
202
203 match choice.type_:
204 case MenuSelectionType.Skip: return preset
205 case MenuSelectionType.Reset: return []
206 case MenuSelectionType.Selection: return choice.single_value
207
208 return []
209
[end of archinstall/lib/interactions/general_conf.py]
</code>
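A side note on the `_(f"...")` calls in the listing above: assuming `_` is a gettext-style lookup (as is usual), an f-string is interpolated before the lookup, so the catalog never sees the literal message id and the text silently falls back to English. A minimal sketch of the difference; the catalog dict and `_` helper below are stand-ins, not archinstall's real translation machinery:
```python
# Stand-in catalog and lookup, for illustration only.
catalog = {"Enter a value between 1 to {}": "Geben Sie einen Wert zwischen 1 und {} ein"}

def _(msgid: str) -> str:
    # Real code would use gettext.translation(...).gettext instead of a dict.
    return catalog.get(msgid, msgid)

max_downloads = 5

# f-string: the key becomes "Enter a value between 1 to 5" -> no catalog hit.
print(_(f"Enter a value between 1 to {max_downloads}"))

# Literal msgid with {} placeholders, formatted after the lookup -> translated.
print(_("Enter a value between 1 to {}").format(max_downloads))
```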
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/lib/interactions/general_conf.py b/archinstall/lib/interactions/general_conf.py
--- a/archinstall/lib/interactions/general_conf.py
+++ b/archinstall/lib/interactions/general_conf.py
@@ -151,10 +151,10 @@
def add_number_of_parrallel_downloads(input_number :Optional[int] = None) -> Optional[int]:
max_downloads = 5
print(_(f"This option enables the number of parallel downloads that can occur during installation"))
- print(_(f"Enter the number of parallel downloads to be enabled.\n (Enter a value between 1 to {max_downloads})\nNote:"))
- print(_(f" - Maximum value : {max_downloads} ( Allows {max_downloads} parallel downloads, allows {max_downloads+1} downloads at a time )"))
- print(_(f" - Minimum value : 1 ( Allows 1 parallel download, allows 2 downloads at a time )"))
- print(_(f" - Disable/Default : 0 ( Disables parallel downloading, allows only 1 download at a time )"))
+ print(str(_("Enter the number of parallel downloads to be enabled.\n (Enter a value between 1 to {})\nNote:")).format(max_downloads))
+ print(str(_(" - Maximum value : {} ( Allows {} parallel downloads, allows {} downloads at a time )")).format(max_downloads, max_downloads, max_downloads + 1))
+ print(_(" - Minimum value : 1 ( Allows 1 parallel download, allows 2 downloads at a time )"))
+ print(_(" - Disable/Default : 0 ( Disables parallel downloading, allows only 1 download at a time )"))
while True:
try:
@@ -165,7 +165,7 @@
input_number = max_downloads
break
except:
- print(_(f"Invalid input! Try again with a valid input [1 to {max_downloads}, or 0 to disable]"))
+ print(str(_("Invalid input! Try again with a valid input [1 to {}, or 0 to disable]")).format(max_downloads))
pacman_conf_path = pathlib.Path("/etc/pacman.conf")
with pacman_conf_path.open() as f:
| {"golden_diff": "diff --git a/archinstall/lib/interactions/general_conf.py b/archinstall/lib/interactions/general_conf.py\n--- a/archinstall/lib/interactions/general_conf.py\n+++ b/archinstall/lib/interactions/general_conf.py\n@@ -151,10 +151,10 @@\n def add_number_of_parrallel_downloads(input_number :Optional[int] = None) -> Optional[int]:\n \tmax_downloads = 5\n \tprint(_(f\"This option enables the number of parallel downloads that can occur during installation\"))\n-\tprint(_(f\"Enter the number of parallel downloads to be enabled.\\n (Enter a value between 1 to {max_downloads})\\nNote:\"))\n-\tprint(_(f\" - Maximum value : {max_downloads} ( Allows {max_downloads} parallel downloads, allows {max_downloads+1} downloads at a time )\"))\n-\tprint(_(f\" - Minimum value : 1 ( Allows 1 parallel download, allows 2 downloads at a time )\"))\n-\tprint(_(f\" - Disable/Default : 0 ( Disables parallel downloading, allows only 1 download at a time )\"))\n+\tprint(str(_(\"Enter the number of parallel downloads to be enabled.\\n (Enter a value between 1 to {})\\nNote:\")).format(max_downloads))\n+\tprint(str(_(\" - Maximum value : {} ( Allows {} parallel downloads, allows {} downloads at a time )\")).format(max_downloads, max_downloads, max_downloads + 1))\n+\tprint(_(\" - Minimum value : 1 ( Allows 1 parallel download, allows 2 downloads at a time )\"))\n+\tprint(_(\" - Disable/Default : 0 ( Disables parallel downloading, allows only 1 download at a time )\"))\n \n \twhile True:\n \t\ttry:\n@@ -165,7 +165,7 @@\n \t\t\t\tinput_number = max_downloads\n \t\t\tbreak\n \t\texcept:\n-\t\t\tprint(_(f\"Invalid input! Try again with a valid input [1 to {max_downloads}, or 0 to disable]\"))\n+\t\t\tprint(str(_(\"Invalid input! Try again with a valid input [1 to {}, or 0 to disable]\")).format(max_downloads))\n \n \tpacman_conf_path = pathlib.Path(\"/etc/pacman.conf\")\n \twith pacman_conf_path.open() as f:\n", "issue": "The parallel downloads page is shown half in English, half in another language if a language other than English is selected\nI checked every language that has this page translated and it happens on every single one.\r\n### Example:\r\n\r\n\r\n### Another one:\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport pathlib\nfrom typing import List, Any, Optional, TYPE_CHECKING\n\nfrom ..locale import list_timezones, list_keyboard_languages\nfrom ..menu import MenuSelectionType, Menu, TextInput\nfrom ..output import warn\nfrom ..packages.packages import validate_package_list\nfrom ..storage import storage\nfrom ..translationhandler import Language\n\nif TYPE_CHECKING:\n\t_: Any\n\n\ndef ask_ntp(preset: bool = True) -> bool:\n\tprompt = str(_('Would you like to use automatic time synchronization (NTP) with the default time servers?\\n'))\n\tprompt += str(_('Hardware time and other post-configuration steps might be required in order for NTP to work.\\nFor more information, please check the Arch wiki'))\n\tif preset:\n\t\tpreset_val = Menu.yes()\n\telse:\n\t\tpreset_val = Menu.no()\n\tchoice = Menu(prompt, Menu.yes_no(), skip=False, preset_values=preset_val, default_option=Menu.yes()).run()\n\n\treturn False if choice.value == Menu.no() else True\n\n\ndef ask_hostname(preset: str = '') -> str:\n\twhile True:\n\t\thostname = TextInput(\n\t\t\tstr(_('Desired hostname for the installation: ')),\n\t\t\tpreset\n\t\t).run().strip()\n\n\t\tif hostname:\n\t\t\treturn hostname\n\n\ndef ask_for_a_timezone(preset: Optional[str] = None) -> Optional[str]:\n\ttimezones = 
list_timezones()\n\tdefault = 'UTC'\n\n\tchoice = Menu(\n\t\t_('Select a timezone'),\n\t\tlist(timezones),\n\t\tpreset_values=preset,\n\t\tdefault_option=default\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\n\treturn None\n\n\ndef ask_for_audio_selection(desktop: bool = True, preset: Optional[str] = None) -> Optional[str]:\n\tno_audio = str(_('No audio server'))\n\tchoices = ['pipewire', 'pulseaudio'] if desktop else ['pipewire', 'pulseaudio', no_audio]\n\tdefault = 'pipewire' if desktop else no_audio\n\n\tchoice = Menu(_('Choose an audio server'), choices, preset_values=preset, default_option=default).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\n\treturn None\n\n\ndef select_language(preset: Optional[str] = None) -> Optional[str]:\n\t\"\"\"\n\tAsks the user to select a language\n\tUsually this is combined with :ref:`archinstall.list_keyboard_languages`.\n\n\t:return: The language/dictionary key of the selected language\n\t:rtype: str\n\t\"\"\"\n\tkb_lang = list_keyboard_languages()\n\t# sort alphabetically and then by length\n\tsorted_kb_lang = sorted(sorted(list(kb_lang)), key=len)\n\n\tchoice = Menu(\n\t\t_('Select keyboard layout'),\n\t\tsorted_kb_lang,\n\t\tpreset_values=preset,\n\t\tsort=False\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\n\treturn None\n\n\ndef select_archinstall_language(languages: List[Language], preset: Language) -> Language:\n\t# these are the displayed language names which can either be\n\t# the english name of a language or, if present, the\n\t# name of the language in its own language\n\toptions = {lang.display_name: lang for lang in languages}\n\n\ttitle = 'NOTE: If a language can not displayed properly, a proper font must be set manually in the console.\\n'\n\ttitle += 'All available fonts can be found in \"/usr/share/kbd/consolefonts\"\\n'\n\ttitle += 'e.g. 
setfont LatGrkCyr-8x16 (to display latin/greek/cyrillic characters)\\n'\n\n\tchoice = Menu(\n\t\ttitle,\n\t\tlist(options.keys()),\n\t\tdefault_option=preset.display_name,\n\t\tpreview_size=0.5\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Selection: return options[choice.single_value]\n\n\traise ValueError('Language selection not handled')\n\n\ndef ask_additional_packages_to_install(preset: List[str] = []) -> List[str]:\n\t# Additional packages (with some light weight error handling for invalid package names)\n\tprint(_('Only packages such as base, base-devel, linux, linux-firmware, efibootmgr and optional profile packages are installed.'))\n\tprint(_('If you desire a web browser, such as firefox or chromium, you may specify it in the following prompt.'))\n\n\tdef read_packages(p: List = []) -> list:\n\t\tdisplay = ' '.join(p)\n\t\tinput_packages = TextInput(_('Write additional packages to install (space separated, leave blank to skip): '), display).run().strip()\n\t\treturn input_packages.split() if input_packages else []\n\n\tpreset = preset if preset else []\n\tpackages = read_packages(preset)\n\n\tif not storage['arguments']['offline'] and not storage['arguments']['no_pkg_lookups']:\n\t\twhile True:\n\t\t\tif len(packages):\n\t\t\t\t# Verify packages that were given\n\t\t\t\tprint(_(\"Verifying that additional packages exist (this might take a few seconds)\"))\n\t\t\t\tvalid, invalid = validate_package_list(packages)\n\n\t\t\t\tif invalid:\n\t\t\t\t\twarn(f\"Some packages could not be found in the repository: {invalid}\")\n\t\t\t\t\tpackages = read_packages(valid)\n\t\t\t\t\tcontinue\n\t\t\tbreak\n\n\treturn packages\n\n\ndef add_number_of_parrallel_downloads(input_number :Optional[int] = None) -> Optional[int]:\n\tmax_downloads = 5\n\tprint(_(f\"This option enables the number of parallel downloads that can occur during installation\"))\n\tprint(_(f\"Enter the number of parallel downloads to be enabled.\\n (Enter a value between 1 to {max_downloads})\\nNote:\"))\n\tprint(_(f\" - Maximum value : {max_downloads} ( Allows {max_downloads} parallel downloads, allows {max_downloads+1} downloads at a time )\"))\n\tprint(_(f\" - Minimum value : 1 ( Allows 1 parallel download, allows 2 downloads at a time )\"))\n\tprint(_(f\" - Disable/Default : 0 ( Disables parallel downloading, allows only 1 download at a time )\"))\n\n\twhile True:\n\t\ttry:\n\t\t\tinput_number = int(TextInput(_(\"[Default value: 0] > \")).run().strip() or 0)\n\t\t\tif input_number <= 0:\n\t\t\t\tinput_number = 0\n\t\t\telif input_number > max_downloads:\n\t\t\t\tinput_number = max_downloads\n\t\t\tbreak\n\t\texcept:\n\t\t\tprint(_(f\"Invalid input! 
Try again with a valid input [1 to {max_downloads}, or 0 to disable]\"))\n\n\tpacman_conf_path = pathlib.Path(\"/etc/pacman.conf\")\n\twith pacman_conf_path.open() as f:\n\t\tpacman_conf = f.read().split(\"\\n\")\n\n\twith pacman_conf_path.open(\"w\") as fwrite:\n\t\tfor line in pacman_conf:\n\t\t\tif \"ParallelDownloads\" in line:\n\t\t\t\tfwrite.write(f\"ParallelDownloads = {input_number+1}\\n\") if not input_number == 0 else fwrite.write(\"#ParallelDownloads = 0\\n\")\n\t\t\telse:\n\t\t\t\tfwrite.write(f\"{line}\\n\")\n\n\treturn input_number\n\n\ndef select_additional_repositories(preset: List[str]) -> List[str]:\n\t\"\"\"\n\tAllows the user to select additional repositories (multilib, and testing) if desired.\n\n\t:return: The string as a selected repository\n\t:rtype: string\n\t\"\"\"\n\n\trepositories = [\"multilib\", \"testing\"]\n\n\tchoice = Menu(\n\t\t_('Choose which optional additional repositories to enable'),\n\t\trepositories,\n\t\tsort=False,\n\t\tmulti=True,\n\t\tpreset_values=preset,\n\t\tallow_reset=True\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Reset: return []\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\n\treturn []\n", "path": "archinstall/lib/interactions/general_conf.py"}]} | 3,034 | 494 |
gh_patches_debug_18110 | rasdani/github-patches | git_diff | goauthentik__authentik-1514 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ldap group membership not updated
**Is your feature request related to a problem? Please describe.**
The ldap syncing code assumes group membership is based on DN.
In our ldap tree this is unfortunately not the case. Instead the group membership is based on CN.
So while users and groups are synced, the group memberships are not established in authentik's database.
**Describe the solution you'd like**
I need the 'group membership field' to be matched against the CN of a user instead of its fully qualified DN.
**Additional context**
Example pseudo ldif for groups:
```
dn: cn=group-1,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-1
memberUid: user-1
memberUid: user-2
memberUid: user-3
dn: cn=group-2,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-2
memberUid: user-1
memberUid: user-2
```
And for users:
```
dn: cn=user-1,ou=users,o=example,c=ch
objectClass: posixAccount
cn: user-1
ou: group-1
ou: group-2
dn: cn=user-2,ou=users,o=example,c=ch
objectClass: posixAccount
cn: user-2
ou: group-1
ou: group-2
dn: cn=user-3,ou=users,o=example,c=ch
objectClass: posixAccount
cn: user-3
ou: group-1
```
I think for the current authentik sync code to work, the groups would have to look like this:
```
dn: cn=group-1,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-1
memberUid: user-1,ou=users,o=example,c=ch
memberUid: user-2,ou=users,o=example,c=ch
memberUid: user-3,ou=users,o=example,c=ch
dn: cn=group-2,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-2
memberUid: user-1,ou=users,o=example,c=ch
memberUid: user-2,ou=users,o=example,c=ch
```
But changing this is beyond my control.
This is the OpenLDAP directory tree of a large university. I'm not sure whether this setup, and therefore the problem, is common or unique to our deployment.
I guess for the membership mapping to work I'd need the code that authentik uses to lookup group memberships in ./authentik/sources/ldap/sync/membership.py to look something like this:
```
LDAP_UNIQUENESS = "ldap_uniq"
users = User.objects.filter(
Q(**{f"attributes__{LDAP_UNIQUENESS}__in": members})
| Q(
**{
f"attributes__{LDAP_UNIQUENESS}__isnull": True,
"ak_groups__in": [ak_group],
}
)
)
```
Instead of the current:
```
LDAP_DISTINGUISHED_NAME = "distinguishedName"
users = User.objects.filter(
Q(**{f"attributes__{LDAP_DISTINGUISHED_NAME}__in": members})
| Q(
**{
f"attributes__{LDAP_DISTINGUISHED_NAME}__isnull": True,
"ak_groups__in": [ak_group],
}
)
)
```
I'm happy to provide a PR after discussing the right approach on how to make this configurable.
ldap group membership not updated
**Is your feature request related to a problem? Please describe.**
The ldap syncing code assumes group membership is based on DN.
In our ldap tree this is unfortunately not the case. Instead the group membership is based on CN.
So while users and groups are synced, the group memberships are not established in authentik's database.
**Describe the solution you'd like**
I need the 'group membership field' to be matched against the CN of a user instead of its fully qualified DN.
**Additional context**
Example pseudo ldif for groups:
```
dn: cn=group-1,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-1
memberUid: user-1
memberUid: user-2
memberUid: user-3
dn: cn=group-2,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-2
memberUid: user-1
memberUid: user-2
```
And for users:
```
dn: cn=user-1,ou=users,o=example,c=ch
objectClass: posixAccount
cn: user-1
ou: group-1
ou: group-2
dn: cn=user-2,ou=users,o=example,c=ch
objectClass: posixAccount
cn: user-2
ou: group-1
ou: group-2
dn: cn=user-3,ou=users,o=example,c=ch
objectClass: posixAccount
cn: user-3
ou: group-1
```
I think for the current authentik sync code to work, the groups would have to look like this:
```
dn: cn=group-1,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-1
memberUid: user-1,ou=users,o=example,c=ch
memberUid: user-2,ou=users,o=example,c=ch
memberUid: user-3,ou=users,o=example,c=ch
dn: cn=group-2,ou=groups,o=example,c=ch
objectClass: posixGroup
cn: group-2
memberUid: user-1,ou=users,o=example,c=ch
memberUid: user-2,ou=users,o=example,c=ch
```
But changing this is beyond my control.
This is the OpenLDAP directory tree of a large university. I'm not sure whether this setup, and therefore the problem, is common or unique to our deployment.
I guess for the membership mapping to work I'd need the code that authentik uses to lookup group memberships in ./authentik/sources/ldap/sync/membership.py to look something like this:
```
LDAP_UNIQUENESS = "ldap_uniq"
users = User.objects.filter(
Q(**{f"attributes__{LDAP_UNIQUENESS}__in": members})
| Q(
**{
f"attributes__{LDAP_UNIQUENESS}__isnull": True,
"ak_groups__in": [ak_group],
}
)
)
```
Instead of the current:
```
LDAP_DISTINGUISHED_NAME = "distinguishedName"
users = User.objects.filter(
Q(**{f"attributes__{LDAP_DISTINGUISHED_NAME}__in": members})
| Q(
**{
f"attributes__{LDAP_DISTINGUISHED_NAME}__isnull": True,
"ak_groups__in": [ak_group],
}
)
)
```
I'm happy to provide a PR after discussing the right approach on how to make this configurable.
</issue>
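In short: for `posixGroup` entries the `memberUid` values are bare uid/RDN values, so they have to be compared against a per-user uniqueness attribute rather than against full DNs. A small illustrative sketch of the two matching modes, in plain Python rather than authentik's ORM, using the pseudo ldif from the issue:
```python
# Illustration only; data mirrors the pseudo ldif in the issue.
members = ["user-1", "user-2"]  # memberUid values of cn=group-2

users = [
    {"dn": "cn=user-1,ou=users,o=example,c=ch", "ldap_uniq": "user-1"},
    {"dn": "cn=user-2,ou=users,o=example,c=ch", "ldap_uniq": "user-2"},
    {"dn": "cn=user-3,ou=users,o=example,c=ch", "ldap_uniq": "user-3"},
]

def resolve_members(members, users, membership_field):
    # posixGroup/memberUid holds bare values -> match the uniqueness attribute;
    # DN-based membership fields -> match the full DN.
    key = "ldap_uniq" if membership_field == "memberUid" else "dn"
    return [u["dn"] for u in users if u[key] in members]

print(resolve_members(members, users, "memberUid"))  # finds user-1 and user-2
print(resolve_members(members, users, "member"))     # finds nothing: values are not DNs
```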
<code>
[start of authentik/sources/ldap/sync/membership.py]
1 """Sync LDAP Users and groups into authentik"""
2 from typing import Any, Optional
3
4 import ldap3
5 import ldap3.core.exceptions
6 from django.db.models import Q
7
8 from authentik.core.models import Group, User
9 from authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME
10 from authentik.sources.ldap.models import LDAPSource
11 from authentik.sources.ldap.sync.base import LDAP_UNIQUENESS, BaseLDAPSynchronizer
12
13
14 class MembershipLDAPSynchronizer(BaseLDAPSynchronizer):
15 """Sync LDAP Users and groups into authentik"""
16
17 group_cache: dict[str, Group]
18
19 def __init__(self, source: LDAPSource):
20 super().__init__(source)
21 self.group_cache: dict[str, Group] = {}
22
23 def sync(self) -> int:
24 """Iterate over all Users and assign Groups using memberOf Field"""
25 groups = self._source.connection.extend.standard.paged_search(
26 search_base=self.base_dn_groups,
27 search_filter=self._source.group_object_filter,
28 search_scope=ldap3.SUBTREE,
29 attributes=[
30 self._source.group_membership_field,
31 self._source.object_uniqueness_field,
32 LDAP_DISTINGUISHED_NAME,
33 ],
34 )
35 membership_count = 0
36 for group in groups:
37 members = group.get("attributes", {}).get(self._source.group_membership_field, [])
38 ak_group = self.get_group(group)
39 if not ak_group:
40 continue
41
42 users = User.objects.filter(
43 Q(**{f"attributes__{LDAP_DISTINGUISHED_NAME}__in": members})
44 | Q(
45 **{
46 f"attributes__{LDAP_DISTINGUISHED_NAME}__isnull": True,
47 "ak_groups__in": [ak_group],
48 }
49 )
50 )
51 membership_count += 1
52 membership_count += users.count()
53 ak_group.users.set(users)
54 ak_group.save()
55 self._logger.debug("Successfully updated group membership")
56 return membership_count
57
58 def get_group(self, group_dict: dict[str, Any]) -> Optional[Group]:
59 """Check if we fetched the group already, and if not cache it for later"""
60 group_dn = group_dict.get("attributes", {}).get(LDAP_DISTINGUISHED_NAME, [])
61 group_uniq = group_dict.get("attributes", {}).get(self._source.object_uniqueness_field, [])
62 # group_uniq might be a single string or an array with (hopefully) a single string
63 if isinstance(group_uniq, list):
64 if len(group_uniq) < 1:
65 self.message(
66 f"Group does not have a uniqueness attribute: '{group_dn}'",
67 group=group_dn,
68 )
69 return None
70 group_uniq = group_uniq[0]
71 if group_uniq not in self.group_cache:
72 groups = Group.objects.filter(**{f"attributes__{LDAP_UNIQUENESS}": group_uniq})
73 if not groups.exists():
74 self.message(
75 f"Group does not exist in our DB yet, run sync_groups first: '{group_dn}'",
76 group=group_dn,
77 )
78 return None
79 self.group_cache[group_uniq] = groups.first()
80 return self.group_cache[group_uniq]
81
[end of authentik/sources/ldap/sync/membership.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/sources/ldap/sync/membership.py b/authentik/sources/ldap/sync/membership.py
--- a/authentik/sources/ldap/sync/membership.py
+++ b/authentik/sources/ldap/sync/membership.py
@@ -39,11 +39,17 @@
if not ak_group:
continue
+ membership_mapping_attribute = LDAP_DISTINGUISHED_NAME
+ if self._source.group_membership_field == "memberUid":
+ # If memberships are based on the posixGroup's 'memberUid'
+ # attribute we use the RDN instead of the FDN to lookup members.
+ membership_mapping_attribute = LDAP_UNIQUENESS
+
users = User.objects.filter(
- Q(**{f"attributes__{LDAP_DISTINGUISHED_NAME}__in": members})
+ Q(**{f"attributes__{membership_mapping_attribute}__in": members})
| Q(
**{
- f"attributes__{LDAP_DISTINGUISHED_NAME}__isnull": True,
+ f"attributes__{membership_mapping_attribute}__isnull": True,
"ak_groups__in": [ak_group],
}
)
| {"golden_diff": "diff --git a/authentik/sources/ldap/sync/membership.py b/authentik/sources/ldap/sync/membership.py\n--- a/authentik/sources/ldap/sync/membership.py\n+++ b/authentik/sources/ldap/sync/membership.py\n@@ -39,11 +39,17 @@\n if not ak_group:\n continue\n \n+ membership_mapping_attribute = LDAP_DISTINGUISHED_NAME\n+ if self._source.group_membership_field == \"memberUid\":\n+ # If memberships are based on the posixGroup's 'memberUid'\n+ # attribute we use the RDN instead of the FDN to lookup members.\n+ membership_mapping_attribute = LDAP_UNIQUENESS\n+\n users = User.objects.filter(\n- Q(**{f\"attributes__{LDAP_DISTINGUISHED_NAME}__in\": members})\n+ Q(**{f\"attributes__{membership_mapping_attribute}__in\": members})\n | Q(\n **{\n- f\"attributes__{LDAP_DISTINGUISHED_NAME}__isnull\": True,\n+ f\"attributes__{membership_mapping_attribute}__isnull\": True,\n \"ak_groups__in\": [ak_group],\n }\n )\n", "issue": "ldap group membership not updated\n**Is your feature request related to a problem? Please describe.**\r\nThe ldap syncing code assumes group membership is based on DN.\r\nIn our ldap tree this is unfortunately not the case. Instead the group membership is based on CN.\r\nSo while users and groups are synced, the group memberships are not established in authentik's database.\r\n\r\n**Describe the solution you'd like**\r\nI need the 'group membership field' to be matched against the CN of a user instead of it's fully qualified DN.\r\n\r\n**Additional context**\r\nExample pseudo ldif for groups:\r\n```\r\ndn: cn=group-1,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-1\r\nmemberUid: user-1\r\nmemberUid: user-2\r\nmemberUid: user-3\r\n\r\ndn: cn=group-2,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-2\r\nmemberUid: user-1\r\nmemberUid: user-2\r\n```\r\nAnd for users:\r\n\r\n```\r\ndn: cn=user-1,ou=users,o=example,c=ch\r\nobjectClass: posixAccount\r\ncn: user-1\r\nou: group-1\r\nou: group-2\r\n\r\ndn: cn=user-2,ou=users,o=example,c=ch\r\nobjectClass: posixAccount\r\ncn: user-2\r\nou: group-1\r\nou: group-2\r\n\r\ndn: cn=user-3,ou=users,o=example,c=ch\r\nobjectClass: posixAccount\r\ncn: user-3\r\nou: group-1\r\n```\r\n\r\nIf think for the current authentik sync code to work the groups would have to look like this:\r\n```\r\ndn: cn=group-1,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-1\r\nmemberUid: user-1,ou=users,o=example,c=ch\r\nmemberUid: user-2,ou=users,o=example,c=ch\r\nmemberUid: user-3,ou=users,o=example,c=ch\r\n\r\ndn: cn=group-2,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-2\r\nmemberUid: user-1,ou=users,o=example,c=ch\r\nmemberUid: user-2,ou=users,o=example,c=ch\r\n```\r\n\r\nBut changing this is beyond my control.\r\n\r\nThis is the OpenLDAP directory tree of a large University. 
I'm not sure if this setup and therefor also the problem is common or unique to our deployment.\r\n\r\nI guess for the membership mapping to work I'd need the code that authentik uses to lookup group memberships in ./authentik/sources/ldap/sync/membership.py to look something like this:\r\n\r\n```\r\nLDAP_UNIQUENESS = \"ldap_uniq\"\r\n\r\n users = User.objects.filter(\r\n Q(**{f\"attributes__{LDAP_UNIQUENESS}__in\": members})\r\n | Q(\r\n **{\r\n f\"attributes__{LDAP_UNIQUENESS}__isnull\": True,\r\n \"ak_groups__in\": [ak_group],\r\n }\r\n )\r\n )\r\n\r\n```\r\n\r\nInstead of the current:\r\n\r\n```\r\nLDAP_DISTINGUISHED_NAME = \"distinguishedName\"\r\n\r\n users = User.objects.filter(\r\n Q(**{f\"attributes__{LDAP_DISTINGUISHED_NAME}__in\": members})\r\n | Q(\r\n **{\r\n f\"attributes__{LDAP_DISTINGUISHED_NAME}__isnull\": True,\r\n \"ak_groups__in\": [ak_group],\r\n }\r\n )\r\n )\r\n\r\n```\r\n\r\nI'm happy to provide a PR after discussing the right approach on how to make this configurable.\nldap group membership not updated\n**Is your feature request related to a problem? Please describe.**\r\nThe ldap syncing code assumes group membership is based on DN.\r\nIn our ldap tree this is unfortunately not the case. Instead the group membership is based on CN.\r\nSo while users and groups are synced, the group memberships are not established in authentik's database.\r\n\r\n**Describe the solution you'd like**\r\nI need the 'group membership field' to be matched against the CN of a user instead of it's fully qualified DN.\r\n\r\n**Additional context**\r\nExample pseudo ldif for groups:\r\n```\r\ndn: cn=group-1,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-1\r\nmemberUid: user-1\r\nmemberUid: user-2\r\nmemberUid: user-3\r\n\r\ndn: cn=group-2,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-2\r\nmemberUid: user-1\r\nmemberUid: user-2\r\n```\r\nAnd for users:\r\n\r\n```\r\ndn: cn=user-1,ou=users,o=example,c=ch\r\nobjectClass: posixAccount\r\ncn: user-1\r\nou: group-1\r\nou: group-2\r\n\r\ndn: cn=user-2,ou=users,o=example,c=ch\r\nobjectClass: posixAccount\r\ncn: user-2\r\nou: group-1\r\nou: group-2\r\n\r\ndn: cn=user-3,ou=users,o=example,c=ch\r\nobjectClass: posixAccount\r\ncn: user-3\r\nou: group-1\r\n```\r\n\r\nIf think for the current authentik sync code to work the groups would have to look like this:\r\n```\r\ndn: cn=group-1,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-1\r\nmemberUid: user-1,ou=users,o=example,c=ch\r\nmemberUid: user-2,ou=users,o=example,c=ch\r\nmemberUid: user-3,ou=users,o=example,c=ch\r\n\r\ndn: cn=group-2,ou=groups,o=example,c=ch\r\nobjectClass: posixGroup\r\ncn: group-2\r\nmemberUid: user-1,ou=users,o=example,c=ch\r\nmemberUid: user-2,ou=users,o=example,c=ch\r\n```\r\n\r\nBut changing this is beyond my control.\r\n\r\nThis is the OpenLDAP directory tree of a large University. 
I'm not sure if this setup and therefor also the problem is common or unique to our deployment.\r\n\r\nI guess for the membership mapping to work I'd need the code that authentik uses to lookup group memberships in ./authentik/sources/ldap/sync/membership.py to look something like this:\r\n\r\n```\r\nLDAP_UNIQUENESS = \"ldap_uniq\"\r\n\r\n users = User.objects.filter(\r\n Q(**{f\"attributes__{LDAP_UNIQUENESS}__in\": members})\r\n | Q(\r\n **{\r\n f\"attributes__{LDAP_UNIQUENESS}__isnull\": True,\r\n \"ak_groups__in\": [ak_group],\r\n }\r\n )\r\n )\r\n\r\n```\r\n\r\nInstead of the current:\r\n\r\n```\r\nLDAP_DISTINGUISHED_NAME = \"distinguishedName\"\r\n\r\n users = User.objects.filter(\r\n Q(**{f\"attributes__{LDAP_DISTINGUISHED_NAME}__in\": members})\r\n | Q(\r\n **{\r\n f\"attributes__{LDAP_DISTINGUISHED_NAME}__isnull\": True,\r\n \"ak_groups__in\": [ak_group],\r\n }\r\n )\r\n )\r\n\r\n```\r\n\r\nI'm happy to provide a PR after discussing the right approach on how to make this configurable.\n", "before_files": [{"content": "\"\"\"Sync LDAP Users and groups into authentik\"\"\"\nfrom typing import Any, Optional\n\nimport ldap3\nimport ldap3.core.exceptions\nfrom django.db.models import Q\n\nfrom authentik.core.models import Group, User\nfrom authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME\nfrom authentik.sources.ldap.models import LDAPSource\nfrom authentik.sources.ldap.sync.base import LDAP_UNIQUENESS, BaseLDAPSynchronizer\n\n\nclass MembershipLDAPSynchronizer(BaseLDAPSynchronizer):\n \"\"\"Sync LDAP Users and groups into authentik\"\"\"\n\n group_cache: dict[str, Group]\n\n def __init__(self, source: LDAPSource):\n super().__init__(source)\n self.group_cache: dict[str, Group] = {}\n\n def sync(self) -> int:\n \"\"\"Iterate over all Users and assign Groups using memberOf Field\"\"\"\n groups = self._source.connection.extend.standard.paged_search(\n search_base=self.base_dn_groups,\n search_filter=self._source.group_object_filter,\n search_scope=ldap3.SUBTREE,\n attributes=[\n self._source.group_membership_field,\n self._source.object_uniqueness_field,\n LDAP_DISTINGUISHED_NAME,\n ],\n )\n membership_count = 0\n for group in groups:\n members = group.get(\"attributes\", {}).get(self._source.group_membership_field, [])\n ak_group = self.get_group(group)\n if not ak_group:\n continue\n\n users = User.objects.filter(\n Q(**{f\"attributes__{LDAP_DISTINGUISHED_NAME}__in\": members})\n | Q(\n **{\n f\"attributes__{LDAP_DISTINGUISHED_NAME}__isnull\": True,\n \"ak_groups__in\": [ak_group],\n }\n )\n )\n membership_count += 1\n membership_count += users.count()\n ak_group.users.set(users)\n ak_group.save()\n self._logger.debug(\"Successfully updated group membership\")\n return membership_count\n\n def get_group(self, group_dict: dict[str, Any]) -> Optional[Group]:\n \"\"\"Check if we fetched the group already, and if not cache it for later\"\"\"\n group_dn = group_dict.get(\"attributes\", {}).get(LDAP_DISTINGUISHED_NAME, [])\n group_uniq = group_dict.get(\"attributes\", {}).get(self._source.object_uniqueness_field, [])\n # group_uniq might be a single string or an array with (hopefully) a single string\n if isinstance(group_uniq, list):\n if len(group_uniq) < 1:\n self.message(\n f\"Group does not have a uniqueness attribute: '{group_dn}'\",\n group=group_dn,\n )\n return None\n group_uniq = group_uniq[0]\n if group_uniq not in self.group_cache:\n groups = Group.objects.filter(**{f\"attributes__{LDAP_UNIQUENESS}\": group_uniq})\n if not groups.exists():\n self.message(\n f\"Group does 
not exist in our DB yet, run sync_groups first: '{group_dn}'\",\n group=group_dn,\n )\n return None\n self.group_cache[group_uniq] = groups.first()\n return self.group_cache[group_uniq]\n", "path": "authentik/sources/ldap/sync/membership.py"}]} | 2,981 | 265 |
gh_patches_debug_27085 | rasdani/github-patches | git_diff | fossasia__open-event-server-2825 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Event does not show up on "manage events" page when it is a copy
When the user copies an event and edits it, it does not show up on the event management page.

</issue>
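The event dashboard presumably lists only events for which the current user holds a role, so a cloned event stays invisible until the copy also gets a role row for the user who copied it. A hedged sketch of that idea follows; the imports and model names mirror the patch further down in this entry, but this is not necessarily the project's exact implementation:
```python
# Sketch only: clone the event, then attach the current user as organizer so
# the copy shows up on the "manage events" page.
from flask.ext import login

from app.helpers.data import save_to_db
from app.helpers.wizard.clone import create_event_copy
from app.models.role import Role
from app.models.user import ORGANIZER
from app.models.users_events_roles import UsersEventsRoles


def create_event_copy_with_role(event_id):
    event = create_event_copy(event_id)
    role = Role.query.filter_by(name=ORGANIZER).first()
    save_to_db(UsersEventsRoles(login.current_user, event, role), "Organizer role saved")
    return event
```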
<code>
[start of app/helpers/wizard/clone.py]
1 from sqlalchemy.orm import make_transient
2
3 from app.helpers.data import save_to_db
4 from app.helpers.data_getter import DataGetter
5 from app.models import db
6
7
8 def clone_row(row, event_id=None):
9 db.session.expunge(row)
10 make_transient(row)
11 row.id = None
12 if event_id:
13 row.event_id = event_id
14 save_to_db(row)
15 db.session.flush()
16 return row
17
18
19 def create_event_copy(event_id):
20 old_event = DataGetter.get_event(event_id)
21 event = clone_row(old_event)
22 event.name = "Copy of " + event.name
23 event.state = "Draft"
24 save_to_db(event)
25
26 sponsors_old = DataGetter.get_sponsors(event_id).all()
27 tracks_old = DataGetter.get_tracks(event_id).all()
28 microlocations_old = DataGetter.get_microlocations(event_id).all()
29 call_for_paper_old = DataGetter.get_call_for_papers(event_id).first()
30 social_links = DataGetter.get_social_links_by_event_id(event_id).all()
31 custom_forms = DataGetter.get_custom_form_elements(event_id)
32
33 for social_link in social_links:
34 clone_row(social_link, event.id)
35
36 for sponsor in sponsors_old:
37 clone_row(sponsor, event.id)
38
39 for track in tracks_old:
40 clone_row(track, event.id)
41
42 for microlocation in microlocations_old:
43 clone_row(microlocation, event.id)
44
45 if call_for_paper_old:
46 clone_row(call_for_paper_old, event.id)
47
48 if custom_forms:
49 clone_row(custom_forms, event.id)
50
51 return event
52
[end of app/helpers/wizard/clone.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/helpers/wizard/clone.py b/app/helpers/wizard/clone.py
--- a/app/helpers/wizard/clone.py
+++ b/app/helpers/wizard/clone.py
@@ -1,8 +1,13 @@
from sqlalchemy.orm import make_transient
+from flask.ext import login
from app.helpers.data import save_to_db
from app.helpers.data_getter import DataGetter
from app.models import db
+from app.models.users_events_roles import UsersEventsRoles
+from app.models.role import Role
+from app.models.email_notifications import EmailNotification
+from app.models.user import ORGANIZER
def clone_row(row, event_id=None):
@@ -23,6 +28,17 @@
event.state = "Draft"
save_to_db(event)
+ role = Role.query.filter_by(name=ORGANIZER).first()
+ uer = UsersEventsRoles(login.current_user, event, role)
+ if save_to_db(uer, "Event saved"):
+ new_email_notification_setting = EmailNotification(next_event=1,
+ new_paper=1,
+ session_schedule=1,
+ session_accept_reject=1,
+ user_id=login.current_user.id,
+ event_id=event.id)
+ save_to_db(new_email_notification_setting, "EmailSetting Saved")
+
sponsors_old = DataGetter.get_sponsors(event_id).all()
tracks_old = DataGetter.get_tracks(event_id).all()
microlocations_old = DataGetter.get_microlocations(event_id).all()
| {"golden_diff": "diff --git a/app/helpers/wizard/clone.py b/app/helpers/wizard/clone.py\n--- a/app/helpers/wizard/clone.py\n+++ b/app/helpers/wizard/clone.py\n@@ -1,8 +1,13 @@\n from sqlalchemy.orm import make_transient\n+from flask.ext import login\n \n from app.helpers.data import save_to_db\n from app.helpers.data_getter import DataGetter\n from app.models import db\n+from app.models.users_events_roles import UsersEventsRoles\n+from app.models.role import Role\n+from app.models.email_notifications import EmailNotification\n+from app.models.user import ORGANIZER\n \n \n def clone_row(row, event_id=None):\n@@ -23,6 +28,17 @@\n event.state = \"Draft\"\n save_to_db(event)\n \n+ role = Role.query.filter_by(name=ORGANIZER).first()\n+ uer = UsersEventsRoles(login.current_user, event, role)\n+ if save_to_db(uer, \"Event saved\"):\n+ new_email_notification_setting = EmailNotification(next_event=1,\n+ new_paper=1,\n+ session_schedule=1,\n+ session_accept_reject=1,\n+ user_id=login.current_user.id,\n+ event_id=event.id)\n+ save_to_db(new_email_notification_setting, \"EmailSetting Saved\")\n+\n sponsors_old = DataGetter.get_sponsors(event_id).all()\n tracks_old = DataGetter.get_tracks(event_id).all()\n microlocations_old = DataGetter.get_microlocations(event_id).all()\n", "issue": "Event does not show up on \"manage events\" page when it is a copy\nWhen the user copies an event and edits it, it does not show up on the event management page.\r\n\r\n\r\n\n", "before_files": [{"content": "from sqlalchemy.orm import make_transient\n\nfrom app.helpers.data import save_to_db\nfrom app.helpers.data_getter import DataGetter\nfrom app.models import db\n\n\ndef clone_row(row, event_id=None):\n db.session.expunge(row)\n make_transient(row)\n row.id = None\n if event_id:\n row.event_id = event_id\n save_to_db(row)\n db.session.flush()\n return row\n\n\ndef create_event_copy(event_id):\n old_event = DataGetter.get_event(event_id)\n event = clone_row(old_event)\n event.name = \"Copy of \" + event.name\n event.state = \"Draft\"\n save_to_db(event)\n\n sponsors_old = DataGetter.get_sponsors(event_id).all()\n tracks_old = DataGetter.get_tracks(event_id).all()\n microlocations_old = DataGetter.get_microlocations(event_id).all()\n call_for_paper_old = DataGetter.get_call_for_papers(event_id).first()\n social_links = DataGetter.get_social_links_by_event_id(event_id).all()\n custom_forms = DataGetter.get_custom_form_elements(event_id)\n\n for social_link in social_links:\n clone_row(social_link, event.id)\n\n for sponsor in sponsors_old:\n clone_row(sponsor, event.id)\n\n for track in tracks_old:\n clone_row(track, event.id)\n\n for microlocation in microlocations_old:\n clone_row(microlocation, event.id)\n\n if call_for_paper_old:\n clone_row(call_for_paper_old, event.id)\n\n if custom_forms:\n clone_row(custom_forms, event.id)\n\n return event\n", "path": "app/helpers/wizard/clone.py"}]} | 1,103 | 324 |
gh_patches_debug_6751 | rasdani/github-patches | git_diff | mne-tools__mne-bids-1135 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
doc build fails: "tut-ieeg-localize" moved
Our doc build is failing with a warning because the reference to `tut-ieeg-localize` cannot be found anymore. Was this something in MNE-Python that has now moved? @alexrockhill do you know more? :-)
> /home/circleci/project/doc/auto_examples/convert_ieeg_to_bids.rst:1083: WARNING: undefined label: 'tut-ieeg-localize'
https://app.circleci.com/pipelines/github/mne-tools/mne-bids/5179/workflows/d0b5cc9b-cda4-436b-9e96-1edeea651d21/jobs/7347
</issue>
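One way to find out where the label went is to dump the Sphinx object inventory of the suspected new home and search it; `sphinx.ext.intersphinx` can be invoked as a module for exactly this. The inventory URL below is an assumption, not something stated in the issue:
```python
# Hedged sketch: list the objects another project's docs publish and look for
# the missing label. Requires sphinx to be installed.
import subprocess

inventory = "https://mne.tools/mne-gui-addons/objects.inv"  # assumed location
out = subprocess.run(
    ["python", "-m", "sphinx.ext.intersphinx", inventory],
    capture_output=True, text=True, check=True,
).stdout

print([line for line in out.splitlines() if "tut-ieeg-localize" in line])
```
Once the owning project is known, registering its inventory in `intersphinx_mapping` (as the diff further down in this entry does) is typically enough for the existing reference to resolve again.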
<code>
[start of doc/conf.py]
1 """Configure details for documentation with sphinx."""
2 import os
3 import sys
4 from datetime import date
5
6 import sphinx_gallery # noqa: F401
7 from sphinx_gallery.sorting import ExampleTitleSortKey
8
9 import mne_bids
10
11
12 # If extensions (or modules to document with autodoc) are in another directory,
13 # add these directories to sys.path here. If the directory is relative to the
14 # documentation root, use os.path.abspath to make it absolute, like shown here.
15 curdir = os.path.dirname(__file__)
16 sys.path.append(os.path.abspath(os.path.join(curdir, '..', 'mne_bids')))
17 sys.path.append(os.path.abspath(os.path.join(curdir, 'sphinxext')))
18
19
20 # -- General configuration ------------------------------------------------
21
22 # If your documentation needs a minimal Sphinx version, state it here.
23 #
24 # needs_sphinx = '1.0'
25
26 # Add any Sphinx extension module names here, as strings. They can be
27 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
28 # ones.
29 extensions = [
30 'sphinx.ext.githubpages',
31 'sphinx.ext.autodoc',
32 'sphinx.ext.mathjax',
33 'sphinx.ext.viewcode',
34 'sphinx.ext.autosummary',
35 'sphinx.ext.doctest',
36 'sphinx.ext.intersphinx',
37 'sphinx_gallery.gen_gallery',
38 'numpydoc',
39 'sphinx_copybutton',
40 'gen_cli', # custom extension, see ./sphinxext/gen_cli.py
41 'gh_substitutions', # custom extension, see ./sphinxext/gh_substitutions.py
42 ]
43
44 # configure sphinx-copybutton
45 copybutton_prompt_text = r">>> |\.\.\. |\$ "
46 copybutton_prompt_is_regexp = True
47
48 # configure numpydoc
49 numpydoc_xref_param_type = True
50 numpydoc_class_members_toctree = False
51 numpydoc_attributes_as_param_list = True
52 numpydoc_xref_aliases = {
53 'BIDSPath': ':class:`BIDSPath <mne_bids.BIDSPath>`',
54 'path-like': ':term:`path-like <mne:path-like>`',
55 'array-like': ':term:`array_like <numpy:array_like>`',
56 'int': ':class:`int <python:int>`',
57 'bool': ':class:`bool <python:bool>`',
58 'float': ':class:`float <python:float>`',
59 'list': ':class:`list <python:list>`',
60 'tuple': ':class:`tuple <python:tuple>`',
61 'NibabelImageObject': 'nibabel.spatialimages.SpatialImage',
62 }
63 numpydoc_xref_ignore = {
64 # words
65 'instance', 'instances', 'of'
66 }
67
68
69 # generate autosummary even if no references
70 autosummary_generate = True
71 autodoc_default_options = {'inherited-members': None}
72 default_role = 'autolink' # XXX silently allows bad syntax, someone should fix
73
74 # configure linkcheck
75 # https://sphinx-doc.org/en/master/usage/configuration.html?#options-for-the-linkcheck-builder
76 linkcheck_retries = 2
77 linkcheck_rate_limit_timeout = 15.0
78 linkcheck_ignore = [
79 r'https://www.researchgate.net/profile/.*',
80 ]
81
82 # The suffix(es) of source filenames.
83 # You can specify multiple suffix as a list of string:
84 #
85 # source_suffix = ['.rst', '.md']
86 source_suffix = '.rst'
87
88 # The master toctree document.
89 master_doc = 'index'
90
91 # General information about the project.
92 project = u'MNE-BIDS'
93 td = date.today()
94 copyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,
95 td.isoformat())
96
97 author = u'MNE Developers'
98
99 # The version info for the project you're documenting, acts as replacement for
100 # |version| and |release|, also used in various other places throughout the
101 # built documents.
102 #
103 # The short X.Y version.
104 version = mne_bids.__version__
105 # The full version, including alpha/beta/rc tags.
106 release = version
107
108 # List of patterns, relative to source directory, that match files and
109 # directories to ignore when looking for source files.
110 # This patterns also effect to html_static_path and html_extra_path
111 exclude_patterns = ['auto_examples/index.rst', '_build', 'Thumbs.db',
112 '.DS_Store']
113
114 # HTML options (e.g., theme)
115 html_show_sourcelink = False
116 html_copy_source = False
117
118 html_theme = 'pydata_sphinx_theme'
119
120 # Add any paths that contain templates here, relative to this directory.
121 templates_path = ['_templates']
122 html_static_path = ['_static']
123 html_css_files = ['style.css']
124
125 # Theme options are theme-specific and customize the look and feel of a theme
126 # further. For a list of options available for each theme, see the
127 # documentation.
128 html_theme_options = {
129 'icon_links': [
130 dict(name='GitHub',
131 url='https://github.com/mne-tools/mne-bids',
132 icon='fab fa-github-square'),
133 dict(name='Discourse',
134 url='https://mne.discourse.group/tags/mne-bids',
135 icon='fab fa-discourse'),
136 ],
137 'icon_links_label': 'Quick Links', # for screen reader
138 'use_edit_page_button': False,
139 'navigation_with_keys': False,
140 'show_toc_level': 1,
141 'navbar_end': ['version-switcher', 'navbar-icon-links'],
142 'analytics': dict(google_analytics_id='G-C8SH9E98QC'),
143 }
144
145 html_context = {
146 'versions_dropdown': {
147 'dev': 'v0.13 (devel)',
148 'stable': 'v0.12 (stable)',
149 'v0.11': 'v0.11',
150 'v0.10': 'v0.10',
151 'v0.9': 'v0.9',
152 'v0.8': 'v0.8',
153 'v0.7': 'v0.7',
154 'v0.6': 'v0.6',
155 'v0.5': 'v0.5',
156 'v0.4': 'v0.4',
157 'v0.3': 'v0.3',
158 'v0.2': 'v0.2',
159 'v0.1': 'v0.1',
160 },
161 }
162
163 html_sidebars = {}
164
165 # Example configuration for intersphinx: refer to the Python standard library.
166 intersphinx_mapping = {
167 'python': ('https://docs.python.org/3', None),
168 'mne': ('https://mne.tools/dev', None),
169 'numpy': ('https://numpy.org/devdocs', None),
170 'scipy': ('https://scipy.github.io/devdocs', None),
171 'matplotlib': ('https://matplotlib.org', None),
172 'nilearn': ('http://nilearn.github.io/stable', None),
173 'pandas': ('https://pandas.pydata.org/pandas-docs/dev', None),
174 'nibabel': ('https://nipy.org/nibabel', None),
175 }
176 intersphinx_timeout = 5
177
178 # Resolve binder filepath_prefix. From the docs:
179 # "A prefix to append to the filepath in the Binder links. You should use this
180 # if you will store your built documentation in a sub-folder of a repository,
181 # instead of in the root."
182 # we will store dev docs in a `dev` subdirectory and all other docs in a
183 # directory "v" + version_str. E.g., "v0.3"
184 if 'dev' in version:
185 filepath_prefix = 'dev'
186 else:
187 filepath_prefix = 'v{}'.format(version)
188
189 sphinx_gallery_conf = {
190 'doc_module': 'mne_bids',
191 'reference_url': {
192 'mne_bids': None,
193 },
194 'backreferences_dir': 'generated',
195 'examples_dirs': '../examples',
196 'within_subsection_order': ExampleTitleSortKey,
197 'gallery_dirs': 'auto_examples',
198 'filename_pattern': '^((?!sgskip).)*$',
199 'binder': {
200 # Required keys
201 'org': 'mne-tools',
202 'repo': 'mne-bids',
203 'branch': 'gh-pages', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs.
204 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).
205 'filepath_prefix': filepath_prefix, # noqa: E501 A prefix to prepend to any filepaths in Binder links.
206 'dependencies': [
207 '../test_requirements.txt',
208 './requirements.txt',
209 ],
210 }
211 }
212
[end of doc/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -166,6 +166,7 @@
intersphinx_mapping = {
'python': ('https://docs.python.org/3', None),
'mne': ('https://mne.tools/dev', None),
+ 'mne-gui-addons': ('https://mne.tools/mne-gui-addons', None),
'numpy': ('https://numpy.org/devdocs', None),
'scipy': ('https://scipy.github.io/devdocs', None),
'matplotlib': ('https://matplotlib.org', None),
| {"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -166,6 +166,7 @@\n intersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'mne': ('https://mne.tools/dev', None),\n+ 'mne-gui-addons': ('https://mne.tools/mne-gui-addons', None),\n 'numpy': ('https://numpy.org/devdocs', None),\n 'scipy': ('https://scipy.github.io/devdocs', None),\n 'matplotlib': ('https://matplotlib.org', None),\n", "issue": "doc build fails: \"tut-ieeg-localize\" moved\nOur doc build is failing with a warning because the reference to `tut-ieeg-localize` cannot be found anymore. Was this something in MNE-Python that has now moved? @alexrockhill do you know more? :-)\r\n\r\n> /home/circleci/project/doc/auto_examples/convert_ieeg_to_bids.rst:1083: WARNING: undefined label: 'tut-ieeg-localize'\r\n\r\nhttps://app.circleci.com/pipelines/github/mne-tools/mne-bids/5179/workflows/d0b5cc9b-cda4-436b-9e96-1edeea651d21/jobs/7347\n", "before_files": [{"content": "\"\"\"Configure details for documentation with sphinx.\"\"\"\nimport os\nimport sys\nfrom datetime import date\n\nimport sphinx_gallery # noqa: F401\nfrom sphinx_gallery.sorting import ExampleTitleSortKey\n\nimport mne_bids\n\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\ncurdir = os.path.dirname(__file__)\nsys.path.append(os.path.abspath(os.path.join(curdir, '..', 'mne_bids')))\nsys.path.append(os.path.abspath(os.path.join(curdir, 'sphinxext')))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.githubpages',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx_gallery.gen_gallery',\n 'numpydoc',\n 'sphinx_copybutton',\n 'gen_cli', # custom extension, see ./sphinxext/gen_cli.py\n 'gh_substitutions', # custom extension, see ./sphinxext/gh_substitutions.py\n]\n\n# configure sphinx-copybutton\ncopybutton_prompt_text = r\">>> |\\.\\.\\. 
|\\$ \"\ncopybutton_prompt_is_regexp = True\n\n# configure numpydoc\nnumpydoc_xref_param_type = True\nnumpydoc_class_members_toctree = False\nnumpydoc_attributes_as_param_list = True\nnumpydoc_xref_aliases = {\n 'BIDSPath': ':class:`BIDSPath <mne_bids.BIDSPath>`',\n 'path-like': ':term:`path-like <mne:path-like>`',\n 'array-like': ':term:`array_like <numpy:array_like>`',\n 'int': ':class:`int <python:int>`',\n 'bool': ':class:`bool <python:bool>`',\n 'float': ':class:`float <python:float>`',\n 'list': ':class:`list <python:list>`',\n 'tuple': ':class:`tuple <python:tuple>`',\n 'NibabelImageObject': 'nibabel.spatialimages.SpatialImage',\n}\nnumpydoc_xref_ignore = {\n # words\n 'instance', 'instances', 'of'\n}\n\n\n# generate autosummary even if no references\nautosummary_generate = True\nautodoc_default_options = {'inherited-members': None}\ndefault_role = 'autolink' # XXX silently allows bad syntax, someone should fix\n\n# configure linkcheck\n# https://sphinx-doc.org/en/master/usage/configuration.html?#options-for-the-linkcheck-builder\nlinkcheck_retries = 2\nlinkcheck_rate_limit_timeout = 15.0\nlinkcheck_ignore = [\n r'https://www.researchgate.net/profile/.*',\n]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'MNE-BIDS'\ntd = date.today()\ncopyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,\n td.isoformat())\n\nauthor = u'MNE Developers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = mne_bids.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['auto_examples/index.rst', '_build', 'Thumbs.db',\n '.DS_Store']\n\n# HTML options (e.g., theme)\nhtml_show_sourcelink = False\nhtml_copy_source = False\n\nhtml_theme = 'pydata_sphinx_theme'\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\nhtml_static_path = ['_static']\nhtml_css_files = ['style.css']\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'icon_links': [\n dict(name='GitHub',\n url='https://github.com/mne-tools/mne-bids',\n icon='fab fa-github-square'),\n dict(name='Discourse',\n url='https://mne.discourse.group/tags/mne-bids',\n icon='fab fa-discourse'),\n ],\n 'icon_links_label': 'Quick Links', # for screen reader\n 'use_edit_page_button': False,\n 'navigation_with_keys': False,\n 'show_toc_level': 1,\n 'navbar_end': ['version-switcher', 'navbar-icon-links'],\n 'analytics': dict(google_analytics_id='G-C8SH9E98QC'),\n}\n\nhtml_context = {\n 'versions_dropdown': {\n 'dev': 'v0.13 (devel)',\n 'stable': 'v0.12 (stable)',\n 'v0.11': 'v0.11',\n 'v0.10': 'v0.10',\n 'v0.9': 'v0.9',\n 'v0.8': 'v0.8',\n 'v0.7': 'v0.7',\n 'v0.6': 'v0.6',\n 'v0.5': 'v0.5',\n 'v0.4': 'v0.4',\n 'v0.3': 'v0.3',\n 'v0.2': 'v0.2',\n 'v0.1': 'v0.1',\n },\n}\n\nhtml_sidebars = {}\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'mne': ('https://mne.tools/dev', None),\n 'numpy': ('https://numpy.org/devdocs', None),\n 'scipy': ('https://scipy.github.io/devdocs', None),\n 'matplotlib': ('https://matplotlib.org', None),\n 'nilearn': ('http://nilearn.github.io/stable', None),\n 'pandas': ('https://pandas.pydata.org/pandas-docs/dev', None),\n 'nibabel': ('https://nipy.org/nibabel', None),\n}\nintersphinx_timeout = 5\n\n# Resolve binder filepath_prefix. From the docs:\n# \"A prefix to append to the filepath in the Binder links. You should use this\n# if you will store your built documentation in a sub-folder of a repository,\n# instead of in the root.\"\n# we will store dev docs in a `dev` subdirectory and all other docs in a\n# directory \"v\" + version_str. E.g., \"v0.3\"\nif 'dev' in version:\n filepath_prefix = 'dev'\nelse:\n filepath_prefix = 'v{}'.format(version)\n\nsphinx_gallery_conf = {\n 'doc_module': 'mne_bids',\n 'reference_url': {\n 'mne_bids': None,\n },\n 'backreferences_dir': 'generated',\n 'examples_dirs': '../examples',\n 'within_subsection_order': ExampleTitleSortKey,\n 'gallery_dirs': 'auto_examples',\n 'filename_pattern': '^((?!sgskip).)*$',\n 'binder': {\n # Required keys\n 'org': 'mne-tools',\n 'repo': 'mne-bids',\n 'branch': 'gh-pages', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs.\n 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).\n 'filepath_prefix': filepath_prefix, # noqa: E501 A prefix to prepend to any filepaths in Binder links.\n 'dependencies': [\n '../test_requirements.txt',\n './requirements.txt',\n ],\n }\n}\n", "path": "doc/conf.py"}]} | 3,156 | 139 |
gh_patches_debug_31040 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-4372 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
distutils not included with latest virtualenv (16.4.0)
This was already reported in #4031. The issue was closed without a fix so I'm creating this one.
**With virtualenv 16.4.0, pyinstaller reports:**
```
3583 INFO: distutils: retargeting to non-venv dir '/usr/lib64/python3.6/distutils/__init__.py'
```
and then, during the "Loading module hook" sequence, `hook-distutils.py` is missing and the distutils modules are not included in the final executable binary.
When executing the binary the error is:
```
ModuleNotFoundError: No module named 'distutils'
[10373] Failed to execute script <name here>
```
**With virtualenv 16.1.0, pyinstaller reports:**
```
3157 INFO: Processing pre-find module path hook distutils
5053 INFO: Loading module hook "hook-distutils.py"...
```
and the distutils modules are included in the final executable binary.
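For illustration, a minimal sketch of how the venv check can be done without relying on `distutils.distutils_path` (newer virtualenv releases no longer set that attribute), by comparing the loaded `distutils` against the location of a plain stdlib module such as `opcode`, which is the same trick virtualenv itself uses:
```python
import os
import distutils
import opcode

# Sketch only, not the shipped hook. `opcode` is never overridden by
# virtualenv, so its parent directory is the real standard-library location.
system_module_path = os.path.normpath(os.path.dirname(opcode.__file__))
loaded_module_path = os.path.normpath(os.path.dirname(distutils.__file__))

if system_module_path != loaded_module_path:
    # distutils was loaded from the venv; point the hook at the system copy.
    search_dirs = [system_module_path]
```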
</issue>
<code>
[start of PyInstaller/hooks/pre_find_module_path/hook-distutils.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2019, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10 """
11 `distutils`-specific pre-find module path hook.
12
13 When run from within a venv (virtual environment), this hook changes the
14 `__path__` of the `distutils` package to that of the system-wide rather than
15 venv-specific `distutils` package. While the former is suitable for freezing,
16 the latter is intended for use _only_ from within venvs.
17 """
18
19
20 import distutils
21 import os
22
23 from PyInstaller.utils.hooks import logger
24
25
26 def pre_find_module_path(api):
27 # Absolute path of the system-wide "distutils" package when run from within
28 # a venv or None otherwise.
29 distutils_dir = getattr(distutils, 'distutils_path', None)
30 if distutils_dir is not None:
31 # Find this package in its parent directory.
32 api.search_dirs = [os.path.dirname(distutils_dir)]
33 logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)
34
[end of PyInstaller/hooks/pre_find_module_path/hook-distutils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py
--- a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py
+++ b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py
@@ -1,11 +1,11 @@
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
# Copyright (c) 2005-2019, PyInstaller Development Team.
#
# Distributed under the terms of the GNU General Public License with exception
# for distributing bootloader.
#
# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
"""
`distutils`-specific pre-find module path hook.
@@ -26,8 +26,16 @@
def pre_find_module_path(api):
# Absolute path of the system-wide "distutils" package when run from within
# a venv or None otherwise.
- distutils_dir = getattr(distutils, 'distutils_path', None)
- if distutils_dir is not None:
+
+ # opcode is not a virtualenv module, so we can use it to find the stdlib.
+ # Technique taken from virtualenv's "distutils" package detection at
+ # https://github.com/pypa/virtualenv/blob/16.3.0/virtualenv_embedded/distutils-init.py#L5
+ import opcode
+
+ system_module_path = os.path.normpath(os.path.dirname(opcode.__file__))
+ loaded_module_path = os.path.normpath(os.path.dirname(distutils.__file__))
+ if system_module_path != loaded_module_path:
# Find this package in its parent directory.
- api.search_dirs = [os.path.dirname(distutils_dir)]
- logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)
+ api.search_dirs = [system_module_path]
+ logger.info('distutils: retargeting to non-venv dir %r',
+ system_module_path)
| {"golden_diff": "diff --git a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py\n--- a/PyInstaller/hooks/pre_find_module_path/hook-distutils.py\n+++ b/PyInstaller/hooks/pre_find_module_path/hook-distutils.py\n@@ -1,11 +1,11 @@\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n # Copyright (c) 2005-2019, PyInstaller Development Team.\n #\n # Distributed under the terms of the GNU General Public License with exception\n # for distributing bootloader.\n #\n # The full license is in the file COPYING.txt, distributed with this software.\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n \n \"\"\"\n `distutils`-specific pre-find module path hook.\n@@ -26,8 +26,16 @@\n def pre_find_module_path(api):\n # Absolute path of the system-wide \"distutils\" package when run from within\n # a venv or None otherwise.\n- distutils_dir = getattr(distutils, 'distutils_path', None)\n- if distutils_dir is not None:\n+\n+ # opcode is not a virtualenv module, so we can use it to find the stdlib.\n+ # Technique taken from virtualenv's \"distutils\" package detection at\n+ # https://github.com/pypa/virtualenv/blob/16.3.0/virtualenv_embedded/distutils-init.py#L5\n+ import opcode\n+\n+ system_module_path = os.path.normpath(os.path.dirname(opcode.__file__))\n+ loaded_module_path = os.path.normpath(os.path.dirname(distutils.__file__))\n+ if system_module_path != loaded_module_path:\n # Find this package in its parent directory.\n- api.search_dirs = [os.path.dirname(distutils_dir)]\n- logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)\n+ api.search_dirs = [system_module_path]\n+ logger.info('distutils: retargeting to non-venv dir %r',\n+ system_module_path)\n", "issue": "distutils not included with latest virtualenv (16.4.0)\nThis was already reported in #4031. 
The issue was closed without a fix so I'm creating this one.\r\n\r\n**With virtualenv 16.4.0, pyinstaller reports :**\r\n\r\n```\r\n3583 INFO: distutils: retargeting to non-venv dir '/usr/lib64/python3.6/distutils/__init__.py'\r\n```\r\nand then during \"Loading module hook\" sequence, the `hook-distutils.py` is missing and distutils modules are not included into the final executable binary.\r\n\r\nWhen executing the binary the error is:\r\n\r\n```\r\nModuleNotFoundError: No module named 'distutils'\r\n[10373] Failed to execute script <name here>\r\n```\r\n\r\n**With virtualenv 16.1.0, pyinstaller reports :**\r\n\r\n```\r\n3157 INFO: Processing pre-find module path hook distutils\r\n5053 INFO: Loading module hook \"hook-distutils.py\"...\r\n```\r\n\r\nand distutils modules are included into the final executable binary.\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n\"\"\"\n`distutils`-specific pre-find module path hook.\n\nWhen run from within a venv (virtual environment), this hook changes the\n`__path__` of the `distutils` package to that of the system-wide rather than\nvenv-specific `distutils` package. While the former is suitable for freezing,\nthe latter is intended for use _only_ from within venvs.\n\"\"\"\n\n\nimport distutils\nimport os\n\nfrom PyInstaller.utils.hooks import logger\n\n\ndef pre_find_module_path(api):\n # Absolute path of the system-wide \"distutils\" package when run from within\n # a venv or None otherwise.\n distutils_dir = getattr(distutils, 'distutils_path', None)\n if distutils_dir is not None:\n # Find this package in its parent directory.\n api.search_dirs = [os.path.dirname(distutils_dir)]\n logger.info('distutils: retargeting to non-venv dir %r' % distutils_dir)\n", "path": "PyInstaller/hooks/pre_find_module_path/hook-distutils.py"}]} | 1,118 | 444 |
gh_patches_debug_14392 | rasdani/github-patches | git_diff | pre-commit__pre-commit-216 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pre-commit potentially uses the wrong `virtualenv` when building environments
It should use `sys.executable, '-m', 'virtualenv'` instead of `'virtualenv'`
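For illustration, a minimal sketch of the difference (`'py_env'` stands in for the prefixed environment path the hook actually builds):
```python
import sys

# Resolving "virtualenv" from PATH may pick up a copy that belongs to a
# different interpreter than the one running pre-commit:
venv_cmd = ['virtualenv', 'py_env']

# Invoking the module through the current interpreter pins it to
# sys.executable:
venv_cmd = [sys.executable, '-m', 'virtualenv', 'py_env']
```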
</issue>
<code>
[start of pre_commit/languages/python.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6
7 import virtualenv
8
9 from pre_commit.languages import helpers
10 from pre_commit.util import clean_path_on_failure
11
12
13 ENVIRONMENT_DIR = 'py_env'
14
15
16 class PythonEnv(helpers.Environment):
17 @property
18 def env_prefix(self):
19 return ". '{{prefix}}{0}activate' &&".format(
20 virtualenv.path_locations(
21 ENVIRONMENT_DIR,
22 )[-1].rstrip(os.sep) + os.sep,
23 'activate',
24 )
25
26
27 @contextlib.contextmanager
28 def in_env(repo_cmd_runner):
29 yield PythonEnv(repo_cmd_runner)
30
31
32 def norm_version(version):
33 if os.name == 'nt': # pragma: no cover (windows)
34 if not distutils.spawn.find_executable(version):
35 # The default place for python on windows is:
36 # C:\PythonXX\python.exe
37 version = r'C:\{0}\python.exe'.format(version.replace('.', ''))
38 return version
39
40
41 def install_environment(repo_cmd_runner, version='default'):
42 assert repo_cmd_runner.exists('setup.py')
43
44 # Install a virtualenv
45 with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):
46 venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]
47 if version != 'default':
48 venv_cmd.extend(['-p', norm_version(version)])
49 repo_cmd_runner.run(venv_cmd)
50 with in_env(repo_cmd_runner) as env:
51 env.run("cd '{prefix}' && pip install .")
52
53
54 def run_hook(repo_cmd_runner, hook, file_args):
55 with in_env(repo_cmd_runner) as env:
56 return helpers.run_hook(env, hook, file_args)
57
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -3,6 +3,7 @@
import contextlib
import distutils.spawn
import os
+import sys
import virtualenv
@@ -43,7 +44,10 @@
# Install a virtualenv
with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):
- venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]
+ venv_cmd = [
+ sys.executable, '-m', 'virtualenv',
+ '{{prefix}}{0}'.format(ENVIRONMENT_DIR)
+ ]
if version != 'default':
venv_cmd.extend(['-p', norm_version(version)])
repo_cmd_runner.run(venv_cmd)
| {"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -3,6 +3,7 @@\n import contextlib\n import distutils.spawn\n import os\n+import sys\n \n import virtualenv\n \n@@ -43,7 +44,10 @@\n \n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):\n- venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]\n+ venv_cmd = [\n+ sys.executable, '-m', 'virtualenv',\n+ '{{prefix}}{0}'.format(ENVIRONMENT_DIR)\n+ ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n", "issue": "pre-commit potentially uses the wrong `virtualenv` when building environments\nIt should use `sys.executable, '-m', 'virtualenv'` instead of `'virtualenv'`\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". '{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n ENVIRONMENT_DIR,\n )[-1].rstrip(os.sep) + os.sep,\n 'activate',\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner):\n yield PythonEnv(repo_cmd_runner)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n if not distutils.spawn.find_executable(version):\n # The default place for python on windows is:\n # C:\\PythonXX\\python.exe\n version = r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n return version\n\n\ndef install_environment(repo_cmd_runner, version='default'):\n assert repo_cmd_runner.exists('setup.py')\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(ENVIRONMENT_DIR)):\n venv_cmd = ['virtualenv', '{{prefix}}{0}'.format(ENVIRONMENT_DIR)]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner) as env:\n env.run(\"cd '{prefix}' && pip install .\")\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}]} | 1,057 | 191 |
gh_patches_debug_30909 | rasdani/github-patches | git_diff | ephios-dev__ephios-1012 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve explanation for page slugs
Users did not understand what the page slug means. We should provide a help text, either directly or in the docs.
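One way to provide it directly is the `help_text` argument on the model field; the snippet below is only a sketch and the wording is an example:
```python
from django.db import models
from django.utils.translation import gettext_lazy as _


class Page(models.Model):
    # Example only: other fields omitted, help_text wording is a suggestion.
    slug = models.SlugField(
        _("URL slug"),
        max_length=250,
        unique=True,
        help_text=_("The slug is used to generate the page's URL."),
    )
```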
</issue>
<code>
[start of ephios/plugins/pages/models.py]
1 from django.db import models
2 from django.utils.translation import gettext_lazy as _
3
4
5 class Page(models.Model):
6 title = models.CharField(verbose_name=_("Title"), max_length=250)
7 content = models.TextField(_("Content"), blank=True)
8 slug = models.SlugField(_("Slug"), max_length=250, unique=True)
9 show_in_footer = models.BooleanField(_("Show in footer"), default=False)
10 publicly_visible = models.BooleanField(_("Publicly visible"), default=False)
11
12 def __str__(self):
13 return str(self.title)
14
15 class Meta:
16 verbose_name = "Page"
17 verbose_name_plural = "Pages"
18
[end of ephios/plugins/pages/models.py]
[start of ephios/plugins/pages/views.py]
1 from django.contrib import messages
2 from django.contrib.auth.views import redirect_to_login
3 from django.urls import reverse
4 from django.utils.translation import gettext as _
5 from django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView
6
7 from ephios.extra.mixins import CustomPermissionRequiredMixin
8 from ephios.plugins.pages.models import Page
9
10
11 class PageListView(CustomPermissionRequiredMixin, ListView):
12 model = Page
13 permission_required = "pages.add_page"
14
15
16 class PageView(DetailView):
17 model = Page
18
19 def setup(self, request, *args, **kwargs):
20 super().setup(request, *args, **kwargs)
21 self.object = self.get_object()
22
23 def dispatch(self, request, *args, **kwargs):
24 if not request.user.is_authenticated and not self.object.publicly_visible:
25 return redirect_to_login(self.request.get_full_path())
26 return super().dispatch(request, *args, **kwargs)
27
28
29 class PageCreateView(CustomPermissionRequiredMixin, CreateView):
30 model = Page
31 permission_required = "pages.add_page"
32 fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
33
34 def get_success_url(self):
35 messages.success(self.request, _("Page saved successfully."))
36 return reverse("pages:settings_page_list")
37
38
39 class PageUpdateView(CustomPermissionRequiredMixin, UpdateView):
40 model = Page
41 permission_required = "pages.change_page"
42 fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
43
44 def get_success_url(self):
45 messages.success(self.request, _("Page saved successfully."))
46 return reverse("pages:settings_page_list")
47
48
49 class PageDeleteView(CustomPermissionRequiredMixin, DeleteView):
50 model = Page
51 permission_required = "pages.delete_page"
52
53 def get_success_url(self):
54 messages.info(self.request, _("Page deleted successfully."))
55 return reverse("pages:settings_page_list")
56
[end of ephios/plugins/pages/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ephios/plugins/pages/models.py b/ephios/plugins/pages/models.py
--- a/ephios/plugins/pages/models.py
+++ b/ephios/plugins/pages/models.py
@@ -5,7 +5,12 @@
class Page(models.Model):
title = models.CharField(verbose_name=_("Title"), max_length=250)
content = models.TextField(_("Content"), blank=True)
- slug = models.SlugField(_("Slug"), max_length=250, unique=True)
+ slug = models.SlugField(
+ _("URL slug"),
+ help_text=_("The slug is used to generate the page's URL."),
+ max_length=250,
+ unique=True,
+ )
show_in_footer = models.BooleanField(_("Show in footer"), default=False)
publicly_visible = models.BooleanField(_("Publicly visible"), default=False)
diff --git a/ephios/plugins/pages/views.py b/ephios/plugins/pages/views.py
--- a/ephios/plugins/pages/views.py
+++ b/ephios/plugins/pages/views.py
@@ -29,7 +29,7 @@
class PageCreateView(CustomPermissionRequiredMixin, CreateView):
model = Page
permission_required = "pages.add_page"
- fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
+ fields = ["title", "slug", "content", "show_in_footer", "publicly_visible"]
def get_success_url(self):
messages.success(self.request, _("Page saved successfully."))
@@ -39,7 +39,7 @@
class PageUpdateView(CustomPermissionRequiredMixin, UpdateView):
model = Page
permission_required = "pages.change_page"
- fields = ["title", "content", "slug", "show_in_footer", "publicly_visible"]
+ fields = ["title", "slug", "content", "show_in_footer", "publicly_visible"]
def get_success_url(self):
messages.success(self.request, _("Page saved successfully."))
| {"golden_diff": "diff --git a/ephios/plugins/pages/models.py b/ephios/plugins/pages/models.py\n--- a/ephios/plugins/pages/models.py\n+++ b/ephios/plugins/pages/models.py\n@@ -5,7 +5,12 @@\n class Page(models.Model):\n title = models.CharField(verbose_name=_(\"Title\"), max_length=250)\n content = models.TextField(_(\"Content\"), blank=True)\n- slug = models.SlugField(_(\"Slug\"), max_length=250, unique=True)\n+ slug = models.SlugField(\n+ _(\"URL slug\"),\n+ help_text=_(\"The slug is used to generate the page's URL.\"),\n+ max_length=250,\n+ unique=True,\n+ )\n show_in_footer = models.BooleanField(_(\"Show in footer\"), default=False)\n publicly_visible = models.BooleanField(_(\"Publicly visible\"), default=False)\n \ndiff --git a/ephios/plugins/pages/views.py b/ephios/plugins/pages/views.py\n--- a/ephios/plugins/pages/views.py\n+++ b/ephios/plugins/pages/views.py\n@@ -29,7 +29,7 @@\n class PageCreateView(CustomPermissionRequiredMixin, CreateView):\n model = Page\n permission_required = \"pages.add_page\"\n- fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n+ fields = [\"title\", \"slug\", \"content\", \"show_in_footer\", \"publicly_visible\"]\n \n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n@@ -39,7 +39,7 @@\n class PageUpdateView(CustomPermissionRequiredMixin, UpdateView):\n model = Page\n permission_required = \"pages.change_page\"\n- fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n+ fields = [\"title\", \"slug\", \"content\", \"show_in_footer\", \"publicly_visible\"]\n \n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n", "issue": "Improve explanation for page slugs\nUsers did not understand what the page slug means. 
We should provide a help text, either directly or in the docs.\n", "before_files": [{"content": "from django.db import models\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass Page(models.Model):\n title = models.CharField(verbose_name=_(\"Title\"), max_length=250)\n content = models.TextField(_(\"Content\"), blank=True)\n slug = models.SlugField(_(\"Slug\"), max_length=250, unique=True)\n show_in_footer = models.BooleanField(_(\"Show in footer\"), default=False)\n publicly_visible = models.BooleanField(_(\"Publicly visible\"), default=False)\n\n def __str__(self):\n return str(self.title)\n\n class Meta:\n verbose_name = \"Page\"\n verbose_name_plural = \"Pages\"\n", "path": "ephios/plugins/pages/models.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.views import redirect_to_login\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView\n\nfrom ephios.extra.mixins import CustomPermissionRequiredMixin\nfrom ephios.plugins.pages.models import Page\n\n\nclass PageListView(CustomPermissionRequiredMixin, ListView):\n model = Page\n permission_required = \"pages.add_page\"\n\n\nclass PageView(DetailView):\n model = Page\n\n def setup(self, request, *args, **kwargs):\n super().setup(request, *args, **kwargs)\n self.object = self.get_object()\n\n def dispatch(self, request, *args, **kwargs):\n if not request.user.is_authenticated and not self.object.publicly_visible:\n return redirect_to_login(self.request.get_full_path())\n return super().dispatch(request, *args, **kwargs)\n\n\nclass PageCreateView(CustomPermissionRequiredMixin, CreateView):\n model = Page\n permission_required = \"pages.add_page\"\n fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n\n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n return reverse(\"pages:settings_page_list\")\n\n\nclass PageUpdateView(CustomPermissionRequiredMixin, UpdateView):\n model = Page\n permission_required = \"pages.change_page\"\n fields = [\"title\", \"content\", \"slug\", \"show_in_footer\", \"publicly_visible\"]\n\n def get_success_url(self):\n messages.success(self.request, _(\"Page saved successfully.\"))\n return reverse(\"pages:settings_page_list\")\n\n\nclass PageDeleteView(CustomPermissionRequiredMixin, DeleteView):\n model = Page\n permission_required = \"pages.delete_page\"\n\n def get_success_url(self):\n messages.info(self.request, _(\"Page deleted successfully.\"))\n return reverse(\"pages:settings_page_list\")\n", "path": "ephios/plugins/pages/views.py"}]} | 1,272 | 431 |
gh_patches_debug_4988 | rasdani/github-patches | git_diff | deis__deis-2513 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`make discovery-url` doesn't work.
It shows "no rule to make discovery-url" and stops.
I have searched the old issues, but they didn't solve the problem.
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # deis documentation build configuration file, created by
4 # sphinx-quickstart on Fri Jul 26 12:12:00 2013.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import os
15 import sys
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20
21 # Some hackery here to get deis.py to be importable as client.deis
22 open(os.path.join('..', '__init__.py'), 'a')
23 sys.path.insert(0, os.path.abspath(os.path.join('..')))
24 sys.path.insert(0, os.path.abspath(os.path.join('..', 'controller')))
25 # create local_settings.py for SECRET_KEY if necessary
26 local_settings_path = os.path.abspath(
27 os.path.join('..', 'controller', 'deis', 'local_settings.py'))
28 if not os.path.exists(local_settings_path):
29 with open(local_settings_path, 'w') as local_settings:
30 local_settings.write("SECRET_KEY = 'DummySecretKey'\n")
31 # set up Django
32 os.environ['DJANGO_SETTINGS_MODULE'] = 'deis.settings'
33 from django.conf import settings # noqa
34
35 # -- General configuration -----------------------------------------------------
36
37 # If your documentation needs a minimal Sphinx version, state it here.
38 #needs_sphinx = '1.0'
39
40 # Add any Sphinx extension module names here, as strings. They can be extensions
41 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
42 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary',
43 'sphinx.ext.viewcode']
44
45 # default flags for auto-generated python code documetation
46 autodoc_default_flags = ['members', 'undoc-members']
47
48 # Add any paths that contain templates here, relative to this directory.
49 templates_path = ['_templates']
50
51 # The suffix of source filenames.
52 source_suffix = '.rst'
53
54 # The encoding of source files.
55 #source_encoding = 'utf-8-sig'
56
57 # The master toctree document.
58 master_doc = 'toctree'
59
60 # General information about the project.
61 project = u'deis'
62 copyright = u'2013, OpDemand LLC'
63
64 # The version info for the project you're documenting, acts as replacement for
65 # |version| and |release|, also used in various other places throughout the
66 # built documents.
67 #
68 from deis import __version__
69
70 # The short X.Y version.
71 version = __version__.rsplit('.', 1)[0]
72 # The full version, including alpha/beta/rc tags.
73 release = __version__
74
75 # The language for content autogenerated by Sphinx. Refer to documentation
76 # for a list of supported languages.
77 #language = None
78
79 # There are two options for replacing |today|: either, you set today to some
80 # non-false value, then it is used:
81 #today = ''
82 # Else, today_fmt is used as the format for a strftime call.
83 #today_fmt = '%B %d, %Y'
84
85 # List of patterns, relative to source directory, that match files and
86 # directories to ignore when looking for source files.
87 exclude_patterns = ['_build', 'venv']
88
89 # The reST default role (used for this markup: `text`) to use for all documents.
90 #default_role = None
91
92 # If true, '()' will be appended to :func: etc. cross-reference text.
93 #add_function_parentheses = True
94
95 # If true, the current module name will be prepended to all description
96 # unit titles (such as .. function::).
97 #add_module_names = True
98
99 # If true, sectionauthor and moduleauthor directives will be shown in the
100 # output. They are ignored by default.
101 #show_authors = False
102
103 # The name of the Pygments (syntax highlighting) style to use.
104 pygments_style = 'sphinx'
105
106 # A list of ignored prefixes for module index sorting.
107 #modindex_common_prefix = []
108
109 # If true, keep warnings as "system message" paragraphs in the built documents.
110 #keep_warnings = False
111
112
113 # -- Options for HTML output ---------------------------------------------------
114
115 # The theme to use for HTML and HTML Help pages. See the documentation for
116 # a list of builtin themes.
117 html_theme = 'deis'
118
119 # Theme options are theme-specific and customize the look and feel of a theme
120 # further. For a list of options available for each theme, see the
121 # documentation.
122 #html_theme_options = {}
123
124 # Add any paths that contain custom themes here, relative to this directory.
125 html_theme_path = ['theme']
126
127 # The name for this set of Sphinx documents. If None, it defaults to
128 # "<project> v<release> documentation".
129 #html_title = None
130
131 # A shorter title for the navigation bar. Default is the same as html_title.
132 #html_short_title = None
133
134 # The name of an image file (relative to this directory) to place at the top
135 # of the sidebar.
136 #html_logo = None
137
138 # The name of an image file (within the static path) to use as favicon of the
139 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
140 # pixels large.
141 #html_favicon = None
142
143 # Add any paths that contain custom static files (such as style sheets) here,
144 # relative to this directory. They are copied after the builtin static files,
145 # so a file named "default.css" will overwrite the builtin "default.css".
146 html_static_path = ['../controller/web/static']
147
148 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
149 # using the given strftime format.
150 #html_last_updated_fmt = '%b %d, %Y'
151
152 # If true, SmartyPants will be used to convert quotes and dashes to
153 # typographically correct entities.
154 html_use_smartypants = True
155
156 html_add_permalinks = True
157
158 # Custom sidebar templates, maps document names to template names.
159 #html_sidebars = {}
160
161 # Additional templates that should be rendered to pages, maps page names to
162 # template names.
163 #html_additional_pages = {}
164
165 # If false, no module index is generated.
166 #html_domain_indices = True
167
168 # If false, no index is generated.
169 #html_use_index = True
170
171 # If true, the index is split into individual pages for each letter.
172 #html_split_index = False
173
174 # If true, links to the reST sources are added to the pages.
175 #html_show_sourcelink = True
176
177 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
178 #html_show_sphinx = True
179
180 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
181 #html_show_copyright = True
182
183 # If true, an OpenSearch description file will be output, and all pages will
184 # contain a <link> tag referring to it. The value of this option must be the
185 # base URL from which the finished HTML is served.
186 #html_use_opensearch = ''
187
188 # This is the file name suffix for HTML files (e.g. ".xhtml").
189 #html_file_suffix = None
190
191 # Output file base name for HTML help builder.
192 htmlhelp_basename = 'deisdoc'
193
194
195 # -- Options for LaTeX output --------------------------------------------------
196
197 latex_elements = {
198 # The paper size ('letterpaper' or 'a4paper').
199 #'papersize': 'letterpaper',
200
201 # The font size ('10pt', '11pt' or '12pt').
202 #'pointsize': '10pt',
203
204 # Additional stuff for the LaTeX preamble.
205 #'preamble': '',
206 }
207
208 # Grouping the document tree into LaTeX files. List of tuples
209 # (source start file, target name, title, author, documentclass [howto/manual]).
210 latex_documents = [
211 ('index', 'deis.tex', u'deis Documentation',
212 u'Author', 'manual'),
213 ]
214
215 # The name of an image file (relative to this directory) to place at the top of
216 # the title page.
217 #latex_logo = None
218
219 # For "manual" documents, if this is true, then toplevel headings are parts,
220 # not chapters.
221 #latex_use_parts = False
222
223 # If true, show page references after internal links.
224 #latex_show_pagerefs = False
225
226 # If true, show URL addresses after external links.
227 #latex_show_urls = False
228
229 # Documents to append as an appendix to all manuals.
230 #latex_appendices = []
231
232 # If false, no module index is generated.
233 #latex_domain_indices = True
234
235
236 # -- Options for manual page output --------------------------------------------
237
238 # One entry per manual page. List of tuples
239 # (source start file, name, description, authors, manual section).
240 man_pages = [
241 ('index', 'deis', u'deis Documentation',
242 [u'Author'], 1)
243 ]
244
245 # If true, show URL addresses after external links.
246 #man_show_urls = False
247
248
249 # -- Options for Texinfo output ------------------------------------------------
250
251 # Grouping the document tree into Texinfo files. List of tuples
252 # (source start file, target name, title, author,
253 # dir menu entry, description, category)
254 texinfo_documents = [
255 ('index', 'deis', u'deis Documentation',
256 u'Author', 'deis', 'One line description of project.',
257 'Miscellaneous'),
258 ]
259
260 # Documents to append as an appendix to all manuals.
261 #texinfo_appendices = []
262
263 # If false, no module index is generated.
264 #texinfo_domain_indices = True
265
266 # How to display URL addresses: 'footnote', 'no', or 'inline'.
267 #texinfo_show_urls = 'footnote'
268
269 # If true, do not generate a @detailmenu in the "Top" node's menu.
270 #texinfo_no_detailmenu = False
271
272
273 # -- Options for Epub output ---------------------------------------------------
274
275 # Bibliographic Dublin Core info.
276 epub_title = u'deis'
277 epub_author = u'OpDemand LLC'
278 epub_publisher = u'OpDemand LLC'
279 epub_copyright = u'2013, OpDemand LLC'
280
281 # The language of the text. It defaults to the language option
282 # or en if the language is not set.
283 #epub_language = ''
284
285 # The scheme of the identifier. Typical schemes are ISBN or URL.
286 #epub_scheme = ''
287
288 # The unique identifier of the text. This can be a ISBN number
289 # or the project homepage.
290 #epub_identifier = ''
291
292 # A unique identification for the text.
293 #epub_uid = ''
294
295 # A tuple containing the cover image and cover page html template filenames.
296 #epub_cover = ()
297
298 # A sequence of (type, uri, title) tuples for the guide element of content.opf.
299 #epub_guide = ()
300
301 # HTML files that should be inserted before the pages created by sphinx.
302 # The format is a list of tuples containing the path and title.
303 #epub_pre_files = []
304
305 # HTML files shat should be inserted after the pages created by sphinx.
306 # The format is a list of tuples containing the path and title.
307 #epub_post_files = []
308
309 # A list of files that should not be packed into the epub file.
310 #epub_exclude_files = []
311
312 # The depth of the table of contents in toc.ncx.
313 #epub_tocdepth = 3
314
315 # Allow duplicate toc entries.
316 #epub_tocdup = True
317
318 # Fix unsupported image types using the PIL.
319 #epub_fix_images = False
320
321 # Scale large images.
322 #epub_max_image_width = 0
323
324 # If 'no', URL addresses will not be shown.
325 #epub_show_urls = 'inline'
326
327 # If false, no index is generated.
328 #epub_use_index = True
329
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -84,7 +84,7 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
-exclude_patterns = ['_build', 'venv']
+exclude_patterns = ['_build', 'venv', '**/_*.rst']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -84,7 +84,7 @@\n \n # List of patterns, relative to source directory, that match files and\n # directories to ignore when looking for source files.\n-exclude_patterns = ['_build', 'venv']\n+exclude_patterns = ['_build', 'venv', '**/_*.rst']\n \n # The reST default role (used for this markup: `text`) to use for all documents.\n #default_role = None\n", "issue": "make discovery-url is doesn't work .\nshows no rule to create discovery-url .stop!\ni have search the old issues,it didn't solve the problem.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# deis documentation build configuration file, created by\n# sphinx-quickstart on Fri Jul 26 12:12:00 2013.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport sys\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n# Some hackery here to get deis.py to be importable as client.deis\nopen(os.path.join('..', '__init__.py'), 'a')\nsys.path.insert(0, os.path.abspath(os.path.join('..')))\nsys.path.insert(0, os.path.abspath(os.path.join('..', 'controller')))\n# create local_settings.py for SECRET_KEY if necessary\nlocal_settings_path = os.path.abspath(\n os.path.join('..', 'controller', 'deis', 'local_settings.py'))\nif not os.path.exists(local_settings_path):\n with open(local_settings_path, 'w') as local_settings:\n local_settings.write(\"SECRET_KEY = 'DummySecretKey'\\n\")\n# set up Django\nos.environ['DJANGO_SETTINGS_MODULE'] = 'deis.settings'\nfrom django.conf import settings # noqa\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary',\n 'sphinx.ext.viewcode']\n\n# default flags for auto-generated python code documetation\nautodoc_default_flags = ['members', 'undoc-members']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'toctree'\n\n# General information about the project.\nproject = u'deis'\ncopyright = u'2013, OpDemand LLC'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\nfrom deis import __version__\n\n# The short X.Y version.\nversion = __version__.rsplit('.', 1)[0]\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build', 'venv']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'deis'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['theme']\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['../controller/web/static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\nhtml_use_smartypants = True\n\nhtml_add_permalinks = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. 
Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'deisdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'deis.tex', u'deis Documentation',\n u'Author', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'deis', u'deis Documentation',\n [u'Author'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'deis', u'deis Documentation',\n u'Author', 'deis', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# -- Options for Epub output ---------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = u'deis'\nepub_author = u'OpDemand LLC'\nepub_publisher = u'OpDemand LLC'\nepub_copyright = u'2013, OpDemand LLC'\n\n# The language of the text. It defaults to the language option\n# or en if the language is not set.\n#epub_language = ''\n\n# The scheme of the identifier. Typical schemes are ISBN or URL.\n#epub_scheme = ''\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#epub_identifier = ''\n\n# A unique identification for the text.\n#epub_uid = ''\n\n# A tuple containing the cover image and cover page html template filenames.\n#epub_cover = ()\n\n# A sequence of (type, uri, title) tuples for the guide element of content.opf.\n#epub_guide = ()\n\n# HTML files that should be inserted before the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_pre_files = []\n\n# HTML files shat should be inserted after the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_post_files = []\n\n# A list of files that should not be packed into the epub file.\n#epub_exclude_files = []\n\n# The depth of the table of contents in toc.ncx.\n#epub_tocdepth = 3\n\n# Allow duplicate toc entries.\n#epub_tocdup = True\n\n# Fix unsupported image types using the PIL.\n#epub_fix_images = False\n\n# Scale large images.\n#epub_max_image_width = 0\n\n# If 'no', URL addresses will not be shown.\n#epub_show_urls = 'inline'\n\n# If false, no index is generated.\n#epub_use_index = True\n", "path": "docs/conf.py"}]} | 4,079 | 119 |
gh_patches_debug_12459 | rasdani/github-patches | git_diff | azavea__raster-vision-497 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unit Tests: `task` Directory
</issue>
<code>
[start of rastervision/utils/files.py]
1 import os
2 import shutil
3 from threading import Timer
4 import logging
5
6 from google.protobuf import json_format
7
8 from rastervision.filesystem.filesystem import FileSystem
9 from rastervision.filesystem.filesystem import ProtobufParseException
10 from rastervision.filesystem.local_filesystem import make_dir
11
12 log = logging.getLogger(__name__)
13
14
15 def get_local_path(uri, download_dir, fs=None):
16 """Convert a URI into a corresponding local path.
17
18 If a uri is local, return it. If it's remote, we generate a path for it
19 within download_dir. For an S3 path of form s3://<bucket>/<key>, the path
20 is <download_dir>/s3/<bucket>/<key>.
21
22 Args:
23 uri: (string) URI of file
24 download_dir: (string) path to directory
25 fs: Optional FileSystem to use
26
27 Returns:
28 (string) a local path
29 """
30 if uri is None:
31 return None
32
33 if not fs:
34 fs = FileSystem.get_file_system(uri, 'r')
35 path = fs.local_path(uri, download_dir)
36
37 return path
38
39
40 def sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):
41 """Synchronize a local to a local or remote directory.
42
43 Transfers files from source to destination directories so that the
44 destination has all the source files. If delete is True, also delete
45 files in the destination to match those in the source directory.
46
47 Args:
48 src_dir_uri: (string) URI of local source directory
49 dest_dir_uri: (string) URI of destination directory
50 delete: (bool)
51 fs: Optional FileSystem to use for destination
52 """
53 if not fs:
54 fs = FileSystem.get_file_system(dest_dir_uri, 'w')
55 fs.sync_to_dir(src_dir_uri, dest_dir_uri, delete=delete)
56
57
58 def sync_from_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):
59 """Synchronize a local or remote directory to a local directory.
60
61 Transfers files from source to destination directories so that the
62 destination has all the source files. If delete is True, also delete
63 files in the destination to match those in the source directory.
64
65 Args:
66 src_dir_uri: (string) URI of source directory
67 dest_dir_uri: (string) URI of local destination directory
68 delete: (bool)
69 fs: Optional FileSystem to use
70 """
71 if not fs:
72 fs = FileSystem.get_file_system(src_dir_uri, 'r')
73 fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)
74
75
76 def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):
77 """Start syncing a directory on a schedule.
78
79 Calls sync_to_dir on a schedule.
80
81 Args:
82 src_dir_uri: (string) Path of the local source directory
83 dest_dir_uri: (string) URI of destination directory
84 sync_interval: (int) period in seconds for syncing
85 fs: Optional FileSystem to use
86 """
87
88 def _sync_dir():
89 log.info('Syncing {} to {}...'.format(src_dir_uri, dest_dir_uri))
90 sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=fs)
91
92 class SyncThread:
93 def __init__(self):
94 thread = Timer(sync_interval, _sync_dir)
95 thread.daemon = True
96 thread.start()
97 self.thread = thread
98
99 def __enter__(self):
100 return self.thread
101
102 def __exit__(self, type, value, traceback):
103 self.thread.cancel()
104
105 return SyncThread()
106
107
108 def download_if_needed(uri, download_dir, fs=None):
109 """Download a file into a directory if it's remote.
110
111 If uri is local, there is no need to download the file.
112
113 Args:
114 uri: (string) URI of file
115 download_dir: (string) local directory to download file into
116 fs: Optional FileSystem to use.
117
118 Returns:
119 (string) path to local file
120
121 Raises:
122 NotReadableError if URI cannot be read from
123 """
124 if uri is None:
125 return None
126
127 if not fs:
128 fs = FileSystem.get_file_system(uri, 'r')
129
130 path = get_local_path(uri, download_dir, fs=fs)
131 make_dir(path, use_dirname=True)
132
133 if path != uri:
134 log.info('Downloading {} to {}'.format(uri, path))
135
136 fs.copy_from(uri, path)
137
138 return path
139
140
141 def download_or_copy(uri, target_dir, fs=None):
142 """Downloads or copies a file to a directory
143
144 Args:
145 uri: (string) URI of file
146 target_dir: (string) local directory to copy file to
147 fs: Optional FileSystem to use
148 """
149 local_path = download_if_needed(uri, target_dir, fs=fs)
150 shutil.copy(local_path, target_dir)
151 return local_path
152
153
154 def file_exists(uri, fs=None):
155 if not fs:
156 fs = FileSystem.get_file_system(uri, 'r')
157 return fs.file_exists(uri)
158
159
160 def list_paths(uri, ext=None, fs=None):
161 if uri is None:
162 return None
163
164 if not fs:
165 fs = FileSystem.get_file_system(uri, 'r')
166
167 return fs.list_paths(uri, ext=ext)
168
169
170 def upload_or_copy(src_path, dst_uri, fs=None):
171 """Upload a file if the destination is remote.
172
173 If dst_uri is local, the file is copied.
174
175 Args:
176 src_path: (string) path to source file
177 dst_uri: (string) URI of destination for file
178 fs: Optional FileSystem to use
179 Raises:
180 NotWritableError if URI cannot be written to
181 """
182 if dst_uri is None:
183 return
184
185 if not (os.path.isfile(src_path) or os.path.isdir(src_path)):
186 raise Exception('{} does not exist.'.format(src_path))
187
188 if not src_path == dst_uri:
189 log.info('Uploading {} to {}'.format(src_path, dst_uri))
190
191 if not fs:
192 fs = FileSystem.get_file_system(dst_uri, 'w')
193 fs.copy_to(src_path, dst_uri)
194
195
196 def file_to_str(uri, fs=None):
197 """Download contents of text file into a string.
198
199 Args:
200 uri: (string) URI of file
201 fs: Optional FileSystem to use
202
203 Returns:
204 (string) with contents of text file
205
206 Raises:
207 NotReadableError if URI cannot be read from
208 """
209 if not fs:
210 fs = FileSystem.get_file_system(uri, 'r')
211 return fs.read_str(uri)
212
213
214 def str_to_file(content_str, uri, fs=None):
215 """Writes string to text file.
216
217 Args:
218 content_str: string to write
219 uri: (string) URI of file to write
220 fs: Optional FileSystem to use
221
222 Raise:
223 NotWritableError if file_uri cannot be written
224 """
225 if not fs:
226 fs = FileSystem.get_file_system(uri, 'r')
227 return fs.write_str(uri, content_str)
228
229
230 def load_json_config(uri, message, fs=None):
231 """Load a JSON-formatted protobuf config file.
232
233 Args:
234 uri: (string) URI of config file
235 message: (google.protobuf.message.Message) empty protobuf message of
236 to load the config into. The type needs to match the content of
237 uri.
238 fs: Optional FileSystem to use.
239
240 Returns:
241 the same message passed as input with fields filled in from uri
242
243 Raises:
244 ProtobufParseException if uri cannot be parsed
245 """
246 try:
247 return json_format.Parse(file_to_str(uri, fs=fs), message)
248 except json_format.ParseError as e:
249 error_msg = ('Problem parsing protobuf file {}. '.format(uri) +
250 'You might need to run scripts/compile')
251 raise ProtobufParseException(error_msg) from e
252
253
254 def save_json_config(message, uri, fs=None):
255 """Save a protobuf object to a JSON file.
256
257 Args:
258 message: (google.protobuf.message.Message) protobuf message
259 uri: (string) URI of JSON file to write message to
260 fs: Optional FileSystem to use
261
262 Raises:
263 NotWritableError if uri cannot be written
264 """
265 json_str = json_format.MessageToJson(message)
266 str_to_file(json_str, uri, fs=fs)
267
[end of rastervision/utils/files.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision/utils/files.py b/rastervision/utils/files.py
--- a/rastervision/utils/files.py
+++ b/rastervision/utils/files.py
@@ -73,7 +73,8 @@
fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)
-def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):
+def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600,
+ fs=None): # pragma: no cover
"""Start syncing a directory on a schedule.
Calls sync_to_dir on a schedule.
@@ -157,7 +158,7 @@
return fs.file_exists(uri)
-def list_paths(uri, ext=None, fs=None):
+def list_paths(uri, ext='', fs=None):
if uri is None:
return None
| {"golden_diff": "diff --git a/rastervision/utils/files.py b/rastervision/utils/files.py\n--- a/rastervision/utils/files.py\n+++ b/rastervision/utils/files.py\n@@ -73,7 +73,8 @@\n fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)\n \n \n-def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):\n+def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600,\n+ fs=None): # pragma: no cover\n \"\"\"Start syncing a directory on a schedule.\n \n Calls sync_to_dir on a schedule.\n@@ -157,7 +158,7 @@\n return fs.file_exists(uri)\n \n \n-def list_paths(uri, ext=None, fs=None):\n+def list_paths(uri, ext='', fs=None):\n if uri is None:\n return None\n", "issue": "Unit Tests: `task` Directory\n\n", "before_files": [{"content": "import os\nimport shutil\nfrom threading import Timer\nimport logging\n\nfrom google.protobuf import json_format\n\nfrom rastervision.filesystem.filesystem import FileSystem\nfrom rastervision.filesystem.filesystem import ProtobufParseException\nfrom rastervision.filesystem.local_filesystem import make_dir\n\nlog = logging.getLogger(__name__)\n\n\ndef get_local_path(uri, download_dir, fs=None):\n \"\"\"Convert a URI into a corresponding local path.\n\n If a uri is local, return it. If it's remote, we generate a path for it\n within download_dir. For an S3 path of form s3://<bucket>/<key>, the path\n is <download_dir>/s3/<bucket>/<key>.\n\n Args:\n uri: (string) URI of file\n download_dir: (string) path to directory\n fs: Optional FileSystem to use\n\n Returns:\n (string) a local path\n \"\"\"\n if uri is None:\n return None\n\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n path = fs.local_path(uri, download_dir)\n\n return path\n\n\ndef sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):\n \"\"\"Synchronize a local to a local or remote directory.\n\n Transfers files from source to destination directories so that the\n destination has all the source files. If delete is True, also delete\n files in the destination to match those in the source directory.\n\n Args:\n src_dir_uri: (string) URI of local source directory\n dest_dir_uri: (string) URI of destination directory\n delete: (bool)\n fs: Optional FileSystem to use for destination\n \"\"\"\n if not fs:\n fs = FileSystem.get_file_system(dest_dir_uri, 'w')\n fs.sync_to_dir(src_dir_uri, dest_dir_uri, delete=delete)\n\n\ndef sync_from_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):\n \"\"\"Synchronize a local or remote directory to a local directory.\n\n Transfers files from source to destination directories so that the\n destination has all the source files. 
If delete is True, also delete\n files in the destination to match those in the source directory.\n\n Args:\n src_dir_uri: (string) URI of source directory\n dest_dir_uri: (string) URI of local destination directory\n delete: (bool)\n fs: Optional FileSystem to use\n \"\"\"\n if not fs:\n fs = FileSystem.get_file_system(src_dir_uri, 'r')\n fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)\n\n\ndef start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):\n \"\"\"Start syncing a directory on a schedule.\n\n Calls sync_to_dir on a schedule.\n\n Args:\n src_dir_uri: (string) Path of the local source directory\n dest_dir_uri: (string) URI of destination directory\n sync_interval: (int) period in seconds for syncing\n fs: Optional FileSystem to use\n \"\"\"\n\n def _sync_dir():\n log.info('Syncing {} to {}...'.format(src_dir_uri, dest_dir_uri))\n sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=fs)\n\n class SyncThread:\n def __init__(self):\n thread = Timer(sync_interval, _sync_dir)\n thread.daemon = True\n thread.start()\n self.thread = thread\n\n def __enter__(self):\n return self.thread\n\n def __exit__(self, type, value, traceback):\n self.thread.cancel()\n\n return SyncThread()\n\n\ndef download_if_needed(uri, download_dir, fs=None):\n \"\"\"Download a file into a directory if it's remote.\n\n If uri is local, there is no need to download the file.\n\n Args:\n uri: (string) URI of file\n download_dir: (string) local directory to download file into\n fs: Optional FileSystem to use.\n\n Returns:\n (string) path to local file\n\n Raises:\n NotReadableError if URI cannot be read from\n \"\"\"\n if uri is None:\n return None\n\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n\n path = get_local_path(uri, download_dir, fs=fs)\n make_dir(path, use_dirname=True)\n\n if path != uri:\n log.info('Downloading {} to {}'.format(uri, path))\n\n fs.copy_from(uri, path)\n\n return path\n\n\ndef download_or_copy(uri, target_dir, fs=None):\n \"\"\"Downloads or copies a file to a directory\n\n Args:\n uri: (string) URI of file\n target_dir: (string) local directory to copy file to\n fs: Optional FileSystem to use\n \"\"\"\n local_path = download_if_needed(uri, target_dir, fs=fs)\n shutil.copy(local_path, target_dir)\n return local_path\n\n\ndef file_exists(uri, fs=None):\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n return fs.file_exists(uri)\n\n\ndef list_paths(uri, ext=None, fs=None):\n if uri is None:\n return None\n\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n\n return fs.list_paths(uri, ext=ext)\n\n\ndef upload_or_copy(src_path, dst_uri, fs=None):\n \"\"\"Upload a file if the destination is remote.\n\n If dst_uri is local, the file is copied.\n\n Args:\n src_path: (string) path to source file\n dst_uri: (string) URI of destination for file\n fs: Optional FileSystem to use\n Raises:\n NotWritableError if URI cannot be written to\n \"\"\"\n if dst_uri is None:\n return\n\n if not (os.path.isfile(src_path) or os.path.isdir(src_path)):\n raise Exception('{} does not exist.'.format(src_path))\n\n if not src_path == dst_uri:\n log.info('Uploading {} to {}'.format(src_path, dst_uri))\n\n if not fs:\n fs = FileSystem.get_file_system(dst_uri, 'w')\n fs.copy_to(src_path, dst_uri)\n\n\ndef file_to_str(uri, fs=None):\n \"\"\"Download contents of text file into a string.\n\n Args:\n uri: (string) URI of file\n fs: Optional FileSystem to use\n\n Returns:\n (string) with contents of text file\n\n Raises:\n NotReadableError if URI cannot be read from\n 
\"\"\"\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n return fs.read_str(uri)\n\n\ndef str_to_file(content_str, uri, fs=None):\n \"\"\"Writes string to text file.\n\n Args:\n content_str: string to write\n uri: (string) URI of file to write\n fs: Optional FileSystem to use\n\n Raise:\n NotWritableError if file_uri cannot be written\n \"\"\"\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n return fs.write_str(uri, content_str)\n\n\ndef load_json_config(uri, message, fs=None):\n \"\"\"Load a JSON-formatted protobuf config file.\n\n Args:\n uri: (string) URI of config file\n message: (google.protobuf.message.Message) empty protobuf message of\n to load the config into. The type needs to match the content of\n uri.\n fs: Optional FileSystem to use.\n\n Returns:\n the same message passed as input with fields filled in from uri\n\n Raises:\n ProtobufParseException if uri cannot be parsed\n \"\"\"\n try:\n return json_format.Parse(file_to_str(uri, fs=fs), message)\n except json_format.ParseError as e:\n error_msg = ('Problem parsing protobuf file {}. '.format(uri) +\n 'You might need to run scripts/compile')\n raise ProtobufParseException(error_msg) from e\n\n\ndef save_json_config(message, uri, fs=None):\n \"\"\"Save a protobuf object to a JSON file.\n\n Args:\n message: (google.protobuf.message.Message) protobuf message\n uri: (string) URI of JSON file to write message to\n fs: Optional FileSystem to use\n\n Raises:\n NotWritableError if uri cannot be written\n \"\"\"\n json_str = json_format.MessageToJson(message)\n str_to_file(json_str, uri, fs=fs)\n", "path": "rastervision/utils/files.py"}]} | 3,073 | 196 |
gh_patches_debug_22015 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-6195 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs - value_from examples in value filter
the value_from source docs aren't making it through to the documentation page on the value filter; those examples should ideally be there, as it's a common integration technique with external vocabularies.
</issue>
<code>
[start of docs/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Cloud Custodian documentation build configuration file, created by
4 # sphinx-quickstart on Mon Dec 21 08:34:24 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17 import shlex
18 import sphinx_rtd_theme
19
20 # If extensions (or modules to document with autodoc) are in another directory,
21 # add these directories to sys.path here. If the directory is relative to the
22 # documentation root, use os.path.abspath to make it absolute, like shown here.
23 sys.path.insert(0, os.path.abspath('../..'))
24
25 # -- General configuration ------------------------------------------------
26
27 # If your documentation needs a minimal Sphinx version, state it here.
28 #needs_sphinx = '1.0'
29
30 # Add any Sphinx extension module names here, as strings. They can be
31 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
32 # ones.
33 extensions = [
34 'sphinx.ext.autodoc',
35 'sphinx.ext.viewcode',
36 'c7n_sphinxext.docgen',
37 'recommonmark',
38 'sphinx_markdown_tables'
39 ]
40
41 # Add any paths that contain templates here, relative to this directory.
42 #templates_path = ['_templates']
43
44 # The suffix(es) of source filenames.
45 # You can specify multiple suffix as a list of string:
46 source_suffix = ['.rst', '.md']
47
48
49 # The encoding of source files.
50 #source_encoding = 'utf-8-sig'
51
52 # The master toctree document.
53 master_doc = 'index'
54
55 # General information about the project.
56 project = u'Cloud Custodian'
57 author = u'Kapil Thangavelu'
58
59 # The version info for the project you're documenting, acts as replacement for
60 # |version| and |release|, also used in various other places throughout the
61 # built documents.
62 #
63 # The short X.Y version.
64 # version = u'1.0'
65 # The full version, including alpha/beta/rc tags.
66 # release = u'1.0'
67
68 # The language for content autogenerated by Sphinx. Refer to documentation
69 # for a list of supported languages.
70 #
71 # This is also used if you do content translation via gettext catalogs.
72 # Usually you set "language" from the command line for these cases.
73 language = None
74
75 # There are two options for replacing |today|: either, you set today to some
76 # non-false value, then it is used:
77 #today = ''
78 # Else, today_fmt is used as the format for a strftime call.
79 #today_fmt = '%B %d, %Y'
80
81 # List of patterns, relative to source directory, that match files and
82 # directories to ignore when looking for source files.
83 exclude_patterns = []
84
85 # The reST default role (used for this markup: `text`) to use for all
86 # documents.
87 #default_role = None
88
89 # If true, '()' will be appended to :func: etc. cross-reference text.
90 #add_function_parentheses = True
91
92 # If true, the current module name will be prepended to all description
93 # unit titles (such as .. function::).
94 #add_module_names = True
95
96 # If true, sectionauthor and moduleauthor directives will be shown in the
97 # output. They are ignored by default.
98 #show_authors = False
99
100 # The name of the Pygments (syntax highlighting) style to use.
101 pygments_style = 'sphinx'
102
103 # A list of ignored prefixes for module index sorting.
104 #modindex_common_prefix = []
105
106 # If true, keep warnings as "system message" paragraphs in the built documents.
107 #keep_warnings = False
108
109 # If true, `todo` and `todoList` produce output, else they produce nothing.
110 todo_include_todos = False
111
112
113 # -- Options for HTML output ----------------------------------------------
114
115 # The theme to use for HTML and HTML Help pages. See the documentation for
116 # a list of builtin themes.
117 html_theme = 'sphinx_rtd_theme'
118
119 # Theme options are theme-specific and customize the look and feel of a theme
120 # further. For a list of options available for each theme, see the
121 # documentation.
122 html_theme_options = {
123 'prev_next_buttons_location': 'both',
124 'style_external_links': True,
125 'analytics_id': "UA-162730326-1",
126 # Toc options
127 'collapse_navigation': False,
128 'sticky_navigation': True,
129 'includehidden': True,
130 'titles_only': False
131 }
132
133 # Add any paths that contain custom themes here, relative to this directory.
134 #html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
135 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
136
137 # The name for this set of Sphinx documents. If None, it defaults to
138 # "<project> v<release> documentation".
139 #html_title = None
140
141 # A shorter title for the navigation bar. Default is the same as html_title.
142 #html_short_title = None
143
144 # The name of an image file (relative to this directory) to place at the top
145 # of the sidebar.
146 #html_logo = 'c1_labs.png'
147
148 # The name of an image file (within the static path) to use as favicon of the
149 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
150 # pixels large.
151 html_favicon = 'icon.ico'
152
153 # Add any paths that contain custom static files (such as style sheets) here,
154 # relative to this directory. They are copied after the builtin static files,
155 # so a file named "default.css" will overwrite the builtin "default.css".
156 html_static_path = ['_static']
157
158 # Add any extra paths that contain custom files (such as robots.txt or
159 # .htaccess) here, relative to this directory. These files are copied
160 # directly to the root of the documentation.
161 #html_extra_path = []
162
163 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
164 # using the given strftime format.
165 #html_last_updated_fmt = '%b %d, %Y'
166
167 # If true, SmartyPants will be used to convert quotes and dashes to
168 # typographically correct entities.
169 #html_use_smartypants = True
170
171 # Custom sidebar templates, maps document names to template names.
172 #html_sidebars = {}
173
174 # Additional templates that should be rendered to pages, maps page names to
175 # template names.
176 #html_additional_pages = {}
177
178 # If false, no module index is generated.
179 #html_domain_indices = True
180
181 # If false, no index is generated.
182 #html_use_index = True
183
184 # If true, the index is split into individual pages for each letter.
185 #html_split_index = False
186
187 # If true, links to the reST sources are added to the pages.
188 html_show_sourcelink = False
189
190 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
191 #html_show_sphinx = True
192
193 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
194 #html_show_copyright = True
195
196 # If true, an OpenSearch description file will be output, and all pages will
197 # contain a <link> tag referring to it. The value of this option must be the
198 # base URL from which the finished HTML is served.
199 #html_use_opensearch = ''
200
201 # This is the file name suffix for HTML files (e.g. ".xhtml").
202 #html_file_suffix = None
203
204 # Language to be used for generating the HTML full-text search index.
205 # Sphinx supports the following languages:
206 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
207 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
208 #html_search_language = 'en'
209
210 # A dictionary with options for the search language support, empty by default.
211 # Now only 'ja' uses this config value
212 #html_search_options = {'type': 'default'}
213
214 # The name of a javascript file (relative to the configuration directory) that
215 # implements a search results scorer. If empty, the default will be used.
216 #html_search_scorer = 'scorer.js'
217
218 # Output file base name for HTML help builder.
219 htmlhelp_basename = 'CloudCustodiandoc'
220
221 # -- Options for LaTeX output ---------------------------------------------
222
223 latex_elements = {
224 # The paper size ('letterpaper' or 'a4paper').
225 #'papersize': 'letterpaper',
226
227 # The font size ('10pt', '11pt' or '12pt').
228 #'pointsize': '10pt',
229
230 # Additional stuff for the LaTeX preamble.
231 #'preamble': '',
232
233 # Latex figure (float) alignment
234 #'figure_align': 'htbp',
235 }
236
237 # Grouping the document tree into LaTeX files. List of tuples
238 # (source start file, target name, title,
239 # author, documentclass [howto, manual, or own class]).
240 latex_documents = [
241 (master_doc, 'CloudCustodian.tex', u'Cloud Custodian Documentation',
242 u'Kapil Thangavelu', 'manual'),
243 ]
244
245 # The name of an image file (relative to this directory) to place at the top of
246 # the title page.
247 #latex_logo = None
248
249 # For "manual" documents, if this is true, then toplevel headings are parts,
250 # not chapters.
251 #latex_use_parts = False
252
253 # If true, show page references after internal links.
254 #latex_show_pagerefs = False
255
256 # If true, show URL addresses after external links.
257 #latex_show_urls = False
258
259 # Documents to append as an appendix to all manuals.
260 #latex_appendices = []
261
262 # If false, no module index is generated.
263 #latex_domain_indices = True
264
265
266 # -- Options for manual page output ---------------------------------------
267
268 # One entry per manual page. List of tuples
269 # (source start file, name, description, authors, manual section).
270 man_pages = [
271 (master_doc, 'cloudCustodian', u'Cloud Custodian Documentation',
272 [author], 1)
273 ]
274
275 # If true, show URL addresses after external links.
276 #man_show_urls = False
277
278
279 # -- Options for Texinfo output -------------------------------------------
280
281 # Grouping the document tree into Texinfo files. List of tuples
282 # (source start file, target name, title, author,
283 # dir menu entry, description, category)
284 texinfo_documents = [
285 (master_doc, 'CloudCustodian', u'Cloud Custodian Documentation',
286 author, 'CloudCustodian', 'One line description of project.',
287 'Miscellaneous'),
288 ]
289
290 # Documents to append as an appendix to all manuals.
291 #texinfo_appendices = []
292
293 # If false, no module index is generated.
294 #texinfo_domain_indices = True
295
296 # How to display URL addresses: 'footnote', 'no', or 'inline'.
297 #texinfo_show_urls = 'footnote'
298
299 # If true, do not generate a @detailmenu in the "Top" node's menu.
300 #texinfo_no_detailmenu = False
301
302 # This adds in our expand/collapse JS/CSS
303 def setup(app):
304 app.add_javascript('js/expand.js')
305 app.add_stylesheet('css/expand.css')
306
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -16,6 +16,7 @@
import os
import shlex
import sphinx_rtd_theme
+from sphinx.ext import autodoc
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
@@ -38,6 +39,17 @@
'sphinx_markdown_tables'
]
+# Extract only a classes docstrings
+class DocsonlyMethodDocumenter(autodoc.MethodDocumenter):
+ objtype = "doconly"
+ content_indent = ""
+
+ def format_signature(self, **kwargs):
+ return ""
+
+ def add_directive_header(self, sig: str):
+ return None
+
# Add any paths that contain templates here, relative to this directory.
#templates_path = ['_templates']
@@ -303,3 +315,4 @@
def setup(app):
app.add_javascript('js/expand.js')
app.add_stylesheet('css/expand.css')
+ app.add_autodocumenter(DocsonlyMethodDocumenter)
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -16,6 +16,7 @@\n import os\n import shlex\n import sphinx_rtd_theme\n+from sphinx.ext import autodoc\n \n # If extensions (or modules to document with autodoc) are in another directory,\n # add these directories to sys.path here. If the directory is relative to the\n@@ -38,6 +39,17 @@\n 'sphinx_markdown_tables'\n ]\n \n+# Extract only a classes docstrings\n+class DocsonlyMethodDocumenter(autodoc.MethodDocumenter):\n+ objtype = \"doconly\"\n+ content_indent = \"\"\n+\n+ def format_signature(self, **kwargs):\n+ return \"\"\n+\n+ def add_directive_header(self, sig: str):\n+ return None\n+\n # Add any paths that contain templates here, relative to this directory.\n #templates_path = ['_templates']\n \n@@ -303,3 +315,4 @@\n def setup(app):\n app.add_javascript('js/expand.js')\n app.add_stylesheet('css/expand.css')\n+ app.add_autodocumenter(DocsonlyMethodDocumenter)\n", "issue": "docs - value_from examples in value filter\nthe value_from source docs aren't making it through to the documentation page on value filter, those examples should ideally be there as its a common integration technique with external vocabularies.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Cloud Custodian documentation build configuration file, created by\n# sphinx-quickstart on Mon Dec 21 08:34:24 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\nimport sphinx_rtd_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('../..'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.viewcode',\n 'c7n_sphinxext.docgen',\n 'recommonmark',\n 'sphinx_markdown_tables'\n]\n\n# Add any paths that contain templates here, relative to this directory.\n#templates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = ['.rst', '.md']\n\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Cloud Custodian'\nauthor = u'Kapil Thangavelu'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\n# version = u'1.0'\n# The full version, including alpha/beta/rc tags.\n# release = u'1.0'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = []\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'prev_next_buttons_location': 'both',\n 'style_external_links': True,\n 'analytics_id': \"UA-162730326-1\",\n # Toc options\n 'collapse_navigation': False,\n 'sticky_navigation': True,\n 'includehidden': True,\n 'titles_only': False\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = 'c1_labs.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = 'icon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CloudCustodiandoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'CloudCustodian.tex', u'Cloud Custodian Documentation',\n u'Kapil Thangavelu', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'cloudCustodian', u'Cloud Custodian Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'CloudCustodian', u'Cloud Custodian Documentation',\n author, 'CloudCustodian', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n# This adds in our expand/collapse JS/CSS\ndef setup(app):\n app.add_javascript('js/expand.js')\n app.add_stylesheet('css/expand.css')\n", "path": "docs/source/conf.py"}]} | 3,901 | 275 |
gh_patches_debug_15187 | rasdani/github-patches | git_diff | vispy__vispy-1362 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
examples/tutorial/app/simple_wx.py issue
```
Traceback (most recent call last):
File "simple_wx.py", line 58, in <module>
frame = TestFrame()
File "simple_wx.py", line 49, in __init__
self.canvas = Canvas(app="wx", parent=self, show=True)
File "simple_wx.py", line 20, in __init__
app.Canvas.__init__(self, *args, **kwargs)
File "/home/eldar/src/vispy/vispy/app/canvas.py", line 208, in __init__
self.set_current()
File "/home/eldar/src/vispy/vispy/app/canvas.py", line 406, in set_current
self._backend._vispy_set_current()
File "/home/eldar/src/vispy/vispy/app/backends/_wx.py", line 302, in _vispy_set_current
self.SetCurrent(self._gl_context)
wx._core.wxAssertionError: C++ assertion "xid" failed at /home/eldar/src/wx/wxPython_Phoenix/wxPython-4.0.0b2/ext/wxWidgets/src/unix/glx11.cpp(194) in SetCurrent(): window must be shown
```
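A workaround that seems to avoid the assertion on this backend is to defer showing the canvas until wx has actually shown the frame (sketch below; `on_show` is just a hypothetical handler name):
```
class TestFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "Vispy Test",
                          wx.DefaultPosition, size=(500, 500))
        self.Bind(wx.EVT_SHOW, self.on_show)
        self.canvas = Canvas(app="wx", parent=self)  # note: no show=True here

    def on_show(self, event):
        # the native window now exists, so SetCurrent() can succeed
        self.canvas.show()
        event.Skip()
```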
</issue>
<code>
[start of examples/tutorial/app/simple_wx.py]
1 # -*- coding: utf-8 -*-
2 # vispy: testskip
3 # Copyright (c) 2015, Vispy Development Team.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 """
6 This is a very minimal example that opens a window and makes the background
7 color to change from black to white to black ...
8
9 The wx backend is used to embed the canvas in a simple wx Frame with
10 a menubar.
11 """
12
13 import wx
14 import math
15 from vispy import app, gloo
16
17
18 class Canvas(app.Canvas):
19 def __init__(self, *args, **kwargs):
20 app.Canvas.__init__(self, *args, **kwargs)
21 self._timer = app.Timer('auto', connect=self.on_timer, start=True)
22 self.tick = 0
23
24 def on_draw(self, event):
25 gloo.clear(color=True)
26
27 def on_timer(self, event):
28 self.tick += 1 / 60.0
29 c = abs(math.sin(self.tick))
30 gloo.set_clear_color((c, c, c, 1))
31 self.update()
32
33 def stop_timer(self):
34 self._timer.stop()
35
36
37 class TestFrame(wx.Frame):
38 def __init__(self):
39 wx.Frame.__init__(self, None, -1, "Vispy Test",
40 wx.DefaultPosition, size=(500, 500))
41
42 MenuBar = wx.MenuBar()
43 file_menu = wx.Menu()
44 file_menu.Append(wx.ID_EXIT, "&Quit")
45 self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)
46 MenuBar.Append(file_menu, "&File")
47 self.SetMenuBar(MenuBar)
48
49 self.canvas = Canvas(app="wx", parent=self, show=True)
50
51 def on_quit(self, event):
52 self.canvas.stop_timer()
53 self.Close(True)
54
55
56 if __name__ == '__main__':
57 myapp = wx.App(0)
58 frame = TestFrame()
59 frame.Show(True)
60 myapp.MainLoop()
61
[end of examples/tutorial/app/simple_wx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/tutorial/app/simple_wx.py b/examples/tutorial/app/simple_wx.py
--- a/examples/tutorial/app/simple_wx.py
+++ b/examples/tutorial/app/simple_wx.py
@@ -43,15 +43,20 @@
file_menu = wx.Menu()
file_menu.Append(wx.ID_EXIT, "&Quit")
self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)
+ self.Bind(wx.EVT_SHOW, self.on_show)
MenuBar.Append(file_menu, "&File")
self.SetMenuBar(MenuBar)
- self.canvas = Canvas(app="wx", parent=self, show=True)
+ self.canvas = Canvas(app="wx", parent=self)
def on_quit(self, event):
self.canvas.stop_timer()
self.Close(True)
+ def on_show(self, event):
+ self.canvas.show()
+ event.Skip()
+
if __name__ == '__main__':
myapp = wx.App(0)
| {"golden_diff": "diff --git a/examples/tutorial/app/simple_wx.py b/examples/tutorial/app/simple_wx.py\n--- a/examples/tutorial/app/simple_wx.py\n+++ b/examples/tutorial/app/simple_wx.py\n@@ -43,15 +43,20 @@\n file_menu = wx.Menu()\n file_menu.Append(wx.ID_EXIT, \"&Quit\")\n self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)\n+ self.Bind(wx.EVT_SHOW, self.on_show)\n MenuBar.Append(file_menu, \"&File\")\n self.SetMenuBar(MenuBar)\n \n- self.canvas = Canvas(app=\"wx\", parent=self, show=True)\n+ self.canvas = Canvas(app=\"wx\", parent=self)\n \n def on_quit(self, event):\n self.canvas.stop_timer()\n self.Close(True)\n \n+ def on_show(self, event):\n+ self.canvas.show()\n+ event.Skip()\n+\n \n if __name__ == '__main__':\n myapp = wx.App(0)\n", "issue": "examples/tutorial/app/simple_wx.py issue\n```\r\nTraceback (most recent call last):\r\n File \"simple_wx.py\", line 58, in <module>\r\n frame = TestFrame()\r\n File \"simple_wx.py\", line 49, in __init__\r\n self.canvas = Canvas(app=\"wx\", parent=self, show=True)\r\n File \"simple_wx.py\", line 20, in __init__\r\n app.Canvas.__init__(self, *args, **kwargs)\r\n File \"/home/eldar/src/vispy/vispy/app/canvas.py\", line 208, in __init__\r\n self.set_current()\r\n File \"/home/eldar/src/vispy/vispy/app/canvas.py\", line 406, in set_current\r\n self._backend._vispy_set_current()\r\n File \"/home/eldar/src/vispy/vispy/app/backends/_wx.py\", line 302, in _vispy_set_current\r\n self.SetCurrent(self._gl_context)\r\nwx._core.wxAssertionError: C++ assertion \"xid\" failed at /home/eldar/src/wx/wxPython_Phoenix/wxPython-4.0.0b2/ext/wxWidgets/src/unix/glx11.cpp(194) in SetCurrent(): window must be shown\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# vispy: testskip\n# Copyright (c) 2015, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\"\"\"\nThis is a very minimal example that opens a window and makes the background\ncolor to change from black to white to black ...\n\nThe wx backend is used to embed the canvas in a simple wx Frame with\na menubar.\n\"\"\"\n\nimport wx\nimport math\nfrom vispy import app, gloo\n\n\nclass Canvas(app.Canvas):\n def __init__(self, *args, **kwargs):\n app.Canvas.__init__(self, *args, **kwargs)\n self._timer = app.Timer('auto', connect=self.on_timer, start=True)\n self.tick = 0\n\n def on_draw(self, event):\n gloo.clear(color=True)\n\n def on_timer(self, event):\n self.tick += 1 / 60.0\n c = abs(math.sin(self.tick))\n gloo.set_clear_color((c, c, c, 1))\n self.update()\n\n def stop_timer(self):\n self._timer.stop()\n\n\nclass TestFrame(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None, -1, \"Vispy Test\",\n wx.DefaultPosition, size=(500, 500))\n\n MenuBar = wx.MenuBar()\n file_menu = wx.Menu()\n file_menu.Append(wx.ID_EXIT, \"&Quit\")\n self.Bind(wx.EVT_MENU, self.on_quit, id=wx.ID_EXIT)\n MenuBar.Append(file_menu, \"&File\")\n self.SetMenuBar(MenuBar)\n\n self.canvas = Canvas(app=\"wx\", parent=self, show=True)\n\n def on_quit(self, event):\n self.canvas.stop_timer()\n self.Close(True)\n\n\nif __name__ == '__main__':\n myapp = wx.App(0)\n frame = TestFrame()\n frame.Show(True)\n myapp.MainLoop()\n", "path": "examples/tutorial/app/simple_wx.py"}]} | 1,375 | 207 |
gh_patches_debug_975 | rasdani/github-patches | git_diff | PennyLaneAI__pennylane-2947 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `qml.equal` ignores in-place inversion
Currently, we have:
```
>>> qml.equal(qml.RX(1.0, wires=0), qml.RX(1.0, wires=0).inv())
True
```
If two operations are inverses of each other, they should not be equal.
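A sketch of the kind of check that is missing (hypothetical helper; it assumes in-place inversion is tracked on the operation's `inverse` attribute):
```
def same_inversion(op1, op2):
    # both operations must agree on whether they were inverted in place
    return getattr(op1, "inverse", False) == getattr(op2, "inverse", False)
```
`qml.equal` should only return `True` when this also holds, in addition to the existing parameter/wire checks.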
</issue>
<code>
[start of pennylane/ops/functions/equal.py]
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This module contains the qml.equal function.
16 """
17 # pylint: disable=too-many-arguments,too-many-return-statements
18 import pennylane as qml
19 from pennylane.operation import Operator
20
21
22 def equal(
23 op1: Operator,
24 op2: Operator,
25 check_interface=True,
26 check_trainability=True,
27 rtol=1e-5,
28 atol=1e-9,
29 ):
30 r"""Function for determining operator equality.
31
32 Args:
33 op1 (.Operator): First operator to compare
34 op2 (.Operator): Second operator to compare
35 check_interface (bool, optional): Whether to compare interfaces. Default: `True`
36 check_trainability (bool, optional): Whether to compare trainability status. Default: `True`
37 rtol (float, optional): Relative tolerance for parameters
38 atol (float, optional): Absolute tolerance for parameters
39
40 Returns:
41 bool: `True` if the operators are equal, else `False`
42
43 **Example**
44
45 Given two operators, ``qml.equal`` determines their equality:
46
47 >>> op1 = qml.RX(np.array(.12), wires=0)
48 >>> op2 = qml.RY(np.array(1.23), wires=0)
49 >>> qml.equal(op1, op1), qml.equal(op1, op2)
50 True False
51
52 .. details::
53 :title: Usage Details
54
55 You can use the optional arguments to get more specific results.
56
57 Consider the following comparisons:
58
59 >>> op1 = qml.RX(torch.tensor(1.2), wires=0)
60 >>> op2 = qml.RX(jax.numpy.array(1.2), wires=0)
61 >>> qml.equal(op1, op2)
62 False
63
64 >>> qml.equal(op1, op2, check_interface=False, check_trainability=False)
65 True
66
67 >>> op3 = qml.RX(np.array(1.2, requires_grad=True), wires=0)
68 >>> op4 = qml.RX(np.array(1.2, requires_grad=False), wires=0)
69 >>> qml.equal(op3, op4)
70 False
71
72 >>> qml.equal(op3, op4, check_trainability=False)
73 True
74 """
75 if op1.__class__ is not op2.__class__ or op1.arithmetic_depth != op2.arithmetic_depth:
76 return False
77 if op1.arithmetic_depth > 0:
78 raise NotImplementedError(
79 "Comparison of operators with an arithmetic depth larger than 0 is not yet implemented."
80 )
81 if not all(
82 qml.math.allclose(d1, d2, rtol=rtol, atol=atol) for d1, d2 in zip(op1.data, op2.data)
83 ):
84 return False
85 if op1.wires != op2.wires:
86 return False
87 for kwarg in op1.hyperparameters:
88 if op1.hyperparameters[kwarg] != op2.hyperparameters[kwarg]:
89 return False
90
91 if check_trainability:
92 for params_1, params_2 in zip(op1.data, op2.data):
93 if qml.math.requires_grad(params_1) != qml.math.requires_grad(params_2):
94 return False
95
96 if check_interface:
97 for params_1, params_2 in zip(op1.data, op2.data):
98 if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):
99 return False
100
101 return True
102
[end of pennylane/ops/functions/equal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pennylane/ops/functions/equal.py b/pennylane/ops/functions/equal.py
--- a/pennylane/ops/functions/equal.py
+++ b/pennylane/ops/functions/equal.py
@@ -98,4 +98,4 @@
if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):
return False
- return True
+ return getattr(op1, "inverse", False) == getattr(op2, "inverse", False)
| {"golden_diff": "diff --git a/pennylane/ops/functions/equal.py b/pennylane/ops/functions/equal.py\n--- a/pennylane/ops/functions/equal.py\n+++ b/pennylane/ops/functions/equal.py\n@@ -98,4 +98,4 @@\n if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):\n return False\n \n- return True\n+ return getattr(op1, \"inverse\", False) == getattr(op2, \"inverse\", False)\n", "issue": "[BUG] `qml.equal` ignore in-place inversion\nCurrently, we have:\r\n```\r\n>>> qml.equal(qml.RX(1.0, wires=0), qml.RX(1.0, wires=0).inv())\r\nTrue\r\n```\r\n\r\nIf two operations are inverses of each other, they should not be equal.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis module contains the qml.equal function.\n\"\"\"\n# pylint: disable=too-many-arguments,too-many-return-statements\nimport pennylane as qml\nfrom pennylane.operation import Operator\n\n\ndef equal(\n op1: Operator,\n op2: Operator,\n check_interface=True,\n check_trainability=True,\n rtol=1e-5,\n atol=1e-9,\n):\n r\"\"\"Function for determining operator equality.\n\n Args:\n op1 (.Operator): First operator to compare\n op2 (.Operator): Second operator to compare\n check_interface (bool, optional): Whether to compare interfaces. Default: `True`\n check_trainability (bool, optional): Whether to compare trainability status. Default: `True`\n rtol (float, optional): Relative tolerance for parameters\n atol (float, optional): Absolute tolerance for parameters\n\n Returns:\n bool: `True` if the operators are equal, else `False`\n\n **Example**\n\n Given two operators, ``qml.equal`` determines their equality:\n\n >>> op1 = qml.RX(np.array(.12), wires=0)\n >>> op2 = qml.RY(np.array(1.23), wires=0)\n >>> qml.equal(op1, op1), qml.equal(op1, op2)\n True False\n\n .. 
details::\n :title: Usage Details\n\n You can use the optional arguments to get more specific results.\n\n Consider the following comparisons:\n\n >>> op1 = qml.RX(torch.tensor(1.2), wires=0)\n >>> op2 = qml.RX(jax.numpy.array(1.2), wires=0)\n >>> qml.equal(op1, op2)\n False\n\n >>> qml.equal(op1, op2, check_interface=False, check_trainability=False)\n True\n\n >>> op3 = qml.RX(np.array(1.2, requires_grad=True), wires=0)\n >>> op4 = qml.RX(np.array(1.2, requires_grad=False), wires=0)\n >>> qml.equal(op3, op4)\n False\n\n >>> qml.equal(op3, op4, check_trainability=False)\n True\n \"\"\"\n if op1.__class__ is not op2.__class__ or op1.arithmetic_depth != op2.arithmetic_depth:\n return False\n if op1.arithmetic_depth > 0:\n raise NotImplementedError(\n \"Comparison of operators with an arithmetic depth larger than 0 is not yet implemented.\"\n )\n if not all(\n qml.math.allclose(d1, d2, rtol=rtol, atol=atol) for d1, d2 in zip(op1.data, op2.data)\n ):\n return False\n if op1.wires != op2.wires:\n return False\n for kwarg in op1.hyperparameters:\n if op1.hyperparameters[kwarg] != op2.hyperparameters[kwarg]:\n return False\n\n if check_trainability:\n for params_1, params_2 in zip(op1.data, op2.data):\n if qml.math.requires_grad(params_1) != qml.math.requires_grad(params_2):\n return False\n\n if check_interface:\n for params_1, params_2 in zip(op1.data, op2.data):\n if qml.math.get_interface(params_1) != qml.math.get_interface(params_2):\n return False\n\n return True\n", "path": "pennylane/ops/functions/equal.py"}]} | 1,717 | 116 |
gh_patches_debug_2308 | rasdani/github-patches | git_diff | nonebot__nonebot2-1166 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: MessageTemplate cannot format values such as numbers or other non-string types
**Problem description:**
Upstream issue: https://github.com/mnixry/nonebot-plugin-gocqhttp/issues/281
**How to reproduce?**
Format a number in a message template.
**Expected result**
No error is raised.
**Root cause**
The suspected cause is as follows.
In `Message.__iadd__`, adding a value of an unsupported type raises `ValueError`:
https://github.com/nonebot/nonebot2/blob/549a37b172ceeedac183fb48b58fe13a16eb56cd/nonebot/internal/adapter/message.py#L181-L190
In `MessageTemplate._add`, only the `TypeError` raised by an ordinary failed addition is handled:
https://github.com/nonebot/nonebot2/blob/549a37b172ceeedac183fb48b58fe13a16eb56cd/nonebot/internal/adapter/template.py#L180-L184
This mismatch is what causes the error.
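A simplified sketch of the shape of the logic (not the actual source):
```
def _add(msg, value):
    try:
        return msg + value      # Message.__iadd__ raises ValueError for e.g. an int
    except TypeError:           # but only TypeError triggers the str() fallback
        return msg + str(value)
```
So when a number is formatted, the `ValueError` escapes instead of falling back to `str()`.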
</issue>
<code>
[start of nonebot/internal/adapter/message.py]
1 import abc
2 from copy import deepcopy
3 from dataclasses import field, asdict, dataclass
4 from typing import (
5 Any,
6 Dict,
7 List,
8 Type,
9 Tuple,
10 Union,
11 Generic,
12 TypeVar,
13 Iterable,
14 Optional,
15 overload,
16 )
17
18 from pydantic import parse_obj_as
19
20 from .template import MessageTemplate
21
22 T = TypeVar("T")
23 TMS = TypeVar("TMS", bound="MessageSegment")
24 TM = TypeVar("TM", bound="Message")
25
26
27 @dataclass
28 class MessageSegment(abc.ABC, Generic[TM]):
29 """消息段基类"""
30
31 type: str
32 """消息段类型"""
33 data: Dict[str, Any] = field(default_factory=dict)
34 """消息段数据"""
35
36 @classmethod
37 @abc.abstractmethod
38 def get_message_class(cls) -> Type[TM]:
39 """获取消息数组类型"""
40 raise NotImplementedError
41
42 @abc.abstractmethod
43 def __str__(self) -> str:
44 """该消息段所代表的 str,在命令匹配部分使用"""
45 raise NotImplementedError
46
47 def __len__(self) -> int:
48 return len(str(self))
49
50 def __ne__(self: T, other: T) -> bool:
51 return not self == other
52
53 def __add__(self: TMS, other: Union[str, TMS, Iterable[TMS]]) -> TM:
54 return self.get_message_class()(self) + other
55
56 def __radd__(self: TMS, other: Union[str, TMS, Iterable[TMS]]) -> TM:
57 return self.get_message_class()(other) + self
58
59 @classmethod
60 def __get_validators__(cls):
61 yield cls._validate
62
63 @classmethod
64 def _validate(cls, value):
65 if isinstance(value, cls):
66 return value
67 if not isinstance(value, dict):
68 raise ValueError(f"Expected dict for MessageSegment, got {type(value)}")
69 if "type" not in value:
70 raise ValueError(
71 f"Expected dict with 'type' for MessageSegment, got {value}"
72 )
73 return cls(type=value["type"], data=value.get("data", {}))
74
75 def get(self, key: str, default: Any = None):
76 return asdict(self).get(key, default)
77
78 def keys(self):
79 return asdict(self).keys()
80
81 def values(self):
82 return asdict(self).values()
83
84 def items(self):
85 return asdict(self).items()
86
87 def copy(self: T) -> T:
88 return deepcopy(self)
89
90 @abc.abstractmethod
91 def is_text(self) -> bool:
92 """当前消息段是否为纯文本"""
93 raise NotImplementedError
94
95
96 class Message(List[TMS], abc.ABC):
97 """消息数组
98
99 参数:
100 message: 消息内容
101 """
102
103 def __init__(
104 self,
105 message: Union[str, None, Iterable[TMS], TMS] = None,
106 ):
107 super().__init__()
108 if message is None:
109 return
110 elif isinstance(message, str):
111 self.extend(self._construct(message))
112 elif isinstance(message, MessageSegment):
113 self.append(message)
114 elif isinstance(message, Iterable):
115 self.extend(message)
116 else:
117 self.extend(self._construct(message)) # pragma: no cover
118
119 @classmethod
120 def template(cls: Type[TM], format_string: Union[str, TM]) -> MessageTemplate[TM]:
121 """创建消息模板。
122
123 用法和 `str.format` 大致相同, 但是可以输出消息对象, 并且支持以 `Message` 对象作为消息模板
124
125 并且提供了拓展的格式化控制符, 可以用适用于该消息类型的 `MessageSegment` 的工厂方法创建消息
126
127 参数:
128 format_string: 格式化模板
129
130 返回:
131 消息格式化器
132 """
133 return MessageTemplate(format_string, cls)
134
135 @classmethod
136 @abc.abstractmethod
137 def get_segment_class(cls) -> Type[TMS]:
138 """获取消息段类型"""
139 raise NotImplementedError
140
141 def __str__(self) -> str:
142 return "".join(str(seg) for seg in self)
143
144 @classmethod
145 def __get_validators__(cls):
146 yield cls._validate
147
148 @classmethod
149 def _validate(cls, value):
150 if isinstance(value, cls):
151 return value
152 elif isinstance(value, Message):
153 raise ValueError(f"Type {type(value)} can not be converted to {cls}")
154 elif isinstance(value, str):
155 pass
156 elif isinstance(value, dict):
157 value = parse_obj_as(cls.get_segment_class(), value)
158 elif isinstance(value, Iterable):
159 value = [parse_obj_as(cls.get_segment_class(), v) for v in value]
160 else:
161 raise ValueError(
162 f"Expected str, dict or iterable for Message, got {type(value)}"
163 )
164 return cls(value)
165
166 @staticmethod
167 @abc.abstractmethod
168 def _construct(msg: str) -> Iterable[TMS]:
169 """构造消息数组"""
170 raise NotImplementedError
171
172 def __add__(self: TM, other: Union[str, TMS, Iterable[TMS]]) -> TM:
173 result = self.copy()
174 result += other
175 return result
176
177 def __radd__(self: TM, other: Union[str, TMS, Iterable[TMS]]) -> TM:
178 result = self.__class__(other)
179 return result + self
180
181 def __iadd__(self: TM, other: Union[str, TMS, Iterable[TMS]]) -> TM:
182 if isinstance(other, str):
183 self.extend(self._construct(other))
184 elif isinstance(other, MessageSegment):
185 self.append(other)
186 elif isinstance(other, Iterable):
187 self.extend(other)
188 else:
189 raise ValueError(f"Unsupported type: {type(other)}") # pragma: no cover
190 return self
191
192 @overload
193 def __getitem__(self: TM, __args: str) -> TM:
194 """
195 参数:
196 __args: 消息段类型
197
198 返回:
199 所有类型为 `__args` 的消息段
200 """
201
202 @overload
203 def __getitem__(self, __args: Tuple[str, int]) -> TMS:
204 """
205 参数:
206 __args: 消息段类型和索引
207
208 返回:
209 类型为 `__args[0]` 的消息段第 `__args[1]` 个
210 """
211
212 @overload
213 def __getitem__(self: TM, __args: Tuple[str, slice]) -> TM:
214 """
215 参数:
216 __args: 消息段类型和切片
217
218 返回:
219 类型为 `__args[0]` 的消息段切片 `__args[1]`
220 """
221
222 @overload
223 def __getitem__(self, __args: int) -> TMS:
224 """
225 参数:
226 __args: 索引
227
228 返回:
229 第 `__args` 个消息段
230 """
231
232 @overload
233 def __getitem__(self: TM, __args: slice) -> TM:
234 """
235 参数:
236 __args: 切片
237
238 返回:
239 消息切片 `__args`
240 """
241
242 def __getitem__(
243 self: TM,
244 args: Union[
245 str,
246 Tuple[str, int],
247 Tuple[str, slice],
248 int,
249 slice,
250 ],
251 ) -> Union[TMS, TM]:
252 arg1, arg2 = args if isinstance(args, tuple) else (args, None)
253 if isinstance(arg1, int) and arg2 is None:
254 return super().__getitem__(arg1)
255 elif isinstance(arg1, slice) and arg2 is None:
256 return self.__class__(super().__getitem__(arg1))
257 elif isinstance(arg1, str) and arg2 is None:
258 return self.__class__(seg for seg in self if seg.type == arg1)
259 elif isinstance(arg1, str) and isinstance(arg2, int):
260 return [seg for seg in self if seg.type == arg1][arg2]
261 elif isinstance(arg1, str) and isinstance(arg2, slice):
262 return self.__class__([seg for seg in self if seg.type == arg1][arg2])
263 else:
264 raise ValueError("Incorrect arguments to slice") # pragma: no cover
265
266 def index(self, value: Union[TMS, str], *args) -> int:
267 if isinstance(value, str):
268 first_segment = next((seg for seg in self if seg.type == value), None)
269 if first_segment is None:
270 raise ValueError(f"Segment with type {value} is not in message")
271 return super().index(first_segment, *args)
272 return super().index(value, *args)
273
274 def get(self: TM, type_: str, count: Optional[int] = None) -> TM:
275 if count is None:
276 return self[type_]
277
278 iterator, filtered = (
279 seg for seg in self if seg.type == type_
280 ), self.__class__()
281 for _ in range(count):
282 seg = next(iterator, None)
283 if seg is None:
284 break
285 filtered.append(seg)
286 return filtered
287
288 def count(self, value: Union[TMS, str]) -> int:
289 return len(self[value]) if isinstance(value, str) else super().count(value)
290
291 def append(self: TM, obj: Union[str, TMS]) -> TM:
292 """添加一个消息段到消息数组末尾。
293
294 参数:
295 obj: 要添加的消息段
296 """
297 if isinstance(obj, MessageSegment):
298 super().append(obj)
299 elif isinstance(obj, str):
300 self.extend(self._construct(obj))
301 else:
302 raise ValueError(f"Unexpected type: {type(obj)} {obj}") # pragma: no cover
303 return self
304
305 def extend(self: TM, obj: Union[TM, Iterable[TMS]]) -> TM:
306 """拼接一个消息数组或多个消息段到消息数组末尾。
307
308 参数:
309 obj: 要添加的消息数组
310 """
311 for segment in obj:
312 self.append(segment)
313 return self
314
315 def copy(self: TM) -> TM:
316 return deepcopy(self)
317
318 def extract_plain_text(self) -> str:
319 """提取消息内纯文本消息"""
320
321 return "".join(str(seg) for seg in self if seg.is_text())
322
323
324 __autodoc__ = {
325 "MessageSegment.__str__": True,
326 "MessageSegment.__add__": True,
327 "Message.__getitem__": True,
328 "Message._construct": True,
329 }
330
[end of nonebot/internal/adapter/message.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nonebot/internal/adapter/message.py b/nonebot/internal/adapter/message.py
--- a/nonebot/internal/adapter/message.py
+++ b/nonebot/internal/adapter/message.py
@@ -186,7 +186,7 @@
elif isinstance(other, Iterable):
self.extend(other)
else:
- raise ValueError(f"Unsupported type: {type(other)}") # pragma: no cover
+ raise TypeError(f"Unsupported type {type(other)!r}")
return self
@overload
| {"golden_diff": "diff --git a/nonebot/internal/adapter/message.py b/nonebot/internal/adapter/message.py\n--- a/nonebot/internal/adapter/message.py\n+++ b/nonebot/internal/adapter/message.py\n@@ -186,7 +186,7 @@\n elif isinstance(other, Iterable):\n self.extend(other)\n else:\n- raise ValueError(f\"Unsupported type: {type(other)}\") # pragma: no cover\n+ raise TypeError(f\"Unsupported type {type(other)!r}\")\n return self\n \n @overload\n", "issue": "Bug: MessageTemplate\u65e0\u6cd5\u5bf9\u6570\u5b57\u5b57\u7b26\u7b49\u7c7b\u578b\u8fdb\u884c\u683c\u5f0f\u5316\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n\u4e0a\u6e38issue\uff1ahttps://github.com/mnixry/nonebot-plugin-gocqhttp/issues/281\r\n\r\n**\u5982\u4f55\u590d\u73b0\uff1f**\r\n\r\n\u5728\u4e00\u4e2a\u6d88\u606f\u6a21\u677f\u4e2d\u683c\u5f0f\u5316\u6570\u5b57\r\n\r\n**\u671f\u671b\u7684\u7ed3\u679c**\r\n\r\n\u4e0d\u62a5\u9519\r\n\r\n**\u6545\u969c\u539f\u56e0**\r\n\r\n\u63a8\u6d4b\u539f\u56e0\u5982\u4e0b\r\n\r\n\u5728\u51fd\u6570`Message.__iadd__`\u4e2d\uff0c\u5982\u679c\u76f8\u52a0\u7c7b\u578b\u9519\u8bef\u4f1a\u629b\u51fa`ValueError`\u5f02\u5e38\uff1a\r\nhttps://github.com/nonebot/nonebot2/blob/549a37b172ceeedac183fb48b58fe13a16eb56cd/nonebot/internal/adapter/message.py#L181-L190\r\n\r\n\u5728`MessageTemplate._add`\u4e2d\uff0c\u53ea\u5bf9\u4e00\u822c\u7c7b\u578b\u76f8\u52a0\u51fa\u9519\u7684`TypeError`\u505a\u4e86\u5904\u7406\uff1a\r\nhttps://github.com/nonebot/nonebot2/blob/549a37b172ceeedac183fb48b58fe13a16eb56cd/nonebot/internal/adapter/template.py#L180-L184\r\n\r\n\u56e0\u6b64\u5bfc\u81f4\u4e86\u8be5\u9519\u8bef\n", "before_files": [{"content": "import abc\nfrom copy import deepcopy\nfrom dataclasses import field, asdict, dataclass\nfrom typing import (\n Any,\n Dict,\n List,\n Type,\n Tuple,\n Union,\n Generic,\n TypeVar,\n Iterable,\n Optional,\n overload,\n)\n\nfrom pydantic import parse_obj_as\n\nfrom .template import MessageTemplate\n\nT = TypeVar(\"T\")\nTMS = TypeVar(\"TMS\", bound=\"MessageSegment\")\nTM = TypeVar(\"TM\", bound=\"Message\")\n\n\n@dataclass\nclass MessageSegment(abc.ABC, Generic[TM]):\n \"\"\"\u6d88\u606f\u6bb5\u57fa\u7c7b\"\"\"\n\n type: str\n \"\"\"\u6d88\u606f\u6bb5\u7c7b\u578b\"\"\"\n data: Dict[str, Any] = field(default_factory=dict)\n \"\"\"\u6d88\u606f\u6bb5\u6570\u636e\"\"\"\n\n @classmethod\n @abc.abstractmethod\n def get_message_class(cls) -> Type[TM]:\n \"\"\"\u83b7\u53d6\u6d88\u606f\u6570\u7ec4\u7c7b\u578b\"\"\"\n raise NotImplementedError\n\n @abc.abstractmethod\n def __str__(self) -> str:\n \"\"\"\u8be5\u6d88\u606f\u6bb5\u6240\u4ee3\u8868\u7684 str\uff0c\u5728\u547d\u4ee4\u5339\u914d\u90e8\u5206\u4f7f\u7528\"\"\"\n raise NotImplementedError\n\n def __len__(self) -> int:\n return len(str(self))\n\n def __ne__(self: T, other: T) -> bool:\n return not self == other\n\n def __add__(self: TMS, other: Union[str, TMS, Iterable[TMS]]) -> TM:\n return self.get_message_class()(self) + other\n\n def __radd__(self: TMS, other: Union[str, TMS, Iterable[TMS]]) -> TM:\n return self.get_message_class()(other) + self\n\n @classmethod\n def __get_validators__(cls):\n yield cls._validate\n\n @classmethod\n def _validate(cls, value):\n if isinstance(value, cls):\n return value\n if not isinstance(value, dict):\n raise ValueError(f\"Expected dict for MessageSegment, got {type(value)}\")\n if \"type\" not in value:\n raise ValueError(\n f\"Expected dict with 'type' for MessageSegment, got {value}\"\n )\n return cls(type=value[\"type\"], data=value.get(\"data\", {}))\n\n def get(self, key: str, default: Any = None):\n 
return asdict(self).get(key, default)\n\n def keys(self):\n return asdict(self).keys()\n\n def values(self):\n return asdict(self).values()\n\n def items(self):\n return asdict(self).items()\n\n def copy(self: T) -> T:\n return deepcopy(self)\n\n @abc.abstractmethod\n def is_text(self) -> bool:\n \"\"\"\u5f53\u524d\u6d88\u606f\u6bb5\u662f\u5426\u4e3a\u7eaf\u6587\u672c\"\"\"\n raise NotImplementedError\n\n\nclass Message(List[TMS], abc.ABC):\n \"\"\"\u6d88\u606f\u6570\u7ec4\n\n \u53c2\u6570:\n message: \u6d88\u606f\u5185\u5bb9\n \"\"\"\n\n def __init__(\n self,\n message: Union[str, None, Iterable[TMS], TMS] = None,\n ):\n super().__init__()\n if message is None:\n return\n elif isinstance(message, str):\n self.extend(self._construct(message))\n elif isinstance(message, MessageSegment):\n self.append(message)\n elif isinstance(message, Iterable):\n self.extend(message)\n else:\n self.extend(self._construct(message)) # pragma: no cover\n\n @classmethod\n def template(cls: Type[TM], format_string: Union[str, TM]) -> MessageTemplate[TM]:\n \"\"\"\u521b\u5efa\u6d88\u606f\u6a21\u677f\u3002\n\n \u7528\u6cd5\u548c `str.format` \u5927\u81f4\u76f8\u540c, \u4f46\u662f\u53ef\u4ee5\u8f93\u51fa\u6d88\u606f\u5bf9\u8c61, \u5e76\u4e14\u652f\u6301\u4ee5 `Message` \u5bf9\u8c61\u4f5c\u4e3a\u6d88\u606f\u6a21\u677f\n\n \u5e76\u4e14\u63d0\u4f9b\u4e86\u62d3\u5c55\u7684\u683c\u5f0f\u5316\u63a7\u5236\u7b26, \u53ef\u4ee5\u7528\u9002\u7528\u4e8e\u8be5\u6d88\u606f\u7c7b\u578b\u7684 `MessageSegment` \u7684\u5de5\u5382\u65b9\u6cd5\u521b\u5efa\u6d88\u606f\n\n \u53c2\u6570:\n format_string: \u683c\u5f0f\u5316\u6a21\u677f\n\n \u8fd4\u56de:\n \u6d88\u606f\u683c\u5f0f\u5316\u5668\n \"\"\"\n return MessageTemplate(format_string, cls)\n\n @classmethod\n @abc.abstractmethod\n def get_segment_class(cls) -> Type[TMS]:\n \"\"\"\u83b7\u53d6\u6d88\u606f\u6bb5\u7c7b\u578b\"\"\"\n raise NotImplementedError\n\n def __str__(self) -> str:\n return \"\".join(str(seg) for seg in self)\n\n @classmethod\n def __get_validators__(cls):\n yield cls._validate\n\n @classmethod\n def _validate(cls, value):\n if isinstance(value, cls):\n return value\n elif isinstance(value, Message):\n raise ValueError(f\"Type {type(value)} can not be converted to {cls}\")\n elif isinstance(value, str):\n pass\n elif isinstance(value, dict):\n value = parse_obj_as(cls.get_segment_class(), value)\n elif isinstance(value, Iterable):\n value = [parse_obj_as(cls.get_segment_class(), v) for v in value]\n else:\n raise ValueError(\n f\"Expected str, dict or iterable for Message, got {type(value)}\"\n )\n return cls(value)\n\n @staticmethod\n @abc.abstractmethod\n def _construct(msg: str) -> Iterable[TMS]:\n \"\"\"\u6784\u9020\u6d88\u606f\u6570\u7ec4\"\"\"\n raise NotImplementedError\n\n def __add__(self: TM, other: Union[str, TMS, Iterable[TMS]]) -> TM:\n result = self.copy()\n result += other\n return result\n\n def __radd__(self: TM, other: Union[str, TMS, Iterable[TMS]]) -> TM:\n result = self.__class__(other)\n return result + self\n\n def __iadd__(self: TM, other: Union[str, TMS, Iterable[TMS]]) -> TM:\n if isinstance(other, str):\n self.extend(self._construct(other))\n elif isinstance(other, MessageSegment):\n self.append(other)\n elif isinstance(other, Iterable):\n self.extend(other)\n else:\n raise ValueError(f\"Unsupported type: {type(other)}\") # pragma: no cover\n return self\n\n @overload\n def __getitem__(self: TM, __args: str) -> TM:\n \"\"\"\n \u53c2\u6570:\n __args: \u6d88\u606f\u6bb5\u7c7b\u578b\n\n \u8fd4\u56de:\n \u6240\u6709\u7c7b\u578b\u4e3a 
`__args` \u7684\u6d88\u606f\u6bb5\n \"\"\"\n\n @overload\n def __getitem__(self, __args: Tuple[str, int]) -> TMS:\n \"\"\"\n \u53c2\u6570:\n __args: \u6d88\u606f\u6bb5\u7c7b\u578b\u548c\u7d22\u5f15\n\n \u8fd4\u56de:\n \u7c7b\u578b\u4e3a `__args[0]` \u7684\u6d88\u606f\u6bb5\u7b2c `__args[1]` \u4e2a\n \"\"\"\n\n @overload\n def __getitem__(self: TM, __args: Tuple[str, slice]) -> TM:\n \"\"\"\n \u53c2\u6570:\n __args: \u6d88\u606f\u6bb5\u7c7b\u578b\u548c\u5207\u7247\n\n \u8fd4\u56de:\n \u7c7b\u578b\u4e3a `__args[0]` \u7684\u6d88\u606f\u6bb5\u5207\u7247 `__args[1]`\n \"\"\"\n\n @overload\n def __getitem__(self, __args: int) -> TMS:\n \"\"\"\n \u53c2\u6570:\n __args: \u7d22\u5f15\n\n \u8fd4\u56de:\n \u7b2c `__args` \u4e2a\u6d88\u606f\u6bb5\n \"\"\"\n\n @overload\n def __getitem__(self: TM, __args: slice) -> TM:\n \"\"\"\n \u53c2\u6570:\n __args: \u5207\u7247\n\n \u8fd4\u56de:\n \u6d88\u606f\u5207\u7247 `__args`\n \"\"\"\n\n def __getitem__(\n self: TM,\n args: Union[\n str,\n Tuple[str, int],\n Tuple[str, slice],\n int,\n slice,\n ],\n ) -> Union[TMS, TM]:\n arg1, arg2 = args if isinstance(args, tuple) else (args, None)\n if isinstance(arg1, int) and arg2 is None:\n return super().__getitem__(arg1)\n elif isinstance(arg1, slice) and arg2 is None:\n return self.__class__(super().__getitem__(arg1))\n elif isinstance(arg1, str) and arg2 is None:\n return self.__class__(seg for seg in self if seg.type == arg1)\n elif isinstance(arg1, str) and isinstance(arg2, int):\n return [seg for seg in self if seg.type == arg1][arg2]\n elif isinstance(arg1, str) and isinstance(arg2, slice):\n return self.__class__([seg for seg in self if seg.type == arg1][arg2])\n else:\n raise ValueError(\"Incorrect arguments to slice\") # pragma: no cover\n\n def index(self, value: Union[TMS, str], *args) -> int:\n if isinstance(value, str):\n first_segment = next((seg for seg in self if seg.type == value), None)\n if first_segment is None:\n raise ValueError(f\"Segment with type {value} is not in message\")\n return super().index(first_segment, *args)\n return super().index(value, *args)\n\n def get(self: TM, type_: str, count: Optional[int] = None) -> TM:\n if count is None:\n return self[type_]\n\n iterator, filtered = (\n seg for seg in self if seg.type == type_\n ), self.__class__()\n for _ in range(count):\n seg = next(iterator, None)\n if seg is None:\n break\n filtered.append(seg)\n return filtered\n\n def count(self, value: Union[TMS, str]) -> int:\n return len(self[value]) if isinstance(value, str) else super().count(value)\n\n def append(self: TM, obj: Union[str, TMS]) -> TM:\n \"\"\"\u6dfb\u52a0\u4e00\u4e2a\u6d88\u606f\u6bb5\u5230\u6d88\u606f\u6570\u7ec4\u672b\u5c3e\u3002\n\n \u53c2\u6570:\n obj: \u8981\u6dfb\u52a0\u7684\u6d88\u606f\u6bb5\n \"\"\"\n if isinstance(obj, MessageSegment):\n super().append(obj)\n elif isinstance(obj, str):\n self.extend(self._construct(obj))\n else:\n raise ValueError(f\"Unexpected type: {type(obj)} {obj}\") # pragma: no cover\n return self\n\n def extend(self: TM, obj: Union[TM, Iterable[TMS]]) -> TM:\n \"\"\"\u62fc\u63a5\u4e00\u4e2a\u6d88\u606f\u6570\u7ec4\u6216\u591a\u4e2a\u6d88\u606f\u6bb5\u5230\u6d88\u606f\u6570\u7ec4\u672b\u5c3e\u3002\n\n \u53c2\u6570:\n obj: \u8981\u6dfb\u52a0\u7684\u6d88\u606f\u6570\u7ec4\n \"\"\"\n for segment in obj:\n self.append(segment)\n return self\n\n def copy(self: TM) -> TM:\n return deepcopy(self)\n\n def extract_plain_text(self) -> str:\n \"\"\"\u63d0\u53d6\u6d88\u606f\u5185\u7eaf\u6587\u672c\u6d88\u606f\"\"\"\n\n return \"\".join(str(seg) for seg 
in self if seg.is_text())\n\n\n__autodoc__ = {\n \"MessageSegment.__str__\": True,\n \"MessageSegment.__add__\": True,\n \"Message.__getitem__\": True,\n \"Message._construct\": True,\n}\n", "path": "nonebot/internal/adapter/message.py"}]} | 4,002 | 117 |
gh_patches_debug_5263 | rasdani/github-patches | git_diff | python-poetry__poetry-4420 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Throw a RuntimeError on hash mismatch in Chooser._get_links
Throw a specific exception when a link with a matching name and version is found, but none of its digests match the `poetry.lock` metadata.
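A sketch of that behaviour as a standalone helper (the helper name and message wording are illustrative, not Poetry's API):
```python
# Illustrative guard: candidate links were found, but none carries a digest
# recorded in poetry.lock, so fail loudly instead of reporting "no candidates".
def ensure_digest_match(package, links, selected_links):
    if links and not selected_links:
        raise RuntimeError(
            "Retrieved digests for {} do not match the poetry.lock metadata".format(package)
        )
    return selected_links
```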
Resolves: #2422
- [✅] Added **tests** for changed code.
- [✅] Updated **documentation** for changed code.
</issue>
<code>
[start of poetry/installation/chooser.py]
1 import re
2
3 from typing import List
4 from typing import Tuple
5
6 from packaging.tags import Tag
7
8 from poetry.core.packages.package import Package
9 from poetry.core.packages.utils.link import Link
10 from poetry.repositories.pool import Pool
11 from poetry.utils.env import Env
12 from poetry.utils.patterns import wheel_file_re
13
14
15 class InvalidWheelName(Exception):
16 pass
17
18
19 class Wheel(object):
20 def __init__(self, filename): # type: (str) -> None
21 wheel_info = wheel_file_re.match(filename)
22 if not wheel_info:
23 raise InvalidWheelName("{} is not a valid wheel filename.".format(filename))
24
25 self.filename = filename
26 self.name = wheel_info.group("name").replace("_", "-")
27 self.version = wheel_info.group("ver").replace("_", "-")
28 self.build_tag = wheel_info.group("build")
29 self.pyversions = wheel_info.group("pyver").split(".")
30 self.abis = wheel_info.group("abi").split(".")
31 self.plats = wheel_info.group("plat").split(".")
32
33 self.tags = {
34 Tag(x, y, z) for x in self.pyversions for y in self.abis for z in self.plats
35 }
36
37 def get_minimum_supported_index(self, tags):
38 indexes = [tags.index(t) for t in self.tags if t in tags]
39
40 return min(indexes) if indexes else None
41
42 def is_supported_by_environment(self, env):
43 return bool(set(env.supported_tags).intersection(self.tags))
44
45
46 class Chooser:
47 """
48 A Chooser chooses an appropriate release archive for packages.
49 """
50
51 def __init__(self, pool, env): # type: (Pool, Env) -> None
52 self._pool = pool
53 self._env = env
54
55 def choose_for(self, package): # type: (Package) -> Link
56 """
57 Return the url of the selected archive for a given package.
58 """
59 links = []
60 for link in self._get_links(package):
61 if link.is_wheel and not Wheel(link.filename).is_supported_by_environment(
62 self._env
63 ):
64 continue
65
66 if link.ext in {".egg", ".exe", ".msi", ".rpm", ".srpm"}:
67 continue
68
69 links.append(link)
70
71 if not links:
72 raise RuntimeError(
73 "Unable to find installation candidates for {}".format(package)
74 )
75
76 # Get the best link
77 chosen = max(links, key=lambda link: self._sort_key(package, link))
78 if not chosen:
79 raise RuntimeError(
80 "Unable to find installation candidates for {}".format(package)
81 )
82
83 return chosen
84
85 def _get_links(self, package): # type: (Package) -> List[Link]
86 if not package.source_type:
87 if not self._pool.has_repository("pypi"):
88 repository = self._pool.repositories[0]
89 else:
90 repository = self._pool.repository("pypi")
91 else:
92 repository = self._pool.repository(package.source_reference)
93
94 links = repository.find_links_for_package(package)
95
96 hashes = [f["hash"] for f in package.files]
97 if not hashes:
98 return links
99
100 selected_links = []
101 for link in links:
102 if not link.hash:
103 selected_links.append(link)
104 continue
105
106 h = link.hash_name + ":" + link.hash
107 if h not in hashes:
108 continue
109
110 selected_links.append(link)
111
112 return selected_links
113
114 def _sort_key(self, package, link): # type: (Package, Link) -> Tuple
115 """
116 Function to pass as the `key` argument to a call to sorted() to sort
117 InstallationCandidates by preference.
118 Returns a tuple such that tuples sorting as greater using Python's
119 default comparison operator are more preferred.
120 The preference is as follows:
121 First and foremost, candidates with allowed (matching) hashes are
122 always preferred over candidates without matching hashes. This is
123 because e.g. if the only candidate with an allowed hash is yanked,
124 we still want to use that candidate.
125 Second, excepting hash considerations, candidates that have been
126 yanked (in the sense of PEP 592) are always less preferred than
127 candidates that haven't been yanked. Then:
128 If not finding wheels, they are sorted by version only.
129 If finding wheels, then the sort order is by version, then:
130 1. existing installs
131 2. wheels ordered via Wheel.support_index_min(self._supported_tags)
132 3. source archives
133 If prefer_binary was set, then all wheels are sorted above sources.
134 Note: it was considered to embed this logic into the Link
135 comparison operators, but then different sdist links
136 with the same version, would have to be considered equal
137 """
138 support_num = len(self._env.supported_tags)
139 build_tag = ()
140 binary_preference = 0
141 if link.is_wheel:
142 wheel = Wheel(link.filename)
143 if not wheel.is_supported_by_environment(self._env):
144 raise RuntimeError(
145 "{} is not a supported wheel for this platform. It "
146 "can't be sorted.".format(wheel.filename)
147 )
148
149 # TODO: Binary preference
150 pri = -(wheel.get_minimum_supported_index(self._env.supported_tags))
151 if wheel.build_tag is not None:
152 match = re.match(r"^(\d+)(.*)$", wheel.build_tag)
153 build_tag_groups = match.groups()
154 build_tag = (int(build_tag_groups[0]), build_tag_groups[1])
155 else: # sdist
156 pri = -support_num
157
158 has_allowed_hash = int(self._is_link_hash_allowed_for_package(link, package))
159
160 # TODO: Proper yank value
161 yank_value = 0
162
163 return (
164 has_allowed_hash,
165 yank_value,
166 binary_preference,
167 package.version,
168 build_tag,
169 pri,
170 )
171
172 def _is_link_hash_allowed_for_package(
173 self, link, package
174 ): # type: (Link, Package) -> bool
175 if not link.hash:
176 return True
177
178 h = link.hash_name + ":" + link.hash
179
180 return h in {f["hash"] for f in package.files}
181
[end of poetry/installation/chooser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/poetry/installation/chooser.py b/poetry/installation/chooser.py
--- a/poetry/installation/chooser.py
+++ b/poetry/installation/chooser.py
@@ -109,6 +109,13 @@
selected_links.append(link)
+ if links and not selected_links:
+ raise RuntimeError(
+ "Retrieved digest for link {}({}) not in poetry.lock metadata {}".format(
+ link.filename, h, hashes
+ )
+ )
+
return selected_links
def _sort_key(self, package, link): # type: (Package, Link) -> Tuple
| {"golden_diff": "diff --git a/poetry/installation/chooser.py b/poetry/installation/chooser.py\n--- a/poetry/installation/chooser.py\n+++ b/poetry/installation/chooser.py\n@@ -109,6 +109,13 @@\n \n selected_links.append(link)\n \n+ if links and not selected_links:\n+ raise RuntimeError(\n+ \"Retrieved digest for link {}({}) not in poetry.lock metadata {}\".format(\n+ link.filename, h, hashes\n+ )\n+ )\n+\n return selected_links\n \n def _sort_key(self, package, link): # type: (Package, Link) -> Tuple\n", "issue": "Throw a RuntimeError on hash mismatch in Chooser._get_links\nThrow a specific exception in the case of finding a matching name+version, but none of the digests for a link matching the `poetry.lock` metadata.\r\n\r\nResolves: #2422\r\n\r\n- [\u2705] Added **tests** for changed code.\r\n- [\u2705] Updated **documentation** for changed code.\r\n\n", "before_files": [{"content": "import re\n\nfrom typing import List\nfrom typing import Tuple\n\nfrom packaging.tags import Tag\n\nfrom poetry.core.packages.package import Package\nfrom poetry.core.packages.utils.link import Link\nfrom poetry.repositories.pool import Pool\nfrom poetry.utils.env import Env\nfrom poetry.utils.patterns import wheel_file_re\n\n\nclass InvalidWheelName(Exception):\n pass\n\n\nclass Wheel(object):\n def __init__(self, filename): # type: (str) -> None\n wheel_info = wheel_file_re.match(filename)\n if not wheel_info:\n raise InvalidWheelName(\"{} is not a valid wheel filename.\".format(filename))\n\n self.filename = filename\n self.name = wheel_info.group(\"name\").replace(\"_\", \"-\")\n self.version = wheel_info.group(\"ver\").replace(\"_\", \"-\")\n self.build_tag = wheel_info.group(\"build\")\n self.pyversions = wheel_info.group(\"pyver\").split(\".\")\n self.abis = wheel_info.group(\"abi\").split(\".\")\n self.plats = wheel_info.group(\"plat\").split(\".\")\n\n self.tags = {\n Tag(x, y, z) for x in self.pyversions for y in self.abis for z in self.plats\n }\n\n def get_minimum_supported_index(self, tags):\n indexes = [tags.index(t) for t in self.tags if t in tags]\n\n return min(indexes) if indexes else None\n\n def is_supported_by_environment(self, env):\n return bool(set(env.supported_tags).intersection(self.tags))\n\n\nclass Chooser:\n \"\"\"\n A Chooser chooses an appropriate release archive for packages.\n \"\"\"\n\n def __init__(self, pool, env): # type: (Pool, Env) -> None\n self._pool = pool\n self._env = env\n\n def choose_for(self, package): # type: (Package) -> Link\n \"\"\"\n Return the url of the selected archive for a given package.\n \"\"\"\n links = []\n for link in self._get_links(package):\n if link.is_wheel and not Wheel(link.filename).is_supported_by_environment(\n self._env\n ):\n continue\n\n if link.ext in {\".egg\", \".exe\", \".msi\", \".rpm\", \".srpm\"}:\n continue\n\n links.append(link)\n\n if not links:\n raise RuntimeError(\n \"Unable to find installation candidates for {}\".format(package)\n )\n\n # Get the best link\n chosen = max(links, key=lambda link: self._sort_key(package, link))\n if not chosen:\n raise RuntimeError(\n \"Unable to find installation candidates for {}\".format(package)\n )\n\n return chosen\n\n def _get_links(self, package): # type: (Package) -> List[Link]\n if not package.source_type:\n if not self._pool.has_repository(\"pypi\"):\n repository = self._pool.repositories[0]\n else:\n repository = self._pool.repository(\"pypi\")\n else:\n repository = self._pool.repository(package.source_reference)\n\n links = repository.find_links_for_package(package)\n\n 
hashes = [f[\"hash\"] for f in package.files]\n if not hashes:\n return links\n\n selected_links = []\n for link in links:\n if not link.hash:\n selected_links.append(link)\n continue\n\n h = link.hash_name + \":\" + link.hash\n if h not in hashes:\n continue\n\n selected_links.append(link)\n\n return selected_links\n\n def _sort_key(self, package, link): # type: (Package, Link) -> Tuple\n \"\"\"\n Function to pass as the `key` argument to a call to sorted() to sort\n InstallationCandidates by preference.\n Returns a tuple such that tuples sorting as greater using Python's\n default comparison operator are more preferred.\n The preference is as follows:\n First and foremost, candidates with allowed (matching) hashes are\n always preferred over candidates without matching hashes. This is\n because e.g. if the only candidate with an allowed hash is yanked,\n we still want to use that candidate.\n Second, excepting hash considerations, candidates that have been\n yanked (in the sense of PEP 592) are always less preferred than\n candidates that haven't been yanked. Then:\n If not finding wheels, they are sorted by version only.\n If finding wheels, then the sort order is by version, then:\n 1. existing installs\n 2. wheels ordered via Wheel.support_index_min(self._supported_tags)\n 3. source archives\n If prefer_binary was set, then all wheels are sorted above sources.\n Note: it was considered to embed this logic into the Link\n comparison operators, but then different sdist links\n with the same version, would have to be considered equal\n \"\"\"\n support_num = len(self._env.supported_tags)\n build_tag = ()\n binary_preference = 0\n if link.is_wheel:\n wheel = Wheel(link.filename)\n if not wheel.is_supported_by_environment(self._env):\n raise RuntimeError(\n \"{} is not a supported wheel for this platform. It \"\n \"can't be sorted.\".format(wheel.filename)\n )\n\n # TODO: Binary preference\n pri = -(wheel.get_minimum_supported_index(self._env.supported_tags))\n if wheel.build_tag is not None:\n match = re.match(r\"^(\\d+)(.*)$\", wheel.build_tag)\n build_tag_groups = match.groups()\n build_tag = (int(build_tag_groups[0]), build_tag_groups[1])\n else: # sdist\n pri = -support_num\n\n has_allowed_hash = int(self._is_link_hash_allowed_for_package(link, package))\n\n # TODO: Proper yank value\n yank_value = 0\n\n return (\n has_allowed_hash,\n yank_value,\n binary_preference,\n package.version,\n build_tag,\n pri,\n )\n\n def _is_link_hash_allowed_for_package(\n self, link, package\n ): # type: (Link, Package) -> bool\n if not link.hash:\n return True\n\n h = link.hash_name + \":\" + link.hash\n\n return h in {f[\"hash\"] for f in package.files}\n", "path": "poetry/installation/chooser.py"}]} | 2,402 | 144 |
gh_patches_debug_27327 | rasdani/github-patches | git_diff | quantumlib__Cirq-4805 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`cirq.plot_density_matrix` outputs buggy plots (abstract art) and doesn't scale with number of qubits.
**Description of the issue**
`cirq.plot_density_matrix` is buggy and outputs random-looking plots (tested in a Colab), and it also doesn't scale well with the number of qubits.
My guess is that this is due to the magic numbers hardcoded in the code, which should instead be computed on the fly based on the size of the density matrix.
https://github.com/quantumlib/Cirq/blob/09a4667e3d1768fc2f1934b18ae52e5b13ecd378/cirq-core/cirq/vis/density_matrix.py#L39
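For illustration, plugging the hard-coded half-cell constant from the helper at that line into the current formulas shows the radii spilling far outside a 1x1 grid cell (standalone sketch, not library code):
```python
# Standalone sketch using the constant defined in _plot_element_of_density_matrix.
_half_cell = (1 / 1.1) * 0.5       # ~0.4545, the intended maximum radius
r = 0.3                            # an example amplitude in [0, 1]
print(1 / _half_cell)              # 2.2   -> outer circle radius, larger than one cell
print(r / _half_cell)              # ~0.66 -> inner circle radius as currently computed
print(r * _half_cell)              # ~0.14 -> a radius that would stay inside the cell
```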
**How to reproduce the issue**
```python
import cirq
cirq.plot_density_matrix(cirq.testing.random_density_matrix(dim=4, random_state = 12345))
```
<details>

</details>
**Cirq version**
0.14.0.dev20220105215702
cc @AnimeshSinha1309
</issue>
<code>
[start of cirq-core/cirq/vis/density_matrix.py]
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Tool to visualize the magnitudes and phases in the density matrix"""
16
17 from typing import Optional
18
19 import numpy as np
20 import matplotlib.pyplot as plt
21 from matplotlib import lines, patches
22
23 from cirq.qis.states import validate_density_matrix
24
25
26 def _plot_element_of_density_matrix(ax, x, y, r, phase, show_rect=False, show_text=False):
27 """Plots a single element of a density matrix
28
29 Args:
30 x: x coordinate of the cell we are plotting
31 y: y coordinate of the cell we are plotting
32 r: the amplitude of the qubit in that cell
33 phase: phase of the qubit in that cell, in radians
34 show_rect: Boolean on if to show the amplitude rectangle, used for diagonal elements
35 show_text: Boolean on if to show text labels or not
36 ax: The axes to plot on
37 """
38 # Setting up a few magic numbers for graphics
39 _half_cell_size_after_padding = (1 / 1.1) * 0.5
40 _rectangle_margin = 0.01
41 _image_opacity = 0.8 if not show_text else 0.4
42
43 circle_out = plt.Circle(
44 (x + 0.5, y + 0.5), radius=1 / _half_cell_size_after_padding, fill=False, color='#333333'
45 )
46 circle_in = plt.Circle(
47 (x + 0.5, y + 0.5),
48 radius=r / _half_cell_size_after_padding,
49 fill=True,
50 color='IndianRed',
51 alpha=_image_opacity,
52 )
53 line = lines.Line2D(
54 (x + 0.5, x + 0.5 + np.cos(phase) / _half_cell_size_after_padding),
55 (y + 0.5, y + 0.5 + np.sin(phase) / _half_cell_size_after_padding),
56 color='#333333',
57 alpha=_image_opacity,
58 )
59 ax.add_artist(circle_in)
60 ax.add_artist(circle_out)
61 ax.add_artist(line)
62 if show_rect:
63 rect = patches.Rectangle(
64 (x + _rectangle_margin, y + _rectangle_margin),
65 1.0 - 2 * _rectangle_margin,
66 r * (1 - 2 * _rectangle_margin),
67 alpha=0.25,
68 )
69 ax.add_artist(rect)
70 if show_text:
71 plt.text(
72 x + 0.5,
73 y + 0.5,
74 f"{np.round(r, decimals=2)}\n{np.round(phase * 180 / np.pi, decimals=2)} deg",
75 horizontalalignment='center',
76 verticalalignment='center',
77 )
78
79
80 def plot_density_matrix(
81 matrix: np.ndarray,
82 ax: Optional[plt.Axes] = None,
83 *,
84 show_text: bool = False,
85 title: Optional[str] = None,
86 ) -> plt.Axes:
87 """Generates a plot for a given density matrix.
88
89 1. Each entry of the density matrix, a complex number, is plotted as an
90 Argand Diagram where the partially filled red circle represents the magnitude
91 and the line represents the phase angle, going anti-clockwise from positive x - axis.
92 2. The blue rectangles on the diagonal elements represent the probability
93 of measuring the system in state $|i\rangle$.
94 Rendering scheme is inspired from https://algassert.com/quirk
95
96 Args:
97 matrix: The density matrix to visualize
98 show_text: If true, the density matrix values are also shown as text labels
99 ax: The axes to plot on
100 title: Title of the plot
101 """
102 plt.style.use('ggplot')
103
104 _padding_around_plot = 0.001
105
106 matrix = matrix.astype(np.complex128)
107 num_qubits = int(np.log2(matrix.shape[0]))
108 validate_density_matrix(matrix, qid_shape=(2 ** num_qubits,))
109
110 if ax is None:
111 _, ax = plt.subplots(figsize=(10, 10))
112 ax.set_xlim(0 - _padding_around_plot, 2 ** num_qubits + _padding_around_plot)
113 ax.set_ylim(0 - _padding_around_plot, 2 ** num_qubits + _padding_around_plot)
114
115 for i in range(matrix.shape[0]):
116 for j in range(matrix.shape[1]):
117 _plot_element_of_density_matrix(
118 ax,
119 i,
120 j,
121 np.abs(matrix[i][-j - 1]),
122 np.angle(matrix[i][-j - 1]),
123 show_rect=(i == matrix.shape[1] - j - 1),
124 show_text=show_text,
125 )
126
127 ticks, labels = np.arange(0.5, matrix.shape[0]), [
128 f"{'0'*(num_qubits - len(f'{i:b}'))}{i:b}" for i in range(matrix.shape[0])
129 ]
130 ax.set_xticks(ticks)
131 ax.set_xticklabels(labels)
132 ax.set_yticks(ticks)
133 ax.set_yticklabels(reversed(labels))
134 ax.set_facecolor('#eeeeee')
135 if title is not None:
136 ax.set_title(title)
137 return ax
138
[end of cirq-core/cirq/vis/density_matrix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq-core/cirq/vis/density_matrix.py b/cirq-core/cirq/vis/density_matrix.py
--- a/cirq-core/cirq/vis/density_matrix.py
+++ b/cirq-core/cirq/vis/density_matrix.py
@@ -41,18 +41,18 @@
_image_opacity = 0.8 if not show_text else 0.4
circle_out = plt.Circle(
- (x + 0.5, y + 0.5), radius=1 / _half_cell_size_after_padding, fill=False, color='#333333'
+ (x + 0.5, y + 0.5), radius=1 * _half_cell_size_after_padding, fill=False, color='#333333'
)
circle_in = plt.Circle(
(x + 0.5, y + 0.5),
- radius=r / _half_cell_size_after_padding,
+ radius=r * _half_cell_size_after_padding,
fill=True,
color='IndianRed',
alpha=_image_opacity,
)
line = lines.Line2D(
- (x + 0.5, x + 0.5 + np.cos(phase) / _half_cell_size_after_padding),
- (y + 0.5, y + 0.5 + np.sin(phase) / _half_cell_size_after_padding),
+ (x + 0.5, x + 0.5 + np.cos(phase) * _half_cell_size_after_padding),
+ (y + 0.5, y + 0.5 + np.sin(phase) * _half_cell_size_after_padding),
color='#333333',
alpha=_image_opacity,
)
@@ -128,7 +128,7 @@
f"{'0'*(num_qubits - len(f'{i:b}'))}{i:b}" for i in range(matrix.shape[0])
]
ax.set_xticks(ticks)
- ax.set_xticklabels(labels)
+ ax.set_xticklabels(labels, rotation=90)
ax.set_yticks(ticks)
ax.set_yticklabels(reversed(labels))
ax.set_facecolor('#eeeeee')
| {"golden_diff": "diff --git a/cirq-core/cirq/vis/density_matrix.py b/cirq-core/cirq/vis/density_matrix.py\n--- a/cirq-core/cirq/vis/density_matrix.py\n+++ b/cirq-core/cirq/vis/density_matrix.py\n@@ -41,18 +41,18 @@\n _image_opacity = 0.8 if not show_text else 0.4\n \n circle_out = plt.Circle(\n- (x + 0.5, y + 0.5), radius=1 / _half_cell_size_after_padding, fill=False, color='#333333'\n+ (x + 0.5, y + 0.5), radius=1 * _half_cell_size_after_padding, fill=False, color='#333333'\n )\n circle_in = plt.Circle(\n (x + 0.5, y + 0.5),\n- radius=r / _half_cell_size_after_padding,\n+ radius=r * _half_cell_size_after_padding,\n fill=True,\n color='IndianRed',\n alpha=_image_opacity,\n )\n line = lines.Line2D(\n- (x + 0.5, x + 0.5 + np.cos(phase) / _half_cell_size_after_padding),\n- (y + 0.5, y + 0.5 + np.sin(phase) / _half_cell_size_after_padding),\n+ (x + 0.5, x + 0.5 + np.cos(phase) * _half_cell_size_after_padding),\n+ (y + 0.5, y + 0.5 + np.sin(phase) * _half_cell_size_after_padding),\n color='#333333',\n alpha=_image_opacity,\n )\n@@ -128,7 +128,7 @@\n f\"{'0'*(num_qubits - len(f'{i:b}'))}{i:b}\" for i in range(matrix.shape[0])\n ]\n ax.set_xticks(ticks)\n- ax.set_xticklabels(labels)\n+ ax.set_xticklabels(labels, rotation=90)\n ax.set_yticks(ticks)\n ax.set_yticklabels(reversed(labels))\n ax.set_facecolor('#eeeeee')\n", "issue": "`cirq.plot_density_matrix` outputs buggy plots (abstract art) and doesn't scale with number of qubits. \n**Description of the issue**\r\n`cirq.plot_density_matrix` is buggy and output's random plots (tested in a colab) and would also doesn't scale well with number of qubits. \r\n\r\nMy guess is that this due to the magic numbers hardcoded in the code, which should be changed and computed on the fly based on the size of the density matrix. \r\n\r\nhttps://github.com/quantumlib/Cirq/blob/09a4667e3d1768fc2f1934b18ae52e5b13ecd378/cirq-core/cirq/vis/density_matrix.py#L39\r\n\r\n**How to reproduce the issue**\r\n\r\n```python\r\nimport cirq\r\ncirq.plot_density_matrix(cirq.testing.random_density_matrix(dim=4, random_state = 12345))\r\n```\r\n\r\n<details>\r\n\r\n\r\n\r\n\r\n</details>\r\n\r\n**Cirq version**\r\n0.14.0.dev20220105215702\r\n\r\n\r\ncc @AnimeshSinha1309 \n`cirq.plot_density_matrix` outputs buggy plots (abstract art) and doesn't scale with number of qubits. \n**Description of the issue**\r\n`cirq.plot_density_matrix` is buggy and output's random plots (tested in a colab) and would also doesn't scale well with number of qubits. \r\n\r\nMy guess is that this due to the magic numbers hardcoded in the code, which should be changed and computed on the fly based on the size of the density matrix. 
\r\n\r\nhttps://github.com/quantumlib/Cirq/blob/09a4667e3d1768fc2f1934b18ae52e5b13ecd378/cirq-core/cirq/vis/density_matrix.py#L39\r\n\r\n**How to reproduce the issue**\r\n\r\n```python\r\nimport cirq\r\ncirq.plot_density_matrix(cirq.testing.random_density_matrix(dim=4, random_state = 12345))\r\n```\r\n\r\n<details>\r\n\r\n\r\n\r\n\r\n</details>\r\n\r\n**Cirq version**\r\n0.14.0.dev20220105215702\r\n\r\n\r\ncc @AnimeshSinha1309 \n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Tool to visualize the magnitudes and phases in the density matrix\"\"\"\n\nfrom typing import Optional\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import lines, patches\n\nfrom cirq.qis.states import validate_density_matrix\n\n\ndef _plot_element_of_density_matrix(ax, x, y, r, phase, show_rect=False, show_text=False):\n \"\"\"Plots a single element of a density matrix\n\n Args:\n x: x coordinate of the cell we are plotting\n y: y coordinate of the cell we are plotting\n r: the amplitude of the qubit in that cell\n phase: phase of the qubit in that cell, in radians\n show_rect: Boolean on if to show the amplitude rectangle, used for diagonal elements\n show_text: Boolean on if to show text labels or not\n ax: The axes to plot on\n \"\"\"\n # Setting up a few magic numbers for graphics\n _half_cell_size_after_padding = (1 / 1.1) * 0.5\n _rectangle_margin = 0.01\n _image_opacity = 0.8 if not show_text else 0.4\n\n circle_out = plt.Circle(\n (x + 0.5, y + 0.5), radius=1 / _half_cell_size_after_padding, fill=False, color='#333333'\n )\n circle_in = plt.Circle(\n (x + 0.5, y + 0.5),\n radius=r / _half_cell_size_after_padding,\n fill=True,\n color='IndianRed',\n alpha=_image_opacity,\n )\n line = lines.Line2D(\n (x + 0.5, x + 0.5 + np.cos(phase) / _half_cell_size_after_padding),\n (y + 0.5, y + 0.5 + np.sin(phase) / _half_cell_size_after_padding),\n color='#333333',\n alpha=_image_opacity,\n )\n ax.add_artist(circle_in)\n ax.add_artist(circle_out)\n ax.add_artist(line)\n if show_rect:\n rect = patches.Rectangle(\n (x + _rectangle_margin, y + _rectangle_margin),\n 1.0 - 2 * _rectangle_margin,\n r * (1 - 2 * _rectangle_margin),\n alpha=0.25,\n )\n ax.add_artist(rect)\n if show_text:\n plt.text(\n x + 0.5,\n y + 0.5,\n f\"{np.round(r, decimals=2)}\\n{np.round(phase * 180 / np.pi, decimals=2)} deg\",\n horizontalalignment='center',\n verticalalignment='center',\n )\n\n\ndef plot_density_matrix(\n matrix: np.ndarray,\n ax: Optional[plt.Axes] = None,\n *,\n show_text: bool = False,\n title: Optional[str] = None,\n) -> plt.Axes:\n \"\"\"Generates a plot for a given density matrix.\n\n 1. Each entry of the density matrix, a complex number, is plotted as an\n Argand Diagram where the partially filled red circle represents the magnitude\n and the line represents the phase angle, going anti-clockwise from positive x - axis.\n 2. 
The blue rectangles on the diagonal elements represent the probability\n of measuring the system in state $|i\\rangle$.\n Rendering scheme is inspired from https://algassert.com/quirk\n\n Args:\n matrix: The density matrix to visualize\n show_text: If true, the density matrix values are also shown as text labels\n ax: The axes to plot on\n title: Title of the plot\n \"\"\"\n plt.style.use('ggplot')\n\n _padding_around_plot = 0.001\n\n matrix = matrix.astype(np.complex128)\n num_qubits = int(np.log2(matrix.shape[0]))\n validate_density_matrix(matrix, qid_shape=(2 ** num_qubits,))\n\n if ax is None:\n _, ax = plt.subplots(figsize=(10, 10))\n ax.set_xlim(0 - _padding_around_plot, 2 ** num_qubits + _padding_around_plot)\n ax.set_ylim(0 - _padding_around_plot, 2 ** num_qubits + _padding_around_plot)\n\n for i in range(matrix.shape[0]):\n for j in range(matrix.shape[1]):\n _plot_element_of_density_matrix(\n ax,\n i,\n j,\n np.abs(matrix[i][-j - 1]),\n np.angle(matrix[i][-j - 1]),\n show_rect=(i == matrix.shape[1] - j - 1),\n show_text=show_text,\n )\n\n ticks, labels = np.arange(0.5, matrix.shape[0]), [\n f\"{'0'*(num_qubits - len(f'{i:b}'))}{i:b}\" for i in range(matrix.shape[0])\n ]\n ax.set_xticks(ticks)\n ax.set_xticklabels(labels)\n ax.set_yticks(ticks)\n ax.set_yticklabels(reversed(labels))\n ax.set_facecolor('#eeeeee')\n if title is not None:\n ax.set_title(title)\n return ax\n", "path": "cirq-core/cirq/vis/density_matrix.py"}]} | 2,759 | 490 |
gh_patches_debug_4858 | rasdani/github-patches | git_diff | Gallopsled__pwntools-752 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
3.0.3 Release Broken
It appears that the archive uploaded to PyPI does not include README.md, which is referred to by setup.py.
@Idolf can you update the release to include the README?
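For context, the setup.py shown below converts README.md with pypandoc and only guards against `ImportError`, so installing from a source archive that lacks the file can fail at that conversion step. A hypothetical defensive tweak (not the actual fix) would be:
```python
import os

long_description = ''
if os.path.exists('README.md'):   # only convert when the file ships with the archive
    try:
        import pypandoc
        long_description = pypandoc.convert_file('README.md', 'rst')
    except ImportError:
        pass
```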
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python2
2 import glob
3 import os
4 import platform
5 import sys
6 from distutils.command.install import INSTALL_SCHEMES
7 from distutils.sysconfig import get_python_inc
8 from distutils.util import convert_path
9
10 from setuptools import find_packages
11 from setuptools import setup
12
13 # Get all template files
14 templates = []
15 for dirpath, dirnames, filenames in os.walk(convert_path('pwnlib/shellcraft/templates')):
16 for f in filenames:
17 templates.append(os.path.relpath(os.path.join(dirpath, f), 'pwnlib'))
18
19 # This makes pwntools-LICENSE.txt appear with the package folders
20 for scheme in INSTALL_SCHEMES.values():
21 scheme['data'] = scheme['purelib']
22
23 # Find all of the console scripts
24 console_scripts = []
25
26 for filename in glob.glob('pwnlib/commandline/*'):
27 filename = os.path.basename(filename)
28 filename, ext = os.path.splitext(filename)
29
30 if ext != '.py' or '__init__' in filename:
31 continue
32
33 script = '%s=pwnlib.commandline.%s:main' % (filename, filename)
34 console_scripts.append(script)
35
36 install_requires = ['paramiko>=1.15.2',
37 'mako>=1.0.0',
38 'pyelftools>=0.2.4',
39 'capstone',
40 'ropgadget>=5.3',
41 'pyserial>=2.7',
42 'requests>=2.0',
43 'pip>=6.0.8',
44 'tox>=1.8.1',
45 'pygments>=2.0',
46 'pysocks',
47 'python-dateutil',
48 'pypandoc',
49 'packaging']
50
51 # This is a hack until somebody ports psutil to OpenBSD
52 if platform.system() != 'OpenBSD':
53 install_requires.append('psutil>=2.1.3')
54
55 # Check that the user has installed the Python development headers
56 PythonH = os.path.join(get_python_inc(), 'Python.h')
57 if not os.path.exists(PythonH):
58 print >> sys.stderr, "You must install the Python development headers!"
59 print >> sys.stderr, "$ apt-get install python-dev"
60 sys.exit(-1)
61
62 # Convert README.md to reStructuredText for PyPI
63 long_description = ''
64 try:
65 import pypandoc
66 try:
67 pypandoc.get_pandoc_path()
68 except OSError:
69 pypandoc.download_pandoc()
70 long_description = pypandoc.convert_file('README.md', 'rst')
71 except ImportError:
72 pass
73
74
75 setup(
76 name = 'pwntools',
77 packages = find_packages(),
78 version = '3.0.3',
79 data_files = [('',
80 ['LICENSE-pwntools.txt',
81 ]),
82 ],
83 package_data = {
84 'pwnlib': [
85 'data/crcsums.txt',
86 'data/useragents/useragents.txt',
87 'data/binutils/*',
88 'data/includes/*.h',
89 'data/includes/*/*.h',
90 ] + templates,
91 },
92 entry_points = {'console_scripts': console_scripts},
93 scripts = glob.glob("bin/*"),
94 description = "Pwntools CTF framework and exploit development library.",
95 long_description = long_description,
96 author = "Gallopsled et al.",
97 author_email = "#pwntools @ freenode.net",
98 url = 'https://pwntools.com',
99 download_url = "https://github.com/Gallopsled/pwntools/releases",
100 install_requires = install_requires,
101 license = "Mostly MIT, some GPL/BSD, see LICENSE-pwntools.txt",
102 keywords = 'pwntools exploit ctf capture the flag binary wargame overflow stack heap defcon',
103 classifiers = [
104 'Development Status :: 5 - Production/Stable',
105 'Environment :: Console',
106 'Intended Audience :: Developers',
107 'Intended Audience :: Science/Research',
108 'Intended Audience :: System Administrators',
109 'License :: OSI Approved :: MIT License',
110 'Natural Language :: English',
111 'Operating System :: POSIX :: Linux',
112 'Programming Language :: Python :: 2.7',
113 'Topic :: Security',
114 'Topic :: Software Development :: Assemblers',
115 'Topic :: Software Development :: Debuggers',
116 'Topic :: Software Development :: Disassemblers',
117 'Topic :: Software Development :: Embedded Systems',
118 'Topic :: Software Development :: Libraries :: Python Modules',
119 'Topic :: System :: System Shells',
120 'Topic :: Utilities',
121 ]
122 )
123
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -77,8 +77,7 @@
packages = find_packages(),
version = '3.0.3',
data_files = [('',
- ['LICENSE-pwntools.txt',
- ]),
+ glob.glob('*.md') + glob.glob('*.txt')),
],
package_data = {
'pwnlib': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -77,8 +77,7 @@\n packages = find_packages(),\n version = '3.0.3',\n data_files = [('',\n- ['LICENSE-pwntools.txt',\n- ]),\n+ glob.glob('*.md') + glob.glob('*.txt')),\n ],\n package_data = {\n 'pwnlib': [\n", "issue": "3.0.3 Release Broken\nIt appears that the archive uploaded to PyPI does not include README.md, which is referred to by setup.py.\n\n@Idolf can you update the release to include the README?\n\n", "before_files": [{"content": "#!/usr/bin/env python2\nimport glob\nimport os\nimport platform\nimport sys\nfrom distutils.command.install import INSTALL_SCHEMES\nfrom distutils.sysconfig import get_python_inc\nfrom distutils.util import convert_path\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Get all template files\ntemplates = []\nfor dirpath, dirnames, filenames in os.walk(convert_path('pwnlib/shellcraft/templates')):\n for f in filenames:\n templates.append(os.path.relpath(os.path.join(dirpath, f), 'pwnlib'))\n\n# This makes pwntools-LICENSE.txt appear with the package folders\nfor scheme in INSTALL_SCHEMES.values():\n scheme['data'] = scheme['purelib']\n\n# Find all of the console scripts\nconsole_scripts = []\n\nfor filename in glob.glob('pwnlib/commandline/*'):\n filename = os.path.basename(filename)\n filename, ext = os.path.splitext(filename)\n\n if ext != '.py' or '__init__' in filename:\n continue\n\n script = '%s=pwnlib.commandline.%s:main' % (filename, filename)\n console_scripts.append(script)\n\ninstall_requires = ['paramiko>=1.15.2',\n 'mako>=1.0.0',\n 'pyelftools>=0.2.4',\n 'capstone',\n 'ropgadget>=5.3',\n 'pyserial>=2.7',\n 'requests>=2.0',\n 'pip>=6.0.8',\n 'tox>=1.8.1',\n 'pygments>=2.0',\n 'pysocks',\n 'python-dateutil',\n 'pypandoc',\n 'packaging']\n\n# This is a hack until somebody ports psutil to OpenBSD\nif platform.system() != 'OpenBSD':\n install_requires.append('psutil>=2.1.3')\n\n# Check that the user has installed the Python development headers\nPythonH = os.path.join(get_python_inc(), 'Python.h')\nif not os.path.exists(PythonH):\n print >> sys.stderr, \"You must install the Python development headers!\"\n print >> sys.stderr, \"$ apt-get install python-dev\"\n sys.exit(-1)\n\n# Convert README.md to reStructuredText for PyPI\nlong_description = ''\ntry:\n import pypandoc\n try:\n pypandoc.get_pandoc_path()\n except OSError:\n pypandoc.download_pandoc()\n long_description = pypandoc.convert_file('README.md', 'rst')\nexcept ImportError:\n pass\n\n\nsetup(\n name = 'pwntools',\n packages = find_packages(),\n version = '3.0.3',\n data_files = [('',\n ['LICENSE-pwntools.txt',\n ]),\n ],\n package_data = {\n 'pwnlib': [\n 'data/crcsums.txt',\n 'data/useragents/useragents.txt',\n 'data/binutils/*',\n 'data/includes/*.h',\n 'data/includes/*/*.h',\n ] + templates,\n },\n entry_points = {'console_scripts': console_scripts},\n scripts = glob.glob(\"bin/*\"),\n description = \"Pwntools CTF framework and exploit development library.\",\n long_description = long_description,\n author = \"Gallopsled et al.\",\n author_email = \"#pwntools @ freenode.net\",\n url = 'https://pwntools.com',\n download_url = \"https://github.com/Gallopsled/pwntools/releases\",\n install_requires = install_requires,\n license = \"Mostly MIT, some GPL/BSD, see LICENSE-pwntools.txt\",\n keywords = 'pwntools exploit ctf capture the flag binary wargame overflow stack heap defcon',\n classifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 
'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Security',\n 'Topic :: Software Development :: Assemblers',\n 'Topic :: Software Development :: Debuggers',\n 'Topic :: Software Development :: Disassemblers',\n 'Topic :: Software Development :: Embedded Systems',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: System :: System Shells',\n 'Topic :: Utilities',\n ]\n)\n", "path": "setup.py"}]} | 1,829 | 100 |
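The patch in the row above responds to the broken 3.0.3 release by having setup.py collect every top-level *.md and *.txt file with `glob` instead of hard-coding a single filename, so the shipped archive picks up README.md whenever it is present. A minimal, self-contained sketch of that pattern follows; it uses only the standard-library `glob` module, and `data_files_entry` is a hypothetical stand-in rather than pwntools' actual build code.

```python
import glob

# Build the (target_dir, files) tuple the way the patched setup.py does:
# every *.md and *.txt found at the repository root is shipped, so the list
# cannot drift out of sync with the files that actually exist.
data_files_entry = ("", glob.glob("*.md") + glob.glob("*.txt"))

if __name__ == "__main__":
    # With no matching files this prints an empty list instead of failing
    # the way a hard-coded, missing path would at install time.
    print(data_files_entry)
```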
gh_patches_debug_2598 | rasdani/github-patches | git_diff | ivy-llc__ivy-13425 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
normal: the Torch frontend is missing a `normal` random-sampling function; it needs to be added to `ivy/functional/frontends/torch/random_sampling.py`.
</issue>
<code>
[start of ivy/functional/frontends/torch/random_sampling.py]
1 import ivy
2 from ivy.func_wrapper import with_supported_dtypes
3 from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
4
5 try:
6 from torch import Generator
7 except ImportError:
8 from types import SimpleNamespace
9
10 Generator = SimpleNamespace
11
12
13 def seed() -> int:
14 """Returns a 64 bit number used to seed the RNG"""
15 return int(ivy.randint(-(2**63), 2**63 - 1))
16
17
18 @to_ivy_arrays_and_back
19 def manual_seed(seed: int):
20 ivy.seed(seed_value=seed)
21 return Generator().manual_seed(seed)
22
23
24 @with_supported_dtypes(
25 {
26 "1.11.0 and below": (
27 "float32",
28 "float64",
29 )
30 },
31 "torch",
32 )
33 @to_ivy_arrays_and_back
34 def multinomial(input, num_samples, replacement=False, *, generator=None, out=None):
35 return ivy.multinomial(
36 num_samples + 1, # doesn't matter because `probs` is provided, but should be
37 # greater than the number of samples
38 num_samples,
39 probs=input,
40 replace=replacement,
41 out=out,
42 )
43
44
45 @with_supported_dtypes(
46 {
47 "1.11.0 and below": (
48 "float32",
49 "float64",
50 )
51 },
52 "torch",
53 )
54 @to_ivy_arrays_and_back
55 def poisson(input, generator=None):
56 return ivy.poisson(input, shape=None)
57
58
59 @to_ivy_arrays_and_back
60 def rand(
61 size,
62 *,
63 generator=None,
64 out=None,
65 dtype=None,
66 layout=None,
67 device=None,
68 requires_grad=False,
69 pin_memory=False
70 ):
71 return ivy.random_uniform(
72 shape=size,
73 out=out,
74 dtype=dtype,
75 device=device,
76 )
77
78
79 @to_ivy_arrays_and_back
80 def rand_like(
81 input,
82 *,
83 dtype=None,
84 layout=None,
85 device=None,
86 requires_grad=False,
87 memory_format=False
88 ):
89 shape = input.shape
90 if not dtype:
91 dtype = input.dtype
92
93 return ivy.random_uniform(
94 shape=shape,
95 dtype=dtype,
96 device=device,
97 )
98
99
100 @to_ivy_arrays_and_back
101 def randn(
102 size,
103 *,
104 generator=None,
105 out=None,
106 dtype=None,
107 layout=None,
108 device=None,
109 requires_grad=False,
110 pin_memory=False
111 ):
112 return ivy.random_normal(
113 shape=size,
114 out=out,
115 dtype=dtype,
116 device=device,
117 )
118
[end of ivy/functional/frontends/torch/random_sampling.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/torch/random_sampling.py b/ivy/functional/frontends/torch/random_sampling.py
--- a/ivy/functional/frontends/torch/random_sampling.py
+++ b/ivy/functional/frontends/torch/random_sampling.py
@@ -76,6 +76,20 @@
)
+@with_supported_dtypes(
+ {
+ "1.11.0 and below": (
+ "float32",
+ "float64",
+ )
+ },
+ "torch",
+)
+@to_ivy_arrays_and_back
+def normal(mean, std, *, generator=None, out=None):
+ return ivy.random_normal(mean=mean, std=std, out=out)
+
+
@to_ivy_arrays_and_back
def rand_like(
input,
| {"golden_diff": "diff --git a/ivy/functional/frontends/torch/random_sampling.py b/ivy/functional/frontends/torch/random_sampling.py\n--- a/ivy/functional/frontends/torch/random_sampling.py\n+++ b/ivy/functional/frontends/torch/random_sampling.py\n@@ -76,6 +76,20 @@\n )\n \n \n+@with_supported_dtypes(\n+ {\n+ \"1.11.0 and below\": (\n+ \"float32\",\n+ \"float64\",\n+ )\n+ },\n+ \"torch\",\n+)\n+@to_ivy_arrays_and_back\n+def normal(mean, std, *, generator=None, out=None):\n+ return ivy.random_normal(mean=mean, std=std, out=out)\n+ \n+\n @to_ivy_arrays_and_back\n def rand_like(\n input,\n", "issue": "normal\n\n", "before_files": [{"content": "import ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\ntry:\n from torch import Generator\nexcept ImportError:\n from types import SimpleNamespace\n\n Generator = SimpleNamespace\n\n\ndef seed() -> int:\n \"\"\"Returns a 64 bit number used to seed the RNG\"\"\"\n return int(ivy.randint(-(2**63), 2**63 - 1))\n\n\n@to_ivy_arrays_and_back\ndef manual_seed(seed: int):\n ivy.seed(seed_value=seed)\n return Generator().manual_seed(seed)\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef multinomial(input, num_samples, replacement=False, *, generator=None, out=None):\n return ivy.multinomial(\n num_samples + 1, # doesn't matter because `probs` is provided, but should be\n # greater than the number of samples\n num_samples,\n probs=input,\n replace=replacement,\n out=out,\n )\n\n\n@with_supported_dtypes(\n {\n \"1.11.0 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"torch\",\n)\n@to_ivy_arrays_and_back\ndef poisson(input, generator=None):\n return ivy.poisson(input, shape=None)\n\n\n@to_ivy_arrays_and_back\ndef rand(\n size,\n *,\n generator=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False\n):\n return ivy.random_uniform(\n shape=size,\n out=out,\n dtype=dtype,\n device=device,\n )\n\n\n@to_ivy_arrays_and_back\ndef rand_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=False\n):\n shape = input.shape\n if not dtype:\n dtype = input.dtype\n\n return ivy.random_uniform(\n shape=shape,\n dtype=dtype,\n device=device,\n )\n\n\n@to_ivy_arrays_and_back\ndef randn(\n size,\n *,\n generator=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False\n):\n return ivy.random_normal(\n shape=size,\n out=out,\n dtype=dtype,\n device=device,\n )\n", "path": "ivy/functional/frontends/torch/random_sampling.py"}]} | 1,366 | 181 |
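The golden diff in the row above adds a `normal` wrapper to the Torch frontend that simply forwards to `ivy.random_normal`. A rough usage sketch of that underlying call is below; it assumes an installed ivy with the NumPy backend available (the backend choice and the sample values are assumptions for illustration, not part of the diff).

```python
import ivy

ivy.set_backend("numpy")  # any installed backend should work; numpy is assumed here

mean = ivy.array([0.0, 0.0, 0.0])
std = ivy.array([1.0, 2.0, 3.0])

# The new frontend function delegates to ivy.random_normal with per-element
# mean/std, mirroring torch.normal(mean, std) semantics.
samples = ivy.random_normal(mean=mean, std=std)
print(samples)
```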
gh_patches_debug_4790 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3040 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
show total ever given
Suggested [via Twitter](https://twitter.com/tripflex/status/532597015210131456):
> is there no way for me to see the total I have donated? I know I can see it weekly, but what about overall total?
</issue>
<code>
[start of gratipay/utils/history.py]
1 def iter_payday_events(db, participant):
2 """Yields payday events for the given participant.
3 """
4 username = participant.username
5 exchanges = db.all("""
6 SELECT *
7 FROM exchanges
8 WHERE participant=%s
9 """, (username,), back_as=dict)
10 transfers = db.all("""
11 SELECT *
12 FROM transfers
13 WHERE tipper=%(username)s OR tippee=%(username)s
14 """, locals(), back_as=dict)
15
16 if not (exchanges or transfers):
17 return
18
19 payday_dates = db.all("""
20 SELECT ts_start::date
21 FROM paydays
22 ORDER BY ts_start ASC
23 """)
24
25 balance = participant.balance
26 prev_date = None
27 get_timestamp = lambda e: e['timestamp']
28 events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)
29 for event in events:
30
31 event['balance'] = balance
32
33 event_date = event['timestamp'].date()
34 if event_date != prev_date:
35 if prev_date:
36 yield dict(kind='day-close', balance=balance)
37 day_open = dict(kind='day-open', date=event_date, balance=balance)
38 if payday_dates:
39 while payday_dates and payday_dates[-1] > event_date:
40 payday_dates.pop()
41 payday_date = payday_dates[-1] if payday_dates else None
42 if event_date == payday_date:
43 day_open['payday_number'] = len(payday_dates) - 1
44 yield day_open
45 prev_date = event_date
46
47 if 'fee' in event:
48 if event['amount'] > 0:
49 kind = 'charge'
50 if event['status'] in (None, 'succeeded'):
51 balance -= event['amount']
52 else:
53 kind = 'credit'
54 if event['status'] != 'failed':
55 balance -= event['amount'] - event['fee']
56 else:
57 kind = 'transfer'
58 if event['tippee'] == username:
59 balance -= event['amount']
60 else:
61 balance += event['amount']
62 event['kind'] = kind
63
64 yield event
65
66 yield dict(kind='day-close', balance='0.00')
67
[end of gratipay/utils/history.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py
--- a/gratipay/utils/history.py
+++ b/gratipay/utils/history.py
@@ -16,6 +16,13 @@
if not (exchanges or transfers):
return
+ if transfers:
+ yield dict(
+ kind='totals',
+ given=sum(t['amount'] for t in transfers if t['tipper'] == username),
+ received=sum(t['amount'] for t in transfers if t['tippee'] == username),
+ )
+
payday_dates = db.all("""
SELECT ts_start::date
FROM paydays
| {"golden_diff": "diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py\n--- a/gratipay/utils/history.py\n+++ b/gratipay/utils/history.py\n@@ -16,6 +16,13 @@\n if not (exchanges or transfers):\n return\n \n+ if transfers:\n+ yield dict(\n+ kind='totals',\n+ given=sum(t['amount'] for t in transfers if t['tipper'] == username),\n+ received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n+ )\n+\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n", "issue": "show total ever given\nSuggested [via Twitter](https://twitter.com/tripflex/status/532597015210131456):\n\n> is there no way for me to see the total I have donated? I know I can see it weekly, but what about overall total?\n\n", "before_files": [{"content": "def iter_payday_events(db, participant):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%s\n \"\"\", (username,), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE tipper=%(username)s OR tippee=%(username)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or transfers):\n return\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = participant.balance\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = event['timestamp'].date()\n if event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance='0.00')\n", "path": "gratipay/utils/history.py"}]} | 1,197 | 142 |
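The patch in the row above yields a `totals` event that sums everything the participant has ever given and received before walking the per-day history. The aggregation itself is plain Python; the sketch below runs it over an in-memory stand-in for the `transfers` rows (the usernames and amounts are invented for illustration, and no database is involved).

```python
from decimal import Decimal

username = "alice"
transfers = [
    {"tipper": "alice", "tippee": "bob", "amount": Decimal("1.50")},
    {"tipper": "carol", "tippee": "alice", "amount": Decimal("0.75")},
    {"tipper": "alice", "tippee": "dave", "amount": Decimal("2.00")},
]

# Same shape as the dict the patch yields first from iter_payday_events().
totals = dict(
    kind="totals",
    given=sum(t["amount"] for t in transfers if t["tipper"] == username),
    received=sum(t["amount"] for t in transfers if t["tippee"] == username),
)
print(totals)  # given=Decimal('3.50'), received=Decimal('0.75')
```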
gh_patches_debug_8048 | rasdani/github-patches | git_diff | mosaicml__composer-756 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BlurPool has incorrect docstrings
BlurPool docstrings imply that parameters have default values, but I don't see those in the function signatures: https://github.com/mosaicml/composer/blob/dev/composer/algorithms/blurpool/blurpool.py#L96
</issue>
<code>
[start of composer/algorithms/blurpool/blurpool.py]
1 # Copyright 2021 MosaicML. All Rights Reserved.
2
3 from __future__ import annotations
4
5 import functools
6 import logging
7 from typing import Optional
8
9 import numpy as np
10 import torch
11
12 from composer.algorithms.blurpool.blurpool_layers import BlurConv2d, BlurMaxPool2d
13 from composer.core import Algorithm, Event, State
14 from composer.core.types import Optimizers
15 from composer.loggers import Logger
16 from composer.utils import module_surgery
17
18 log = logging.getLogger(__name__)
19
20
21 def apply_blurpool(model: torch.nn.Module,
22 replace_convs: bool = True,
23 replace_maxpools: bool = True,
24 blur_first: bool = True,
25 optimizers: Optional[Optimizers] = None) -> torch.nn.Module:
26 """Add anti-aliasing filters to the strided :class:`torch.nn.Conv2d` and/or :class:`torch.nn.MaxPool2d` modules
27 within `model`.
28
29 These filters increase invariance to small spatial shifts in the input
30 (`Zhang 2019 <http://proceedings.mlr.press/v97/zhang19a.html>`_).
31
32 Args:
33 model (torch.nn.Module): the model to modify in-place
34 replace_convs (bool, optional): replace strided :class:`torch.nn.Conv2d` modules with
35 :class:`.BlurConv2d` modules. Default: ``True``.
36 replace_maxpools (bool, optional): replace eligible :class:`torch.nn.MaxPool2d` modules
37 with :class:`.BlurMaxPool2d` modules. Default: ``True``.
38 blur_first (bool, optional): for ``replace_convs``, blur input before the associated
39 convolution. When set to ``False``, the convolution is applied with
40 a stride of 1 before the blurring, resulting in significant
41 overhead (though more closely matching
42 `the paper <http://proceedings.mlr.press/v97/zhang19a.html>`_).
43 See :class:`.BlurConv2d` for further discussion. Default: ``True``.
44 optimizers (Optimizers, optional): Existing optimizers bound to
45 ``model.parameters()``. All optimizers that have already been
46 constructed with ``model.parameters()`` must be specified here so
47 they will optimize the correct parameters.
48
49 If the optimizer(s) are constructed *after* calling this function,
50 then it is safe to omit this parameter. These optimizers will see
51 the correct model parameters.
52
53 Returns:
54 The modified model
55
56 Example:
57 .. testcode::
58
59 import composer.functional as cf
60 from torchvision import models
61 model = models.resnet50()
62 cf.apply_blurpool(model)
63 """
64 transforms = {}
65 if replace_maxpools:
66 transforms[torch.nn.MaxPool2d] = BlurMaxPool2d.from_maxpool2d
67 if replace_convs:
68 transforms[torch.nn.Conv2d] = functools.partial(
69 _maybe_replace_strided_conv2d,
70 blur_first=blur_first,
71 )
72 module_surgery.replace_module_classes(model, optimizers=optimizers, policies=transforms)
73 _log_surgery_result(model)
74
75 return model
76
77
78 class BlurPool(Algorithm):
79 """`BlurPool <http://proceedings.mlr.press/v97/zhang19a.html>`_ adds anti-aliasing filters to convolutional layers
80 to increase accuracy and invariance to small shifts in the input.
81
82 Runs on :attr:`~composer.core.event.Event.INIT`.
83
84 Args:
85 replace_convs (bool): replace strided :class:`torch.nn.Conv2d` modules with
86 :class:`.BlurConv2d` modules. Default: ``True``.
87 replace_maxpools (bool): replace eligible :class:`torch.nn.MaxPool2d` modules
88 with :class:`.BlurMaxPool2d` modules. Default: ``True``.
89 blur_first (bool): when ``replace_convs`` is ``True``, blur input before the
90 associated convolution. When set to ``False``, the convolution is
91 applied with a stride of 1 before the blurring, resulting in
92 significant overhead (though more closely matching the paper).
93 See :class:`.BlurConv2d` for further discussion. Default: ``True``.
94 """
95
96 def __init__(self, replace_convs: bool, replace_maxpools: bool, blur_first: bool) -> None:
97 self.replace_convs = replace_convs
98 self.replace_maxpools = replace_maxpools
99 self.blur_first = blur_first
100
101 if self.replace_maxpools is False and \
102 self.replace_convs is False:
103 log.warning('Both replace_maxpool and replace_convs set to false '
104 'BlurPool will not be modifying the model.')
105
106 def match(self, event: Event, state: State) -> bool:
107 """Runs on :attr:`~composer.core.event.Event.INIT`.
108
109 Args:
110 event (Event): The current event.
111 state (State): The current state.
112 Returns:
113 bool: True if this algorithm should run now.
114 """
115 return event == Event.INIT
116
117 def apply(self, event: Event, state: State, logger: Logger) -> Optional[int]:
118 """Adds anti-aliasing filters to the maxpools and/or convolutions.
119
120 Args:
121 event (Event): the current event
122 state (State): the current trainer state
123 logger (Logger): the training logger
124 """
125 assert state.model is not None
126
127 apply_blurpool(state.model,
128 optimizers=state.optimizers,
129 replace_convs=self.replace_convs,
130 replace_maxpools=self.replace_maxpools,
131 blur_first=self.blur_first)
132 self._log_results(event, state, logger)
133
134 def _log_results(self, event: Event, state: State, logger: Logger) -> None:
135 """Logs the result of BlurPool application, including the number of layers that have been replaced."""
136 assert state.model is not None
137
138 num_blurpool_layers = module_surgery.count_module_instances(state.model, BlurMaxPool2d)
139 num_blurconv_layers = module_surgery.count_module_instances(state.model, BlurConv2d)
140
141 # python logger
142 log.info(f'Applied BlurPool to model {state.model.__class__.__name__} '
143 f'with replace_maxpools={self.replace_maxpools}, '
144 f'replace_convs={self.replace_convs}. '
145 f'Model now has {num_blurpool_layers} BlurMaxPool2d '
146 f'and {num_blurconv_layers} BlurConv2D layers.')
147
148 logger.data_fit({
149 'blurpool/num_blurpool_layers': num_blurpool_layers,
150 'blurpool/num_blurconv_layers': num_blurconv_layers,
151 })
152
153
154 def _log_surgery_result(model: torch.nn.Module):
155 num_blurpool_layers = module_surgery.count_module_instances(model, BlurMaxPool2d)
156 num_blurconv_layers = module_surgery.count_module_instances(model, BlurConv2d)
157 log.info(f'Applied BlurPool to model {model.__class__.__name__}. '
158 f'Model now has {num_blurpool_layers} BlurMaxPool2d '
159 f'and {num_blurconv_layers} BlurConv2D layers.')
160
161
162 def _maybe_replace_strided_conv2d(module: torch.nn.Conv2d, module_index: int, blur_first: bool):
163 if (np.max(module.stride) > 1 and module.in_channels >= 16):
164 return BlurConv2d.from_conv2d(module, module_index, blur_first=blur_first)
165 return None
166
[end of composer/algorithms/blurpool/blurpool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/composer/algorithms/blurpool/blurpool.py b/composer/algorithms/blurpool/blurpool.py
--- a/composer/algorithms/blurpool/blurpool.py
+++ b/composer/algorithms/blurpool/blurpool.py
@@ -93,7 +93,7 @@
See :class:`.BlurConv2d` for further discussion. Default: ``True``.
"""
- def __init__(self, replace_convs: bool, replace_maxpools: bool, blur_first: bool) -> None:
+ def __init__(self, replace_convs: bool = True, replace_maxpools: bool = True, blur_first: bool = True) -> None:
self.replace_convs = replace_convs
self.replace_maxpools = replace_maxpools
self.blur_first = blur_first
| {"golden_diff": "diff --git a/composer/algorithms/blurpool/blurpool.py b/composer/algorithms/blurpool/blurpool.py\n--- a/composer/algorithms/blurpool/blurpool.py\n+++ b/composer/algorithms/blurpool/blurpool.py\n@@ -93,7 +93,7 @@\n See :class:`.BlurConv2d` for further discussion. Default: ``True``.\n \"\"\"\n \n- def __init__(self, replace_convs: bool, replace_maxpools: bool, blur_first: bool) -> None:\n+ def __init__(self, replace_convs: bool = True, replace_maxpools: bool = True, blur_first: bool = True) -> None:\n self.replace_convs = replace_convs\n self.replace_maxpools = replace_maxpools\n self.blur_first = blur_first\n", "issue": "BlurPool has incorrect docstrings\nBlurPool docstrings imply that parameters have default values, but I don't see those in the function signatures: https://github.com/mosaicml/composer/blob/dev/composer/algorithms/blurpool/blurpool.py#L96\r\n\n", "before_files": [{"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom __future__ import annotations\n\nimport functools\nimport logging\nfrom typing import Optional\n\nimport numpy as np\nimport torch\n\nfrom composer.algorithms.blurpool.blurpool_layers import BlurConv2d, BlurMaxPool2d\nfrom composer.core import Algorithm, Event, State\nfrom composer.core.types import Optimizers\nfrom composer.loggers import Logger\nfrom composer.utils import module_surgery\n\nlog = logging.getLogger(__name__)\n\n\ndef apply_blurpool(model: torch.nn.Module,\n replace_convs: bool = True,\n replace_maxpools: bool = True,\n blur_first: bool = True,\n optimizers: Optional[Optimizers] = None) -> torch.nn.Module:\n \"\"\"Add anti-aliasing filters to the strided :class:`torch.nn.Conv2d` and/or :class:`torch.nn.MaxPool2d` modules\n within `model`.\n\n These filters increase invariance to small spatial shifts in the input\n (`Zhang 2019 <http://proceedings.mlr.press/v97/zhang19a.html>`_).\n\n Args:\n model (torch.nn.Module): the model to modify in-place\n replace_convs (bool, optional): replace strided :class:`torch.nn.Conv2d` modules with\n :class:`.BlurConv2d` modules. Default: ``True``.\n replace_maxpools (bool, optional): replace eligible :class:`torch.nn.MaxPool2d` modules\n with :class:`.BlurMaxPool2d` modules. Default: ``True``.\n blur_first (bool, optional): for ``replace_convs``, blur input before the associated\n convolution. When set to ``False``, the convolution is applied with\n a stride of 1 before the blurring, resulting in significant\n overhead (though more closely matching\n `the paper <http://proceedings.mlr.press/v97/zhang19a.html>`_).\n See :class:`.BlurConv2d` for further discussion. Default: ``True``.\n optimizers (Optimizers, optional): Existing optimizers bound to\n ``model.parameters()``. All optimizers that have already been\n constructed with ``model.parameters()`` must be specified here so\n they will optimize the correct parameters.\n\n If the optimizer(s) are constructed *after* calling this function,\n then it is safe to omit this parameter. These optimizers will see\n the correct model parameters.\n\n Returns:\n The modified model\n\n Example:\n .. 
testcode::\n\n import composer.functional as cf\n from torchvision import models\n model = models.resnet50()\n cf.apply_blurpool(model)\n \"\"\"\n transforms = {}\n if replace_maxpools:\n transforms[torch.nn.MaxPool2d] = BlurMaxPool2d.from_maxpool2d\n if replace_convs:\n transforms[torch.nn.Conv2d] = functools.partial(\n _maybe_replace_strided_conv2d,\n blur_first=blur_first,\n )\n module_surgery.replace_module_classes(model, optimizers=optimizers, policies=transforms)\n _log_surgery_result(model)\n\n return model\n\n\nclass BlurPool(Algorithm):\n \"\"\"`BlurPool <http://proceedings.mlr.press/v97/zhang19a.html>`_ adds anti-aliasing filters to convolutional layers\n to increase accuracy and invariance to small shifts in the input.\n\n Runs on :attr:`~composer.core.event.Event.INIT`.\n\n Args:\n replace_convs (bool): replace strided :class:`torch.nn.Conv2d` modules with\n :class:`.BlurConv2d` modules. Default: ``True``.\n replace_maxpools (bool): replace eligible :class:`torch.nn.MaxPool2d` modules\n with :class:`.BlurMaxPool2d` modules. Default: ``True``.\n blur_first (bool): when ``replace_convs`` is ``True``, blur input before the\n associated convolution. When set to ``False``, the convolution is\n applied with a stride of 1 before the blurring, resulting in\n significant overhead (though more closely matching the paper).\n See :class:`.BlurConv2d` for further discussion. Default: ``True``.\n \"\"\"\n\n def __init__(self, replace_convs: bool, replace_maxpools: bool, blur_first: bool) -> None:\n self.replace_convs = replace_convs\n self.replace_maxpools = replace_maxpools\n self.blur_first = blur_first\n\n if self.replace_maxpools is False and \\\n self.replace_convs is False:\n log.warning('Both replace_maxpool and replace_convs set to false '\n 'BlurPool will not be modifying the model.')\n\n def match(self, event: Event, state: State) -> bool:\n \"\"\"Runs on :attr:`~composer.core.event.Event.INIT`.\n\n Args:\n event (Event): The current event.\n state (State): The current state.\n Returns:\n bool: True if this algorithm should run now.\n \"\"\"\n return event == Event.INIT\n\n def apply(self, event: Event, state: State, logger: Logger) -> Optional[int]:\n \"\"\"Adds anti-aliasing filters to the maxpools and/or convolutions.\n\n Args:\n event (Event): the current event\n state (State): the current trainer state\n logger (Logger): the training logger\n \"\"\"\n assert state.model is not None\n\n apply_blurpool(state.model,\n optimizers=state.optimizers,\n replace_convs=self.replace_convs,\n replace_maxpools=self.replace_maxpools,\n blur_first=self.blur_first)\n self._log_results(event, state, logger)\n\n def _log_results(self, event: Event, state: State, logger: Logger) -> None:\n \"\"\"Logs the result of BlurPool application, including the number of layers that have been replaced.\"\"\"\n assert state.model is not None\n\n num_blurpool_layers = module_surgery.count_module_instances(state.model, BlurMaxPool2d)\n num_blurconv_layers = module_surgery.count_module_instances(state.model, BlurConv2d)\n\n # python logger\n log.info(f'Applied BlurPool to model {state.model.__class__.__name__} '\n f'with replace_maxpools={self.replace_maxpools}, '\n f'replace_convs={self.replace_convs}. 
'\n f'Model now has {num_blurpool_layers} BlurMaxPool2d '\n f'and {num_blurconv_layers} BlurConv2D layers.')\n\n logger.data_fit({\n 'blurpool/num_blurpool_layers': num_blurpool_layers,\n 'blurpool/num_blurconv_layers': num_blurconv_layers,\n })\n\n\ndef _log_surgery_result(model: torch.nn.Module):\n num_blurpool_layers = module_surgery.count_module_instances(model, BlurMaxPool2d)\n num_blurconv_layers = module_surgery.count_module_instances(model, BlurConv2d)\n log.info(f'Applied BlurPool to model {model.__class__.__name__}. '\n f'Model now has {num_blurpool_layers} BlurMaxPool2d '\n f'and {num_blurconv_layers} BlurConv2D layers.')\n\n\ndef _maybe_replace_strided_conv2d(module: torch.nn.Conv2d, module_index: int, blur_first: bool):\n if (np.max(module.stride) > 1 and module.in_channels >= 16):\n return BlurConv2d.from_conv2d(module, module_index, blur_first=blur_first)\n return None\n", "path": "composer/algorithms/blurpool/blurpool.py"}]} | 2,673 | 186 |
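The fix in the row above is purely a signature change: the docstring already advertised defaults of `True`, so the constructor now supplies them. The stripped-down illustration below avoids importing composer entirely; `BlurPoolSketch` is a toy stand-in, not the real algorithm class.

```python
class BlurPoolSketch:
    """Toy stand-in showing only the constructor defaults added by the patch."""

    def __init__(self, replace_convs: bool = True, replace_maxpools: bool = True,
                 blur_first: bool = True) -> None:
        self.replace_convs = replace_convs
        self.replace_maxpools = replace_maxpools
        self.blur_first = blur_first


# Before the patch, constructing the algorithm with no arguments raised a
# TypeError; with defaults in place it matches the documented behaviour.
bp = BlurPoolSketch()
print(bp.replace_convs, bp.replace_maxpools, bp.blur_first)  # True True True
```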
gh_patches_debug_5928 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-616 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to install with opentracing extras
I was following the [OpenTracing setup instructions](https://docs.datadoghq.com/tracing/advanced_usage/?tab=python#opentracing) but got a warning about missing extras:
```sh
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip list
Package Version
---------- -------
pip 18.0
setuptools 40.4.1
wheel 0.31.1
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> python --version
Python 2.7.14
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip --version
pip 18.0 from /home/sam/.local/share/virtualenvs/blah-YneZd-6L/lib/python2.7/site-packages/pip (python 2.7)
(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip install 'ddtrace[opentracing] == 0.14.0'
Collecting ddtrace[opentracing]==0.14.0
ddtrace 0.14.0 does not provide the extra 'opentracing'
Collecting msgpack-python (from ddtrace[opentracing]==0.14.0)
Collecting wrapt (from ddtrace[opentracing]==0.14.0)
Installing collected packages: msgpack-python, wrapt, ddtrace
Successfully installed ddtrace-0.14.0 msgpack-python-0.5.6 wrapt-1.10.11
```
> `ddtrace 0.14.0 does not provide the extra 'opentracing'`
Happens on Python 3.6 as well.
</issue>
<code>
[start of setup.py]
1 import os
2 import sys
3 import re
4
5 from setuptools import setup, find_packages
6 from setuptools.command.test import test as TestCommand
7
8
9 def get_version(package):
10 """
11 Return package version as listed in `__version__` in `__init__.py`.
12 This method prevents to import packages at setup-time.
13 """
14 init_py = open(os.path.join(package, '__init__.py')).read()
15 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
16
17
18 class Tox(TestCommand):
19
20 user_options = [('tox-args=', 'a', "Arguments to pass to tox")]
21
22 def initialize_options(self):
23 TestCommand.initialize_options(self)
24 self.tox_args = None
25
26 def finalize_options(self):
27 TestCommand.finalize_options(self)
28 self.test_args = []
29 self.test_suite = True
30
31 def run_tests(self):
32 # import here, cause outside the eggs aren't loaded
33 import tox
34 import shlex
35 args = self.tox_args
36 if args:
37 args = shlex.split(self.tox_args)
38 errno = tox.cmdline(args=args)
39 sys.exit(errno)
40
41
42 version = get_version('ddtrace')
43 # Append a suffix to the version for dev builds
44 if os.environ.get('VERSION_SUFFIX'):
45 version = '{v}+{s}'.format(
46 v=version,
47 s=os.environ.get('VERSION_SUFFIX'),
48 )
49
50 setup(
51 name='ddtrace',
52 version=version,
53 description='Datadog tracing code',
54 url='https://github.com/DataDog/dd-trace-py',
55 author='Datadog, Inc.',
56 author_email='[email protected]',
57 license='BSD',
58 packages=find_packages(exclude=['tests*']),
59 install_requires=[
60 "wrapt",
61 "msgpack-python",
62 ],
63 extra_requires={
64 # users can include opentracing by having:
65 # install_requires=["ddtrace[opentracing]", ...]
66 "opentracing": ["opentracing"],
67 },
68 # plugin tox
69 tests_require=['tox', 'flake8'],
70 cmdclass={'test': Tox},
71 entry_points={
72 'console_scripts': [
73 'ddtrace-run = ddtrace.commands.ddtrace_run:main'
74 ]
75 },
76 classifiers=[
77 'Programming Language :: Python',
78 'Programming Language :: Python :: 2.7',
79 'Programming Language :: Python :: 3.4',
80 'Programming Language :: Python :: 3.5',
81 'Programming Language :: Python :: 3.6',
82 ],
83 )
84
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,10 +60,10 @@
"wrapt",
"msgpack-python",
],
- extra_requires={
+ extras_require={
# users can include opentracing by having:
# install_requires=["ddtrace[opentracing]", ...]
- "opentracing": ["opentracing"],
+ "opentracing": ["opentracing>=2.0.0"],
},
# plugin tox
tests_require=['tox', 'flake8'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,10 +60,10 @@\n \"wrapt\",\n \"msgpack-python\",\n ],\n- extra_requires={\n+ extras_require={\n # users can include opentracing by having:\n # install_requires=[\"ddtrace[opentracing]\", ...]\n- \"opentracing\": [\"opentracing\"],\n+ \"opentracing\": [\"opentracing>=2.0.0\"],\n },\n # plugin tox\n tests_require=['tox', 'flake8'],\n", "issue": "Unable to install with opentracing extras\nI was following the [OpenTracing setup instructions](https://docs.datadoghq.com/tracing/advanced_usage/?tab=python#opentracing) but got a warning about missing extras:\r\n\r\n```sh\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip list\r\nPackage Version\r\n---------- -------\r\npip 18.0 \r\nsetuptools 40.4.1 \r\nwheel 0.31.1 \r\n\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> python --version\r\nPython 2.7.14\r\n\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip --version\r\npip 18.0 from /home/sam/.local/share/virtualenvs/blah-YneZd-6L/lib/python2.7/site-packages/pip (python 2.7)\r\n\r\n\r\n(blah-YneZd-6L) sam@sam-Q325UAR ~/t/blah> pip install 'ddtrace[opentracing] == 0.14.0'\r\nCollecting ddtrace[opentracing]==0.14.0\r\n ddtrace 0.14.0 does not provide the extra 'opentracing'\r\nCollecting msgpack-python (from ddtrace[opentracing]==0.14.0)\r\nCollecting wrapt (from ddtrace[opentracing]==0.14.0)\r\nInstalling collected packages: msgpack-python, wrapt, ddtrace\r\nSuccessfully installed ddtrace-0.14.0 msgpack-python-0.5.6 wrapt-1.10.11\r\n```\r\n\r\n> `ddtrace 0.14.0 does not provide the extra 'opentracing'`\r\n\r\nHappens on Python 3.6 as well.\n", "before_files": [{"content": "import os\nimport sys\nimport re\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `__init__.py`.\n This method prevents to import packages at setup-time.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\nclass Tox(TestCommand):\n\n user_options = [('tox-args=', 'a', \"Arguments to pass to tox\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.tox_args = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import tox\n import shlex\n args = self.tox_args\n if args:\n args = shlex.split(self.tox_args)\n errno = tox.cmdline(args=args)\n sys.exit(errno)\n\n\nversion = get_version('ddtrace')\n# Append a suffix to the version for dev builds\nif os.environ.get('VERSION_SUFFIX'):\n version = '{v}+{s}'.format(\n v=version,\n s=os.environ.get('VERSION_SUFFIX'),\n )\n\nsetup(\n name='ddtrace',\n version=version,\n description='Datadog tracing code',\n url='https://github.com/DataDog/dd-trace-py',\n author='Datadog, Inc.',\n author_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=['tests*']),\n install_requires=[\n \"wrapt\",\n \"msgpack-python\",\n ],\n extra_requires={\n # users can include opentracing by having:\n # install_requires=[\"ddtrace[opentracing]\", ...]\n \"opentracing\": [\"opentracing\"],\n },\n # plugin tox\n tests_require=['tox', 'flake8'],\n cmdclass={'test': Tox},\n entry_points={\n 'console_scripts': [\n 'ddtrace-run = ddtrace.commands.ddtrace_run:main'\n ]\n },\n classifiers=[\n 
'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}]} | 1,668 | 129 |
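The root cause in the row above is a mistyped setuptools keyword: `extra_requires` is not a recognised option, so the `opentracing` extra was never registered; the patch renames it to `extras_require` and pins `opentracing>=2.0.0`. Below is a minimal sketch of the corrected metadata for an otherwise hypothetical package (the name, version, and dependency lists are placeholders, not ddtrace's real setup).

```python
from setuptools import find_packages, setup

# A setup script like this is normally driven by pip or `python -m build`;
# the extra is then installed with:  pip install "example-pkg[opentracing]"
setup(
    name="example-pkg",
    version="0.0.1",
    packages=find_packages(),
    install_requires=["wrapt", "msgpack-python"],
    extras_require={
        "opentracing": ["opentracing>=2.0.0"],
    },
)
```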
gh_patches_debug_30674 | rasdani/github-patches | git_diff | mirumee__ariadne-181 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Example not working
I followed the example from the documentation
```python
from ariadne import QueryType, gql, make_executable_schema
from ariadne.asgi import GraphQL
type_defs = gql("""
type Query {
hello: String!
}
""")
# Create type instance for Query type defined in our schema...
query = QueryType()
# ...and assign our resolver function to its "hello" field.
@query.field("hello")
def resolve_hello(_, info):
request = info.context["request"]
user_agent = request.headers.get("user-agent", "guest")
return "Hello, %s!" % user_agent
schema = make_executable_schema(type_defs, query)
app = GraphQL(schema, debug=True)
```
I started the app with `uvicorn test:app`, which starts just fine.
But when I browse to http://localhost:8000/ it errors:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/jonas/.local/share/virtualenvs/api-ariadne-WWJK7Q5Z/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 372, in run_asgi
result = await asgi(self.receive, self.send)
File "/home/jonas/.local/share/virtualenvs/api-ariadne-WWJK7Q5Z/lib/python3.7/site-packages/ariadne/asgi.py", line 71, in handle_http
await response(receive, send)
TypeError: __call__() missing 1 required positional argument: 'send'
INFO: ('127.0.0.1', 42908) - "GET / HTTP/1.1" 500
```
I'm using ariadne 0.3.0 and tried with uvicorn version 0.7.1, 0.7.0 and 0.6.1
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Development Status :: 4 - Beta",
7 "Intended Audience :: Developers",
8 "License :: OSI Approved :: BSD License",
9 "Operating System :: OS Independent",
10 "Programming Language :: Python",
11 "Programming Language :: Python :: 3.6",
12 "Programming Language :: Python :: 3.7",
13 "Topic :: Software Development :: Libraries :: Python Modules",
14 ]
15
16 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
17 with open(README_PATH, "r") as f:
18 README = f.read()
19
20 setup(
21 name="ariadne",
22 author="Mirumee Software",
23 author_email="[email protected]",
24 description="Ariadne is a Python library for implementing GraphQL servers.",
25 long_description=README,
26 long_description_content_type="text/markdown",
27 license="BSD",
28 version="0.4.0",
29 url="https://github.com/mirumee/ariadne",
30 packages=["ariadne"],
31 package_data={"ariadne": ["py.typed"]},
32 install_requires=[
33 "graphql-core-next>=1.0.3",
34 "starlette>=0.12.0b3",
35 "typing_extensions>=3.6.0",
36 ],
37 classifiers=CLASSIFIERS,
38 platforms=["any"],
39 zip_safe=False,
40 )
41
[end of setup.py]
[start of ariadne/asgi.py]
1 import asyncio
2 from typing import Any, AsyncGenerator, Dict, List, Optional, Tuple, cast
3
4 from graphql import GraphQLError, GraphQLSchema
5 from starlette.requests import Request
6 from starlette.responses import HTMLResponse, JSONResponse, Response
7 from starlette.types import Receive, Scope, Send
8 from starlette.websockets import WebSocket, WebSocketState, WebSocketDisconnect
9
10 from .constants import DATA_TYPE_JSON, PLAYGROUND_HTML
11 from .exceptions import HttpBadRequestError, HttpError
12 from .format_error import format_error
13 from .graphql import graphql, subscribe
14 from .logger import log_error
15 from .types import ContextValue, ErrorFormatter, RootValue
16
17 GQL_CONNECTION_INIT = "connection_init" # Client -> Server
18 GQL_CONNECTION_ACK = "connection_ack" # Server -> Client
19 GQL_CONNECTION_ERROR = "connection_error" # Server -> Client
20
21 # NOTE: The keep alive message type does not follow the standard due to connection optimizations
22 GQL_CONNECTION_KEEP_ALIVE = "ka" # Server -> Client
23
24 GQL_CONNECTION_TERMINATE = "connection_terminate" # Client -> Server
25 GQL_START = "start" # Client -> Server
26 GQL_DATA = "data" # Server -> Client
27 GQL_ERROR = "error" # Server -> Client
28 GQL_COMPLETE = "complete" # Server -> Client
29 GQL_STOP = "stop" # Client -> Server
30
31
32 class GraphQL:
33 def __init__(
34 self,
35 schema: GraphQLSchema,
36 *,
37 context_value: Optional[ContextValue] = None,
38 root_value: Optional[RootValue] = None,
39 debug: bool = False,
40 logger: Optional[str] = None,
41 error_formatter: ErrorFormatter = format_error,
42 keepalive: float = None,
43 ):
44 self.context_value = context_value
45 self.root_value = root_value
46 self.debug = debug
47 self.logger = logger
48 self.error_formatter = error_formatter
49 self.keepalive = keepalive
50 self.schema = schema
51
52 async def __call__(self, scope: Scope, receive: Receive, send: Send):
53 if scope["type"] == "http":
54 await self.handle_http(scope=scope, receive=receive, send=send)
55 elif scope["type"] == "websocket":
56 await self.handle_websocket(scope=scope, receive=receive, send=send)
57 else:
58 raise ValueError("Unknown scope type: %r" % (scope["type"],))
59
60 async def get_context_for_request(self, request: Any) -> Any:
61 if callable(self.context_value):
62 return self.context_value(request)
63 return self.context_value or {"request": request}
64
65 async def handle_http(self, scope: Scope, receive: Receive, send: Send):
66 request = Request(scope=scope, receive=receive)
67 if request.method == "GET":
68 response = await self.render_playground(request)
69 elif request.method == "POST":
70 response = await self.graphql_http_server(request)
71 else:
72 response = Response(status_code=405)
73 await response(scope, receive, send)
74
75 async def handle_websocket(self, scope: Scope, receive: Receive, send: Send):
76 websocket = WebSocket(scope=scope, receive=receive, send=send)
77 await self.websocket_server(websocket)
78
79 async def extract_data_from_request(
80 self, request: Request
81 ) -> Tuple[str, Optional[dict], Optional[str]]:
82 if request.headers.get("Content-Type") != DATA_TYPE_JSON:
83 raise HttpBadRequestError(
84 "Posted content must be of type {}".format(DATA_TYPE_JSON)
85 )
86 try:
87 return await request.json()
88 except ValueError:
89 raise HttpBadRequestError("Request body is not a valid JSON")
90
91 async def render_playground( # pylint: disable=unused-argument
92 self, request: Request
93 ) -> HTMLResponse:
94 return HTMLResponse(PLAYGROUND_HTML)
95
96 async def graphql_http_server(self, request: Request) -> Response:
97 try:
98 data = await self.extract_data_from_request(request)
99 except HttpError as error:
100 return Response(error.message or error.status, status_code=400)
101
102 context_value = await self.get_context_for_request(request)
103 success, response = await graphql(
104 self.schema,
105 data,
106 context_value=context_value,
107 root_value=self.root_value,
108 debug=self.debug,
109 logger=self.logger,
110 error_formatter=self.error_formatter,
111 )
112 status_code = 200 if success else 400
113 return JSONResponse(response, status_code=status_code)
114
115 async def websocket_server(self, websocket: WebSocket) -> None:
116 subscriptions: Dict[str, AsyncGenerator] = {}
117 await websocket.accept("graphql-ws")
118 try:
119 while websocket.application_state != WebSocketState.DISCONNECTED:
120 message = await websocket.receive_json()
121 await self.handle_websocket_message(message, websocket, subscriptions)
122 except WebSocketDisconnect:
123 pass
124 finally:
125 for operation_id in subscriptions:
126 await subscriptions[operation_id].aclose()
127
128 async def handle_websocket_message( # pylint: disable=too-complex
129 self,
130 message: dict,
131 websocket: WebSocket,
132 subscriptions: Dict[str, AsyncGenerator],
133 ):
134 operation_id = cast(str, message.get("id"))
135 message_type = cast(str, message.get("type"))
136
137 if message_type == GQL_CONNECTION_INIT:
138 await websocket.send_json({"type": GQL_CONNECTION_ACK})
139 asyncio.ensure_future(self.keep_websocket_alive(websocket))
140 elif message_type == GQL_CONNECTION_TERMINATE:
141 await websocket.close()
142 elif message_type == GQL_START:
143 await self.start_websocket_subscription(
144 message.get("payload"), operation_id, websocket, subscriptions
145 )
146 elif message_type == GQL_STOP:
147 if operation_id in subscriptions:
148 await subscriptions[operation_id].aclose()
149 del subscriptions[operation_id]
150
151 async def keep_websocket_alive(self, websocket: WebSocket):
152 if not self.keepalive:
153 return
154 while websocket.application_state != WebSocketState.DISCONNECTED:
155 try:
156 await websocket.send_json({"type": GQL_CONNECTION_KEEP_ALIVE})
157 except WebSocketDisconnect:
158 return
159 await asyncio.sleep(self.keepalive)
160
161 async def start_websocket_subscription(
162 self,
163 data: Any,
164 operation_id: str,
165 websocket: WebSocket,
166 subscriptions: Dict[str, AsyncGenerator],
167 ):
168 context_value = await self.get_context_for_request(websocket)
169 success, results = await subscribe(
170 self.schema,
171 data,
172 context_value=context_value,
173 root_value=self.root_value,
174 debug=self.debug,
175 logger=self.logger,
176 error_formatter=self.error_formatter,
177 )
178 if not success:
179 results = cast(List[dict], results)
180 await websocket.send_json(
181 {"type": GQL_ERROR, "id": operation_id, "payload": results[0]}
182 )
183 else:
184 results = cast(AsyncGenerator, results)
185 subscriptions[operation_id] = results
186 asyncio.ensure_future(
187 self.observe_async_results(results, operation_id, websocket)
188 )
189
190 async def observe_async_results( # pylint: disable=too-complex
191 self, results: AsyncGenerator, operation_id: str, websocket: WebSocket
192 ) -> None:
193 try:
194 async for result in results:
195 payload = {}
196 if result.data:
197 payload["data"] = result.data
198 if result.errors:
199 for error in result.errors:
200 log_error(error, self.logger)
201 payload["errors"] = [
202 self.error_formatter(error, self.debug)
203 for error in result.errors
204 ]
205 await websocket.send_json(
206 {"type": GQL_DATA, "id": operation_id, "payload": payload}
207 )
208 except Exception as error: # pylint: disable=broad-except
209 if not isinstance(error, GraphQLError):
210 error = GraphQLError(str(error), original_error=error)
211 log_error(error, self.logger)
212 payload = {"errors": [self.error_formatter(error, self.debug)]}
213 await websocket.send_json(
214 {"type": GQL_DATA, "id": operation_id, "payload": payload}
215 )
216
217 if websocket.application_state != WebSocketState.DISCONNECTED:
218 await websocket.send_json({"type": GQL_COMPLETE, "id": operation_id})
219
[end of ariadne/asgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/asgi.py b/ariadne/asgi.py
--- a/ariadne/asgi.py
+++ b/ariadne/asgi.py
@@ -3,7 +3,7 @@
from graphql import GraphQLError, GraphQLSchema
from starlette.requests import Request
-from starlette.responses import HTMLResponse, JSONResponse, Response
+from starlette.responses import HTMLResponse, JSONResponse, PlainTextResponse, Response
from starlette.types import Receive, Scope, Send
from starlette.websockets import WebSocket, WebSocketState, WebSocketDisconnect
@@ -90,14 +90,14 @@
async def render_playground( # pylint: disable=unused-argument
self, request: Request
- ) -> HTMLResponse:
+ ) -> Response:
return HTMLResponse(PLAYGROUND_HTML)
async def graphql_http_server(self, request: Request) -> Response:
try:
data = await self.extract_data_from_request(request)
except HttpError as error:
- return Response(error.message or error.status, status_code=400)
+ return PlainTextResponse(error.message or error.status, status_code=400)
context_value = await self.get_context_for_request(request)
success, response = await graphql(
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
package_data={"ariadne": ["py.typed"]},
install_requires=[
"graphql-core-next>=1.0.3",
- "starlette>=0.12.0b3",
+ "starlette<0.13",
"typing_extensions>=3.6.0",
],
classifiers=CLASSIFIERS,
| {"golden_diff": "diff --git a/ariadne/asgi.py b/ariadne/asgi.py\n--- a/ariadne/asgi.py\n+++ b/ariadne/asgi.py\n@@ -3,7 +3,7 @@\n \n from graphql import GraphQLError, GraphQLSchema\n from starlette.requests import Request\n-from starlette.responses import HTMLResponse, JSONResponse, Response\n+from starlette.responses import HTMLResponse, JSONResponse, PlainTextResponse, Response\n from starlette.types import Receive, Scope, Send\n from starlette.websockets import WebSocket, WebSocketState, WebSocketDisconnect\n \n@@ -90,14 +90,14 @@\n \n async def render_playground( # pylint: disable=unused-argument\n self, request: Request\n- ) -> HTMLResponse:\n+ ) -> Response:\n return HTMLResponse(PLAYGROUND_HTML)\n \n async def graphql_http_server(self, request: Request) -> Response:\n try:\n data = await self.extract_data_from_request(request)\n except HttpError as error:\n- return Response(error.message or error.status, status_code=400)\n+ return PlainTextResponse(error.message or error.status, status_code=400)\n \n context_value = await self.get_context_for_request(request)\n success, response = await graphql(\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n package_data={\"ariadne\": [\"py.typed\"]},\n install_requires=[\n \"graphql-core-next>=1.0.3\",\n- \"starlette>=0.12.0b3\",\n+ \"starlette<0.13\",\n \"typing_extensions>=3.6.0\",\n ],\n classifiers=CLASSIFIERS,\n", "issue": "Example not working\nI followed the example from the documentation\r\n```python\r\nfrom ariadne import QueryType, gql, make_executable_schema\r\nfrom ariadne.asgi import GraphQL\r\n\r\ntype_defs = gql(\"\"\"\r\n type Query {\r\n hello: String!\r\n }\r\n\"\"\")\r\n\r\n# Create type instance for Query type defined in our schema...\r\nquery = QueryType()\r\n\r\n# ...and assign our resolver function to its \"hello\" field.\r\[email protected](\"hello\")\r\ndef resolve_hello(_, info):\r\n request = info.context[\"request\"]\r\n user_agent = request.headers.get(\"user-agent\", \"guest\")\r\n return \"Hello, %s!\" % user_agent\r\n\r\nschema = make_executable_schema(type_defs, query)\r\napp = GraphQL(schema, debug=True)\r\n```\r\nI started the app with `uvicorn test:app`, which starts just fine.\r\nBut when I browse to http://localhost:8000/ it errors:\r\n```ERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/home/jonas/.local/share/virtualenvs/api-ariadne-WWJK7Q5Z/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 372, in run_asgi\r\n result = await asgi(self.receive, self.send)\r\n File \"/home/jonas/.local/share/virtualenvs/api-ariadne-WWJK7Q5Z/lib/python3.7/site-packages/ariadne/asgi.py\", line 71, in handle_http\r\n await response(receive, send)\r\nTypeError: __call__() missing 1 required positional argument: 'send'\r\nINFO: ('127.0.0.1', 42908) - \"GET / HTTP/1.1\" 500\r\n``` \r\nI'm using ariadne 0.3.0 and tried with uvicorn version 0.7.1, 0.7.0 and 0.6.1\n", "before_files": [{"content": "#! 
/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.4.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n package_data={\"ariadne\": [\"py.typed\"]},\n install_requires=[\n \"graphql-core-next>=1.0.3\",\n \"starlette>=0.12.0b3\",\n \"typing_extensions>=3.6.0\",\n ],\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}, {"content": "import asyncio\nfrom typing import Any, AsyncGenerator, Dict, List, Optional, Tuple, cast\n\nfrom graphql import GraphQLError, GraphQLSchema\nfrom starlette.requests import Request\nfrom starlette.responses import HTMLResponse, JSONResponse, Response\nfrom starlette.types import Receive, Scope, Send\nfrom starlette.websockets import WebSocket, WebSocketState, WebSocketDisconnect\n\nfrom .constants import DATA_TYPE_JSON, PLAYGROUND_HTML\nfrom .exceptions import HttpBadRequestError, HttpError\nfrom .format_error import format_error\nfrom .graphql import graphql, subscribe\nfrom .logger import log_error\nfrom .types import ContextValue, ErrorFormatter, RootValue\n\nGQL_CONNECTION_INIT = \"connection_init\" # Client -> Server\nGQL_CONNECTION_ACK = \"connection_ack\" # Server -> Client\nGQL_CONNECTION_ERROR = \"connection_error\" # Server -> Client\n\n# NOTE: The keep alive message type does not follow the standard due to connection optimizations\nGQL_CONNECTION_KEEP_ALIVE = \"ka\" # Server -> Client\n\nGQL_CONNECTION_TERMINATE = \"connection_terminate\" # Client -> Server\nGQL_START = \"start\" # Client -> Server\nGQL_DATA = \"data\" # Server -> Client\nGQL_ERROR = \"error\" # Server -> Client\nGQL_COMPLETE = \"complete\" # Server -> Client\nGQL_STOP = \"stop\" # Client -> Server\n\n\nclass GraphQL:\n def __init__(\n self,\n schema: GraphQLSchema,\n *,\n context_value: Optional[ContextValue] = None,\n root_value: Optional[RootValue] = None,\n debug: bool = False,\n logger: Optional[str] = None,\n error_formatter: ErrorFormatter = format_error,\n keepalive: float = None,\n ):\n self.context_value = context_value\n self.root_value = root_value\n self.debug = debug\n self.logger = logger\n self.error_formatter = error_formatter\n self.keepalive = keepalive\n self.schema = schema\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send):\n if scope[\"type\"] == \"http\":\n await self.handle_http(scope=scope, receive=receive, send=send)\n elif scope[\"type\"] == \"websocket\":\n await self.handle_websocket(scope=scope, receive=receive, send=send)\n else:\n raise ValueError(\"Unknown scope type: %r\" % (scope[\"type\"],))\n\n async def get_context_for_request(self, request: Any) -> Any:\n if callable(self.context_value):\n return self.context_value(request)\n return 
self.context_value or {\"request\": request}\n\n async def handle_http(self, scope: Scope, receive: Receive, send: Send):\n request = Request(scope=scope, receive=receive)\n if request.method == \"GET\":\n response = await self.render_playground(request)\n elif request.method == \"POST\":\n response = await self.graphql_http_server(request)\n else:\n response = Response(status_code=405)\n await response(scope, receive, send)\n\n async def handle_websocket(self, scope: Scope, receive: Receive, send: Send):\n websocket = WebSocket(scope=scope, receive=receive, send=send)\n await self.websocket_server(websocket)\n\n async def extract_data_from_request(\n self, request: Request\n ) -> Tuple[str, Optional[dict], Optional[str]]:\n if request.headers.get(\"Content-Type\") != DATA_TYPE_JSON:\n raise HttpBadRequestError(\n \"Posted content must be of type {}\".format(DATA_TYPE_JSON)\n )\n try:\n return await request.json()\n except ValueError:\n raise HttpBadRequestError(\"Request body is not a valid JSON\")\n\n async def render_playground( # pylint: disable=unused-argument\n self, request: Request\n ) -> HTMLResponse:\n return HTMLResponse(PLAYGROUND_HTML)\n\n async def graphql_http_server(self, request: Request) -> Response:\n try:\n data = await self.extract_data_from_request(request)\n except HttpError as error:\n return Response(error.message or error.status, status_code=400)\n\n context_value = await self.get_context_for_request(request)\n success, response = await graphql(\n self.schema,\n data,\n context_value=context_value,\n root_value=self.root_value,\n debug=self.debug,\n logger=self.logger,\n error_formatter=self.error_formatter,\n )\n status_code = 200 if success else 400\n return JSONResponse(response, status_code=status_code)\n\n async def websocket_server(self, websocket: WebSocket) -> None:\n subscriptions: Dict[str, AsyncGenerator] = {}\n await websocket.accept(\"graphql-ws\")\n try:\n while websocket.application_state != WebSocketState.DISCONNECTED:\n message = await websocket.receive_json()\n await self.handle_websocket_message(message, websocket, subscriptions)\n except WebSocketDisconnect:\n pass\n finally:\n for operation_id in subscriptions:\n await subscriptions[operation_id].aclose()\n\n async def handle_websocket_message( # pylint: disable=too-complex\n self,\n message: dict,\n websocket: WebSocket,\n subscriptions: Dict[str, AsyncGenerator],\n ):\n operation_id = cast(str, message.get(\"id\"))\n message_type = cast(str, message.get(\"type\"))\n\n if message_type == GQL_CONNECTION_INIT:\n await websocket.send_json({\"type\": GQL_CONNECTION_ACK})\n asyncio.ensure_future(self.keep_websocket_alive(websocket))\n elif message_type == GQL_CONNECTION_TERMINATE:\n await websocket.close()\n elif message_type == GQL_START:\n await self.start_websocket_subscription(\n message.get(\"payload\"), operation_id, websocket, subscriptions\n )\n elif message_type == GQL_STOP:\n if operation_id in subscriptions:\n await subscriptions[operation_id].aclose()\n del subscriptions[operation_id]\n\n async def keep_websocket_alive(self, websocket: WebSocket):\n if not self.keepalive:\n return\n while websocket.application_state != WebSocketState.DISCONNECTED:\n try:\n await websocket.send_json({\"type\": GQL_CONNECTION_KEEP_ALIVE})\n except WebSocketDisconnect:\n return\n await asyncio.sleep(self.keepalive)\n\n async def start_websocket_subscription(\n self,\n data: Any,\n operation_id: str,\n websocket: WebSocket,\n subscriptions: Dict[str, AsyncGenerator],\n ):\n context_value = await 
self.get_context_for_request(websocket)\n success, results = await subscribe(\n self.schema,\n data,\n context_value=context_value,\n root_value=self.root_value,\n debug=self.debug,\n logger=self.logger,\n error_formatter=self.error_formatter,\n )\n if not success:\n results = cast(List[dict], results)\n await websocket.send_json(\n {\"type\": GQL_ERROR, \"id\": operation_id, \"payload\": results[0]}\n )\n else:\n results = cast(AsyncGenerator, results)\n subscriptions[operation_id] = results\n asyncio.ensure_future(\n self.observe_async_results(results, operation_id, websocket)\n )\n\n async def observe_async_results( # pylint: disable=too-complex\n self, results: AsyncGenerator, operation_id: str, websocket: WebSocket\n ) -> None:\n try:\n async for result in results:\n payload = {}\n if result.data:\n payload[\"data\"] = result.data\n if result.errors:\n for error in result.errors:\n log_error(error, self.logger)\n payload[\"errors\"] = [\n self.error_formatter(error, self.debug)\n for error in result.errors\n ]\n await websocket.send_json(\n {\"type\": GQL_DATA, \"id\": operation_id, \"payload\": payload}\n )\n except Exception as error: # pylint: disable=broad-except\n if not isinstance(error, GraphQLError):\n error = GraphQLError(str(error), original_error=error)\n log_error(error, self.logger)\n payload = {\"errors\": [self.error_formatter(error, self.debug)]}\n await websocket.send_json(\n {\"type\": GQL_DATA, \"id\": operation_id, \"payload\": payload}\n )\n\n if websocket.application_state != WebSocketState.DISCONNECTED:\n await websocket.send_json({\"type\": GQL_COMPLETE, \"id\": operation_id})\n", "path": "ariadne/asgi.py"}]} | 3,685 | 387 |
gh_patches_debug_33594 | rasdani/github-patches | git_diff | saleor__saleor-5530 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ProductCreate weight mutation


productCreate mutation
I followed the "amount unit" as said in the comment, but returns a "'StringValue(value='10.00 kg')' value must be a float."
Sorry just a beginner
</issue>
<code>
[start of saleor/graphql/core/scalars.py]
1 import decimal
2
3 import graphene
4 from measurement.measures import Weight
5
6 from ...core.weight import convert_weight, get_default_weight_unit
7
8
9 class Decimal(graphene.Float):
10 """Custom Decimal implementation.
11
12 Returns Decimal as a float in the API,
13 parses float to the Decimal on the way back.
14 """
15
16 @staticmethod
17 def parse_literal(node):
18 try:
19 return decimal.Decimal(node.value)
20 except decimal.DecimalException:
21 return None
22
23 @staticmethod
24 def parse_value(value):
25 try:
26 # Converting the float to str before parsing it to Decimal is
27 # necessary to keep the decimal places as typed
28 value = str(value)
29 return decimal.Decimal(value)
30 except decimal.DecimalException:
31 return None
32
33
34 class WeightScalar(graphene.Scalar):
35 @staticmethod
36 def parse_value(value):
37 # Expects value to be a string "amount unit" separated by a single
38 # space.
39 try:
40 value = decimal.Decimal(value)
41 except decimal.DecimalException:
42 return None
43 default_unit = get_default_weight_unit()
44 return Weight(**{default_unit: value})
45
46 @staticmethod
47 def serialize(weight):
48 if isinstance(weight, Weight):
49 default_unit = get_default_weight_unit()
50 if weight.unit != default_unit:
51 weight = convert_weight(weight, default_unit)
52 return str(weight)
53 return None
54
55 @staticmethod
56 def parse_literal(node):
57 return node
58
[end of saleor/graphql/core/scalars.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/graphql/core/scalars.py b/saleor/graphql/core/scalars.py
--- a/saleor/graphql/core/scalars.py
+++ b/saleor/graphql/core/scalars.py
@@ -1,6 +1,8 @@
import decimal
import graphene
+from graphql.language import ast
+from graphql.error import GraphQLError
from measurement.measures import Weight
from ...core.weight import convert_weight, get_default_weight_unit
@@ -34,14 +36,14 @@
class WeightScalar(graphene.Scalar):
@staticmethod
def parse_value(value):
- # Expects value to be a string "amount unit" separated by a single
- # space.
- try:
- value = decimal.Decimal(value)
- except decimal.DecimalException:
- return None
- default_unit = get_default_weight_unit()
- return Weight(**{default_unit: value})
+ weight = None
+ if isinstance(value, dict):
+ weight = Weight(**{value["unit"]: value["value"]})
+ else:
+ weight = WeightScalar.parse_decimal(value)
+ if not weight:
+ raise GraphQLError(f"Unsupported value: {value}")
+ return weight
@staticmethod
def serialize(weight):
@@ -54,4 +56,35 @@
@staticmethod
def parse_literal(node):
- return node
+ weight = None
+ if isinstance(node, ast.ObjectValue):
+ weight = WeightScalar.parse_literal_object(node)
+ else:
+ weight = WeightScalar.parse_decimal(node.value)
+ if not weight:
+ raise GraphQLError(f"Unsupported value: {node.value}")
+ return weight
+
+ @staticmethod
+ def parse_decimal(value):
+ try:
+ value = decimal.Decimal(value)
+ except decimal.DecimalException:
+ return None
+ default_unit = get_default_weight_unit()
+ return Weight(**{default_unit: value})
+
+ @staticmethod
+ def parse_literal_object(node):
+ value = 0
+ unit = get_default_weight_unit()
+
+ for field in node.fields:
+ if field.name.value == "value":
+ try:
+ value = decimal.Decimal(field.value.value)
+ except decimal.DecimalException:
+ raise GraphQLError(f"Unsupported value: {field.value.value}")
+ if field.name.value == "unit":
+ unit = field.value.value
+ return Weight(**{unit: value})
| {"golden_diff": "diff --git a/saleor/graphql/core/scalars.py b/saleor/graphql/core/scalars.py\n--- a/saleor/graphql/core/scalars.py\n+++ b/saleor/graphql/core/scalars.py\n@@ -1,6 +1,8 @@\n import decimal\n \n import graphene\n+from graphql.language import ast\n+from graphql.error import GraphQLError\n from measurement.measures import Weight\n \n from ...core.weight import convert_weight, get_default_weight_unit\n@@ -34,14 +36,14 @@\n class WeightScalar(graphene.Scalar):\n @staticmethod\n def parse_value(value):\n- # Expects value to be a string \"amount unit\" separated by a single\n- # space.\n- try:\n- value = decimal.Decimal(value)\n- except decimal.DecimalException:\n- return None\n- default_unit = get_default_weight_unit()\n- return Weight(**{default_unit: value})\n+ weight = None\n+ if isinstance(value, dict):\n+ weight = Weight(**{value[\"unit\"]: value[\"value\"]})\n+ else:\n+ weight = WeightScalar.parse_decimal(value)\n+ if not weight:\n+ raise GraphQLError(f\"Unsupported value: {value}\")\n+ return weight\n \n @staticmethod\n def serialize(weight):\n@@ -54,4 +56,35 @@\n \n @staticmethod\n def parse_literal(node):\n- return node\n+ weight = None\n+ if isinstance(node, ast.ObjectValue):\n+ weight = WeightScalar.parse_literal_object(node)\n+ else:\n+ weight = WeightScalar.parse_decimal(node.value)\n+ if not weight:\n+ raise GraphQLError(f\"Unsupported value: {node.value}\")\n+ return weight\n+\n+ @staticmethod\n+ def parse_decimal(value):\n+ try:\n+ value = decimal.Decimal(value)\n+ except decimal.DecimalException:\n+ return None\n+ default_unit = get_default_weight_unit()\n+ return Weight(**{default_unit: value})\n+\n+ @staticmethod\n+ def parse_literal_object(node):\n+ value = 0\n+ unit = get_default_weight_unit()\n+\n+ for field in node.fields:\n+ if field.name.value == \"value\":\n+ try:\n+ value = decimal.Decimal(field.value.value)\n+ except decimal.DecimalException:\n+ raise GraphQLError(f\"Unsupported value: {field.value.value}\")\n+ if field.name.value == \"unit\":\n+ unit = field.value.value\n+ return Weight(**{unit: value})\n", "issue": "ProductCreate weight mutation\n\r\n\r\n\r\n\r\n\r\nproductCreate mutation\r\n\r\nI followed the \"amount unit\" as said in the comment, but returns a \"'StringValue(value='10.00 kg')' value must be a float.\"\r\nSorry just a beginner\n", "before_files": [{"content": "import decimal\n\nimport graphene\nfrom measurement.measures import Weight\n\nfrom ...core.weight import convert_weight, get_default_weight_unit\n\n\nclass Decimal(graphene.Float):\n \"\"\"Custom Decimal implementation.\n\n Returns Decimal as a float in the API,\n parses float to the Decimal on the way back.\n \"\"\"\n\n @staticmethod\n def parse_literal(node):\n try:\n return decimal.Decimal(node.value)\n except decimal.DecimalException:\n return None\n\n @staticmethod\n def parse_value(value):\n try:\n # Converting the float to str before parsing it to Decimal is\n # necessary to keep the decimal places as typed\n value = str(value)\n return decimal.Decimal(value)\n except decimal.DecimalException:\n return None\n\n\nclass WeightScalar(graphene.Scalar):\n @staticmethod\n def parse_value(value):\n # Expects value to be a string \"amount unit\" separated by a single\n # space.\n try:\n value = decimal.Decimal(value)\n except decimal.DecimalException:\n return None\n default_unit = get_default_weight_unit()\n return Weight(**{default_unit: value})\n\n @staticmethod\n def serialize(weight):\n if isinstance(weight, Weight):\n default_unit = get_default_weight_unit()\n if 
weight.unit != default_unit:\n weight = convert_weight(weight, default_unit)\n return str(weight)\n return None\n\n @staticmethod\n def parse_literal(node):\n return node\n", "path": "saleor/graphql/core/scalars.py"}]} | 1,128 | 544 |
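The patch in this row makes `WeightScalar` accept either a bare number (interpreted in the default weight unit) or a `{unit, value}` object, rejecting anything else with a `GraphQLError` — the reporter followed the old code comment's `"amount unit"` string form, which the parser never actually handled. Below is a minimal standalone sketch of that parsing logic; `parse_weight`, the hard-coded `DEFAULT_UNIT`, and the plain `ValueError` are illustrative stand-ins for Saleor's scalar class, `get_default_weight_unit()`, and `GraphQLError`, not the project's actual API.

```
import decimal

from measurement.measures import Weight

DEFAULT_UNIT = "kg"  # stand-in for Saleor's configurable default weight unit


def parse_weight(value):
    if isinstance(value, dict):
        # Object form: {"unit": "g", "value": 500}
        return Weight(**{value["unit"]: value["value"]})
    try:
        # Bare number or numeric string: interpret it in the default unit
        return Weight(**{DEFAULT_UNIT: decimal.Decimal(value)})
    except decimal.DecimalException:
        raise ValueError(f"Unsupported weight value: {value!r}")


print(parse_weight(10))                           # weight of 10 in the default unit
print(parse_weight({"unit": "g", "value": 500}))  # weight of 500 g
try:
    parse_weight("10.00 kg")                      # the old "amount unit" string form
except ValueError as exc:
    print(exc)                                    # rejected with a clear error instead of a crash
```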
gh_patches_debug_23896 | rasdani/github-patches | git_diff | rdmorganiser__rdmo-524 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sorting causes problems with import
### Description / Beschreibung
When using different `uri_prefix`es for, e.g. a domain import, the sorting by `uri` destroys the order in the file, and parent Attributes are imported *after* their children (with a different `uri_prefix` earlier in the alphabet). This is the problematic line:
https://github.com/rdmorganiser/rdmo/blob/master/rdmo/core/xml.py#L52
### Expected behaviour / Erwartetes Verhalten
I am not sure if we could get rid of the sorting, we could also sort by `path` (which would give the field some meaning in the xml again). Ideas? @triole @MyPyDavid
</issue>
<code>
[start of rdmo/core/xml.py]
1 import logging
2 import re
3
4 import defusedxml.ElementTree as ET
5
6 log = logging.getLogger(__name__)
7
8
9 def read_xml_file(file_name):
10 try:
11 return ET.parse(file_name).getroot()
12 except Exception as e:
13 log.error('Xml parsing error: ' + str(e))
14
15
16 def parse_xml_string(string):
17 try:
18 return ET.fromstring(string)
19 except Exception as e:
20 log.error('Xml parsing error: ' + str(e))
21
22
23 def flat_xml_to_elements(treenode):
24 elements = []
25 ns_map = get_ns_map(treenode)
26 uri_attrib = get_ns_tag('dc:uri', ns_map)
27
28 for node in treenode:
29
30 element = {
31 'uri': get_uri(node, ns_map),
32 'type': node.tag
33 }
34
35 for subnode in node:
36 tag = strip_ns(subnode.tag, ns_map)
37
38 if uri_attrib in subnode.attrib:
39 # this node has an uri!
40 element[tag] = subnode.attrib[uri_attrib]
41 elif 'lang' in subnode.attrib:
42 # this node has the lang attribute!
43 element['%s_%s' % (tag, subnode.attrib['lang'])] = subnode.text
44 elif list(subnode):
45 # this node is a list!
46 element[tag] = [subsubnode.attrib[uri_attrib] for subsubnode in subnode]
47 else:
48 element[tag] = subnode.text
49
50 elements.append(element)
51
52 elements = sort_elements_by_key(elements, 'uri')
53 return elements
54
55
56 def get_ns_tag(tag, ns_map):
57 tag_split = tag.split(':')
58 try:
59 return '{%s}%s' % (ns_map[tag_split[0]], tag_split[1])
60 except KeyError:
61 return None
62
63
64 def get_ns_map(treenode):
65 ns_map = {}
66 treestring = ET.tostring(treenode, encoding='utf8', method='xml')
67
68 for match in re.finditer(r'(xmlns:)(.*?)(=")(.*?)(")', str(treestring)):
69 if match:
70 ns_map[match.group(2)] = match.group(4)
71
72 return ns_map
73
74
75 def get_uri(treenode, ns_map):
76 if treenode is not None:
77 ns_tag = get_ns_tag('dc:uri', ns_map)
78 if ns_tag is not None:
79 return treenode.attrib.get(ns_tag)
80
81
82 def strip_ns(tag, ns_map):
83 for ns in ns_map.values():
84 if tag.startswith('{%s}' % ns):
85 return tag.replace('{%s}' % ns, '')
86 return tag
87
88
89 def filter_elements_by_type(elements, element_type):
90 for element in elements:
91 if element['type'] == element_type:
92 yield element
93
94
95 def sort_elements_by_key(dictlist, key, reverse=False):
96 return sorted(dictlist, key=lambda k: k[key], reverse=reverse)
97
[end of rdmo/core/xml.py]
[start of rdmo/core/constants.py]
1 from django.utils.translation import gettext_lazy as _
2
3 VALUE_TYPE_TEXT = 'text'
4 VALUE_TYPE_URL = 'url'
5 VALUE_TYPE_INTEGER = 'integer'
6 VALUE_TYPE_FLOAT = 'float'
7 VALUE_TYPE_BOOLEAN = 'boolean'
8 VALUE_TYPE_DATETIME = 'datetime'
9 VALUE_TYPE_OPTIONS = 'option'
10 VALUE_TYPE_EMAIL = 'email'
11 VALUE_TYPE_PHONE = 'phone'
12 VALUE_TYPE_FILE = 'file'
13 VALUE_TYPE_CHOICES = (
14 (VALUE_TYPE_TEXT, _('Text')),
15 (VALUE_TYPE_URL, _('URL')),
16 (VALUE_TYPE_INTEGER, _('Integer')),
17 (VALUE_TYPE_FLOAT, _('Float')),
18 (VALUE_TYPE_BOOLEAN, _('Boolean')),
19 (VALUE_TYPE_DATETIME, _('Datetime')),
20 (VALUE_TYPE_EMAIL, _('E-mail')),
21 (VALUE_TYPE_PHONE, _('Phone')),
22 (VALUE_TYPE_OPTIONS, _('Option')),
23 (VALUE_TYPE_FILE, _('File'))
24 )
25
26 PERMISSIONS = {
27 'condition': (
28 'conditions.add_condition', 'conditions.change_condition', 'conditions.delete_condition'
29 ),
30 'attribute': (
31 'domain.add_attribute', 'domain.change_attribute', 'domain.delete_attribute'
32 ),
33 'optionset': (
34 'options.add_optionset', 'options.change_optionset', 'options.delete_optionset'
35 ),
36 'option': (
37 'options.add_option', 'options.change_option', 'options.delete_option'
38 ),
39 'catalog': (
40 'questions.add_catalog', 'questions.change_catalog', 'questions.delete_catalog'
41 ),
42 'section': (
43 'questions.add_section', 'questions.change_section', 'questions.delete_section'
44 ),
45 'questionset': (
46 'questions.add_questionset', 'questions.change_questionset', 'questions.delete_questionset'
47 ),
48 'question': (
49 'questions.add_question', 'questions.change_question', 'questions.delete_question'
50 ),
51 'task': (
52 'tasks.add_task', 'tasks.change_task', 'tasks.delete_task'
53 ),
54 'view': (
55 'views.add_view', 'views.change_view', 'views.delete_view'
56 )
57 }
58
[end of rdmo/core/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rdmo/core/constants.py b/rdmo/core/constants.py
--- a/rdmo/core/constants.py
+++ b/rdmo/core/constants.py
@@ -55,3 +55,12 @@
'views.add_view', 'views.change_view', 'views.delete_view'
)
}
+
+IMPORT_SORT_ORDER = (
+ 'domain',
+ 'conditions',
+ 'options',
+ 'questions',
+ 'tasks',
+ 'views'
+)
diff --git a/rdmo/core/xml.py b/rdmo/core/xml.py
--- a/rdmo/core/xml.py
+++ b/rdmo/core/xml.py
@@ -3,6 +3,9 @@
import defusedxml.ElementTree as ET
+from .constants import IMPORT_SORT_ORDER
+
+
log = logging.getLogger(__name__)
@@ -49,7 +52,7 @@
elements.append(element)
- elements = sort_elements_by_key(elements, 'uri')
+ elements = sorted(elements, key=sort_elements)
return elements
@@ -92,5 +95,13 @@
yield element
-def sort_elements_by_key(dictlist, key, reverse=False):
- return sorted(dictlist, key=lambda k: k[key], reverse=reverse)
+def sort_elements(element):
+ # remove the uri_prefix from the uri to create the key to be sorted by
+ sort_key = element['uri'].replace(element['uri_prefix'], '')
+
+ # remove the app name from the sort_key and replace it by its import order
+ for i, item in enumerate(IMPORT_SORT_ORDER):
+ if sort_key.startswith(item):
+ sort_key = sort_key.replace(item, str(i))
+
+ return sort_key
| {"golden_diff": "diff --git a/rdmo/core/constants.py b/rdmo/core/constants.py\n--- a/rdmo/core/constants.py\n+++ b/rdmo/core/constants.py\n@@ -55,3 +55,12 @@\n 'views.add_view', 'views.change_view', 'views.delete_view'\n )\n }\n+\n+IMPORT_SORT_ORDER = (\n+ 'domain',\n+ 'conditions',\n+ 'options',\n+ 'questions',\n+ 'tasks',\n+ 'views'\n+)\ndiff --git a/rdmo/core/xml.py b/rdmo/core/xml.py\n--- a/rdmo/core/xml.py\n+++ b/rdmo/core/xml.py\n@@ -3,6 +3,9 @@\n \n import defusedxml.ElementTree as ET\n \n+from .constants import IMPORT_SORT_ORDER\n+\n+\n log = logging.getLogger(__name__)\n \n \n@@ -49,7 +52,7 @@\n \n elements.append(element)\n \n- elements = sort_elements_by_key(elements, 'uri')\n+ elements = sorted(elements, key=sort_elements)\n return elements\n \n \n@@ -92,5 +95,13 @@\n yield element\n \n \n-def sort_elements_by_key(dictlist, key, reverse=False):\n- return sorted(dictlist, key=lambda k: k[key], reverse=reverse)\n+def sort_elements(element):\n+ # remove the uri_prefix from the uri to create the key to be sorted by\n+ sort_key = element['uri'].replace(element['uri_prefix'], '')\n+\n+ # remove the app name from the sort_key and replace it by its import order\n+ for i, item in enumerate(IMPORT_SORT_ORDER):\n+ if sort_key.startswith(item):\n+ sort_key = sort_key.replace(item, str(i))\n+\n+ return sort_key\n", "issue": "Sorting causes problems with import\n### Description / Beschreibung\r\n\r\nWhen using different `uri_prefix`es for, e.g. a domain import, the sorting by `uri` destroys the order in the file, and parent Attributes are imported *after* their children (with a different `uri_prefix` earlier in the alphabet). This is the problematic line:\r\n\r\nhttps://github.com/rdmorganiser/rdmo/blob/master/rdmo/core/xml.py#L52\r\n\r\n### Expected behaviour / Erwartetes Verhalten\r\n\r\nI am not sure if we could get rid of the sorting, we could also sort by `path` (which would give the field some meaning in the xml again). Ideas? 
@triole @MyPyDavid \r\n\n", "before_files": [{"content": "import logging\nimport re\n\nimport defusedxml.ElementTree as ET\n\nlog = logging.getLogger(__name__)\n\n\ndef read_xml_file(file_name):\n try:\n return ET.parse(file_name).getroot()\n except Exception as e:\n log.error('Xml parsing error: ' + str(e))\n\n\ndef parse_xml_string(string):\n try:\n return ET.fromstring(string)\n except Exception as e:\n log.error('Xml parsing error: ' + str(e))\n\n\ndef flat_xml_to_elements(treenode):\n elements = []\n ns_map = get_ns_map(treenode)\n uri_attrib = get_ns_tag('dc:uri', ns_map)\n\n for node in treenode:\n\n element = {\n 'uri': get_uri(node, ns_map),\n 'type': node.tag\n }\n\n for subnode in node:\n tag = strip_ns(subnode.tag, ns_map)\n\n if uri_attrib in subnode.attrib:\n # this node has an uri!\n element[tag] = subnode.attrib[uri_attrib]\n elif 'lang' in subnode.attrib:\n # this node has the lang attribute!\n element['%s_%s' % (tag, subnode.attrib['lang'])] = subnode.text\n elif list(subnode):\n # this node is a list!\n element[tag] = [subsubnode.attrib[uri_attrib] for subsubnode in subnode]\n else:\n element[tag] = subnode.text\n\n elements.append(element)\n\n elements = sort_elements_by_key(elements, 'uri')\n return elements\n\n\ndef get_ns_tag(tag, ns_map):\n tag_split = tag.split(':')\n try:\n return '{%s}%s' % (ns_map[tag_split[0]], tag_split[1])\n except KeyError:\n return None\n\n\ndef get_ns_map(treenode):\n ns_map = {}\n treestring = ET.tostring(treenode, encoding='utf8', method='xml')\n\n for match in re.finditer(r'(xmlns:)(.*?)(=\")(.*?)(\")', str(treestring)):\n if match:\n ns_map[match.group(2)] = match.group(4)\n\n return ns_map\n\n\ndef get_uri(treenode, ns_map):\n if treenode is not None:\n ns_tag = get_ns_tag('dc:uri', ns_map)\n if ns_tag is not None:\n return treenode.attrib.get(ns_tag)\n\n\ndef strip_ns(tag, ns_map):\n for ns in ns_map.values():\n if tag.startswith('{%s}' % ns):\n return tag.replace('{%s}' % ns, '')\n return tag\n\n\ndef filter_elements_by_type(elements, element_type):\n for element in elements:\n if element['type'] == element_type:\n yield element\n\n\ndef sort_elements_by_key(dictlist, key, reverse=False):\n return sorted(dictlist, key=lambda k: k[key], reverse=reverse)\n", "path": "rdmo/core/xml.py"}, {"content": "from django.utils.translation import gettext_lazy as _\n\nVALUE_TYPE_TEXT = 'text'\nVALUE_TYPE_URL = 'url'\nVALUE_TYPE_INTEGER = 'integer'\nVALUE_TYPE_FLOAT = 'float'\nVALUE_TYPE_BOOLEAN = 'boolean'\nVALUE_TYPE_DATETIME = 'datetime'\nVALUE_TYPE_OPTIONS = 'option'\nVALUE_TYPE_EMAIL = 'email'\nVALUE_TYPE_PHONE = 'phone'\nVALUE_TYPE_FILE = 'file'\nVALUE_TYPE_CHOICES = (\n (VALUE_TYPE_TEXT, _('Text')),\n (VALUE_TYPE_URL, _('URL')),\n (VALUE_TYPE_INTEGER, _('Integer')),\n (VALUE_TYPE_FLOAT, _('Float')),\n (VALUE_TYPE_BOOLEAN, _('Boolean')),\n (VALUE_TYPE_DATETIME, _('Datetime')),\n (VALUE_TYPE_EMAIL, _('E-mail')),\n (VALUE_TYPE_PHONE, _('Phone')),\n (VALUE_TYPE_OPTIONS, _('Option')),\n (VALUE_TYPE_FILE, _('File'))\n)\n\nPERMISSIONS = {\n 'condition': (\n 'conditions.add_condition', 'conditions.change_condition', 'conditions.delete_condition'\n ),\n 'attribute': (\n 'domain.add_attribute', 'domain.change_attribute', 'domain.delete_attribute'\n ),\n 'optionset': (\n 'options.add_optionset', 'options.change_optionset', 'options.delete_optionset'\n ),\n 'option': (\n 'options.add_option', 'options.change_option', 'options.delete_option'\n ),\n 'catalog': (\n 'questions.add_catalog', 'questions.change_catalog', 'questions.delete_catalog'\n ),\n 
'section': (\n 'questions.add_section', 'questions.change_section', 'questions.delete_section'\n ),\n 'questionset': (\n 'questions.add_questionset', 'questions.change_questionset', 'questions.delete_questionset'\n ),\n 'question': (\n 'questions.add_question', 'questions.change_question', 'questions.delete_question'\n ),\n 'task': (\n 'tasks.add_task', 'tasks.change_task', 'tasks.delete_task'\n ),\n 'view': (\n 'views.add_view', 'views.change_view', 'views.delete_view'\n )\n}\n", "path": "rdmo/core/constants.py"}]} | 2,061 | 382 |
gh_patches_debug_18183 | rasdani/github-patches | git_diff | keras-team__autokeras-627 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip install autokeras fails on torch ==1.1.0
### Bug Description
When executing `pip install autokeras`, I get the following message:
`Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)
No matching distribution found for torch==1.0.1.post2 (from autokeras)`
### Reproducing Steps
Steps to reproduce the behavior:
* Step 1: set up anaconda environment
* Step 2: install pytorch via their website's recommended command: `conda install pytorch-cpu torchvision-cpu -c pytorch`
* Step 3: try to install autokeras via `pip install autokeras`
* Step 4: get the following output:
```
Collecting autokeras
Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)
100% |████████████████████████████████| 71kB 1.3MB/s
Collecting scipy==1.2.0 (from autokeras)
Downloading https://files.pythonhosted.org/packages/c4/0f/2bdeab43db2b4a75863863bf7eddda8920b031b0a70494fd2665c73c9aec/scipy-1.2.0-cp36-cp36m-win_amd64.whl (31.9MB)
100% |████████████████████████████████| 31.9MB 508kB/s
Requirement already satisfied: tensorflow==1.13.1 in c:\[...]\lib\site-packages (from autokeras) (1.13.1)
Collecting torch==1.0.1.post2 (from autokeras)
Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)
No matching distribution found for torch==1.0.1.post2 (from autokeras)
```
### Expected Behavior
Autokeras is installed without error.
### Setup Details
Include the details about the versions of:
- OS type and version: Windows 10 Version 10.0.17763 Build 17763
- Python: 3.6.8 (anaconda)
- autokeras: 0.4.0
- scikit-learn: 0.20.3
- numpy:1.16.2
- keras: 2.2.4
- scipy:1.2.1
- tensorflow:1.13.1
- pytorch:1.1.0
### Additional context
<!---
Add any other context about the problem here.
-->
</issue>
<code>
[start of setup.py]
1 from distutils.core import setup
2 from setuptools import find_packages
3
4 setup(
5 name='autokeras',
6 packages=find_packages(exclude=('tests',)),
7 install_requires=['scipy==1.2.0',
8 'tensorflow==1.13.1',
9 'torch==1.0.1.post2',
10 'torchvision==0.2.1',
11 'numpy==1.16.1',
12 'scikit-learn==0.20.2',
13 'scikit-image==0.14.2',
14 'tqdm==4.31.0',
15 'imageio==2.5.0',
16 'requests==2.21.0'
17 ],
18 version='0.4.0',
19 description='AutoML for deep learning',
20 author='DATA Lab at Texas A&M University',
21 author_email='[email protected]',
22 url='http://autokeras.com',
23 download_url='https://github.com/keras-team/autokeras/archive/0.3.7.tar.gz',
24 keywords=['AutoML', 'keras'],
25 classifiers=[]
26 )
27
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,16 +4,16 @@
setup(
name='autokeras',
packages=find_packages(exclude=('tests',)),
- install_requires=['scipy==1.2.0',
- 'tensorflow==1.13.1',
- 'torch==1.0.1.post2',
- 'torchvision==0.2.1',
- 'numpy==1.16.1',
- 'scikit-learn==0.20.2',
- 'scikit-image==0.14.2',
- 'tqdm==4.31.0',
- 'imageio==2.5.0',
- 'requests==2.21.0'
+ install_requires=['scipy>=1.2.0',
+ 'tensorflow>=1.13.1',
+ 'torch>=1.0.1.post2',
+ 'torchvision>=0.2.1',
+ 'numpy>=1.16.1',
+ 'scikit-learn>=0.20.2',
+ 'scikit-image>=0.14.2',
+ 'tqdm>=4.31.0',
+ 'imageio>=2.5.0',
+ 'requests>=2.21.0'
],
version='0.4.0',
description='AutoML for deep learning',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,16 +4,16 @@\n setup(\n name='autokeras',\n packages=find_packages(exclude=('tests',)),\n- install_requires=['scipy==1.2.0',\n- 'tensorflow==1.13.1',\n- 'torch==1.0.1.post2',\n- 'torchvision==0.2.1',\n- 'numpy==1.16.1',\n- 'scikit-learn==0.20.2',\n- 'scikit-image==0.14.2',\n- 'tqdm==4.31.0',\n- 'imageio==2.5.0',\n- 'requests==2.21.0'\n+ install_requires=['scipy>=1.2.0',\n+ 'tensorflow>=1.13.1',\n+ 'torch>=1.0.1.post2',\n+ 'torchvision>=0.2.1',\n+ 'numpy>=1.16.1',\n+ 'scikit-learn>=0.20.2',\n+ 'scikit-image>=0.14.2',\n+ 'tqdm>=4.31.0',\n+ 'imageio>=2.5.0',\n+ 'requests>=2.21.0'\n ],\n version='0.4.0',\n description='AutoML for deep learning',\n", "issue": "pip install autokeras fails on torch ==1.1.0\n### Bug Description\r\nWhen executing `pip install autokeras`, I get the following message:\r\n`Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)\r\nNo matching distribution found for torch==1.0.1.post2 (from autokeras)`\r\n\r\n### Reproducing Steps\r\nSteps to reproduce the behavior:\r\n * Step 1: set up anaconda environment\r\n * Step 2: install pytorch via their website's recommended command: `conda install pytorch-cpu torchvision-cpu -c pytorch`\r\n * Step 3: try to install autokeras via `pip install autokeras`\r\n * Step 4: get the following output:\r\n\r\n```\r\nCollecting autokeras\r\n Downloading https://files.pythonhosted.org/packages/c2/32/de74bf6afd09925980340355a05aa6a19e7378ed91dac09e76a487bd136d/autokeras-0.4.0.tar.gz (67kB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 71kB 1.3MB/s\r\nCollecting scipy==1.2.0 (from autokeras)\r\n Downloading https://files.pythonhosted.org/packages/c4/0f/2bdeab43db2b4a75863863bf7eddda8920b031b0a70494fd2665c73c9aec/scipy-1.2.0-cp36-cp36m-win_amd64.whl (31.9MB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 31.9MB 508kB/s\r\nRequirement already satisfied: tensorflow==1.13.1 in c:\\[...]\\lib\\site-packages (from autokeras) (1.13.1)\r\nCollecting torch==1.0.1.post2 (from autokeras)\r\n Could not find a version that satisfies the requirement torch==1.0.1.post2 (from autokeras) (from versions: 0.1.2, 0.1.2.post1)\r\nNo matching distribution found for torch==1.0.1.post2 (from autokeras)\r\n```\r\n\r\n### Expected Behavior\r\nAutokeras is installed without error.\r\n\r\n### Setup Details\r\nInclude the details about the versions of:\r\n - OS type and version: Windows 10 Version\t10.0.17763 Build 17763\r\n - Python: 3.6.8 (anaconda)\r\n - autokeras: 0.4.0\r\n - scikit-learn: 0.20.3\r\n - numpy:1.16.2\r\n - keras: 2.2.4\r\n - scipy:1.2.1\r\n - tensorflow:1.13.1\r\n - pytorch:1.1.0\r\n\r\n### Additional context\r\n<!---\r\nAdd any other context about the problem here.\r\n-->\r\n\n", "before_files": [{"content": "from distutils.core import setup\nfrom setuptools import find_packages\n\nsetup(\n name='autokeras',\n packages=find_packages(exclude=('tests',)),\n install_requires=['scipy==1.2.0',\n 'tensorflow==1.13.1',\n 'torch==1.0.1.post2',\n 'torchvision==0.2.1',\n 'numpy==1.16.1',\n 'scikit-learn==0.20.2',\n 'scikit-image==0.14.2',\n 'tqdm==4.31.0',\n 'imageio==2.5.0',\n 
'requests==2.21.0'\n ],\n version='0.4.0',\n description='AutoML for deep learning',\n author='DATA Lab at Texas A&M University',\n author_email='[email protected]',\n url='http://autokeras.com',\n download_url='https://github.com/keras-team/autokeras/archive/0.3.7.tar.gz',\n keywords=['AutoML', 'keras'],\n classifiers=[]\n)\n", "path": "setup.py"}]} | 1,547 | 331 |
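The fix in this row is purely a packaging-metadata change: the exact `==` pins in `install_requires` become `>=` minimums, so pip can accept any compatible release (including an already-installed conda build) instead of demanding a `torch==1.0.1.post2` distribution that, per the reporter's log, PyPI does not offer for their platform. The shape of the change, shown as a before/after sketch:

```
# Before: pip must find these exact builds, which the reporter's platform lacks.
install_requires = ['torch==1.0.1.post2', 'torchvision==0.2.1']

# After: any release at or above the minimum satisfies the requirement,
# e.g. the conda-installed torch 1.1.0 already on the machine.
install_requires = ['torch>=1.0.1.post2', 'torchvision>=0.2.1']
```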
gh_patches_debug_7877 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-679 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OverflowError in Point.py
It seems like [this old (2013) bug](https://bugs.launchpad.net/pyqtgraph/+bug/1234528) still hasn't been fixed. I've just bumped into the same error.
I will test solution suggested by author.
</issue>
<code>
[start of pyqtgraph/Point.py]
1 # -*- coding: utf-8 -*-
2 """
3 Point.py - Extension of QPointF which adds a few missing methods.
4 Copyright 2010 Luke Campagnola
5 Distributed under MIT/X11 license. See license.txt for more infomation.
6 """
7
8 from .Qt import QtCore
9 import numpy as np
10
11 def clip(x, mn, mx):
12 if x > mx:
13 return mx
14 if x < mn:
15 return mn
16 return x
17
18 class Point(QtCore.QPointF):
19 """Extension of QPointF which adds a few missing methods."""
20
21 def __init__(self, *args):
22 if len(args) == 1:
23 if isinstance(args[0], QtCore.QSizeF):
24 QtCore.QPointF.__init__(self, float(args[0].width()), float(args[0].height()))
25 return
26 elif isinstance(args[0], float) or isinstance(args[0], int):
27 QtCore.QPointF.__init__(self, float(args[0]), float(args[0]))
28 return
29 elif hasattr(args[0], '__getitem__'):
30 QtCore.QPointF.__init__(self, float(args[0][0]), float(args[0][1]))
31 return
32 elif len(args) == 2:
33 QtCore.QPointF.__init__(self, args[0], args[1])
34 return
35 QtCore.QPointF.__init__(self, *args)
36
37 def __len__(self):
38 return 2
39
40 def __reduce__(self):
41 return (Point, (self.x(), self.y()))
42
43 def __getitem__(self, i):
44 if i == 0:
45 return self.x()
46 elif i == 1:
47 return self.y()
48 else:
49 raise IndexError("Point has no index %s" % str(i))
50
51 def __setitem__(self, i, x):
52 if i == 0:
53 return self.setX(x)
54 elif i == 1:
55 return self.setY(x)
56 else:
57 raise IndexError("Point has no index %s" % str(i))
58
59 def __radd__(self, a):
60 return self._math_('__radd__', a)
61
62 def __add__(self, a):
63 return self._math_('__add__', a)
64
65 def __rsub__(self, a):
66 return self._math_('__rsub__', a)
67
68 def __sub__(self, a):
69 return self._math_('__sub__', a)
70
71 def __rmul__(self, a):
72 return self._math_('__rmul__', a)
73
74 def __mul__(self, a):
75 return self._math_('__mul__', a)
76
77 def __rdiv__(self, a):
78 return self._math_('__rdiv__', a)
79
80 def __div__(self, a):
81 return self._math_('__div__', a)
82
83 def __truediv__(self, a):
84 return self._math_('__truediv__', a)
85
86 def __rtruediv__(self, a):
87 return self._math_('__rtruediv__', a)
88
89 def __rpow__(self, a):
90 return self._math_('__rpow__', a)
91
92 def __pow__(self, a):
93 return self._math_('__pow__', a)
94
95 def _math_(self, op, x):
96 #print "point math:", op
97 #try:
98 #fn = getattr(QtCore.QPointF, op)
99 #pt = fn(self, x)
100 #print fn, pt, self, x
101 #return Point(pt)
102 #except AttributeError:
103 x = Point(x)
104 return Point(getattr(self[0], op)(x[0]), getattr(self[1], op)(x[1]))
105
106 def length(self):
107 """Returns the vector length of this Point."""
108 return (self[0]**2 + self[1]**2) ** 0.5
109
110 def norm(self):
111 """Returns a vector in the same direction with unit length."""
112 return self / self.length()
113
114 def angle(self, a):
115 """Returns the angle in degrees between this vector and the vector a."""
116 n1 = self.length()
117 n2 = a.length()
118 if n1 == 0. or n2 == 0.:
119 return None
120 ## Probably this should be done with arctan2 instead..
121 ang = np.arccos(clip(self.dot(a) / (n1 * n2), -1.0, 1.0)) ### in radians
122 c = self.cross(a)
123 if c > 0:
124 ang *= -1.
125 return ang * 180. / np.pi
126
127 def dot(self, a):
128 """Returns the dot product of a and this Point."""
129 a = Point(a)
130 return self[0]*a[0] + self[1]*a[1]
131
132 def cross(self, a):
133 a = Point(a)
134 return self[0]*a[1] - self[1]*a[0]
135
136 def proj(self, b):
137 """Return the projection of this vector onto the vector b"""
138 b1 = b / b.length()
139 return self.dot(b1) * b1
140
141 def __repr__(self):
142 return "Point(%f, %f)" % (self[0], self[1])
143
144
145 def min(self):
146 return min(self[0], self[1])
147
148 def max(self):
149 return max(self[0], self[1])
150
151 def copy(self):
152 return Point(self)
153
154 def toQPoint(self):
155 return QtCore.QPoint(*self)
156
[end of pyqtgraph/Point.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyqtgraph/Point.py b/pyqtgraph/Point.py
--- a/pyqtgraph/Point.py
+++ b/pyqtgraph/Point.py
@@ -105,7 +105,13 @@
def length(self):
"""Returns the vector length of this Point."""
- return (self[0]**2 + self[1]**2) ** 0.5
+ try:
+ return (self[0]**2 + self[1]**2) ** 0.5
+ except OverflowError:
+ try:
+ return self[1] / np.sin(np.arctan2(self[1], self[0]))
+ except OverflowError:
+ return np.inf
def norm(self):
"""Returns a vector in the same direction with unit length."""
| {"golden_diff": "diff --git a/pyqtgraph/Point.py b/pyqtgraph/Point.py\n--- a/pyqtgraph/Point.py\n+++ b/pyqtgraph/Point.py\n@@ -105,7 +105,13 @@\n \n def length(self):\n \"\"\"Returns the vector length of this Point.\"\"\"\n- return (self[0]**2 + self[1]**2) ** 0.5\n+ try:\n+ return (self[0]**2 + self[1]**2) ** 0.5\n+ except OverflowError:\n+ try:\n+ return self[1] / np.sin(np.arctan2(self[1], self[0]))\n+ except OverflowError:\n+ return np.inf\n \n def norm(self):\n \"\"\"Returns a vector in the same direction with unit length.\"\"\"\n", "issue": "OverflowError in Point.py\nIt seems like [this old (2013) bug](https://bugs.launchpad.net/pyqtgraph/+bug/1234528) still hasn't been fixed. I've just bumped into the same error.\r\n\r\nI will test solution suggested by author.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nPoint.py - Extension of QPointF which adds a few missing methods.\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more infomation.\n\"\"\"\n\nfrom .Qt import QtCore\nimport numpy as np\n\ndef clip(x, mn, mx):\n if x > mx:\n return mx\n if x < mn:\n return mn\n return x\n\nclass Point(QtCore.QPointF):\n \"\"\"Extension of QPointF which adds a few missing methods.\"\"\"\n \n def __init__(self, *args):\n if len(args) == 1:\n if isinstance(args[0], QtCore.QSizeF):\n QtCore.QPointF.__init__(self, float(args[0].width()), float(args[0].height()))\n return\n elif isinstance(args[0], float) or isinstance(args[0], int):\n QtCore.QPointF.__init__(self, float(args[0]), float(args[0]))\n return\n elif hasattr(args[0], '__getitem__'):\n QtCore.QPointF.__init__(self, float(args[0][0]), float(args[0][1]))\n return\n elif len(args) == 2:\n QtCore.QPointF.__init__(self, args[0], args[1])\n return\n QtCore.QPointF.__init__(self, *args)\n \n def __len__(self):\n return 2\n \n def __reduce__(self):\n return (Point, (self.x(), self.y()))\n \n def __getitem__(self, i):\n if i == 0:\n return self.x()\n elif i == 1:\n return self.y()\n else:\n raise IndexError(\"Point has no index %s\" % str(i))\n \n def __setitem__(self, i, x):\n if i == 0:\n return self.setX(x)\n elif i == 1:\n return self.setY(x)\n else:\n raise IndexError(\"Point has no index %s\" % str(i))\n \n def __radd__(self, a):\n return self._math_('__radd__', a)\n \n def __add__(self, a):\n return self._math_('__add__', a)\n \n def __rsub__(self, a):\n return self._math_('__rsub__', a)\n \n def __sub__(self, a):\n return self._math_('__sub__', a)\n \n def __rmul__(self, a):\n return self._math_('__rmul__', a)\n \n def __mul__(self, a):\n return self._math_('__mul__', a)\n \n def __rdiv__(self, a):\n return self._math_('__rdiv__', a)\n \n def __div__(self, a):\n return self._math_('__div__', a)\n \n def __truediv__(self, a):\n return self._math_('__truediv__', a)\n \n def __rtruediv__(self, a):\n return self._math_('__rtruediv__', a)\n \n def __rpow__(self, a):\n return self._math_('__rpow__', a)\n \n def __pow__(self, a):\n return self._math_('__pow__', a)\n \n def _math_(self, op, x):\n #print \"point math:\", op\n #try:\n #fn = getattr(QtCore.QPointF, op)\n #pt = fn(self, x)\n #print fn, pt, self, x\n #return Point(pt)\n #except AttributeError:\n x = Point(x)\n return Point(getattr(self[0], op)(x[0]), getattr(self[1], op)(x[1]))\n \n def length(self):\n \"\"\"Returns the vector length of this Point.\"\"\"\n return (self[0]**2 + self[1]**2) ** 0.5\n \n def norm(self):\n \"\"\"Returns a vector in the same direction with unit length.\"\"\"\n return self / self.length()\n \n def 
angle(self, a):\n \"\"\"Returns the angle in degrees between this vector and the vector a.\"\"\"\n n1 = self.length()\n n2 = a.length()\n if n1 == 0. or n2 == 0.:\n return None\n ## Probably this should be done with arctan2 instead..\n ang = np.arccos(clip(self.dot(a) / (n1 * n2), -1.0, 1.0)) ### in radians\n c = self.cross(a)\n if c > 0:\n ang *= -1.\n return ang * 180. / np.pi\n \n def dot(self, a):\n \"\"\"Returns the dot product of a and this Point.\"\"\"\n a = Point(a)\n return self[0]*a[0] + self[1]*a[1]\n \n def cross(self, a):\n a = Point(a)\n return self[0]*a[1] - self[1]*a[0]\n \n def proj(self, b):\n \"\"\"Return the projection of this vector onto the vector b\"\"\"\n b1 = b / b.length()\n return self.dot(b1) * b1\n \n def __repr__(self):\n return \"Point(%f, %f)\" % (self[0], self[1])\n \n \n def min(self):\n return min(self[0], self[1])\n \n def max(self):\n return max(self[0], self[1])\n \n def copy(self):\n return Point(self)\n \n def toQPoint(self):\n return QtCore.QPoint(*self)\n", "path": "pyqtgraph/Point.py"}]} | 2,177 | 178 |
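The patch in this row guards `Point.length()` against the `OverflowError` that Python raises when `x**2 + y**2` exceeds the float range, falling back to a trigonometric identity and finally to `inf`. The same computation as a standalone function for illustration (`vector_length` is an invented name, not part of pyqtgraph):

```
import numpy as np


def vector_length(x, y):
    try:
        return (x ** 2 + y ** 2) ** 0.5
    except OverflowError:
        try:
            # |v| = y / sin(theta), which avoids squaring the huge components
            return y / np.sin(np.arctan2(y, x))
        except OverflowError:
            return np.inf


print(vector_length(3.0, 4.0))      # 5.0
print(vector_length(1e200, 1e200))  # about 1.414e200 instead of an OverflowError
```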
gh_patches_debug_39980 | rasdani/github-patches | git_diff | microsoft__ptvsd-1161 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sys.stdin not None and missing encoding attribute when running with pythonw.exe
## Environment data
- PTVSD version: 4.2
- OS and version: windows 10
- Python version (& distribution if applicable, e.g. Anaconda): CPython 3.7 using **pythonw.exe**
- Using VS Code or Visual Studio: VS
## Actual behavior
None has no attribute encoding exception
## Expected behavior
Either sys.stdin.encoding works, or sys.stdin is None (it is None when running without debugging)
## Steps to reproduce:
1. Debug this code using pythonw.exe (no console)
```
import sys
with open('issue4866.txt', 'wt') as f:
f.write('hello\n')
f.write(str(type(sys.stdin)) + '\n')
if sys.stdin is not None:
f.write(str(sys.stdin.encoding) + '\n')
f.write('bye\n')
```
From https://github.com/Microsoft/PTVS/issues/4866
</issue>
<code>
[start of src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py]
1 from _pydevd_bundle import pydevd_constants
2
3 IS_PY3K = pydevd_constants.IS_PY3K
4
5 class IORedirector:
6 '''
7 This class works to wrap a stream (stdout/stderr) with an additional redirect.
8 '''
9
10 def __init__(self, original, new_redirect, wrap_buffer=False):
11 '''
12 :param stream original:
13 The stream to be wrapped (usually stdout/stderr).
14
15 :param stream new_redirect:
16 Usually IOBuf (below).
17
18 :param bool wrap_buffer:
19 Whether to create a buffer attribute (needed to mimick python 3 s
20 tdout/stderr which has a buffer to write binary data).
21 '''
22 self._redirect_to = (original, new_redirect)
23 if wrap_buffer and hasattr(original, 'buffer'):
24 self.buffer = IORedirector(original.buffer, new_redirect.buffer, False)
25
26 def write(self, s):
27 # Note that writing to the original stream may fail for some reasons
28 # (such as trying to write something that's not a string or having it closed).
29 for r in self._redirect_to:
30 r.write(s)
31
32 def isatty(self):
33 return self._redirect_to[0].isatty()
34
35 def flush(self):
36 for r in self._redirect_to:
37 r.flush()
38
39 def __getattr__(self, name):
40 for r in self._redirect_to:
41 if hasattr(r, name):
42 return getattr(r, name)
43 raise AttributeError(name)
44
45 class IOBuf:
46 '''This class works as a replacement for stdio and stderr.
47 It is a buffer and when its contents are requested, it will erase what
48 it has so far so that the next return will not return the same contents again.
49 '''
50 def __init__(self):
51 self.buflist = []
52 import os
53 self.encoding = os.environ.get('PYTHONIOENCODING', 'utf-8')
54
55 def getvalue(self):
56 b = self.buflist
57 self.buflist = [] # clear it
58 return ''.join(b) # bytes on py2, str on py3.
59
60 def write(self, s):
61 if not IS_PY3K:
62 if isinstance(s, unicode):
63 # can't use 'errors' as kwargs in py 2.6
64 s = s.encode(self.encoding, 'replace')
65 else:
66 if isinstance(s, bytes):
67 s = s.decode(self.encoding, errors='replace')
68 self.buflist.append(s)
69
70 def isatty(self):
71 return False
72
73 def flush(self):
74 pass
75
76 def empty(self):
77 return len(self.buflist) == 0
78
79 class _RedirectionsHolder:
80 _stack_stdout = []
81 _stack_stderr = []
82
83
84 def start_redirect(keep_original_redirection=False, std='stdout'):
85 '''
86 @param std: 'stdout', 'stderr', or 'both'
87 '''
88 import sys
89 buf = IOBuf()
90
91 if std == 'both':
92 config_stds = ['stdout', 'stderr']
93 else:
94 config_stds = [std]
95
96 for std in config_stds:
97 original = getattr(sys, std)
98 stack = getattr(_RedirectionsHolder, '_stack_%s' % std)
99 stack.append(original)
100
101 if keep_original_redirection:
102 setattr(sys, std, IORedirector(getattr(sys, std), buf))
103 else:
104 setattr(sys, std, buf)
105 return buf
106
107
108 def end_redirect(std='stdout'):
109 import sys
110 if std == 'both':
111 config_stds = ['stdout', 'stderr']
112 else:
113 config_stds = [std]
114 for std in config_stds:
115 stack = getattr(_RedirectionsHolder, '_stack_%s' % std)
116 setattr(sys, std, stack.pop())
117
118
[end of src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py
--- a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py
+++ b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py
@@ -2,6 +2,7 @@
IS_PY3K = pydevd_constants.IS_PY3K
+
class IORedirector:
'''
This class works to wrap a stream (stdout/stderr) with an additional redirect.
@@ -10,7 +11,7 @@
def __init__(self, original, new_redirect, wrap_buffer=False):
'''
:param stream original:
- The stream to be wrapped (usually stdout/stderr).
+ The stream to be wrapped (usually stdout/stderr, but could be None).
:param stream new_redirect:
Usually IOBuf (below).
@@ -27,14 +28,19 @@
# Note that writing to the original stream may fail for some reasons
# (such as trying to write something that's not a string or having it closed).
for r in self._redirect_to:
- r.write(s)
+ if hasattr(r, 'write'):
+ r.write(s)
def isatty(self):
- return self._redirect_to[0].isatty()
+ for r in self._redirect_to:
+ if hasattr(r, 'isatty'):
+ return r.isatty()
+ return False
def flush(self):
for r in self._redirect_to:
- r.flush()
+ if hasattr(r, 'flush'):
+ r.flush()
def __getattr__(self, name):
for r in self._redirect_to:
@@ -42,11 +48,13 @@
return getattr(r, name)
raise AttributeError(name)
+
class IOBuf:
'''This class works as a replacement for stdio and stderr.
It is a buffer and when its contents are requested, it will erase what
it has so far so that the next return will not return the same contents again.
'''
+
def __init__(self):
self.buflist = []
import os
@@ -56,7 +64,7 @@
b = self.buflist
self.buflist = [] # clear it
return ''.join(b) # bytes on py2, str on py3.
-
+
def write(self, s):
if not IS_PY3K:
if isinstance(s, unicode):
@@ -76,6 +84,7 @@
def empty(self):
return len(self.buflist) == 0
+
class _RedirectionsHolder:
_stack_stdout = []
_stack_stderr = []
| {"golden_diff": "diff --git a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py\n--- a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py\n+++ b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py\n@@ -2,6 +2,7 @@\n \n IS_PY3K = pydevd_constants.IS_PY3K\n \n+\n class IORedirector:\n '''\n This class works to wrap a stream (stdout/stderr) with an additional redirect.\n@@ -10,7 +11,7 @@\n def __init__(self, original, new_redirect, wrap_buffer=False):\n '''\n :param stream original:\n- The stream to be wrapped (usually stdout/stderr).\n+ The stream to be wrapped (usually stdout/stderr, but could be None).\n \n :param stream new_redirect:\n Usually IOBuf (below).\n@@ -27,14 +28,19 @@\n # Note that writing to the original stream may fail for some reasons\n # (such as trying to write something that's not a string or having it closed).\n for r in self._redirect_to:\n- r.write(s)\n+ if hasattr(r, 'write'):\n+ r.write(s)\n \n def isatty(self):\n- return self._redirect_to[0].isatty()\n+ for r in self._redirect_to:\n+ if hasattr(r, 'isatty'):\n+ return r.isatty()\n+ return False\n \n def flush(self):\n for r in self._redirect_to:\n- r.flush()\n+ if hasattr(r, 'flush'):\n+ r.flush()\n \n def __getattr__(self, name):\n for r in self._redirect_to:\n@@ -42,11 +48,13 @@\n return getattr(r, name)\n raise AttributeError(name)\n \n+\n class IOBuf:\n '''This class works as a replacement for stdio and stderr.\n It is a buffer and when its contents are requested, it will erase what\n it has so far so that the next return will not return the same contents again.\n '''\n+\n def __init__(self):\n self.buflist = []\n import os\n@@ -56,7 +64,7 @@\n b = self.buflist\n self.buflist = [] # clear it\n return ''.join(b) # bytes on py2, str on py3.\n- \n+\n def write(self, s):\n if not IS_PY3K:\n if isinstance(s, unicode):\n@@ -76,6 +84,7 @@\n def empty(self):\n return len(self.buflist) == 0\n \n+\n class _RedirectionsHolder:\n _stack_stdout = []\n _stack_stderr = []\n", "issue": "sys.stdin not None and missing encoding attribute when running with pythonw.exe\n## Environment data\r\n\r\n- PTVSD version: 4.2\r\n- OS and version: windows 10\r\n- Python version (& distribution if applicable, e.g. Anaconda): CPython 3.7 using **pythonw.exe**\r\n- Using VS Code or Visual Studio: VS\r\n\r\n## Actual behavior\r\n\r\nNone has no attribute encoding exception\r\n\r\n## Expected behavior\r\n\r\nEither sys.stdin.encoding works, or sys.stdin is None (it is None when running without debugging)\r\n\r\n\r\n## Steps to reproduce:\r\n1. 
Debug this code using pythonw.exe (no console)\r\n```\r\nimport sys\r\n\r\nwith open('issue4866.txt', 'wt') as f:\r\n f.write('hello\\n')\r\n f.write(str(type(sys.stdin)) + '\\n')\r\n if sys.stdin is not None:\r\n f.write(str(sys.stdin.encoding) + '\\n')\r\n f.write('bye\\n')\r\n```\r\n\r\nFrom https://github.com/Microsoft/PTVS/issues/4866\n", "before_files": [{"content": "from _pydevd_bundle import pydevd_constants\n\nIS_PY3K = pydevd_constants.IS_PY3K\n\nclass IORedirector:\n '''\n This class works to wrap a stream (stdout/stderr) with an additional redirect.\n '''\n\n def __init__(self, original, new_redirect, wrap_buffer=False):\n '''\n :param stream original:\n The stream to be wrapped (usually stdout/stderr).\n\n :param stream new_redirect:\n Usually IOBuf (below).\n\n :param bool wrap_buffer:\n Whether to create a buffer attribute (needed to mimick python 3 s\n tdout/stderr which has a buffer to write binary data).\n '''\n self._redirect_to = (original, new_redirect)\n if wrap_buffer and hasattr(original, 'buffer'):\n self.buffer = IORedirector(original.buffer, new_redirect.buffer, False)\n\n def write(self, s):\n # Note that writing to the original stream may fail for some reasons\n # (such as trying to write something that's not a string or having it closed).\n for r in self._redirect_to:\n r.write(s)\n\n def isatty(self):\n return self._redirect_to[0].isatty()\n\n def flush(self):\n for r in self._redirect_to:\n r.flush()\n\n def __getattr__(self, name):\n for r in self._redirect_to:\n if hasattr(r, name):\n return getattr(r, name)\n raise AttributeError(name)\n\nclass IOBuf:\n '''This class works as a replacement for stdio and stderr.\n It is a buffer and when its contents are requested, it will erase what\n it has so far so that the next return will not return the same contents again.\n '''\n def __init__(self):\n self.buflist = []\n import os\n self.encoding = os.environ.get('PYTHONIOENCODING', 'utf-8')\n\n def getvalue(self):\n b = self.buflist\n self.buflist = [] # clear it\n return ''.join(b) # bytes on py2, str on py3.\n \n def write(self, s):\n if not IS_PY3K:\n if isinstance(s, unicode):\n # can't use 'errors' as kwargs in py 2.6\n s = s.encode(self.encoding, 'replace')\n else:\n if isinstance(s, bytes):\n s = s.decode(self.encoding, errors='replace')\n self.buflist.append(s)\n\n def isatty(self):\n return False\n\n def flush(self):\n pass\n\n def empty(self):\n return len(self.buflist) == 0\n\nclass _RedirectionsHolder:\n _stack_stdout = []\n _stack_stderr = []\n\n\ndef start_redirect(keep_original_redirection=False, std='stdout'):\n '''\n @param std: 'stdout', 'stderr', or 'both'\n '''\n import sys\n buf = IOBuf()\n\n if std == 'both':\n config_stds = ['stdout', 'stderr']\n else:\n config_stds = [std]\n\n for std in config_stds:\n original = getattr(sys, std)\n stack = getattr(_RedirectionsHolder, '_stack_%s' % std)\n stack.append(original)\n\n if keep_original_redirection:\n setattr(sys, std, IORedirector(getattr(sys, std), buf))\n else:\n setattr(sys, std, buf)\n return buf\n\n\ndef end_redirect(std='stdout'):\n import sys\n if std == 'both':\n config_stds = ['stdout', 'stderr']\n else:\n config_stds = [std]\n for std in config_stds:\n stack = getattr(_RedirectionsHolder, '_stack_%s' % std)\n setattr(sys, std, stack.pop())\n\n", "path": "src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py"}]} | 1,868 | 646 |
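The patch in this row makes `IORedirector` delegate `write`/`flush`/`isatty` only to wrapped streams that actually provide them, which covers the pythonw.exe case where the original `sys.std*` streams are `None`. The guarded-delegation pattern reduced to a standalone example; `SafeRedirector` and `Capture` are illustrative names, not pydevd classes.

```
class SafeRedirector:
    """Fan writes out to several streams, skipping any that lack the method."""

    def __init__(self, *streams):
        self._streams = streams  # may contain None, e.g. sys.stdout under pythonw.exe

    def write(self, s):
        for stream in self._streams:
            if hasattr(stream, 'write'):
                stream.write(s)

    def flush(self):
        for stream in self._streams:
            if hasattr(stream, 'flush'):
                stream.flush()

    def isatty(self):
        for stream in self._streams:
            if hasattr(stream, 'isatty'):
                return stream.isatty()
        return False


class Capture:
    def __init__(self):
        self.chunks = []

    def write(self, s):
        self.chunks.append(s)


out = SafeRedirector(None, Capture())  # original stream missing, as under pythonw.exe
out.write('hello\n')                   # no AttributeError from the None stream
print(out.isatty())                    # False
```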
gh_patches_debug_4883 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2996 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use of --dev deprecated for npm
I'm seeing this warning sometimes (output seems to be hidden unless the install fails):
```
npm WARN install Usage of the `--dev` option is deprecated. Use `--include=dev` instead.
```
Which seems to be because of this:
https://github.com/pre-commit/pre-commit/blob/fe436f1eb09dfdd67032b4f9f8dfa543fb99cf06/pre_commit/languages/node.py#L104
The problem with this command was that it installed dependencies recursively, rendering them useless (AFAICT; I'm not a Node expert). The developers decided it was only a footgun in https://github.com/npm/npm/issues/5554#issuecomment-56121953 and deprecated it in https://github.com/npm/npm/issues/6200.
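For reference, a minimal sketch of what the non-deprecated invocation could look like in `pre_commit/languages/node.py` (assuming, based only on the warning text, that `--include=dev`/`--include=prod` are the drop-in replacements on current npm):

```python
# sketch only: swap the deprecated flags for the ones the npm warning suggests
local_install_cmd = (
    'npm', 'install', '--include=dev', '--include=prod',
    '--ignore-prepublish', '--no-progress', '--no-save',
)
```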
</issue>
<code>
[start of pre_commit/languages/node.py]
1 from __future__ import annotations
2
3 import contextlib
4 import functools
5 import os
6 import sys
7 from typing import Generator
8 from typing import Sequence
9
10 import pre_commit.constants as C
11 from pre_commit import lang_base
12 from pre_commit.envcontext import envcontext
13 from pre_commit.envcontext import PatchesT
14 from pre_commit.envcontext import UNSET
15 from pre_commit.envcontext import Var
16 from pre_commit.languages.python import bin_dir
17 from pre_commit.prefix import Prefix
18 from pre_commit.util import cmd_output
19 from pre_commit.util import cmd_output_b
20 from pre_commit.util import rmtree
21
22 ENVIRONMENT_DIR = 'node_env'
23 run_hook = lang_base.basic_run_hook
24
25
26 @functools.lru_cache(maxsize=1)
27 def get_default_version() -> str:
28 # nodeenv does not yet support `-n system` on windows
29 if sys.platform == 'win32':
30 return C.DEFAULT
31 # if node is already installed, we can save a bunch of setup time by
32 # using the installed version
33 elif all(lang_base.exe_exists(exe) for exe in ('node', 'npm')):
34 return 'system'
35 else:
36 return C.DEFAULT
37
38
39 def get_env_patch(venv: str) -> PatchesT:
40 if sys.platform == 'cygwin': # pragma: no cover
41 _, win_venv, _ = cmd_output('cygpath', '-w', venv)
42 install_prefix = fr'{win_venv.strip()}\bin'
43 lib_dir = 'lib'
44 elif sys.platform == 'win32': # pragma: no cover
45 install_prefix = bin_dir(venv)
46 lib_dir = 'Scripts'
47 else: # pragma: win32 no cover
48 install_prefix = venv
49 lib_dir = 'lib'
50 return (
51 ('NODE_VIRTUAL_ENV', venv),
52 ('NPM_CONFIG_PREFIX', install_prefix),
53 ('npm_config_prefix', install_prefix),
54 ('NPM_CONFIG_USERCONFIG', UNSET),
55 ('npm_config_userconfig', UNSET),
56 ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),
57 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
58 )
59
60
61 @contextlib.contextmanager
62 def in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:
63 envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)
64 with envcontext(get_env_patch(envdir)):
65 yield
66
67
68 def health_check(prefix: Prefix, version: str) -> str | None:
69 with in_env(prefix, version):
70 retcode, _, _ = cmd_output_b('node', '--version', check=False)
71 if retcode != 0: # pragma: win32 no cover
72 return f'`node --version` returned {retcode}'
73 else:
74 return None
75
76
77 def install_environment(
78 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
79 ) -> None:
80 assert prefix.exists('package.json')
81 envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)
82
83 # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath
84 if sys.platform == 'win32': # pragma: no cover
85 envdir = fr'\\?\{os.path.normpath(envdir)}'
86 cmd = [sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir]
87 if version != C.DEFAULT:
88 cmd.extend(['-n', version])
89 cmd_output_b(*cmd)
90
91 with in_env(prefix, version):
92 # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449
93 # install as if we installed from git
94
95 local_install_cmd = (
96 'npm', 'install', '--dev', '--prod',
97 '--ignore-prepublish', '--no-progress', '--no-save',
98 )
99 lang_base.setup_cmd(prefix, local_install_cmd)
100
101 _, pkg, _ = cmd_output('npm', 'pack', cwd=prefix.prefix_dir)
102 pkg = prefix.path(pkg.strip())
103
104 install = ('npm', 'install', '-g', pkg, *additional_dependencies)
105 lang_base.setup_cmd(prefix, install)
106
107 # clean these up after installation
108 if prefix.exists('node_modules'): # pragma: win32 no cover
109 rmtree(prefix.path('node_modules'))
110 os.remove(pkg)
111
[end of pre_commit/languages/node.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py
--- a/pre_commit/languages/node.py
+++ b/pre_commit/languages/node.py
@@ -93,7 +93,7 @@
# install as if we installed from git
local_install_cmd = (
- 'npm', 'install', '--dev', '--prod',
+ 'npm', 'install', '--include=dev', '--include=prod',
'--ignore-prepublish', '--no-progress', '--no-save',
)
lang_base.setup_cmd(prefix, local_install_cmd)
| {"golden_diff": "diff --git a/pre_commit/languages/node.py b/pre_commit/languages/node.py\n--- a/pre_commit/languages/node.py\n+++ b/pre_commit/languages/node.py\n@@ -93,7 +93,7 @@\n # install as if we installed from git\n \n local_install_cmd = (\n- 'npm', 'install', '--dev', '--prod',\n+ 'npm', 'install', '--include=dev', '--include=prod',\n '--ignore-prepublish', '--no-progress', '--no-save',\n )\n lang_base.setup_cmd(prefix, local_install_cmd)\n", "issue": "Use of --dev deprecated for npm\nI'm seeing this warning sometimes (output seems to be hidden unless the install fails):\r\n\r\n```\r\nnpm WARN install Usage of the `--dev` option is deprecated. Use `--include=dev` instead.\r\n```\r\n\r\nWhich seems to be because of this:\r\n\r\nhttps://github.com/pre-commit/pre-commit/blob/fe436f1eb09dfdd67032b4f9f8dfa543fb99cf06/pre_commit/languages/node.py#L104\r\n\r\nThe problem with this command was that it installed dependencies recursively, rendering them useless (AFAICT, not a node expert). The developers decided it was only a footgun in https://github.com/npm/npm/issues/5554#issuecomment-56121953 and deprecated in https://github.com/npm/npm/issues/6200.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport contextlib\nimport functools\nimport os\nimport sys\nfrom typing import Generator\nfrom typing import Sequence\n\nimport pre_commit.constants as C\nfrom pre_commit import lang_base\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages.python import bin_dir\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import rmtree\n\nENVIRONMENT_DIR = 'node_env'\nrun_hook = lang_base.basic_run_hook\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n # nodeenv does not yet support `-n system` on windows\n if sys.platform == 'win32':\n return C.DEFAULT\n # if node is already installed, we can save a bunch of setup time by\n # using the installed version\n elif all(lang_base.exe_exists(exe) for exe in ('node', 'npm')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef get_env_patch(venv: str) -> PatchesT:\n if sys.platform == 'cygwin': # pragma: no cover\n _, win_venv, _ = cmd_output('cygpath', '-w', venv)\n install_prefix = fr'{win_venv.strip()}\\bin'\n lib_dir = 'lib'\n elif sys.platform == 'win32': # pragma: no cover\n install_prefix = bin_dir(venv)\n lib_dir = 'Scripts'\n else: # pragma: win32 no cover\n install_prefix = venv\n lib_dir = 'lib'\n return (\n ('NODE_VIRTUAL_ENV', venv),\n ('NPM_CONFIG_PREFIX', install_prefix),\n ('npm_config_prefix', install_prefix),\n ('NPM_CONFIG_USERCONFIG', UNSET),\n ('npm_config_userconfig', UNSET),\n ('NODE_PATH', os.path.join(venv, lib_dir, 'node_modules')),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef health_check(prefix: Prefix, version: str) -> str | None:\n with in_env(prefix, version):\n retcode, _, _ = cmd_output_b('node', '--version', check=False)\n if retcode != 0: # pragma: win32 no cover\n return f'`node --version` returned {retcode}'\n else:\n return None\n\n\ndef install_environment(\n prefix: Prefix, version: str, 
additional_dependencies: Sequence[str],\n) -> None:\n assert prefix.exists('package.json')\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n\n # https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx?f=255&MSPPError=-2147217396#maxpath\n if sys.platform == 'win32': # pragma: no cover\n envdir = fr'\\\\?\\{os.path.normpath(envdir)}'\n cmd = [sys.executable, '-mnodeenv', '--prebuilt', '--clean-src', envdir]\n if version != C.DEFAULT:\n cmd.extend(['-n', version])\n cmd_output_b(*cmd)\n\n with in_env(prefix, version):\n # https://npm.community/t/npm-install-g-git-vs-git-clone-cd-npm-install-g/5449\n # install as if we installed from git\n\n local_install_cmd = (\n 'npm', 'install', '--dev', '--prod',\n '--ignore-prepublish', '--no-progress', '--no-save',\n )\n lang_base.setup_cmd(prefix, local_install_cmd)\n\n _, pkg, _ = cmd_output('npm', 'pack', cwd=prefix.prefix_dir)\n pkg = prefix.path(pkg.strip())\n\n install = ('npm', 'install', '-g', pkg, *additional_dependencies)\n lang_base.setup_cmd(prefix, install)\n\n # clean these up after installation\n if prefix.exists('node_modules'): # pragma: win32 no cover\n rmtree(prefix.path('node_modules'))\n os.remove(pkg)\n", "path": "pre_commit/languages/node.py"}]} | 1,950 | 123 |
gh_patches_debug_7242 | rasdani/github-patches | git_diff | dmlc__dgl-1183 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
build_graph in MovieLens class
## ❓ Questions and Help
Dear Quan (Andy) Gan,
I have a question about the method build_graph in the MovieLens class.
I see that the dataset is processed and you consider all the User and Movie features when generating the node features of the graph. In particular, you assign node ids with this scheme:
from [ 0 : Length(Users) ] -> Features of User (encoded)
from [ Length(Users) : Length(Users) + Length(Movies) ] -> Features of Movies (encoded)
The problem arises when you consider the movie titles, because the code contains the following (line 158):
for i, tw in enumerate(tqdm.tqdm(title_words)):
    g.ndata['title'][i, [vocab_invmap[w] for w in tw]] = 1
which would mean that the encoded titles are written to nodes [0 : Len(Movies)], i.e. to rows that actually belong to User nodes.
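For concreteness, this is the indexing I would have expected for the titles (just a sketch, reusing the names from build_graph and assuming movie rows start at an offset of len(user_ids)):

```python
# sketch: write each bag-of-words row into the movie's slot, after the user block
for i, tw in enumerate(title_words):
    g.ndata['title'][len(user_ids) + i, [vocab_invmap[w] for w in tw]] = 1
```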
Maybe I didn't understand the logic behind this. Is this correct, or am I missing something?
Thank you,
Best regards
Francesco
</issue>
<code>
[start of examples/pytorch/recommendation/rec/datasets/movielens.py]
1 import pandas as pd
2 import dgl
3 import os
4 import torch
5 import numpy as np
6 import scipy.sparse as sp
7 import time
8 from functools import partial
9 from .. import randomwalk
10 import stanfordnlp
11 import re
12 import tqdm
13 import string
14
15 class MovieLens(object):
16 def __init__(self, directory):
17 '''
18 directory: path to movielens directory which should have the three
19 files:
20 users.dat
21 movies.dat
22 ratings.dat
23 '''
24 self.directory = directory
25
26 users = []
27 movies = []
28 ratings = []
29
30 # read users
31 with open(os.path.join(directory, 'users.dat')) as f:
32 for l in f:
33 id_, gender, age, occupation, zip_ = l.strip().split('::')
34 users.append({
35 'id': int(id_),
36 'gender': gender,
37 'age': age,
38 'occupation': occupation,
39 'zip': zip_,
40 })
41 self.users = pd.DataFrame(users).set_index('id').astype('category')
42
43 # read movies
44 with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:
45 for l in f:
46 id_, title, genres = l.strip().split('::')
47 genres_set = set(genres.split('|'))
48
49 # extract year
50 assert re.match(r'.*\([0-9]{4}\)$', title)
51 year = title[-5:-1]
52 title = title[:-6].strip()
53
54 data = {'id': int(id_), 'title': title, 'year': year}
55 for g in genres_set:
56 data[g] = True
57 movies.append(data)
58 self.movies = (
59 pd.DataFrame(movies)
60 .set_index('id')
61 .fillna(False)
62 .astype({'year': 'category'}))
63 self.genres = self.movies.columns[self.movies.dtypes == bool]
64
65 # read ratings
66 with open(os.path.join(directory, 'ratings.dat')) as f:
67 for l in f:
68 user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]
69 ratings.append({
70 'user_id': user_id,
71 'movie_id': movie_id,
72 'rating': rating,
73 'timestamp': timestamp,
74 })
75 ratings = pd.DataFrame(ratings)
76 movie_count = ratings['movie_id'].value_counts()
77 movie_count.name = 'movie_count'
78 ratings = ratings.join(movie_count, on='movie_id')
79 self.ratings = ratings
80
81 # drop users and movies which do not exist in ratings
82 self.users = self.users[self.users.index.isin(self.ratings['user_id'])]
83 self.movies = self.movies[self.movies.index.isin(self.ratings['movie_id'])]
84
85 self.data_split()
86 self.build_graph()
87
88 def split_user(self, df, filter_counts=False):
89 df_new = df.copy()
90 df_new['prob'] = 0
91
92 if filter_counts:
93 df_new_sub = (df_new['movie_count'] >= 10).nonzero()[0]
94 else:
95 df_new_sub = df_new['train'].nonzero()[0]
96 prob = np.linspace(0, 1, df_new_sub.shape[0], endpoint=False)
97 np.random.shuffle(prob)
98 df_new['prob'].iloc[df_new_sub] = prob
99 return df_new
100
101 def data_split(self):
102 self.ratings = self.ratings.groupby('user_id', group_keys=False).apply(
103 partial(self.split_user, filter_counts=True))
104 self.ratings['train'] = self.ratings['prob'] <= 0.8
105 self.ratings['valid'] = (self.ratings['prob'] > 0.8) & (self.ratings['prob'] <= 0.9)
106 self.ratings['test'] = self.ratings['prob'] > 0.9
107 self.ratings.drop(['prob'], axis=1, inplace=True)
108
109 def build_graph(self):
110 user_ids = list(self.users.index)
111 movie_ids = list(self.movies.index)
112 user_ids_invmap = {id_: i for i, id_ in enumerate(user_ids)}
113 movie_ids_invmap = {id_: i for i, id_ in enumerate(movie_ids)}
114 self.user_ids = user_ids
115 self.movie_ids = movie_ids
116 self.user_ids_invmap = user_ids_invmap
117 self.movie_ids_invmap = movie_ids_invmap
118
119 g = dgl.DGLGraph()
120 g.add_nodes(len(user_ids) + len(movie_ids))
121
122 # user features
123 for user_column in self.users.columns:
124 udata = torch.zeros(g.number_of_nodes(), dtype=torch.int64)
125 # 0 for padding
126 udata[:len(user_ids)] = \
127 torch.LongTensor(self.users[user_column].cat.codes.values.astype('int64') + 1)
128 g.ndata[user_column] = udata
129
130 # movie genre
131 movie_genres = torch.from_numpy(self.movies[self.genres].values.astype('float32'))
132 g.ndata['genre'] = torch.zeros(g.number_of_nodes(), len(self.genres))
133 g.ndata['genre'][len(user_ids):len(user_ids) + len(movie_ids)] = movie_genres
134
135 # movie year
136 g.ndata['year'] = torch.zeros(g.number_of_nodes(), dtype=torch.int64)
137 # 0 for padding
138 g.ndata['year'][len(user_ids):len(user_ids) + len(movie_ids)] = \
139 torch.LongTensor(self.movies['year'].cat.codes.values.astype('int64') + 1)
140
141 # movie title
142 nlp = stanfordnlp.Pipeline(use_gpu=False, processors='tokenize,lemma')
143 vocab = set()
144 title_words = []
145 for t in tqdm.tqdm(self.movies['title'].values):
146 doc = nlp(t)
147 words = set()
148 for s in doc.sentences:
149 words.update(w.lemma.lower() for w in s.words
150 if not re.fullmatch(r'['+string.punctuation+']+', w.lemma))
151 vocab.update(words)
152 title_words.append(words)
153 vocab = list(vocab)
154 vocab_invmap = {w: i for i, w in enumerate(vocab)}
155 # bag-of-words
156 g.ndata['title'] = torch.zeros(g.number_of_nodes(), len(vocab))
157 for i, tw in enumerate(tqdm.tqdm(title_words)):
158 g.ndata['title'][i, [vocab_invmap[w] for w in tw]] = 1
159 self.vocab = vocab
160 self.vocab_invmap = vocab_invmap
161
162 rating_user_vertices = [user_ids_invmap[id_] for id_ in self.ratings['user_id'].values]
163 rating_movie_vertices = [movie_ids_invmap[id_] + len(user_ids)
164 for id_ in self.ratings['movie_id'].values]
165 self.rating_user_vertices = rating_user_vertices
166 self.rating_movie_vertices = rating_movie_vertices
167
168 g.add_edges(
169 rating_user_vertices,
170 rating_movie_vertices,
171 data={'inv': torch.zeros(self.ratings.shape[0], dtype=torch.uint8)})
172 g.add_edges(
173 rating_movie_vertices,
174 rating_user_vertices,
175 data={'inv': torch.ones(self.ratings.shape[0], dtype=torch.uint8)})
176 self.g = g
177
178 def generate_mask(self):
179 while True:
180 ratings = self.ratings.groupby('user_id', group_keys=False).apply(self.split_user)
181 prior_prob = ratings['prob'].values
182 for i in range(5):
183 train_mask = (prior_prob >= 0.2 * i) & (prior_prob < 0.2 * (i + 1))
184 prior_mask = ~train_mask
185 train_mask &= ratings['train'].values
186 prior_mask &= ratings['train'].values
187 yield prior_mask, train_mask
188
189 def refresh_mask(self):
190 if not hasattr(self, 'masks'):
191 self.masks = self.generate_mask()
192 prior_mask, train_mask = next(self.masks)
193
194 valid_tensor = torch.from_numpy(self.ratings['valid'].values.astype('uint8'))
195 test_tensor = torch.from_numpy(self.ratings['test'].values.astype('uint8'))
196 train_tensor = torch.from_numpy(train_mask.astype('uint8'))
197 prior_tensor = torch.from_numpy(prior_mask.astype('uint8'))
198 edge_data = {
199 'prior': prior_tensor,
200 'valid': valid_tensor,
201 'test': test_tensor,
202 'train': train_tensor,
203 }
204
205 self.g.edges[self.rating_user_vertices, self.rating_movie_vertices].data.update(edge_data)
206 self.g.edges[self.rating_movie_vertices, self.rating_user_vertices].data.update(edge_data)
207
[end of examples/pytorch/recommendation/rec/datasets/movielens.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/pytorch/recommendation/rec/datasets/movielens.py b/examples/pytorch/recommendation/rec/datasets/movielens.py
--- a/examples/pytorch/recommendation/rec/datasets/movielens.py
+++ b/examples/pytorch/recommendation/rec/datasets/movielens.py
@@ -155,7 +155,7 @@
# bag-of-words
g.ndata['title'] = torch.zeros(g.number_of_nodes(), len(vocab))
for i, tw in enumerate(tqdm.tqdm(title_words)):
- g.ndata['title'][i, [vocab_invmap[w] for w in tw]] = 1
+ g.ndata['title'][len(user_ids) + i, [vocab_invmap[w] for w in tw]] = 1
self.vocab = vocab
self.vocab_invmap = vocab_invmap
| {"golden_diff": "diff --git a/examples/pytorch/recommendation/rec/datasets/movielens.py b/examples/pytorch/recommendation/rec/datasets/movielens.py\n--- a/examples/pytorch/recommendation/rec/datasets/movielens.py\n+++ b/examples/pytorch/recommendation/rec/datasets/movielens.py\n@@ -155,7 +155,7 @@\n # bag-of-words\n g.ndata['title'] = torch.zeros(g.number_of_nodes(), len(vocab))\n for i, tw in enumerate(tqdm.tqdm(title_words)):\n- g.ndata['title'][i, [vocab_invmap[w] for w in tw]] = 1\n+ g.ndata['title'][len(user_ids) + i, [vocab_invmap[w] for w in tw]] = 1\n self.vocab = vocab\n self.vocab_invmap = vocab_invmap\n", "issue": "build_graph in MovieLens class \n## \u2753 Questions and Help\r\n\r\nDear Quan (Andy) Gan,\r\n\r\ni have a question about the method build_graph in the MovieLens class.\r\nI see that the dataset is processed and you consider all the User and Movie features in generating the node features of the graph. In particular, you assign id of the node with this scheme: \r\n\r\nfrom [ 0 : Length(Users) ] -> Features of User (encoded) \r\nfrom [ Length(Users) : Length(Users) + Length(Movies) ] -> Features of Movies (encoded) \r\n\r\nThe problem comes out when you consider the Title of the movies because in the code there is the following (line 158): \r\n\r\n for i, tw in enumerate(tqdm.tqdm(title_words)):\r\n g.ndata['title'][i, [vocab_invmap[w] for w in tw]] = 1\r\n\r\nwhich would mean that the encoded title would be assigned to the nodes related to the Users\r\n[0 : Len(Movies)]\r\nMaybe i didn't understand the logic behind this, is this correct ? or am i missing something? \r\n\r\nThank you,\r\nBest regards\r\nFrancesco\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import pandas as pd\nimport dgl\nimport os\nimport torch\nimport numpy as np\nimport scipy.sparse as sp\nimport time\nfrom functools import partial\nfrom .. 
import randomwalk\nimport stanfordnlp\nimport re\nimport tqdm\nimport string\n\nclass MovieLens(object):\n def __init__(self, directory):\n '''\n directory: path to movielens directory which should have the three\n files:\n users.dat\n movies.dat\n ratings.dat\n '''\n self.directory = directory\n\n users = []\n movies = []\n ratings = []\n\n # read users\n with open(os.path.join(directory, 'users.dat')) as f:\n for l in f:\n id_, gender, age, occupation, zip_ = l.strip().split('::')\n users.append({\n 'id': int(id_),\n 'gender': gender,\n 'age': age,\n 'occupation': occupation,\n 'zip': zip_,\n })\n self.users = pd.DataFrame(users).set_index('id').astype('category')\n\n # read movies\n with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:\n for l in f:\n id_, title, genres = l.strip().split('::')\n genres_set = set(genres.split('|'))\n\n # extract year\n assert re.match(r'.*\\([0-9]{4}\\)$', title)\n year = title[-5:-1]\n title = title[:-6].strip()\n\n data = {'id': int(id_), 'title': title, 'year': year}\n for g in genres_set:\n data[g] = True\n movies.append(data)\n self.movies = (\n pd.DataFrame(movies)\n .set_index('id')\n .fillna(False)\n .astype({'year': 'category'}))\n self.genres = self.movies.columns[self.movies.dtypes == bool]\n\n # read ratings\n with open(os.path.join(directory, 'ratings.dat')) as f:\n for l in f:\n user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]\n ratings.append({\n 'user_id': user_id,\n 'movie_id': movie_id,\n 'rating': rating,\n 'timestamp': timestamp,\n })\n ratings = pd.DataFrame(ratings)\n movie_count = ratings['movie_id'].value_counts()\n movie_count.name = 'movie_count'\n ratings = ratings.join(movie_count, on='movie_id')\n self.ratings = ratings\n\n # drop users and movies which do not exist in ratings\n self.users = self.users[self.users.index.isin(self.ratings['user_id'])]\n self.movies = self.movies[self.movies.index.isin(self.ratings['movie_id'])]\n\n self.data_split()\n self.build_graph()\n\n def split_user(self, df, filter_counts=False):\n df_new = df.copy()\n df_new['prob'] = 0\n\n if filter_counts:\n df_new_sub = (df_new['movie_count'] >= 10).nonzero()[0]\n else:\n df_new_sub = df_new['train'].nonzero()[0]\n prob = np.linspace(0, 1, df_new_sub.shape[0], endpoint=False)\n np.random.shuffle(prob)\n df_new['prob'].iloc[df_new_sub] = prob\n return df_new\n\n def data_split(self):\n self.ratings = self.ratings.groupby('user_id', group_keys=False).apply(\n partial(self.split_user, filter_counts=True))\n self.ratings['train'] = self.ratings['prob'] <= 0.8\n self.ratings['valid'] = (self.ratings['prob'] > 0.8) & (self.ratings['prob'] <= 0.9)\n self.ratings['test'] = self.ratings['prob'] > 0.9\n self.ratings.drop(['prob'], axis=1, inplace=True)\n\n def build_graph(self):\n user_ids = list(self.users.index)\n movie_ids = list(self.movies.index)\n user_ids_invmap = {id_: i for i, id_ in enumerate(user_ids)}\n movie_ids_invmap = {id_: i for i, id_ in enumerate(movie_ids)}\n self.user_ids = user_ids\n self.movie_ids = movie_ids\n self.user_ids_invmap = user_ids_invmap\n self.movie_ids_invmap = movie_ids_invmap\n\n g = dgl.DGLGraph()\n g.add_nodes(len(user_ids) + len(movie_ids))\n\n # user features\n for user_column in self.users.columns:\n udata = torch.zeros(g.number_of_nodes(), dtype=torch.int64)\n # 0 for padding\n udata[:len(user_ids)] = \\\n torch.LongTensor(self.users[user_column].cat.codes.values.astype('int64') + 1)\n g.ndata[user_column] = udata\n\n # movie genre\n movie_genres = 
torch.from_numpy(self.movies[self.genres].values.astype('float32'))\n g.ndata['genre'] = torch.zeros(g.number_of_nodes(), len(self.genres))\n g.ndata['genre'][len(user_ids):len(user_ids) + len(movie_ids)] = movie_genres\n\n # movie year\n g.ndata['year'] = torch.zeros(g.number_of_nodes(), dtype=torch.int64)\n # 0 for padding\n g.ndata['year'][len(user_ids):len(user_ids) + len(movie_ids)] = \\\n torch.LongTensor(self.movies['year'].cat.codes.values.astype('int64') + 1)\n\n # movie title\n nlp = stanfordnlp.Pipeline(use_gpu=False, processors='tokenize,lemma')\n vocab = set()\n title_words = []\n for t in tqdm.tqdm(self.movies['title'].values):\n doc = nlp(t)\n words = set()\n for s in doc.sentences:\n words.update(w.lemma.lower() for w in s.words\n if not re.fullmatch(r'['+string.punctuation+']+', w.lemma))\n vocab.update(words)\n title_words.append(words)\n vocab = list(vocab)\n vocab_invmap = {w: i for i, w in enumerate(vocab)}\n # bag-of-words\n g.ndata['title'] = torch.zeros(g.number_of_nodes(), len(vocab))\n for i, tw in enumerate(tqdm.tqdm(title_words)):\n g.ndata['title'][i, [vocab_invmap[w] for w in tw]] = 1\n self.vocab = vocab\n self.vocab_invmap = vocab_invmap\n\n rating_user_vertices = [user_ids_invmap[id_] for id_ in self.ratings['user_id'].values]\n rating_movie_vertices = [movie_ids_invmap[id_] + len(user_ids)\n for id_ in self.ratings['movie_id'].values]\n self.rating_user_vertices = rating_user_vertices\n self.rating_movie_vertices = rating_movie_vertices\n\n g.add_edges(\n rating_user_vertices,\n rating_movie_vertices,\n data={'inv': torch.zeros(self.ratings.shape[0], dtype=torch.uint8)})\n g.add_edges(\n rating_movie_vertices,\n rating_user_vertices,\n data={'inv': torch.ones(self.ratings.shape[0], dtype=torch.uint8)})\n self.g = g\n\n def generate_mask(self):\n while True:\n ratings = self.ratings.groupby('user_id', group_keys=False).apply(self.split_user)\n prior_prob = ratings['prob'].values\n for i in range(5):\n train_mask = (prior_prob >= 0.2 * i) & (prior_prob < 0.2 * (i + 1))\n prior_mask = ~train_mask\n train_mask &= ratings['train'].values\n prior_mask &= ratings['train'].values\n yield prior_mask, train_mask\n\n def refresh_mask(self):\n if not hasattr(self, 'masks'):\n self.masks = self.generate_mask()\n prior_mask, train_mask = next(self.masks)\n\n valid_tensor = torch.from_numpy(self.ratings['valid'].values.astype('uint8'))\n test_tensor = torch.from_numpy(self.ratings['test'].values.astype('uint8'))\n train_tensor = torch.from_numpy(train_mask.astype('uint8'))\n prior_tensor = torch.from_numpy(prior_mask.astype('uint8'))\n edge_data = {\n 'prior': prior_tensor,\n 'valid': valid_tensor,\n 'test': test_tensor,\n 'train': train_tensor,\n }\n\n self.g.edges[self.rating_user_vertices, self.rating_movie_vertices].data.update(edge_data)\n self.g.edges[self.rating_movie_vertices, self.rating_user_vertices].data.update(edge_data)\n", "path": "examples/pytorch/recommendation/rec/datasets/movielens.py"}]} | 3,199 | 195 |
gh_patches_debug_42163 | rasdani/github-patches | git_diff | cupy__cupy-2290 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`TestNpz.test_dump` test failure
https://jenkins.preferred.jp/job/chainer/job/cupy_pr/161/TEST=cupy-py3,label=mn1-p100/console
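For context, the failure pattern is consistent with NumPy 1.16.3+ changing the default of `allow_pickle` in `numpy.load` to `False`, so loading pickled object arrays now requires opting in explicitly. A minimal sketch, assuming that is the trigger:

```python
import numpy
# sketch: newer NumPy refuses pickled object arrays unless explicitly allowed
obj = numpy.load("dump.npz", allow_pickle=True)
```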
</issue>
<code>
[start of cupy/io/npz.py]
1 import numpy
2
3 import cupy
4
5
6 class NpzFile(object):
7
8 def __init__(self, npz_file):
9 self.npz_file = npz_file
10
11 def __enter__(self):
12 self.npz_file.__enter__()
13 return self
14
15 def __exit__(self, typ, val, traceback):
16 self.npz_file.__exit__(typ, val, traceback)
17
18 def __getitem__(self, key):
19 arr = self.npz_file[key]
20 return cupy.array(arr)
21
22 def close(self):
23 self.npz_file.close()
24
25
26 def load(file, mmap_mode=None):
27 """Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.
28
29 This function just calls ``numpy.load`` and then sends the arrays to the
30 current device. NPZ file is converted to NpzFile object, which defers the
31 transfer to the time of accessing the items.
32
33 Args:
34 file (file-like object or string): The file to read.
35 mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the
36 file to construct an intermediate :class:`numpy.ndarray` object and
37 transfer it to the current device.
38
39 Returns:
40 CuPy array or NpzFile object depending on the type of the file. NpzFile
41 object is a dictionary-like object with the context manager protocol
42 (which enables us to use *with* statement on it).
43
44 .. seealso:: :func:`numpy.load`
45
46 """
47 obj = numpy.load(file, mmap_mode)
48 if isinstance(obj, numpy.ndarray):
49 return cupy.array(obj)
50 elif isinstance(obj, numpy.lib.npyio.NpzFile):
51 return NpzFile(obj)
52 else:
53 return obj
54
55
56 def save(file, arr):
57 """Saves an array to a binary file in ``.npy`` format.
58
59 Args:
60 file (file or str): File or filename to save.
61 arr (array_like): Array to save. It should be able to feed to
62 :func:`cupy.asnumpy`.
63
64 .. seealso:: :func:`numpy.save`
65
66 """
67 numpy.save(file, cupy.asnumpy(arr))
68
69
70 def savez(file, *args, **kwds):
71 """Saves one or more arrays into a file in uncompressed ``.npz`` format.
72
73 Arguments without keys are treated as arguments with automatic keys named
74 ``arr_0``, ``arr_1``, etc. corresponding to the positions in the argument
75 list. The keys of arguments are used as keys in the ``.npz`` file, which
76 are used for accessing NpzFile object when the file is read by
77 :func:`cupy.load` function.
78
79 Args:
80 file (file or str): File or filename to save.
81 *args: Arrays with implicit keys.
82 **kwds: Arrays with explicit keys.
83
84 .. seealso:: :func:`numpy.savez`
85
86 """
87 args = map(cupy.asnumpy, args)
88 for key in kwds:
89 kwds[key] = cupy.asnumpy(kwds[key])
90 numpy.savez(file, *args, **kwds)
91
92
93 def savez_compressed(file, *args, **kwds):
94 """Saves one or more arrays into a file in compressed ``.npz`` format.
95
96 It is equivalent to :func:`cupy.savez` function except the output file is
97 compressed.
98
99 .. seealso::
100 :func:`cupy.savez` for more detail,
101 :func:`numpy.savez_compressed`
102
103 """
104 args = map(cupy.asnumpy, args)
105 for key in kwds:
106 kwds[key] = cupy.asnumpy(kwds[key])
107 numpy.savez_compressed(file, *args, **kwds)
108
[end of cupy/io/npz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/io/npz.py b/cupy/io/npz.py
--- a/cupy/io/npz.py
+++ b/cupy/io/npz.py
@@ -1,8 +1,13 @@
+import warnings
+
import numpy
import cupy
+_support_allow_pickle = (numpy.lib.NumpyVersion(numpy.__version__) >= '1.10.0')
+
+
class NpzFile(object):
def __init__(self, npz_file):
@@ -23,7 +28,7 @@
self.npz_file.close()
-def load(file, mmap_mode=None):
+def load(file, mmap_mode=None, allow_pickle=None):
"""Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.
This function just calls ``numpy.load`` and then sends the arrays to the
@@ -35,6 +40,16 @@
mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the
file to construct an intermediate :class:`numpy.ndarray` object and
transfer it to the current device.
+ allow_pickle (bool): Allow loading pickled object arrays stored in npy
+ files. Reasons for disallowing pickles include security, as
+ loading pickled data can execute arbitrary code. If pickles are
+ disallowed, loading object arrays will fail.
+ Please be aware that CuPy does not support arrays with dtype of
+ `object`.
+ The default is False.
+ This option is available only for NumPy 1.10 or later.
+ In NumPy 1.9, this option cannot be specified (loading pickled
+ objects is always allowed).
Returns:
CuPy array or NpzFile object depending on the type of the file. NpzFile
@@ -44,7 +59,14 @@
.. seealso:: :func:`numpy.load`
"""
- obj = numpy.load(file, mmap_mode)
+ if _support_allow_pickle:
+ allow_pickle = False if allow_pickle is None else allow_pickle
+ obj = numpy.load(file, mmap_mode, allow_pickle)
+ else:
+ if allow_pickle is not None:
+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')
+ obj = numpy.load(file, mmap_mode)
+
if isinstance(obj, numpy.ndarray):
return cupy.array(obj)
elif isinstance(obj, numpy.lib.npyio.NpzFile):
@@ -53,18 +75,35 @@
return obj
-def save(file, arr):
+def save(file, arr, allow_pickle=None):
"""Saves an array to a binary file in ``.npy`` format.
Args:
file (file or str): File or filename to save.
arr (array_like): Array to save. It should be able to feed to
:func:`cupy.asnumpy`.
+ allow_pickle (bool): Allow saving object arrays using Python pickles.
+ Reasons for disallowing pickles include security (loading pickled
+ data can execute arbitrary code) and portability (pickled objects
+ may not be loadable on different Python installations, for example
+ if the stored objects require libraries that are not available,
+ and not all pickled data is compatible between Python 2 and Python
+ 3).
+ The default is True.
+ This option is available only for NumPy 1.10 or later.
+ In NumPy 1.9, this option cannot be specified (saving objects
+ using pickles is always allowed).
.. seealso:: :func:`numpy.save`
"""
- numpy.save(file, cupy.asnumpy(arr))
+ if _support_allow_pickle:
+ allow_pickle = True if allow_pickle is None else allow_pickle
+ numpy.save(file, cupy.asnumpy(arr), allow_pickle)
+ else:
+ if allow_pickle is not None:
+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')
+ numpy.save(file, cupy.asnumpy(arr))
def savez(file, *args, **kwds):
| {"golden_diff": "diff --git a/cupy/io/npz.py b/cupy/io/npz.py\n--- a/cupy/io/npz.py\n+++ b/cupy/io/npz.py\n@@ -1,8 +1,13 @@\n+import warnings\n+\n import numpy\n \n import cupy\n \n \n+_support_allow_pickle = (numpy.lib.NumpyVersion(numpy.__version__) >= '1.10.0')\n+\n+\n class NpzFile(object):\n \n def __init__(self, npz_file):\n@@ -23,7 +28,7 @@\n self.npz_file.close()\n \n \n-def load(file, mmap_mode=None):\n+def load(file, mmap_mode=None, allow_pickle=None):\n \"\"\"Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.\n \n This function just calls ``numpy.load`` and then sends the arrays to the\n@@ -35,6 +40,16 @@\n mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the\n file to construct an intermediate :class:`numpy.ndarray` object and\n transfer it to the current device.\n+ allow_pickle (bool): Allow loading pickled object arrays stored in npy\n+ files. Reasons for disallowing pickles include security, as\n+ loading pickled data can execute arbitrary code. If pickles are\n+ disallowed, loading object arrays will fail.\n+ Please be aware that CuPy does not support arrays with dtype of\n+ `object`.\n+ The default is False.\n+ This option is available only for NumPy 1.10 or later.\n+ In NumPy 1.9, this option cannot be specified (loading pickled\n+ objects is always allowed).\n \n Returns:\n CuPy array or NpzFile object depending on the type of the file. NpzFile\n@@ -44,7 +59,14 @@\n .. seealso:: :func:`numpy.load`\n \n \"\"\"\n- obj = numpy.load(file, mmap_mode)\n+ if _support_allow_pickle:\n+ allow_pickle = False if allow_pickle is None else allow_pickle\n+ obj = numpy.load(file, mmap_mode, allow_pickle)\n+ else:\n+ if allow_pickle is not None:\n+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')\n+ obj = numpy.load(file, mmap_mode)\n+\n if isinstance(obj, numpy.ndarray):\n return cupy.array(obj)\n elif isinstance(obj, numpy.lib.npyio.NpzFile):\n@@ -53,18 +75,35 @@\n return obj\n \n \n-def save(file, arr):\n+def save(file, arr, allow_pickle=None):\n \"\"\"Saves an array to a binary file in ``.npy`` format.\n \n Args:\n file (file or str): File or filename to save.\n arr (array_like): Array to save. It should be able to feed to\n :func:`cupy.asnumpy`.\n+ allow_pickle (bool): Allow saving object arrays using Python pickles.\n+ Reasons for disallowing pickles include security (loading pickled\n+ data can execute arbitrary code) and portability (pickled objects\n+ may not be loadable on different Python installations, for example\n+ if the stored objects require libraries that are not available,\n+ and not all pickled data is compatible between Python 2 and Python\n+ 3).\n+ The default is True.\n+ This option is available only for NumPy 1.10 or later.\n+ In NumPy 1.9, this option cannot be specified (saving objects\n+ using pickles is always allowed).\n \n .. 
seealso:: :func:`numpy.save`\n \n \"\"\"\n- numpy.save(file, cupy.asnumpy(arr))\n+ if _support_allow_pickle:\n+ allow_pickle = True if allow_pickle is None else allow_pickle\n+ numpy.save(file, cupy.asnumpy(arr), allow_pickle)\n+ else:\n+ if allow_pickle is not None:\n+ warnings.warn('allow_pickle option is not supported in NumPy 1.9')\n+ numpy.save(file, cupy.asnumpy(arr))\n \n \n def savez(file, *args, **kwds):\n", "issue": "`TestNpz.test_dump` test failure\nhttps://jenkins.preferred.jp/job/chainer/job/cupy_pr/161/TEST=cupy-py3,label=mn1-p100/console\r\n\n", "before_files": [{"content": "import numpy\n\nimport cupy\n\n\nclass NpzFile(object):\n\n def __init__(self, npz_file):\n self.npz_file = npz_file\n\n def __enter__(self):\n self.npz_file.__enter__()\n return self\n\n def __exit__(self, typ, val, traceback):\n self.npz_file.__exit__(typ, val, traceback)\n\n def __getitem__(self, key):\n arr = self.npz_file[key]\n return cupy.array(arr)\n\n def close(self):\n self.npz_file.close()\n\n\ndef load(file, mmap_mode=None):\n \"\"\"Loads arrays or pickled objects from ``.npy``, ``.npz`` or pickled file.\n\n This function just calls ``numpy.load`` and then sends the arrays to the\n current device. NPZ file is converted to NpzFile object, which defers the\n transfer to the time of accessing the items.\n\n Args:\n file (file-like object or string): The file to read.\n mmap_mode (None, 'r+', 'r', 'w+', 'c'): If not ``None``, memory-map the\n file to construct an intermediate :class:`numpy.ndarray` object and\n transfer it to the current device.\n\n Returns:\n CuPy array or NpzFile object depending on the type of the file. NpzFile\n object is a dictionary-like object with the context manager protocol\n (which enables us to use *with* statement on it).\n\n .. seealso:: :func:`numpy.load`\n\n \"\"\"\n obj = numpy.load(file, mmap_mode)\n if isinstance(obj, numpy.ndarray):\n return cupy.array(obj)\n elif isinstance(obj, numpy.lib.npyio.NpzFile):\n return NpzFile(obj)\n else:\n return obj\n\n\ndef save(file, arr):\n \"\"\"Saves an array to a binary file in ``.npy`` format.\n\n Args:\n file (file or str): File or filename to save.\n arr (array_like): Array to save. It should be able to feed to\n :func:`cupy.asnumpy`.\n\n .. seealso:: :func:`numpy.save`\n\n \"\"\"\n numpy.save(file, cupy.asnumpy(arr))\n\n\ndef savez(file, *args, **kwds):\n \"\"\"Saves one or more arrays into a file in uncompressed ``.npz`` format.\n\n Arguments without keys are treated as arguments with automatic keys named\n ``arr_0``, ``arr_1``, etc. corresponding to the positions in the argument\n list. The keys of arguments are used as keys in the ``.npz`` file, which\n are used for accessing NpzFile object when the file is read by\n :func:`cupy.load` function.\n\n Args:\n file (file or str): File or filename to save.\n *args: Arrays with implicit keys.\n **kwds: Arrays with explicit keys.\n\n .. seealso:: :func:`numpy.savez`\n\n \"\"\"\n args = map(cupy.asnumpy, args)\n for key in kwds:\n kwds[key] = cupy.asnumpy(kwds[key])\n numpy.savez(file, *args, **kwds)\n\n\ndef savez_compressed(file, *args, **kwds):\n \"\"\"Saves one or more arrays into a file in compressed ``.npz`` format.\n\n It is equivalent to :func:`cupy.savez` function except the output file is\n compressed.\n\n .. 
seealso::\n :func:`cupy.savez` for more detail,\n :func:`numpy.savez_compressed`\n\n \"\"\"\n args = map(cupy.asnumpy, args)\n for key in kwds:\n kwds[key] = cupy.asnumpy(kwds[key])\n numpy.savez_compressed(file, *args, **kwds)\n", "path": "cupy/io/npz.py"}]} | 1,646 | 931 |
gh_patches_debug_15544 | rasdani/github-patches | git_diff | qtile__qtile-4610 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ImapWidget` may call `keyring.get_password()` with `username=None`, violating API and potentially crashing it
### Issue description
The problematic code is:
https://github.com/qtile/qtile/blob/9ccaf6f1c01a9ffbd7beacdd8f405884bd81e1c0/libqtile/widget/imapwidget.py#L78
At this point, `self.user` may be `None`. However, according to the API definition at:
https://github.com/jaraco/keyring/blob/0cebfebbf516a47e4e45911ba6b4d4dd2699845c/keyring/core.py#L54
`keyring.get_password()` expects two `str` arguments, i.e. `None` is not acceptable. If the `keyrings-alt` backend is installed, it explicitly crashes on a `None` username:
```pytb
libqtile/widget/imapwidget.py:78: in __init__
password = keyring.get_password("imapwidget", self.user)
.tox/py310-x11/lib/python3.10/site-packages/keyring/core.py:56: in get_password
return get_keyring().get_password(service_name, username)
.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:92: in get_password
assoc = self._generate_assoc(service, username)
.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:133: in _generate_assoc
return (escape_for_ini(service) + r'\0' + escape_for_ini(username)).encode()
.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/escape.py:29: in escape
return "".join(_escape_char(c) for c in value.encode('utf-8'))
E AttributeError: 'NoneType' object has no attribute 'encode'
```
To reproduce:
```
tox -e py310-x11 # you can cancel the initial test run, after dependencies are installed
. .tox/py310-x11/bin/activate
pip install imapclient keyring keyrings-alt
pytest --backend=x11
```
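A minimal sketch of the kind of guard I would expect, assuming that failing fast with a config error is acceptable when `user` is unset:

```python
from libqtile.confreader import ConfigError

# sketch: refuse to construct the widget rather than hand None to keyring
if self.user is None:
    raise ConfigError("You must set the 'user' parameter for the IMAP widget.")
password = keyring.get_password("imapwidget", self.user)
```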
### Version
0.23.1.dev83+g9ccaf6f1
### Backend
X11 (default)
### Config
_No response_
### Logs
_No response_
### Required
- [X] I have searched past issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
<code>
[start of libqtile/widget/imapwidget.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2015 David R. Andersen
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a copy
5 # of this software and associated documentation files (the "Software"), to deal
6 # in the Software without restriction, including without limitation the rights
7 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 # copies of the Software, and to permit persons to whom the Software is
9 # furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
20 # SOFTWARE.
21
22 import imaplib
23 import re
24
25 import keyring
26
27 from libqtile.log_utils import logger
28 from libqtile.widget import base
29
30
31 class ImapWidget(base.ThreadPoolText):
32 """Email IMAP widget
33
34 This widget will scan one of your imap email boxes and report the number of
35 unseen messages present. I've configured it to only work with imap with
36 ssl. Your password is obtained from the Gnome Keyring.
37
38 Writing your password to the keyring initially is as simple as (changing
39 out <userid> and <password> for your userid and password):
40
41 1) create the file ~/.local/share/python_keyring/keyringrc.cfg with the
42 following contents::
43
44 [backend]
45 default-keyring=keyring.backends.Gnome.Keyring
46 keyring-path=/home/<userid>/.local/share/keyring/
47
48
49 2) Execute the following python shell script once::
50
51 #!/usr/bin/env python3
52 import keyring
53 user = <userid>
54 password = <password>
55 keyring.set_password('imapwidget', user, password)
56
57 mbox names must include the path to the mbox (except for the default
58 INBOX). So, for example if your mailroot is ``~/Maildir``, and you want to
59 look at the mailbox at HomeMail/fred, the mbox setting would be:
60 ``mbox="~/Maildir/HomeMail/fred"``. Note the nested sets of quotes! Labels
61 can be whatever you choose, of course.
62
63 Widget requirements: keyring_.
64
65 .. _keyring: https://pypi.org/project/keyring/
66 """
67
68 defaults = [
69 ("mbox", '"INBOX"', "mailbox to fetch"),
70 ("label", "INBOX", "label for display"),
71 ("user", None, "email username"),
72 ("server", None, "email server name"),
73 ]
74
75 def __init__(self, **config):
76 base.ThreadPoolText.__init__(self, "", **config)
77 self.add_defaults(ImapWidget.defaults)
78 password = keyring.get_password("imapwidget", self.user)
79 if password is not None:
80 self.password = password
81 else:
82 logger.critical("Gnome Keyring Error")
83
84 def poll(self):
85 im = imaplib.IMAP4_SSL(self.server, 993)
86 if self.password == "Gnome Keyring Error":
87 text = "Gnome Keyring Error"
88 else:
89 im.login(self.user, self.password)
90 status, response = im.status(self.mbox, "(UNSEEN)")
91 text = response[0].decode()
92 text = self.label + ": " + re.sub(r"\).*$", "", re.sub(r"^.*N\s", "", text))
93 im.logout()
94 return text
95
[end of libqtile/widget/imapwidget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/widget/imapwidget.py b/libqtile/widget/imapwidget.py
--- a/libqtile/widget/imapwidget.py
+++ b/libqtile/widget/imapwidget.py
@@ -24,6 +24,7 @@
import keyring
+from libqtile.confreader import ConfigError
from libqtile.log_utils import logger
from libqtile.widget import base
@@ -75,6 +76,8 @@
def __init__(self, **config):
base.ThreadPoolText.__init__(self, "", **config)
self.add_defaults(ImapWidget.defaults)
+ if self.user is None:
+ raise ConfigError("You must set the 'user' parameter for the IMAP widget.")
password = keyring.get_password("imapwidget", self.user)
if password is not None:
self.password = password
| {"golden_diff": "diff --git a/libqtile/widget/imapwidget.py b/libqtile/widget/imapwidget.py\n--- a/libqtile/widget/imapwidget.py\n+++ b/libqtile/widget/imapwidget.py\n@@ -24,6 +24,7 @@\n \n import keyring\n \n+from libqtile.confreader import ConfigError\n from libqtile.log_utils import logger\n from libqtile.widget import base\n \n@@ -75,6 +76,8 @@\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(ImapWidget.defaults)\n+ if self.user is None:\n+ raise ConfigError(\"You must set the 'user' parameter for the IMAP widget.\")\n password = keyring.get_password(\"imapwidget\", self.user)\n if password is not None:\n self.password = password\n", "issue": "`ImapWidget` may call `keyring.get_password()` with `username=None`, violating API and potentially crashing it\n### Issue description\n\nThe problematic code is:\r\n\r\nhttps://github.com/qtile/qtile/blob/9ccaf6f1c01a9ffbd7beacdd8f405884bd81e1c0/libqtile/widget/imapwidget.py#L78\r\n\r\nAt this point, `self.user` may be `None`. However, according to the API definition at:\r\n\r\nhttps://github.com/jaraco/keyring/blob/0cebfebbf516a47e4e45911ba6b4d4dd2699845c/keyring/core.py#L54\r\n\r\n`keyring.get_password()` expects two `str` argument, i.e. `None` is not acceptable. If `keyrings-alt` backend is installed, then it explicitly crashes on `None` username:\r\n\r\n```pytb\r\nlibqtile/widget/imapwidget.py:78: in __init__\r\n password = keyring.get_password(\"imapwidget\", self.user)\r\n.tox/py310-x11/lib/python3.10/site-packages/keyring/core.py:56: in get_password\r\n return get_keyring().get_password(service_name, username)\r\n.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:92: in get_password\r\n assoc = self._generate_assoc(service, username)\r\n.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/file_base.py:133: in _generate_assoc\r\n return (escape_for_ini(service) + r'\\0' + escape_for_ini(username)).encode()\r\n.tox/py310-x11/lib/python3.10/site-packages/keyrings/alt/escape.py:29: in escape\r\n return \"\".join(_escape_char(c) for c in value.encode('utf-8'))\r\nE AttributeError: 'NoneType' object has no attribute 'encode'\r\n```\r\n\r\nTo reproduce:\r\n\r\n```\r\ntox -e py310-x11 # you can cancel the initial test run, after dependencies are installed\r\n. .tox/py310-x11/bin/activate\r\npip install imapclient keyring keyrings-alt\r\npytest --backend=x11\r\n```\n\n### Version\n\n0.23.1.dev83+g9ccaf6f1\n\n### Backend\n\nX11 (default)\n\n### Config\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Required\n\n- [X] I have searched past issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015 David R. 
Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport imaplib\nimport re\n\nimport keyring\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass ImapWidget(base.ThreadPoolText):\n \"\"\"Email IMAP widget\n\n This widget will scan one of your imap email boxes and report the number of\n unseen messages present. I've configured it to only work with imap with\n ssl. Your password is obtained from the Gnome Keyring.\n\n Writing your password to the keyring initially is as simple as (changing\n out <userid> and <password> for your userid and password):\n\n 1) create the file ~/.local/share/python_keyring/keyringrc.cfg with the\n following contents::\n\n [backend]\n default-keyring=keyring.backends.Gnome.Keyring\n keyring-path=/home/<userid>/.local/share/keyring/\n\n\n 2) Execute the following python shell script once::\n\n #!/usr/bin/env python3\n import keyring\n user = <userid>\n password = <password>\n keyring.set_password('imapwidget', user, password)\n\n mbox names must include the path to the mbox (except for the default\n INBOX). So, for example if your mailroot is ``~/Maildir``, and you want to\n look at the mailbox at HomeMail/fred, the mbox setting would be:\n ``mbox=\"~/Maildir/HomeMail/fred\"``. Note the nested sets of quotes! Labels\n can be whatever you choose, of course.\n\n Widget requirements: keyring_.\n\n .. _keyring: https://pypi.org/project/keyring/\n \"\"\"\n\n defaults = [\n (\"mbox\", '\"INBOX\"', \"mailbox to fetch\"),\n (\"label\", \"INBOX\", \"label for display\"),\n (\"user\", None, \"email username\"),\n (\"server\", None, \"email server name\"),\n ]\n\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(ImapWidget.defaults)\n password = keyring.get_password(\"imapwidget\", self.user)\n if password is not None:\n self.password = password\n else:\n logger.critical(\"Gnome Keyring Error\")\n\n def poll(self):\n im = imaplib.IMAP4_SSL(self.server, 993)\n if self.password == \"Gnome Keyring Error\":\n text = \"Gnome Keyring Error\"\n else:\n im.login(self.user, self.password)\n status, response = im.status(self.mbox, \"(UNSEEN)\")\n text = response[0].decode()\n text = self.label + \": \" + re.sub(r\"\\).*$\", \"\", re.sub(r\"^.*N\\s\", \"\", text))\n im.logout()\n return text\n", "path": "libqtile/widget/imapwidget.py"}]} | 2,198 | 190 |
gh_patches_debug_14753 | rasdani/github-patches | git_diff | ansible__ansible-39634 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aws_s3 is automatically decrypting ansible-vault encrypted files before put
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and devel branch are affected too.
Always add information AFTER of these html comments. -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
aws_s3
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
2.5.1
```
##### SUMMARY
- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3.
- aws_s3 in 2.4 didn't decrypt the src: parameter.
- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.
- The aws_s3 module doesn't accept the decrypt: argument.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: upload vault to s3
aws_s3:
bucket: "the bucket"
object: "file.txt"
src: "file.txt"
mode: put
```
1. The file.txt is encrypted with ansible-vault.
2. The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.
##### EXPECTED RESULTS
Don't autodecrypt the src: argument or be able to specify decrypt: no.
##### ACTUAL RESULTS
The src: argument to aws_s3 is automagically decrypted, with no documentation and no way to disable the behaviour the way other modules allow (e.g. copy).
</issue>
<code>
[start of lib/ansible/plugins/action/aws_s3.py]
1 # (c) 2012, Michael DeHaan <[email protected]>
2 # (c) 2018, Will Thames <[email protected]>
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 from __future__ import (absolute_import, division, print_function)
19 __metaclass__ = type
20
21 import os
22
23 from ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound
24 from ansible.module_utils._text import to_text
25 from ansible.plugins.action import ActionBase
26
27
28 class ActionModule(ActionBase):
29
30 TRANSFERS_FILES = True
31
32 def run(self, tmp=None, task_vars=None):
33 ''' handler for aws_s3 operations '''
34 if task_vars is None:
35 task_vars = dict()
36
37 result = super(ActionModule, self).run(tmp, task_vars)
38 del tmp # tmp no longer has any effect
39
40 source = self._task.args.get('src', None)
41
42 try:
43 new_module_args = self._task.args.copy()
44 if source:
45 source = os.path.expanduser(source)
46
47 # For backward compatibility check if the file exists on the remote; it should take precedence
48 if not self._remote_file_exists(source):
49 try:
50 source = self._loader.get_real_file(self._find_needle('files', source))
51 new_module_args['src'] = source
52 except AnsibleFileNotFound as e:
53 # module handles error message for nonexistent files
54 new_module_args['src'] = source
55 except AnsibleError as e:
56 raise AnsibleActionFail(to_text(e))
57
58 # execute the aws_s3 module now, with the updated args
59 result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))
60 except AnsibleAction as e:
61 result.update(e.result)
62 return result
63
[end of lib/ansible/plugins/action/aws_s3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/plugins/action/aws_s3.py b/lib/ansible/plugins/action/aws_s3.py
--- a/lib/ansible/plugins/action/aws_s3.py
+++ b/lib/ansible/plugins/action/aws_s3.py
@@ -47,7 +47,7 @@
# For backward compatibility check if the file exists on the remote; it should take precedence
if not self._remote_file_exists(source):
try:
- source = self._loader.get_real_file(self._find_needle('files', source))
+ source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)
new_module_args['src'] = source
except AnsibleFileNotFound as e:
# module handles error message for nonexistent files
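The one-line change above keeps vault-encrypted files encrypted on upload by asking Ansible's file loader for the raw file. A hedged sketch of that call pattern follows; the helper name and its parameters are illustrative, while `get_real_file(..., decrypt=False)` is the call the patch introduces.

```python
def resolve_upload_source(loader, find_needle, source):
    # With decrypt=False the loader returns the path to the file as it sits on
    # disk, so an ansible-vault file is uploaded still encrypted instead of
    # being decrypted into a temporary copy first.
    return loader.get_real_file(find_needle("files", source), decrypt=False)
```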
| {"golden_diff": "diff --git a/lib/ansible/plugins/action/aws_s3.py b/lib/ansible/plugins/action/aws_s3.py\n--- a/lib/ansible/plugins/action/aws_s3.py\n+++ b/lib/ansible/plugins/action/aws_s3.py\n@@ -47,7 +47,7 @@\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n- source = self._loader.get_real_file(self._find_needle('files', source))\n+ source = self._loader.get_real_file(self._find_needle('files', source), decrypt=False)\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n", "issue": "aws_s3 is automaticly decrypting ansible-vault encrypted files before put\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and devel branch are affected too.\r\nAlways add information AFTER of these html comments. -->\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\naws_s3\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n2.5.1\r\n```\r\n\r\n##### SUMMARY\r\n- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3. \r\n- aws_s3 in 2.4 didn't decrypt the src: parameter.\r\n- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.\r\n- The aws_s3 module doesn't accept the decrypt: argument.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: upload vault to s3\r\n aws_s3:\r\n bucket: \"the bucket\"\r\n object: \"file.txt\"\r\n src: \"file.txt\"\r\n mode: put\r\n```\r\n1. The file.txt is encrypted with ansible-vault. \r\n2. The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.\r\n\r\n##### EXPECTED RESULTS\r\nDon't autodecrypt the src: argument or be able to specify decrypt: no.\r\n\r\n##### ACTUAL RESULTS\r\nThe src: argument to aws_s3 is automagicly decrypted without documentation or a way to disable the feature like other modules (ex. copy).\r\n\naws_s3 is automaticly decrypting ansible-vault encrypted files before put\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and devel branch are affected too.\r\nAlways add information AFTER of these html comments. -->\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\naws_s3\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\n2.5.1\r\n```\r\n\r\n##### SUMMARY\r\n- I'm trying to upload an ansible-vault encrypted file with aws_s3. But aws_s3 decrypts the src: file before uploading it to S3. \r\n- aws_s3 in 2.4 didn't decrypt the src: parameter.\r\n- The documentation for aws_s3 doesn't mention that the src: parameter is autodecrypted.\r\n- The aws_s3 module doesn't accept the decrypt: argument.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: upload vault to s3\r\n aws_s3:\r\n bucket: \"the bucket\"\r\n object: \"file.txt\"\r\n src: \"file.txt\"\r\n mode: put\r\n```\r\n1. The file.txt is encrypted with ansible-vault. \r\n2. 
The playbook that runs this task is invoked with --vault-password and is able to decrypt the file because other tasks need the file decrypted.\r\n\r\n##### EXPECTED RESULTS\r\nDon't autodecrypt the src: argument or be able to specify decrypt: no.\r\n\r\n##### ACTUAL RESULTS\r\nThe src: argument to aws_s3 is automagicly decrypted without documentation or a way to disable the feature like other modules (ex. copy).\r\n\n", "before_files": [{"content": "# (c) 2012, Michael DeHaan <[email protected]>\n# (c) 2018, Will Thames <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\n\nfrom ansible.errors import AnsibleError, AnsibleAction, AnsibleActionFail, AnsibleFileNotFound\nfrom ansible.module_utils._text import to_text\nfrom ansible.plugins.action import ActionBase\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = True\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for aws_s3 operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n del tmp # tmp no longer has any effect\n\n source = self._task.args.get('src', None)\n\n try:\n new_module_args = self._task.args.copy()\n if source:\n source = os.path.expanduser(source)\n\n # For backward compatibility check if the file exists on the remote; it should take precedence\n if not self._remote_file_exists(source):\n try:\n source = self._loader.get_real_file(self._find_needle('files', source))\n new_module_args['src'] = source\n except AnsibleFileNotFound as e:\n # module handles error message for nonexistent files\n new_module_args['src'] = source\n except AnsibleError as e:\n raise AnsibleActionFail(to_text(e))\n\n # execute the aws_s3 module now, with the updated args\n result.update(self._execute_module(module_args=new_module_args, task_vars=task_vars))\n except AnsibleAction as e:\n result.update(e.result)\n return result\n", "path": "lib/ansible/plugins/action/aws_s3.py"}]} | 1,935 | 164 |
gh_patches_debug_7144 | rasdani/github-patches | git_diff | adap__flower-1735 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecated eval_fn still used in examples
### Describe the bug
While running the `embedded_devices` example, the server fails because the script still uses the `eval_fn` keyword, which was deprecated after Flower 1.0.0 and has since been renamed to `evaluate_fn`.
### Steps/Code to Reproduce
Set up the server as described in the `examples/embedded_devices` readme.
### Expected Results
The server should start without any error
### Actual Results
The following error is encountered:
```
File "/embedded_devices/server.py", line 109, in main
strategy = fl.server.strategy.FedAvg(
TypeError: FedAvg.__init__() got an unexpected keyword argument 'eval_fn'
```
</issue>
<code>
[start of examples/embedded_devices/server.py]
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Minimal example on how to start a simple Flower server."""
16
17
18 import argparse
19 from collections import OrderedDict
20 from typing import Callable, Dict, Optional, Tuple
21
22 import flwr as fl
23 import numpy as np
24 import torch
25 import torchvision
26
27 import utils
28
29 # pylint: disable=no-member
30 DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
31 # pylint: enable=no-member
32
33 parser = argparse.ArgumentParser(description="Flower")
34 parser.add_argument(
35 "--server_address",
36 type=str,
37 required=True,
38 help=f"gRPC server address",
39 )
40 parser.add_argument(
41 "--rounds",
42 type=int,
43 default=1,
44 help="Number of rounds of federated learning (default: 1)",
45 )
46 parser.add_argument(
47 "--sample_fraction",
48 type=float,
49 default=1.0,
50 help="Fraction of available clients used for fit/evaluate (default: 1.0)",
51 )
52 parser.add_argument(
53 "--min_sample_size",
54 type=int,
55 default=2,
56 help="Minimum number of clients used for fit/evaluate (default: 2)",
57 )
58 parser.add_argument(
59 "--min_num_clients",
60 type=int,
61 default=2,
62 help="Minimum number of available clients required for sampling (default: 2)",
63 )
64 parser.add_argument(
65 "--log_host",
66 type=str,
67 help="Logserver address (no default)",
68 )
69 parser.add_argument(
70 "--model",
71 type=str,
72 default="ResNet18",
73 choices=["Net", "ResNet18"],
74 help="model to train",
75 )
76 parser.add_argument(
77 "--batch_size",
78 type=int,
79 default=32,
80 help="training batch size",
81 )
82 parser.add_argument(
83 "--num_workers",
84 type=int,
85 default=4,
86 help="number of workers for dataset reading",
87 )
88 parser.add_argument("--pin_memory", action="store_true")
89 args = parser.parse_args()
90
91
92 def main() -> None:
93 """Start server and train five rounds."""
94
95 print(args)
96
97 assert (
98 args.min_sample_size <= args.min_num_clients
99 ), f"Num_clients shouldn't be lower than min_sample_size"
100
101 # Configure logger
102 fl.common.logger.configure("server", host=args.log_host)
103
104 # Load evaluation data
105 _, testset = utils.load_cifar(download=True)
106
107 # Create client_manager, strategy, and server
108 client_manager = fl.server.SimpleClientManager()
109 strategy = fl.server.strategy.FedAvg(
110 fraction_fit=args.sample_fraction,
111 min_fit_clients=args.min_sample_size,
112 min_available_clients=args.min_num_clients,
113 eval_fn=get_eval_fn(testset),
114 on_fit_config_fn=fit_config,
115 )
116 server = fl.server.Server(client_manager=client_manager, strategy=strategy)
117
118 # Run server
119 fl.server.start_server(
120 server_address=args.server_address,
121 server=server,
122 config=fl.server.ServerConfig(num_rounds=args.rounds),
123 )
124
125
126 def fit_config(server_round: int) -> Dict[str, fl.common.Scalar]:
127 """Return a configuration with static batch size and (local) epochs."""
128 config = {
129 "epoch_global": str(server_round),
130 "epochs": str(1),
131 "batch_size": str(args.batch_size),
132 "num_workers": str(args.num_workers),
133 "pin_memory": str(args.pin_memory),
134 }
135 return config
136
137
138 def set_weights(model: torch.nn.ModuleList, weights: fl.common.NDArrays) -> None:
139 """Set model weights from a list of NumPy ndarrays."""
140 state_dict = OrderedDict(
141 {
142 k: torch.tensor(np.atleast_1d(v))
143 for k, v in zip(model.state_dict().keys(), weights)
144 }
145 )
146 model.load_state_dict(state_dict, strict=True)
147
148
149 def get_eval_fn(
150 testset: torchvision.datasets.CIFAR10,
151 ) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:
152 """Return an evaluation function for centralized evaluation."""
153
154 def evaluate(weights: fl.common.NDArrays) -> Optional[Tuple[float, float]]:
155 """Use the entire CIFAR-10 test set for evaluation."""
156
157 model = utils.load_model(args.model)
158 set_weights(model, weights)
159 model.to(DEVICE)
160
161 testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)
162 loss, accuracy = utils.test(model, testloader, device=DEVICE)
163 return loss, {"accuracy": accuracy}
164
165 return evaluate
166
167
168 if __name__ == "__main__":
169 main()
170
[end of examples/embedded_devices/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/embedded_devices/server.py b/examples/embedded_devices/server.py
--- a/examples/embedded_devices/server.py
+++ b/examples/embedded_devices/server.py
@@ -110,7 +110,7 @@
fraction_fit=args.sample_fraction,
min_fit_clients=args.min_sample_size,
min_available_clients=args.min_num_clients,
- eval_fn=get_eval_fn(testset),
+ evaluate_fn=get_eval_fn(testset),
on_fit_config_fn=fit_config,
)
server = fl.server.Server(client_manager=client_manager, strategy=strategy)
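The rename is the entire fix: on Flower 1.0 and later, `FedAvg` accepts `evaluate_fn`, and the pre-1.0 spelling `eval_fn` produces the `TypeError` quoted in the issue. A short sketch of the corrected construction is below; it assumes `flwr` is installed and that `get_eval_fn`, `testset`, `fit_config`, and `args` are defined as in the server script above.

```python
import flwr as fl

strategy = fl.server.strategy.FedAvg(
    fraction_fit=args.sample_fraction,
    min_fit_clients=args.min_sample_size,
    min_available_clients=args.min_num_clients,
    evaluate_fn=get_eval_fn(testset),  # renamed from eval_fn in Flower 1.0
    on_fit_config_fn=fit_config,
)
```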
| {"golden_diff": "diff --git a/examples/embedded_devices/server.py b/examples/embedded_devices/server.py\n--- a/examples/embedded_devices/server.py\n+++ b/examples/embedded_devices/server.py\n@@ -110,7 +110,7 @@\n fraction_fit=args.sample_fraction,\n min_fit_clients=args.min_sample_size,\n min_available_clients=args.min_num_clients,\n- eval_fn=get_eval_fn(testset),\n+ evaluate_fn=get_eval_fn(testset),\n on_fit_config_fn=fit_config,\n )\n server = fl.server.Server(client_manager=client_manager, strategy=strategy)\n", "issue": "deprecated eval_fn still used in examples\n### Describe the bug\n\nWhile running the `embedded_devices` example, an issue is faced due to the use of `eval_fn` keyword which was deprecated after Flower 1.0.0 and has now been changed to `evaluate_fn`\n\n### Steps/Code to Reproduce\n\nSetup the server as mentioned in the `examples/embedded_devices` readme\n\n### Expected Results\n\nThe server should start without any error\n\n### Actual Results\n\nThe following error is encountered:\r\n```\r\nFile \"/embedded_devices/server.py\", line 109, in main\r\n strategy = fl.server.strategy.FedAvg(\r\nTypeError: FedAvg.__init__() got an unexpected keyword argument 'eval_fn'\r\n```\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Minimal example on how to start a simple Flower server.\"\"\"\n\n\nimport argparse\nfrom collections import OrderedDict\nfrom typing import Callable, Dict, Optional, Tuple\n\nimport flwr as fl\nimport numpy as np\nimport torch\nimport torchvision\n\nimport utils\n\n# pylint: disable=no-member\nDEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n# pylint: enable=no-member\n\nparser = argparse.ArgumentParser(description=\"Flower\")\nparser.add_argument(\n \"--server_address\",\n type=str,\n required=True,\n help=f\"gRPC server address\",\n)\nparser.add_argument(\n \"--rounds\",\n type=int,\n default=1,\n help=\"Number of rounds of federated learning (default: 1)\",\n)\nparser.add_argument(\n \"--sample_fraction\",\n type=float,\n default=1.0,\n help=\"Fraction of available clients used for fit/evaluate (default: 1.0)\",\n)\nparser.add_argument(\n \"--min_sample_size\",\n type=int,\n default=2,\n help=\"Minimum number of clients used for fit/evaluate (default: 2)\",\n)\nparser.add_argument(\n \"--min_num_clients\",\n type=int,\n default=2,\n help=\"Minimum number of available clients required for sampling (default: 2)\",\n)\nparser.add_argument(\n \"--log_host\",\n type=str,\n help=\"Logserver address (no default)\",\n)\nparser.add_argument(\n \"--model\",\n type=str,\n default=\"ResNet18\",\n choices=[\"Net\", \"ResNet18\"],\n help=\"model to train\",\n)\nparser.add_argument(\n \"--batch_size\",\n type=int,\n default=32,\n help=\"training batch size\",\n)\nparser.add_argument(\n \"--num_workers\",\n type=int,\n default=4,\n help=\"number of workers for dataset 
reading\",\n)\nparser.add_argument(\"--pin_memory\", action=\"store_true\")\nargs = parser.parse_args()\n\n\ndef main() -> None:\n \"\"\"Start server and train five rounds.\"\"\"\n\n print(args)\n\n assert (\n args.min_sample_size <= args.min_num_clients\n ), f\"Num_clients shouldn't be lower than min_sample_size\"\n\n # Configure logger\n fl.common.logger.configure(\"server\", host=args.log_host)\n\n # Load evaluation data\n _, testset = utils.load_cifar(download=True)\n\n # Create client_manager, strategy, and server\n client_manager = fl.server.SimpleClientManager()\n strategy = fl.server.strategy.FedAvg(\n fraction_fit=args.sample_fraction,\n min_fit_clients=args.min_sample_size,\n min_available_clients=args.min_num_clients,\n eval_fn=get_eval_fn(testset),\n on_fit_config_fn=fit_config,\n )\n server = fl.server.Server(client_manager=client_manager, strategy=strategy)\n\n # Run server\n fl.server.start_server(\n server_address=args.server_address,\n server=server,\n config=fl.server.ServerConfig(num_rounds=args.rounds),\n )\n\n\ndef fit_config(server_round: int) -> Dict[str, fl.common.Scalar]:\n \"\"\"Return a configuration with static batch size and (local) epochs.\"\"\"\n config = {\n \"epoch_global\": str(server_round),\n \"epochs\": str(1),\n \"batch_size\": str(args.batch_size),\n \"num_workers\": str(args.num_workers),\n \"pin_memory\": str(args.pin_memory),\n }\n return config\n\n\ndef set_weights(model: torch.nn.ModuleList, weights: fl.common.NDArrays) -> None:\n \"\"\"Set model weights from a list of NumPy ndarrays.\"\"\"\n state_dict = OrderedDict(\n {\n k: torch.tensor(np.atleast_1d(v))\n for k, v in zip(model.state_dict().keys(), weights)\n }\n )\n model.load_state_dict(state_dict, strict=True)\n\n\ndef get_eval_fn(\n testset: torchvision.datasets.CIFAR10,\n) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:\n \"\"\"Return an evaluation function for centralized evaluation.\"\"\"\n\n def evaluate(weights: fl.common.NDArrays) -> Optional[Tuple[float, float]]:\n \"\"\"Use the entire CIFAR-10 test set for evaluation.\"\"\"\n\n model = utils.load_model(args.model)\n set_weights(model, weights)\n model.to(DEVICE)\n\n testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)\n loss, accuracy = utils.test(model, testloader, device=DEVICE)\n return loss, {\"accuracy\": accuracy}\n\n return evaluate\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/embedded_devices/server.py"}]} | 2,210 | 122 |
gh_patches_debug_34994 | rasdani/github-patches | git_diff | getredash__redash-725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User should be redirected to his original destination after login with Google OAuth
If the user tried to open a page before being logged in, he should be redirected to this page after successful login.
</issue>
<code>
[start of redash/google_oauth.py]
1 import logging
2 from flask.ext.login import login_user
3 import requests
4 from flask import redirect, url_for, Blueprint, flash
5 from flask_oauth import OAuth
6 from redash import models, settings
7
8 logger = logging.getLogger('google_oauth')
9 oauth = OAuth()
10
11
12 if not settings.GOOGLE_APPS_DOMAIN:
13 logger.warning("No Google Apps domain defined, all Google accounts allowed.")
14
15 google = oauth.remote_app('google',
16 base_url='https://www.google.com/accounts/',
17 authorize_url='https://accounts.google.com/o/oauth2/auth',
18 request_token_url=None,
19 request_token_params={
20 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',
21 'response_type': 'code'
22 },
23 access_token_url='https://accounts.google.com/o/oauth2/token',
24 access_token_method='POST',
25 access_token_params={'grant_type': 'authorization_code'},
26 consumer_key=settings.GOOGLE_CLIENT_ID,
27 consumer_secret=settings.GOOGLE_CLIENT_SECRET)
28
29
30 blueprint = Blueprint('google_oauth', __name__)
31
32
33 def get_user_profile(access_token):
34 headers = {'Authorization': 'OAuth {}'.format(access_token)}
35 response = requests.get('https://www.googleapis.com/oauth2/v1/userinfo', headers=headers)
36
37 if response.status_code == 401:
38 logger.warning("Failed getting user profile (response code 401).")
39 return None
40
41 return response.json()
42
43
44 def verify_profile(profile):
45 if not settings.GOOGLE_APPS_DOMAIN:
46 return True
47
48 domain = profile['email'].split('@')[-1]
49 return domain in settings.GOOGLE_APPS_DOMAIN
50
51
52 def create_and_login_user(name, email):
53 try:
54 user_object = models.User.get_by_email(email)
55 if user_object.name != name:
56 logger.debug("Updating user name (%r -> %r)", user_object.name, name)
57 user_object.name = name
58 user_object.save()
59 except models.User.DoesNotExist:
60 logger.debug("Creating user object (%r)", name)
61 user_object = models.User.create(name=name, email=email, groups=models.User.DEFAULT_GROUPS)
62
63 login_user(user_object, remember=True)
64
65
66 @blueprint.route('/oauth/google', endpoint="authorize")
67 def login():
68 # TODO, suport next
69 callback=url_for('.callback', _external=True)
70 logger.debug("Callback url: %s", callback)
71 return google.authorize(callback=callback)
72
73
74 @blueprint.route('/oauth/google_callback', endpoint="callback")
75 @google.authorized_handler
76 def authorized(resp):
77 access_token = resp['access_token']
78
79 if access_token is None:
80 logger.warning("Access token missing in call back request.")
81 flash("Validation error. Please retry.")
82 return redirect(url_for('login'))
83
84 profile = get_user_profile(access_token)
85 if profile is None:
86 flash("Validation error. Please retry.")
87 return redirect(url_for('login'))
88
89 if not verify_profile(profile):
90 logger.warning("User tried to login with unauthorized domain name: %s", profile['email'])
91 flash("Your Google Apps domain name isn't allowed.")
92 return redirect(url_for('login'))
93
94 create_and_login_user(profile['name'], profile['email'])
95
96 return redirect(url_for('index'))
[end of redash/google_oauth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/google_oauth.py b/redash/google_oauth.py
--- a/redash/google_oauth.py
+++ b/redash/google_oauth.py
@@ -1,8 +1,8 @@
import logging
from flask.ext.login import login_user
import requests
-from flask import redirect, url_for, Blueprint, flash
-from flask_oauth import OAuth
+from flask import redirect, url_for, Blueprint, flash, request
+from flask_oauthlib.client import OAuth
from redash import models, settings
logger = logging.getLogger('google_oauth')
@@ -18,11 +18,9 @@
request_token_url=None,
request_token_params={
'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',
- 'response_type': 'code'
},
access_token_url='https://accounts.google.com/o/oauth2/token',
access_token_method='POST',
- access_token_params={'grant_type': 'authorization_code'},
consumer_key=settings.GOOGLE_CLIENT_ID,
consumer_secret=settings.GOOGLE_CLIENT_SECRET)
@@ -65,10 +63,10 @@
@blueprint.route('/oauth/google', endpoint="authorize")
def login():
- # TODO, suport next
+ next = request.args.get('next','/')
callback=url_for('.callback', _external=True)
logger.debug("Callback url: %s", callback)
- return google.authorize(callback=callback)
+ return google.authorize(callback=callback, state=next)
@blueprint.route('/oauth/google_callback', endpoint="callback")
@@ -93,4 +91,6 @@
create_and_login_user(profile['name'], profile['email'])
- return redirect(url_for('index'))
\ No newline at end of file
+ next = request.args.get('state','/')
+
+ return redirect(next)
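The patch carries the user's original destination through the OAuth round trip in the `state` parameter. A condensed, Flask-style sketch of that flow is below; `build_authorize_kwargs` and `post_login_redirect` are illustrative names, and both functions assume they run inside a Flask request context.

```python
from flask import redirect, request


def build_authorize_kwargs(callback_url):
    # Pass the page the user originally asked for ("next") as the OAuth state.
    return {"callback": callback_url, "state": request.args.get("next", "/")}


def post_login_redirect():
    # After the provider calls back, "state" still holds the original URL,
    # so the user lands where they started instead of on the index page.
    return redirect(request.args.get("state", "/"))
```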
| {"golden_diff": "diff --git a/redash/google_oauth.py b/redash/google_oauth.py\n--- a/redash/google_oauth.py\n+++ b/redash/google_oauth.py\n@@ -1,8 +1,8 @@\n import logging\n from flask.ext.login import login_user\n import requests\n-from flask import redirect, url_for, Blueprint, flash\n-from flask_oauth import OAuth\n+from flask import redirect, url_for, Blueprint, flash, request\n+from flask_oauthlib.client import OAuth\n from redash import models, settings\n \n logger = logging.getLogger('google_oauth')\n@@ -18,11 +18,9 @@\n request_token_url=None,\n request_token_params={\n 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',\n- 'response_type': 'code'\n },\n access_token_url='https://accounts.google.com/o/oauth2/token',\n access_token_method='POST',\n- access_token_params={'grant_type': 'authorization_code'},\n consumer_key=settings.GOOGLE_CLIENT_ID,\n consumer_secret=settings.GOOGLE_CLIENT_SECRET)\n \n@@ -65,10 +63,10 @@\n \n @blueprint.route('/oauth/google', endpoint=\"authorize\")\n def login():\n- # TODO, suport next\n+ next = request.args.get('next','/')\n callback=url_for('.callback', _external=True)\n logger.debug(\"Callback url: %s\", callback)\n- return google.authorize(callback=callback)\n+ return google.authorize(callback=callback, state=next)\n \n \n @blueprint.route('/oauth/google_callback', endpoint=\"callback\")\n@@ -93,4 +91,6 @@\n \n create_and_login_user(profile['name'], profile['email'])\n \n- return redirect(url_for('index'))\n\\ No newline at end of file\n+ next = request.args.get('state','/')\n+\n+ return redirect(next)\n", "issue": "User should be redirected to his original destination after login with Google OAuth\nIf the user tried to open a page before being logged in, he should be redirected to this page after successful login.\n\n", "before_files": [{"content": "import logging\nfrom flask.ext.login import login_user\nimport requests\nfrom flask import redirect, url_for, Blueprint, flash\nfrom flask_oauth import OAuth\nfrom redash import models, settings\n\nlogger = logging.getLogger('google_oauth')\noauth = OAuth()\n\n\nif not settings.GOOGLE_APPS_DOMAIN:\n logger.warning(\"No Google Apps domain defined, all Google accounts allowed.\")\n\ngoogle = oauth.remote_app('google',\n base_url='https://www.google.com/accounts/',\n authorize_url='https://accounts.google.com/o/oauth2/auth',\n request_token_url=None,\n request_token_params={\n 'scope': 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile',\n 'response_type': 'code'\n },\n access_token_url='https://accounts.google.com/o/oauth2/token',\n access_token_method='POST',\n access_token_params={'grant_type': 'authorization_code'},\n consumer_key=settings.GOOGLE_CLIENT_ID,\n consumer_secret=settings.GOOGLE_CLIENT_SECRET)\n\n\nblueprint = Blueprint('google_oauth', __name__)\n\n\ndef get_user_profile(access_token):\n headers = {'Authorization': 'OAuth {}'.format(access_token)}\n response = requests.get('https://www.googleapis.com/oauth2/v1/userinfo', headers=headers)\n\n if response.status_code == 401:\n logger.warning(\"Failed getting user profile (response code 401).\")\n return None\n\n return response.json()\n\n\ndef verify_profile(profile):\n if not settings.GOOGLE_APPS_DOMAIN:\n return True\n\n domain = profile['email'].split('@')[-1]\n return domain in settings.GOOGLE_APPS_DOMAIN\n\n\ndef create_and_login_user(name, email):\n try:\n user_object = models.User.get_by_email(email)\n if user_object.name != 
name:\n logger.debug(\"Updating user name (%r -> %r)\", user_object.name, name)\n user_object.name = name\n user_object.save()\n except models.User.DoesNotExist:\n logger.debug(\"Creating user object (%r)\", name)\n user_object = models.User.create(name=name, email=email, groups=models.User.DEFAULT_GROUPS)\n\n login_user(user_object, remember=True)\n\n\[email protected]('/oauth/google', endpoint=\"authorize\")\ndef login():\n # TODO, suport next\n callback=url_for('.callback', _external=True)\n logger.debug(\"Callback url: %s\", callback)\n return google.authorize(callback=callback)\n\n\[email protected]('/oauth/google_callback', endpoint=\"callback\")\[email protected]_handler\ndef authorized(resp):\n access_token = resp['access_token']\n\n if access_token is None:\n logger.warning(\"Access token missing in call back request.\")\n flash(\"Validation error. Please retry.\")\n return redirect(url_for('login'))\n\n profile = get_user_profile(access_token)\n if profile is None:\n flash(\"Validation error. Please retry.\")\n return redirect(url_for('login'))\n\n if not verify_profile(profile):\n logger.warning(\"User tried to login with unauthorized domain name: %s\", profile['email'])\n flash(\"Your Google Apps domain name isn't allowed.\")\n return redirect(url_for('login'))\n\n create_and_login_user(profile['name'], profile['email'])\n\n return redirect(url_for('index'))", "path": "redash/google_oauth.py"}]} | 1,445 | 400 |
gh_patches_debug_551 | rasdani/github-patches | git_diff | pypi__warehouse-5814 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sorting searches by 'Date last updated' results in 503
**Describe the bug**
When trying to search for anything on pypi.org, sorting by relevance or trending works fine, but sorting by date last updated returns a 503 error.
**Expected behavior**
Search results, sorted by date.
**To Reproduce**
Example URL: https://pypi.org/search/?q=test&o=-created
Result:
> Sorry, something went wrong
>
> PyPI is down for maintenance or is having an outage.
>
> This is affecting several of our services, including our web interface.
> If you are trying to install a package, you should be able to pip install packages without problem.
>
> Check our status page, or
> View Python Status on Twitter
The status page, though, shows all green.
**My Platform**
- Win 10, Firefox 66.0.3
- Ubuntu 18.04, Chrome 74.0.3729.108
---
Edit: I know this functionality was working at least as recently as last Thursday, 2 May 2019.
</issue>
<code>
[start of warehouse/packaging/search.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import packaging.version
14
15 from elasticsearch_dsl import Date, Document, Float, Keyword, Text, analyzer
16
17 from warehouse.search.utils import doc_type
18
19 EmailAnalyzer = analyzer(
20 "email",
21 tokenizer="uax_url_email",
22 filter=["standard", "lowercase", "stop", "snowball"],
23 )
24
25 NameAnalyzer = analyzer(
26 "normalized_name",
27 tokenizer="lowercase",
28 filter=["standard", "lowercase", "word_delimiter"],
29 )
30
31
32 @doc_type
33 class Project(Document):
34
35 name = Text()
36 normalized_name = Text(analyzer=NameAnalyzer)
37 version = Keyword(multi=True)
38 latest_version = Keyword()
39 summary = Text(analyzer="snowball")
40 description = Text(analyzer="snowball")
41 author = Text()
42 author_email = Text(analyzer=EmailAnalyzer)
43 maintainer = Text()
44 maintainer_email = Text(analyzer=EmailAnalyzer)
45 license = Text()
46 home_page = Keyword()
47 download_url = Keyword()
48 keywords = Text(analyzer="snowball")
49 platform = Keyword()
50 created = Date()
51 classifiers = Keyword(multi=True)
52 zscore = Float()
53
54 @classmethod
55 def from_db(cls, release):
56 obj = cls(meta={"id": release.normalized_name})
57 obj["name"] = release.name
58 obj["normalized_name"] = release.normalized_name
59 obj["version"] = sorted(
60 release.all_versions, key=lambda r: packaging.version.parse(r), reverse=True
61 )
62 obj["latest_version"] = release.latest_version
63 obj["summary"] = release.summary
64 obj["description"] = release.description
65 obj["author"] = release.author
66 obj["author_email"] = release.author_email
67 obj["maintainer"] = release.maintainer
68 obj["maintainer_email"] = release.maintainer_email
69 obj["home_page"] = release.home_page
70 obj["download_url"] = release.download_url
71 obj["keywords"] = release.keywords
72 obj["platform"] = release.platform
73 obj["created"] = release.created
74 obj["classifiers"] = release.classifiers
75 obj["zscore"] = release.zscore
76
77 return obj
78
[end of warehouse/packaging/search.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py
--- a/warehouse/packaging/search.py
+++ b/warehouse/packaging/search.py
@@ -75,3 +75,8 @@
obj["zscore"] = release.zscore
return obj
+
+ class Index:
+ # make sure this class can match any index so it will always be used to
+ # deserialize data coming from elasticsearch.
+ name = "*"
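The added inner `Index` class is what lets elasticsearch-dsl map search hits back onto the document class; without it, deserializing results (for example when sorting by `created`) fails and surfaces as the 503 above. A minimal sketch of the pattern, assuming an elasticsearch-dsl 6/7-style `Document`:

```python
from elasticsearch_dsl import Date, Document, Text


class Project(Document):
    name = Text()
    created = Date()

    class Index:
        # "*" matches any index, so hits from any search can be
        # deserialized back into Project instances.
        name = "*"
```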
| {"golden_diff": "diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py\n--- a/warehouse/packaging/search.py\n+++ b/warehouse/packaging/search.py\n@@ -75,3 +75,8 @@\n obj[\"zscore\"] = release.zscore\n \n return obj\n+\n+ class Index:\n+ # make sure this class can match any index so it will always be used to\n+ # deserialize data coming from elasticsearch.\n+ name = \"*\"\n", "issue": "Sorting searches by 'Date last updated' results in 503\n**Describe the bug**\r\n\r\nWhen trying to search for anything on pypi.org, sorting by relevance or trending works fine, but sorting by date last updated returns a 503 error.\r\n\r\n**Expected behavior**\r\n\r\nSearch results, sorted by date.\r\n\r\n**To Reproduce**\r\n\r\nExample URL: https://pypi.org/search/?q=test&o=-created\r\n\r\nResult:\r\n\r\n> Sorry, something went wrong\r\n> \r\n> PyPI is down for maintenance or is having an outage.\r\n> \r\n> This is affecting several of our services, including our web interface.\r\n> If you are trying to install a package, you should be able to pip install packages without problem.\r\n> \r\n> Check our status page, or\r\n> View Python Status on Twitter\r\n\r\nThe status page, though, shows all green.\r\n\r\n\r\n**My Platform**\r\n\r\n- Win 10, Firefox 66.0.3\r\n- Ubuntu 18.04, Chrome 74.0.3729.108\r\n\r\n---\r\n\r\nEdit: I know this functionality was working at least as recently as last Thursday, 2 May 2019.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport packaging.version\n\nfrom elasticsearch_dsl import Date, Document, Float, Keyword, Text, analyzer\n\nfrom warehouse.search.utils import doc_type\n\nEmailAnalyzer = analyzer(\n \"email\",\n tokenizer=\"uax_url_email\",\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n)\n\nNameAnalyzer = analyzer(\n \"normalized_name\",\n tokenizer=\"lowercase\",\n filter=[\"standard\", \"lowercase\", \"word_delimiter\"],\n)\n\n\n@doc_type\nclass Project(Document):\n\n name = Text()\n normalized_name = Text(analyzer=NameAnalyzer)\n version = Keyword(multi=True)\n latest_version = Keyword()\n summary = Text(analyzer=\"snowball\")\n description = Text(analyzer=\"snowball\")\n author = Text()\n author_email = Text(analyzer=EmailAnalyzer)\n maintainer = Text()\n maintainer_email = Text(analyzer=EmailAnalyzer)\n license = Text()\n home_page = Keyword()\n download_url = Keyword()\n keywords = Text(analyzer=\"snowball\")\n platform = Keyword()\n created = Date()\n classifiers = Keyword(multi=True)\n zscore = Float()\n\n @classmethod\n def from_db(cls, release):\n obj = cls(meta={\"id\": release.normalized_name})\n obj[\"name\"] = release.name\n obj[\"normalized_name\"] = release.normalized_name\n obj[\"version\"] = sorted(\n release.all_versions, key=lambda r: packaging.version.parse(r), reverse=True\n )\n obj[\"latest_version\"] = release.latest_version\n obj[\"summary\"] = release.summary\n obj[\"description\"] = release.description\n obj[\"author\"] = release.author\n obj[\"author_email\"] = 
release.author_email\n obj[\"maintainer\"] = release.maintainer\n obj[\"maintainer_email\"] = release.maintainer_email\n obj[\"home_page\"] = release.home_page\n obj[\"download_url\"] = release.download_url\n obj[\"keywords\"] = release.keywords\n obj[\"platform\"] = release.platform\n obj[\"created\"] = release.created\n obj[\"classifiers\"] = release.classifiers\n obj[\"zscore\"] = release.zscore\n\n return obj\n", "path": "warehouse/packaging/search.py"}]} | 1,522 | 108 |
gh_patches_debug_34460 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-3164 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add --version flag to parlai command
**Is your feature request related to a problem? Please describe.**
Make the parlai "super command" conform to the Unix idiom of a --version flag.
Usecase: A user switching between virtual environments or reading about the release of a new feature should be able to easily identify which version they have on their path.
**Describe the solution you'd like**
```bash
$ parlai --version
parlai version X.X.X
# Optionally any relevant install info (not probably not any)
# Optionally copyright/license info (up to the maintainers whether you want this here or not)
```
**Describe alternatives you've considered**
```bash
$ python -c "import parlai; print(parlai.__version__)"
0.9.2
```
A little less natural
</issue>
<code>
[start of parlai/core/script.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7
8 """
9 The general ParlAI Script interface.
10
11 An abstract class to help standardize the call to ParlAI scripts, enabling them to be
12 completed easily.
13
14 Also contains helper classes for loading scripts, etc.
15 """
16
17 import io
18 import argparse
19 from typing import List, Optional, Dict, Any
20 from parlai.core.opt import Opt
21 from parlai.core.params import ParlaiParser, CustomHelpFormatter
22 from abc import abstractmethod
23 import importlib
24 import pkgutil
25 import parlai.scripts
26 import parlai.utils.logging as logging
27 from parlai.core.loader import register_script, SCRIPT_REGISTRY # noqa: F401
28
29
30 def setup_script_registry():
31 """
32 Loads the scripts so that @register_script is hit for all.
33 """
34 for module in pkgutil.iter_modules(parlai.scripts.__path__, 'parlai.scripts.'):
35 importlib.import_module(module.name)
36
37
38 class ParlaiScript(object):
39 """
40 A ParlAI script is a standardized form of access.
41 """
42
43 parser: ParlaiParser
44
45 @classmethod
46 @abstractmethod
47 def setup_args(cls) -> ParlaiParser:
48 """
49 Create the parser with args.
50 """
51 # we want to later deprecate this for add_cmdline_args
52 pass
53
54 def __init__(self, opt: Opt):
55 self.opt = opt
56
57 @abstractmethod
58 def run(self):
59 """
60 The main method.
61
62 Must be implemented by the script writer.
63 """
64 raise NotImplementedError()
65
66 @classmethod
67 def _run_kwargs(cls, kwargs: Dict[str, Any]):
68 """
69 Construct and run the script using kwargs, pseudo-parsing them.
70 """
71 parser = cls.setup_args()
72 opt = parser.parse_kwargs(**kwargs)
73 return cls._run_from_parser_and_opt(opt, parser)
74
75 @classmethod
76 def _run_args(cls, args: Optional[List[str]] = None):
77 """
78 Construct and run the script using args, defaulting to getting from CLI.
79 """
80 parser = cls.setup_args()
81 opt = parser.parse_args(args=args)
82 return cls._run_from_parser_and_opt(opt, parser)
83
84 @classmethod
85 def _run_from_parser_and_opt(cls, opt: Opt, parser: ParlaiParser):
86 script = cls(opt)
87 script.parser = parser
88 return script.run()
89
90 @classmethod
91 def main(cls, *args, **kwargs):
92 """
93 Run the program, possibly with some given args.
94
95 You may provide command line args in the form of strings, or
96 options. For example:
97
98 >>> MyScript.main(['--task', 'convai2'])
99 >>> MyScript.main(task='convai2')
100
101 You may not combine both args and kwargs.
102 """
103 assert not (bool(args) and bool(kwargs))
104 if args:
105 return cls._run_args(args)
106 elif kwargs:
107 return cls._run_kwargs(kwargs)
108 else:
109 return cls._run_args(None)
110
111 @classmethod
112 def help(cls, **kwargs):
113 f = io.StringIO()
114 parser = cls.setup_args()
115 parser.prog = cls.__name__
116 parser.add_extra_args(parser._kwargs_to_str_args(**kwargs))
117 parser.print_help(f)
118 return f.getvalue()
119
120
121 class _SupercommandParser(ParlaiParser):
122 """
123 Specialty ParlAI parser used for the supercommand.
124
125 Contains some special behavior.
126 """
127
128 def __init__(self, *args, **kwargs):
129 from parlai.utils.strings import colorize
130
131 logo = ""
132 logo += colorize(' _', 'red') + "\n"
133 logo += colorize(' /', 'red') + colorize('"', 'brightblack')
134 logo += colorize(")", "yellow") + "\n"
135 logo += colorize(' //', 'red') + colorize(')', 'yellow') + '\n'
136 logo += colorize(' ==', 'green')
137 logo += colorize("/", 'blue') + colorize('/', 'red') + colorize("'", 'yellow')
138 logo += colorize("===", 'green') + " ParlAI\n"
139 logo += colorize(" /", 'blue')
140 kwargs['description'] = logo
141 return super().__init__(*args, **kwargs)
142
143 def add_extra_args(self, args):
144 sa = [a for a in self._actions if isinstance(a, argparse._SubParsersAction)]
145 assert len(sa) == 1
146 sa = sa[0]
147 for _, v in sa.choices.items():
148 v.add_extra_args(args)
149
150 def add_subparsers(self, **kwargs):
151 return super().add_subparsers(**kwargs)
152
153 def _unsuppress_hidden(self):
154 """
155 Restore the help messages of hidden commands.
156 """
157
158 spa = [a for a in self._actions if isinstance(a, argparse._SubParsersAction)]
159 assert len(spa) == 1
160 spa = spa[0]
161 for choices_action in spa._choices_actions:
162 dest = choices_action.dest
163 if choices_action.help == argparse.SUPPRESS:
164 choices_action.help = spa.choices[dest].description
165
166 def print_helpall(self):
167 self._unsuppress_hidden()
168 self.print_help()
169
170
171 class _SubcommandParser(ParlaiParser):
172 """
173 ParlaiParser which always sets add_parlai_args and add_model_args to False.
174
175 Used in the superscript to initialize just the args for that command.
176 """
177
178 def __init__(self, **kwargs):
179 kwargs['add_parlai_args'] = False
180 kwargs['add_model_args'] = False
181 assert 'description' in kwargs, 'Must supply description'
182 return super().__init__(**kwargs)
183
184 def parse_known_args(self, args=None, namespace=None, nohelp=False):
185 if not nohelp:
186 self.add_extra_args(args)
187 return super().parse_known_args(args, namespace, nohelp)
188
189
190 def _SuperscriptHelpFormatter(**kwargs):
191 kwargs['width'] = 100
192 kwargs['max_help_position'] = 9999
193
194 return CustomHelpFormatter(**kwargs)
195
196
197 def superscript_main(args=None):
198 """
199 Superscript is a loader for all the other scripts.
200 """
201 setup_script_registry()
202
203 parser = _SupercommandParser(
204 False, False, formatter_class=_SuperscriptHelpFormatter
205 )
206 parser.add_argument(
207 '--helpall',
208 action='helpall',
209 help='show all commands, including advanced ones.',
210 )
211 parser.set_defaults(super_command=None)
212 subparsers = parser.add_subparsers(
213 parser_class=_SubcommandParser, title="Commands", metavar="COMMAND"
214 )
215 hparser = subparsers.add_parser(
216 'help',
217 aliases=['h'],
218 help=argparse.SUPPRESS,
219 description="List the main commands",
220 )
221 hparser.set_defaults(super_command='help')
222 hparser = subparsers.add_parser(
223 'helpall',
224 help=argparse.SUPPRESS,
225 description="List all commands, including advanced ones.",
226 )
227 hparser.set_defaults(super_command='helpall')
228
229 # build the supercommand
230 for script_name, registration in SCRIPT_REGISTRY.items():
231 logging.verbose(f"Discovered command {script_name}")
232 script_parser = registration.klass.setup_args()
233 if script_parser is None:
234 # user didn't bother defining command line args. let's just fill
235 # in for them
236 script_parser = ParlaiParser(False, False)
237 help_ = argparse.SUPPRESS if registration.hidden else script_parser.description
238 subparser = subparsers.add_parser(
239 script_name,
240 aliases=registration.aliases,
241 help=help_,
242 description=script_parser.description,
243 formatter_class=CustomHelpFormatter,
244 )
245 subparser.set_defaults(
246 # carries the name of the full command so we know what to execute
247 super_command=script_name,
248 # used in ParlAI parser to find CLI options set by user
249 _subparser=subparser,
250 )
251 subparser.set_defaults(**script_parser._defaults)
252 for action in script_parser._actions:
253 subparser._add_action(action)
254 for action_group in script_parser._action_groups:
255 subparser._action_groups.append(action_group)
256
257 try:
258 import argcomplete
259
260 argcomplete.autocomplete(parser)
261 except ModuleNotFoundError:
262 pass
263
264 opt = parser.parse_args(args)
265 cmd = opt.pop('super_command')
266 if cmd == 'helpall':
267 parser.print_helpall()
268 elif cmd == 'help' or cmd is None:
269 parser.print_help()
270 elif cmd is not None:
271 return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)
272
[end of parlai/core/script.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parlai/core/script.py b/parlai/core/script.py
--- a/parlai/core/script.py
+++ b/parlai/core/script.py
@@ -17,6 +17,7 @@
import io
import argparse
from typing import List, Optional, Dict, Any
+import parlai
from parlai.core.opt import Opt
from parlai.core.params import ParlaiParser, CustomHelpFormatter
from abc import abstractmethod
@@ -206,7 +207,13 @@
parser.add_argument(
'--helpall',
action='helpall',
- help='show all commands, including advanced ones.',
+ help='List all commands, including advanced ones.',
+ )
+ parser.add_argument(
+ '--version',
+ action='version',
+ version=get_version_string(),
+ help='Prints version info and exit.',
)
parser.set_defaults(super_command=None)
subparsers = parser.add_subparsers(
@@ -216,13 +223,13 @@
'help',
aliases=['h'],
help=argparse.SUPPRESS,
- description="List the main commands",
+ description='List the main commands.',
)
hparser.set_defaults(super_command='help')
hparser = subparsers.add_parser(
'helpall',
help=argparse.SUPPRESS,
- description="List all commands, including advanced ones.",
+ description='List all commands, including advanced ones.',
)
hparser.set_defaults(super_command='helpall')
@@ -265,7 +272,13 @@
cmd = opt.pop('super_command')
if cmd == 'helpall':
parser.print_helpall()
+ elif cmd == 'versioninfo':
+ exit(0)
elif cmd == 'help' or cmd is None:
parser.print_help()
elif cmd is not None:
return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)
+
+
+def get_version_string() -> str:
+ return f"ParlAI version {parlai.__version__}"
| {"golden_diff": "diff --git a/parlai/core/script.py b/parlai/core/script.py\n--- a/parlai/core/script.py\n+++ b/parlai/core/script.py\n@@ -17,6 +17,7 @@\n import io\n import argparse\n from typing import List, Optional, Dict, Any\n+import parlai\n from parlai.core.opt import Opt\n from parlai.core.params import ParlaiParser, CustomHelpFormatter\n from abc import abstractmethod\n@@ -206,7 +207,13 @@\n parser.add_argument(\n '--helpall',\n action='helpall',\n- help='show all commands, including advanced ones.',\n+ help='List all commands, including advanced ones.',\n+ )\n+ parser.add_argument(\n+ '--version',\n+ action='version',\n+ version=get_version_string(),\n+ help='Prints version info and exit.',\n )\n parser.set_defaults(super_command=None)\n subparsers = parser.add_subparsers(\n@@ -216,13 +223,13 @@\n 'help',\n aliases=['h'],\n help=argparse.SUPPRESS,\n- description=\"List the main commands\",\n+ description='List the main commands.',\n )\n hparser.set_defaults(super_command='help')\n hparser = subparsers.add_parser(\n 'helpall',\n help=argparse.SUPPRESS,\n- description=\"List all commands, including advanced ones.\",\n+ description='List all commands, including advanced ones.',\n )\n hparser.set_defaults(super_command='helpall')\n \n@@ -265,7 +272,13 @@\n cmd = opt.pop('super_command')\n if cmd == 'helpall':\n parser.print_helpall()\n+ elif cmd == 'versioninfo':\n+ exit(0)\n elif cmd == 'help' or cmd is None:\n parser.print_help()\n elif cmd is not None:\n return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)\n+\n+\n+def get_version_string() -> str:\n+ return f\"ParlAI version {parlai.__version__}\"\n", "issue": "Add --version flag to parlai command\n**Is your feature request related to a problem? Please describe.**\r\nMake parlai \"super command\" conform the unix idiom of a --version flag.\r\n\r\nUsecase: A user switching between virtual environments or reading about the release of a new feature should be able to easily identify which version they have on their path.\r\n\r\n**Describe the solution you'd like**\r\n```bash\r\n$ parlai --version\r\nparlai version X.X.X\r\n# Optionally any relevant install info (not probably not any)\r\n# Optionally copyright/license info (up to the maintainers whether you want this here or not)\r\n```\r\n\r\n**Describe alternatives you've considered**\r\n```bash\r\n$ python -c \"import parlai; print(parlai.__version__)\"\r\n0.9.2\r\n```\r\nA little less natural\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n\"\"\"\nThe general ParlAI Script interface.\n\nAn abstract class to help standardize the call to ParlAI scripts, enabling them to be\ncompleted easily.\n\nAlso contains helper classes for loading scripts, etc.\n\"\"\"\n\nimport io\nimport argparse\nfrom typing import List, Optional, Dict, Any\nfrom parlai.core.opt import Opt\nfrom parlai.core.params import ParlaiParser, CustomHelpFormatter\nfrom abc import abstractmethod\nimport importlib\nimport pkgutil\nimport parlai.scripts\nimport parlai.utils.logging as logging\nfrom parlai.core.loader import register_script, SCRIPT_REGISTRY # noqa: F401\n\n\ndef setup_script_registry():\n \"\"\"\n Loads the scripts so that @register_script is hit for all.\n \"\"\"\n for module in pkgutil.iter_modules(parlai.scripts.__path__, 'parlai.scripts.'):\n importlib.import_module(module.name)\n\n\nclass ParlaiScript(object):\n \"\"\"\n A ParlAI script is a standardized form of access.\n \"\"\"\n\n parser: ParlaiParser\n\n @classmethod\n @abstractmethod\n def setup_args(cls) -> ParlaiParser:\n \"\"\"\n Create the parser with args.\n \"\"\"\n # we want to later deprecate this for add_cmdline_args\n pass\n\n def __init__(self, opt: Opt):\n self.opt = opt\n\n @abstractmethod\n def run(self):\n \"\"\"\n The main method.\n\n Must be implemented by the script writer.\n \"\"\"\n raise NotImplementedError()\n\n @classmethod\n def _run_kwargs(cls, kwargs: Dict[str, Any]):\n \"\"\"\n Construct and run the script using kwargs, pseudo-parsing them.\n \"\"\"\n parser = cls.setup_args()\n opt = parser.parse_kwargs(**kwargs)\n return cls._run_from_parser_and_opt(opt, parser)\n\n @classmethod\n def _run_args(cls, args: Optional[List[str]] = None):\n \"\"\"\n Construct and run the script using args, defaulting to getting from CLI.\n \"\"\"\n parser = cls.setup_args()\n opt = parser.parse_args(args=args)\n return cls._run_from_parser_and_opt(opt, parser)\n\n @classmethod\n def _run_from_parser_and_opt(cls, opt: Opt, parser: ParlaiParser):\n script = cls(opt)\n script.parser = parser\n return script.run()\n\n @classmethod\n def main(cls, *args, **kwargs):\n \"\"\"\n Run the program, possibly with some given args.\n\n You may provide command line args in the form of strings, or\n options. 
For example:\n\n >>> MyScript.main(['--task', 'convai2'])\n >>> MyScript.main(task='convai2')\n\n You may not combine both args and kwargs.\n \"\"\"\n assert not (bool(args) and bool(kwargs))\n if args:\n return cls._run_args(args)\n elif kwargs:\n return cls._run_kwargs(kwargs)\n else:\n return cls._run_args(None)\n\n @classmethod\n def help(cls, **kwargs):\n f = io.StringIO()\n parser = cls.setup_args()\n parser.prog = cls.__name__\n parser.add_extra_args(parser._kwargs_to_str_args(**kwargs))\n parser.print_help(f)\n return f.getvalue()\n\n\nclass _SupercommandParser(ParlaiParser):\n \"\"\"\n Specialty ParlAI parser used for the supercommand.\n\n Contains some special behavior.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n from parlai.utils.strings import colorize\n\n logo = \"\"\n logo += colorize(' _', 'red') + \"\\n\"\n logo += colorize(' /', 'red') + colorize('\"', 'brightblack')\n logo += colorize(\")\", \"yellow\") + \"\\n\"\n logo += colorize(' //', 'red') + colorize(')', 'yellow') + '\\n'\n logo += colorize(' ==', 'green')\n logo += colorize(\"/\", 'blue') + colorize('/', 'red') + colorize(\"'\", 'yellow')\n logo += colorize(\"===\", 'green') + \" ParlAI\\n\"\n logo += colorize(\" /\", 'blue')\n kwargs['description'] = logo\n return super().__init__(*args, **kwargs)\n\n def add_extra_args(self, args):\n sa = [a for a in self._actions if isinstance(a, argparse._SubParsersAction)]\n assert len(sa) == 1\n sa = sa[0]\n for _, v in sa.choices.items():\n v.add_extra_args(args)\n\n def add_subparsers(self, **kwargs):\n return super().add_subparsers(**kwargs)\n\n def _unsuppress_hidden(self):\n \"\"\"\n Restore the help messages of hidden commands.\n \"\"\"\n\n spa = [a for a in self._actions if isinstance(a, argparse._SubParsersAction)]\n assert len(spa) == 1\n spa = spa[0]\n for choices_action in spa._choices_actions:\n dest = choices_action.dest\n if choices_action.help == argparse.SUPPRESS:\n choices_action.help = spa.choices[dest].description\n\n def print_helpall(self):\n self._unsuppress_hidden()\n self.print_help()\n\n\nclass _SubcommandParser(ParlaiParser):\n \"\"\"\n ParlaiParser which always sets add_parlai_args and add_model_args to False.\n\n Used in the superscript to initialize just the args for that command.\n \"\"\"\n\n def __init__(self, **kwargs):\n kwargs['add_parlai_args'] = False\n kwargs['add_model_args'] = False\n assert 'description' in kwargs, 'Must supply description'\n return super().__init__(**kwargs)\n\n def parse_known_args(self, args=None, namespace=None, nohelp=False):\n if not nohelp:\n self.add_extra_args(args)\n return super().parse_known_args(args, namespace, nohelp)\n\n\ndef _SuperscriptHelpFormatter(**kwargs):\n kwargs['width'] = 100\n kwargs['max_help_position'] = 9999\n\n return CustomHelpFormatter(**kwargs)\n\n\ndef superscript_main(args=None):\n \"\"\"\n Superscript is a loader for all the other scripts.\n \"\"\"\n setup_script_registry()\n\n parser = _SupercommandParser(\n False, False, formatter_class=_SuperscriptHelpFormatter\n )\n parser.add_argument(\n '--helpall',\n action='helpall',\n help='show all commands, including advanced ones.',\n )\n parser.set_defaults(super_command=None)\n subparsers = parser.add_subparsers(\n parser_class=_SubcommandParser, title=\"Commands\", metavar=\"COMMAND\"\n )\n hparser = subparsers.add_parser(\n 'help',\n aliases=['h'],\n help=argparse.SUPPRESS,\n description=\"List the main commands\",\n )\n hparser.set_defaults(super_command='help')\n hparser = subparsers.add_parser(\n 'helpall',\n 
help=argparse.SUPPRESS,\n description=\"List all commands, including advanced ones.\",\n )\n hparser.set_defaults(super_command='helpall')\n\n # build the supercommand\n for script_name, registration in SCRIPT_REGISTRY.items():\n logging.verbose(f\"Discovered command {script_name}\")\n script_parser = registration.klass.setup_args()\n if script_parser is None:\n # user didn't bother defining command line args. let's just fill\n # in for them\n script_parser = ParlaiParser(False, False)\n help_ = argparse.SUPPRESS if registration.hidden else script_parser.description\n subparser = subparsers.add_parser(\n script_name,\n aliases=registration.aliases,\n help=help_,\n description=script_parser.description,\n formatter_class=CustomHelpFormatter,\n )\n subparser.set_defaults(\n # carries the name of the full command so we know what to execute\n super_command=script_name,\n # used in ParlAI parser to find CLI options set by user\n _subparser=subparser,\n )\n subparser.set_defaults(**script_parser._defaults)\n for action in script_parser._actions:\n subparser._add_action(action)\n for action_group in script_parser._action_groups:\n subparser._action_groups.append(action_group)\n\n try:\n import argcomplete\n\n argcomplete.autocomplete(parser)\n except ModuleNotFoundError:\n pass\n\n opt = parser.parse_args(args)\n cmd = opt.pop('super_command')\n if cmd == 'helpall':\n parser.print_helpall()\n elif cmd == 'help' or cmd is None:\n parser.print_help()\n elif cmd is not None:\n return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)\n", "path": "parlai/core/script.py"}]} | 3,344 | 462 |
gh_patches_debug_20796 | rasdani/github-patches | git_diff | wagtail__wagtail-7703 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
csrf_token missing for admin dashboard workflow actions (resulting in 403)
### Issue Summary
When a `POST` is made to the workflow URLs `admin/pages/workflow/action/` via the dashboard buttons (e.g. Approve), it does not contain the `csrfToken`, resulting in a 403 each time.
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Create a new page & publish
3. Edit that same page, but this time submit it for moderator's approval
4. Go to the dashboard (admin home)
5. Click 'Approve'
6. Expected: the page's moderation should be approved. Actual: the user is shown a 403 error
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: No; reproduced only on an unmodified Wagtail bakerydemo.
<img width="1873" alt="Screen Shot 2021-11-08 at 9 10 12 pm" src="https://user-images.githubusercontent.com/1396140/140733351-f955d1ed-283b-45b8-a85e-fc3ce20b6d49.png">
<img width="1873" alt="Screen Shot 2021-11-08 at 9 10 36 pm" src="https://user-images.githubusercontent.com/1396140/140733360-6657420b-6e14-4f28-9233-f81c95812305.png">
### Technical details
* Python version: 2.7.6
* Django version: 3.2.6
* Wagtail version: 2.15
* Browser version: Chrome 95 https://www.whatsmybrowser.org/b/VAX8V
### Likely root cause
* Dashboard renders buttons with `data-workflow-action-url` to indicate the action URL to POST to.
* `client/src/entrypoints/admin/workflow-action.js` has a function `ActivateWorkflowActionsForDashboard` which accepts a `csrfToken`; for non-modal workflow actions (e.g. approve, but not approve with comment) it will `POST` to `buttonElement.dataset.workflowActionUrl`, which in the approve case is something like `admin/pages/workflow/action/`
* This function is initiated in the template `wagtail/admin/templates/wagtailadmin/home/workflow_pages_to_moderate.html` - see `document.addEventListener('DOMContentLoaded', ActivateWorkflowActionsForDashboard('{{ csrf_token|escapejs }}'));`
* This template is used by `wagtail/admin/views/home.py` - see
  ```python
  class WorkflowPagesToModeratePanel(Component):
      name = 'workflow_pages_to_moderate'
      template_name = 'wagtailadmin/home/workflow_pages_to_moderate.html'
      order = 220
  ```
* However, the `get_context_data` does not appear to return the context variable `csrf_token` at all
### Recommended fix
* It is probably best to align the `csrf_token` with the Django tag usage instead of passing it down as a variable through the component's context.
* However, a similar approach appears to have been taken for wagtail/admin/templates/wagtailadmin/home/locked_pages.html (not sure if that is broken also), and the `escapejs` usage there seems intentional
</issue>
<code>
[start of wagtail/admin/views/home.py]
1 import itertools
2
3 from warnings import warn
4
5 from django.conf import settings
6 from django.contrib.auth import get_user_model
7 from django.contrib.auth.decorators import permission_required
8 from django.db import connection
9 from django.db.models import Max, Q
10 from django.forms import Media
11 from django.http import Http404, HttpResponse
12 from django.template.loader import render_to_string
13 from django.template.response import TemplateResponse
14
15 from wagtail.admin.navigation import get_site_for_user
16 from wagtail.admin.site_summary import SiteSummaryPanel
17 from wagtail.admin.ui.components import Component
18 from wagtail.core import hooks
19 from wagtail.core.models import (
20 Page, PageRevision, TaskState, UserPagePermissionsProxy, WorkflowState)
21 from wagtail.utils.deprecation import RemovedInWagtail217Warning
22
23
24 User = get_user_model()
25
26
27 # Panels for the homepage
28
29 class UpgradeNotificationPanel(Component):
30 name = 'upgrade_notification'
31 template_name = 'wagtailadmin/home/upgrade_notification.html'
32 order = 100
33
34 def render_html(self, parent_context):
35 if parent_context['request'].user.is_superuser and getattr(settings, "WAGTAIL_ENABLE_UPDATE_CHECK", True):
36 return super().render_html(parent_context)
37 else:
38 return ""
39
40
41 class PagesForModerationPanel(Component):
42 name = 'pages_for_moderation'
43 template_name = 'wagtailadmin/home/pages_for_moderation.html'
44 order = 200
45
46 def get_context_data(self, parent_context):
47 request = parent_context['request']
48 context = super().get_context_data(parent_context)
49 user_perms = UserPagePermissionsProxy(request.user)
50 context['page_revisions_for_moderation'] = (
51 user_perms.revisions_for_moderation().select_related('page', 'user').order_by('-created_at')
52 )
53 context['request'] = request
54 return context
55
56
57 class UserPagesInWorkflowModerationPanel(Component):
58 name = 'user_pages_in_workflow_moderation'
59 template_name = 'wagtailadmin/home/user_pages_in_workflow_moderation.html'
60 order = 210
61
62 def get_context_data(self, parent_context):
63 request = parent_context['request']
64 context = super().get_context_data(parent_context)
65 if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):
66 # Find in progress workflow states which are either requested by the user or on pages owned by the user
67 context['workflow_states'] = (
68 WorkflowState.objects.active()
69 .filter(Q(page__owner=request.user) | Q(requested_by=request.user))
70 .select_related(
71 'page', 'current_task_state', 'current_task_state__task', 'current_task_state__page_revision'
72 )
73 .order_by('-current_task_state__started_at')
74 )
75 else:
76 context['workflow_states'] = WorkflowState.objects.none()
77 context['request'] = request
78 return context
79
80
81 class WorkflowPagesToModeratePanel(Component):
82 name = 'workflow_pages_to_moderate'
83 template_name = 'wagtailadmin/home/workflow_pages_to_moderate.html'
84 order = 220
85
86 def get_context_data(self, parent_context):
87 request = parent_context['request']
88 context = super().get_context_data(parent_context)
89 if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):
90 states = (
91 TaskState.objects.reviewable_by(request.user)
92 .select_related('page_revision', 'task', 'page_revision__page')
93 .order_by('-started_at')
94 )
95 context['states'] = [
96 (state, state.task.specific.get_actions(page=state.page_revision.page, user=request.user), state.workflow_state.all_tasks_with_status())
97 for state in states
98 ]
99 else:
100 context['states'] = []
101 context['request'] = request
102 return context
103
104
105 class LockedPagesPanel(Component):
106 name = 'locked_pages'
107 template_name = 'wagtailadmin/home/locked_pages.html'
108 order = 300
109
110 def get_context_data(self, parent_context):
111 request = parent_context['request']
112 context = super().get_context_data(parent_context)
113 context.update({
114 'locked_pages': Page.objects.filter(
115 locked=True,
116 locked_by=request.user,
117 ),
118 'can_remove_locks': UserPagePermissionsProxy(request.user).can_remove_locks(),
119 'request': request,
120 })
121 return context
122
123
124 class RecentEditsPanel(Component):
125 name = 'recent_edits'
126 template_name = 'wagtailadmin/home/recent_edits.html'
127 order = 250
128
129 def get_context_data(self, parent_context):
130 request = parent_context['request']
131 context = super().get_context_data(parent_context)
132
133 # Last n edited pages
134 edit_count = getattr(settings, 'WAGTAILADMIN_RECENT_EDITS_LIMIT', 5)
135 if connection.vendor == 'mysql':
136 # MySQL can't handle the subselect created by the ORM version -
137 # it fails with "This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'"
138 last_edits = PageRevision.objects.raw(
139 """
140 SELECT wp.* FROM
141 wagtailcore_pagerevision wp JOIN (
142 SELECT max(created_at) AS max_created_at, page_id FROM
143 wagtailcore_pagerevision WHERE user_id = %s GROUP BY page_id ORDER BY max_created_at DESC LIMIT %s
144 ) AS max_rev ON max_rev.max_created_at = wp.created_at ORDER BY wp.created_at DESC
145 """, [
146 User._meta.pk.get_db_prep_value(request.user.pk, connection),
147 edit_count
148 ]
149 )
150 else:
151 last_edits_dates = (PageRevision.objects.filter(user=request.user)
152 .values('page_id').annotate(latest_date=Max('created_at'))
153 .order_by('-latest_date').values('latest_date')[:edit_count])
154 last_edits = PageRevision.objects.filter(created_at__in=last_edits_dates).order_by('-created_at')
155
156 page_keys = [pr.page_id for pr in last_edits]
157 pages = Page.objects.specific().in_bulk(page_keys)
158 context['last_edits'] = [
159 [revision, pages.get(revision.page_id)] for revision in last_edits
160 ]
161 context['request'] = request
162 return context
163
164
165 def home(request):
166
167 panels = [
168 SiteSummaryPanel(request),
169 UpgradeNotificationPanel(),
170 WorkflowPagesToModeratePanel(),
171 PagesForModerationPanel(),
172 UserPagesInWorkflowModerationPanel(),
173 RecentEditsPanel(),
174 LockedPagesPanel(),
175 ]
176
177 for fn in hooks.get_hooks('construct_homepage_panels'):
178 fn(request, panels)
179
180 media = Media()
181
182 for panel in panels:
183 if hasattr(panel, 'render') and not hasattr(panel, 'render_html'):
184 # NOTE: when this deprecation warning is removed the 'fallback_render_method=True' in
185 # wagtailadmin/home.html should be removed too
186 message = (
187 "Homepage panel %r should provide a render_html method. "
188 "See https://docs.wagtail.io/en/stable/releases/2.15.html#template-components-2-15"
189 % panel
190 )
191 warn(message, category=RemovedInWagtail217Warning)
192
193 # RemovedInWagtail217Warning: this hasattr check can be removed when support for
194 # non-component-based panels ends
195 if hasattr(panel, 'media'):
196 media += panel.media
197
198 site_details = get_site_for_user(request.user)
199
200 return TemplateResponse(request, "wagtailadmin/home.html", {
201 'root_page': site_details['root_page'],
202 'root_site': site_details['root_site'],
203 'site_name': site_details['site_name'],
204 'panels': sorted(panels, key=lambda p: p.order),
205 'user': request.user,
206 'media': media,
207 })
208
209
210 def error_test(request):
211 raise Exception("This is a test of the emergency broadcast system.")
212
213
214 @permission_required('wagtailadmin.access_admin', login_url='wagtailadmin_login')
215 def default(request):
216 """
217 Called whenever a request comes in with the correct prefix (eg /admin/) but
218 doesn't actually correspond to a Wagtail view.
219
220 For authenticated users, it'll raise a 404 error. Anonymous users will be
221 redirected to the login page.
222 """
223 raise Http404
224
225
226 _icons_html = None
227
228
229 def icons():
230 global _icons_html
231 if _icons_html is None:
232 icon_hooks = hooks.get_hooks('register_icons')
233 all_icons = sorted(itertools.chain.from_iterable(hook([]) for hook in icon_hooks))
234 _icons_html = render_to_string("wagtailadmin/shared/icons.html", {'icons': all_icons})
235 return _icons_html
236
237
238 def sprite(request):
239 return HttpResponse(icons())
240
[end of wagtail/admin/views/home.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/admin/views/home.py b/wagtail/admin/views/home.py
--- a/wagtail/admin/views/home.py
+++ b/wagtail/admin/views/home.py
@@ -51,6 +51,7 @@
user_perms.revisions_for_moderation().select_related('page', 'user').order_by('-created_at')
)
context['request'] = request
+ context['csrf_token'] = parent_context['csrf_token']
return context
@@ -99,6 +100,7 @@
else:
context['states'] = []
context['request'] = request
+ context['csrf_token'] = parent_context['csrf_token']
return context
@@ -117,6 +119,7 @@
),
'can_remove_locks': UserPagePermissionsProxy(request.user).can_remove_locks(),
'request': request,
+ 'csrf_token': parent_context['csrf_token'],
})
return context
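The change above applies one pattern in each homepage panel: copy the `csrf_token` that the parent template context already carries into the panel's own context, so the token reaches the included template and its workflow-action buttons. A stripped-down illustration of that pattern follows; the class is illustrative only, not Wagtail's actual `Component` base.

```python
# Illustrative sketch only: a panel component that forwards the CSRF token
# from the parent context so the sub-template's workflow-action buttons can
# POST with a valid token instead of triggering a 403.
class ExamplePanel:
    template_name = "example_panel.html"  # hypothetical template

    def get_context_data(self, parent_context):
        return {
            "request": parent_context["request"],
            # Without this key, `{{ csrf_token|escapejs }}` in the included
            # template renders empty and the dashboard POST fails CSRF checks.
            "csrf_token": parent_context["csrf_token"],
        }
```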
| {"golden_diff": "diff --git a/wagtail/admin/views/home.py b/wagtail/admin/views/home.py\n--- a/wagtail/admin/views/home.py\n+++ b/wagtail/admin/views/home.py\n@@ -51,6 +51,7 @@\n user_perms.revisions_for_moderation().select_related('page', 'user').order_by('-created_at')\n )\n context['request'] = request\n+ context['csrf_token'] = parent_context['csrf_token']\n return context\n \n \n@@ -99,6 +100,7 @@\n else:\n context['states'] = []\n context['request'] = request\n+ context['csrf_token'] = parent_context['csrf_token']\n return context\n \n \n@@ -117,6 +119,7 @@\n ),\n 'can_remove_locks': UserPagePermissionsProxy(request.user).can_remove_locks(),\n 'request': request,\n+ 'csrf_token': parent_context['csrf_token'],\n })\n return context\n", "issue": "csrf_token missing for admin dashboard workflow actions (resulting in 403)\n### Issue Summary\r\n\r\nWhen a `POST` is made to the workflow URLs `admin/pages/workflow/action/` via the dashboard buttons (e.g. Approve) it does not contain the `csrfToken` resulting in a 403 each time.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Start a new project with `wagtail start myproject`\r\n2. Create a new page & publish\r\n3. Edit that same page but this time put to moderator's approval\r\n4. Go to the dashboard (admin home)\r\n5. Click 'Approve'\r\n6. Expected: Should work & approve the page's moderation, Actual: user is shown a 403 error\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: No - Wagtail bakerydemo without changes only.\r\n\r\n\r\n<img width=\"1873\" alt=\"Screen Shot 2021-11-08 at 9 10 12 pm\" src=\"https://user-images.githubusercontent.com/1396140/140733351-f955d1ed-283b-45b8-a85e-fc3ce20b6d49.png\">\r\n\r\n\r\n<img width=\"1873\" alt=\"Screen Shot 2021-11-08 at 9 10 36 pm\" src=\"https://user-images.githubusercontent.com/1396140/140733360-6657420b-6e14-4f28-9233-f81c95812305.png\">\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 2.7.6\r\n* Django version: 3.2.6\r\n* Wagtail version: 2.15\r\n* Browser version: Chrome 95 https://www.whatsmybrowser.org/b/VAX8V\r\n\r\n### Likely root cause\r\n\r\n* Dashboard renders buttons with `data-workflow-action-url` to indicate the action URL to POST to.\r\n* `client/src/entrypoints/admin/workflow-action.js` has a function `ActivateWorkflowActionsForDashboard` which accepts a `csrfToken` and for non-modal workflow actions (e.g. 
approve but not approve with comment) it will `POST` to `buttonElement.dataset.workflowActionUrl` which in the approve case is something like `admin/pages/workflow/action/`\r\n* This function is initiated in the template `wagtail/admin/templates/wagtailadmin/home/workflow_pages_to_moderate.html` - see `document.addEventListener('DOMContentLoaded', ActivateWorkflowActionsForDashboard('{{ csrf_token|escapejs }}'));`\r\n* This template is used by `wagtail/admin/views/home.py` - see\r\n* ```class WorkflowPagesToModeratePanel(Component):\r\n name = 'workflow_pages_to_moderate'\r\n template_name = 'wagtailadmin/home/workflow_pages_to_moderate.html'\r\n order = 220```\r\n* However, the `get_context_data` does not appear to return the context variable `csrf_token` at all\r\n\r\n\r\n### Recommended fix\r\n\r\n* It is probably best to align the `csrf_token` with the Django tag usage instead of passing it down as a variable through the component's context.\r\n* However, it appears that a similar approach has been taken for wagtail/admin/templates/wagtailadmin/home/locked_pages.html (not sure if that is broken also) - but it seems like there is an intentional escapejs usage\r\n\n", "before_files": [{"content": "import itertools\n\nfrom warnings import warn\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.decorators import permission_required\nfrom django.db import connection\nfrom django.db.models import Max, Q\nfrom django.forms import Media\nfrom django.http import Http404, HttpResponse\nfrom django.template.loader import render_to_string\nfrom django.template.response import TemplateResponse\n\nfrom wagtail.admin.navigation import get_site_for_user\nfrom wagtail.admin.site_summary import SiteSummaryPanel\nfrom wagtail.admin.ui.components import Component\nfrom wagtail.core import hooks\nfrom wagtail.core.models import (\n Page, PageRevision, TaskState, UserPagePermissionsProxy, WorkflowState)\nfrom wagtail.utils.deprecation import RemovedInWagtail217Warning\n\n\nUser = get_user_model()\n\n\n# Panels for the homepage\n\nclass UpgradeNotificationPanel(Component):\n name = 'upgrade_notification'\n template_name = 'wagtailadmin/home/upgrade_notification.html'\n order = 100\n\n def render_html(self, parent_context):\n if parent_context['request'].user.is_superuser and getattr(settings, \"WAGTAIL_ENABLE_UPDATE_CHECK\", True):\n return super().render_html(parent_context)\n else:\n return \"\"\n\n\nclass PagesForModerationPanel(Component):\n name = 'pages_for_moderation'\n template_name = 'wagtailadmin/home/pages_for_moderation.html'\n order = 200\n\n def get_context_data(self, parent_context):\n request = parent_context['request']\n context = super().get_context_data(parent_context)\n user_perms = UserPagePermissionsProxy(request.user)\n context['page_revisions_for_moderation'] = (\n user_perms.revisions_for_moderation().select_related('page', 'user').order_by('-created_at')\n )\n context['request'] = request\n return context\n\n\nclass UserPagesInWorkflowModerationPanel(Component):\n name = 'user_pages_in_workflow_moderation'\n template_name = 'wagtailadmin/home/user_pages_in_workflow_moderation.html'\n order = 210\n\n def get_context_data(self, parent_context):\n request = parent_context['request']\n context = super().get_context_data(parent_context)\n if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):\n # Find in progress workflow states which are either requested by the user or on pages owned by the user\n context['workflow_states'] = 
(\n WorkflowState.objects.active()\n .filter(Q(page__owner=request.user) | Q(requested_by=request.user))\n .select_related(\n 'page', 'current_task_state', 'current_task_state__task', 'current_task_state__page_revision'\n )\n .order_by('-current_task_state__started_at')\n )\n else:\n context['workflow_states'] = WorkflowState.objects.none()\n context['request'] = request\n return context\n\n\nclass WorkflowPagesToModeratePanel(Component):\n name = 'workflow_pages_to_moderate'\n template_name = 'wagtailadmin/home/workflow_pages_to_moderate.html'\n order = 220\n\n def get_context_data(self, parent_context):\n request = parent_context['request']\n context = super().get_context_data(parent_context)\n if getattr(settings, 'WAGTAIL_WORKFLOW_ENABLED', True):\n states = (\n TaskState.objects.reviewable_by(request.user)\n .select_related('page_revision', 'task', 'page_revision__page')\n .order_by('-started_at')\n )\n context['states'] = [\n (state, state.task.specific.get_actions(page=state.page_revision.page, user=request.user), state.workflow_state.all_tasks_with_status())\n for state in states\n ]\n else:\n context['states'] = []\n context['request'] = request\n return context\n\n\nclass LockedPagesPanel(Component):\n name = 'locked_pages'\n template_name = 'wagtailadmin/home/locked_pages.html'\n order = 300\n\n def get_context_data(self, parent_context):\n request = parent_context['request']\n context = super().get_context_data(parent_context)\n context.update({\n 'locked_pages': Page.objects.filter(\n locked=True,\n locked_by=request.user,\n ),\n 'can_remove_locks': UserPagePermissionsProxy(request.user).can_remove_locks(),\n 'request': request,\n })\n return context\n\n\nclass RecentEditsPanel(Component):\n name = 'recent_edits'\n template_name = 'wagtailadmin/home/recent_edits.html'\n order = 250\n\n def get_context_data(self, parent_context):\n request = parent_context['request']\n context = super().get_context_data(parent_context)\n\n # Last n edited pages\n edit_count = getattr(settings, 'WAGTAILADMIN_RECENT_EDITS_LIMIT', 5)\n if connection.vendor == 'mysql':\n # MySQL can't handle the subselect created by the ORM version -\n # it fails with \"This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'\"\n last_edits = PageRevision.objects.raw(\n \"\"\"\n SELECT wp.* FROM\n wagtailcore_pagerevision wp JOIN (\n SELECT max(created_at) AS max_created_at, page_id FROM\n wagtailcore_pagerevision WHERE user_id = %s GROUP BY page_id ORDER BY max_created_at DESC LIMIT %s\n ) AS max_rev ON max_rev.max_created_at = wp.created_at ORDER BY wp.created_at DESC\n \"\"\", [\n User._meta.pk.get_db_prep_value(request.user.pk, connection),\n edit_count\n ]\n )\n else:\n last_edits_dates = (PageRevision.objects.filter(user=request.user)\n .values('page_id').annotate(latest_date=Max('created_at'))\n .order_by('-latest_date').values('latest_date')[:edit_count])\n last_edits = PageRevision.objects.filter(created_at__in=last_edits_dates).order_by('-created_at')\n\n page_keys = [pr.page_id for pr in last_edits]\n pages = Page.objects.specific().in_bulk(page_keys)\n context['last_edits'] = [\n [revision, pages.get(revision.page_id)] for revision in last_edits\n ]\n context['request'] = request\n return context\n\n\ndef home(request):\n\n panels = [\n SiteSummaryPanel(request),\n UpgradeNotificationPanel(),\n WorkflowPagesToModeratePanel(),\n PagesForModerationPanel(),\n UserPagesInWorkflowModerationPanel(),\n RecentEditsPanel(),\n LockedPagesPanel(),\n ]\n\n for fn in 
hooks.get_hooks('construct_homepage_panels'):\n fn(request, panels)\n\n media = Media()\n\n for panel in panels:\n if hasattr(panel, 'render') and not hasattr(panel, 'render_html'):\n # NOTE: when this deprecation warning is removed the 'fallback_render_method=True' in\n # wagtailadmin/home.html should be removed too\n message = (\n \"Homepage panel %r should provide a render_html method. \"\n \"See https://docs.wagtail.io/en/stable/releases/2.15.html#template-components-2-15\"\n % panel\n )\n warn(message, category=RemovedInWagtail217Warning)\n\n # RemovedInWagtail217Warning: this hasattr check can be removed when support for\n # non-component-based panels ends\n if hasattr(panel, 'media'):\n media += panel.media\n\n site_details = get_site_for_user(request.user)\n\n return TemplateResponse(request, \"wagtailadmin/home.html\", {\n 'root_page': site_details['root_page'],\n 'root_site': site_details['root_site'],\n 'site_name': site_details['site_name'],\n 'panels': sorted(panels, key=lambda p: p.order),\n 'user': request.user,\n 'media': media,\n })\n\n\ndef error_test(request):\n raise Exception(\"This is a test of the emergency broadcast system.\")\n\n\n@permission_required('wagtailadmin.access_admin', login_url='wagtailadmin_login')\ndef default(request):\n \"\"\"\n Called whenever a request comes in with the correct prefix (eg /admin/) but\n doesn't actually correspond to a Wagtail view.\n\n For authenticated users, it'll raise a 404 error. Anonymous users will be\n redirected to the login page.\n \"\"\"\n raise Http404\n\n\n_icons_html = None\n\n\ndef icons():\n global _icons_html\n if _icons_html is None:\n icon_hooks = hooks.get_hooks('register_icons')\n all_icons = sorted(itertools.chain.from_iterable(hook([]) for hook in icon_hooks))\n _icons_html = render_to_string(\"wagtailadmin/shared/icons.html\", {'icons': all_icons})\n return _icons_html\n\n\ndef sprite(request):\n return HttpResponse(icons())\n", "path": "wagtail/admin/views/home.py"}]} | 3,845 | 214 |
gh_patches_debug_19601 | rasdani/github-patches | git_diff | voxel51__fiftyone-200 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation build takes a long time
Reproducible with `SPHINXOPTS=-E time ./docs/generate_docs.bash` - notably, it takes multiple minutes to build this page:
```
reading sources... [ 18%] api/fiftyone.core.odm
```
This is at least part of the reason why documentation builds on GitHub actions are taking so long.
Possibly related to #182, although I don't think earlier builds on GitHub actions ran into this.
</issue>
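A slowdown like this is a common symptom of work done at import time: Sphinx's autodoc has to import `fiftyone.core.odm`, and the patch that accompanies this issue (further down) replaces the module-level `connect(...)` call with a connection created only on first use. A generic sketch of that lazy-initialization pattern follows; `make_client` is a stand-in for an expensive constructor such as `pymongo.MongoClient`, used only to keep the snippet self-contained.

```python
# Generic lazy-initialization sketch; make_client stands in for an expensive
# constructor, so importing this module costs nothing until a caller actually
# asks for a connection.
_client = None


def make_client():
    # Placeholder for costly connection setup.
    return object()


def get_db_conn():
    """Return a cached client, creating it on first use."""
    global _client
    if _client is None:          # first call pays the setup cost
        _client = make_client()
    return _client               # later calls reuse the cached client
```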
<code>
[start of fiftyone/core/odm/database.py]
1 """
2 Database connection.
3
4 | Copyright 2017-2020, Voxel51, Inc.
5 | `voxel51.com <https://voxel51.com/>`_
6 |
7 """
8 # pragma pylint: disable=redefined-builtin
9 # pragma pylint: disable=unused-wildcard-import
10 # pragma pylint: disable=wildcard-import
11 from __future__ import absolute_import
12 from __future__ import division
13 from __future__ import print_function
14 from __future__ import unicode_literals
15
16 # pragma pylint: enable=redefined-builtin
17 # pragma pylint: enable=unused-wildcard-import
18 # pragma pylint: enable=wildcard-import
19
20
21 from mongoengine import connect
22 import pymongo
23
24 _DEFAULT_DATABASE = "fiftyone"
25
26
27 _db = connect(_DEFAULT_DATABASE)
28
29
30 def get_db_conn():
31 """Creates a connection to the database"""
32 return pymongo.MongoClient()[_DEFAULT_DATABASE]
33
34
35 def drop_database():
36 """Drops the database."""
37 client = pymongo.MongoClient()
38 client.drop_database(_DEFAULT_DATABASE)
39
[end of fiftyone/core/odm/database.py]
[start of fiftyone/core/service.py]
1 """
2 FiftyOne Services.
3
4 | Copyright 2017-2020, Voxel51, Inc.
5 | `voxel51.com <https://voxel51.com/>`_
6 |
7 """
8 # pragma pylint: disable=redefined-builtin
9 # pragma pylint: disable=unused-wildcard-import
10 # pragma pylint: disable=wildcard-import
11 from __future__ import absolute_import
12 from __future__ import division
13 from __future__ import print_function
14 from __future__ import unicode_literals
15 from builtins import *
16
17 # pragma pylint: enable=redefined-builtin
18 # pragma pylint: enable=unused-wildcard-import
19 # pragma pylint: enable=wildcard-import
20
21 import logging
22 import os
23 import re
24 import subprocess
25 import sys
26
27 from packaging.version import Version
28 import requests
29
30 import eta.core.utils as etau
31
32 import fiftyone.constants as foc
33
34
35 logger = logging.getLogger(__name__)
36
37
38 class Service(object):
39 """Interface for FiftyOne services.
40
41 All services must define a ``command`` property.
42
43 Services are run in an isolated Python subprocess (see ``_service_main.py``)
44 to ensure that they are shut down when the main Python process exits. The
45 ``command`` and ``working_dir`` properties control the execution of the
46 service in the subprocess.
47 """
48
49 working_dir = "."
50
51 def __init__(self):
52 """Creates (starts) the Service."""
53 self._system = os.system
54 self._is_server = os.environ.get(
55 "FIFTYONE_SERVER", False
56 ) or os.environ.get("FIFTYONE_DISABLE_SERVICES", False)
57 self.child = None
58 if not self._is_server:
59 self.start()
60
61 def __del__(self):
62 """Deletes (stops) the Service."""
63 if not self._is_server:
64 try:
65 self.stop()
66 except:
67 # something probably failed due to interpreter shutdown, which
68 # will be handled by _service_main.py
69 pass
70
71 @property
72 def command(self):
73 raise NotImplementedError("subclasses must define `command`")
74
75 def start(self):
76 """Starts the Service."""
77 service_main_path = os.path.join(
78 os.path.dirname(os.path.abspath(__file__)),
79 "..",
80 "_service_main.py",
81 )
82 self.child = subprocess.Popen(
83 [sys.executable, service_main_path] + self.command,
84 cwd=self.working_dir,
85 stdin=subprocess.PIPE,
86 )
87
88 def stop(self):
89 """Stops the Service."""
90 self.child.stdin.close()
91 self.child.wait()
92
93 def wait(self):
94 """Waits for the Service to exit and returns its exit code."""
95 return self.child.wait()
96
97
98 class DatabaseService(Service):
99 """Service that controls the underlying MongoDB database."""
100
101 MIN_MONGO_VERSION = "3.6"
102
103 @property
104 def command(self):
105 return [
106 DatabaseService.find_mongod(),
107 "--dbpath",
108 foc.DB_PATH,
109 "--logpath",
110 foc.DB_LOG_PATH,
111 ]
112
113 def start(self):
114 """Starts the DatabaseService."""
115 for folder in (foc.DB_PATH, os.path.dirname(foc.DB_LOG_PATH)):
116 if not os.path.isdir(folder):
117 os.makedirs(folder)
118
119 super().start()
120
121 # Drop non-persistent datasets
122 import fiftyone.core.dataset as fod
123
124 fod.delete_non_persistent_datasets()
125
126 @staticmethod
127 def find_mongod():
128 search_paths = [
129 foc.FIFTYONE_DB_BIN_DIR,
130 os.path.join(foc.FIFTYONE_CONFIG_DIR, "bin"),
131 ] + os.environ["PATH"].split(os.pathsep)
132 searched = set()
133 attempts = []
134 for folder in search_paths:
135 if folder in searched:
136 continue
137 searched.add(folder)
138 mongod_path = os.path.join(folder, "mongod")
139 if os.path.isfile(mongod_path):
140 ok, out, err = etau.communicate([mongod_path, "--version"])
141 out = out.decode(errors="ignore").strip()
142 err = err.decode(errors="ignore").strip()
143 mongod_version = None
144 if ok:
145 match = re.search(r"db version.+?([\d\.]+)", out, re.I)
146 if match:
147 mongod_version = match.group(1)
148 if Version(mongod_version) >= Version(
149 DatabaseService.MIN_MONGO_VERSION
150 ):
151 return mongod_path
152 attempts.append((mongod_path, mongod_version, err))
153 for path, version, err in attempts:
154 if version is not None:
155 logger.warn("%s: incompatible version %s" % (path, version))
156 else:
157 logger.error("%s: failed to launch: %s" % (path, err))
158 raise RuntimeError(
159 "Could not find mongod >= %s" % DatabaseService.MIN_MONGO_VERSION
160 )
161
162
163 class ServerService(Service):
164 """Service that controls the FiftyOne web server."""
165
166 working_dir = foc.SERVER_DIR
167
168 def __init__(self, port):
169 self._port = port
170 super(ServerService, self).__init__()
171
172 def start(self):
173 server_version = None
174 try:
175 server_version = requests.get(
176 "http://127.0.0.1:%i/fiftyone" % self._port, timeout=2
177 ).json()["version"]
178 except Exception:
179 # There is likely not a fiftyone server running (remote or local),
180 # so start a local server. If there actually is a fiftyone server
181 # running that didn't respond to /fiftyone, the local server will
182 # fail to start but the dashboard will still connect successfully.
183 super().start()
184
185 if server_version is not None:
186 logger.info("Connected to fiftyone on local port %i" % self._port)
187 if server_version != foc.VERSION:
188 logger.warn(
189 "Server version (%s) does not match client version (%s)"
190 % (server_version, foc.VERSION)
191 )
192
193 @property
194 def command(self):
195 command = [
196 "gunicorn",
197 "-w",
198 "1",
199 "--worker-class",
200 "eventlet",
201 "-b",
202 "127.0.0.1:%d" % self._port,
203 "main:app",
204 ]
205 if foc.DEV_INSTALL:
206 command += ["--reload"]
207 return command
208
209 @property
210 def port(self):
211 """Getter for the current port"""
212 return self._port
213
214
215 class AppService(Service):
216 """Service that controls the FiftyOne app."""
217
218 working_dir = foc.FIFTYONE_APP_DIR
219
220 @property
221 def command(self):
222 with etau.WorkingDir(foc.FIFTYONE_APP_DIR):
223 if os.path.isfile("FiftyOne.AppImage"):
224 # linux
225 args = ["./FiftyOne.AppImage"]
226 elif os.path.isdir("FiftyOne.app"):
227 args = ["./FiftyOne.app/Contents/MacOS/FiftyOne"]
228 elif os.path.isfile("package.json"):
229 # dev build
230 args = ["yarn", "dev"]
231 else:
232 raise RuntimeError(
233 "Could not find FiftyOne dashboard in %r"
234 % foc.FIFTYONE_APP_DIR
235 )
236 return args
237
[end of fiftyone/core/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/fiftyone/core/odm/database.py b/fiftyone/core/odm/database.py
--- a/fiftyone/core/odm/database.py
+++ b/fiftyone/core/odm/database.py
@@ -22,17 +22,23 @@
import pymongo
_DEFAULT_DATABASE = "fiftyone"
+_client = None
-_db = connect(_DEFAULT_DATABASE)
+def _connect():
+ global _client
+ if _client is None:
+ connect(_DEFAULT_DATABASE)
+ _client = pymongo.MongoClient()
def get_db_conn():
"""Creates a connection to the database"""
- return pymongo.MongoClient()[_DEFAULT_DATABASE]
+ _connect()
+ return _client[_DEFAULT_DATABASE]
def drop_database():
"""Drops the database."""
- client = pymongo.MongoClient()
- client.drop_database(_DEFAULT_DATABASE)
+ _connect()
+ _client.drop_database(_DEFAULT_DATABASE)
diff --git a/fiftyone/core/service.py b/fiftyone/core/service.py
--- a/fiftyone/core/service.py
+++ b/fiftyone/core/service.py
@@ -118,6 +118,11 @@
super().start()
+ # Set up a default connection
+ import fiftyone.core.odm as foo
+
+ foo.get_db_conn()
+
# Drop non-persistent datasets
import fiftyone.core.dataset as fod
| {"golden_diff": "diff --git a/fiftyone/core/odm/database.py b/fiftyone/core/odm/database.py\n--- a/fiftyone/core/odm/database.py\n+++ b/fiftyone/core/odm/database.py\n@@ -22,17 +22,23 @@\n import pymongo\n \n _DEFAULT_DATABASE = \"fiftyone\"\n+_client = None\n \n \n-_db = connect(_DEFAULT_DATABASE)\n+def _connect():\n+ global _client\n+ if _client is None:\n+ connect(_DEFAULT_DATABASE)\n+ _client = pymongo.MongoClient()\n \n \n def get_db_conn():\n \"\"\"Creates a connection to the database\"\"\"\n- return pymongo.MongoClient()[_DEFAULT_DATABASE]\n+ _connect()\n+ return _client[_DEFAULT_DATABASE]\n \n \n def drop_database():\n \"\"\"Drops the database.\"\"\"\n- client = pymongo.MongoClient()\n- client.drop_database(_DEFAULT_DATABASE)\n+ _connect()\n+ _client.drop_database(_DEFAULT_DATABASE)\ndiff --git a/fiftyone/core/service.py b/fiftyone/core/service.py\n--- a/fiftyone/core/service.py\n+++ b/fiftyone/core/service.py\n@@ -118,6 +118,11 @@\n \n super().start()\n \n+ # Set up a default connection\n+ import fiftyone.core.odm as foo\n+\n+ foo.get_db_conn()\n+\n # Drop non-persistent datasets\n import fiftyone.core.dataset as fod\n", "issue": "Documentation build takes a long time\nReproducible with `SPHINXOPTS=-E time ./docs/generate_docs.bash` - notably, it takes multiple minutes to build this page:\r\n```\r\nreading sources... [ 18%] api/fiftyone.core.odm \r\n```\r\n\r\nThis is at least part of the reason why documentation builds on GitHub actions are taking so long.\r\n\r\nPossibly related to #182, although I don't think earlier builds on GitHub actions ran into this.\n", "before_files": [{"content": "\"\"\"\nDatabase connection.\n\n| Copyright 2017-2020, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\n# pragma pylint: disable=redefined-builtin\n# pragma pylint: disable=unused-wildcard-import\n# pragma pylint: disable=wildcard-import\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\n# pragma pylint: enable=redefined-builtin\n# pragma pylint: enable=unused-wildcard-import\n# pragma pylint: enable=wildcard-import\n\n\nfrom mongoengine import connect\nimport pymongo\n\n_DEFAULT_DATABASE = \"fiftyone\"\n\n\n_db = connect(_DEFAULT_DATABASE)\n\n\ndef get_db_conn():\n \"\"\"Creates a connection to the database\"\"\"\n return pymongo.MongoClient()[_DEFAULT_DATABASE]\n\n\ndef drop_database():\n \"\"\"Drops the database.\"\"\"\n client = pymongo.MongoClient()\n client.drop_database(_DEFAULT_DATABASE)\n", "path": "fiftyone/core/odm/database.py"}, {"content": "\"\"\"\nFiftyOne Services.\n\n| Copyright 2017-2020, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\n# pragma pylint: disable=redefined-builtin\n# pragma pylint: disable=unused-wildcard-import\n# pragma pylint: disable=wildcard-import\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\nfrom builtins import *\n\n# pragma pylint: enable=redefined-builtin\n# pragma pylint: enable=unused-wildcard-import\n# pragma pylint: enable=wildcard-import\n\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\n\nfrom packaging.version import Version\nimport requests\n\nimport eta.core.utils as etau\n\nimport fiftyone.constants as foc\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Service(object):\n \"\"\"Interface for FiftyOne services.\n\n All services must define a ``command`` property.\n\n 
Services are run in an isolated Python subprocess (see ``_service_main.py``)\n to ensure that they are shut down when the main Python process exits. The\n ``command`` and ``working_dir`` properties control the execution of the\n service in the subprocess.\n \"\"\"\n\n working_dir = \".\"\n\n def __init__(self):\n \"\"\"Creates (starts) the Service.\"\"\"\n self._system = os.system\n self._is_server = os.environ.get(\n \"FIFTYONE_SERVER\", False\n ) or os.environ.get(\"FIFTYONE_DISABLE_SERVICES\", False)\n self.child = None\n if not self._is_server:\n self.start()\n\n def __del__(self):\n \"\"\"Deletes (stops) the Service.\"\"\"\n if not self._is_server:\n try:\n self.stop()\n except:\n # something probably failed due to interpreter shutdown, which\n # will be handled by _service_main.py\n pass\n\n @property\n def command(self):\n raise NotImplementedError(\"subclasses must define `command`\")\n\n def start(self):\n \"\"\"Starts the Service.\"\"\"\n service_main_path = os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"..\",\n \"_service_main.py\",\n )\n self.child = subprocess.Popen(\n [sys.executable, service_main_path] + self.command,\n cwd=self.working_dir,\n stdin=subprocess.PIPE,\n )\n\n def stop(self):\n \"\"\"Stops the Service.\"\"\"\n self.child.stdin.close()\n self.child.wait()\n\n def wait(self):\n \"\"\"Waits for the Service to exit and returns its exit code.\"\"\"\n return self.child.wait()\n\n\nclass DatabaseService(Service):\n \"\"\"Service that controls the underlying MongoDB database.\"\"\"\n\n MIN_MONGO_VERSION = \"3.6\"\n\n @property\n def command(self):\n return [\n DatabaseService.find_mongod(),\n \"--dbpath\",\n foc.DB_PATH,\n \"--logpath\",\n foc.DB_LOG_PATH,\n ]\n\n def start(self):\n \"\"\"Starts the DatabaseService.\"\"\"\n for folder in (foc.DB_PATH, os.path.dirname(foc.DB_LOG_PATH)):\n if not os.path.isdir(folder):\n os.makedirs(folder)\n\n super().start()\n\n # Drop non-persistent datasets\n import fiftyone.core.dataset as fod\n\n fod.delete_non_persistent_datasets()\n\n @staticmethod\n def find_mongod():\n search_paths = [\n foc.FIFTYONE_DB_BIN_DIR,\n os.path.join(foc.FIFTYONE_CONFIG_DIR, \"bin\"),\n ] + os.environ[\"PATH\"].split(os.pathsep)\n searched = set()\n attempts = []\n for folder in search_paths:\n if folder in searched:\n continue\n searched.add(folder)\n mongod_path = os.path.join(folder, \"mongod\")\n if os.path.isfile(mongod_path):\n ok, out, err = etau.communicate([mongod_path, \"--version\"])\n out = out.decode(errors=\"ignore\").strip()\n err = err.decode(errors=\"ignore\").strip()\n mongod_version = None\n if ok:\n match = re.search(r\"db version.+?([\\d\\.]+)\", out, re.I)\n if match:\n mongod_version = match.group(1)\n if Version(mongod_version) >= Version(\n DatabaseService.MIN_MONGO_VERSION\n ):\n return mongod_path\n attempts.append((mongod_path, mongod_version, err))\n for path, version, err in attempts:\n if version is not None:\n logger.warn(\"%s: incompatible version %s\" % (path, version))\n else:\n logger.error(\"%s: failed to launch: %s\" % (path, err))\n raise RuntimeError(\n \"Could not find mongod >= %s\" % DatabaseService.MIN_MONGO_VERSION\n )\n\n\nclass ServerService(Service):\n \"\"\"Service that controls the FiftyOne web server.\"\"\"\n\n working_dir = foc.SERVER_DIR\n\n def __init__(self, port):\n self._port = port\n super(ServerService, self).__init__()\n\n def start(self):\n server_version = None\n try:\n server_version = requests.get(\n \"http://127.0.0.1:%i/fiftyone\" % self._port, timeout=2\n 
).json()[\"version\"]\n except Exception:\n # There is likely not a fiftyone server running (remote or local),\n # so start a local server. If there actually is a fiftyone server\n # running that didn't respond to /fiftyone, the local server will\n # fail to start but the dashboard will still connect successfully.\n super().start()\n\n if server_version is not None:\n logger.info(\"Connected to fiftyone on local port %i\" % self._port)\n if server_version != foc.VERSION:\n logger.warn(\n \"Server version (%s) does not match client version (%s)\"\n % (server_version, foc.VERSION)\n )\n\n @property\n def command(self):\n command = [\n \"gunicorn\",\n \"-w\",\n \"1\",\n \"--worker-class\",\n \"eventlet\",\n \"-b\",\n \"127.0.0.1:%d\" % self._port,\n \"main:app\",\n ]\n if foc.DEV_INSTALL:\n command += [\"--reload\"]\n return command\n\n @property\n def port(self):\n \"\"\"Getter for the current port\"\"\"\n return self._port\n\n\nclass AppService(Service):\n \"\"\"Service that controls the FiftyOne app.\"\"\"\n\n working_dir = foc.FIFTYONE_APP_DIR\n\n @property\n def command(self):\n with etau.WorkingDir(foc.FIFTYONE_APP_DIR):\n if os.path.isfile(\"FiftyOne.AppImage\"):\n # linux\n args = [\"./FiftyOne.AppImage\"]\n elif os.path.isdir(\"FiftyOne.app\"):\n args = [\"./FiftyOne.app/Contents/MacOS/FiftyOne\"]\n elif os.path.isfile(\"package.json\"):\n # dev build\n args = [\"yarn\", \"dev\"]\n else:\n raise RuntimeError(\n \"Could not find FiftyOne dashboard in %r\"\n % foc.FIFTYONE_APP_DIR\n )\n return args\n", "path": "fiftyone/core/service.py"}]} | 3,146 | 311 |
gh_patches_debug_23865 | rasdani/github-patches | git_diff | angr__angr-2127 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DefUseGraph.top_predecessors() may trigger an infinite recursion
<!--
*Disclaimer:
The angr suite is maintained by a small team of volunteers.
While we cannot guarantee any timeliness for fixes and enhancements, we will do our best.
For more real-time help with angr, from us and the community, join our [Slack.](http://angr.io/invite/)*
-->
**Describe the bug.**
<!--
Please include a clear and concise description of what the bug is.
-->
Infinite recursion triggered by one of the test cases. Raw log is [here](https://dev.azure.com/angr/64bc1357-22a8-47d9-a89e-588cd888be53/_apis/build/builds/1354/logs/104).
```
======================================================================
ERROR: test_def_use_graph.transplant_class.<locals>.C (test_top_predecessors_should_not_contain_duplicates)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/__w/1/s/build/src/angr/tests/test_def_use_graph.py", line 100, in test_top_predecessors_should_not_contain_duplicates
result = def_use_graph.top_predecessors(D)
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 121, in top_predecessors
return _top_predecessors(definition, self._graph, [])
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 118, in _top_predecessors
result
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 116, in <lambda>
lambda acc, definition: _top_predecessors(definition, graph, acc),
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 118, in _top_predecessors
result
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 116, in <lambda>
lambda acc, definition: _top_predecessors(definition, graph, acc),
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 118, in _top_predecessors
result
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 116, in <lambda>
lambda acc, definition: _top_predecessors(definition, graph, acc),
...snip...
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 116, in <lambda>
lambda acc, definition: _top_predecessors(definition, graph, acc),
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 118, in _top_predecessors
result
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 116, in <lambda>
lambda acc, definition: _top_predecessors(definition, graph, acc),
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py", line 110, in _top_predecessors
predecessors = list(graph.predecessors(definition))
File "/__w/1/s/build/virtualenv/lib/python3.6/site-packages/networkx/classes/digraph.py", line 837, in predecessors
return iter(self._pred[n])
File "/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/definition.py", line 40, in __hash__
return hash((self.atom, self.codeloc, self.data))
RecursionError: maximum recursion depth exceeded while calling a Python object
```
**Environment Information.**
<!--
Many common issues are caused by problems with the local Python environment.
Before submitting, double-check that your versions of all modules in the angr suite (angr, cle, pyvex, ...) are up to date.
Please include the output of `python -m angr.misc.bug_report` here.
-->
Latest master branches of all projects (except for CLE) by 1/5/2020 and `cle:feat/pe_fast_load`.
**To Reproduce.**
<!--
Please include *both a script to reproduce the crash, and attach the binary used, if possible*
-->
Cannot be reproduced reliably. But running the test case multiple times should trigger it (if lucky).
**Additional context.**
<!--
Add any other context about the problem here.
-->
</issue>
<code>
[start of angr/analyses/reaching_definitions/dep_graph.py]
1 from typing import Optional, Dict, List
2 from functools import reduce
3
4 import networkx
5
6 from .definition import Definition
7
8
9 def _is_definition(node):
10 return isinstance(node, Definition)
11
12
13 class DepGraph:
14 """
15 The representation of a dependency graph: a directed graph, where nodes are definitions, and edges represent uses.
16
17 Mostly a wrapper around a <networkx.DiGraph>.
18 """
19
20 def __init__(self, graph: Optional[networkx.DiGraph]=None):
21 """
22 :param graph: A graph where nodes are definitions, and edges represent uses.
23 """
24 # Used for memoization of the `transitive_closure` method.
25 self._transitive_closures: Dict = {}
26
27 if graph and not all(map(_is_definition, graph.nodes)):
28 raise TypeError("In a DepGraph, nodes need to be <%s>s." % Definition.__name__)
29
30 self._graph = graph if graph is not None else networkx.DiGraph()
31
32 @property
33 def graph(self) -> networkx.DiGraph:
34 return self._graph
35
36 def add_node(self, node: Definition) -> None:
37 """
38 :param node: The definition to add to the definition-use graph.
39 """
40 if not _is_definition(node):
41 raise TypeError("In a DepGraph, nodes need to be <%s>s." % Definition.__name__)
42
43 self._graph.add_node(node)
44
45 def add_edge(self, source: Definition, destination: Definition, **labels) -> None:
46 """
47 The edge to add to the definition-use graph. Will create nodes that are not yet present.
48
49 :param source: The "source" definition, used by the "destination".
50 :param destination: The "destination" definition, using the variable defined by "source".
51 :param labels: Optional keyword arguments to represent edge labels.
52 """
53 if not _is_definition(source) and not _is_definition(destination):
54 raise TypeError("In a DepGraph, edges need to be between <%s>s." % Definition.__name__)
55
56 self._graph.add_edge(source, destination, **labels)
57
58 def transitive_closure(self, definition: Definition) -> networkx.DiGraph:
59 """
60 Compute the "transitive closure" of a given definition.
61 Obtained by transitively aggregating the ancestors of this definition in the graph.
62
63 Note: Each definition is memoized to avoid any kind of recomputation across the lifetime of this object.
64
65 :param definition: The Definition to get transitive closure for.
66 :return: A graph of the transitive closure of the given definition.
67 """
68
69 def _transitive_closure(def_: Definition, graph: networkx.DiGraph, result: networkx.DiGraph):
70 if def_ in self._transitive_closures.keys():
71 return self._transitive_closures[def_]
72
73 predecessors = list(graph.predecessors(def_))
74
75 result.add_node(def_)
76 result.add_edges_from(list(map(
77 lambda e: (*e, graph.get_edge_data(*e)),
78 map(
79 lambda p: (p, def_),
80 predecessors
81 )
82 )))
83
84 closure = reduce(
85 lambda acc, definition: _transitive_closure(definition, graph, acc),
86 predecessors,
87 result
88 )
89
90 self._transitive_closures[def_] = closure
91 return closure
92
93 return _transitive_closure(definition, self._graph, networkx.DiGraph())
94
95 def top_predecessors(self, definition: Definition) -> List[Definition]:
96 """
97 Recover the "entrypoint definitions" flowing into a given definition.
98 Obtained by transitively computing the top-level ancestors (nodes without predecessors) of this definition in
99 the graph.
100
101 :param definition: The <Definition> to return the top-level ancestors for.
102 :return: The list of top-level definitions flowing into the <node>.
103 """
104
105 def _top_predecessors(def_: Definition, graph: networkx.DiGraph, result: List[Definition]):
106 predecessors = list(graph.predecessors(def_))
107
108 if len(predecessors) == 0 and def_ not in result:
109 return result + [ def_ ]
110
111 return reduce(
112 lambda acc, definition: _top_predecessors(definition, graph, acc),
113 predecessors,
114 result
115 )
116
117 return _top_predecessors(definition, self._graph, [])
118
[end of angr/analyses/reaching_definitions/dep_graph.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/angr/analyses/reaching_definitions/dep_graph.py b/angr/analyses/reaching_definitions/dep_graph.py
--- a/angr/analyses/reaching_definitions/dep_graph.py
+++ b/angr/analyses/reaching_definitions/dep_graph.py
@@ -1,4 +1,4 @@
-from typing import Optional, Dict, List
+from typing import Optional, Dict
from functools import reduce
import networkx
@@ -91,27 +91,3 @@
return closure
return _transitive_closure(definition, self._graph, networkx.DiGraph())
-
- def top_predecessors(self, definition: Definition) -> List[Definition]:
- """
- Recover the "entrypoint definitions" flowing into a given definition.
- Obtained by transitively computing the top-level ancestors (nodes without predecessors) of this definition in
- the graph.
-
- :param definition: The <Definition> to return the top-level ancestors for.
- :return: The list of top-level definitions flowing into the <node>.
- """
-
- def _top_predecessors(def_: Definition, graph: networkx.DiGraph, result: List[Definition]):
- predecessors = list(graph.predecessors(def_))
-
- if len(predecessors) == 0 and def_ not in result:
- return result + [ def_ ]
-
- return reduce(
- lambda acc, definition: _top_predecessors(definition, graph, acc),
- predecessors,
- result
- )
-
- return _top_predecessors(definition, self._graph, [])
| {"golden_diff": "diff --git a/angr/analyses/reaching_definitions/dep_graph.py b/angr/analyses/reaching_definitions/dep_graph.py\n--- a/angr/analyses/reaching_definitions/dep_graph.py\n+++ b/angr/analyses/reaching_definitions/dep_graph.py\n@@ -1,4 +1,4 @@\n-from typing import Optional, Dict, List\n+from typing import Optional, Dict\n from functools import reduce\n \n import networkx\n@@ -91,27 +91,3 @@\n return closure\n \n return _transitive_closure(definition, self._graph, networkx.DiGraph())\n-\n- def top_predecessors(self, definition: Definition) -> List[Definition]:\n- \"\"\"\n- Recover the \"entrypoint definitions\" flowing into a given definition.\n- Obtained by transitively computing the top-level ancestors (nodes without predecessors) of this definition in\n- the graph.\n-\n- :param definition: The <Definition> to return the top-level ancestors for.\n- :return: The list of top-level definitions flowing into the <node>.\n- \"\"\"\n-\n- def _top_predecessors(def_: Definition, graph: networkx.DiGraph, result: List[Definition]):\n- predecessors = list(graph.predecessors(def_))\n-\n- if len(predecessors) == 0 and def_ not in result:\n- return result + [ def_ ]\n-\n- return reduce(\n- lambda acc, definition: _top_predecessors(definition, graph, acc),\n- predecessors,\n- result\n- )\n-\n- return _top_predecessors(definition, self._graph, [])\n", "issue": "DefUseGraph.top_predecessors() may trigger an infinite recursion\n<!--\r\n*Disclaimer:\r\nThe angr suite is maintained by a small team of volunteers.\r\nWhile we cannot guarantee any timeliness for fixes and enhancements, we will do our best.\r\nFor more real-time help with angr, from us and the community, join our [Slack.](http://angr.io/invite/)*\r\n-->\r\n\r\n**Describe the bug.**\r\n<!--\r\nPlease include a clear and concise description of what the bug is.\r\n-->\r\n\r\nInfinite recursion triggered by one of the test cases. 
Raw log is [here](https://dev.azure.com/angr/64bc1357-22a8-47d9-a89e-588cd888be53/_apis/build/builds/1354/logs/104).\r\n\r\n```\r\n======================================================================\r\nERROR: test_def_use_graph.transplant_class.<locals>.C (test_top_predecessors_should_not_contain_duplicates)\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/__w/1/s/build/src/angr/tests/test_def_use_graph.py\", line 100, in test_top_predecessors_should_not_contain_duplicates\r\n result = def_use_graph.top_predecessors(D)\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 121, in top_predecessors\r\n return _top_predecessors(definition, self._graph, [])\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 118, in _top_predecessors\r\n result\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 116, in <lambda>\r\n lambda acc, definition: _top_predecessors(definition, graph, acc),\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 118, in _top_predecessors\r\n result\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 116, in <lambda>\r\n lambda acc, definition: _top_predecessors(definition, graph, acc),\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 118, in _top_predecessors\r\n result\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 116, in <lambda>\r\n lambda acc, definition: _top_predecessors(definition, graph, acc),\r\n...snip...\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 116, in <lambda>\r\n lambda acc, definition: _top_predecessors(definition, graph, acc),\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 118, in _top_predecessors\r\n result\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 116, in <lambda>\r\n lambda acc, definition: _top_predecessors(definition, graph, acc),\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/def_use_graph.py\", line 110, in _top_predecessors\r\n predecessors = list(graph.predecessors(definition))\r\n File \"/__w/1/s/build/virtualenv/lib/python3.6/site-packages/networkx/classes/digraph.py\", line 837, in predecessors\r\n return iter(self._pred[n])\r\n File \"/__w/1/s/build/src/angr/angr/analyses/reaching_definitions/definition.py\", line 40, in __hash__\r\n return hash((self.atom, self.codeloc, self.data))\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\n\r\n**Environment Information.**\r\n<!--\r\nMany common issues are caused by problems with the local Python environment.\r\nBefore submitting, double-check that your versions of all modules in the angr suite (angr, cle, pyvex, ...) are up to date.\r\nPlease include the output of `python -m angr.misc.bug_report` here.\r\n-->\r\n\r\nLatest master branches of all projects (except for CLE) by 1/5/2020 and `cle:feat/pe_fast_load`.\r\n\r\n**To Reproduce.**\r\n<!--\r\nPlease include *both a script to reproduce the crash, and attach the binary used, if possible*\r\n-->\r\n\r\nCannot be reproduced reliably. 
But running the test case multiple times should trigger it (if lucky).\r\n\r\n**Additional context.**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\n\n", "before_files": [{"content": "from typing import Optional, Dict, List\nfrom functools import reduce\n\nimport networkx\n\nfrom .definition import Definition\n\n\ndef _is_definition(node):\n return isinstance(node, Definition)\n\n\nclass DepGraph:\n \"\"\"\n The representation of a dependency graph: a directed graph, where nodes are definitions, and edges represent uses.\n\n Mostly a wrapper around a <networkx.DiGraph>.\n \"\"\"\n\n def __init__(self, graph: Optional[networkx.DiGraph]=None):\n \"\"\"\n :param graph: A graph where nodes are definitions, and edges represent uses.\n \"\"\"\n # Used for memoization of the `transitive_closure` method.\n self._transitive_closures: Dict = {}\n\n if graph and not all(map(_is_definition, graph.nodes)):\n raise TypeError(\"In a DepGraph, nodes need to be <%s>s.\" % Definition.__name__)\n\n self._graph = graph if graph is not None else networkx.DiGraph()\n\n @property\n def graph(self) -> networkx.DiGraph:\n return self._graph\n\n def add_node(self, node: Definition) -> None:\n \"\"\"\n :param node: The definition to add to the definition-use graph.\n \"\"\"\n if not _is_definition(node):\n raise TypeError(\"In a DepGraph, nodes need to be <%s>s.\" % Definition.__name__)\n\n self._graph.add_node(node)\n\n def add_edge(self, source: Definition, destination: Definition, **labels) -> None:\n \"\"\"\n The edge to add to the definition-use graph. Will create nodes that are not yet present.\n\n :param source: The \"source\" definition, used by the \"destination\".\n :param destination: The \"destination\" definition, using the variable defined by \"source\".\n :param labels: Optional keyword arguments to represent edge labels.\n \"\"\"\n if not _is_definition(source) and not _is_definition(destination):\n raise TypeError(\"In a DepGraph, edges need to be between <%s>s.\" % Definition.__name__)\n\n self._graph.add_edge(source, destination, **labels)\n\n def transitive_closure(self, definition: Definition) -> networkx.DiGraph:\n \"\"\"\n Compute the \"transitive closure\" of a given definition.\n Obtained by transitively aggregating the ancestors of this definition in the graph.\n\n Note: Each definition is memoized to avoid any kind of recomputation across the lifetime of this object.\n\n :param definition: The Definition to get transitive closure for.\n :return: A graph of the transitive closure of the given definition.\n \"\"\"\n\n def _transitive_closure(def_: Definition, graph: networkx.DiGraph, result: networkx.DiGraph):\n if def_ in self._transitive_closures.keys():\n return self._transitive_closures[def_]\n\n predecessors = list(graph.predecessors(def_))\n\n result.add_node(def_)\n result.add_edges_from(list(map(\n lambda e: (*e, graph.get_edge_data(*e)),\n map(\n lambda p: (p, def_),\n predecessors\n )\n )))\n\n closure = reduce(\n lambda acc, definition: _transitive_closure(definition, graph, acc),\n predecessors,\n result\n )\n\n self._transitive_closures[def_] = closure\n return closure\n\n return _transitive_closure(definition, self._graph, networkx.DiGraph())\n\n def top_predecessors(self, definition: Definition) -> List[Definition]:\n \"\"\"\n Recover the \"entrypoint definitions\" flowing into a given definition.\n Obtained by transitively computing the top-level ancestors (nodes without predecessors) of this definition in\n the graph.\n\n :param definition: The 
<Definition> to return the top-level ancestors for.\n :return: The list of top-level definitions flowing into the <node>.\n \"\"\"\n\n def _top_predecessors(def_: Definition, graph: networkx.DiGraph, result: List[Definition]):\n predecessors = list(graph.predecessors(def_))\n\n if len(predecessors) == 0 and def_ not in result:\n return result + [ def_ ]\n\n return reduce(\n lambda acc, definition: _top_predecessors(definition, graph, acc),\n predecessors,\n result\n )\n\n return _top_predecessors(definition, self._graph, [])\n", "path": "angr/analyses/reaching_definitions/dep_graph.py"}]} | 2,858 | 357 |
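The golden patch above resolves the issue by deleting `top_predecessors()` altogether rather than repairing it; the recursion blows up because `_top_predecessors` revisits shared ancestors (and possibly cycles) with no memoization. For readers who still need those "entry-point definitions", the sketch below is not angr code (the function name and docstring are reconstructed from the removed method); it shows how the same result can be computed with an explicit worklist so the Python stack depth stays constant.

```python
import networkx


def top_level_ancestors(graph: networkx.DiGraph, node):
    """Ancestors of ``node`` (including ``node`` itself) that have no predecessors.

    Iterative equivalent of the removed ``top_predecessors``: each definition is
    visited at most once, so shared ancestors and cycles cannot trigger a
    RecursionError, and the result contains no duplicates.
    """
    seen = {node}
    worklist = [node]
    tops = []
    while worklist:
        current = worklist.pop()
        predecessors = list(graph.predecessors(current))
        if not predecessors:
            tops.append(current)
        for predecessor in predecessors:
            if predecessor not in seen:
                seen.add(predecessor)
                worklist.append(predecessor)
    return tops
```

With a `DepGraph` instance this would be called as `top_level_ancestors(dep_graph.graph, some_definition)`.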
gh_patches_debug_22279 | rasdani/github-patches | git_diff | chainer__chainer-243 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add type check to NonparameterizedConvolution2D function
Related to #123
</issue>
<code>
[start of chainer/functions/nonparameterized_convolution_2d.py]
1 from chainer import cuda
2 from chainer import function
3 from chainer.functions import convolution_2d as conv2d_module
4
5
6 class NonparameterizedConvolution2D(function.Function):
7
8 """Two-dimensional nonparameterized convolution class.
9
10 Args:
11 stride (int or (int, int)): Stride of filter applications.
12 ``stride=s`` and ``stride=(s, s)`` are equivalent.
13 pad (int or (int, int)): Spatial padding width for input arrays.
14 ``pad=p`` and ``pad=(p, p)`` are equivalent.
15 use_cudnn (bool): If True, then this function uses CuDNN if available.
16
17 .. seealso:: :class:`Convolution2D`
18
19 """
20 def __init__(self, stride=1, pad=0, use_cudnn=True):
21 self.stride = stride
22 self.pad = pad
23
24 self.use_cudnn = use_cudnn
25
26 def forward(self, x):
27 W = x[1]
28 b = None
29 if len(x) == 3:
30 b = x[2]
31 func = conv2d_module.Convolution2D(
32 W.shape[1], W.shape[0], W.shape[2:],
33 stride=self.stride, pad=self.pad, use_cudnn=self.use_cudnn,
34 initialW=W, initial_bias=b)
35 self.func = func
36 if any(isinstance(i, cuda.GPUArray) for i in x):
37 func.to_gpu()
38 return func.forward(x[:1])
39
40 def backward(self, x, gy):
41 func = self.func
42 func.zero_grads()
43 gx = func.backward(x[:1], gy)
44 if func.gb is None:
45 return (gx[0], func.gW)
46 return (gx[0], func.gW, func.gb)
47
48
49 def convolution_2d(x, W, b=None, stride=1, pad=0, use_cudnn=True):
50 """Two-dimensional convolution function.
51
52 Args:
53 x (~chainer.Variable): Input variable.
54 W (~chainer.Variable): Weight variable.
55 b (~chainer.Variable): Bias variable.
56 stride (int or (int, int)): Stride of filter applications.
57 ``stride=s`` and ``stride=(s, s)`` are equivalent.
58 pad (int or (int, int)): Spatial padding width for input arrays.
59 ``pad=p`` and ``pad=(p, p)`` are equivalent.
60 use_cudnn (bool): If True, then this function uses CuDNN if available.
61
62 Returns:
63 ~chainer.Variable: Output variable.
64
65 .. seealso:: :class:`Convolution2D`
66
67 """
68 return NonparameterizedConvolution2D(
69 stride=stride, pad=pad, use_cudnn=use_cudnn)(x, W, b)
70
[end of chainer/functions/nonparameterized_convolution_2d.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/nonparameterized_convolution_2d.py b/chainer/functions/nonparameterized_convolution_2d.py
--- a/chainer/functions/nonparameterized_convolution_2d.py
+++ b/chainer/functions/nonparameterized_convolution_2d.py
@@ -1,6 +1,9 @@
+import numpy
+
from chainer import cuda
from chainer import function
from chainer.functions import convolution_2d as conv2d_module
+from chainer.utils import type_check
class NonparameterizedConvolution2D(function.Function):
@@ -23,6 +26,30 @@
self.use_cudnn = use_cudnn
+ def check_type_forward(self, in_types):
+ type_check.expect(
+ 2 <= in_types.size(),
+ in_types.size() <= 3,
+ )
+
+ x_type = in_types[0]
+ w_type = in_types[1]
+ type_check.expect(
+ x_type.dtype == numpy.float32,
+ w_type.dtype == numpy.float32,
+ x_type.ndim == 4,
+ w_type.ndim == 4,
+ x_type.shape[1] == w_type.shape[1],
+ )
+
+ if in_types.size().eval() == 3:
+ b_type = in_types[2]
+ type_check.expect(
+ b_type.dtype == numpy.float32,
+ b_type.ndim == 1,
+ b_type.shape[0] == w_type.shape[0],
+ )
+
def forward(self, x):
W = x[1]
b = None
| {"golden_diff": "diff --git a/chainer/functions/nonparameterized_convolution_2d.py b/chainer/functions/nonparameterized_convolution_2d.py\n--- a/chainer/functions/nonparameterized_convolution_2d.py\n+++ b/chainer/functions/nonparameterized_convolution_2d.py\n@@ -1,6 +1,9 @@\n+import numpy\n+\n from chainer import cuda\n from chainer import function\n from chainer.functions import convolution_2d as conv2d_module\n+from chainer.utils import type_check\n \n \n class NonparameterizedConvolution2D(function.Function):\n@@ -23,6 +26,30 @@\n \n self.use_cudnn = use_cudnn\n \n+ def check_type_forward(self, in_types):\n+ type_check.expect(\n+ 2 <= in_types.size(),\n+ in_types.size() <= 3,\n+ )\n+\n+ x_type = in_types[0]\n+ w_type = in_types[1]\n+ type_check.expect(\n+ x_type.dtype == numpy.float32,\n+ w_type.dtype == numpy.float32,\n+ x_type.ndim == 4,\n+ w_type.ndim == 4,\n+ x_type.shape[1] == w_type.shape[1],\n+ )\n+\n+ if in_types.size().eval() == 3:\n+ b_type = in_types[2]\n+ type_check.expect(\n+ b_type.dtype == numpy.float32,\n+ b_type.ndim == 1,\n+ b_type.shape[0] == w_type.shape[0],\n+ )\n+\n def forward(self, x):\n W = x[1]\n b = None\n", "issue": "Add type check to NonparameterizedConvolution2D function\nRelated to #123 \n\n", "before_files": [{"content": "from chainer import cuda\nfrom chainer import function\nfrom chainer.functions import convolution_2d as conv2d_module\n\n\nclass NonparameterizedConvolution2D(function.Function):\n\n \"\"\"Two-dimensional nonparameterized convolution class.\n\n Args:\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n use_cudnn (bool): If True, then this function uses CuDNN if available.\n\n .. seealso:: :class:`Convolution2D`\n\n \"\"\"\n def __init__(self, stride=1, pad=0, use_cudnn=True):\n self.stride = stride\n self.pad = pad\n\n self.use_cudnn = use_cudnn\n\n def forward(self, x):\n W = x[1]\n b = None\n if len(x) == 3:\n b = x[2]\n func = conv2d_module.Convolution2D(\n W.shape[1], W.shape[0], W.shape[2:],\n stride=self.stride, pad=self.pad, use_cudnn=self.use_cudnn,\n initialW=W, initial_bias=b)\n self.func = func\n if any(isinstance(i, cuda.GPUArray) for i in x):\n func.to_gpu()\n return func.forward(x[:1])\n\n def backward(self, x, gy):\n func = self.func\n func.zero_grads()\n gx = func.backward(x[:1], gy)\n if func.gb is None:\n return (gx[0], func.gW)\n return (gx[0], func.gW, func.gb)\n\n\ndef convolution_2d(x, W, b=None, stride=1, pad=0, use_cudnn=True):\n \"\"\"Two-dimensional convolution function.\n\n Args:\n x (~chainer.Variable): Input variable.\n W (~chainer.Variable): Weight variable.\n b (~chainer.Variable): Bias variable.\n stride (int or (int, int)): Stride of filter applications.\n ``stride=s`` and ``stride=(s, s)`` are equivalent.\n pad (int or (int, int)): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p)`` are equivalent.\n use_cudnn (bool): If True, then this function uses CuDNN if available.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n .. seealso:: :class:`Convolution2D`\n\n \"\"\"\n return NonparameterizedConvolution2D(\n stride=stride, pad=pad, use_cudnn=use_cudnn)(x, W, b)\n", "path": "chainer/functions/nonparameterized_convolution_2d.py"}]} | 1,322 | 361 |
gh_patches_debug_23091 | rasdani/github-patches | git_diff | pytorch__ignite-984 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecate ignite.contrib.handlers.custom_events.CustomPeriodicEvent
## 🚀 Feature
Custom events `CustomPeriodicEvent` from contrib seem heavy and unusable.
Idea is to
- [ ] raise a warning about deprecation since v0.4.0 and removing since v0.5.0 and suggest to use filtered events.
- [ ] remove all docs about them
</issue>
<code>
[start of ignite/contrib/handlers/custom_events.py]
1 from ignite.engine import Events, State, EventEnum
2
3
4 class CustomPeriodicEvent:
5 """Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.
6
7 When custom periodic event is created and attached to an engine, the following events are fired:
8 1) K iterations is specified:
9 - `Events.ITERATIONS_<K>_STARTED`
10 - `Events.ITERATIONS_<K>_COMPLETED`
11
12 1) K epochs is specified:
13 - `Events.EPOCHS_<K>_STARTED`
14 - `Events.EPOCHS_<K>_COMPLETED`
15
16
17 Examples:
18
19 .. code-block:: python
20
21 from ignite.engine import Engine, Events
22 from ignite.contrib.handlers import CustomPeriodicEvent
23
24 # Let's define an event every 1000 iterations
25 cpe1 = CustomPeriodicEvent(n_iterations=1000)
26 cpe1.attach(trainer)
27
28 # Let's define an event every 10 epochs
29 cpe2 = CustomPeriodicEvent(n_epochs=10)
30 cpe2.attach(trainer)
31
32 @trainer.on(cpe1.Events.ITERATIONS_1000_COMPLETED)
33 def on_every_1000_iterations(engine):
34 # run a computation after 1000 iterations
35 # ...
36 print(engine.state.iterations_1000)
37
38 @trainer.on(cpe2.Events.EPOCHS_10_STARTED)
39 def on_every_10_epochs(engine):
40 # run a computation every 10 epochs
41 # ...
42 print(engine.state.epochs_10)
43
44
45 Args:
46 n_iterations (int, optional): number iterations of the custom periodic event
47 n_epochs (int, optional): number iterations of the custom periodic event. Argument is optional, but only one,
48 either n_iterations or n_epochs should defined.
49
50 """
51
52 def __init__(self, n_iterations=None, n_epochs=None):
53
54 if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):
55 raise ValueError("Argument n_iterations should be positive integer number")
56
57 if n_epochs is not None and (not isinstance(n_epochs, int) or n_epochs < 1):
58 raise ValueError("Argument n_epochs should be positive integer number")
59
60 if (n_iterations is None and n_epochs is None) or (n_iterations and n_epochs):
61 raise ValueError("Either n_iterations or n_epochs should defined")
62
63 if n_iterations:
64 prefix = "iterations"
65 self.state_attr = "iteration"
66 self.period = n_iterations
67
68 if n_epochs:
69 prefix = "epochs"
70 self.state_attr = "epoch"
71 self.period = n_epochs
72
73 self.custom_state_attr = "{}_{}".format(prefix, self.period)
74 event_name = "{}_{}".format(prefix.upper(), self.period)
75 setattr(
76 self,
77 "Events",
78 EventEnum("Events", " ".join(["{}_STARTED".format(event_name), "{}_COMPLETED".format(event_name)])),
79 )
80
81 # Update State.event_to_attr
82 for e in self.Events:
83 State.event_to_attr[e] = self.custom_state_attr
84
85 # Create aliases
86 self._periodic_event_started = getattr(self.Events, "{}_STARTED".format(event_name))
87 self._periodic_event_completed = getattr(self.Events, "{}_COMPLETED".format(event_name))
88
89 def _on_started(self, engine):
90 setattr(engine.state, self.custom_state_attr, 0)
91
92 def _on_periodic_event_started(self, engine):
93 if getattr(engine.state, self.state_attr) % self.period == 1:
94 setattr(engine.state, self.custom_state_attr, getattr(engine.state, self.custom_state_attr) + 1)
95 engine.fire_event(self._periodic_event_started)
96
97 def _on_periodic_event_completed(self, engine):
98 if getattr(engine.state, self.state_attr) % self.period == 0:
99 engine.fire_event(self._periodic_event_completed)
100
101 def attach(self, engine):
102 engine.register_events(*self.Events)
103
104 engine.add_event_handler(Events.STARTED, self._on_started)
105 engine.add_event_handler(
106 getattr(Events, "{}_STARTED".format(self.state_attr.upper())), self._on_periodic_event_started
107 )
108 engine.add_event_handler(
109 getattr(Events, "{}_COMPLETED".format(self.state_attr.upper())), self._on_periodic_event_completed
110 )
111
[end of ignite/contrib/handlers/custom_events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/contrib/handlers/custom_events.py b/ignite/contrib/handlers/custom_events.py
--- a/ignite/contrib/handlers/custom_events.py
+++ b/ignite/contrib/handlers/custom_events.py
@@ -1,8 +1,11 @@
from ignite.engine import Events, State, EventEnum
+import warnings
class CustomPeriodicEvent:
- """Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.
+ """DEPRECATED. Use filtered events instead.
+ Handler to define a custom periodic events as a number of elapsed iterations/epochs
+ for an engine.
When custom periodic event is created and attached to an engine, the following events are fired:
1) K iterations is specified:
@@ -51,6 +54,11 @@
def __init__(self, n_iterations=None, n_epochs=None):
+ warnings.warn(
+ "CustomPeriodicEvent is deprecated since 0.4.0 and will be removed in 0.5.0. Use filtered events instead.",
+ DeprecationWarning,
+ )
+
if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):
raise ValueError("Argument n_iterations should be positive integer number")
| {"golden_diff": "diff --git a/ignite/contrib/handlers/custom_events.py b/ignite/contrib/handlers/custom_events.py\n--- a/ignite/contrib/handlers/custom_events.py\n+++ b/ignite/contrib/handlers/custom_events.py\n@@ -1,8 +1,11 @@\n from ignite.engine import Events, State, EventEnum\n+import warnings\n \n \n class CustomPeriodicEvent:\n- \"\"\"Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.\n+ \"\"\"DEPRECATED. Use filtered events instead.\n+ Handler to define a custom periodic events as a number of elapsed iterations/epochs\n+ for an engine.\n \n When custom periodic event is created and attached to an engine, the following events are fired:\n 1) K iterations is specified:\n@@ -51,6 +54,11 @@\n \n def __init__(self, n_iterations=None, n_epochs=None):\n \n+ warnings.warn(\n+ \"CustomPeriodicEvent is deprecated since 0.4.0 and will be removed in 0.5.0. Use filtered events instead.\",\n+ DeprecationWarning,\n+ )\n+\n if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):\n raise ValueError(\"Argument n_iterations should be positive integer number\")\n", "issue": "Deprecate ignite.contrib.handlers.custom_events.CustomPeriodicEvent\n## \ud83d\ude80 Feature\r\n\r\nCustom events `CustomPeriodicEvent` from contrib seem heavy and unusable. \r\n\r\nIdea is to \r\n\r\n- [ ] raise a warning about deprecation since v0.4.0 and removing since v0.5.0 and suggest to use filtered events.\r\n- [ ] remove all docs about them \r\n\n", "before_files": [{"content": "from ignite.engine import Events, State, EventEnum\n\n\nclass CustomPeriodicEvent:\n \"\"\"Handler to define a custom periodic events as a number of elapsed iterations/epochs for an engine.\n\n When custom periodic event is created and attached to an engine, the following events are fired:\n 1) K iterations is specified:\n - `Events.ITERATIONS_<K>_STARTED`\n - `Events.ITERATIONS_<K>_COMPLETED`\n\n 1) K epochs is specified:\n - `Events.EPOCHS_<K>_STARTED`\n - `Events.EPOCHS_<K>_COMPLETED`\n\n\n Examples:\n\n .. code-block:: python\n\n from ignite.engine import Engine, Events\n from ignite.contrib.handlers import CustomPeriodicEvent\n\n # Let's define an event every 1000 iterations\n cpe1 = CustomPeriodicEvent(n_iterations=1000)\n cpe1.attach(trainer)\n\n # Let's define an event every 10 epochs\n cpe2 = CustomPeriodicEvent(n_epochs=10)\n cpe2.attach(trainer)\n\n @trainer.on(cpe1.Events.ITERATIONS_1000_COMPLETED)\n def on_every_1000_iterations(engine):\n # run a computation after 1000 iterations\n # ...\n print(engine.state.iterations_1000)\n\n @trainer.on(cpe2.Events.EPOCHS_10_STARTED)\n def on_every_10_epochs(engine):\n # run a computation every 10 epochs\n # ...\n print(engine.state.epochs_10)\n\n\n Args:\n n_iterations (int, optional): number iterations of the custom periodic event\n n_epochs (int, optional): number iterations of the custom periodic event. 
Argument is optional, but only one,\n either n_iterations or n_epochs should defined.\n\n \"\"\"\n\n def __init__(self, n_iterations=None, n_epochs=None):\n\n if n_iterations is not None and (not isinstance(n_iterations, int) or n_iterations < 1):\n raise ValueError(\"Argument n_iterations should be positive integer number\")\n\n if n_epochs is not None and (not isinstance(n_epochs, int) or n_epochs < 1):\n raise ValueError(\"Argument n_epochs should be positive integer number\")\n\n if (n_iterations is None and n_epochs is None) or (n_iterations and n_epochs):\n raise ValueError(\"Either n_iterations or n_epochs should defined\")\n\n if n_iterations:\n prefix = \"iterations\"\n self.state_attr = \"iteration\"\n self.period = n_iterations\n\n if n_epochs:\n prefix = \"epochs\"\n self.state_attr = \"epoch\"\n self.period = n_epochs\n\n self.custom_state_attr = \"{}_{}\".format(prefix, self.period)\n event_name = \"{}_{}\".format(prefix.upper(), self.period)\n setattr(\n self,\n \"Events\",\n EventEnum(\"Events\", \" \".join([\"{}_STARTED\".format(event_name), \"{}_COMPLETED\".format(event_name)])),\n )\n\n # Update State.event_to_attr\n for e in self.Events:\n State.event_to_attr[e] = self.custom_state_attr\n\n # Create aliases\n self._periodic_event_started = getattr(self.Events, \"{}_STARTED\".format(event_name))\n self._periodic_event_completed = getattr(self.Events, \"{}_COMPLETED\".format(event_name))\n\n def _on_started(self, engine):\n setattr(engine.state, self.custom_state_attr, 0)\n\n def _on_periodic_event_started(self, engine):\n if getattr(engine.state, self.state_attr) % self.period == 1:\n setattr(engine.state, self.custom_state_attr, getattr(engine.state, self.custom_state_attr) + 1)\n engine.fire_event(self._periodic_event_started)\n\n def _on_periodic_event_completed(self, engine):\n if getattr(engine.state, self.state_attr) % self.period == 0:\n engine.fire_event(self._periodic_event_completed)\n\n def attach(self, engine):\n engine.register_events(*self.Events)\n\n engine.add_event_handler(Events.STARTED, self._on_started)\n engine.add_event_handler(\n getattr(Events, \"{}_STARTED\".format(self.state_attr.upper())), self._on_periodic_event_started\n )\n engine.add_event_handler(\n getattr(Events, \"{}_COMPLETED\".format(self.state_attr.upper())), self._on_periodic_event_completed\n )\n", "path": "ignite/contrib/handlers/custom_events.py"}]} | 1,813 | 286 |
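The deprecation warning added by the patch points users at filtered events. A minimal sketch of that migration is shown below, assuming ignite 0.4 or later. Note that the `every=` filter fires on iterations/epochs 1000, 2000, ... and 10, 20, ..., which is close to, but not bit-for-bit identical with, the old `*_STARTED`/`*_COMPLETED` block semantics.

```python
from ignite.engine import Engine, Events


def train_step(engine, batch):
    pass  # placeholder training logic


trainer = Engine(train_step)


@trainer.on(Events.ITERATION_COMPLETED(every=1000))
def on_every_1000_iterations(engine):
    # engine.state.iteration // 1000 stands in for the old
    # engine.state.iterations_1000 counter
    print(engine.state.iteration // 1000)


@trainer.on(Events.EPOCH_STARTED(every=10))
def on_every_10_epochs(engine):
    print(engine.state.epoch // 10)
```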
gh_patches_debug_13028 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-778 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update rule E2541 to not error when action names are not strings
*Issue #, if available:*
Fix #752
*Description of changes:*
- Update rule E2541 to only check for duplicate names when the action name is a string
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
</issue>
<code>
[start of src/cfnlint/transform.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import os
18 import logging
19 import six
20 import samtranslator
21 from samtranslator.parser import parser
22 from samtranslator.translator.translator import Translator
23 from samtranslator.public.exceptions import InvalidDocumentException
24
25 import cfnlint.helpers
26 LOGGER = logging.getLogger('cfnlint')
27
28 class Transform(object):
29 """
30 Application Serverless Module tranform Wrappor. Based on code from AWS SAM CLI:
31 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py
32 """
33
34 def __init__(self, filename, template, region):
35 """
36 Initialize Transform class
37 """
38 self._filename = filename
39 self._template = template
40 self._region = region
41
42 self._managed_policy_map = self.load_managed_policies()
43 self._sam_parser = parser.Parser()
44
45 def template(self):
46 """Get the template"""
47 return self._template
48
49 def load_managed_policies(self):
50 """
51 Load the ManagedPolicies locally, based on the AWS-CLI:
52 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json
53 """
54 return cfnlint.helpers.load_resources('data/Serverless/ManagedPolicies.json')
55
56 def _replace_local_codeuri(self):
57 """
58 Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in AWS::Serverless::Api to a fake
59 S3 Uri. This is to support running the SAM Translator with valid values for these fields. If this in not done,
60 the template is invalid in the eyes of SAM Translator (the translator does not support local paths)
61 """
62
63 all_resources = self._template.get('Resources', {})
64
65 for _, resource in all_resources.items():
66
67 resource_type = resource.get('Type')
68 resource_dict = resource.get('Properties')
69
70 if resource_type == 'AWS::Serverless::Function':
71
72 Transform._update_to_s3_uri('CodeUri', resource_dict)
73 if resource_type in ['AWS::Serverless::LayerVersion']:
74 if resource_dict.get('ContentUri'):
75 Transform._update_to_s3_uri('ContentUri', resource_dict)
76 if resource_type == 'AWS::Serverless::Application':
77 if resource_dict.get('Location'):
78 resource_dict['Location'] = ''
79 Transform._update_to_s3_uri('Location', resource_dict)
80 if resource_type == 'AWS::Serverless::Api':
81 if 'DefinitionBody' not in resource_dict:
82 Transform._update_to_s3_uri('DefinitionUri', resource_dict)
83
84 def transform_template(self):
85 """
86 Transform the Template using the Serverless Application Model.
87 """
88 matches = []
89
90 try:
91 # Output the SAM Translator version in debug mode
92 LOGGER.debug('SAM Translator: %s', samtranslator.__version__)
93
94 sam_translator = Translator(managed_policy_map=self._managed_policy_map,
95 sam_parser=self._sam_parser)
96
97 self._replace_local_codeuri()
98
99 # Tell SAM to use the region we're linting in, this has to be controlled using the default AWS mechanisms, see also:
100 # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py
101 LOGGER.debug('Setting AWS_DEFAULT_REGION to %s', self._region)
102 os.environ['AWS_DEFAULT_REGION'] = self._region
103
104 self._template = cfnlint.helpers.convert_dict(
105 sam_translator.translate(sam_template=self._template, parameter_values={}))
106
107 LOGGER.debug('Transformed template: %s', self._template)
108 except InvalidDocumentException as e:
109 message = 'Error transforming template: {0}'
110 for cause in e.causes:
111 matches.append(cfnlint.Match(
112 1, 1,
113 1, 1,
114 self._filename, cfnlint.TransformError(), message.format(cause.message)))
115 except Exception as e: # pylint: disable=W0703
116 LOGGER.debug('Error transforming template: %s', str(e))
117 LOGGER.debug('Stack trace: %s', e, exc_info=True)
118 message = 'Error transforming template: {0}'
119 matches.append(cfnlint.Match(
120 1, 1,
121 1, 1,
122 self._filename, cfnlint.TransformError(), message.format(str(e))))
123
124 return matches
125
126 @staticmethod
127 def is_s3_uri(uri):
128 """
129 Checks the uri and determines if it is a valid S3 Uri
130 Parameters
131 ----------
132 uri str, required
133 Uri to check
134 Returns
135 -------
136 bool
137 Returns True if the uri given is an S3 uri, otherwise False
138 """
139 return isinstance(uri, six.string_types) and uri.startswith('s3://')
140
141 @staticmethod
142 def _update_to_s3_uri(property_key, resource_property_dict, s3_uri_value='s3://bucket/value'):
143 """
144 Updates the 'property_key' in the 'resource_property_dict' to the value of 's3_uri_value'
145 Note: The function will mutate the resource_property_dict that is pass in
146 Parameters
147 ----------
148 property_key str, required
149 Key in the resource_property_dict
150 resource_property_dict dict, required
151 Property dictionary of a Resource in the template to replace
152 s3_uri_value str, optional
153 Value to update the value of the property_key to
154 """
155 uri_property = resource_property_dict.get(property_key, '.')
156
157 # ignore if dict or already an S3 Uri
158 if isinstance(uri_property, dict) or Transform.is_s3_uri(uri_property):
159 return
160
161 resource_property_dict[property_key] = s3_uri_value
162
[end of src/cfnlint/transform.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/transform.py b/src/cfnlint/transform.py
--- a/src/cfnlint/transform.py
+++ b/src/cfnlint/transform.py
@@ -78,8 +78,10 @@
resource_dict['Location'] = ''
Transform._update_to_s3_uri('Location', resource_dict)
if resource_type == 'AWS::Serverless::Api':
- if 'DefinitionBody' not in resource_dict:
+ if 'DefinitionBody' not in resource_dict and 'Auth' not in resource_dict:
Transform._update_to_s3_uri('DefinitionUri', resource_dict)
+ else:
+ resource_dict['DefinitionBody'] = ''
def transform_template(self):
"""
| {"golden_diff": "diff --git a/src/cfnlint/transform.py b/src/cfnlint/transform.py\n--- a/src/cfnlint/transform.py\n+++ b/src/cfnlint/transform.py\n@@ -78,8 +78,10 @@\n resource_dict['Location'] = ''\n Transform._update_to_s3_uri('Location', resource_dict)\n if resource_type == 'AWS::Serverless::Api':\n- if 'DefinitionBody' not in resource_dict:\n+ if 'DefinitionBody' not in resource_dict and 'Auth' not in resource_dict:\n Transform._update_to_s3_uri('DefinitionUri', resource_dict)\n+ else:\n+ resource_dict['DefinitionBody'] = ''\n \n def transform_template(self):\n \"\"\"\n", "issue": "Update rule E2541 to not error when action names are not strings\n*Issue #, if available:*\r\nFix #752\r\n*Description of changes:*\r\n- Update rule E2541 to only check for duplicate names when the action name is a string\r\n\r\nBy submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.\r\n\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport os\nimport logging\nimport six\nimport samtranslator\nfrom samtranslator.parser import parser\nfrom samtranslator.translator.translator import Translator\nfrom samtranslator.public.exceptions import InvalidDocumentException\n\nimport cfnlint.helpers\nLOGGER = logging.getLogger('cfnlint')\n\nclass Transform(object):\n \"\"\"\n Application Serverless Module tranform Wrappor. Based on code from AWS SAM CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py\n \"\"\"\n\n def __init__(self, filename, template, region):\n \"\"\"\n Initialize Transform class\n \"\"\"\n self._filename = filename\n self._template = template\n self._region = region\n\n self._managed_policy_map = self.load_managed_policies()\n self._sam_parser = parser.Parser()\n\n def template(self):\n \"\"\"Get the template\"\"\"\n return self._template\n\n def load_managed_policies(self):\n \"\"\"\n Load the ManagedPolicies locally, based on the AWS-CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json\n \"\"\"\n return cfnlint.helpers.load_resources('data/Serverless/ManagedPolicies.json')\n\n def _replace_local_codeuri(self):\n \"\"\"\n Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in AWS::Serverless::Api to a fake\n S3 Uri. This is to support running the SAM Translator with valid values for these fields. 
If this in not done,\n the template is invalid in the eyes of SAM Translator (the translator does not support local paths)\n \"\"\"\n\n all_resources = self._template.get('Resources', {})\n\n for _, resource in all_resources.items():\n\n resource_type = resource.get('Type')\n resource_dict = resource.get('Properties')\n\n if resource_type == 'AWS::Serverless::Function':\n\n Transform._update_to_s3_uri('CodeUri', resource_dict)\n if resource_type in ['AWS::Serverless::LayerVersion']:\n if resource_dict.get('ContentUri'):\n Transform._update_to_s3_uri('ContentUri', resource_dict)\n if resource_type == 'AWS::Serverless::Application':\n if resource_dict.get('Location'):\n resource_dict['Location'] = ''\n Transform._update_to_s3_uri('Location', resource_dict)\n if resource_type == 'AWS::Serverless::Api':\n if 'DefinitionBody' not in resource_dict:\n Transform._update_to_s3_uri('DefinitionUri', resource_dict)\n\n def transform_template(self):\n \"\"\"\n Transform the Template using the Serverless Application Model.\n \"\"\"\n matches = []\n\n try:\n # Output the SAM Translator version in debug mode\n LOGGER.debug('SAM Translator: %s', samtranslator.__version__)\n\n sam_translator = Translator(managed_policy_map=self._managed_policy_map,\n sam_parser=self._sam_parser)\n\n self._replace_local_codeuri()\n\n # Tell SAM to use the region we're linting in, this has to be controlled using the default AWS mechanisms, see also:\n # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py\n LOGGER.debug('Setting AWS_DEFAULT_REGION to %s', self._region)\n os.environ['AWS_DEFAULT_REGION'] = self._region\n\n self._template = cfnlint.helpers.convert_dict(\n sam_translator.translate(sam_template=self._template, parameter_values={}))\n\n LOGGER.debug('Transformed template: %s', self._template)\n except InvalidDocumentException as e:\n message = 'Error transforming template: {0}'\n for cause in e.causes:\n matches.append(cfnlint.Match(\n 1, 1,\n 1, 1,\n self._filename, cfnlint.TransformError(), message.format(cause.message)))\n except Exception as e: # pylint: disable=W0703\n LOGGER.debug('Error transforming template: %s', str(e))\n LOGGER.debug('Stack trace: %s', e, exc_info=True)\n message = 'Error transforming template: {0}'\n matches.append(cfnlint.Match(\n 1, 1,\n 1, 1,\n self._filename, cfnlint.TransformError(), message.format(str(e))))\n\n return matches\n\n @staticmethod\n def is_s3_uri(uri):\n \"\"\"\n Checks the uri and determines if it is a valid S3 Uri\n Parameters\n ----------\n uri str, required\n Uri to check\n Returns\n -------\n bool\n Returns True if the uri given is an S3 uri, otherwise False\n \"\"\"\n return isinstance(uri, six.string_types) and uri.startswith('s3://')\n\n @staticmethod\n def _update_to_s3_uri(property_key, resource_property_dict, s3_uri_value='s3://bucket/value'):\n \"\"\"\n Updates the 'property_key' in the 'resource_property_dict' to the value of 's3_uri_value'\n Note: The function will mutate the resource_property_dict that is pass in\n Parameters\n ----------\n property_key str, required\n Key in the resource_property_dict\n resource_property_dict dict, required\n Property dictionary of a Resource in the template to replace\n s3_uri_value str, optional\n Value to update the value of the property_key to\n \"\"\"\n uri_property = resource_property_dict.get(property_key, '.')\n\n # ignore if dict or already an S3 Uri\n if isinstance(uri_property, dict) or Transform.is_s3_uri(uri_property):\n return\n\n 
resource_property_dict[property_key] = s3_uri_value\n", "path": "src/cfnlint/transform.py"}]} | 2,439 | 158 |
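To see what the patched branch does in isolation, the snippet below builds a minimal in-memory template with an `AWS::Serverless::Api` that declares `Auth` but no `DefinitionBody`. It is an illustration, not a test from the cfn-lint repository; it relies only on the `Transform(filename, template, region)` constructor shown above, and the template contents are made up.

```python
from cfnlint.transform import Transform

template = {
    "Resources": {
        "MyApi": {
            "Type": "AWS::Serverless::Api",
            "Properties": {
                "StageName": "prod",
                "Auth": {"DefaultAuthorizer": "MyCognitoAuth"},
            },
        }
    }
}

transform = Transform("template.yaml", template, "us-east-1")
transform._replace_local_codeuri()

# Pre-patch: a fake "s3://bucket/value" DefinitionUri was injected here.
# Post-patch: the Auth-carrying Api gets an empty DefinitionBody instead.
props = template["Resources"]["MyApi"]["Properties"]
print("DefinitionUri" in props, repr(props.get("DefinitionBody")))
```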
gh_patches_debug_720 | rasdani/github-patches | git_diff | liqd__a4-opin-766 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cannot delete user in django admin if user has not uploaded avatar
</issue>
<code>
[start of euth/users/signals.py]
1 from django.db.models import signals
2 from django.dispatch import receiver
3
4 from adhocracy4.images import services
5
6 from . import models
7
8
9 @receiver(signals.post_init, sender=models.User)
10 def backup_image_path(sender, instance, **kwargs):
11 instance._current_image_file = instance.avatar
12
13
14 @receiver(signals.post_save, sender=models.User)
15 def delete_old_image(sender, instance, **kwargs):
16 if hasattr(instance, '_current_image_file'):
17 if instance._current_image_file != instance.avatar:
18 services.delete_images([instance._current_image_file])
19
20
21 @receiver(signals.post_delete, sender=models.User)
22 def delete_images_for_User(sender, instance, **kwargs):
23 services.delete_images([instance.avatar])
24
[end of euth/users/signals.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/users/signals.py b/euth/users/signals.py
--- a/euth/users/signals.py
+++ b/euth/users/signals.py
@@ -20,4 +20,4 @@
@receiver(signals.post_delete, sender=models.User)
def delete_images_for_User(sender, instance, **kwargs):
- services.delete_images([instance.avatar])
+ services.delete_images([instance._avatar])
| {"golden_diff": "diff --git a/euth/users/signals.py b/euth/users/signals.py\n--- a/euth/users/signals.py\n+++ b/euth/users/signals.py\n@@ -20,4 +20,4 @@\n \n @receiver(signals.post_delete, sender=models.User)\n def delete_images_for_User(sender, instance, **kwargs):\n- services.delete_images([instance.avatar])\n+ services.delete_images([instance._avatar])\n", "issue": "cannot delete user in django admin if user has not uploaded avatar \n\n", "before_files": [{"content": "from django.db.models import signals\nfrom django.dispatch import receiver\n\nfrom adhocracy4.images import services\n\nfrom . import models\n\n\n@receiver(signals.post_init, sender=models.User)\ndef backup_image_path(sender, instance, **kwargs):\n instance._current_image_file = instance.avatar\n\n\n@receiver(signals.post_save, sender=models.User)\ndef delete_old_image(sender, instance, **kwargs):\n if hasattr(instance, '_current_image_file'):\n if instance._current_image_file != instance.avatar:\n services.delete_images([instance._current_image_file])\n\n\n@receiver(signals.post_delete, sender=models.User)\ndef delete_images_for_User(sender, instance, **kwargs):\n services.delete_images([instance.avatar])\n", "path": "euth/users/signals.py"}]} | 739 | 91 |
gh_patches_debug_951 | rasdani/github-patches | git_diff | pytorch__ignite-844 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typehint of ignite._utils._to_hours_mins_secs not satisfied with float
## 🐛 Bug description
That is such a tiny bug. The `typehint` of the following function of `ignite._utils` is not satisfied with a `float` argument
``` python
def _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:
"""Convert seconds to hours, mins, and seconds."""
mins, secs = divmod(time_taken, 60)
hours, mins = divmod(mins, 60)
return hours, mins, secs
```
We have
```python
>>> divmod(10.0,2)
(5.0, 0.0)
```
## Environment
- PyTorch Version (e.g., 1.4): 1.4
- Ignite Version (e.g., 0.3.0): 0.3.0
- OS (e.g., Linux): Linux
- How you installed Ignite (`conda`, `pip`, source): conda
- Python version: 3.7
- Any other relevant information:
</issue>
<code>
[start of ignite/_utils.py]
1 from typing import Union, Tuple
2
3 # For compatibilty
4 from ignite.utils import convert_tensor, apply_to_tensor, apply_to_type, to_onehot
5
6
7 def _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:
8 """Convert seconds to hours, mins, and seconds."""
9 mins, secs = divmod(time_taken, 60)
10 hours, mins = divmod(mins, 60)
11 return hours, mins, secs
12
[end of ignite/_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/_utils.py b/ignite/_utils.py
--- a/ignite/_utils.py
+++ b/ignite/_utils.py
@@ -8,4 +8,4 @@
"""Convert seconds to hours, mins, and seconds."""
mins, secs = divmod(time_taken, 60)
hours, mins = divmod(mins, 60)
- return hours, mins, secs
+ return round(hours), round(mins), round(secs)
| {"golden_diff": "diff --git a/ignite/_utils.py b/ignite/_utils.py\n--- a/ignite/_utils.py\n+++ b/ignite/_utils.py\n@@ -8,4 +8,4 @@\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\n mins, secs = divmod(time_taken, 60)\n hours, mins = divmod(mins, 60)\n- return hours, mins, secs\n+ return round(hours), round(mins), round(secs)\n", "issue": "Typehint of ignite._utils._to_hours_mins_secs not satisfied with float\n## \ud83d\udc1b Bug description\r\n\r\nThat is a so tiny bug. The `typehint` of the following function of `ignite._utils` is not satisfied with a `float` argument\r\n``` python\r\ndef _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:\r\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\r\n mins, secs = divmod(time_taken, 60)\r\n hours, mins = divmod(mins, 60)\r\n return hours, mins, secs\r\n```\r\nWe have\r\n```python\r\n>>> divmod(10.0,2)\r\n(5.0, 0.0)\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.4\r\n - Ignite Version (e.g., 0.3.0): 0.3.0\r\n - OS (e.g., Linux): Linux\r\n - How you installed Ignite (`conda`, `pip`, source): conda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\n", "before_files": [{"content": "from typing import Union, Tuple\n\n# For compatibilty\nfrom ignite.utils import convert_tensor, apply_to_tensor, apply_to_type, to_onehot\n\n\ndef _to_hours_mins_secs(time_taken: Union[float, int]) -> Tuple[int, int, int]:\n \"\"\"Convert seconds to hours, mins, and seconds.\"\"\"\n mins, secs = divmod(time_taken, 60)\n hours, mins = divmod(mins, 60)\n return hours, mins, secs\n", "path": "ignite/_utils.py"}]} | 905 | 105 |
gh_patches_debug_16924 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-732 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Backtrace when run without gtk installed
If you try to bring up the graphical interface without gtk installed, you get a backtrace:
```
Traceback (most recent call last):
File "/usr/bin/solaar", line 57, in <module>
solaar.gtk.main()
File "/usr/lib/python3.7/site-packages/solaar/gtk.py", line 90, in main
gi.require_version('Gtk', '3.0')
File "/usr/lib64/python3.7/site-packages/gi/__init__.py", line 127, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gtk not available
```
It does appear that there is code to handle missing modules, but gtk doesn't get imported directly so it wouldn't trigger. Maybe something like this?
```
diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py
index f728420..20683af 100644
--- a/lib/solaar/gtk.py
+++ b/lib/solaar/gtk.py
@@ -87,8 +87,11 @@ def main():
if not args: return
if args.action: return _cli.run(args.action, args.hidraw_path)
- gi = _require('gi', 'python-gi')
- gi.require_version('Gtk', '3.0')
+ try:
+ gi.require_version('Gtk', '3.0')
+ except ValueError:
+ import sys
+ sys.exit("%s: Gtk (version 3) must be installed in order to run the graphical interface." % (NAME))
_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
try:
```
Can send a PR if desired, but I don't know if that's the right approach.
</issue>
<code>
[start of lib/solaar/gtk.py]
1 #!/usr/bin/env python3
2 # -*- python-mode -*-
3 # -*- coding: UTF-8 -*-
4
5 ## Copyright (C) 2012-2013 Daniel Pavel
6 ##
7 ## This program is free software; you can redistribute it and/or modify
8 ## it under the terms of the GNU General Public License as published by
9 ## the Free Software Foundation; either version 2 of the License, or
10 ## (at your option) any later version.
11 ##
12 ## This program is distributed in the hope that it will be useful,
13 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
14 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 ## GNU General Public License for more details.
16 ##
17 ## You should have received a copy of the GNU General Public License along
18 ## with this program; if not, write to the Free Software Foundation, Inc.,
19 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
20
21 from __future__ import absolute_import, division, print_function, unicode_literals
22
23 import importlib
24
25
26 from solaar import __version__, NAME
27 import solaar.i18n as _i18n
28 import solaar.cli as _cli
29
30 #
31 #
32 #
33
34 def _require(module, os_package):
35 try:
36 return importlib.import_module(module)
37 except ImportError:
38 import sys
39 sys.exit("%s: missing required system package %s" % (NAME, os_package))
40
41
42 def _parse_arguments():
43 import argparse
44 arg_parser = argparse.ArgumentParser(prog=NAME.lower())
45 arg_parser.add_argument('-d', '--debug', action='count', default=0,
46 help='print logging messages, for debugging purposes (may be repeated for extra verbosity)')
47 arg_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',
48 help='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')
49 arg_parser.add_argument('--restart-on-wake-up', action='store_true',
50 help='restart Solaar on sleep wake-up (experimental)')
51 arg_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')
52 arg_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)
53 arg_parser.add_argument('--help-actions', action='store_true',
54 help='print help for the optional actions')
55 arg_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,
56 help='optional actions to perform')
57
58 args = arg_parser.parse_args()
59
60 if args.help_actions:
61 _cli.print_help()
62 return
63
64 if args.window is None:
65 args.window = 'show' # default behaviour is to show main window
66
67 import logging
68 if args.debug > 0:
69 log_level = logging.WARNING - 10 * args.debug
70 log_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'
71 logging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')
72 else:
73 logging.root.addHandler(logging.NullHandler())
74 logging.root.setLevel(logging.ERROR)
75
76 if not args.action:
77 if logging.root.isEnabledFor(logging.INFO):
78 logging.info("language %s (%s), translations path %s", _i18n.language, _i18n.encoding, _i18n.path)
79
80 return args
81
82
83 def main():
84 _require('pyudev', 'python3-pyudev')
85
86 # handle ^C in console
87 import signal
88 signal.signal(signal.SIGINT, signal.SIG_DFL)
89
90 args = _parse_arguments()
91 if not args: return
92 if args.action:
93 # if any argument, run comandline and exit
94 return _cli.run(args.action, args.hidraw_path)
95
96 gi = _require('gi', 'python3-gi or python3-gobject')
97 gi.require_version('Gtk', '3.0')
98 _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
99
100 try:
101 import solaar.ui as ui
102 import solaar.listener as listener
103 listener.setup_scanner(ui.status_changed, ui.error_dialog)
104
105 import solaar.upower as _upower
106 if args.restart_on_wake_up:
107 _upower.watch(listener.start_all, listener.stop_all)
108 else:
109 _upower.watch(lambda: listener.ping_all(True))
110
111 # main UI event loop
112 ui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')
113 except Exception as e:
114 import sys
115 sys.exit('%s: error: %s' % (NAME.lower(), e))
116
117
118 if __name__ == '__main__':
119 main()
120
[end of lib/solaar/gtk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py
--- a/lib/solaar/gtk.py
+++ b/lib/solaar/gtk.py
@@ -31,10 +31,12 @@
#
#
-def _require(module, os_package):
+def _require(module, os_package, gi=None, gi_package=None, gi_version=None):
try:
+ if gi is not None:
+ gi.require_version(gi_package,gi_version)
return importlib.import_module(module)
- except ImportError:
+ except (ImportError, ValueError):
import sys
sys.exit("%s: missing required system package %s" % (NAME, os_package))
@@ -94,8 +96,7 @@
return _cli.run(args.action, args.hidraw_path)
gi = _require('gi', 'python3-gi or python3-gobject')
- gi.require_version('Gtk', '3.0')
- _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
+ _require('gi.repository.Gtk', 'gir1.2-gtk-3.0', gi, 'Gtk', '3.0')
try:
import solaar.ui as ui
| {"golden_diff": "diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py\n--- a/lib/solaar/gtk.py\n+++ b/lib/solaar/gtk.py\n@@ -31,10 +31,12 @@\n #\n #\n \n-def _require(module, os_package):\n+def _require(module, os_package, gi=None, gi_package=None, gi_version=None):\n \ttry:\n+\t\tif gi is not None:\n+\t\t\tgi.require_version(gi_package,gi_version)\n \t\treturn importlib.import_module(module)\n-\texcept ImportError:\n+\texcept (ImportError, ValueError):\n \t\timport sys\n \t\tsys.exit(\"%s: missing required system package %s\" % (NAME, os_package))\n \n@@ -94,8 +96,7 @@\n \t\treturn _cli.run(args.action, args.hidraw_path)\n \n \tgi = _require('gi', 'python3-gi or python3-gobject')\n-\tgi.require_version('Gtk', '3.0')\n-\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n+\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0', gi, 'Gtk', '3.0')\n \n \ttry:\n \t\timport solaar.ui as ui\n", "issue": "Backtrace when run without gtk installed\nIf you try to bring up the graphical interface without gtk installed, you get a backtrace:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/bin/solaar\", line 57, in <module>\r\n solaar.gtk.main()\r\n File \"/usr/lib/python3.7/site-packages/solaar/gtk.py\", line 90, in main\r\n gi.require_version('Gtk', '3.0')\r\n File \"/usr/lib64/python3.7/site-packages/gi/__init__.py\", line 127, in require_version\r\n raise ValueError('Namespace %s not available' % namespace)\r\nValueError: Namespace Gtk not available\r\n```\r\nIt does appear that there is code to handle missing modules, but gtk doesn't get imported directly so it wouldn't trigger. Maybe something like this?\r\n```\r\ndiff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py\r\nindex f728420..20683af 100644\r\n--- a/lib/solaar/gtk.py\r\n+++ b/lib/solaar/gtk.py\r\n@@ -87,8 +87,11 @@ def main():\r\n if not args: return\r\n if args.action: return _cli.run(args.action, args.hidraw_path)\r\n\r\n- gi = _require('gi', 'python-gi')\r\n- gi.require_version('Gtk', '3.0')\r\n+ try:\r\n+ gi.require_version('Gtk', '3.0')\r\n+ except ValueError:\r\n+ import sys\r\n+ sys.exit(\"%s: Gtk (version 3) must be installed in order to run the graphical interface.\" % (NAME))\r\n _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\r\n\r\n try:\r\n```\r\nCan send a PR if desired, but I don't know if that's the right approach.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\n\n\nfrom solaar import __version__, NAME\nimport solaar.i18n as _i18n\nimport solaar.cli as _cli\n\n#\n#\n#\n\ndef _require(module, os_package):\n\ttry:\n\t\treturn importlib.import_module(module)\n\texcept ImportError:\n\t\timport sys\n\t\tsys.exit(\"%s: missing required system package %s\" % (NAME, os_package))\n\n\ndef _parse_arguments():\n\timport argparse\n\targ_parser = argparse.ArgumentParser(prog=NAME.lower())\n\targ_parser.add_argument('-d', '--debug', action='count', default=0,\n\t\t\t\t\t\t\thelp='print logging messages, for debugging purposes (may be repeated for extra verbosity)')\n\targ_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',\n\t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n\targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n\t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n\targ_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')\n\targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n\targ_parser.add_argument('--help-actions', action='store_true',\n\t\t\t\t\t\t\thelp='print help for the optional actions')\n\targ_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,\n\t\t\t\t\t\t\thelp='optional actions to perform')\n\n\targs = arg_parser.parse_args()\n\n\tif args.help_actions:\n\t\t_cli.print_help()\n\t\treturn\n\n\tif args.window is None:\n\t\targs.window = 'show' # default behaviour is to show main window\n\n\timport logging\n\tif args.debug > 0:\n\t\tlog_level = logging.WARNING - 10 * args.debug\n\t\tlog_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'\n\t\tlogging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')\n\telse:\n\t\tlogging.root.addHandler(logging.NullHandler())\n\t\tlogging.root.setLevel(logging.ERROR)\n\n\tif not args.action:\n\t\tif logging.root.isEnabledFor(logging.INFO):\n\t\t\tlogging.info(\"language %s (%s), translations path %s\", _i18n.language, _i18n.encoding, _i18n.path)\n\n\treturn args\n\n\ndef main():\n\t_require('pyudev', 'python3-pyudev')\n\n\t# handle ^C in console\n\timport signal\n\tsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\n\targs = _parse_arguments()\n\tif not args: return\n\tif args.action:\n\t\t# if any argument, run comandline and exit\n\t\treturn _cli.run(args.action, args.hidraw_path)\n\n\tgi = _require('gi', 'python3-gi or python3-gobject')\n\tgi.require_version('Gtk', '3.0')\n\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n\n\ttry:\n\t\timport solaar.ui as ui\n\t\timport solaar.listener as listener\n\t\tlistener.setup_scanner(ui.status_changed, ui.error_dialog)\n\n\t\timport solaar.upower as _upower\n\t\tif args.restart_on_wake_up:\n\t\t\t_upower.watch(listener.start_all, listener.stop_all)\n\t\telse:\n\t\t\t_upower.watch(lambda: listener.ping_all(True))\n\n\t\t# main UI event loop\n\t\tui.run_loop(listener.start_all, listener.stop_all, args.window!='only', 
args.window!='hide')\n\texcept Exception as e:\n\t\timport sys\n\t\tsys.exit('%s: error: %s' % (NAME.lower(), e))\n\n\nif __name__ == '__main__':\n\tmain()\n", "path": "lib/solaar/gtk.py"}]} | 2,289 | 280 |
gh_patches_debug_32601 | rasdani/github-patches | git_diff | ansible-collections__amazon.aws-2029 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The cloudwatchlogs_log_group_info module generates throttling exceptions
### Summary
When running the amazon.aws.cloudwatchlogs_log_group_info module on AWS accounts having more than (roughly) 50 log groups we get ThrottlingExceptions once every 20 calls or so. I noticed that both the describe and the list-tags boto calls in the cloudwatchlogs_log_group_info module have no throttling handling configured and use the default throttling handling policy, ie. none.
### Issue Type
Bug Report
### Component Name
amazon.aws.cloudwatchlogs_log_group_info
### Ansible Version
```console (paste below)
ansible [core 2.13.13]
config file = None
configured module search path = ['/home/rundeck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/rundeck/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0]
jinja version = 3.1.3
libyaml = True
```
### Collection Versions
```console (paste below)
Collection Version
--------------------- -------
amazon.aws 7.3.0
community.general 8.3.0
nutanix.ncp 1.7.0
servicenow.servicenow 1.0.6
# /usr/local/lib/python3.8/dist-packages/ansible_collections
Collection Version
----------------------------- -------
amazon.aws 3.5.0
ansible.netcommon 3.1.3
ansible.posix 1.4.0
ansible.utils 2.8.0
ansible.windows 1.12.0
arista.eos 5.0.1
awx.awx 21.10.0
azure.azcollection 1.14.0
check_point.mgmt 2.3.0
chocolatey.chocolatey 1.3.1
cisco.aci 2.3.0
cisco.asa 3.1.0
cisco.dnac 6.6.1
cisco.intersight 1.0.22
cisco.ios 3.3.2
cisco.iosxr 3.3.1
cisco.ise 2.5.9
cisco.meraki 2.13.0
cisco.mso 2.1.0
cisco.nso 1.0.3
cisco.nxos 3.2.0
cisco.ucs 1.8.0
cloud.common 2.1.2
cloudscale_ch.cloud 2.2.3
community.aws 3.6.0
[...]
```
### AWS SDK versions
```console (paste below)
WARNING: Package(s) not found: boto
Name: boto3
Version: 1.34.45
Summary: The AWS SDK for Python
Home-page: https://github.com/boto/boto3
Author: Amazon Web Services
Author-email: None
License: Apache License 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: botocore, jmespath, s3transfer
Required-by:
---
Name: botocore
Version: 1.34.45
Summary: Low-level, data-driven core of boto 3.
Home-page: https://github.com/boto/botocore
Author: Amazon Web Services
Author-email: None
License: Apache License 2.0
Location: /usr/local/lib/python3.8/dist-packages
Requires: python-dateutil, jmespath, urllib3
Required-by: s3transfer, boto3, awscli
```
### Configuration
```console (paste below)
(no Ansible configuration)
```
### OS / Environment
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
### Steps to Reproduce
```yaml
- name: "Role based get all log groups in {{ selected_region }}"
amazon.aws.cloudwatchlogs_log_group_info:
region: "{{ selected_region }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
aws_session_token: "{{ aws_session_token }}"
log_group_name: "{{ log_group_prefix }}"
register: log_groups
```
### Expected Results
Should get all log group information and tags without error. Optionally, adding an option to skip getting the tags for each log group would allow engineers to work around this issue.
### Actual Results
```console (paste below)
fatal: [127.0.0.1]: FAILED! => {"boto3_version": "1.34.45", "botocore_version": "1.34.45", "changed": false, "error": {"code": "ThrottlingException", "message": "Rate exceeded"}, "msg": "Unable to describe tags for log group /aws/codebuild/tankmaintenanceplanning-pipeline-tsa: An error occurred (ThrottlingException) when calling the ListTagsLogGroup operation (reached max retries: 4): Rate exceeded"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/cloudwatchlogs_log_group_info.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: Ansible Project
5 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
6
7 DOCUMENTATION = r"""
8 ---
9 module: cloudwatchlogs_log_group_info
10 version_added: 5.0.0
11 short_description: Get information about log_group in CloudWatchLogs
12 description:
13 - Lists the specified log groups. You can list all your log groups or filter the results by prefix.
14 - This module was originally added to C(community.aws) in release 1.0.0.
15 author:
16 - Willian Ricardo (@willricardo) <[email protected]>
17 options:
18 log_group_name:
19 description:
20 - The name or prefix of the log group to filter by.
21 type: str
22 extends_documentation_fragment:
23 - amazon.aws.common.modules
24 - amazon.aws.region.modules
25 - amazon.aws.boto3
26 """
27
28 EXAMPLES = r"""
29 # Note: These examples do not set authentication details, see the AWS Guide for details.
30 - amazon.aws.cloudwatchlogs_log_group_info:
31 log_group_name: test-log-group
32 """
33
34 RETURN = r"""
35 log_groups:
36 description: Return the list of complex objects representing log groups
37 returned: success
38 type: complex
39 contains:
40 log_group_name:
41 description: The name of the log group.
42 returned: always
43 type: str
44 creation_time:
45 description: The creation time of the log group.
46 returned: always
47 type: int
48 retention_in_days:
49 description: The number of days to retain the log events in the specified log group.
50 returned: always
51 type: int
52 metric_filter_count:
53 description: The number of metric filters.
54 returned: always
55 type: int
56 arn:
57 description: The Amazon Resource Name (ARN) of the log group.
58 returned: always
59 type: str
60 stored_bytes:
61 description: The number of bytes stored.
62 returned: always
63 type: str
64 kms_key_id:
65 description: The Amazon Resource Name (ARN) of the CMK to use when encrypting log data.
66 returned: always
67 type: str
68 tags:
69 description: A dictionary representing the tags on the log group.
70 returned: always
71 type: dict
72 version_added: 4.0.0
73 version_added_collection: community.aws
74 """
75
76 try:
77 import botocore
78 except ImportError:
79 pass # Handled by AnsibleAWSModule
80
81 from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
82
83 from ansible_collections.amazon.aws.plugins.module_utils.modules import AnsibleAWSModule
84 from ansible_collections.amazon.aws.plugins.module_utils.botocore import is_boto3_error_code
85
86
87 def describe_log_group(client, log_group_name, module):
88 params = {}
89 if log_group_name:
90 params["logGroupNamePrefix"] = log_group_name
91 try:
92 paginator = client.get_paginator("describe_log_groups")
93 desc_log_group = paginator.paginate(**params).build_full_result()
94 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
95 module.fail_json_aws(e, msg=f"Unable to describe log group {log_group_name}")
96
97 for log_group in desc_log_group["logGroups"]:
98 log_group_name = log_group["logGroupName"]
99 try:
100 tags = client.list_tags_log_group(logGroupName=log_group_name)
101 except is_boto3_error_code("AccessDeniedException"):
102 tags = {}
103 module.warn(f"Permission denied listing tags for log group {log_group_name}")
104 except (
105 botocore.exceptions.ClientError,
106 botocore.exceptions.BotoCoreError,
107 ) as e: # pylint: disable=duplicate-except
108 module.fail_json_aws(e, msg=f"Unable to describe tags for log group {log_group_name}")
109 log_group["tags"] = tags.get("tags", {})
110
111 return desc_log_group
112
113
114 def main():
115 argument_spec = dict(
116 log_group_name=dict(),
117 )
118
119 module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
120
121 try:
122 logs = module.client("logs")
123 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
124 module.fail_json_aws(e, msg="Failed to connect to AWS")
125
126 desc_log_group = describe_log_group(client=logs, log_group_name=module.params["log_group_name"], module=module)
127 final_log_group_snake = []
128
129 for log_group in desc_log_group["logGroups"]:
130 final_log_group_snake.append(camel_dict_to_snake_dict(log_group, ignore_list=["tags"]))
131
132 desc_log_group_result = dict(changed=False, log_groups=final_log_group_snake)
133 module.exit_json(**desc_log_group_result)
134
135
136 if __name__ == "__main__":
137 main()
138
[end of plugins/modules/cloudwatchlogs_log_group_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/cloudwatchlogs_log_group_info.py b/plugins/modules/cloudwatchlogs_log_group_info.py
--- a/plugins/modules/cloudwatchlogs_log_group_info.py
+++ b/plugins/modules/cloudwatchlogs_log_group_info.py
@@ -82,6 +82,18 @@
from ansible_collections.amazon.aws.plugins.module_utils.modules import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.botocore import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.retries import AWSRetry
+
+
[email protected]_backoff()
+def list_tags_log_group_with_backoff(client, log_group_name):
+ return client.list_tags_log_group(logGroupName=log_group_name)
+
+
[email protected]_backoff()
+def describe_log_groups_with_backoff(client, **kwargs):
+ paginator = client.get_paginator("describe_log_groups")
+ return paginator.paginate(**kwargs).build_full_result()
def describe_log_group(client, log_group_name, module):
@@ -89,15 +101,14 @@
if log_group_name:
params["logGroupNamePrefix"] = log_group_name
try:
- paginator = client.get_paginator("describe_log_groups")
- desc_log_group = paginator.paginate(**params).build_full_result()
+ desc_log_group = describe_log_groups_with_backoff(client, **params)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg=f"Unable to describe log group {log_group_name}")
for log_group in desc_log_group["logGroups"]:
log_group_name = log_group["logGroupName"]
try:
- tags = client.list_tags_log_group(logGroupName=log_group_name)
+ tags = list_tags_log_group_with_backoff(client, log_group_name)
except is_boto3_error_code("AccessDeniedException"):
tags = {}
module.warn(f"Permission denied listing tags for log group {log_group_name}")
| {"golden_diff": "diff --git a/plugins/modules/cloudwatchlogs_log_group_info.py b/plugins/modules/cloudwatchlogs_log_group_info.py\n--- a/plugins/modules/cloudwatchlogs_log_group_info.py\n+++ b/plugins/modules/cloudwatchlogs_log_group_info.py\n@@ -82,6 +82,18 @@\n \n from ansible_collections.amazon.aws.plugins.module_utils.modules import AnsibleAWSModule\n from ansible_collections.amazon.aws.plugins.module_utils.botocore import is_boto3_error_code\n+from ansible_collections.amazon.aws.plugins.module_utils.retries import AWSRetry\n+\n+\[email protected]_backoff()\n+def list_tags_log_group_with_backoff(client, log_group_name):\n+ return client.list_tags_log_group(logGroupName=log_group_name)\n+\n+\[email protected]_backoff()\n+def describe_log_groups_with_backoff(client, **kwargs):\n+ paginator = client.get_paginator(\"describe_log_groups\")\n+ return paginator.paginate(**kwargs).build_full_result()\n \n \n def describe_log_group(client, log_group_name, module):\n@@ -89,15 +101,14 @@\n if log_group_name:\n params[\"logGroupNamePrefix\"] = log_group_name\n try:\n- paginator = client.get_paginator(\"describe_log_groups\")\n- desc_log_group = paginator.paginate(**params).build_full_result()\n+ desc_log_group = describe_log_groups_with_backoff(client, **params)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg=f\"Unable to describe log group {log_group_name}\")\n \n for log_group in desc_log_group[\"logGroups\"]:\n log_group_name = log_group[\"logGroupName\"]\n try:\n- tags = client.list_tags_log_group(logGroupName=log_group_name)\n+ tags = list_tags_log_group_with_backoff(client, log_group_name)\n except is_boto3_error_code(\"AccessDeniedException\"):\n tags = {}\n module.warn(f\"Permission denied listing tags for log group {log_group_name}\")\n", "issue": "The cloudwatchlogs_log_group_info module generates throttling exceptions\n### Summary\n\nWhen running the amazon.aws.cloudwatchlogs_log_group_info module on AWS accounts having more than (roughly) 50 log groups we get ThrottlingExceptions once every 20 calls or so. I noticed that both the describe and the list-tags boto calls in the cloudwatchlogs_log_group_info module have no throttling handling configured and use the default throttling handling policy, ie. none. 
\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\namazon.aws.cloudwatchlogs_log_group_info\n\n### Ansible Version\n\n```console (paste below)\r\nansible [core 2.13.13]\r\n config file = None\r\n configured module search path = ['/home/rundeck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible\r\n ansible collection location = /home/rundeck/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0]\r\n jinja version = 3.1.3\r\n libyaml = True\r\n\r\n```\r\n\n\n### Collection Versions\n\n```console (paste below)\r\nCollection Version\r\n--------------------- -------\r\namazon.aws 7.3.0\r\ncommunity.general 8.3.0\r\nnutanix.ncp 1.7.0\r\nservicenow.servicenow 1.0.6\r\n\r\n# /usr/local/lib/python3.8/dist-packages/ansible_collections\r\nCollection Version\r\n----------------------------- -------\r\namazon.aws 3.5.0\r\nansible.netcommon 3.1.3\r\nansible.posix 1.4.0\r\nansible.utils 2.8.0\r\nansible.windows 1.12.0\r\narista.eos 5.0.1\r\nawx.awx 21.10.0\r\nazure.azcollection 1.14.0\r\ncheck_point.mgmt 2.3.0\r\nchocolatey.chocolatey 1.3.1\r\ncisco.aci 2.3.0\r\ncisco.asa 3.1.0\r\ncisco.dnac 6.6.1\r\ncisco.intersight 1.0.22\r\ncisco.ios 3.3.2\r\ncisco.iosxr 3.3.1\r\ncisco.ise 2.5.9\r\ncisco.meraki 2.13.0\r\ncisco.mso 2.1.0\r\ncisco.nso 1.0.3\r\ncisco.nxos 3.2.0\r\ncisco.ucs 1.8.0\r\ncloud.common 2.1.2\r\ncloudscale_ch.cloud 2.2.3\r\ncommunity.aws 3.6.0\r\n[...]\r\n```\r\n\n\n### AWS SDK versions\n\n```console (paste below)\r\nWARNING: Package(s) not found: boto\r\nName: boto3\r\nVersion: 1.34.45\r\nSummary: The AWS SDK for Python\r\nHome-page: https://github.com/boto/boto3\r\nAuthor: Amazon Web Services\r\nAuthor-email: None\r\nLicense: Apache License 2.0\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: botocore, jmespath, s3transfer\r\nRequired-by:\r\n---\r\nName: botocore\r\nVersion: 1.34.45\r\nSummary: Low-level, data-driven core of boto 3.\r\nHome-page: https://github.com/boto/botocore\r\nAuthor: Amazon Web Services\r\nAuthor-email: None\r\nLicense: Apache License 2.0\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: python-dateutil, jmespath, urllib3\r\nRequired-by: s3transfer, boto3, awscli\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n(no Ansible configuration)\r\n```\r\n\n\n### OS / Environment\n\nNAME=\"Ubuntu\"\r\nVERSION=\"20.04.6 LTS (Focal Fossa)\"\r\nID=ubuntu\r\nID_LIKE=debian\r\nPRETTY_NAME=\"Ubuntu 20.04.6 LTS\"\n\n### Steps to Reproduce\n\n```yaml \r\n- name: \"Role based get all log groups in {{ selected_region }}\"\r\n amazon.aws.cloudwatchlogs_log_group_info:\r\n region: \"{{ selected_region }}\"\r\n aws_access_key: \"{{ aws_access_key }}\"\r\n aws_secret_key: \"{{ aws_secret_key }}\"\r\n aws_session_token: \"{{ aws_session_token }}\"\r\n log_group_name: \"{{ log_group_prefix }}\"\r\n register: log_groups\r\n```\r\n\n\n### Expected Results\n\nShould get all log group information and tags without error. Optionally, allow us to add an option to skip getting the tags for each log group would allow engineers to work around this issue. \n\n### Actual Results\n\n```console (paste below)\r\nfatal: [127.0.0.1]: FAILED! 
=> {\"boto3_version\": \"1.34.45\", \"botocore_version\": \"1.34.45\", \"changed\": false, \"error\": {\"code\": \"ThrottlingException\", \"message\": \"Rate exceeded\"}, \"msg\": \"Unable to describe tags for log group /aws/codebuild/tankmaintenanceplanning-pipeline-tsa: An error occurred (ThrottlingException) when calling the ListTagsLogGroup operation (reached max retries: 4): Rate exceeded\"\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nDOCUMENTATION = r\"\"\"\n---\nmodule: cloudwatchlogs_log_group_info\nversion_added: 5.0.0\nshort_description: Get information about log_group in CloudWatchLogs\ndescription:\n - Lists the specified log groups. You can list all your log groups or filter the results by prefix.\n - This module was originally added to C(community.aws) in release 1.0.0.\nauthor:\n - Willian Ricardo (@willricardo) <[email protected]>\noptions:\n log_group_name:\n description:\n - The name or prefix of the log group to filter by.\n type: str\nextends_documentation_fragment:\n - amazon.aws.common.modules\n - amazon.aws.region.modules\n - amazon.aws.boto3\n\"\"\"\n\nEXAMPLES = r\"\"\"\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n- amazon.aws.cloudwatchlogs_log_group_info:\n log_group_name: test-log-group\n\"\"\"\n\nRETURN = r\"\"\"\nlog_groups:\n description: Return the list of complex objects representing log groups\n returned: success\n type: complex\n contains:\n log_group_name:\n description: The name of the log group.\n returned: always\n type: str\n creation_time:\n description: The creation time of the log group.\n returned: always\n type: int\n retention_in_days:\n description: The number of days to retain the log events in the specified log group.\n returned: always\n type: int\n metric_filter_count:\n description: The number of metric filters.\n returned: always\n type: int\n arn:\n description: The Amazon Resource Name (ARN) of the log group.\n returned: always\n type: str\n stored_bytes:\n description: The number of bytes stored.\n returned: always\n type: str\n kms_key_id:\n description: The Amazon Resource Name (ARN) of the CMK to use when encrypting log data.\n returned: always\n type: str\n tags:\n description: A dictionary representing the tags on the log group.\n returned: always\n type: dict\n version_added: 4.0.0\n version_added_collection: community.aws\n\"\"\"\n\ntry:\n import botocore\nexcept ImportError:\n pass # Handled by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.modules import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.botocore import is_boto3_error_code\n\n\ndef describe_log_group(client, log_group_name, module):\n params = {}\n if log_group_name:\n params[\"logGroupNamePrefix\"] = log_group_name\n try:\n paginator = client.get_paginator(\"describe_log_groups\")\n desc_log_group = paginator.paginate(**params).build_full_result()\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg=f\"Unable to describe log group {log_group_name}\")\n\n for log_group in desc_log_group[\"logGroups\"]:\n log_group_name = log_group[\"logGroupName\"]\n try:\n tags = 
client.list_tags_log_group(logGroupName=log_group_name)\n except is_boto3_error_code(\"AccessDeniedException\"):\n tags = {}\n module.warn(f\"Permission denied listing tags for log group {log_group_name}\")\n except (\n botocore.exceptions.ClientError,\n botocore.exceptions.BotoCoreError,\n ) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(e, msg=f\"Unable to describe tags for log group {log_group_name}\")\n log_group[\"tags\"] = tags.get(\"tags\", {})\n\n return desc_log_group\n\n\ndef main():\n argument_spec = dict(\n log_group_name=dict(),\n )\n\n module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)\n\n try:\n logs = module.client(\"logs\")\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg=\"Failed to connect to AWS\")\n\n desc_log_group = describe_log_group(client=logs, log_group_name=module.params[\"log_group_name\"], module=module)\n final_log_group_snake = []\n\n for log_group in desc_log_group[\"logGroups\"]:\n final_log_group_snake.append(camel_dict_to_snake_dict(log_group, ignore_list=[\"tags\"]))\n\n desc_log_group_result = dict(changed=False, log_groups=final_log_group_snake)\n module.exit_json(**desc_log_group_result)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "plugins/modules/cloudwatchlogs_log_group_info.py"}]} | 3,182 | 432 |
gh_patches_debug_64715 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1243 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs page describing Beta meaning
## 📚 Documentation
Add a page in our docs describing that beta means that one or all of the following are true:
- the feature has unstable dependencies
- the feature may change without notice in future versions
- the feature is not compatible with other flash / pl features
- the performance of the feature has not been verified
Anything else?
</issue>
<code>
[start of docs/extensions/stability.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from docutils import nodes
15 from docutils.parsers.rst import Directive
16 from docutils.statemachine import StringList
17
18 ADMONITION_TEMPLATE = """
19 .. raw:: html
20
21 <div class="admonition warning {type}">
22 <p class="admonition-title">{title}</p>
23 <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future
24 releases.</p>
25 </div>
26 """
27
28
29 class Beta(Directive):
30 has_content = True
31 required_arguments = 1
32 optional_arguments = 0
33
34 def run(self):
35
36 scope = self.arguments[0]
37
38 admonition_rst = ADMONITION_TEMPLATE.format(type="beta", title="Beta", scope=scope)
39 admonition_list = StringList(admonition_rst.split("\n"))
40 admonition = nodes.paragraph()
41 self.state.nested_parse(admonition_list, self.content_offset, admonition)
42 return [admonition]
43
44
45 def setup(app):
46 app.add_directive("beta", Beta)
47
[end of docs/extensions/stability.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/extensions/stability.py b/docs/extensions/stability.py
--- a/docs/extensions/stability.py
+++ b/docs/extensions/stability.py
@@ -20,8 +20,14 @@
<div class="admonition warning {type}">
<p class="admonition-title">{title}</p>
- <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future
- releases.</p>
+ <p>
+
+This {scope} is currently in Beta. The API and functionality may change without warning in future
+releases. :ref:`More details <stability>`.
+
+.. raw:: html
+
+ </p>
</div>
"""
| {"golden_diff": "diff --git a/docs/extensions/stability.py b/docs/extensions/stability.py\n--- a/docs/extensions/stability.py\n+++ b/docs/extensions/stability.py\n@@ -20,8 +20,14 @@\n \n <div class=\"admonition warning {type}\">\n <p class=\"admonition-title\">{title}</p>\n- <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future\n- releases.</p>\n+ <p>\n+\n+This {scope} is currently in Beta. The API and functionality may change without warning in future\n+releases. :ref:`More details <stability>`.\n+\n+.. raw:: html\n+\n+ </p>\n </div>\n \"\"\"\n", "issue": "Docs page describing Beta meaning\n## \ud83d\udcda Documentation\r\n\r\nAdd a page in our docs describing that beta means that one or all of the following are true:\r\n- the feature has unstable dependencies\r\n- the feature may change without notice in future versions\r\n- the feature is not compatible with other flash / pl features\r\n- the performance of the feature has not been verified\r\n\r\nAnything else?\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive\nfrom docutils.statemachine import StringList\n\nADMONITION_TEMPLATE = \"\"\"\n.. raw:: html\n\n <div class=\"admonition warning {type}\">\n <p class=\"admonition-title\">{title}</p>\n <p>This {scope} is currently in Beta. The interfaces and functionality may change without warning in future\n releases.</p>\n </div>\n\"\"\"\n\n\nclass Beta(Directive):\n has_content = True\n required_arguments = 1\n optional_arguments = 0\n\n def run(self):\n\n scope = self.arguments[0]\n\n admonition_rst = ADMONITION_TEMPLATE.format(type=\"beta\", title=\"Beta\", scope=scope)\n admonition_list = StringList(admonition_rst.split(\"\\n\"))\n admonition = nodes.paragraph()\n self.state.nested_parse(admonition_list, self.content_offset, admonition)\n return [admonition]\n\n\ndef setup(app):\n app.add_directive(\"beta\", Beta)\n", "path": "docs/extensions/stability.py"}]} | 1,056 | 161 |
gh_patches_debug_3300 | rasdani/github-patches | git_diff | fal-ai__dbt-fal-128 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make logging of insert statements for write_to_source optional or just remove it
`cesarsantos#3303` in Discord asked for this, probably to avoid logging sensitive information or filling the logger with all the data there is.
</issue>
<code>
[start of src/faldbt/lib.py]
1 # NOTE: INSPIRED IN https://github.com/dbt-labs/dbt-core/blob/43edc887f97e359b02b6317a9f91898d3d66652b/core/dbt/lib.py
2 import six
3 from datetime import datetime
4 from dataclasses import dataclass
5 from uuid import uuid4
6 from typing import List, Tuple, Union
7
8 import dbt.version
9 import dbt.semver
10 import dbt.flags as flags
11 import dbt.adapters.factory as adapters_factory
12 from dbt.config.runtime import RuntimeConfig
13 from dbt.contracts.connection import AdapterResponse
14 from dbt.contracts.graph.manifest import Manifest
15 from dbt.parser.manifest import process_node
16 from dbt.logger import GLOBAL_LOGGER as logger
17
18 from . import parse
19
20 import pandas as pd
21 from pandas.io import sql as pdsql
22
23 import sqlalchemy
24 from sqlalchemy.sql.ddl import CreateTable
25 from sqlalchemy.sql import Insert
26 from sqlalchemy.sql.schema import MetaData
27
28
29 DBT_V1 = dbt.semver.VersionSpecifier.from_version_string("1.0.0")
30 DBT_VCURRENT = dbt.version.get_installed_version()
31
32 if DBT_VCURRENT.compare(DBT_V1) >= 0:
33 from dbt.parser.sql import SqlBlockParser
34 from dbt.contracts.graph.parsed import ParsedModelNode, ParsedSourceDefinition
35 from dbt.contracts.sql import ResultTable, RemoteRunResult
36 else:
37 from faldbt.cp.parser.sql import SqlBlockParser
38 from faldbt.cp.contracts.graph.parsed import ParsedModelNode, ParsedSourceDefinition
39 from faldbt.cp.contracts.sql import ResultTable, RemoteRunResult
40
41
42 @dataclass
43 class FlagsArgs:
44 profiles_dir: str
45 use_colors: bool
46
47
48 def initialize_dbt_flags(profiles_dir: str):
49 """
50 Initializes the flags module from dbt, since it's accessed from around their code.
51 """
52 args = FlagsArgs(profiles_dir, None)
53 user_config = parse.get_dbt_user_config(profiles_dir)
54 try:
55 flags.set_from_args(args, user_config)
56 except TypeError:
57 flags.set_from_args(args)
58
59 # Set invocation id
60 if DBT_VCURRENT.compare(DBT_V1) >= 0:
61 import dbt.events.functions as events_functions
62
63 events_functions.set_invocation_id()
64
65 # Re-enable logging for 1.0.0 through old API of logger
66 # TODO: migrate for 1.0.0 code to new event system
67 if DBT_VCURRENT.compare(DBT_V1) >= 0:
68 flags.ENABLE_LEGACY_LOGGER = "1"
69 if logger.disabled:
70 logger.enable()
71
72
73 def register_adapters(config: RuntimeConfig):
74 # Clear previously registered adapters. This fixes cacheing behavior on the dbt-server
75 adapters_factory.reset_adapters()
76 # Load the relevant adapter
77 adapters_factory.register_adapter(config)
78
79
80 def _get_operation_node(manifest: Manifest, project_path, profiles_dir, sql):
81
82 config = parse.get_dbt_config(project_path, profiles_dir)
83 block_parser = SqlBlockParser(
84 project=config,
85 manifest=manifest,
86 root_project=config,
87 )
88
89 # NOTE: nodes get registered to the manifest automatically,
90 # HACK: we need to include uniqueness (UUID4) to avoid clashes
91 name = "SQL:" + str(hash(sql)) + ":" + str(uuid4())
92 sql_node = block_parser.parse_remote(sql, name)
93 process_node(config, manifest, sql_node)
94 return sql_node
95
96
97 # NOTE: Once we get an adapter, we must call `connection_for` or `connection_named` to use it
98 def _get_adapter(project_path: str, profiles_dir: str):
99 config = parse.get_dbt_config(project_path, profiles_dir)
100
101 adapters_factory.cleanup_connections()
102 return adapters_factory.get_adapter(config)
103
104
105 def _execute_sql(
106 manifest: Manifest, project_path: str, profiles_dir: str, sql: str
107 ) -> Tuple[AdapterResponse, RemoteRunResult]:
108 node = _get_operation_node(manifest, project_path, profiles_dir, sql)
109 adapter = _get_adapter(project_path, profiles_dir)
110
111 logger.info("Running query\n{}", sql)
112
113 result = None
114 with adapter.connection_for(node):
115 adapter.connections.begin()
116 response, execute_result = adapter.execute(sql, fetch=True)
117
118 table = ResultTable(
119 column_names=list(execute_result.column_names),
120 rows=[list(row) for row in execute_result],
121 )
122
123 result = RemoteRunResult(
124 raw_sql=sql,
125 compiled_sql=sql,
126 node=node,
127 table=table,
128 timing=[],
129 logs=[],
130 generated_at=datetime.utcnow(),
131 )
132 adapter.connections.commit()
133
134 return response, result
135
136
137 def _get_target_relation(
138 target: Union[ParsedModelNode, ParsedSourceDefinition],
139 project_path: str,
140 profiles_dir: str,
141 ):
142 adapter = _get_adapter(project_path, profiles_dir)
143
144 relation = None
145 with adapter.connection_named(str(uuid4())):
146 # This ROLLBACKs so it has to be a new connection
147 relation = adapter.get_relation(
148 target.database, target.schema, target.identifier
149 )
150 return relation
151
152
153 def execute_sql(
154 manifest: Manifest, project_path: str, profiles_dir: str, sql: str
155 ) -> RemoteRunResult:
156 _, result = _execute_sql(manifest, project_path, profiles_dir, sql)
157 return result
158
159
160 def fetch_target(
161 manifest: Manifest,
162 project_path: str,
163 profiles_dir: str,
164 target: Union[ParsedModelNode, ParsedSourceDefinition],
165 ) -> RemoteRunResult:
166 relation = _get_target_relation(target, project_path, profiles_dir)
167
168 if relation is None:
169 raise Exception(f"Could not get relation for '{target.unique_id}'")
170
171 query = f"SELECT * FROM {relation}"
172 _, result = _execute_sql(manifest, project_path, profiles_dir, query)
173 return result
174
175
176 def write_target(
177 data: pd.DataFrame,
178 manifest: Manifest,
179 project_path: str,
180 profiles_dir: str,
181 target: Union[ParsedModelNode, ParsedSourceDefinition],
182 ) -> RemoteRunResult:
183 adapter = _get_adapter(project_path, profiles_dir)
184
185 relation = _get_target_relation(target, project_path, profiles_dir)
186
187 engine = _alchemy_engine(adapter, target)
188 pddb = pdsql.SQLDatabase(engine, schema=target.schema)
189 pdtable = pdsql.SQLTable(target.name, pddb, data, index=False)
190 alchemy_table: sqlalchemy.Table = pdtable.table.to_metadata(pdtable.pd_sql.meta)
191
192 column_names: List[str] = list(data.columns)
193 rows = data.to_records(index=False)
194 row_dicts = list(map(lambda row: dict(zip(column_names, row)), rows))
195
196 if relation is None:
197 create_stmt = CreateTable(alchemy_table).compile(
198 bind=engine, compile_kwargs={"literal_binds": True}
199 )
200
201 _execute_sql(
202 manifest, project_path, profiles_dir, six.text_type(create_stmt).strip()
203 )
204
205 insert_stmt = (
206 Insert(alchemy_table)
207 .values(row_dicts)
208 .compile(bind=engine, compile_kwargs={"literal_binds": True})
209 )
210
211 _, result = _execute_sql(
212 manifest, project_path, profiles_dir, six.text_type(insert_stmt).strip()
213 )
214 return result
215
216
217 def _alchemy_engine(
218 adapter: adapters_factory.Adapter,
219 target: Union[ParsedModelNode, ParsedSourceDefinition],
220 ):
221 url_string = f"{adapter.type()}://"
222 if adapter.type() == "bigquery":
223 url_string = f"bigquery://{target.database}"
224 if adapter.type() == "postgres":
225 url_string = "postgresql://"
226 # TODO: add special cases as needed
227
228 def null_dump(sql, *multiparams, **params):
229 pass
230
231 return sqlalchemy.create_mock_engine(url_string, executor=null_dump)
232
[end of src/faldbt/lib.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/faldbt/lib.py b/src/faldbt/lib.py
--- a/src/faldbt/lib.py
+++ b/src/faldbt/lib.py
@@ -108,7 +108,7 @@
node = _get_operation_node(manifest, project_path, profiles_dir, sql)
adapter = _get_adapter(project_path, profiles_dir)
- logger.info("Running query\n{}", sql)
+ logger.debug("Running query\n{}", sql)
result = None
with adapter.connection_for(node):
| {"golden_diff": "diff --git a/src/faldbt/lib.py b/src/faldbt/lib.py\n--- a/src/faldbt/lib.py\n+++ b/src/faldbt/lib.py\n@@ -108,7 +108,7 @@\n node = _get_operation_node(manifest, project_path, profiles_dir, sql)\n adapter = _get_adapter(project_path, profiles_dir)\n \n- logger.info(\"Running query\\n{}\", sql)\n+ logger.debug(\"Running query\\n{}\", sql)\n \n result = None\n with adapter.connection_for(node):\n", "issue": "Make logging of insert statements for write_to_source optional or just remove it\n`cesarsantos#3303` in Discord asked for this probably to avoid logging sensible information or filling the logger with all the data there is.\n", "before_files": [{"content": "# NOTE: INSPIRED IN https://github.com/dbt-labs/dbt-core/blob/43edc887f97e359b02b6317a9f91898d3d66652b/core/dbt/lib.py\nimport six\nfrom datetime import datetime\nfrom dataclasses import dataclass\nfrom uuid import uuid4\nfrom typing import List, Tuple, Union\n\nimport dbt.version\nimport dbt.semver\nimport dbt.flags as flags\nimport dbt.adapters.factory as adapters_factory\nfrom dbt.config.runtime import RuntimeConfig\nfrom dbt.contracts.connection import AdapterResponse\nfrom dbt.contracts.graph.manifest import Manifest\nfrom dbt.parser.manifest import process_node\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\nfrom . import parse\n\nimport pandas as pd\nfrom pandas.io import sql as pdsql\n\nimport sqlalchemy\nfrom sqlalchemy.sql.ddl import CreateTable\nfrom sqlalchemy.sql import Insert\nfrom sqlalchemy.sql.schema import MetaData\n\n\nDBT_V1 = dbt.semver.VersionSpecifier.from_version_string(\"1.0.0\")\nDBT_VCURRENT = dbt.version.get_installed_version()\n\nif DBT_VCURRENT.compare(DBT_V1) >= 0:\n from dbt.parser.sql import SqlBlockParser\n from dbt.contracts.graph.parsed import ParsedModelNode, ParsedSourceDefinition\n from dbt.contracts.sql import ResultTable, RemoteRunResult\nelse:\n from faldbt.cp.parser.sql import SqlBlockParser\n from faldbt.cp.contracts.graph.parsed import ParsedModelNode, ParsedSourceDefinition\n from faldbt.cp.contracts.sql import ResultTable, RemoteRunResult\n\n\n@dataclass\nclass FlagsArgs:\n profiles_dir: str\n use_colors: bool\n\n\ndef initialize_dbt_flags(profiles_dir: str):\n \"\"\"\n Initializes the flags module from dbt, since it's accessed from around their code.\n \"\"\"\n args = FlagsArgs(profiles_dir, None)\n user_config = parse.get_dbt_user_config(profiles_dir)\n try:\n flags.set_from_args(args, user_config)\n except TypeError:\n flags.set_from_args(args)\n\n # Set invocation id\n if DBT_VCURRENT.compare(DBT_V1) >= 0:\n import dbt.events.functions as events_functions\n\n events_functions.set_invocation_id()\n\n # Re-enable logging for 1.0.0 through old API of logger\n # TODO: migrate for 1.0.0 code to new event system\n if DBT_VCURRENT.compare(DBT_V1) >= 0:\n flags.ENABLE_LEGACY_LOGGER = \"1\"\n if logger.disabled:\n logger.enable()\n\n\ndef register_adapters(config: RuntimeConfig):\n # Clear previously registered adapters. 
This fixes cacheing behavior on the dbt-server\n adapters_factory.reset_adapters()\n # Load the relevant adapter\n adapters_factory.register_adapter(config)\n\n\ndef _get_operation_node(manifest: Manifest, project_path, profiles_dir, sql):\n\n config = parse.get_dbt_config(project_path, profiles_dir)\n block_parser = SqlBlockParser(\n project=config,\n manifest=manifest,\n root_project=config,\n )\n\n # NOTE: nodes get registered to the manifest automatically,\n # HACK: we need to include uniqueness (UUID4) to avoid clashes\n name = \"SQL:\" + str(hash(sql)) + \":\" + str(uuid4())\n sql_node = block_parser.parse_remote(sql, name)\n process_node(config, manifest, sql_node)\n return sql_node\n\n\n# NOTE: Once we get an adapter, we must call `connection_for` or `connection_named` to use it\ndef _get_adapter(project_path: str, profiles_dir: str):\n config = parse.get_dbt_config(project_path, profiles_dir)\n\n adapters_factory.cleanup_connections()\n return adapters_factory.get_adapter(config)\n\n\ndef _execute_sql(\n manifest: Manifest, project_path: str, profiles_dir: str, sql: str\n) -> Tuple[AdapterResponse, RemoteRunResult]:\n node = _get_operation_node(manifest, project_path, profiles_dir, sql)\n adapter = _get_adapter(project_path, profiles_dir)\n\n logger.info(\"Running query\\n{}\", sql)\n\n result = None\n with adapter.connection_for(node):\n adapter.connections.begin()\n response, execute_result = adapter.execute(sql, fetch=True)\n\n table = ResultTable(\n column_names=list(execute_result.column_names),\n rows=[list(row) for row in execute_result],\n )\n\n result = RemoteRunResult(\n raw_sql=sql,\n compiled_sql=sql,\n node=node,\n table=table,\n timing=[],\n logs=[],\n generated_at=datetime.utcnow(),\n )\n adapter.connections.commit()\n\n return response, result\n\n\ndef _get_target_relation(\n target: Union[ParsedModelNode, ParsedSourceDefinition],\n project_path: str,\n profiles_dir: str,\n):\n adapter = _get_adapter(project_path, profiles_dir)\n\n relation = None\n with adapter.connection_named(str(uuid4())):\n # This ROLLBACKs so it has to be a new connection\n relation = adapter.get_relation(\n target.database, target.schema, target.identifier\n )\n return relation\n\n\ndef execute_sql(\n manifest: Manifest, project_path: str, profiles_dir: str, sql: str\n) -> RemoteRunResult:\n _, result = _execute_sql(manifest, project_path, profiles_dir, sql)\n return result\n\n\ndef fetch_target(\n manifest: Manifest,\n project_path: str,\n profiles_dir: str,\n target: Union[ParsedModelNode, ParsedSourceDefinition],\n) -> RemoteRunResult:\n relation = _get_target_relation(target, project_path, profiles_dir)\n\n if relation is None:\n raise Exception(f\"Could not get relation for '{target.unique_id}'\")\n\n query = f\"SELECT * FROM {relation}\"\n _, result = _execute_sql(manifest, project_path, profiles_dir, query)\n return result\n\n\ndef write_target(\n data: pd.DataFrame,\n manifest: Manifest,\n project_path: str,\n profiles_dir: str,\n target: Union[ParsedModelNode, ParsedSourceDefinition],\n) -> RemoteRunResult:\n adapter = _get_adapter(project_path, profiles_dir)\n\n relation = _get_target_relation(target, project_path, profiles_dir)\n\n engine = _alchemy_engine(adapter, target)\n pddb = pdsql.SQLDatabase(engine, schema=target.schema)\n pdtable = pdsql.SQLTable(target.name, pddb, data, index=False)\n alchemy_table: sqlalchemy.Table = pdtable.table.to_metadata(pdtable.pd_sql.meta)\n\n column_names: List[str] = list(data.columns)\n rows = data.to_records(index=False)\n row_dicts = 
list(map(lambda row: dict(zip(column_names, row)), rows))\n\n if relation is None:\n create_stmt = CreateTable(alchemy_table).compile(\n bind=engine, compile_kwargs={\"literal_binds\": True}\n )\n\n _execute_sql(\n manifest, project_path, profiles_dir, six.text_type(create_stmt).strip()\n )\n\n insert_stmt = (\n Insert(alchemy_table)\n .values(row_dicts)\n .compile(bind=engine, compile_kwargs={\"literal_binds\": True})\n )\n\n _, result = _execute_sql(\n manifest, project_path, profiles_dir, six.text_type(insert_stmt).strip()\n )\n return result\n\n\ndef _alchemy_engine(\n adapter: adapters_factory.Adapter,\n target: Union[ParsedModelNode, ParsedSourceDefinition],\n):\n url_string = f\"{adapter.type()}://\"\n if adapter.type() == \"bigquery\":\n url_string = f\"bigquery://{target.database}\"\n if adapter.type() == \"postgres\":\n url_string = \"postgresql://\"\n # TODO: add special cases as needed\n\n def null_dump(sql, *multiparams, **params):\n pass\n\n return sqlalchemy.create_mock_engine(url_string, executor=null_dump)\n", "path": "src/faldbt/lib.py"}]} | 2,917 | 120 |
gh_patches_debug_2342 | rasdani/github-patches | git_diff | mozilla__bugbug-411 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use codespell in precommit hook
</issue>
<code>
[start of run.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import argparse
7 import csv
8 import os
9 from datetime import datetime, timedelta
10
11 import numpy as np
12
13 from bugbug import repository # noqa
14 from bugbug import bugzilla, db
15 from bugbug.models import get_model_class
16
17 if __name__ == "__main__":
18 parser = argparse.ArgumentParser()
19 parser.add_argument(
20 "--lemmatization",
21 help="Perform lemmatization (using spaCy)",
22 action="store_true",
23 )
24 parser.add_argument("--train", help="Perform training", action="store_true")
25 parser.add_argument(
26 "--goal",
27 help="Goal of the classifier",
28 choices=[
29 # bug classifiers
30 "defect",
31 "regression",
32 "tracking",
33 "qaneeded",
34 "uplift",
35 "component",
36 "devdocneeded",
37 "defectenhancementtask",
38 "assignee",
39 "bugtype",
40 "stepstoreproduce",
41 # commit classifiers
42 "backout",
43 ],
44 default="defect",
45 )
46 parser.add_argument(
47 "--classifier",
48 help="Type of the classifier",
49 choices=["default", "nn"],
50 default="default",
51 )
52 parser.add_argument("--classify", help="Perform evaluation", action="store_true")
53 parser.add_argument(
54 "--generate-sheet",
55 help="Perform evaluation on bugs from last week and generate a csv file",
56 action="store_true",
57 )
58 parser.add_argument("--token", help="Bugzilla token", action="store")
59 parser.add_argument(
60 "--historical", help="Analyze historical bugs", action="store_true"
61 )
62 args = parser.parse_args()
63
64 model_file_name = "{}{}model".format(
65 args.goal, "" if args.classifier == "default" else args.classifier
66 )
67
68 model_class_name = args.goal
69
70 if args.goal == "component":
71 if args.classifier == "default":
72 model_class_name = "component"
73 elif args.classifier == "nn":
74 model_class_name = "component_nn"
75 else:
76 raise ValueError(f"Unkown value {args.classifier}")
77
78 model_class = get_model_class(model_class_name)
79
80 if args.train:
81 db.download()
82
83 if args.historical:
84 model = model_class(args.lemmatization, args.historical)
85 else:
86 model = model_class(args.lemmatization)
87 model.train()
88 else:
89 model = model_class.load(model_file_name)
90
91 if args.classify:
92 for bug in bugzilla.get_bugs():
93 print(
94 f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug["id"] } - { bug["summary"]} '
95 )
96
97 if model.calculate_importance:
98 probas, importances = model.classify(
99 bug, probabilities=True, importances=True
100 )
101
102 feature_names = model.get_feature_names()
103 for i, (importance, index, is_positive) in enumerate(importances):
104 print(
105 f'{i + 1}. \'{feature_names[int(index)]}\' ({"+" if (is_positive) else "-"}{importance})'
106 )
107 else:
108 probas = model.classify(bug, probabilities=True, importances=False)
109
110 if np.argmax(probas) == 1:
111 print(f"Positive! {probas}")
112 else:
113 print(f"Negative! {probas}")
114 input()
115
116 if args.generate_sheet:
117 assert (
118 args.token is not None
119 ), "A Bugzilla token should be set in order to download bugs"
120 today = datetime.utcnow()
121 a_week_ago = today - timedelta(7)
122 bugzilla.set_token(args.token)
123 bugs = bugzilla.download_bugs_between(a_week_ago, today)
124
125 print(f"Classifying {len(bugs)} bugs...")
126
127 rows = [["Bug", f"{args.goal}(model)", args.goal, "Title"]]
128
129 for bug in bugs:
130 p = model.classify(bug, probabilities=True)
131 rows.append(
132 [
133 f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug["id"]}',
134 "y" if p[0][1] >= 0.7 else "n",
135 "",
136 bug["summary"],
137 ]
138 )
139
140 os.makedirs("sheets", exist_ok=True)
141 with open(
142 os.path.join(
143 "sheets",
144 f'{args.goal}-{datetime.utcnow().strftime("%Y-%m-%d")}-labels.csv',
145 ),
146 "w",
147 ) as f:
148 writer = csv.writer(f)
149 writer.writerows(rows)
150
[end of run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/run.py b/run.py
--- a/run.py
+++ b/run.py
@@ -73,7 +73,7 @@
elif args.classifier == "nn":
model_class_name = "component_nn"
else:
- raise ValueError(f"Unkown value {args.classifier}")
+ raise ValueError(f"Unknown value {args.classifier}")
model_class = get_model_class(model_class_name)
| {"golden_diff": "diff --git a/run.py b/run.py\n--- a/run.py\n+++ b/run.py\n@@ -73,7 +73,7 @@\n elif args.classifier == \"nn\":\n model_class_name = \"component_nn\"\n else:\n- raise ValueError(f\"Unkown value {args.classifier}\")\n+ raise ValueError(f\"Unknown value {args.classifier}\")\n \n model_class = get_model_class(model_class_name)\n", "issue": "Use codespell in precommit hook\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport os\nfrom datetime import datetime, timedelta\n\nimport numpy as np\n\nfrom bugbug import repository # noqa\nfrom bugbug import bugzilla, db\nfrom bugbug.models import get_model_class\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--lemmatization\",\n help=\"Perform lemmatization (using spaCy)\",\n action=\"store_true\",\n )\n parser.add_argument(\"--train\", help=\"Perform training\", action=\"store_true\")\n parser.add_argument(\n \"--goal\",\n help=\"Goal of the classifier\",\n choices=[\n # bug classifiers\n \"defect\",\n \"regression\",\n \"tracking\",\n \"qaneeded\",\n \"uplift\",\n \"component\",\n \"devdocneeded\",\n \"defectenhancementtask\",\n \"assignee\",\n \"bugtype\",\n \"stepstoreproduce\",\n # commit classifiers\n \"backout\",\n ],\n default=\"defect\",\n )\n parser.add_argument(\n \"--classifier\",\n help=\"Type of the classifier\",\n choices=[\"default\", \"nn\"],\n default=\"default\",\n )\n parser.add_argument(\"--classify\", help=\"Perform evaluation\", action=\"store_true\")\n parser.add_argument(\n \"--generate-sheet\",\n help=\"Perform evaluation on bugs from last week and generate a csv file\",\n action=\"store_true\",\n )\n parser.add_argument(\"--token\", help=\"Bugzilla token\", action=\"store\")\n parser.add_argument(\n \"--historical\", help=\"Analyze historical bugs\", action=\"store_true\"\n )\n args = parser.parse_args()\n\n model_file_name = \"{}{}model\".format(\n args.goal, \"\" if args.classifier == \"default\" else args.classifier\n )\n\n model_class_name = args.goal\n\n if args.goal == \"component\":\n if args.classifier == \"default\":\n model_class_name = \"component\"\n elif args.classifier == \"nn\":\n model_class_name = \"component_nn\"\n else:\n raise ValueError(f\"Unkown value {args.classifier}\")\n\n model_class = get_model_class(model_class_name)\n\n if args.train:\n db.download()\n\n if args.historical:\n model = model_class(args.lemmatization, args.historical)\n else:\n model = model_class(args.lemmatization)\n model.train()\n else:\n model = model_class.load(model_file_name)\n\n if args.classify:\n for bug in bugzilla.get_bugs():\n print(\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={ bug[\"id\"] } - { bug[\"summary\"]} '\n )\n\n if model.calculate_importance:\n probas, importances = model.classify(\n bug, probabilities=True, importances=True\n )\n\n feature_names = model.get_feature_names()\n for i, (importance, index, is_positive) in enumerate(importances):\n print(\n f'{i + 1}. \\'{feature_names[int(index)]}\\' ({\"+\" if (is_positive) else \"-\"}{importance})'\n )\n else:\n probas = model.classify(bug, probabilities=True, importances=False)\n\n if np.argmax(probas) == 1:\n print(f\"Positive! {probas}\")\n else:\n print(f\"Negative! 
{probas}\")\n input()\n\n if args.generate_sheet:\n assert (\n args.token is not None\n ), \"A Bugzilla token should be set in order to download bugs\"\n today = datetime.utcnow()\n a_week_ago = today - timedelta(7)\n bugzilla.set_token(args.token)\n bugs = bugzilla.download_bugs_between(a_week_ago, today)\n\n print(f\"Classifying {len(bugs)} bugs...\")\n\n rows = [[\"Bug\", f\"{args.goal}(model)\", args.goal, \"Title\"]]\n\n for bug in bugs:\n p = model.classify(bug, probabilities=True)\n rows.append(\n [\n f'https://bugzilla.mozilla.org/show_bug.cgi?id={bug[\"id\"]}',\n \"y\" if p[0][1] >= 0.7 else \"n\",\n \"\",\n bug[\"summary\"],\n ]\n )\n\n os.makedirs(\"sheets\", exist_ok=True)\n with open(\n os.path.join(\n \"sheets\",\n f'{args.goal}-{datetime.utcnow().strftime(\"%Y-%m-%d\")}-labels.csv',\n ),\n \"w\",\n ) as f:\n writer = csv.writer(f)\n writer.writerows(rows)\n", "path": "run.py"}]} | 1,922 | 93 |
gh_patches_debug_22273 | rasdani/github-patches | git_diff | kserve__kserve-545 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log format is not properly set up for KFServer
/kind bug
**What steps did you take and what happened:**
Log format is not properly set up
```
INFO:root:Copying contents of /mnt/models to local
INFO:root:Registering model:cifar10
INFO:root:Listening on port 8080
INFO:tornado.general:Starting 40 processes
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 11488.05ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 22800.67ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 24200.31ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 8301.00ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38398.63ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38799.67ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 7599.63ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 39800.00ms
INFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 32200.33ms
```
**What did you expect to happen:**
The log format should include a timestamp.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version: 0.2.0
- Kubeflow version:
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
</issue>
<code>
[start of python/kfserving/kfserving/kfserver.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import tornado.ioloop
16 import tornado.web
17 import tornado.httpserver
18 import argparse
19 import logging
20 import json
21 from typing import List, Dict
22 from kfserving.handlers.http import PredictHandler, ExplainHandler
23 from kfserving import KFModel
24 from kfserving.constants import constants
25
26 DEFAULT_HTTP_PORT = 8080
27 DEFAULT_GRPC_PORT = 8081
28
29 parser = argparse.ArgumentParser(add_help=False)
30 parser.add_argument('--http_port', default=DEFAULT_HTTP_PORT, type=int,
31 help='The HTTP Port listened to by the model server.')
32 parser.add_argument('--grpc_port', default=DEFAULT_GRPC_PORT, type=int,
33 help='The GRPC Port listened to by the model server.')
34 parser.add_argument('--workers', default=0, type=int,
35 help='The number of works to fork')
36 args, _ = parser.parse_known_args()
37
38 logging.basicConfig(level=constants.KFSERVING_LOGLEVEL)
39
40
41 class KFServer():
42 def __init__(self, http_port: int = args.http_port,
43 grpc_port: int = args.grpc_port,
44 workers: int = args.workers):
45 self.registered_models = {}
46 self.http_port = http_port
47 self.grpc_port = grpc_port
48 self.workers = workers
49 self._http_server = None
50
51 def create_application(self):
52 return tornado.web.Application([
53 # Server Liveness API returns 200 if server is alive.
54 (r"/", LivenessHandler),
55 (r"/v1/models",
56 ListHandler, dict(models=self.registered_models)),
57 # Model Health API returns 200 if model is ready to serve.
58 (r"/v1/models/([a-zA-Z0-9_-]+)",
59 HealthHandler, dict(models=self.registered_models)),
60 (r"/v1/models/([a-zA-Z0-9_-]+):predict",
61 PredictHandler, dict(models=self.registered_models)),
62 (r"/v1/models/([a-zA-Z0-9_-]+):explain",
63 ExplainHandler, dict(models=self.registered_models)),
64 ])
65
66 def start(self, models: List[KFModel]):
67 for model in models:
68 self.register_model(model)
69
70 self._http_server = tornado.httpserver.HTTPServer(
71 self.create_application())
72
73 logging.info("Listening on port %s", self.http_port)
74 self._http_server.bind(self.http_port)
75 logging.info("Will fork %d workers", self.workers)
76 self._http_server.start(self.workers)
77 tornado.ioloop.IOLoop.current().start()
78
79 def register_model(self, model: KFModel):
80 if not model.name:
81 raise Exception(
82 "Failed to register model, model.name must be provided.")
83 self.registered_models[model.name] = model
84 logging.info("Registering model: %s", model.name)
85
86
87 class LivenessHandler(tornado.web.RequestHandler): # pylint:disable=too-few-public-methods
88 def get(self):
89 self.write("Alive")
90
91
92 class HealthHandler(tornado.web.RequestHandler):
93 def initialize(self, models: Dict[str, KFModel]):
94 self.models = models # pylint:disable=attribute-defined-outside-init
95
96 def get(self, name: str):
97 if name not in self.models:
98 raise tornado.web.HTTPError(
99 status_code=404,
100 reason="Model with name %s does not exist." % name
101 )
102
103 model = self.models[name]
104 self.write(json.dumps({
105 "name": model.name,
106 "ready": model.ready
107 }))
108
109
110 class ListHandler(tornado.web.RequestHandler):
111 def initialize(self, models: Dict[str, KFModel]):
112 self.models = models # pylint:disable=attribute-defined-outside-init
113
114 def get(self):
115 self.write(json.dumps(list(self.models.values())))
116
[end of python/kfserving/kfserving/kfserver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kfserving/kfserving/kfserver.py b/python/kfserving/kfserving/kfserver.py
--- a/python/kfserving/kfserving/kfserver.py
+++ b/python/kfserving/kfserving/kfserver.py
@@ -12,16 +12,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import tornado.ioloop
-import tornado.web
-import tornado.httpserver
import argparse
import logging
import json
from typing import List, Dict
+import tornado.ioloop
+import tornado.web
+import tornado.httpserver
+import tornado.log
from kfserving.handlers.http import PredictHandler, ExplainHandler
from kfserving import KFModel
-from kfserving.constants import constants
DEFAULT_HTTP_PORT = 8080
DEFAULT_GRPC_PORT = 8081
@@ -35,8 +35,7 @@
help='The number of works to fork')
args, _ = parser.parse_known_args()
-logging.basicConfig(level=constants.KFSERVING_LOGLEVEL)
-
+tornado.log.enable_pretty_logging()
class KFServer():
def __init__(self, http_port: int = args.http_port,
| {"golden_diff": "diff --git a/python/kfserving/kfserving/kfserver.py b/python/kfserving/kfserving/kfserver.py\n--- a/python/kfserving/kfserving/kfserver.py\n+++ b/python/kfserving/kfserving/kfserver.py\n@@ -12,16 +12,16 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import tornado.ioloop\n-import tornado.web\n-import tornado.httpserver\n import argparse\n import logging\n import json\n from typing import List, Dict\n+import tornado.ioloop\n+import tornado.web\n+import tornado.httpserver\n+import tornado.log\n from kfserving.handlers.http import PredictHandler, ExplainHandler\n from kfserving import KFModel\n-from kfserving.constants import constants\n \n DEFAULT_HTTP_PORT = 8080\n DEFAULT_GRPC_PORT = 8081\n@@ -35,8 +35,7 @@\n help='The number of works to fork')\n args, _ = parser.parse_known_args()\n \n-logging.basicConfig(level=constants.KFSERVING_LOGLEVEL)\n-\n+tornado.log.enable_pretty_logging()\n \n class KFServer():\n def __init__(self, http_port: int = args.http_port,\n", "issue": "Log format is not properly setup for KFServer\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nLog format is not properly setup\r\n\r\n```\r\nINFO:root:Copying contents of /mnt/models to local\r\nINFO:root:Registering model:cifar10\r\nINFO:root:Listening on port 8080\r\nINFO:tornado.general:Starting 40 processes\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 11488.05ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 22800.67ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 24200.31ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 8301.00ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38398.63ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 38799.67ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 7599.63ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 39800.00ms\r\nINFO:tornado.access:200 POST /v1/models/cifar10:predict (127.0.0.1) 32200.33ms\r\n```\r\n\r\n**What did you expect to happen:**\r\nThe log format should include timestamp.\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version: 0.2.0\r\n- Kubeflow version:\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. 
from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tornado.ioloop\nimport tornado.web\nimport tornado.httpserver\nimport argparse\nimport logging\nimport json\nfrom typing import List, Dict\nfrom kfserving.handlers.http import PredictHandler, ExplainHandler\nfrom kfserving import KFModel\nfrom kfserving.constants import constants\n\nDEFAULT_HTTP_PORT = 8080\nDEFAULT_GRPC_PORT = 8081\n\nparser = argparse.ArgumentParser(add_help=False)\nparser.add_argument('--http_port', default=DEFAULT_HTTP_PORT, type=int,\n help='The HTTP Port listened to by the model server.')\nparser.add_argument('--grpc_port', default=DEFAULT_GRPC_PORT, type=int,\n help='The GRPC Port listened to by the model server.')\nparser.add_argument('--workers', default=0, type=int,\n help='The number of works to fork')\nargs, _ = parser.parse_known_args()\n\nlogging.basicConfig(level=constants.KFSERVING_LOGLEVEL)\n\n\nclass KFServer():\n def __init__(self, http_port: int = args.http_port,\n grpc_port: int = args.grpc_port,\n workers: int = args.workers):\n self.registered_models = {}\n self.http_port = http_port\n self.grpc_port = grpc_port\n self.workers = workers\n self._http_server = None\n\n def create_application(self):\n return tornado.web.Application([\n # Server Liveness API returns 200 if server is alive.\n (r\"/\", LivenessHandler),\n (r\"/v1/models\",\n ListHandler, dict(models=self.registered_models)),\n # Model Health API returns 200 if model is ready to serve.\n (r\"/v1/models/([a-zA-Z0-9_-]+)\",\n HealthHandler, dict(models=self.registered_models)),\n (r\"/v1/models/([a-zA-Z0-9_-]+):predict\",\n PredictHandler, dict(models=self.registered_models)),\n (r\"/v1/models/([a-zA-Z0-9_-]+):explain\",\n ExplainHandler, dict(models=self.registered_models)),\n ])\n\n def start(self, models: List[KFModel]):\n for model in models:\n self.register_model(model)\n\n self._http_server = tornado.httpserver.HTTPServer(\n self.create_application())\n\n logging.info(\"Listening on port %s\", self.http_port)\n self._http_server.bind(self.http_port)\n logging.info(\"Will fork %d workers\", self.workers)\n self._http_server.start(self.workers)\n tornado.ioloop.IOLoop.current().start()\n\n def register_model(self, model: KFModel):\n if not model.name:\n raise Exception(\n \"Failed to register model, model.name must be provided.\")\n self.registered_models[model.name] = model\n logging.info(\"Registering model: %s\", model.name)\n\n\nclass LivenessHandler(tornado.web.RequestHandler): # pylint:disable=too-few-public-methods\n def get(self):\n self.write(\"Alive\")\n\n\nclass HealthHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get(self, name: str):\n if name not in self.models:\n raise tornado.web.HTTPError(\n status_code=404,\n reason=\"Model with name %s does not exist.\" % name\n )\n\n model = self.models[name]\n 
self.write(json.dumps({\n \"name\": model.name,\n \"ready\": model.ready\n }))\n\n\nclass ListHandler(tornado.web.RequestHandler):\n def initialize(self, models: Dict[str, KFModel]):\n self.models = models # pylint:disable=attribute-defined-outside-init\n\n def get(self):\n self.write(json.dumps(list(self.models.values())))\n", "path": "python/kfserving/kfserving/kfserver.py"}]} | 2,299 | 270 |
gh_patches_debug_35906 | rasdani/github-patches | git_diff | streamlink__streamlink-5754 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.bigo: Unable to parse JSON
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest release
### Description
Hello,
the bigo.py plugin is not working at the moment.
It is giving an "Unable to parse JSON" error.
The debug log is included below.
### Debug log
```text
error: Unable to parse JSON: Expecting value: line 1 column 1 (char 0) ('<!DOCTYPE html>\n<html lang="en" s ...)
```
</issue>
<code>
[start of src/streamlink/plugins/bigo.py]
1 """
2 $description Global live streaming platform for live video game broadcasts and individual live streams.
3 $url live.bigo.tv
4 $url bigoweb.co
5 $type live
6 """
7
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import useragents, validate
12 from streamlink.stream.hls import HLSStream
13
14
15 @pluginmatcher(re.compile(
16 r"https?://(?:www\.)?bigo\.tv/([^/]+)$",
17 ))
18 class Bigo(Plugin):
19 _api_url = "https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}"
20
21 _video_info_schema = validate.Schema({
22 "code": 0,
23 "msg": "success",
24 "data": {
25 "videoSrc": validate.any(None, "", validate.url()),
26 },
27 })
28
29 def _get_streams(self):
30 res = self.session.http.get(
31 self._api_url.format(self.match.group(1)),
32 allow_redirects=True,
33 headers={"User-Agent": useragents.IPHONE_6},
34 )
35 data = self.session.http.json(res, schema=self._video_info_schema)
36 videourl = data["data"]["videoSrc"]
37 if videourl:
38 yield "live", HLSStream(self.session, videourl)
39
40
41 __plugin__ = Bigo
42
[end of src/streamlink/plugins/bigo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/bigo.py b/src/streamlink/plugins/bigo.py
--- a/src/streamlink/plugins/bigo.py
+++ b/src/streamlink/plugins/bigo.py
@@ -1,41 +1,68 @@
"""
-$description Global live streaming platform for live video game broadcasts and individual live streams.
-$url live.bigo.tv
-$url bigoweb.co
+$description Global live-streaming platform for live video game broadcasts and individual live streams.
+$url bigo.tv
$type live
+$metadata id
+$metadata author
+$metadata category
+$metadata title
"""
+import logging
import re
from streamlink.plugin import Plugin, pluginmatcher
-from streamlink.plugin.api import useragents, validate
+from streamlink.plugin.api import validate
from streamlink.stream.hls import HLSStream
+log = logging.getLogger(__name__)
+
+
@pluginmatcher(re.compile(
- r"https?://(?:www\.)?bigo\.tv/([^/]+)$",
+ r"https?://(?:www\.)?bigo\.tv/(?P<site_id>[^/]+)$",
))
class Bigo(Plugin):
- _api_url = "https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}"
-
- _video_info_schema = validate.Schema({
- "code": 0,
- "msg": "success",
- "data": {
- "videoSrc": validate.any(None, "", validate.url()),
- },
- })
+ _URL_API = "https://ta.bigo.tv/official_website/studio/getInternalStudioInfo"
def _get_streams(self):
- res = self.session.http.get(
- self._api_url.format(self.match.group(1)),
- allow_redirects=True,
- headers={"User-Agent": useragents.IPHONE_6},
+ self.id, self.author, self.category, self.title, hls_url = self.session.http.post(
+ self._URL_API,
+ params={
+ "siteId": self.match["site_id"],
+ "verify": "",
+ },
+ schema=validate.Schema(
+ validate.parse_json(),
+ {
+ "code": 0,
+ "msg": "success",
+ "data": {
+ "roomId": validate.any(None, str),
+ "clientBigoId": validate.any(None, str),
+ "gameTitle": str,
+ "roomTopic": str,
+ "hls_src": validate.any(None, "", validate.url()),
+ },
+ },
+ validate.union_get(
+ ("data", "roomId"),
+ ("data", "clientBigoId"),
+ ("data", "gameTitle"),
+ ("data", "roomTopic"),
+ ("data", "hls_src"),
+ ),
+ ),
)
- data = self.session.http.json(res, schema=self._video_info_schema)
- videourl = data["data"]["videoSrc"]
- if videourl:
- yield "live", HLSStream(self.session, videourl)
+
+ if not self.id:
+ return
+
+ if not hls_url:
+ log.info("Channel is offline")
+ return
+
+ yield "live", HLSStream(self.session, hls_url)
__plugin__ = Bigo
| {"golden_diff": "diff --git a/src/streamlink/plugins/bigo.py b/src/streamlink/plugins/bigo.py\n--- a/src/streamlink/plugins/bigo.py\n+++ b/src/streamlink/plugins/bigo.py\n@@ -1,41 +1,68 @@\n \"\"\"\n-$description Global live streaming platform for live video game broadcasts and individual live streams.\n-$url live.bigo.tv\n-$url bigoweb.co\n+$description Global live-streaming platform for live video game broadcasts and individual live streams.\n+$url bigo.tv\n $type live\n+$metadata id\n+$metadata author\n+$metadata category\n+$metadata title\n \"\"\"\n \n+import logging\n import re\n \n from streamlink.plugin import Plugin, pluginmatcher\n-from streamlink.plugin.api import useragents, validate\n+from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n \n \n+log = logging.getLogger(__name__)\n+\n+\n @pluginmatcher(re.compile(\n- r\"https?://(?:www\\.)?bigo\\.tv/([^/]+)$\",\n+ r\"https?://(?:www\\.)?bigo\\.tv/(?P<site_id>[^/]+)$\",\n ))\n class Bigo(Plugin):\n- _api_url = \"https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}\"\n-\n- _video_info_schema = validate.Schema({\n- \"code\": 0,\n- \"msg\": \"success\",\n- \"data\": {\n- \"videoSrc\": validate.any(None, \"\", validate.url()),\n- },\n- })\n+ _URL_API = \"https://ta.bigo.tv/official_website/studio/getInternalStudioInfo\"\n \n def _get_streams(self):\n- res = self.session.http.get(\n- self._api_url.format(self.match.group(1)),\n- allow_redirects=True,\n- headers={\"User-Agent\": useragents.IPHONE_6},\n+ self.id, self.author, self.category, self.title, hls_url = self.session.http.post(\n+ self._URL_API,\n+ params={\n+ \"siteId\": self.match[\"site_id\"],\n+ \"verify\": \"\",\n+ },\n+ schema=validate.Schema(\n+ validate.parse_json(),\n+ {\n+ \"code\": 0,\n+ \"msg\": \"success\",\n+ \"data\": {\n+ \"roomId\": validate.any(None, str),\n+ \"clientBigoId\": validate.any(None, str),\n+ \"gameTitle\": str,\n+ \"roomTopic\": str,\n+ \"hls_src\": validate.any(None, \"\", validate.url()),\n+ },\n+ },\n+ validate.union_get(\n+ (\"data\", \"roomId\"),\n+ (\"data\", \"clientBigoId\"),\n+ (\"data\", \"gameTitle\"),\n+ (\"data\", \"roomTopic\"),\n+ (\"data\", \"hls_src\"),\n+ ),\n+ ),\n )\n- data = self.session.http.json(res, schema=self._video_info_schema)\n- videourl = data[\"data\"][\"videoSrc\"]\n- if videourl:\n- yield \"live\", HLSStream(self.session, videourl)\n+\n+ if not self.id:\n+ return\n+\n+ if not hls_url:\n+ log.info(\"Channel is offline\")\n+ return\n+\n+ yield \"live\", HLSStream(self.session, hls_url)\n \n \n __plugin__ = Bigo\n", "issue": "plugins.bigo: Unable to parse JSON\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest release\n\n### Description\n\nHello,\r\n\r\nthe bigo.py is not working at the moment.\r\n\r\nIt is giving a parse JSON error.\r\n\r\nDebug log is following...\n\n### Debug log\n\n```text\nerror: Unable to parse JSON: Expecting value: line 1 column 1 (char 0) ('<!DOCTYPE html>\\n<html 
lang=\"en\" s ...)\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Global live streaming platform for live video game broadcasts and individual live streams.\n$url live.bigo.tv\n$url bigoweb.co\n$type live\n\"\"\"\n\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import useragents, validate\nfrom streamlink.stream.hls import HLSStream\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?bigo\\.tv/([^/]+)$\",\n))\nclass Bigo(Plugin):\n _api_url = \"https://www.bigo.tv/OInterface/getVideoParam?bigoId={0}\"\n\n _video_info_schema = validate.Schema({\n \"code\": 0,\n \"msg\": \"success\",\n \"data\": {\n \"videoSrc\": validate.any(None, \"\", validate.url()),\n },\n })\n\n def _get_streams(self):\n res = self.session.http.get(\n self._api_url.format(self.match.group(1)),\n allow_redirects=True,\n headers={\"User-Agent\": useragents.IPHONE_6},\n )\n data = self.session.http.json(res, schema=self._video_info_schema)\n videourl = data[\"data\"][\"videoSrc\"]\n if videourl:\n yield \"live\", HLSStream(self.session, videourl)\n\n\n__plugin__ = Bigo\n", "path": "src/streamlink/plugins/bigo.py"}]} | 1,144 | 724 |
gh_patches_debug_10737 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1137 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash on starlette middleware in mount endpoints (StaticFiles)
**Describe the bug**: The Elastic middleware crashes on access to static files
**Environment (please complete the following information)**
- OS: [e.g. Linux]
- Python version: 3.9.4
- Framework and version [e.g. Django 2.1]: FastAPI
- APM Server version: 7.12.1
- Agent version: 7.12.1
```python
from __future__ import annotations
import uvicorn
from elasticapm.contrib.starlette import ElasticAPM, make_apm_client
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
app = FastAPI(routes=[])
apm_config = {
"SERVICE_NAME": "testing",
"ENABLED": True,
"SERVER_URL": "http://localhost:8200",
"CAPTURE_HEADERS": True,
"CAPTURE_BODY": "all",
}
apm = make_apm_client(apm_config)
app.add_middleware(ElasticAPM, client=apm)
app.mount("/static",StaticFiles(directory="/"),name="static")
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", debug=False)
```
The issue happens when elastic runs the function _get_route_name.
The middleware will try to loop over the routes, where starlette will return None in some cases:
```python
File "/home/syniex/.local/lib/python3.9/site-packages/elasticapm/contrib/starlette/__init__.py", line 233, in _get_route_name
for route in routes:
└ None
TypeError: 'NoneType' object is not iterable
```
Starlette code that causes the issue:
```python
@property
def routes(self) -> typing.List[BaseRoute]:
return getattr(self.app, "routes", None)
```
</issue>
<code>
[start of elasticapm/contrib/starlette/__init__.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 from __future__ import absolute_import
33
34 from typing import Dict, Optional
35
36 import starlette
37 from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
38 from starlette.requests import Request
39 from starlette.responses import Response
40 from starlette.routing import Match, Mount
41 from starlette.types import ASGIApp
42
43 import elasticapm
44 import elasticapm.instrumentation.control
45 from elasticapm.base import Client
46 from elasticapm.conf import constants
47 from elasticapm.contrib.asyncio.traces import set_context
48 from elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response
49 from elasticapm.utils.disttracing import TraceParent
50 from elasticapm.utils.logging import get_logger
51
52 logger = get_logger("elasticapm.errors.client")
53
54
55 def make_apm_client(config: Optional[Dict] = None, client_cls=Client, **defaults) -> Client:
56 """Builds ElasticAPM client.
57
58 Args:
59 config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.
60 client_cls (Client): Must be Client or its child.
61 **defaults: Additional parameters for Client. See `elasticapm.base.Client`
62
63 Returns:
64 Client
65 """
66 if "framework_name" not in defaults:
67 defaults["framework_name"] = "starlette"
68 defaults["framework_version"] = starlette.__version__
69
70 return client_cls(config, **defaults)
71
72
73 class ElasticAPM(BaseHTTPMiddleware):
74 """
75 Starlette / FastAPI middleware for Elastic APM capturing.
76
77 >>> elasticapm = make_apm_client({
78 >>> 'SERVICE_NAME': 'myapp',
79 >>> 'DEBUG': True,
80 >>> 'SERVER_URL': 'http://localhost:8200',
81 >>> 'CAPTURE_HEADERS': True,
82 >>> 'CAPTURE_BODY': 'all'
83 >>> })
84
85 >>> app.add_middleware(ElasticAPM, client=elasticapm)
86
87 Pass an arbitrary APP_NAME and SECRET_TOKEN::
88
89 >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')
90
91 Pass an explicit client::
92
93 >>> elasticapm = ElasticAPM(app, client=client)
94
95 Automatically configure logging::
96
97 >>> elasticapm = ElasticAPM(app, logging=True)
98
99 Capture an exception::
100
101 >>> try:
102 >>> 1 / 0
103 >>> except ZeroDivisionError:
104 >>> elasticapm.capture_exception()
105
106 Capture a message::
107
108 >>> elasticapm.capture_message('hello, world!')
109 """
110
111 def __init__(self, app: ASGIApp, client: Client):
112 """
113
114 Args:
115 app (ASGIApp): Starlette app
116 client (Client): ElasticAPM Client
117 """
118 self.client = client
119
120 if self.client.config.instrument and self.client.config.enabled:
121 elasticapm.instrumentation.control.instrument()
122
123 super().__init__(app)
124
125 async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
126 """Processes the whole request APM capturing.
127
128 Args:
129 request (Request)
130 call_next (RequestResponseEndpoint): Next request process in Starlette.
131
132 Returns:
133 Response
134 """
135 await self._request_started(request)
136
137 try:
138 response = await call_next(request)
139 elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)
140 except Exception:
141 await self.capture_exception(
142 context={"request": await get_data_from_request(request, self.client.config, constants.ERROR)}
143 )
144 elasticapm.set_transaction_result("HTTP 5xx", override=False)
145 elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)
146 elasticapm.set_context({"status_code": 500}, "response")
147
148 raise
149 else:
150 await self._request_finished(response)
151 finally:
152 self.client.end_transaction()
153
154 return response
155
156 async def capture_exception(self, *args, **kwargs):
157 """Captures your exception.
158
159 Args:
160 *args:
161 **kwargs:
162 """
163 self.client.capture_exception(*args, **kwargs)
164
165 async def capture_message(self, *args, **kwargs):
166 """Captures your message.
167
168 Args:
169 *args: Whatever
170 **kwargs: Whatever
171 """
172 self.client.capture_message(*args, **kwargs)
173
174 async def _request_started(self, request: Request):
175 """Captures the begin of the request processing to APM.
176
177 Args:
178 request (Request)
179 """
180 # When we consume the body, we replace the streaming mechanism with
181 # a mocked version -- this workaround came from
182 # https://github.com/encode/starlette/issues/495#issuecomment-513138055
183 # and we call the workaround here to make sure that regardless of
184 # `capture_body` settings, we will have access to the body if we need it.
185 if self.client.config.capture_body != "off":
186 await get_body(request)
187
188 if not self.client.should_ignore_url(request.url.path):
189 trace_parent = TraceParent.from_headers(dict(request.headers))
190 self.client.begin_transaction("request", trace_parent=trace_parent)
191
192 await set_context(
193 lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), "request"
194 )
195 transaction_name = self.get_route_name(request) or request.url.path
196 elasticapm.set_transaction_name("{} {}".format(request.method, transaction_name), override=False)
197
198 async def _request_finished(self, response: Response):
199 """Captures the end of the request processing to APM.
200
201 Args:
202 response (Response)
203 """
204 await set_context(
205 lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), "response"
206 )
207
208 result = "HTTP {}xx".format(response.status_code // 100)
209 elasticapm.set_transaction_result(result, override=False)
210
211 def get_route_name(self, request: Request) -> str:
212 app = request.app
213 scope = request.scope
214 routes = app.routes
215 route_name = self._get_route_name(scope, routes)
216
217 # Starlette magically redirects requests if the path matches a route name with a trailing slash
218 # appended or removed. To not spam the transaction names list, we do the same here and put these
219 # redirects all in the same "redirect trailing slashes" transaction name
220 if not route_name and app.router.redirect_slashes and scope["path"] != "/":
221 redirect_scope = dict(scope)
222 if scope["path"].endswith("/"):
223 redirect_scope["path"] = scope["path"][:-1]
224 trim = True
225 else:
226 redirect_scope["path"] = scope["path"] + "/"
227 trim = False
228
229 route_name = self._get_route_name(redirect_scope, routes)
230 if route_name is not None:
231 route_name = route_name + "/" if trim else route_name[:-1]
232 return route_name
233
234 def _get_route_name(self, scope, routes, route_name=None):
235 for route in routes:
236 match, child_scope = route.matches(scope)
237 if match == Match.FULL:
238 route_name = route.path
239 child_scope = {**scope, **child_scope}
240 if isinstance(route, Mount):
241 child_route_name = self._get_route_name(child_scope, route.routes, route_name)
242 if child_route_name is None:
243 route_name = None
244 else:
245 route_name += child_route_name
246 return route_name
247 elif match == Match.PARTIAL and route_name is None:
248 route_name = route.path
249
[end of elasticapm/contrib/starlette/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py
--- a/elasticapm/contrib/starlette/__init__.py
+++ b/elasticapm/contrib/starlette/__init__.py
@@ -237,7 +237,7 @@
if match == Match.FULL:
route_name = route.path
child_scope = {**scope, **child_scope}
- if isinstance(route, Mount):
+ if isinstance(route, Mount) and route.routes:
child_route_name = self._get_route_name(child_scope, route.routes, route_name)
if child_route_name is None:
route_name = None
| {"golden_diff": "diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py\n--- a/elasticapm/contrib/starlette/__init__.py\n+++ b/elasticapm/contrib/starlette/__init__.py\n@@ -237,7 +237,7 @@\n if match == Match.FULL:\n route_name = route.path\n child_scope = {**scope, **child_scope}\n- if isinstance(route, Mount):\n+ if isinstance(route, Mount) and route.routes:\n child_route_name = self._get_route_name(child_scope, route.routes, route_name)\n if child_route_name is None:\n route_name = None\n", "issue": "Crash on starlette middleware in mount endpoints (StaticFiles)\n**Describe the bug**: Elastic middleware crash on access to static files\r\n\r\n**Environment (please complete the following information)**\r\n- OS: [e.g. Linux]\r\n- Python version: 3.9.4\r\n- Framework and version [e.g. Django 2.1]: FastAPI\r\n- APM Server version: 7.12.1\r\n- Agent version: 7.12.1\r\n\r\n```python\r\nfrom __future__ import annotations\r\n\r\nimport uvicorn\r\nfrom elasticapm.contrib.starlette import ElasticAPM, make_apm_client\r\nfrom fastapi import FastAPI\r\nfrom fastapi.staticfiles import StaticFiles\r\n\r\napp = FastAPI(routes=[])\r\napm_config = {\r\n \"SERVICE_NAME\": \"testing\",\r\n \"ENABLED\": True,\r\n \"SERVER_URL\": \"http://localhost:8200\",\r\n \"CAPTURE_HEADERS\": True,\r\n \"CAPTURE_BODY\": \"all\",\r\n}\r\napm = make_apm_client(apm_config)\r\napp.add_middleware(ElasticAPM, client=apm)\r\napp.mount(\"/static\",StaticFiles(directory=\"/\"),name=\"static\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n uvicorn.run(app, host=\"0.0.0.0\", debug=False)\r\n```\r\n\r\nThe issue is happens when elastic running the function _get_route_name.\r\nthe middleware will try to loop over the routes where starlette will return None in some cases:\r\n```python\r\n File \"/home/syniex/.local/lib/python3.9/site-packages/elasticapm/contrib/starlette/__init__.py\", line 233, in _get_route_name\r\n for route in routes:\r\n \u2514 None\r\n\r\nTypeError: 'NoneType' object is not iterable\r\n\r\n```\r\nStarlette code that makes the issue:\r\n```python\r\n@property\r\n def routes(self) -> typing.List[BaseRoute]:\r\n return getattr(self.app, \"routes\", None)\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nfrom typing import Dict, Optional\n\nimport starlette\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.routing import Match, Mount\nfrom starlette.types import ASGIApp\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants\nfrom elasticapm.contrib.asyncio.traces import set_context\nfrom elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_apm_client(config: Optional[Dict] = None, client_cls=Client, **defaults) -> Client:\n \"\"\"Builds ElasticAPM client.\n\n Args:\n config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.\n client_cls (Client): Must be Client or its child.\n **defaults: Additional parameters for Client. See `elasticapm.base.Client`\n\n Returns:\n Client\n \"\"\"\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"starlette\"\n defaults[\"framework_version\"] = starlette.__version__\n\n return client_cls(config, **defaults)\n\n\nclass ElasticAPM(BaseHTTPMiddleware):\n \"\"\"\n Starlette / FastAPI middleware for Elastic APM capturing.\n\n >>> elasticapm = make_apm_client({\n >>> 'SERVICE_NAME': 'myapp',\n >>> 'DEBUG': True,\n >>> 'SERVER_URL': 'http://localhost:8200',\n >>> 'CAPTURE_HEADERS': True,\n >>> 'CAPTURE_BODY': 'all'\n >>> })\n\n >>> app.add_middleware(ElasticAPM, client=elasticapm)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app: ASGIApp, client: Client):\n \"\"\"\n\n Args:\n app (ASGIApp): Starlette app\n client (Client): ElasticAPM Client\n \"\"\"\n self.client = client\n\n if self.client.config.instrument and self.client.config.enabled:\n elasticapm.instrumentation.control.instrument()\n\n super().__init__(app)\n\n async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:\n \"\"\"Processes the whole request APM capturing.\n\n Args:\n request (Request)\n call_next (RequestResponseEndpoint): Next request process in Starlette.\n\n Returns:\n Response\n \"\"\"\n await self._request_started(request)\n\n try:\n response = await call_next(request)\n elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)\n except Exception:\n await 
self.capture_exception(\n context={\"request\": await get_data_from_request(request, self.client.config, constants.ERROR)}\n )\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n raise\n else:\n await self._request_finished(response)\n finally:\n self.client.end_transaction()\n\n return response\n\n async def capture_exception(self, *args, **kwargs):\n \"\"\"Captures your exception.\n\n Args:\n *args:\n **kwargs:\n \"\"\"\n self.client.capture_exception(*args, **kwargs)\n\n async def capture_message(self, *args, **kwargs):\n \"\"\"Captures your message.\n\n Args:\n *args: Whatever\n **kwargs: Whatever\n \"\"\"\n self.client.capture_message(*args, **kwargs)\n\n async def _request_started(self, request: Request):\n \"\"\"Captures the begin of the request processing to APM.\n\n Args:\n request (Request)\n \"\"\"\n # When we consume the body, we replace the streaming mechanism with\n # a mocked version -- this workaround came from\n # https://github.com/encode/starlette/issues/495#issuecomment-513138055\n # and we call the workaround here to make sure that regardless of\n # `capture_body` settings, we will have access to the body if we need it.\n if self.client.config.capture_body != \"off\":\n await get_body(request)\n\n if not self.client.should_ignore_url(request.url.path):\n trace_parent = TraceParent.from_headers(dict(request.headers))\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n await set_context(\n lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n transaction_name = self.get_route_name(request) or request.url.path\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, transaction_name), override=False)\n\n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n\n Args:\n response (Response)\n \"\"\"\n await set_context(\n lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), \"response\"\n )\n\n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n\n def get_route_name(self, request: Request) -> str:\n app = request.app\n scope = request.scope\n routes = app.routes\n route_name = self._get_route_name(scope, routes)\n\n # Starlette magically redirects requests if the path matches a route name with a trailing slash\n # appended or removed. 
To not spam the transaction names list, we do the same here and put these\n # redirects all in the same \"redirect trailing slashes\" transaction name\n if not route_name and app.router.redirect_slashes and scope[\"path\"] != \"/\":\n redirect_scope = dict(scope)\n if scope[\"path\"].endswith(\"/\"):\n redirect_scope[\"path\"] = scope[\"path\"][:-1]\n trim = True\n else:\n redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n trim = False\n\n route_name = self._get_route_name(redirect_scope, routes)\n if route_name is not None:\n route_name = route_name + \"/\" if trim else route_name[:-1]\n return route_name\n\n def _get_route_name(self, scope, routes, route_name=None):\n for route in routes:\n match, child_scope = route.matches(scope)\n if match == Match.FULL:\n route_name = route.path\n child_scope = {**scope, **child_scope}\n if isinstance(route, Mount):\n child_route_name = self._get_route_name(child_scope, route.routes, route_name)\n if child_route_name is None:\n route_name = None\n else:\n route_name += child_route_name\n return route_name\n elif match == Match.PARTIAL and route_name is None:\n route_name = route.path\n", "path": "elasticapm/contrib/starlette/__init__.py"}]} | 3,603 | 154 |
gh_patches_debug_1741 | rasdani/github-patches | git_diff | xonsh__xonsh-1265 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Amalgamate build on Anaconda.org fails
It builds fine on my machine so I am not really sure what is going on here:
https://anaconda.org/xonsh/xonsh/builds/403/0
``` bash
+ python setup.py install --single-version-externally-managed --record=record.txt
Traceback (most recent call last):
File "setup.py", line 210, in <module>
main()
File "setup.py", line 206, in main
setup(**skw)
File "/opt/miniconda/envs/_build/lib/python3.4/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/miniconda/envs/_build/lib/python3.4/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/opt/miniconda/envs/_build/lib/python3.4/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "setup.py", line 99, in run
build_tables()
File "setup.py", line 59, in build_tables
amalgamate.main(['amalgamate', '--debug=XONSH_DEBUG', 'xonsh'])
File "/opt/miniconda/conda-bld/work/amalgamate.py", line 369, in main
graph = make_graph(pkg, exclude=exclude)
File "/opt/miniconda/conda-bld/work/amalgamate.py", line 94, in make_graph
graph[base] = make_node(base, pkg, allowed)
File "/opt/miniconda/conda-bld/work/amalgamate.py", line 54, in make_node
raw = SOURCES[pkg, name]
File "/opt/miniconda/conda-bld/work/amalgamate.py", line 39, in __getitem__
raw = f.read()
File "/opt/miniconda/envs/_build/lib/python3.4/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc7 in position 2102: ordinal not in range(128)
Command failed: /bin/bash -x -e /opt/miniconda/conda-bld/work/conda_build.sh
discarding /home/dev/env/bin from PATH
```
</issue>
<code>
[start of xonsh/platform.py]
1 """Module for platform-specific constants and implementations, as well as
2 compatibility layers to make use of the 'best' implementation available
3 on a platform.
4 """
5 import os
6 import sys
7 import pathlib
8 import platform
9 import functools
10 import subprocess
11 import importlib.util
12
13 from xonsh.lazyasd import LazyObject, LazyBool
14
15 def _distro():
16 try:
17 import distro as d
18 except ImportError:
19 d = None
20 except:
21 raise
22 return d
23
24
25 distro = LazyObject(_distro, globals(), 'distro')
26 del _distro
27
28 # do not import any xonsh-modules here to avoid circular dependencies
29
30
31 #
32 # OS
33 #
34 ON_DARWIN = LazyBool(lambda: platform.system() == 'Darwin',
35 globals(), 'ON_DARWIN')
36 """ ``True`` if executed on a Darwin platform, else ``False``. """
37 ON_LINUX = LazyBool(lambda: platform.system() == 'Linux',
38 globals(), 'ON_LINUX')
39 """ ``True`` if executed on a Linux platform, else ``False``. """
40 ON_WINDOWS = LazyBool(lambda: platform.system() == 'Windows',
41 globals(), 'ON_WINDOWS')
42 """ ``True`` if executed on a native Windows platform, else ``False``. """
43 ON_CYGWIN = LazyBool(lambda: sys.platform == 'cygwin', globals(), 'ON_CYGWIN')
44 """ ``True`` if executed on a Cygwin Windows platform, else ``False``. """
45 ON_POSIX = LazyBool(lambda: (os.name == 'posix'), globals(), 'ON_POSIX')
46 """ ``True`` if executed on a POSIX-compliant platform, else ``False``. """
47
48
49 #
50 # Python & packages
51 #
52
53 PYTHON_VERSION_INFO = sys.version_info[:3]
54 """ Version of Python interpreter as three-value tuple. """
55 ON_ANACONDA = LazyBool(
56 lambda: any(s in sys.version for s in {'Anaconda', 'Continuum'}),
57 globals(), 'ON_ANACONDA')
58 """ ``True`` if executed in an Anaconda instance, else ``False``. """
59
60 def _has_pygments():
61 spec = importlib.util.find_spec('pygments')
62 return (spec is not None)
63
64
65 HAS_PYGMENTS = LazyBool(_has_pygments, globals(), 'HAS_PYGMENTS')
66 """ ``True`` if `pygments` is available, else ``False``. """
67 del _has_pygments
68
69
70 @functools.lru_cache(1)
71 def pygments_version():
72 """pygments.__version__ version if available, else Ǹone."""
73 if HAS_PYGMENTS:
74 import pygments
75 v = pygments.__version__
76 else:
77 v = None
78 return v
79
80
81 @functools.lru_cache(1)
82 def has_prompt_toolkit():
83 """ Tests if the `prompt_toolkit` is available. """
84 spec = importlib.util.find_spec('pygments')
85 return (spec is not None)
86
87
88 @functools.lru_cache(1)
89 def ptk_version():
90 """ Returns `prompt_toolkit.__version__` if available, else ``None``. """
91 if has_prompt_toolkit():
92 import prompt_toolkit
93 return getattr(prompt_toolkit, '__version__', '<0.57')
94 else:
95 return None
96
97
98 @functools.lru_cache(1)
99 def ptk_version_info():
100 """ Returns `prompt_toolkit`'s version as tuple of integers. """
101 if has_prompt_toolkit():
102 return tuple(int(x) for x in ptk_version().strip('<>+-=.').split('.'))
103 else:
104 return None
105
106
107 @functools.lru_cache(1)
108 def best_shell_type():
109 if ON_WINDOWS or has_prompt_toolkit():
110 return 'prompt_toolkit'
111 else:
112 return 'readline'
113
114
115 @functools.lru_cache(1)
116 def is_readline_available():
117 """Checks if readline is available to import."""
118 spec = importlib.util.find_spec('readline')
119 return (spec is not None)
120
121
122 #
123 # Encoding
124 #
125
126 DEFAULT_ENCODING = sys.getdefaultencoding()
127 """ Default string encoding. """
128
129
130 if PYTHON_VERSION_INFO < (3, 5, 0):
131 class DirEntry:
132 def __init__(self, directory, name):
133 self.__path__ = pathlib.Path(directory) / name
134 self.name = name
135 self.path = str(self.__path__)
136 self.is_symlink = self.__path__.is_symlink
137
138 def inode(self):
139 return os.stat(self.path, follow_symlinks=False).st_ino
140
141 def is_dir(self, *, follow_symlinks=True):
142 if follow_symlinks:
143 return self.__path__.is_dir()
144 else:
145 return not self.__path__.is_symlink() \
146 and self.__path__.is_dir()
147
148 def is_file(self, *, follow_symlinks=True):
149 if follow_symlinks:
150 return self.__path__.is_file()
151 else:
152 return not self.__path__.is_symlink() \
153 and self.__path__.is_file()
154
155 def stat(self, *, follow_symlinks=True):
156 return os.stat(self.path, follow_symlinks=follow_symlinks)
157
158 def scandir(path):
159 """ Compatibility layer for `os.scandir` from Python 3.5+. """
160 return (DirEntry(path, x) for x in os.listdir(path))
161 else:
162 scandir = os.scandir
163
164
165 #
166 # Linux distro
167 #
168
169 @functools.lru_cache(1)
170 def linux_distro():
171 """The id of the Linux distribution running on, possibly 'unknown'.
172 None on non-Linux platforms.
173 """
174 if ON_LINUX:
175 if distro:
176 ld = distro.id()
177 elif PYTHON_VERSION_INFO < (3, 7, 0):
178 ld = platform.linux_distribution()[0] or 'unknown'
179 elif '-ARCH-' in platform.platform():
180 ld = 'arch' # that's the only one we need to know for now
181 else:
182 ld = 'unknown'
183 else:
184 ld = None
185 return ld
186
187
188 #
189 # Windows
190 #
191
192 @functools.lru_cache(1)
193 def git_for_windows_path():
194 """Returns the path to git for windows, if available and None otherwise."""
195 import winreg
196 try:
197 key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
198 'SOFTWARE\\GitForWindows')
199 gfwp, _ = winreg.QueryValueEx(key, "InstallPath")
200 except FileNotFoundError:
201 gfwp = None
202 return gfwp
203
204
205 @functools.lru_cache(1)
206 def windows_bash_command():
207 """Determines teh command for Bash on windows."""
208 import winreg
209 # Check that bash is on path otherwise try the default directory
210 # used by Git for windows
211 wbc = 'bash'
212 try:
213 subprocess.check_call([wbc, '--version'],
214 stdout=subprocess.PIPE,
215 stderr=subprocess.PIPE)
216 except (FileNotFoundError, subprocess.CalledProcessError):
217 gfwp = git_for_windows_path()
218 if gfwp:
219 bashcmd = os.path.join(gfwp, 'bin\\bash.exe')
220 if os.path.isfile(bashcmd):
221 wbc = bashcmd
222 return wbc
223
224 #
225 # Environment variables defaults
226 #
227
228 def _bcd():
229 """A possibly empty tuple with default paths to Bash completions known for
230 the current platform.
231 """
232 if ON_LINUX or ON_CYGWIN:
233 if linux_distro() == 'arch':
234 bcd = (
235 '/usr/share/bash-completion/bash_completion',
236 '/usr/share/bash-completion/completions')
237 else:
238 bcd = ('/usr/share/bash-completion',
239 '/usr/share/bash-completion/completions')
240 elif ON_DARWIN:
241 bcd = ('/usr/local/etc/bash_completion',
242 '/opt/local/etc/profile.d/bash_completion.sh')
243 elif ON_WINDOWS and git_for_windows_path():
244 bcd = (os.path.join(git_for_windows_path(),
245 'usr\\share\\bash-completion'),
246 os.path.join(git_for_windows_path(),
247 'usr\\share\\bash-completion\\completions'),
248 os.path.join(git_for_windows_path(),
249 'mingw64\\share\\git\\completion\\git-completion.bash'))
250 else:
251 bcd = ()
252 return bcd
253
254
255 BASH_COMPLETIONS_DEFAULT = LazyObject(_bcd, globals(),
256 'BASH_COMPLETIONS_DEFAULT')
257 del _bcd
258
259
260 def _pd():
261 if ON_LINUX or ON_CYGWIN:
262 if linux_distro() == 'arch':
263 pd = ('/usr/local/sbin',
264 '/usr/local/bin', '/usr/bin', '/usr/bin/site_perl',
265 '/usr/bin/vendor_perl', '/usr/bin/core_perl')
266 else:
267 pd = (os.path.expanduser('~/bin'), '/usr/local/sbin',
268 '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin',
269 '/usr/games', '/usr/local/games')
270 elif ON_DARWIN:
271 pd = ('/usr/local/bin', '/usr/bin', '/bin', '/usr/sbin', '/sbin')
272 elif ON_WINDOWS:
273 import winreg
274 key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
275 r'SYSTEM\CurrentControlSet\Control\Session Manager\Environment')
276 pd = tuple(winreg.QueryValueEx(key, 'Path')[0].split(os.pathsep))
277 else:
278 pd = ()
279 return pd
280
281 PATH_DEFAULT = LazyObject(_pd, globals(), 'PATH_DEFAULT')
282 del _pd
283
[end of xonsh/platform.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/platform.py b/xonsh/platform.py
--- a/xonsh/platform.py
+++ b/xonsh/platform.py
@@ -69,7 +69,7 @@
@functools.lru_cache(1)
def pygments_version():
- """pygments.__version__ version if available, else Ǹone."""
+ """pygments.__version__ version if available, else None."""
if HAS_PYGMENTS:
import pygments
v = pygments.__version__
| {"golden_diff": "diff --git a/xonsh/platform.py b/xonsh/platform.py\n--- a/xonsh/platform.py\n+++ b/xonsh/platform.py\n@@ -69,7 +69,7 @@\n \n @functools.lru_cache(1)\n def pygments_version():\n- \"\"\"pygments.__version__ version if available, else \u01f8one.\"\"\"\n+ \"\"\"pygments.__version__ version if available, else None.\"\"\"\n if HAS_PYGMENTS:\n import pygments\n v = pygments.__version__\n", "issue": "Amalgamate build on Anaconda.org fails\nIt builds fine on my machine so I am not really sure what is going on here: \n\nhttps://anaconda.org/xonsh/xonsh/builds/403/0\n\n``` bash\n+ python setup.py install --single-version-externally-managed --record=record.txt\nTraceback (most recent call last):\n File \"setup.py\", line 210, in <module>\n main()\n File \"setup.py\", line 206, in main\n setup(**skw)\n File \"/opt/miniconda/envs/_build/lib/python3.4/distutils/core.py\", line 148, in setup\n dist.run_commands()\n File \"/opt/miniconda/envs/_build/lib/python3.4/distutils/dist.py\", line 955, in run_commands\n self.run_command(cmd)\n File \"/opt/miniconda/envs/_build/lib/python3.4/distutils/dist.py\", line 974, in run_command\n cmd_obj.run()\n File \"setup.py\", line 99, in run\n build_tables()\n File \"setup.py\", line 59, in build_tables\n amalgamate.main(['amalgamate', '--debug=XONSH_DEBUG', 'xonsh'])\n File \"/opt/miniconda/conda-bld/work/amalgamate.py\", line 369, in main\n graph = make_graph(pkg, exclude=exclude)\n File \"/opt/miniconda/conda-bld/work/amalgamate.py\", line 94, in make_graph\n graph[base] = make_node(base, pkg, allowed)\n File \"/opt/miniconda/conda-bld/work/amalgamate.py\", line 54, in make_node\n raw = SOURCES[pkg, name]\n File \"/opt/miniconda/conda-bld/work/amalgamate.py\", line 39, in __getitem__\n raw = f.read()\n File \"/opt/miniconda/envs/_build/lib/python3.4/encodings/ascii.py\", line 26, in decode\n return codecs.ascii_decode(input, self.errors)[0]\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xc7 in position 2102: ordinal not in range(128)\nCommand failed: /bin/bash -x -e /opt/miniconda/conda-bld/work/conda_build.sh\ndiscarding /home/dev/env/bin from PATH\n\n```\n\n", "before_files": [{"content": "\"\"\"Module for platform-specific constants and implementations, as well as\ncompatibility layers to make use of the 'best' implementation available\non a platform.\n\"\"\"\nimport os\nimport sys\nimport pathlib\nimport platform\nimport functools\nimport subprocess\nimport importlib.util\n\nfrom xonsh.lazyasd import LazyObject, LazyBool\n\ndef _distro():\n try:\n import distro as d\n except ImportError:\n d = None\n except:\n raise\n return d\n\n\ndistro = LazyObject(_distro, globals(), 'distro')\ndel _distro\n\n# do not import any xonsh-modules here to avoid circular dependencies\n\n\n#\n# OS\n#\nON_DARWIN = LazyBool(lambda: platform.system() == 'Darwin',\n globals(), 'ON_DARWIN')\n\"\"\" ``True`` if executed on a Darwin platform, else ``False``. \"\"\"\nON_LINUX = LazyBool(lambda: platform.system() == 'Linux',\n globals(), 'ON_LINUX')\n\"\"\" ``True`` if executed on a Linux platform, else ``False``. \"\"\"\nON_WINDOWS = LazyBool(lambda: platform.system() == 'Windows',\n globals(), 'ON_WINDOWS')\n\"\"\" ``True`` if executed on a native Windows platform, else ``False``. \"\"\"\nON_CYGWIN = LazyBool(lambda: sys.platform == 'cygwin', globals(), 'ON_CYGWIN')\n\"\"\" ``True`` if executed on a Cygwin Windows platform, else ``False``. 
\"\"\"\nON_POSIX = LazyBool(lambda: (os.name == 'posix'), globals(), 'ON_POSIX')\n\"\"\" ``True`` if executed on a POSIX-compliant platform, else ``False``. \"\"\"\n\n\n#\n# Python & packages\n#\n\nPYTHON_VERSION_INFO = sys.version_info[:3]\n\"\"\" Version of Python interpreter as three-value tuple. \"\"\"\nON_ANACONDA = LazyBool(\n lambda: any(s in sys.version for s in {'Anaconda', 'Continuum'}),\n globals(), 'ON_ANACONDA')\n\"\"\" ``True`` if executed in an Anaconda instance, else ``False``. \"\"\"\n\ndef _has_pygments():\n spec = importlib.util.find_spec('pygments')\n return (spec is not None)\n\n\nHAS_PYGMENTS = LazyBool(_has_pygments, globals(), 'HAS_PYGMENTS')\n\"\"\" ``True`` if `pygments` is available, else ``False``. \"\"\"\ndel _has_pygments\n\n\[email protected]_cache(1)\ndef pygments_version():\n \"\"\"pygments.__version__ version if available, else \u01f8one.\"\"\"\n if HAS_PYGMENTS:\n import pygments\n v = pygments.__version__\n else:\n v = None\n return v\n\n\[email protected]_cache(1)\ndef has_prompt_toolkit():\n \"\"\" Tests if the `prompt_toolkit` is available. \"\"\"\n spec = importlib.util.find_spec('pygments')\n return (spec is not None)\n\n\[email protected]_cache(1)\ndef ptk_version():\n \"\"\" Returns `prompt_toolkit.__version__` if available, else ``None``. \"\"\"\n if has_prompt_toolkit():\n import prompt_toolkit\n return getattr(prompt_toolkit, '__version__', '<0.57')\n else:\n return None\n\n\[email protected]_cache(1)\ndef ptk_version_info():\n \"\"\" Returns `prompt_toolkit`'s version as tuple of integers. \"\"\"\n if has_prompt_toolkit():\n return tuple(int(x) for x in ptk_version().strip('<>+-=.').split('.'))\n else:\n return None\n\n\[email protected]_cache(1)\ndef best_shell_type():\n if ON_WINDOWS or has_prompt_toolkit():\n return 'prompt_toolkit'\n else:\n return 'readline'\n\n\[email protected]_cache(1)\ndef is_readline_available():\n \"\"\"Checks if readline is available to import.\"\"\"\n spec = importlib.util.find_spec('readline')\n return (spec is not None)\n\n\n#\n# Encoding\n#\n\nDEFAULT_ENCODING = sys.getdefaultencoding()\n\"\"\" Default string encoding. \"\"\"\n\n\nif PYTHON_VERSION_INFO < (3, 5, 0):\n class DirEntry:\n def __init__(self, directory, name):\n self.__path__ = pathlib.Path(directory) / name\n self.name = name\n self.path = str(self.__path__)\n self.is_symlink = self.__path__.is_symlink\n\n def inode(self):\n return os.stat(self.path, follow_symlinks=False).st_ino\n\n def is_dir(self, *, follow_symlinks=True):\n if follow_symlinks:\n return self.__path__.is_dir()\n else:\n return not self.__path__.is_symlink() \\\n and self.__path__.is_dir()\n\n def is_file(self, *, follow_symlinks=True):\n if follow_symlinks:\n return self.__path__.is_file()\n else:\n return not self.__path__.is_symlink() \\\n and self.__path__.is_file()\n\n def stat(self, *, follow_symlinks=True):\n return os.stat(self.path, follow_symlinks=follow_symlinks)\n\n def scandir(path):\n \"\"\" Compatibility layer for `os.scandir` from Python 3.5+. 
\"\"\"\n return (DirEntry(path, x) for x in os.listdir(path))\nelse:\n scandir = os.scandir\n\n\n#\n# Linux distro\n#\n\[email protected]_cache(1)\ndef linux_distro():\n \"\"\"The id of the Linux distribution running on, possibly 'unknown'.\n None on non-Linux platforms.\n \"\"\"\n if ON_LINUX:\n if distro:\n ld = distro.id()\n elif PYTHON_VERSION_INFO < (3, 7, 0):\n ld = platform.linux_distribution()[0] or 'unknown'\n elif '-ARCH-' in platform.platform():\n ld = 'arch' # that's the only one we need to know for now\n else:\n ld = 'unknown'\n else:\n ld = None\n return ld\n\n\n#\n# Windows\n#\n\[email protected]_cache(1)\ndef git_for_windows_path():\n \"\"\"Returns the path to git for windows, if available and None otherwise.\"\"\"\n import winreg\n try:\n key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,\n 'SOFTWARE\\\\GitForWindows')\n gfwp, _ = winreg.QueryValueEx(key, \"InstallPath\")\n except FileNotFoundError:\n gfwp = None\n return gfwp\n\n\[email protected]_cache(1)\ndef windows_bash_command():\n \"\"\"Determines teh command for Bash on windows.\"\"\"\n import winreg\n # Check that bash is on path otherwise try the default directory\n # used by Git for windows\n wbc = 'bash'\n try:\n subprocess.check_call([wbc, '--version'],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n except (FileNotFoundError, subprocess.CalledProcessError):\n gfwp = git_for_windows_path()\n if gfwp:\n bashcmd = os.path.join(gfwp, 'bin\\\\bash.exe')\n if os.path.isfile(bashcmd):\n wbc = bashcmd\n return wbc\n\n#\n# Environment variables defaults\n#\n\ndef _bcd():\n \"\"\"A possibly empty tuple with default paths to Bash completions known for\n the current platform.\n \"\"\"\n if ON_LINUX or ON_CYGWIN:\n if linux_distro() == 'arch':\n bcd = (\n '/usr/share/bash-completion/bash_completion',\n '/usr/share/bash-completion/completions')\n else:\n bcd = ('/usr/share/bash-completion',\n '/usr/share/bash-completion/completions')\n elif ON_DARWIN:\n bcd = ('/usr/local/etc/bash_completion',\n '/opt/local/etc/profile.d/bash_completion.sh')\n elif ON_WINDOWS and git_for_windows_path():\n bcd = (os.path.join(git_for_windows_path(),\n 'usr\\\\share\\\\bash-completion'),\n os.path.join(git_for_windows_path(),\n 'usr\\\\share\\\\bash-completion\\\\completions'),\n os.path.join(git_for_windows_path(),\n 'mingw64\\\\share\\\\git\\\\completion\\\\git-completion.bash'))\n else:\n bcd = ()\n return bcd\n\n\nBASH_COMPLETIONS_DEFAULT = LazyObject(_bcd, globals(),\n 'BASH_COMPLETIONS_DEFAULT')\ndel _bcd\n\n\ndef _pd():\n if ON_LINUX or ON_CYGWIN:\n if linux_distro() == 'arch':\n pd = ('/usr/local/sbin',\n '/usr/local/bin', '/usr/bin', '/usr/bin/site_perl',\n '/usr/bin/vendor_perl', '/usr/bin/core_perl')\n else:\n pd = (os.path.expanduser('~/bin'), '/usr/local/sbin',\n '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin',\n '/usr/games', '/usr/local/games')\n elif ON_DARWIN:\n pd = ('/usr/local/bin', '/usr/bin', '/bin', '/usr/sbin', '/sbin')\n elif ON_WINDOWS:\n import winreg\n key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,\n r'SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment')\n pd = tuple(winreg.QueryValueEx(key, 'Path')[0].split(os.pathsep))\n else:\n pd = ()\n return pd\n\nPATH_DEFAULT = LazyObject(_pd, globals(), 'PATH_DEFAULT')\ndel _pd\n", "path": "xonsh/platform.py"}]} | 3,895 | 112 |
gh_patches_debug_18315 | rasdani/github-patches | git_diff | microsoft__Qcodes-719 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IPInstrument._send() could not send all its data
I was browsing the instruments code and found out that in the `IPInstrument` class the `socket.send(...)` function is implemented wrongly:
```python
def _send(self, cmd):
data = cmd + self._terminator
self._socket.send(data.encode())
```
This is what the documentation says about `socket.send(...)`:
> socket.send(bytes[, flags])
Send data to the socket. The socket must be connected to a remote socket. The optional flags argument has the same meaning as for recv() above. Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data.
https://docs.python.org/3.6/library/socket.html
At this moment, if `send(...)` fails to send everything, only a part of the message is transmitted, which will create strange bugs.
A better solution is to use `socket.sendall(...)` or as the example shows:
```python
def mysend(self, msg):
totalsent = 0
while totalsent < MSGLEN:
sent = self.sock.send(msg[totalsent:])
if sent == 0:
raise RuntimeError("socket connection broken")
totalsent = totalsent + sent
```
https://docs.python.org/3.6/howto/sockets.html#socket-howto
IPInstrument._send() could not send all its data
I was browsing the instruments code and found out that in the `IPInstrument` class the `socket.send(...)` function is implemented wrongly:
```python
def _send(self, cmd):
data = cmd + self._terminator
self._socket.send(data.encode())
```
This is what the documentation says about `socket.send(...)`:
> socket.send(bytes[, flags])
Send data to the socket. The socket must be connected to a remote socket. The optional flags argument has the same meaning as for recv() above. Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data.
https://docs.python.org/3.6/library/socket.html
At this moment, if `send(...)` fails to send everything, only a part of the message is transmitted, which will create strange bugs.
A better solution is to use `socket.sendall(...)` or as the example shows:
```python
def mysend(self, msg):
totalsent = 0
while totalsent < MSGLEN:
sent = self.sock.send(msg[totalsent:])
if sent == 0:
raise RuntimeError("socket connection broken")
totalsent = totalsent + sent
```
https://docs.python.org/3.6/howto/sockets.html#socket-howto
</issue>
<code>
[start of qcodes/instrument/ip.py]
1 """Ethernet instrument driver class based on sockets."""
2 import socket
3
4 from .base import Instrument
5
6
7 class IPInstrument(Instrument):
8
9 r"""
10 Bare socket ethernet instrument implementation.
11
12 Args:
13 name (str): What this instrument is called locally.
14
15 address (Optional[str]): The IP address or name. If not given on
16 construction, must be provided before any communication.
17
18 port (Optional[int]): The IP port. If not given on construction, must
19 be provided before any communication.
20
21 timeout (number): Seconds to allow for responses. Default 5.
22
23 terminator (str): Character(s) to terminate each send. Default '\n'.
24
25 persistent (bool): Whether to leave the socket open between calls.
26 Default True.
27
28 write_confirmation (bool): Whether the instrument acknowledges writes
29 with some response we should read. Default True.
30
31 metadata (Optional[Dict]): additional static metadata to add to this
32 instrument's JSON snapshot.
33
34 See help for ``qcodes.Instrument`` for additional information on writing
35 instrument subclasses.
36 """
37
38 def __init__(self, name, address=None, port=None, timeout=5,
39 terminator='\n', persistent=True, write_confirmation=True, testing=False,
40 **kwargs):
41 super().__init__(name, testing=testing, **kwargs)
42
43 self._address = address
44 self._port = port
45 self._timeout = timeout
46 self._terminator = terminator
47 self._confirmation = write_confirmation
48
49 self._ensure_connection = EnsureConnection(self)
50 self._buffer_size = 1400
51
52 self._socket = None
53
54 self.set_persistent(persistent)
55
56 def set_address(self, address=None, port=None):
57 """
58 Change the IP address and/or port of this instrument.
59
60 Args:
61 address (Optional[str]): The IP address or name.
62 port (Optional[number]): The IP port.
63 """
64 if address is not None:
65 self._address = address
66 elif not hasattr(self, '_address'):
67 raise TypeError('This instrument doesn\'t have an address yet, '
68 'you must provide one.')
69 if port is not None:
70 self._port = port
71 elif not hasattr(self, '_port'):
72 raise TypeError('This instrument doesn\'t have a port yet, '
73 'you must provide one.')
74
75 self._disconnect()
76 self.set_persistent(self._persistent)
77
78 def set_persistent(self, persistent):
79 """
80 Change whether this instrument keeps its socket open between calls.
81
82 Args:
83 persistent (bool): Set True to keep the socket open all the time.
84 """
85 self._persistent = persistent
86 if persistent:
87 self._connect()
88 else:
89 self._disconnect()
90
91 def flush_connection(self):
92 if not self._testing:
93 self._recv()
94
95 def _connect(self):
96 if self._testing:
97 return
98
99 if self._socket is not None:
100 self._disconnect()
101
102 try:
103 self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
104 self._socket.connect((self._address, self._port))
105 self.set_timeout(self._timeout)
106 except ConnectionRefusedError:
107 self._socket.close()
108 self._socket = None
109
110 def _disconnect(self):
111 if getattr(self, '_socket', None) is None:
112 return
113
114 self._socket.shutdown(socket.SHUT_RDWR)
115 self._socket.close()
116 self._socket = None
117
118 def set_timeout(self, timeout=None):
119 """
120 Change the read timeout for the socket.
121
122 Args:
123 timeout (number): Seconds to allow for responses.
124 """
125 self._timeout = timeout
126
127 if self._socket is not None:
128 self._socket.settimeout(float(self._timeout))
129
130 def set_terminator(self, terminator):
131 r"""
132 Change the write terminator to use.
133
134 Args:
135 terminator (str): Character(s) to terminate each send.
136 Default '\n'.
137 """
138 self._terminator = terminator
139
140 def _send(self, cmd):
141 data = cmd + self._terminator
142 self._socket.send(data.encode())
143
144 def _recv(self):
145 return self._socket.recv(self._buffer_size).decode()
146
147 def close(self):
148 """Disconnect and irreversibly tear down the instrument."""
149 self._disconnect()
150 super().close()
151
152 def write_raw(self, cmd):
153 """
154 Low-level interface to send a command that gets no response.
155
156 Args:
157 cmd (str): The command to send to the instrument.
158 """
159
160 with self._ensure_connection:
161 self._send(cmd)
162 if self._confirmation:
163 self._recv()
164
165 def ask_raw(self, cmd):
166 """
167 Low-level interface to send a command an read a response.
168
169 Args:
170 cmd (str): The command to send to the instrument.
171
172 Returns:
173 str: The instrument's response.
174 """
175 with self._ensure_connection:
176 self._send(cmd)
177 return self._recv()
178
179 def __del__(self):
180 self.close()
181
182 def snapshot_base(self, update=False):
183 """
184 State of the instrument as a JSON-compatible dict.
185
186 Args:
187 update (bool): If True, update the state by querying the
188 instrument. If False, just use the latest values in memory.
189
190 Returns:
191 dict: base snapshot
192 """
193 snap = super().snapshot_base(update=update)
194
195 snap['port'] = self._port
196 snap['confirmation'] = self._confirmation
197 snap['address'] = self._address
198 snap['terminator'] = self._terminator
199 snap['timeout'] = self._timeout
200 snap['persistent'] = self._persistent
201
202 return snap
203
204
205 class EnsureConnection:
206
207 """
208 Context manager to ensure an instrument is connected when needed.
209
210 Uses ``instrument._persistent`` to determine whether or not to close
211 the connection immediately on completion.
212
213 Args:
214 instrument (IPInstrument): the instance to connect.
215 """
216
217 def __init__(self, instrument):
218 self.instrument = instrument
219
220 def __enter__(self):
221 """Make sure we connect when entering the context."""
222 if not self.instrument._persistent or self.instrument._socket is None:
223 self.instrument._connect()
224
225 def __exit__(self, type, value, tb):
226 """Possibly disconnect on exiting the context."""
227 if not self.instrument._persistent:
228 self.instrument._disconnect()
229
[end of qcodes/instrument/ip.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qcodes/instrument/ip.py b/qcodes/instrument/ip.py
--- a/qcodes/instrument/ip.py
+++ b/qcodes/instrument/ip.py
@@ -1,8 +1,10 @@
"""Ethernet instrument driver class based on sockets."""
import socket
+import logging
from .base import Instrument
+log = logging.getLogger(__name__)
class IPInstrument(Instrument):
@@ -139,10 +141,14 @@
def _send(self, cmd):
data = cmd + self._terminator
- self._socket.send(data.encode())
+ self._socket.sendall(data.encode())
def _recv(self):
- return self._socket.recv(self._buffer_size).decode()
+ result = self._socket.recv(self._buffer_size)
+ if result == b'':
+ log.warning("Got empty response from Socket recv() "
+ "Connection broken.")
+ return result.decode()
def close(self):
"""Disconnect and irreversibly tear down the instrument."""
| {"golden_diff": "diff --git a/qcodes/instrument/ip.py b/qcodes/instrument/ip.py\n--- a/qcodes/instrument/ip.py\n+++ b/qcodes/instrument/ip.py\n@@ -1,8 +1,10 @@\n \"\"\"Ethernet instrument driver class based on sockets.\"\"\"\n import socket\n+import logging\n \n from .base import Instrument\n \n+log = logging.getLogger(__name__)\n \n class IPInstrument(Instrument):\n \n@@ -139,10 +141,14 @@\n \n def _send(self, cmd):\n data = cmd + self._terminator\n- self._socket.send(data.encode())\n+ self._socket.sendall(data.encode())\n \n def _recv(self):\n- return self._socket.recv(self._buffer_size).decode()\n+ result = self._socket.recv(self._buffer_size)\n+ if result == b'':\n+ log.warning(\"Got empty response from Socket recv() \"\n+ \"Connection broken.\")\n+ return result.decode()\n \n def close(self):\n \"\"\"Disconnect and irreversibly tear down the instrument.\"\"\"\n", "issue": "IPInstrument._send() could not send all its data\nI was browsing the instruments code and found out that in the `IPInstrument` class the `socket.send(...)` function is implemented wrongly:\r\n\r\n```python\r\n def _send(self, cmd):\r\n data = cmd + self._terminator\r\n self._socket.send(data.encode())\r\n```\r\n\r\nThis is what the documentation says about `socket.send(...)`:\r\n\r\n> socket.send(bytes[, flags])\r\nSend data to the socket. The socket must be connected to a remote socket. The optional flags argument has the same meaning as for recv() above. Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data. \r\nhttps://docs.python.org/3.6/library/socket.html\r\n\r\nAt this moment, if `send(...)` fails, only a part of the message is transmitted. Which will create strange bugs.\r\n\r\nA better solution is to use `socket.sendall(...)` or as the example shows:\r\n```python\r\n def mysend(self, msg):\r\n totalsent = 0\r\n while totalsent < MSGLEN:\r\n sent = self.sock.send(msg[totalsent:])\r\n if sent == 0:\r\n raise RuntimeError(\"socket connection broken\")\r\n totalsent = totalsent + sent\r\n```\r\nhttps://docs.python.org/3.6/howto/sockets.html#socket-howto\nIPInstrument._send() could not send all its data\nI was browsing the instruments code and found out that in the `IPInstrument` class the `socket.send(...)` function is implemented wrongly:\r\n\r\n```python\r\n def _send(self, cmd):\r\n data = cmd + self._terminator\r\n self._socket.send(data.encode())\r\n```\r\n\r\nThis is what the documentation says about `socket.send(...)`:\r\n\r\n> socket.send(bytes[, flags])\r\nSend data to the socket. The socket must be connected to a remote socket. The optional flags argument has the same meaning as for recv() above. Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data. \r\nhttps://docs.python.org/3.6/library/socket.html\r\n\r\nAt this moment, if `send(...)` fails, only a part of the message is transmitted. 
Which will create strange bugs.\r\n\r\nA better solution is to use `socket.sendall(...)` or as the example shows:\r\n```python\r\n def mysend(self, msg):\r\n totalsent = 0\r\n while totalsent < MSGLEN:\r\n sent = self.sock.send(msg[totalsent:])\r\n if sent == 0:\r\n raise RuntimeError(\"socket connection broken\")\r\n totalsent = totalsent + sent\r\n```\r\nhttps://docs.python.org/3.6/howto/sockets.html#socket-howto\n", "before_files": [{"content": "\"\"\"Ethernet instrument driver class based on sockets.\"\"\"\nimport socket\n\nfrom .base import Instrument\n\n\nclass IPInstrument(Instrument):\n\n r\"\"\"\n Bare socket ethernet instrument implementation.\n\n Args:\n name (str): What this instrument is called locally.\n\n address (Optional[str]): The IP address or name. If not given on\n construction, must be provided before any communication.\n\n port (Optional[int]): The IP port. If not given on construction, must\n be provided before any communication.\n\n timeout (number): Seconds to allow for responses. Default 5.\n\n terminator (str): Character(s) to terminate each send. Default '\\n'.\n\n persistent (bool): Whether to leave the socket open between calls.\n Default True.\n\n write_confirmation (bool): Whether the instrument acknowledges writes\n with some response we should read. Default True.\n\n metadata (Optional[Dict]): additional static metadata to add to this\n instrument's JSON snapshot.\n\n See help for ``qcodes.Instrument`` for additional information on writing\n instrument subclasses.\n \"\"\"\n\n def __init__(self, name, address=None, port=None, timeout=5,\n terminator='\\n', persistent=True, write_confirmation=True, testing=False,\n **kwargs):\n super().__init__(name, testing=testing, **kwargs)\n\n self._address = address\n self._port = port\n self._timeout = timeout\n self._terminator = terminator\n self._confirmation = write_confirmation\n\n self._ensure_connection = EnsureConnection(self)\n self._buffer_size = 1400\n\n self._socket = None\n\n self.set_persistent(persistent)\n\n def set_address(self, address=None, port=None):\n \"\"\"\n Change the IP address and/or port of this instrument.\n\n Args:\n address (Optional[str]): The IP address or name.\n port (Optional[number]): The IP port.\n \"\"\"\n if address is not None:\n self._address = address\n elif not hasattr(self, '_address'):\n raise TypeError('This instrument doesn\\'t have an address yet, '\n 'you must provide one.')\n if port is not None:\n self._port = port\n elif not hasattr(self, '_port'):\n raise TypeError('This instrument doesn\\'t have a port yet, '\n 'you must provide one.')\n\n self._disconnect()\n self.set_persistent(self._persistent)\n\n def set_persistent(self, persistent):\n \"\"\"\n Change whether this instrument keeps its socket open between calls.\n\n Args:\n persistent (bool): Set True to keep the socket open all the time.\n \"\"\"\n self._persistent = persistent\n if persistent:\n self._connect()\n else:\n self._disconnect()\n\n def flush_connection(self):\n if not self._testing:\n self._recv()\n\n def _connect(self):\n if self._testing:\n return\n\n if self._socket is not None:\n self._disconnect()\n\n try:\n self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self._socket.connect((self._address, self._port))\n self.set_timeout(self._timeout)\n except ConnectionRefusedError:\n self._socket.close()\n self._socket = None\n\n def _disconnect(self):\n if getattr(self, '_socket', None) is None:\n return\n\n self._socket.shutdown(socket.SHUT_RDWR)\n self._socket.close()\n 
self._socket = None\n\n def set_timeout(self, timeout=None):\n \"\"\"\n Change the read timeout for the socket.\n\n Args:\n timeout (number): Seconds to allow for responses.\n \"\"\"\n self._timeout = timeout\n\n if self._socket is not None:\n self._socket.settimeout(float(self._timeout))\n\n def set_terminator(self, terminator):\n r\"\"\"\n Change the write terminator to use.\n\n Args:\n terminator (str): Character(s) to terminate each send.\n Default '\\n'.\n \"\"\"\n self._terminator = terminator\n\n def _send(self, cmd):\n data = cmd + self._terminator\n self._socket.send(data.encode())\n\n def _recv(self):\n return self._socket.recv(self._buffer_size).decode()\n\n def close(self):\n \"\"\"Disconnect and irreversibly tear down the instrument.\"\"\"\n self._disconnect()\n super().close()\n\n def write_raw(self, cmd):\n \"\"\"\n Low-level interface to send a command that gets no response.\n\n Args:\n cmd (str): The command to send to the instrument.\n \"\"\"\n\n with self._ensure_connection:\n self._send(cmd)\n if self._confirmation:\n self._recv()\n\n def ask_raw(self, cmd):\n \"\"\"\n Low-level interface to send a command an read a response.\n\n Args:\n cmd (str): The command to send to the instrument.\n\n Returns:\n str: The instrument's response.\n \"\"\"\n with self._ensure_connection:\n self._send(cmd)\n return self._recv()\n\n def __del__(self):\n self.close()\n\n def snapshot_base(self, update=False):\n \"\"\"\n State of the instrument as a JSON-compatible dict.\n\n Args:\n update (bool): If True, update the state by querying the\n instrument. If False, just use the latest values in memory.\n\n Returns:\n dict: base snapshot\n \"\"\"\n snap = super().snapshot_base(update=update)\n\n snap['port'] = self._port\n snap['confirmation'] = self._confirmation\n snap['address'] = self._address\n snap['terminator'] = self._terminator\n snap['timeout'] = self._timeout\n snap['persistent'] = self._persistent\n\n return snap\n\n\nclass EnsureConnection:\n\n \"\"\"\n Context manager to ensure an instrument is connected when needed.\n\n Uses ``instrument._persistent`` to determine whether or not to close\n the connection immediately on completion.\n\n Args:\n instrument (IPInstrument): the instance to connect.\n \"\"\"\n\n def __init__(self, instrument):\n self.instrument = instrument\n\n def __enter__(self):\n \"\"\"Make sure we connect when entering the context.\"\"\"\n if not self.instrument._persistent or self.instrument._socket is None:\n self.instrument._connect()\n\n def __exit__(self, type, value, tb):\n \"\"\"Possibly disconnect on exiting the context.\"\"\"\n if not self.instrument._persistent:\n self.instrument._disconnect()\n", "path": "qcodes/instrument/ip.py"}]} | 3,120 | 224 |
gh_patches_debug_17767 | rasdani/github-patches | git_diff | aws__aws-sdk-pandas-2796 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect error message or implementation for datetime formatting?
### Describe the bug
When starting a SQL query with timezone aware datetimes, you get an error message like:
`Supports only timezone aware datatype, got 2024-04-29 13:30:34.573187+00:00`
The SDK code involved is
```python
39 if value.tzinfo is not None:
40 print(type(value))
---> 41 raise TypeError(f"Supports only timezone aware datatype, got {value}. {type(value)}")
43 return f"TIMESTAMP '{value.isoformat(sep=' ', timespec='milliseconds')}'"
```
This is where it checks for the existence of timezone info and enters the if clause if there is one.
However, the error message says it supports _only_ timezone aware dates.
### How to Reproduce
``` python
import awswrangler as wr
from datetime import timezone, datetime
wr.athena.start_query_execution(
database='ddd',
sql=f"""DELETE FROM ddd.t
WHERE event_date >= :start
""",
params={
"start": datetime.now(timezone.utc),
})
```
### Expected behavior
To either have an error message like:
`Supports only timezone naive datatype, got 2024-04-29 13:30:34.573187+00:00`
or:
to correctly format timestamps with timezone info.
### Your project
Private
### Screenshots
_No response_
### OS
Mac
### Python version
3.11
### AWS SDK for pandas version
seen on main branch and ('3.4.2')
### Additional context
_No response_
</issue>
<code>
[start of awswrangler/_sql_formatter.py]
1 """Formatting logic for SQL parameters."""
2
3 from __future__ import annotations
4
5 import datetime
6 import decimal
7 import re
8 from abc import ABC, abstractmethod
9 from typing import Any, Callable, Sequence
10
11 from typing_extensions import Literal
12
13 from awswrangler import exceptions
14
15 _EngineTypeLiteral = Literal["presto", "hive", "partiql"]
16
17
18 class _Engine(ABC):
19 def __init__(self, engine_name: _EngineTypeLiteral) -> None:
20 self.engine_name = engine_name
21
22 def format_null(self, value: None = None) -> str:
23 return "NULL"
24
25 @abstractmethod
26 def format_string(self, value: str) -> str:
27 pass
28
29 def format_bool(self, value: bool) -> str:
30 return str(value).upper()
31
32 def format_integer(self, value: int) -> str:
33 return str(value)
34
35 def format_float(self, value: float) -> str:
36 return f"{value:f}"
37
38 def format_decimal(self, value: decimal.Decimal) -> str:
39 return f"DECIMAL '{value:f}'"
40
41 def format_timestamp(self, value: datetime.datetime) -> str:
42 if value.tzinfo is not None:
43 raise TypeError(f"Supports only timezone aware datatype, got {value}.")
44
45 return f"TIMESTAMP '{value.isoformat(sep=' ', timespec='milliseconds')}'"
46
47 def format_date(self, value: datetime.date) -> str:
48 return f"DATE '{value.isoformat()}'"
49
50 def format_array(self, value: Sequence[Any]) -> str:
51 return f"ARRAY [{', '.join(map(self.format, value))}]"
52
53 def format_dict(self, value: dict[Any, Any]) -> str:
54 if not value:
55 return "MAP()"
56
57 map_keys = list(value.keys())
58 key_type = type(map_keys[0])
59 for key in map_keys:
60 if key is None:
61 raise TypeError("Map key cannot be null.")
62 if not isinstance(key, key_type):
63 raise TypeError("All Map key elements must be the same type.")
64
65 map_values = list(value.values())
66 return (
67 f"MAP(ARRAY [{', '.join(map(self.format, map_keys))}], ARRAY [{', '.join(map(self.format, map_values))}])"
68 )
69
70 def format(self, data: Any) -> str:
71 formats_dict: dict[type[Any], Callable[[Any], str]] = {
72 bool: self.format_bool,
73 str: self.format_string,
74 int: self.format_integer,
75 datetime.datetime: self.format_timestamp,
76 datetime.date: self.format_date,
77 decimal.Decimal: self.format_decimal,
78 float: self.format_float,
79 list: self.format_array,
80 tuple: self.format_array,
81 set: self.format_array,
82 dict: self.format_dict,
83 }
84
85 if data is None:
86 return self.format_null()
87
88 for python_type, format_func in formats_dict.items():
89 if isinstance(data, python_type):
90 return format_func(data)
91
92 raise TypeError(f"Unsupported type {type(data)} in parameter.")
93
94
95 class _PrestoEngine(_Engine):
96 def __init__(self) -> None:
97 super().__init__("presto")
98
99 def format_string(self, value: str) -> str:
100 return f"""'{value.replace("'", "''")}'"""
101
102
103 class _HiveEngine(_Engine):
104 def __init__(self) -> None:
105 super().__init__("hive")
106
107 def format_string(self, value: str) -> str:
108 return "'{}'".format(
109 value.replace("\\", "\\\\")
110 .replace("'", "\\'")
111 .replace("\r", "\\r")
112 .replace("\n", "\\n")
113 .replace("\t", "\\t")
114 )
115
116
117 class _PartiQLEngine(_Engine):
118 def __init__(self) -> None:
119 super().__init__("partiql")
120
121 def format_null(self, value: None = None) -> str:
122 return "null"
123
124 def format_string(self, value: str) -> str:
125 return f"""'{value.replace("'", "''")}'"""
126
127 def format_bool(self, value: bool) -> str:
128 return "1" if value else "0"
129
130 def format_decimal(self, value: decimal.Decimal) -> str:
131 return f"'{value}'"
132
133 def format_timestamp(self, value: datetime.datetime) -> str:
134 if value.tzinfo is not None:
135 raise TypeError(f"Supports only timezone aware datatype, got {value}.")
136
137 return f"'{value.isoformat()}'"
138
139 def format_date(self, value: datetime.date) -> str:
140 return f"'{value.isoformat()}'"
141
142 def format_array(self, value: Sequence[Any]) -> str:
143 raise NotImplementedError(f"format_array not implemented for engine={self.engine_name}.")
144
145 def format_dict(self, value: dict[Any, Any]) -> str:
146 raise NotImplementedError(f"format_dict not implemented for engine={self.engine_name}.")
147
148
149 def _format_parameters(params: dict[str, Any], engine: _Engine) -> dict[str, Any]:
150 processed_params = {}
151
152 for k, v in params.items():
153 processed_params[k] = engine.format(data=v)
154
155 return processed_params
156
157
158 _PATTERN = re.compile(r":([A-Za-z0-9_]+)(?![A-Za-z0-9_])")
159
160
161 def _create_engine(engine_type: _EngineTypeLiteral) -> _Engine:
162 if engine_type == "hive":
163 return _HiveEngine()
164
165 if engine_type == "presto":
166 return _PrestoEngine()
167
168 if engine_type == "partiql":
169 return _PartiQLEngine()
170
171 raise exceptions.InvalidArgumentValue(f"Unknown engine type: {engine_type}")
172
173
174 def _process_sql_params(sql: str, params: dict[str, Any] | None, engine_type: _EngineTypeLiteral = "presto") -> str:
175 if params is None:
176 params = {}
177
178 engine = _create_engine(engine_type)
179 processed_params = _format_parameters(params, engine=engine)
180
181 def replace(match: re.Match) -> str: # type: ignore[type-arg]
182 key = match.group(1)
183
184 if key not in processed_params:
185 # do not replace anything if the parameter is not provided
186 return str(match.group(0))
187
188 return str(processed_params[key])
189
190 sql = _PATTERN.sub(replace, sql)
191
192 return sql
193
[end of awswrangler/_sql_formatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/awswrangler/_sql_formatter.py b/awswrangler/_sql_formatter.py
--- a/awswrangler/_sql_formatter.py
+++ b/awswrangler/_sql_formatter.py
@@ -40,7 +40,7 @@
def format_timestamp(self, value: datetime.datetime) -> str:
if value.tzinfo is not None:
- raise TypeError(f"Supports only timezone aware datatype, got {value}.")
+ raise TypeError(f"Supports only timezone naive datatype, got {value}.")
return f"TIMESTAMP '{value.isoformat(sep=' ', timespec='milliseconds')}'"
@@ -132,7 +132,7 @@
def format_timestamp(self, value: datetime.datetime) -> str:
if value.tzinfo is not None:
- raise TypeError(f"Supports only timezone aware datatype, got {value}.")
+ raise TypeError(f"Supports only timezone naive datatype, got {value}.")
return f"'{value.isoformat()}'"
| {"golden_diff": "diff --git a/awswrangler/_sql_formatter.py b/awswrangler/_sql_formatter.py\n--- a/awswrangler/_sql_formatter.py\n+++ b/awswrangler/_sql_formatter.py\n@@ -40,7 +40,7 @@\n \n def format_timestamp(self, value: datetime.datetime) -> str:\n if value.tzinfo is not None:\n- raise TypeError(f\"Supports only timezone aware datatype, got {value}.\")\n+ raise TypeError(f\"Supports only timezone naive datatype, got {value}.\")\n \n return f\"TIMESTAMP '{value.isoformat(sep=' ', timespec='milliseconds')}'\"\n \n@@ -132,7 +132,7 @@\n \n def format_timestamp(self, value: datetime.datetime) -> str:\n if value.tzinfo is not None:\n- raise TypeError(f\"Supports only timezone aware datatype, got {value}.\")\n+ raise TypeError(f\"Supports only timezone naive datatype, got {value}.\")\n \n return f\"'{value.isoformat()}'\"\n", "issue": "Incorrect error message or implementation for datetime formatting?\n### Describe the bug\n\nwhen starting a SQL query with timezone aware date times, you get an error message like:\r\n\r\n`Supports only timezone aware datatype, got 2024-04-29 13:30:34.573187+00:00`\r\n\r\nThe SDK code involved is\r\n\r\n```python\r\n 39 if value.tzinfo is not None:\r\n 40 print(type(value))\r\n---> 41 raise TypeError(f\"Supports only timezone aware datatype, got {value}. {type(value)}\")\r\n 43 return f\"TIMESTAMP '{value.isoformat(sep=' ', timespec='milliseconds')}'\"\r\n```\r\nWhere it checks for the existence of timezone info. And enter the if clause, if there is one.\r\n\r\nHowever the error message say it support _only_ timezone aware dates.\n\n### How to Reproduce\n\n``` python\r\nimport awswrangler as wr\r\nfrom datetime import timezone, datetime\r\n\r\nwr.athena.start_query_execution(\r\n database='ddd',\r\n sql=f\"\"\"DELETE FROM ddd.t\r\n WHERE event_date >= :start \r\n \"\"\",\r\n params={\r\n \"start\": datetime.now(timezone.utc),\r\n })\r\n``````\r\n\n\n### Expected behavior\n\nTo either have an error message like: \r\n\r\n`Supports only timezone naive datatype, got 2024-04-29 13:30:34.573187+00:00`\r\n\r\nor: \r\n\r\nto correctly format timetamps with timezone info.\n\n### Your project\n\nPrivate\n\n### Screenshots\n\n_No response_\n\n### OS\n\nMac\n\n### Python version\n\n3.11\n\n### AWS SDK for pandas version\n\nseen on main branch and ('3.4.2')\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"Formatting logic for SQL parameters.\"\"\"\n\nfrom __future__ import annotations\n\nimport datetime\nimport decimal\nimport re\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Callable, Sequence\n\nfrom typing_extensions import Literal\n\nfrom awswrangler import exceptions\n\n_EngineTypeLiteral = Literal[\"presto\", \"hive\", \"partiql\"]\n\n\nclass _Engine(ABC):\n def __init__(self, engine_name: _EngineTypeLiteral) -> None:\n self.engine_name = engine_name\n\n def format_null(self, value: None = None) -> str:\n return \"NULL\"\n\n @abstractmethod\n def format_string(self, value: str) -> str:\n pass\n\n def format_bool(self, value: bool) -> str:\n return str(value).upper()\n\n def format_integer(self, value: int) -> str:\n return str(value)\n\n def format_float(self, value: float) -> str:\n return f\"{value:f}\"\n\n def format_decimal(self, value: decimal.Decimal) -> str:\n return f\"DECIMAL '{value:f}'\"\n\n def format_timestamp(self, value: datetime.datetime) -> str:\n if value.tzinfo is not None:\n raise TypeError(f\"Supports only timezone aware datatype, got {value}.\")\n\n return f\"TIMESTAMP '{value.isoformat(sep=' ', 
timespec='milliseconds')}'\"\n\n def format_date(self, value: datetime.date) -> str:\n return f\"DATE '{value.isoformat()}'\"\n\n def format_array(self, value: Sequence[Any]) -> str:\n return f\"ARRAY [{', '.join(map(self.format, value))}]\"\n\n def format_dict(self, value: dict[Any, Any]) -> str:\n if not value:\n return \"MAP()\"\n\n map_keys = list(value.keys())\n key_type = type(map_keys[0])\n for key in map_keys:\n if key is None:\n raise TypeError(\"Map key cannot be null.\")\n if not isinstance(key, key_type):\n raise TypeError(\"All Map key elements must be the same type.\")\n\n map_values = list(value.values())\n return (\n f\"MAP(ARRAY [{', '.join(map(self.format, map_keys))}], ARRAY [{', '.join(map(self.format, map_values))}])\"\n )\n\n def format(self, data: Any) -> str:\n formats_dict: dict[type[Any], Callable[[Any], str]] = {\n bool: self.format_bool,\n str: self.format_string,\n int: self.format_integer,\n datetime.datetime: self.format_timestamp,\n datetime.date: self.format_date,\n decimal.Decimal: self.format_decimal,\n float: self.format_float,\n list: self.format_array,\n tuple: self.format_array,\n set: self.format_array,\n dict: self.format_dict,\n }\n\n if data is None:\n return self.format_null()\n\n for python_type, format_func in formats_dict.items():\n if isinstance(data, python_type):\n return format_func(data)\n\n raise TypeError(f\"Unsupported type {type(data)} in parameter.\")\n\n\nclass _PrestoEngine(_Engine):\n def __init__(self) -> None:\n super().__init__(\"presto\")\n\n def format_string(self, value: str) -> str:\n return f\"\"\"'{value.replace(\"'\", \"''\")}'\"\"\"\n\n\nclass _HiveEngine(_Engine):\n def __init__(self) -> None:\n super().__init__(\"hive\")\n\n def format_string(self, value: str) -> str:\n return \"'{}'\".format(\n value.replace(\"\\\\\", \"\\\\\\\\\")\n .replace(\"'\", \"\\\\'\")\n .replace(\"\\r\", \"\\\\r\")\n .replace(\"\\n\", \"\\\\n\")\n .replace(\"\\t\", \"\\\\t\")\n )\n\n\nclass _PartiQLEngine(_Engine):\n def __init__(self) -> None:\n super().__init__(\"partiql\")\n\n def format_null(self, value: None = None) -> str:\n return \"null\"\n\n def format_string(self, value: str) -> str:\n return f\"\"\"'{value.replace(\"'\", \"''\")}'\"\"\"\n\n def format_bool(self, value: bool) -> str:\n return \"1\" if value else \"0\"\n\n def format_decimal(self, value: decimal.Decimal) -> str:\n return f\"'{value}'\"\n\n def format_timestamp(self, value: datetime.datetime) -> str:\n if value.tzinfo is not None:\n raise TypeError(f\"Supports only timezone aware datatype, got {value}.\")\n\n return f\"'{value.isoformat()}'\"\n\n def format_date(self, value: datetime.date) -> str:\n return f\"'{value.isoformat()}'\"\n\n def format_array(self, value: Sequence[Any]) -> str:\n raise NotImplementedError(f\"format_array not implemented for engine={self.engine_name}.\")\n\n def format_dict(self, value: dict[Any, Any]) -> str:\n raise NotImplementedError(f\"format_dict not implemented for engine={self.engine_name}.\")\n\n\ndef _format_parameters(params: dict[str, Any], engine: _Engine) -> dict[str, Any]:\n processed_params = {}\n\n for k, v in params.items():\n processed_params[k] = engine.format(data=v)\n\n return processed_params\n\n\n_PATTERN = re.compile(r\":([A-Za-z0-9_]+)(?![A-Za-z0-9_])\")\n\n\ndef _create_engine(engine_type: _EngineTypeLiteral) -> _Engine:\n if engine_type == \"hive\":\n return _HiveEngine()\n\n if engine_type == \"presto\":\n return _PrestoEngine()\n\n if engine_type == \"partiql\":\n return _PartiQLEngine()\n\n raise 
exceptions.InvalidArgumentValue(f\"Unknown engine type: {engine_type}\")\n\n\ndef _process_sql_params(sql: str, params: dict[str, Any] | None, engine_type: _EngineTypeLiteral = \"presto\") -> str:\n if params is None:\n params = {}\n\n engine = _create_engine(engine_type)\n processed_params = _format_parameters(params, engine=engine)\n\n def replace(match: re.Match) -> str: # type: ignore[type-arg]\n key = match.group(1)\n\n if key not in processed_params:\n # do not replace anything if the parameter is not provided\n return str(match.group(0))\n\n return str(processed_params[key])\n\n sql = _PATTERN.sub(replace, sql)\n\n return sql\n", "path": "awswrangler/_sql_formatter.py"}]} | 2,826 | 230 |
gh_patches_debug_12103 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-4326 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
soundfile hook has osx/windows specific behaviour
hook-soundfile.py was added in 3.5 but it blows up on linux as follows
9727 INFO: Loading module hook "hook-soundfile.py"...
Unable to find "/home/matt/.virtualenvs/beqdesigner-entpycF3/lib/python3.7/site-packages/_soundfile_data" when adding binary and data files.
on OSX, it also fails but with a different error
ValueError: Unknown Mach-O header: 0x20202020 in <_io.BufferedReader
name='/Users/travis/build/3ll3d00d/beqdesigner/.venv/lib/python3.7/site-packages/_soundfile_data/COPYING'>
It completes successfully on Windows
The problem is that pysoundfile packages libsndfile on Windows and OSX (as per https://pysoundfile.readthedocs.io/en/0.9.0/#installation) but relies on a system package on Linux so the mentioned directory (`_soundfile_data`) will not exist on Linux. On OSX only a certain file is required (`_soundfile_data/libsndfile.dylib`)
Minimal test case can be found at https://github.com/3ll3d00d/pyinstaller-pysoundfile-bug
</issue>
<code>
[start of PyInstaller/hooks/hook-soundfile.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2016-2019, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10 """
11 pysoundfile:
12 https://github.com/bastibe/SoundFile
13 """
14
15 import os
16 from PyInstaller.utils.hooks import get_package_paths
17
18 # get path of soundfile
19 sfp = get_package_paths('soundfile')
20
21 # add the binaries
22 bins = os.path.join(sfp[0], "_soundfile_data")
23 binaries = [(bins, "_soundfile_data")]
24
[end of PyInstaller/hooks/hook-soundfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PyInstaller/hooks/hook-soundfile.py b/PyInstaller/hooks/hook-soundfile.py
--- a/PyInstaller/hooks/hook-soundfile.py
+++ b/PyInstaller/hooks/hook-soundfile.py
@@ -13,11 +13,20 @@
"""
import os
+
+from PyInstaller.compat import is_win, is_darwin
from PyInstaller.utils.hooks import get_package_paths
# get path of soundfile
sfp = get_package_paths('soundfile')
-# add the binaries
-bins = os.path.join(sfp[0], "_soundfile_data")
-binaries = [(bins, "_soundfile_data")]
+# add binaries packaged by soundfile on OSX and Windows
+# an external dependency (libsndfile) is used on GNU/Linux
+path = None
+if is_win:
+ path = os.path.join(sfp[0], '_soundfile_data')
+elif is_darwin:
+ path = os.path.join(sfp[0], '_soundfile_data', 'libsndfile.dylib')
+
+if path is not None and os.path.exists(path):
+ binaries = [(path, "_soundfile_data")]
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-soundfile.py b/PyInstaller/hooks/hook-soundfile.py\n--- a/PyInstaller/hooks/hook-soundfile.py\n+++ b/PyInstaller/hooks/hook-soundfile.py\n@@ -13,11 +13,20 @@\n \"\"\"\n \n import os\n+\n+from PyInstaller.compat import is_win, is_darwin\n from PyInstaller.utils.hooks import get_package_paths\n \n # get path of soundfile\n sfp = get_package_paths('soundfile')\n \n-# add the binaries\n-bins = os.path.join(sfp[0], \"_soundfile_data\")\n-binaries = [(bins, \"_soundfile_data\")]\n+# add binaries packaged by soundfile on OSX and Windows\n+# an external dependency (libsndfile) is used on GNU/Linux\n+path = None\n+if is_win:\n+ path = os.path.join(sfp[0], '_soundfile_data')\n+elif is_darwin:\n+ path = os.path.join(sfp[0], '_soundfile_data', 'libsndfile.dylib')\n+\n+if path is not None and os.path.exists(path):\n+ binaries = [(path, \"_soundfile_data\")]\n", "issue": "soundfile hook has osx/windows specific behaviour\nhook-soundfile.py was added in 3.5 but it blows up on linux as follows\r\n\r\n 9727 INFO: Loading module hook \"hook-soundfile.py\"...\r\n Unable to find \"/home/matt/.virtualenvs/beqdesigner-entpycF3/lib/python3.7/site-packages/_soundfile_data\" when adding binary and data files.\r\n\r\non OSX, it also fails but with a different error\r\n\r\n ValueError: Unknown Mach-O header: 0x20202020 in <_io.BufferedReader \r\n name='/Users/travis/build/3ll3d00d/beqdesigner/.venv/lib/python3.7/site-packages/_soundfile_data/COPYING'>\r\n\r\nIt completes successfully on Windows\r\n\r\nThe problem is that pysoundfile packages libsndfile on Windows and OSX (as per https://pysoundfile.readthedocs.io/en/0.9.0/#installation) but relies on a system package on Linux so the mentioned directory (`_soundfile_data`) will not exist on Linux. On OSX only a certain file is required (`_soundfile_data/libsndfile.dylib`)\r\n\r\nMinimal test case can be found at https://github.com/3ll3d00d/pyinstaller-pysoundfile-bug\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2016-2019, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n\"\"\"\npysoundfile:\nhttps://github.com/bastibe/SoundFile\n\"\"\"\n\nimport os\nfrom PyInstaller.utils.hooks import get_package_paths\n\n# get path of soundfile\nsfp = get_package_paths('soundfile')\n\n# add the binaries\nbins = os.path.join(sfp[0], \"_soundfile_data\")\nbinaries = [(bins, \"_soundfile_data\")]\n", "path": "PyInstaller/hooks/hook-soundfile.py"}]} | 1,010 | 257 |
gh_patches_debug_14260 | rasdani/github-patches | git_diff | kornia__kornia-2132 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Round trip `torch.save` / `torch.load` not working for `Hyperplane`
### Describe the bug
Crashing when trying to save and reload a `Hyperplane` using `torch`:
```
Traceback (most recent call last):
File "save_hyperplane.py", line 16, in <module>
plane = torch.load("./saved_plane.pt")
File "/home/kyle/venv/lib/python3.8/site-packages/torch/serialization.py", line 789, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/kyle/venv/lib/python3.8/site-packages/torch/serialization.py", line 1131, in _load
result = unpickler.load()
File "/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py", line 48, in __getattr__
self.used_attrs.add(name)
File "/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py", line 48, in __getattr__
self.used_attrs.add(name)
File "/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py", line 48, in __getattr__
self.used_attrs.add(name)
[Previous line repeated 993 more times]
RecursionError: maximum recursion depth exceeded
```
### Reproduction steps
```bash
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install kornia
(venv) $ pip install numpy
(venv) $ python save_hyperplane.py
```
Content of save_hyperplane.py
```
from kornia.geometry.plane import Hyperplane
from kornia.geometry.vector import Vec3, Scalar
import torch
plane = Hyperplane(
Vec3(torch.tensor([0,0,1])), Scalar(torch.tensor(0.5))
)
torch.save(plane, "./saved_plane.pt")
plane = torch.load("./saved_plane.pt")
```
### Expected behavior
Hyperplane is able to be saved and reloaded with `torch.save` & `torch.load`
### Environment
```shell
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
Nvidia driver version: 470.141.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==1.13.1
[conda] Could not collect
```
### Additional context
_No response_
</issue>
<code>
[start of kornia/core/tensor_wrapper.py]
1 # pytorch tensor wrapper class
2 # insipired by:
3 # https://github.com/pytorch/pytorch/blob/591dfffa38848de54b7f5f4e49260847024c9281/test/test_overrides.py#L748
4 import collections
5 from typing import Any
6
7 import torch
8 from torch import Tensor
9
10 # wrap inputs if necessary
11
12 # TODO: promote to KORNIA_WRAP
13
14
15 def wrap(v, cls):
16 if type(v) in {tuple, list}:
17 return type(v)(wrap(vi, cls) for vi in v)
18
19 return cls(v) if isinstance(v, Tensor) else v
20
21
22 # TODO: promote to KORNIA_UNWRAP
23
24
25 def unwrap(v):
26 if type(v) in {tuple, list}:
27 return type(v)(unwrap(vi) for vi in v)
28
29 return v._data if not isinstance(v, Tensor) else v
30
31
32 class TensorWrapper:
33 def __init__(self, data: Tensor) -> None:
34 self.__dict__["_data"] = data
35 self.__dict__["used_attrs"] = set()
36 self.__dict__["used_calls"] = set()
37
38 @property
39 def data(self) -> Tensor:
40 return self._data
41
42 def __repr__(self) -> str:
43 return f"{self.data}"
44
45 def __getattr__(self, name):
46 if name in self.__dict__:
47 return self.__dict__[name]
48 self.used_attrs.add(name)
49
50 val = getattr(self._data, name)
51
52 # NOTE: not clear is really needed
53 # If it's a method
54 # if callable(val):
55 # c = getattr(type(self._data), name)
56 # # Don't append self to args if classmethod/staticmethod
57 # if c is val:
58 # return lambda *a, **kw: wrap(self.__torch_function__(c, (type(self),), args=a, kwargs=kw), type(self))
59 # # Otherwise append self to args
60 # return lambda *a, **kw: wrap(
61 # #self.__torch_function__(c, (type(self),), args=(self,) + a, kwargs=kw), type(self)
62 # )
63
64 return wrap(val, type(self))
65
66 def __setattr__(self, name, value):
67 if name in self.__dict__:
68 self.__dict__[name] = value
69
70 self.used_attrs.add(name)
71 setattr(self._data, name, value)
72
73 def __setitem__(self, key, value):
74 self._data[key] = value
75
76 def __getitem__(self, key):
77 return wrap(self._data[key], type(self))
78
79 @classmethod
80 def __torch_function__(cls, func, types, args=(), kwargs=None):
81 if kwargs is None:
82 kwargs = {}
83 # Find an instance of this class in the arguments
84 args_of_this_cls = []
85 for a in args:
86 if isinstance(a, cls):
87 args_of_this_cls.append(a)
88 elif isinstance(a, collections.abc.Sequence):
89 args_of_this_cls.extend(el for el in a if isinstance(el, cls))
90 # assert len(args_of_this_cls) > 0
91 for a in args_of_this_cls:
92 a.used_calls.add(func)
93 args = unwrap(tuple(args))
94 kwargs = {k: unwrap(v) for k, v in kwargs.items()}
95
96 return wrap(func(*args, **kwargs), cls)
97
98 # TODO: `def __add__(self, other) -> Self:` when mypy release >0.991
99 def __add__(self, other):
100 return self.__unary_op__(torch.add, other)
101
102 def __mul__(self, other):
103 return self.__unary_op__(torch.mul, other)
104
105 def __sub__(self, other):
106 return self.__unary_op__(torch.sub, other)
107
108 def __truediv__(self, other):
109 return self.__unary_op__(torch.true_divide, other)
110
111 def __floordiv__(self, other):
112 return self.__unary_op__(torch.floor_divide, other)
113
114 def __ge__(self, other):
115 return self.__unary_op__(torch.ge, other)
116
117 def __gt__(self, other):
118 return self.__unary_op__(torch.gt, other)
119
120 def __lt__(self, other):
121 return self.__unary_op__(torch.lt, other)
122
123 def __le__(self, other):
124 return self.__unary_op__(torch.le, other)
125
126 def __eq__(self, other):
127 return self.__unary_op__(torch.eq, other)
128
129 def __ne__(self, other):
130 return self.__unary_op__(torch.ne, other)
131
132 def __bool__(self):
133 return self.__unary_op__(Tensor.__bool__)
134
135 def __int__(self):
136 return self.__unary_op__(Tensor.__int__)
137
138 def __neg__(self):
139 return self.__unary_op__(Tensor.negative)
140
141 def __unary_op__(self, func: Any, other=None):
142 args = (self, other) if other is not None else (self,)
143 return self.__torch_function__(func, (type(self),), args)
144
145 def __len__(self):
146 return len(self._data)
147
[end of kornia/core/tensor_wrapper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kornia/core/tensor_wrapper.py b/kornia/core/tensor_wrapper.py
--- a/kornia/core/tensor_wrapper.py
+++ b/kornia/core/tensor_wrapper.py
@@ -35,15 +35,22 @@
self.__dict__["used_attrs"] = set()
self.__dict__["used_calls"] = set()
- @property
- def data(self) -> Tensor:
- return self._data
+ def unwrap(self):
+ return unwrap(self)
+
+ def __getstate__(self):
+ return self.__dict__
+
+ def __setstate__(self, state):
+ self.__dict__.update(state)
def __repr__(self) -> str:
- return f"{self.data}"
+ return f"{self._data}"
def __getattr__(self, name):
- if name in self.__dict__:
+ if name == "data":
+ return self._data
+ elif name in self.__dict__:
return self.__dict__[name]
self.used_attrs.add(name)
| {"golden_diff": "diff --git a/kornia/core/tensor_wrapper.py b/kornia/core/tensor_wrapper.py\n--- a/kornia/core/tensor_wrapper.py\n+++ b/kornia/core/tensor_wrapper.py\n@@ -35,15 +35,22 @@\n self.__dict__[\"used_attrs\"] = set()\n self.__dict__[\"used_calls\"] = set()\n \n- @property\n- def data(self) -> Tensor:\n- return self._data\n+ def unwrap(self):\n+ return unwrap(self)\n+\n+ def __getstate__(self):\n+ return self.__dict__\n+\n+ def __setstate__(self, state):\n+ self.__dict__.update(state)\n \n def __repr__(self) -> str:\n- return f\"{self.data}\"\n+ return f\"{self._data}\"\n \n def __getattr__(self, name):\n- if name in self.__dict__:\n+ if name == \"data\":\n+ return self._data\n+ elif name in self.__dict__:\n return self.__dict__[name]\n self.used_attrs.add(name)\n", "issue": "Round trip `torch.save` / `torch.load` not working for `Hyperplane`\n### Describe the bug\r\n\r\nCrashing when trying to save and reload a `Hyperplane` using `torch`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"save_hyperplane.py\", line 16, in <module>\r\n plane = torch.load(\"./saved_plane.pt\")\r\n File \"/home/kyle/venv/lib/python3.8/site-packages/torch/serialization.py\", line 789, in load\r\n return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n File \"/home/kyle/venv/lib/python3.8/site-packages/torch/serialization.py\", line 1131, in _load\r\n result = unpickler.load()\r\n File \"/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py\", line 48, in __getattr__\r\n self.used_attrs.add(name)\r\n File \"/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py\", line 48, in __getattr__\r\n self.used_attrs.add(name)\r\n File \"/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py\", line 48, in __getattr__\r\n self.used_attrs.add(name)\r\n [Previous line repeated 993 more times]\r\nRecursionError: maximum recursion depth exceeded\r\n```\r\n\r\n### Reproduction steps\r\n\r\n```bash\r\n$ python3 -m venv venv\r\n$ source venv/bin/activate\r\n(venv) $ pip install kornia\r\n(venv) $ pip install numpy\r\n(venv) $ python save_hyperplane.py\r\n```\r\n\r\nContent of save_hyperplane.py\r\n\r\n```\r\nfrom kornia.geometry.plane import Hyperplane\r\nfrom kornia.geometry.vector import Vec3, Scalar\r\nimport torch\r\n\r\nplane = Hyperplane(\r\n Vec3(torch.tensor([0,0,1])), Scalar(torch.tensor(0.5))\r\n)\r\n\r\ntorch.save(plane, \"./saved_plane.pt\")\r\nplane = torch.load(\"./saved_plane.pt\")\r\n```\r\n\r\n### Expected behavior\r\n\r\nHyperplane is able to be saved and reloaded with `torch.save` & `torch.load` \r\n\r\n### Environment\r\n\r\n```shell\r\nCollecting environment information...\r\nPyTorch version: 1.13.1+cu117\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.16.3\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU\r\nNvidia driver version: 470.141.03\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant 
libraries:\r\n[pip3] numpy==1.24.1\r\n[pip3] torch==1.13.1\r\n[conda] Could not collect\r\n```\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "# pytorch tensor wrapper class\n# insipired by:\n# https://github.com/pytorch/pytorch/blob/591dfffa38848de54b7f5f4e49260847024c9281/test/test_overrides.py#L748\nimport collections\nfrom typing import Any\n\nimport torch\nfrom torch import Tensor\n\n# wrap inputs if necessary\n\n# TODO: promote to KORNIA_WRAP\n\n\ndef wrap(v, cls):\n if type(v) in {tuple, list}:\n return type(v)(wrap(vi, cls) for vi in v)\n\n return cls(v) if isinstance(v, Tensor) else v\n\n\n# TODO: promote to KORNIA_UNWRAP\n\n\ndef unwrap(v):\n if type(v) in {tuple, list}:\n return type(v)(unwrap(vi) for vi in v)\n\n return v._data if not isinstance(v, Tensor) else v\n\n\nclass TensorWrapper:\n def __init__(self, data: Tensor) -> None:\n self.__dict__[\"_data\"] = data\n self.__dict__[\"used_attrs\"] = set()\n self.__dict__[\"used_calls\"] = set()\n\n @property\n def data(self) -> Tensor:\n return self._data\n\n def __repr__(self) -> str:\n return f\"{self.data}\"\n\n def __getattr__(self, name):\n if name in self.__dict__:\n return self.__dict__[name]\n self.used_attrs.add(name)\n\n val = getattr(self._data, name)\n\n # NOTE: not clear is really needed\n # If it's a method\n # if callable(val):\n # c = getattr(type(self._data), name)\n # # Don't append self to args if classmethod/staticmethod\n # if c is val:\n # return lambda *a, **kw: wrap(self.__torch_function__(c, (type(self),), args=a, kwargs=kw), type(self))\n # # Otherwise append self to args\n # return lambda *a, **kw: wrap(\n # #self.__torch_function__(c, (type(self),), args=(self,) + a, kwargs=kw), type(self)\n # )\n\n return wrap(val, type(self))\n\n def __setattr__(self, name, value):\n if name in self.__dict__:\n self.__dict__[name] = value\n\n self.used_attrs.add(name)\n setattr(self._data, name, value)\n\n def __setitem__(self, key, value):\n self._data[key] = value\n\n def __getitem__(self, key):\n return wrap(self._data[key], type(self))\n\n @classmethod\n def __torch_function__(cls, func, types, args=(), kwargs=None):\n if kwargs is None:\n kwargs = {}\n # Find an instance of this class in the arguments\n args_of_this_cls = []\n for a in args:\n if isinstance(a, cls):\n args_of_this_cls.append(a)\n elif isinstance(a, collections.abc.Sequence):\n args_of_this_cls.extend(el for el in a if isinstance(el, cls))\n # assert len(args_of_this_cls) > 0\n for a in args_of_this_cls:\n a.used_calls.add(func)\n args = unwrap(tuple(args))\n kwargs = {k: unwrap(v) for k, v in kwargs.items()}\n\n return wrap(func(*args, **kwargs), cls)\n\n # TODO: `def __add__(self, other) -> Self:` when mypy release >0.991\n def __add__(self, other):\n return self.__unary_op__(torch.add, other)\n\n def __mul__(self, other):\n return self.__unary_op__(torch.mul, other)\n\n def __sub__(self, other):\n return self.__unary_op__(torch.sub, other)\n\n def __truediv__(self, other):\n return self.__unary_op__(torch.true_divide, other)\n\n def __floordiv__(self, other):\n return self.__unary_op__(torch.floor_divide, other)\n\n def __ge__(self, other):\n return self.__unary_op__(torch.ge, other)\n\n def __gt__(self, other):\n return self.__unary_op__(torch.gt, other)\n\n def __lt__(self, other):\n return self.__unary_op__(torch.lt, other)\n\n def __le__(self, other):\n return self.__unary_op__(torch.le, other)\n\n def __eq__(self, other):\n return self.__unary_op__(torch.eq, other)\n\n def __ne__(self, other):\n 
return self.__unary_op__(torch.ne, other)\n\n def __bool__(self):\n return self.__unary_op__(Tensor.__bool__)\n\n def __int__(self):\n return self.__unary_op__(Tensor.__int__)\n\n def __neg__(self):\n return self.__unary_op__(Tensor.negative)\n\n def __unary_op__(self, func: Any, other=None):\n args = (self, other) if other is not None else (self,)\n return self.__torch_function__(func, (type(self),), args)\n\n def __len__(self):\n return len(self._data)\n", "path": "kornia/core/tensor_wrapper.py"}]} | 2,906 | 240 |
gh_patches_debug_12208 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-118 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ICS Scraper error
My Home-Assistant has the following error:
`2021-12-02 06:53:42 ERROR (SyncWorker_19) [waste_collection_schedule.scraper] fetch failed for source <waste_collection_schedule.source.ics.Source object at 0x7f3785601ca0>: 'NoneType' object does not support item assignment`
I have events for this year, but not for the next. I know, that the ICS scraper is trying to get events for the next year, when it's december. I don't know if this years events are cached or not. But I assume that the issue is because of the next year scraping. With this in mind I searched the source code.
My Python is quite bad, but I assume, that the problem is in line [162](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py#L162). I think this line should look like the block [151-155](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py#L151-L155).
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py]
1 import datetime
2 import logging
3 from pathlib import Path
4
5 import requests
6 from waste_collection_schedule import Collection # type: ignore[attr-defined]
7 from waste_collection_schedule.service.ICS import ICS
8
9 TITLE = "ICS"
10 DESCRIPTION = "Source for ICS based schedules."
11 URL = None
12 TEST_CASES = {
13 "Dortmund, Dudenstr. 5": {
14 "url": "https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4"
15 },
16 "Leipzig, Sandgrubenweg 27": {
17 "url": "https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027"
18 },
19 "Ludwigsburg": {
20 "url": "https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics"
21 },
22 "Esslingen, Bahnhof": {
23 "url": "https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe"
24 },
25 "Test File": {
26 # Path is used here to allow to call the Source from any location.
27 # This is not required in a yaml configuration!
28 "file": str(Path(__file__).resolve().parents[1].joinpath("test/test.ics"))
29 },
30 "Test File (recurring)": {
31 # Path is used here to allow to call the Source from any location.
32 # This is not required in a yaml configuration!
33 "file": str(Path(__file__).resolve().parents[1].joinpath("test/recurring.ics"))
34 },
35 "München, Bahnstr. 11": {
36 "url": "https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}"
37 },
38 "Buxtehude, Am Berg": {
39 "url": "https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics"
40 },
41 # "Hausmüllinfo: ASR Chemnitz": {
42 # "url": "https://asc.hausmuell.info/ics/ics.php",
43 # "method": "POST",
44 # "params": {
45 # "hidden_id_egebiet": 439087,
46 # "input_ort": "Chemnitz",
47 # "input_str": "Straße der Nationen",
48 # "input_hnr": 2,
49 # "hidden_send_btn": "ics",
50 # # "hiddenYear": 2021,
51 # "hidden_id_ort": 10,
52 # "hidden_id_ortsteil": 0,
53 # "hidden_id_str": 17814,
54 # "hidden_id_hnr": 5538100,
55 # "hidden_kalenderart": "privat",
56 # "showBinsBio": "on",
57 # "showBinsRest": "on",
58 # "showBinsRest_rc": "on",
59 # "showBinsPapier": "on",
60 # "showBinsOrganic": "on",
61 # "showBinsXmas": "on",
62 # "showBinsDsd": "on",
63 # "showBinsProb": "on",
64 # },
65 # "year_field": "hiddenYear",
66 # },
67 "Abfall Zollernalbkreis, Ebingen": {
68 "url": "https://www.abfallkalender-zak.de",
69 "params": {
70 "city": "2,3,4",
71 "street": "3",
72 "types[]": [
73 "restmuell",
74 "gelbersack",
75 "papiertonne",
76 "biomuell",
77 "gruenabfall",
78 "schadstoffsammlung",
79 "altpapiersammlung",
80 "schrottsammlung",
81 "weihnachtsbaeume",
82 "elektrosammlung",
83 ],
84 "go_ics": "Download",
85 },
86 "year_field": "year",
87 },
88 "Detmold": {
89 "url": "https://abfuhrkalender.detmold.de/icsmaker.php",
90 "method": "GET",
91 "params": {"strid": 338},
92 "year_field": "year",
93 },
94 "EAW Rheingau Taunus": {
95 "url": "https://www.eaw-rheingau-taunus.de/abfallkalender/calendar.ics?streetid=1429",
96 "split_at": ",",
97 },
98 "Recollect, Ottawa": {
99 "url": "https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics",
100 "split_at": "\\, [and ]*",
101 },
102 "Frankfurt am Main, Achenbachstrasse 3": {
103 "url": "https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics"
104 },
105 "Erlensee, Am Haspel": {
106 "url": "https://sperrmuell.erlensee.de/?type=reminder",
107 "method": "POST",
108 "params": {
109 "street": 8,
110 "eventType[]": [27, 23, 19, 20, 21, 24, 22, 25, 26],
111 "timeframe": 23,
112 "download": "ical",
113 },
114 },
115 }
116
117
118 HEADERS = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
119 _LOGGER = logging.getLogger(__name__)
120
121
122 class Source:
123 def __init__(
124 self,
125 url=None,
126 file=None,
127 offset=None,
128 params=None,
129 year_field=None,
130 method="GET",
131 split_at=None,
132 ):
133 self._url = url
134 self._file = file
135 if bool(self._url is not None) == bool(self._file is not None):
136 raise RuntimeError("Specify either url or file")
137 self._ics = ICS(offset=offset, split_at=split_at)
138 self._params = params
139 self._year_field = year_field # replace this field in params with current year
140 self._method = method # The method to send the params
141
142 def fetch(self):
143 if self._url is not None:
144 if "{%Y}" in self._url or self._year_field is not None:
145 # url contains wildcard or params contains year field
146 now = datetime.datetime.now()
147
148 # replace year in url
149 url = self._url.replace("{%Y}", str(now.year))
150
151 # replace year in params
152 if self._year_field is not None:
153 if self._params is None:
154 raise RuntimeError("year_field specified without params")
155 self._params[self._year_field] = str(now.year)
156
157 entries = self.fetch_url(url, self._params)
158
159 if now.month == 12:
160 # also get data for next year if we are already in december
161 url = self._url.replace("{%Y}", str(now.year + 1))
162 self._params[self._year_field] = str(now.year + 1)
163
164 try:
165 entries.extend(self.fetch_url(url), self._params)
166 except Exception:
167 # ignore if fetch for next year fails
168 pass
169 return entries
170 else:
171 return self.fetch_url(self._url, self._params)
172 elif self._file is not None:
173 return self.fetch_file(self._file)
174
175 def fetch_url(self, url, params=None):
176 # get ics file
177 if self._method == "GET":
178 r = requests.get(url, params=params, headers=HEADERS)
179 elif self._method == "POST":
180 r = requests.post(url, data=params, headers=HEADERS)
181 else:
182 _LOGGER.error(
183 "Error: unknown method to fetch URL, use GET or POST; got %s"
184 % self._method
185 )
186 return "error"
187 r.encoding = "utf-8" # requests doesn't guess the encoding correctly
188
189 # check the return code
190 if not r.ok:
191 _LOGGER.error(
192 "Error: the response is not ok; need code 200, but got code %s"
193 % r.status_code
194 )
195 return "error"
196
197 return self._convert(r.text)
198
199 def fetch_file(self, file):
200 f = open(file)
201 return self._convert(f.read())
202
203 def _convert(self, data):
204 dates = self._ics.convert(data)
205
206 entries = []
207 for d in dates:
208 entries.append(Collection(d[0], d[1]))
209 return entries
210
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
@@ -159,7 +159,8 @@
if now.month == 12:
# also get data for next year if we are already in december
url = self._url.replace("{%Y}", str(now.year + 1))
- self._params[self._year_field] = str(now.year + 1)
+ if self._year_field is not None:
+ self._params[self._year_field] = str(now.year + 1)
try:
entries.extend(self.fetch_url(url), self._params)
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n@@ -159,7 +159,8 @@\n if now.month == 12:\n # also get data for next year if we are already in december\n url = self._url.replace(\"{%Y}\", str(now.year + 1))\n- self._params[self._year_field] = str(now.year + 1)\n+ if self._year_field is not None:\n+ self._params[self._year_field] = str(now.year + 1)\n \n try:\n entries.extend(self.fetch_url(url), self._params)\n", "issue": "ICS Scraper error\nMy Home-Assistant has the following error:\r\n\r\n`2021-12-02 06:53:42 ERROR (SyncWorker_19) [waste_collection_schedule.scraper] fetch failed for source <waste_collection_schedule.source.ics.Source object at 0x7f3785601ca0>: 'NoneType' object does not support item assignment`\r\n\r\nI have events for this year, but not for the next. I know, that the ICS scraper is trying to get events for the next year, when it's december. I don't know if this years events are cached or not. But I assume that the issue is because of the next year scraping. With this in mind I searched the source code.\r\nMy Python is quite bad, but I assume, that the problem is in line [162](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py#L162). I think this line should look like the block [151-155](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py#L151-L155).\n", "before_files": [{"content": "import datetime\nimport logging\nfrom pathlib import Path\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"ICS\"\nDESCRIPTION = \"Source for ICS based schedules.\"\nURL = None\nTEST_CASES = {\n \"Dortmund, Dudenstr. 5\": {\n \"url\": \"https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4\"\n },\n \"Leipzig, Sandgrubenweg 27\": {\n \"url\": \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027\"\n },\n \"Ludwigsburg\": {\n \"url\": \"https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics\"\n },\n \"Esslingen, Bahnhof\": {\n \"url\": \"https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe\"\n },\n \"Test File\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/test.ics\"))\n },\n \"Test File (recurring)\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/recurring.ics\"))\n },\n \"M\u00fcnchen, Bahnstr. 
11\": {\n \"url\": \"https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}\"\n },\n \"Buxtehude, Am Berg\": {\n \"url\": \"https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics\"\n },\n # \"Hausm\u00fcllinfo: ASR Chemnitz\": {\n # \"url\": \"https://asc.hausmuell.info/ics/ics.php\",\n # \"method\": \"POST\",\n # \"params\": {\n # \"hidden_id_egebiet\": 439087,\n # \"input_ort\": \"Chemnitz\",\n # \"input_str\": \"Stra\u00dfe der Nationen\",\n # \"input_hnr\": 2,\n # \"hidden_send_btn\": \"ics\",\n # # \"hiddenYear\": 2021,\n # \"hidden_id_ort\": 10,\n # \"hidden_id_ortsteil\": 0,\n # \"hidden_id_str\": 17814,\n # \"hidden_id_hnr\": 5538100,\n # \"hidden_kalenderart\": \"privat\",\n # \"showBinsBio\": \"on\",\n # \"showBinsRest\": \"on\",\n # \"showBinsRest_rc\": \"on\",\n # \"showBinsPapier\": \"on\",\n # \"showBinsOrganic\": \"on\",\n # \"showBinsXmas\": \"on\",\n # \"showBinsDsd\": \"on\",\n # \"showBinsProb\": \"on\",\n # },\n # \"year_field\": \"hiddenYear\",\n # },\n \"Abfall Zollernalbkreis, Ebingen\": {\n \"url\": \"https://www.abfallkalender-zak.de\",\n \"params\": {\n \"city\": \"2,3,4\",\n \"street\": \"3\",\n \"types[]\": [\n \"restmuell\",\n \"gelbersack\",\n \"papiertonne\",\n \"biomuell\",\n \"gruenabfall\",\n \"schadstoffsammlung\",\n \"altpapiersammlung\",\n \"schrottsammlung\",\n \"weihnachtsbaeume\",\n \"elektrosammlung\",\n ],\n \"go_ics\": \"Download\",\n },\n \"year_field\": \"year\",\n },\n \"Detmold\": {\n \"url\": \"https://abfuhrkalender.detmold.de/icsmaker.php\",\n \"method\": \"GET\",\n \"params\": {\"strid\": 338},\n \"year_field\": \"year\",\n },\n \"EAW Rheingau Taunus\": {\n \"url\": \"https://www.eaw-rheingau-taunus.de/abfallkalender/calendar.ics?streetid=1429\",\n \"split_at\": \",\",\n },\n \"Recollect, Ottawa\": {\n \"url\": \"https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics\",\n \"split_at\": \"\\\\, [and ]*\",\n },\n \"Frankfurt am Main, Achenbachstrasse 3\": {\n \"url\": \"https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics\"\n },\n \"Erlensee, Am Haspel\": {\n \"url\": \"https://sperrmuell.erlensee.de/?type=reminder\",\n \"method\": \"POST\",\n \"params\": {\n \"street\": 8,\n \"eventType[]\": [27, 23, 19, 20, 21, 24, 22, 25, 26],\n \"timeframe\": 23,\n \"download\": \"ical\",\n },\n },\n}\n\n\nHEADERS = {\"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64)\"}\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(\n self,\n url=None,\n file=None,\n offset=None,\n params=None,\n year_field=None,\n method=\"GET\",\n split_at=None,\n ):\n self._url = url\n self._file = 
file\n if bool(self._url is not None) == bool(self._file is not None):\n raise RuntimeError(\"Specify either url or file\")\n self._ics = ICS(offset=offset, split_at=split_at)\n self._params = params\n self._year_field = year_field # replace this field in params with current year\n self._method = method # The method to send the params\n\n def fetch(self):\n if self._url is not None:\n if \"{%Y}\" in self._url or self._year_field is not None:\n # url contains wildcard or params contains year field\n now = datetime.datetime.now()\n\n # replace year in url\n url = self._url.replace(\"{%Y}\", str(now.year))\n\n # replace year in params\n if self._year_field is not None:\n if self._params is None:\n raise RuntimeError(\"year_field specified without params\")\n self._params[self._year_field] = str(now.year)\n\n entries = self.fetch_url(url, self._params)\n\n if now.month == 12:\n # also get data for next year if we are already in december\n url = self._url.replace(\"{%Y}\", str(now.year + 1))\n self._params[self._year_field] = str(now.year + 1)\n\n try:\n entries.extend(self.fetch_url(url), self._params)\n except Exception:\n # ignore if fetch for next year fails\n pass\n return entries\n else:\n return self.fetch_url(self._url, self._params)\n elif self._file is not None:\n return self.fetch_file(self._file)\n\n def fetch_url(self, url, params=None):\n # get ics file\n if self._method == \"GET\":\n r = requests.get(url, params=params, headers=HEADERS)\n elif self._method == \"POST\":\n r = requests.post(url, data=params, headers=HEADERS)\n else:\n _LOGGER.error(\n \"Error: unknown method to fetch URL, use GET or POST; got %s\"\n % self._method\n )\n return \"error\"\n r.encoding = \"utf-8\" # requests doesn't guess the encoding correctly\n\n # check the return code\n if not r.ok:\n _LOGGER.error(\n \"Error: the response is not ok; need code 200, but got code %s\"\n % r.status_code\n )\n return \"error\"\n\n return self._convert(r.text)\n\n def fetch_file(self, file):\n f = open(file)\n return self._convert(f.read())\n\n def _convert(self, data):\n dates = self._ics.convert(data)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py"}]} | 3,915 | 187 |
gh_patches_debug_35619 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-616 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Uncaught 404s in record viewsets and pagination classes
## Description
The record viewset, column viewset and paignation classes regularly call `Table.objects.get(id=table_pk)`, which throws a `mathesar.models.Table.DoesNotExist: Table matching query does not exist.` when an invalid table id is passed.
To recreate, run `client.get(f'/api/v0/tables/3000/records/')`.
## Expected behavior
We should ensure that the table exists before querying, or catch the `DoesNotExist` error after querying. We should also include tests for table 404s.
This is probably best done after #488 is merged, as it includes a function to do exactly this.
</issue>
<code>
[start of mathesar/api/pagination.py]
1 from collections import OrderedDict
2
3 from rest_framework.pagination import LimitOffsetPagination
4 from rest_framework.response import Response
5
6
7 class DefaultLimitOffsetPagination(LimitOffsetPagination):
8 default_limit = 50
9 max_limit = 500
10
11 def get_paginated_response(self, data):
12 return Response(OrderedDict([
13 ('count', self.count),
14 ('results', data)
15 ]))
16
17
18 class ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):
19
20 def paginate_queryset(self, queryset, request, table_id):
21 self.limit = self.get_limit(request)
22 if self.limit is None:
23 self.limit = self.default_limit
24 self.offset = self.get_offset(request)
25 table = queryset.get(id=table_id)
26 self.count = len(table.sa_columns)
27 self.request = request
28 return list(table.sa_columns)[self.offset:self.offset + self.limit]
29
30
31 class TableLimitOffsetPagination(DefaultLimitOffsetPagination):
32
33 def paginate_queryset(self, queryset, request, table_id,
34 filters=[], order_by=[]):
35 self.limit = self.get_limit(request)
36 if self.limit is None:
37 self.limit = self.default_limit
38 self.offset = self.get_offset(request)
39 # TODO: Cache count value somewhere, since calculating it is expensive.
40 table = queryset.get(id=table_id)
41 self.count = table.sa_num_records(filters=filters)
42 self.request = request
43
44 return table.get_records(
45 self.limit, self.offset, filters=filters, order_by=order_by,
46 )
47
48
49 class TableLimitOffsetGroupPagination(TableLimitOffsetPagination):
50 def get_paginated_response(self, data):
51 return Response(OrderedDict([
52 ('count', self.count),
53 ('group_count', self.group_count),
54 ('results', data)
55 ]))
56
57 def paginate_queryset(self, queryset, request, table_id,
58 filters=[], order_by=[], group_count_by=[]):
59 records = super().paginate_queryset(
60 queryset, request, table_id, filters=filters, order_by=order_by
61 )
62
63 table = queryset.get(id=table_id)
64 if group_count_by:
65 group_count = table.get_group_counts(
66 group_count_by, self.limit, self.offset,
67 filters=filters, order_by=order_by
68 )
69 # Convert the tuple keys into strings so it can be converted to JSON
70 group_count = [{"values": list(cols), "count": count}
71 for cols, count in group_count.items()]
72 self.group_count = {
73 'group_count_by': group_count_by,
74 'results': group_count,
75 }
76 else:
77 self.group_count = {
78 'group_count_by': None,
79 'results': None,
80 }
81
82 return records
83
[end of mathesar/api/pagination.py]
[start of mathesar/api/utils.py]
1 from rest_framework.exceptions import NotFound
2
3 from mathesar.models import Table
4
5
6 def get_table_or_404(pk):
7 try:
8 table = Table.objects.get(id=pk)
9 except Table.DoesNotExist:
10 raise NotFound
11 return table
12
[end of mathesar/api/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mathesar/api/pagination.py b/mathesar/api/pagination.py
--- a/mathesar/api/pagination.py
+++ b/mathesar/api/pagination.py
@@ -3,6 +3,8 @@
from rest_framework.pagination import LimitOffsetPagination
from rest_framework.response import Response
+from mathesar.api.utils import get_table_or_404
+
class DefaultLimitOffsetPagination(LimitOffsetPagination):
default_limit = 50
@@ -22,7 +24,7 @@
if self.limit is None:
self.limit = self.default_limit
self.offset = self.get_offset(request)
- table = queryset.get(id=table_id)
+ table = get_table_or_404(pk=table_id)
self.count = len(table.sa_columns)
self.request = request
return list(table.sa_columns)[self.offset:self.offset + self.limit]
@@ -37,7 +39,7 @@
self.limit = self.default_limit
self.offset = self.get_offset(request)
# TODO: Cache count value somewhere, since calculating it is expensive.
- table = queryset.get(id=table_id)
+ table = get_table_or_404(pk=table_id)
self.count = table.sa_num_records(filters=filters)
self.request = request
@@ -60,7 +62,7 @@
queryset, request, table_id, filters=filters, order_by=order_by
)
- table = queryset.get(id=table_id)
+ table = get_table_or_404(pk=table_id)
if group_count_by:
group_count = table.get_group_counts(
group_count_by, self.limit, self.offset,
diff --git a/mathesar/api/utils.py b/mathesar/api/utils.py
--- a/mathesar/api/utils.py
+++ b/mathesar/api/utils.py
@@ -4,6 +4,13 @@
def get_table_or_404(pk):
+ """
+ Get table if it exists, otherwise throws a DRF NotFound error.
+ Args:
+ pk: id of table
+ Returns:
+ table: return the table based on a specific id
+ """
try:
table = Table.objects.get(id=pk)
except Table.DoesNotExist:
| {"golden_diff": "diff --git a/mathesar/api/pagination.py b/mathesar/api/pagination.py\n--- a/mathesar/api/pagination.py\n+++ b/mathesar/api/pagination.py\n@@ -3,6 +3,8 @@\n from rest_framework.pagination import LimitOffsetPagination\n from rest_framework.response import Response\n \n+from mathesar.api.utils import get_table_or_404\n+\n \n class DefaultLimitOffsetPagination(LimitOffsetPagination):\n default_limit = 50\n@@ -22,7 +24,7 @@\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n- table = queryset.get(id=table_id)\n+ table = get_table_or_404(pk=table_id)\n self.count = len(table.sa_columns)\n self.request = request\n return list(table.sa_columns)[self.offset:self.offset + self.limit]\n@@ -37,7 +39,7 @@\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n # TODO: Cache count value somewhere, since calculating it is expensive.\n- table = queryset.get(id=table_id)\n+ table = get_table_or_404(pk=table_id)\n self.count = table.sa_num_records(filters=filters)\n self.request = request\n \n@@ -60,7 +62,7 @@\n queryset, request, table_id, filters=filters, order_by=order_by\n )\n \n- table = queryset.get(id=table_id)\n+ table = get_table_or_404(pk=table_id)\n if group_count_by:\n group_count = table.get_group_counts(\n group_count_by, self.limit, self.offset,\ndiff --git a/mathesar/api/utils.py b/mathesar/api/utils.py\n--- a/mathesar/api/utils.py\n+++ b/mathesar/api/utils.py\n@@ -4,6 +4,13 @@\n \n \n def get_table_or_404(pk):\n+ \"\"\"\n+ Get table if it exists, otherwise throws a DRF NotFound error.\n+ Args:\n+ pk: id of table\n+ Returns:\n+ table: return the table based on a specific id\n+ \"\"\"\n try:\n table = Table.objects.get(id=pk)\n except Table.DoesNotExist:\n", "issue": "Uncaught 404s in record viewsets and pagination classes\n## Description\r\nThe record viewset, column viewset and paignation classes regularly call `Table.objects.get(id=table_pk)`, which throws a `mathesar.models.Table.DoesNotExist: Table matching query does not exist.` when an invalid table id is passed.\r\n\r\nTo recreate, run `client.get(f'/api/v0/tables/3000/records/')`.\r\n\r\n\r\n## Expected behavior\r\nWe should ensure that the table exists before querying, or catch the `DoesNotExist` error after querying. We should also include tests for table 404s. 
\r\n\r\nThis is probably best done after #488 is merged, as it includes a function to do exactly this.\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom rest_framework.pagination import LimitOffsetPagination\nfrom rest_framework.response import Response\n\n\nclass DefaultLimitOffsetPagination(LimitOffsetPagination):\n default_limit = 50\n max_limit = 500\n\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('results', data)\n ]))\n\n\nclass ColumnLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n table = queryset.get(id=table_id)\n self.count = len(table.sa_columns)\n self.request = request\n return list(table.sa_columns)[self.offset:self.offset + self.limit]\n\n\nclass TableLimitOffsetPagination(DefaultLimitOffsetPagination):\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[]):\n self.limit = self.get_limit(request)\n if self.limit is None:\n self.limit = self.default_limit\n self.offset = self.get_offset(request)\n # TODO: Cache count value somewhere, since calculating it is expensive.\n table = queryset.get(id=table_id)\n self.count = table.sa_num_records(filters=filters)\n self.request = request\n\n return table.get_records(\n self.limit, self.offset, filters=filters, order_by=order_by,\n )\n\n\nclass TableLimitOffsetGroupPagination(TableLimitOffsetPagination):\n def get_paginated_response(self, data):\n return Response(OrderedDict([\n ('count', self.count),\n ('group_count', self.group_count),\n ('results', data)\n ]))\n\n def paginate_queryset(self, queryset, request, table_id,\n filters=[], order_by=[], group_count_by=[]):\n records = super().paginate_queryset(\n queryset, request, table_id, filters=filters, order_by=order_by\n )\n\n table = queryset.get(id=table_id)\n if group_count_by:\n group_count = table.get_group_counts(\n group_count_by, self.limit, self.offset,\n filters=filters, order_by=order_by\n )\n # Convert the tuple keys into strings so it can be converted to JSON\n group_count = [{\"values\": list(cols), \"count\": count}\n for cols, count in group_count.items()]\n self.group_count = {\n 'group_count_by': group_count_by,\n 'results': group_count,\n }\n else:\n self.group_count = {\n 'group_count_by': None,\n 'results': None,\n }\n\n return records\n", "path": "mathesar/api/pagination.py"}, {"content": "from rest_framework.exceptions import NotFound\n\nfrom mathesar.models import Table\n\n\ndef get_table_or_404(pk):\n try:\n table = Table.objects.get(id=pk)\n except Table.DoesNotExist:\n raise NotFound\n return table\n", "path": "mathesar/api/utils.py"}]} | 1,506 | 487 |
gh_patches_debug_31929 | rasdani/github-patches | git_diff | fossasia__open-event-server-1422 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check if datetime fields exists in Event API payload before using them
Currently when string datetime's are converted to python datetime in Event API, then the keys are not being checked for existence. So if an "optional" field is missing from payload, it results in `KeyError`.
Fixing this issue should also fix https://github.com/loklak/loklak_server/issues/642
</issue>
<code>
[start of open_event/api/events.py]
1 from flask.ext.restplus import Resource, Namespace, reqparse
2 from flask import g
3
4 from open_event.models.event import Event as EventModel
5 from open_event.models.social_link import SocialLink as SocialLinkModel
6 from open_event.models.users_events_roles import UsersEventsRoles
7 from open_event.models.role import Role
8 from open_event.models.user import ORGANIZER
9 from open_event.helpers.data import save_to_db, update_version, record_activity
10
11 from .helpers.helpers import get_paginated_list, requires_auth, parse_args
12 from .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, \
13 PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES, BaseDAO, ServiceDAO
14 from .helpers import custom_fields as fields
15 from helpers.special_fields import EventTypeField, EventTopicField, \
16 EventPrivacyField, EventSubTopicField
17
18 api = Namespace('events', description='Events')
19
20 EVENT_CREATOR = api.model('EventCreator', {
21 'id': fields.Integer(),
22 'email': fields.Email()
23 })
24
25 SOCIAL_LINK = api.model('SocialLink', {
26 'id': fields.Integer(),
27 'name': fields.String(),
28 'link': fields.String()
29 })
30
31 SOCIAL_LINK_POST = api.clone('SocialLinkPost', SOCIAL_LINK)
32 del SOCIAL_LINK_POST['id']
33
34 EVENT = api.model('Event', {
35 'id': fields.Integer(required=True),
36 'name': fields.String(required=True),
37 'email': fields.Email(),
38 'logo': fields.Upload(),
39 'start_time': fields.DateTime(required=True),
40 'end_time': fields.DateTime(required=True),
41 'timezone': fields.String(),
42 'latitude': fields.Float(),
43 'longitude': fields.Float(),
44 'event_url': fields.Uri(),
45 'background_url': fields.Upload(),
46 'description': fields.String(),
47 'location_name': fields.String(),
48 'organizer_name': fields.String(),
49 'organizer_description': fields.String(),
50 'state': fields.String(),
51 'closing_datetime': fields.DateTime(),
52 'type': EventTypeField(),
53 'topic': EventTopicField(),
54 'sub_topic': EventSubTopicField(),
55 'privacy': EventPrivacyField(),
56 'ticket_url': fields.Uri(),
57 'creator': fields.Nested(EVENT_CREATOR, allow_null=True),
58 'schedule_published_on': fields.DateTime(),
59 'code_of_conduct': fields.String(),
60 'social_links': fields.List(fields.Nested(SOCIAL_LINK), attribute='social_link')
61 })
62
63 EVENT_PAGINATED = api.clone('EventPaginated', PAGINATED_MODEL, {
64 'results': fields.List(fields.Nested(EVENT))
65 })
66
67 EVENT_POST = api.clone('EventPost', EVENT)
68 del EVENT_POST['id']
69 del EVENT_POST['creator']
70 del EVENT_POST['social_links']
71
72
73
74 # ###################
75 # Data Access Objects
76 # ###################
77
78
79 class SocialLinkDAO(ServiceDAO):
80 """
81 Social Link DAO
82 """
83 pass
84
85
86 class EventDAO(BaseDAO):
87 """
88 Event DAO
89 """
90
91 def fix_payload(self, data):
92 """
93 Fixes the payload data.
94 Here converts string time from datetime obj
95 """
96 data['start_time'] = EVENT_POST['start_time'].from_str(data['start_time'])
97 data['end_time'] = EVENT_POST['end_time'].from_str(data['end_time'])
98 data['closing_datetime'] = EVENT_POST['closing_datetime'].from_str(
99 data['closing_datetime'])
100 data['schedule_published_on'] = EVENT_POST['schedule_published_on'].from_str(
101 data['schedule_published_on'])
102 return data
103
104 def create(self, data, url):
105 data = self.validate(data)
106 payload = self.fix_payload(data)
107 new_event = self.model(**payload)
108 new_event.creator = g.user
109 save_to_db(new_event, "Event saved")
110 # set organizer
111 role = Role.query.filter_by(name=ORGANIZER).first()
112 uer = UsersEventsRoles(g.user, new_event, role)
113 save_to_db(uer, 'UER saved')
114 update_version(
115 event_id=new_event.id,
116 is_created=True,
117 column_to_increment="event_ver"
118 )
119 # Return created resource with a 201 status code and its Location
120 # (url) in the header.
121 resource_location = url + '/' + str(new_event.id)
122 return self.get(new_event.id), 201, {'Location': resource_location}
123
124 def update(self, event_id, data):
125 data = self.validate(data)
126 payload = self.fix_payload(data)
127 return BaseDAO.update(self, event_id, payload, validate=False)
128
129
130 LinkDAO = SocialLinkDAO(SocialLinkModel, SOCIAL_LINK_POST)
131 DAO = EventDAO(EventModel, EVENT_POST)
132
133 # DEFINE PARAMS
134
135 EVENT_PARAMS = {
136 'location': {
137 'type': str
138 },
139 'contains': {
140 'description': 'Contains the string in name and description',
141 'type': str
142 },
143 'state': {
144 'type': str
145 },
146 'privacy': {
147 'type': str
148 },
149 'type': {
150 'type': str
151 },
152 'topic': {
153 'type': str
154 },
155 'sub_topic': {
156 'type': str
157 },
158 'start_time_gt': {},
159 'start_time_lt': {},
160 'end_time_gt': {},
161 'end_time_lt': {},
162 'time_period': {
163 'type': str
164 }
165 }
166
167
168 # DEFINE RESOURCES
169
170
171 class EventResource():
172 """
173 Event Resource Base class
174 """
175 event_parser = reqparse.RequestParser()
176 event_parser.add_argument('location', type=str, dest='__event_location')
177 event_parser.add_argument('contains', type=str, dest='__event_contains')
178 event_parser.add_argument('state', type=str)
179 event_parser.add_argument('privacy', type=str)
180 event_parser.add_argument('type', type=str)
181 event_parser.add_argument('topic', type=str)
182 event_parser.add_argument('sub_topic', type=str)
183 event_parser.add_argument('start_time_gt', dest='__event_start_time_gt')
184 event_parser.add_argument('start_time_lt', dest='__event_start_time_lt')
185 event_parser.add_argument('end_time_gt', dest='__event_end_time_gt')
186 event_parser.add_argument('end_time_lt', dest='__event_end_time_lt')
187 event_parser.add_argument('time_period', type=str, dest='__event_time_period')
188
189
190 @api.route('/<int:event_id>')
191 @api.param('event_id')
192 @api.response(404, 'Event not found')
193 class Event(Resource):
194 @api.doc('get_event')
195 @api.marshal_with(EVENT)
196 def get(self, event_id):
197 """Fetch an event given its id"""
198 return DAO.get(event_id)
199
200 @requires_auth
201 @api.doc('delete_event')
202 @api.marshal_with(EVENT)
203 def delete(self, event_id):
204 """Delete an event given its id"""
205 event = DAO.delete(event_id)
206 record_activity('delete_event', event_id=event_id)
207 return event
208
209 @requires_auth
210 @api.doc('update_event', responses=PUT_RESPONSES)
211 @api.marshal_with(EVENT)
212 @api.expect(EVENT_POST)
213 def put(self, event_id):
214 """Update an event given its id"""
215 event = DAO.update(event_id, self.api.payload)
216 record_activity('update_event', event_id=event_id)
217 return event
218
219
220 @api.route('')
221 class EventList(Resource, EventResource):
222 @api.doc('list_events', params=EVENT_PARAMS)
223 @api.marshal_list_with(EVENT)
224 def get(self):
225 """List all events"""
226 return DAO.list(**parse_args(self.event_parser))
227
228 @requires_auth
229 @api.doc('create_event', responses=POST_RESPONSES)
230 @api.marshal_with(EVENT)
231 @api.expect(EVENT_POST)
232 def post(self):
233 """Create an event"""
234 item = DAO.create(self.api.payload, self.api.url_for(self))
235 record_activity('create_event', event_id=item[0].id)
236 return item
237
238
239 @api.route('/page')
240 class EventListPaginated(Resource, PaginatedResourceBase, EventResource):
241 @api.doc('list_events_paginated', params=PAGE_PARAMS)
242 @api.doc(params=EVENT_PARAMS)
243 @api.marshal_with(EVENT_PAGINATED)
244 def get(self):
245 """List events in a paginated manner"""
246 args = self.parser.parse_args()
247 return get_paginated_list(
248 EventModel, args=args,
249 **parse_args(self.event_parser)
250 )
251
252
253 @api.route('/<int:event_id>/links')
254 @api.param('event_id')
255 class SocialLinkList(Resource):
256 @api.doc('list_social_links')
257 @api.marshal_list_with(SOCIAL_LINK)
258 def get(self, event_id):
259 """List all social links"""
260 return LinkDAO.list(event_id)
261
262 @requires_auth
263 @api.doc('create_social_link', responses=POST_RESPONSES)
264 @api.marshal_with(SOCIAL_LINK_POST)
265 @api.expect(SOCIAL_LINK_POST)
266 def post(self, event_id):
267 """Create a social link"""
268 return LinkDAO.create(
269 event_id,
270 self.api.payload,
271 self.api.url_for(self, event_id=event_id)
272 )
273
274
275 @api.route('/<int:event_id>/links/<int:link_id>')
276 class SocialLink(Resource):
277 @requires_auth
278 @api.doc('delete_social_link')
279 @api.marshal_with(SOCIAL_LINK)
280 def delete(self, event_id, link_id):
281 """Delete a social link given its id"""
282 return LinkDAO.delete(event_id, link_id)
283
284 @requires_auth
285 @api.doc('update_social_link', responses=PUT_RESPONSES)
286 @api.marshal_with(SOCIAL_LINK_POST)
287 @api.expect(SOCIAL_LINK_POST)
288 def put(self, event_id, link_id):
289 """Update a social link given its id"""
290 return LinkDAO.update(event_id, link_id, self.api.payload)
291
[end of open_event/api/events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/open_event/api/events.py b/open_event/api/events.py
--- a/open_event/api/events.py
+++ b/open_event/api/events.py
@@ -70,7 +70,6 @@
del EVENT_POST['social_links']
-
# ###################
# Data Access Objects
# ###################
@@ -93,12 +92,12 @@
Fixes the payload data.
Here converts string time from datetime obj
"""
- data['start_time'] = EVENT_POST['start_time'].from_str(data['start_time'])
- data['end_time'] = EVENT_POST['end_time'].from_str(data['end_time'])
- data['closing_datetime'] = EVENT_POST['closing_datetime'].from_str(
- data['closing_datetime'])
- data['schedule_published_on'] = EVENT_POST['schedule_published_on'].from_str(
- data['schedule_published_on'])
+ datetime_fields = [
+ 'start_time', 'end_time', 'closing_datetime',
+ 'schedule_published_on'
+ ]
+ for i in datetime_fields:
+ data[i] = EVENT_POST[i].from_str(data.get(i))
return data
def create(self, data, url):
@@ -133,35 +132,20 @@
# DEFINE PARAMS
EVENT_PARAMS = {
- 'location': {
- 'type': str
- },
+ 'location': {},
'contains': {
- 'description': 'Contains the string in name and description',
- 'type': str
- },
- 'state': {
- 'type': str
- },
- 'privacy': {
- 'type': str
- },
- 'type': {
- 'type': str
- },
- 'topic': {
- 'type': str
- },
- 'sub_topic': {
- 'type': str
+ 'description': 'Contains the string in name and description'
},
+ 'state': {},
+ 'privacy': {},
+ 'type': {},
+ 'topic': {},
+ 'sub_topic': {},
'start_time_gt': {},
'start_time_lt': {},
'end_time_gt': {},
'end_time_lt': {},
- 'time_period': {
- 'type': str
- }
+ 'time_period': {}
}
| {"golden_diff": "diff --git a/open_event/api/events.py b/open_event/api/events.py\n--- a/open_event/api/events.py\n+++ b/open_event/api/events.py\n@@ -70,7 +70,6 @@\n del EVENT_POST['social_links']\n \n \n-\n # ###################\n # Data Access Objects\n # ###################\n@@ -93,12 +92,12 @@\n Fixes the payload data.\n Here converts string time from datetime obj\n \"\"\"\n- data['start_time'] = EVENT_POST['start_time'].from_str(data['start_time'])\n- data['end_time'] = EVENT_POST['end_time'].from_str(data['end_time'])\n- data['closing_datetime'] = EVENT_POST['closing_datetime'].from_str(\n- data['closing_datetime'])\n- data['schedule_published_on'] = EVENT_POST['schedule_published_on'].from_str(\n- data['schedule_published_on'])\n+ datetime_fields = [\n+ 'start_time', 'end_time', 'closing_datetime',\n+ 'schedule_published_on'\n+ ]\n+ for i in datetime_fields:\n+ data[i] = EVENT_POST[i].from_str(data.get(i))\n return data\n \n def create(self, data, url):\n@@ -133,35 +132,20 @@\n # DEFINE PARAMS\n \n EVENT_PARAMS = {\n- 'location': {\n- 'type': str\n- },\n+ 'location': {},\n 'contains': {\n- 'description': 'Contains the string in name and description',\n- 'type': str\n- },\n- 'state': {\n- 'type': str\n- },\n- 'privacy': {\n- 'type': str\n- },\n- 'type': {\n- 'type': str\n- },\n- 'topic': {\n- 'type': str\n- },\n- 'sub_topic': {\n- 'type': str\n+ 'description': 'Contains the string in name and description'\n },\n+ 'state': {},\n+ 'privacy': {},\n+ 'type': {},\n+ 'topic': {},\n+ 'sub_topic': {},\n 'start_time_gt': {},\n 'start_time_lt': {},\n 'end_time_gt': {},\n 'end_time_lt': {},\n- 'time_period': {\n- 'type': str\n- }\n+ 'time_period': {}\n }\n", "issue": "Check if datetime fields exists in Event API payload before using them\nCurrently when string datetime's are converted to python datetime in Event API, then the keys are not being checked for existence. So if an \"optional\" field is missing from payload, it results in `KeyError`. 
\n\nFixing this issue should also fix https://github.com/loklak/loklak_server/issues/642\n\n", "before_files": [{"content": "from flask.ext.restplus import Resource, Namespace, reqparse\nfrom flask import g\n\nfrom open_event.models.event import Event as EventModel\nfrom open_event.models.social_link import SocialLink as SocialLinkModel\nfrom open_event.models.users_events_roles import UsersEventsRoles\nfrom open_event.models.role import Role\nfrom open_event.models.user import ORGANIZER\nfrom open_event.helpers.data import save_to_db, update_version, record_activity\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, parse_args\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES, BaseDAO, ServiceDAO\nfrom .helpers import custom_fields as fields\nfrom helpers.special_fields import EventTypeField, EventTopicField, \\\n EventPrivacyField, EventSubTopicField\n\napi = Namespace('events', description='Events')\n\nEVENT_CREATOR = api.model('EventCreator', {\n 'id': fields.Integer(),\n 'email': fields.Email()\n})\n\nSOCIAL_LINK = api.model('SocialLink', {\n 'id': fields.Integer(),\n 'name': fields.String(),\n 'link': fields.String()\n})\n\nSOCIAL_LINK_POST = api.clone('SocialLinkPost', SOCIAL_LINK)\ndel SOCIAL_LINK_POST['id']\n\nEVENT = api.model('Event', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(required=True),\n 'email': fields.Email(),\n 'logo': fields.Upload(),\n 'start_time': fields.DateTime(required=True),\n 'end_time': fields.DateTime(required=True),\n 'timezone': fields.String(),\n 'latitude': fields.Float(),\n 'longitude': fields.Float(),\n 'event_url': fields.Uri(),\n 'background_url': fields.Upload(),\n 'description': fields.String(),\n 'location_name': fields.String(),\n 'organizer_name': fields.String(),\n 'organizer_description': fields.String(),\n 'state': fields.String(),\n 'closing_datetime': fields.DateTime(),\n 'type': EventTypeField(),\n 'topic': EventTopicField(),\n 'sub_topic': EventSubTopicField(),\n 'privacy': EventPrivacyField(),\n 'ticket_url': fields.Uri(),\n 'creator': fields.Nested(EVENT_CREATOR, allow_null=True),\n 'schedule_published_on': fields.DateTime(),\n 'code_of_conduct': fields.String(),\n 'social_links': fields.List(fields.Nested(SOCIAL_LINK), attribute='social_link')\n})\n\nEVENT_PAGINATED = api.clone('EventPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(EVENT))\n})\n\nEVENT_POST = api.clone('EventPost', EVENT)\ndel EVENT_POST['id']\ndel EVENT_POST['creator']\ndel EVENT_POST['social_links']\n\n\n\n# ###################\n# Data Access Objects\n# ###################\n\n\nclass SocialLinkDAO(ServiceDAO):\n \"\"\"\n Social Link DAO\n \"\"\"\n pass\n\n\nclass EventDAO(BaseDAO):\n \"\"\"\n Event DAO\n \"\"\"\n\n def fix_payload(self, data):\n \"\"\"\n Fixes the payload data.\n Here converts string time from datetime obj\n \"\"\"\n data['start_time'] = EVENT_POST['start_time'].from_str(data['start_time'])\n data['end_time'] = EVENT_POST['end_time'].from_str(data['end_time'])\n data['closing_datetime'] = EVENT_POST['closing_datetime'].from_str(\n data['closing_datetime'])\n data['schedule_published_on'] = EVENT_POST['schedule_published_on'].from_str(\n data['schedule_published_on'])\n return data\n\n def create(self, data, url):\n data = self.validate(data)\n payload = self.fix_payload(data)\n new_event = self.model(**payload)\n new_event.creator = g.user\n save_to_db(new_event, \"Event saved\")\n # set organizer\n role = 
Role.query.filter_by(name=ORGANIZER).first()\n uer = UsersEventsRoles(g.user, new_event, role)\n save_to_db(uer, 'UER saved')\n update_version(\n event_id=new_event.id,\n is_created=True,\n column_to_increment=\"event_ver\"\n )\n # Return created resource with a 201 status code and its Location\n # (url) in the header.\n resource_location = url + '/' + str(new_event.id)\n return self.get(new_event.id), 201, {'Location': resource_location}\n\n def update(self, event_id, data):\n data = self.validate(data)\n payload = self.fix_payload(data)\n return BaseDAO.update(self, event_id, payload, validate=False)\n\n\nLinkDAO = SocialLinkDAO(SocialLinkModel, SOCIAL_LINK_POST)\nDAO = EventDAO(EventModel, EVENT_POST)\n\n# DEFINE PARAMS\n\nEVENT_PARAMS = {\n 'location': {\n 'type': str\n },\n 'contains': {\n 'description': 'Contains the string in name and description',\n 'type': str\n },\n 'state': {\n 'type': str\n },\n 'privacy': {\n 'type': str\n },\n 'type': {\n 'type': str\n },\n 'topic': {\n 'type': str\n },\n 'sub_topic': {\n 'type': str\n },\n 'start_time_gt': {},\n 'start_time_lt': {},\n 'end_time_gt': {},\n 'end_time_lt': {},\n 'time_period': {\n 'type': str\n }\n}\n\n\n# DEFINE RESOURCES\n\n\nclass EventResource():\n \"\"\"\n Event Resource Base class\n \"\"\"\n event_parser = reqparse.RequestParser()\n event_parser.add_argument('location', type=str, dest='__event_location')\n event_parser.add_argument('contains', type=str, dest='__event_contains')\n event_parser.add_argument('state', type=str)\n event_parser.add_argument('privacy', type=str)\n event_parser.add_argument('type', type=str)\n event_parser.add_argument('topic', type=str)\n event_parser.add_argument('sub_topic', type=str)\n event_parser.add_argument('start_time_gt', dest='__event_start_time_gt')\n event_parser.add_argument('start_time_lt', dest='__event_start_time_lt')\n event_parser.add_argument('end_time_gt', dest='__event_end_time_gt')\n event_parser.add_argument('end_time_lt', dest='__event_end_time_lt')\n event_parser.add_argument('time_period', type=str, dest='__event_time_period')\n\n\[email protected]('/<int:event_id>')\[email protected]('event_id')\[email protected](404, 'Event not found')\nclass Event(Resource):\n @api.doc('get_event')\n @api.marshal_with(EVENT)\n def get(self, event_id):\n \"\"\"Fetch an event given its id\"\"\"\n return DAO.get(event_id)\n\n @requires_auth\n @api.doc('delete_event')\n @api.marshal_with(EVENT)\n def delete(self, event_id):\n \"\"\"Delete an event given its id\"\"\"\n event = DAO.delete(event_id)\n record_activity('delete_event', event_id=event_id)\n return event\n\n @requires_auth\n @api.doc('update_event', responses=PUT_RESPONSES)\n @api.marshal_with(EVENT)\n @api.expect(EVENT_POST)\n def put(self, event_id):\n \"\"\"Update an event given its id\"\"\"\n event = DAO.update(event_id, self.api.payload)\n record_activity('update_event', event_id=event_id)\n return event\n\n\[email protected]('')\nclass EventList(Resource, EventResource):\n @api.doc('list_events', params=EVENT_PARAMS)\n @api.marshal_list_with(EVENT)\n def get(self):\n \"\"\"List all events\"\"\"\n return DAO.list(**parse_args(self.event_parser))\n\n @requires_auth\n @api.doc('create_event', responses=POST_RESPONSES)\n @api.marshal_with(EVENT)\n @api.expect(EVENT_POST)\n def post(self):\n \"\"\"Create an event\"\"\"\n item = DAO.create(self.api.payload, self.api.url_for(self))\n record_activity('create_event', event_id=item[0].id)\n return item\n\n\[email protected]('/page')\nclass EventListPaginated(Resource, 
PaginatedResourceBase, EventResource):\n @api.doc('list_events_paginated', params=PAGE_PARAMS)\n @api.doc(params=EVENT_PARAMS)\n @api.marshal_with(EVENT_PAGINATED)\n def get(self):\n \"\"\"List events in a paginated manner\"\"\"\n args = self.parser.parse_args()\n return get_paginated_list(\n EventModel, args=args,\n **parse_args(self.event_parser)\n )\n\n\[email protected]('/<int:event_id>/links')\[email protected]('event_id')\nclass SocialLinkList(Resource):\n @api.doc('list_social_links')\n @api.marshal_list_with(SOCIAL_LINK)\n def get(self, event_id):\n \"\"\"List all social links\"\"\"\n return LinkDAO.list(event_id)\n\n @requires_auth\n @api.doc('create_social_link', responses=POST_RESPONSES)\n @api.marshal_with(SOCIAL_LINK_POST)\n @api.expect(SOCIAL_LINK_POST)\n def post(self, event_id):\n \"\"\"Create a social link\"\"\"\n return LinkDAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/<int:event_id>/links/<int:link_id>')\nclass SocialLink(Resource):\n @requires_auth\n @api.doc('delete_social_link')\n @api.marshal_with(SOCIAL_LINK)\n def delete(self, event_id, link_id):\n \"\"\"Delete a social link given its id\"\"\"\n return LinkDAO.delete(event_id, link_id)\n\n @requires_auth\n @api.doc('update_social_link', responses=PUT_RESPONSES)\n @api.marshal_with(SOCIAL_LINK_POST)\n @api.expect(SOCIAL_LINK_POST)\n def put(self, event_id, link_id):\n \"\"\"Update a social link given its id\"\"\"\n return LinkDAO.update(event_id, link_id, self.api.payload)\n", "path": "open_event/api/events.py"}]} | 3,528 | 509 |
gh_patches_debug_3160 | rasdani/github-patches | git_diff | ipython__ipython-7560 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Displaying a widget using displayhook produces misaligned Out[N] prompt

This doesn't look right. @jdfreder, can you investigate?
</issue>
<code>
[start of IPython/kernel/zmq/displayhook.py]
1 """Replacements for sys.displayhook that publish over ZMQ."""
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 import sys
7
8 from IPython.core.displayhook import DisplayHook
9 from IPython.kernel.inprocess.socket import SocketABC
10 from IPython.utils.jsonutil import encode_images
11 from IPython.utils.py3compat import builtin_mod
12 from IPython.utils.traitlets import Instance, Dict
13 from .session import extract_header, Session
14
15 class ZMQDisplayHook(object):
16 """A simple displayhook that publishes the object's repr over a ZeroMQ
17 socket."""
18 topic=b'execute_result'
19
20 def __init__(self, session, pub_socket):
21 self.session = session
22 self.pub_socket = pub_socket
23 self.parent_header = {}
24
25 def __call__(self, obj):
26 if obj is None:
27 return
28
29 builtin_mod._ = obj
30 sys.stdout.flush()
31 sys.stderr.flush()
32 msg = self.session.send(self.pub_socket, u'execute_result', {u'data':repr(obj)},
33 parent=self.parent_header, ident=self.topic)
34
35 def set_parent(self, parent):
36 self.parent_header = extract_header(parent)
37
38
39 class ZMQShellDisplayHook(DisplayHook):
40 """A displayhook subclass that publishes data using ZeroMQ. This is intended
41 to work with an InteractiveShell instance. It sends a dict of different
42 representations of the object."""
43 topic=None
44
45 session = Instance(Session)
46 pub_socket = Instance(SocketABC)
47 parent_header = Dict({})
48
49 def set_parent(self, parent):
50 """Set the parent for outbound messages."""
51 self.parent_header = extract_header(parent)
52
53 def start_displayhook(self):
54 self.msg = self.session.msg(u'execute_result', {
55 'data': {},
56 'metadata': {},
57 }, parent=self.parent_header)
58
59 def write_output_prompt(self):
60 """Write the output prompt."""
61 self.msg['content']['execution_count'] = self.prompt_count
62
63 def write_format_data(self, format_dict, md_dict=None):
64 self.msg['content']['data'] = encode_images(format_dict)
65 self.msg['content']['metadata'] = md_dict
66
67 def finish_displayhook(self):
68 """Finish up all displayhook activities."""
69 sys.stdout.flush()
70 sys.stderr.flush()
71 self.session.send(self.pub_socket, self.msg, ident=self.topic)
72 self.msg = None
73
74
[end of IPython/kernel/zmq/displayhook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/kernel/zmq/displayhook.py b/IPython/kernel/zmq/displayhook.py
--- a/IPython/kernel/zmq/displayhook.py
+++ b/IPython/kernel/zmq/displayhook.py
@@ -68,6 +68,7 @@
"""Finish up all displayhook activities."""
sys.stdout.flush()
sys.stderr.flush()
- self.session.send(self.pub_socket, self.msg, ident=self.topic)
+ if self.msg['content']['data']:
+ self.session.send(self.pub_socket, self.msg, ident=self.topic)
self.msg = None
| {"golden_diff": "diff --git a/IPython/kernel/zmq/displayhook.py b/IPython/kernel/zmq/displayhook.py\n--- a/IPython/kernel/zmq/displayhook.py\n+++ b/IPython/kernel/zmq/displayhook.py\n@@ -68,6 +68,7 @@\n \"\"\"Finish up all displayhook activities.\"\"\"\n sys.stdout.flush()\n sys.stderr.flush()\n- self.session.send(self.pub_socket, self.msg, ident=self.topic)\n+ if self.msg['content']['data']:\n+ self.session.send(self.pub_socket, self.msg, ident=self.topic)\n self.msg = None\n", "issue": "Displaying a widget using displayhook produces misaligned Out[N] prompt\n\n\nThis doesn't look right. @jdfreder, can you investigate?\n\n", "before_files": [{"content": "\"\"\"Replacements for sys.displayhook that publish over ZMQ.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport sys\n\nfrom IPython.core.displayhook import DisplayHook\nfrom IPython.kernel.inprocess.socket import SocketABC\nfrom IPython.utils.jsonutil import encode_images\nfrom IPython.utils.py3compat import builtin_mod\nfrom IPython.utils.traitlets import Instance, Dict\nfrom .session import extract_header, Session\n\nclass ZMQDisplayHook(object):\n \"\"\"A simple displayhook that publishes the object's repr over a ZeroMQ\n socket.\"\"\"\n topic=b'execute_result'\n\n def __init__(self, session, pub_socket):\n self.session = session\n self.pub_socket = pub_socket\n self.parent_header = {}\n\n def __call__(self, obj):\n if obj is None:\n return\n\n builtin_mod._ = obj\n sys.stdout.flush()\n sys.stderr.flush()\n msg = self.session.send(self.pub_socket, u'execute_result', {u'data':repr(obj)},\n parent=self.parent_header, ident=self.topic)\n\n def set_parent(self, parent):\n self.parent_header = extract_header(parent)\n\n\nclass ZMQShellDisplayHook(DisplayHook):\n \"\"\"A displayhook subclass that publishes data using ZeroMQ. This is intended\n to work with an InteractiveShell instance. It sends a dict of different\n representations of the object.\"\"\"\n topic=None\n\n session = Instance(Session)\n pub_socket = Instance(SocketABC)\n parent_header = Dict({})\n\n def set_parent(self, parent):\n \"\"\"Set the parent for outbound messages.\"\"\"\n self.parent_header = extract_header(parent)\n\n def start_displayhook(self):\n self.msg = self.session.msg(u'execute_result', {\n 'data': {},\n 'metadata': {},\n }, parent=self.parent_header)\n\n def write_output_prompt(self):\n \"\"\"Write the output prompt.\"\"\"\n self.msg['content']['execution_count'] = self.prompt_count\n\n def write_format_data(self, format_dict, md_dict=None):\n self.msg['content']['data'] = encode_images(format_dict)\n self.msg['content']['metadata'] = md_dict\n\n def finish_displayhook(self):\n \"\"\"Finish up all displayhook activities.\"\"\"\n sys.stdout.flush()\n sys.stderr.flush()\n self.session.send(self.pub_socket, self.msg, ident=self.topic)\n self.msg = None\n\n", "path": "IPython/kernel/zmq/displayhook.py"}]} | 1,312 | 123 |
gh_patches_debug_32522 | rasdani/github-patches | git_diff | open-mmlab__mmcv-2149 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected behaviour `by_epoch` DvcliveLoggerHook/AnyLogger
Thanks for reporting the unexpected results and we appreciate it a lot.
See also the DVClive issue: https://github.com/iterative/dvclive/issues/267
**Describe the Issue**
I was using the `dvclive` hook for mmcv and expected the `by_epoch` variable to mean something else then it is doing right now. I expected to get a result per epoch. However, it seems to have no (or not the expected) effect.
**Reproduction**
1. What command, code, or script did you run? I added the following to my config.
```bash
log_config = dict(
hooks=[
dict(
type="DvcliveLoggerHook",
path="{{ fileDirname }}/../live",
interval=1,
by_epoch=True,
),
],
)
```
2. Did you make any modifications on the code? Did you understand what you have modified?
I did not modify the code.
**Environment**
1. Please run `python -c "from mmcv.utils import collect_env; print(collect_env())"` to collect necessary environment information and paste it here.
<details><summary>Output</summary>
```none
{
'sys.platform': 'linux',
'Python': '3.8.8 (default, Feb 24 2021, 21: 46: 12) [GCC 7.3.0]',
'CUDA available': True,
'GPU 0, 1, 2, 3': 'NVIDIA TITAN RTX',
'CUDA_HOME': '/usr/local/cuda',
'NVCC': 'Cuda compilation tools,
release 11.2, V11.2.142',
'GCC': 'gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0',
'PyTorch': '1.9.0a0+df837d0',
'PyTorch compiling details': 'PyTorch built with:\n - GCC 9.3\n - C++ Version: 201402\n - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v1.7.0 (Git Hash N/A)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.2\n - NVCC architecture flags: -gencode;arch=compute_52,
code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,
code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,
code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,
code=sm_86;-gencode;arch=compute_86,code=compute_86\n - CuDNN 8.1.1\n - Magma 2.5.2\n - Build settings: BLAS_INFO=mkl,
BUILD_TYPE=Release,
CUDA_VERSION=11.2,
CUDNN_VERSION=8.1.1,
CXX_COMPILER=/usr/bin/c++,
CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow,
FORCE_FALLBACK_CUDA_MPI=1,
LAPACK_INFO=mkl,
PERF_WITH_AVX=1,
PERF_WITH_AVX2=1,
PERF_WITH_AVX512=1,
TORCH_VERSION=1.9.0,
USE_CUDA=ON,
USE_CUDNN=ON,
USE_EXCEPTION_PTR=1,
USE_GFLAGS=OFF,
USE_GLOG=OFF,
USE_MKL=ON,
USE_MKLDNN=ON,
USE_MPI=ON,
USE_NCCL=ON,
USE_NNPACK=ON,
USE_OPENMP=ON,
\n',
'TorchVision': '0.9.0a0',
'OpenCV': '3.4.11',
'MMCV': '1.5.0',
'MMCV Compiler': 'GCC 9.3',
'MMCV CUDA Compiler': '11.2'
}
```
</details>
2. You may add addition that may be helpful for locating the problem, such as
- How you installed PyTorch \[e.g., pip, conda, source\]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error traceback here.
n/a
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
n/a
</issue>
<code>
[start of mmcv/runner/hooks/logger/dvclive.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 from pathlib import Path
3 from typing import Optional
4
5 from ...dist_utils import master_only
6 from ..hook import HOOKS
7 from .base import LoggerHook
8
9
10 @HOOKS.register_module()
11 class DvcliveLoggerHook(LoggerHook):
12 """Class to log metrics with dvclive.
13
14 It requires `dvclive`_ to be installed.
15
16 Args:
17 model_file (str): Default None. If not None, after each epoch the
18 model will be saved to {model_file}.
19 interval (int): Logging interval (every k iterations). Default 10.
20 ignore_last (bool): Ignore the log of last iterations in each epoch
21 if less than `interval`. Default: True.
22 reset_flag (bool): Whether to clear the output buffer after logging.
23 Default: False.
24 by_epoch (bool): Whether EpochBasedRunner is used. Default: True.
25 dvclive (Live, optional): An instance of the `Live`_ logger to use
26 instead of initializing a new one internally. Defaults to None.
27 kwargs: Arguments for instantiating `Live`_ (ignored if `dvclive` is
28 provided).
29
30 .. _dvclive:
31 https://dvc.org/doc/dvclive
32
33 .. _Live:
34 https://dvc.org/doc/dvclive/api-reference/live#parameters
35 """
36
37 def __init__(self,
38 model_file: Optional[str] = None,
39 interval: int = 10,
40 ignore_last: bool = True,
41 reset_flag: bool = False,
42 by_epoch: bool = True,
43 dvclive=None,
44 **kwargs):
45 super().__init__(interval, ignore_last, reset_flag, by_epoch)
46 self.model_file = model_file
47 self._import_dvclive(dvclive, **kwargs)
48
49 def _import_dvclive(self, dvclive=None, **kwargs) -> None:
50 try:
51 from dvclive import Live
52 except ImportError:
53 raise ImportError(
54 'Please run "pip install dvclive" to install dvclive')
55 self.dvclive = dvclive if dvclive is not None else Live(**kwargs)
56
57 @master_only
58 def log(self, runner) -> None:
59 tags = self.get_loggable_tags(runner)
60 if tags:
61 self.dvclive.set_step(self.get_iter(runner))
62 for k, v in tags.items():
63 self.dvclive.log(k, v)
64
65 @master_only
66 def after_train_epoch(self, runner) -> None:
67 super().after_train_epoch(runner)
68 if self.model_file is not None:
69 runner.save_checkpoint(
70 Path(self.model_file).parent,
71 filename_tmpl=Path(self.model_file).name,
72 create_symlink=False,
73 )
74
[end of mmcv/runner/hooks/logger/dvclive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmcv/runner/hooks/logger/dvclive.py b/mmcv/runner/hooks/logger/dvclive.py
--- a/mmcv/runner/hooks/logger/dvclive.py
+++ b/mmcv/runner/hooks/logger/dvclive.py
@@ -21,7 +21,10 @@
if less than `interval`. Default: True.
reset_flag (bool): Whether to clear the output buffer after logging.
Default: False.
- by_epoch (bool): Whether EpochBasedRunner is used. Default: True.
+ by_epoch (bool): Whether EpochBasedRunner is used.
+ Determines whether `log` is called `after_train_iter` or
+ `after_train_epoch`.
+ Default: True.
dvclive (Live, optional): An instance of the `Live`_ logger to use
instead of initializing a new one internally. Defaults to None.
kwargs: Arguments for instantiating `Live`_ (ignored if `dvclive` is
@@ -58,16 +61,23 @@
def log(self, runner) -> None:
tags = self.get_loggable_tags(runner)
if tags:
- self.dvclive.set_step(self.get_iter(runner))
+ step = self.get_epoch(runner) if self.by_epoch else self.get_iter(
+ runner)
+ self.dvclive.set_step(step)
+
for k, v in tags.items():
self.dvclive.log(k, v)
- @master_only
def after_train_epoch(self, runner) -> None:
- super().after_train_epoch(runner)
if self.model_file is not None:
runner.save_checkpoint(
Path(self.model_file).parent,
filename_tmpl=Path(self.model_file).name,
create_symlink=False,
)
+ if self.by_epoch:
+ super().after_train_epoch(runner)
+
+ def after_train_iter(self, runner) -> None:
+ if not self.by_epoch:
+ super().after_train_iter(runner)
| {"golden_diff": "diff --git a/mmcv/runner/hooks/logger/dvclive.py b/mmcv/runner/hooks/logger/dvclive.py\n--- a/mmcv/runner/hooks/logger/dvclive.py\n+++ b/mmcv/runner/hooks/logger/dvclive.py\n@@ -21,7 +21,10 @@\n if less than `interval`. Default: True.\n reset_flag (bool): Whether to clear the output buffer after logging.\n Default: False.\n- by_epoch (bool): Whether EpochBasedRunner is used. Default: True.\n+ by_epoch (bool): Whether EpochBasedRunner is used.\n+ Determines whether `log` is called `after_train_iter` or\n+ `after_train_epoch`.\n+ Default: True.\n dvclive (Live, optional): An instance of the `Live`_ logger to use\n instead of initializing a new one internally. Defaults to None.\n kwargs: Arguments for instantiating `Live`_ (ignored if `dvclive` is\n@@ -58,16 +61,23 @@\n def log(self, runner) -> None:\n tags = self.get_loggable_tags(runner)\n if tags:\n- self.dvclive.set_step(self.get_iter(runner))\n+ step = self.get_epoch(runner) if self.by_epoch else self.get_iter(\n+ runner)\n+ self.dvclive.set_step(step)\n+\n for k, v in tags.items():\n self.dvclive.log(k, v)\n \n- @master_only\n def after_train_epoch(self, runner) -> None:\n- super().after_train_epoch(runner)\n if self.model_file is not None:\n runner.save_checkpoint(\n Path(self.model_file).parent,\n filename_tmpl=Path(self.model_file).name,\n create_symlink=False,\n )\n+ if self.by_epoch:\n+ super().after_train_epoch(runner)\n+\n+ def after_train_iter(self, runner) -> None:\n+ if not self.by_epoch:\n+ super().after_train_iter(runner)\n", "issue": "Unexpected behaviour `by_epoch` DvcliveLoggerHook/AnyLogger\nThanks for reporting the unexpected results and we appreciate it a lot.\r\n\r\nSee also the DVClive issue: https://github.com/iterative/dvclive/issues/267\r\n\r\n**Describe the Issue**\r\n\r\nI was using the `dvclive` hook for mmcv and expected the `by_epoch` variable to mean something else then it is doing right now. I expected to get a result per epoch. However, it seems to have no (or not the expected) effect. \r\n\r\n**Reproduction**\r\n\r\n1. What command, code, or script did you run? I added the following to my config.\r\n\r\n```bash\r\nlog_config = dict(\r\n hooks=[\r\n dict(\r\n type=\"DvcliveLoggerHook\",\r\n path=\"{{ fileDirname }}/../live\",\r\n interval=1,\r\n by_epoch=True,\r\n ),\r\n ],\r\n)\r\n```\r\n\r\n2. Did you make any modifications on the code? Did you understand what you have modified?\r\n\r\nI did not modify the code.\r\n\r\n**Environment**\r\n\r\n1. Please run `python -c \"from mmcv.utils import collect_env; print(collect_env())\"` to collect necessary environment information and paste it here.\r\n\r\n<details><summary>Output</summary>\r\n\r\n```none\r\n{ \r\n 'sys.platform': 'linux', \r\n 'Python': '3.8.8 (default, Feb 24 2021, 21: 46: 12) [GCC 7.3.0]',\r\n 'CUDA available': True,\r\n 'GPU 0, 1, 2, 3': 'NVIDIA TITAN RTX',\r\n 'CUDA_HOME': '/usr/local/cuda',\r\n 'NVCC': 'Cuda compilation tools, \r\n release 11.2, V11.2.142',\r\n 'GCC': 'gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0',\r\n 'PyTorch': '1.9.0a0+df837d0', \r\n 'PyTorch compiling details': 'PyTorch built with:\\n - GCC 9.3\\n - C++ Version: 201402\\n - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications\\n - Intel(R) MKL-DNN v1.7.0 (Git Hash N/A)\\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\\n - NNPACK is enabled\\n - CPU capability usage: AVX2\\n - CUDA Runtime 11.2\\n - NVCC architecture flags: -gencode;arch=compute_52,\r\n code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,\r\n code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,\r\n code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,\r\n code=sm_86;-gencode;arch=compute_86,code=compute_86\\n - CuDNN 8.1.1\\n - Magma 2.5.2\\n - Build settings: BLAS_INFO=mkl,\r\n BUILD_TYPE=Release,\r\n CUDA_VERSION=11.2,\r\n CUDNN_VERSION=8.1.1,\r\n CXX_COMPILER=/usr/bin/c++,\r\n CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow,\r\n FORCE_FALLBACK_CUDA_MPI=1,\r\n LAPACK_INFO=mkl,\r\n PERF_WITH_AVX=1,\r\n PERF_WITH_AVX2=1,\r\n PERF_WITH_AVX512=1,\r\n TORCH_VERSION=1.9.0,\r\n USE_CUDA=ON,\r\n USE_CUDNN=ON,\r\n USE_EXCEPTION_PTR=1,\r\n USE_GFLAGS=OFF,\r\n USE_GLOG=OFF,\r\n USE_MKL=ON,\r\n USE_MKLDNN=ON,\r\n USE_MPI=ON,\r\n USE_NCCL=ON,\r\n USE_NNPACK=ON,\r\n USE_OPENMP=ON,\r\n \\n',\r\n 'TorchVision': '0.9.0a0',\r\n 'OpenCV': '3.4.11',\r\n 'MMCV': '1.5.0',\r\n 'MMCV Compiler': 'GCC 9.3',\r\n 'MMCV CUDA Compiler': '11.2'\r\n}\r\n```\r\n\r\n</details>\r\n\r\n\r\n2. You may add addition that may be helpful for locating the problem, such as\r\n - How you installed PyTorch \\[e.g., pip, conda, source\\]\r\n - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)\r\n\r\n\r\n**Error traceback**\r\nIf applicable, paste the error traceback here.\r\n\r\nn/a\r\n\r\n**Bug fix**\r\nIf you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!\r\n\r\nn/a\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom ...dist_utils import master_only\nfrom ..hook import HOOKS\nfrom .base import LoggerHook\n\n\[email protected]_module()\nclass DvcliveLoggerHook(LoggerHook):\n \"\"\"Class to log metrics with dvclive.\n\n It requires `dvclive`_ to be installed.\n\n Args:\n model_file (str): Default None. If not None, after each epoch the\n model will be saved to {model_file}.\n interval (int): Logging interval (every k iterations). Default 10.\n ignore_last (bool): Ignore the log of last iterations in each epoch\n if less than `interval`. Default: True.\n reset_flag (bool): Whether to clear the output buffer after logging.\n Default: False.\n by_epoch (bool): Whether EpochBasedRunner is used. Default: True.\n dvclive (Live, optional): An instance of the `Live`_ logger to use\n instead of initializing a new one internally. 
Defaults to None.\n kwargs: Arguments for instantiating `Live`_ (ignored if `dvclive` is\n provided).\n\n .. _dvclive:\n https://dvc.org/doc/dvclive\n\n .. _Live:\n https://dvc.org/doc/dvclive/api-reference/live#parameters\n \"\"\"\n\n def __init__(self,\n model_file: Optional[str] = None,\n interval: int = 10,\n ignore_last: bool = True,\n reset_flag: bool = False,\n by_epoch: bool = True,\n dvclive=None,\n **kwargs):\n super().__init__(interval, ignore_last, reset_flag, by_epoch)\n self.model_file = model_file\n self._import_dvclive(dvclive, **kwargs)\n\n def _import_dvclive(self, dvclive=None, **kwargs) -> None:\n try:\n from dvclive import Live\n except ImportError:\n raise ImportError(\n 'Please run \"pip install dvclive\" to install dvclive')\n self.dvclive = dvclive if dvclive is not None else Live(**kwargs)\n\n @master_only\n def log(self, runner) -> None:\n tags = self.get_loggable_tags(runner)\n if tags:\n self.dvclive.set_step(self.get_iter(runner))\n for k, v in tags.items():\n self.dvclive.log(k, v)\n\n @master_only\n def after_train_epoch(self, runner) -> None:\n super().after_train_epoch(runner)\n if self.model_file is not None:\n runner.save_checkpoint(\n Path(self.model_file).parent,\n filename_tmpl=Path(self.model_file).name,\n create_symlink=False,\n )\n", "path": "mmcv/runner/hooks/logger/dvclive.py"}]} | 2,729 | 449 |
gh_patches_debug_10210 | rasdani/github-patches | git_diff | bokeh__bokeh-7188 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RangeSlider css_classes not being updated anymore
It seems in 0.12.7 the `css_classes` attribute of the RangeSlider is not being updated anymore by the callbacks. It was working in 0.12.6.
In the example below, both the text input and the slider starts with the `initial` class as expected. When the slider changes, both should gain the `sent-later` class, but only the text input is gaining it...
```
from bokeh.io import curdoc
from bokeh.models import TextInput, RangeSlider
from bokeh.layouts import widgetbox
from bokeh.layouts import column
from bokeh.models import Div
div1 = Div(text="""
<style>
.sent-later {
border-radius: 0.5em;
padding: 1em;
border: 3px solid #2397D8;
}
</style>
""")
def on_slider_change(attr, old, new):
slider.css_classes.append('sent-later')
text.css_classes.append('sent-later')
print(slider.css_classes)
print(text.css_classes)
text = TextInput(
value=str(2),
title='Population Beta:')
slider = RangeSlider(
start=0,
end=10,
value=(0, 2),
step=1,
title='Population Beta')
slider.css_classes = []
slider.css_classes.append('initial')
text.css_classes = []
text.css_classes.append('initial')
slider.on_change('value', on_slider_change)
curdoc().add_root(column(div1, widgetbox(text, slider)))
```
</issue>
<code>
[start of bokeh/models/layouts.py]
1 ''' Various kinds of layout components.
2
3 '''
4 from __future__ import absolute_import
5
6 import logging
7 logger = logging.getLogger(__name__)
8
9 from ..core.enums import SizingMode
10 from ..core.has_props import abstract
11 from ..core.properties import Bool, Enum, Int, Instance, List, Seq, String
12 from ..core.validation import warning
13 from ..core.validation.warnings import BOTH_CHILD_AND_ROOT, EMPTY_LAYOUT
14 from ..model import Model
15
16 @abstract
17 class LayoutDOM(Model):
18 ''' An abstract base class for layout components.
19
20 '''
21
22 width = Int(help="""
23 An optional width for the component (in pixels).
24 """)
25
26 height = Int(help="""
27 An optional height for the component (in pixels).
28 """)
29
30 disabled = Bool(False, help="""
31 Whether the widget will be disabled when rendered. If ``True``,
32 the widget will be greyed-out, and not respond to UI events.
33 """)
34
35 sizing_mode = Enum(SizingMode, default="fixed", help="""
36 How the item being displayed should size itself. Possible values are
37 ``"fixed"``, ``"scale_width"``, ``"scale_height"``, ``"scale_both"``, and
38 ``"stretch_both"``.
39
40 ``"stretch_both"`` elements are completely responsive (independently in width and height) and
41 will resize to occupy all available space, even if this changes the aspect ratio of the element.
42 This is sometimes called outside-in, and is a typical behavior for desktop applications.
43
44 ``"fixed"`` elements are not responsive. They will retain their original width and height
45 regardless of any subsequent browser window resize events.
46
47 ``"scale_width"`` elements will responsively resize to fit to the width available, *while
48 maintaining the original aspect ratio*. This is a typical behavior for modern websites. For a
49 ``Plot``, the aspect ratio ``plot_width/plot_height`` is maintained.
50
51 ``"scale_height"`` elements will responsively resize to fit to the height available, *while
52 maintaining the original aspect ratio*. For a ``Plot``, the aspect ratio
53 ``plot_width/plot_height`` is maintained. A plot with ``"scale_height"`` mode needs
54 to be wrapped in a ``Row`` or ``Column`` to be responsive.
55
56 ``"scale_both"`` elements will responsively resize to for both the width and height available,
57 *while maintaining the original aspect ratio*.
58
59 """)
60
61 css_classes = Seq(String, help="""
62 A list of css class names to add to this DOM element. Note: the class names are
63 simply added as-is, no other guarantees are provided.
64 """)
65
66
67 class Spacer(LayoutDOM):
68 ''' A container for space used to fill an empty spot in a row or column.
69
70 '''
71
72
73 class WidgetBox(LayoutDOM):
74 ''' A container for widgets that are part of a layout.
75
76 '''
77 def __init__(self, *args, **kwargs):
78 if len(args) > 0 and "children" in kwargs:
79 raise ValueError("'children' keyword cannot be used with positional arguments")
80 elif len(args) > 0:
81 kwargs["children"] = list(args)
82 super(WidgetBox, self).__init__(**kwargs)
83
84 @warning(EMPTY_LAYOUT)
85 def _check_empty_layout(self):
86 from itertools import chain
87 if not list(chain(self.children)):
88 return str(self)
89
90 @warning(BOTH_CHILD_AND_ROOT)
91 def _check_child_is_also_root(self):
92 problems = []
93 for c in self.children:
94 if c.document is not None and c in c.document.roots:
95 problems.append(str(c))
96 if problems:
97 return ", ".join(problems)
98 else:
99 return None
100
101 children = List(Instance('bokeh.models.widgets.Widget'), help="""
102 The list of widgets to put in the layout box.
103 """)
104
105
106 @abstract
107 class Box(LayoutDOM):
108 ''' Abstract base class for Row and Column. Do not use directly.
109
110 '''
111
112 def __init__(self, *args, **kwargs):
113
114 if len(args) > 0 and "children" in kwargs:
115 raise ValueError("'children' keyword cannot be used with positional arguments")
116 elif len(args) > 0:
117 kwargs["children"] = list(args)
118
119 unwrapped_children = kwargs.get("children", [])
120 kwargs["children"] = self._wrap_children(unwrapped_children)
121 super(Box, self).__init__(**kwargs)
122
123 def _wrap_children(self, children):
124 ''' Wrap any Widgets of a list of child layouts in a WidgetBox.
125 This allows for the convenience of just spelling Row(button1, button2).
126 '''
127 from .widgets.widget import Widget
128 wrapped_children = []
129 for child in children:
130 if isinstance(child, Widget):
131 child = WidgetBox(
132 children=[child],
133 sizing_mode=child.sizing_mode,
134 width=child.width,
135 height=child.height,
136 disabled=child.disabled
137 )
138 wrapped_children.append(child)
139 return wrapped_children
140
141 @warning(EMPTY_LAYOUT)
142 def _check_empty_layout(self):
143 from itertools import chain
144 if not list(chain(self.children)):
145 return str(self)
146
147 @warning(BOTH_CHILD_AND_ROOT)
148 def _check_child_is_also_root(self):
149 problems = []
150 for c in self.children:
151 if c.document is not None and c in c.document.roots:
152 problems.append(str(c))
153 if problems:
154 return ", ".join(problems)
155 else:
156 return None
157
158 #TODO Debating the following instead to prevent people adding just a plain
159 # widget into a box, which sometimes works and sometimes looks disastrous
160 #children = List(
161 # Either(
162 # Instance('bokeh.models.layouts.Row'),
163 # Instance('bokeh.models.layouts.Column'),
164 # Instance('bokeh.models.plots.Plot'),
165 # Instance('bokeh.models.layouts.WidgetBox')
166 # ), help="""
167 # The list of children, which can be other components including plots, rows, columns, and widgets.
168 #""")
169 children = List(Instance(LayoutDOM), help="""
170 The list of children, which can be other components including plots, rows, columns, and widgets.
171 """)
172
173
174 class Row(Box):
175 ''' Lay out child components in a single horizontal row.
176
177 Children can be specified as positional arguments, as a single argument
178 that is a sequence, or using the ``children`` keyword argument.
179 '''
180
181
182 class Column(Box):
183 ''' Lay out child components in a single vertical row.
184
185 Children can be specified as positional arguments, as a single argument
186 that is a sequence, or using the ``children`` keyword argument.
187 '''
188
[end of bokeh/models/layouts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bokeh/models/layouts.py b/bokeh/models/layouts.py
--- a/bokeh/models/layouts.py
+++ b/bokeh/models/layouts.py
@@ -58,10 +58,14 @@
""")
- css_classes = Seq(String, help="""
+ # List in order for in-place changes to trigger changes, ref: https://github.com/bokeh/bokeh/issues/6841
+ css_classes = List(String, help="""
A list of css class names to add to this DOM element. Note: the class names are
simply added as-is, no other guarantees are provided.
- """)
+
+ It is also permissible to assign from tuples, however these are adapted -- the
+ property will always contain a list.
+ """).accepts(Seq(String), lambda x: list(x))
class Spacer(LayoutDOM):
| {"golden_diff": "diff --git a/bokeh/models/layouts.py b/bokeh/models/layouts.py\n--- a/bokeh/models/layouts.py\n+++ b/bokeh/models/layouts.py\n@@ -58,10 +58,14 @@\n \n \"\"\")\n \n- css_classes = Seq(String, help=\"\"\"\n+ # List in order for in-place changes to trigger changes, ref: https://github.com/bokeh/bokeh/issues/6841\n+ css_classes = List(String, help=\"\"\"\n A list of css class names to add to this DOM element. Note: the class names are\n simply added as-is, no other guarantees are provided.\n- \"\"\")\n+\n+ It is also permissible to assign from tuples, however these are adapted -- the\n+ property will always contain a list.\n+ \"\"\").accepts(Seq(String), lambda x: list(x))\n \n \n class Spacer(LayoutDOM):\n", "issue": "RangeSlider css_classes not being updated anymore\nIt seems in 0.12.7 the `css_classes` attribute of the RangeSlider is not being updated anymore by the callbacks. It was working in 0.12.6.\r\nIn the example below, both the text input and the slider starts with the `initial` class as expected. When the slider changes, both should gain the `sent-later` class, but only the text input is gaining it...\r\n\r\n```\r\nfrom bokeh.io import curdoc\r\nfrom bokeh.models import TextInput, RangeSlider\r\nfrom bokeh.layouts import widgetbox\r\nfrom bokeh.layouts import column\r\nfrom bokeh.models import Div\r\n\r\ndiv1 = Div(text=\"\"\"\r\n <style>\r\n .sent-later {\r\n border-radius: 0.5em;\r\n padding: 1em;\r\n border: 3px solid #2397D8;\r\n }\r\n </style>\r\n\"\"\")\r\n\r\n\r\ndef on_slider_change(attr, old, new):\r\n slider.css_classes.append('sent-later')\r\n text.css_classes.append('sent-later')\r\n print(slider.css_classes)\r\n print(text.css_classes)\r\n\r\n\r\ntext = TextInput(\r\n value=str(2),\r\n title='Population Beta:')\r\n\r\nslider = RangeSlider(\r\n start=0,\r\n end=10,\r\n value=(0, 2),\r\n step=1,\r\n title='Population Beta')\r\n\r\nslider.css_classes = []\r\nslider.css_classes.append('initial')\r\ntext.css_classes = []\r\ntext.css_classes.append('initial')\r\n\r\nslider.on_change('value', on_slider_change)\r\n\r\ncurdoc().add_root(column(div1, widgetbox(text, slider)))\r\n\r\n```\r\n\n", "before_files": [{"content": "''' Various kinds of layout components.\n\n'''\nfrom __future__ import absolute_import\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nfrom ..core.enums import SizingMode\nfrom ..core.has_props import abstract\nfrom ..core.properties import Bool, Enum, Int, Instance, List, Seq, String\nfrom ..core.validation import warning\nfrom ..core.validation.warnings import BOTH_CHILD_AND_ROOT, EMPTY_LAYOUT\nfrom ..model import Model\n\n@abstract\nclass LayoutDOM(Model):\n ''' An abstract base class for layout components.\n\n '''\n\n width = Int(help=\"\"\"\n An optional width for the component (in pixels).\n \"\"\")\n\n height = Int(help=\"\"\"\n An optional height for the component (in pixels).\n \"\"\")\n\n disabled = Bool(False, help=\"\"\"\n Whether the widget will be disabled when rendered. If ``True``,\n the widget will be greyed-out, and not respond to UI events.\n \"\"\")\n\n sizing_mode = Enum(SizingMode, default=\"fixed\", help=\"\"\"\n How the item being displayed should size itself. 
Possible values are\n ``\"fixed\"``, ``\"scale_width\"``, ``\"scale_height\"``, ``\"scale_both\"``, and\n ``\"stretch_both\"``.\n\n ``\"stretch_both\"`` elements are completely responsive (independently in width and height) and\n will resize to occupy all available space, even if this changes the aspect ratio of the element.\n This is sometimes called outside-in, and is a typical behavior for desktop applications.\n\n ``\"fixed\"`` elements are not responsive. They will retain their original width and height\n regardless of any subsequent browser window resize events.\n\n ``\"scale_width\"`` elements will responsively resize to fit to the width available, *while\n maintaining the original aspect ratio*. This is a typical behavior for modern websites. For a\n ``Plot``, the aspect ratio ``plot_width/plot_height`` is maintained.\n\n ``\"scale_height\"`` elements will responsively resize to fit to the height available, *while\n maintaining the original aspect ratio*. For a ``Plot``, the aspect ratio\n ``plot_width/plot_height`` is maintained. A plot with ``\"scale_height\"`` mode needs\n to be wrapped in a ``Row`` or ``Column`` to be responsive.\n\n ``\"scale_both\"`` elements will responsively resize to for both the width and height available,\n *while maintaining the original aspect ratio*.\n\n \"\"\")\n\n css_classes = Seq(String, help=\"\"\"\n A list of css class names to add to this DOM element. Note: the class names are\n simply added as-is, no other guarantees are provided.\n \"\"\")\n\n\nclass Spacer(LayoutDOM):\n ''' A container for space used to fill an empty spot in a row or column.\n\n '''\n\n\nclass WidgetBox(LayoutDOM):\n ''' A container for widgets that are part of a layout.\n\n '''\n def __init__(self, *args, **kwargs):\n if len(args) > 0 and \"children\" in kwargs:\n raise ValueError(\"'children' keyword cannot be used with positional arguments\")\n elif len(args) > 0:\n kwargs[\"children\"] = list(args)\n super(WidgetBox, self).__init__(**kwargs)\n\n @warning(EMPTY_LAYOUT)\n def _check_empty_layout(self):\n from itertools import chain\n if not list(chain(self.children)):\n return str(self)\n\n @warning(BOTH_CHILD_AND_ROOT)\n def _check_child_is_also_root(self):\n problems = []\n for c in self.children:\n if c.document is not None and c in c.document.roots:\n problems.append(str(c))\n if problems:\n return \", \".join(problems)\n else:\n return None\n\n children = List(Instance('bokeh.models.widgets.Widget'), help=\"\"\"\n The list of widgets to put in the layout box.\n \"\"\")\n\n\n@abstract\nclass Box(LayoutDOM):\n ''' Abstract base class for Row and Column. 
Do not use directly.\n\n '''\n\n def __init__(self, *args, **kwargs):\n\n if len(args) > 0 and \"children\" in kwargs:\n raise ValueError(\"'children' keyword cannot be used with positional arguments\")\n elif len(args) > 0:\n kwargs[\"children\"] = list(args)\n\n unwrapped_children = kwargs.get(\"children\", [])\n kwargs[\"children\"] = self._wrap_children(unwrapped_children)\n super(Box, self).__init__(**kwargs)\n\n def _wrap_children(self, children):\n ''' Wrap any Widgets of a list of child layouts in a WidgetBox.\n This allows for the convenience of just spelling Row(button1, button2).\n '''\n from .widgets.widget import Widget\n wrapped_children = []\n for child in children:\n if isinstance(child, Widget):\n child = WidgetBox(\n children=[child],\n sizing_mode=child.sizing_mode,\n width=child.width,\n height=child.height,\n disabled=child.disabled\n )\n wrapped_children.append(child)\n return wrapped_children\n\n @warning(EMPTY_LAYOUT)\n def _check_empty_layout(self):\n from itertools import chain\n if not list(chain(self.children)):\n return str(self)\n\n @warning(BOTH_CHILD_AND_ROOT)\n def _check_child_is_also_root(self):\n problems = []\n for c in self.children:\n if c.document is not None and c in c.document.roots:\n problems.append(str(c))\n if problems:\n return \", \".join(problems)\n else:\n return None\n\n #TODO Debating the following instead to prevent people adding just a plain\n # widget into a box, which sometimes works and sometimes looks disastrous\n #children = List(\n # Either(\n # Instance('bokeh.models.layouts.Row'),\n # Instance('bokeh.models.layouts.Column'),\n # Instance('bokeh.models.plots.Plot'),\n # Instance('bokeh.models.layouts.WidgetBox')\n # ), help=\"\"\"\n # The list of children, which can be other components including plots, rows, columns, and widgets.\n #\"\"\")\n children = List(Instance(LayoutDOM), help=\"\"\"\n The list of children, which can be other components including plots, rows, columns, and widgets.\n \"\"\")\n\n\nclass Row(Box):\n ''' Lay out child components in a single horizontal row.\n\n Children can be specified as positional arguments, as a single argument\n that is a sequence, or using the ``children`` keyword argument.\n '''\n\n\nclass Column(Box):\n ''' Lay out child components in a single vertical row.\n\n Children can be specified as positional arguments, as a single argument\n that is a sequence, or using the ``children`` keyword argument.\n '''\n", "path": "bokeh/models/layouts.py"}]} | 2,796 | 196 |
gh_patches_debug_49247 | rasdani/github-patches | git_diff | rlworkgroup__garage-1759 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incompatible dependencies in pipenv install
Hello,
When I try to install garage in a fresh pipenv as per the documentation, I get the following error:
```
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: Could not find a version that matches cloudpickle==1.3,~=1.2.0 (from garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))
Tried: 0.1.0, 0.1.0, 0.1.1, 0.1.1, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.3.0, 0.3.0, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.4.1, 0.4.1, 0.4.2, 0.4.2, 0.4.3, 0.4.3, 0.4.4, 0.4.4, 0.5.0, 0.5.0, 0.5.1, 0.5.1, 0.5.2, 0.5.2, 0.5.3, 0.5.3, 0.5.4, 0.5.4, 0.5.5, 0.5.5, 0.5.6, 0.5.6, 0.6.0, 0.6.0, 0.6.1, 0.6.1, 0.7.0, 0.7.0, 0.8.0, 0.8.0, 0.8.1, 0.8.1, 1.0.0, 1.0.0, 1.1.1, 1.1.1, 1.2.0, 1.2.0, 1.2.1, 1.2.1, 1.2.2, 1.2.2, 1.3.0, 1.3.0, 1.4.0, 1.4.0, 1.4.1, 1.4.1, 1.5.0, 1.5.0
There are incompatible versions in the resolved dependencies:
cloudpickle (from garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))
cloudpickle==1.3 (from tensorflow-probability==0.10.1->garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))
cloudpickle~=1.2.0 (from gym[atari,box2d,classic_control]==0.15.4->garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))
```
This can be shortcutted by appending `--skip-lock` to the `pipenv install`, but that's obviously not ideal. Thanks!
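
As a standalone illustration (using the `packaging` library; not part of garage itself, and the two pins are copied from the resolver output above), no single cloudpickle release can satisfy both requirements:

```python
# Sketch only: reproduces the constraint clash reported by pipenv above.
from packaging.specifiers import SpecifierSet

tfp_pin = SpecifierSet("==1.3")     # required by tensorflow-probability 0.10.1
gym_pin = SpecifierSet("~=1.2.0")   # required by gym[atari,box2d,classic_control] 0.15.4
combined = tfp_pin & gym_pin

releases = ["1.2.0", "1.2.1", "1.2.2", "1.3.0", "1.4.1", "1.5.0"]
print(list(combined.filter(releases)))  # prints [] because no release satisfies both pins
```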
</issue>
<code>
[start of setup.py]
1 """setuptools based setup module."""
2 import os
3
4 from setuptools import find_packages, setup
5
6 GARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'
7 GYM_VERSION = '0.15.4'
8
9 # Required dependencies
10 REQUIRED = [
11 # Please keep alphabetized
12 'akro',
13 'click>=2.0',
14 'cloudpickle<1.5',
15 'cma==2.7.0',
16 'dowel==0.0.3',
17 f'gym[atari,box2d,classic_control]=={GYM_VERSION}',
18 'numpy>=1.14.5',
19 'psutil',
20 # Pyglet 1.4.0 introduces some api change which breaks some
21 # gym environments
22 # See: https://github.com/openai/gym/issues/1588
23 'pyglet<1.4.0,>=1.3.0',
24 'python-dateutil',
25 'ray',
26 'scikit-image',
27 'scipy',
28 'setproctitle>=1.0',
29 'tensorflow>=1.14',
30 'tensorflow-probability',
31 'torch>=1.0.0,!=1.5.0',
32 'torchvision>=0.2.1',
33 ]
34
35 # Dependencies for optional features
36 EXTRAS = {}
37
38 EXTRAS['mujoco'] = [
39 'mujoco-py<2.1,>=2.0',
40 f'gym[all]=={GYM_VERSION}',
41 ]
42
43 EXTRAS['dm_control'] = [
44 # dm_control throws an error during install about not being able to
45 # find a build dependency (absl-py). Later pip executes the `install`
46 # command again and the install succeeds because absl-py has been
47 # installed. This is stupid, but harmless.
48 'dm_control==0.0.300771433',
49 ]
50
51 EXTRAS['bullet'] = ['mpi4py', 'pybullet']
52
53 EXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))
54
55 # Development dependencies (*not* included in 'all')
56 EXTRAS['dev'] = [
57 # Please keep alphabetized
58 'flake8',
59 'flake8-docstrings>=1.5.0',
60 'flake8-import-order',
61 f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/861ae8d8c4bef80a7ed86f47f47acaa494d4ab77', # noqa: E501
62 'isort>=4.3.21,<5.0.0',
63 'pep8-naming==0.7.0',
64 'pre-commit',
65 'pycodestyle>=2.5.0',
66 'pydocstyle>=4.0.0',
67 'pylint>=2.5.3',
68 'pytest>=4.5.0', # Required for strict-markers
69 'pytest-cov',
70 'pytest-timeout',
71 'pytest-xdist',
72 'recommonmark',
73 'sphinx',
74 'sphinx-autoapi>=1.4.0',
75 'sphinx_rtd_theme',
76 'yapf==0.30.0',
77 ] # yapf: disable
78
79 with open('README.md') as f:
80 README = f.read()
81
82 # Get the package version dynamically
83 with open('VERSION') as v:
84 VERSION = v.read().strip()
85
86 setup(
87 name='garage',
88 version=VERSION,
89 author='Reinforcement Learning Working Group',
90 description='A toolkit for reproducible reinforcement learning research',
91 url='https://github.com/rlworkgroup/garage',
92 packages=find_packages(where='src'),
93 package_dir={'': 'src'},
94 scripts=['scripts/garage'],
95 python_requires='>=3.6',
96 install_requires=REQUIRED,
97 extras_require=EXTRAS,
98 license='MIT',
99 long_description=README,
100 long_description_content_type='text/markdown',
101 classifiers=[
102 'Development Status :: 4 - Beta',
103 'Intended Audience :: Developers',
104 'Intended Audience :: Education',
105 'Intended Audience :: Science/Research',
106 'License :: OSI Approved :: MIT License',
107 'Programming Language :: Python :: 3.6',
108 'Programming Language :: Python :: 3.7',
109 'Programming Language :: Python :: 3 :: Only',
110 'Topic :: Scientific/Engineering :: Artificial Intelligence',
111 'Topic :: Scientific/Engineering :: Mathematics',
112 'Topic :: Software Development :: Libraries',
113 ],
114 )
115
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,7 +27,7 @@
'scipy',
'setproctitle>=1.0',
'tensorflow>=1.14',
- 'tensorflow-probability',
+ 'tensorflow-probability<=0.10.0',
'torch>=1.0.0,!=1.5.0',
'torchvision>=0.2.1',
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,7 +27,7 @@\n 'scipy',\n 'setproctitle>=1.0',\n 'tensorflow>=1.14',\n- 'tensorflow-probability',\n+ 'tensorflow-probability<=0.10.0',\n 'torch>=1.0.0,!=1.5.0',\n 'torchvision>=0.2.1',\n ]\n", "issue": "Incompatible dependencies in pipenv install\nHello,\r\n\r\nWhen I try to install garage in a fresh pipenv as per the documentation, I get the following error:\r\n\r\n```\r\n[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.\r\n First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.\r\n Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.\r\n Hint: try $ pipenv lock --pre if it is a pre-release dependency.\r\nERROR: Could not find a version that matches cloudpickle==1.3,~=1.2.0 (from garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))\r\nTried: 0.1.0, 0.1.0, 0.1.1, 0.1.1, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.3.0, 0.3.0, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.4.1, 0.4.1, 0.4.2, 0.4.2, 0.4.3, 0.4.3, 0.4.4, 0.4.4, 0.5.0, 0.5.0, 0.5.1, 0.5.1, 0.5.2, 0.5.2, 0.5.3, 0.5.3, 0.5.4, 0.5.4, 0.5.5, 0.5.5, 0.5.6, 0.5.6, 0.6.0, 0.6.0, 0.6.1, 0.6.1, 0.7.0, 0.7.0, 0.8.0, 0.8.0, 0.8.1, 0.8.1, 1.0.0, 1.0.0, 1.1.1, 1.1.1, 1.2.0, 1.2.0, 1.2.1, 1.2.1, 1.2.2, 1.2.2, 1.3.0, 1.3.0, 1.4.0, 1.4.0, 1.4.1, 1.4.1, 1.5.0, 1.5.0\r\nThere are incompatible versions in the resolved dependencies:\r\n cloudpickle (from garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))\r\n cloudpickle==1.3 (from tensorflow-probability==0.10.1->garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))\r\n cloudpickle~=1.2.0 (from gym[atari,box2d,classic_control]==0.15.4->garage==2020.6.0->-r /tmp/pipenvprlocesvrequirements/pipenv-7gor6s43-constraints.txt (line 2))\r\n```\r\nThis can be shortcutted by appending `--skip-lock` to the `pipenv install`, but that's obviously not ideal. Thanks! \n", "before_files": [{"content": "\"\"\"setuptools based setup module.\"\"\"\nimport os\n\nfrom setuptools import find_packages, setup\n\nGARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'\nGYM_VERSION = '0.15.4'\n\n# Required dependencies\nREQUIRED = [\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n 'cloudpickle<1.5',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n f'gym[atari,box2d,classic_control]=={GYM_VERSION}',\n 'numpy>=1.14.5',\n 'psutil',\n # Pyglet 1.4.0 introduces some api change which breaks some\n # gym environments\n # See: https://github.com/openai/gym/issues/1588\n 'pyglet<1.4.0,>=1.3.0',\n 'python-dateutil',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'setproctitle>=1.0',\n 'tensorflow>=1.14',\n 'tensorflow-probability',\n 'torch>=1.0.0,!=1.5.0',\n 'torchvision>=0.2.1',\n]\n\n# Dependencies for optional features\nEXTRAS = {}\n\nEXTRAS['mujoco'] = [\n 'mujoco-py<2.1,>=2.0',\n f'gym[all]=={GYM_VERSION}',\n]\n\nEXTRAS['dm_control'] = [\n # dm_control throws an error during install about not being able to\n # find a build dependency (absl-py). Later pip executes the `install`\n # command again and the install succeeds because absl-py has been\n # installed. 
This is stupid, but harmless.\n 'dm_control==0.0.300771433',\n]\n\nEXTRAS['bullet'] = ['mpi4py', 'pybullet']\n\nEXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))\n\n# Development dependencies (*not* included in 'all')\nEXTRAS['dev'] = [\n # Please keep alphabetized\n 'flake8',\n 'flake8-docstrings>=1.5.0',\n 'flake8-import-order',\n f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/861ae8d8c4bef80a7ed86f47f47acaa494d4ab77', # noqa: E501\n 'isort>=4.3.21,<5.0.0',\n 'pep8-naming==0.7.0',\n 'pre-commit',\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.5.3',\n 'pytest>=4.5.0', # Required for strict-markers\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-xdist',\n 'recommonmark',\n 'sphinx',\n 'sphinx-autoapi>=1.4.0',\n 'sphinx_rtd_theme',\n 'yapf==0.30.0',\n] # yapf: disable\n\nwith open('README.md') as f:\n README = f.read()\n\n# Get the package version dynamically\nwith open('VERSION') as v:\n VERSION = v.read().strip()\n\nsetup(\n name='garage',\n version=VERSION,\n author='Reinforcement Learning Working Group',\n description='A toolkit for reproducible reinforcement learning research',\n url='https://github.com/rlworkgroup/garage',\n packages=find_packages(where='src'),\n package_dir={'': 'src'},\n scripts=['scripts/garage'],\n python_requires='>=3.6',\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n license='MIT',\n long_description=README,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]} | 2,663 | 109 |
gh_patches_debug_22432 | rasdani/github-patches | git_diff | saleor__saleor-5004 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
json serialization error with google analytics enabled
### What I'm trying to achieve
I want to enable google analytics of a docker image I build using the current docker files from the master repo. I am starting it in beanstalk and then the error occurs:
```
/var/log/eb-docker/containers/eb-current-app/eb-d5c249ad18da-stdouterr.log
-------------------------------------
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type generator is not JSON serializable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./saleor/core/middleware.py", line 27, in middleware
client_id, path=path, language=language, headers=headers)
File "./saleor/core/analytics.py", line 68, in report_view
_report(client_id, payloads, extra_headers=extra_headers)
File "./saleor/core/analytics.py", line 36, in _report
**extra_data)
File "/usr/local/lib/python3.7/site-packages/celery/app/task.py", line 427, in delay
return self.apply_async(args, kwargs)
File "/usr/local/lib/python3.7/site-packages/celery/app/task.py", line 544, in apply_async
body, serializer,
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 221, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 54, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/usr/local/lib/python3.7/site-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 50, in _reraise_errors
yield
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 221, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/json.py", line 69, in dumps
**dict(default_kwargs, **kwargs))
File "/usr/local/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/json.py", line 59, in default
return super(JSONEncoder, self).default(o)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
kombu.exceptions.EncodeError: Object of type generator is not JSON serializable
ERROR saleor.core.middleware Unable to update analytics [PID:17:uWSGIWorker2Core0]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 50, in _reraise_errors
yield
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 221, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/json.py", line 69, in dumps
**dict(default_kwargs, **kwargs))
File "/usr/local/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/json.py", line 59, in default
return super(JSONEncoder, self).default(o)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type generator is not JSON serializable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./saleor/core/middleware.py", line 27, in middleware
client_id, path=path, language=language, headers=headers)
File "./saleor/core/analytics.py", line 68, in report_view
_report(client_id, payloads, extra_headers=extra_headers)
File "./saleor/core/analytics.py", line 36, in _report
**extra_data)
File "/usr/local/lib/python3.7/site-packages/celery/app/task.py", line 427, in delay
return self.apply_async(args, kwargs)
File "/usr/local/lib/python3.7/site-packages/celery/app/task.py", line 544, in apply_async
body, serializer,
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 221, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 54, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/usr/local/lib/python3.7/site-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 50, in _reraise_errors
yield
File "/usr/local/lib/python3.7/site-packages/kombu/serialization.py", line 221, in dumps
payload = encoder(data)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/json.py", line 69, in dumps
**dict(default_kwargs, **kwargs))
File "/usr/local/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/json.py", line 59, in default
return super(JSONEncoder, self).default(o)
File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
kombu.exceptions.EncodeError: Object of type generator is not JSON serializable
UWSGI uwsgi "GET /de/cart/summary/ HTTP/1.1" 200 785 29ms [PID:17:Worker-2] [RSS:162MB]
```
### Steps to reproduce the problem
1. Build the docker image
2. Try to start it in beanstalk with the minimal needed env variables set and google analytics id set too
3. Look into the logs and find the error in question
### What I expected to happen
Beanstalk starts the dockerimage with the needed minimal env variables and google analytics enabled.
There is a merge request related to this #3615
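
As a minimal sketch of the underlying failure (plain standard-library `json`, independent of Saleor and Celery; the payload contents below are made up), the JSON encoder that kombu delegates to in the traceback rejects generator objects, while the same data as a list serializes without error:

```python
# Minimal sketch: json.dumps cannot encode a generator, but can encode a list.
import json

payloads = (p for p in [{"t": "pageview", "dp": "/de/cart/summary/"}])  # sample data only

try:
    json.dumps({"payloads": payloads})
except TypeError as exc:
    print(exc)  # Object of type generator is not JSON serializable

# Materialising the generator first is enough for the payload to serialize.
print(json.dumps({"payloads": list(payloads)}))
```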
</issue>
<code>
[start of saleor/core/analytics.py]
1 import uuid
2
3 import google_measurement_protocol as ga
4 from django.conf import settings
5
6 from ..celeryconf import app
7
8 FINGERPRINT_PARTS = [
9 "HTTP_ACCEPT_ENCODING",
10 "HTTP_ACCEPT_LANGUAGE",
11 "HTTP_USER_AGENT",
12 "HTTP_X_FORWARDED_FOR",
13 "REMOTE_ADDR",
14 ]
15
16 UUID_NAMESPACE = uuid.UUID("fb4abc05-e2fb-4e3e-8b78-28037ef7d07f")
17
18
19 def get_client_id(request):
20 parts = [request.META.get(key, "") for key in FINGERPRINT_PARTS]
21 name = "_".join(parts)
22 return uuid.uuid5(UUID_NAMESPACE, name)
23
24
25 @app.task
26 def ga_report(tracking_id, client_id, payloads, extra_headers=None, **extra_data):
27 ga.report(
28 tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data
29 )
30
31
32 def _report(client_id, payloads, extra_headers=None, **extra_data):
33 tracking_id = getattr(settings, "GOOGLE_ANALYTICS_TRACKING_ID", None)
34 if tracking_id and client_id:
35 ga_report.delay(
36 tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data
37 )
38
39
40 def get_order_payloads(order):
41 items = [
42 ga.item(
43 str(order_line),
44 order_line.unit_price.gross,
45 quantity=order_line.quantity,
46 item_id=order_line.product_sku,
47 )
48 for order_line in order
49 ]
50 return ga.transaction(
51 order.id,
52 items,
53 revenue=order.total.gross,
54 tax=order.total.tax,
55 shipping=order.shipping_price.net,
56 )
57
58
59 def report_order(client_id, order):
60 payloads = get_order_payloads(order)
61 _report(client_id, payloads)
62
63
64 def get_view_payloads(path, language, headers):
65 host_name = headers.get("HTTP_HOST", None)
66 referrer = headers.get("HTTP_REFERER", None)
67 return ga.pageview(path, host_name=host_name, referrer=referrer, language=language)
68
69
70 def report_view(client_id, path, language, headers):
71 payloads = get_view_payloads(path, language, headers)
72 extra_headers = {}
73 user_agent = headers.get("HTTP_USER_AGENT", None)
74 if user_agent:
75 extra_headers["user-agent"] = user_agent
76 _report(client_id, payloads, extra_headers=extra_headers)
77
[end of saleor/core/analytics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/core/analytics.py b/saleor/core/analytics.py
--- a/saleor/core/analytics.py
+++ b/saleor/core/analytics.py
@@ -1,4 +1,5 @@
import uuid
+from typing import Generator, List
import google_measurement_protocol as ga
from django.conf import settings
@@ -23,17 +24,25 @@
@app.task
-def ga_report(tracking_id, client_id, payloads, extra_headers=None, **extra_data):
+def ga_report(
+ tracking_id, client_id, payloads: List[dict], extra_headers=None, **extra_data
+):
ga.report(
tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data
)
-def _report(client_id, payloads, extra_headers=None, **extra_data):
+def _report(
+ client_id, payloads: Generator[dict, None, None], extra_headers=None, **extra_data
+):
tracking_id = getattr(settings, "GOOGLE_ANALYTICS_TRACKING_ID", None)
if tracking_id and client_id:
ga_report.delay(
- tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data
+ tracking_id,
+ client_id,
+ list(payloads),
+ extra_headers=extra_headers,
+ **extra_data,
)
| {"golden_diff": "diff --git a/saleor/core/analytics.py b/saleor/core/analytics.py\n--- a/saleor/core/analytics.py\n+++ b/saleor/core/analytics.py\n@@ -1,4 +1,5 @@\n import uuid\n+from typing import Generator, List\n \n import google_measurement_protocol as ga\n from django.conf import settings\n@@ -23,17 +24,25 @@\n \n \n @app.task\n-def ga_report(tracking_id, client_id, payloads, extra_headers=None, **extra_data):\n+def ga_report(\n+ tracking_id, client_id, payloads: List[dict], extra_headers=None, **extra_data\n+):\n ga.report(\n tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data\n )\n \n \n-def _report(client_id, payloads, extra_headers=None, **extra_data):\n+def _report(\n+ client_id, payloads: Generator[dict, None, None], extra_headers=None, **extra_data\n+):\n tracking_id = getattr(settings, \"GOOGLE_ANALYTICS_TRACKING_ID\", None)\n if tracking_id and client_id:\n ga_report.delay(\n- tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data\n+ tracking_id,\n+ client_id,\n+ list(payloads),\n+ extra_headers=extra_headers,\n+ **extra_data,\n )\n", "issue": "json serialization error with google analytics enabled\n### What I'm trying to achieve\r\nI want to enable google analytics of a docker image I build using the current docker files from the master repo. I am starting it in beanstalk and then the error occurs: \r\n```\r\n/var/log/eb-docker/containers/eb-current-app/eb-d5c249ad18da-stdouterr.log\r\n-------------------------------------\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type generator is not JSON serializable\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"./saleor/core/middleware.py\", line 27, in middleware\r\n client_id, path=path, language=language, headers=headers)\r\n File \"./saleor/core/analytics.py\", line 68, in report_view\r\n _report(client_id, payloads, extra_headers=extra_headers)\r\n File \"./saleor/core/analytics.py\", line 36, in _report\r\n **extra_data)\r\n File \"/usr/local/lib/python3.7/site-packages/celery/app/task.py\", line 427, in delay\r\n return self.apply_async(args, kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/celery/app/task.py\", line 544, in apply_async\r\n body, serializer,\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 221, in dumps\r\n payload = encoder(data)\r\n File \"/usr/local/lib/python3.7/contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 54, in _reraise_errors\r\n reraise(wrapper, wrapper(exc), sys.exc_info()[2])\r\n File \"/usr/local/lib/python3.7/site-packages/vine/five.py\", line 194, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 50, in _reraise_errors\r\n yield\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 221, in dumps\r\n payload = encoder(data)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/utils/json.py\", line 69, in dumps\r\n **dict(default_kwargs, **kwargs))\r\n File \"/usr/local/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 
0)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/utils/json.py\", line 59, in default\r\n return super(JSONEncoder, self).default(o)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nkombu.exceptions.EncodeError: Object of type generator is not JSON serializable\r\nERROR saleor.core.middleware Unable to update analytics [PID:17:uWSGIWorker2Core0]\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 50, in _reraise_errors\r\n yield\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 221, in dumps\r\n payload = encoder(data)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/utils/json.py\", line 69, in dumps\r\n **dict(default_kwargs, **kwargs))\r\n File \"/usr/local/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/utils/json.py\", line 59, in default\r\n return super(JSONEncoder, self).default(o)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type generator is not JSON serializable\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"./saleor/core/middleware.py\", line 27, in middleware\r\n client_id, path=path, language=language, headers=headers)\r\n File \"./saleor/core/analytics.py\", line 68, in report_view\r\n _report(client_id, payloads, extra_headers=extra_headers)\r\n File \"./saleor/core/analytics.py\", line 36, in _report\r\n **extra_data)\r\n File \"/usr/local/lib/python3.7/site-packages/celery/app/task.py\", line 427, in delay\r\n return self.apply_async(args, kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/celery/app/task.py\", line 544, in apply_async\r\n body, serializer,\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 221, in dumps\r\n payload = encoder(data)\r\n File \"/usr/local/lib/python3.7/contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 54, in _reraise_errors\r\n reraise(wrapper, wrapper(exc), sys.exc_info()[2])\r\n File \"/usr/local/lib/python3.7/site-packages/vine/five.py\", line 194, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 50, in _reraise_errors\r\n yield\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/serialization.py\", line 221, in dumps\r\n payload = encoder(data)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/utils/json.py\", line 69, in dumps\r\n **dict(default_kwargs, **kwargs))\r\n File \"/usr/local/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/usr/local/lib/python3.7/site-packages/kombu/utils/json.py\", line 59, in default\r\n return 
super(JSONEncoder, self).default(o)\r\n File \"/usr/local/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nkombu.exceptions.EncodeError: Object of type generator is not JSON serializable\r\nUWSGI uwsgi \"GET /de/cart/summary/ HTTP/1.1\" 200 785 29ms [PID:17:Worker-2] [RSS:162MB]```\r\n\r\n### Steps to reproduce the problem\r\n1. Build the docker image \r\n2. Try to start it in beanstalk with the minimal needed env variables set and google analytics id set too \r\n3. Look into the logs and find the error in question\r\n\r\n### What I expected to happen\r\nBeanstalk starts the dockerimage with the needed minimal env variables and google analytics enabled. \r\n \r\nThere is a merge request related to this #3615 \n", "before_files": [{"content": "import uuid\n\nimport google_measurement_protocol as ga\nfrom django.conf import settings\n\nfrom ..celeryconf import app\n\nFINGERPRINT_PARTS = [\n \"HTTP_ACCEPT_ENCODING\",\n \"HTTP_ACCEPT_LANGUAGE\",\n \"HTTP_USER_AGENT\",\n \"HTTP_X_FORWARDED_FOR\",\n \"REMOTE_ADDR\",\n]\n\nUUID_NAMESPACE = uuid.UUID(\"fb4abc05-e2fb-4e3e-8b78-28037ef7d07f\")\n\n\ndef get_client_id(request):\n parts = [request.META.get(key, \"\") for key in FINGERPRINT_PARTS]\n name = \"_\".join(parts)\n return uuid.uuid5(UUID_NAMESPACE, name)\n\n\[email protected]\ndef ga_report(tracking_id, client_id, payloads, extra_headers=None, **extra_data):\n ga.report(\n tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data\n )\n\n\ndef _report(client_id, payloads, extra_headers=None, **extra_data):\n tracking_id = getattr(settings, \"GOOGLE_ANALYTICS_TRACKING_ID\", None)\n if tracking_id and client_id:\n ga_report.delay(\n tracking_id, client_id, payloads, extra_headers=extra_headers, **extra_data\n )\n\n\ndef get_order_payloads(order):\n items = [\n ga.item(\n str(order_line),\n order_line.unit_price.gross,\n quantity=order_line.quantity,\n item_id=order_line.product_sku,\n )\n for order_line in order\n ]\n return ga.transaction(\n order.id,\n items,\n revenue=order.total.gross,\n tax=order.total.tax,\n shipping=order.shipping_price.net,\n )\n\n\ndef report_order(client_id, order):\n payloads = get_order_payloads(order)\n _report(client_id, payloads)\n\n\ndef get_view_payloads(path, language, headers):\n host_name = headers.get(\"HTTP_HOST\", None)\n referrer = headers.get(\"HTTP_REFERER\", None)\n return ga.pageview(path, host_name=host_name, referrer=referrer, language=language)\n\n\ndef report_view(client_id, path, language, headers):\n payloads = get_view_payloads(path, language, headers)\n extra_headers = {}\n user_agent = headers.get(\"HTTP_USER_AGENT\", None)\n if user_agent:\n extra_headers[\"user-agent\"] = user_agent\n _report(client_id, payloads, extra_headers=extra_headers)\n", "path": "saleor/core/analytics.py"}]} | 3,064 | 298 |
gh_patches_debug_61971 | rasdani/github-patches | git_diff | crytic__slither-1110 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug-Candidate]: Phi-node print missing 'f' in f-string
### Describe the issue:
When printing a Phi-node the string is not formatted.
There seems to be an `f` prefix missing from the string in https://github.com/crytic/slither/blob/dev/slither/slithir/operations/phi.py#L36
### Code example to reproduce the issue:
slither tests/complex_func.sol --print slithir-ssa
### Version:
dev-branch dd91f770f61eaadc286e2af3c72fb5798e376c16
### Relevant log output:
```
Contract Increment
Function Increment.increaseBy1()
IRs:
{self.lvalue}({self.lvalue.type}) := ϕ({[str(v) for v in self._rvalues]})
Expression: i += 1
```
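
A minimal sketch of the difference (plain Python with made-up SSA names, unrelated to slither internals): without the `f` prefix the braces are printed literally instead of being interpolated.

```python
# Sketch only: the f prefix is what turns {...} into interpolation.
lvalue, rvalues = "i_2", ["i_0", "i_1"]

print("{lvalue} := \u03D5({rvalues})")   # prints: {lvalue} := ϕ({rvalues})
print(f"{lvalue} := \u03D5({rvalues})")  # prints: i_2 := ϕ(['i_0', 'i_1'])
```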
</issue>
<code>
[start of slither/slithir/operations/phi.py]
1 from slither.slithir.operations.lvalue import OperationWithLValue
2 from slither.slithir.utils.utils import is_valid_lvalue
3
4
5 class Phi(OperationWithLValue):
6 def __init__(self, left_variable, nodes):
7 # When Phi operations are created the
8 # correct indexes of the variables are not yet computed
9 # We store the nodes where the variables are written
10 # so we can update the rvalues of the Phi operation
11 # after its instantiation
12 assert is_valid_lvalue(left_variable)
13 assert isinstance(nodes, set)
14 super().__init__()
15 self._lvalue = left_variable
16 self._rvalues = []
17 self._nodes = nodes
18
19 @property
20 def read(self):
21 return self.rvalues
22
23 @property
24 def rvalues(self):
25 return self._rvalues
26
27 @rvalues.setter
28 def rvalues(self, vals):
29 self._rvalues = vals
30
31 @property
32 def nodes(self):
33 return self._nodes
34
35 def __str__(self):
36 return "{self.lvalue}({self.lvalue.type}) := \u03D5({[str(v) for v in self._rvalues]})"
37
[end of slither/slithir/operations/phi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/slithir/operations/phi.py b/slither/slithir/operations/phi.py
--- a/slither/slithir/operations/phi.py
+++ b/slither/slithir/operations/phi.py
@@ -33,4 +33,4 @@
return self._nodes
def __str__(self):
- return "{self.lvalue}({self.lvalue.type}) := \u03D5({[str(v) for v in self._rvalues]})"
+ return f"{self.lvalue}({self.lvalue.type}) := \u03D5({[str(v) for v in self._rvalues]})"
| {"golden_diff": "diff --git a/slither/slithir/operations/phi.py b/slither/slithir/operations/phi.py\n--- a/slither/slithir/operations/phi.py\n+++ b/slither/slithir/operations/phi.py\n@@ -33,4 +33,4 @@\n return self._nodes\n \n def __str__(self):\n- return \"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n+ return f\"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n", "issue": "[Bug-Candidate]: Phi-node print missing 'f' in f-string\n### Describe the issue:\n\nWhen printing a Phi-node the string is not formatted.\r\nThere seems to be a 'f' missing ahead of the str in https://github.com/crytic/slither/blob/dev/slither/slithir/operations/phi.py#L36\n\n### Code example to reproduce the issue:\n\nslither tests/complex_func.sol --print slithir-ssa\n\n### Version:\n\ndev-branch dd91f770f61eaadc286e2af3c72fb5798e376c16\n\n### Relevant log output:\n\n```\r\nContract Increment\r\n Function Increment.increaseBy1()\r\n IRs:\r\n {self.lvalue}({self.lvalue.type}) := \u03d5({[str(v) for v in self._rvalues]})\r\n Expression: i += 1\r\n```\n", "before_files": [{"content": "from slither.slithir.operations.lvalue import OperationWithLValue\nfrom slither.slithir.utils.utils import is_valid_lvalue\n\n\nclass Phi(OperationWithLValue):\n def __init__(self, left_variable, nodes):\n # When Phi operations are created the\n # correct indexes of the variables are not yet computed\n # We store the nodes where the variables are written\n # so we can update the rvalues of the Phi operation\n # after its instantiation\n assert is_valid_lvalue(left_variable)\n assert isinstance(nodes, set)\n super().__init__()\n self._lvalue = left_variable\n self._rvalues = []\n self._nodes = nodes\n\n @property\n def read(self):\n return self.rvalues\n\n @property\n def rvalues(self):\n return self._rvalues\n\n @rvalues.setter\n def rvalues(self, vals):\n self._rvalues = vals\n\n @property\n def nodes(self):\n return self._nodes\n\n def __str__(self):\n return \"{self.lvalue}({self.lvalue.type}) := \\u03D5({[str(v) for v in self._rvalues]})\"\n", "path": "slither/slithir/operations/phi.py"}]} | 1,069 | 148 |
gh_patches_debug_13611 | rasdani/github-patches | git_diff | conan-io__conan-center-index-5274 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] fontconfig/2.13.93: build fails on macos with bzip2:shared=True
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **fontconfig/2.13.93**
* Operating System+version: **MacOS 10.15**
* Compiler+version: **Apple clang 11.0**
* Conan version: **conan 1.35.1**
* Python version: **Python 3.9.4**
### Conan profile
```
Configuration for profile macos-xcode11-x86_64:
[settings]
os=Macos
os_build=Macos
arch=x86_64
arch_build=x86_64
compiler=apple-clang
compiler.version=11.0
compiler.libcxx=libc++
build_type=Release
os.version=10.13
compiler.cppstd=17
[options]
boost:header_only=True
brotli:shared=True
bzip2:shared=True
cairo:enable_fc=True
cairo:enable_ft=True
cairo:enable_glib=False
cairo:shared=True
ccdcsqlite3:enable_explain_comments=True
ccdcsqlite3:enable_fts3=True
ccdcsqlite3:enable_fts4=True
ccdcsqlite3:enable_fts5=True
ccdcsqlite3:enable_json1=True
ccdcsqlite3:enable_null_trim=True
ccdcsqlite3:max_column=10000
ccdcsqlite3:shared=True
csdprotobufs:shared=True
expat:shared=True
fontconfig:shared=True
freetype:shared=True
gtest:shared=True
inchi:shared=True
jasper:shared=True
jbig:shared=True
lexactivator:shared=True
lexfloatclient:shared=True
libarchive:shared=True
libarchive:with_bzip2=True
libarchive:with_zlib=True
libarchive:with_zstd=True
libcurl:darwin_ssl=True
libcurl:shared=True
libcurl:with_openssl=False
libiconv:shared=False
libjpeg:shared=True
libpng:shared=True
libtiff:shared=True
libuuid:shared=True
libwebp:shared=True
libxl:shared=True
libxml2:shared=True
openscenegraph:shared=True
openssl:shared=True
protobuf:shared=True
protobuf:with_zlib=True
xz_utils:shared=True
zlib:shared=True
zstd:shared=True
[build_requires]
[env]
```
### Steps to reproduce (Include if Applicable)
build fails in conan package stage
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
if [ -f source_subfolder/fonts.conf ]; then \
echo " /usr/bin/install -c -m 644 source_subfolder/fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf"; \
/usr/bin/install -c -m 644 source_subfolder/fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf; \
else if [ -f fonts.conf ]; then \
echo " /usr/bin/install -c -m 644 fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf"; \
/usr/bin/install -c -m 644 fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf; \
fi; fi
/usr/bin/install -c -m 644 fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf
fc-cache -s -f -v
/usr/bin/install -c -m 644 fontconfig.pc '/Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/lib/pkgconfig'
/usr/bin/install -c -m 644 source_subfolder/fonts.dtd '/Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/share/xml/fontconfig'
dyld: Library not loaded: libbz2.1.dylib
Referenced from: /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/build/a1eeda7123576b54b8790256237bfc4e70516c86/fc-cache/.libs/fc-cache
Reason: image not found
/bin/sh: line 1: 35414 Abort trap: 6 fc-cache/fc-cache -s -f -v
make[2]: *** [install-data-local] Error 134
make[1]: *** [install-am] Error 2
make: *** [install-recursive] Error 1
ERROR: fontconfig/2.13.93: Error in package() method, line 87
autotools.install()
ConanException: Error 2 while executing make install -j12
```
</details>
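
The dyld message above shows `fc-cache` requesting the bz2 library by its bare install name (`libbz2.1.dylib`, with no absolute path or `@rpath`), so resolution depends entirely on environment search paths that may not reach the `make install` child processes. A small diagnostic sketch (assuming the build path quoted in the log; not part of the recipe) that lists the binary's load commands with `otool -L`:

```python
# Diagnostic sketch only: list which shared libraries fc-cache was linked against.
import subprocess

fc_cache = (
    "/Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/build/"
    "a1eeda7123576b54b8790256237bfc4e70516c86/fc-cache/.libs/fc-cache"
)
print(subprocess.run(["otool", "-L", fc_cache], capture_output=True, text=True).stdout)
```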
</issue>
<code>
[start of recipes/fontconfig/all/conanfile.py]
1 import os
2 import glob
3
4 from conans import ConanFile, tools, AutoToolsBuildEnvironment
5 from conans.errors import ConanInvalidConfiguration
6
7
8 class FontconfigConan(ConanFile):
9 name = "fontconfig"
10 license = "MIT"
11 url = "https://github.com/conan-io/conan-center-index"
12 description = "Fontconfig is a library for configuring and customizing font access"
13 homepage = "https://gitlab.freedesktop.org/fontconfig/fontconfig"
14 topics = ("conan", "fontconfig", "fonts", "freedesktop")
15 settings = "os", "compiler", "build_type", "arch"
16 options = {"shared": [True, False], "fPIC": [True, False]}
17 default_options = {"shared": False, "fPIC": True}
18 generators = "pkg_config"
19
20 _autotools = None
21
22 @property
23 def _source_subfolder(self):
24 return "source_subfolder"
25
26 def config_options(self):
27 if self.settings.os == "Windows":
28 del self.options.fPIC
29
30 def configure(self):
31 if self.settings.compiler == "Visual Studio":
32 raise ConanInvalidConfiguration("Visual Studio builds are not supported.")
33 if self.options.shared:
34 del self.options.fPIC
35 del self.settings.compiler.libcxx
36 del self.settings.compiler.cppstd
37
38 def requirements(self):
39 self.requires("freetype/2.10.4")
40 self.requires("expat/2.2.10")
41 if self.settings.os == "Linux":
42 self.requires("libuuid/1.0.3")
43 elif self.settings.os == "Macos":
44 self.requires("libgettext/0.20.1")
45
46 def build_requirements(self):
47 self.build_requires("gperf/3.1")
48 self.build_requires("pkgconf/1.7.3")
49 if tools.os_info.is_windows and not tools.get_env("CONAN_BASH_PATH"):
50 self.build_requires("msys2/20200517")
51
52 def source(self):
53 tools.get(**self.conan_data["sources"][self.version])
54 extrated_dir = self.name + "-" + self.version
55 os.rename(extrated_dir, self._source_subfolder)
56
57 def _configure_autotools(self):
58 if not self._autotools:
59 args = ["--enable-static=%s" % ("no" if self.options.shared else "yes"),
60 "--enable-shared=%s" % ("yes" if self.options.shared else "no"),
61 "--disable-docs"]
62 args.append("--sysconfdir=%s" % tools.unix_path(os.path.join(self.package_folder, "bin", "etc")))
63 args.append("--datadir=%s" % tools.unix_path(os.path.join(self.package_folder, "bin", "share")))
64 args.append("--datarootdir=%s" % tools.unix_path(os.path.join(self.package_folder, "bin", "share")))
65 args.append("--localstatedir=%s" % tools.unix_path(os.path.join(self.package_folder, "bin", "var")))
66 self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
67 self._autotools.libs = []
68 self._autotools.configure(configure_dir=self._source_subfolder, args=args)
69 tools.replace_in_file("Makefile", "po-conf test", "po-conf")
70 return self._autotools
71
72 def _patch_files(self):
73 # - fontconfig requires libtool version number, change it for the corresponding freetype one
74 tools.replace_in_file(os.path.join(self._source_subfolder, 'configure'), '21.0.15', '2.8.1')
75
76 def build(self):
77 # Patch files from dependencies
78 self._patch_files()
79 with tools.run_environment(self):
80 autotools = self._configure_autotools()
81 autotools.make()
82
83 def package(self):
84 self.copy("COPYING", dst="licenses", src=self._source_subfolder)
85 with tools.run_environment(self):
86 autotools = self._configure_autotools()
87 autotools.install()
88 os.unlink(os.path.join(self.package_folder, "lib", "libfontconfig.la"))
89 tools.rmdir(os.path.join(self.package_folder, "lib", "pkgconfig"))
90 for f in glob.glob(os.path.join(self.package_folder, "bin", "etc", "fonts", "conf.d", "*.conf")):
91 if os.path.islink(f):
92 os.unlink(f)
93 for def_file in glob.glob(os.path.join(self.package_folder, "lib", "*.def")):
94 os.remove(def_file)
95
96 def package_info(self):
97 self.cpp_info.libs = ["fontconfig"]
98 if self.settings.os in ["Linux", "FreeBSD"]:
99 self.cpp_info.system_libs.extend(["m", "pthread"])
100 self.cpp_info.names["cmake_find_package"] = "Fontconfig"
101 self.cpp_info.names["cmake_find_package_multi"] = "Fontconfig"
102
103 fontconfig_file = os.path.join(self.package_folder, "bin", "etc", "fonts", "fonts.conf")
104 self.output.info("Creating FONTCONFIG_FILE environment variable: {}".format(fontconfig_file))
105 self.env_info.FONTCONFIG_FILE = fontconfig_file
106 fontconfig_path = os.path.join(self.package_folder, "bin", "etc", "fonts")
107 self.output.info("Creating FONTCONFIG_PATH environment variable: {}".format(fontconfig_path))
108 self.env_info.FONTCONFIG_PATH = fontconfig_path
109
[end of recipes/fontconfig/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/fontconfig/all/conanfile.py b/recipes/fontconfig/all/conanfile.py
--- a/recipes/fontconfig/all/conanfile.py
+++ b/recipes/fontconfig/all/conanfile.py
@@ -72,6 +72,12 @@
def _patch_files(self):
# - fontconfig requires libtool version number, change it for the corresponding freetype one
tools.replace_in_file(os.path.join(self._source_subfolder, 'configure'), '21.0.15', '2.8.1')
+ # disable fc-cache test to enable cross compilation but also builds with shared libraries on MacOS
+ tools.replace_in_file(
+ os.path.join(self._source_subfolder, 'Makefile.in'),
+ '@CROSS_COMPILING_TRUE@RUN_FC_CACHE_TEST = false',
+ 'RUN_FC_CACHE_TEST=false'
+ )
def build(self):
# Patch files from dependencies
| {"golden_diff": "diff --git a/recipes/fontconfig/all/conanfile.py b/recipes/fontconfig/all/conanfile.py\n--- a/recipes/fontconfig/all/conanfile.py\n+++ b/recipes/fontconfig/all/conanfile.py\n@@ -72,6 +72,12 @@\n def _patch_files(self):\n # - fontconfig requires libtool version number, change it for the corresponding freetype one\n tools.replace_in_file(os.path.join(self._source_subfolder, 'configure'), '21.0.15', '2.8.1')\n+ # disable fc-cache test to enable cross compilation but also builds with shared libraries on MacOS\n+ tools.replace_in_file(\n+ os.path.join(self._source_subfolder, 'Makefile.in'),\n+ '@CROSS_COMPILING_TRUE@RUN_FC_CACHE_TEST = false',\n+ 'RUN_FC_CACHE_TEST=false'\n+ )\n \n def build(self):\n # Patch files from dependencies\n", "issue": "[package] fontconfig/2.13.93: build fails on macos with bzip2:shared=True\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **fontconfig/2.13.93**\r\n * Operating System+version: **MacOS 10.15**\r\n * Compiler+version: **Apple clang 11.0**\r\n * Conan version: **conan 1.35.1**\r\n * Python version: **Python 3.9.4**\r\n\r\n\r\n### Conan profile \r\n```\r\nConfiguration for profile macos-xcode11-x86_64:\r\n\r\n[settings]\r\nos=Macos\r\nos_build=Macos\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=apple-clang\r\ncompiler.version=11.0\r\ncompiler.libcxx=libc++\r\nbuild_type=Release\r\nos.version=10.13\r\ncompiler.cppstd=17\r\n[options]\r\nboost:header_only=True\r\nbrotli:shared=True\r\nbzip2:shared=True\r\ncairo:enable_fc=True\r\ncairo:enable_ft=True\r\ncairo:enable_glib=False\r\ncairo:shared=True\r\nccdcsqlite3:enable_explain_comments=True\r\nccdcsqlite3:enable_fts3=True\r\nccdcsqlite3:enable_fts4=True\r\nccdcsqlite3:enable_fts5=True\r\nccdcsqlite3:enable_json1=True\r\nccdcsqlite3:enable_null_trim=True\r\nccdcsqlite3:max_column=10000\r\nccdcsqlite3:shared=True\r\ncsdprotobufs:shared=True\r\nexpat:shared=True\r\nfontconfig:shared=True\r\nfreetype:shared=True\r\ngtest:shared=True\r\ninchi:shared=True\r\njasper:shared=True\r\njbig:shared=True\r\nlexactivator:shared=True\r\nlexfloatclient:shared=True\r\nlibarchive:shared=True\r\nlibarchive:with_bzip2=True\r\nlibarchive:with_zlib=True\r\nlibarchive:with_zstd=True\r\nlibcurl:darwin_ssl=True\r\nlibcurl:shared=True\r\nlibcurl:with_openssl=False\r\nlibiconv:shared=False\r\nlibjpeg:shared=True\r\nlibpng:shared=True\r\nlibtiff:shared=True\r\nlibuuid:shared=True\r\nlibwebp:shared=True\r\nlibxl:shared=True\r\nlibxml2:shared=True\r\nopenscenegraph:shared=True\r\nopenssl:shared=True\r\nprotobuf:shared=True\r\nprotobuf:with_zlib=True\r\nxz_utils:shared=True\r\nzlib:shared=True\r\nzstd:shared=True\r\n[build_requires]\r\n[env]\r\n\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\nbuild fails in conan package stage\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nif [ -f source_subfolder/fonts.conf ]; then \\\r\n\t echo \" /usr/bin/install -c -m 644 source_subfolder/fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf\"; \\\r\n\t /usr/bin/install -c -m 644 source_subfolder/fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf; \\\r\n\telse if [ -f fonts.conf ]; then \\\r\n\t echo \" /usr/bin/install -c -m 644 fonts.conf 
/Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf\"; \\\r\n\t /usr/bin/install -c -m 644 fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf; \\\r\n\tfi; fi\r\n /usr/bin/install -c -m 644 fonts.conf /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/etc/fonts/fonts.conf\r\nfc-cache -s -f -v\r\n /usr/bin/install -c -m 644 fontconfig.pc '/Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/lib/pkgconfig'\r\n /usr/bin/install -c -m 644 source_subfolder/fonts.dtd '/Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/package/a1eeda7123576b54b8790256237bfc4e70516c86/bin/share/xml/fontconfig'\r\ndyld: Library not loaded: libbz2.1.dylib\r\n Referenced from: /Users/vagrant/.conan/data/fontconfig/2.13.93/_/_/build/a1eeda7123576b54b8790256237bfc4e70516c86/fc-cache/.libs/fc-cache\r\n Reason: image not found\r\n/bin/sh: line 1: 35414 Abort trap: 6 fc-cache/fc-cache -s -f -v\r\nmake[2]: *** [install-data-local] Error 134\r\nmake[1]: *** [install-am] Error 2\r\nmake: *** [install-recursive] Error 1\r\nERROR: fontconfig/2.13.93: Error in package() method, line 87\r\n\tautotools.install()\r\n\tConanException: Error 2 while executing make install -j12\r\n\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "import os\nimport glob\n\nfrom conans import ConanFile, tools, AutoToolsBuildEnvironment\nfrom conans.errors import ConanInvalidConfiguration\n\n\nclass FontconfigConan(ConanFile):\n name = \"fontconfig\"\n license = \"MIT\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"Fontconfig is a library for configuring and customizing font access\"\n homepage = \"https://gitlab.freedesktop.org/fontconfig/fontconfig\"\n topics = (\"conan\", \"fontconfig\", \"fonts\", \"freedesktop\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n default_options = {\"shared\": False, \"fPIC\": True}\n generators = \"pkg_config\"\n\n _autotools = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.settings.compiler == \"Visual Studio\":\n raise ConanInvalidConfiguration(\"Visual Studio builds are not supported.\")\n if self.options.shared:\n del self.options.fPIC\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def requirements(self):\n self.requires(\"freetype/2.10.4\")\n self.requires(\"expat/2.2.10\")\n if self.settings.os == \"Linux\":\n self.requires(\"libuuid/1.0.3\")\n elif self.settings.os == \"Macos\":\n self.requires(\"libgettext/0.20.1\")\n\n def build_requirements(self):\n self.build_requires(\"gperf/3.1\")\n self.build_requires(\"pkgconf/1.7.3\")\n if tools.os_info.is_windows and not tools.get_env(\"CONAN_BASH_PATH\"):\n self.build_requires(\"msys2/20200517\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extrated_dir = self.name + \"-\" + self.version\n os.rename(extrated_dir, self._source_subfolder)\n\n def _configure_autotools(self):\n if not self._autotools:\n args = [\"--enable-static=%s\" % (\"no\" if self.options.shared else \"yes\"),\n \"--enable-shared=%s\" % (\"yes\" if self.options.shared else \"no\"),\n \"--disable-docs\"]\n 
args.append(\"--sysconfdir=%s\" % tools.unix_path(os.path.join(self.package_folder, \"bin\", \"etc\")))\n args.append(\"--datadir=%s\" % tools.unix_path(os.path.join(self.package_folder, \"bin\", \"share\")))\n args.append(\"--datarootdir=%s\" % tools.unix_path(os.path.join(self.package_folder, \"bin\", \"share\")))\n args.append(\"--localstatedir=%s\" % tools.unix_path(os.path.join(self.package_folder, \"bin\", \"var\")))\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.libs = []\n self._autotools.configure(configure_dir=self._source_subfolder, args=args)\n tools.replace_in_file(\"Makefile\", \"po-conf test\", \"po-conf\")\n return self._autotools\n\n def _patch_files(self):\n # - fontconfig requires libtool version number, change it for the corresponding freetype one\n tools.replace_in_file(os.path.join(self._source_subfolder, 'configure'), '21.0.15', '2.8.1')\n\n def build(self):\n # Patch files from dependencies\n self._patch_files()\n with tools.run_environment(self):\n autotools = self._configure_autotools()\n autotools.make()\n\n def package(self):\n self.copy(\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n with tools.run_environment(self):\n autotools = self._configure_autotools()\n autotools.install()\n os.unlink(os.path.join(self.package_folder, \"lib\", \"libfontconfig.la\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n for f in glob.glob(os.path.join(self.package_folder, \"bin\", \"etc\", \"fonts\", \"conf.d\", \"*.conf\")):\n if os.path.islink(f):\n os.unlink(f)\n for def_file in glob.glob(os.path.join(self.package_folder, \"lib\", \"*.def\")):\n os.remove(def_file)\n\n def package_info(self):\n self.cpp_info.libs = [\"fontconfig\"]\n if self.settings.os in [\"Linux\", \"FreeBSD\"]:\n self.cpp_info.system_libs.extend([\"m\", \"pthread\"])\n self.cpp_info.names[\"cmake_find_package\"] = \"Fontconfig\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"Fontconfig\"\n\n fontconfig_file = os.path.join(self.package_folder, \"bin\", \"etc\", \"fonts\", \"fonts.conf\")\n self.output.info(\"Creating FONTCONFIG_FILE environment variable: {}\".format(fontconfig_file))\n self.env_info.FONTCONFIG_FILE = fontconfig_file\n fontconfig_path = os.path.join(self.package_folder, \"bin\", \"etc\", \"fonts\")\n self.output.info(\"Creating FONTCONFIG_PATH environment variable: {}\".format(fontconfig_path))\n self.env_info.FONTCONFIG_PATH = fontconfig_path\n", "path": "recipes/fontconfig/all/conanfile.py"}]} | 3,390 | 206 |
gh_patches_debug_31305 | rasdani/github-patches | git_diff | sosreport__sos-2660 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[ssh] Can ssh plugin check the permissions set for /home/*/.ssh files?
Hello!
When users set wrong permissions on files in their ~/.ssh/ folder, e.g. they grant write permission on `~/.ssh/authorized_keys` to `other`, the SSH server will refuse to accept connections for that user.
I think it would be nice for the [ssh] plugin to check whether the permissions set for files in the `.ssh` folders of system users are correct, or whether they have been corrupted in some way.
A very simple solution for that would be to just run `ls -l .ssh/` in every home directory. Would it be OK to extend the ssh plugin to do this? Would it be better to have a separate plugin do this?
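For illustration, a minimal standalone sketch of that check (the `/etc/passwd` walk and the plain `ls` call here are assumptions for the example, not the sos plugin API):
```
import os
import subprocess

with open("/etc/passwd") as passwd_file:
    for line in passwd_file:
        fields = line.split(":")
        if len(fields) > 5:
            ssh_dir = os.path.join(fields[5], ".ssh")
            if os.path.isdir(ssh_dir):
                # a real plugin would collect this listing instead of printing it
                listing = subprocess.run(["ls", "-la", ssh_dir],
                                         capture_output=True, text=True)
                print(listing.stdout)
```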
Let me know what you think, and I'll give it a try if that's ok :)
</issue>
<code>
[start of sos/report/plugins/ssh.py]
1 # Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]>
2
3 # This file is part of the sos project: https://github.com/sosreport/sos
4 #
5 # This copyrighted material is made available to anyone wishing to use,
6 # modify, copy, or redistribute it subject to the terms and conditions of
7 # version 2 of the GNU General Public License.
8 #
9 # See the LICENSE file in the source distribution for further information.
10
11 from sos.report.plugins import Plugin, IndependentPlugin
12
13
14 class Ssh(Plugin, IndependentPlugin):
15
16 short_desc = 'Secure shell service'
17
18 plugin_name = 'ssh'
19 profiles = ('services', 'security', 'system', 'identity')
20
21 def setup(self):
22
23 self.add_file_tags({
24 '/etc/ssh/sshd_config': 'sshd_config',
25 '/etc/ssh/ssh_config': 'ssh_config'
26 })
27
28 sshcfgs = [
29 "/etc/ssh/ssh_config",
30 "/etc/ssh/sshd_config"
31 ]
32
33 # Include main config files
34 self.add_copy_spec(sshcfgs)
35
36 # Read configs for any includes and copy those
37 try:
38 for sshcfg in sshcfgs:
39 tag = sshcfg.split('/')[-1]
40 with open(sshcfg, 'r') as cfgfile:
41 for line in cfgfile:
42 # skip empty lines and comments
43 if len(line.split()) == 0 or line.startswith('#'):
44 continue
45 # ssh_config keywords are allowed as case-insensitive
46 if line.lower().startswith('include'):
47 confarg = line.split()
48 self.add_copy_spec(confarg[1], tags=tag)
49 except Exception:
50 pass
51
52
53 # vim: set et ts=4 sw=4 :
54
[end of sos/report/plugins/ssh.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sos/report/plugins/ssh.py b/sos/report/plugins/ssh.py
--- a/sos/report/plugins/ssh.py
+++ b/sos/report/plugins/ssh.py
@@ -9,6 +9,7 @@
# See the LICENSE file in the source distribution for further information.
from sos.report.plugins import Plugin, IndependentPlugin
+import os.path
class Ssh(Plugin, IndependentPlugin):
@@ -33,6 +34,10 @@
# Include main config files
self.add_copy_spec(sshcfgs)
+ self.included_configs(sshcfgs)
+ self.user_ssh_files_permissions()
+
+ def included_configs(self, sshcfgs):
# Read configs for any includes and copy those
try:
for sshcfg in sshcfgs:
@@ -49,5 +54,33 @@
except Exception:
pass
+ def user_ssh_files_permissions(self):
+ """
+ Iterate over .ssh folders in user homes to see their permissions.
+
+ Bad permissions can prevent SSH from allowing access to given user.
+ """
+ users_data = self.exec_cmd('getent passwd')
+
+ if users_data['status']:
+ # If getent fails, fallback to just reading /etc/passwd
+ try:
+ with open('/etc/passwd') as passwd_file:
+ users_data_lines = passwd_file.readlines()
+ except Exception:
+ # If we can't read /etc/passwd, then there's something wrong.
+ self._log_error("Couldn't read /etc/passwd")
+ return
+ else:
+ users_data_lines = users_data['output'].splitlines()
+
+ # Read the home paths of users in the system and check the ~/.ssh dirs
+ for usr_line in users_data_lines:
+ try:
+ home_dir = os.path.join(usr_line.split(':')[5], '.ssh')
+ if self.path_isdir(home_dir):
+ self.add_cmd_output('ls -laZ {}'.format(home_dir))
+ except IndexError:
+ pass
# vim: set et ts=4 sw=4 :
| {"golden_diff": "diff --git a/sos/report/plugins/ssh.py b/sos/report/plugins/ssh.py\n--- a/sos/report/plugins/ssh.py\n+++ b/sos/report/plugins/ssh.py\n@@ -9,6 +9,7 @@\n # See the LICENSE file in the source distribution for further information.\n \n from sos.report.plugins import Plugin, IndependentPlugin\n+import os.path\n \n \n class Ssh(Plugin, IndependentPlugin):\n@@ -33,6 +34,10 @@\n # Include main config files\n self.add_copy_spec(sshcfgs)\n \n+ self.included_configs(sshcfgs)\n+ self.user_ssh_files_permissions()\n+\n+ def included_configs(self, sshcfgs):\n # Read configs for any includes and copy those\n try:\n for sshcfg in sshcfgs:\n@@ -49,5 +54,33 @@\n except Exception:\n pass\n \n+ def user_ssh_files_permissions(self):\n+ \"\"\"\n+ Iterate over .ssh folders in user homes to see their permissions.\n+\n+ Bad permissions can prevent SSH from allowing access to given user.\n+ \"\"\"\n+ users_data = self.exec_cmd('getent passwd')\n+\n+ if users_data['status']:\n+ # If getent fails, fallback to just reading /etc/passwd\n+ try:\n+ with open('/etc/passwd') as passwd_file:\n+ users_data_lines = passwd_file.readlines()\n+ except Exception:\n+ # If we can't read /etc/passwd, then there's something wrong.\n+ self._log_error(\"Couldn't read /etc/passwd\")\n+ return\n+ else:\n+ users_data_lines = users_data['output'].splitlines()\n+\n+ # Read the home paths of users in the system and check the ~/.ssh dirs\n+ for usr_line in users_data_lines:\n+ try:\n+ home_dir = os.path.join(usr_line.split(':')[5], '.ssh')\n+ if self.path_isdir(home_dir):\n+ self.add_cmd_output('ls -laZ {}'.format(home_dir))\n+ except IndexError:\n+ pass\n \n # vim: set et ts=4 sw=4 :\n", "issue": "[ssh] Can ssh plugin check the permissions set for /home/*/.ssh files?\nHello!\r\n\r\nWhen users set wrong permissions to files in their ~/.ssh/ folder, i.e. they set write permission for `~/.ssh/authenticated_keys` for `other`, the SSH server will refuse to accept connections for this user.\r\n\r\nI think it would be nice for the [ssh] plugin to check, if the permissions set for files in the `.ssh` folders of system users are correct, or if they are corrupted in some way. \r\n\r\nA very simple solution for that would be to just run `ls -l .ssh/` in every home directory. Would it be OK to extend the ssh plugin to do this? 
Would it be better to have a separate plugin do this?\r\n\r\nLet me know what you think, and I'll give it a try if that's ok :)\n", "before_files": [{"content": "# Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]>\n\n# This file is part of the sos project: https://github.com/sosreport/sos\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# version 2 of the GNU General Public License.\n#\n# See the LICENSE file in the source distribution for further information.\n\nfrom sos.report.plugins import Plugin, IndependentPlugin\n\n\nclass Ssh(Plugin, IndependentPlugin):\n\n short_desc = 'Secure shell service'\n\n plugin_name = 'ssh'\n profiles = ('services', 'security', 'system', 'identity')\n\n def setup(self):\n\n self.add_file_tags({\n '/etc/ssh/sshd_config': 'sshd_config',\n '/etc/ssh/ssh_config': 'ssh_config'\n })\n\n sshcfgs = [\n \"/etc/ssh/ssh_config\",\n \"/etc/ssh/sshd_config\"\n ]\n\n # Include main config files\n self.add_copy_spec(sshcfgs)\n\n # Read configs for any includes and copy those\n try:\n for sshcfg in sshcfgs:\n tag = sshcfg.split('/')[-1]\n with open(sshcfg, 'r') as cfgfile:\n for line in cfgfile:\n # skip empty lines and comments\n if len(line.split()) == 0 or line.startswith('#'):\n continue\n # ssh_config keywords are allowed as case-insensitive\n if line.lower().startswith('include'):\n confarg = line.split()\n self.add_copy_spec(confarg[1], tags=tag)\n except Exception:\n pass\n\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/report/plugins/ssh.py"}]} | 1,201 | 465 |
gh_patches_debug_23795 | rasdani/github-patches | git_diff | rasterio__rasterio-1935 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MemoryFile and seek
# Expected behavior and actual behavior.
I am trying to download a file from AWS to a MemoryFile using boto3 (a Sentinel-2 JP2). I would like to use `client.download_fileobj()` because it gives me finer control over the download:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html
## Steps to reproduce the problem.
```
import rasterio.io
import boto3
with rasterio.io.MemoryFile() as memfile:
boto3.client('s3').download_fileobj(
Fileobj=memfile,
Bucket='sentinel-s2-l1c',
Key='tiles/15/R/TN/2019/3/28/0/B02.jp2',
ExtraArgs={'RequestPayer': 'requester'},
Config=None)
```
Raises:
```
ValueError: seek position past end of file: 8388608
```
This works fine with io.BytesIO or the GCP storage client `blob.download_to_file()`.
MemoryFile behaves differently from BytesIO when seeking to positions beyond the data written so far (the problem is probably linked to the multithreaded download in the AWS case?).
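One buffering workaround consistent with that observation is to complete the transfer into `io.BytesIO` first and only hand the finished bytes to `MemoryFile` afterwards. A sketch, assuming the same bucket/key and requester-pays setup as above:
```
import io
import boto3
import rasterio.io

buf = io.BytesIO()
# finish the (possibly multithreaded) transfer before MemoryFile sees any bytes
boto3.client('s3').download_fileobj(
    Fileobj=buf,
    Bucket='sentinel-s2-l1c',
    Key='tiles/15/R/TN/2019/3/28/0/B02.jp2',
    ExtraArgs={'RequestPayer': 'requester'})
buf.seek(0)
with rasterio.io.MemoryFile(buf.read()) as memfile:
    with memfile.open() as src:
        print(src.profile)
```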
## Operating system
Ubuntu 18
## Rasterio version and provenance
rasterio-1.1.4-cp38-cp38-manylinux1_x86_64.whl
</issue>
<code>
[start of rasterio/io.py]
1 """Classes capable of reading and writing datasets
2
3 Instances of these classes are called dataset objects.
4 """
5
6 import logging
7
8 from rasterio._base import (
9 get_dataset_driver, driver_can_create, driver_can_create_copy)
10 from rasterio._io import (
11 DatasetReaderBase, DatasetWriterBase, BufferedDatasetWriterBase,
12 MemoryFileBase)
13 from rasterio.windows import WindowMethodsMixin
14 from rasterio.env import ensure_env, env_ctx_if_needed
15 from rasterio.transform import TransformMethodsMixin
16 from rasterio.path import UnparsedPath
17
18
19 log = logging.getLogger(__name__)
20
21
22 class DatasetReader(DatasetReaderBase, WindowMethodsMixin,
23 TransformMethodsMixin):
24 """An unbuffered data and metadata reader"""
25
26 def __repr__(self):
27 return "<{} DatasetReader name='{}' mode='{}'>".format(
28 self.closed and 'closed' or 'open', self.name, self.mode)
29
30
31 class DatasetWriter(DatasetWriterBase, WindowMethodsMixin,
32 TransformMethodsMixin):
33 """An unbuffered data and metadata writer. Its methods write data
34 directly to disk.
35 """
36
37 def __repr__(self):
38 return "<{} DatasetWriter name='{}' mode='{}'>".format(
39 self.closed and 'closed' or 'open', self.name, self.mode)
40
41
42 class BufferedDatasetWriter(BufferedDatasetWriterBase, WindowMethodsMixin,
43 TransformMethodsMixin):
44 """Maintains data and metadata in a buffer, writing to disk or
45 network only when `close()` is called.
46
47 This allows incremental updates to datasets using formats that don't
48 otherwise support updates, such as JPEG.
49 """
50
51 def __repr__(self):
52 return "<{} BufferedDatasetWriter name='{}' mode='{}'>".format(
53 self.closed and 'closed' or 'open', self.name, self.mode)
54
55
56 class MemoryFile(MemoryFileBase):
57 """A BytesIO-like object, backed by an in-memory file.
58
59 This allows formatted files to be read and written without I/O.
60
61 A MemoryFile created with initial bytes becomes immutable. A
62 MemoryFile created without initial bytes may be written to using
63 either file-like or dataset interfaces.
64
65 Examples
66 --------
67
68 A GeoTIFF can be loaded in memory and accessed using the GeoTIFF
69 format driver
70
71 >>> with open('tests/data/RGB.byte.tif', 'rb') as f, MemoryFile(f) as memfile:
72 ... with memfile.open() as src:
73 ... pprint.pprint(src.profile)
74 ...
75 {'count': 3,
76 'crs': CRS({'init': 'epsg:32618'}),
77 'driver': 'GTiff',
78 'dtype': 'uint8',
79 'height': 718,
80 'interleave': 'pixel',
81 'nodata': 0.0,
82 'tiled': False,
83 'transform': Affine(300.0379266750948, 0.0, 101985.0,
84 0.0, -300.041782729805, 2826915.0),
85 'width': 791}
86
87 """
88 def __init__(self, file_or_bytes=None, filename=None, ext=''):
89 """Create a new file in memory
90
91 Parameters
92 ----------
93 file_or_bytes : file-like object or bytes, optional
94 File or bytes holding initial data.
95 filename : str, optional
96 An optional filename. A unique one will otherwise be generated.
97 ext : str, optional
98 An optional extension.
99
100 Returns
101 -------
102 MemoryFile
103 """
104 super(MemoryFile, self).__init__(
105 file_or_bytes=file_or_bytes, filename=filename, ext=ext)
106
107 @ensure_env
108 def open(self, driver=None, width=None, height=None, count=None, crs=None,
109 transform=None, dtype=None, nodata=None, sharing=False, **kwargs):
110 """Open the file and return a Rasterio dataset object.
111
112 If data has already been written, the file is opened in 'r'
113 mode. Otherwise, the file is opened in 'w' mode.
114
115 Parameters
116 ----------
117 Note well that there is no `path` parameter: a `MemoryFile`
118 contains a single dataset and there is no need to specify a
119 path.
120
121 Other parameters are optional and have the same semantics as the
122 parameters of `rasterio.open()`.
123 """
124 mempath = UnparsedPath(self.name)
125
126 if self.closed:
127 raise IOError("I/O operation on closed file.")
128 if self.exists():
129 log.debug("VSI path: {}".format(mempath.path))
130 return DatasetReader(mempath, driver=driver, sharing=sharing, **kwargs)
131 else:
132 writer = get_writer_for_driver(driver)
133 return writer(mempath, 'w+', driver=driver, width=width,
134 height=height, count=count, crs=crs,
135 transform=transform, dtype=dtype,
136 nodata=nodata, sharing=sharing, **kwargs)
137
138 def __enter__(self):
139 self._env = env_ctx_if_needed()
140 self._env.__enter__()
141 return self
142
143 def __exit__(self, *args, **kwargs):
144 self._env.__exit__()
145 self.close()
146
147
148 class ZipMemoryFile(MemoryFile):
149 """A read-only BytesIO-like object backed by an in-memory zip file.
150
151 This allows a zip file containing formatted files to be read
152 without I/O.
153 """
154
155 def __init__(self, file_or_bytes=None):
156 super(ZipMemoryFile, self).__init__(file_or_bytes, ext='zip')
157
158 @ensure_env
159 def open(self, path, driver=None, sharing=False, **kwargs):
160 """Open a dataset within the zipped stream.
161
162 Parameters
163 ----------
164 path : str
165 Path to a dataset in the zip file, relative to the root of the
166 archive.
167
168 Other parameters are optional and have the same semantics as the
169 parameters of `rasterio.open()`.
170
171 Returns
172 -------
173 A Rasterio dataset object
174 """
175 zippath = UnparsedPath('/vsizip{0}/{1}'.format(self.name, path.lstrip('/')))
176
177 if self.closed:
178 raise IOError("I/O operation on closed file.")
179 return DatasetReader(zippath, driver=driver, sharing=sharing, **kwargs)
180
181
182 def get_writer_for_driver(driver):
183 """Return the writer class appropriate for the specified driver."""
184 if not driver:
185 raise ValueError("'driver' is required to write dataset.")
186 cls = None
187 if driver_can_create(driver):
188 cls = DatasetWriter
189 elif driver_can_create_copy(driver): # pragma: no branch
190 cls = BufferedDatasetWriter
191 return cls
192
193
194 def get_writer_for_path(path, driver=None):
195 """Return the writer class appropriate for the existing dataset."""
196 if not driver:
197 driver = get_dataset_driver(path)
198 return get_writer_for_driver(driver)
199
[end of rasterio/io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/io.py b/rasterio/io.py
--- a/rasterio/io.py
+++ b/rasterio/io.py
@@ -85,7 +85,7 @@
'width': 791}
"""
- def __init__(self, file_or_bytes=None, filename=None, ext=''):
+ def __init__(self, file_or_bytes=None, dirname=None, filename=None, ext=''):
"""Create a new file in memory
Parameters
@@ -102,7 +102,7 @@
MemoryFile
"""
super(MemoryFile, self).__init__(
- file_or_bytes=file_or_bytes, filename=filename, ext=ext)
+ file_or_bytes=file_or_bytes, dirname=dirname, filename=filename, ext=ext)
@ensure_env
def open(self, driver=None, width=None, height=None, count=None, crs=None,
@@ -125,7 +125,7 @@
if self.closed:
raise IOError("I/O operation on closed file.")
- if self.exists():
+ if len(self) > 0:
log.debug("VSI path: {}".format(mempath.path))
return DatasetReader(mempath, driver=driver, sharing=sharing, **kwargs)
else:
| {"golden_diff": "diff --git a/rasterio/io.py b/rasterio/io.py\n--- a/rasterio/io.py\n+++ b/rasterio/io.py\n@@ -85,7 +85,7 @@\n 'width': 791}\n \n \"\"\"\n- def __init__(self, file_or_bytes=None, filename=None, ext=''):\n+ def __init__(self, file_or_bytes=None, dirname=None, filename=None, ext=''):\n \"\"\"Create a new file in memory\n \n Parameters\n@@ -102,7 +102,7 @@\n MemoryFile\n \"\"\"\n super(MemoryFile, self).__init__(\n- file_or_bytes=file_or_bytes, filename=filename, ext=ext)\n+ file_or_bytes=file_or_bytes, dirname=dirname, filename=filename, ext=ext)\n \n @ensure_env\n def open(self, driver=None, width=None, height=None, count=None, crs=None,\n@@ -125,7 +125,7 @@\n \n if self.closed:\n raise IOError(\"I/O operation on closed file.\")\n- if self.exists():\n+ if len(self) > 0:\n log.debug(\"VSI path: {}\".format(mempath.path))\n return DatasetReader(mempath, driver=driver, sharing=sharing, **kwargs)\n else:\n", "issue": "MemoryFile and seek\n# Expected behavior and actual behavior.\r\n\r\nI am trying to download a file from AWS to a MemoryFile using boto3 (Sentinel 2 JP2). I would like to use `client.download_fileobj()` because I have finer control over the download : \r\n\r\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html\r\n\r\n## Steps to reproduce the problem.\r\n\r\n```\r\nimport rasterio.io\r\nimport boto3\r\n\r\nwith rasterio.io.MemoryFile() as memfile:\r\n boto3.client('s3').download_fileobj(\r\n Fileobj=memfile,\r\n Bucket='sentinel-s2-l1c',\r\n Key='tiles/15/R/TN/2019/3/28/0/B02.jp2',\r\n ExtraArgs={'RequestPayer': 'requester'},\r\n Config=None)\r\n```\r\nRaises:\r\n\r\n```\r\nValueError: seek position past end of file: 8388608\r\n```\r\n\r\nThis works fine with io.BytesIO or the GCP storage client `blob.download_to_file()`.\r\nThere is a difference of behavior regarding the seek on positions w.r.t. BytesIO (the problem is probably linked to the multithreading in the AWS case?).\r\n\r\n## Operating system\r\nUbuntu 18\r\n\r\n## Rasterio version and provenance\r\nrasterio-1.1.4-cp38-cp38-manylinux1_x86_64.whl\r\n\n", "before_files": [{"content": "\"\"\"Classes capable of reading and writing datasets\n\nInstances of these classes are called dataset objects.\n\"\"\"\n\nimport logging\n\nfrom rasterio._base import (\n get_dataset_driver, driver_can_create, driver_can_create_copy)\nfrom rasterio._io import (\n DatasetReaderBase, DatasetWriterBase, BufferedDatasetWriterBase,\n MemoryFileBase)\nfrom rasterio.windows import WindowMethodsMixin\nfrom rasterio.env import ensure_env, env_ctx_if_needed\nfrom rasterio.transform import TransformMethodsMixin\nfrom rasterio.path import UnparsedPath\n\n\nlog = logging.getLogger(__name__)\n\n\nclass DatasetReader(DatasetReaderBase, WindowMethodsMixin,\n TransformMethodsMixin):\n \"\"\"An unbuffered data and metadata reader\"\"\"\n\n def __repr__(self):\n return \"<{} DatasetReader name='{}' mode='{}'>\".format(\n self.closed and 'closed' or 'open', self.name, self.mode)\n\n\nclass DatasetWriter(DatasetWriterBase, WindowMethodsMixin,\n TransformMethodsMixin):\n \"\"\"An unbuffered data and metadata writer. 
Its methods write data\n directly to disk.\n \"\"\"\n\n def __repr__(self):\n return \"<{} DatasetWriter name='{}' mode='{}'>\".format(\n self.closed and 'closed' or 'open', self.name, self.mode)\n\n\nclass BufferedDatasetWriter(BufferedDatasetWriterBase, WindowMethodsMixin,\n TransformMethodsMixin):\n \"\"\"Maintains data and metadata in a buffer, writing to disk or\n network only when `close()` is called.\n\n This allows incremental updates to datasets using formats that don't\n otherwise support updates, such as JPEG.\n \"\"\"\n\n def __repr__(self):\n return \"<{} BufferedDatasetWriter name='{}' mode='{}'>\".format(\n self.closed and 'closed' or 'open', self.name, self.mode)\n\n\nclass MemoryFile(MemoryFileBase):\n \"\"\"A BytesIO-like object, backed by an in-memory file.\n\n This allows formatted files to be read and written without I/O.\n\n A MemoryFile created with initial bytes becomes immutable. A\n MemoryFile created without initial bytes may be written to using\n either file-like or dataset interfaces.\n\n Examples\n --------\n\n A GeoTIFF can be loaded in memory and accessed using the GeoTIFF\n format driver\n\n >>> with open('tests/data/RGB.byte.tif', 'rb') as f, MemoryFile(f) as memfile:\n ... with memfile.open() as src:\n ... pprint.pprint(src.profile)\n ...\n {'count': 3,\n 'crs': CRS({'init': 'epsg:32618'}),\n 'driver': 'GTiff',\n 'dtype': 'uint8',\n 'height': 718,\n 'interleave': 'pixel',\n 'nodata': 0.0,\n 'tiled': False,\n 'transform': Affine(300.0379266750948, 0.0, 101985.0,\n 0.0, -300.041782729805, 2826915.0),\n 'width': 791}\n\n \"\"\"\n def __init__(self, file_or_bytes=None, filename=None, ext=''):\n \"\"\"Create a new file in memory\n\n Parameters\n ----------\n file_or_bytes : file-like object or bytes, optional\n File or bytes holding initial data.\n filename : str, optional\n An optional filename. A unique one will otherwise be generated.\n ext : str, optional\n An optional extension.\n\n Returns\n -------\n MemoryFile\n \"\"\"\n super(MemoryFile, self).__init__(\n file_or_bytes=file_or_bytes, filename=filename, ext=ext)\n\n @ensure_env\n def open(self, driver=None, width=None, height=None, count=None, crs=None,\n transform=None, dtype=None, nodata=None, sharing=False, **kwargs):\n \"\"\"Open the file and return a Rasterio dataset object.\n\n If data has already been written, the file is opened in 'r'\n mode. 
Otherwise, the file is opened in 'w' mode.\n\n Parameters\n ----------\n Note well that there is no `path` parameter: a `MemoryFile`\n contains a single dataset and there is no need to specify a\n path.\n\n Other parameters are optional and have the same semantics as the\n parameters of `rasterio.open()`.\n \"\"\"\n mempath = UnparsedPath(self.name)\n\n if self.closed:\n raise IOError(\"I/O operation on closed file.\")\n if self.exists():\n log.debug(\"VSI path: {}\".format(mempath.path))\n return DatasetReader(mempath, driver=driver, sharing=sharing, **kwargs)\n else:\n writer = get_writer_for_driver(driver)\n return writer(mempath, 'w+', driver=driver, width=width,\n height=height, count=count, crs=crs,\n transform=transform, dtype=dtype,\n nodata=nodata, sharing=sharing, **kwargs)\n\n def __enter__(self):\n self._env = env_ctx_if_needed()\n self._env.__enter__()\n return self\n\n def __exit__(self, *args, **kwargs):\n self._env.__exit__()\n self.close()\n\n\nclass ZipMemoryFile(MemoryFile):\n \"\"\"A read-only BytesIO-like object backed by an in-memory zip file.\n\n This allows a zip file containing formatted files to be read\n without I/O.\n \"\"\"\n\n def __init__(self, file_or_bytes=None):\n super(ZipMemoryFile, self).__init__(file_or_bytes, ext='zip')\n\n @ensure_env\n def open(self, path, driver=None, sharing=False, **kwargs):\n \"\"\"Open a dataset within the zipped stream.\n\n Parameters\n ----------\n path : str\n Path to a dataset in the zip file, relative to the root of the\n archive.\n\n Other parameters are optional and have the same semantics as the\n parameters of `rasterio.open()`.\n\n Returns\n -------\n A Rasterio dataset object\n \"\"\"\n zippath = UnparsedPath('/vsizip{0}/{1}'.format(self.name, path.lstrip('/')))\n\n if self.closed:\n raise IOError(\"I/O operation on closed file.\")\n return DatasetReader(zippath, driver=driver, sharing=sharing, **kwargs)\n\n\ndef get_writer_for_driver(driver):\n \"\"\"Return the writer class appropriate for the specified driver.\"\"\"\n if not driver:\n raise ValueError(\"'driver' is required to write dataset.\")\n cls = None\n if driver_can_create(driver):\n cls = DatasetWriter\n elif driver_can_create_copy(driver): # pragma: no branch\n cls = BufferedDatasetWriter\n return cls\n\n\ndef get_writer_for_path(path, driver=None):\n \"\"\"Return the writer class appropriate for the existing dataset.\"\"\"\n if not driver:\n driver = get_dataset_driver(path)\n return get_writer_for_driver(driver)\n", "path": "rasterio/io.py"}]} | 2,873 | 284 |
gh_patches_debug_14701 | rasdani/github-patches | git_diff | hi-primus__optimus-1104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Profiling bug when creating a dataframe from a dictionary using dask
**Describe the bug**
Calling `df.profile("*")` on a dataframe created with `op.create.dataframe` raises `KeyError: 'id'`.
**To Reproduce**
```
df = op.create.dataframe({"id":[5,6,10,11,79,100]})
df.profile("*")
```
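The failing lookup can be reproduced in plain dask, without Optimus. A sketch assuming only `pandas` and `dask.dataframe`: `reset_index()` on an unnamed series labels the value column `0`, so selecting it with the original key raises `KeyError`.
```
import pandas as pd
import dask.dataframe as dd

s = dd.from_pandas(pd.Series([5, 6, 10, 11, 79, 100]), npartitions=2)
frame = s.reset_index().set_index('index')
print(frame.columns.tolist())  # prints [0]: there is no 'id' column to select
```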
</issue>
<code>
[start of optimus/engines/base/dask/dataframe.py]
1 from abc import abstractmethod
2
3 import dask
4 import humanize
5 from dask.distributed import Variable
6 from dask.utils import parse_bytes
7
8 from optimus.engines.base.basedataframe import BaseDataFrame
9 from optimus.engines.pandas.dataframe import PandasDataFrame
10 from optimus.helpers.functions import random_int
11 from optimus.helpers.raiseit import RaiseIt
12 from optimus.infer import is_one_element
13
14
15 class DaskBaseDataFrame(BaseDataFrame):
16
17 def __init__(self, root, data):
18 super().__init__(root, data)
19
20 def _assign(self, kw_columns):
21
22 dfd = self.root.data
23
24 if dfd.known_divisions:
25 for key in kw_columns:
26 kw_column = kw_columns[key]
27 if not is_one_element(kw_column) and not callable(kw_column) and not kw_column.known_divisions:
28 kw_columns[key] = kw_column.reset_index().set_index('index')[key]
29 return dfd.assign(**kw_columns)
30
31 @staticmethod
32 @abstractmethod
33 def _base_to_dfd(df, n_partitions):
34 """
35 Convert a dataframe from the dataframe base (pandas, cudf) in a distributed engine dataframe (dask, dask_cudf)
36 :param pdf:
37 :param n_partitions:
38 :return:
39 """
40 pass
41
42 def execute(self):
43 self.data = self.data.persist()
44 return self
45
46 def compute(self):
47 df = self.data
48 return df.compute()
49
50 def visualize(self):
51 return display(self.data.visualize())
52
53 def export(self):
54 """
55 Helper function to export all the dataframe in text format. Aimed to be used in test functions
56 :return:
57 """
58 df = self.root
59 df_data = df.to_json()
60 df_schema = df.cols.dtypes()
61
62 return f"{df_schema}, {df_data}"
63
64 def _reset_buffer(self):
65 if self.buffer:
66 Variable(self.buffer).delete()
67 self.buffer = None
68
69 def get_buffer(self):
70 return Variable(self.buffer).get() if self.buffer else None
71
72 def _buffer_window(self, input_cols, lower_bound, upper_bound):
73 def func(value):
74 return value[lower_bound:upper_bound]
75
76 return PandasDataFrame(self.data[input_cols].partitions[0].map_partitions(func).compute())
77
78 def sample(self, n=10, random=False):
79 """
80 Return a n number of sample from a dataFrame
81 :param n: Number of samples
82 :param random: if true get a semi random sample
83 :return:
84 """
85 df = self.root
86 if random is True:
87 seed = random_int()
88 elif random is False:
89 seed = 0
90 else:
91 RaiseIt.value_error(random, ["True", "False"])
92
93 rows_count = df.rows.count()
94 if n < rows_count:
95 # n/rows_count can return a number that represent less the total number we expect. multiply by 1.1
96 fraction = (n / rows_count) * 1.1
97 else:
98 fraction = 1.0
99 return self.root.new(df.data.sample(frac=fraction, random_state=seed))
100
101 def stratified_sample(self, col_name, seed: int = 1):
102 """
103 Stratified Sampling
104 :param col_name:
105 :param seed:
106 :return:
107 """
108 df = self.data
109 n = min(5, df[col_name].value_counts().min())
110 df = df.groupby(col_name).apply(lambda x: x.sample(2))
111 # df_.index = df_.index.droplevel(0)
112 return self.root.new(df)
113
114 @staticmethod
115 def pivot(index, column, values):
116 """
117 Return reshaped DataFrame organized by given index / column values.
118 :param index: Column to use to make new frame's index.
119 :param column: Column to use to make new frame's columns.
120 :param values: Column(s) to use for populating new frame's values.
121 :return:
122 """
123 raise NotImplementedError
124
125 @staticmethod
126 def melt(id_vars, value_vars, var_name="variable", value_name="value", data_type="str"):
127 """
128 Convert DataFrame from wide to long format.
129 :param id_vars: column with unique values
130 :param value_vars: Column names that are going to be converted to columns values
131 :param var_name: Column name for vars
132 :param value_name: Column name for values
133 :param data_type: All columns must have the same type. It will transform all columns to this data type.
134 :return:
135 """
136
137 raise NotImplementedError
138
139 def size(self, deep=False, format=None):
140 """
141 Get the size of a dask in bytes
142 :return:
143 """
144 df = self.data
145 result = df.memory_usage(index=True, deep=deep).sum().compute()
146 if format == "human":
147 result = humanize.naturalsize(result)
148
149 return result
150
151 def run(self):
152 """
153 This method is a very useful function to break lineage of transformations. By default Spark uses the lazy
154 evaluation approach in processing data: transformation functions are not computed into an action is called.
155 Sometimes when transformations are numerous, the computations are very extensive because the high number of
156 operations that spark needs to run in order to get the results.
157
158 Other important thing is that Apache Spark save task but not result of dataFrame, so tasks are
159 accumulated and the same situation happens.
160
161 :return:
162 """
163 df = self.data
164 df.cache().count()
165 return df
166
167 @staticmethod
168 def query(sql_expression):
169 raise NotImplementedError
170
171 def partitions(self):
172 return self.data.npartitions
173
174 @staticmethod
175 def partitioner():
176 print("Dask not support custom partitioner")
177 raise NotImplementedError
178
179 def repartition(self, n=None, *args, **kwargs):
180 dfd = self.data
181 df = self
182 if n == "auto":
183 # Follow a heuristic for partitioning a mentioned
184 # https://docs.dask.org/en/latest/best-practices.html#avoid-very-large-partitions
185 client = dask.distributed.get_client()
186 worker_memory = parse_bytes(client.cluster.worker_spec[0]["options"]["memory_limit"])
187 nthreads = client.cluster.worker_spec[0]["options"]["nthreads"]
188
189 part_recommended_size = worker_memory / nthreads / 10
190 n = int(df.size() / part_recommended_size)
191
192 # Partition can not be lower than 1
193 n = n if n < 0 else 1
194 # TODO .repartition(partition_size="100MB"). https://stackoverflow.com/questions/44657631/strategy-for-partitioning-dask-dataframes-efficiently
195 dfd = dfd.repartition(npartitions=n, *args, **kwargs)
196
197 return self.new(dfd, meta=self.meta)
198
199 @staticmethod
200 def debug():
201 """
202
203 :return:
204 """
205 raise NotImplementedError
206
207 @staticmethod
208 def create_id(column="id"):
209 """
210 Create a unique id for every row.
211 :param column: Columns to be processed
212 :return:
213 """
214
215 raise NotImplementedError
216
217 def to_dict(self, orient="records", index=True):
218 """
219 Create a dict
220 :param orient:
221 :param index: Return the series index
222 :return:
223 """
224
225 series = self.data
226 if index is True:
227 return series.compute().to_dict(orient)
228 else:
229 return series.compute().to_list()
230
231 def to_pandas(self):
232 return self.data.compute()
233
234 @property
235 def constants(self):
236 from optimus.engines.base.dask.constants import constants
237 return constants(self)
238
[end of optimus/engines/base/dask/dataframe.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/optimus/engines/base/dask/dataframe.py b/optimus/engines/base/dask/dataframe.py
--- a/optimus/engines/base/dask/dataframe.py
+++ b/optimus/engines/base/dask/dataframe.py
@@ -25,7 +25,13 @@
for key in kw_columns:
kw_column = kw_columns[key]
if not is_one_element(kw_column) and not callable(kw_column) and not kw_column.known_divisions:
- kw_columns[key] = kw_column.reset_index().set_index('index')[key]
+ _dfd = kw_column.reset_index().set_index('index')
+ if key in _dfd:
+ # the incoming series has the same column key
+ kw_columns[key] = _dfd[key]
+ else:
+ # the incoming series has no column key
+ kw_columns[key] = _dfd[0]
return dfd.assign(**kw_columns)
@staticmethod
| {"golden_diff": "diff --git a/optimus/engines/base/dask/dataframe.py b/optimus/engines/base/dask/dataframe.py\n--- a/optimus/engines/base/dask/dataframe.py\n+++ b/optimus/engines/base/dask/dataframe.py\n@@ -25,7 +25,13 @@\n for key in kw_columns:\n kw_column = kw_columns[key]\n if not is_one_element(kw_column) and not callable(kw_column) and not kw_column.known_divisions:\n- kw_columns[key] = kw_column.reset_index().set_index('index')[key]\n+ _dfd = kw_column.reset_index().set_index('index')\n+ if key in _dfd:\n+ # the incoming series has the same column key\n+ kw_columns[key] = _dfd[key]\n+ else:\n+ # the incoming series has no column key\n+ kw_columns[key] = _dfd[0]\n return dfd.assign(**kw_columns)\n \n @staticmethod\n", "issue": "Profiling bug when creating a dataframe from a dictionary using dask\n**Describe the bug**\r\ncalling `df.profile(\"*\")` for a dataframe created using `op.create.dataframe` causes an error `KeyError: 'id'`.\r\n\r\n**To Reproduce**\r\n```\r\ndf = op.create.dataframe({\"id\":[5,6,10,11,79,100]})\r\ndf.profile(\"*\")\r\n```\r\n\n", "before_files": [{"content": "from abc import abstractmethod\n\nimport dask\nimport humanize\nfrom dask.distributed import Variable\nfrom dask.utils import parse_bytes\n\nfrom optimus.engines.base.basedataframe import BaseDataFrame\nfrom optimus.engines.pandas.dataframe import PandasDataFrame\nfrom optimus.helpers.functions import random_int\nfrom optimus.helpers.raiseit import RaiseIt\nfrom optimus.infer import is_one_element\n\n\nclass DaskBaseDataFrame(BaseDataFrame):\n\n def __init__(self, root, data):\n super().__init__(root, data)\n\n def _assign(self, kw_columns):\n\n dfd = self.root.data\n\n if dfd.known_divisions:\n for key in kw_columns:\n kw_column = kw_columns[key]\n if not is_one_element(kw_column) and not callable(kw_column) and not kw_column.known_divisions:\n kw_columns[key] = kw_column.reset_index().set_index('index')[key]\n return dfd.assign(**kw_columns)\n\n @staticmethod\n @abstractmethod\n def _base_to_dfd(df, n_partitions):\n \"\"\"\n Convert a dataframe from the dataframe base (pandas, cudf) in a distributed engine dataframe (dask, dask_cudf)\n :param pdf:\n :param n_partitions:\n :return:\n \"\"\"\n pass\n\n def execute(self):\n self.data = self.data.persist()\n return self\n\n def compute(self):\n df = self.data\n return df.compute()\n\n def visualize(self):\n return display(self.data.visualize())\n\n def export(self):\n \"\"\"\n Helper function to export all the dataframe in text format. 
Aimed to be used in test functions\n :return:\n \"\"\"\n df = self.root\n df_data = df.to_json()\n df_schema = df.cols.dtypes()\n\n return f\"{df_schema}, {df_data}\"\n\n def _reset_buffer(self):\n if self.buffer:\n Variable(self.buffer).delete()\n self.buffer = None\n\n def get_buffer(self):\n return Variable(self.buffer).get() if self.buffer else None\n\n def _buffer_window(self, input_cols, lower_bound, upper_bound):\n def func(value):\n return value[lower_bound:upper_bound]\n\n return PandasDataFrame(self.data[input_cols].partitions[0].map_partitions(func).compute())\n\n def sample(self, n=10, random=False):\n \"\"\"\n Return a n number of sample from a dataFrame\n :param n: Number of samples\n :param random: if true get a semi random sample\n :return:\n \"\"\"\n df = self.root\n if random is True:\n seed = random_int()\n elif random is False:\n seed = 0\n else:\n RaiseIt.value_error(random, [\"True\", \"False\"])\n\n rows_count = df.rows.count()\n if n < rows_count:\n # n/rows_count can return a number that represent less the total number we expect. multiply by 1.1\n fraction = (n / rows_count) * 1.1\n else:\n fraction = 1.0\n return self.root.new(df.data.sample(frac=fraction, random_state=seed))\n\n def stratified_sample(self, col_name, seed: int = 1):\n \"\"\"\n Stratified Sampling\n :param col_name:\n :param seed:\n :return:\n \"\"\"\n df = self.data\n n = min(5, df[col_name].value_counts().min())\n df = df.groupby(col_name).apply(lambda x: x.sample(2))\n # df_.index = df_.index.droplevel(0)\n return self.root.new(df)\n\n @staticmethod\n def pivot(index, column, values):\n \"\"\"\n Return reshaped DataFrame organized by given index / column values.\n :param index: Column to use to make new frame's index.\n :param column: Column to use to make new frame's columns.\n :param values: Column(s) to use for populating new frame's values.\n :return:\n \"\"\"\n raise NotImplementedError\n\n @staticmethod\n def melt(id_vars, value_vars, var_name=\"variable\", value_name=\"value\", data_type=\"str\"):\n \"\"\"\n Convert DataFrame from wide to long format.\n :param id_vars: column with unique values\n :param value_vars: Column names that are going to be converted to columns values\n :param var_name: Column name for vars\n :param value_name: Column name for values\n :param data_type: All columns must have the same type. It will transform all columns to this data type.\n :return:\n \"\"\"\n\n raise NotImplementedError\n\n def size(self, deep=False, format=None):\n \"\"\"\n Get the size of a dask in bytes\n :return:\n \"\"\"\n df = self.data\n result = df.memory_usage(index=True, deep=deep).sum().compute()\n if format == \"human\":\n result = humanize.naturalsize(result)\n\n return result\n\n def run(self):\n \"\"\"\n This method is a very useful function to break lineage of transformations. 
By default Spark uses the lazy\n evaluation approach in processing data: transformation functions are not computed into an action is called.\n Sometimes when transformations are numerous, the computations are very extensive because the high number of\n operations that spark needs to run in order to get the results.\n\n Other important thing is that Apache Spark save task but not result of dataFrame, so tasks are\n accumulated and the same situation happens.\n\n :return:\n \"\"\"\n df = self.data\n df.cache().count()\n return df\n\n @staticmethod\n def query(sql_expression):\n raise NotImplementedError\n\n def partitions(self):\n return self.data.npartitions\n\n @staticmethod\n def partitioner():\n print(\"Dask not support custom partitioner\")\n raise NotImplementedError\n\n def repartition(self, n=None, *args, **kwargs):\n dfd = self.data\n df = self\n if n == \"auto\":\n # Follow a heuristic for partitioning a mentioned\n # https://docs.dask.org/en/latest/best-practices.html#avoid-very-large-partitions\n client = dask.distributed.get_client()\n worker_memory = parse_bytes(client.cluster.worker_spec[0][\"options\"][\"memory_limit\"])\n nthreads = client.cluster.worker_spec[0][\"options\"][\"nthreads\"]\n\n part_recommended_size = worker_memory / nthreads / 10\n n = int(df.size() / part_recommended_size)\n\n # Partition can not be lower than 1\n n = n if n < 0 else 1\n # TODO .repartition(partition_size=\"100MB\"). https://stackoverflow.com/questions/44657631/strategy-for-partitioning-dask-dataframes-efficiently\n dfd = dfd.repartition(npartitions=n, *args, **kwargs)\n\n return self.new(dfd, meta=self.meta)\n\n @staticmethod\n def debug():\n \"\"\"\n\n :return:\n \"\"\"\n raise NotImplementedError\n\n @staticmethod\n def create_id(column=\"id\"):\n \"\"\"\n Create a unique id for every row.\n :param column: Columns to be processed\n :return:\n \"\"\"\n\n raise NotImplementedError\n\n def to_dict(self, orient=\"records\", index=True):\n \"\"\"\n Create a dict\n :param orient:\n :param index: Return the series index\n :return:\n \"\"\"\n\n series = self.data\n if index is True:\n return series.compute().to_dict(orient)\n else:\n return series.compute().to_list()\n\n def to_pandas(self):\n return self.data.compute()\n\n @property\n def constants(self):\n from optimus.engines.base.dask.constants import constants\n return constants(self)\n", "path": "optimus/engines/base/dask/dataframe.py"}]} | 2,947 | 220 |
gh_patches_debug_18995 | rasdani/github-patches | git_diff | mars-project__mars-3323 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] mars.tensor.int is not compatible with numpy since v1.24
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Since numpy v1.24, numpy.int, numpy.float, numpy.object and numpy.bool can no longer be imported, yet there are still places where mt.int is used. The deprecation warnings removed by PR #2788 have finally turned into hard errors.
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: v3.8.13
2. The version of Mars you use: master
3. Versions of crucial packages, numpy v1.24.1
4. Full stack of the error.
AttributeError: module 'mars.tensor' has no attribute 'int'
5. Minimized code to reproduce the error.
> import mars.tensor as mt
> mt.int
**Expected behavior**
AttributeError: module 'mars.tensor' has no attribute 'int'
**Additional context**
- [Using the aliases of builtin types like np.int is deprecated](https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations)
- [NumPy 1.24 Release Notes](https://numpy.org/devdocs/release/1.24.0-notes.html)
- [DEP: Next step in scalar type alias deprecations/futurewarnings by seberg · Pull Request #22607 · numpy/numpy](https://github.com/numpy/numpy/pull/22607)
- [NumPy 1.22 dropped Python 3.7](https://numpy.org/devdocs/release/1.22.0-notes.html)
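A minimal illustration of the alias change, assuming numpy 1.24.x (the commented-out names are what now fails):
```
import numpy as np

x = np.array([1, 2, 3], dtype=np.int_)    # trailing-underscore scalar types remain
print(x.dtype, x.astype(np.bool_).dtype)  # e.g. int64 bool
# np.int, np.float, np.bool, np.object    # AttributeError since numpy 1.24
```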
</issue>
<code>
[start of mars/tensor/__init__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2021 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17
18 from .datasource import (
19 tensor,
20 array,
21 asarray,
22 ascontiguousarray,
23 asfortranarray,
24 scalar,
25 empty,
26 empty_like,
27 ones,
28 ones_like,
29 zeros,
30 zeros_like,
31 full,
32 full_like,
33 arange,
34 diag,
35 diagflat,
36 eye,
37 identity,
38 linspace,
39 meshgrid,
40 indices,
41 tril,
42 triu,
43 fromtiledb,
44 fromtiledb as from_tiledb,
45 from_dataframe,
46 fromhdf5,
47 fromhdf5 as from_hdf5,
48 fromzarr,
49 fromzarr as from_zarr,
50 fromvineyard,
51 fromvineyard as from_vineyard,
52 )
53 from .datastore import (
54 totiledb,
55 totiledb as to_tiledb,
56 tohdf5,
57 tohdf5 as to_hdf5,
58 tozarr,
59 tozarr as to_zarr,
60 tovineyard,
61 tovineyard as to_vineyard,
62 ) # pylint: disable=reimported
63 from .base import (
64 result_type,
65 ndim,
66 copyto,
67 transpose,
68 where,
69 broadcast_to,
70 broadcast_arrays,
71 expand_dims,
72 rollaxis,
73 swapaxes,
74 moveaxis,
75 ravel,
76 atleast_1d,
77 atleast_2d,
78 atleast_3d,
79 argwhere,
80 array_split,
81 split,
82 hsplit,
83 vsplit,
84 dsplit,
85 roll,
86 squeeze,
87 diff,
88 ediff1d,
89 flip,
90 flipud,
91 fliplr,
92 repeat,
93 tile,
94 isin,
95 searchsorted,
96 unique,
97 sort,
98 argsort,
99 partition,
100 argpartition,
101 topk,
102 argtopk,
103 copy,
104 trapz,
105 shape,
106 insert,
107 delete,
108 in1d,
109 setdiff1d,
110 )
111 from .arithmetic import (
112 add,
113 subtract,
114 multiply,
115 divide,
116 truediv as true_divide,
117 floordiv as floor_divide,
118 mod,
119 power,
120 float_power,
121 fmod,
122 sqrt,
123 around,
124 round_,
125 round_ as round,
126 logaddexp,
127 logaddexp2,
128 negative,
129 positive,
130 absolute,
131 fabs,
132 absolute as abs,
133 rint,
134 sign,
135 degrees,
136 radians,
137 conj,
138 conjugate,
139 exp,
140 exp2,
141 log,
142 log2,
143 log10,
144 expm1,
145 log1p,
146 square,
147 cbrt,
148 reciprocal,
149 equal,
150 not_equal,
151 less,
152 less_equal,
153 greater,
154 greater_equal,
155 sin,
156 cos,
157 tan,
158 arcsin,
159 arccos,
160 arctan,
161 arctan2,
162 hypot,
163 sinh,
164 cosh,
165 tanh,
166 arcsinh,
167 arccosh,
168 arctanh,
169 deg2rad,
170 rad2deg,
171 bitand as bitwise_and,
172 bitor as bitwise_or,
173 bitxor as bitwise_xor,
174 invert,
175 invert as bitwise_not,
176 lshift as left_shift,
177 rshift as right_shift,
178 logical_and,
179 logical_or,
180 logical_xor,
181 logical_not,
182 maximum,
183 minimum,
184 floor,
185 ceil,
186 trunc,
187 remainder,
188 fmax,
189 fmin,
190 isfinite,
191 isinf,
192 isnan,
193 signbit,
194 copysign,
195 nextafter,
196 spacing,
197 clip,
198 isclose,
199 ldexp,
200 frexp,
201 modf,
202 angle,
203 isreal,
204 iscomplex,
205 real,
206 imag,
207 fix,
208 i0,
209 sinc,
210 nan_to_num,
211 tree_add,
212 tree_multiply,
213 )
214 from .statistics import (
215 average,
216 bincount,
217 cov,
218 corrcoef,
219 digitize,
220 ptp,
221 histogram_bin_edges,
222 histogram,
223 median,
224 quantile,
225 percentile,
226 )
227 from .linalg.tensordot import tensordot
228 from .linalg.dot import dot
229 from .linalg.inner import inner, innerproduct
230 from .linalg.vdot import vdot
231 from .linalg.matmul import matmul
232 from .reduction import (
233 sum,
234 nansum,
235 prod,
236 prod as product,
237 nanprod,
238 max,
239 max as amax,
240 nanmax,
241 min,
242 min as amin,
243 nanmin,
244 all,
245 any,
246 mean,
247 nanmean,
248 argmax,
249 nanargmax,
250 argmin,
251 nanargmin,
252 cumsum,
253 cumprod,
254 var,
255 std,
256 nanvar,
257 nanstd,
258 nancumsum,
259 nancumprod,
260 count_nonzero,
261 allclose,
262 array_equal,
263 )
264 from .reshape import reshape
265 from .merge import (
266 concatenate,
267 stack,
268 hstack,
269 vstack,
270 dstack,
271 column_stack,
272 union1d,
273 block,
274 append,
275 )
276 from .indexing import (
277 take,
278 compress,
279 extract,
280 choose,
281 unravel_index,
282 nonzero,
283 flatnonzero,
284 fill_diagonal,
285 )
286 from .rechunk import rechunk
287 from .einsum import einsum
288 from .images import imread
289
290 # noinspection PyUnresolvedReferences
291 from .lib.index_tricks import mgrid, ogrid, ndindex, r_, c_
292
293 from . import random
294 from . import fft
295 from . import linalg
296 from . import lib
297 from . import special
298 from . import stats
299
300 # types
301 from .core import Tensor
302
303 # noinspection PyUnresolvedReferences
304 from ..core import ExecutableTuple
305
306 # noinspection PyUnresolvedReferences
307 from numpy import (
308 newaxis,
309 AxisError,
310 inf,
311 Inf,
312 NINF,
313 nan,
314 NAN,
315 NaN,
316 pi,
317 e,
318 errstate,
319 geterr,
320 seterr,
321 )
322
323 # import numpy types
324 # noinspection PyUnresolvedReferences
325 from numpy import (
326 dtype,
327 number,
328 inexact,
329 floating,
330 complexfloating,
331 integer,
332 signedinteger,
333 unsignedinteger,
334 character,
335 generic,
336 flexible,
337 int_,
338 bool_,
339 float_,
340 cfloat,
341 bytes_,
342 unicode_,
343 void,
344 object_,
345 intc,
346 intp,
347 int8,
348 int16,
349 int32,
350 int64,
351 uint8,
352 uint16,
353 uint32,
354 uint64,
355 uint,
356 float16,
357 float32,
358 float64,
359 double,
360 complex64,
361 complex128,
362 datetime64,
363 timedelta64,
364 )
365
366 # noinspection PyUnresolvedReferences
367 from numpy import finfo
368
369 # register fuse op and fetch op
370 from .fuse import TensorFuseChunk, TensorCpFuseChunk, TensorNeFuseChunk
371 from .fetch import TensorFetch, TensorFetchShuffle
372 from . import ufunc
373
374 try:
375 import warnings
376
377 # suppress numpy warnings on types
378 with warnings.catch_warnings():
379 warnings.simplefilter("ignore", DeprecationWarning)
380 warnings.simplefilter("ignore", FutureWarning)
381 # noinspection PyUnresolvedReferences
382 from numpy import object, int, bool, float
383 except ImportError: # pragma: no cover
384 pass
385 finally:
386 del warnings
387
388 del (
389 TensorFuseChunk,
390 TensorCpFuseChunk,
391 TensorNeFuseChunk,
392 TensorFetch,
393 TensorFetchShuffle,
394 ufunc,
395 )
396
[end of mars/tensor/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/tensor/__init__.py b/mars/tensor/__init__.py
--- a/mars/tensor/__init__.py
+++ b/mars/tensor/__init__.py
@@ -334,14 +334,14 @@
character,
generic,
flexible,
- int_,
- bool_,
- float_,
+ int_ as int,
+ bool_ as bool,
+ float_ as float,
cfloat,
bytes_,
unicode_,
void,
- object_,
+ object_ as object,
intc,
intp,
int8,
@@ -371,20 +371,6 @@
from .fetch import TensorFetch, TensorFetchShuffle
from . import ufunc
-try:
- import warnings
-
- # suppress numpy warnings on types
- with warnings.catch_warnings():
- warnings.simplefilter("ignore", DeprecationWarning)
- warnings.simplefilter("ignore", FutureWarning)
- # noinspection PyUnresolvedReferences
- from numpy import object, int, bool, float
-except ImportError: # pragma: no cover
- pass
-finally:
- del warnings
-
del (
TensorFuseChunk,
TensorCpFuseChunk,
| {"golden_diff": "diff --git a/mars/tensor/__init__.py b/mars/tensor/__init__.py\n--- a/mars/tensor/__init__.py\n+++ b/mars/tensor/__init__.py\n@@ -334,14 +334,14 @@\n character,\n generic,\n flexible,\n- int_,\n- bool_,\n- float_,\n+ int_ as int,\n+ bool_ as bool,\n+ float_ as float,\n cfloat,\n bytes_,\n unicode_,\n void,\n- object_,\n+ object_ as object,\n intc,\n intp,\n int8,\n@@ -371,20 +371,6 @@\n from .fetch import TensorFetch, TensorFetchShuffle\n from . import ufunc\n \n-try:\n- import warnings\n-\n- # suppress numpy warnings on types\n- with warnings.catch_warnings():\n- warnings.simplefilter(\"ignore\", DeprecationWarning)\n- warnings.simplefilter(\"ignore\", FutureWarning)\n- # noinspection PyUnresolvedReferences\n- from numpy import object, int, bool, float\n-except ImportError: # pragma: no cover\n- pass\n-finally:\n- del warnings\n-\n del (\n TensorFuseChunk,\n TensorCpFuseChunk,\n", "issue": "[BUG] mars.tensor.int is not compatible with numpy since v1.24\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nSince numpy v1.24, numpy.int, numpy.float, numpy.object, numpy.bool is no longer able to be imported, while cases are that mt.int is used. Deprecate warnings removed by PR #2788 finally come into reality.\r\n\r\n**To Reproduce**\r\nTo help us reproducing this bug, please provide information below:\r\n1. Your Python version: v3.8.13\r\n2. The version of Mars you use: master\r\n3. Versions of crucial packages, numpy v1.24.1\r\n4. Full stack of the error.\r\nAttributeError: module 'mars.tensor' has no attribute 'int'\r\n5. Minimized code to reproduce the error.\r\n> import mars.tensor as mt\r\n> mt.int\r\n\r\n**Expected behavior**\r\nAttributeError: module 'mars.tensor' has no attribute 'int'\r\n\r\n**Additional context**\r\n- [Using the aliases of builtin types like np.int is deprecated](https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations)\r\n- [NumPy 1.24 Release Notes](https://numpy.org/devdocs/release/1.24.0-notes.html)\r\n- [DEP: Next step in scalar type alias deprecations/futurewarnings by seberg \u00b7 Pull Request #22607 \u00b7 numpy/numpy](https://github.com/numpy/numpy/pull/22607)\r\n- [NumPy 1.22 dropped Python 3.7](https://numpy.org/devdocs/release/1.22.0-notes.html)\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom .datasource import (\n tensor,\n array,\n asarray,\n ascontiguousarray,\n asfortranarray,\n scalar,\n empty,\n empty_like,\n ones,\n ones_like,\n zeros,\n zeros_like,\n full,\n full_like,\n arange,\n diag,\n diagflat,\n eye,\n identity,\n linspace,\n meshgrid,\n indices,\n tril,\n triu,\n fromtiledb,\n fromtiledb as from_tiledb,\n from_dataframe,\n fromhdf5,\n fromhdf5 as from_hdf5,\n fromzarr,\n fromzarr as from_zarr,\n fromvineyard,\n fromvineyard as 
from_vineyard,\n)\nfrom .datastore import (\n totiledb,\n totiledb as to_tiledb,\n tohdf5,\n tohdf5 as to_hdf5,\n tozarr,\n tozarr as to_zarr,\n tovineyard,\n tovineyard as to_vineyard,\n) # pylint: disable=reimported\nfrom .base import (\n result_type,\n ndim,\n copyto,\n transpose,\n where,\n broadcast_to,\n broadcast_arrays,\n expand_dims,\n rollaxis,\n swapaxes,\n moveaxis,\n ravel,\n atleast_1d,\n atleast_2d,\n atleast_3d,\n argwhere,\n array_split,\n split,\n hsplit,\n vsplit,\n dsplit,\n roll,\n squeeze,\n diff,\n ediff1d,\n flip,\n flipud,\n fliplr,\n repeat,\n tile,\n isin,\n searchsorted,\n unique,\n sort,\n argsort,\n partition,\n argpartition,\n topk,\n argtopk,\n copy,\n trapz,\n shape,\n insert,\n delete,\n in1d,\n setdiff1d,\n)\nfrom .arithmetic import (\n add,\n subtract,\n multiply,\n divide,\n truediv as true_divide,\n floordiv as floor_divide,\n mod,\n power,\n float_power,\n fmod,\n sqrt,\n around,\n round_,\n round_ as round,\n logaddexp,\n logaddexp2,\n negative,\n positive,\n absolute,\n fabs,\n absolute as abs,\n rint,\n sign,\n degrees,\n radians,\n conj,\n conjugate,\n exp,\n exp2,\n log,\n log2,\n log10,\n expm1,\n log1p,\n square,\n cbrt,\n reciprocal,\n equal,\n not_equal,\n less,\n less_equal,\n greater,\n greater_equal,\n sin,\n cos,\n tan,\n arcsin,\n arccos,\n arctan,\n arctan2,\n hypot,\n sinh,\n cosh,\n tanh,\n arcsinh,\n arccosh,\n arctanh,\n deg2rad,\n rad2deg,\n bitand as bitwise_and,\n bitor as bitwise_or,\n bitxor as bitwise_xor,\n invert,\n invert as bitwise_not,\n lshift as left_shift,\n rshift as right_shift,\n logical_and,\n logical_or,\n logical_xor,\n logical_not,\n maximum,\n minimum,\n floor,\n ceil,\n trunc,\n remainder,\n fmax,\n fmin,\n isfinite,\n isinf,\n isnan,\n signbit,\n copysign,\n nextafter,\n spacing,\n clip,\n isclose,\n ldexp,\n frexp,\n modf,\n angle,\n isreal,\n iscomplex,\n real,\n imag,\n fix,\n i0,\n sinc,\n nan_to_num,\n tree_add,\n tree_multiply,\n)\nfrom .statistics import (\n average,\n bincount,\n cov,\n corrcoef,\n digitize,\n ptp,\n histogram_bin_edges,\n histogram,\n median,\n quantile,\n percentile,\n)\nfrom .linalg.tensordot import tensordot\nfrom .linalg.dot import dot\nfrom .linalg.inner import inner, innerproduct\nfrom .linalg.vdot import vdot\nfrom .linalg.matmul import matmul\nfrom .reduction import (\n sum,\n nansum,\n prod,\n prod as product,\n nanprod,\n max,\n max as amax,\n nanmax,\n min,\n min as amin,\n nanmin,\n all,\n any,\n mean,\n nanmean,\n argmax,\n nanargmax,\n argmin,\n nanargmin,\n cumsum,\n cumprod,\n var,\n std,\n nanvar,\n nanstd,\n nancumsum,\n nancumprod,\n count_nonzero,\n allclose,\n array_equal,\n)\nfrom .reshape import reshape\nfrom .merge import (\n concatenate,\n stack,\n hstack,\n vstack,\n dstack,\n column_stack,\n union1d,\n block,\n append,\n)\nfrom .indexing import (\n take,\n compress,\n extract,\n choose,\n unravel_index,\n nonzero,\n flatnonzero,\n fill_diagonal,\n)\nfrom .rechunk import rechunk\nfrom .einsum import einsum\nfrom .images import imread\n\n# noinspection PyUnresolvedReferences\nfrom .lib.index_tricks import mgrid, ogrid, ndindex, r_, c_\n\nfrom . import random\nfrom . import fft\nfrom . import linalg\nfrom . import lib\nfrom . import special\nfrom . 
import stats\n\n# types\nfrom .core import Tensor\n\n# noinspection PyUnresolvedReferences\nfrom ..core import ExecutableTuple\n\n# noinspection PyUnresolvedReferences\nfrom numpy import (\n newaxis,\n AxisError,\n inf,\n Inf,\n NINF,\n nan,\n NAN,\n NaN,\n pi,\n e,\n errstate,\n geterr,\n seterr,\n)\n\n# import numpy types\n# noinspection PyUnresolvedReferences\nfrom numpy import (\n dtype,\n number,\n inexact,\n floating,\n complexfloating,\n integer,\n signedinteger,\n unsignedinteger,\n character,\n generic,\n flexible,\n int_,\n bool_,\n float_,\n cfloat,\n bytes_,\n unicode_,\n void,\n object_,\n intc,\n intp,\n int8,\n int16,\n int32,\n int64,\n uint8,\n uint16,\n uint32,\n uint64,\n uint,\n float16,\n float32,\n float64,\n double,\n complex64,\n complex128,\n datetime64,\n timedelta64,\n)\n\n# noinspection PyUnresolvedReferences\nfrom numpy import finfo\n\n# register fuse op and fetch op\nfrom .fuse import TensorFuseChunk, TensorCpFuseChunk, TensorNeFuseChunk\nfrom .fetch import TensorFetch, TensorFetchShuffle\nfrom . import ufunc\n\ntry:\n import warnings\n\n # suppress numpy warnings on types\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\", DeprecationWarning)\n warnings.simplefilter(\"ignore\", FutureWarning)\n # noinspection PyUnresolvedReferences\n from numpy import object, int, bool, float\nexcept ImportError: # pragma: no cover\n pass\nfinally:\n del warnings\n\ndel (\n TensorFuseChunk,\n TensorCpFuseChunk,\n TensorNeFuseChunk,\n TensorFetch,\n TensorFetchShuffle,\n ufunc,\n)\n", "path": "mars/tensor/__init__.py"}]} | 3,824 | 281 |
gh_patches_debug_54607 | rasdani/github-patches | git_diff | zulip__zulip-13067 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.
It looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines.
See `compile_requirements` in that file for details.
My guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`.
@hackerkid this might be a good project for you.
Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.
It looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines.
See `compile_requirements` in that file for details.
My guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`.
@hackerkid this might be a good project for you.
</issue>
<code>
[start of version.py]
1 import os
2
3 ZULIP_VERSION = "2.0.4+git"
4 # Add information on number of commits and commit hash to version, if available
5 zulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')
6 if os.path.exists(zulip_git_version_file):
7 with open(zulip_git_version_file) as f:
8 version = f.read().strip()
9 if version:
10 ZULIP_VERSION = version
11
12 LATEST_MAJOR_VERSION = "2.0"
13 LATEST_RELEASE_VERSION = "2.0.4"
14 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.org/2019/03/01/zulip-2-0-released/"
15
16 # Bump the minor PROVISION_VERSION to indicate that folks should provision
17 # only when going from an old version of the code to a newer version. Bump
18 # the major version to indicate that folks should provision in both
19 # directions.
20
21 # Typically,
22 # * adding a dependency only requires a minor version bump;
23 # * removing a dependency requires a major version bump;
24 # * upgrading a dependency requires a major version bump, unless the
25 # upgraded dependency is backwards compatible with all of our
26 # historical commits sharing the same major version, in which case a
27 # minor version bump suffices.
28
29 PROVISION_VERSION = '49.1'
30
[end of version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -26,4 +26,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '49.1'
+PROVISION_VERSION = '49.2'
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -26,4 +26,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = '49.1'\n+PROVISION_VERSION = '49.2'\n", "issue": "Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.\nIt looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines. \r\n See `compile_requirements` in that file for details. \r\n\r\nMy guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`. \r\n\r\n@hackerkid this might be a good project for you.\nClean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.\nIt looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines. \r\n See `compile_requirements` in that file for details. \r\n\r\nMy guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`. \r\n\r\n@hackerkid this might be a good project for you.\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"2.0.4+git\"\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n version = f.read().strip()\n if version:\n ZULIP_VERSION = version\n\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.4\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = '49.1'\n", "path": "version.py"}]} | 1,259 | 78 |
gh_patches_debug_9751 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-481 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
project list does not show text if there are no matching projects
It should show something like "No projects could be found". Note that the text should work for two cases: "there are no projects" and "there are no projects matching the filters".
</issue>
<code>
[start of apps/contrib/templatetags/contrib_tags.py]
1 from django import template
2 from django.template.loader import render_to_string
3
4 register = template.Library()
5
6
7 @register.assignment_tag
8 def include_template_string(template, **kwargs):
9 rendered_template = render_to_string(template, kwargs)
10 return str(rendered_template)
11
12
13 @register.assignment_tag
14 def combined_url_parameter(request_query_dict, **kwargs):
15 combined_query_dict = request_query_dict.copy()
16 for key in kwargs:
17 combined_query_dict.setlist(key, [kwargs[key]])
18 encoded_parameter = '?' + combined_query_dict.urlencode()
19 return encoded_parameter
20
[end of apps/contrib/templatetags/contrib_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/contrib/templatetags/contrib_tags.py b/apps/contrib/templatetags/contrib_tags.py
--- a/apps/contrib/templatetags/contrib_tags.py
+++ b/apps/contrib/templatetags/contrib_tags.py
@@ -17,3 +17,14 @@
combined_query_dict.setlist(key, [kwargs[key]])
encoded_parameter = '?' + combined_query_dict.urlencode()
return encoded_parameter
+
+
[email protected]_tag
+def filter_has_perm(perm, user, objects):
+ """Filter a list of objects based on user permissions."""
+ if not hasattr(user, 'has_perm'):
+ # If the swapped user model does not support permissions, all objects
+ # will be returned. This is taken from rules.templatetags.has_perm.
+ return objects
+ else:
+ return (obj for obj in objects if user.has_perm(perm, obj))
| {"golden_diff": "diff --git a/apps/contrib/templatetags/contrib_tags.py b/apps/contrib/templatetags/contrib_tags.py\n--- a/apps/contrib/templatetags/contrib_tags.py\n+++ b/apps/contrib/templatetags/contrib_tags.py\n@@ -17,3 +17,14 @@\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n+\n+\[email protected]_tag\n+def filter_has_perm(perm, user, objects):\n+ \"\"\"Filter a list of objects based on user permissions.\"\"\"\n+ if not hasattr(user, 'has_perm'):\n+ # If the swapped user model does not support permissions, all objects\n+ # will be returned. This is taken from rules.templatetags.has_perm.\n+ return objects\n+ else:\n+ return (obj for obj in objects if user.has_perm(perm, obj))\n", "issue": "project list does not show text if there are no matching projects\nIt should show something like \"No projects could be found\". Note that the text should work for two cases: \"there are no projects\" and \"there are no projects matching the filters\".\n", "before_files": [{"content": "from django import template\nfrom django.template.loader import render_to_string\n\nregister = template.Library()\n\n\[email protected]_tag\ndef include_template_string(template, **kwargs):\n rendered_template = render_to_string(template, kwargs)\n return str(rendered_template)\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n", "path": "apps/contrib/templatetags/contrib_tags.py"}]} | 748 | 209 |
gh_patches_debug_8368 | rasdani/github-patches | git_diff | wagtail__wagtail-2488 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Keyerror when sending password reset email
When sending a password reset email, I'm getting an internal error
I'll just share the raven error - hopefully that doesn't review all of the site secrets (probably does)
https://app.getsentry.com/share/issue/37343334302e313233323439393235/
</issue>
<code>
[start of wagtail/wagtailadmin/templatetags/wagtailuserbar.py]
1 from __future__ import absolute_import, unicode_literals
2
3 from django import template
4 from django.template.loader import render_to_string
5
6 from wagtail.wagtailadmin.userbar import (
7 AddPageItem, AdminItem, ApproveModerationEditPageItem, EditPageItem, ExplorePageItem,
8 RejectModerationEditPageItem)
9 from wagtail.wagtailcore import hooks
10 from wagtail.wagtailcore.models import PAGE_TEMPLATE_VAR, Page, PageRevision
11
12 # from django.contrib.auth.decorators import permission_required
13
14
15 register = template.Library()
16
17
18 def get_page_instance(context):
19 """
20 Given a template context, try and find a Page variable in the common
21 places. Returns None if a page can not be found.
22 """
23 possible_names = [PAGE_TEMPLATE_VAR, 'self']
24 for name in possible_names:
25 if name in context:
26 page = context[name]
27 if isinstance(page, Page):
28 return page
29
30
31 @register.simple_tag(takes_context=True)
32 def wagtailuserbar(context, position='bottom-right'):
33 # Find request object
34 request = context['request']
35
36
37 # Don't render if user doesn't have permission to access the admin area
38 if not request.user.has_perm('wagtailadmin.access_admin'):
39 return ''
40
41 # Only render if the context contains a variable referencing a saved page
42 page = get_page_instance(context)
43 if page is None:
44 return ''
45
46 # Dont render anything if the page has not been saved - i.e. a preview
47 if page.pk is None:
48 return ''
49
50 try:
51 revision_id = request.revision_id
52 except AttributeError:
53 revision_id = None
54
55 if revision_id is None:
56 items = [
57 AdminItem(),
58 ExplorePageItem(Page.objects.get(id=page.id)),
59 EditPageItem(Page.objects.get(id=page.id)),
60 AddPageItem(Page.objects.get(id=page.id)),
61 ]
62 else:
63 items = [
64 AdminItem(),
65 ExplorePageItem(PageRevision.objects.get(id=revision_id).page),
66 EditPageItem(PageRevision.objects.get(id=revision_id).page),
67 AddPageItem(PageRevision.objects.get(id=revision_id).page),
68 ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
69 RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),
70 ]
71
72 for fn in hooks.get_hooks('construct_wagtail_userbar'):
73 fn(request, items)
74
75 # Render the items
76 rendered_items = [item.render(request) for item in items]
77
78 # Remove any unrendered items
79 rendered_items = [item for item in rendered_items if item]
80
81 # Render the userbar items
82 return render_to_string('wagtailadmin/userbar/base.html', {
83 'request': request,
84 'items': rendered_items,
85 'position': position,
86 'page': page,
87 'revision_id': revision_id
88 })
89
[end of wagtail/wagtailadmin/templatetags/wagtailuserbar.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py
--- a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py
+++ b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py
@@ -31,8 +31,10 @@
@register.simple_tag(takes_context=True)
def wagtailuserbar(context, position='bottom-right'):
# Find request object
- request = context['request']
-
+ try:
+ request = context['request']
+ except KeyError:
+ return ''
# Don't render if user doesn't have permission to access the admin area
if not request.user.has_perm('wagtailadmin.access_admin'):
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py\n--- a/wagtail/wagtailadmin/templatetags/wagtailuserbar.py\n+++ b/wagtail/wagtailadmin/templatetags/wagtailuserbar.py\n@@ -31,8 +31,10 @@\n @register.simple_tag(takes_context=True)\n def wagtailuserbar(context, position='bottom-right'):\n # Find request object\n- request = context['request']\n-\n+ try:\n+ request = context['request']\n+ except KeyError:\n+ return ''\n \n # Don't render if user doesn't have permission to access the admin area\n if not request.user.has_perm('wagtailadmin.access_admin'):\n", "issue": "Keyerror when sending password reset email\nWhen sending a password reset email, I'm getting an internal error\n\nI'll just share the raven error - hopefully that doesn't review all of the site secrets (probably does)\n\nhttps://app.getsentry.com/share/issue/37343334302e313233323439393235/\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django import template\nfrom django.template.loader import render_to_string\n\nfrom wagtail.wagtailadmin.userbar import (\n AddPageItem, AdminItem, ApproveModerationEditPageItem, EditPageItem, ExplorePageItem,\n RejectModerationEditPageItem)\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import PAGE_TEMPLATE_VAR, Page, PageRevision\n\n# from django.contrib.auth.decorators import permission_required\n\n\nregister = template.Library()\n\n\ndef get_page_instance(context):\n \"\"\"\n Given a template context, try and find a Page variable in the common\n places. Returns None if a page can not be found.\n \"\"\"\n possible_names = [PAGE_TEMPLATE_VAR, 'self']\n for name in possible_names:\n if name in context:\n page = context[name]\n if isinstance(page, Page):\n return page\n\n\[email protected]_tag(takes_context=True)\ndef wagtailuserbar(context, position='bottom-right'):\n # Find request object\n request = context['request']\n\n\n # Don't render if user doesn't have permission to access the admin area\n if not request.user.has_perm('wagtailadmin.access_admin'):\n return ''\n\n # Only render if the context contains a variable referencing a saved page\n page = get_page_instance(context)\n if page is None:\n return ''\n\n # Dont render anything if the page has not been saved - i.e. 
a preview\n if page.pk is None:\n return ''\n\n try:\n revision_id = request.revision_id\n except AttributeError:\n revision_id = None\n\n if revision_id is None:\n items = [\n AdminItem(),\n ExplorePageItem(Page.objects.get(id=page.id)),\n EditPageItem(Page.objects.get(id=page.id)),\n AddPageItem(Page.objects.get(id=page.id)),\n ]\n else:\n items = [\n AdminItem(),\n ExplorePageItem(PageRevision.objects.get(id=revision_id).page),\n EditPageItem(PageRevision.objects.get(id=revision_id).page),\n AddPageItem(PageRevision.objects.get(id=revision_id).page),\n ApproveModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n RejectModerationEditPageItem(PageRevision.objects.get(id=revision_id)),\n ]\n\n for fn in hooks.get_hooks('construct_wagtail_userbar'):\n fn(request, items)\n\n # Render the items\n rendered_items = [item.render(request) for item in items]\n\n # Remove any unrendered items\n rendered_items = [item for item in rendered_items if item]\n\n # Render the userbar items\n return render_to_string('wagtailadmin/userbar/base.html', {\n 'request': request,\n 'items': rendered_items,\n 'position': position,\n 'page': page,\n 'revision_id': revision_id\n })\n", "path": "wagtail/wagtailadmin/templatetags/wagtailuserbar.py"}]} | 1,445 | 187 |