| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-25.4k | stringlengths 145-5.13k | stringlengths 582-39.1k | int64 271-4.1k | int64 47-1.02k |
gh_patches_debug_31170 | rasdani/github-patches | git_diff | beeware__toga-1702 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Winforms datepicker widget returning unexpected date format
**Describe the bug**
The winforms datepicker native widget returns a date format different to the expected format. A ValueError exception is raised. For example "time data 'Saturday, 1 January 2011' does not match format '%A, %B %d, %Y'".
**To Reproduce**
Steps to reproduce the behavior:
Run the following code and click the update button:
```
from toga import App, MainWindow, Label, Box, Button, DatePicker
from toga.constants import COLUMN
from toga.style import Pack
class Application(App):
def startup(self):
self.main_window = MainWindow(title=self.name)
self.button = Button(
"Update",
on_press=self.update,
style=Pack(flex=1, width=200),
)
self.box1 = Box(
style=Pack(
direction=COLUMN,
)
)
self.date_input = DatePicker()
self.box1.add(self.date_input)
self.box1.add(self.button)
self.main_window.content = self.box1
self.main_window.content.refresh()
self.main_window.show()
def update(self, widget, **kwargs):
print(self.date_input.value)
if __name__ == "__main__":
app = Application("BugApp", "App")
app.main_loop()
```
A traceback similar to the following will be generated:
```
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\_strptime.py", line 349, in _strptime
raise ValueError("time data %r does not match format %r" %
File "C:\Program Files\Python310\lib\_strptime.py", line 568, in _strptime_datetime
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
File "C:\Users\gregc\python\toga\toga\winforms\src\toga_winforms\widgets\datepicker.py", line 15, in get_value
return datetime.datetime.strptime(self.native.Text, "%A, %B %d, %Y").date()
File "C:\Users\gregc\python\toga\toga\core\src\toga\widgets\datepicker.py", line 75, in value
return self._impl.get_value()
File "C:\Users\gregc\python\toga\test2.py", line 33, in update
print(self.date_input.value)
File "C:\Users\gregc\python\toga\toga\core\src\toga\handlers.py", line 64, in _handler
result = handler(interface, *args, **kwargs)
File "C:\Users\gregc\python\toga\toga\winforms\src\toga_winforms\widgets\button.py", line 19, in winforms_click
self.interface.on_press(self.interface)
at Python.Runtime.PythonException.ThrowLastAsClrException()
at Python.Runtime.Dispatcher.TrueDispatch(Object[] args)
at Python.Runtime.Dispatcher.Dispatch(Object[] args)
at __System_EventHandlerDispatcher.Invoke(Object , EventArgs )
at System.Windows.Forms.Control.OnClick(EventArgs e)
at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ButtonBase.WndProc(Message& m)
at System.Windows.Forms.Button.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
time data 'Friday, 2 February 3201' does not match format '%A, %B %d, %Y'
```
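The failure can be reproduced without WinForms at all: the hard-coded US-style format string cannot parse the day-first text that `DateTimePicker.Text` yields under a non-US locale. A minimal, self-contained check using the exact string from the error message above:

```python
import datetime

# The widget's Text property is locale-formatted ("Saturday, 1 January 2011"),
# but get_value() parses it with a fixed US-style format, so strptime raises.
try:
    datetime.datetime.strptime("Saturday, 1 January 2011", "%A, %B %d, %Y")
except ValueError as exc:
    print(exc)  # time data 'Saturday, 1 January 2011' does not match format '%A, %B %d, %Y'
```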
**Expected behavior**
The date should be printed to the console, e.g. "2011-02-01".
**Screenshots**

**Environment:**
- Operating System: Windows 11
- Python version: 3.10.8
- Software versions:
- Toga: 0.3.0.dev39
**Additional context**
I have a proposed fix. I will submit a pull request when I can work out why CI is failing:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `winforms/src/toga_winforms/widgets/datepicker.py`
Content:
```
1 import datetime
2
3 from travertino.size import at_least
4
5 from toga_winforms.libs import WinDateTime, WinForms
6
7 from .base import Widget
8
9
10 class DatePicker(Widget):
11 def create(self):
12 self.native = WinForms.DateTimePicker()
13
14 def get_value(self):
15 return datetime.datetime.strptime(self.native.Text, "%A, %B %d, %Y").date()
16
17 def set_value(self, value):
18 self.native.Value = WinDateTime(value.year, value.month, value.day)
19
20 def set_min_date(self, value):
21 if value is None:
22 value = self.native.MinDateTime
23 else:
24 value = WinDateTime(value.year, value.month, value.day)
25
26 self.native.MinDate = value
27
28 def set_max_date(self, value):
29 if value is None:
30 value = self.native.MaxDateTime
31 else:
32 value = WinDateTime(value.year, value.month, value.day)
33
34 self.native.MaxDate = value
35
36 def rehint(self):
37 # Height of a date input is known and fixed.
38 # Width must be > 200
39 self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)
40 self.interface.intrinsic.height = self.native.PreferredSize.Height
41
42 def set_on_change(self, handler):
43 self.native.ValueChanged += self.on_date_change
44
45 def on_date_change(self, sender, event):
46 if self.interface._on_change:
47 self.interface.on_change(self.interface)
48
```
Path: `winforms/src/toga_winforms/libs/__init__.py`
Content:
```
1 from .extensions import ( # noqa: F401
2 CoreWebView2CreationProperties,
3 WebView2,
4 WebView2RuntimeNotFoundException,
5 )
6 from .fonts import HorizontalTextAlignment, TextAlignment, win_font_family # noqa: F401
7 from .winforms import ( # noqa: F401
8 Action,
9 Bitmap,
10 Color,
11 Convert,
12 Drawing2D,
13 FillMode,
14 FontFamily,
15 FontStyle,
16 Graphics,
17 GraphicsPath,
18 ImageFormat,
19 Matrix,
20 MemoryStream,
21 Pen,
22 Point,
23 PointF,
24 Rectangle,
25 RectangleF,
26 SecurityProtocolType,
27 ServicePointManager,
28 Single,
29 Size,
30 SolidBrush,
31 String,
32 StringFormat,
33 SystemColors,
34 Task,
35 TaskScheduler,
36 Threading,
37 Uri,
38 WinDateTime,
39 WinFont,
40 WinForms,
41 WinIcon,
42 WinImage,
43 shcore,
44 user32,
45 win_version,
46 )
47
```
Path: `winforms/src/toga_winforms/libs/winforms.py`
Content:
```
1 import ctypes
2
3 import clr
4
5 clr.AddReference("System.Windows.Forms")
6
7 import System.Windows.Forms as WinForms # noqa: F401, E402
8 from System import ( # noqa: F401, E402
9 Action,
10 ArgumentException,
11 Convert,
12 DateTime as WinDateTime,
13 Environment,
14 Single,
15 String,
16 Threading,
17 Uri,
18 )
19 from System.Drawing import ( # noqa: F401, E402
20 Bitmap,
21 Color,
22 ContentAlignment,
23 Drawing2D,
24 Font as WinFont,
25 FontFamily,
26 FontStyle,
27 Graphics,
28 Icon as WinIcon,
29 Image as WinImage,
30 Pen,
31 Point,
32 PointF,
33 Rectangle,
34 RectangleF,
35 Size,
36 SolidBrush,
37 StringFormat,
38 SystemColors,
39 SystemFonts,
40 Text,
41 )
42 from System.Drawing.Drawing2D import ( # noqa: F401, E402
43 FillMode,
44 GraphicsPath,
45 Matrix,
46 )
47 from System.Drawing.Imaging import ImageFormat # noqa: F401, E402
48 from System.Drawing.Text import PrivateFontCollection # noqa: F401, E402
49 from System.IO import FileNotFoundException, MemoryStream # noqa: F401, E402
50 from System.Net import SecurityProtocolType, ServicePointManager # noqa: F401, E402
51 from System.Runtime.InteropServices import ExternalException # noqa: F401, E402
52 from System.Threading.Tasks import Task, TaskScheduler # noqa: F401, E402
53
54 user32 = ctypes.windll.user32
55 # shcore dll not exist on some Windows versions
56 # win_version should be checked to ensure proper usage
57 try:
58 shcore = ctypes.windll.shcore
59 except OSError:
60 shcore = None
61 win_version = Environment.OSVersion.Version
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/winforms/src/toga_winforms/libs/__init__.py b/winforms/src/toga_winforms/libs/__init__.py
--- a/winforms/src/toga_winforms/libs/__init__.py
+++ b/winforms/src/toga_winforms/libs/__init__.py
@@ -9,6 +9,7 @@
Bitmap,
Color,
Convert,
+ CultureInfo,
Drawing2D,
FillMode,
FontFamily,
diff --git a/winforms/src/toga_winforms/libs/winforms.py b/winforms/src/toga_winforms/libs/winforms.py
--- a/winforms/src/toga_winforms/libs/winforms.py
+++ b/winforms/src/toga_winforms/libs/winforms.py
@@ -46,6 +46,7 @@
)
from System.Drawing.Imaging import ImageFormat # noqa: F401, E402
from System.Drawing.Text import PrivateFontCollection # noqa: F401, E402
+from System.Globalization import CultureInfo # noqa: F401, E402
from System.IO import FileNotFoundException, MemoryStream # noqa: F401, E402
from System.Net import SecurityProtocolType, ServicePointManager # noqa: F401, E402
from System.Runtime.InteropServices import ExternalException # noqa: F401, E402
diff --git a/winforms/src/toga_winforms/widgets/datepicker.py b/winforms/src/toga_winforms/widgets/datepicker.py
--- a/winforms/src/toga_winforms/widgets/datepicker.py
+++ b/winforms/src/toga_winforms/widgets/datepicker.py
@@ -2,7 +2,7 @@
from travertino.size import at_least
-from toga_winforms.libs import WinDateTime, WinForms
+from toga_winforms.libs import CultureInfo, WinDateTime, WinForms
from .base import Widget
@@ -12,7 +12,12 @@
self.native = WinForms.DateTimePicker()
def get_value(self):
- return datetime.datetime.strptime(self.native.Text, "%A, %B %d, %Y").date()
+ return datetime.datetime.strptime(
+ self.native.Value.ToString(
+ "yyyy-MM-ddTHH:mm:sszzz", CultureInfo.InvariantCulture
+ ),
+ "%Y-%m-%dT%H:%M:%S%z",
+ ).date()
def set_value(self, value):
self.native.Value = WinDateTime(value.year, value.month, value.day)
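As a rough standalone illustration of the round-trip the patch relies on (the sample string below is an assumption; the patched code produces it from the native `DateTime` via `CultureInfo.InvariantCulture`), the fixed format parses independently of locale:

```python
import datetime

# Locale-independent parse of an invariant-culture formatted value.
# Python 3.7+ accepts the ':' inside the UTC offset for %z.
sample = "2011-01-01T00:00:00+10:00"  # assumed example of the formatted native value
print(datetime.datetime.strptime(sample, "%Y-%m-%dT%H:%M:%S%z").date())  # 2011-01-01
```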
| {"golden_diff": "diff --git a/winforms/src/toga_winforms/libs/__init__.py b/winforms/src/toga_winforms/libs/__init__.py\n--- a/winforms/src/toga_winforms/libs/__init__.py\n+++ b/winforms/src/toga_winforms/libs/__init__.py\n@@ -9,6 +9,7 @@\n Bitmap,\n Color,\n Convert,\n+ CultureInfo,\n Drawing2D,\n FillMode,\n FontFamily,\ndiff --git a/winforms/src/toga_winforms/libs/winforms.py b/winforms/src/toga_winforms/libs/winforms.py\n--- a/winforms/src/toga_winforms/libs/winforms.py\n+++ b/winforms/src/toga_winforms/libs/winforms.py\n@@ -46,6 +46,7 @@\n )\n from System.Drawing.Imaging import ImageFormat # noqa: F401, E402\n from System.Drawing.Text import PrivateFontCollection # noqa: F401, E402\n+from System.Globalization import CultureInfo # noqa: F401, E402\n from System.IO import FileNotFoundException, MemoryStream # noqa: F401, E402\n from System.Net import SecurityProtocolType, ServicePointManager # noqa: F401, E402\n from System.Runtime.InteropServices import ExternalException # noqa: F401, E402\ndiff --git a/winforms/src/toga_winforms/widgets/datepicker.py b/winforms/src/toga_winforms/widgets/datepicker.py\n--- a/winforms/src/toga_winforms/widgets/datepicker.py\n+++ b/winforms/src/toga_winforms/widgets/datepicker.py\n@@ -2,7 +2,7 @@\n \n from travertino.size import at_least\n \n-from toga_winforms.libs import WinDateTime, WinForms\n+from toga_winforms.libs import CultureInfo, WinDateTime, WinForms\n \n from .base import Widget\n \n@@ -12,7 +12,12 @@\n self.native = WinForms.DateTimePicker()\n \n def get_value(self):\n- return datetime.datetime.strptime(self.native.Text, \"%A, %B %d, %Y\").date()\n+ return datetime.datetime.strptime(\n+ self.native.Value.ToString(\n+ \"yyyy-MM-ddTHH:mm:sszzz\", CultureInfo.InvariantCulture\n+ ),\n+ \"%Y-%m-%dT%H:%M:%S%z\",\n+ ).date()\n \n def set_value(self, value):\n self.native.Value = WinDateTime(value.year, value.month, value.day)\n", "issue": "Winforms datepicker widget returning unexpected date format\n**Describe the bug**\r\nThe winforms datepicker native widget returns a date format different to the expected format. A ValueError exception is raised. 
For example \"time data 'Saturday, 1 January 2011' does not match format '%A, %B %d, %Y'\".\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nRun the following code and click the update button:\r\n\r\n```\r\nfrom toga import App, MainWindow, Label, Box, Button, DatePicker\r\nfrom toga.constants import COLUMN\r\nfrom toga.style import Pack\r\n\r\n\r\nclass Application(App):\r\n def startup(self):\r\n\r\n self.main_window = MainWindow(title=self.name)\r\n\r\n self.button = Button(\r\n \"Update\",\r\n on_press=self.update,\r\n style=Pack(flex=1, width=200),\r\n )\r\n\r\n self.box1 = Box(\r\n style=Pack(\r\n direction=COLUMN,\r\n )\r\n )\r\n\r\n self.date_input = DatePicker()\r\n\r\n self.box1.add(self.date_input)\r\n self.box1.add(self.button)\r\n self.main_window.content = self.box1\r\n\r\n self.main_window.content.refresh()\r\n self.main_window.show()\r\n\r\n def update(self, widget, **kwargs):\r\n print(self.date_input.value)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app = Application(\"BugApp\", \"App\")\r\n app.main_loop()\r\n```\r\nA traceback similiar to the folllowing will be generated:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\Python310\\lib\\_strptime.py\", line 349, in _strptime\r\n raise ValueError(\"time data %r does not match format %r\" %\r\n File \"C:\\Program Files\\Python310\\lib\\_strptime.py\", line 568, in _strptime_datetime\r\n tt, fraction, gmtoff_fraction = _strptime(data_string, format)\r\n File \"C:\\Users\\gregc\\python\\toga\\toga\\winforms\\src\\toga_winforms\\widgets\\datepicker.py\", line 15, in get_value\r\n return datetime.datetime.strptime(self.native.Text, \"%A, %B %d, %Y\").date()\r\n File \"C:\\Users\\gregc\\python\\toga\\toga\\core\\src\\toga\\widgets\\datepicker.py\", line 75, in value\r\n return self._impl.get_value()\r\n File \"C:\\Users\\gregc\\python\\toga\\test2.py\", line 33, in update\r\n print(self.date_input.value)\r\n File \"C:\\Users\\gregc\\python\\toga\\toga\\core\\src\\toga\\handlers.py\", line 64, in _handler\r\n result = handler(interface, *args, **kwargs)\r\n File \"C:\\Users\\gregc\\python\\toga\\toga\\winforms\\src\\toga_winforms\\widgets\\button.py\", line 19, in winforms_click\r\n self.interface.on_press(self.interface)\r\n at Python.Runtime.PythonException.ThrowLastAsClrException()\r\n at Python.Runtime.Dispatcher.TrueDispatch(Object[] args)\r\n at Python.Runtime.Dispatcher.Dispatch(Object[] args)\r\n at __System_EventHandlerDispatcher.Invoke(Object , EventArgs )\r\n at System.Windows.Forms.Control.OnClick(EventArgs e)\r\n at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)\r\n at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)\r\n at System.Windows.Forms.Control.WndProc(Message& m)\r\n at System.Windows.Forms.ButtonBase.WndProc(Message& m)\r\n at System.Windows.Forms.Button.WndProc(Message& m)\r\n at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)\r\ntime data 'Friday, 2 February 3201' does not match format '%A, %B %d, %Y'\r\n```\r\n\r\n**Expected behavior**\r\nThe date should be printed to the console eg. \"2011-02-01\".\r\n\r\n**Screenshots**\r\n\r\n\r\n**Environment:**\r\n - Operating System: Windows 11\r\n - Python version: 3.10.8\r\n - Software versions:\r\n - Toga: 0.3.0.dev39\r\n\r\n**Additional context**\r\nI have a proposed fix. 
I will submit a pull request when I can work out why CI is failing:\r\n\r\n\n", "before_files": [{"content": "import datetime\n\nfrom travertino.size import at_least\n\nfrom toga_winforms.libs import WinDateTime, WinForms\n\nfrom .base import Widget\n\n\nclass DatePicker(Widget):\n def create(self):\n self.native = WinForms.DateTimePicker()\n\n def get_value(self):\n return datetime.datetime.strptime(self.native.Text, \"%A, %B %d, %Y\").date()\n\n def set_value(self, value):\n self.native.Value = WinDateTime(value.year, value.month, value.day)\n\n def set_min_date(self, value):\n if value is None:\n value = self.native.MinDateTime\n else:\n value = WinDateTime(value.year, value.month, value.day)\n\n self.native.MinDate = value\n\n def set_max_date(self, value):\n if value is None:\n value = self.native.MaxDateTime\n else:\n value = WinDateTime(value.year, value.month, value.day)\n\n self.native.MaxDate = value\n\n def rehint(self):\n # Height of a date input is known and fixed.\n # Width must be > 200\n self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)\n self.interface.intrinsic.height = self.native.PreferredSize.Height\n\n def set_on_change(self, handler):\n self.native.ValueChanged += self.on_date_change\n\n def on_date_change(self, sender, event):\n if self.interface._on_change:\n self.interface.on_change(self.interface)\n", "path": "winforms/src/toga_winforms/widgets/datepicker.py"}, {"content": "from .extensions import ( # noqa: F401\n CoreWebView2CreationProperties,\n WebView2,\n WebView2RuntimeNotFoundException,\n)\nfrom .fonts import HorizontalTextAlignment, TextAlignment, win_font_family # noqa: F401\nfrom .winforms import ( # noqa: F401\n Action,\n Bitmap,\n Color,\n Convert,\n Drawing2D,\n FillMode,\n FontFamily,\n FontStyle,\n Graphics,\n GraphicsPath,\n ImageFormat,\n Matrix,\n MemoryStream,\n Pen,\n Point,\n PointF,\n Rectangle,\n RectangleF,\n SecurityProtocolType,\n ServicePointManager,\n Single,\n Size,\n SolidBrush,\n String,\n StringFormat,\n SystemColors,\n Task,\n TaskScheduler,\n Threading,\n Uri,\n WinDateTime,\n WinFont,\n WinForms,\n WinIcon,\n WinImage,\n shcore,\n user32,\n win_version,\n)\n", "path": "winforms/src/toga_winforms/libs/__init__.py"}, {"content": "import ctypes\n\nimport clr\n\nclr.AddReference(\"System.Windows.Forms\")\n\nimport System.Windows.Forms as WinForms # noqa: F401, E402\nfrom System import ( # noqa: F401, E402\n Action,\n ArgumentException,\n Convert,\n DateTime as WinDateTime,\n Environment,\n Single,\n String,\n Threading,\n Uri,\n)\nfrom System.Drawing import ( # noqa: F401, E402\n Bitmap,\n Color,\n ContentAlignment,\n Drawing2D,\n Font as WinFont,\n FontFamily,\n FontStyle,\n Graphics,\n Icon as WinIcon,\n Image as WinImage,\n Pen,\n Point,\n PointF,\n Rectangle,\n RectangleF,\n Size,\n SolidBrush,\n StringFormat,\n SystemColors,\n SystemFonts,\n Text,\n)\nfrom System.Drawing.Drawing2D import ( # noqa: F401, E402\n FillMode,\n GraphicsPath,\n Matrix,\n)\nfrom System.Drawing.Imaging import ImageFormat # noqa: F401, E402\nfrom System.Drawing.Text import PrivateFontCollection # noqa: F401, E402\nfrom System.IO import FileNotFoundException, MemoryStream # noqa: F401, E402\nfrom System.Net import SecurityProtocolType, ServicePointManager # noqa: F401, E402\nfrom System.Runtime.InteropServices import ExternalException # noqa: F401, E402\nfrom System.Threading.Tasks import Task, TaskScheduler # noqa: F401, E402\n\nuser32 = ctypes.windll.user32\n# shcore dll not exist on some Windows versions\n# win_version should be 
checked to ensure proper usage\ntry:\n shcore = ctypes.windll.shcore\nexcept OSError:\n shcore = None\nwin_version = Environment.OSVersion.Version\n", "path": "winforms/src/toga_winforms/libs/winforms.py"}], "after_files": [{"content": "import datetime\n\nfrom travertino.size import at_least\n\nfrom toga_winforms.libs import CultureInfo, WinDateTime, WinForms\n\nfrom .base import Widget\n\n\nclass DatePicker(Widget):\n def create(self):\n self.native = WinForms.DateTimePicker()\n\n def get_value(self):\n return datetime.datetime.strptime(\n self.native.Value.ToString(\n \"yyyy-MM-ddTHH:mm:sszzz\", CultureInfo.InvariantCulture\n ),\n \"%Y-%m-%dT%H:%M:%S%z\",\n ).date()\n\n def set_value(self, value):\n self.native.Value = WinDateTime(value.year, value.month, value.day)\n\n def set_min_date(self, value):\n if value is None:\n value = self.native.MinDateTime\n else:\n value = WinDateTime(value.year, value.month, value.day)\n\n self.native.MinDate = value\n\n def set_max_date(self, value):\n if value is None:\n value = self.native.MaxDateTime\n else:\n value = WinDateTime(value.year, value.month, value.day)\n\n self.native.MaxDate = value\n\n def rehint(self):\n # Height of a date input is known and fixed.\n # Width must be > 200\n self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)\n self.interface.intrinsic.height = self.native.PreferredSize.Height\n\n def set_on_change(self, handler):\n self.native.ValueChanged += self.on_date_change\n\n def on_date_change(self, sender, event):\n if self.interface._on_change:\n self.interface.on_change(self.interface)\n", "path": "winforms/src/toga_winforms/widgets/datepicker.py"}, {"content": "from .extensions import ( # noqa: F401\n CoreWebView2CreationProperties,\n WebView2,\n WebView2RuntimeNotFoundException,\n)\nfrom .fonts import HorizontalTextAlignment, TextAlignment, win_font_family # noqa: F401\nfrom .winforms import ( # noqa: F401\n Action,\n Bitmap,\n Color,\n Convert,\n CultureInfo,\n Drawing2D,\n FillMode,\n FontFamily,\n FontStyle,\n Graphics,\n GraphicsPath,\n ImageFormat,\n Matrix,\n MemoryStream,\n Pen,\n Point,\n PointF,\n Rectangle,\n RectangleF,\n SecurityProtocolType,\n ServicePointManager,\n Single,\n Size,\n SolidBrush,\n String,\n StringFormat,\n SystemColors,\n Task,\n TaskScheduler,\n Threading,\n Uri,\n WinDateTime,\n WinFont,\n WinForms,\n WinIcon,\n WinImage,\n shcore,\n user32,\n win_version,\n)\n", "path": "winforms/src/toga_winforms/libs/__init__.py"}, {"content": "import ctypes\n\nimport clr\n\nclr.AddReference(\"System.Windows.Forms\")\n\nimport System.Windows.Forms as WinForms # noqa: F401, E402\nfrom System import ( # noqa: F401, E402\n Action,\n ArgumentException,\n Convert,\n DateTime as WinDateTime,\n Environment,\n Single,\n String,\n Threading,\n Uri,\n)\nfrom System.Drawing import ( # noqa: F401, E402\n Bitmap,\n Color,\n ContentAlignment,\n Drawing2D,\n Font as WinFont,\n FontFamily,\n FontStyle,\n Graphics,\n Icon as WinIcon,\n Image as WinImage,\n Pen,\n Point,\n PointF,\n Rectangle,\n RectangleF,\n Size,\n SolidBrush,\n StringFormat,\n SystemColors,\n SystemFonts,\n Text,\n)\nfrom System.Drawing.Drawing2D import ( # noqa: F401, E402\n FillMode,\n GraphicsPath,\n Matrix,\n)\nfrom System.Drawing.Imaging import ImageFormat # noqa: F401, E402\nfrom System.Drawing.Text import PrivateFontCollection # noqa: F401, E402\nfrom System.Globalization import CultureInfo # noqa: F401, E402\nfrom System.IO import FileNotFoundException, MemoryStream # noqa: F401, E402\nfrom System.Net import SecurityProtocolType, 
ServicePointManager # noqa: F401, E402\nfrom System.Runtime.InteropServices import ExternalException # noqa: F401, E402\nfrom System.Threading.Tasks import Task, TaskScheduler # noqa: F401, E402\n\nuser32 = ctypes.windll.user32\n# shcore dll not exist on some Windows versions\n# win_version should be checked to ensure proper usage\ntry:\n shcore = ctypes.windll.shcore\nexcept OSError:\n shcore = None\nwin_version = Environment.OSVersion.Version\n", "path": "winforms/src/toga_winforms/libs/winforms.py"}]} | 2,591 | 535 |
gh_patches_debug_2515 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-2974 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test 2959: redesign mail of new Stellungnahme in b-plan module
**URL:** mail
**user:** sachbearbeiter
**expected behaviour:** logo is no longer in the email
**behaviour:** logo is on the bottom left corner of the mail, outside the mail layout box
**important screensize:**
**device & browser:** mail on mac
**Comment/Question:**
Screenshot?
<img width="776" alt="Bildschirmfoto 2020-05-25 um 15 44 09" src="https://user-images.githubusercontent.com/35491681/82819838-5e76f900-9ea1-11ea-99a9-9a531588387f.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/bplan/emails.py`
Content:
```
1 from django.conf import settings
2
3 from meinberlin.apps.contrib.emails import Email
4
5
6 class OfficeWorkerNotification(Email):
7 template_name = 'meinberlin_bplan/emails/office_worker_notification'
8
9 @property
10 def office_worker_email(self):
11 project = self.object.module.project
12 return project.externalproject.bplan.office_worker_email
13
14 @property
15 def bplan_identifier(self):
16 project = self.object.module.project
17 return project.externalproject.bplan.identifier
18
19 def get_receivers(self):
20 return [self.office_worker_email]
21
22 def get_context(self):
23 context = super().get_context()
24 context['module'] = self.object.module
25 context['project'] = self.object.module.project
26 context['contact_email'] = settings.CONTACT_EMAIL
27 context['identifier'] = self.bplan_identifier
28 return context
29
30
31 class SubmitterConfirmation(Email):
32 template_name = 'meinberlin_bplan/emails/submitter_confirmation'
33
34 def get_receivers(self):
35 return [self.object.email]
36
37 def get_context(self):
38 context = super().get_context()
39 context['module'] = self.object.module
40 context['project'] = self.object.module.project
41 context['contact_email'] = settings.CONTACT_EMAIL
42 return context
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/meinberlin/apps/bplan/emails.py b/meinberlin/apps/bplan/emails.py
--- a/meinberlin/apps/bplan/emails.py
+++ b/meinberlin/apps/bplan/emails.py
@@ -27,6 +27,9 @@
context['identifier'] = self.bplan_identifier
return context
+ def get_attachments(self):
+ return []
+
class SubmitterConfirmation(Email):
template_name = 'meinberlin_bplan/emails/submitter_confirmation'
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/emails.py b/meinberlin/apps/bplan/emails.py\n--- a/meinberlin/apps/bplan/emails.py\n+++ b/meinberlin/apps/bplan/emails.py\n@@ -27,6 +27,9 @@\n context['identifier'] = self.bplan_identifier\n return context\n \n+ def get_attachments(self):\n+ return []\n+\n \n class SubmitterConfirmation(Email):\n template_name = 'meinberlin_bplan/emails/submitter_confirmation'\n", "issue": "test 2959: redesign mail of new Stellungnahme in b-plan module\n**URL:** mail\r\n**user:** sachbearbeiter\r\n**expected behaviour:** logo is no longer in the email\r\n**behaviour:** logo is on the bottom left corner of the mail, outside the mail layout box \r\n**important screensize:**\r\n**device & browser:** mail on mac\r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n<img width=\"776\" alt=\"Bildschirmfoto 2020-05-25 um 15 44 09\" src=\"https://user-images.githubusercontent.com/35491681/82819838-5e76f900-9ea1-11ea-99a9-9a531588387f.png\">\r\n\r\n\n", "before_files": [{"content": "from django.conf import settings\n\nfrom meinberlin.apps.contrib.emails import Email\n\n\nclass OfficeWorkerNotification(Email):\n template_name = 'meinberlin_bplan/emails/office_worker_notification'\n\n @property\n def office_worker_email(self):\n project = self.object.module.project\n return project.externalproject.bplan.office_worker_email\n\n @property\n def bplan_identifier(self):\n project = self.object.module.project\n return project.externalproject.bplan.identifier\n\n def get_receivers(self):\n return [self.office_worker_email]\n\n def get_context(self):\n context = super().get_context()\n context['module'] = self.object.module\n context['project'] = self.object.module.project\n context['contact_email'] = settings.CONTACT_EMAIL\n context['identifier'] = self.bplan_identifier\n return context\n\n\nclass SubmitterConfirmation(Email):\n template_name = 'meinberlin_bplan/emails/submitter_confirmation'\n\n def get_receivers(self):\n return [self.object.email]\n\n def get_context(self):\n context = super().get_context()\n context['module'] = self.object.module\n context['project'] = self.object.module.project\n context['contact_email'] = settings.CONTACT_EMAIL\n return context\n", "path": "meinberlin/apps/bplan/emails.py"}], "after_files": [{"content": "from django.conf import settings\n\nfrom meinberlin.apps.contrib.emails import Email\n\n\nclass OfficeWorkerNotification(Email):\n template_name = 'meinberlin_bplan/emails/office_worker_notification'\n\n @property\n def office_worker_email(self):\n project = self.object.module.project\n return project.externalproject.bplan.office_worker_email\n\n @property\n def bplan_identifier(self):\n project = self.object.module.project\n return project.externalproject.bplan.identifier\n\n def get_receivers(self):\n return [self.office_worker_email]\n\n def get_context(self):\n context = super().get_context()\n context['module'] = self.object.module\n context['project'] = self.object.module.project\n context['contact_email'] = settings.CONTACT_EMAIL\n context['identifier'] = self.bplan_identifier\n return context\n\n def get_attachments(self):\n return []\n\n\nclass SubmitterConfirmation(Email):\n template_name = 'meinberlin_bplan/emails/submitter_confirmation'\n\n def get_receivers(self):\n return [self.object.email]\n\n def get_context(self):\n context = super().get_context()\n context['module'] = self.object.module\n context['project'] = self.object.module.project\n context['contact_email'] = settings.CONTACT_EMAIL\n return context\n", "path": 
"meinberlin/apps/bplan/emails.py"}]} | 809 | 117 |
gh_patches_debug_11460 | rasdani/github-patches | git_diff | modoboa__modoboa-2495 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Robots.txt is missing from urls.py
# Impacted versions
* Modoboa: 1.12.2 and older
* installer used: Yes, but some modifications made
* Webserver: Nginx
# Steps to reproduce
Install modoboa and enable webinterface.
# Current behavior
No robots.txt is defined. Search engines do not know how to index the website. When search engines try to find robots.txt, a 404 is raised and the error is mailed to ADMINS (if configured)
# Expected behavior
Robots.txt defined in urls.py to deny all traffic, as webmail should not be publicly indexed by search engines. Possible fix, add:
`path('robots.txt', lambda r: HttpResponse("User-agent: *\nDisAllow: /", content_type="text/plain"), name='robots')`
# Video/Screenshot link (optional)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modoboa/core/urls.py`
Content:
```
1 """Core urls."""
2
3 from django.urls import path
4
5 from . import views
6
7 app_name = "core"
8
9 urlpatterns = [
10 path('', views.RootDispatchView.as_view(), name="root"),
11 path('dashboard/', views.DashboardView.as_view(), name="dashboard"),
12
13 path('accounts/login/', views.dologin, name="login"),
14 path('accounts/logout/', views.dologout, name="logout"),
15 path('accounts/2fa_verify/',
16 views.TwoFactorCodeVerifyView.as_view(),
17 name='2fa_verify'),
18
19 path('core/', views.viewsettings, name="index"),
20 path('core/parameters/', views.parameters, name="parameters"),
21 path('core/info/', views.information, name="information"),
22 path('core/logs/', views.logs, name="log_list"),
23 path('core/logs/page/', views.logs_page, name="logs_page"),
24 path('core/top_notifications/check/',
25 views.check_top_notifications,
26 name="top_notifications_check"),
27
28 path('user/', views.index, name="user_index"),
29 path('user/preferences/', views.preferences,
30 name="user_preferences"),
31 path('user/profile/', views.profile, name="user_profile"),
32 path('user/api/', views.api_access, name="user_api_access"),
33 path('user/security/', views.security, name="user_security"),
34 ]
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/modoboa/core/urls.py b/modoboa/core/urls.py
--- a/modoboa/core/urls.py
+++ b/modoboa/core/urls.py
@@ -1,6 +1,7 @@
"""Core urls."""
from django.urls import path
+from django.views.generic.base import TemplateView
from . import views
@@ -31,4 +32,5 @@
path('user/profile/', views.profile, name="user_profile"),
path('user/api/', views.api_access, name="user_api_access"),
path('user/security/', views.security, name="user_security"),
+ path('robots.txt', TemplateView.as_view(template_name="core/robots.txt", content_type="text/plain")),
]
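Note that the `TemplateView` route in the diff assumes a `core/robots.txt` template exists in the app's template directory (the diff itself does not add it). A minimal template consistent with the directive proposed in the issue would be:

```
User-agent: *
Disallow: /
```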
| {"golden_diff": "diff --git a/modoboa/core/urls.py b/modoboa/core/urls.py\n--- a/modoboa/core/urls.py\n+++ b/modoboa/core/urls.py\n@@ -1,6 +1,7 @@\n \"\"\"Core urls.\"\"\"\n \n from django.urls import path\n+from django.views.generic.base import TemplateView\n \n from . import views\n \n@@ -31,4 +32,5 @@\n path('user/profile/', views.profile, name=\"user_profile\"),\n path('user/api/', views.api_access, name=\"user_api_access\"),\n path('user/security/', views.security, name=\"user_security\"),\n+ path('robots.txt', TemplateView.as_view(template_name=\"core/robots.txt\", content_type=\"text/plain\")),\n ]\n", "issue": "Robots.txt is missing from urls.py\n# Impacted versions\r\n\r\n* Modoboa: 1.12.2 and older\r\n* installer used: Yes, but some modifications made\r\n* Webserver: Nginx\r\n\r\n# Steps to reproduce\r\nInstall modoboa and enable webinterface.\r\n\r\n# Current behavior\r\nNo robots.txt is defined. Search engines do not now how to index the website. When search engines try to find robots.txt an 404 is raised and the error is mailed to ADMINS (if configured)\r\n\r\n# Expected behavior\r\nRobots.txt in urls.py defined, to deny all traffic, as webmail should not be publicly indexed by search engines. Possible fix, add:\r\n`path('robots.txt', lambda r: HttpResponse(\"User-agent: *\\nDisAllow: /\", content_type=\"text/plain\"), name='robots')`\r\n\r\n# Video/Screenshot link (optional)\r\n\r\n\n", "before_files": [{"content": "\"\"\"Core urls.\"\"\"\n\nfrom django.urls import path\n\nfrom . import views\n\napp_name = \"core\"\n\nurlpatterns = [\n path('', views.RootDispatchView.as_view(), name=\"root\"),\n path('dashboard/', views.DashboardView.as_view(), name=\"dashboard\"),\n\n path('accounts/login/', views.dologin, name=\"login\"),\n path('accounts/logout/', views.dologout, name=\"logout\"),\n path('accounts/2fa_verify/',\n views.TwoFactorCodeVerifyView.as_view(),\n name='2fa_verify'),\n\n path('core/', views.viewsettings, name=\"index\"),\n path('core/parameters/', views.parameters, name=\"parameters\"),\n path('core/info/', views.information, name=\"information\"),\n path('core/logs/', views.logs, name=\"log_list\"),\n path('core/logs/page/', views.logs_page, name=\"logs_page\"),\n path('core/top_notifications/check/',\n views.check_top_notifications,\n name=\"top_notifications_check\"),\n\n path('user/', views.index, name=\"user_index\"),\n path('user/preferences/', views.preferences,\n name=\"user_preferences\"),\n path('user/profile/', views.profile, name=\"user_profile\"),\n path('user/api/', views.api_access, name=\"user_api_access\"),\n path('user/security/', views.security, name=\"user_security\"),\n]\n", "path": "modoboa/core/urls.py"}], "after_files": [{"content": "\"\"\"Core urls.\"\"\"\n\nfrom django.urls import path\nfrom django.views.generic.base import TemplateView\n\nfrom . 
import views\n\napp_name = \"core\"\n\nurlpatterns = [\n path('', views.RootDispatchView.as_view(), name=\"root\"),\n path('dashboard/', views.DashboardView.as_view(), name=\"dashboard\"),\n\n path('accounts/login/', views.dologin, name=\"login\"),\n path('accounts/logout/', views.dologout, name=\"logout\"),\n path('accounts/2fa_verify/',\n views.TwoFactorCodeVerifyView.as_view(),\n name='2fa_verify'),\n\n path('core/', views.viewsettings, name=\"index\"),\n path('core/parameters/', views.parameters, name=\"parameters\"),\n path('core/info/', views.information, name=\"information\"),\n path('core/logs/', views.logs, name=\"log_list\"),\n path('core/logs/page/', views.logs_page, name=\"logs_page\"),\n path('core/top_notifications/check/',\n views.check_top_notifications,\n name=\"top_notifications_check\"),\n\n path('user/', views.index, name=\"user_index\"),\n path('user/preferences/', views.preferences,\n name=\"user_preferences\"),\n path('user/profile/', views.profile, name=\"user_profile\"),\n path('user/api/', views.api_access, name=\"user_api_access\"),\n path('user/security/', views.security, name=\"user_security\"),\n path('robots.txt', TemplateView.as_view(template_name=\"core/robots.txt\", content_type=\"text/plain\")),\n]\n", "path": "modoboa/core/urls.py"}]} | 792 | 158 |
gh_patches_debug_20049 | rasdani/github-patches | git_diff | joke2k__faker-1273 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
VAT for IT provider generates invalid values
* Faker version: `4.1.3`
* OS: Linux Debian
`company_vat` for `it_IT` providers randomly returns invalid VAT values.
### Expected behavior
As example, `company_vat()` generated the invalid value `IT00472451194`. This value is not valid because the last part of the code, `IT#######119#` is not a valid number for the algorithm that validates the Italian VAT (the [stdnum library](https://github.com/arthurdejong/python-stdnum), for example, correctly recognize this as an invalid VAT). As described in the ["Partita IVA" (VAT) wikipedia page](https://www.wikiwand.com/it/Partita_IVA), the valid values must be in the range `001-100` plus the values `120,121,888,999`.
The current code is:
```python
def company_vat(self):
"""
Returns Italian VAT identification number (Partita IVA).
"""
code = "0" + self.bothify('######') + str(self.generator.random.randrange(1, 121)).zfill(3)
luhn_checksum = str(calculate_luhn(code))
return 'IT{}{}'.format(code, luhn_checksum)
```
this code has some issues:
- The `randrange` is generating invalid values.
- The method is not generating all the possible VAT due to the missing `888` and `999` values. Also, the first digit of the VAT is allowed to be values other than 0 (currently it is fixed to 0).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/company/it_IT/__init__.py`
Content:
```
1 from faker.utils.checksums import calculate_luhn
2
3 from .. import Provider as CompanyProvider
4
5
6 class Provider(CompanyProvider):
7 formats = (
8 '{{last_name}} {{company_suffix}}',
9 '{{last_name}}-{{last_name}} {{company_suffix}}',
10 '{{last_name}}, {{last_name}} e {{last_name}} {{company_suffix}}',
11 )
12
13 catch_phrase_words = (
14 ('Abilità',
15 'Access',
16 'Adattatore',
17 'Algoritmo',
18 'Alleanza',
19 'Analizzatore',
20 'Applicazione',
21 'Approccio',
22 'Architettura',
23 'Archivio',
24 'Intelligenza artificiale',
25 'Array',
26 'Attitudine',
27 'Benchmark',
28 'Capacità',
29 'Sfida',
30 'Circuito',
31 'Collaborazione',
32 'Complessità',
33 'Concetto',
34 'Conglomerato',
35 'Contingenza',
36 'Core',
37 'Database',
38 'Data-warehouse',
39 'Definizione',
40 'Emulazione',
41 'Codifica',
42 'Criptazione',
43 'Firmware',
44 'Flessibilità',
45 'Previsione',
46 'Frame',
47 'framework',
48 'Funzione',
49 'Funzionalità',
50 'Interfaccia grafica',
51 'Hardware',
52 'Help-desk',
53 'Gerarchia',
54 'Hub',
55 'Implementazione',
56 'Infrastruttura',
57 'Iniziativa',
58 'Installazione',
59 'Set di istruzioni',
60 'Interfaccia',
61 'Soluzione internet',
62 'Intranet',
63 'Conoscenza base',
64 'Matrici',
65 'Matrice',
66 'Metodologia',
67 'Middleware',
68 'Migrazione',
69 'Modello',
70 'Moderazione',
71 'Monitoraggio',
72 'Moratoria',
73 'Rete',
74 'Architettura aperta',
75 'Sistema aperto',
76 'Orchestrazione',
77 'Paradigma',
78 'Parallelismo',
79 'Policy',
80 'Portale',
81 'Struttura di prezzo',
82 'Prodotto',
83 'Produttività',
84 'Progetto',
85 'Proiezione',
86 'Protocollo',
87 'Servizio clienti',
88 'Software',
89 'Soluzione',
90 'Standardizzazione',
91 'Strategia',
92 'Struttura',
93 'Successo',
94 'Sovrastruttura',
95 'Supporto',
96 'Sinergia',
97 'Task-force',
98 'Finestra temporale',
99 'Strumenti',
100 'Utilizzazione',
101 'Sito web',
102 'Forza lavoro'),
103 ('adattiva',
104 'avanzata',
105 'migliorata',
106 'assimilata',
107 'automatizzata',
108 'bilanciata',
109 'centralizzata',
110 'compatibile',
111 'configurabile',
112 'cross-platform',
113 'decentralizzata',
114 'digitalizzata',
115 'distribuita',
116 'piccola',
117 'ergonomica',
118 'esclusiva',
119 'espansa',
120 'estesa',
121 'configurabile',
122 'fondamentale',
123 'orizzontale',
124 'implementata',
125 'innovativa',
126 'integrata',
127 'intuitiva',
128 'inversa',
129 'gestita',
130 'obbligatoria',
131 'monitorata',
132 'multi-canale',
133 'multi-laterale',
134 'open-source',
135 'operativa',
136 'ottimizzata',
137 'organica',
138 'persistente',
139 'polarizzata',
140 'proattiva',
141 'programmabile',
142 'progressiva',
143 'reattiva',
144 'riallineata',
145 'ricontestualizzata',
146 'ridotta',
147 'robusta',
148 'sicura',
149 'condivisibile',
150 'stand-alone',
151 'switchabile',
152 'sincronizzata',
153 'sinergica',
154 'totale',
155 'universale',
156 'user-friendly',
157 'versatile',
158 'virtuale',
159 'visionaria'),
160 ('24 ore',
161 '24/7',
162 'terza generazione',
163 'quarta generazione',
164 'quinta generazione',
165 'sesta generazione',
166 'asimmetrica',
167 'asincrona',
168 'background',
169 'bi-direzionale',
170 'biforcata',
171 'bottom-line',
172 'coerente',
173 'coesiva',
174 'composita',
175 'sensibile al contesto',
176 'basta sul contesto',
177 'basata sul contenuto',
178 'dedicata',
179 'didattica',
180 'direzionale',
181 'discreta',
182 'dinamica',
183 'eco-centrica',
184 'esecutiva',
185 'esplicita',
186 'full-range',
187 'globale',
188 'euristica',
189 'alto livello',
190 'olistica',
191 'omogenea',
192 'ibrida',
193 'impattante',
194 'incrementale',
195 'intangibile',
196 'interattiva',
197 'intermediaria',
198 'locale',
199 'logistica',
200 'massimizzata',
201 'metodica',
202 'mission-critical',
203 'mobile',
204 'modulare',
205 'motivazionale',
206 'multimedia',
207 'multi-tasking',
208 'nazionale',
209 'neutrale',
210 'nextgeneration',
211 'non-volatile',
212 'object-oriented',
213 'ottima',
214 'ottimizzante',
215 'radicale',
216 'real-time',
217 'reciproca',
218 'regionale',
219 'responsiva',
220 'scalabile',
221 'secondaria',
222 'stabile',
223 'statica',
224 'sistematica',
225 'sistemica',
226 'tangibile',
227 'terziaria',
228 'uniforme',
229 'valore aggiunto'))
230
231 bsWords = (
232 ('partnerships',
233 'comunità',
234 'ROI',
235 'soluzioni',
236 'e-services',
237 'nicchie',
238 'tecnologie',
239 'contenuti',
240 'supply-chains',
241 'convergenze',
242 'relazioni',
243 'architetture',
244 'interfacce',
245 'mercati',
246 'e-commerce',
247 'sistemi',
248 'modelli',
249 'schemi',
250 'reti',
251 'applicazioni',
252 'metriche',
253 'e-business',
254 'funzionalità',
255 'esperienze',
256 'webservices',
257 'metodologie'),
258 ('implementate',
259 'utilizzo',
260 'integrate',
261 'ottimali',
262 'evolutive',
263 'abilitate',
264 'reinventate',
265 'aggregate',
266 'migliorate',
267 'incentivate',
268 'monetizzate',
269 'sinergizzate',
270 'strategiche',
271 'deploy',
272 'marchi',
273 'accrescitive',
274 'target',
275 'sintetizzate',
276 'spedizioni',
277 'massimizzate',
278 'innovazione',
279 'guida',
280 'estensioni',
281 'generate',
282 'exploit',
283 'transizionali',
284 'matrici',
285 'ricontestualizzate'),
286 ('valore aggiunto',
287 'verticalizzate',
288 'proattive',
289 'forti',
290 'rivoluzionari',
291 'scalabili',
292 'innovativi',
293 'intuitivi',
294 'strategici',
295 'e-business',
296 'mission-critical',
297 '24/7',
298 'globali',
299 'B2B',
300 'B2C',
301 'granulari',
302 'virtuali',
303 'virali',
304 'dinamiche',
305 'magnetiche',
306 'web',
307 'interattive',
308 'sexy',
309 'back-end',
310 'real-time',
311 'efficienti',
312 'front-end',
313 'distributivi',
314 'estensibili',
315 'mondiali',
316 'open-source',
317 'cross-platform',
318 'sinergiche',
319 'out-of-the-box',
320 'enterprise',
321 'integrate',
322 'di impatto',
323 'wireless',
324 'trasparenti',
325 'next-generation',
326 'cutting-edge',
327 'visionari',
328 'plug-and-play',
329 'collaborative',
330 'olistiche',
331 'ricche'))
332
333 company_suffixes = ('SPA', 'e figli', 'Group', 's.r.l.')
334
335 def catch_phrase(self):
336 """
337 :example 'Robust full-range hub'
338 """
339 result = []
340 for word_list in self.catch_phrase_words:
341 result.append(self.random_element(word_list))
342
343 return " ".join(result)
344
345 def bs(self):
346 """
347 :example 'integrate extensible convergence'
348 """
349 result = []
350 for word_list in self.bsWords:
351 result.append(self.random_element(word_list))
352
353 return " ".join(result)
354
355 def company_vat(self):
356 """
357 Returns Italian VAT identification number (Partita IVA).
358 """
359 code = "0" + self.bothify('######') + str(self.generator.random.randrange(1, 121)).zfill(3)
360 luhn_checksum = str(calculate_luhn(code))
361 return 'IT{}{}'.format(code, luhn_checksum)
362
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| golden_diff:
diff --git a/faker/providers/company/it_IT/__init__.py b/faker/providers/company/it_IT/__init__.py
--- a/faker/providers/company/it_IT/__init__.py
+++ b/faker/providers/company/it_IT/__init__.py
@@ -352,10 +352,30 @@
return " ".join(result)
+ def _random_vat_office(self):
+ """
+ Returns a random code identifying the VAT office needed to build a valid VAT with company_vat.
+
+ See https://it.wikipedia.org/wiki/Partita_IVA#Tabella_degli_Uffici_IVA
+ """
+ val = self.random_int(1, 104)
+
+ # handle special cases
+ if val == 101:
+ return 120
+ elif val == 102:
+ return 121
+ elif val == 103:
+ return 888
+ elif val == 104:
+ return 999
+ # else: between 1 and 100 are all valid
+ return val
+
def company_vat(self):
"""
Returns Italian VAT identification number (Partita IVA).
"""
- code = "0" + self.bothify('######') + str(self.generator.random.randrange(1, 121)).zfill(3)
+ code = self.bothify('#######') + str(self._random_vat_office()).zfill(3)
luhn_checksum = str(calculate_luhn(code))
return 'IT{}{}'.format(code, luhn_checksum)
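A quick way to see the rule from the issue in action is to check the three-digit office code of a generated number; the helper below is purely illustrative (not part of Faker) and uses the invalid example from the report:

```python
# Valid VAT-office codes per the issue: 001-100 plus 120, 121, 888 and 999.
VALID_OFFICE_CODES = set(range(1, 101)) | {120, 121, 888, 999}

def office_code_is_valid(vat: str) -> bool:
    digits = vat[2:] if vat.startswith("IT") else vat  # drop the country prefix
    return int(digits[7:10]) in VALID_OFFICE_CODES     # digits 8-10 are the office code

print(office_code_is_valid("IT00472451194"))  # False -> office code 119 is invalid
```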
| {"golden_diff": "diff --git a/faker/providers/company/it_IT/__init__.py b/faker/providers/company/it_IT/__init__.py\n--- a/faker/providers/company/it_IT/__init__.py\n+++ b/faker/providers/company/it_IT/__init__.py\n@@ -352,10 +352,30 @@\n \n return \" \".join(result)\n \n+ def _random_vat_office(self):\n+ \"\"\"\n+ Returns a random code identifying the VAT office needed to build a valid VAT with company_vat.\n+\n+ See https://it.wikipedia.org/wiki/Partita_IVA#Tabella_degli_Uffici_IVA\n+ \"\"\"\n+ val = self.random_int(1, 104)\n+\n+ # handle special cases\n+ if val == 101:\n+ return 120\n+ elif val == 102:\n+ return 121\n+ elif val == 103:\n+ return 888\n+ elif val == 104:\n+ return 999\n+ # else: between 1 and 100 are all valid\n+ return val\n+\n def company_vat(self):\n \"\"\"\n Returns Italian VAT identification number (Partita IVA).\n \"\"\"\n- code = \"0\" + self.bothify('######') + str(self.generator.random.randrange(1, 121)).zfill(3)\n+ code = self.bothify('#######') + str(self._random_vat_office()).zfill(3)\n luhn_checksum = str(calculate_luhn(code))\n return 'IT{}{}'.format(code, luhn_checksum)\n", "issue": "VAT for IT provider generates invalid values\n* Faker version: `4.1.3`\r\n* OS: Linux Debian\r\n\r\n`company_vat` for `it_IT` providers randomly returns invalid VAT values.\r\n\r\n### Expected behavior\r\n\r\nAs example, `company_vat()` generated the invalid value `IT00472451194`. This value is not valid because the last part of the code, `IT#######119#` is not a valid number for the algorithm that validates the Italian VAT (the [stdnum library](https://github.com/arthurdejong/python-stdnum), for example, correctly recognize this as an invalid VAT). As described in the [\"Partita IVA\" (VAT) wikipedia page](https://www.wikiwand.com/it/Partita_IVA), the valid values must be in the range `001-100` plus the values `120,121,888,999`.\r\n\r\nThe current code is:\r\n\r\n```python\r\ndef company_vat(self):\r\n \"\"\"\r\n Returns Italian VAT identification number (Partita IVA).\r\n \"\"\"\r\n code = \"0\" + self.bothify('######') + str(self.generator.random.randrange(1, 121)).zfill(3)\r\n luhn_checksum = str(calculate_luhn(code))\r\n return 'IT{}{}'.format(code, luhn_checksum)\r\n```\r\n\r\nthis code has some issues:\r\n\r\n- The `randrange` is generating invalid values.\r\n- The method is not generating all the possible VAT due to the missing `888` and `999` values. Also, the first digit of the VAT is allowed to be values other than 0 (currently it is fixed to 0).\r\n\n", "before_files": [{"content": "from faker.utils.checksums import calculate_luhn\n\nfrom .. 
import Provider as CompanyProvider\n\n\nclass Provider(CompanyProvider):\n formats = (\n '{{last_name}} {{company_suffix}}',\n '{{last_name}}-{{last_name}} {{company_suffix}}',\n '{{last_name}}, {{last_name}} e {{last_name}} {{company_suffix}}',\n )\n\n catch_phrase_words = (\n ('Abilit\u00e0',\n 'Access',\n 'Adattatore',\n 'Algoritmo',\n 'Alleanza',\n 'Analizzatore',\n 'Applicazione',\n 'Approccio',\n 'Architettura',\n 'Archivio',\n 'Intelligenza artificiale',\n 'Array',\n 'Attitudine',\n 'Benchmark',\n 'Capacit\u00e0',\n 'Sfida',\n 'Circuito',\n 'Collaborazione',\n 'Complessit\u00e0',\n 'Concetto',\n 'Conglomerato',\n 'Contingenza',\n 'Core',\n 'Database',\n 'Data-warehouse',\n 'Definizione',\n 'Emulazione',\n 'Codifica',\n 'Criptazione',\n 'Firmware',\n 'Flessibilit\u00e0',\n 'Previsione',\n 'Frame',\n 'framework',\n 'Funzione',\n 'Funzionalit\u00e0',\n 'Interfaccia grafica',\n 'Hardware',\n 'Help-desk',\n 'Gerarchia',\n 'Hub',\n 'Implementazione',\n 'Infrastruttura',\n 'Iniziativa',\n 'Installazione',\n 'Set di istruzioni',\n 'Interfaccia',\n 'Soluzione internet',\n 'Intranet',\n 'Conoscenza base',\n 'Matrici',\n 'Matrice',\n 'Metodologia',\n 'Middleware',\n 'Migrazione',\n 'Modello',\n 'Moderazione',\n 'Monitoraggio',\n 'Moratoria',\n 'Rete',\n 'Architettura aperta',\n 'Sistema aperto',\n 'Orchestrazione',\n 'Paradigma',\n 'Parallelismo',\n 'Policy',\n 'Portale',\n 'Struttura di prezzo',\n 'Prodotto',\n 'Produttivit\u00e0',\n 'Progetto',\n 'Proiezione',\n 'Protocollo',\n 'Servizio clienti',\n 'Software',\n 'Soluzione',\n 'Standardizzazione',\n 'Strategia',\n 'Struttura',\n 'Successo',\n 'Sovrastruttura',\n 'Supporto',\n 'Sinergia',\n 'Task-force',\n 'Finestra temporale',\n 'Strumenti',\n 'Utilizzazione',\n 'Sito web',\n 'Forza lavoro'),\n ('adattiva',\n 'avanzata',\n 'migliorata',\n 'assimilata',\n 'automatizzata',\n 'bilanciata',\n 'centralizzata',\n 'compatibile',\n 'configurabile',\n 'cross-platform',\n 'decentralizzata',\n 'digitalizzata',\n 'distribuita',\n 'piccola',\n 'ergonomica',\n 'esclusiva',\n 'espansa',\n 'estesa',\n 'configurabile',\n 'fondamentale',\n 'orizzontale',\n 'implementata',\n 'innovativa',\n 'integrata',\n 'intuitiva',\n 'inversa',\n 'gestita',\n 'obbligatoria',\n 'monitorata',\n 'multi-canale',\n 'multi-laterale',\n 'open-source',\n 'operativa',\n 'ottimizzata',\n 'organica',\n 'persistente',\n 'polarizzata',\n 'proattiva',\n 'programmabile',\n 'progressiva',\n 'reattiva',\n 'riallineata',\n 'ricontestualizzata',\n 'ridotta',\n 'robusta',\n 'sicura',\n 'condivisibile',\n 'stand-alone',\n 'switchabile',\n 'sincronizzata',\n 'sinergica',\n 'totale',\n 'universale',\n 'user-friendly',\n 'versatile',\n 'virtuale',\n 'visionaria'),\n ('24 ore',\n '24/7',\n 'terza generazione',\n 'quarta generazione',\n 'quinta generazione',\n 'sesta generazione',\n 'asimmetrica',\n 'asincrona',\n 'background',\n 'bi-direzionale',\n 'biforcata',\n 'bottom-line',\n 'coerente',\n 'coesiva',\n 'composita',\n 'sensibile al contesto',\n 'basta sul contesto',\n 'basata sul contenuto',\n 'dedicata',\n 'didattica',\n 'direzionale',\n 'discreta',\n 'dinamica',\n 'eco-centrica',\n 'esecutiva',\n 'esplicita',\n 'full-range',\n 'globale',\n 'euristica',\n 'alto livello',\n 'olistica',\n 'omogenea',\n 'ibrida',\n 'impattante',\n 'incrementale',\n 'intangibile',\n 'interattiva',\n 'intermediaria',\n 'locale',\n 'logistica',\n 'massimizzata',\n 'metodica',\n 'mission-critical',\n 'mobile',\n 'modulare',\n 'motivazionale',\n 'multimedia',\n 'multi-tasking',\n 'nazionale',\n 'neutrale',\n 
'nextgeneration',\n 'non-volatile',\n 'object-oriented',\n 'ottima',\n 'ottimizzante',\n 'radicale',\n 'real-time',\n 'reciproca',\n 'regionale',\n 'responsiva',\n 'scalabile',\n 'secondaria',\n 'stabile',\n 'statica',\n 'sistematica',\n 'sistemica',\n 'tangibile',\n 'terziaria',\n 'uniforme',\n 'valore aggiunto'))\n\n bsWords = (\n ('partnerships',\n 'comunit\u00e0',\n 'ROI',\n 'soluzioni',\n 'e-services',\n 'nicchie',\n 'tecnologie',\n 'contenuti',\n 'supply-chains',\n 'convergenze',\n 'relazioni',\n 'architetture',\n 'interfacce',\n 'mercati',\n 'e-commerce',\n 'sistemi',\n 'modelli',\n 'schemi',\n 'reti',\n 'applicazioni',\n 'metriche',\n 'e-business',\n 'funzionalit\u00e0',\n 'esperienze',\n 'webservices',\n 'metodologie'),\n ('implementate',\n 'utilizzo',\n 'integrate',\n 'ottimali',\n 'evolutive',\n 'abilitate',\n 'reinventate',\n 'aggregate',\n 'migliorate',\n 'incentivate',\n 'monetizzate',\n 'sinergizzate',\n 'strategiche',\n 'deploy',\n 'marchi',\n 'accrescitive',\n 'target',\n 'sintetizzate',\n 'spedizioni',\n 'massimizzate',\n 'innovazione',\n 'guida',\n 'estensioni',\n 'generate',\n 'exploit',\n 'transizionali',\n 'matrici',\n 'ricontestualizzate'),\n ('valore aggiunto',\n 'verticalizzate',\n 'proattive',\n 'forti',\n 'rivoluzionari',\n 'scalabili',\n 'innovativi',\n 'intuitivi',\n 'strategici',\n 'e-business',\n 'mission-critical',\n '24/7',\n 'globali',\n 'B2B',\n 'B2C',\n 'granulari',\n 'virtuali',\n 'virali',\n 'dinamiche',\n 'magnetiche',\n 'web',\n 'interattive',\n 'sexy',\n 'back-end',\n 'real-time',\n 'efficienti',\n 'front-end',\n 'distributivi',\n 'estensibili',\n 'mondiali',\n 'open-source',\n 'cross-platform',\n 'sinergiche',\n 'out-of-the-box',\n 'enterprise',\n 'integrate',\n 'di impatto',\n 'wireless',\n 'trasparenti',\n 'next-generation',\n 'cutting-edge',\n 'visionari',\n 'plug-and-play',\n 'collaborative',\n 'olistiche',\n 'ricche'))\n\n company_suffixes = ('SPA', 'e figli', 'Group', 's.r.l.')\n\n def catch_phrase(self):\n \"\"\"\n :example 'Robust full-range hub'\n \"\"\"\n result = []\n for word_list in self.catch_phrase_words:\n result.append(self.random_element(word_list))\n\n return \" \".join(result)\n\n def bs(self):\n \"\"\"\n :example 'integrate extensible convergence'\n \"\"\"\n result = []\n for word_list in self.bsWords:\n result.append(self.random_element(word_list))\n\n return \" \".join(result)\n\n def company_vat(self):\n \"\"\"\n Returns Italian VAT identification number (Partita IVA).\n \"\"\"\n code = \"0\" + self.bothify('######') + str(self.generator.random.randrange(1, 121)).zfill(3)\n luhn_checksum = str(calculate_luhn(code))\n return 'IT{}{}'.format(code, luhn_checksum)\n", "path": "faker/providers/company/it_IT/__init__.py"}], "after_files": [{"content": "from faker.utils.checksums import calculate_luhn\n\nfrom .. 
import Provider as CompanyProvider\n\n\nclass Provider(CompanyProvider):\n formats = (\n '{{last_name}} {{company_suffix}}',\n '{{last_name}}-{{last_name}} {{company_suffix}}',\n '{{last_name}}, {{last_name}} e {{last_name}} {{company_suffix}}',\n )\n\n catch_phrase_words = (\n ('Abilit\u00e0',\n 'Access',\n 'Adattatore',\n 'Algoritmo',\n 'Alleanza',\n 'Analizzatore',\n 'Applicazione',\n 'Approccio',\n 'Architettura',\n 'Archivio',\n 'Intelligenza artificiale',\n 'Array',\n 'Attitudine',\n 'Benchmark',\n 'Capacit\u00e0',\n 'Sfida',\n 'Circuito',\n 'Collaborazione',\n 'Complessit\u00e0',\n 'Concetto',\n 'Conglomerato',\n 'Contingenza',\n 'Core',\n 'Database',\n 'Data-warehouse',\n 'Definizione',\n 'Emulazione',\n 'Codifica',\n 'Criptazione',\n 'Firmware',\n 'Flessibilit\u00e0',\n 'Previsione',\n 'Frame',\n 'framework',\n 'Funzione',\n 'Funzionalit\u00e0',\n 'Interfaccia grafica',\n 'Hardware',\n 'Help-desk',\n 'Gerarchia',\n 'Hub',\n 'Implementazione',\n 'Infrastruttura',\n 'Iniziativa',\n 'Installazione',\n 'Set di istruzioni',\n 'Interfaccia',\n 'Soluzione internet',\n 'Intranet',\n 'Conoscenza base',\n 'Matrici',\n 'Matrice',\n 'Metodologia',\n 'Middleware',\n 'Migrazione',\n 'Modello',\n 'Moderazione',\n 'Monitoraggio',\n 'Moratoria',\n 'Rete',\n 'Architettura aperta',\n 'Sistema aperto',\n 'Orchestrazione',\n 'Paradigma',\n 'Parallelismo',\n 'Policy',\n 'Portale',\n 'Struttura di prezzo',\n 'Prodotto',\n 'Produttivit\u00e0',\n 'Progetto',\n 'Proiezione',\n 'Protocollo',\n 'Servizio clienti',\n 'Software',\n 'Soluzione',\n 'Standardizzazione',\n 'Strategia',\n 'Struttura',\n 'Successo',\n 'Sovrastruttura',\n 'Supporto',\n 'Sinergia',\n 'Task-force',\n 'Finestra temporale',\n 'Strumenti',\n 'Utilizzazione',\n 'Sito web',\n 'Forza lavoro'),\n ('adattiva',\n 'avanzata',\n 'migliorata',\n 'assimilata',\n 'automatizzata',\n 'bilanciata',\n 'centralizzata',\n 'compatibile',\n 'configurabile',\n 'cross-platform',\n 'decentralizzata',\n 'digitalizzata',\n 'distribuita',\n 'piccola',\n 'ergonomica',\n 'esclusiva',\n 'espansa',\n 'estesa',\n 'configurabile',\n 'fondamentale',\n 'orizzontale',\n 'implementata',\n 'innovativa',\n 'integrata',\n 'intuitiva',\n 'inversa',\n 'gestita',\n 'obbligatoria',\n 'monitorata',\n 'multi-canale',\n 'multi-laterale',\n 'open-source',\n 'operativa',\n 'ottimizzata',\n 'organica',\n 'persistente',\n 'polarizzata',\n 'proattiva',\n 'programmabile',\n 'progressiva',\n 'reattiva',\n 'riallineata',\n 'ricontestualizzata',\n 'ridotta',\n 'robusta',\n 'sicura',\n 'condivisibile',\n 'stand-alone',\n 'switchabile',\n 'sincronizzata',\n 'sinergica',\n 'totale',\n 'universale',\n 'user-friendly',\n 'versatile',\n 'virtuale',\n 'visionaria'),\n ('24 ore',\n '24/7',\n 'terza generazione',\n 'quarta generazione',\n 'quinta generazione',\n 'sesta generazione',\n 'asimmetrica',\n 'asincrona',\n 'background',\n 'bi-direzionale',\n 'biforcata',\n 'bottom-line',\n 'coerente',\n 'coesiva',\n 'composita',\n 'sensibile al contesto',\n 'basta sul contesto',\n 'basata sul contenuto',\n 'dedicata',\n 'didattica',\n 'direzionale',\n 'discreta',\n 'dinamica',\n 'eco-centrica',\n 'esecutiva',\n 'esplicita',\n 'full-range',\n 'globale',\n 'euristica',\n 'alto livello',\n 'olistica',\n 'omogenea',\n 'ibrida',\n 'impattante',\n 'incrementale',\n 'intangibile',\n 'interattiva',\n 'intermediaria',\n 'locale',\n 'logistica',\n 'massimizzata',\n 'metodica',\n 'mission-critical',\n 'mobile',\n 'modulare',\n 'motivazionale',\n 'multimedia',\n 'multi-tasking',\n 'nazionale',\n 'neutrale',\n 
'nextgeneration',\n 'non-volatile',\n 'object-oriented',\n 'ottima',\n 'ottimizzante',\n 'radicale',\n 'real-time',\n 'reciproca',\n 'regionale',\n 'responsiva',\n 'scalabile',\n 'secondaria',\n 'stabile',\n 'statica',\n 'sistematica',\n 'sistemica',\n 'tangibile',\n 'terziaria',\n 'uniforme',\n 'valore aggiunto'))\n\n bsWords = (\n ('partnerships',\n 'comunit\u00e0',\n 'ROI',\n 'soluzioni',\n 'e-services',\n 'nicchie',\n 'tecnologie',\n 'contenuti',\n 'supply-chains',\n 'convergenze',\n 'relazioni',\n 'architetture',\n 'interfacce',\n 'mercati',\n 'e-commerce',\n 'sistemi',\n 'modelli',\n 'schemi',\n 'reti',\n 'applicazioni',\n 'metriche',\n 'e-business',\n 'funzionalit\u00e0',\n 'esperienze',\n 'webservices',\n 'metodologie'),\n ('implementate',\n 'utilizzo',\n 'integrate',\n 'ottimali',\n 'evolutive',\n 'abilitate',\n 'reinventate',\n 'aggregate',\n 'migliorate',\n 'incentivate',\n 'monetizzate',\n 'sinergizzate',\n 'strategiche',\n 'deploy',\n 'marchi',\n 'accrescitive',\n 'target',\n 'sintetizzate',\n 'spedizioni',\n 'massimizzate',\n 'innovazione',\n 'guida',\n 'estensioni',\n 'generate',\n 'exploit',\n 'transizionali',\n 'matrici',\n 'ricontestualizzate'),\n ('valore aggiunto',\n 'verticalizzate',\n 'proattive',\n 'forti',\n 'rivoluzionari',\n 'scalabili',\n 'innovativi',\n 'intuitivi',\n 'strategici',\n 'e-business',\n 'mission-critical',\n '24/7',\n 'globali',\n 'B2B',\n 'B2C',\n 'granulari',\n 'virtuali',\n 'virali',\n 'dinamiche',\n 'magnetiche',\n 'web',\n 'interattive',\n 'sexy',\n 'back-end',\n 'real-time',\n 'efficienti',\n 'front-end',\n 'distributivi',\n 'estensibili',\n 'mondiali',\n 'open-source',\n 'cross-platform',\n 'sinergiche',\n 'out-of-the-box',\n 'enterprise',\n 'integrate',\n 'di impatto',\n 'wireless',\n 'trasparenti',\n 'next-generation',\n 'cutting-edge',\n 'visionari',\n 'plug-and-play',\n 'collaborative',\n 'olistiche',\n 'ricche'))\n\n company_suffixes = ('SPA', 'e figli', 'Group', 's.r.l.')\n\n def catch_phrase(self):\n \"\"\"\n :example 'Robust full-range hub'\n \"\"\"\n result = []\n for word_list in self.catch_phrase_words:\n result.append(self.random_element(word_list))\n\n return \" \".join(result)\n\n def bs(self):\n \"\"\"\n :example 'integrate extensible convergence'\n \"\"\"\n result = []\n for word_list in self.bsWords:\n result.append(self.random_element(word_list))\n\n return \" \".join(result)\n\n def _random_vat_office(self):\n \"\"\"\n Returns a random code identifying the VAT office needed to build a valid VAT with company_vat.\n\n See https://it.wikipedia.org/wiki/Partita_IVA#Tabella_degli_Uffici_IVA\n \"\"\"\n val = self.random_int(1, 104)\n\n # handle special cases\n if val == 101:\n return 120\n elif val == 102:\n return 121\n elif val == 103:\n return 888\n elif val == 104:\n return 999\n # else: between 1 and 100 are all valid\n return val\n\n def company_vat(self):\n \"\"\"\n Returns Italian VAT identification number (Partita IVA).\n \"\"\"\n code = self.bothify('#######') + str(self._random_vat_office()).zfill(3)\n luhn_checksum = str(calculate_luhn(code))\n return 'IT{}{}'.format(code, luhn_checksum)\n", "path": "faker/providers/company/it_IT/__init__.py"}]} | 3,749 | 369 |
gh_patches_debug_3849 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Celery is only using low priority queue
I'm not sure if this is happening for everyone or just bookwyrm.social, but all my celery tasks are going to the `low_priority` queue and it's making everything run super slowly!
(@tofuwabohu are you noticing this in flower?)
--- END ISSUE ---
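For context on why every task can pile up in one queue: Celery sends any task that no routing rule matches to `task_default_queue`, and a worker only drains the queues it is told to consume. The sketch below is illustrative only — the broker URL, task name and queue names are assumptions, not bookwyrm's actual configuration.

```python
# Illustrative sketch, not bookwyrm code: how Celery decides which queue a task hits.
from celery import Celery

app = Celery("celerywyrm", broker="redis://localhost:6379/0")  # assumed broker URL

# Anything without a matching route falls back to this queue.
app.conf.task_default_queue = "low_priority"

# Hypothetical routing rule sending one task to a higher-priority queue.
app.conf.task_routes = {
    "bookwyrm.tasks.broadcast_task": {"queue": "medium_priority"},
}

# The worker must also be started with every queue it should consume, e.g.:
#   celery -A celerywyrm worker -Q low_priority,medium_priority
```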
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celerywyrm/settings.py`
Content:
```
1 """ bookwyrm settings and configuration """
2 # pylint: disable=wildcard-import
3 # pylint: disable=unused-wildcard-import
4 from bookwyrm.settings import *
5
6 # pylint: disable=line-too-long
7 REDIS_BROKER_PASSWORD = requests.utils.quote(env("REDIS_BROKER_PASSWORD", None))
8 REDIS_BROKER_HOST = env("REDIS_BROKER_HOST", "redis_broker")
9 REDIS_BROKER_PORT = env("REDIS_BROKER_PORT", 6379)
10 REDIS_BROKER_DB_INDEX = env("REDIS_BROKER_DB_INDEX", 0)
11
12 CELERY_BROKER_URL = f"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}"
13 CELERY_RESULT_BACKEND = f"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}"
14
15 CELERY_DEFAULT_QUEUE = "low_priority"
16
17 CELERY_ACCEPT_CONTENT = ["json"]
18 CELERY_TASK_SERIALIZER = "json"
19 CELERY_RESULT_SERIALIZER = "json"
20
21 CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"
22 CELERY_TIMEZONE = env("TIME_ZONE", "UTC")
23
24 FLOWER_PORT = env("FLOWER_PORT")
25
26 INSTALLED_APPS = INSTALLED_APPS + [
27 "celerywyrm",
28 ]
29
30 ROOT_URLCONF = "celerywyrm.urls"
31
32 WSGI_APPLICATION = "celerywyrm.wsgi.application"
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celerywyrm/settings.py b/celerywyrm/settings.py
--- a/celerywyrm/settings.py
+++ b/celerywyrm/settings.py
@@ -13,6 +13,7 @@
CELERY_RESULT_BACKEND = f"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}"
CELERY_DEFAULT_QUEUE = "low_priority"
+CELERY_CREATE_MISSING_QUEUES = True
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
| {"golden_diff": "diff --git a/celerywyrm/settings.py b/celerywyrm/settings.py\n--- a/celerywyrm/settings.py\n+++ b/celerywyrm/settings.py\n@@ -13,6 +13,7 @@\n CELERY_RESULT_BACKEND = f\"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}\"\n \n CELERY_DEFAULT_QUEUE = \"low_priority\"\n+CELERY_CREATE_MISSING_QUEUES = True\n \n CELERY_ACCEPT_CONTENT = [\"json\"]\n CELERY_TASK_SERIALIZER = \"json\"\n", "issue": "Celery is only using low priority queue\nI'm not sure if this is happening for everyone or just bookwyrm.social, but all my celery tasks are going to the `low_priority` queue and it's making everything run super slowly!\r\n\r\n(@tofuwabohu are you noticing this in flower?)\n", "before_files": [{"content": "\"\"\" bookwyrm settings and configuration \"\"\"\n# pylint: disable=wildcard-import\n# pylint: disable=unused-wildcard-import\nfrom bookwyrm.settings import *\n\n# pylint: disable=line-too-long\nREDIS_BROKER_PASSWORD = requests.utils.quote(env(\"REDIS_BROKER_PASSWORD\", None))\nREDIS_BROKER_HOST = env(\"REDIS_BROKER_HOST\", \"redis_broker\")\nREDIS_BROKER_PORT = env(\"REDIS_BROKER_PORT\", 6379)\nREDIS_BROKER_DB_INDEX = env(\"REDIS_BROKER_DB_INDEX\", 0)\n\nCELERY_BROKER_URL = f\"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}\"\nCELERY_RESULT_BACKEND = f\"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}\"\n\nCELERY_DEFAULT_QUEUE = \"low_priority\"\n\nCELERY_ACCEPT_CONTENT = [\"json\"]\nCELERY_TASK_SERIALIZER = \"json\"\nCELERY_RESULT_SERIALIZER = \"json\"\n\nCELERY_BEAT_SCHEDULER = \"django_celery_beat.schedulers:DatabaseScheduler\"\nCELERY_TIMEZONE = env(\"TIME_ZONE\", \"UTC\")\n\nFLOWER_PORT = env(\"FLOWER_PORT\")\n\nINSTALLED_APPS = INSTALLED_APPS + [\n \"celerywyrm\",\n]\n\nROOT_URLCONF = \"celerywyrm.urls\"\n\nWSGI_APPLICATION = \"celerywyrm.wsgi.application\"\n", "path": "celerywyrm/settings.py"}], "after_files": [{"content": "\"\"\" bookwyrm settings and configuration \"\"\"\n# pylint: disable=wildcard-import\n# pylint: disable=unused-wildcard-import\nfrom bookwyrm.settings import *\n\n# pylint: disable=line-too-long\nREDIS_BROKER_PASSWORD = requests.utils.quote(env(\"REDIS_BROKER_PASSWORD\", None))\nREDIS_BROKER_HOST = env(\"REDIS_BROKER_HOST\", \"redis_broker\")\nREDIS_BROKER_PORT = env(\"REDIS_BROKER_PORT\", 6379)\nREDIS_BROKER_DB_INDEX = env(\"REDIS_BROKER_DB_INDEX\", 0)\n\nCELERY_BROKER_URL = f\"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}\"\nCELERY_RESULT_BACKEND = f\"redis://:{REDIS_BROKER_PASSWORD}@{REDIS_BROKER_HOST}:{REDIS_BROKER_PORT}/{REDIS_BROKER_DB_INDEX}\"\n\nCELERY_DEFAULT_QUEUE = \"low_priority\"\nCELERY_CREATE_MISSING_QUEUES = True\n\nCELERY_ACCEPT_CONTENT = [\"json\"]\nCELERY_TASK_SERIALIZER = \"json\"\nCELERY_RESULT_SERIALIZER = \"json\"\n\nCELERY_BEAT_SCHEDULER = \"django_celery_beat.schedulers:DatabaseScheduler\"\nCELERY_TIMEZONE = env(\"TIME_ZONE\", \"UTC\")\n\nFLOWER_PORT = env(\"FLOWER_PORT\")\n\nINSTALLED_APPS = INSTALLED_APPS + [\n \"celerywyrm\",\n]\n\nROOT_URLCONF = \"celerywyrm.urls\"\n\nWSGI_APPLICATION = \"celerywyrm.wsgi.application\"\n", "path": "celerywyrm/settings.py"}]} | 709 | 127 |
gh_patches_debug_48127 | rasdani/github-patches | git_diff | dynaconf__dynaconf-1010 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] TypeError for older versions of HVAC in read_secret_version method
**Describe the bug**
A combination of newer versions of Dynaconf with older versions of HVAC results in an incompatible mix of expected vs. available arguments. Specifically, you can get the following traceback.
```python
109 try:
110 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:
--> 111 data = client.secrets.kv.v2.read_secret_version(
112 path,
113 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
114 raise_on_deleted_version=True, # keep default behavior
115 )
116 else:
117 data = client.secrets.kv.read_secret(
118 "data/" + path,
119 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
120 )
TypeError: KvV2.read_secret_version() got an unexpected keyword argument 'raise_on_deleted_version'
```
The PR introducing this feature was included in HVAC 1.1.0: https://github.com/hvac/hvac/pull/907
**To Reproduce**
Steps to reproduce the behavior:
1. Have a version of HVAC older than 1.1.0
2. Trigger a vault version read
--- END ISSUE ---
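One way to tolerate both hvac generations at the call site is to pass the new keyword only when the installed client understands it. This is a hedged sketch, not Dynaconf's code: it assumes `hvac.__version__` is available and uses `packaging` for the comparison; the Vault URL, path and mount point are placeholders.

```python
# Sketch of a version-guarded call; raise_on_deleted_version exists only in hvac >= 1.1.0.
import hvac
from packaging.version import Version

client = hvac.Client(url="https://vault.example.com")  # placeholder URL

kwargs = {"path": "myapp", "mount_point": "secret"}  # placeholder path/mount
if Version(hvac.__version__) >= Version("1.1.0"):
    # keep the pre-1.1.0 default behaviour explicit on newer clients
    kwargs["raise_on_deleted_version"] = True

data = client.secrets.kv.v2.read_secret_version(**kwargs)
```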
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from __future__ import annotations
2
3 import os
4
5 from setuptools import find_packages
6 from setuptools import setup
7
8
9 def read(*names, **kwargs):
10 """Read a file."""
11 content = ""
12 with open(
13 os.path.join(os.path.dirname(__file__), *names),
14 encoding=kwargs.get("encoding", "utf8"),
15 ) as open_file:
16 content = open_file.read().strip()
17 return content
18
19
20 test_requirements = [
21 "pytest",
22 "pytest-cov",
23 "pytest-xdist",
24 "pytest-mock",
25 "flake8",
26 "pep8-naming",
27 "flake8-debugger",
28 "flake8-print",
29 "flake8-todo",
30 "radon",
31 "flask>=0.12",
32 "django",
33 "python-dotenv",
34 "toml",
35 "redis",
36 "hvac",
37 "configobj",
38 ]
39
40
41 setup(
42 name="dynaconf",
43 version=read("dynaconf", "VERSION"),
44 url="https://github.com/dynaconf/dynaconf",
45 license="MIT",
46 license_files=["LICENSE", "vendor_licenses/*"],
47 author="Bruno Rocha",
48 author_email="[email protected]",
49 description="The dynamic configurator for your Python Project",
50 long_description=read("README.md"),
51 long_description_content_type="text/markdown",
52 packages=find_packages(
53 exclude=[
54 "tests",
55 "tests.*",
56 "tests_functional",
57 "tests_functional.*",
58 "docs",
59 "legacy_docs",
60 "legacy_docs.*",
61 "docs.*",
62 "build",
63 "build.*",
64 "dynaconf.vendor_src",
65 "dynaconf/vendor_src",
66 "dynaconf.vendor_src.*",
67 "dynaconf/vendor_src/*",
68 ]
69 ),
70 include_package_data=True,
71 zip_safe=False,
72 platforms="any",
73 tests_require=test_requirements,
74 extras_require={
75 "redis": ["redis"],
76 "vault": ["hvac"],
77 "yaml": ["ruamel.yaml"],
78 "toml": ["toml"],
79 "ini": ["configobj"],
80 "configobj": ["configobj"],
81 "all": ["redis", "ruamel.yaml", "configobj", "hvac"],
82 "test": test_requirements,
83 },
84 python_requires=">=3.8",
85 entry_points={"console_scripts": ["dynaconf=dynaconf.cli:main"]},
86 setup_requires=["setuptools>=38.6.0"],
87 classifiers=[
88 "Development Status :: 5 - Production/Stable",
89 "Framework :: Django",
90 "Framework :: Flask",
91 "Intended Audience :: Developers",
92 "License :: OSI Approved :: MIT License",
93 "Natural Language :: English",
94 "Operating System :: OS Independent",
95 "Programming Language :: Python",
96 "Programming Language :: Python :: 3",
97 "Programming Language :: Python :: 3 :: Only",
98 "Programming Language :: Python :: 3.8",
99 "Programming Language :: Python :: 3.9",
100 "Programming Language :: Python :: 3.10",
101 "Programming Language :: Python :: 3.11",
102 "Topic :: Utilities",
103 "Topic :: Software Development :: Libraries",
104 "Topic :: Software Development :: Libraries :: Python Modules",
105 ],
106 )
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,7 +33,7 @@
"python-dotenv",
"toml",
"redis",
- "hvac",
+ "hvac>=1.1.0",
"configobj",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,7 +33,7 @@\n \"python-dotenv\",\n \"toml\",\n \"redis\",\n- \"hvac\",\n+ \"hvac>=1.1.0\",\n \"configobj\",\n ]\n", "issue": "[bug] TypeError for older versions of HVAC in read_secret_version method\n**Describe the bug**\r\nA combination of newer versions of Dynaconf with older versions of HVAC result in an incompatible mix of expected vs available arguments. Specifically you can get the following traceback.\r\n\r\n```python\r\n 109 try:\r\n 110 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:\r\n--> 111 data = client.secrets.kv.v2.read_secret_version(\r\n 112 path,\r\n 113 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\r\n 114 raise_on_deleted_version=True, # keep default behavior\r\n 115 )\r\n 116 else:\r\n 117 data = client.secrets.kv.read_secret(\r\n 118 \"data/\" + path,\r\n 119 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\r\n 120 )\r\n\r\nTypeError: KvV2.read_secret_version() got an unexpected keyword argument 'raise_on_deleted_version'\r\n```\r\n\r\nThe PR introducing this feature was included in HVAC 1.1.0: https://github.com/hvac/hvac/pull/907 \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Have a version of HVAC older than 1.1.0\r\n2. Trigger a vault version read\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef read(*names, **kwargs):\n \"\"\"Read a file.\"\"\"\n content = \"\"\n with open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\"),\n ) as open_file:\n content = open_file.read().strip()\n return content\n\n\ntest_requirements = [\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-mock\",\n \"flake8\",\n \"pep8-naming\",\n \"flake8-debugger\",\n \"flake8-print\",\n \"flake8-todo\",\n \"radon\",\n \"flask>=0.12\",\n \"django\",\n \"python-dotenv\",\n \"toml\",\n \"redis\",\n \"hvac\",\n \"configobj\",\n]\n\n\nsetup(\n name=\"dynaconf\",\n version=read(\"dynaconf\", \"VERSION\"),\n url=\"https://github.com/dynaconf/dynaconf\",\n license=\"MIT\",\n license_files=[\"LICENSE\", \"vendor_licenses/*\"],\n author=\"Bruno Rocha\",\n author_email=\"[email protected]\",\n description=\"The dynamic configurator for your Python Project\",\n long_description=read(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\n \"tests\",\n \"tests.*\",\n \"tests_functional\",\n \"tests_functional.*\",\n \"docs\",\n \"legacy_docs\",\n \"legacy_docs.*\",\n \"docs.*\",\n \"build\",\n \"build.*\",\n \"dynaconf.vendor_src\",\n \"dynaconf/vendor_src\",\n \"dynaconf.vendor_src.*\",\n \"dynaconf/vendor_src/*\",\n ]\n ),\n include_package_data=True,\n zip_safe=False,\n platforms=\"any\",\n tests_require=test_requirements,\n extras_require={\n \"redis\": [\"redis\"],\n \"vault\": [\"hvac\"],\n \"yaml\": [\"ruamel.yaml\"],\n \"toml\": [\"toml\"],\n \"ini\": [\"configobj\"],\n \"configobj\": [\"configobj\"],\n \"all\": [\"redis\", \"ruamel.yaml\", \"configobj\", \"hvac\"],\n \"test\": test_requirements,\n },\n python_requires=\">=3.8\",\n entry_points={\"console_scripts\": [\"dynaconf=dynaconf.cli:main\"]},\n setup_requires=[\"setuptools>=38.6.0\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Django\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural 
Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Utilities\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef read(*names, **kwargs):\n \"\"\"Read a file.\"\"\"\n content = \"\"\n with open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\"),\n ) as open_file:\n content = open_file.read().strip()\n return content\n\n\ntest_requirements = [\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-mock\",\n \"flake8\",\n \"pep8-naming\",\n \"flake8-debugger\",\n \"flake8-print\",\n \"flake8-todo\",\n \"radon\",\n \"flask>=0.12\",\n \"django\",\n \"python-dotenv\",\n \"toml\",\n \"redis\",\n \"hvac>=1.1.0\",\n \"configobj\",\n]\n\n\nsetup(\n name=\"dynaconf\",\n version=read(\"dynaconf\", \"VERSION\"),\n url=\"https://github.com/dynaconf/dynaconf\",\n license=\"MIT\",\n license_files=[\"LICENSE\", \"vendor_licenses/*\"],\n author=\"Bruno Rocha\",\n author_email=\"[email protected]\",\n description=\"The dynamic configurator for your Python Project\",\n long_description=read(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\n \"tests\",\n \"tests.*\",\n \"tests_functional\",\n \"tests_functional.*\",\n \"docs\",\n \"legacy_docs\",\n \"legacy_docs.*\",\n \"docs.*\",\n \"build\",\n \"build.*\",\n \"dynaconf.vendor_src\",\n \"dynaconf/vendor_src\",\n \"dynaconf.vendor_src.*\",\n \"dynaconf/vendor_src/*\",\n ]\n ),\n include_package_data=True,\n zip_safe=False,\n platforms=\"any\",\n tests_require=test_requirements,\n extras_require={\n \"redis\": [\"redis\"],\n \"vault\": [\"hvac\"],\n \"yaml\": [\"ruamel.yaml\"],\n \"toml\": [\"toml\"],\n \"ini\": [\"configobj\"],\n \"configobj\": [\"configobj\"],\n \"all\": [\"redis\", \"ruamel.yaml\", \"configobj\", \"hvac\"],\n \"test\": test_requirements,\n },\n python_requires=\">=3.8\",\n entry_points={\"console_scripts\": [\"dynaconf=dynaconf.cli:main\"]},\n setup_requires=[\"setuptools>=38.6.0\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Django\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Utilities\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n)\n", "path": "setup.py"}]} | 1,496 | 71 |
gh_patches_debug_23445 | rasdani/github-patches | git_diff | liqd__a4-opin-689 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invite: email address should be independent of letter case
After testing invites for private projects a lot with AEGEE, I finally found out what their problem was. When they invite users, the autocorrect on their Android tablets capitalizes the first letter of the email address. The users they wanted to invite had their email addresses written in lowercase, though. OPIN did not recognize them as the same users. We should change this behaviour ASAP. It should not matter anywhere whether a user inputs email addresses in lowercase or uppercase letters.
--- END ISSUE ---
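The behaviour the reporter asks for boils down to normalizing case before comparing or querying email addresses. The snippet below is illustrative only and does not use the project's real models; `Invite` and `matches_user` are made-up names for the sketch.

```python
# Illustrative Django sketch -- not the euth/OPIN models.
from django.db import models


class Invite(models.Model):
    email = models.EmailField()

    def matches_user(self, user) -> bool:
        # Compare case-insensitively instead of using raw string equality.
        return self.email.strip().lower() == user.email.strip().lower()

# When looking an invite up, a case-insensitive query works as well:
#   Invite.objects.filter(email__iexact=user.email)
```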
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/memberships/views.py`
Content:
```
1 from django.http import Http404
2 from django.shortcuts import redirect
3 from django.views import generic
4 from rules.compat import access_mixins as mixin
5
6 from adhocracy4.projects import models as prj_models
7 from adhocracy4.projects import views as prj_views
8
9 from . import forms, models
10
11
12 class RequestsProjectDetailView(prj_views.ProjectDetailView):
13
14 def handle_no_permission(self):
15 """
16 Check if user clould join
17 """
18 user = self.request.user
19 is_member = user.is_authenticated() and self.project.has_member(user)
20
21 if is_member:
22 return super().handle_no_permission()
23 else:
24 return self.handle_no_membership()
25
26 def handle_no_membership(self):
27 membership_impossible = (
28 not self.request.user.is_authenticated()
29 or self.project.is_draft
30 or self.project.has_member(self.request.user)
31 )
32
33 if membership_impossible:
34 return super().handle_no_permission()
35 else:
36 return redirect('memberships-request',
37 project_slug=self.project.slug)
38
39
40 class InviteView(mixin.LoginRequiredMixin, generic.UpdateView):
41 model = models.Invite
42 form_class = forms.InviteForm
43 slug_field = 'token'
44 slug_url_kwarg = 'invite_token'
45
46 def get_form_kwargs(self):
47 kwargs = super().get_form_kwargs()
48 kwargs.update({'user': self.request.user})
49 return kwargs
50
51 def form_valid(self, form):
52 if form.is_accepted():
53 form.instance.accept(self.request.user)
54 return redirect(form.instance.project.get_absolute_url())
55 else:
56 form.instance.reject()
57 return redirect('/')
58
59
60 class RequestView(mixin.LoginRequiredMixin, generic.DetailView):
61 """
62 Displays membership request if it exists or allows to create one.
63 """
64 model = models.Request
65 slug_field = 'project__slug'
66 slug_url_kwarg = 'project_slug'
67 context_object_name = 'join_request'
68
69 def get_queryset(self):
70 return self.model.objects.filter(creator=self.request.user)
71
72 def get(self, request, *args, **kwargs):
73 if self.project.has_member(request.user):
74 return redirect(self.project.get_absolute_url())
75 else:
76 return super().get(request, *args, **kwargs)
77
78 def post(self, request, *args, **kwargs):
79 user = request.user
80 project = self.project
81 models.Request.objects.request_membership(project, user)
82 return redirect(self.request.path)
83
84 def get_object(self, queryset=None):
85 try:
86 return super().get_object(queryset)
87 except Http404:
88 return None
89
90 @property
91 def project(self):
92 project_slug = self.kwargs[self.slug_url_kwarg]
93 return prj_models.Project.objects.get(slug=project_slug)
94
```
Path: `euth/memberships/forms.py`
Content:
```
1 from django import forms
2 from django.core.exceptions import ValidationError
3
4 from . import models
5
6
7 class InviteForm(forms.ModelForm):
8 accept = forms.CharField(required=False)
9 reject = forms.CharField(required=False)
10
11 class Meta:
12 model = models.Invite
13 fields = ['accept', 'reject']
14
15 def __init__(self, user=None, **kwargs):
16 super().__init__(**kwargs)
17 self.user = user
18
19 def clean(self):
20 data = self.data
21 if 'accept' not in data and 'reject' not in data:
22 raise ValidationError('Reject or accept')
23 if 'accept' in data and not self.user.email == self.instance.email:
24 raise ValidationError('This user has another email address than '
25 'the one that received the invitation.')
26 return data
27
28 def is_accepted(self):
29 data = self.data
30 return 'accept' in data and 'reject' not in data
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/memberships/forms.py b/euth/memberships/forms.py
--- a/euth/memberships/forms.py
+++ b/euth/memberships/forms.py
@@ -12,17 +12,10 @@
model = models.Invite
fields = ['accept', 'reject']
- def __init__(self, user=None, **kwargs):
- super().__init__(**kwargs)
- self.user = user
-
def clean(self):
data = self.data
if 'accept' not in data and 'reject' not in data:
raise ValidationError('Reject or accept')
- if 'accept' in data and not self.user.email == self.instance.email:
- raise ValidationError('This user has another email address than '
- 'the one that received the invitation.')
return data
def is_accepted(self):
diff --git a/euth/memberships/views.py b/euth/memberships/views.py
--- a/euth/memberships/views.py
+++ b/euth/memberships/views.py
@@ -43,11 +43,6 @@
slug_field = 'token'
slug_url_kwarg = 'invite_token'
- def get_form_kwargs(self):
- kwargs = super().get_form_kwargs()
- kwargs.update({'user': self.request.user})
- return kwargs
-
def form_valid(self, form):
if form.is_accepted():
form.instance.accept(self.request.user)
| {"golden_diff": "diff --git a/euth/memberships/forms.py b/euth/memberships/forms.py\n--- a/euth/memberships/forms.py\n+++ b/euth/memberships/forms.py\n@@ -12,17 +12,10 @@\n model = models.Invite\n fields = ['accept', 'reject']\n \n- def __init__(self, user=None, **kwargs):\n- super().__init__(**kwargs)\n- self.user = user\n-\n def clean(self):\n data = self.data\n if 'accept' not in data and 'reject' not in data:\n raise ValidationError('Reject or accept')\n- if 'accept' in data and not self.user.email == self.instance.email:\n- raise ValidationError('This user has another email address than '\n- 'the one that received the invitation.')\n return data\n \n def is_accepted(self):\ndiff --git a/euth/memberships/views.py b/euth/memberships/views.py\n--- a/euth/memberships/views.py\n+++ b/euth/memberships/views.py\n@@ -43,11 +43,6 @@\n slug_field = 'token'\n slug_url_kwarg = 'invite_token'\n \n- def get_form_kwargs(self):\n- kwargs = super().get_form_kwargs()\n- kwargs.update({'user': self.request.user})\n- return kwargs\n-\n def form_valid(self, form):\n if form.is_accepted():\n form.instance.accept(self.request.user)\n", "issue": "Invite: email address should be independent of letter case \nAfter testing invites for private projects a lot with AEGEE, I finally found out what their problem was. When they invite users, the auto correct on their Android tablet lets email addresses start with an uppercase letter. The users they wanted to invite had their email address written in lowercase letters though. OPIN did not recognize them as the same users. We should change this behaviour ASAP. It should not matter anywhere whether a user inputs email addresses in lower or uppercase letters.\n", "before_files": [{"content": "from django.http import Http404\nfrom django.shortcuts import redirect\nfrom django.views import generic\nfrom rules.compat import access_mixins as mixin\n\nfrom adhocracy4.projects import models as prj_models\nfrom adhocracy4.projects import views as prj_views\n\nfrom . 
import forms, models\n\n\nclass RequestsProjectDetailView(prj_views.ProjectDetailView):\n\n def handle_no_permission(self):\n \"\"\"\n Check if user clould join\n \"\"\"\n user = self.request.user\n is_member = user.is_authenticated() and self.project.has_member(user)\n\n if is_member:\n return super().handle_no_permission()\n else:\n return self.handle_no_membership()\n\n def handle_no_membership(self):\n membership_impossible = (\n not self.request.user.is_authenticated()\n or self.project.is_draft\n or self.project.has_member(self.request.user)\n )\n\n if membership_impossible:\n return super().handle_no_permission()\n else:\n return redirect('memberships-request',\n project_slug=self.project.slug)\n\n\nclass InviteView(mixin.LoginRequiredMixin, generic.UpdateView):\n model = models.Invite\n form_class = forms.InviteForm\n slug_field = 'token'\n slug_url_kwarg = 'invite_token'\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs.update({'user': self.request.user})\n return kwargs\n\n def form_valid(self, form):\n if form.is_accepted():\n form.instance.accept(self.request.user)\n return redirect(form.instance.project.get_absolute_url())\n else:\n form.instance.reject()\n return redirect('/')\n\n\nclass RequestView(mixin.LoginRequiredMixin, generic.DetailView):\n \"\"\"\n Displays membership request if it exists or allows to create one.\n \"\"\"\n model = models.Request\n slug_field = 'project__slug'\n slug_url_kwarg = 'project_slug'\n context_object_name = 'join_request'\n\n def get_queryset(self):\n return self.model.objects.filter(creator=self.request.user)\n\n def get(self, request, *args, **kwargs):\n if self.project.has_member(request.user):\n return redirect(self.project.get_absolute_url())\n else:\n return super().get(request, *args, **kwargs)\n\n def post(self, request, *args, **kwargs):\n user = request.user\n project = self.project\n models.Request.objects.request_membership(project, user)\n return redirect(self.request.path)\n\n def get_object(self, queryset=None):\n try:\n return super().get_object(queryset)\n except Http404:\n return None\n\n @property\n def project(self):\n project_slug = self.kwargs[self.slug_url_kwarg]\n return prj_models.Project.objects.get(slug=project_slug)\n", "path": "euth/memberships/views.py"}, {"content": "from django import forms\nfrom django.core.exceptions import ValidationError\n\nfrom . import models\n\n\nclass InviteForm(forms.ModelForm):\n accept = forms.CharField(required=False)\n reject = forms.CharField(required=False)\n\n class Meta:\n model = models.Invite\n fields = ['accept', 'reject']\n\n def __init__(self, user=None, **kwargs):\n super().__init__(**kwargs)\n self.user = user\n\n def clean(self):\n data = self.data\n if 'accept' not in data and 'reject' not in data:\n raise ValidationError('Reject or accept')\n if 'accept' in data and not self.user.email == self.instance.email:\n raise ValidationError('This user has another email address than '\n 'the one that received the invitation.')\n return data\n\n def is_accepted(self):\n data = self.data\n return 'accept' in data and 'reject' not in data\n", "path": "euth/memberships/forms.py"}], "after_files": [{"content": "from django.http import Http404\nfrom django.shortcuts import redirect\nfrom django.views import generic\nfrom rules.compat import access_mixins as mixin\n\nfrom adhocracy4.projects import models as prj_models\nfrom adhocracy4.projects import views as prj_views\n\nfrom . 
import forms, models\n\n\nclass RequestsProjectDetailView(prj_views.ProjectDetailView):\n\n def handle_no_permission(self):\n \"\"\"\n Check if user clould join\n \"\"\"\n user = self.request.user\n is_member = user.is_authenticated() and self.project.has_member(user)\n\n if is_member:\n return super().handle_no_permission()\n else:\n return self.handle_no_membership()\n\n def handle_no_membership(self):\n membership_impossible = (\n not self.request.user.is_authenticated()\n or self.project.is_draft\n or self.project.has_member(self.request.user)\n )\n\n if membership_impossible:\n return super().handle_no_permission()\n else:\n return redirect('memberships-request',\n project_slug=self.project.slug)\n\n\nclass InviteView(mixin.LoginRequiredMixin, generic.UpdateView):\n model = models.Invite\n form_class = forms.InviteForm\n slug_field = 'token'\n slug_url_kwarg = 'invite_token'\n\n def form_valid(self, form):\n if form.is_accepted():\n form.instance.accept(self.request.user)\n return redirect(form.instance.project.get_absolute_url())\n else:\n form.instance.reject()\n return redirect('/')\n\n\nclass RequestView(mixin.LoginRequiredMixin, generic.DetailView):\n \"\"\"\n Displays membership request if it exists or allows to create one.\n \"\"\"\n model = models.Request\n slug_field = 'project__slug'\n slug_url_kwarg = 'project_slug'\n context_object_name = 'join_request'\n\n def get_queryset(self):\n return self.model.objects.filter(creator=self.request.user)\n\n def get(self, request, *args, **kwargs):\n if self.project.has_member(request.user):\n return redirect(self.project.get_absolute_url())\n else:\n return super().get(request, *args, **kwargs)\n\n def post(self, request, *args, **kwargs):\n user = request.user\n project = self.project\n models.Request.objects.request_membership(project, user)\n return redirect(self.request.path)\n\n def get_object(self, queryset=None):\n try:\n return super().get_object(queryset)\n except Http404:\n return None\n\n @property\n def project(self):\n project_slug = self.kwargs[self.slug_url_kwarg]\n return prj_models.Project.objects.get(slug=project_slug)\n", "path": "euth/memberships/views.py"}, {"content": "from django import forms\nfrom django.core.exceptions import ValidationError\n\nfrom . import models\n\n\nclass InviteForm(forms.ModelForm):\n accept = forms.CharField(required=False)\n reject = forms.CharField(required=False)\n\n class Meta:\n model = models.Invite\n fields = ['accept', 'reject']\n\n def clean(self):\n data = self.data\n if 'accept' not in data and 'reject' not in data:\n raise ValidationError('Reject or accept')\n return data\n\n def is_accepted(self):\n data = self.data\n return 'accept' in data and 'reject' not in data\n", "path": "euth/memberships/forms.py"}]} | 1,397 | 316 |
gh_patches_debug_6685 | rasdani/github-patches | git_diff | goauthentik__authentik-2536 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot use kubernetes secret - read only filesystem
I am trying to pass secrets from Kubernetes into Authentik. For that I added the following lines to the Helm values:
```
volumes:
- name: cluster-domain-cert
secret:
secretName: ${CLUSTER_DOMAIN_CERT}
defaultMode: 0666
items:
- key: tls.crt
path: fullchain.pem
- key: tls.key
path: privkey.pem
volumeMounts:
- name: cluster-domain-cert
mountPath: /certs/${CLUSTER_DOMAIN}
readOnly: false
```
The secret gets mounted successfully and the task that searches for certs finds them but fails to read them:
```
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 134, "request_id": "task-00f728fc22e14f6393762706e7a85158", "task_id": "00f728fc-22e1-4f63-9376-2706e7a85158", "task_name": "certificate_discovery", "timestamp": "2022-03-20T23:20:11.159295"}
{"event": "Failed to open file or invalid format", "exc": "OSError(30, 'Read-only file system')", "file": "PosixPath('/certs/angelnu.com/privkey.pem')", "level": "warning", "logger": "authentik.crypto.tasks", "pid": 134, "request_id": "task-00f728fc22e14f6393762706e7a85158", "timestamp": "2022-03-20T23:20:11.160119"}
{"event": "Failed to open file or invalid format", "exc": "OSError(30, 'Read-only file system')", "file": "PosixPath('/certs/angelnu.com/fullchain.pem')", "level": "warning", "logger": "authentik.crypto.tasks", "pid": 134, "request_id": "task-00f728fc22e14f6393762706e7a85158", "timestamp": "2022-03-20T23:20:11.160346"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 134, "request_id": "task-00f728fc22e14f6393762706e7a85158", "state": "SUCCESS", "task_id": "00f728fc-22e1-4f63-9376-2706e7a85158", "task_name": "certificate_discovery", "timestamp": "2022-03-20T23:20:11.172351"}
```
The problem is that since Kubernetes 1.9 the [secrets are mounted read-only](https://github.com/kubernetes/kubernetes/issues/62099) and the Authentik task tries to open the file in read and **write** mode: https://github.com/goauthentik/authentik/blob/457e17fec37664236224644f00a7b93a54f9e7fb/authentik/crypto/tasks.py#L64
```
authentik@authentik-worker-6c899bf47-djtgg:/$ mount|grep certs
tmpfs on /certs/angelnu.com type tmpfs (ro,relatime)
```
I think the fix is to change the task to open in read-only mode.
--- END ISSUE ---
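The failure is reproducible outside authentik: `open(..., "r+")` needs write permission on the file even if nothing is ever written, so it raises `OSError` (errno 30, `EROFS`) on a read-only mount, while a plain read-only open succeeds. Minimal sketch, with a placeholder path:

```python
# Minimal reproduction, independent of authentik; the path is a placeholder.
from pathlib import Path

cert = Path("/certs/example.com/fullchain.pem")

try:
    with open(cert, "r+", encoding="utf-8") as fh:  # needs write access -> fails on a ro mount
        fh.read()
except OSError as exc:
    print(f"read-write open failed: {exc}")

with open(cert, "r", encoding="utf-8") as fh:  # read-only open works
    body = fh.read()
```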
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/crypto/tasks.py`
Content:
```
1 """Crypto tasks"""
2 from glob import glob
3 from pathlib import Path
4
5 from cryptography.hazmat.backends import default_backend
6 from cryptography.hazmat.primitives.serialization import load_pem_private_key
7 from cryptography.x509.base import load_pem_x509_certificate
8 from django.utils.translation import gettext_lazy as _
9 from structlog.stdlib import get_logger
10
11 from authentik.crypto.models import CertificateKeyPair
12 from authentik.events.monitored_tasks import (
13 MonitoredTask,
14 TaskResult,
15 TaskResultStatus,
16 prefill_task,
17 )
18 from authentik.lib.config import CONFIG
19 from authentik.root.celery import CELERY_APP
20
21 LOGGER = get_logger()
22
23 MANAGED_DISCOVERED = "goauthentik.io/crypto/discovered/%s"
24
25
26 def ensure_private_key_valid(body: str):
27 """Attempt loading of a PEM Private key without password"""
28 load_pem_private_key(
29 str.encode("\n".join([x.strip() for x in body.split("\n")])),
30 password=None,
31 backend=default_backend(),
32 )
33 return body
34
35
36 def ensure_certificate_valid(body: str):
37 """Attempt loading of a PEM-encoded certificate"""
38 load_pem_x509_certificate(body.encode("utf-8"), default_backend())
39 return body
40
41
42 @CELERY_APP.task(bind=True, base=MonitoredTask)
43 @prefill_task
44 def certificate_discovery(self: MonitoredTask):
45 """Discover, import and update certificates from the filesystem"""
46 certs = {}
47 private_keys = {}
48 discovered = 0
49 for file in glob(CONFIG.y("cert_discovery_dir") + "/**", recursive=True):
50 path = Path(file)
51 if not path.exists():
52 continue
53 if path.is_dir():
54 continue
55 # For certbot setups, we want to ignore archive.
56 if "archive" in file:
57 continue
58 # Support certbot's directory structure
59 if path.name in ["fullchain.pem", "privkey.pem"]:
60 cert_name = path.parent.name
61 else:
62 cert_name = path.name.replace(path.suffix, "")
63 try:
64 with open(path, "r+", encoding="utf-8") as _file:
65 body = _file.read()
66 if "PRIVATE KEY" in body:
67 private_keys[cert_name] = ensure_private_key_valid(body)
68 else:
69 certs[cert_name] = ensure_certificate_valid(body)
70 except (OSError, ValueError) as exc:
71 LOGGER.warning("Failed to open file or invalid format", exc=exc, file=path)
72 discovered += 1
73 for name, cert_data in certs.items():
74 cert = CertificateKeyPair.objects.filter(managed=MANAGED_DISCOVERED % name).first()
75 if not cert:
76 cert = CertificateKeyPair(
77 name=name,
78 managed=MANAGED_DISCOVERED % name,
79 )
80 dirty = False
81 if cert.certificate_data != cert_data:
82 cert.certificate_data = cert_data
83 dirty = True
84 if name in private_keys:
85 if cert.key_data != private_keys[name]:
86 cert.key_data = private_keys[name]
87 dirty = True
88 if dirty:
89 cert.save()
90 self.set_status(
91 TaskResult(
92 TaskResultStatus.SUCCESSFUL,
93 messages=[_("Successfully imported %(count)d files." % {"count": discovered})],
94 )
95 )
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/crypto/tasks.py b/authentik/crypto/tasks.py
--- a/authentik/crypto/tasks.py
+++ b/authentik/crypto/tasks.py
@@ -61,7 +61,7 @@
else:
cert_name = path.name.replace(path.suffix, "")
try:
- with open(path, "r+", encoding="utf-8") as _file:
+ with open(path, "r", encoding="utf-8") as _file:
body = _file.read()
if "PRIVATE KEY" in body:
private_keys[cert_name] = ensure_private_key_valid(body)
| {"golden_diff": "diff --git a/authentik/crypto/tasks.py b/authentik/crypto/tasks.py\n--- a/authentik/crypto/tasks.py\n+++ b/authentik/crypto/tasks.py\n@@ -61,7 +61,7 @@\n else:\n cert_name = path.name.replace(path.suffix, \"\")\n try:\n- with open(path, \"r+\", encoding=\"utf-8\") as _file:\n+ with open(path, \"r\", encoding=\"utf-8\") as _file:\n body = _file.read()\n if \"PRIVATE KEY\" in body:\n private_keys[cert_name] = ensure_private_key_valid(body)\n", "issue": "Cannot use kubernetes secret - read only filesystem\nI am trying to pass secrets from Kubernetes into Authentik. For that I added the following lines to the Helm values:\r\n\r\n```\r\n volumes:\r\n - name: cluster-domain-cert\r\n secret:\r\n secretName: ${CLUSTER_DOMAIN_CERT}\r\n defaultMode: 0666\r\n items:\r\n - key: tls.crt\r\n path: fullchain.pem\r\n - key: tls.key\r\n path: privkey.pem\r\n volumeMounts:\r\n - name: cluster-domain-cert\r\n mountPath: /certs/${CLUSTER_DOMAIN}\r\n readOnly: false\r\n```\r\n\r\nThe secrets gets mounted sucessfuly and the task to search certs finds them but fails to read them:\r\n```\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"task_id\": \"00f728fc-22e1-4f63-9376-2706e7a85158\", \"task_name\": \"certificate_discovery\", \"timestamp\": \"2022-03-20T23:20:11.159295\"}\r\n{\"event\": \"Failed to open file or invalid format\", \"exc\": \"OSError(30, 'Read-only file system')\", \"file\": \"PosixPath('/certs/angelnu.com/privkey.pem')\", \"level\": \"warning\", \"logger\": \"authentik.crypto.tasks\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"timestamp\": \"2022-03-20T23:20:11.160119\"}\r\n{\"event\": \"Failed to open file or invalid format\", \"exc\": \"OSError(30, 'Read-only file system')\", \"file\": \"PosixPath('/certs/angelnu.com/fullchain.pem')\", \"level\": \"warning\", \"logger\": \"authentik.crypto.tasks\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"timestamp\": \"2022-03-20T23:20:11.160346\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"state\": \"SUCCESS\", \"task_id\": \"00f728fc-22e1-4f63-9376-2706e7a85158\", \"task_name\": \"certificate_discovery\", \"timestamp\": \"2022-03-20T23:20:11.172351\"}\r\n\r\n```\r\n\r\nThe problem is that since Kubernetes 1.9 the [secrets are mounted read-only](https://github.com/kubernetes/kubernetes/issues/62099) and the Authentik task tries to open the file in read and **write** mode: https://github.com/goauthentik/authentik/blob/457e17fec37664236224644f00a7b93a54f9e7fb/authentik/crypto/tasks.py#L64\r\n\r\n```\r\nauthentik@authentik-worker-6c899bf47-djtgg:/$ mount|grep certs\r\ntmpfs on /certs/angelnu.com type tmpfs (ro,relatime)\r\n```\r\n\r\nI think the fix is to change the task to open in read-only mode.\nCannot use kubernetes secret - read only filesystem\nI am trying to pass secrets from Kubernetes into Authentik. 
For that I added the following lines to the Helm values:\r\n\r\n```\r\n volumes:\r\n - name: cluster-domain-cert\r\n secret:\r\n secretName: ${CLUSTER_DOMAIN_CERT}\r\n defaultMode: 0666\r\n items:\r\n - key: tls.crt\r\n path: fullchain.pem\r\n - key: tls.key\r\n path: privkey.pem\r\n volumeMounts:\r\n - name: cluster-domain-cert\r\n mountPath: /certs/${CLUSTER_DOMAIN}\r\n readOnly: false\r\n```\r\n\r\nThe secrets gets mounted sucessfuly and the task to search certs finds them but fails to read them:\r\n```\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"task_id\": \"00f728fc-22e1-4f63-9376-2706e7a85158\", \"task_name\": \"certificate_discovery\", \"timestamp\": \"2022-03-20T23:20:11.159295\"}\r\n{\"event\": \"Failed to open file or invalid format\", \"exc\": \"OSError(30, 'Read-only file system')\", \"file\": \"PosixPath('/certs/angelnu.com/privkey.pem')\", \"level\": \"warning\", \"logger\": \"authentik.crypto.tasks\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"timestamp\": \"2022-03-20T23:20:11.160119\"}\r\n{\"event\": \"Failed to open file or invalid format\", \"exc\": \"OSError(30, 'Read-only file system')\", \"file\": \"PosixPath('/certs/angelnu.com/fullchain.pem')\", \"level\": \"warning\", \"logger\": \"authentik.crypto.tasks\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"timestamp\": \"2022-03-20T23:20:11.160346\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 134, \"request_id\": \"task-00f728fc22e14f6393762706e7a85158\", \"state\": \"SUCCESS\", \"task_id\": \"00f728fc-22e1-4f63-9376-2706e7a85158\", \"task_name\": \"certificate_discovery\", \"timestamp\": \"2022-03-20T23:20:11.172351\"}\r\n\r\n```\r\n\r\nThe problem is that since Kubernetes 1.9 the [secrets are mounted read-only](https://github.com/kubernetes/kubernetes/issues/62099) and the Authentik task tries to open the file in read and **write** mode: https://github.com/goauthentik/authentik/blob/457e17fec37664236224644f00a7b93a54f9e7fb/authentik/crypto/tasks.py#L64\r\n\r\n```\r\nauthentik@authentik-worker-6c899bf47-djtgg:/$ mount|grep certs\r\ntmpfs on /certs/angelnu.com type tmpfs (ro,relatime)\r\n```\r\n\r\nI think the fix is to change the task to open in read-only mode.\n", "before_files": [{"content": "\"\"\"Crypto tasks\"\"\"\nfrom glob import glob\nfrom pathlib import Path\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\nfrom cryptography.x509.base import load_pem_x509_certificate\nfrom django.utils.translation import gettext_lazy as _\nfrom structlog.stdlib import get_logger\n\nfrom authentik.crypto.models import CertificateKeyPair\nfrom authentik.events.monitored_tasks import (\n MonitoredTask,\n TaskResult,\n TaskResultStatus,\n prefill_task,\n)\nfrom authentik.lib.config import CONFIG\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger()\n\nMANAGED_DISCOVERED = \"goauthentik.io/crypto/discovered/%s\"\n\n\ndef ensure_private_key_valid(body: str):\n \"\"\"Attempt loading of a PEM Private key without password\"\"\"\n load_pem_private_key(\n str.encode(\"\\n\".join([x.strip() for x in body.split(\"\\n\")])),\n password=None,\n backend=default_backend(),\n )\n return body\n\n\ndef ensure_certificate_valid(body: str):\n \"\"\"Attempt loading of a PEM-encoded certificate\"\"\"\n 
load_pem_x509_certificate(body.encode(\"utf-8\"), default_backend())\n return body\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\n@prefill_task\ndef certificate_discovery(self: MonitoredTask):\n \"\"\"Discover, import and update certificates from the filesystem\"\"\"\n certs = {}\n private_keys = {}\n discovered = 0\n for file in glob(CONFIG.y(\"cert_discovery_dir\") + \"/**\", recursive=True):\n path = Path(file)\n if not path.exists():\n continue\n if path.is_dir():\n continue\n # For certbot setups, we want to ignore archive.\n if \"archive\" in file:\n continue\n # Support certbot's directory structure\n if path.name in [\"fullchain.pem\", \"privkey.pem\"]:\n cert_name = path.parent.name\n else:\n cert_name = path.name.replace(path.suffix, \"\")\n try:\n with open(path, \"r+\", encoding=\"utf-8\") as _file:\n body = _file.read()\n if \"PRIVATE KEY\" in body:\n private_keys[cert_name] = ensure_private_key_valid(body)\n else:\n certs[cert_name] = ensure_certificate_valid(body)\n except (OSError, ValueError) as exc:\n LOGGER.warning(\"Failed to open file or invalid format\", exc=exc, file=path)\n discovered += 1\n for name, cert_data in certs.items():\n cert = CertificateKeyPair.objects.filter(managed=MANAGED_DISCOVERED % name).first()\n if not cert:\n cert = CertificateKeyPair(\n name=name,\n managed=MANAGED_DISCOVERED % name,\n )\n dirty = False\n if cert.certificate_data != cert_data:\n cert.certificate_data = cert_data\n dirty = True\n if name in private_keys:\n if cert.key_data != private_keys[name]:\n cert.key_data = private_keys[name]\n dirty = True\n if dirty:\n cert.save()\n self.set_status(\n TaskResult(\n TaskResultStatus.SUCCESSFUL,\n messages=[_(\"Successfully imported %(count)d files.\" % {\"count\": discovered})],\n )\n )\n", "path": "authentik/crypto/tasks.py"}], "after_files": [{"content": "\"\"\"Crypto tasks\"\"\"\nfrom glob import glob\nfrom pathlib import Path\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\nfrom cryptography.x509.base import load_pem_x509_certificate\nfrom django.utils.translation import gettext_lazy as _\nfrom structlog.stdlib import get_logger\n\nfrom authentik.crypto.models import CertificateKeyPair\nfrom authentik.events.monitored_tasks import (\n MonitoredTask,\n TaskResult,\n TaskResultStatus,\n prefill_task,\n)\nfrom authentik.lib.config import CONFIG\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger()\n\nMANAGED_DISCOVERED = \"goauthentik.io/crypto/discovered/%s\"\n\n\ndef ensure_private_key_valid(body: str):\n \"\"\"Attempt loading of a PEM Private key without password\"\"\"\n load_pem_private_key(\n str.encode(\"\\n\".join([x.strip() for x in body.split(\"\\n\")])),\n password=None,\n backend=default_backend(),\n )\n return body\n\n\ndef ensure_certificate_valid(body: str):\n \"\"\"Attempt loading of a PEM-encoded certificate\"\"\"\n load_pem_x509_certificate(body.encode(\"utf-8\"), default_backend())\n return body\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\n@prefill_task\ndef certificate_discovery(self: MonitoredTask):\n \"\"\"Discover, import and update certificates from the filesystem\"\"\"\n certs = {}\n private_keys = {}\n discovered = 0\n for file in glob(CONFIG.y(\"cert_discovery_dir\") + \"/**\", recursive=True):\n path = Path(file)\n if not path.exists():\n continue\n if path.is_dir():\n continue\n # For certbot setups, we want to ignore archive.\n if \"archive\" in file:\n continue\n # Support certbot's 
directory structure\n if path.name in [\"fullchain.pem\", \"privkey.pem\"]:\n cert_name = path.parent.name\n else:\n cert_name = path.name.replace(path.suffix, \"\")\n try:\n with open(path, \"r\", encoding=\"utf-8\") as _file:\n body = _file.read()\n if \"PRIVATE KEY\" in body:\n private_keys[cert_name] = ensure_private_key_valid(body)\n else:\n certs[cert_name] = ensure_certificate_valid(body)\n except (OSError, ValueError) as exc:\n LOGGER.warning(\"Failed to open file or invalid format\", exc=exc, file=path)\n discovered += 1\n for name, cert_data in certs.items():\n cert = CertificateKeyPair.objects.filter(managed=MANAGED_DISCOVERED % name).first()\n if not cert:\n cert = CertificateKeyPair(\n name=name,\n managed=MANAGED_DISCOVERED % name,\n )\n dirty = False\n if cert.certificate_data != cert_data:\n cert.certificate_data = cert_data\n dirty = True\n if name in private_keys:\n if cert.key_data != private_keys[name]:\n cert.key_data = private_keys[name]\n dirty = True\n if dirty:\n cert.save()\n self.set_status(\n TaskResult(\n TaskResultStatus.SUCCESSFUL,\n messages=[_(\"Successfully imported %(count)d files.\" % {\"count\": discovered})],\n )\n )\n", "path": "authentik/crypto/tasks.py"}]} | 3,036 | 134 |
gh_patches_debug_43660 | rasdani/github-patches | git_diff | privacyidea__privacyidea-1068 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tokenhandler should delete tokeninfo
An additional action in the token handler should be to delete a tokeninfo field.
This action would only require the ``tokeninfo_key``.
See https://community.privacyidea.org/t/how-to-clear-validity-period-on-privacyidea-once-set-through-the-ui-interface/667
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `privacyidea/lib/eventhandler/tokenhandler.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # 2017-07-18 Cornelius Kölbel <[email protected]>
4 # Allow setting time with timedelta
5 # 2017-01-21 Cornelius Kölbel <[email protected]>
6 # Add required mobile number and email address when enrolling tokens
7 # added with the help of splashx
8 # 2016-11-14 Cornelius Kölbel <[email protected]>
9 # Initial writup
10 #
11 # License: AGPLv3
12 # (c) 2016. Cornelius Kölbel
13 #
14 # This code is free software; you can redistribute it and/or
15 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
16 # License as published by the Free Software Foundation; either
17 # version 3 of the License, or any later version.
18 #
19 # This code is distributed in the hope that it will be useful,
20 # but WITHOUT ANY WARRANTY; without even the implied warranty of
21 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
22 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
23 #
24 # You should have received a copy of the GNU Affero General Public
25 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
26 #
27 #
28 __doc__ = """This is the event handler module for token actions.
29 You can attach token actions like enable, disable, delete, unassign,... of the
30
31 * current token
32 * all the user's tokens
33 * all unassigned tokens
34 * all disabled tokens
35 * ...
36 """
37 from privacyidea.lib.eventhandler.base import BaseEventHandler
38 from privacyidea.lib.token import (get_token_types, set_validity_period_end,
39 set_validity_period_start)
40 from privacyidea.lib.realm import get_realms
41 from privacyidea.lib.token import (set_realms, remove_token, enable_token,
42 unassign_token, init_token, set_description,
43 set_count_window, add_tokeninfo,
44 set_failcounter)
45 from privacyidea.lib.utils import (parse_date, is_true,
46 parse_time_offset_from_now)
47 from privacyidea.lib.tokenclass import DATE_FORMAT, AUTH_DATE_FORMAT
48 from privacyidea.lib import _
49 import json
50 import logging
51 import datetime
52 from dateutil.parser import parse as parse_date_string
53 from dateutil.tz import tzlocal
54
55 log = logging.getLogger(__name__)
56
57
58 class ACTION_TYPE(object):
59 """
60 Allowed actions
61 """
62 SET_TOKENREALM = "set tokenrealm"
63 DELETE = "delete"
64 UNASSIGN = "unassign"
65 DISABLE = "disable"
66 ENABLE = "enable"
67 INIT = "enroll"
68 SET_DESCRIPTION = "set description"
69 SET_VALIDITY = "set validity"
70 SET_COUNTWINDOW = "set countwindow"
71 SET_TOKENINFO = "set tokeninfo"
72 SET_FAILCOUNTER = "set failcounter"
73
74
75 class VALIDITY(object):
76 """
77 Allowed validity options
78 """
79 START= "valid from"
80 END = "valid till"
81
82
83 class TokenEventHandler(BaseEventHandler):
84 """
85 An Eventhandler needs to return a list of actions, which it can handle.
86
87 It also returns a list of allowed action and conditions
88
89 It returns an identifier, which can be used in the eventhandlig definitions
90 """
91
92 identifier = "Token"
93 description = "This event handler can trigger new actions on tokens."
94
95 @property
96 def actions(cls):
97 """
98 This method returns a dictionary of allowed actions and possible
99 options in this handler module.
100
101 :return: dict with actions
102 """
103 realm_list = get_realms().keys()
104 actions = {ACTION_TYPE.SET_TOKENREALM:
105 {"realm":
106 {"type": "str",
107 "required": True,
108 "description": _("set a new realm of the token"),
109 "value": realm_list},
110 "only_realm":
111 {"type": "bool",
112 "description": _("The new realm will be the only "
113 "realm of the token. I.e. all "
114 "other realms will be removed "
115 "from this token. Otherwise the "
116 "realm will be added to the token.")
117 }
118 },
119 ACTION_TYPE.DELETE: {},
120 ACTION_TYPE.UNASSIGN: {},
121 ACTION_TYPE.DISABLE: {},
122 ACTION_TYPE.ENABLE: {},
123 ACTION_TYPE.INIT:
124 {"tokentype":
125 {"type": "str",
126 "required": True,
127 "description": _("Token type to create"),
128 "value": get_token_types()
129 },
130 "user":
131 {"type": "bool",
132 "description": _("Assign token to user in "
133 "request or to tokenowner.")},
134 "realm":
135 {"type": "str",
136 "required": False,
137 "description": _("Set the realm of the newly "
138 "created token."),
139 "value": realm_list},
140 "dynamic_phone": {
141 "type": "bool",
142 "visibleIf": "tokentype",
143 "visibleValue": "sms",
144 "description": _("Dynamically read the mobile number "
145 "from the user store.")
146 },
147 "motppin": {
148 "type": "str",
149 "visibleIf": "tokentype",
150 "visibleValue": "motp",
151 "description": _("Set the MOTP PIN of the MOTP "
152 "token during enrollment. This "
153 "is a required value for "
154 "enrolling MOTP tokens.")}
155 },
156 ACTION_TYPE.SET_DESCRIPTION:
157 {"description":
158 {
159 "type": "str",
160 "description": _("The new description of the "
161 "token.")
162 }
163 },
164 ACTION_TYPE.SET_VALIDITY:
165 {VALIDITY.START: {
166 "type": "str",
167 "description": _("The token will be valid starting "
168 "at the given date. Can be a fixed "
169 "date or an offset like +10m, "
170 "+24h, +7d.")
171 },
172 VALIDITY.END: {
173 "type": "str",
174 "description": _("The token will be valid until "
175 "the given date. Can be a fixed "
176 "date or an offset like +10m, "
177 "+24h, +7d.")
178 }
179 },
180 ACTION_TYPE.SET_COUNTWINDOW:
181 {"count window":
182 {
183 # TODO: should be "int" but we do not support
184 # this at the moment.
185 "type": "str",
186 "required": True,
187 "description": _("Set the new count window of "
188 "the token.")
189 }
190 },
191 ACTION_TYPE.SET_FAILCOUNTER:
192 {
193 "fail counter":
194 {
195 "type": "str",
196 "required": True,
197 "description": _("Set the failcounter of "
198 "the token.")
199 }
200 },
201 ACTION_TYPE.SET_TOKENINFO:
202 {"key":
203 {
204 "type": "str",
205 "required": True,
206 "description": _("Set this tokeninfo key.")
207 },
208 "value":
209 {
210 "type": "str",
211 "description": _("Set the above key the this "
212 "value.")
213 }
214 }
215 }
216 return actions
217
218 def do(self, action, options=None):
219 """
220 This method executes the defined action in the given event.
221
222 :param action:
223 :param options: Contains the flask parameters g, request, response
224 and the handler_def configuration
225 :type options: dict
226 :return:
227 """
228 ret = True
229 g = options.get("g")
230 request = options.get("request")
231 response = options.get("response")
232 content = json.loads(response.data)
233 handler_def = options.get("handler_def")
234 handler_options = handler_def.get("options", {})
235
236 serial = request.all_data.get("serial") or \
237 content.get("detail", {}).get("serial") or \
238 g.audit_object.audit_data.get("serial")
239
240 if action.lower() in [ACTION_TYPE.SET_TOKENREALM,
241 ACTION_TYPE.SET_DESCRIPTION,
242 ACTION_TYPE.DELETE, ACTION_TYPE.DISABLE,
243 ACTION_TYPE.ENABLE, ACTION_TYPE.UNASSIGN,
244 ACTION_TYPE.SET_VALIDITY,
245 ACTION_TYPE.SET_COUNTWINDOW,
246 ACTION_TYPE.SET_TOKENINFO,
247 ACTION_TYPE.SET_FAILCOUNTER]:
248 if serial:
249 log.info("{0!s} for token {1!s}".format(action, serial))
250 if action.lower() == ACTION_TYPE.SET_TOKENREALM:
251 realm = handler_options.get("realm")
252 only_realm = is_true(handler_options.get("only_realm"))
253 # Set the realm..
254 log.info("Setting realm of token {0!s} to {1!s}".format(
255 serial, realm))
256 # Add the token realm
257 set_realms(serial, [realm], add=not only_realm)
258 elif action.lower() == ACTION_TYPE.DELETE:
259 remove_token(serial=serial)
260 elif action.lower() == ACTION_TYPE.DISABLE:
261 enable_token(serial, enable=False)
262 elif action.lower() == ACTION_TYPE.ENABLE:
263 enable_token(serial, enable=True)
264 elif action.lower() == ACTION_TYPE.UNASSIGN:
265 unassign_token(serial)
266 elif action.lower() == ACTION_TYPE.SET_DESCRIPTION:
267 description = handler_options.get("description") or ""
268 description, td = parse_time_offset_from_now(description)
269 s_now = (datetime.datetime.now(tzlocal()) + td).strftime(
270 AUTH_DATE_FORMAT)
271 set_description(serial,
272 description.format(
273 current_time=s_now,
274 now=s_now,
275 client_ip=g.client_ip,
276 ua_browser=request.user_agent.browser,
277 ua_string=request.user_agent.string))
278 elif action.lower() == ACTION_TYPE.SET_COUNTWINDOW:
279 set_count_window(serial,
280 int(handler_options.get("count window",
281 50)))
282 elif action.lower() == ACTION_TYPE.SET_TOKENINFO:
283 tokeninfo = handler_options.get("value") or ""
284 tokeninfo, td = parse_time_offset_from_now(tokeninfo)
285 s_now = (datetime.datetime.now(tzlocal()) + td).strftime(
286 AUTH_DATE_FORMAT)
287 try:
288 username = request.User.loginname
289 realm = request.User.realm
290 except Exception:
291 username = "N/A"
292 realm = "N/A"
293 add_tokeninfo(serial, handler_options.get("key"),
294 tokeninfo.format(
295 current_time=s_now,
296 now=s_now,
297 client_ip=g.client_ip,
298 username=username,
299 realm=realm,
300 ua_browser=request.user_agent.browser,
301 ua_string=request.user_agent.string))
302 elif action.lower() == ACTION_TYPE.SET_VALIDITY:
303 start_date = handler_options.get(VALIDITY.START)
304 end_date = handler_options.get(VALIDITY.END)
305 if start_date:
306 d = parse_date(start_date)
307 set_validity_period_start(serial, None,
308 d.strftime(DATE_FORMAT))
309 if end_date:
310 d = parse_date(end_date)
311 set_validity_period_end(serial, None,
312 d.strftime(DATE_FORMAT))
313 elif action.lower() == ACTION_TYPE.SET_FAILCOUNTER:
314 try:
315 set_failcounter(serial,
316 int(handler_options.get("fail counter")))
317 except Exception as exx:
318 log.warning("Misconfiguration: Failed to set fail "
319 "counter!")
320 else:
321 log.info("Action {0!s} requires serial number. But no serial "
322 "number could be found in request.")
323
324 if action.lower() == ACTION_TYPE.INIT:
325 log.info("Initializing new token")
326 init_param = {"type": handler_options.get("tokentype"),
327 "genkey": 1,
328 "realm": handler_options.get("realm", "")}
329 user = None
330 if is_true(handler_options.get("user")):
331 user = self._get_tokenowner(request)
332 tokentype = handler_options.get("tokentype")
333 # Some tokentypes need additional parameters or otherwise
334 # will fail to enroll.
335 # TODO: Other tokentypes will require additional parameters
336 if tokentype == "sms":
337 if handler_options.get("dynamic_phone"):
338 init_param["dynamic_phone"] = 1
339 else:
340 init_param['phone'] = user.get_user_phone(
341 phone_type='mobile')
342 if not init_param['phone']:
343 log.warning("Enrolling SMS token. But the user "
344 "{0!r} has no mobile number!".format(user))
345 elif tokentype == "email":
346 init_param['email'] = user.info.get("email", "")
347 if not init_param['email']:
348 log.warning("Enrolling EMail token. But the user {0!s}"
349 "has no email address!".format(user))
350 elif tokentype == "motp":
351 init_param['motppin'] = handler_options.get("motppin")
352
353 t = init_token(param=init_param, user=user)
354 log.info("New token {0!s} enrolled.".format(t.token.serial))
355
356 return ret
357
358
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/privacyidea/lib/eventhandler/tokenhandler.py b/privacyidea/lib/eventhandler/tokenhandler.py
--- a/privacyidea/lib/eventhandler/tokenhandler.py
+++ b/privacyidea/lib/eventhandler/tokenhandler.py
@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
#
+# 2018-34-14 Paul Lettich <[email protected]>
+# Add "delete tokeninfo" action
# 2017-07-18 Cornelius Kölbel <[email protected]>
# Allow setting time with timedelta
# 2017-01-21 Cornelius Kölbel <[email protected]>
@@ -41,7 +43,7 @@
from privacyidea.lib.token import (set_realms, remove_token, enable_token,
unassign_token, init_token, set_description,
set_count_window, add_tokeninfo,
- set_failcounter)
+ set_failcounter, delete_tokeninfo)
from privacyidea.lib.utils import (parse_date, is_true,
parse_time_offset_from_now)
from privacyidea.lib.tokenclass import DATE_FORMAT, AUTH_DATE_FORMAT
@@ -70,13 +72,14 @@
SET_COUNTWINDOW = "set countwindow"
SET_TOKENINFO = "set tokeninfo"
SET_FAILCOUNTER = "set failcounter"
+ DELETE_TOKENINFO = "delete tokeninfo"
class VALIDITY(object):
"""
Allowed validity options
"""
- START= "valid from"
+ START = "valid from"
END = "valid till"
@@ -211,6 +214,14 @@
"description": _("Set the above key the this "
"value.")
}
+ },
+ ACTION_TYPE.DELETE_TOKENINFO:
+ {"key":
+ {
+ "type": "str",
+ "required": True,
+ "description": _("Delete this tokeninfo key.")
+ }
}
}
return actions
@@ -244,7 +255,8 @@
ACTION_TYPE.SET_VALIDITY,
ACTION_TYPE.SET_COUNTWINDOW,
ACTION_TYPE.SET_TOKENINFO,
- ACTION_TYPE.SET_FAILCOUNTER]:
+ ACTION_TYPE.SET_FAILCOUNTER,
+ ACTION_TYPE.DELETE_TOKENINFO]:
if serial:
log.info("{0!s} for token {1!s}".format(action, serial))
if action.lower() == ACTION_TYPE.SET_TOKENREALM:
@@ -299,6 +311,8 @@
realm=realm,
ua_browser=request.user_agent.browser,
ua_string=request.user_agent.string))
+ elif action.lower() == ACTION_TYPE.DELETE_TOKENINFO:
+ delete_tokeninfo(serial, handler_options.get("key"))
elif action.lower() == ACTION_TYPE.SET_VALIDITY:
start_date = handler_options.get(VALIDITY.START)
end_date = handler_options.get(VALIDITY.END)
@@ -354,4 +368,3 @@
log.info("New token {0!s} enrolled.".format(t.token.serial))
return ret
-
| {"golden_diff": "diff --git a/privacyidea/lib/eventhandler/tokenhandler.py b/privacyidea/lib/eventhandler/tokenhandler.py\n--- a/privacyidea/lib/eventhandler/tokenhandler.py\n+++ b/privacyidea/lib/eventhandler/tokenhandler.py\n@@ -1,5 +1,7 @@\n # -*- coding: utf-8 -*-\n #\n+# 2018-34-14 Paul Lettich <[email protected]>\n+# Add \"delete tokeninfo\" action\n # 2017-07-18 Cornelius K\u00f6lbel <[email protected]>\n # Allow setting time with timedelta\n # 2017-01-21 Cornelius K\u00f6lbel <[email protected]>\n@@ -41,7 +43,7 @@\n from privacyidea.lib.token import (set_realms, remove_token, enable_token,\n unassign_token, init_token, set_description,\n set_count_window, add_tokeninfo,\n- set_failcounter)\n+ set_failcounter, delete_tokeninfo)\n from privacyidea.lib.utils import (parse_date, is_true,\n parse_time_offset_from_now)\n from privacyidea.lib.tokenclass import DATE_FORMAT, AUTH_DATE_FORMAT\n@@ -70,13 +72,14 @@\n SET_COUNTWINDOW = \"set countwindow\"\n SET_TOKENINFO = \"set tokeninfo\"\n SET_FAILCOUNTER = \"set failcounter\"\n+ DELETE_TOKENINFO = \"delete tokeninfo\"\n \n \n class VALIDITY(object):\n \"\"\"\n Allowed validity options\n \"\"\"\n- START= \"valid from\"\n+ START = \"valid from\"\n END = \"valid till\"\n \n \n@@ -211,6 +214,14 @@\n \"description\": _(\"Set the above key the this \"\n \"value.\")\n }\n+ },\n+ ACTION_TYPE.DELETE_TOKENINFO:\n+ {\"key\":\n+ {\n+ \"type\": \"str\",\n+ \"required\": True,\n+ \"description\": _(\"Delete this tokeninfo key.\")\n+ }\n }\n }\n return actions\n@@ -244,7 +255,8 @@\n ACTION_TYPE.SET_VALIDITY,\n ACTION_TYPE.SET_COUNTWINDOW,\n ACTION_TYPE.SET_TOKENINFO,\n- ACTION_TYPE.SET_FAILCOUNTER]:\n+ ACTION_TYPE.SET_FAILCOUNTER,\n+ ACTION_TYPE.DELETE_TOKENINFO]:\n if serial:\n log.info(\"{0!s} for token {1!s}\".format(action, serial))\n if action.lower() == ACTION_TYPE.SET_TOKENREALM:\n@@ -299,6 +311,8 @@\n realm=realm,\n ua_browser=request.user_agent.browser,\n ua_string=request.user_agent.string))\n+ elif action.lower() == ACTION_TYPE.DELETE_TOKENINFO:\n+ delete_tokeninfo(serial, handler_options.get(\"key\"))\n elif action.lower() == ACTION_TYPE.SET_VALIDITY:\n start_date = handler_options.get(VALIDITY.START)\n end_date = handler_options.get(VALIDITY.END)\n@@ -354,4 +368,3 @@\n log.info(\"New token {0!s} enrolled.\".format(t.token.serial))\n \n return ret\n-\n", "issue": "Tokenhandler should delete tokeninfo\nAn additional action in the token handler should be to delete a tokeninfo field.\r\nThis action would only require the ``tokeninfo_key``.\r\n\r\nSee https://community.privacyidea.org/t/how-to-clear-validity-period-on-privacyidea-once-set-through-the-ui-interface/667\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2017-07-18 Cornelius K\u00f6lbel <[email protected]>\n# Allow setting time with timedelta\n# 2017-01-21 Cornelius K\u00f6lbel <[email protected]>\n# Add required mobile number and email address when enrolling tokens\n# added with the help of splashx\n# 2016-11-14 Cornelius K\u00f6lbel <[email protected]>\n# Initial writup\n#\n# License: AGPLv3\n# (c) 2016. Cornelius K\u00f6lbel\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n#\n__doc__ = \"\"\"This is the event handler module for token actions.\nYou can attach token actions like enable, disable, delete, unassign,... of the\n\n * current token\n * all the user's tokens\n * all unassigned tokens\n * all disabled tokens\n * ...\n\"\"\"\nfrom privacyidea.lib.eventhandler.base import BaseEventHandler\nfrom privacyidea.lib.token import (get_token_types, set_validity_period_end,\n set_validity_period_start)\nfrom privacyidea.lib.realm import get_realms\nfrom privacyidea.lib.token import (set_realms, remove_token, enable_token,\n unassign_token, init_token, set_description,\n set_count_window, add_tokeninfo,\n set_failcounter)\nfrom privacyidea.lib.utils import (parse_date, is_true,\n parse_time_offset_from_now)\nfrom privacyidea.lib.tokenclass import DATE_FORMAT, AUTH_DATE_FORMAT\nfrom privacyidea.lib import _\nimport json\nimport logging\nimport datetime\nfrom dateutil.parser import parse as parse_date_string\nfrom dateutil.tz import tzlocal\n\nlog = logging.getLogger(__name__)\n\n\nclass ACTION_TYPE(object):\n \"\"\"\n Allowed actions\n \"\"\"\n SET_TOKENREALM = \"set tokenrealm\"\n DELETE = \"delete\"\n UNASSIGN = \"unassign\"\n DISABLE = \"disable\"\n ENABLE = \"enable\"\n INIT = \"enroll\"\n SET_DESCRIPTION = \"set description\"\n SET_VALIDITY = \"set validity\"\n SET_COUNTWINDOW = \"set countwindow\"\n SET_TOKENINFO = \"set tokeninfo\"\n SET_FAILCOUNTER = \"set failcounter\"\n\n\nclass VALIDITY(object):\n \"\"\"\n Allowed validity options\n \"\"\"\n START= \"valid from\"\n END = \"valid till\"\n\n\nclass TokenEventHandler(BaseEventHandler):\n \"\"\"\n An Eventhandler needs to return a list of actions, which it can handle.\n\n It also returns a list of allowed action and conditions\n\n It returns an identifier, which can be used in the eventhandlig definitions\n \"\"\"\n\n identifier = \"Token\"\n description = \"This event handler can trigger new actions on tokens.\"\n\n @property\n def actions(cls):\n \"\"\"\n This method returns a dictionary of allowed actions and possible\n options in this handler module.\n\n :return: dict with actions\n \"\"\"\n realm_list = get_realms().keys()\n actions = {ACTION_TYPE.SET_TOKENREALM:\n {\"realm\":\n {\"type\": \"str\",\n \"required\": True,\n \"description\": _(\"set a new realm of the token\"),\n \"value\": realm_list},\n \"only_realm\":\n {\"type\": \"bool\",\n \"description\": _(\"The new realm will be the only \"\n \"realm of the token. I.e. all \"\n \"other realms will be removed \"\n \"from this token. 
Otherwise the \"\n \"realm will be added to the token.\")\n }\n },\n ACTION_TYPE.DELETE: {},\n ACTION_TYPE.UNASSIGN: {},\n ACTION_TYPE.DISABLE: {},\n ACTION_TYPE.ENABLE: {},\n ACTION_TYPE.INIT:\n {\"tokentype\":\n {\"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Token type to create\"),\n \"value\": get_token_types()\n },\n \"user\":\n {\"type\": \"bool\",\n \"description\": _(\"Assign token to user in \"\n \"request or to tokenowner.\")},\n \"realm\":\n {\"type\": \"str\",\n \"required\": False,\n \"description\": _(\"Set the realm of the newly \"\n \"created token.\"),\n \"value\": realm_list},\n \"dynamic_phone\": {\n \"type\": \"bool\",\n \"visibleIf\": \"tokentype\",\n \"visibleValue\": \"sms\",\n \"description\": _(\"Dynamically read the mobile number \"\n \"from the user store.\")\n },\n \"motppin\": {\n \"type\": \"str\",\n \"visibleIf\": \"tokentype\",\n \"visibleValue\": \"motp\",\n \"description\": _(\"Set the MOTP PIN of the MOTP \"\n \"token during enrollment. This \"\n \"is a required value for \"\n \"enrolling MOTP tokens.\")}\n },\n ACTION_TYPE.SET_DESCRIPTION:\n {\"description\":\n {\n \"type\": \"str\",\n \"description\": _(\"The new description of the \"\n \"token.\")\n }\n },\n ACTION_TYPE.SET_VALIDITY:\n {VALIDITY.START: {\n \"type\": \"str\",\n \"description\": _(\"The token will be valid starting \"\n \"at the given date. Can be a fixed \"\n \"date or an offset like +10m, \"\n \"+24h, +7d.\")\n },\n VALIDITY.END: {\n \"type\": \"str\",\n \"description\": _(\"The token will be valid until \"\n \"the given date. Can be a fixed \"\n \"date or an offset like +10m, \"\n \"+24h, +7d.\")\n }\n },\n ACTION_TYPE.SET_COUNTWINDOW:\n {\"count window\":\n {\n # TODO: should be \"int\" but we do not support\n # this at the moment.\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Set the new count window of \"\n \"the token.\")\n }\n },\n ACTION_TYPE.SET_FAILCOUNTER:\n {\n \"fail counter\":\n {\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Set the failcounter of \"\n \"the token.\")\n }\n },\n ACTION_TYPE.SET_TOKENINFO:\n {\"key\":\n {\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Set this tokeninfo key.\")\n },\n \"value\":\n {\n \"type\": \"str\",\n \"description\": _(\"Set the above key the this \"\n \"value.\")\n }\n }\n }\n return actions\n\n def do(self, action, options=None):\n \"\"\"\n This method executes the defined action in the given event.\n\n :param action:\n :param options: Contains the flask parameters g, request, response\n and the handler_def configuration\n :type options: dict\n :return:\n \"\"\"\n ret = True\n g = options.get(\"g\")\n request = options.get(\"request\")\n response = options.get(\"response\")\n content = json.loads(response.data)\n handler_def = options.get(\"handler_def\")\n handler_options = handler_def.get(\"options\", {})\n\n serial = request.all_data.get(\"serial\") or \\\n content.get(\"detail\", {}).get(\"serial\") or \\\n g.audit_object.audit_data.get(\"serial\")\n\n if action.lower() in [ACTION_TYPE.SET_TOKENREALM,\n ACTION_TYPE.SET_DESCRIPTION,\n ACTION_TYPE.DELETE, ACTION_TYPE.DISABLE,\n ACTION_TYPE.ENABLE, ACTION_TYPE.UNASSIGN,\n ACTION_TYPE.SET_VALIDITY,\n ACTION_TYPE.SET_COUNTWINDOW,\n ACTION_TYPE.SET_TOKENINFO,\n ACTION_TYPE.SET_FAILCOUNTER]:\n if serial:\n log.info(\"{0!s} for token {1!s}\".format(action, serial))\n if action.lower() == ACTION_TYPE.SET_TOKENREALM:\n realm = handler_options.get(\"realm\")\n only_realm = 
is_true(handler_options.get(\"only_realm\"))\n # Set the realm..\n log.info(\"Setting realm of token {0!s} to {1!s}\".format(\n serial, realm))\n # Add the token realm\n set_realms(serial, [realm], add=not only_realm)\n elif action.lower() == ACTION_TYPE.DELETE:\n remove_token(serial=serial)\n elif action.lower() == ACTION_TYPE.DISABLE:\n enable_token(serial, enable=False)\n elif action.lower() == ACTION_TYPE.ENABLE:\n enable_token(serial, enable=True)\n elif action.lower() == ACTION_TYPE.UNASSIGN:\n unassign_token(serial)\n elif action.lower() == ACTION_TYPE.SET_DESCRIPTION:\n description = handler_options.get(\"description\") or \"\"\n description, td = parse_time_offset_from_now(description)\n s_now = (datetime.datetime.now(tzlocal()) + td).strftime(\n AUTH_DATE_FORMAT)\n set_description(serial,\n description.format(\n current_time=s_now,\n now=s_now,\n client_ip=g.client_ip,\n ua_browser=request.user_agent.browser,\n ua_string=request.user_agent.string))\n elif action.lower() == ACTION_TYPE.SET_COUNTWINDOW:\n set_count_window(serial,\n int(handler_options.get(\"count window\",\n 50)))\n elif action.lower() == ACTION_TYPE.SET_TOKENINFO:\n tokeninfo = handler_options.get(\"value\") or \"\"\n tokeninfo, td = parse_time_offset_from_now(tokeninfo)\n s_now = (datetime.datetime.now(tzlocal()) + td).strftime(\n AUTH_DATE_FORMAT)\n try:\n username = request.User.loginname\n realm = request.User.realm\n except Exception:\n username = \"N/A\"\n realm = \"N/A\"\n add_tokeninfo(serial, handler_options.get(\"key\"),\n tokeninfo.format(\n current_time=s_now,\n now=s_now,\n client_ip=g.client_ip,\n username=username,\n realm=realm,\n ua_browser=request.user_agent.browser,\n ua_string=request.user_agent.string))\n elif action.lower() == ACTION_TYPE.SET_VALIDITY:\n start_date = handler_options.get(VALIDITY.START)\n end_date = handler_options.get(VALIDITY.END)\n if start_date:\n d = parse_date(start_date)\n set_validity_period_start(serial, None,\n d.strftime(DATE_FORMAT))\n if end_date:\n d = parse_date(end_date)\n set_validity_period_end(serial, None,\n d.strftime(DATE_FORMAT))\n elif action.lower() == ACTION_TYPE.SET_FAILCOUNTER:\n try:\n set_failcounter(serial,\n int(handler_options.get(\"fail counter\")))\n except Exception as exx:\n log.warning(\"Misconfiguration: Failed to set fail \"\n \"counter!\")\n else:\n log.info(\"Action {0!s} requires serial number. But no serial \"\n \"number could be found in request.\")\n\n if action.lower() == ACTION_TYPE.INIT:\n log.info(\"Initializing new token\")\n init_param = {\"type\": handler_options.get(\"tokentype\"),\n \"genkey\": 1,\n \"realm\": handler_options.get(\"realm\", \"\")}\n user = None\n if is_true(handler_options.get(\"user\")):\n user = self._get_tokenowner(request)\n tokentype = handler_options.get(\"tokentype\")\n # Some tokentypes need additional parameters or otherwise\n # will fail to enroll.\n # TODO: Other tokentypes will require additional parameters\n if tokentype == \"sms\":\n if handler_options.get(\"dynamic_phone\"):\n init_param[\"dynamic_phone\"] = 1\n else:\n init_param['phone'] = user.get_user_phone(\n phone_type='mobile')\n if not init_param['phone']:\n log.warning(\"Enrolling SMS token. But the user \"\n \"{0!r} has no mobile number!\".format(user))\n elif tokentype == \"email\":\n init_param['email'] = user.info.get(\"email\", \"\")\n if not init_param['email']:\n log.warning(\"Enrolling EMail token. 
But the user {0!s}\"\n \"has no email address!\".format(user))\n elif tokentype == \"motp\":\n init_param['motppin'] = handler_options.get(\"motppin\")\n\n t = init_token(param=init_param, user=user)\n log.info(\"New token {0!s} enrolled.\".format(t.token.serial))\n\n return ret\n\n", "path": "privacyidea/lib/eventhandler/tokenhandler.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# 2018-34-14 Paul Lettich <[email protected]>\n# Add \"delete tokeninfo\" action\n# 2017-07-18 Cornelius K\u00f6lbel <[email protected]>\n# Allow setting time with timedelta\n# 2017-01-21 Cornelius K\u00f6lbel <[email protected]>\n# Add required mobile number and email address when enrolling tokens\n# added with the help of splashx\n# 2016-11-14 Cornelius K\u00f6lbel <[email protected]>\n# Initial writup\n#\n# License: AGPLv3\n# (c) 2016. Cornelius K\u00f6lbel\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n#\n__doc__ = \"\"\"This is the event handler module for token actions.\nYou can attach token actions like enable, disable, delete, unassign,... of the\n\n * current token\n * all the user's tokens\n * all unassigned tokens\n * all disabled tokens\n * ...\n\"\"\"\nfrom privacyidea.lib.eventhandler.base import BaseEventHandler\nfrom privacyidea.lib.token import (get_token_types, set_validity_period_end,\n set_validity_period_start)\nfrom privacyidea.lib.realm import get_realms\nfrom privacyidea.lib.token import (set_realms, remove_token, enable_token,\n unassign_token, init_token, set_description,\n set_count_window, add_tokeninfo,\n set_failcounter, delete_tokeninfo)\nfrom privacyidea.lib.utils import (parse_date, is_true,\n parse_time_offset_from_now)\nfrom privacyidea.lib.tokenclass import DATE_FORMAT, AUTH_DATE_FORMAT\nfrom privacyidea.lib import _\nimport json\nimport logging\nimport datetime\nfrom dateutil.parser import parse as parse_date_string\nfrom dateutil.tz import tzlocal\n\nlog = logging.getLogger(__name__)\n\n\nclass ACTION_TYPE(object):\n \"\"\"\n Allowed actions\n \"\"\"\n SET_TOKENREALM = \"set tokenrealm\"\n DELETE = \"delete\"\n UNASSIGN = \"unassign\"\n DISABLE = \"disable\"\n ENABLE = \"enable\"\n INIT = \"enroll\"\n SET_DESCRIPTION = \"set description\"\n SET_VALIDITY = \"set validity\"\n SET_COUNTWINDOW = \"set countwindow\"\n SET_TOKENINFO = \"set tokeninfo\"\n SET_FAILCOUNTER = \"set failcounter\"\n DELETE_TOKENINFO = \"delete tokeninfo\"\n\n\nclass VALIDITY(object):\n \"\"\"\n Allowed validity options\n \"\"\"\n START = \"valid from\"\n END = \"valid till\"\n\n\nclass TokenEventHandler(BaseEventHandler):\n \"\"\"\n An Eventhandler needs to return a list of actions, which it can handle.\n\n It also returns a list of allowed action and conditions\n\n It returns an identifier, which can be used in the eventhandlig definitions\n \"\"\"\n\n identifier = \"Token\"\n description = \"This event handler can trigger new actions on tokens.\"\n\n @property\n def actions(cls):\n 
\"\"\"\n This method returns a dictionary of allowed actions and possible\n options in this handler module.\n\n :return: dict with actions\n \"\"\"\n realm_list = get_realms().keys()\n actions = {ACTION_TYPE.SET_TOKENREALM:\n {\"realm\":\n {\"type\": \"str\",\n \"required\": True,\n \"description\": _(\"set a new realm of the token\"),\n \"value\": realm_list},\n \"only_realm\":\n {\"type\": \"bool\",\n \"description\": _(\"The new realm will be the only \"\n \"realm of the token. I.e. all \"\n \"other realms will be removed \"\n \"from this token. Otherwise the \"\n \"realm will be added to the token.\")\n }\n },\n ACTION_TYPE.DELETE: {},\n ACTION_TYPE.UNASSIGN: {},\n ACTION_TYPE.DISABLE: {},\n ACTION_TYPE.ENABLE: {},\n ACTION_TYPE.INIT:\n {\"tokentype\":\n {\"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Token type to create\"),\n \"value\": get_token_types()\n },\n \"user\":\n {\"type\": \"bool\",\n \"description\": _(\"Assign token to user in \"\n \"request or to tokenowner.\")},\n \"realm\":\n {\"type\": \"str\",\n \"required\": False,\n \"description\": _(\"Set the realm of the newly \"\n \"created token.\"),\n \"value\": realm_list},\n \"dynamic_phone\": {\n \"type\": \"bool\",\n \"visibleIf\": \"tokentype\",\n \"visibleValue\": \"sms\",\n \"description\": _(\"Dynamically read the mobile number \"\n \"from the user store.\")\n },\n \"motppin\": {\n \"type\": \"str\",\n \"visibleIf\": \"tokentype\",\n \"visibleValue\": \"motp\",\n \"description\": _(\"Set the MOTP PIN of the MOTP \"\n \"token during enrollment. This \"\n \"is a required value for \"\n \"enrolling MOTP tokens.\")}\n },\n ACTION_TYPE.SET_DESCRIPTION:\n {\"description\":\n {\n \"type\": \"str\",\n \"description\": _(\"The new description of the \"\n \"token.\")\n }\n },\n ACTION_TYPE.SET_VALIDITY:\n {VALIDITY.START: {\n \"type\": \"str\",\n \"description\": _(\"The token will be valid starting \"\n \"at the given date. Can be a fixed \"\n \"date or an offset like +10m, \"\n \"+24h, +7d.\")\n },\n VALIDITY.END: {\n \"type\": \"str\",\n \"description\": _(\"The token will be valid until \"\n \"the given date. 
Can be a fixed \"\n \"date or an offset like +10m, \"\n \"+24h, +7d.\")\n }\n },\n ACTION_TYPE.SET_COUNTWINDOW:\n {\"count window\":\n {\n # TODO: should be \"int\" but we do not support\n # this at the moment.\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Set the new count window of \"\n \"the token.\")\n }\n },\n ACTION_TYPE.SET_FAILCOUNTER:\n {\n \"fail counter\":\n {\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Set the failcounter of \"\n \"the token.\")\n }\n },\n ACTION_TYPE.SET_TOKENINFO:\n {\"key\":\n {\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Set this tokeninfo key.\")\n },\n \"value\":\n {\n \"type\": \"str\",\n \"description\": _(\"Set the above key the this \"\n \"value.\")\n }\n },\n ACTION_TYPE.DELETE_TOKENINFO:\n {\"key\":\n {\n \"type\": \"str\",\n \"required\": True,\n \"description\": _(\"Delete this tokeninfo key.\")\n }\n }\n }\n return actions\n\n def do(self, action, options=None):\n \"\"\"\n This method executes the defined action in the given event.\n\n :param action:\n :param options: Contains the flask parameters g, request, response\n and the handler_def configuration\n :type options: dict\n :return:\n \"\"\"\n ret = True\n g = options.get(\"g\")\n request = options.get(\"request\")\n response = options.get(\"response\")\n content = json.loads(response.data)\n handler_def = options.get(\"handler_def\")\n handler_options = handler_def.get(\"options\", {})\n\n serial = request.all_data.get(\"serial\") or \\\n content.get(\"detail\", {}).get(\"serial\") or \\\n g.audit_object.audit_data.get(\"serial\")\n\n if action.lower() in [ACTION_TYPE.SET_TOKENREALM,\n ACTION_TYPE.SET_DESCRIPTION,\n ACTION_TYPE.DELETE, ACTION_TYPE.DISABLE,\n ACTION_TYPE.ENABLE, ACTION_TYPE.UNASSIGN,\n ACTION_TYPE.SET_VALIDITY,\n ACTION_TYPE.SET_COUNTWINDOW,\n ACTION_TYPE.SET_TOKENINFO,\n ACTION_TYPE.SET_FAILCOUNTER,\n ACTION_TYPE.DELETE_TOKENINFO]:\n if serial:\n log.info(\"{0!s} for token {1!s}\".format(action, serial))\n if action.lower() == ACTION_TYPE.SET_TOKENREALM:\n realm = handler_options.get(\"realm\")\n only_realm = is_true(handler_options.get(\"only_realm\"))\n # Set the realm..\n log.info(\"Setting realm of token {0!s} to {1!s}\".format(\n serial, realm))\n # Add the token realm\n set_realms(serial, [realm], add=not only_realm)\n elif action.lower() == ACTION_TYPE.DELETE:\n remove_token(serial=serial)\n elif action.lower() == ACTION_TYPE.DISABLE:\n enable_token(serial, enable=False)\n elif action.lower() == ACTION_TYPE.ENABLE:\n enable_token(serial, enable=True)\n elif action.lower() == ACTION_TYPE.UNASSIGN:\n unassign_token(serial)\n elif action.lower() == ACTION_TYPE.SET_DESCRIPTION:\n description = handler_options.get(\"description\") or \"\"\n description, td = parse_time_offset_from_now(description)\n s_now = (datetime.datetime.now(tzlocal()) + td).strftime(\n AUTH_DATE_FORMAT)\n set_description(serial,\n description.format(\n current_time=s_now,\n now=s_now,\n client_ip=g.client_ip,\n ua_browser=request.user_agent.browser,\n ua_string=request.user_agent.string))\n elif action.lower() == ACTION_TYPE.SET_COUNTWINDOW:\n set_count_window(serial,\n int(handler_options.get(\"count window\",\n 50)))\n elif action.lower() == ACTION_TYPE.SET_TOKENINFO:\n tokeninfo = handler_options.get(\"value\") or \"\"\n tokeninfo, td = parse_time_offset_from_now(tokeninfo)\n s_now = (datetime.datetime.now(tzlocal()) + td).strftime(\n AUTH_DATE_FORMAT)\n try:\n username = request.User.loginname\n realm = request.User.realm\n except 
Exception:\n username = \"N/A\"\n realm = \"N/A\"\n add_tokeninfo(serial, handler_options.get(\"key\"),\n tokeninfo.format(\n current_time=s_now,\n now=s_now,\n client_ip=g.client_ip,\n username=username,\n realm=realm,\n ua_browser=request.user_agent.browser,\n ua_string=request.user_agent.string))\n elif action.lower() == ACTION_TYPE.DELETE_TOKENINFO:\n delete_tokeninfo(serial, handler_options.get(\"key\"))\n elif action.lower() == ACTION_TYPE.SET_VALIDITY:\n start_date = handler_options.get(VALIDITY.START)\n end_date = handler_options.get(VALIDITY.END)\n if start_date:\n d = parse_date(start_date)\n set_validity_period_start(serial, None,\n d.strftime(DATE_FORMAT))\n if end_date:\n d = parse_date(end_date)\n set_validity_period_end(serial, None,\n d.strftime(DATE_FORMAT))\n elif action.lower() == ACTION_TYPE.SET_FAILCOUNTER:\n try:\n set_failcounter(serial,\n int(handler_options.get(\"fail counter\")))\n except Exception as exx:\n log.warning(\"Misconfiguration: Failed to set fail \"\n \"counter!\")\n else:\n log.info(\"Action {0!s} requires serial number. But no serial \"\n \"number could be found in request.\")\n\n if action.lower() == ACTION_TYPE.INIT:\n log.info(\"Initializing new token\")\n init_param = {\"type\": handler_options.get(\"tokentype\"),\n \"genkey\": 1,\n \"realm\": handler_options.get(\"realm\", \"\")}\n user = None\n if is_true(handler_options.get(\"user\")):\n user = self._get_tokenowner(request)\n tokentype = handler_options.get(\"tokentype\")\n # Some tokentypes need additional parameters or otherwise\n # will fail to enroll.\n # TODO: Other tokentypes will require additional parameters\n if tokentype == \"sms\":\n if handler_options.get(\"dynamic_phone\"):\n init_param[\"dynamic_phone\"] = 1\n else:\n init_param['phone'] = user.get_user_phone(\n phone_type='mobile')\n if not init_param['phone']:\n log.warning(\"Enrolling SMS token. But the user \"\n \"{0!r} has no mobile number!\".format(user))\n elif tokentype == \"email\":\n init_param['email'] = user.info.get(\"email\", \"\")\n if not init_param['email']:\n log.warning(\"Enrolling EMail token. But the user {0!s}\"\n \"has no email address!\".format(user))\n elif tokentype == \"motp\":\n init_param['motppin'] = handler_options.get(\"motppin\")\n\n t = init_token(param=init_param, user=user)\n log.info(\"New token {0!s} enrolled.\".format(t.token.serial))\n\n return ret\n", "path": "privacyidea/lib/eventhandler/tokenhandler.py"}]} | 4,093 | 697 |
gh_patches_debug_13444 | rasdani/github-patches | git_diff | iterative__dvc-5425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tests: exp executor teardown is flaky on windows
Looks like there is some race condition on windows that sometimes happens between cleaning up the test `tmp_dir` and cleaning up the experiments executor temp directory (which is placed in `tmp_dir/.dvc/tmp/...`). May be better to go back to running experiments in system `$TEMP` instead of `.dvc/tmp` (for win tests only)?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/experiments/executor/local.py`
Content:
```
1 import logging
2 import os
3 import sys
4 from tempfile import TemporaryDirectory
5 from typing import Optional
6
7 from dvc.utils.fs import remove
8
9 from .base import BaseExecutor
10
11 logger = logging.getLogger(__name__)
12
13
14 class BaseLocalExecutor(BaseExecutor):
15 """Base local machine executor."""
16
17 @property
18 def git_url(self) -> str:
19 root_dir = os.path.abspath(self.root_dir)
20 if os.name == "nt":
21 root_dir = root_dir.replace(os.sep, "/")
22 return f"file://{root_dir}"
23
24
25 class TempDirExecutor(BaseLocalExecutor):
26 """Temp directory experiment executor."""
27
28 # Temp dir executors should warn if untracked files exist (to help with
29 # debugging user code), and suppress other DVC hints (like `git add`
30 # suggestions) that are not applicable outside of workspace runs
31 WARN_UNTRACKED = True
32 QUIET = True
33
34 def __init__(
35 self,
36 *args,
37 tmp_dir: Optional[str] = None,
38 cache_dir: Optional[str] = None,
39 **kwargs,
40 ):
41 self._tmp_dir = TemporaryDirectory(dir=tmp_dir)
42 kwargs["root_dir"] = self._tmp_dir.name
43 super().__init__(*args, **kwargs)
44 if cache_dir:
45 self._config(cache_dir)
46 logger.debug(
47 "Init temp dir executor in dir '%s'", self._tmp_dir,
48 )
49
50 def _config(self, cache_dir):
51 local_config = os.path.join(self.dvc_dir, "config.local")
52 logger.debug("Writing experiments local config '%s'", local_config)
53 with open(local_config, "w") as fobj:
54 fobj.write(f"[cache]\n dir = {cache_dir}")
55
56 def cleanup(self):
57 super().cleanup()
58 logger.debug("Removing tmpdir '%s'", self._tmp_dir)
59 try:
60 self._tmp_dir.cleanup()
61 except PermissionError:
62 if os.name == "nt" and sys.version_info < (3, 8):
63 # see https://bugs.python.org/issue26660
64 remove(self._tmp_dir.name)
65 return
66 raise
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/repo/experiments/executor/local.py b/dvc/repo/experiments/executor/local.py
--- a/dvc/repo/experiments/executor/local.py
+++ b/dvc/repo/experiments/executor/local.py
@@ -1,6 +1,5 @@
import logging
import os
-import sys
from tempfile import TemporaryDirectory
from typing import Optional
@@ -56,11 +55,4 @@
def cleanup(self):
super().cleanup()
logger.debug("Removing tmpdir '%s'", self._tmp_dir)
- try:
- self._tmp_dir.cleanup()
- except PermissionError:
- if os.name == "nt" and sys.version_info < (3, 8):
- # see https://bugs.python.org/issue26660
- remove(self._tmp_dir.name)
- return
- raise
+ remove(self._tmp_dir.name)
| {"golden_diff": "diff --git a/dvc/repo/experiments/executor/local.py b/dvc/repo/experiments/executor/local.py\n--- a/dvc/repo/experiments/executor/local.py\n+++ b/dvc/repo/experiments/executor/local.py\n@@ -1,6 +1,5 @@\n import logging\n import os\n-import sys\n from tempfile import TemporaryDirectory\n from typing import Optional\n \n@@ -56,11 +55,4 @@\n def cleanup(self):\n super().cleanup()\n logger.debug(\"Removing tmpdir '%s'\", self._tmp_dir)\n- try:\n- self._tmp_dir.cleanup()\n- except PermissionError:\n- if os.name == \"nt\" and sys.version_info < (3, 8):\n- # see https://bugs.python.org/issue26660\n- remove(self._tmp_dir.name)\n- return\n- raise\n+ remove(self._tmp_dir.name)\n", "issue": "tests: exp executor teardown is flaky on windows\nLooks like there is some race condition on windows that sometimes happens between cleaning up the test `tmp_dir` and cleaning up the experiments executor temp directory (which is placed in `tmp_dir/.dvc/tmp/...`). May be better to go back to running experiments in system `$TEMP` instead of `.dvc/tmp` (for win tests only)?\n", "before_files": [{"content": "import logging\nimport os\nimport sys\nfrom tempfile import TemporaryDirectory\nfrom typing import Optional\n\nfrom dvc.utils.fs import remove\n\nfrom .base import BaseExecutor\n\nlogger = logging.getLogger(__name__)\n\n\nclass BaseLocalExecutor(BaseExecutor):\n \"\"\"Base local machine executor.\"\"\"\n\n @property\n def git_url(self) -> str:\n root_dir = os.path.abspath(self.root_dir)\n if os.name == \"nt\":\n root_dir = root_dir.replace(os.sep, \"/\")\n return f\"file://{root_dir}\"\n\n\nclass TempDirExecutor(BaseLocalExecutor):\n \"\"\"Temp directory experiment executor.\"\"\"\n\n # Temp dir executors should warn if untracked files exist (to help with\n # debugging user code), and suppress other DVC hints (like `git add`\n # suggestions) that are not applicable outside of workspace runs\n WARN_UNTRACKED = True\n QUIET = True\n\n def __init__(\n self,\n *args,\n tmp_dir: Optional[str] = None,\n cache_dir: Optional[str] = None,\n **kwargs,\n ):\n self._tmp_dir = TemporaryDirectory(dir=tmp_dir)\n kwargs[\"root_dir\"] = self._tmp_dir.name\n super().__init__(*args, **kwargs)\n if cache_dir:\n self._config(cache_dir)\n logger.debug(\n \"Init temp dir executor in dir '%s'\", self._tmp_dir,\n )\n\n def _config(self, cache_dir):\n local_config = os.path.join(self.dvc_dir, \"config.local\")\n logger.debug(\"Writing experiments local config '%s'\", local_config)\n with open(local_config, \"w\") as fobj:\n fobj.write(f\"[cache]\\n dir = {cache_dir}\")\n\n def cleanup(self):\n super().cleanup()\n logger.debug(\"Removing tmpdir '%s'\", self._tmp_dir)\n try:\n self._tmp_dir.cleanup()\n except PermissionError:\n if os.name == \"nt\" and sys.version_info < (3, 8):\n # see https://bugs.python.org/issue26660\n remove(self._tmp_dir.name)\n return\n raise\n", "path": "dvc/repo/experiments/executor/local.py"}], "after_files": [{"content": "import logging\nimport os\nfrom tempfile import TemporaryDirectory\nfrom typing import Optional\n\nfrom dvc.utils.fs import remove\n\nfrom .base import BaseExecutor\n\nlogger = logging.getLogger(__name__)\n\n\nclass BaseLocalExecutor(BaseExecutor):\n \"\"\"Base local machine executor.\"\"\"\n\n @property\n def git_url(self) -> str:\n root_dir = os.path.abspath(self.root_dir)\n if os.name == \"nt\":\n root_dir = root_dir.replace(os.sep, \"/\")\n return f\"file://{root_dir}\"\n\n\nclass TempDirExecutor(BaseLocalExecutor):\n \"\"\"Temp directory experiment executor.\"\"\"\n\n # 
Temp dir executors should warn if untracked files exist (to help with\n # debugging user code), and suppress other DVC hints (like `git add`\n # suggestions) that are not applicable outside of workspace runs\n WARN_UNTRACKED = True\n QUIET = True\n\n def __init__(\n self,\n *args,\n tmp_dir: Optional[str] = None,\n cache_dir: Optional[str] = None,\n **kwargs,\n ):\n self._tmp_dir = TemporaryDirectory(dir=tmp_dir)\n kwargs[\"root_dir\"] = self._tmp_dir.name\n super().__init__(*args, **kwargs)\n if cache_dir:\n self._config(cache_dir)\n logger.debug(\n \"Init temp dir executor in dir '%s'\", self._tmp_dir,\n )\n\n def _config(self, cache_dir):\n local_config = os.path.join(self.dvc_dir, \"config.local\")\n logger.debug(\"Writing experiments local config '%s'\", local_config)\n with open(local_config, \"w\") as fobj:\n fobj.write(f\"[cache]\\n dir = {cache_dir}\")\n\n def cleanup(self):\n super().cleanup()\n logger.debug(\"Removing tmpdir '%s'\", self._tmp_dir)\n remove(self._tmp_dir.name)\n", "path": "dvc/repo/experiments/executor/local.py"}]} | 934 | 201 |
gh_patches_debug_48687 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-tf-577 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in "onmt-ark-to-records" code
I have found a small bug in the code line referenced below. It causes the script to terminate with a `TypeError: data type not understood`. Just for the sake of completeness, this is caused by the fact that numpy doesn't understand the object `tf.float32`. I changed that to `float` and it worked as it was supposed to. I can create a PR for this, but I suppose it is too trivial to do so and claim a contribution, unless you want me to.
https://github.com/OpenNMT/OpenNMT-tf/blob/5809c293d7bc65d923274cfd56b3339fc4107af6/opennmt/bin/ark_to_records.py#L46
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opennmt/bin/ark_to_records.py`
Content:
```
1 """ARK data file to TFRecords converter.
2
3 The scripts takes the ARK data file and optionally the indexed target text
4 to write aligned source and target data.
5 """
6
7 import argparse
8 import numpy as np
9 import tensorflow as tf
10
11 from opennmt.inputters.record_inputter import write_sequence_record
12
13
14 def consume_next_vector(ark_file):
15 """Consumes the next vector.
16
17 Args:
18 ark_file: The ARK data file.
19
20 Returns:
21 The next vector as a 2D Numpy array.
22 """
23 idx = None
24 vector = []
25
26 for line in ark_file:
27 line = line.strip()
28 fields = line.split()
29
30 if not idx:
31 idx = fields[0]
32 fields.pop(0)
33 fields.pop(0)
34
35 end = fields and fields[-1] == "]"
36
37 if end:
38 fields.pop()
39
40 if fields:
41 vector.append(fields)
42
43 if end:
44 break
45
46 return idx, np.asarray(vector, dtype=tf.float32)
47
48 def consume_next_text(text_file):
49 """Consumes the next text line from `text_file`."""
50 idx = None
51 text = text_file.readline()
52
53 if text:
54 tokens = text.strip().split()
55 idx = tokens[0]
56 tokens.pop(0)
57 text = " ".join(tokens)
58
59 return idx, text
60
61 def write_text(text, writer):
62 """Serializes a line of text."""
63 writer.write(text)
64 writer.write("\n")
65
66 def ark_to_records_aligned(ark_filename, text_filename, out_prefix, compression_type=None):
67 """Converts ARK and text datasets to aligned TFRecords and text datasets."""
68 record_filename = "%s.records" % out_prefix
69 if compression_type == "GZIP":
70 record_filename = "%s.gz" % record_filename
71 record_writer = tf.io.TFRecordWriter(record_filename, options=compression_type)
72 text_writer = open(out_prefix + ".txt", encoding="utf-8", mode="w")
73
74 ark_buffer = {}
75 text_buffer = {}
76 count = 0
77
78 def _write_example(vector, text):
79 write_sequence_record(vector, record_writer)
80 write_text(text, text_writer)
81
82 def _search_aligned():
83 for idx in ark_buffer:
84 if idx in text_buffer:
85 vector = ark_buffer[idx]
86 text = text_buffer[idx]
87
88 del ark_buffer[idx]
89 del text_buffer[idx]
90
91 return vector, text
92
93 return None, None
94
95 with open(ark_filename, encoding="utf-8") as ark_file, open(text_filename, encoding="utf-8") as text_file: #pylint: disable=line-too-long
96 while True:
97 ark_idx, vector = consume_next_vector(ark_file)
98 text_idx, text = consume_next_text(text_file)
99
100 if not ark_idx and not text_idx:
101 # Both files are empty.
102 break
103
104 if ark_idx == text_idx:
105 # If the indices match, write the example.
106 _write_example(vector, text)
107 count += 1
108 else:
109 # Otherwise store the entries.
110 if ark_idx:
111 ark_buffer[ark_idx] = vector
112 if text_idx:
113 text_buffer[text_idx] = text
114
115 # Look if we can now find aligned entries.
116 vector, text = _search_aligned()
117
118 if vector is not None:
119 _write_example(vector, text)
120 count += 1
121
122 # Search alignments in stored entries.
123 while True:
124 vector, text = _search_aligned()
125 if vector is None:
126 break
127 _write_example(vector, text)
128 count += 1
129
130 record_writer.close()
131 text_writer.close()
132
133 print("Saved {} aligned records.".format(count))
134
135 def ark_to_records(ark_filename, out_prefix, compression_type=None):
136 """Converts ARK dataset to TFRecords."""
137 record_writer = tf.io.TFRecordWriter(out_prefix + ".records", options=compression_type)
138 count = 0
139
140 with open(ark_filename, encoding="utf-8") as ark_file:
141 while True:
142 ark_idx, vector = consume_next_vector(ark_file)
143 if not ark_idx:
144 break
145 write_sequence_record(vector, record_writer)
146 count += 1
147
148 record_writer.close()
149 print("Saved {} records.".format(count))
150
151
152 def main():
153 parser = argparse.ArgumentParser()
154 parser.add_argument("--ark", required=True,
155 help="Indexed ARK data file.")
156 parser.add_argument("--txt",
157 help=("Indexed target text data file "
158 "(must set it to align source and target files)."))
159 parser.add_argument("--out", required=True,
160 help="Output files prefix (will be suffixed by .records and .txt).")
161 parser.add_argument("--compression_type", default=None, choices=["GZIP"],
162 help="Optional compression type.")
163 args = parser.parse_args()
164
165 if args.txt:
166 ark_to_records_aligned(args.ark, args.txt, args.out, compression_type=args.compression_type)
167 else:
168 ark_to_records(args.ark, args.out, compression_type=args.compression_type)
169
170 if __name__ == "__main__":
171 main()
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opennmt/bin/ark_to_records.py b/opennmt/bin/ark_to_records.py
--- a/opennmt/bin/ark_to_records.py
+++ b/opennmt/bin/ark_to_records.py
@@ -43,7 +43,7 @@
if end:
break
- return idx, np.asarray(vector, dtype=tf.float32)
+ return idx, np.asarray(vector, dtype=np.float32)
def consume_next_text(text_file):
"""Consumes the next text line from `text_file`."""
| {"golden_diff": "diff --git a/opennmt/bin/ark_to_records.py b/opennmt/bin/ark_to_records.py\n--- a/opennmt/bin/ark_to_records.py\n+++ b/opennmt/bin/ark_to_records.py\n@@ -43,7 +43,7 @@\n if end:\n break\n \n- return idx, np.asarray(vector, dtype=tf.float32)\n+ return idx, np.asarray(vector, dtype=np.float32)\n \n def consume_next_text(text_file):\n \"\"\"Consumes the next text line from `text_file`.\"\"\"\n", "issue": "Bug in \"onmt-ark-to-records\" code\nI have found a small bug in the code line referenced below. It causes the script to terminate with a `TypeError: data type not understood`. Just for the sake of completeness, this is caused by the fact that numpy doesn't understand the object `tf.float32`. I changed that to `float` and it worked as it was supposed to. I can create a PR for this, but I suppose it is too trivial to do so and claim a contribution, unless you want me to.\r\n\r\nhttps://github.com/OpenNMT/OpenNMT-tf/blob/5809c293d7bc65d923274cfd56b3339fc4107af6/opennmt/bin/ark_to_records.py#L46\n", "before_files": [{"content": "\"\"\"ARK data file to TFRecords converter.\n\nThe scripts takes the ARK data file and optionally the indexed target text\nto write aligned source and target data.\n\"\"\"\n\nimport argparse\nimport numpy as np\nimport tensorflow as tf\n\nfrom opennmt.inputters.record_inputter import write_sequence_record\n\n\ndef consume_next_vector(ark_file):\n \"\"\"Consumes the next vector.\n\n Args:\n ark_file: The ARK data file.\n\n Returns:\n The next vector as a 2D Numpy array.\n \"\"\"\n idx = None\n vector = []\n\n for line in ark_file:\n line = line.strip()\n fields = line.split()\n\n if not idx:\n idx = fields[0]\n fields.pop(0)\n fields.pop(0)\n\n end = fields and fields[-1] == \"]\"\n\n if end:\n fields.pop()\n\n if fields:\n vector.append(fields)\n\n if end:\n break\n\n return idx, np.asarray(vector, dtype=tf.float32)\n\ndef consume_next_text(text_file):\n \"\"\"Consumes the next text line from `text_file`.\"\"\"\n idx = None\n text = text_file.readline()\n\n if text:\n tokens = text.strip().split()\n idx = tokens[0]\n tokens.pop(0)\n text = \" \".join(tokens)\n\n return idx, text\n\ndef write_text(text, writer):\n \"\"\"Serializes a line of text.\"\"\"\n writer.write(text)\n writer.write(\"\\n\")\n\ndef ark_to_records_aligned(ark_filename, text_filename, out_prefix, compression_type=None):\n \"\"\"Converts ARK and text datasets to aligned TFRecords and text datasets.\"\"\"\n record_filename = \"%s.records\" % out_prefix\n if compression_type == \"GZIP\":\n record_filename = \"%s.gz\" % record_filename\n record_writer = tf.io.TFRecordWriter(record_filename, options=compression_type)\n text_writer = open(out_prefix + \".txt\", encoding=\"utf-8\", mode=\"w\")\n\n ark_buffer = {}\n text_buffer = {}\n count = 0\n\n def _write_example(vector, text):\n write_sequence_record(vector, record_writer)\n write_text(text, text_writer)\n\n def _search_aligned():\n for idx in ark_buffer:\n if idx in text_buffer:\n vector = ark_buffer[idx]\n text = text_buffer[idx]\n\n del ark_buffer[idx]\n del text_buffer[idx]\n\n return vector, text\n\n return None, None\n\n with open(ark_filename, encoding=\"utf-8\") as ark_file, open(text_filename, encoding=\"utf-8\") as text_file: #pylint: disable=line-too-long\n while True:\n ark_idx, vector = consume_next_vector(ark_file)\n text_idx, text = consume_next_text(text_file)\n\n if not ark_idx and not text_idx:\n # Both files are empty.\n break\n\n if ark_idx == text_idx:\n # If the indices match, write the example.\n 
_write_example(vector, text)\n count += 1\n else:\n # Otherwise store the entries.\n if ark_idx:\n ark_buffer[ark_idx] = vector\n if text_idx:\n text_buffer[text_idx] = text\n\n # Look if we can now find aligned entries.\n vector, text = _search_aligned()\n\n if vector is not None:\n _write_example(vector, text)\n count += 1\n\n # Search alignments in stored entries.\n while True:\n vector, text = _search_aligned()\n if vector is None:\n break\n _write_example(vector, text)\n count += 1\n\n record_writer.close()\n text_writer.close()\n\n print(\"Saved {} aligned records.\".format(count))\n\ndef ark_to_records(ark_filename, out_prefix, compression_type=None):\n \"\"\"Converts ARK dataset to TFRecords.\"\"\"\n record_writer = tf.io.TFRecordWriter(out_prefix + \".records\", options=compression_type)\n count = 0\n\n with open(ark_filename, encoding=\"utf-8\") as ark_file:\n while True:\n ark_idx, vector = consume_next_vector(ark_file)\n if not ark_idx:\n break\n write_sequence_record(vector, record_writer)\n count += 1\n\n record_writer.close()\n print(\"Saved {} records.\".format(count))\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--ark\", required=True,\n help=\"Indexed ARK data file.\")\n parser.add_argument(\"--txt\",\n help=(\"Indexed target text data file \"\n \"(must set it to align source and target files).\"))\n parser.add_argument(\"--out\", required=True,\n help=\"Output files prefix (will be suffixed by .records and .txt).\")\n parser.add_argument(\"--compression_type\", default=None, choices=[\"GZIP\"],\n help=\"Optional compression type.\")\n args = parser.parse_args()\n\n if args.txt:\n ark_to_records_aligned(args.ark, args.txt, args.out, compression_type=args.compression_type)\n else:\n ark_to_records(args.ark, args.out, compression_type=args.compression_type)\n\nif __name__ == \"__main__\":\n main()\n", "path": "opennmt/bin/ark_to_records.py"}], "after_files": [{"content": "\"\"\"ARK data file to TFRecords converter.\n\nThe scripts takes the ARK data file and optionally the indexed target text\nto write aligned source and target data.\n\"\"\"\n\nimport argparse\nimport numpy as np\nimport tensorflow as tf\n\nfrom opennmt.inputters.record_inputter import write_sequence_record\n\n\ndef consume_next_vector(ark_file):\n \"\"\"Consumes the next vector.\n\n Args:\n ark_file: The ARK data file.\n\n Returns:\n The next vector as a 2D Numpy array.\n \"\"\"\n idx = None\n vector = []\n\n for line in ark_file:\n line = line.strip()\n fields = line.split()\n\n if not idx:\n idx = fields[0]\n fields.pop(0)\n fields.pop(0)\n\n end = fields and fields[-1] == \"]\"\n\n if end:\n fields.pop()\n\n if fields:\n vector.append(fields)\n\n if end:\n break\n\n return idx, np.asarray(vector, dtype=np.float32)\n\ndef consume_next_text(text_file):\n \"\"\"Consumes the next text line from `text_file`.\"\"\"\n idx = None\n text = text_file.readline()\n\n if text:\n tokens = text.strip().split()\n idx = tokens[0]\n tokens.pop(0)\n text = \" \".join(tokens)\n\n return idx, text\n\ndef write_text(text, writer):\n \"\"\"Serializes a line of text.\"\"\"\n writer.write(text)\n writer.write(\"\\n\")\n\ndef ark_to_records_aligned(ark_filename, text_filename, out_prefix, compression_type=None):\n \"\"\"Converts ARK and text datasets to aligned TFRecords and text datasets.\"\"\"\n record_filename = \"%s.records\" % out_prefix\n if compression_type == \"GZIP\":\n record_filename = \"%s.gz\" % record_filename\n record_writer = tf.io.TFRecordWriter(record_filename, 
options=compression_type)\n text_writer = open(out_prefix + \".txt\", encoding=\"utf-8\", mode=\"w\")\n\n ark_buffer = {}\n text_buffer = {}\n count = 0\n\n def _write_example(vector, text):\n write_sequence_record(vector, record_writer)\n write_text(text, text_writer)\n\n def _search_aligned():\n for idx in ark_buffer:\n if idx in text_buffer:\n vector = ark_buffer[idx]\n text = text_buffer[idx]\n\n del ark_buffer[idx]\n del text_buffer[idx]\n\n return vector, text\n\n return None, None\n\n with open(ark_filename, encoding=\"utf-8\") as ark_file, open(text_filename, encoding=\"utf-8\") as text_file: #pylint: disable=line-too-long\n while True:\n ark_idx, vector = consume_next_vector(ark_file)\n text_idx, text = consume_next_text(text_file)\n\n if not ark_idx and not text_idx:\n # Both files are empty.\n break\n\n if ark_idx == text_idx:\n # If the indices match, write the example.\n _write_example(vector, text)\n count += 1\n else:\n # Otherwise store the entries.\n if ark_idx:\n ark_buffer[ark_idx] = vector\n if text_idx:\n text_buffer[text_idx] = text\n\n # Look if we can now find aligned entries.\n vector, text = _search_aligned()\n\n if vector is not None:\n _write_example(vector, text)\n count += 1\n\n # Search alignments in stored entries.\n while True:\n vector, text = _search_aligned()\n if vector is None:\n break\n _write_example(vector, text)\n count += 1\n\n record_writer.close()\n text_writer.close()\n\n print(\"Saved {} aligned records.\".format(count))\n\ndef ark_to_records(ark_filename, out_prefix, compression_type=None):\n \"\"\"Converts ARK dataset to TFRecords.\"\"\"\n record_writer = tf.io.TFRecordWriter(out_prefix + \".records\", options=compression_type)\n count = 0\n\n with open(ark_filename, encoding=\"utf-8\") as ark_file:\n while True:\n ark_idx, vector = consume_next_vector(ark_file)\n if not ark_idx:\n break\n write_sequence_record(vector, record_writer)\n count += 1\n\n record_writer.close()\n print(\"Saved {} records.\".format(count))\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--ark\", required=True,\n help=\"Indexed ARK data file.\")\n parser.add_argument(\"--txt\",\n help=(\"Indexed target text data file \"\n \"(must set it to align source and target files).\"))\n parser.add_argument(\"--out\", required=True,\n help=\"Output files prefix (will be suffixed by .records and .txt).\")\n parser.add_argument(\"--compression_type\", default=None, choices=[\"GZIP\"],\n help=\"Optional compression type.\")\n args = parser.parse_args()\n\n if args.txt:\n ark_to_records_aligned(args.ark, args.txt, args.out, compression_type=args.compression_type)\n else:\n ark_to_records(args.ark, args.out, compression_type=args.compression_type)\n\nif __name__ == \"__main__\":\n main()\n", "path": "opennmt/bin/ark_to_records.py"}]} | 1,972 | 119 |
gh_patches_debug_24753 | rasdani/github-patches | git_diff | ipython__ipython-3539 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nbconvert incompatible with Windows?
This is just speculation, but the way pathnames are defined in `IPython/nbconvert/exporters/exporter.py` looks like it will cause problems on Windows systems. `os.path.join` should be used in favor of hard-coded paths.
I don't have a Windows machine to test this out on, though.
--- END ISSUE ---
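A minimal sketch of the portable-path point raised in the issue; the directory layout here is illustrative only and is not taken from the IPython source:

```python
import os

# Concatenating a POSIX-style suffix onto a Windows dirname mixes separators,
# e.g. "C:\\pkg\\exporters" + "/../templates/" -> "C:\\pkg\\exporters/../templates/".
base = os.path.dirname(os.path.realpath(__file__))
hard_coded = base + "/../templates/"

# os.path.join builds the equivalent path with the platform's own separator.
portable = os.path.join(base, "..", "templates")

print(hard_coded)
print(portable)
```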
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/nbconvert/exporters/exporter.py`
Content:
```
1 """This module defines Exporter, a highly configurable converter
2 that uses Jinja2 to export notebook files into different formats.
3 """
4
5 #-----------------------------------------------------------------------------
6 # Copyright (c) 2013, the IPython Development Team.
7 #
8 # Distributed under the terms of the Modified BSD License.
9 #
10 # The full license is in the file COPYING.txt, distributed with this software.
11 #-----------------------------------------------------------------------------
12
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16
17 from __future__ import print_function, absolute_import
18
19 # Stdlib imports
20 import io
21 import os
22 import inspect
23 from copy import deepcopy
24
25 # other libs/dependencies
26 from jinja2 import Environment, FileSystemLoader
27 from markdown import markdown
28
29 # IPython imports
30 from IPython.config.configurable import Configurable
31 from IPython.config import Config
32 from IPython.nbformat import current as nbformat
33 from IPython.utils.traitlets import MetaHasTraits, Unicode
34 from IPython.utils.text import indent
35
36 from IPython.nbconvert import filters
37 from IPython.nbconvert import transformers
38
39 #-----------------------------------------------------------------------------
40 # Globals and constants
41 #-----------------------------------------------------------------------------
42
43 #Jinja2 extensions to load.
44 JINJA_EXTENSIONS = ['jinja2.ext.loopcontrols']
45
46 default_filters = {
47 'indent': indent,
48 'markdown': markdown,
49 'ansi2html': filters.ansi2html,
50 'filter_data_type': filters.DataTypeFilter,
51 'get_lines': filters.get_lines,
52 'highlight': filters.highlight,
53 'highlight2html': filters.highlight,
54 'highlight2latex': filters.highlight2latex,
55 'markdown2latex': filters.markdown2latex,
56 'markdown2rst': filters.markdown2rst,
57 'pycomment': filters.python_comment,
58 'rm_ansi': filters.remove_ansi,
59 'rm_dollars': filters.strip_dollars,
60 'rm_fake': filters.rm_fake,
61 'ansi2latex': filters.ansi2latex,
62 'rm_math_space': filters.rm_math_space,
63 'wrap': filters.wrap
64 }
65
66 #-----------------------------------------------------------------------------
67 # Class
68 #-----------------------------------------------------------------------------
69
70 class Exporter(Configurable):
71 """
72 Exports notebooks into other file formats. Uses Jinja 2 templating engine
73 to output new formats. Inherit from this class if you are creating a new
74 template type along with new filters/transformers. If the filters/
75 transformers provided by default suffice, there is no need to inherit from
76 this class. Instead, override the template_file and file_extension
77 traits via a config file.
78
79 {filters}
80 """
81
82 # finish the docstring
83 __doc__ = __doc__.format(filters = '- '+'\n - '.join(default_filters.keys()))
84
85
86 template_file = Unicode(
87 '', config=True,
88 help="Name of the template file to use")
89
90 file_extension = Unicode(
91 'txt', config=True,
92 help="Extension of the file that should be written to disk"
93 )
94
95 template_path = Unicode(
96 "/../templates/", config=True,
97 help="Path where the template files are located.")
98
99 template_skeleton_path = Unicode(
100 "/../templates/skeleton/", config=True,
101 help="Path where the template skeleton files are located.")
102
103 #Jinja block definitions
104 jinja_comment_block_start = Unicode("", config=True)
105 jinja_comment_block_end = Unicode("", config=True)
106 jinja_variable_block_start = Unicode("", config=True)
107 jinja_variable_block_end = Unicode("", config=True)
108 jinja_logic_block_start = Unicode("", config=True)
109 jinja_logic_block_end = Unicode("", config=True)
110
111 #Extension that the template files use.
112 template_extension = Unicode(".tpl", config=True)
113
114 #Processors that process the input data prior to the export, set in the
115 #constructor for this class.
116 transformers = None
117
118
119 def __init__(self, transformers=None, filters=None, config=None, **kw):
120 """
121 Public constructor
122
123 Parameters
124 ----------
125 transformers : list[of transformer]
126 Custom transformers to apply to the notebook prior to engaging
127 the Jinja template engine. Any transformers specified here
128 will override existing transformers if a naming conflict
129 occurs.
130 filters : dict[of filter]
131 filters specified here will override existing filters if a naming
132 conflict occurs. Filters are availlable in jinja template through
133 the name of the corresponding key. Cf class docstring for
134 availlable default filters.
135 config : config
136 User configuration instance.
137 """
138
139 #Call the base class constructor
140 c = self.default_config
141 if config:
142 c.merge(config)
143
144 super(Exporter, self).__init__(config=c, **kw)
145
146 #Standard environment
147 self._init_environment()
148
149 #Add transformers
150 self._register_transformers()
151
152 #Add filters to the Jinja2 environment
153 self._register_filters()
154
155 #Load user transformers. Overwrite existing transformers if need be.
156 if transformers :
157 for transformer in transformers:
158 self.register_transformer(transformer)
159
160 #Load user filters. Overwrite existing filters if need be.
161 if not filters is None:
162 for key, user_filter in filters.iteritems():
163 self.register_filter(key, user_filter)
164
165 @property
166 def default_config(self):
167 return Config()
168
169
170
171 def from_notebook_node(self, nb, resources=None):
172 """
173 Convert a notebook from a notebook node instance.
174
175 Parameters
176 ----------
177 nb : Notebook node
178 resources : a dict of additional resources that
179 can be accessed read/write by transformers
180 and filters.
181 """
182 if resources is None:
183 resources = {}
184 nb, resources = self._preprocess(nb, resources)
185
186 #Load the template file.
187 self.template = self.environment.get_template(self.template_file+self.template_extension)
188
189 return self.template.render(nb=nb, resources=resources), resources
190
191
192 def from_filename(self, filename):
193 """
194 Convert a notebook from a notebook file.
195
196 Parameters
197 ----------
198 filename : str
199 Full filename of the notebook file to open and convert.
200 """
201
202 with io.open(filename) as f:
203 return self.from_notebook_node(nbformat.read(f, 'json'))
204
205
206 def from_file(self, file_stream):
207 """
208 Convert a notebook from a notebook file.
209
210 Parameters
211 ----------
212 file_stream : file-like object
213 Notebook file-like object to convert.
214 """
215 return self.from_notebook_node(nbformat.read(file_stream, 'json'))
216
217
218 def register_transformer(self, transformer):
219 """
220 Register a transformer.
221 Transformers are classes that act upon the notebook before it is
222 passed into the Jinja templating engine. Transformers are also
223 capable of passing additional information to the Jinja
224 templating engine.
225
226 Parameters
227 ----------
228 transformer : transformer
229 """
230 if self.transformers is None:
231 self.transformers = []
232
233 if inspect.isfunction(transformer):
234 self.transformers.append(transformer)
235 return transformer
236 elif isinstance(transformer, MetaHasTraits):
237 transformer_instance = transformer(config=self.config)
238 self.transformers.append(transformer_instance)
239 return transformer_instance
240 else:
241 transformer_instance = transformer()
242 self.transformers.append(transformer_instance)
243 return transformer_instance
244
245
246 def register_filter(self, name, filter):
247 """
248 Register a filter.
249 A filter is a function that accepts and acts on one string.
250 The filters are accesible within the Jinja templating engine.
251
252 Parameters
253 ----------
254 name : str
255 name to give the filter in the Jinja engine
256 filter : filter
257 """
258 if inspect.isfunction(filter):
259 self.environment.filters[name] = filter
260 elif isinstance(filter, MetaHasTraits):
261 self.environment.filters[name] = filter(config=self.config)
262 else:
263 self.environment.filters[name] = filter()
264 return self.environment.filters[name]
265
266
267 def _register_transformers(self):
268 """
269 Register all of the transformers needed for this exporter.
270 """
271
272 self.register_transformer(transformers.coalesce_streams)
273
274 #Remember the figure extraction transformer so it can be enabled and
275 #disabled easily later.
276 self.extract_figure_transformer = self.register_transformer(transformers.ExtractFigureTransformer)
277
278
279 def _register_filters(self):
280 """
281 Register all of the filters required for the exporter.
282 """
283 for k, v in default_filters.iteritems():
284 self.register_filter(k, v)
285
286
287 def _init_environment(self):
288 """
289 Create the Jinja templating environment.
290 """
291
292 self.environment = Environment(
293 loader=FileSystemLoader([
294 os.path.dirname(os.path.realpath(__file__)) + self.template_path,
295 os.path.dirname(os.path.realpath(__file__)) + self.template_skeleton_path,
296 ]),
297 extensions=JINJA_EXTENSIONS
298 )
299
300 #Set special Jinja2 syntax that will not conflict with latex.
301 if self.jinja_logic_block_start:
302 self.environment.block_start_string = self.jinja_logic_block_start
303 if self.jinja_logic_block_end:
304 self.environment.block_end_string = self.jinja_logic_block_end
305 if self.jinja_variable_block_start:
306 self.environment.variable_start_string = self.jinja_variable_block_start
307 if self.jinja_variable_block_end:
308 self.environment.variable_end_string = self.jinja_variable_block_end
309 if self.jinja_comment_block_start:
310 self.environment.comment_start_string = self.jinja_comment_block_start
311 if self.jinja_comment_block_end:
312 self.environment.comment_end_string = self.jinja_comment_block_end
313
314
315 def _preprocess(self, nb, resources):
316 """
317 Preprocess the notebook before passing it into the Jinja engine.
318 To preprocess the notebook is to apply all of the
319
320 Parameters
321 ----------
322 nb : notebook node
323 notebook that is being exported.
324 resources : a dict of additional resources that
325 can be accessed read/write by transformers
326 and filters.
327 """
328
329 # Do a deepcopy first,
330 # we are never safe enough with what the transformers could do.
331 nbc = deepcopy(nb)
332 resc = deepcopy(resources)
333 #Run each transformer on the notebook. Carry the output along
334 #to each transformer
335 for transformer in self.transformers:
336 nb, resources = transformer(nbc, resc)
337 return nb, resources
338
339
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/IPython/nbconvert/exporters/exporter.py b/IPython/nbconvert/exporters/exporter.py
--- a/IPython/nbconvert/exporters/exporter.py
+++ b/IPython/nbconvert/exporters/exporter.py
@@ -93,11 +93,11 @@
)
template_path = Unicode(
- "/../templates/", config=True,
+ os.path.join("..", "templates"), config=True,
help="Path where the template files are located.")
template_skeleton_path = Unicode(
- "/../templates/skeleton/", config=True,
+ os.path.join("..", "templates", "skeleton"), config=True,
help="Path where the template skeleton files are located.")
#Jinja block definitions
@@ -288,11 +288,11 @@
"""
Create the Jinja templating environment.
"""
-
+ here = os.path.realpath(__file__)
self.environment = Environment(
loader=FileSystemLoader([
- os.path.dirname(os.path.realpath(__file__)) + self.template_path,
- os.path.dirname(os.path.realpath(__file__)) + self.template_skeleton_path,
+ os.path.join(here, self.template_path),
+ os.path.join(here, self.template_skeleton_path),
]),
extensions=JINJA_EXTENSIONS
)
| {"golden_diff": "diff --git a/IPython/nbconvert/exporters/exporter.py b/IPython/nbconvert/exporters/exporter.py\n--- a/IPython/nbconvert/exporters/exporter.py\n+++ b/IPython/nbconvert/exporters/exporter.py\n@@ -93,11 +93,11 @@\n )\n \n template_path = Unicode(\n- \"/../templates/\", config=True,\n+ os.path.join(\"..\", \"templates\"), config=True,\n help=\"Path where the template files are located.\")\n \n template_skeleton_path = Unicode(\n- \"/../templates/skeleton/\", config=True,\n+ os.path.join(\"..\", \"templates\", \"skeleton\"), config=True,\n help=\"Path where the template skeleton files are located.\") \n \n #Jinja block definitions\n@@ -288,11 +288,11 @@\n \"\"\"\n Create the Jinja templating environment.\n \"\"\"\n- \n+ here = os.path.realpath(__file__)\n self.environment = Environment(\n loader=FileSystemLoader([\n- os.path.dirname(os.path.realpath(__file__)) + self.template_path,\n- os.path.dirname(os.path.realpath(__file__)) + self.template_skeleton_path,\n+ os.path.join(here, self.template_path),\n+ os.path.join(here, self.template_skeleton_path),\n ]),\n extensions=JINJA_EXTENSIONS\n )\n", "issue": "nbconvert incompatible with Windows?\nThis is just speculation, but the way pathnames are defined in `IPython/nbconvert/exporters/exporter.py` looks like it will cause problems on windows systems. `os.path.join` should be used in favor of hard-coded paths.\n\nI don't have a windows machine to test this out on, though.\n\n", "before_files": [{"content": "\"\"\"This module defines Exporter, a highly configurable converter\nthat uses Jinja2 to export notebook files into different formats.\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom __future__ import print_function, absolute_import\n\n# Stdlib imports\nimport io\nimport os\nimport inspect\nfrom copy import deepcopy\n\n# other libs/dependencies\nfrom jinja2 import Environment, FileSystemLoader\nfrom markdown import markdown\n\n# IPython imports\nfrom IPython.config.configurable import Configurable\nfrom IPython.config import Config\nfrom IPython.nbformat import current as nbformat\nfrom IPython.utils.traitlets import MetaHasTraits, Unicode\nfrom IPython.utils.text import indent\n\nfrom IPython.nbconvert import filters\nfrom IPython.nbconvert import transformers\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n#Jinja2 extensions to load.\nJINJA_EXTENSIONS = ['jinja2.ext.loopcontrols']\n\ndefault_filters = {\n 'indent': indent,\n 'markdown': markdown,\n 'ansi2html': filters.ansi2html,\n 'filter_data_type': filters.DataTypeFilter,\n 'get_lines': filters.get_lines,\n 'highlight': filters.highlight,\n 'highlight2html': filters.highlight,\n 'highlight2latex': filters.highlight2latex,\n 'markdown2latex': filters.markdown2latex,\n 'markdown2rst': filters.markdown2rst,\n 'pycomment': filters.python_comment,\n 'rm_ansi': filters.remove_ansi,\n 'rm_dollars': filters.strip_dollars,\n 'rm_fake': 
filters.rm_fake,\n 'ansi2latex': filters.ansi2latex,\n 'rm_math_space': filters.rm_math_space,\n 'wrap': filters.wrap\n}\n\n#-----------------------------------------------------------------------------\n# Class\n#-----------------------------------------------------------------------------\n\nclass Exporter(Configurable):\n \"\"\"\n Exports notebooks into other file formats. Uses Jinja 2 templating engine\n to output new formats. Inherit from this class if you are creating a new\n template type along with new filters/transformers. If the filters/\n transformers provided by default suffice, there is no need to inherit from\n this class. Instead, override the template_file and file_extension\n traits via a config file.\n \n {filters}\n \"\"\"\n \n # finish the docstring\n __doc__ = __doc__.format(filters = '- '+'\\n - '.join(default_filters.keys()))\n\n\n template_file = Unicode(\n '', config=True,\n help=\"Name of the template file to use\")\n\n file_extension = Unicode(\n 'txt', config=True, \n help=\"Extension of the file that should be written to disk\"\n )\n\n template_path = Unicode(\n \"/../templates/\", config=True,\n help=\"Path where the template files are located.\")\n\n template_skeleton_path = Unicode(\n \"/../templates/skeleton/\", config=True,\n help=\"Path where the template skeleton files are located.\") \n\n #Jinja block definitions\n jinja_comment_block_start = Unicode(\"\", config=True)\n jinja_comment_block_end = Unicode(\"\", config=True)\n jinja_variable_block_start = Unicode(\"\", config=True)\n jinja_variable_block_end = Unicode(\"\", config=True)\n jinja_logic_block_start = Unicode(\"\", config=True)\n jinja_logic_block_end = Unicode(\"\", config=True)\n \n #Extension that the template files use. \n template_extension = Unicode(\".tpl\", config=True)\n\n #Processors that process the input data prior to the export, set in the \n #constructor for this class.\n transformers = None\n\n \n def __init__(self, transformers=None, filters=None, config=None, **kw):\n \"\"\"\n Public constructor\n \n Parameters\n ----------\n transformers : list[of transformer]\n Custom transformers to apply to the notebook prior to engaging\n the Jinja template engine. Any transformers specified here \n will override existing transformers if a naming conflict\n occurs.\n filters : dict[of filter]\n filters specified here will override existing filters if a naming\n conflict occurs. Filters are availlable in jinja template through\n the name of the corresponding key. Cf class docstring for\n availlable default filters.\n config : config\n User configuration instance.\n \"\"\"\n \n #Call the base class constructor\n c = self.default_config\n if config:\n c.merge(config)\n\n super(Exporter, self).__init__(config=c, **kw)\n\n #Standard environment\n self._init_environment()\n\n #Add transformers\n self._register_transformers()\n\n #Add filters to the Jinja2 environment\n self._register_filters()\n\n #Load user transformers. Overwrite existing transformers if need be.\n if transformers :\n for transformer in transformers:\n self.register_transformer(transformer)\n \n #Load user filters. 
Overwrite existing filters if need be.\n if not filters is None:\n for key, user_filter in filters.iteritems():\n self.register_filter(key, user_filter)\n\n @property\n def default_config(self):\n return Config()\n\n \n \n def from_notebook_node(self, nb, resources=None):\n \"\"\"\n Convert a notebook from a notebook node instance.\n \n Parameters\n ----------\n nb : Notebook node\n resources : a dict of additional resources that\n can be accessed read/write by transformers\n and filters.\n \"\"\"\n if resources is None:\n resources = {}\n nb, resources = self._preprocess(nb, resources)\n \n #Load the template file.\n self.template = self.environment.get_template(self.template_file+self.template_extension)\n \n return self.template.render(nb=nb, resources=resources), resources\n\n\n def from_filename(self, filename):\n \"\"\"\n Convert a notebook from a notebook file.\n \n Parameters\n ----------\n filename : str\n Full filename of the notebook file to open and convert.\n \"\"\"\n \n with io.open(filename) as f:\n return self.from_notebook_node(nbformat.read(f, 'json'))\n\n\n def from_file(self, file_stream):\n \"\"\"\n Convert a notebook from a notebook file.\n \n Parameters\n ----------\n file_stream : file-like object\n Notebook file-like object to convert.\n \"\"\"\n return self.from_notebook_node(nbformat.read(file_stream, 'json'))\n\n\n def register_transformer(self, transformer):\n \"\"\"\n Register a transformer.\n Transformers are classes that act upon the notebook before it is\n passed into the Jinja templating engine. Transformers are also\n capable of passing additional information to the Jinja\n templating engine.\n \n Parameters\n ----------\n transformer : transformer\n \"\"\"\n if self.transformers is None:\n self.transformers = []\n \n if inspect.isfunction(transformer):\n self.transformers.append(transformer)\n return transformer\n elif isinstance(transformer, MetaHasTraits):\n transformer_instance = transformer(config=self.config)\n self.transformers.append(transformer_instance)\n return transformer_instance\n else:\n transformer_instance = transformer()\n self.transformers.append(transformer_instance)\n return transformer_instance\n\n\n def register_filter(self, name, filter):\n \"\"\"\n Register a filter.\n A filter is a function that accepts and acts on one string. 
\n The filters are accesible within the Jinja templating engine.\n \n Parameters\n ----------\n name : str\n name to give the filter in the Jinja engine\n filter : filter\n \"\"\"\n if inspect.isfunction(filter):\n self.environment.filters[name] = filter\n elif isinstance(filter, MetaHasTraits):\n self.environment.filters[name] = filter(config=self.config)\n else:\n self.environment.filters[name] = filter()\n return self.environment.filters[name]\n\n \n def _register_transformers(self):\n \"\"\"\n Register all of the transformers needed for this exporter.\n \"\"\"\n \n self.register_transformer(transformers.coalesce_streams)\n \n #Remember the figure extraction transformer so it can be enabled and\n #disabled easily later.\n self.extract_figure_transformer = self.register_transformer(transformers.ExtractFigureTransformer)\n \n \n def _register_filters(self):\n \"\"\"\n Register all of the filters required for the exporter.\n \"\"\"\n for k, v in default_filters.iteritems():\n self.register_filter(k, v)\n \n \n def _init_environment(self):\n \"\"\"\n Create the Jinja templating environment.\n \"\"\"\n \n self.environment = Environment(\n loader=FileSystemLoader([\n os.path.dirname(os.path.realpath(__file__)) + self.template_path,\n os.path.dirname(os.path.realpath(__file__)) + self.template_skeleton_path,\n ]),\n extensions=JINJA_EXTENSIONS\n )\n \n #Set special Jinja2 syntax that will not conflict with latex.\n if self.jinja_logic_block_start:\n self.environment.block_start_string = self.jinja_logic_block_start\n if self.jinja_logic_block_end:\n self.environment.block_end_string = self.jinja_logic_block_end\n if self.jinja_variable_block_start:\n self.environment.variable_start_string = self.jinja_variable_block_start\n if self.jinja_variable_block_end:\n self.environment.variable_end_string = self.jinja_variable_block_end\n if self.jinja_comment_block_start:\n self.environment.comment_start_string = self.jinja_comment_block_start\n if self.jinja_comment_block_end:\n self.environment.comment_end_string = self.jinja_comment_block_end\n\n\n def _preprocess(self, nb, resources):\n \"\"\"\n Preprocess the notebook before passing it into the Jinja engine.\n To preprocess the notebook is to apply all of the \n \n Parameters\n ----------\n nb : notebook node\n notebook that is being exported.\n resources : a dict of additional resources that\n can be accessed read/write by transformers\n and filters.\n \"\"\"\n \n # Do a deepcopy first, \n # we are never safe enough with what the transformers could do.\n nbc = deepcopy(nb)\n resc = deepcopy(resources)\n #Run each transformer on the notebook. 
Carry the output along\n #to each transformer\n for transformer in self.transformers:\n nb, resources = transformer(nbc, resc)\n return nb, resources\n\n", "path": "IPython/nbconvert/exporters/exporter.py"}], "after_files": [{"content": "\"\"\"This module defines Exporter, a highly configurable converter\nthat uses Jinja2 to export notebook files into different formats.\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom __future__ import print_function, absolute_import\n\n# Stdlib imports\nimport io\nimport os\nimport inspect\nfrom copy import deepcopy\n\n# other libs/dependencies\nfrom jinja2 import Environment, FileSystemLoader\nfrom markdown import markdown\n\n# IPython imports\nfrom IPython.config.configurable import Configurable\nfrom IPython.config import Config\nfrom IPython.nbformat import current as nbformat\nfrom IPython.utils.traitlets import MetaHasTraits, Unicode\nfrom IPython.utils.text import indent\n\nfrom IPython.nbconvert import filters\nfrom IPython.nbconvert import transformers\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n#Jinja2 extensions to load.\nJINJA_EXTENSIONS = ['jinja2.ext.loopcontrols']\n\ndefault_filters = {\n 'indent': indent,\n 'markdown': markdown,\n 'ansi2html': filters.ansi2html,\n 'filter_data_type': filters.DataTypeFilter,\n 'get_lines': filters.get_lines,\n 'highlight': filters.highlight,\n 'highlight2html': filters.highlight,\n 'highlight2latex': filters.highlight2latex,\n 'markdown2latex': filters.markdown2latex,\n 'markdown2rst': filters.markdown2rst,\n 'pycomment': filters.python_comment,\n 'rm_ansi': filters.remove_ansi,\n 'rm_dollars': filters.strip_dollars,\n 'rm_fake': filters.rm_fake,\n 'ansi2latex': filters.ansi2latex,\n 'rm_math_space': filters.rm_math_space,\n 'wrap': filters.wrap\n}\n\n#-----------------------------------------------------------------------------\n# Class\n#-----------------------------------------------------------------------------\n\nclass Exporter(Configurable):\n \"\"\"\n Exports notebooks into other file formats. Uses Jinja 2 templating engine\n to output new formats. Inherit from this class if you are creating a new\n template type along with new filters/transformers. If the filters/\n transformers provided by default suffice, there is no need to inherit from\n this class. 
Instead, override the template_file and file_extension\n traits via a config file.\n \n {filters}\n \"\"\"\n \n # finish the docstring\n __doc__ = __doc__.format(filters = '- '+'\\n - '.join(default_filters.keys()))\n\n\n template_file = Unicode(\n '', config=True,\n help=\"Name of the template file to use\")\n\n file_extension = Unicode(\n 'txt', config=True, \n help=\"Extension of the file that should be written to disk\"\n )\n\n template_path = Unicode(\n os.path.join(\"..\", \"templates\"), config=True,\n help=\"Path where the template files are located.\")\n\n template_skeleton_path = Unicode(\n os.path.join(\"..\", \"templates\", \"skeleton\"), config=True,\n help=\"Path where the template skeleton files are located.\") \n\n #Jinja block definitions\n jinja_comment_block_start = Unicode(\"\", config=True)\n jinja_comment_block_end = Unicode(\"\", config=True)\n jinja_variable_block_start = Unicode(\"\", config=True)\n jinja_variable_block_end = Unicode(\"\", config=True)\n jinja_logic_block_start = Unicode(\"\", config=True)\n jinja_logic_block_end = Unicode(\"\", config=True)\n \n #Extension that the template files use. \n template_extension = Unicode(\".tpl\", config=True)\n\n #Processors that process the input data prior to the export, set in the \n #constructor for this class.\n transformers = None\n\n \n def __init__(self, transformers=None, filters=None, config=None, **kw):\n \"\"\"\n Public constructor\n \n Parameters\n ----------\n transformers : list[of transformer]\n Custom transformers to apply to the notebook prior to engaging\n the Jinja template engine. Any transformers specified here \n will override existing transformers if a naming conflict\n occurs.\n filters : dict[of filter]\n filters specified here will override existing filters if a naming\n conflict occurs. Filters are availlable in jinja template through\n the name of the corresponding key. Cf class docstring for\n availlable default filters.\n config : config\n User configuration instance.\n \"\"\"\n \n #Call the base class constructor\n c = self.default_config\n if config:\n c.merge(config)\n\n super(Exporter, self).__init__(config=c, **kw)\n\n #Standard environment\n self._init_environment()\n\n #Add transformers\n self._register_transformers()\n\n #Add filters to the Jinja2 environment\n self._register_filters()\n\n #Load user transformers. Overwrite existing transformers if need be.\n if transformers :\n for transformer in transformers:\n self.register_transformer(transformer)\n \n #Load user filters. 
Overwrite existing filters if need be.\n if not filters is None:\n for key, user_filter in filters.iteritems():\n self.register_filter(key, user_filter)\n\n @property\n def default_config(self):\n return Config()\n\n \n \n def from_notebook_node(self, nb, resources=None):\n \"\"\"\n Convert a notebook from a notebook node instance.\n \n Parameters\n ----------\n nb : Notebook node\n resources : a dict of additional resources that\n can be accessed read/write by transformers\n and filters.\n \"\"\"\n if resources is None:\n resources = {}\n nb, resources = self._preprocess(nb, resources)\n \n #Load the template file.\n self.template = self.environment.get_template(self.template_file+self.template_extension)\n \n return self.template.render(nb=nb, resources=resources), resources\n\n\n def from_filename(self, filename):\n \"\"\"\n Convert a notebook from a notebook file.\n \n Parameters\n ----------\n filename : str\n Full filename of the notebook file to open and convert.\n \"\"\"\n \n with io.open(filename) as f:\n return self.from_notebook_node(nbformat.read(f, 'json'))\n\n\n def from_file(self, file_stream):\n \"\"\"\n Convert a notebook from a notebook file.\n \n Parameters\n ----------\n file_stream : file-like object\n Notebook file-like object to convert.\n \"\"\"\n return self.from_notebook_node(nbformat.read(file_stream, 'json'))\n\n\n def register_transformer(self, transformer):\n \"\"\"\n Register a transformer.\n Transformers are classes that act upon the notebook before it is\n passed into the Jinja templating engine. Transformers are also\n capable of passing additional information to the Jinja\n templating engine.\n \n Parameters\n ----------\n transformer : transformer\n \"\"\"\n if self.transformers is None:\n self.transformers = []\n \n if inspect.isfunction(transformer):\n self.transformers.append(transformer)\n return transformer\n elif isinstance(transformer, MetaHasTraits):\n transformer_instance = transformer(config=self.config)\n self.transformers.append(transformer_instance)\n return transformer_instance\n else:\n transformer_instance = transformer()\n self.transformers.append(transformer_instance)\n return transformer_instance\n\n\n def register_filter(self, name, filter):\n \"\"\"\n Register a filter.\n A filter is a function that accepts and acts on one string. 
\n The filters are accesible within the Jinja templating engine.\n \n Parameters\n ----------\n name : str\n name to give the filter in the Jinja engine\n filter : filter\n \"\"\"\n if inspect.isfunction(filter):\n self.environment.filters[name] = filter\n elif isinstance(filter, MetaHasTraits):\n self.environment.filters[name] = filter(config=self.config)\n else:\n self.environment.filters[name] = filter()\n return self.environment.filters[name]\n\n \n def _register_transformers(self):\n \"\"\"\n Register all of the transformers needed for this exporter.\n \"\"\"\n \n self.register_transformer(transformers.coalesce_streams)\n \n #Remember the figure extraction transformer so it can be enabled and\n #disabled easily later.\n self.extract_figure_transformer = self.register_transformer(transformers.ExtractFigureTransformer)\n \n \n def _register_filters(self):\n \"\"\"\n Register all of the filters required for the exporter.\n \"\"\"\n for k, v in default_filters.iteritems():\n self.register_filter(k, v)\n \n \n def _init_environment(self):\n \"\"\"\n Create the Jinja templating environment.\n \"\"\"\n here = os.path.realpath(__file__)\n self.environment = Environment(\n loader=FileSystemLoader([\n os.path.join(here, self.template_path),\n os.path.join(here, self.template_skeleton_path),\n ]),\n extensions=JINJA_EXTENSIONS\n )\n \n #Set special Jinja2 syntax that will not conflict with latex.\n if self.jinja_logic_block_start:\n self.environment.block_start_string = self.jinja_logic_block_start\n if self.jinja_logic_block_end:\n self.environment.block_end_string = self.jinja_logic_block_end\n if self.jinja_variable_block_start:\n self.environment.variable_start_string = self.jinja_variable_block_start\n if self.jinja_variable_block_end:\n self.environment.variable_end_string = self.jinja_variable_block_end\n if self.jinja_comment_block_start:\n self.environment.comment_start_string = self.jinja_comment_block_start\n if self.jinja_comment_block_end:\n self.environment.comment_end_string = self.jinja_comment_block_end\n\n\n def _preprocess(self, nb, resources):\n \"\"\"\n Preprocess the notebook before passing it into the Jinja engine.\n To preprocess the notebook is to apply all of the \n \n Parameters\n ----------\n nb : notebook node\n notebook that is being exported.\n resources : a dict of additional resources that\n can be accessed read/write by transformers\n and filters.\n \"\"\"\n \n # Do a deepcopy first, \n # we are never safe enough with what the transformers could do.\n nbc = deepcopy(nb)\n resc = deepcopy(resources)\n #Run each transformer on the notebook. Carry the output along\n #to each transformer\n for transformer in self.transformers:\n nb, resources = transformer(nbc, resc)\n return nb, resources\n\n", "path": "IPython/nbconvert/exporters/exporter.py"}]} | 3,462 | 290 |
gh_patches_debug_13119 | rasdani/github-patches | git_diff | freqtrade__freqtrade-3616 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bittrex is not working for PriceFilter
## Describe your environment
* Operating system: Ubuntu Server
* Python Version: 3.8.2 (`python -V`)
* CCXT version: 1.30.48 (`pip freeze | grep ccxt`)
* Freqtrade Version: 2020.6 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
I have the following config file:
```
{
"max_open_trades": 10,
"stake_currency": "USDT",
"stake_amount": "unlimited",
"amend_last_stake_amount": true,
"tradable_balance_ratio": 0.99,
"fiat_display_currency": "USD",
"dry_run": true,
"dry_run_wallet": 200,
"unfilledtimeout": {
"buy": 10,
"sell": 30
},
"order_types": {
"buy": "limit",
"sell": "limit",
"stoploss": "limit",
"stoploss_on_exchange": false,
"stoploss_on_exchange_interval": 60
},
"bid_strategy": {
"ask_last_balance": 0.0,
"use_order_book": false,
"order_book_top": 1,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"ask_strategy": {
"use_order_book": false,
"order_book_min": 1,
"order_book_max": 9,
"use_sell_signal": true,
"sell_profit_only": false,
"ignore_roi_if_buy_signal": false
},
"exchange": {
"name": "bittrex",
"key": "",
"secret": "",
"ccxt_config": {
"enableRateLimit": true
},
"ccxt_async_config": {
"enableRateLimit": true,
"rateLimit": 500
},
"pair_blacklist": [
"BUSD/USDT",
"TUSD/USDT",
"USDC/USDT",
"PAX/USDT",
"BKRW/USDT",
"EUR/USDT",
"IDRT/USDT",
"NGN/USDT",
"RUB/USDT",
"TRY/USDT",
"ZAR/USDT"
]
},
"pairlists": [
{
"method": "VolumePairList",
"number_assets": 200,
"sort_key": "quoteVolume",
"refresh_period": 1800
},
{"method": "PrecisionFilter"},
{"method": "PriceFilter", "low_price_ratio": 0.02},
{"method": "SpreadFilter", "max_spread_ratio": 0.01},
{"method": "ShuffleFilter", "seed": 42},
{"method": "AgeFilter", "min_days_listed": 20}
],
"edge": {
"enabled": false,
"process_throttle_secs": 3600,
"calculate_since_number_of_days": 7,
"allowed_risk": 0.01,
"stoploss_range_min": -0.01,
"stoploss_range_max": -0.1,
"stoploss_range_step": -0.01,
"minimum_winrate": 0.60,
"minimum_expectancy": 0.20,
"min_trade_number": 10,
"max_trade_duration_minute": 1440,
"remove_pumps": false
},
"telegram": {
"enabled": true,
"token": "",
"chat_id": ""
},
"initial_state": "running",
"forcebuy_enable": false,
"internals": {
"process_throttle_secs": 5
}
}
```
If I try to run this configuration with **Bittrex** as the exchange, I receive the following error.
```
2020-07-21 01:27:44,324 - freqtrade.commands.trade_commands - ERROR - float division by zero
2020-07-21 01:27:44,324 - freqtrade.commands.trade_commands - ERROR - Fatal exception!
Traceback (most recent call last):
File "/home/vlad/freqtrade/freqtrade/commands/trade_commands.py", line 19, in start_trading
worker = Worker(args)
File "/home/vlad/freqtrade/freqtrade/worker.py", line 34, in __init__
self._init(False)
File "/home/vlad/freqtrade/freqtrade/worker.py", line 51, in _init
self.freqtrade = FreqtradeBot(self._config)
File "/home/vlad/freqtrade/freqtrade/freqtradebot.py", line 87, in __init__
self.active_pair_whitelist = self._refresh_active_whitelist()
File "/home/vlad/freqtrade/freqtrade/freqtradebot.py", line 184, in _refresh_active_whitelist
self.pairlists.refresh_pairlist()
File "/home/vlad/freqtrade/freqtrade/pairlist/pairlistmanager.py", line 95, in refresh_pairlist
pairlist = pairlist_handler.filter_pairlist(pairlist, tickers)
File "/home/vlad/freqtrade/freqtrade/pairlist/IPairList.py", line 131, in filter_pairlist
if not self._validate_pair(tickers[p]):
File "/home/vlad/freqtrade/freqtrade/pairlist/PriceFilter.py", line 50, in _validate_pair
changeperc = compare / ticker['last']
ZeroDivisionError: float division by zero
```
The interesting part is that if I run the same configuration on **Binance** instead of **Bittrex**, everything works properly as expected. So far I'm getting this error only on **Bittrex**.
--- END ISSUE ---
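A minimal sketch of the failure mode described above; `is_tradable` below is a standalone illustration (not freqtrade's API), and the ticker values are made up to mimic an illiquid Bittrex pair:

```python
def is_tradable(ticker, one_pip, low_price_ratio=0.02):
    # Illiquid pairs can report a last price of 0 (or None); dividing by 0
    # is the ZeroDivisionError seen in the traceback above.
    last = ticker.get('last')
    if not last:  # rejects both None and 0
        return False
    return (one_pip / last) <= low_price_ratio

print(is_tradable({'symbol': 'XYZ/USDT', 'last': 0.0}, one_pip=1e-08))    # False
print(is_tradable({'symbol': 'BTC/USDT', 'last': 9000.0}, one_pip=0.01))  # True
```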
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/pairlist/PriceFilter.py`
Content:
```
1 """
2 Price pair list filter
3 """
4 import logging
5 from typing import Any, Dict
6
7 from freqtrade.pairlist.IPairList import IPairList
8
9
10 logger = logging.getLogger(__name__)
11
12
13 class PriceFilter(IPairList):
14
15 def __init__(self, exchange, pairlistmanager,
16 config: Dict[str, Any], pairlistconfig: Dict[str, Any],
17 pairlist_pos: int) -> None:
18 super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)
19
20 self._low_price_ratio = pairlistconfig.get('low_price_ratio', 0)
21 self._min_price = pairlistconfig.get('min_price', 0)
22 self._max_price = pairlistconfig.get('max_price', 0)
23 self._enabled = ((self._low_price_ratio != 0) or
24 (self._min_price != 0) or
25 (self._max_price != 0))
26
27 @property
28 def needstickers(self) -> bool:
29 """
30 Boolean property defining if tickers are necessary.
31 If no Pairlist requires tickers, an empty List is passed
32 as tickers argument to filter_pairlist
33 """
34 return True
35
36 def short_desc(self) -> str:
37 """
38 Short whitelist method description - used for startup-messages
39 """
40 active_price_filters = []
41 if self._low_price_ratio != 0:
42 active_price_filters.append(f"below {self._low_price_ratio * 100}%")
43 if self._min_price != 0:
44 active_price_filters.append(f"below {self._min_price:.8f}")
45 if self._max_price != 0:
46 active_price_filters.append(f"above {self._max_price:.8f}")
47
48 if len(active_price_filters):
49 return f"{self.name} - Filtering pairs priced {' or '.join(active_price_filters)}."
50
51 return f"{self.name} - No price filters configured."
52
53 def _validate_pair(self, ticker) -> bool:
54 """
55 Check if if one price-step (pip) is > than a certain barrier.
56 :param ticker: ticker dict as returned from ccxt.load_markets()
57 :return: True if the pair can stay, false if it should be removed
58 """
59 if ticker['last'] is None:
60 self.log_on_refresh(logger.info,
61 f"Removed {ticker['symbol']} from whitelist, because "
62 "ticker['last'] is empty (Usually no trade in the last 24h).")
63 return False
64
65 # Perform low_price_ratio check.
66 if self._low_price_ratio != 0:
67 compare = self._exchange.price_get_one_pip(ticker['symbol'], ticker['last'])
68 changeperc = compare / ticker['last']
69 if changeperc > self._low_price_ratio:
70 self.log_on_refresh(logger.info, f"Removed {ticker['symbol']} from whitelist, "
71 f"because 1 unit is {changeperc * 100:.3f}%")
72 return False
73
74 # Perform min_price check.
75 if self._min_price != 0:
76 if ticker['last'] < self._min_price:
77 self.log_on_refresh(logger.info, f"Removed {ticker['symbol']} from whitelist, "
78 f"because last price < {self._min_price:.8f}")
79 return False
80
81 # Perform max_price check.
82 if self._max_price != 0:
83 if ticker['last'] > self._max_price:
84 self.log_on_refresh(logger.info, f"Removed {ticker['symbol']} from whitelist, "
85 f"because last price > {self._max_price:.8f}")
86 return False
87
88 return True
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/pairlist/PriceFilter.py b/freqtrade/pairlist/PriceFilter.py
--- a/freqtrade/pairlist/PriceFilter.py
+++ b/freqtrade/pairlist/PriceFilter.py
@@ -56,7 +56,7 @@
:param ticker: ticker dict as returned from ccxt.load_markets()
:return: True if the pair can stay, false if it should be removed
"""
- if ticker['last'] is None:
+ if ticker['last'] is None or ticker['last'] == 0:
self.log_on_refresh(logger.info,
f"Removed {ticker['symbol']} from whitelist, because "
"ticker['last'] is empty (Usually no trade in the last 24h).")
| {"golden_diff": "diff --git a/freqtrade/pairlist/PriceFilter.py b/freqtrade/pairlist/PriceFilter.py\n--- a/freqtrade/pairlist/PriceFilter.py\n+++ b/freqtrade/pairlist/PriceFilter.py\n@@ -56,7 +56,7 @@\n :param ticker: ticker dict as returned from ccxt.load_markets()\n :return: True if the pair can stay, false if it should be removed\n \"\"\"\n- if ticker['last'] is None:\n+ if ticker['last'] is None or ticker['last'] == 0:\n self.log_on_refresh(logger.info,\n f\"Removed {ticker['symbol']} from whitelist, because \"\n \"ticker['last'] is empty (Usually no trade in the last 24h).\")\n", "issue": "Bittrex is not working for PriceFilter\n## Describe your environment\r\n\r\n * Operating system: Ubuntu Server\r\n * Python Version: 3.8.2 (`python -V`)\r\n * CCXT version: 1.30.48 (`pip freeze | grep ccxt`)\r\n * Freqtrade Version: 2020.6 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)\r\n \r\nNote: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.\r\n\r\n## Describe the problem:\r\n\r\nI have the following config file:\r\n\r\n```\r\n{\r\n \"max_open_trades\": 10,\r\n \"stake_currency\": \"USDT\",\r\n \"stake_amount\": \"unlimited\",\r\n \"amend_last_stake_amount\": true,\r\n \"tradable_balance_ratio\": 0.99,\r\n \"fiat_display_currency\": \"USD\",\r\n \"dry_run\": true,\r\n \"dry_run_wallet\": 200,\r\n \"unfilledtimeout\": {\r\n \"buy\": 10,\r\n \"sell\": 30\r\n },\r\n \"order_types\": {\r\n \"buy\": \"limit\",\r\n \"sell\": \"limit\",\r\n \"stoploss\": \"limit\",\r\n \"stoploss_on_exchange\": false,\r\n \"stoploss_on_exchange_interval\": 60\r\n },\r\n \"bid_strategy\": {\r\n \"ask_last_balance\": 0.0,\r\n \"use_order_book\": false,\r\n \"order_book_top\": 1,\r\n \"check_depth_of_market\": {\r\n \"enabled\": false,\r\n \"bids_to_ask_delta\": 1\r\n }\r\n },\r\n \"ask_strategy\": {\r\n \"use_order_book\": false,\r\n \"order_book_min\": 1,\r\n \"order_book_max\": 9,\r\n \"use_sell_signal\": true,\r\n \"sell_profit_only\": false,\r\n \"ignore_roi_if_buy_signal\": false\r\n },\r\n \"exchange\": {\r\n \"name\": \"bittrex\",\r\n \"key\": \"\",\r\n \"secret\": \"\",\r\n \"ccxt_config\": {\r\n \"enableRateLimit\": true\r\n },\r\n \"ccxt_async_config\": {\r\n \"enableRateLimit\": true,\r\n \"rateLimit\": 500\r\n },\r\n \"pair_blacklist\": [\r\n \"BUSD/USDT\",\r\n \"TUSD/USDT\",\r\n \"USDC/USDT\",\r\n \"PAX/USDT\",\r\n \"BKRW/USDT\",\r\n \"EUR/USDT\",\r\n \"IDRT/USDT\",\r\n \"NGN/USDT\",\r\n \"RUB/USDT\",\r\n \"TRY/USDT\",\r\n \"ZAR/USDT\"\r\n ]\r\n },\r\n \"pairlists\": [\r\n {\r\n \"method\": \"VolumePairList\",\r\n \"number_assets\": 200,\r\n \"sort_key\": \"quoteVolume\",\r\n \"refresh_period\": 1800\r\n },\r\n {\"method\": \"PrecisionFilter\"},\r\n {\"method\": \"PriceFilter\", \"low_price_ratio\": 0.02},\r\n {\"method\": \"SpreadFilter\", \"max_spread_ratio\": 0.01},\r\n {\"method\": \"ShuffleFilter\", \"seed\": 42},\r\n {\"method\": \"AgeFilter\", \"min_days_listed\": 20}\r\n\r\n ],\r\n \"edge\": {\r\n \"enabled\": false,\r\n \"process_throttle_secs\": 3600,\r\n \"calculate_since_number_of_days\": 7,\r\n \"allowed_risk\": 0.01,\r\n \"stoploss_range_min\": -0.01,\r\n \"stoploss_range_max\": -0.1,\r\n \"stoploss_range_step\": -0.01,\r\n \"minimum_winrate\": 0.60,\r\n \"minimum_expectancy\": 0.20,\r\n \"min_trade_number\": 10,\r\n \"max_trade_duration_minute\": 1440,\r\n \"remove_pumps\": false\r\n },\r\n \"telegram\": {\r\n \"enabled\": true,\r\n \"token\": \"\",\r\n 
\"chat_id\": \"\"\r\n },\r\n \"initial_state\": \"running\",\r\n \"forcebuy_enable\": false,\r\n \"internals\": {\r\n \"process_throttle_secs\": 5\r\n }\r\n}\r\n```\r\n\r\nIf I'm trying to run this configuration with **Bittrex** as exchanger I'm going to receive the following error.\r\n\r\n```\r\n2020-07-21 01:27:44,324 - freqtrade.commands.trade_commands - ERROR - float division by zero\r\n2020-07-21 01:27:44,324 - freqtrade.commands.trade_commands - ERROR - Fatal exception!\r\nTraceback (most recent call last):\r\n File \"/home/vlad/freqtrade/freqtrade/commands/trade_commands.py\", line 19, in start_trading\r\n worker = Worker(args)\r\n File \"/home/vlad/freqtrade/freqtrade/worker.py\", line 34, in __init__\r\n self._init(False)\r\n File \"/home/vlad/freqtrade/freqtrade/worker.py\", line 51, in _init\r\n self.freqtrade = FreqtradeBot(self._config)\r\n File \"/home/vlad/freqtrade/freqtrade/freqtradebot.py\", line 87, in __init__\r\n self.active_pair_whitelist = self._refresh_active_whitelist()\r\n File \"/home/vlad/freqtrade/freqtrade/freqtradebot.py\", line 184, in _refresh_active_whitelist\r\n self.pairlists.refresh_pairlist()\r\n File \"/home/vlad/freqtrade/freqtrade/pairlist/pairlistmanager.py\", line 95, in refresh_pairlist\r\n pairlist = pairlist_handler.filter_pairlist(pairlist, tickers)\r\n File \"/home/vlad/freqtrade/freqtrade/pairlist/IPairList.py\", line 131, in filter_pairlist\r\n if not self._validate_pair(tickers[p]):\r\n File \"/home/vlad/freqtrade/freqtrade/pairlist/PriceFilter.py\", line 50, in _validate_pair\r\n changeperc = compare / ticker['last']\r\nZeroDivisionError: float division by zero\r\n```\r\n\r\n\r\nThe interesting part is that if I'm running the same configuration but on **Binance** instead of **Bittrex** everything is going to work properly as expected. 
Just on **Bittrex** so far I'm getting this error message.\n", "before_files": [{"content": "\"\"\"\nPrice pair list filter\n\"\"\"\nimport logging\nfrom typing import Any, Dict\n\nfrom freqtrade.pairlist.IPairList import IPairList\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PriceFilter(IPairList):\n\n def __init__(self, exchange, pairlistmanager,\n config: Dict[str, Any], pairlistconfig: Dict[str, Any],\n pairlist_pos: int) -> None:\n super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)\n\n self._low_price_ratio = pairlistconfig.get('low_price_ratio', 0)\n self._min_price = pairlistconfig.get('min_price', 0)\n self._max_price = pairlistconfig.get('max_price', 0)\n self._enabled = ((self._low_price_ratio != 0) or\n (self._min_price != 0) or\n (self._max_price != 0))\n\n @property\n def needstickers(self) -> bool:\n \"\"\"\n Boolean property defining if tickers are necessary.\n If no Pairlist requires tickers, an empty List is passed\n as tickers argument to filter_pairlist\n \"\"\"\n return True\n\n def short_desc(self) -> str:\n \"\"\"\n Short whitelist method description - used for startup-messages\n \"\"\"\n active_price_filters = []\n if self._low_price_ratio != 0:\n active_price_filters.append(f\"below {self._low_price_ratio * 100}%\")\n if self._min_price != 0:\n active_price_filters.append(f\"below {self._min_price:.8f}\")\n if self._max_price != 0:\n active_price_filters.append(f\"above {self._max_price:.8f}\")\n\n if len(active_price_filters):\n return f\"{self.name} - Filtering pairs priced {' or '.join(active_price_filters)}.\"\n\n return f\"{self.name} - No price filters configured.\"\n\n def _validate_pair(self, ticker) -> bool:\n \"\"\"\n Check if if one price-step (pip) is > than a certain barrier.\n :param ticker: ticker dict as returned from ccxt.load_markets()\n :return: True if the pair can stay, false if it should be removed\n \"\"\"\n if ticker['last'] is None:\n self.log_on_refresh(logger.info,\n f\"Removed {ticker['symbol']} from whitelist, because \"\n \"ticker['last'] is empty (Usually no trade in the last 24h).\")\n return False\n\n # Perform low_price_ratio check.\n if self._low_price_ratio != 0:\n compare = self._exchange.price_get_one_pip(ticker['symbol'], ticker['last'])\n changeperc = compare / ticker['last']\n if changeperc > self._low_price_ratio:\n self.log_on_refresh(logger.info, f\"Removed {ticker['symbol']} from whitelist, \"\n f\"because 1 unit is {changeperc * 100:.3f}%\")\n return False\n\n # Perform min_price check.\n if self._min_price != 0:\n if ticker['last'] < self._min_price:\n self.log_on_refresh(logger.info, f\"Removed {ticker['symbol']} from whitelist, \"\n f\"because last price < {self._min_price:.8f}\")\n return False\n\n # Perform max_price check.\n if self._max_price != 0:\n if ticker['last'] > self._max_price:\n self.log_on_refresh(logger.info, f\"Removed {ticker['symbol']} from whitelist, \"\n f\"because last price > {self._max_price:.8f}\")\n return False\n\n return True\n", "path": "freqtrade/pairlist/PriceFilter.py"}], "after_files": [{"content": "\"\"\"\nPrice pair list filter\n\"\"\"\nimport logging\nfrom typing import Any, Dict\n\nfrom freqtrade.pairlist.IPairList import IPairList\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PriceFilter(IPairList):\n\n def __init__(self, exchange, pairlistmanager,\n config: Dict[str, Any], pairlistconfig: Dict[str, Any],\n pairlist_pos: int) -> None:\n super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)\n\n 
self._low_price_ratio = pairlistconfig.get('low_price_ratio', 0)\n self._min_price = pairlistconfig.get('min_price', 0)\n self._max_price = pairlistconfig.get('max_price', 0)\n self._enabled = ((self._low_price_ratio != 0) or\n (self._min_price != 0) or\n (self._max_price != 0))\n\n @property\n def needstickers(self) -> bool:\n \"\"\"\n Boolean property defining if tickers are necessary.\n If no Pairlist requires tickers, an empty List is passed\n as tickers argument to filter_pairlist\n \"\"\"\n return True\n\n def short_desc(self) -> str:\n \"\"\"\n Short whitelist method description - used for startup-messages\n \"\"\"\n active_price_filters = []\n if self._low_price_ratio != 0:\n active_price_filters.append(f\"below {self._low_price_ratio * 100}%\")\n if self._min_price != 0:\n active_price_filters.append(f\"below {self._min_price:.8f}\")\n if self._max_price != 0:\n active_price_filters.append(f\"above {self._max_price:.8f}\")\n\n if len(active_price_filters):\n return f\"{self.name} - Filtering pairs priced {' or '.join(active_price_filters)}.\"\n\n return f\"{self.name} - No price filters configured.\"\n\n def _validate_pair(self, ticker) -> bool:\n \"\"\"\n Check if if one price-step (pip) is > than a certain barrier.\n :param ticker: ticker dict as returned from ccxt.load_markets()\n :return: True if the pair can stay, false if it should be removed\n \"\"\"\n if ticker['last'] is None or ticker['last'] == 0:\n self.log_on_refresh(logger.info,\n f\"Removed {ticker['symbol']} from whitelist, because \"\n \"ticker['last'] is empty (Usually no trade in the last 24h).\")\n return False\n\n # Perform low_price_ratio check.\n if self._low_price_ratio != 0:\n compare = self._exchange.price_get_one_pip(ticker['symbol'], ticker['last'])\n changeperc = compare / ticker['last']\n if changeperc > self._low_price_ratio:\n self.log_on_refresh(logger.info, f\"Removed {ticker['symbol']} from whitelist, \"\n f\"because 1 unit is {changeperc * 100:.3f}%\")\n return False\n\n # Perform min_price check.\n if self._min_price != 0:\n if ticker['last'] < self._min_price:\n self.log_on_refresh(logger.info, f\"Removed {ticker['symbol']} from whitelist, \"\n f\"because last price < {self._min_price:.8f}\")\n return False\n\n # Perform max_price check.\n if self._max_price != 0:\n if ticker['last'] > self._max_price:\n self.log_on_refresh(logger.info, f\"Removed {ticker['symbol']} from whitelist, \"\n f\"because last price > {self._max_price:.8f}\")\n return False\n\n return True\n", "path": "freqtrade/pairlist/PriceFilter.py"}]} | 2,635 | 171 |
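
The fix captured in the record above comes down to one guard: treat a zero `last` price the same as a missing one before dividing by it. A minimal, self-contained sketch of that guard (the helper name and the toy ticker dictionaries are made up for illustration; this is not freqtrade's actual API):

```python
def change_per_pip(ticker, one_pip):
    """Relative size of one price step, or None when the last price is
    missing or zero (typically no trade in the last 24h)."""
    last = ticker.get("last")
    if last is None or last == 0:
        # The added guard: avoids ZeroDivisionError on illiquid Bittrex pairs.
        return None
    return one_pip / last


print(change_per_pip({"symbol": "XYZ/BTC", "last": 0}, 1e-08))     # None
print(change_per_pip({"symbol": "ETH/BTC", "last": 0.05}, 1e-08))  # 2e-07
```
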
gh_patches_debug_38599 | rasdani/github-patches | git_diff | kivy__python-for-android-1765 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken libglob recipe
`libglob` recipe compilation fails for the exact same reason problem as ifaddrs.
See details https://github.com/kivy/python-for-android/issues/1398
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/libglob/__init__.py`
Content:
```
1 """
2 android libglob
3 available via '-lglob' LDFLAG
4 """
5 from os.path import exists, join
6 from pythonforandroid.recipe import CompiledComponentsPythonRecipe
7 from pythonforandroid.toolchain import current_directory
8 from pythonforandroid.logger import info, shprint
9 import sh
10
11
12 class LibGlobRecipe(CompiledComponentsPythonRecipe):
13 """Make a glob.h and glob.so for the python_install_dir()"""
14 version = '0.0.1'
15 url = None
16 #
17 # glob.h and glob.c extracted from
18 # https://github.com/white-gecko/TokyoCabinet, e.g.:
19 # https://raw.githubusercontent.com/white-gecko/TokyoCabinet/master/glob.h
20 # https://raw.githubusercontent.com/white-gecko/TokyoCabinet/master/glob.c
21 # and pushed in via patch
22 name = 'libglob'
23
24 depends = [('hostpython2', 'hostpython3')]
25 patches = ['glob.patch']
26
27 def should_build(self, arch):
28 """It's faster to build than check"""
29 return True
30
31 def prebuild_arch(self, arch):
32 """Make the build and target directories"""
33 path = self.get_build_dir(arch.arch)
34 if not exists(path):
35 info("creating {}".format(path))
36 shprint(sh.mkdir, '-p', path)
37
38 def build_arch(self, arch):
39 """simple shared compile"""
40 env = self.get_recipe_env(arch, with_flags_in_cc=False)
41 for path in (
42 self.get_build_dir(arch.arch),
43 join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Lib'),
44 join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')):
45 if not exists(path):
46 info("creating {}".format(path))
47 shprint(sh.mkdir, '-p', path)
48 cli = env['CC'].split()
49 cc = sh.Command(cli[0])
50
51 with current_directory(self.get_build_dir(arch.arch)):
52 cflags = env['CFLAGS'].split()
53 cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.']) # , '-o', 'glob.o'])
54 shprint(cc, *cflags, _env=env)
55
56 cflags = env['CFLAGS'].split()
57 srindex = cflags.index('--sysroot')
58 if srindex:
59 cflags[srindex+1] = self.ctx.ndk_platform
60 cflags.extend(['-shared', '-I.', 'glob.o', '-o', 'libglob.so'])
61 shprint(cc, *cflags, _env=env)
62
63 shprint(sh.cp, 'libglob.so', join(self.ctx.libs_dir, arch.arch))
64 shprint(sh.cp, "libglob.so", join(self.ctx.get_python_install_dir(), 'lib'))
65 # drop header in to the Python include directory
66 shprint(sh.cp, "glob.h", join(self.ctx.get_python_install_dir(),
67 'include/python{}'.format(
68 self.ctx.python_recipe.version[0:3]
69 )
70 )
71 )
72 include_path = join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')
73 shprint(sh.cp, "glob.h", include_path)
74
75
76 recipe = LibGlobRecipe()
77
```
Path: `ci/constants.py`
Content:
```
1 from enum import Enum
2
3
4 class TargetPython(Enum):
5 python2 = 0
6 python3crystax = 1
7 python3 = 2
8
9
10 # recipes that currently break the build
11 # a recipe could be broken for a target Python and not for the other,
12 # hence we're maintaining one list per Python target
13 BROKEN_RECIPES_PYTHON2 = set([
14 # pythonhelpers.h:12:18: fatal error: string: No such file or directory
15 'atom',
16 # https://github.com/kivy/python-for-android/issues/550
17 'audiostream',
18 'brokenrecipe',
19 'evdev',
20 # distutils.errors.DistutilsError
21 # Could not find suitable distribution for Requirement.parse('cython')
22 'ffpyplayer',
23 'flask',
24 'groestlcoin_hash',
25 'hostpython3crystax',
26 # https://github.com/kivy/python-for-android/issues/1354
27 'kiwisolver',
28 # https://github.com/kivy/python-for-android/issues/1399
29 'libglob',
30 'libmysqlclient',
31 'libsecp256k1',
32 'libtribler',
33 'ndghttpsclient',
34 'm2crypto',
35 # ImportError: No module named setuptools
36 'netifaces',
37 'Pillow',
38 # depends on cffi that still seems to have compilation issues
39 'protobuf_cpp',
40 'xeddsa',
41 'x3dh',
42 'pynacl',
43 'doubleratchet',
44 # The opencv recipe fails to pass travis tests due to the long processing
45 # when building it and the lack of console output, so, it's only broken
46 # for travis, see: https://github.com/kivy/python-for-android/pull/1661
47 'opencv',
48 'omemo',
49 # requires `libpq-dev` system dependency e.g. for `pg_config` binary
50 'psycopg2',
51 'pygame',
52 # most likely some setup in the Docker container, because it works in host
53 'pyjnius', 'pyopenal',
54 'pyproj',
55 'pysdl2',
56 'pyzmq',
57 'secp256k1',
58 'shapely',
59 # mpmath package with a version >= 0.19 required
60 'sympy',
61 'twisted',
62 'vlc',
63 'websocket-client',
64 'zeroconf',
65 'zope',
66 ])
67 BROKEN_RECIPES_PYTHON3 = set([
68 'brokenrecipe',
69 # enum34 is not compatible with Python 3.6 standard library
70 # https://stackoverflow.com/a/45716067/185510
71 'enum34',
72 # https://github.com/kivy/python-for-android/issues/1399
73 'libglob',
74 # build_dir = glob.glob('build/lib.*')[0]
75 # IndexError: list index out of range
76 'secp256k1',
77 'ffpyplayer',
78 'icu',
79 # https://github.com/kivy/python-for-android/issues/1354
80 # The opencv recipe fails to pass travis tests due to the long processing
81 # when building it and the lack of console output, so, it's only broken
82 # for travis, see: https://github.com/kivy/python-for-android/pull/1661
83 'opencv',
84 # requires `libpq-dev` system dependency e.g. for `pg_config` binary
85 'psycopg2',
86 'protobuf_cpp',
87 # most likely some setup in the Docker container, because it works in host
88 'pyjnius', 'pyopenal',
89 # SyntaxError: invalid syntax (Python2)
90 'storm',
91 # mpmath package with a version >= 0.19 required
92 'sympy',
93 'vlc',
94 ])
95
96 BROKEN_RECIPES = {
97 TargetPython.python2: BROKEN_RECIPES_PYTHON2,
98 TargetPython.python3: BROKEN_RECIPES_PYTHON3,
99 }
100 # recipes that were already built will be skipped
101 CORE_RECIPES = set([
102 'pyjnius', 'kivy', 'openssl', 'requests', 'sqlite3', 'setuptools',
103 'numpy', 'android', 'python2', 'python3',
104 ])
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ci/constants.py b/ci/constants.py
--- a/ci/constants.py
+++ b/ci/constants.py
@@ -25,8 +25,6 @@
'hostpython3crystax',
# https://github.com/kivy/python-for-android/issues/1354
'kiwisolver',
- # https://github.com/kivy/python-for-android/issues/1399
- 'libglob',
'libmysqlclient',
'libsecp256k1',
'libtribler',
@@ -69,8 +67,6 @@
# enum34 is not compatible with Python 3.6 standard library
# https://stackoverflow.com/a/45716067/185510
'enum34',
- # https://github.com/kivy/python-for-android/issues/1399
- 'libglob',
# build_dir = glob.glob('build/lib.*')[0]
# IndexError: list index out of range
'secp256k1',
diff --git a/pythonforandroid/recipes/libglob/__init__.py b/pythonforandroid/recipes/libglob/__init__.py
--- a/pythonforandroid/recipes/libglob/__init__.py
+++ b/pythonforandroid/recipes/libglob/__init__.py
@@ -45,32 +45,22 @@
if not exists(path):
info("creating {}".format(path))
shprint(sh.mkdir, '-p', path)
- cli = env['CC'].split()
- cc = sh.Command(cli[0])
+ cli = env['CC'].split()[0]
+ # makes sure first CC command is the compiler rather than ccache, refs:
+ # https://github.com/kivy/python-for-android/issues/1399
+ if 'ccache' in cli:
+ cli = env['CC'].split()[1]
+ cc = sh.Command(cli)
with current_directory(self.get_build_dir(arch.arch)):
cflags = env['CFLAGS'].split()
- cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.']) # , '-o', 'glob.o'])
+ cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.'])
shprint(cc, *cflags, _env=env)
-
cflags = env['CFLAGS'].split()
- srindex = cflags.index('--sysroot')
- if srindex:
- cflags[srindex+1] = self.ctx.ndk_platform
cflags.extend(['-shared', '-I.', 'glob.o', '-o', 'libglob.so'])
+ cflags.extend(env['LDFLAGS'].split())
shprint(cc, *cflags, _env=env)
-
shprint(sh.cp, 'libglob.so', join(self.ctx.libs_dir, arch.arch))
- shprint(sh.cp, "libglob.so", join(self.ctx.get_python_install_dir(), 'lib'))
- # drop header in to the Python include directory
- shprint(sh.cp, "glob.h", join(self.ctx.get_python_install_dir(),
- 'include/python{}'.format(
- self.ctx.python_recipe.version[0:3]
- )
- )
- )
- include_path = join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')
- shprint(sh.cp, "glob.h", include_path)
recipe = LibGlobRecipe()
| {"golden_diff": "diff --git a/ci/constants.py b/ci/constants.py\n--- a/ci/constants.py\n+++ b/ci/constants.py\n@@ -25,8 +25,6 @@\n 'hostpython3crystax',\n # https://github.com/kivy/python-for-android/issues/1354\n 'kiwisolver',\n- # https://github.com/kivy/python-for-android/issues/1399\n- 'libglob',\n 'libmysqlclient',\n 'libsecp256k1',\n 'libtribler',\n@@ -69,8 +67,6 @@\n # enum34 is not compatible with Python 3.6 standard library\n # https://stackoverflow.com/a/45716067/185510\n 'enum34',\n- # https://github.com/kivy/python-for-android/issues/1399\n- 'libglob',\n # build_dir = glob.glob('build/lib.*')[0]\n # IndexError: list index out of range\n 'secp256k1',\ndiff --git a/pythonforandroid/recipes/libglob/__init__.py b/pythonforandroid/recipes/libglob/__init__.py\n--- a/pythonforandroid/recipes/libglob/__init__.py\n+++ b/pythonforandroid/recipes/libglob/__init__.py\n@@ -45,32 +45,22 @@\n if not exists(path):\n info(\"creating {}\".format(path))\n shprint(sh.mkdir, '-p', path)\n- cli = env['CC'].split()\n- cc = sh.Command(cli[0])\n+ cli = env['CC'].split()[0]\n+ # makes sure first CC command is the compiler rather than ccache, refs:\n+ # https://github.com/kivy/python-for-android/issues/1399\n+ if 'ccache' in cli:\n+ cli = env['CC'].split()[1]\n+ cc = sh.Command(cli)\n \n with current_directory(self.get_build_dir(arch.arch)):\n cflags = env['CFLAGS'].split()\n- cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.']) # , '-o', 'glob.o'])\n+ cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.'])\n shprint(cc, *cflags, _env=env)\n-\n cflags = env['CFLAGS'].split()\n- srindex = cflags.index('--sysroot')\n- if srindex:\n- cflags[srindex+1] = self.ctx.ndk_platform\n cflags.extend(['-shared', '-I.', 'glob.o', '-o', 'libglob.so'])\n+ cflags.extend(env['LDFLAGS'].split())\n shprint(cc, *cflags, _env=env)\n-\n shprint(sh.cp, 'libglob.so', join(self.ctx.libs_dir, arch.arch))\n- shprint(sh.cp, \"libglob.so\", join(self.ctx.get_python_install_dir(), 'lib'))\n- # drop header in to the Python include directory\n- shprint(sh.cp, \"glob.h\", join(self.ctx.get_python_install_dir(),\n- 'include/python{}'.format(\n- self.ctx.python_recipe.version[0:3]\n- )\n- )\n- )\n- include_path = join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')\n- shprint(sh.cp, \"glob.h\", include_path)\n \n \n recipe = LibGlobRecipe()\n", "issue": "Broken libglob recipe\n`libglob` recipe compilation fails for the exact same reason problem as ifaddrs.\r\nSee details https://github.com/kivy/python-for-android/issues/1398\n", "before_files": [{"content": "\"\"\"\n android libglob\n available via '-lglob' LDFLAG\n\"\"\"\nfrom os.path import exists, join\nfrom pythonforandroid.recipe import CompiledComponentsPythonRecipe\nfrom pythonforandroid.toolchain import current_directory\nfrom pythonforandroid.logger import info, shprint\nimport sh\n\n\nclass LibGlobRecipe(CompiledComponentsPythonRecipe):\n \"\"\"Make a glob.h and glob.so for the python_install_dir()\"\"\"\n version = '0.0.1'\n url = None\n #\n # glob.h and glob.c extracted from\n # https://github.com/white-gecko/TokyoCabinet, e.g.:\n # https://raw.githubusercontent.com/white-gecko/TokyoCabinet/master/glob.h\n # https://raw.githubusercontent.com/white-gecko/TokyoCabinet/master/glob.c\n # and pushed in via patch\n name = 'libglob'\n\n depends = [('hostpython2', 'hostpython3')]\n patches = ['glob.patch']\n\n def should_build(self, arch):\n \"\"\"It's faster to build than check\"\"\"\n return True\n\n def prebuild_arch(self, arch):\n \"\"\"Make the build and target 
directories\"\"\"\n path = self.get_build_dir(arch.arch)\n if not exists(path):\n info(\"creating {}\".format(path))\n shprint(sh.mkdir, '-p', path)\n\n def build_arch(self, arch):\n \"\"\"simple shared compile\"\"\"\n env = self.get_recipe_env(arch, with_flags_in_cc=False)\n for path in (\n self.get_build_dir(arch.arch),\n join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Lib'),\n join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')):\n if not exists(path):\n info(\"creating {}\".format(path))\n shprint(sh.mkdir, '-p', path)\n cli = env['CC'].split()\n cc = sh.Command(cli[0])\n\n with current_directory(self.get_build_dir(arch.arch)):\n cflags = env['CFLAGS'].split()\n cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.']) # , '-o', 'glob.o'])\n shprint(cc, *cflags, _env=env)\n\n cflags = env['CFLAGS'].split()\n srindex = cflags.index('--sysroot')\n if srindex:\n cflags[srindex+1] = self.ctx.ndk_platform\n cflags.extend(['-shared', '-I.', 'glob.o', '-o', 'libglob.so'])\n shprint(cc, *cflags, _env=env)\n\n shprint(sh.cp, 'libglob.so', join(self.ctx.libs_dir, arch.arch))\n shprint(sh.cp, \"libglob.so\", join(self.ctx.get_python_install_dir(), 'lib'))\n # drop header in to the Python include directory\n shprint(sh.cp, \"glob.h\", join(self.ctx.get_python_install_dir(),\n 'include/python{}'.format(\n self.ctx.python_recipe.version[0:3]\n )\n )\n )\n include_path = join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')\n shprint(sh.cp, \"glob.h\", include_path)\n\n\nrecipe = LibGlobRecipe()\n", "path": "pythonforandroid/recipes/libglob/__init__.py"}, {"content": "from enum import Enum\n\n\nclass TargetPython(Enum):\n python2 = 0\n python3crystax = 1\n python3 = 2\n\n\n# recipes that currently break the build\n# a recipe could be broken for a target Python and not for the other,\n# hence we're maintaining one list per Python target\nBROKEN_RECIPES_PYTHON2 = set([\n # pythonhelpers.h:12:18: fatal error: string: No such file or directory\n 'atom',\n # https://github.com/kivy/python-for-android/issues/550\n 'audiostream',\n 'brokenrecipe',\n 'evdev',\n # distutils.errors.DistutilsError\n # Could not find suitable distribution for Requirement.parse('cython')\n 'ffpyplayer',\n 'flask',\n 'groestlcoin_hash',\n 'hostpython3crystax',\n # https://github.com/kivy/python-for-android/issues/1354\n 'kiwisolver',\n # https://github.com/kivy/python-for-android/issues/1399\n 'libglob',\n 'libmysqlclient',\n 'libsecp256k1',\n 'libtribler',\n 'ndghttpsclient',\n 'm2crypto',\n # ImportError: No module named setuptools\n 'netifaces',\n 'Pillow',\n # depends on cffi that still seems to have compilation issues\n 'protobuf_cpp',\n 'xeddsa',\n 'x3dh',\n 'pynacl',\n 'doubleratchet',\n # The opencv recipe fails to pass travis tests due to the long processing\n # when building it and the lack of console output, so, it's only broken\n # for travis, see: https://github.com/kivy/python-for-android/pull/1661\n 'opencv',\n 'omemo',\n # requires `libpq-dev` system dependency e.g. 
for `pg_config` binary\n 'psycopg2',\n 'pygame',\n # most likely some setup in the Docker container, because it works in host\n 'pyjnius', 'pyopenal',\n 'pyproj',\n 'pysdl2',\n 'pyzmq',\n 'secp256k1',\n 'shapely',\n # mpmath package with a version >= 0.19 required\n 'sympy',\n 'twisted',\n 'vlc',\n 'websocket-client',\n 'zeroconf',\n 'zope',\n])\nBROKEN_RECIPES_PYTHON3 = set([\n 'brokenrecipe',\n # enum34 is not compatible with Python 3.6 standard library\n # https://stackoverflow.com/a/45716067/185510\n 'enum34',\n # https://github.com/kivy/python-for-android/issues/1399\n 'libglob',\n # build_dir = glob.glob('build/lib.*')[0]\n # IndexError: list index out of range\n 'secp256k1',\n 'ffpyplayer',\n 'icu',\n # https://github.com/kivy/python-for-android/issues/1354\n # The opencv recipe fails to pass travis tests due to the long processing\n # when building it and the lack of console output, so, it's only broken\n # for travis, see: https://github.com/kivy/python-for-android/pull/1661\n 'opencv',\n # requires `libpq-dev` system dependency e.g. for `pg_config` binary\n 'psycopg2',\n 'protobuf_cpp',\n # most likely some setup in the Docker container, because it works in host\n 'pyjnius', 'pyopenal',\n # SyntaxError: invalid syntax (Python2)\n 'storm',\n # mpmath package with a version >= 0.19 required\n 'sympy',\n 'vlc',\n])\n\nBROKEN_RECIPES = {\n TargetPython.python2: BROKEN_RECIPES_PYTHON2,\n TargetPython.python3: BROKEN_RECIPES_PYTHON3,\n}\n# recipes that were already built will be skipped\nCORE_RECIPES = set([\n 'pyjnius', 'kivy', 'openssl', 'requests', 'sqlite3', 'setuptools',\n 'numpy', 'android', 'python2', 'python3',\n])\n", "path": "ci/constants.py"}], "after_files": [{"content": "\"\"\"\n android libglob\n available via '-lglob' LDFLAG\n\"\"\"\nfrom os.path import exists, join\nfrom pythonforandroid.recipe import CompiledComponentsPythonRecipe\nfrom pythonforandroid.toolchain import current_directory\nfrom pythonforandroid.logger import info, shprint\nimport sh\n\n\nclass LibGlobRecipe(CompiledComponentsPythonRecipe):\n \"\"\"Make a glob.h and glob.so for the python_install_dir()\"\"\"\n version = '0.0.1'\n url = None\n #\n # glob.h and glob.c extracted from\n # https://github.com/white-gecko/TokyoCabinet, e.g.:\n # https://raw.githubusercontent.com/white-gecko/TokyoCabinet/master/glob.h\n # https://raw.githubusercontent.com/white-gecko/TokyoCabinet/master/glob.c\n # and pushed in via patch\n name = 'libglob'\n\n depends = [('hostpython2', 'hostpython3')]\n patches = ['glob.patch']\n\n def should_build(self, arch):\n \"\"\"It's faster to build than check\"\"\"\n return True\n\n def prebuild_arch(self, arch):\n \"\"\"Make the build and target directories\"\"\"\n path = self.get_build_dir(arch.arch)\n if not exists(path):\n info(\"creating {}\".format(path))\n shprint(sh.mkdir, '-p', path)\n\n def build_arch(self, arch):\n \"\"\"simple shared compile\"\"\"\n env = self.get_recipe_env(arch, with_flags_in_cc=False)\n for path in (\n self.get_build_dir(arch.arch),\n join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Lib'),\n join(self.ctx.python_recipe.get_build_dir(arch.arch), 'Include')):\n if not exists(path):\n info(\"creating {}\".format(path))\n shprint(sh.mkdir, '-p', path)\n cli = env['CC'].split()[0]\n # makes sure first CC command is the compiler rather than ccache, refs:\n # https://github.com/kivy/python-for-android/issues/1399\n if 'ccache' in cli:\n cli = env['CC'].split()[1]\n cc = sh.Command(cli)\n\n with current_directory(self.get_build_dir(arch.arch)):\n cflags = 
env['CFLAGS'].split()\n cflags.extend(['-I.', '-c', '-l.', 'glob.c', '-I.'])\n shprint(cc, *cflags, _env=env)\n cflags = env['CFLAGS'].split()\n cflags.extend(['-shared', '-I.', 'glob.o', '-o', 'libglob.so'])\n cflags.extend(env['LDFLAGS'].split())\n shprint(cc, *cflags, _env=env)\n shprint(sh.cp, 'libglob.so', join(self.ctx.libs_dir, arch.arch))\n\n\nrecipe = LibGlobRecipe()\n", "path": "pythonforandroid/recipes/libglob/__init__.py"}, {"content": "from enum import Enum\n\n\nclass TargetPython(Enum):\n python2 = 0\n python3crystax = 1\n python3 = 2\n\n\n# recipes that currently break the build\n# a recipe could be broken for a target Python and not for the other,\n# hence we're maintaining one list per Python target\nBROKEN_RECIPES_PYTHON2 = set([\n # pythonhelpers.h:12:18: fatal error: string: No such file or directory\n 'atom',\n # https://github.com/kivy/python-for-android/issues/550\n 'audiostream',\n 'brokenrecipe',\n 'evdev',\n # distutils.errors.DistutilsError\n # Could not find suitable distribution for Requirement.parse('cython')\n 'ffpyplayer',\n 'flask',\n 'groestlcoin_hash',\n 'hostpython3crystax',\n # https://github.com/kivy/python-for-android/issues/1354\n 'kiwisolver',\n 'libmysqlclient',\n 'libsecp256k1',\n 'libtribler',\n 'ndghttpsclient',\n 'm2crypto',\n # ImportError: No module named setuptools\n 'netifaces',\n 'Pillow',\n # depends on cffi that still seems to have compilation issues\n 'protobuf_cpp',\n 'xeddsa',\n 'x3dh',\n 'pynacl',\n 'doubleratchet',\n # The opencv recipe fails to pass travis tests due to the long processing\n # when building it and the lack of console output, so, it's only broken\n # for travis, see: https://github.com/kivy/python-for-android/pull/1661\n 'opencv',\n 'omemo',\n # requires `libpq-dev` system dependency e.g. for `pg_config` binary\n 'psycopg2',\n 'pygame',\n # most likely some setup in the Docker container, because it works in host\n 'pyjnius', 'pyopenal',\n 'pyproj',\n 'pysdl2',\n 'pyzmq',\n 'secp256k1',\n 'shapely',\n # mpmath package with a version >= 0.19 required\n 'sympy',\n 'twisted',\n 'vlc',\n 'websocket-client',\n 'zeroconf',\n 'zope',\n])\nBROKEN_RECIPES_PYTHON3 = set([\n 'brokenrecipe',\n # enum34 is not compatible with Python 3.6 standard library\n # https://stackoverflow.com/a/45716067/185510\n 'enum34',\n # build_dir = glob.glob('build/lib.*')[0]\n # IndexError: list index out of range\n 'secp256k1',\n 'ffpyplayer',\n 'icu',\n # https://github.com/kivy/python-for-android/issues/1354\n # The opencv recipe fails to pass travis tests due to the long processing\n # when building it and the lack of console output, so, it's only broken\n # for travis, see: https://github.com/kivy/python-for-android/pull/1661\n 'opencv',\n # requires `libpq-dev` system dependency e.g. for `pg_config` binary\n 'psycopg2',\n 'protobuf_cpp',\n # most likely some setup in the Docker container, because it works in host\n 'pyjnius', 'pyopenal',\n # SyntaxError: invalid syntax (Python2)\n 'storm',\n # mpmath package with a version >= 0.19 required\n 'sympy',\n 'vlc',\n])\n\nBROKEN_RECIPES = {\n TargetPython.python2: BROKEN_RECIPES_PYTHON2,\n TargetPython.python3: BROKEN_RECIPES_PYTHON3,\n}\n# recipes that were already built will be skipped\nCORE_RECIPES = set([\n 'pyjnius', 'kivy', 'openssl', 'requests', 'sqlite3', 'setuptools',\n 'numpy', 'android', 'python2', 'python3',\n])\n", "path": "ci/constants.py"}]} | 2,343 | 771 |
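
The crux of the libglob fix above is selecting the real compiler when `CC` is prefixed with `ccache`, instead of handing the whole string to `sh.Command`. A standalone sketch of that selection step (the helper name and sample `CC` values are illustrative only, not python-for-android code):

```python
def compiler_from_cc(cc_value):
    """Return the compiler executable from a CC environment value that
    may or may not be prefixed with ccache."""
    parts = cc_value.split()
    cli = parts[0]
    if "ccache" in cli and len(parts) > 1:
        cli = parts[1]
    return cli


print(compiler_from_cc("ccache arm-linux-androideabi-clang"))  # arm-linux-androideabi-clang
print(compiler_from_cc("arm-linux-androideabi-clang"))         # arm-linux-androideabi-clang
```
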
gh_patches_debug_2966 | rasdani/github-patches | git_diff | ivy-llc__ivy-16518 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
uniform
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/random.py`
Content:
```
1 # global
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/random.py b/ivy/functional/frontends/paddle/tensor/random.py
--- a/ivy/functional/frontends/paddle/tensor/random.py
+++ b/ivy/functional/frontends/paddle/tensor/random.py
@@ -1 +1,15 @@
# global
+import ivy
+from ivy.func_wrapper import with_supported_dtypes
+from ivy.functional.frontends.paddle.func_wrapper import (
+ to_ivy_arrays_and_back,
+)
+
+
+@with_supported_dtypes(
+ {"2.4.2 and below": ("float32", "float64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):
+ return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/random.py b/ivy/functional/frontends/paddle/tensor/random.py\n--- a/ivy/functional/frontends/paddle/tensor/random.py\n+++ b/ivy/functional/frontends/paddle/tensor/random.py\n@@ -1 +1,15 @@\n # global\n+import ivy\n+from ivy.func_wrapper import with_supported_dtypes\n+from ivy.functional.frontends.paddle.func_wrapper import (\n+ to_ivy_arrays_and_back,\n+)\n+\n+\n+@with_supported_dtypes(\n+ {\"2.4.2 and below\": (\"float32\", \"float64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):\n+ return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)\n", "issue": "uniform\n\n", "before_files": [{"content": "# global\n", "path": "ivy/functional/frontends/paddle/tensor/random.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):\n return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)\n", "path": "ivy/functional/frontends/paddle/tensor/random.py"}]} | 272 | 212 |
gh_patches_debug_21213 | rasdani/github-patches | git_diff | crytic__slither-2310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug-Candidate]: --disable-color ignored, printer produces colored outputs
### Describe the issue:
Flag `--disable-color` seem to be ignored and printer produces colored output with ASCII escape characters not suitable to capture into plaintext files
```
slither --help
usage: slither target [flag]
Additional options:
...
--disable-color Disable output colorization
```
Workaround: pass the output through the following sed script:
```
slither . --print function-summary 2>&1 | sed 's/\x1b\[[0-9;]*m//g'
```
### Code example to reproduce the issue:
<img width="1192" alt="image" src="https://github.com/crytic/slither/assets/7992612/850e41d6-e60e-4383-bdb4-c6d6a385c320">
### Version:
slither --version
0.10.0
From docker image `ghcr.io/trailofbits/eth-security-toolbox:nightly`
### Relevant log output:
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/utils/myprettytable.py`
Content:
```
1 from typing import List, Dict, Union
2
3 from prettytable.colortable import ColorTable, Themes
4
5
6 class MyPrettyTable:
7 def __init__(self, field_names: List[str], pretty_align: bool = True): # TODO: True by default?
8 self._field_names = field_names
9 self._rows: List = []
10 self._options: Dict = {}
11 if pretty_align:
12 self._options["set_alignment"] = []
13 self._options["set_alignment"] += [(field_names[0], "l")]
14 for field_name in field_names[1:]:
15 self._options["set_alignment"] += [(field_name, "r")]
16 else:
17 self._options["set_alignment"] = []
18
19 def add_row(self, row: List[Union[str, List[str]]]) -> None:
20 self._rows.append(row)
21
22 def to_pretty_table(self) -> ColorTable:
23 table = ColorTable(self._field_names, theme=Themes.OCEAN)
24 for row in self._rows:
25 table.add_row(row)
26 if len(self._options["set_alignment"]):
27 for column_header, value in self._options["set_alignment"]:
28 table.align[column_header] = value
29 return table
30
31 def to_json(self) -> Dict:
32 return {"fields_names": self._field_names, "rows": self._rows}
33
34 def __str__(self) -> str:
35 return str(self.to_pretty_table())
36
37
38 # UTILITY FUNCTIONS
39
40
41 def make_pretty_table(
42 headers: list, body: dict, totals: bool = False, total_header="TOTAL"
43 ) -> MyPrettyTable:
44 """
45 Converts a dict to a MyPrettyTable. Dict keys are the row headers.
46 Args:
47 headers: str[] of column names
48 body: dict of row headers with a dict of the values
49 totals: bool optional add Totals row
50 total_header: str optional if totals is set to True this will override the default "TOTAL" header
51 Returns:
52 MyPrettyTable
53 """
54 table = MyPrettyTable(headers)
55 for row in body:
56 table_row = [row] + [body[row][key] for key in headers[1:]]
57 table.add_row(table_row)
58 if totals:
59 table.add_row(
60 [total_header] + [sum([body[row][key] for row in body]) for key in headers[1:]]
61 )
62 return table
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/utils/myprettytable.py b/slither/utils/myprettytable.py
--- a/slither/utils/myprettytable.py
+++ b/slither/utils/myprettytable.py
@@ -1,7 +1,10 @@
from typing import List, Dict, Union
+from prettytable import PrettyTable
from prettytable.colortable import ColorTable, Themes
+from slither.utils.colors import Colors
+
class MyPrettyTable:
def __init__(self, field_names: List[str], pretty_align: bool = True): # TODO: True by default?
@@ -19,8 +22,12 @@
def add_row(self, row: List[Union[str, List[str]]]) -> None:
self._rows.append(row)
- def to_pretty_table(self) -> ColorTable:
- table = ColorTable(self._field_names, theme=Themes.OCEAN)
+ def to_pretty_table(self) -> PrettyTable:
+ if Colors.COLORIZATION_ENABLED:
+ table = ColorTable(self._field_names, theme=Themes.OCEAN)
+ else:
+ table = PrettyTable(self._field_names)
+
for row in self._rows:
table.add_row(row)
if len(self._options["set_alignment"]):
| {"golden_diff": "diff --git a/slither/utils/myprettytable.py b/slither/utils/myprettytable.py\n--- a/slither/utils/myprettytable.py\n+++ b/slither/utils/myprettytable.py\n@@ -1,7 +1,10 @@\n from typing import List, Dict, Union\n \n+from prettytable import PrettyTable\n from prettytable.colortable import ColorTable, Themes\n \n+from slither.utils.colors import Colors\n+\n \n class MyPrettyTable:\n def __init__(self, field_names: List[str], pretty_align: bool = True): # TODO: True by default?\n@@ -19,8 +22,12 @@\n def add_row(self, row: List[Union[str, List[str]]]) -> None:\n self._rows.append(row)\n \n- def to_pretty_table(self) -> ColorTable:\n- table = ColorTable(self._field_names, theme=Themes.OCEAN)\n+ def to_pretty_table(self) -> PrettyTable:\n+ if Colors.COLORIZATION_ENABLED:\n+ table = ColorTable(self._field_names, theme=Themes.OCEAN)\n+ else:\n+ table = PrettyTable(self._field_names)\n+\n for row in self._rows:\n table.add_row(row)\n if len(self._options[\"set_alignment\"]):\n", "issue": "[Bug-Candidate]: --disable-color ignored, printer produces colored outputs\n### Describe the issue:\n\nFlag `--disable-color` seem to be ignored and printer produces colored output with ASCII escape characters not suitable to capture into plaintext files\r\n\r\n```\r\nslither --help \r\nusage: slither target [flag]\r\nAdditional options:\r\n...\r\n --disable-color Disable output colorization\r\n```\r\n\r\nWorkaround: pass the output through the following sed script:\r\n```\r\nslither . --print function-summary 2>&1 | sed 's/\\x1b\\[[0-9;]*m//g'\r\n```\n\n### Code example to reproduce the issue:\n\n<img width=\"1192\" alt=\"image\" src=\"https://github.com/crytic/slither/assets/7992612/850e41d6-e60e-4383-bdb4-c6d6a385c320\">\r\n\n\n### Version:\n\nslither --version\r\n0.10.0\r\n\r\nFrom docker image `ghcr.io/trailofbits/eth-security-toolbox:nightly`\n\n### Relevant log output:\n\n_No response_\n", "before_files": [{"content": "from typing import List, Dict, Union\n\nfrom prettytable.colortable import ColorTable, Themes\n\n\nclass MyPrettyTable:\n def __init__(self, field_names: List[str], pretty_align: bool = True): # TODO: True by default?\n self._field_names = field_names\n self._rows: List = []\n self._options: Dict = {}\n if pretty_align:\n self._options[\"set_alignment\"] = []\n self._options[\"set_alignment\"] += [(field_names[0], \"l\")]\n for field_name in field_names[1:]:\n self._options[\"set_alignment\"] += [(field_name, \"r\")]\n else:\n self._options[\"set_alignment\"] = []\n\n def add_row(self, row: List[Union[str, List[str]]]) -> None:\n self._rows.append(row)\n\n def to_pretty_table(self) -> ColorTable:\n table = ColorTable(self._field_names, theme=Themes.OCEAN)\n for row in self._rows:\n table.add_row(row)\n if len(self._options[\"set_alignment\"]):\n for column_header, value in self._options[\"set_alignment\"]:\n table.align[column_header] = value\n return table\n\n def to_json(self) -> Dict:\n return {\"fields_names\": self._field_names, \"rows\": self._rows}\n\n def __str__(self) -> str:\n return str(self.to_pretty_table())\n\n\n# UTILITY FUNCTIONS\n\n\ndef make_pretty_table(\n headers: list, body: dict, totals: bool = False, total_header=\"TOTAL\"\n) -> MyPrettyTable:\n \"\"\"\n Converts a dict to a MyPrettyTable. 
Dict keys are the row headers.\n Args:\n headers: str[] of column names\n body: dict of row headers with a dict of the values\n totals: bool optional add Totals row\n total_header: str optional if totals is set to True this will override the default \"TOTAL\" header\n Returns:\n MyPrettyTable\n \"\"\"\n table = MyPrettyTable(headers)\n for row in body:\n table_row = [row] + [body[row][key] for key in headers[1:]]\n table.add_row(table_row)\n if totals:\n table.add_row(\n [total_header] + [sum([body[row][key] for row in body]) for key in headers[1:]]\n )\n return table\n", "path": "slither/utils/myprettytable.py"}], "after_files": [{"content": "from typing import List, Dict, Union\n\nfrom prettytable import PrettyTable\nfrom prettytable.colortable import ColorTable, Themes\n\nfrom slither.utils.colors import Colors\n\n\nclass MyPrettyTable:\n def __init__(self, field_names: List[str], pretty_align: bool = True): # TODO: True by default?\n self._field_names = field_names\n self._rows: List = []\n self._options: Dict = {}\n if pretty_align:\n self._options[\"set_alignment\"] = []\n self._options[\"set_alignment\"] += [(field_names[0], \"l\")]\n for field_name in field_names[1:]:\n self._options[\"set_alignment\"] += [(field_name, \"r\")]\n else:\n self._options[\"set_alignment\"] = []\n\n def add_row(self, row: List[Union[str, List[str]]]) -> None:\n self._rows.append(row)\n\n def to_pretty_table(self) -> PrettyTable:\n if Colors.COLORIZATION_ENABLED:\n table = ColorTable(self._field_names, theme=Themes.OCEAN)\n else:\n table = PrettyTable(self._field_names)\n\n for row in self._rows:\n table.add_row(row)\n if len(self._options[\"set_alignment\"]):\n for column_header, value in self._options[\"set_alignment\"]:\n table.align[column_header] = value\n return table\n\n def to_json(self) -> Dict:\n return {\"fields_names\": self._field_names, \"rows\": self._rows}\n\n def __str__(self) -> str:\n return str(self.to_pretty_table())\n\n\n# UTILITY FUNCTIONS\n\n\ndef make_pretty_table(\n headers: list, body: dict, totals: bool = False, total_header=\"TOTAL\"\n) -> MyPrettyTable:\n \"\"\"\n Converts a dict to a MyPrettyTable. Dict keys are the row headers.\n Args:\n headers: str[] of column names\n body: dict of row headers with a dict of the values\n totals: bool optional add Totals row\n total_header: str optional if totals is set to True this will override the default \"TOTAL\" header\n Returns:\n MyPrettyTable\n \"\"\"\n table = MyPrettyTable(headers)\n for row in body:\n table_row = [row] + [body[row][key] for key in headers[1:]]\n table.add_row(table_row)\n if totals:\n table.add_row(\n [total_header] + [sum([body[row][key] for row in body]) for key in headers[1:]]\n )\n return table\n", "path": "slither/utils/myprettytable.py"}]} | 1,148 | 280 |
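
The sed one-liner quoted as a workaround in the record above strips ANSI colour escapes from captured output. The same idea in Python, handy when post-processing saved printer output programmatically (the sample string is invented):

```python
import re

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")


def strip_ansi(text):
    """Remove ANSI colour escape sequences, mirroring sed 's/\\x1b\\[[0-9;]*m//g'."""
    return ANSI_ESCAPE.sub("", text)


print(strip_ansi("\x1b[31m[ERROR]\x1b[0m division by zero"))  # [ERROR] division by zero
```
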
gh_patches_debug_13123 | rasdani/github-patches | git_diff | ietf-tools__datatracker-3727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The non-wg list view contains things that do not belong.
The list contains things that do not belong. For example, 'geopriv' is listed as a non-wg list, but it is a concluded wg. Maybe this should be a separate issue.
_Originally posted by @russhousley in https://github.com/ietf-tools/datatracker/issues/3675#issuecomment-1075013354_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/mailinglists/views.py`
Content:
```
1 # Copyright The IETF Trust 2007, All Rights Reserved
2
3 import re
4
5 from django.shortcuts import render
6
7 import debug # pyflakes:ignore
8
9 from ietf.group.models import Group
10 from ietf.mailinglists.models import List
11
12 def groups(request):
13 groups = Group.objects.filter(type__features__acts_like_wg=True, list_archive__startswith='http').exclude(state__in=('bof', 'conclude')).order_by("acronym")
14
15 return render(request, "mailinglists/group_archives.html", { "groups": groups } )
16
17 def nonwg(request):
18 groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof', 'conclude']).order_by("acronym")
19
20 #urls = [ g.list_archive for g in groups if '.ietf.org' in g.list_archive ]
21
22 wg_lists = set()
23 for g in groups:
24 wg_lists.add(g.acronym)
25 match = re.search(r'^(https?://mailarchive.ietf.org/arch/(browse/|search/\?email-list=))(?P<name>[^/]*)/?$', g.list_archive)
26 if match:
27 wg_lists.add(match.group('name').lower())
28
29 lists = List.objects.filter(advertised=True)
30 #debug.show('lists.count()')
31 lists = lists.exclude(name__in=wg_lists).order_by('name')
32 #debug.show('lists.count()')
33 return render(request, "mailinglists/nonwg.html", { "lists": lists } )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ietf/mailinglists/views.py b/ietf/mailinglists/views.py
--- a/ietf/mailinglists/views.py
+++ b/ietf/mailinglists/views.py
@@ -1,4 +1,4 @@
-# Copyright The IETF Trust 2007, All Rights Reserved
+# Copyright The IETF Trust 2007-2022, All Rights Reserved
import re
@@ -15,7 +15,7 @@
return render(request, "mailinglists/group_archives.html", { "groups": groups } )
def nonwg(request):
- groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof', 'conclude']).order_by("acronym")
+ groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof']).order_by("acronym")
#urls = [ g.list_archive for g in groups if '.ietf.org' in g.list_archive ]
| {"golden_diff": "diff --git a/ietf/mailinglists/views.py b/ietf/mailinglists/views.py\n--- a/ietf/mailinglists/views.py\n+++ b/ietf/mailinglists/views.py\n@@ -1,4 +1,4 @@\n-# Copyright The IETF Trust 2007, All Rights Reserved\n+# Copyright The IETF Trust 2007-2022, All Rights Reserved\n \n import re\n \n@@ -15,7 +15,7 @@\n return render(request, \"mailinglists/group_archives.html\", { \"groups\": groups } )\n \n def nonwg(request):\n- groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof', 'conclude']).order_by(\"acronym\")\n+ groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof']).order_by(\"acronym\")\n \n #urls = [ g.list_archive for g in groups if '.ietf.org' in g.list_archive ]\n", "issue": "The non-wg list view contains things that do not belong.\nThe list contains things that do not belong. For example, 'geopriv' is listed as a non-wg list, but it is a concluded wg. Maybe this should be a separate issue.\r\n\r\n_Originally posted by @russhousley in https://github.com/ietf-tools/datatracker/issues/3675#issuecomment-1075013354_\n", "before_files": [{"content": "# Copyright The IETF Trust 2007, All Rights Reserved\n\nimport re\n\nfrom django.shortcuts import render\n\nimport debug # pyflakes:ignore\n\nfrom ietf.group.models import Group\nfrom ietf.mailinglists.models import List\n\ndef groups(request):\n groups = Group.objects.filter(type__features__acts_like_wg=True, list_archive__startswith='http').exclude(state__in=('bof', 'conclude')).order_by(\"acronym\")\n\n return render(request, \"mailinglists/group_archives.html\", { \"groups\": groups } )\n\ndef nonwg(request):\n groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof', 'conclude']).order_by(\"acronym\")\n\n #urls = [ g.list_archive for g in groups if '.ietf.org' in g.list_archive ]\n\n wg_lists = set()\n for g in groups:\n wg_lists.add(g.acronym)\n match = re.search(r'^(https?://mailarchive.ietf.org/arch/(browse/|search/\\?email-list=))(?P<name>[^/]*)/?$', g.list_archive)\n if match:\n wg_lists.add(match.group('name').lower())\n\n lists = List.objects.filter(advertised=True)\n #debug.show('lists.count()')\n lists = lists.exclude(name__in=wg_lists).order_by('name')\n #debug.show('lists.count()')\n return render(request, \"mailinglists/nonwg.html\", { \"lists\": lists } )\n", "path": "ietf/mailinglists/views.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2007-2022, All Rights Reserved\n\nimport re\n\nfrom django.shortcuts import render\n\nimport debug # pyflakes:ignore\n\nfrom ietf.group.models import Group\nfrom ietf.mailinglists.models import List\n\ndef groups(request):\n groups = Group.objects.filter(type__features__acts_like_wg=True, list_archive__startswith='http').exclude(state__in=('bof', 'conclude')).order_by(\"acronym\")\n\n return render(request, \"mailinglists/group_archives.html\", { \"groups\": groups } )\n\ndef nonwg(request):\n groups = Group.objects.filter(type__features__acts_like_wg=True).exclude(state__in=['bof']).order_by(\"acronym\")\n\n #urls = [ g.list_archive for g in groups if '.ietf.org' in g.list_archive ]\n\n wg_lists = set()\n for g in groups:\n wg_lists.add(g.acronym)\n match = re.search(r'^(https?://mailarchive.ietf.org/arch/(browse/|search/\\?email-list=))(?P<name>[^/]*)/?$', g.list_archive)\n if match:\n wg_lists.add(match.group('name').lower())\n\n lists = List.objects.filter(advertised=True)\n #debug.show('lists.count()')\n lists = 
lists.exclude(name__in=wg_lists).order_by('name')\n #debug.show('lists.count()')\n return render(request, \"mailinglists/nonwg.html\", { \"lists\": lists } )\n", "path": "ietf/mailinglists/views.py"}]} | 751 | 219 |
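
The one-line queryset change above works because concluded working groups must keep contributing their acronyms to `wg_lists`; only then are their mailing lists subtracted from the advertised set. A toy model of that set arithmetic (plain dicts and invented list names, not Django objects; only `geopriv` comes from the issue):

```python
advertised = {"geopriv", "quic", "random-discuss"}  # advertised mailman lists


def nonwg_lists(groups, advertised):
    wg_lists = {g["acronym"] for g in groups}
    return sorted(advertised - wg_lists)


active_only = [{"acronym": "quic"}]                      # old queryset: concluded WGs excluded
with_concluded = active_only + [{"acronym": "geopriv"}]  # fixed queryset: concluded WGs kept

print(nonwg_lists(active_only, advertised))     # ['geopriv', 'random-discuss']  (bug: geopriv listed)
print(nonwg_lists(with_concluded, advertised))  # ['random-discuss']
```
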
gh_patches_debug_1637 | rasdani/github-patches | git_diff | pre-commit__pre-commit-67 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError while instantiating LoggingHandler (2.6)
I assume this is new-style vs old-style classes being grumpy?
```
>>> from pre_commit.logging_handler import LoggingHandler
>>> LoggingHandler(True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../py_env/lib/python2.6/site-packages/pre_commit/logging_handler.py", line 19, in __init__
super(LoggingHandler, self).__init__()
TypeError: super() argument 1 must be type, not classobj
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/logging_handler.py`
Content:
```
1
2 from __future__ import print_function
3
4 import logging
5
6 from pre_commit import color
7
8
9 LOG_LEVEL_COLORS = {
10 'DEBUG': '',
11 'INFO': '',
12 'WARNING': color.YELLOW,
13 'ERROR': color.RED,
14 }
15
16
17 class LoggingHandler(logging.Handler):
18 def __init__(self, use_color):
19 super(LoggingHandler, self).__init__()
20 self.use_color = use_color
21
22 def emit(self, record):
23 print(
24 u'{0}{1}'.format(
25 color.format_color(
26 '[{0}]'.format(record.levelname),
27 LOG_LEVEL_COLORS[record.levelname],
28 self.use_color,
29 ) + ' ' if record.levelno >= logging.WARNING else '',
30 record.getMessage(),
31 )
32 )
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/logging_handler.py b/pre_commit/logging_handler.py
--- a/pre_commit/logging_handler.py
+++ b/pre_commit/logging_handler.py
@@ -16,7 +16,7 @@
class LoggingHandler(logging.Handler):
def __init__(self, use_color):
- super(LoggingHandler, self).__init__()
+ logging.Handler.__init__(self)
self.use_color = use_color
def emit(self, record):
| {"golden_diff": "diff --git a/pre_commit/logging_handler.py b/pre_commit/logging_handler.py\n--- a/pre_commit/logging_handler.py\n+++ b/pre_commit/logging_handler.py\n@@ -16,7 +16,7 @@\n \n class LoggingHandler(logging.Handler):\n def __init__(self, use_color):\n- super(LoggingHandler, self).__init__()\n+ logging.Handler.__init__(self)\n self.use_color = use_color\n \n def emit(self, record):\n", "issue": "TypeError while instantiating LoggingHandler (2.6)\nI assume this is new-style vs old-style classes being grumpy?\n\n```\n>>> from pre_commit.logging_handler import LoggingHandler\n>>> LoggingHandler(True)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \".../py_env/lib/python2.6/site-packages/pre_commit/logging_handler.py\", line 19, in __init__\n super(LoggingHandler, self).__init__()\nTypeError: super() argument 1 must be type, not classobj\n```\n\n", "before_files": [{"content": "\nfrom __future__ import print_function\n\nimport logging\n\nfrom pre_commit import color\n\n\nLOG_LEVEL_COLORS = {\n 'DEBUG': '',\n 'INFO': '',\n 'WARNING': color.YELLOW,\n 'ERROR': color.RED,\n}\n\n\nclass LoggingHandler(logging.Handler):\n def __init__(self, use_color):\n super(LoggingHandler, self).__init__()\n self.use_color = use_color\n\n def emit(self, record):\n print(\n u'{0}{1}'.format(\n color.format_color(\n '[{0}]'.format(record.levelname),\n LOG_LEVEL_COLORS[record.levelname],\n self.use_color,\n ) + ' ' if record.levelno >= logging.WARNING else '',\n record.getMessage(),\n )\n )\n", "path": "pre_commit/logging_handler.py"}], "after_files": [{"content": "\nfrom __future__ import print_function\n\nimport logging\n\nfrom pre_commit import color\n\n\nLOG_LEVEL_COLORS = {\n 'DEBUG': '',\n 'INFO': '',\n 'WARNING': color.YELLOW,\n 'ERROR': color.RED,\n}\n\n\nclass LoggingHandler(logging.Handler):\n def __init__(self, use_color):\n logging.Handler.__init__(self)\n self.use_color = use_color\n\n def emit(self, record):\n print(\n u'{0}{1}'.format(\n color.format_color(\n '[{0}]'.format(record.levelname),\n LOG_LEVEL_COLORS[record.levelname],\n self.use_color,\n ) + ' ' if record.levelno >= logging.WARNING else '',\n record.getMessage(),\n )\n )\n", "path": "pre_commit/logging_handler.py"}]} | 594 | 96 |
gh_patches_debug_43192 | rasdani/github-patches | git_diff | conda__conda-5342 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add --json flag on conda-env that works!
The json flag seems to be there, but it is not working.
With a file that looks like
```yaml
# environment.yaml
name: test_env_1
channels:
- defaults
dependencies:
- python
- pytest-cov
```
```
(root) $ conda-env create --json
Using Anaconda API: https://api.anaconda.org
(root) $
```
So two problems here:
1.) No json output on the create process
2.) `Using Anaconda API: https://api.anaconda.org`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_env/cli/main_export.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 from argparse import RawDescriptionHelpFormatter
4 import os
5 import textwrap
6
7 from conda.cli.common import add_parser_prefix
8 # conda env import
9 from .common import get_prefix
10 from ..env import from_environment
11 from ..exceptions import CondaEnvException
12
13 description = """
14 Export a given environment
15 """
16
17 example = """
18 examples:
19 conda env export
20 conda env export --file SOME_FILE
21 """
22
23
24 def configure_parser(sub_parsers):
25 p = sub_parsers.add_parser(
26 'export',
27 formatter_class=RawDescriptionHelpFormatter,
28 description=description,
29 help=description,
30 epilog=example,
31 )
32
33 p.add_argument(
34 '-c', '--channel',
35 action='append',
36 help='Additional channel to include in the export'
37 )
38
39 p.add_argument(
40 "--override-channels",
41 action="store_true",
42 help="Do not include .condarc channels",
43 )
44 add_parser_prefix(p)
45
46 p.add_argument(
47 '-f', '--file',
48 default=None,
49 required=False
50 )
51
52 p.add_argument(
53 '--no-builds',
54 default=False,
55 action='store_true',
56 required=False,
57 help='Remove build specification from dependencies'
58 )
59
60 p.add_argument(
61 '--ignore-channels',
62 default=False,
63 action='store_true',
64 required=False,
65 help='Do not include channel names with package names.')
66
67 p.set_defaults(func=execute)
68
69
70 # TODO Make this aware of channels that were used to install packages
71 def execute(args, parser):
72 if not (args.name or args.prefix):
73 # Note, this is a hack fofr get_prefix that assumes argparse results
74 # TODO Refactor common.get_prefix
75 name = os.environ.get('CONDA_DEFAULT_ENV', False)
76 if not name:
77 msg = "Unable to determine environment\n\n"
78 msg += textwrap.dedent("""
79 Please re-run this command with one of the following options:
80
81 * Provide an environment name via --name or -n
82 * Re-run this command inside an activated conda environment.""").lstrip()
83 # TODO Add json support
84 raise CondaEnvException(msg)
85 args.name = name
86 else:
87 name = args.name
88 prefix = get_prefix(args)
89 env = from_environment(name, prefix, no_builds=args.no_builds,
90 ignore_channels=args.ignore_channels)
91
92 if args.override_channels:
93 env.remove_channels()
94
95 if args.channel is not None:
96 env.add_channels(args.channel)
97
98 if args.file is None:
99 print(env.to_yaml())
100 else:
101 fp = open(args.file, 'wb')
102 env.to_yaml(stream=fp)
103
```
Path: `conda/cli/main.py`
Content:
```
1 # (c) Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 """conda is a tool for managing environments and packages.
7
8 conda provides the following commands:
9
10 Information
11 ===========
12
13 info : display information about the current install
14 list : list packages linked into a specified environment
15 search : print information about a specified package
16 help : display a list of available conda commands and their help
17 strings
18
19 Package Management
20 ==================
21
22 create : create a new conda environment from a list of specified
23 packages
24 install : install new packages into an existing conda environment
25 update : update packages in a specified conda environment
26
27
28 Packaging
29 =========
30
31 package : create a conda package in an environment
32
33 Additional help for each command can be accessed by using:
34
35 conda <command> -h
36 """
37
38 from __future__ import absolute_import, division, print_function, unicode_literals
39
40 import importlib
41 import sys
42 from argparse import SUPPRESS
43 from logging import CRITICAL, DEBUG, getLogger
44
45 from .. import __version__
46
47 log = getLogger(__name__)
48
49
50 def generate_parser():
51 from ..cli import conda_argparse
52 p = conda_argparse.ArgumentParser(
53 description='conda is a tool for managing and deploying applications,'
54 ' environments and packages.',
55 )
56 p.add_argument(
57 '-V', '--version',
58 action='version',
59 version='conda %s' % __version__,
60 help="Show the conda version number and exit."
61 )
62 p.add_argument(
63 "--debug",
64 action="store_true",
65 help=SUPPRESS,
66 )
67 p.add_argument(
68 "--json",
69 action="store_true",
70 help=SUPPRESS,
71 )
72 sub_parsers = p.add_subparsers(
73 metavar='command',
74 dest='cmd',
75 )
76 # http://bugs.python.org/issue9253
77 # http://stackoverflow.com/a/18283730/1599393
78 sub_parsers.required = True
79
80 return p, sub_parsers
81
82
83 def _main(*args):
84 from ..base.constants import SEARCH_PATH
85 from ..base.context import context
86
87 from ..gateways.logging import set_all_logger_level, set_verbosity
88
89 if len(args) == 1:
90 args = args + ('-h',)
91
92 p, sub_parsers = generate_parser()
93
94 main_modules = ["info", "help", "list", "search", "create", "install", "update",
95 "remove", "config", "clean", "package"]
96 modules = ["conda.cli.main_"+suffix for suffix in main_modules]
97 for module in modules:
98 imported = importlib.import_module(module)
99 imported.configure_parser(sub_parsers)
100 if "update" in module:
101 imported.configure_parser(sub_parsers, name='upgrade')
102 if "remove" in module:
103 imported.configure_parser(sub_parsers, name='uninstall')
104
105 from .find_commands import find_commands
106
107 def completer(prefix, **kwargs):
108 return [i for i in list(sub_parsers.choices) + find_commands()
109 if i.startswith(prefix)]
110
111 # when using sys.argv, first argument is generally conda or __main__.py. Ignore it.
112 if (any(sname in args[0] for sname in ('conda', 'conda.exe', '__main__.py', 'conda-script.py'))
113 and (args[1] in list(sub_parsers.choices.keys()) + find_commands()
114 or args[1].startswith('-'))):
115 log.debug("Ignoring first argument (%s), as it is not a subcommand", args[0])
116 args = args[1:]
117
118 sub_parsers.completer = completer
119 args = p.parse_args(args)
120
121 context.__init__(SEARCH_PATH, 'conda', args)
122
123 if getattr(args, 'json', False):
124 # Silence logging info to avoid interfering with JSON output
125 for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):
126 getLogger(logger).setLevel(CRITICAL + 1)
127
128 if context.debug:
129 set_all_logger_level(DEBUG)
130 elif context.verbosity:
131 set_verbosity(context.verbosity)
132 log.debug("verbosity set to %s", context.verbosity)
133
134 exit_code = args.func(args, p)
135 if isinstance(exit_code, int):
136 return exit_code
137
138
139 def _ensure_text_type(value):
140 # copying here from conda/common/compat.py to avoid the import
141 try:
142 return value.decode('utf-8')
143 except AttributeError:
144 # AttributeError: '<>' object has no attribute 'decode'
145 # In this case assume already text_type and do nothing
146 return value
147 except UnicodeDecodeError:
148 from requests.packages.chardet import detect
149 encoding = detect(value).get('encoding') or 'utf-8'
150 return value.decode(encoding)
151
152
153 def main(*args):
154 if not args:
155 args = sys.argv
156
157 args = tuple(_ensure_text_type(s) for s in args)
158
159 log.debug("conda.cli.main called with %s", args)
160 if len(args) > 1:
161 try:
162 argv1 = args[1].strip()
163 if argv1.startswith('..'):
164 import conda.cli.activate as activate
165 activate.main()
166 return
167 if argv1 in ('activate', 'deactivate'):
168 from ..exceptions import CommandNotFoundError
169 raise CommandNotFoundError(argv1)
170 except Exception as e:
171 from ..exceptions import handle_exception
172 return handle_exception(e)
173
174 from ..exceptions import conda_exception_handler
175 return conda_exception_handler(_main, *args)
176
177
178 if __name__ == '__main__':
179 main()
180
```
Path: `conda_env/cli/main.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2
3 from logging import getLogger, CRITICAL
4
5 import os
6 import sys
7
8 try:
9 from conda.exceptions import conda_exception_handler
10 except ImportError as e:
11 if 'CONDA_DEFAULT_ENV' in os.environ:
12 sys.stderr.write("""
13 There was an error importing conda.
14
15 It appears this was caused by installing conda-env into a conda
16 environment. Like conda, conda-env needs to be installed into your
17 root conda/Anaconda environment.
18
19 Please deactivate your current environment, then re-install conda-env
20 using this command:
21
22 conda install -c conda conda-env
23
24 If you are seeing this error and have not installed conda-env into an
25 environment, please open a bug report at:
26 https://github.com/conda/conda-env
27
28 """.lstrip())
29 sys.exit(-1)
30 else:
31 raise e
32
33 from conda.cli.conda_argparse import ArgumentParser
34
35 from . import main_attach
36 from . import main_create
37 from . import main_export
38 from . import main_list
39 from . import main_remove
40 from . import main_upload
41 from . import main_update
42 from conda.base.context import context
43
44 # TODO: This belongs in a helper library somewhere
45 # Note: This only works with `conda-env` as a sub-command. If this gets
46 # merged into conda-env, this needs to be adjusted.
47 def show_help_on_empty_command():
48 if len(sys.argv) == 1: # sys.argv == ['/path/to/bin/conda-env']
49 sys.argv.append('--help')
50
51
52 def create_parser():
53 p = ArgumentParser()
54 sub_parsers = p.add_subparsers()
55
56 main_attach.configure_parser(sub_parsers)
57 main_create.configure_parser(sub_parsers)
58 main_export.configure_parser(sub_parsers)
59 main_list.configure_parser(sub_parsers)
60 main_remove.configure_parser(sub_parsers)
61 main_upload.configure_parser(sub_parsers)
62 main_update.configure_parser(sub_parsers)
63
64 show_help_on_empty_command()
65 return p
66
67
68 def main():
69 parser = create_parser()
70 args = parser.parse_args()
71 context._set_argparse_args(args)
72 if getattr(args, 'json', False):
73 # # Silence logging info to avoid interfering with JSON output
74 # for logger in Logger.manager.loggerDict:
75 # if logger not in ('fetch', 'progress'):
76 # getLogger(logger).setLevel(CRITICAL + 1)
77 for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):
78 getLogger(logger).setLevel(CRITICAL + 1)
79
80 return conda_exception_handler(args.func, args, parser)
81
82
83 if __name__ == '__main__':
84 sys.exit(main())
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda/cli/main.py b/conda/cli/main.py
--- a/conda/cli/main.py
+++ b/conda/cli/main.py
@@ -80,12 +80,24 @@
return p, sub_parsers
+def init_loggers(context):
+ from ..gateways.logging import set_all_logger_level, set_verbosity
+ if not context.json:
+ # Silence logging info to avoid interfering with JSON output
+ for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):
+ getLogger(logger).setLevel(CRITICAL + 1)
+
+ if context.debug:
+ set_all_logger_level(DEBUG)
+ elif context.verbosity:
+ set_verbosity(context.verbosity)
+ log.debug("verbosity set to %s", context.verbosity)
+
+
def _main(*args):
from ..base.constants import SEARCH_PATH
from ..base.context import context
- from ..gateways.logging import set_all_logger_level, set_verbosity
-
if len(args) == 1:
args = args + ('-h',)
@@ -119,17 +131,7 @@
args = p.parse_args(args)
context.__init__(SEARCH_PATH, 'conda', args)
-
- if getattr(args, 'json', False):
- # Silence logging info to avoid interfering with JSON output
- for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):
- getLogger(logger).setLevel(CRITICAL + 1)
-
- if context.debug:
- set_all_logger_level(DEBUG)
- elif context.verbosity:
- set_verbosity(context.verbosity)
- log.debug("verbosity set to %s", context.verbosity)
+ init_loggers(context)
exit_code = args.func(args, p)
if isinstance(exit_code, int):
diff --git a/conda_env/cli/main.py b/conda_env/cli/main.py
--- a/conda_env/cli/main.py
+++ b/conda_env/cli/main.py
@@ -1,10 +1,13 @@
-from __future__ import print_function, division, absolute_import
-
-from logging import getLogger, CRITICAL
+from __future__ import absolute_import, division, print_function
import os
import sys
+from conda.base.constants import SEARCH_PATH
+from conda.base.context import context
+from conda.cli.conda_argparse import ArgumentParser
+from conda.cli.main import init_loggers
+
try:
from conda.exceptions import conda_exception_handler
except ImportError as e:
@@ -30,8 +33,6 @@
else:
raise e
-from conda.cli.conda_argparse import ArgumentParser
-
from . import main_attach
from . import main_create
from . import main_export
@@ -39,7 +40,7 @@
from . import main_remove
from . import main_upload
from . import main_update
-from conda.base.context import context
+
# TODO: This belongs in a helper library somewhere
# Note: This only works with `conda-env` as a sub-command. If this gets
@@ -68,15 +69,8 @@
def main():
parser = create_parser()
args = parser.parse_args()
- context._set_argparse_args(args)
- if getattr(args, 'json', False):
- # # Silence logging info to avoid interfering with JSON output
- # for logger in Logger.manager.loggerDict:
- # if logger not in ('fetch', 'progress'):
- # getLogger(logger).setLevel(CRITICAL + 1)
- for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):
- getLogger(logger).setLevel(CRITICAL + 1)
-
+ context.__init__(SEARCH_PATH, 'conda', args)
+ init_loggers(context)
return conda_exception_handler(args.func, args, parser)
diff --git a/conda_env/cli/main_export.py b/conda_env/cli/main_export.py
--- a/conda_env/cli/main_export.py
+++ b/conda_env/cli/main_export.py
@@ -4,7 +4,7 @@
import os
import textwrap
-from conda.cli.common import add_parser_prefix
+from conda.cli.common import add_parser_json, add_parser_prefix
# conda env import
from .common import get_prefix
from ..env import from_environment
@@ -63,7 +63,7 @@
action='store_true',
required=False,
help='Do not include channel names with package names.')
-
+ add_parser_json(p)
p.set_defaults(func=execute)
| {"golden_diff": "diff --git a/conda/cli/main.py b/conda/cli/main.py\n--- a/conda/cli/main.py\n+++ b/conda/cli/main.py\n@@ -80,12 +80,24 @@\n return p, sub_parsers\n \n \n+def init_loggers(context):\n+ from ..gateways.logging import set_all_logger_level, set_verbosity\n+ if not context.json:\n+ # Silence logging info to avoid interfering with JSON output\n+ for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):\n+ getLogger(logger).setLevel(CRITICAL + 1)\n+\n+ if context.debug:\n+ set_all_logger_level(DEBUG)\n+ elif context.verbosity:\n+ set_verbosity(context.verbosity)\n+ log.debug(\"verbosity set to %s\", context.verbosity)\n+\n+\n def _main(*args):\n from ..base.constants import SEARCH_PATH\n from ..base.context import context\n \n- from ..gateways.logging import set_all_logger_level, set_verbosity\n-\n if len(args) == 1:\n args = args + ('-h',)\n \n@@ -119,17 +131,7 @@\n args = p.parse_args(args)\n \n context.__init__(SEARCH_PATH, 'conda', args)\n-\n- if getattr(args, 'json', False):\n- # Silence logging info to avoid interfering with JSON output\n- for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):\n- getLogger(logger).setLevel(CRITICAL + 1)\n-\n- if context.debug:\n- set_all_logger_level(DEBUG)\n- elif context.verbosity:\n- set_verbosity(context.verbosity)\n- log.debug(\"verbosity set to %s\", context.verbosity)\n+ init_loggers(context)\n \n exit_code = args.func(args, p)\n if isinstance(exit_code, int):\ndiff --git a/conda_env/cli/main.py b/conda_env/cli/main.py\n--- a/conda_env/cli/main.py\n+++ b/conda_env/cli/main.py\n@@ -1,10 +1,13 @@\n-from __future__ import print_function, division, absolute_import\n-\n-from logging import getLogger, CRITICAL\n+from __future__ import absolute_import, division, print_function\n \n import os\n import sys\n \n+from conda.base.constants import SEARCH_PATH\n+from conda.base.context import context\n+from conda.cli.conda_argparse import ArgumentParser\n+from conda.cli.main import init_loggers\n+\n try:\n from conda.exceptions import conda_exception_handler\n except ImportError as e:\n@@ -30,8 +33,6 @@\n else:\n raise e\n \n-from conda.cli.conda_argparse import ArgumentParser\n-\n from . import main_attach\n from . import main_create\n from . import main_export\n@@ -39,7 +40,7 @@\n from . import main_remove\n from . import main_upload\n from . import main_update\n-from conda.base.context import context\n+\n \n # TODO: This belongs in a helper library somewhere\n # Note: This only works with `conda-env` as a sub-command. 
If this gets\n@@ -68,15 +69,8 @@\n def main():\n parser = create_parser()\n args = parser.parse_args()\n- context._set_argparse_args(args)\n- if getattr(args, 'json', False):\n- # # Silence logging info to avoid interfering with JSON output\n- # for logger in Logger.manager.loggerDict:\n- # if logger not in ('fetch', 'progress'):\n- # getLogger(logger).setLevel(CRITICAL + 1)\n- for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):\n- getLogger(logger).setLevel(CRITICAL + 1)\n-\n+ context.__init__(SEARCH_PATH, 'conda', args)\n+ init_loggers(context)\n return conda_exception_handler(args.func, args, parser)\n \n \ndiff --git a/conda_env/cli/main_export.py b/conda_env/cli/main_export.py\n--- a/conda_env/cli/main_export.py\n+++ b/conda_env/cli/main_export.py\n@@ -4,7 +4,7 @@\n import os\n import textwrap\n \n-from conda.cli.common import add_parser_prefix\n+from conda.cli.common import add_parser_json, add_parser_prefix\n # conda env import\n from .common import get_prefix\n from ..env import from_environment\n@@ -63,7 +63,7 @@\n action='store_true',\n required=False,\n help='Do not include channel names with package names.')\n-\n+ add_parser_json(p)\n p.set_defaults(func=execute)\n", "issue": "Add --json flag on conda-env that works!\nThe json flag seems to be there, but it is not working.\r\n\r\nWith a file that looks like\r\n\r\n```yaml\r\n# environment.yaml\r\nname: test_env_1\r\nchannels:\r\n- defaults\r\ndependencies:\r\n- python\r\n- pytest-cov\r\n```\r\n\r\n```\r\n(root) $ conda-env create --json\r\nUsing Anaconda API: http s://api.anaconda.org\r\n(root) $\r\n```\r\n\r\nSo two problems here:\r\n1.) No json output on the create process\r\n2.) `Using Anaconda API: https://api.anaconda.org`\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom argparse import RawDescriptionHelpFormatter\nimport os\nimport textwrap\n\nfrom conda.cli.common import add_parser_prefix\n# conda env import\nfrom .common import get_prefix\nfrom ..env import from_environment\nfrom ..exceptions import CondaEnvException\n\ndescription = \"\"\"\nExport a given environment\n\"\"\"\n\nexample = \"\"\"\nexamples:\n conda env export\n conda env export --file SOME_FILE\n\"\"\"\n\n\ndef configure_parser(sub_parsers):\n p = sub_parsers.add_parser(\n 'export',\n formatter_class=RawDescriptionHelpFormatter,\n description=description,\n help=description,\n epilog=example,\n )\n\n p.add_argument(\n '-c', '--channel',\n action='append',\n help='Additional channel to include in the export'\n )\n\n p.add_argument(\n \"--override-channels\",\n action=\"store_true\",\n help=\"Do not include .condarc channels\",\n )\n add_parser_prefix(p)\n\n p.add_argument(\n '-f', '--file',\n default=None,\n required=False\n )\n\n p.add_argument(\n '--no-builds',\n default=False,\n action='store_true',\n required=False,\n help='Remove build specification from dependencies'\n )\n\n p.add_argument(\n '--ignore-channels',\n default=False,\n action='store_true',\n required=False,\n help='Do not include channel names with package names.')\n\n p.set_defaults(func=execute)\n\n\n# TODO Make this aware of channels that were used to install packages\ndef execute(args, parser):\n if not (args.name or args.prefix):\n # Note, this is a hack fofr get_prefix that assumes argparse results\n # TODO Refactor common.get_prefix\n name = os.environ.get('CONDA_DEFAULT_ENV', False)\n if not name:\n msg = \"Unable to determine environment\\n\\n\"\n msg += textwrap.dedent(\"\"\"\n Please re-run this command with one of 
the following options:\n\n * Provide an environment name via --name or -n\n * Re-run this command inside an activated conda environment.\"\"\").lstrip()\n # TODO Add json support\n raise CondaEnvException(msg)\n args.name = name\n else:\n name = args.name\n prefix = get_prefix(args)\n env = from_environment(name, prefix, no_builds=args.no_builds,\n ignore_channels=args.ignore_channels)\n\n if args.override_channels:\n env.remove_channels()\n\n if args.channel is not None:\n env.add_channels(args.channel)\n\n if args.file is None:\n print(env.to_yaml())\n else:\n fp = open(args.file, 'wb')\n env.to_yaml(stream=fp)\n", "path": "conda_env/cli/main_export.py"}, {"content": "# (c) Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\"\"\"conda is a tool for managing environments and packages.\n\nconda provides the following commands:\n\n Information\n ===========\n\n info : display information about the current install\n list : list packages linked into a specified environment\n search : print information about a specified package\n help : display a list of available conda commands and their help\n strings\n\n Package Management\n ==================\n\n create : create a new conda environment from a list of specified\n packages\n install : install new packages into an existing conda environment\n update : update packages in a specified conda environment\n\n\n Packaging\n =========\n\n package : create a conda package in an environment\n\nAdditional help for each command can be accessed by using:\n\n conda <command> -h\n\"\"\"\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\nimport sys\nfrom argparse import SUPPRESS\nfrom logging import CRITICAL, DEBUG, getLogger\n\nfrom .. 
import __version__\n\nlog = getLogger(__name__)\n\n\ndef generate_parser():\n from ..cli import conda_argparse\n p = conda_argparse.ArgumentParser(\n description='conda is a tool for managing and deploying applications,'\n ' environments and packages.',\n )\n p.add_argument(\n '-V', '--version',\n action='version',\n version='conda %s' % __version__,\n help=\"Show the conda version number and exit.\"\n )\n p.add_argument(\n \"--debug\",\n action=\"store_true\",\n help=SUPPRESS,\n )\n p.add_argument(\n \"--json\",\n action=\"store_true\",\n help=SUPPRESS,\n )\n sub_parsers = p.add_subparsers(\n metavar='command',\n dest='cmd',\n )\n # http://bugs.python.org/issue9253\n # http://stackoverflow.com/a/18283730/1599393\n sub_parsers.required = True\n\n return p, sub_parsers\n\n\ndef _main(*args):\n from ..base.constants import SEARCH_PATH\n from ..base.context import context\n\n from ..gateways.logging import set_all_logger_level, set_verbosity\n\n if len(args) == 1:\n args = args + ('-h',)\n\n p, sub_parsers = generate_parser()\n\n main_modules = [\"info\", \"help\", \"list\", \"search\", \"create\", \"install\", \"update\",\n \"remove\", \"config\", \"clean\", \"package\"]\n modules = [\"conda.cli.main_\"+suffix for suffix in main_modules]\n for module in modules:\n imported = importlib.import_module(module)\n imported.configure_parser(sub_parsers)\n if \"update\" in module:\n imported.configure_parser(sub_parsers, name='upgrade')\n if \"remove\" in module:\n imported.configure_parser(sub_parsers, name='uninstall')\n\n from .find_commands import find_commands\n\n def completer(prefix, **kwargs):\n return [i for i in list(sub_parsers.choices) + find_commands()\n if i.startswith(prefix)]\n\n # when using sys.argv, first argument is generally conda or __main__.py. 
Ignore it.\n if (any(sname in args[0] for sname in ('conda', 'conda.exe', '__main__.py', 'conda-script.py'))\n and (args[1] in list(sub_parsers.choices.keys()) + find_commands()\n or args[1].startswith('-'))):\n log.debug(\"Ignoring first argument (%s), as it is not a subcommand\", args[0])\n args = args[1:]\n\n sub_parsers.completer = completer\n args = p.parse_args(args)\n\n context.__init__(SEARCH_PATH, 'conda', args)\n\n if getattr(args, 'json', False):\n # Silence logging info to avoid interfering with JSON output\n for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):\n getLogger(logger).setLevel(CRITICAL + 1)\n\n if context.debug:\n set_all_logger_level(DEBUG)\n elif context.verbosity:\n set_verbosity(context.verbosity)\n log.debug(\"verbosity set to %s\", context.verbosity)\n\n exit_code = args.func(args, p)\n if isinstance(exit_code, int):\n return exit_code\n\n\ndef _ensure_text_type(value):\n # copying here from conda/common/compat.py to avoid the import\n try:\n return value.decode('utf-8')\n except AttributeError:\n # AttributeError: '<>' object has no attribute 'decode'\n # In this case assume already text_type and do nothing\n return value\n except UnicodeDecodeError:\n from requests.packages.chardet import detect\n encoding = detect(value).get('encoding') or 'utf-8'\n return value.decode(encoding)\n\n\ndef main(*args):\n if not args:\n args = sys.argv\n\n args = tuple(_ensure_text_type(s) for s in args)\n\n log.debug(\"conda.cli.main called with %s\", args)\n if len(args) > 1:\n try:\n argv1 = args[1].strip()\n if argv1.startswith('..'):\n import conda.cli.activate as activate\n activate.main()\n return\n if argv1 in ('activate', 'deactivate'):\n from ..exceptions import CommandNotFoundError\n raise CommandNotFoundError(argv1)\n except Exception as e:\n from ..exceptions import handle_exception\n return handle_exception(e)\n\n from ..exceptions import conda_exception_handler\n return conda_exception_handler(_main, *args)\n\n\nif __name__ == '__main__':\n main()\n", "path": "conda/cli/main.py"}, {"content": "from __future__ import print_function, division, absolute_import\n\nfrom logging import getLogger, CRITICAL\n\nimport os\nimport sys\n\ntry:\n from conda.exceptions import conda_exception_handler\nexcept ImportError as e:\n if 'CONDA_DEFAULT_ENV' in os.environ:\n sys.stderr.write(\"\"\"\nThere was an error importing conda.\n\nIt appears this was caused by installing conda-env into a conda\nenvironment. Like conda, conda-env needs to be installed into your\nroot conda/Anaconda environment.\n\nPlease deactivate your current environment, then re-install conda-env\nusing this command:\n\n conda install -c conda conda-env\n\nIf you are seeing this error and have not installed conda-env into an\nenvironment, please open a bug report at:\n https://github.com/conda/conda-env\n\n\"\"\".lstrip())\n sys.exit(-1)\n else:\n raise e\n\nfrom conda.cli.conda_argparse import ArgumentParser\n\nfrom . import main_attach\nfrom . import main_create\nfrom . import main_export\nfrom . import main_list\nfrom . import main_remove\nfrom . import main_upload\nfrom . import main_update\nfrom conda.base.context import context\n\n# TODO: This belongs in a helper library somewhere\n# Note: This only works with `conda-env` as a sub-command. 
If this gets\n# merged into conda-env, this needs to be adjusted.\ndef show_help_on_empty_command():\n if len(sys.argv) == 1: # sys.argv == ['/path/to/bin/conda-env']\n sys.argv.append('--help')\n\n\ndef create_parser():\n p = ArgumentParser()\n sub_parsers = p.add_subparsers()\n\n main_attach.configure_parser(sub_parsers)\n main_create.configure_parser(sub_parsers)\n main_export.configure_parser(sub_parsers)\n main_list.configure_parser(sub_parsers)\n main_remove.configure_parser(sub_parsers)\n main_upload.configure_parser(sub_parsers)\n main_update.configure_parser(sub_parsers)\n\n show_help_on_empty_command()\n return p\n\n\ndef main():\n parser = create_parser()\n args = parser.parse_args()\n context._set_argparse_args(args)\n if getattr(args, 'json', False):\n # # Silence logging info to avoid interfering with JSON output\n # for logger in Logger.manager.loggerDict:\n # if logger not in ('fetch', 'progress'):\n # getLogger(logger).setLevel(CRITICAL + 1)\n for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):\n getLogger(logger).setLevel(CRITICAL + 1)\n\n return conda_exception_handler(args.func, args, parser)\n\n\nif __name__ == '__main__':\n sys.exit(main())\n", "path": "conda_env/cli/main.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nfrom argparse import RawDescriptionHelpFormatter\nimport os\nimport textwrap\n\nfrom conda.cli.common import add_parser_json, add_parser_prefix\n# conda env import\nfrom .common import get_prefix\nfrom ..env import from_environment\nfrom ..exceptions import CondaEnvException\n\ndescription = \"\"\"\nExport a given environment\n\"\"\"\n\nexample = \"\"\"\nexamples:\n conda env export\n conda env export --file SOME_FILE\n\"\"\"\n\n\ndef configure_parser(sub_parsers):\n p = sub_parsers.add_parser(\n 'export',\n formatter_class=RawDescriptionHelpFormatter,\n description=description,\n help=description,\n epilog=example,\n )\n\n p.add_argument(\n '-c', '--channel',\n action='append',\n help='Additional channel to include in the export'\n )\n\n p.add_argument(\n \"--override-channels\",\n action=\"store_true\",\n help=\"Do not include .condarc channels\",\n )\n add_parser_prefix(p)\n\n p.add_argument(\n '-f', '--file',\n default=None,\n required=False\n )\n\n p.add_argument(\n '--no-builds',\n default=False,\n action='store_true',\n required=False,\n help='Remove build specification from dependencies'\n )\n\n p.add_argument(\n '--ignore-channels',\n default=False,\n action='store_true',\n required=False,\n help='Do not include channel names with package names.')\n add_parser_json(p)\n p.set_defaults(func=execute)\n\n\n# TODO Make this aware of channels that were used to install packages\ndef execute(args, parser):\n if not (args.name or args.prefix):\n # Note, this is a hack fofr get_prefix that assumes argparse results\n # TODO Refactor common.get_prefix\n name = os.environ.get('CONDA_DEFAULT_ENV', False)\n if not name:\n msg = \"Unable to determine environment\\n\\n\"\n msg += textwrap.dedent(\"\"\"\n Please re-run this command with one of the following options:\n\n * Provide an environment name via --name or -n\n * Re-run this command inside an activated conda environment.\"\"\").lstrip()\n # TODO Add json support\n raise CondaEnvException(msg)\n args.name = name\n else:\n name = args.name\n prefix = get_prefix(args)\n env = from_environment(name, prefix, no_builds=args.no_builds,\n ignore_channels=args.ignore_channels)\n\n if args.override_channels:\n env.remove_channels()\n\n if args.channel is 
not None:\n env.add_channels(args.channel)\n\n if args.file is None:\n print(env.to_yaml())\n else:\n fp = open(args.file, 'wb')\n env.to_yaml(stream=fp)\n", "path": "conda_env/cli/main_export.py"}, {"content": "# (c) Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\"\"\"conda is a tool for managing environments and packages.\n\nconda provides the following commands:\n\n Information\n ===========\n\n info : display information about the current install\n list : list packages linked into a specified environment\n search : print information about a specified package\n help : display a list of available conda commands and their help\n strings\n\n Package Management\n ==================\n\n create : create a new conda environment from a list of specified\n packages\n install : install new packages into an existing conda environment\n update : update packages in a specified conda environment\n\n\n Packaging\n =========\n\n package : create a conda package in an environment\n\nAdditional help for each command can be accessed by using:\n\n conda <command> -h\n\"\"\"\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\nimport sys\nfrom argparse import SUPPRESS\nfrom logging import CRITICAL, DEBUG, getLogger\n\nfrom .. import __version__\n\nlog = getLogger(__name__)\n\n\ndef generate_parser():\n from ..cli import conda_argparse\n p = conda_argparse.ArgumentParser(\n description='conda is a tool for managing and deploying applications,'\n ' environments and packages.',\n )\n p.add_argument(\n '-V', '--version',\n action='version',\n version='conda %s' % __version__,\n help=\"Show the conda version number and exit.\"\n )\n p.add_argument(\n \"--debug\",\n action=\"store_true\",\n help=SUPPRESS,\n )\n p.add_argument(\n \"--json\",\n action=\"store_true\",\n help=SUPPRESS,\n )\n sub_parsers = p.add_subparsers(\n metavar='command',\n dest='cmd',\n )\n # http://bugs.python.org/issue9253\n # http://stackoverflow.com/a/18283730/1599393\n sub_parsers.required = True\n\n return p, sub_parsers\n\n\ndef init_loggers(context):\n from ..gateways.logging import set_all_logger_level, set_verbosity\n if not context.json:\n # Silence logging info to avoid interfering with JSON output\n for logger in ('print', 'dotupdate', 'stdoutlog', 'stderrlog'):\n getLogger(logger).setLevel(CRITICAL + 1)\n\n if context.debug:\n set_all_logger_level(DEBUG)\n elif context.verbosity:\n set_verbosity(context.verbosity)\n log.debug(\"verbosity set to %s\", context.verbosity)\n\n\ndef _main(*args):\n from ..base.constants import SEARCH_PATH\n from ..base.context import context\n\n if len(args) == 1:\n args = args + ('-h',)\n\n p, sub_parsers = generate_parser()\n\n main_modules = [\"info\", \"help\", \"list\", \"search\", \"create\", \"install\", \"update\",\n \"remove\", \"config\", \"clean\", \"package\"]\n modules = [\"conda.cli.main_\"+suffix for suffix in main_modules]\n for module in modules:\n imported = importlib.import_module(module)\n imported.configure_parser(sub_parsers)\n if \"update\" in module:\n imported.configure_parser(sub_parsers, name='upgrade')\n if \"remove\" in module:\n imported.configure_parser(sub_parsers, name='uninstall')\n\n from .find_commands import find_commands\n\n def completer(prefix, **kwargs):\n return [i for i in list(sub_parsers.choices) + find_commands()\n if 
i.startswith(prefix)]\n\n # when using sys.argv, first argument is generally conda or __main__.py. Ignore it.\n if (any(sname in args[0] for sname in ('conda', 'conda.exe', '__main__.py', 'conda-script.py'))\n and (args[1] in list(sub_parsers.choices.keys()) + find_commands()\n or args[1].startswith('-'))):\n log.debug(\"Ignoring first argument (%s), as it is not a subcommand\", args[0])\n args = args[1:]\n\n sub_parsers.completer = completer\n args = p.parse_args(args)\n\n context.__init__(SEARCH_PATH, 'conda', args)\n init_loggers(context)\n\n exit_code = args.func(args, p)\n if isinstance(exit_code, int):\n return exit_code\n\n\ndef _ensure_text_type(value):\n # copying here from conda/common/compat.py to avoid the import\n try:\n return value.decode('utf-8')\n except AttributeError:\n # AttributeError: '<>' object has no attribute 'decode'\n # In this case assume already text_type and do nothing\n return value\n except UnicodeDecodeError:\n from requests.packages.chardet import detect\n encoding = detect(value).get('encoding') or 'utf-8'\n return value.decode(encoding)\n\n\ndef main(*args):\n if not args:\n args = sys.argv\n\n args = tuple(_ensure_text_type(s) for s in args)\n\n log.debug(\"conda.cli.main called with %s\", args)\n if len(args) > 1:\n try:\n argv1 = args[1].strip()\n if argv1.startswith('..'):\n import conda.cli.activate as activate\n activate.main()\n return\n if argv1 in ('activate', 'deactivate'):\n from ..exceptions import CommandNotFoundError\n raise CommandNotFoundError(argv1)\n except Exception as e:\n from ..exceptions import handle_exception\n return handle_exception(e)\n\n from ..exceptions import conda_exception_handler\n return conda_exception_handler(_main, *args)\n\n\nif __name__ == '__main__':\n main()\n", "path": "conda/cli/main.py"}, {"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\n\nfrom conda.base.constants import SEARCH_PATH\nfrom conda.base.context import context\nfrom conda.cli.conda_argparse import ArgumentParser\nfrom conda.cli.main import init_loggers\n\ntry:\n from conda.exceptions import conda_exception_handler\nexcept ImportError as e:\n if 'CONDA_DEFAULT_ENV' in os.environ:\n sys.stderr.write(\"\"\"\nThere was an error importing conda.\n\nIt appears this was caused by installing conda-env into a conda\nenvironment. Like conda, conda-env needs to be installed into your\nroot conda/Anaconda environment.\n\nPlease deactivate your current environment, then re-install conda-env\nusing this command:\n\n conda install -c conda conda-env\n\nIf you are seeing this error and have not installed conda-env into an\nenvironment, please open a bug report at:\n https://github.com/conda/conda-env\n\n\"\"\".lstrip())\n sys.exit(-1)\n else:\n raise e\n\nfrom . import main_attach\nfrom . import main_create\nfrom . import main_export\nfrom . import main_list\nfrom . import main_remove\nfrom . import main_upload\nfrom . import main_update\n\n\n# TODO: This belongs in a helper library somewhere\n# Note: This only works with `conda-env` as a sub-command. 
If this gets\n# merged into conda-env, this needs to be adjusted.\ndef show_help_on_empty_command():\n if len(sys.argv) == 1: # sys.argv == ['/path/to/bin/conda-env']\n sys.argv.append('--help')\n\n\ndef create_parser():\n p = ArgumentParser()\n sub_parsers = p.add_subparsers()\n\n main_attach.configure_parser(sub_parsers)\n main_create.configure_parser(sub_parsers)\n main_export.configure_parser(sub_parsers)\n main_list.configure_parser(sub_parsers)\n main_remove.configure_parser(sub_parsers)\n main_upload.configure_parser(sub_parsers)\n main_update.configure_parser(sub_parsers)\n\n show_help_on_empty_command()\n return p\n\n\ndef main():\n parser = create_parser()\n args = parser.parse_args()\n context.__init__(SEARCH_PATH, 'conda', args)\n init_loggers(context)\n return conda_exception_handler(args.func, args, parser)\n\n\nif __name__ == '__main__':\n sys.exit(main())\n", "path": "conda_env/cli/main.py"}]} | 3,687 | 1,010 |
gh_patches_debug_41175 | rasdani/github-patches | git_diff | qtile__qtile-2894 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows in Tile layout grow beyond the edge of the screen
On the latest Qtile on Arch Linux.
Using the `Tile` layout, when I keep growing a window after it has reached the screen edge on either side, it keeps growing indefinitely, until the slave windows sit almost underneath the master window. With other layouts such as `Columns`, the windows stop growing at a certain point.
Below is a demonstration; `screenkey` is visible in the terminals, so the key-press count shows that the windows really do keep growing beyond the screen borders.
https://user-images.githubusercontent.com/76445071/137324841-f3c3538a-c257-4c3f-807d-26e9cb5ca3b8.mp4
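
For context, a minimal config sketch of the kind of setup in which this reproduces. The mod key and bindings below are illustrative assumptions, not taken from the report or the video:

```python
# Minimal qtile config fragment (illustrative): repeatedly pressing the
# "grow" binding keeps increasing Tile's ratio with nothing to stop it,
# so the master window eventually extends past the screen edge.
from libqtile import layout
from libqtile.config import Key
from libqtile.lazy import lazy

mod = "mod4"  # assumed modifier key

layouts = [layout.Tile()]

keys = [
    # Each press changes Tile's `ratio` by `ratio_increment` (default 0.05).
    Key([mod], "l", lazy.layout.increase_ratio()),
    Key([mod], "h", lazy.layout.decrease_ratio()),
]
```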
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/layout/tile.py`
Content:
```
1 # Copyright (c) 2010 Aldo Cortesi
2 # Copyright (c) 2010-2011 Paul Colomiets
3 # Copyright (c) 2011 Mounier Florian
4 # Copyright (c) 2011 Tzbob
5 # Copyright (c) 2012 roger
6 # Copyright (c) 2012-2014 Tycho Andersen
7 # Copyright (c) 2013 Tao Sauvage
8 # Copyright (c) 2014 ramnes
9 # Copyright (c) 2014 Sean Vig
10 # Copyright (c) 2014 dmpayton
11 # Copyright (c) 2014 dequis
12 # Copyright (c) 2017 Dirk Hartmann.
13 # Copyright (c) 2018 Nazar Mokrynskyi
14 #
15 # Permission is hereby granted, free of charge, to any person obtaining a copy
16 # of this software and associated documentation files (the "Software"), to deal
17 # in the Software without restriction, including without limitation the rights
18 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
19 # copies of the Software, and to permit persons to whom the Software is
20 # furnished to do so, subject to the following conditions:
21 #
22 # The above copyright notice and this permission notice shall be included in
23 # all copies or substantial portions of the Software.
24 #
25 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
26 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
27 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
28 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
29 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
30 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
31 # SOFTWARE.
32
33 from libqtile.layout.base import _SimpleLayoutBase
34
35
36 class Tile(_SimpleLayoutBase):
37 """A layout with two stacks of windows dividing the screen
38
39 The Tile layout divides the screen_rect horizontally into two stacks. The
40 maximum amount of "master" windows can be configured; surplus windows will
41 be displayed in the slave stack on the right.
42 Within their stacks, the windows will be tiled vertically.
43 The windows can be rotated in their entirety by calling up() or down() or,
44 if shift_windows is set to True, individually.
45 """
46
47 defaults = [
48 ("border_focus", "#0000ff", "Border colour(s) for the focused window."),
49 ("border_normal", "#000000", "Border colour(s) for un-focused windows."),
50 ("border_width", 1, "Border width."),
51 ("margin", 0, "Margin of the layout (int or list of ints [N E S W])"),
52 ("ratio", 0.618,
53 "Width-percentage of screen size reserved for master windows."),
54 ("master_length", 1,
55 "Amount of windows displayed in the master stack. Surplus windows "
56 "will be moved to the slave stack."),
57 ("expand", True,
58 "Expand the master windows to the full screen width if no slaves "
59 "are present."),
60 ("ratio_increment", 0.05,
61 "By which amount to change ratio when cmd_decrease_ratio or "
62 "cmd_increase_ratio are called."),
63 ("add_on_top", True,
64 "Add new clients before all the others, potentially pushing other "
65 "windows into slave stack."),
66 ("add_after_last", False,
67 "Add new clients after all the others. If this is True, it "
68 "overrides add_on_top."),
69 ("shift_windows", False,
70 "Allow to shift windows within the layout. If False, the layout "
71 "will be rotated instead."),
72 ("master_match", None,
73 "A Match object defining which window(s) should be kept masters."),
74 ]
75
76 def __init__(self, **config):
77 _SimpleLayoutBase.__init__(self, **config)
78 self.add_defaults(Tile.defaults)
79
80 @property
81 def master_windows(self):
82 return self.clients[:self.master_length]
83
84 @property
85 def slave_windows(self):
86 return self.clients[self.master_length:]
87
88 def up(self):
89 if self.shift_windows:
90 self.clients.shuffle_up()
91 else:
92 self.clients.rotate_down()
93 self.group.layout_all()
94
95 def down(self):
96 if self.shift_windows:
97 self.clients.shuffle_down()
98 else:
99 self.clients.rotate_up()
100 self.group.layout_all()
101
102 def reset_master(self, match=None):
103 if not match and not self.master_match:
104 return
105 match = match or self.master_match
106 if self.clients:
107 masters = [c for c in self.clients if match.compare(c)]
108 for client in reversed(masters):
109 self.clients.remove(client)
110 self.clients.append_head(client)
111
112 def clone(self, group):
113 c = _SimpleLayoutBase.clone(self, group)
114 return c
115
116 def add(self, client, offset_to_current=1):
117 if self.add_after_last:
118 self.clients.append(client)
119 elif self.add_on_top:
120 self.clients.append_head(client)
121 else:
122 super().add(client, offset_to_current)
123 self.reset_master()
124
125 def configure(self, client, screen_rect):
126 screen_width = screen_rect.width
127 screen_height = screen_rect.height
128 border_width = self.border_width
129 if self.clients and client in self.clients:
130 pos = self.clients.index(client)
131 if client in self.master_windows:
132 w = int(screen_width * self.ratio) \
133 if len(self.slave_windows) or not self.expand \
134 else screen_width
135 h = screen_height // self.master_length
136 x = screen_rect.x
137 y = screen_rect.y + pos * h
138 else:
139 w = screen_width - int(screen_width * self.ratio)
140 h = screen_height // (len(self.slave_windows))
141 x = screen_rect.x + int(screen_width * self.ratio)
142 y = screen_rect.y + self.clients[self.master_length:].index(client) * h
143 if client.has_focus:
144 bc = self.border_focus
145 else:
146 bc = self.border_normal
147 client.place(
148 x,
149 y,
150 w - border_width * 2,
151 h - border_width * 2,
152 border_width,
153 bc,
154 margin=self.margin,
155 )
156 client.unhide()
157 else:
158 client.hide()
159
160 def info(self):
161 d = _SimpleLayoutBase.info(self)
162 d.update(dict(
163 master=[c.name for c in self.master_windows],
164 slave=[c.name for c in self.slave_windows],
165 ))
166 return d
167
168 def cmd_shuffle_down(self):
169 self.down()
170
171 def cmd_shuffle_up(self):
172 self.up()
173
174 cmd_shuffle_left = cmd_shuffle_up
175 cmd_shuffle_right = cmd_shuffle_down
176
177 cmd_previous = _SimpleLayoutBase.previous
178 cmd_next = _SimpleLayoutBase.next
179 cmd_up = cmd_previous
180 cmd_down = cmd_next
181 cmd_left = cmd_previous
182 cmd_right = cmd_next
183
184 def cmd_decrease_ratio(self):
185 self.ratio -= self.ratio_increment
186 self.group.layout_all()
187
188 def cmd_increase_ratio(self):
189 self.ratio += self.ratio_increment
190 self.group.layout_all()
191
192 def cmd_decrease_nmaster(self):
193 self.master_length -= 1
194 if self.master_length <= 0:
195 self.master_length = 1
196 self.group.layout_all()
197
198 def cmd_increase_nmaster(self):
199 self.master_length += 1
200 self.group.layout_all()
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/layout/tile.py b/libqtile/layout/tile.py
--- a/libqtile/layout/tile.py
+++ b/libqtile/layout/tile.py
@@ -51,6 +51,8 @@
("margin", 0, "Margin of the layout (int or list of ints [N E S W])"),
("ratio", 0.618,
"Width-percentage of screen size reserved for master windows."),
+ ("max_ratio", 0.85, "Maximum width of master windows"),
+ ("min_ratio", 0.15, "Minimum width of master windows"),
("master_length", 1,
"Amount of windows displayed in the master stack. Surplus windows "
"will be moved to the slave stack."),
@@ -76,6 +78,15 @@
def __init__(self, **config):
_SimpleLayoutBase.__init__(self, **config)
self.add_defaults(Tile.defaults)
+ self._initial_ratio = self.ratio
+
+ @property
+ def ratio_size(self):
+ return self.ratio
+
+ @ratio_size.setter
+ def ratio_size(self, ratio):
+ self.ratio = min(max(ratio, self.min_ratio), self.max_ratio)
@property
def master_windows(self):
@@ -129,16 +140,16 @@
if self.clients and client in self.clients:
pos = self.clients.index(client)
if client in self.master_windows:
- w = int(screen_width * self.ratio) \
+ w = int(screen_width * self.ratio_size) \
if len(self.slave_windows) or not self.expand \
else screen_width
h = screen_height // self.master_length
x = screen_rect.x
y = screen_rect.y + pos * h
else:
- w = screen_width - int(screen_width * self.ratio)
+ w = screen_width - int(screen_width * self.ratio_size)
h = screen_height // (len(self.slave_windows))
- x = screen_rect.x + int(screen_width * self.ratio)
+ x = screen_rect.x + int(screen_width * self.ratio_size)
y = screen_rect.y + self.clients[self.master_length:].index(client) * h
if client.has_focus:
bc = self.border_focus
@@ -171,6 +182,12 @@
def cmd_shuffle_up(self):
self.up()
+ def cmd_reset(self):
+ self.ratio_size = self._initial_ratio
+ self.group.layout_all()
+
+ cmd_normalize = cmd_reset
+
cmd_shuffle_left = cmd_shuffle_up
cmd_shuffle_right = cmd_shuffle_down
@@ -182,11 +199,11 @@
cmd_right = cmd_next
def cmd_decrease_ratio(self):
- self.ratio -= self.ratio_increment
+ self.ratio_size -= self.ratio_increment
self.group.layout_all()
def cmd_increase_ratio(self):
- self.ratio += self.ratio_increment
+ self.ratio_size += self.ratio_increment
self.group.layout_all()
def cmd_decrease_nmaster(self):
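In essence, the patch stops applying raw `ratio` updates in `cmd_increase_ratio`/`cmd_decrease_ratio` and instead routes them through a `ratio_size` setter that clamps to `[min_ratio, max_ratio]`. A standalone sketch of that clamping behaviour follows; the constants are copied from the new defaults in the diff, and this is an illustration rather than the actual qtile code:

```python
# Standalone illustration of the clamping introduced by the patch; the real
# implementation lives in the `ratio_size` property of layout.Tile above.
MIN_RATIO, MAX_RATIO = 0.15, 0.85   # new defaults added by the patch
RATIO_INCREMENT = 0.05              # Tile's default ratio_increment

def clamped(ratio: float) -> float:
    return min(max(ratio, MIN_RATIO), MAX_RATIO)

ratio = 0.618                       # Tile's default ratio
for _ in range(20):                 # twenty consecutive "grow" presses
    ratio = clamped(ratio + RATIO_INCREMENT)
print(round(ratio, 2))              # 0.85 -- the master stops at max_ratio
```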
| {"golden_diff": "diff --git a/libqtile/layout/tile.py b/libqtile/layout/tile.py\n--- a/libqtile/layout/tile.py\n+++ b/libqtile/layout/tile.py\n@@ -51,6 +51,8 @@\n (\"margin\", 0, \"Margin of the layout (int or list of ints [N E S W])\"),\n (\"ratio\", 0.618,\n \"Width-percentage of screen size reserved for master windows.\"),\n+ (\"max_ratio\", 0.85, \"Maximum width of master windows\"),\n+ (\"min_ratio\", 0.15, \"Minimum width of master windows\"),\n (\"master_length\", 1,\n \"Amount of windows displayed in the master stack. Surplus windows \"\n \"will be moved to the slave stack.\"),\n@@ -76,6 +78,15 @@\n def __init__(self, **config):\n _SimpleLayoutBase.__init__(self, **config)\n self.add_defaults(Tile.defaults)\n+ self._initial_ratio = self.ratio\n+\n+ @property\n+ def ratio_size(self):\n+ return self.ratio\n+\n+ @ratio_size.setter\n+ def ratio_size(self, ratio):\n+ self.ratio = min(max(ratio, self.min_ratio), self.max_ratio)\n \n @property\n def master_windows(self):\n@@ -129,16 +140,16 @@\n if self.clients and client in self.clients:\n pos = self.clients.index(client)\n if client in self.master_windows:\n- w = int(screen_width * self.ratio) \\\n+ w = int(screen_width * self.ratio_size) \\\n if len(self.slave_windows) or not self.expand \\\n else screen_width\n h = screen_height // self.master_length\n x = screen_rect.x\n y = screen_rect.y + pos * h\n else:\n- w = screen_width - int(screen_width * self.ratio)\n+ w = screen_width - int(screen_width * self.ratio_size)\n h = screen_height // (len(self.slave_windows))\n- x = screen_rect.x + int(screen_width * self.ratio)\n+ x = screen_rect.x + int(screen_width * self.ratio_size)\n y = screen_rect.y + self.clients[self.master_length:].index(client) * h\n if client.has_focus:\n bc = self.border_focus\n@@ -171,6 +182,12 @@\n def cmd_shuffle_up(self):\n self.up()\n \n+ def cmd_reset(self):\n+ self.ratio_size = self._initial_ratio\n+ self.group.layout_all()\n+\n+ cmd_normalize = cmd_reset\n+\n cmd_shuffle_left = cmd_shuffle_up\n cmd_shuffle_right = cmd_shuffle_down\n \n@@ -182,11 +199,11 @@\n cmd_right = cmd_next\n \n def cmd_decrease_ratio(self):\n- self.ratio -= self.ratio_increment\n+ self.ratio_size -= self.ratio_increment\n self.group.layout_all()\n \n def cmd_increase_ratio(self):\n- self.ratio += self.ratio_increment\n+ self.ratio_size += self.ratio_increment\n self.group.layout_all()\n \n def cmd_decrease_nmaster(self):\n", "issue": "Windows in Tile layout grow beyond the edge of the screen\nOn the latest Qtile on Arch Linux.\r\n\r\nUsing the `Tile` layout, when I grow windows to the very end on either side, they go on forever. With other layouts such as `Columns`, the windows stop growing at a certain point. 
The slave windows almost go underneath the master window.\r\n\r\nBelow you can see a demonstration of this, which you can also see `screenkey` through the terminals to show the windows do in fact grow beyond the screen borders, based on how many times you've pressed the key.\r\n \r\nhttps://user-images.githubusercontent.com/76445071/137324841-f3c3538a-c257-4c3f-807d-26e9cb5ca3b8.mp4\n", "before_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2010-2011 Paul Colomiets\n# Copyright (c) 2011 Mounier Florian\n# Copyright (c) 2011 Tzbob\n# Copyright (c) 2012 roger\n# Copyright (c) 2012-2014 Tycho Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014 ramnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 dmpayton\n# Copyright (c) 2014 dequis\n# Copyright (c) 2017 Dirk Hartmann.\n# Copyright (c) 2018 Nazar Mokrynskyi\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile.layout.base import _SimpleLayoutBase\n\n\nclass Tile(_SimpleLayoutBase):\n \"\"\"A layout with two stacks of windows dividing the screen\n\n The Tile layout divides the screen_rect horizontally into two stacks. The\n maximum amount of \"master\" windows can be configured; surplus windows will\n be displayed in the slave stack on the right.\n Within their stacks, the windows will be tiled vertically.\n The windows can be rotated in their entirety by calling up() or down() or,\n if shift_windows is set to True, individually.\n \"\"\"\n\n defaults = [\n (\"border_focus\", \"#0000ff\", \"Border colour(s) for the focused window.\"),\n (\"border_normal\", \"#000000\", \"Border colour(s) for un-focused windows.\"),\n (\"border_width\", 1, \"Border width.\"),\n (\"margin\", 0, \"Margin of the layout (int or list of ints [N E S W])\"),\n (\"ratio\", 0.618,\n \"Width-percentage of screen size reserved for master windows.\"),\n (\"master_length\", 1,\n \"Amount of windows displayed in the master stack. Surplus windows \"\n \"will be moved to the slave stack.\"),\n (\"expand\", True,\n \"Expand the master windows to the full screen width if no slaves \"\n \"are present.\"),\n (\"ratio_increment\", 0.05,\n \"By which amount to change ratio when cmd_decrease_ratio or \"\n \"cmd_increase_ratio are called.\"),\n (\"add_on_top\", True,\n \"Add new clients before all the others, potentially pushing other \"\n \"windows into slave stack.\"),\n (\"add_after_last\", False,\n \"Add new clients after all the others. 
If this is True, it \"\n \"overrides add_on_top.\"),\n (\"shift_windows\", False,\n \"Allow to shift windows within the layout. If False, the layout \"\n \"will be rotated instead.\"),\n (\"master_match\", None,\n \"A Match object defining which window(s) should be kept masters.\"),\n ]\n\n def __init__(self, **config):\n _SimpleLayoutBase.__init__(self, **config)\n self.add_defaults(Tile.defaults)\n\n @property\n def master_windows(self):\n return self.clients[:self.master_length]\n\n @property\n def slave_windows(self):\n return self.clients[self.master_length:]\n\n def up(self):\n if self.shift_windows:\n self.clients.shuffle_up()\n else:\n self.clients.rotate_down()\n self.group.layout_all()\n\n def down(self):\n if self.shift_windows:\n self.clients.shuffle_down()\n else:\n self.clients.rotate_up()\n self.group.layout_all()\n\n def reset_master(self, match=None):\n if not match and not self.master_match:\n return\n match = match or self.master_match\n if self.clients:\n masters = [c for c in self.clients if match.compare(c)]\n for client in reversed(masters):\n self.clients.remove(client)\n self.clients.append_head(client)\n\n def clone(self, group):\n c = _SimpleLayoutBase.clone(self, group)\n return c\n\n def add(self, client, offset_to_current=1):\n if self.add_after_last:\n self.clients.append(client)\n elif self.add_on_top:\n self.clients.append_head(client)\n else:\n super().add(client, offset_to_current)\n self.reset_master()\n\n def configure(self, client, screen_rect):\n screen_width = screen_rect.width\n screen_height = screen_rect.height\n border_width = self.border_width\n if self.clients and client in self.clients:\n pos = self.clients.index(client)\n if client in self.master_windows:\n w = int(screen_width * self.ratio) \\\n if len(self.slave_windows) or not self.expand \\\n else screen_width\n h = screen_height // self.master_length\n x = screen_rect.x\n y = screen_rect.y + pos * h\n else:\n w = screen_width - int(screen_width * self.ratio)\n h = screen_height // (len(self.slave_windows))\n x = screen_rect.x + int(screen_width * self.ratio)\n y = screen_rect.y + self.clients[self.master_length:].index(client) * h\n if client.has_focus:\n bc = self.border_focus\n else:\n bc = self.border_normal\n client.place(\n x,\n y,\n w - border_width * 2,\n h - border_width * 2,\n border_width,\n bc,\n margin=self.margin,\n )\n client.unhide()\n else:\n client.hide()\n\n def info(self):\n d = _SimpleLayoutBase.info(self)\n d.update(dict(\n master=[c.name for c in self.master_windows],\n slave=[c.name for c in self.slave_windows],\n ))\n return d\n\n def cmd_shuffle_down(self):\n self.down()\n\n def cmd_shuffle_up(self):\n self.up()\n\n cmd_shuffle_left = cmd_shuffle_up\n cmd_shuffle_right = cmd_shuffle_down\n\n cmd_previous = _SimpleLayoutBase.previous\n cmd_next = _SimpleLayoutBase.next\n cmd_up = cmd_previous\n cmd_down = cmd_next\n cmd_left = cmd_previous\n cmd_right = cmd_next\n\n def cmd_decrease_ratio(self):\n self.ratio -= self.ratio_increment\n self.group.layout_all()\n\n def cmd_increase_ratio(self):\n self.ratio += self.ratio_increment\n self.group.layout_all()\n\n def cmd_decrease_nmaster(self):\n self.master_length -= 1\n if self.master_length <= 0:\n self.master_length = 1\n self.group.layout_all()\n\n def cmd_increase_nmaster(self):\n self.master_length += 1\n self.group.layout_all()\n", "path": "libqtile/layout/tile.py"}], "after_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2010-2011 Paul Colomiets\n# Copyright (c) 2011 Mounier Florian\n# 
Copyright (c) 2011 Tzbob\n# Copyright (c) 2012 roger\n# Copyright (c) 2012-2014 Tycho Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014 ramnes\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 dmpayton\n# Copyright (c) 2014 dequis\n# Copyright (c) 2017 Dirk Hartmann.\n# Copyright (c) 2018 Nazar Mokrynskyi\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom libqtile.layout.base import _SimpleLayoutBase\n\n\nclass Tile(_SimpleLayoutBase):\n \"\"\"A layout with two stacks of windows dividing the screen\n\n The Tile layout divides the screen_rect horizontally into two stacks. The\n maximum amount of \"master\" windows can be configured; surplus windows will\n be displayed in the slave stack on the right.\n Within their stacks, the windows will be tiled vertically.\n The windows can be rotated in their entirety by calling up() or down() or,\n if shift_windows is set to True, individually.\n \"\"\"\n\n defaults = [\n (\"border_focus\", \"#0000ff\", \"Border colour(s) for the focused window.\"),\n (\"border_normal\", \"#000000\", \"Border colour(s) for un-focused windows.\"),\n (\"border_width\", 1, \"Border width.\"),\n (\"margin\", 0, \"Margin of the layout (int or list of ints [N E S W])\"),\n (\"ratio\", 0.618,\n \"Width-percentage of screen size reserved for master windows.\"),\n (\"max_ratio\", 0.85, \"Maximum width of master windows\"),\n (\"min_ratio\", 0.15, \"Minimum width of master windows\"),\n (\"master_length\", 1,\n \"Amount of windows displayed in the master stack. Surplus windows \"\n \"will be moved to the slave stack.\"),\n (\"expand\", True,\n \"Expand the master windows to the full screen width if no slaves \"\n \"are present.\"),\n (\"ratio_increment\", 0.05,\n \"By which amount to change ratio when cmd_decrease_ratio or \"\n \"cmd_increase_ratio are called.\"),\n (\"add_on_top\", True,\n \"Add new clients before all the others, potentially pushing other \"\n \"windows into slave stack.\"),\n (\"add_after_last\", False,\n \"Add new clients after all the others. If this is True, it \"\n \"overrides add_on_top.\"),\n (\"shift_windows\", False,\n \"Allow to shift windows within the layout. 
If False, the layout \"\n \"will be rotated instead.\"),\n (\"master_match\", None,\n \"A Match object defining which window(s) should be kept masters.\"),\n ]\n\n def __init__(self, **config):\n _SimpleLayoutBase.__init__(self, **config)\n self.add_defaults(Tile.defaults)\n self._initial_ratio = self.ratio\n\n @property\n def ratio_size(self):\n return self.ratio\n\n @ratio_size.setter\n def ratio_size(self, ratio):\n self.ratio = min(max(ratio, self.min_ratio), self.max_ratio)\n\n @property\n def master_windows(self):\n return self.clients[:self.master_length]\n\n @property\n def slave_windows(self):\n return self.clients[self.master_length:]\n\n def up(self):\n if self.shift_windows:\n self.clients.shuffle_up()\n else:\n self.clients.rotate_down()\n self.group.layout_all()\n\n def down(self):\n if self.shift_windows:\n self.clients.shuffle_down()\n else:\n self.clients.rotate_up()\n self.group.layout_all()\n\n def reset_master(self, match=None):\n if not match and not self.master_match:\n return\n match = match or self.master_match\n if self.clients:\n masters = [c for c in self.clients if match.compare(c)]\n for client in reversed(masters):\n self.clients.remove(client)\n self.clients.append_head(client)\n\n def clone(self, group):\n c = _SimpleLayoutBase.clone(self, group)\n return c\n\n def add(self, client, offset_to_current=1):\n if self.add_after_last:\n self.clients.append(client)\n elif self.add_on_top:\n self.clients.append_head(client)\n else:\n super().add(client, offset_to_current)\n self.reset_master()\n\n def configure(self, client, screen_rect):\n screen_width = screen_rect.width\n screen_height = screen_rect.height\n border_width = self.border_width\n if self.clients and client in self.clients:\n pos = self.clients.index(client)\n if client in self.master_windows:\n w = int(screen_width * self.ratio_size) \\\n if len(self.slave_windows) or not self.expand \\\n else screen_width\n h = screen_height // self.master_length\n x = screen_rect.x\n y = screen_rect.y + pos * h\n else:\n w = screen_width - int(screen_width * self.ratio_size)\n h = screen_height // (len(self.slave_windows))\n x = screen_rect.x + int(screen_width * self.ratio_size)\n y = screen_rect.y + self.clients[self.master_length:].index(client) * h\n if client.has_focus:\n bc = self.border_focus\n else:\n bc = self.border_normal\n client.place(\n x,\n y,\n w - border_width * 2,\n h - border_width * 2,\n border_width,\n bc,\n margin=self.margin,\n )\n client.unhide()\n else:\n client.hide()\n\n def info(self):\n d = _SimpleLayoutBase.info(self)\n d.update(dict(\n master=[c.name for c in self.master_windows],\n slave=[c.name for c in self.slave_windows],\n ))\n return d\n\n def cmd_shuffle_down(self):\n self.down()\n\n def cmd_shuffle_up(self):\n self.up()\n\n def cmd_reset(self):\n self.ratio_size = self._initial_ratio\n self.group.layout_all()\n\n cmd_normalize = cmd_reset\n\n cmd_shuffle_left = cmd_shuffle_up\n cmd_shuffle_right = cmd_shuffle_down\n\n cmd_previous = _SimpleLayoutBase.previous\n cmd_next = _SimpleLayoutBase.next\n cmd_up = cmd_previous\n cmd_down = cmd_next\n cmd_left = cmd_previous\n cmd_right = cmd_next\n\n def cmd_decrease_ratio(self):\n self.ratio_size -= self.ratio_increment\n self.group.layout_all()\n\n def cmd_increase_ratio(self):\n self.ratio_size += self.ratio_increment\n self.group.layout_all()\n\n def cmd_decrease_nmaster(self):\n self.master_length -= 1\n if self.master_length <= 0:\n self.master_length = 1\n self.group.layout_all()\n\n def cmd_increase_nmaster(self):\n 
self.master_length += 1\n self.group.layout_all()\n", "path": "libqtile/layout/tile.py"}]} | 2,602 | 708 |
gh_patches_debug_22883 | rasdani/github-patches | git_diff | getsentry__sentry-3447 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Turn the option system.logging-format into an enum.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/logging/__init__.py`
Content:
```
1 """
2 sentry.logging
3 ~~~~~~~~~~~~~~
4 :copyright: (c) 2010-2016 by the Sentry Team, see AUTHORS for more details.
5 :license: BSD, see LICENSE for more details.
6 """
7
8 from __future__ import absolute_import
9
```
Path: `src/sentry/options/defaults.py`
Content:
```
1 """
2 sentry.options.defaults
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 :copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.
6 :license: BSD, see LICENSE for more details.
7 """
8 from __future__ import absolute_import, print_function
9
10 from sentry.options import (
11 FLAG_IMMUTABLE, FLAG_NOSTORE, FLAG_PRIORITIZE_DISK, FLAG_REQUIRED, FLAG_ALLOW_EMPTY,
12 register,
13 )
14 from sentry.utils.types import Dict, String
15
16 # Cache
17 # register('cache.backend', flags=FLAG_NOSTORE)
18 # register('cache.options', type=Dict, flags=FLAG_NOSTORE)
19
20 # System
21 register('system.admin-email', flags=FLAG_REQUIRED)
22 register('system.databases', type=Dict, flags=FLAG_NOSTORE)
23 # register('system.debug', default=False, flags=FLAG_NOSTORE)
24 register('system.rate-limit', default=0, flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
25 register('system.secret-key', flags=FLAG_NOSTORE)
26 # Absolute URL to the sentry root directory. Should not include a trailing slash.
27 register('system.url-prefix', ttl=60, grace=3600, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)
28 register('system.root-api-key', flags=FLAG_PRIORITIZE_DISK)
29 register('system.logging-format', default='human', flags=FLAG_PRIORITIZE_DISK)
30
31 # Redis
32 register(
33 'redis.clusters',
34 type=Dict,
35 default={
36 'default': {
37 'hosts': {
38 0: {
39 'host': '127.0.0.1',
40 'port': 6379,
41 }
42 },
43 },
44 },
45 flags=FLAG_NOSTORE | FLAG_IMMUTABLE
46 )
47 register('redis.options', type=Dict, flags=FLAG_NOSTORE)
48
49 # symbolizer specifics
50 register('dsym.llvm-symbolizer-path', type=String)
51 register('dsym.cache-path', type=String, default='/tmp/sentry-dsym-cache')
52
53 # Mail
54 register('mail.backend', default='smtp', flags=FLAG_NOSTORE)
55 register('mail.host', default='localhost', flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)
56 register('mail.port', default=25, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)
57 register('mail.username', flags=FLAG_REQUIRED | FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
58 register('mail.password', flags=FLAG_REQUIRED | FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
59 register('mail.use-tls', default=False, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)
60 register('mail.subject-prefix', default='[Sentry] ', flags=FLAG_PRIORITIZE_DISK)
61 register('mail.from', default='root@localhost', flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)
62 register('mail.list-namespace', type=String, default='localhost', flags=FLAG_NOSTORE)
63 register('mail.enable-replies', default=False, flags=FLAG_PRIORITIZE_DISK)
64 register('mail.reply-hostname', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
65 register('mail.mailgun-api-key', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
66
67 # SMS
68 register('sms.twilio-account', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
69 register('sms.twilio-token', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
70 register('sms.twilio-number', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/logging/__init__.py b/src/sentry/logging/__init__.py
--- a/src/sentry/logging/__init__.py
+++ b/src/sentry/logging/__init__.py
@@ -6,3 +6,8 @@
"""
from __future__ import absolute_import
+
+
+class LoggingFormat(object):
+ HUMAN = 'human'
+ MACHINE = 'machine'
diff --git a/src/sentry/options/defaults.py b/src/sentry/options/defaults.py
--- a/src/sentry/options/defaults.py
+++ b/src/sentry/options/defaults.py
@@ -7,6 +7,7 @@
"""
from __future__ import absolute_import, print_function
+from sentry.logging import LoggingFormat
from sentry.options import (
FLAG_IMMUTABLE, FLAG_NOSTORE, FLAG_PRIORITIZE_DISK, FLAG_REQUIRED, FLAG_ALLOW_EMPTY,
register,
@@ -26,7 +27,7 @@
# Absolute URL to the sentry root directory. Should not include a trailing slash.
register('system.url-prefix', ttl=60, grace=3600, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)
register('system.root-api-key', flags=FLAG_PRIORITIZE_DISK)
-register('system.logging-format', default='human', flags=FLAG_PRIORITIZE_DISK)
+register('system.logging-format', default=LoggingFormat.HUMAN, flags=FLAG_PRIORITIZE_DISK)
# Redis
register(
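
As a quick illustration of what this patch buys call sites, raw string comparisons can be replaced with references to the new constants. The snippet below is a self-contained sketch, not Sentry code: the `LoggingFormat` class is re-declared locally so the example runs standalone, and `pick_formatter` plus its return strings are invented for the example.

```python
# Sketch only: mirrors the LoggingFormat class added by the patch so the
# example runs without importing Sentry.
class LoggingFormat(object):
    HUMAN = 'human'
    MACHINE = 'machine'


def pick_formatter(configured_value):
    # In Sentry this value would come from the 'system.logging-format' option;
    # here it is just a plain argument.
    if configured_value == LoggingFormat.MACHINE:
        return 'structured (machine-readable) output'
    return 'human-readable output'


print(pick_formatter(LoggingFormat.HUMAN))
print(pick_formatter('machine'))  # raw strings still compare equal to the constants
```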
| {"golden_diff": "diff --git a/src/sentry/logging/__init__.py b/src/sentry/logging/__init__.py\n--- a/src/sentry/logging/__init__.py\n+++ b/src/sentry/logging/__init__.py\n@@ -6,3 +6,8 @@\n \"\"\"\n \n from __future__ import absolute_import\n+\n+\n+class LoggingFormat(object):\n+ HUMAN = 'human'\n+ MACHINE = 'machine'\ndiff --git a/src/sentry/options/defaults.py b/src/sentry/options/defaults.py\n--- a/src/sentry/options/defaults.py\n+++ b/src/sentry/options/defaults.py\n@@ -7,6 +7,7 @@\n \"\"\"\n from __future__ import absolute_import, print_function\n \n+from sentry.logging import LoggingFormat\n from sentry.options import (\n FLAG_IMMUTABLE, FLAG_NOSTORE, FLAG_PRIORITIZE_DISK, FLAG_REQUIRED, FLAG_ALLOW_EMPTY,\n register,\n@@ -26,7 +27,7 @@\n # Absolute URL to the sentry root directory. Should not include a trailing slash.\n register('system.url-prefix', ttl=60, grace=3600, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\n register('system.root-api-key', flags=FLAG_PRIORITIZE_DISK)\n-register('system.logging-format', default='human', flags=FLAG_PRIORITIZE_DISK)\n+register('system.logging-format', default=LoggingFormat.HUMAN, flags=FLAG_PRIORITIZE_DISK)\n \n # Redis\n register(\n", "issue": "Turn the option system.logging-format into an enum.\n\n", "before_files": [{"content": "\"\"\"\nsentry.logging\n~~~~~~~~~~~~~~\n:copyright: (c) 2010-2016 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom __future__ import absolute_import\n", "path": "src/sentry/logging/__init__.py"}, {"content": "\"\"\"\nsentry.options.defaults\n~~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import, print_function\n\nfrom sentry.options import (\n FLAG_IMMUTABLE, FLAG_NOSTORE, FLAG_PRIORITIZE_DISK, FLAG_REQUIRED, FLAG_ALLOW_EMPTY,\n register,\n)\nfrom sentry.utils.types import Dict, String\n\n# Cache\n# register('cache.backend', flags=FLAG_NOSTORE)\n# register('cache.options', type=Dict, flags=FLAG_NOSTORE)\n\n# System\nregister('system.admin-email', flags=FLAG_REQUIRED)\nregister('system.databases', type=Dict, flags=FLAG_NOSTORE)\n# register('system.debug', default=False, flags=FLAG_NOSTORE)\nregister('system.rate-limit', default=0, flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('system.secret-key', flags=FLAG_NOSTORE)\n# Absolute URL to the sentry root directory. 
Should not include a trailing slash.\nregister('system.url-prefix', ttl=60, grace=3600, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('system.root-api-key', flags=FLAG_PRIORITIZE_DISK)\nregister('system.logging-format', default='human', flags=FLAG_PRIORITIZE_DISK)\n\n# Redis\nregister(\n 'redis.clusters',\n type=Dict,\n default={\n 'default': {\n 'hosts': {\n 0: {\n 'host': '127.0.0.1',\n 'port': 6379,\n }\n },\n },\n },\n flags=FLAG_NOSTORE | FLAG_IMMUTABLE\n)\nregister('redis.options', type=Dict, flags=FLAG_NOSTORE)\n\n# symbolizer specifics\nregister('dsym.llvm-symbolizer-path', type=String)\nregister('dsym.cache-path', type=String, default='/tmp/sentry-dsym-cache')\n\n# Mail\nregister('mail.backend', default='smtp', flags=FLAG_NOSTORE)\nregister('mail.host', default='localhost', flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.port', default=25, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.username', flags=FLAG_REQUIRED | FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('mail.password', flags=FLAG_REQUIRED | FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('mail.use-tls', default=False, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.subject-prefix', default='[Sentry] ', flags=FLAG_PRIORITIZE_DISK)\nregister('mail.from', default='root@localhost', flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.list-namespace', type=String, default='localhost', flags=FLAG_NOSTORE)\nregister('mail.enable-replies', default=False, flags=FLAG_PRIORITIZE_DISK)\nregister('mail.reply-hostname', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('mail.mailgun-api-key', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\n\n# SMS\nregister('sms.twilio-account', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('sms.twilio-token', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('sms.twilio-number', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\n", "path": "src/sentry/options/defaults.py"}], "after_files": [{"content": "\"\"\"\nsentry.logging\n~~~~~~~~~~~~~~\n:copyright: (c) 2010-2016 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom __future__ import absolute_import\n\n\nclass LoggingFormat(object):\n HUMAN = 'human'\n MACHINE = 'machine'\n", "path": "src/sentry/logging/__init__.py"}, {"content": "\"\"\"\nsentry.options.defaults\n~~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import, print_function\n\nfrom sentry.logging import LoggingFormat\nfrom sentry.options import (\n FLAG_IMMUTABLE, FLAG_NOSTORE, FLAG_PRIORITIZE_DISK, FLAG_REQUIRED, FLAG_ALLOW_EMPTY,\n register,\n)\nfrom sentry.utils.types import Dict, String\n\n# Cache\n# register('cache.backend', flags=FLAG_NOSTORE)\n# register('cache.options', type=Dict, flags=FLAG_NOSTORE)\n\n# System\nregister('system.admin-email', flags=FLAG_REQUIRED)\nregister('system.databases', type=Dict, flags=FLAG_NOSTORE)\n# register('system.debug', default=False, flags=FLAG_NOSTORE)\nregister('system.rate-limit', default=0, flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('system.secret-key', flags=FLAG_NOSTORE)\n# Absolute URL to the sentry root directory. 
Should not include a trailing slash.\nregister('system.url-prefix', ttl=60, grace=3600, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('system.root-api-key', flags=FLAG_PRIORITIZE_DISK)\nregister('system.logging-format', default=LoggingFormat.HUMAN, flags=FLAG_PRIORITIZE_DISK)\n\n# Redis\nregister(\n 'redis.clusters',\n type=Dict,\n default={\n 'default': {\n 'hosts': {\n 0: {\n 'host': '127.0.0.1',\n 'port': 6379,\n }\n },\n },\n },\n flags=FLAG_NOSTORE | FLAG_IMMUTABLE\n)\nregister('redis.options', type=Dict, flags=FLAG_NOSTORE)\n\n# symbolizer specifics\nregister('dsym.llvm-symbolizer-path', type=String)\nregister('dsym.cache-path', type=String, default='/tmp/sentry-dsym-cache')\n\n# Mail\nregister('mail.backend', default='smtp', flags=FLAG_NOSTORE)\nregister('mail.host', default='localhost', flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.port', default=25, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.username', flags=FLAG_REQUIRED | FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('mail.password', flags=FLAG_REQUIRED | FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('mail.use-tls', default=False, flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.subject-prefix', default='[Sentry] ', flags=FLAG_PRIORITIZE_DISK)\nregister('mail.from', default='root@localhost', flags=FLAG_REQUIRED | FLAG_PRIORITIZE_DISK)\nregister('mail.list-namespace', type=String, default='localhost', flags=FLAG_NOSTORE)\nregister('mail.enable-replies', default=False, flags=FLAG_PRIORITIZE_DISK)\nregister('mail.reply-hostname', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('mail.mailgun-api-key', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\n\n# SMS\nregister('sms.twilio-account', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('sms.twilio-token', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\nregister('sms.twilio-number', default='', flags=FLAG_ALLOW_EMPTY | FLAG_PRIORITIZE_DISK)\n", "path": "src/sentry/options/defaults.py"}]} | 1,245 | 310 |
gh_patches_debug_46 | rasdani/github-patches | git_diff | archlinux__archinstall-1300 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Archinstall discover shop non-functional.
Hello,
I have installed Arch with archinstall twice now, selecting the desktop option and then KDE, but I noticed that by default the "Discover" shop does not function; I have to install the packagekit-qt5 package and then it works. Just wanted to let you know.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `profiles/kde.py`
Content:
```
1 # A desktop environment using "KDE".
2
3 import archinstall
4
5 is_top_level_profile = False
6
7 __packages__ = [
8 "plasma-meta",
9 "konsole",
10 "kwrite",
11 "dolphin",
12 "ark",
13 "sddm",
14 "plasma-wayland-session",
15 "egl-wayland",
16 ]
17
18
19 # TODO: Remove hard dependency of bash (due to .bash_profile)
20
21
22 def _prep_function(*args, **kwargs):
23 """
24 Magic function called by the importing installer
25 before continuing any further. It also avoids executing any
26 other code in this stage. So it's a safe way to ask the user
27 for more input before any other installer steps start.
28 """
29
30 # KDE requires a functioning Xorg installation.
31 profile = archinstall.Profile(None, 'xorg')
32 with profile.load_instructions(namespace='xorg.py') as imported:
33 if hasattr(imported, '_prep_function'):
34 return imported._prep_function()
35 else:
36 print('Deprecated (??): xorg profile has no _prep_function() anymore')
37
38
39 """
40 def _post_install(*args, **kwargs):
41 if "nvidia" in _gfx_driver_packages:
42 print("Plasma Wayland has known compatibility issues with the proprietary Nvidia driver")
43 print("After booting, you can choose between Wayland and Xorg using the drop-down menu")
44 return True
45 """
46
47 # Ensures that this code only gets executed if executed
48 # through importlib.util.spec_from_file_location("kde", "/somewhere/kde.py")
49 # or through conventional import kde
50 if __name__ == 'kde':
51 # Install dependency profiles
52 archinstall.storage['installation_session'].install_profile('xorg')
53
54 # Install the KDE packages
55 archinstall.storage['installation_session'].add_additional_packages(__packages__)
56
57 # Enable autostart of KDE for all users
58 archinstall.storage['installation_session'].enable_service('sddm')
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/profiles/kde.py b/profiles/kde.py
--- a/profiles/kde.py
+++ b/profiles/kde.py
@@ -13,6 +13,7 @@
"sddm",
"plasma-wayland-session",
"egl-wayland",
+ "packagekit-qt5",
]
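
A quick way to sanity-check the change is to confirm that the KDE profile now lists the PackageKit backend that Discover depends on. The snippet below is an illustrative check only, not project code, and assumes it is run from the root of an archinstall checkout.

```python
# Illustrative check: verify the KDE profile ships the PackageKit Qt backend
# that Discover needs.
from pathlib import Path

profile_text = Path("profiles/kde.py").read_text()
assert '"packagekit-qt5"' in profile_text, "Discover backend missing from KDE profile"
print("packagekit-qt5 is listed in the KDE profile packages")
```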
| {"golden_diff": "diff --git a/profiles/kde.py b/profiles/kde.py\n--- a/profiles/kde.py\n+++ b/profiles/kde.py\n@@ -13,6 +13,7 @@\n \t\"sddm\",\n \t\"plasma-wayland-session\",\n \t\"egl-wayland\",\n+\t\"packagekit-qt5\",\n ]\n", "issue": "Archinstall discover shop non-functional.\nHello,\r\n\r\nI have installed Arch with archinstall twice now, selected the desktop option then KDE but I noticed that by default the \"Discover\" shop does not want to function I have to download the packagekit-qt5 package then it functions. Just wanted to let you know.\r\n\r\n\nArchinstall discover shop non-functional.\nHello,\r\n\r\nI have installed Arch with archinstall twice now, selected the desktop option then KDE but I noticed that by default the \"Discover\" shop does not want to function I have to download the packagekit-qt5 package then it functions. Just wanted to let you know.\r\n\r\n\n", "before_files": [{"content": "# A desktop environment using \"KDE\".\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"plasma-meta\",\n\t\"konsole\",\n\t\"kwrite\",\n\t\"dolphin\",\n\t\"ark\",\n\t\"sddm\",\n\t\"plasma-wayland-session\",\n\t\"egl-wayland\",\n]\n\n\n# TODO: Remove hard dependency of bash (due to .bash_profile)\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# KDE requires a functioning Xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n\"\"\"\ndef _post_install(*args, **kwargs):\n\tif \"nvidia\" in _gfx_driver_packages:\n\t\tprint(\"Plasma Wayland has known compatibility issues with the proprietary Nvidia driver\")\n\tprint(\"After booting, you can choose between Wayland and Xorg using the drop-down menu\")\n\treturn True\n\"\"\"\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"kde\", \"/somewhere/kde.py\")\n# or through conventional import kde\nif __name__ == 'kde':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the KDE packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\t# Enable autostart of KDE for all users\n\tarchinstall.storage['installation_session'].enable_service('sddm')\n", "path": "profiles/kde.py"}], "after_files": [{"content": "# A desktop environment using \"KDE\".\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"plasma-meta\",\n\t\"konsole\",\n\t\"kwrite\",\n\t\"dolphin\",\n\t\"ark\",\n\t\"sddm\",\n\t\"plasma-wayland-session\",\n\t\"egl-wayland\",\n\t\"packagekit-qt5\",\n]\n\n\n# TODO: Remove hard dependency of bash (due to .bash_profile)\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# KDE requires a functioning Xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n\"\"\"\ndef _post_install(*args, **kwargs):\n\tif \"nvidia\" in _gfx_driver_packages:\n\t\tprint(\"Plasma Wayland has known compatibility issues with the proprietary Nvidia driver\")\n\tprint(\"After booting, you can choose between Wayland and Xorg using the drop-down menu\")\n\treturn True\n\"\"\"\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"kde\", \"/somewhere/kde.py\")\n# or through conventional import kde\nif __name__ == 'kde':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the KDE packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\t# Enable autostart of KDE for all users\n\tarchinstall.storage['installation_session'].enable_service('sddm')\n", "path": "profiles/kde.py"}]} | 935 | 76 |
gh_patches_debug_1815 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-959 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix for checkbox accessibility no longer working
### Current Behavior
Checkboxes in the django admin superuser view are no longer generated with an associated label.
### Expected Behavior
Expect to see accessible checkboxes in the django admin, and no missing columns in either the superuser or staff views.
### Steps to Reproduce
1. Log in as superuser
2. Go to list view on a model
3. Run ANDI or inspect checkboxes
### Environment
_No response_
### Additional Context
Traced this to the fix for missing columns in the staff view. The check `{% if results.0.form %}` did not work and failed silently. Have a fix for this.
Will prioritize implementation and deployment to staging since we have some accessibility testing in progress.
### Issue Links
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/templatetags/custom_filters.py`
Content:
```
1 from django import template
2 import re
3
4 register = template.Library()
5
6
7 @register.filter(name="extract_value")
8 def extract_value(html_input):
9 match = re.search(r'value="([^"]*)"', html_input)
10 if match:
11 return match.group(1)
12 return ""
13
14
15 @register.filter
16 def extract_a_text(value):
17 # Use regex to extract the text within the <a> tag
18 pattern = r"<a\b[^>]*>(.*?)</a>"
19 match = re.search(pattern, value)
20 if match:
21 extracted_text = match.group(1)
22 else:
23 extracted_text = ""
24
25 return extracted_text
26
27
28 @register.filter
29 def find_index(haystack, needle):
30 try:
31 return haystack.index(needle)
32 except ValueError:
33 return -1
34
35
36 @register.filter
37 def slice_after(value, substring):
38 index = value.find(substring)
39 if index != -1:
40 result = value[index + len(substring) :]
41 return result
42 return value
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/registrar/templatetags/custom_filters.py b/src/registrar/templatetags/custom_filters.py
--- a/src/registrar/templatetags/custom_filters.py
+++ b/src/registrar/templatetags/custom_filters.py
@@ -40,3 +40,11 @@
result = value[index + len(substring) :]
return result
return value
+
+
[email protected]
+def contains_checkbox(html_list):
+ for html_string in html_list:
+ if re.search(r'<input[^>]*type="checkbox"', html_string):
+ return True
+ return False
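
The patch adds a `contains_checkbox` filter; the template change that consumes it is not shown in this record. A small illustrative exercise of the filter itself (assuming Django is installed, since the module builds a `template.Library()` at import time; the sample table cells are hand-written for the example) might look like:

```python
# Illustrative only: exercise the new filter against hand-written table cells.
from registrar.templatetags.custom_filters import contains_checkbox

rows = [
    '<td><input type="checkbox" name="_selected_action" value="1"></td>',
    "<td>example.gov</td>",
]
print(contains_checkbox(rows))       # True: a checkbox input is present
print(contains_checkbox(rows[1:]))   # False: plain cells only
```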
| {"golden_diff": "diff --git a/src/registrar/templatetags/custom_filters.py b/src/registrar/templatetags/custom_filters.py\n--- a/src/registrar/templatetags/custom_filters.py\n+++ b/src/registrar/templatetags/custom_filters.py\n@@ -40,3 +40,11 @@\n result = value[index + len(substring) :]\n return result\n return value\n+\n+\[email protected]\n+def contains_checkbox(html_list):\n+ for html_string in html_list:\n+ if re.search(r'<input[^>]*type=\"checkbox\"', html_string):\n+ return True\n+ return False\n", "issue": "Fix for checkbox accessibility no longer working\n### Current Behavior\n\nCheckboxes in django admin superuser no longer generated with an associated label.\n\n### Expected Behavior\n\nExpect to see accessible checkboxes in django admin, no missing columns in either superuser or staff views.\n\n### Steps to Reproduce\n\n1. Log in as superuser\r\n2. Go to list view on a model\r\n3. Run ANDI or inspect checkboxes\r\n\n\n### Environment\n\n_No response_\n\n### Additional Context\n\nTraced this to the fix for missing columns in staff view. The check {% if results.0.form %} did not work and failed silently. Have a fix for this.\r\n\r\nWill prioritize implementation and deployment to staging since we have some accessibility testing in progress.\n\n### Issue Links\n\n_No response_\n", "before_files": [{"content": "from django import template\nimport re\n\nregister = template.Library()\n\n\[email protected](name=\"extract_value\")\ndef extract_value(html_input):\n match = re.search(r'value=\"([^\"]*)\"', html_input)\n if match:\n return match.group(1)\n return \"\"\n\n\[email protected]\ndef extract_a_text(value):\n # Use regex to extract the text within the <a> tag\n pattern = r\"<a\\b[^>]*>(.*?)</a>\"\n match = re.search(pattern, value)\n if match:\n extracted_text = match.group(1)\n else:\n extracted_text = \"\"\n\n return extracted_text\n\n\[email protected]\ndef find_index(haystack, needle):\n try:\n return haystack.index(needle)\n except ValueError:\n return -1\n\n\[email protected]\ndef slice_after(value, substring):\n index = value.find(substring)\n if index != -1:\n result = value[index + len(substring) :]\n return result\n return value\n", "path": "src/registrar/templatetags/custom_filters.py"}], "after_files": [{"content": "from django import template\nimport re\n\nregister = template.Library()\n\n\[email protected](name=\"extract_value\")\ndef extract_value(html_input):\n match = re.search(r'value=\"([^\"]*)\"', html_input)\n if match:\n return match.group(1)\n return \"\"\n\n\[email protected]\ndef extract_a_text(value):\n # Use regex to extract the text within the <a> tag\n pattern = r\"<a\\b[^>]*>(.*?)</a>\"\n match = re.search(pattern, value)\n if match:\n extracted_text = match.group(1)\n else:\n extracted_text = \"\"\n\n return extracted_text\n\n\[email protected]\ndef find_index(haystack, needle):\n try:\n return haystack.index(needle)\n except ValueError:\n return -1\n\n\[email protected]\ndef slice_after(value, substring):\n index = value.find(substring)\n if index != -1:\n result = value[index + len(substring) :]\n return result\n return value\n\n\[email protected]\ndef contains_checkbox(html_list):\n for html_string in html_list:\n if re.search(r'<input[^>]*type=\"checkbox\"', html_string):\n return True\n return False\n", "path": "src/registrar/templatetags/custom_filters.py"}]} | 715 | 139 |
gh_patches_debug_2449 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-10168 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PubSub: declaratively drop Python 3.4 support
The README and the language classifiers in `setup.py` both only claim support for Python 3.5+ (and 2.7), but not Python 3.4. However, the `python_requires` in `setup.py` does not reflect that, and does not prevent installing the library in Python 3.4.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pubsub/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-pubsub"
24 description = "Google Cloud Pub/Sub API client library"
25 version = "1.1.0"
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "google-api-core[grpc] >= 1.14.0, < 2.0.0dev",
33 "grpc-google-iam-v1 >= 0.12.3, < 0.13dev",
34 'enum34; python_version < "3.4"',
35 ]
36 extras = {}
37
38
39 # Setup boilerplate below this line.
40
41 package_root = os.path.abspath(os.path.dirname(__file__))
42
43 readme_filename = os.path.join(package_root, "README.rst")
44 with io.open(readme_filename, encoding="utf-8") as readme_file:
45 readme = readme_file.read()
46
47 # Only include packages under the 'google' namespace. Do not include tests,
48 # benchmarks, etc.
49 packages = [
50 package for package in setuptools.find_packages() if package.startswith("google")
51 ]
52
53 # Determine which namespaces are needed.
54 namespaces = ["google"]
55 if "google.cloud" in packages:
56 namespaces.append("google.cloud")
57
58
59 setuptools.setup(
60 name=name,
61 version=version,
62 description=description,
63 long_description=readme,
64 author="Google LLC",
65 author_email="[email protected]",
66 license="Apache 2.0",
67 url="https://github.com/GoogleCloudPlatform/google-cloud-python",
68 classifiers=[
69 release_status,
70 "Intended Audience :: Developers",
71 "License :: OSI Approved :: Apache Software License",
72 "Programming Language :: Python",
73 "Programming Language :: Python :: 2",
74 "Programming Language :: Python :: 2.7",
75 "Programming Language :: Python :: 3",
76 "Programming Language :: Python :: 3.5",
77 "Programming Language :: Python :: 3.6",
78 "Programming Language :: Python :: 3.7",
79 "Operating System :: OS Independent",
80 "Topic :: Internet",
81 ],
82 platforms="Posix; MacOS X; Windows",
83 packages=packages,
84 namespace_packages=namespaces,
85 install_requires=dependencies,
86 extras_require=extras,
87 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
88 include_package_data=True,
89 zip_safe=False,
90 )
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pubsub/setup.py b/pubsub/setup.py
--- a/pubsub/setup.py
+++ b/pubsub/setup.py
@@ -84,7 +84,7 @@
namespace_packages=namespaces,
install_requires=dependencies,
extras_require=extras,
- python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
+ python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*",
include_package_data=True,
zip_safe=False,
)
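
The effect of the tightened specifier can be checked directly with the `packaging` library, which implements the same PEP 440 rules pip uses. This is an illustrative check, not part of the client library.

```python
# Illustrative check of the new requirement string using PEP 440 semantics.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*")
print("3.4.10" in spec)  # False: Python 3.4 interpreters are now rejected
print("3.5.3" in spec)   # True
print("2.7.18" in spec)  # True
```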
| {"golden_diff": "diff --git a/pubsub/setup.py b/pubsub/setup.py\n--- a/pubsub/setup.py\n+++ b/pubsub/setup.py\n@@ -84,7 +84,7 @@\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n- python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n+ python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*\",\n include_package_data=True,\n zip_safe=False,\n )\n", "issue": "PubSub: declaratively drop Python 3.4 support\nThe README and the language classifiers in `setup.py` both only claim support for Python 3.5+ (and 2.7), but not Python 3.4. However, the `python_requires` in `setup.py` does not reflect that, and does not prevent installing the library in Python 3.4.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-pubsub\"\ndescription = \"Google Cloud Pub/Sub API client library\"\nversion = \"1.1.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"google-api-core[grpc] >= 1.14.0, < 2.0.0dev\",\n \"grpc-google-iam-v1 >= 0.12.3, < 0.13dev\",\n 'enum34; python_version < \"3.4\"',\n]\nextras = {}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/GoogleCloudPlatform/google-cloud-python\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "pubsub/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-pubsub\"\ndescription = \"Google Cloud Pub/Sub API client library\"\nversion = \"1.1.0\"\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"google-api-core[grpc] >= 1.14.0, < 2.0.0dev\",\n \"grpc-google-iam-v1 >= 0.12.3, < 0.13dev\",\n 'enum34; python_version < \"3.4\"',\n]\nextras = {}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages() if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/GoogleCloudPlatform/google-cloud-python\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "pubsub/setup.py"}]} | 1,205 | 138 |
gh_patches_debug_2406 | rasdani/github-patches | git_diff | buildbot__buildbot-3490 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnboundLocalError in mq/base.py on master shutdown
Hello,
We're using buildbot in multi-master mode and got this stacktrace on one of the masters when shutting it down:
```
2017-07-17 12:33:29+0000 [-] Waiting for 1 build(s) to finish
2017-07-17 12:33:29+0000 [-] Builder <Builder 'u'sql-monitor-bitbucket_scality_ring-monitor_ring_frequent-prod-frontend-0'' at 140555339856784> has 1 builds running
2017-07-17 12:33:29+0000 [-] Not shutting down, there are 1 builds running
2017-07-17 12:33:29+0000 [-] Trying shutdown sequence again
2017-07-17 12:33:30+0000 [-] <Build sql-monitor-bitbucket_scality_ring-monitor_ring_frequent-prod-frontend-0 number:32108L results:exception>: stopping build: Master Shutdown 5
2017-07-17 12:33:30+0000 [-] Unhandled error in Deferred:
2017-07-17 12:33:30+0000 [-] Unhandled Error
Traceback (most recent call last):
File "/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
result = g.send(result)
File "/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/buildbot/process/botmaster.py", line 105, in cleanShutdown
l.append(build.waitUntilFinished())
File "/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/buildbot/process/build.py", line 687, in waitUntilFinished
lambda: self.finished)
File "/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1445, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
— <exception caught here> —
File "/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
result = g.send(result)
File "/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/buildbot/mq/base.py", line 40, in waitUntilEvent
defer.returnValue(res)
exceptions.UnboundLocalError: local variable 'res' referenced before assignment
```
Looking at the code at the end of `waitUntilEvent()`:
```
if not check:
res = yield d
yield buildCompleteConsumer.stopConsuming
defer.returnValue(res)
```
If the check returned false, we try to return a value (`res`) that was never defined.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/mq/base.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import print_function
18
19 from twisted.internet import defer
20 from twisted.python import failure
21 from twisted.python import log
22
23 from buildbot.util import service
24
25
26 class MQBase(service.AsyncService):
27 name = 'mq-implementation'
28
29 @defer.inlineCallbacks
30 def waitUntilEvent(self, filter, check_callback):
31 d = defer.Deferred()
32 buildCompleteConsumer = yield self.startConsuming(
33 lambda key, value: d.callback((key, value)),
34 filter)
35 check = yield check_callback()
36 # we only wait if the check callback return true
37 if not check:
38 res = yield d
39 yield buildCompleteConsumer.stopConsuming
40 defer.returnValue(res)
41
42
43 class QueueRef(object):
44
45 __slots__ = ['callback']
46
47 def __init__(self, callback):
48 self.callback = callback
49
50 def invoke(self, routing_key, data):
51 if not self.callback:
52 return
53
54 try:
55 x = self.callback(routing_key, data)
56 except Exception:
57 log.err(failure.Failure(), 'while invoking %r' % (self.callback,))
58 return
59 if isinstance(x, defer.Deferred):
60 x.addErrback(log.err, 'while invoking %r' % (self.callback,))
61
62 def stopConsuming(self):
63 # subclasses should set self.callback to None in this method
64 raise NotImplementedError
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/mq/base.py b/master/buildbot/mq/base.py
--- a/master/buildbot/mq/base.py
+++ b/master/buildbot/mq/base.py
@@ -36,7 +36,9 @@
# we only wait if the check callback return true
if not check:
res = yield d
- yield buildCompleteConsumer.stopConsuming
+ else:
+ res = None
+ yield buildCompleteConsumer.stopConsuming()
defer.returnValue(res)
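
The failure mode is ordinary Python scoping rather than anything Twisted-specific: a name bound only inside one branch cannot be used after the conditional. Below is a minimal standalone sketch of the before and after behaviour, using plain functions instead of the real `inlineCallbacks` coroutine; the function names and the "event payload" string are invented for the example.

```python
# Sketch only: the same control-flow bug and fix without Buildbot or Twisted.
def broken(check):
    if not check:
        res = "event payload"
    return res  # UnboundLocalError when check is truthy


def fixed(check):
    if not check:
        res = "event payload"
    else:
        res = None  # mirrors the patch: 'res' is always bound
    return res


print(fixed(check=False))  # event payload
print(fixed(check=True))   # None
```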
| {"golden_diff": "diff --git a/master/buildbot/mq/base.py b/master/buildbot/mq/base.py\n--- a/master/buildbot/mq/base.py\n+++ b/master/buildbot/mq/base.py\n@@ -36,7 +36,9 @@\n # we only wait if the check callback return true\n if not check:\n res = yield d\n- yield buildCompleteConsumer.stopConsuming\n+ else:\n+ res = None\n+ yield buildCompleteConsumer.stopConsuming()\n defer.returnValue(res)\n", "issue": "UnboundLocalError in mq/base.py on master shutdown\nHello,\r\n\r\nWe're using buildbot in multi-master mode and got this stacktrace on one of the master when shutting it down:\r\n```\r\n2017-07-17 12:33:29+0000 [-] Waiting for 1 build(s) to finish\r\n2017-07-17 12:33:29+0000 [-] Builder <Builder 'u'sql-monitor-bitbucket_scality_ring-monitor_ring_frequent-prod-frontend-0'' at 140555339856784> has 1 builds running\r\n2017-07-17 12:33:29+0000 [-] Not shutting down, there are 1 builds running\r\n2017-07-17 12:33:29+0000 [-] Trying shutdown sequence again\r\n2017-07-17 12:33:30+0000 [-] <Build sql-monitor-bitbucket_scality_ring-monitor_ring_frequent-prod-frontend-0 number:32108L results:exception>: stopping build: Master Shutdown 5\r\n2017-07-17 12:33:30+0000 [-] Unhandled error in Deferred:\r\n2017-07-17 12:33:30+0000 [-] Unhandled Error\r\nTraceback (most recent call last):\r\nFile \"/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 1299, in _inlineCallbacks\r\nresult = g.send(result)\r\nFile \"/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/buildbot/process/botmaster.py\", line 105, in cleanShutdown\r\nl.append(build.waitUntilFinished())\r\nFile \"/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/buildbot/process/build.py\", line 687, in waitUntilFinished\r\nlambda: self.finished)\r\nFile \"/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 1445, in unwindGenerator\r\nreturn _inlineCallbacks(None, gen, Deferred())\r\n\u2014 <exception caught here> \u2014\r\nFile \"/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 1299, in _inlineCallbacks\r\nresult = g.send(result)\r\nFile \"/root/bitbucket/scality/ring/venv/local/lib/python2.7/site-packages/buildbot/mq/base.py\", line 40, in waitUntilEvent\r\ndefer.returnValue(res)\r\nexceptions.UnboundLocalError: local variable 'res' referenced before assignment\r\n```\r\nLooking at the code at the end of `waitUntilEvent()`:\r\n```\r\n if not check:\r\n res = yield d\r\n yield buildCompleteConsumer.stopConsuming\r\n defer.returnValue(res)\r\n```\r\n\r\nIf the check returned false, we try to return a value (`res`) that was never defined.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nfrom twisted.internet import defer\nfrom twisted.python import failure\nfrom twisted.python import log\n\nfrom buildbot.util import service\n\n\nclass MQBase(service.AsyncService):\n name = 'mq-implementation'\n\n @defer.inlineCallbacks\n def waitUntilEvent(self, filter, check_callback):\n d = defer.Deferred()\n buildCompleteConsumer = yield self.startConsuming(\n lambda key, value: d.callback((key, value)),\n filter)\n check = yield check_callback()\n # we only wait if the check callback return true\n if not check:\n res = yield d\n yield buildCompleteConsumer.stopConsuming\n defer.returnValue(res)\n\n\nclass QueueRef(object):\n\n __slots__ = ['callback']\n\n def __init__(self, callback):\n self.callback = callback\n\n def invoke(self, routing_key, data):\n if not self.callback:\n return\n\n try:\n x = self.callback(routing_key, data)\n except Exception:\n log.err(failure.Failure(), 'while invoking %r' % (self.callback,))\n return\n if isinstance(x, defer.Deferred):\n x.addErrback(log.err, 'while invoking %r' % (self.callback,))\n\n def stopConsuming(self):\n # subclasses should set self.callback to None in this method\n raise NotImplementedError\n", "path": "master/buildbot/mq/base.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nfrom twisted.internet import defer\nfrom twisted.python import failure\nfrom twisted.python import log\n\nfrom buildbot.util import service\n\n\nclass MQBase(service.AsyncService):\n name = 'mq-implementation'\n\n @defer.inlineCallbacks\n def waitUntilEvent(self, filter, check_callback):\n d = defer.Deferred()\n buildCompleteConsumer = yield self.startConsuming(\n lambda key, value: d.callback((key, value)),\n filter)\n check = yield check_callback()\n # we only wait if the check callback return true\n if not check:\n res = yield d\n else:\n res = None\n yield buildCompleteConsumer.stopConsuming()\n defer.returnValue(res)\n\n\nclass QueueRef(object):\n\n __slots__ = ['callback']\n\n def __init__(self, callback):\n self.callback = callback\n\n def invoke(self, routing_key, data):\n if not self.callback:\n return\n\n try:\n x = self.callback(routing_key, data)\n except Exception:\n log.err(failure.Failure(), 'while invoking %r' % (self.callback,))\n return\n if isinstance(x, defer.Deferred):\n x.addErrback(log.err, 'while invoking %r' % (self.callback,))\n\n def stopConsuming(self):\n # subclasses should set self.callback to None in this method\n raise NotImplementedError\n", "path": "master/buildbot/mq/base.py"}]} | 1,566 | 110 |
gh_patches_debug_21185 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1852 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Your books: All books: shelved date is incorrect
**Describe the bug**
I just started using Bookwyrm, and added 4 books as "To Read". On my "All Books" page, the "Shelved" dates for 3 of those books are incorrect. https://bookwyrm.social/user/chorist/books
If I click over to my "To Read" page however, the Shelved dates are all correct (all showing "today").
**Screenshots**
**Instance**
bookwyrm.social
**Desktop (please complete the following information):**
- OS: MacOS
- Browser: Safari
- Version 15.2
--- END ISSUE ---
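
The shelf view shown below annotates the "all books" case with the edition's own `updated_date` rather than a shelving time, which is one plausible source of the mismatch the reporter describes. A hedged sketch of an aggregate-based alternative follows; it is an assumption for illustration, not necessarily the patch this record expects, and it presumes Django plus the bookwyrm models are importable.

```python
# Sketch only: attach the most recent shelving time across a user's shelves
# without introducing duplicate rows in the combined queryset.
from django.db.models import Max


def annotate_latest_shelving(book_queryset):
    """Annotate each edition with the latest related ShelfBook.shelved_date."""
    return book_queryset.annotate(shelved_date=Max("shelfbook__shelved_date"))
```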
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/shelf/shelf.py`
Content:
```
1 """ shelf views """
2 from collections import namedtuple
3
4 from django.db.models import OuterRef, Subquery, F
5 from django.contrib.auth.decorators import login_required
6 from django.core.paginator import Paginator
7 from django.http import HttpResponseBadRequest
8 from django.shortcuts import get_object_or_404, redirect
9 from django.template.response import TemplateResponse
10 from django.utils.decorators import method_decorator
11 from django.utils.translation import gettext_lazy as _
12 from django.views import View
13
14 from bookwyrm import forms, models
15 from bookwyrm.activitypub import ActivitypubResponse
16 from bookwyrm.settings import PAGE_LENGTH
17 from bookwyrm.views.helpers import is_api_request, get_user_from_username
18
19
20 # pylint: disable=no-self-use
21 class Shelf(View):
22 """shelf page"""
23
24 def get(self, request, username, shelf_identifier=None):
25 """display a shelf"""
26 user = get_user_from_username(request.user, username)
27
28 is_self = user == request.user
29
30 if is_self:
31 shelves = user.shelf_set.all()
32 else:
33 shelves = models.Shelf.privacy_filter(request.user).filter(user=user).all()
34
35 # get the shelf and make sure the logged in user should be able to see it
36 if shelf_identifier:
37 shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)
38 shelf.raise_visible_to_user(request.user)
39 books = shelf.books
40 else:
41 # this is a constructed "all books" view, with a fake "shelf" obj
42 FakeShelf = namedtuple(
43 "Shelf", ("identifier", "name", "user", "books", "privacy")
44 )
45 books = (
46 models.Edition.viewer_aware_objects(request.user)
47 .filter(
48 # privacy is ensured because the shelves are already filtered above
49 shelfbook__shelf__in=shelves
50 )
51 .distinct()
52 )
53 shelf = FakeShelf("all", _("All books"), user, books, "public")
54
55 if is_api_request(request) and shelf_identifier:
56 return ActivitypubResponse(shelf.to_activity(**request.GET))
57
58 reviews = models.Review.objects
59 if not is_self:
60 reviews = models.Review.privacy_filter(request.user)
61
62 reviews = reviews.filter(
63 user=user,
64 rating__isnull=False,
65 book__id=OuterRef("id"),
66 deleted=False,
67 ).order_by("-published_date")
68
69 reading = models.ReadThrough.objects
70
71 reading = reading.filter(user=user, book__id=OuterRef("id")).order_by(
72 "start_date"
73 )
74
75 if shelf_identifier:
76 books = books.annotate(shelved_date=F("shelfbook__shelved_date"))
77 else:
78 # sorting by shelved date will cause duplicates in the "all books" view
79 books = books.annotate(shelved_date=F("updated_date"))
80 books = books.annotate(
81 rating=Subquery(reviews.values("rating")[:1]),
82 start_date=Subquery(reading.values("start_date")[:1]),
83 finish_date=Subquery(reading.values("finish_date")[:1]),
84 author=Subquery(
85 models.Book.objects.filter(id=OuterRef("id")).values("authors__name")[
86 :1
87 ]
88 ),
89 ).prefetch_related("authors")
90
91 books = sort_books(books, request.GET.get("sort"))
92
93 paginated = Paginator(
94 books,
95 PAGE_LENGTH,
96 )
97 page = paginated.get_page(request.GET.get("page"))
98 data = {
99 "user": user,
100 "is_self": is_self,
101 "shelves": shelves,
102 "shelf": shelf,
103 "books": page,
104 "edit_form": forms.ShelfForm(instance=shelf if shelf_identifier else None),
105 "create_form": forms.ShelfForm(),
106 "sort": request.GET.get("sort"),
107 "page_range": paginated.get_elided_page_range(
108 page.number, on_each_side=2, on_ends=1
109 ),
110 }
111
112 return TemplateResponse(request, "shelf/shelf.html", data)
113
114 @method_decorator(login_required, name="dispatch")
115 # pylint: disable=unused-argument
116 def post(self, request, username, shelf_identifier):
117 """edit a shelf"""
118 user = get_user_from_username(request.user, username)
119 shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)
120 shelf.raise_not_editable(request.user)
121
122 # you can't change the name of the default shelves
123 if not shelf.editable and request.POST.get("name") != shelf.name:
124 return HttpResponseBadRequest()
125
126 form = forms.ShelfForm(request.POST, instance=shelf)
127 if not form.is_valid():
128 return redirect(shelf.local_path)
129 shelf = form.save()
130 return redirect(shelf.local_path)
131
132
133 def sort_books(books, sort):
134 """Books in shelf sorting"""
135 sort_fields = [
136 "title",
137 "author",
138 "shelved_date",
139 "start_date",
140 "finish_date",
141 "rating",
142 ]
143
144 if sort in sort_fields:
145 books = books.order_by(sort)
146 elif sort and sort[1:] in sort_fields:
147 books = books.order_by(F(sort[1:]).desc(nulls_last=True))
148 else:
149 books = books.order_by("-shelved_date")
150 return books
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/shelf/shelf.py b/bookwyrm/views/shelf/shelf.py
--- a/bookwyrm/views/shelf/shelf.py
+++ b/bookwyrm/views/shelf/shelf.py
@@ -1,7 +1,7 @@
""" shelf views """
from collections import namedtuple
-from django.db.models import OuterRef, Subquery, F
+from django.db.models import OuterRef, Subquery, F, Max
from django.contrib.auth.decorators import login_required
from django.core.paginator import Paginator
from django.http import HttpResponseBadRequest
@@ -72,11 +72,7 @@
"start_date"
)
- if shelf_identifier:
- books = books.annotate(shelved_date=F("shelfbook__shelved_date"))
- else:
- # sorting by shelved date will cause duplicates in the "all books" view
- books = books.annotate(shelved_date=F("updated_date"))
+ books = books.annotate(shelved_date=Max("shelfbook__shelved_date"))
books = books.annotate(
rating=Subquery(reviews.values("rating")[:1]),
start_date=Subquery(reading.values("start_date")[:1]),
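As a quick illustration of the annotation change above (a sketch, not BookWyrm's actual code; the helper name and the `books` queryset argument are assumed from `Shelf.get`): aggregating with `Max` keeps one row per edition and picks the most recent shelved date across all of its shelves, instead of duplicating rows per `ShelfBook` or falling back to `updated_date`.

```python
from django.db.models import Max

def annotate_shelved_date(books):
    # One row per edition; shelved_date is the latest ShelfBook date for
    # that edition, which is what the "All books" page should display.
    return books.annotate(shelved_date=Max("shelfbook__shelved_date"))
```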
| {"golden_diff": "diff --git a/bookwyrm/views/shelf/shelf.py b/bookwyrm/views/shelf/shelf.py\n--- a/bookwyrm/views/shelf/shelf.py\n+++ b/bookwyrm/views/shelf/shelf.py\n@@ -1,7 +1,7 @@\n \"\"\" shelf views \"\"\"\n from collections import namedtuple\n \n-from django.db.models import OuterRef, Subquery, F\n+from django.db.models import OuterRef, Subquery, F, Max\n from django.contrib.auth.decorators import login_required\n from django.core.paginator import Paginator\n from django.http import HttpResponseBadRequest\n@@ -72,11 +72,7 @@\n \"start_date\"\n )\n \n- if shelf_identifier:\n- books = books.annotate(shelved_date=F(\"shelfbook__shelved_date\"))\n- else:\n- # sorting by shelved date will cause duplicates in the \"all books\" view\n- books = books.annotate(shelved_date=F(\"updated_date\"))\n+ books = books.annotate(shelved_date=Max(\"shelfbook__shelved_date\"))\n books = books.annotate(\n rating=Subquery(reviews.values(\"rating\")[:1]),\n start_date=Subquery(reading.values(\"start_date\")[:1]),\n", "issue": "Your books: All books: shelved date is incorrect\n**Describe the bug**\r\n\r\nI just started using Bookwyrm, and added 4 books as \"To Read\". On my \"All Books\" page, the \"Shelved\" dates for 3 of those books are incorrect. https://bookwyrm.social/user/chorist/books\r\n\r\nIf I click over to my \"To Read\" page however, the Shelved dates are all correct (all showing \"today\").\r\n\r\n**Screenshots**\r\n\r\n<img width=\"1181\" alt=\"Screen Shot 2022-01-18 at 4 52 23 PM\" src=\"https://user-images.githubusercontent.com/557851/150031715-652dc082-a45a-4e71-af7f-efc34dfb0de9.png\">\r\n\r\n**Instance**\r\nbookwyrm.social\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: MacOS\r\n - Browser: Safari\r\n - Version 15.2\r\n\r\n\n", "before_files": [{"content": "\"\"\" shelf views \"\"\"\nfrom collections import namedtuple\n\nfrom django.db.models import OuterRef, Subquery, F\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom bookwyrm.views.helpers import is_api_request, get_user_from_username\n\n\n# pylint: disable=no-self-use\nclass Shelf(View):\n \"\"\"shelf page\"\"\"\n\n def get(self, request, username, shelf_identifier=None):\n \"\"\"display a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n\n is_self = user == request.user\n\n if is_self:\n shelves = user.shelf_set.all()\n else:\n shelves = models.Shelf.privacy_filter(request.user).filter(user=user).all()\n\n # get the shelf and make sure the logged in user should be able to see it\n if shelf_identifier:\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_visible_to_user(request.user)\n books = shelf.books\n else:\n # this is a constructed \"all books\" view, with a fake \"shelf\" obj\n FakeShelf = namedtuple(\n \"Shelf\", (\"identifier\", \"name\", \"user\", \"books\", \"privacy\")\n )\n books = (\n models.Edition.viewer_aware_objects(request.user)\n .filter(\n # privacy is ensured because the shelves are already filtered above\n shelfbook__shelf__in=shelves\n )\n 
.distinct()\n )\n shelf = FakeShelf(\"all\", _(\"All books\"), user, books, \"public\")\n\n if is_api_request(request) and shelf_identifier:\n return ActivitypubResponse(shelf.to_activity(**request.GET))\n\n reviews = models.Review.objects\n if not is_self:\n reviews = models.Review.privacy_filter(request.user)\n\n reviews = reviews.filter(\n user=user,\n rating__isnull=False,\n book__id=OuterRef(\"id\"),\n deleted=False,\n ).order_by(\"-published_date\")\n\n reading = models.ReadThrough.objects\n\n reading = reading.filter(user=user, book__id=OuterRef(\"id\")).order_by(\n \"start_date\"\n )\n\n if shelf_identifier:\n books = books.annotate(shelved_date=F(\"shelfbook__shelved_date\"))\n else:\n # sorting by shelved date will cause duplicates in the \"all books\" view\n books = books.annotate(shelved_date=F(\"updated_date\"))\n books = books.annotate(\n rating=Subquery(reviews.values(\"rating\")[:1]),\n start_date=Subquery(reading.values(\"start_date\")[:1]),\n finish_date=Subquery(reading.values(\"finish_date\")[:1]),\n author=Subquery(\n models.Book.objects.filter(id=OuterRef(\"id\")).values(\"authors__name\")[\n :1\n ]\n ),\n ).prefetch_related(\"authors\")\n\n books = sort_books(books, request.GET.get(\"sort\"))\n\n paginated = Paginator(\n books,\n PAGE_LENGTH,\n )\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"user\": user,\n \"is_self\": is_self,\n \"shelves\": shelves,\n \"shelf\": shelf,\n \"books\": page,\n \"edit_form\": forms.ShelfForm(instance=shelf if shelf_identifier else None),\n \"create_form\": forms.ShelfForm(),\n \"sort\": request.GET.get(\"sort\"),\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n\n return TemplateResponse(request, \"shelf/shelf.html\", data)\n\n @method_decorator(login_required, name=\"dispatch\")\n # pylint: disable=unused-argument\n def post(self, request, username, shelf_identifier):\n \"\"\"edit a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_not_editable(request.user)\n\n # you can't change the name of the default shelves\n if not shelf.editable and request.POST.get(\"name\") != shelf.name:\n return HttpResponseBadRequest()\n\n form = forms.ShelfForm(request.POST, instance=shelf)\n if not form.is_valid():\n return redirect(shelf.local_path)\n shelf = form.save()\n return redirect(shelf.local_path)\n\n\ndef sort_books(books, sort):\n \"\"\"Books in shelf sorting\"\"\"\n sort_fields = [\n \"title\",\n \"author\",\n \"shelved_date\",\n \"start_date\",\n \"finish_date\",\n \"rating\",\n ]\n\n if sort in sort_fields:\n books = books.order_by(sort)\n elif sort and sort[1:] in sort_fields:\n books = books.order_by(F(sort[1:]).desc(nulls_last=True))\n else:\n books = books.order_by(\"-shelved_date\")\n return books\n", "path": "bookwyrm/views/shelf/shelf.py"}], "after_files": [{"content": "\"\"\" shelf views \"\"\"\nfrom collections import namedtuple\n\nfrom django.db.models import OuterRef, Subquery, F, Max\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.activitypub 
import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom bookwyrm.views.helpers import is_api_request, get_user_from_username\n\n\n# pylint: disable=no-self-use\nclass Shelf(View):\n \"\"\"shelf page\"\"\"\n\n def get(self, request, username, shelf_identifier=None):\n \"\"\"display a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n\n is_self = user == request.user\n\n if is_self:\n shelves = user.shelf_set.all()\n else:\n shelves = models.Shelf.privacy_filter(request.user).filter(user=user).all()\n\n # get the shelf and make sure the logged in user should be able to see it\n if shelf_identifier:\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_visible_to_user(request.user)\n books = shelf.books\n else:\n # this is a constructed \"all books\" view, with a fake \"shelf\" obj\n FakeShelf = namedtuple(\n \"Shelf\", (\"identifier\", \"name\", \"user\", \"books\", \"privacy\")\n )\n books = (\n models.Edition.viewer_aware_objects(request.user)\n .filter(\n # privacy is ensured because the shelves are already filtered above\n shelfbook__shelf__in=shelves\n )\n .distinct()\n )\n shelf = FakeShelf(\"all\", _(\"All books\"), user, books, \"public\")\n\n if is_api_request(request) and shelf_identifier:\n return ActivitypubResponse(shelf.to_activity(**request.GET))\n\n reviews = models.Review.objects\n if not is_self:\n reviews = models.Review.privacy_filter(request.user)\n\n reviews = reviews.filter(\n user=user,\n rating__isnull=False,\n book__id=OuterRef(\"id\"),\n deleted=False,\n ).order_by(\"-published_date\")\n\n reading = models.ReadThrough.objects\n\n reading = reading.filter(user=user, book__id=OuterRef(\"id\")).order_by(\n \"start_date\"\n )\n\n books = books.annotate(shelved_date=Max(\"shelfbook__shelved_date\"))\n books = books.annotate(\n rating=Subquery(reviews.values(\"rating\")[:1]),\n start_date=Subquery(reading.values(\"start_date\")[:1]),\n finish_date=Subquery(reading.values(\"finish_date\")[:1]),\n author=Subquery(\n models.Book.objects.filter(id=OuterRef(\"id\")).values(\"authors__name\")[\n :1\n ]\n ),\n ).prefetch_related(\"authors\")\n\n books = sort_books(books, request.GET.get(\"sort\"))\n\n paginated = Paginator(\n books,\n PAGE_LENGTH,\n )\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"user\": user,\n \"is_self\": is_self,\n \"shelves\": shelves,\n \"shelf\": shelf,\n \"books\": page,\n \"edit_form\": forms.ShelfForm(instance=shelf if shelf_identifier else None),\n \"create_form\": forms.ShelfForm(),\n \"sort\": request.GET.get(\"sort\"),\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n\n return TemplateResponse(request, \"shelf/shelf.html\", data)\n\n @method_decorator(login_required, name=\"dispatch\")\n # pylint: disable=unused-argument\n def post(self, request, username, shelf_identifier):\n \"\"\"edit a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_not_editable(request.user)\n\n # you can't change the name of the default shelves\n if not shelf.editable and request.POST.get(\"name\") != shelf.name:\n return HttpResponseBadRequest()\n\n form = forms.ShelfForm(request.POST, instance=shelf)\n if not form.is_valid():\n return redirect(shelf.local_path)\n shelf = form.save()\n return redirect(shelf.local_path)\n\n\ndef sort_books(books, sort):\n \"\"\"Books in shelf sorting\"\"\"\n sort_fields = [\n \"title\",\n 
\"author\",\n \"shelved_date\",\n \"start_date\",\n \"finish_date\",\n \"rating\",\n ]\n\n if sort in sort_fields:\n books = books.order_by(sort)\n elif sort and sort[1:] in sort_fields:\n books = books.order_by(F(sort[1:]).desc(nulls_last=True))\n else:\n books = books.order_by(\"-shelved_date\")\n return books\n", "path": "bookwyrm/views/shelf/shelf.py"}]} | 1,989 | 261 |
gh_patches_debug_57017 | rasdani/github-patches | git_diff | fidals__shopelectro-995 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resolve stuck tests
CI fails because of stuck tests. The tests work locally, and the relevant code looks like it should pass.
https://ci.fidals.com/fidals/shopelectro/1727/9
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/settings/drone.py`
Content:
```
1 """Settings especially for drone CI."""
2
3 from .base import *
4
5
6 DEBUG = True
7
8 # http://bit.ly/sorl-thumbnail-docs
9 THUMBNAIL_DEBUG = True
10
11 SITE_DOMAIN_NAME = 'stage.shopelectro.ru'
12
13 YANDEX_KASSA_LINK = 'https://demomoney.yandex.ru/eshop.xml'
14
15 SELENIUM_URL = os.environ.get('SELENIUM_URL', 'http://selenium:4444/wd/hub')
16 SELENIUM_WAIT_SECONDS = int(os.environ['SELENIUM_WAIT_SECONDS'])
17 SELENIUM_TIMEOUT_SECONDS = int(os.environ['SELENIUM_TIMEOUT_SECONDS'])
18 SELENIUM_IMPLICIT_WAIT = int(os.environ['SELENIUM_IMPLICIT_WAIT'])
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/settings/drone.py b/shopelectro/settings/drone.py
--- a/shopelectro/settings/drone.py
+++ b/shopelectro/settings/drone.py
@@ -5,6 +5,15 @@
DEBUG = True
+# Header categories menu uses cache in templates.
+# Disable cache to avoid stale menu testing.
+# See #991 for details.
+CACHES = {
+ 'default': {
+ 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
+ }
+}
+
# http://bit.ly/sorl-thumbnail-docs
THUMBNAIL_DEBUG = True
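For context, a sketch of the cache override the patch above adds; `DummyCache` is Django's built-in no-op backend, so only its placement in the drone settings is specific to this project.

```python
# The dummy cache implements the cache API but never stores anything, so
# cached template fragments such as the header categories menu are rebuilt
# on every request during CI instead of being served stale (see #991).
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
    }
}
```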
| {"golden_diff": "diff --git a/shopelectro/settings/drone.py b/shopelectro/settings/drone.py\n--- a/shopelectro/settings/drone.py\n+++ b/shopelectro/settings/drone.py\n@@ -5,6 +5,15 @@\n \n DEBUG = True\n \n+# Header categories menu uses cache in templates.\n+# Disable cache to avoid stale menu testing.\n+# See #991 for details.\n+CACHES = {\n+ 'default': {\n+ 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n+ }\n+}\n+\n # http://bit.ly/sorl-thumbnail-docs\n THUMBNAIL_DEBUG = True\n", "issue": "Resolve stuck tests\nCI fails because of stuck tests. They are working at the local and relevant code looks like they should pass\r\nhttps://ci.fidals.com/fidals/shopelectro/1727/9\n", "before_files": [{"content": "\"\"\"Settings especially for drone CI.\"\"\"\n\nfrom .base import *\n\n\nDEBUG = True\n\n# http://bit.ly/sorl-thumbnail-docs\nTHUMBNAIL_DEBUG = True\n\nSITE_DOMAIN_NAME = 'stage.shopelectro.ru'\n\nYANDEX_KASSA_LINK = 'https://demomoney.yandex.ru/eshop.xml'\n\nSELENIUM_URL = os.environ.get('SELENIUM_URL', 'http://selenium:4444/wd/hub')\nSELENIUM_WAIT_SECONDS = int(os.environ['SELENIUM_WAIT_SECONDS'])\nSELENIUM_TIMEOUT_SECONDS = int(os.environ['SELENIUM_TIMEOUT_SECONDS'])\nSELENIUM_IMPLICIT_WAIT = int(os.environ['SELENIUM_IMPLICIT_WAIT'])\n", "path": "shopelectro/settings/drone.py"}], "after_files": [{"content": "\"\"\"Settings especially for drone CI.\"\"\"\n\nfrom .base import *\n\n\nDEBUG = True\n\n# Header categories menu uses cache in templates.\n# Disable cache to avoid stale menu testing.\n# See #991 for details.\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.dummy.DummyCache',\n }\n}\n\n# http://bit.ly/sorl-thumbnail-docs\nTHUMBNAIL_DEBUG = True\n\nSITE_DOMAIN_NAME = 'stage.shopelectro.ru'\n\nYANDEX_KASSA_LINK = 'https://demomoney.yandex.ru/eshop.xml'\n\nSELENIUM_URL = os.environ.get('SELENIUM_URL', 'http://selenium:4444/wd/hub')\nSELENIUM_WAIT_SECONDS = int(os.environ['SELENIUM_WAIT_SECONDS'])\nSELENIUM_TIMEOUT_SECONDS = int(os.environ['SELENIUM_TIMEOUT_SECONDS'])\nSELENIUM_IMPLICIT_WAIT = int(os.environ['SELENIUM_IMPLICIT_WAIT'])\n", "path": "shopelectro/settings/drone.py"}]} | 503 | 141 |
gh_patches_debug_18393 | rasdani/github-patches | git_diff | tensorflow__addons-834 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add nightly tests for windows/macos
Currently we only test our nightlies on linux:
https://github.com/tensorflow/addons/blob/master/.travis.yml#L17
It should be relatively simple to enable tests for macos/windows, with the one caveat that `tf-nightly` is not published for windows.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/losses/__init__.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Additional losses that conform to Keras API."""
16
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 from tensorflow_addons.losses.contrastive import contrastive_loss, ContrastiveLoss
22 from tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy
23 from tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss
24 from tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss
25 from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss
26 from tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss
27 from tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow_addons/losses/__init__.py b/tensorflow_addons/losses/__init__.py
--- a/tensorflow_addons/losses/__init__.py
+++ b/tensorflow_addons/losses/__init__.py
@@ -22,6 +22,11 @@
from tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy
from tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss
from tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss
-from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss
from tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss
from tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss
+
+# Temporarily disable for windows
+# Remove after: https://github.com/tensorflow/addons/issues/838
+import os
+if os.name != 'nt':
+ from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss
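A minimal sketch of the platform guard used in the patch above; `os.name == 'nt'` is the standard-library check for Windows, and the imported names are exactly the ones listed in the diff.

```python
import os

# Temporarily skip the npairs losses on Windows (tracked in issue #838),
# as done in the patch above.
if os.name != 'nt':
    from tensorflow_addons.losses.npairs import (
        npairs_loss,
        NpairsLoss,
        npairs_multilabel_loss,
        NpairsMultilabelLoss,
    )
```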
| {"golden_diff": "diff --git a/tensorflow_addons/losses/__init__.py b/tensorflow_addons/losses/__init__.py\n--- a/tensorflow_addons/losses/__init__.py\n+++ b/tensorflow_addons/losses/__init__.py\n@@ -22,6 +22,11 @@\n from tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy\n from tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss\n from tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss\n-from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\n from tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss\n from tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss\n+\n+# Temporarily disable for windows\n+# Remove after: https://github.com/tensorflow/addons/issues/838\n+import os\n+if os.name != 'nt':\n+ from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\n", "issue": "Add nightly tests for windows/macos\nCurrently we only test our nightlies on linux:\r\nhttps://github.com/tensorflow/addons/blob/master/.travis.yml#L17\r\n\r\nIt should be relatively simple to enable tests for macos/windows, with the one caveat that `tf-nightly` is not published for windows. \n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Additional losses that conform to Keras API.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom tensorflow_addons.losses.contrastive import contrastive_loss, ContrastiveLoss\nfrom tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy\nfrom tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss\nfrom tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss\nfrom tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\nfrom tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss\nfrom tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss\n", "path": "tensorflow_addons/losses/__init__.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Additional losses that conform to Keras API.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom tensorflow_addons.losses.contrastive import contrastive_loss, ContrastiveLoss\nfrom tensorflow_addons.losses.focal_loss import sigmoid_focal_crossentropy, SigmoidFocalCrossEntropy\nfrom tensorflow_addons.losses.giou_loss import giou_loss, GIoULoss\nfrom tensorflow_addons.losses.lifted import lifted_struct_loss, LiftedStructLoss\nfrom tensorflow_addons.losses.sparsemax_loss import sparsemax_loss, SparsemaxLoss\nfrom tensorflow_addons.losses.triplet import triplet_semihard_loss, TripletSemiHardLoss\n\n# Temporarily disable for windows\n# Remove after: https://github.com/tensorflow/addons/issues/838\nimport os\nif os.name != 'nt':\n from tensorflow_addons.losses.npairs import npairs_loss, NpairsLoss, npairs_multilabel_loss, NpairsMultilabelLoss\n", "path": "tensorflow_addons/losses/__init__.py"}]} | 701 | 275 |
gh_patches_debug_2355 | rasdani/github-patches | git_diff | pytorch__text-248 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A batch object created by fromvars does not have "fields" attribute
When making a batch object, the value of the `fields` attribute is set in its `__init__` method.
However, when created with `fromvars` class method, `fields` attribute is not set since the method first creates an empty object and then add information.
It should be modified to be analogous with the one created by `__init__` method.
It can be simply done by adding the following after https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L36:
```
batch.fields = dataset.fields.keys()
```
This kind of object creation is found when using BPTT iterator. Without `fields` attribute, printing a batch object is not possible due to https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L49.
--- END ISSUE ---
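For reference, a minimal sketch of the change the issue proposes, shown against the `Batch` class from `torchtext/data/batch.py` (the full file appears below); only the `batch.fields = ...` line is new, mirroring what `__init__` already does.

```python
class Batch:
    # ... __init__ and the other methods unchanged (see the file below) ...

    @classmethod
    def fromvars(cls, dataset, batch_size, train=True, **kwargs):
        """Create a Batch directly from a number of Variables."""
        batch = cls()
        batch.batch_size = batch_size
        batch.dataset = dataset
        batch.train = train
        batch.fields = dataset.fields.keys()  # copy field names, as __init__ does
        for k, v in kwargs.items():
            setattr(batch, k, v)
        return batch
```

With `fields` populated, `__str__` can iterate over the field names, so printing a batch produced by the BPTT iterator no longer fails at that loop.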
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/data/batch.py`
Content:
```
1 from torch import typename
2 from torch.tensor import _TensorBase
3
4
5 class Batch(object):
6 """Defines a batch of examples along with its Fields.
7
8 Attributes:
9 batch_size: Number of examples in the batch.
10 dataset: A reference to the dataset object the examples come from
11 (which itself contains the dataset's Field objects).
12 train: Whether the batch is from a training set.
13
14 Also stores the Variable for each column in the batch as an attribute.
15 """
16
17 def __init__(self, data=None, dataset=None, device=None, train=True):
18 """Create a Batch from a list of examples."""
19 if data is not None:
20 self.batch_size = len(data)
21 self.dataset = dataset
22 self.train = train
23 self.fields = dataset.fields.keys() # copy field names
24
25 for (name, field) in dataset.fields.items():
26 if field is not None:
27 batch = [x.__dict__[name] for x in data]
28 setattr(self, name, field.process(batch, device=device, train=train))
29
30 @classmethod
31 def fromvars(cls, dataset, batch_size, train=True, **kwargs):
32 """Create a Batch directly from a number of Variables."""
33 batch = cls()
34 batch.batch_size = batch_size
35 batch.dataset = dataset
36 batch.train = train
37 for k, v in kwargs.items():
38 setattr(batch, k, v)
39 return batch
40
41 def __repr__(self):
42 return str(self)
43
44 def __str__(self):
45 if not self.__dict__:
46 return 'Empty {} instance'.format(typename(self))
47
48 var_strs = '\n'.join(['\t[.' + name + ']' + ":" + _short_str(getattr(self, name))
49 for name in self.fields if hasattr(self, name)])
50
51 data_str = (' from {}'.format(self.dataset.name.upper())
52 if hasattr(self.dataset, 'name') and
53 isinstance(self.dataset.name, str) else '')
54
55 strt = '[{} of size {}{}]\n{}'.format(typename(self),
56 self.batch_size, data_str, var_strs)
57 return '\n' + strt
58
59 def __len__(self):
60 return self.batch_size
61
62
63 def _short_str(tensor):
64 # unwrap variable to tensor
65 if hasattr(tensor, 'data'):
66 tensor = tensor.data
67
68 # fallback in case of wrong argument type
69 if issubclass(type(tensor), _TensorBase) is False:
70 return str(tensor)
71
72 # copied from torch _tensor_str
73 size_str = 'x'.join(str(size) for size in tensor.size())
74 device_str = '' if not tensor.is_cuda else \
75 ' (GPU {})'.format(tensor.get_device())
76 strt = '[{} of size {}{}]'.format(typename(tensor),
77 size_str, device_str)
78 return strt
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py
--- a/torchtext/data/batch.py
+++ b/torchtext/data/batch.py
@@ -34,6 +34,7 @@
batch.batch_size = batch_size
batch.dataset = dataset
batch.train = train
+ batch.fields = dataset.fields.keys()
for k, v in kwargs.items():
setattr(batch, k, v)
return batch
| {"golden_diff": "diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py\n--- a/torchtext/data/batch.py\n+++ b/torchtext/data/batch.py\n@@ -34,6 +34,7 @@\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n+ batch.fields = dataset.fields.keys()\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n", "issue": "A batch object created by fromvars does not have \"fields\" attribute\nWhen making a batch object, the value of the `fields` attribute is set in its `__init__` method.\r\nHowever, when created with `fromvars` class method, `fields` attribute is not set since the method first creates an empty object and then add information.\r\nIt should be modified to be analogous with the one created by `__init__` method.\r\nIt can be simply done by adding the following after https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L36:\r\n```\r\nbatch.fields = dataset.fields.keys()\r\n```\r\n\r\nThis kind of object creation is found when using BPTT iterator. Without `fields` attribute, printing a batch object is not possible due to https://github.com/pytorch/text/blob/master/torchtext/data/batch.py#L49.\n", "before_files": [{"content": "from torch import typename\nfrom torch.tensor import _TensorBase\n\n\nclass Batch(object):\n \"\"\"Defines a batch of examples along with its Fields.\n\n Attributes:\n batch_size: Number of examples in the batch.\n dataset: A reference to the dataset object the examples come from\n (which itself contains the dataset's Field objects).\n train: Whether the batch is from a training set.\n\n Also stores the Variable for each column in the batch as an attribute.\n \"\"\"\n\n def __init__(self, data=None, dataset=None, device=None, train=True):\n \"\"\"Create a Batch from a list of examples.\"\"\"\n if data is not None:\n self.batch_size = len(data)\n self.dataset = dataset\n self.train = train\n self.fields = dataset.fields.keys() # copy field names\n\n for (name, field) in dataset.fields.items():\n if field is not None:\n batch = [x.__dict__[name] for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n\n @classmethod\n def fromvars(cls, dataset, batch_size, train=True, **kwargs):\n \"\"\"Create a Batch directly from a number of Variables.\"\"\"\n batch = cls()\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n if not self.__dict__:\n return 'Empty {} instance'.format(typename(self))\n\n var_strs = '\\n'.join(['\\t[.' 
+ name + ']' + \":\" + _short_str(getattr(self, name))\n for name in self.fields if hasattr(self, name)])\n\n data_str = (' from {}'.format(self.dataset.name.upper())\n if hasattr(self.dataset, 'name') and\n isinstance(self.dataset.name, str) else '')\n\n strt = '[{} of size {}{}]\\n{}'.format(typename(self),\n self.batch_size, data_str, var_strs)\n return '\\n' + strt\n\n def __len__(self):\n return self.batch_size\n\n\ndef _short_str(tensor):\n # unwrap variable to tensor\n if hasattr(tensor, 'data'):\n tensor = tensor.data\n\n # fallback in case of wrong argument type\n if issubclass(type(tensor), _TensorBase) is False:\n return str(tensor)\n\n # copied from torch _tensor_str\n size_str = 'x'.join(str(size) for size in tensor.size())\n device_str = '' if not tensor.is_cuda else \\\n ' (GPU {})'.format(tensor.get_device())\n strt = '[{} of size {}{}]'.format(typename(tensor),\n size_str, device_str)\n return strt\n", "path": "torchtext/data/batch.py"}], "after_files": [{"content": "from torch import typename\nfrom torch.tensor import _TensorBase\n\n\nclass Batch(object):\n \"\"\"Defines a batch of examples along with its Fields.\n\n Attributes:\n batch_size: Number of examples in the batch.\n dataset: A reference to the dataset object the examples come from\n (which itself contains the dataset's Field objects).\n train: Whether the batch is from a training set.\n\n Also stores the Variable for each column in the batch as an attribute.\n \"\"\"\n\n def __init__(self, data=None, dataset=None, device=None, train=True):\n \"\"\"Create a Batch from a list of examples.\"\"\"\n if data is not None:\n self.batch_size = len(data)\n self.dataset = dataset\n self.train = train\n self.fields = dataset.fields.keys() # copy field names\n\n for (name, field) in dataset.fields.items():\n if field is not None:\n batch = [x.__dict__[name] for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n\n @classmethod\n def fromvars(cls, dataset, batch_size, train=True, **kwargs):\n \"\"\"Create a Batch directly from a number of Variables.\"\"\"\n batch = cls()\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n batch.fields = dataset.fields.keys()\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n if not self.__dict__:\n return 'Empty {} instance'.format(typename(self))\n\n var_strs = '\\n'.join(['\\t[.' + name + ']' + \":\" + _short_str(getattr(self, name))\n for name in self.fields if hasattr(self, name)])\n\n data_str = (' from {}'.format(self.dataset.name.upper())\n if hasattr(self.dataset, 'name') and\n isinstance(self.dataset.name, str) else '')\n\n strt = '[{} of size {}{}]\\n{}'.format(typename(self),\n self.batch_size, data_str, var_strs)\n return '\\n' + strt\n\n def __len__(self):\n return self.batch_size\n\n\ndef _short_str(tensor):\n # unwrap variable to tensor\n if hasattr(tensor, 'data'):\n tensor = tensor.data\n\n # fallback in case of wrong argument type\n if issubclass(type(tensor), _TensorBase) is False:\n return str(tensor)\n\n # copied from torch _tensor_str\n size_str = 'x'.join(str(size) for size in tensor.size())\n device_str = '' if not tensor.is_cuda else \\\n ' (GPU {})'.format(tensor.get_device())\n strt = '[{} of size {}{}]'.format(typename(tensor),\n size_str, device_str)\n return strt\n", "path": "torchtext/data/batch.py"}]} | 1,203 | 102 |
gh_patches_debug_12042 | rasdani/github-patches | git_diff | pytorch__pytorch-4563 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RuntimeError: Unsupported op descriptor: thnn_batch_norm_forward-5-eps-momentum-training. File a bug report.
I started getting this with the latest code while using JIT:
RuntimeError: Unsupported op descriptor: thnn_batch_norm_forward-5-eps-momentum-training. File a bug report.
It did not happen 7 days ago, and it still does not happen if I roll pytorch back to the version I had 7 days ago. Do you need a simplified test case to fix it?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/jit/gen_jit_dispatch.py`
Content:
```
1 import os
2 import argparse
3 from itertools import count
4 from ..autograd.utils import CodeTemplate, write
5 from ..autograd.gen_autograd import load_aten_declarations
6
7 template_path = os.path.join(os.path.dirname(__file__), 'templates')
8
9 ATEN_DISPATCH_H = CodeTemplate.from_file(template_path + '/aten_dispatch.h')
10 ATEN_DISPATCH_CPP = CodeTemplate.from_file(template_path + '/aten_dispatch.cpp')
11
12 ATTR_METHOD_MAP = {
13 'int64_t': 'i',
14 'IntList': 'is',
15 'Scalar': 't',
16 'bool': 'i',
17 'double': 'f',
18 'std::array<bool,2>': 'is',
19 'std::array<bool,3>': 'is',
20 }
21
22 TYPE_CASTS = {
23 'std::array<bool,2>': 'as_bool_array<2>',
24 'std::array<bool,3>': 'as_bool_array<3>',
25 'Scalar': 'Scalar',
26 'IntList': 'std::vector<int64_t>',
27 }
28
29 ATTR_ASSIGNMENT = CodeTemplate("""\
30 auto ${name} = ${type_cast}(node->${method}(stringToSymbol("${name}")));\
31 """)
32
33 CALL_NAMESPACE = CodeTemplate("at::${name}(${args})")
34 CALL_METHOD = CodeTemplate("TensorTemporary(inputs[0]).value().${name}(${args})")
35
36 CONSTRUCTOR = CodeTemplate("""\
37 {"${descriptor}", [](Node *node) {
38 ${assignments}
39 return TensorOp([=](const list_of_retainable & inputs,
40 list_of_retainable & outputs) {
41 autograd::profiler::RecordFunction record("${name}");
42 AutoGPU device_guard(deviceForInputs(inputs));
43 pack_list(outputs, ${call});
44 }, "${name}", ${num_inputs});
45 }},
46 """)
47
48
49 def is_jit_op(decl):
50 return (not decl['api_name'].endswith('_') and
51 not decl['name'].endswith('_out') and
52 not decl['name'].endswith('_forward') and
53 not any(arg['simple_type'] == 'Generator' for arg in decl['arguments']) and
54 not any(arg['simple_type'] == 'SparseTensor' for arg in decl['arguments']) and
55 not any(arg['simple_type'] == 'Storage' for arg in decl['arguments']) and
56 any(arg['simple_type'] in {'Tensor', 'TensorList'} for arg in decl['arguments']) and
57 'Tensor' in decl['return_type'])
58
59
60 def gen_jit_dispatch(declarations, out):
61 aten_decls = load_aten_declarations(declarations)
62 jit_decls = [d for d in aten_decls if is_jit_op(d)]
63
64 def is_tensor_arg(arg):
65 return arg['simple_type'] in {'Tensor', 'TensorList'}
66
67 ops = {}
68 for decl in jit_decls:
69 arguments = decl['arguments']
70 name = decl['name']
71 scalar_args = [arg for arg in arguments if not is_tensor_arg(arg)]
72 has_tensorlist = any(arg['simple_type'] == 'TensorList' for arg in arguments)
73
74 # Descriptor is a unique identified for a particular overload of an op
75 attr_names = sorted([arg['name'] for arg in scalar_args])
76 num_inputs = len(arguments) - len(scalar_args) if not has_tensorlist else "*"
77 descriptor = '-'.join([decl['name'], str(num_inputs)] + attr_names)
78
79 # All scalar args need to be assigned, so they can be captured by a lambda
80 assignments = [ATTR_ASSIGNMENT.substitute(type=arg['simple_type'],
81 type_cast=TYPE_CASTS.get(arg['simple_type'], arg['simple_type']),
82 name=arg['name'],
83 method=ATTR_METHOD_MAP[arg['simple_type']])
84 for arg in scalar_args]
85
86 # Generate the actuall ATen call. This gets a bit tricky because of
87 # TensorList arguments, and functions that are only available as methods.
88 if 'namespace' in decl['method_of']:
89 if has_tensorlist:
90 if sum(map(is_tensor_arg, arguments)) != 1:
91 # TODO: support this
92 continue
93 args = ['TensorTemporaryList(inputs)' if is_tensor_arg(arg) else arg['name']
94 for arg in arguments]
95 else:
96 tensor_id = iter(count(start=0))
97 args = ['TensorTemporary(inputs[{}]).value()'.format(
98 next(tensor_id)) if is_tensor_arg(arg) else arg['name']
99 for arg in arguments]
100 call = CALL_NAMESPACE.substitute(name=name, args=args)
101 else:
102 tensor_id = iter(count(start=1))
103 args = ['TensorTemporary(inputs[{}]).value()'.format(next(tensor_id)) if is_tensor_arg(arg) else arg['name']
104 for arg in arguments[1:]]
105 call = CALL_METHOD.substitute(name=name, args=args)
106
107 constructor = CONSTRUCTOR.substitute(descriptor=descriptor, name=name, call=call,
108 assignments=assignments,
109 # num_inputs is only used in AutogradClosure, which
110 # is going to be removed soon anyway. There's no good value
111 # we can provide for cat.
112 num_inputs=num_inputs if num_inputs != "*" else 0)
113 assert descriptor not in ops, descriptor
114 ops[descriptor] = constructor
115
116 # Sort the generated snippets to ensure that the generation is deterministic
117 env = {'constructors': sorted(list(ops.values()))}
118 write(out, 'aten_dispatch.h', ATEN_DISPATCH_H, env)
119 write(out, 'aten_dispatch.cpp', ATEN_DISPATCH_CPP, env)
120
121
122 def main():
123 parser = argparse.ArgumentParser(
124 description='Generate JIT op dispatch')
125 parser.add_argument('declarations', metavar='DECL',
126 help='path to Declarations.yaml')
127 parser.add_argument('out', metavar='OUT',
128 help='path to output directory')
129 args = parser.parse_args()
130 gen_jit_dispatch(args.declarations, args.out)
131
132
133 if __name__ == '__main__':
134 main()
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/jit/gen_jit_dispatch.py b/tools/jit/gen_jit_dispatch.py
--- a/tools/jit/gen_jit_dispatch.py
+++ b/tools/jit/gen_jit_dispatch.py
@@ -49,7 +49,6 @@
def is_jit_op(decl):
return (not decl['api_name'].endswith('_') and
not decl['name'].endswith('_out') and
- not decl['name'].endswith('_forward') and
not any(arg['simple_type'] == 'Generator' for arg in decl['arguments']) and
not any(arg['simple_type'] == 'SparseTensor' for arg in decl['arguments']) and
not any(arg['simple_type'] == 'Storage' for arg in decl['arguments']) and
| {"golden_diff": "diff --git a/tools/jit/gen_jit_dispatch.py b/tools/jit/gen_jit_dispatch.py\n--- a/tools/jit/gen_jit_dispatch.py\n+++ b/tools/jit/gen_jit_dispatch.py\n@@ -49,7 +49,6 @@\n def is_jit_op(decl):\n return (not decl['api_name'].endswith('_') and\n not decl['name'].endswith('_out') and\n- not decl['name'].endswith('_forward') and\n not any(arg['simple_type'] == 'Generator' for arg in decl['arguments']) and\n not any(arg['simple_type'] == 'SparseTensor' for arg in decl['arguments']) and\n not any(arg['simple_type'] == 'Storage' for arg in decl['arguments']) and\n", "issue": "RuntimeError: Unsupported op descriptor: thnn_batch_norm_forward-5-eps-momentum-training. File a bug report.\nI started getting this with the latest code while using JIT:\r\n\r\nRuntimeError: Unsupported op descriptor: thnn_batch_norm_forward-5-eps-momentum-training. File a bug report.\r\n\r\nIt did not happen 7 days ago and still does not happen if I roll back pytorch to the version I had 7 days ago. Do you need any simplified test case to fix it?\r\n \n", "before_files": [{"content": "import os\nimport argparse\nfrom itertools import count\nfrom ..autograd.utils import CodeTemplate, write\nfrom ..autograd.gen_autograd import load_aten_declarations\n\ntemplate_path = os.path.join(os.path.dirname(__file__), 'templates')\n\nATEN_DISPATCH_H = CodeTemplate.from_file(template_path + '/aten_dispatch.h')\nATEN_DISPATCH_CPP = CodeTemplate.from_file(template_path + '/aten_dispatch.cpp')\n\nATTR_METHOD_MAP = {\n 'int64_t': 'i',\n 'IntList': 'is',\n 'Scalar': 't',\n 'bool': 'i',\n 'double': 'f',\n 'std::array<bool,2>': 'is',\n 'std::array<bool,3>': 'is',\n}\n\nTYPE_CASTS = {\n 'std::array<bool,2>': 'as_bool_array<2>',\n 'std::array<bool,3>': 'as_bool_array<3>',\n 'Scalar': 'Scalar',\n 'IntList': 'std::vector<int64_t>',\n}\n\nATTR_ASSIGNMENT = CodeTemplate(\"\"\"\\\nauto ${name} = ${type_cast}(node->${method}(stringToSymbol(\"${name}\")));\\\n\"\"\")\n\nCALL_NAMESPACE = CodeTemplate(\"at::${name}(${args})\")\nCALL_METHOD = CodeTemplate(\"TensorTemporary(inputs[0]).value().${name}(${args})\")\n\nCONSTRUCTOR = CodeTemplate(\"\"\"\\\n{\"${descriptor}\", [](Node *node) {\n ${assignments}\n return TensorOp([=](const list_of_retainable & inputs,\n list_of_retainable & outputs) {\n autograd::profiler::RecordFunction record(\"${name}\");\n AutoGPU device_guard(deviceForInputs(inputs));\n pack_list(outputs, ${call});\n }, \"${name}\", ${num_inputs});\n}},\n\"\"\")\n\n\ndef is_jit_op(decl):\n return (not decl['api_name'].endswith('_') and\n not decl['name'].endswith('_out') and\n not decl['name'].endswith('_forward') and\n not any(arg['simple_type'] == 'Generator' for arg in decl['arguments']) and\n not any(arg['simple_type'] == 'SparseTensor' for arg in decl['arguments']) and\n not any(arg['simple_type'] == 'Storage' for arg in decl['arguments']) and\n any(arg['simple_type'] in {'Tensor', 'TensorList'} for arg in decl['arguments']) and\n 'Tensor' in decl['return_type'])\n\n\ndef gen_jit_dispatch(declarations, out):\n aten_decls = load_aten_declarations(declarations)\n jit_decls = [d for d in aten_decls if is_jit_op(d)]\n\n def is_tensor_arg(arg):\n return arg['simple_type'] in {'Tensor', 'TensorList'}\n\n ops = {}\n for decl in jit_decls:\n arguments = decl['arguments']\n name = decl['name']\n scalar_args = [arg for arg in arguments if not is_tensor_arg(arg)]\n has_tensorlist = any(arg['simple_type'] == 'TensorList' for arg in arguments)\n\n # Descriptor is a unique identified for a particular overload of an op\n attr_names = 
sorted([arg['name'] for arg in scalar_args])\n num_inputs = len(arguments) - len(scalar_args) if not has_tensorlist else \"*\"\n descriptor = '-'.join([decl['name'], str(num_inputs)] + attr_names)\n\n # All scalar args need to be assigned, so they can be captured by a lambda\n assignments = [ATTR_ASSIGNMENT.substitute(type=arg['simple_type'],\n type_cast=TYPE_CASTS.get(arg['simple_type'], arg['simple_type']),\n name=arg['name'],\n method=ATTR_METHOD_MAP[arg['simple_type']])\n for arg in scalar_args]\n\n # Generate the actuall ATen call. This gets a bit tricky because of\n # TensorList arguments, and functions that are only available as methods.\n if 'namespace' in decl['method_of']:\n if has_tensorlist:\n if sum(map(is_tensor_arg, arguments)) != 1:\n # TODO: support this\n continue\n args = ['TensorTemporaryList(inputs)' if is_tensor_arg(arg) else arg['name']\n for arg in arguments]\n else:\n tensor_id = iter(count(start=0))\n args = ['TensorTemporary(inputs[{}]).value()'.format(\n next(tensor_id)) if is_tensor_arg(arg) else arg['name']\n for arg in arguments]\n call = CALL_NAMESPACE.substitute(name=name, args=args)\n else:\n tensor_id = iter(count(start=1))\n args = ['TensorTemporary(inputs[{}]).value()'.format(next(tensor_id)) if is_tensor_arg(arg) else arg['name']\n for arg in arguments[1:]]\n call = CALL_METHOD.substitute(name=name, args=args)\n\n constructor = CONSTRUCTOR.substitute(descriptor=descriptor, name=name, call=call,\n assignments=assignments,\n # num_inputs is only used in AutogradClosure, which\n # is going to be removed soon anyway. There's no good value\n # we can provide for cat.\n num_inputs=num_inputs if num_inputs != \"*\" else 0)\n assert descriptor not in ops, descriptor\n ops[descriptor] = constructor\n\n # Sort the generated snippets to ensure that the generation is deterministic\n env = {'constructors': sorted(list(ops.values()))}\n write(out, 'aten_dispatch.h', ATEN_DISPATCH_H, env)\n write(out, 'aten_dispatch.cpp', ATEN_DISPATCH_CPP, env)\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Generate JIT op dispatch')\n parser.add_argument('declarations', metavar='DECL',\n help='path to Declarations.yaml')\n parser.add_argument('out', metavar='OUT',\n help='path to output directory')\n args = parser.parse_args()\n gen_jit_dispatch(args.declarations, args.out)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/jit/gen_jit_dispatch.py"}], "after_files": [{"content": "import os\nimport argparse\nfrom itertools import count\nfrom ..autograd.utils import CodeTemplate, write\nfrom ..autograd.gen_autograd import load_aten_declarations\n\ntemplate_path = os.path.join(os.path.dirname(__file__), 'templates')\n\nATEN_DISPATCH_H = CodeTemplate.from_file(template_path + '/aten_dispatch.h')\nATEN_DISPATCH_CPP = CodeTemplate.from_file(template_path + '/aten_dispatch.cpp')\n\nATTR_METHOD_MAP = {\n 'int64_t': 'i',\n 'IntList': 'is',\n 'Scalar': 't',\n 'bool': 'i',\n 'double': 'f',\n 'std::array<bool,2>': 'is',\n 'std::array<bool,3>': 'is',\n}\n\nTYPE_CASTS = {\n 'std::array<bool,2>': 'as_bool_array<2>',\n 'std::array<bool,3>': 'as_bool_array<3>',\n 'Scalar': 'Scalar',\n 'IntList': 'std::vector<int64_t>',\n}\n\nATTR_ASSIGNMENT = CodeTemplate(\"\"\"\\\nauto ${name} = ${type_cast}(node->${method}(stringToSymbol(\"${name}\")));\\\n\"\"\")\n\nCALL_NAMESPACE = CodeTemplate(\"at::${name}(${args})\")\nCALL_METHOD = CodeTemplate(\"TensorTemporary(inputs[0]).value().${name}(${args})\")\n\nCONSTRUCTOR = CodeTemplate(\"\"\"\\\n{\"${descriptor}\", [](Node *node) {\n 
${assignments}\n return TensorOp([=](const list_of_retainable & inputs,\n list_of_retainable & outputs) {\n autograd::profiler::RecordFunction record(\"${name}\");\n AutoGPU device_guard(deviceForInputs(inputs));\n pack_list(outputs, ${call});\n }, \"${name}\", ${num_inputs});\n}},\n\"\"\")\n\n\ndef is_jit_op(decl):\n return (not decl['api_name'].endswith('_') and\n not decl['name'].endswith('_out') and\n not any(arg['simple_type'] == 'Generator' for arg in decl['arguments']) and\n not any(arg['simple_type'] == 'SparseTensor' for arg in decl['arguments']) and\n not any(arg['simple_type'] == 'Storage' for arg in decl['arguments']) and\n any(arg['simple_type'] in {'Tensor', 'TensorList'} for arg in decl['arguments']) and\n 'Tensor' in decl['return_type'])\n\n\ndef gen_jit_dispatch(declarations, out):\n aten_decls = load_aten_declarations(declarations)\n jit_decls = [d for d in aten_decls if is_jit_op(d)]\n\n def is_tensor_arg(arg):\n return arg['simple_type'] in {'Tensor', 'TensorList'}\n\n ops = {}\n for decl in jit_decls:\n arguments = decl['arguments']\n name = decl['name']\n scalar_args = [arg for arg in arguments if not is_tensor_arg(arg)]\n has_tensorlist = any(arg['simple_type'] == 'TensorList' for arg in arguments)\n\n # Descriptor is a unique identified for a particular overload of an op\n attr_names = sorted([arg['name'] for arg in scalar_args])\n num_inputs = len(arguments) - len(scalar_args) if not has_tensorlist else \"*\"\n descriptor = '-'.join([decl['name'], str(num_inputs)] + attr_names)\n\n # All scalar args need to be assigned, so they can be captured by a lambda\n assignments = [ATTR_ASSIGNMENT.substitute(type=arg['simple_type'],\n type_cast=TYPE_CASTS.get(arg['simple_type'], arg['simple_type']),\n name=arg['name'],\n method=ATTR_METHOD_MAP[arg['simple_type']])\n for arg in scalar_args]\n\n # Generate the actuall ATen call. This gets a bit tricky because of\n # TensorList arguments, and functions that are only available as methods.\n if 'namespace' in decl['method_of']:\n if has_tensorlist:\n if sum(map(is_tensor_arg, arguments)) != 1:\n # TODO: support this\n continue\n args = ['TensorTemporaryList(inputs)' if is_tensor_arg(arg) else arg['name']\n for arg in arguments]\n else:\n tensor_id = iter(count(start=0))\n args = ['TensorTemporary(inputs[{}]).value()'.format(\n next(tensor_id)) if is_tensor_arg(arg) else arg['name']\n for arg in arguments]\n call = CALL_NAMESPACE.substitute(name=name, args=args)\n else:\n tensor_id = iter(count(start=1))\n args = ['TensorTemporary(inputs[{}]).value()'.format(next(tensor_id)) if is_tensor_arg(arg) else arg['name']\n for arg in arguments[1:]]\n call = CALL_METHOD.substitute(name=name, args=args)\n\n constructor = CONSTRUCTOR.substitute(descriptor=descriptor, name=name, call=call,\n assignments=assignments,\n # num_inputs is only used in AutogradClosure, which\n # is going to be removed soon anyway. 
There's no good value\n # we can provide for cat.\n num_inputs=num_inputs if num_inputs != \"*\" else 0)\n assert descriptor not in ops, descriptor\n ops[descriptor] = constructor\n\n # Sort the generated snippets to ensure that the generation is deterministic\n env = {'constructors': sorted(list(ops.values()))}\n write(out, 'aten_dispatch.h', ATEN_DISPATCH_H, env)\n write(out, 'aten_dispatch.cpp', ATEN_DISPATCH_CPP, env)\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Generate JIT op dispatch')\n parser.add_argument('declarations', metavar='DECL',\n help='path to Declarations.yaml')\n parser.add_argument('out', metavar='OUT',\n help='path to output directory')\n args = parser.parse_args()\n gen_jit_dispatch(args.declarations, args.out)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/jit/gen_jit_dispatch.py"}]} | 1,954 | 170 |
gh_patches_debug_19170 | rasdani/github-patches | git_diff | goauthentik__authentik-6187 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Possible issue with LDAP after 2023.6.0 upgrade?
### Discussed in https://github.com/goauthentik/authentik/discussions/6185
<div type='discussions-op-text'>
<sup>Originally posted by **AndrewBucklin** July 7, 2023</sup>
After upgrading to 2023.6.0, the LDAP sync appears to be syncing 5 users at a time and creating a ton of entries in the Sync Status window. Is this normal? This list continues to grow, but here's what it looks like right now:
> Sync status
>
> ldap_sync:default:users:72ee93db-39d0-406e-8058-b814a0930744
>
> Last sync: 7/7/2023, 4:48:41 PM
> Synced 5 objects.
> ldap_sync:default:users:1e498568-f2c3-4212-b090-dbc1fcde940a
>
> Last sync: 7/7/2023, 4:48:12 PM
> Synced 5 objects.
> ldap_sync:default:users:e62eb0d7-d3c3-46b8-8a74-36544951df0c
>
> Last sync: 7/7/2023, 4:48:52 PM
> Synced 5 objects.
> ldap_sync:default:membership
>
> Last sync: 7/7/2023, 4:34:08 PM
> Synced 5736 objects.
> ldap_sync:default:users:5786c286-be82-4e13-a206-fd38b792d59c
>
> Last sync: 7/7/2023, 4:50:11 PM
> Synced 5 objects.
> ldap_sync:default:users:8c4601fa-c3a4-4fbd-a49c-e74bf7f9a280
>
> Last sync: 7/7/2023, 4:50:22 PM
> Synced 5 objects.
> ldap_sync:default:users:1f9f4657-0053-4bb7-82f0-f89f163c2ec3
>
> Last sync: 7/7/2023, 4:47:36 PM
> Synced 5 objects.
> ldap_sync:default:users:0411e4c4-b427-413e-9870-469643827f87
>
> Last sync: 7/7/2023, 4:50:52 PM
> Synced 5 objects.
> ldap_sync:default:users:0561cadd-2f8e-4432-befb-fe100ff779bf
>
> Last sync: 7/7/2023, 4:51:02 PM
> Synced 5 objects.
> ldap_sync:default:users:a0965cd3-306f-48d2-887b-88a0559fe627
>
> Last sync: 7/7/2023, 4:48:50 PM
> Synced 5 objects.
> ldap_sync:default:users:27357591-647d-4765-9256-127c5b03a11a
>
> Last sync: 7/7/2023, 4:49:56 PM
> Synced 5 objects.
> ldap_sync:default:users:2e7429e0-3b5f-4fa7-8df9-e270c85697ec
>
> Last sync: 7/7/2023, 4:50:17 PM
> Synced 5 objects.
> ldap_sync:default:group
>
> Last sync: 7/7/2023, 4:34:03 PM
> Synced 98 objects.
> ldap_sync:default:users:7bbad8c6-1612-460c-ae2e-ff93cda66de9
>
> Last sync: 7/7/2023, 4:49:36 PM
> Synced 5 objects.
> ldap_sync:default:users:8e981fc1-1976-411b-93c0-5d005300ab74
>
> Last sync: 7/7/2023, 4:49:31 PM
> Synced 5 objects.
> ldap_sync:default:user
>
> Last sync: 7/7/2023, 2:56:18 PM
> Synced 156 objects.
</div>
--- END ISSUE ---
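A note on the mechanism: the number of objects reported per sync entry comes straight from the page size used by the paged LDAP search in `base.py` below. A simplified sketch of that loop (keyword arguments follow ldap3's `Connection.search`; the constant is the Simple Paged Results control OID) shows why `paged_size=5` turns one directory sync into many five-object pages:
```python
# Simplified from BaseLDAPSynchronizer.search_paginator (full version shown below).
# With paged_size=5, every five directory entries become their own page, and each
# page is reported as a separate "Synced 5 objects." entry.
PAGED_RESULTS_OID = "1.2.840.113556.1.4.319"  # Simple Paged Results control

def search_pages(connection, search_base, search_filter, paged_size=5):
    cookie = True
    while cookie:
        connection.search(
            search_base,
            search_filter,
            paged_size=paged_size,
            paged_cookie=None if cookie is True else cookie,
        )
        try:
            cookie = connection.result["controls"][PAGED_RESULTS_OID]["value"]["cookie"]
        except KeyError:
            cookie = None
        yield connection.response
```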
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/sources/ldap/sync/base.py`
Content:
```
1 """Sync LDAP Users and groups into authentik"""
2 from typing import Any, Generator
3
4 from django.conf import settings
5 from django.db.models.base import Model
6 from django.db.models.query import QuerySet
7 from ldap3 import DEREF_ALWAYS, SUBTREE, Connection
8 from structlog.stdlib import BoundLogger, get_logger
9
10 from authentik.core.exceptions import PropertyMappingExpressionException
11 from authentik.events.models import Event, EventAction
12 from authentik.lib.merge import MERGE_LIST_UNIQUE
13 from authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME
14 from authentik.sources.ldap.models import LDAPPropertyMapping, LDAPSource
15
16 LDAP_UNIQUENESS = "ldap_uniq"
17
18
19 class BaseLDAPSynchronizer:
20 """Sync LDAP Users and groups into authentik"""
21
22 _source: LDAPSource
23 _logger: BoundLogger
24 _connection: Connection
25 _messages: list[str]
26
27 def __init__(self, source: LDAPSource):
28 self._source = source
29 self._connection = source.connection()
30 self._messages = []
31 self._logger = get_logger().bind(source=source, syncer=self.__class__.__name__)
32
33 @staticmethod
34 def name() -> str:
35 """UI name for the type of object this class synchronizes"""
36 raise NotImplementedError
37
38 def sync_full(self):
39 """Run full sync, this function should only be used in tests"""
40 if not settings.TEST: # noqa
41 raise RuntimeError(
42 f"{self.__class__.__name__}.sync_full() should only be used in tests"
43 )
44 for page in self.get_objects():
45 self.sync(page)
46
47 def sync(self, page_data: list) -> int:
48 """Sync function, implemented in subclass"""
49 raise NotImplementedError()
50
51 @property
52 def messages(self) -> list[str]:
53 """Get all UI messages"""
54 return self._messages
55
56 @property
57 def base_dn_users(self) -> str:
58 """Shortcut to get full base_dn for user lookups"""
59 if self._source.additional_user_dn:
60 return f"{self._source.additional_user_dn},{self._source.base_dn}"
61 return self._source.base_dn
62
63 @property
64 def base_dn_groups(self) -> str:
65 """Shortcut to get full base_dn for group lookups"""
66 if self._source.additional_group_dn:
67 return f"{self._source.additional_group_dn},{self._source.base_dn}"
68 return self._source.base_dn
69
70 def message(self, *args, **kwargs):
71 """Add message that is later added to the System Task and shown to the user"""
72 formatted_message = " ".join(args)
73 if "dn" in kwargs:
74 formatted_message += f"; DN: {kwargs['dn']}"
75 self._messages.append(formatted_message)
76 self._logger.warning(*args, **kwargs)
77
78 def get_objects(self, **kwargs) -> Generator:
79 """Get objects from LDAP, implemented in subclass"""
80 raise NotImplementedError()
81
82 # pylint: disable=too-many-arguments
83 def search_paginator(
84 self,
85 search_base,
86 search_filter,
87 search_scope=SUBTREE,
88 dereference_aliases=DEREF_ALWAYS,
89 attributes=None,
90 size_limit=0,
91 time_limit=0,
92 types_only=False,
93 get_operational_attributes=False,
94 controls=None,
95 paged_size=5,
96 paged_criticality=False,
97 ):
98 """Search in pages, returns each page"""
99 cookie = True
100 while cookie:
101 self._connection.search(
102 search_base,
103 search_filter,
104 search_scope,
105 dereference_aliases,
106 attributes,
107 size_limit,
108 time_limit,
109 types_only,
110 get_operational_attributes,
111 controls,
112 paged_size,
113 paged_criticality,
114 None if cookie is True else cookie,
115 )
116 try:
117 cookie = self._connection.result["controls"]["1.2.840.113556.1.4.319"]["value"][
118 "cookie"
119 ]
120 except KeyError:
121 cookie = None
122 yield self._connection.response
123
124 def _flatten(self, value: Any) -> Any:
125 """Flatten `value` if its a list"""
126 if isinstance(value, list):
127 if len(value) < 1:
128 return None
129 return value[0]
130 return value
131
132 def build_user_properties(self, user_dn: str, **kwargs) -> dict[str, Any]:
133 """Build attributes for User object based on property mappings."""
134 props = self._build_object_properties(user_dn, self._source.property_mappings, **kwargs)
135 props["path"] = self._source.get_user_path()
136 return props
137
138 def build_group_properties(self, group_dn: str, **kwargs) -> dict[str, Any]:
139 """Build attributes for Group object based on property mappings."""
140 return self._build_object_properties(
141 group_dn, self._source.property_mappings_group, **kwargs
142 )
143
144 def _build_object_properties(
145 self, object_dn: str, mappings: QuerySet, **kwargs
146 ) -> dict[str, dict[Any, Any]]:
147 properties = {"attributes": {}}
148 for mapping in mappings.all().select_subclasses():
149 if not isinstance(mapping, LDAPPropertyMapping):
150 continue
151 mapping: LDAPPropertyMapping
152 try:
153 value = mapping.evaluate(user=None, request=None, ldap=kwargs, dn=object_dn)
154 if value is None:
155 continue
156 if isinstance(value, (bytes)):
157 continue
158 object_field = mapping.object_field
159 if object_field.startswith("attributes."):
160 # Because returning a list might desired, we can't
161 # rely on self._flatten here. Instead, just save the result as-is
162 properties["attributes"][object_field.replace("attributes.", "")] = value
163 else:
164 properties[object_field] = self._flatten(value)
165 except PropertyMappingExpressionException as exc:
166 Event.new(
167 EventAction.CONFIGURATION_ERROR,
168 message=f"Failed to evaluate property-mapping: '{mapping.name}'",
169 source=self._source,
170 mapping=mapping,
171 ).save()
172 self._logger.warning("Mapping failed to evaluate", exc=exc, mapping=mapping)
173 continue
174 if self._source.object_uniqueness_field in kwargs:
175 properties["attributes"][LDAP_UNIQUENESS] = self._flatten(
176 kwargs.get(self._source.object_uniqueness_field)
177 )
178 properties["attributes"][LDAP_DISTINGUISHED_NAME] = object_dn
179 return properties
180
181 def update_or_create_attributes(
182 self,
183 obj: type[Model],
184 query: dict[str, Any],
185 data: dict[str, Any],
186 ) -> tuple[Model, bool]:
187 """Same as django's update_or_create but correctly update attributes by merging dicts"""
188 instance = obj.objects.filter(**query).first()
189 if not instance:
190 return (obj.objects.create(**data), True)
191 for key, value in data.items():
192 if key == "attributes":
193 continue
194 setattr(instance, key, value)
195 final_attributes = {}
196 MERGE_LIST_UNIQUE.merge(final_attributes, instance.attributes)
197 MERGE_LIST_UNIQUE.merge(final_attributes, data.get("attributes", {}))
198 instance.attributes = final_attributes
199 instance.save()
200 return (instance, False)
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/sources/ldap/sync/base.py b/authentik/sources/ldap/sync/base.py
--- a/authentik/sources/ldap/sync/base.py
+++ b/authentik/sources/ldap/sync/base.py
@@ -9,6 +9,7 @@
from authentik.core.exceptions import PropertyMappingExpressionException
from authentik.events.models import Event, EventAction
+from authentik.lib.config import CONFIG
from authentik.lib.merge import MERGE_LIST_UNIQUE
from authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME
from authentik.sources.ldap.models import LDAPPropertyMapping, LDAPSource
@@ -92,7 +93,7 @@
types_only=False,
get_operational_attributes=False,
controls=None,
- paged_size=5,
+ paged_size=int(CONFIG.y("ldap.page_size", 50)),
paged_criticality=False,
):
"""Search in pages, returns each page"""
| {"golden_diff": "diff --git a/authentik/sources/ldap/sync/base.py b/authentik/sources/ldap/sync/base.py\n--- a/authentik/sources/ldap/sync/base.py\n+++ b/authentik/sources/ldap/sync/base.py\n@@ -9,6 +9,7 @@\n \n from authentik.core.exceptions import PropertyMappingExpressionException\n from authentik.events.models import Event, EventAction\n+from authentik.lib.config import CONFIG\n from authentik.lib.merge import MERGE_LIST_UNIQUE\n from authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME\n from authentik.sources.ldap.models import LDAPPropertyMapping, LDAPSource\n@@ -92,7 +93,7 @@\n types_only=False,\n get_operational_attributes=False,\n controls=None,\n- paged_size=5,\n+ paged_size=int(CONFIG.y(\"ldap.page_size\", 50)),\n paged_criticality=False,\n ):\n \"\"\"Search in pages, returns each page\"\"\"\n", "issue": "Posssible issue with LDAP after 2023.6.0 upgrade?\n### Discussed in https://github.com/goauthentik/authentik/discussions/6185\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **AndrewBucklin** July 7, 2023</sup>\r\nAfter upgrading to 2023.6.0, the LDAP sync appears to be syncing 5 users at a time and creating a ton of entries in the Sync Status window. Is this normal? This list continues to grow, but here's what it looks like right now:\r\n\r\n> Sync status\r\n> \r\n> ldap_sync:default:users:72ee93db-39d0-406e-8058-b814a0930744\r\n> \r\n> Last sync: 7/7/2023, 4:48:41 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:1e498568-f2c3-4212-b090-dbc1fcde940a\r\n> \r\n> Last sync: 7/7/2023, 4:48:12 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:e62eb0d7-d3c3-46b8-8a74-36544951df0c\r\n> \r\n> Last sync: 7/7/2023, 4:48:52 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:membership\r\n> \r\n> Last sync: 7/7/2023, 4:34:08 PM\r\n> Synced 5736 objects.\r\n> ldap_sync:default:users:5786c286-be82-4e13-a206-fd38b792d59c\r\n> \r\n> Last sync: 7/7/2023, 4:50:11 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:8c4601fa-c3a4-4fbd-a49c-e74bf7f9a280\r\n> \r\n> Last sync: 7/7/2023, 4:50:22 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:1f9f4657-0053-4bb7-82f0-f89f163c2ec3\r\n> \r\n> Last sync: 7/7/2023, 4:47:36 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:0411e4c4-b427-413e-9870-469643827f87\r\n> \r\n> Last sync: 7/7/2023, 4:50:52 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:0561cadd-2f8e-4432-befb-fe100ff779bf\r\n> \r\n> Last sync: 7/7/2023, 4:51:02 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:a0965cd3-306f-48d2-887b-88a0559fe627\r\n> \r\n> Last sync: 7/7/2023, 4:48:50 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:27357591-647d-4765-9256-127c5b03a11a\r\n> \r\n> Last sync: 7/7/2023, 4:49:56 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:2e7429e0-3b5f-4fa7-8df9-e270c85697ec\r\n> \r\n> Last sync: 7/7/2023, 4:50:17 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:group\r\n> \r\n> Last sync: 7/7/2023, 4:34:03 PM\r\n> Synced 98 objects.\r\n> ldap_sync:default:users:7bbad8c6-1612-460c-ae2e-ff93cda66de9\r\n> \r\n> Last sync: 7/7/2023, 4:49:36 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:users:8e981fc1-1976-411b-93c0-5d005300ab74\r\n> \r\n> Last sync: 7/7/2023, 4:49:31 PM\r\n> Synced 5 objects.\r\n> ldap_sync:default:user\r\n> \r\n> Last sync: 7/7/2023, 2:56:18 PM\r\n> Synced 156 objects.\r\n\r\n</div>\n", "before_files": [{"content": "\"\"\"Sync LDAP Users and groups into authentik\"\"\"\nfrom typing import Any, Generator\n\nfrom django.conf import settings\nfrom django.db.models.base import 
Model\nfrom django.db.models.query import QuerySet\nfrom ldap3 import DEREF_ALWAYS, SUBTREE, Connection\nfrom structlog.stdlib import BoundLogger, get_logger\n\nfrom authentik.core.exceptions import PropertyMappingExpressionException\nfrom authentik.events.models import Event, EventAction\nfrom authentik.lib.merge import MERGE_LIST_UNIQUE\nfrom authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME\nfrom authentik.sources.ldap.models import LDAPPropertyMapping, LDAPSource\n\nLDAP_UNIQUENESS = \"ldap_uniq\"\n\n\nclass BaseLDAPSynchronizer:\n \"\"\"Sync LDAP Users and groups into authentik\"\"\"\n\n _source: LDAPSource\n _logger: BoundLogger\n _connection: Connection\n _messages: list[str]\n\n def __init__(self, source: LDAPSource):\n self._source = source\n self._connection = source.connection()\n self._messages = []\n self._logger = get_logger().bind(source=source, syncer=self.__class__.__name__)\n\n @staticmethod\n def name() -> str:\n \"\"\"UI name for the type of object this class synchronizes\"\"\"\n raise NotImplementedError\n\n def sync_full(self):\n \"\"\"Run full sync, this function should only be used in tests\"\"\"\n if not settings.TEST: # noqa\n raise RuntimeError(\n f\"{self.__class__.__name__}.sync_full() should only be used in tests\"\n )\n for page in self.get_objects():\n self.sync(page)\n\n def sync(self, page_data: list) -> int:\n \"\"\"Sync function, implemented in subclass\"\"\"\n raise NotImplementedError()\n\n @property\n def messages(self) -> list[str]:\n \"\"\"Get all UI messages\"\"\"\n return self._messages\n\n @property\n def base_dn_users(self) -> str:\n \"\"\"Shortcut to get full base_dn for user lookups\"\"\"\n if self._source.additional_user_dn:\n return f\"{self._source.additional_user_dn},{self._source.base_dn}\"\n return self._source.base_dn\n\n @property\n def base_dn_groups(self) -> str:\n \"\"\"Shortcut to get full base_dn for group lookups\"\"\"\n if self._source.additional_group_dn:\n return f\"{self._source.additional_group_dn},{self._source.base_dn}\"\n return self._source.base_dn\n\n def message(self, *args, **kwargs):\n \"\"\"Add message that is later added to the System Task and shown to the user\"\"\"\n formatted_message = \" \".join(args)\n if \"dn\" in kwargs:\n formatted_message += f\"; DN: {kwargs['dn']}\"\n self._messages.append(formatted_message)\n self._logger.warning(*args, **kwargs)\n\n def get_objects(self, **kwargs) -> Generator:\n \"\"\"Get objects from LDAP, implemented in subclass\"\"\"\n raise NotImplementedError()\n\n # pylint: disable=too-many-arguments\n def search_paginator(\n self,\n search_base,\n search_filter,\n search_scope=SUBTREE,\n dereference_aliases=DEREF_ALWAYS,\n attributes=None,\n size_limit=0,\n time_limit=0,\n types_only=False,\n get_operational_attributes=False,\n controls=None,\n paged_size=5,\n paged_criticality=False,\n ):\n \"\"\"Search in pages, returns each page\"\"\"\n cookie = True\n while cookie:\n self._connection.search(\n search_base,\n search_filter,\n search_scope,\n dereference_aliases,\n attributes,\n size_limit,\n time_limit,\n types_only,\n get_operational_attributes,\n controls,\n paged_size,\n paged_criticality,\n None if cookie is True else cookie,\n )\n try:\n cookie = self._connection.result[\"controls\"][\"1.2.840.113556.1.4.319\"][\"value\"][\n \"cookie\"\n ]\n except KeyError:\n cookie = None\n yield self._connection.response\n\n def _flatten(self, value: Any) -> Any:\n \"\"\"Flatten `value` if its a list\"\"\"\n if isinstance(value, list):\n if len(value) < 1:\n return None\n 
return value[0]\n return value\n\n def build_user_properties(self, user_dn: str, **kwargs) -> dict[str, Any]:\n \"\"\"Build attributes for User object based on property mappings.\"\"\"\n props = self._build_object_properties(user_dn, self._source.property_mappings, **kwargs)\n props[\"path\"] = self._source.get_user_path()\n return props\n\n def build_group_properties(self, group_dn: str, **kwargs) -> dict[str, Any]:\n \"\"\"Build attributes for Group object based on property mappings.\"\"\"\n return self._build_object_properties(\n group_dn, self._source.property_mappings_group, **kwargs\n )\n\n def _build_object_properties(\n self, object_dn: str, mappings: QuerySet, **kwargs\n ) -> dict[str, dict[Any, Any]]:\n properties = {\"attributes\": {}}\n for mapping in mappings.all().select_subclasses():\n if not isinstance(mapping, LDAPPropertyMapping):\n continue\n mapping: LDAPPropertyMapping\n try:\n value = mapping.evaluate(user=None, request=None, ldap=kwargs, dn=object_dn)\n if value is None:\n continue\n if isinstance(value, (bytes)):\n continue\n object_field = mapping.object_field\n if object_field.startswith(\"attributes.\"):\n # Because returning a list might desired, we can't\n # rely on self._flatten here. Instead, just save the result as-is\n properties[\"attributes\"][object_field.replace(\"attributes.\", \"\")] = value\n else:\n properties[object_field] = self._flatten(value)\n except PropertyMappingExpressionException as exc:\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=f\"Failed to evaluate property-mapping: '{mapping.name}'\",\n source=self._source,\n mapping=mapping,\n ).save()\n self._logger.warning(\"Mapping failed to evaluate\", exc=exc, mapping=mapping)\n continue\n if self._source.object_uniqueness_field in kwargs:\n properties[\"attributes\"][LDAP_UNIQUENESS] = self._flatten(\n kwargs.get(self._source.object_uniqueness_field)\n )\n properties[\"attributes\"][LDAP_DISTINGUISHED_NAME] = object_dn\n return properties\n\n def update_or_create_attributes(\n self,\n obj: type[Model],\n query: dict[str, Any],\n data: dict[str, Any],\n ) -> tuple[Model, bool]:\n \"\"\"Same as django's update_or_create but correctly update attributes by merging dicts\"\"\"\n instance = obj.objects.filter(**query).first()\n if not instance:\n return (obj.objects.create(**data), True)\n for key, value in data.items():\n if key == \"attributes\":\n continue\n setattr(instance, key, value)\n final_attributes = {}\n MERGE_LIST_UNIQUE.merge(final_attributes, instance.attributes)\n MERGE_LIST_UNIQUE.merge(final_attributes, data.get(\"attributes\", {}))\n instance.attributes = final_attributes\n instance.save()\n return (instance, False)\n", "path": "authentik/sources/ldap/sync/base.py"}], "after_files": [{"content": "\"\"\"Sync LDAP Users and groups into authentik\"\"\"\nfrom typing import Any, Generator\n\nfrom django.conf import settings\nfrom django.db.models.base import Model\nfrom django.db.models.query import QuerySet\nfrom ldap3 import DEREF_ALWAYS, SUBTREE, Connection\nfrom structlog.stdlib import BoundLogger, get_logger\n\nfrom authentik.core.exceptions import PropertyMappingExpressionException\nfrom authentik.events.models import Event, EventAction\nfrom authentik.lib.config import CONFIG\nfrom authentik.lib.merge import MERGE_LIST_UNIQUE\nfrom authentik.sources.ldap.auth import LDAP_DISTINGUISHED_NAME\nfrom authentik.sources.ldap.models import LDAPPropertyMapping, LDAPSource\n\nLDAP_UNIQUENESS = \"ldap_uniq\"\n\n\nclass BaseLDAPSynchronizer:\n \"\"\"Sync LDAP Users and groups 
into authentik\"\"\"\n\n _source: LDAPSource\n _logger: BoundLogger\n _connection: Connection\n _messages: list[str]\n\n def __init__(self, source: LDAPSource):\n self._source = source\n self._connection = source.connection()\n self._messages = []\n self._logger = get_logger().bind(source=source, syncer=self.__class__.__name__)\n\n @staticmethod\n def name() -> str:\n \"\"\"UI name for the type of object this class synchronizes\"\"\"\n raise NotImplementedError\n\n def sync_full(self):\n \"\"\"Run full sync, this function should only be used in tests\"\"\"\n if not settings.TEST: # noqa\n raise RuntimeError(\n f\"{self.__class__.__name__}.sync_full() should only be used in tests\"\n )\n for page in self.get_objects():\n self.sync(page)\n\n def sync(self, page_data: list) -> int:\n \"\"\"Sync function, implemented in subclass\"\"\"\n raise NotImplementedError()\n\n @property\n def messages(self) -> list[str]:\n \"\"\"Get all UI messages\"\"\"\n return self._messages\n\n @property\n def base_dn_users(self) -> str:\n \"\"\"Shortcut to get full base_dn for user lookups\"\"\"\n if self._source.additional_user_dn:\n return f\"{self._source.additional_user_dn},{self._source.base_dn}\"\n return self._source.base_dn\n\n @property\n def base_dn_groups(self) -> str:\n \"\"\"Shortcut to get full base_dn for group lookups\"\"\"\n if self._source.additional_group_dn:\n return f\"{self._source.additional_group_dn},{self._source.base_dn}\"\n return self._source.base_dn\n\n def message(self, *args, **kwargs):\n \"\"\"Add message that is later added to the System Task and shown to the user\"\"\"\n formatted_message = \" \".join(args)\n if \"dn\" in kwargs:\n formatted_message += f\"; DN: {kwargs['dn']}\"\n self._messages.append(formatted_message)\n self._logger.warning(*args, **kwargs)\n\n def get_objects(self, **kwargs) -> Generator:\n \"\"\"Get objects from LDAP, implemented in subclass\"\"\"\n raise NotImplementedError()\n\n # pylint: disable=too-many-arguments\n def search_paginator(\n self,\n search_base,\n search_filter,\n search_scope=SUBTREE,\n dereference_aliases=DEREF_ALWAYS,\n attributes=None,\n size_limit=0,\n time_limit=0,\n types_only=False,\n get_operational_attributes=False,\n controls=None,\n paged_size=int(CONFIG.y(\"ldap.page_size\", 50)),\n paged_criticality=False,\n ):\n \"\"\"Search in pages, returns each page\"\"\"\n cookie = True\n while cookie:\n self._connection.search(\n search_base,\n search_filter,\n search_scope,\n dereference_aliases,\n attributes,\n size_limit,\n time_limit,\n types_only,\n get_operational_attributes,\n controls,\n paged_size,\n paged_criticality,\n None if cookie is True else cookie,\n )\n try:\n cookie = self._connection.result[\"controls\"][\"1.2.840.113556.1.4.319\"][\"value\"][\n \"cookie\"\n ]\n except KeyError:\n cookie = None\n yield self._connection.response\n\n def _flatten(self, value: Any) -> Any:\n \"\"\"Flatten `value` if its a list\"\"\"\n if isinstance(value, list):\n if len(value) < 1:\n return None\n return value[0]\n return value\n\n def build_user_properties(self, user_dn: str, **kwargs) -> dict[str, Any]:\n \"\"\"Build attributes for User object based on property mappings.\"\"\"\n props = self._build_object_properties(user_dn, self._source.property_mappings, **kwargs)\n props[\"path\"] = self._source.get_user_path()\n return props\n\n def build_group_properties(self, group_dn: str, **kwargs) -> dict[str, Any]:\n \"\"\"Build attributes for Group object based on property mappings.\"\"\"\n return self._build_object_properties(\n group_dn, 
self._source.property_mappings_group, **kwargs\n )\n\n def _build_object_properties(\n self, object_dn: str, mappings: QuerySet, **kwargs\n ) -> dict[str, dict[Any, Any]]:\n properties = {\"attributes\": {}}\n for mapping in mappings.all().select_subclasses():\n if not isinstance(mapping, LDAPPropertyMapping):\n continue\n mapping: LDAPPropertyMapping\n try:\n value = mapping.evaluate(user=None, request=None, ldap=kwargs, dn=object_dn)\n if value is None:\n continue\n if isinstance(value, (bytes)):\n continue\n object_field = mapping.object_field\n if object_field.startswith(\"attributes.\"):\n # Because returning a list might desired, we can't\n # rely on self._flatten here. Instead, just save the result as-is\n properties[\"attributes\"][object_field.replace(\"attributes.\", \"\")] = value\n else:\n properties[object_field] = self._flatten(value)\n except PropertyMappingExpressionException as exc:\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=f\"Failed to evaluate property-mapping: '{mapping.name}'\",\n source=self._source,\n mapping=mapping,\n ).save()\n self._logger.warning(\"Mapping failed to evaluate\", exc=exc, mapping=mapping)\n continue\n if self._source.object_uniqueness_field in kwargs:\n properties[\"attributes\"][LDAP_UNIQUENESS] = self._flatten(\n kwargs.get(self._source.object_uniqueness_field)\n )\n properties[\"attributes\"][LDAP_DISTINGUISHED_NAME] = object_dn\n return properties\n\n def update_or_create_attributes(\n self,\n obj: type[Model],\n query: dict[str, Any],\n data: dict[str, Any],\n ) -> tuple[Model, bool]:\n \"\"\"Same as django's update_or_create but correctly update attributes by merging dicts\"\"\"\n instance = obj.objects.filter(**query).first()\n if not instance:\n return (obj.objects.create(**data), True)\n for key, value in data.items():\n if key == \"attributes\":\n continue\n setattr(instance, key, value)\n final_attributes = {}\n MERGE_LIST_UNIQUE.merge(final_attributes, instance.attributes)\n MERGE_LIST_UNIQUE.merge(final_attributes, data.get(\"attributes\", {}))\n instance.attributes = final_attributes\n instance.save()\n return (instance, False)\n", "path": "authentik/sources/ldap/sync/base.py"}]} | 3,559 | 213 |
gh_patches_debug_37845 | rasdani/github-patches | git_diff | learningequality__kolibri-2832 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no mastery achieved on PDFs
### Observed behavior
Mastery is not achieved (no complete icon and no points earned) after spending time scrolling through the PDF.
### Expected behavior
The PDF is 9 pages long. Mastery is expected after 4.5 minutes. The 'in progress' icon appears, but mastery is never achieved.
### Steps to reproduce
1. Select a PDF content item, like this one: https://instantschools-v2.learningequality.org/learn/#/topics/c/2d902e00e30245799081fda4da5cabd0
2. Keep the PDF open for 5+ minutes
3. No mastery is achieved

### Context
* Kolibri version: 0.6.1 pex on online demo server
* Operating system: pex
* Browser: Chrome
URL: https://instantschools-v2.learningequality.org/learn/#/topics/c/2d902e00e30245799081fda4da5cabd0
--- END ISSUE ---
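As a quick check of the reporter's expectation (this is just the arithmetic implied by the issue text, not Kolibri's actual mastery logic):
```python
# A 9-page PDF with mastery expected after 4.5 minutes implies half a minute per page.
pages = 9
expected_mastery_minutes = 4.5
seconds_per_page = expected_mastery_minutes * 60 / pages  # 30.0
```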
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/content/management/commands/importcontent.py`
Content:
```
1 import os
2 import logging as logger
3 from django.conf import settings
4 from django.core.management.base import CommandError
5 from kolibri.tasks.management.commands.base import AsyncCommand
6 from requests.exceptions import HTTPError
7
8 from ...utils import annotation, paths, transfer, import_export_content
9
10 # constants to specify the transfer method to be used
11 DOWNLOAD_METHOD = "download"
12 COPY_METHOD = "copy"
13
14 logging = logger.getLogger(__name__)
15
16
17 class Command(AsyncCommand):
18
19 def add_arguments(self, parser):
20 # let's save the parser in case we need to print a help statement
21 self._parser = parser
22
23 # we want two groups of arguments. One group is when the
24 # 'importcontent disk' command is given, where we'll expect a file
25 # directory to be given. Another is the 'importcontent network'
26 # command to be given, where we'll expect a channel.
27
28 # However, some optional arguments apply to both groups. Add them here!
29 node_ids_help_text = """
30 Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.
31
32 e.g.
33
34 kolibri manage importcontent --node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>
35 """
36 parser.add_argument(
37 "--node_ids", "-n",
38 # Split the comma separated string we get, into a list of strings
39 type=lambda x: x.split(","),
40 default=[],
41 required=False,
42 dest="node_ids",
43 help=node_ids_help_text,
44 )
45
46 exclude_node_ids_help_text = """
47 Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.
48
49 e.g.
50
51 kolibri manage importcontent --exclude_node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>
52 """
53 parser.add_argument(
54 "--exclude_node_ids",
55 # Split the comma separated string we get, into a list of string
56 type=lambda x: x.split(","),
57 default=[],
58 required=False,
59 dest="exclude_node_ids",
60 help=exclude_node_ids_help_text
61 )
62
63 # to implement these two groups of commands and their corresponding
64 # arguments, we'll need argparse.subparsers.
65 subparsers = parser.add_subparsers(dest='command', help="The following subcommands are available.")
66
67 # the network command has a channel id required positional argument,
68 # and some optional content_id arguments.
69
70 # TODO: implement a --content-domain parameter, for optionally
71 # specifying the domain for the curation server.
72
73 # Note: cmd should be the management command instance, as though the
74 # interface for adding arguments is argparse, Django overrides the
75 # parser object with its own thing, hence why we need to add cmd. See
76 # http://stackoverflow.com/questions/36706220/is-it-possible-to-create-subparsers-in-a-django-management-command
77 network_subparser = subparsers.add_parser(
78 name='network',
79 cmd=self,
80 help="Download the given channel through the network.",
81 )
82 network_subparser.add_argument('channel_id', type=str)
83
84 default_studio_url = settings.CENTRAL_CONTENT_DOWNLOAD_BASE_URL
85 network_subparser.add_argument(
86 "--baseurl",
87 type=str,
88 default=default_studio_url,
89 dest="baseurl",
90 )
91
92 disk_subparser = subparsers.add_parser(
93 name='disk',
94 cmd=self,
95 help='Copy the content from the given folder.'
96 )
97 disk_subparser.add_argument('channel_id', type=str)
98 disk_subparser.add_argument('directory', type=str)
99
100 def download_content(self, channel_id, node_ids=None, exclude_node_ids=None, baseurl=None):
101 self._transfer(DOWNLOAD_METHOD, channel_id, node_ids=node_ids, exclude_node_ids=exclude_node_ids, baseurl=baseurl)
102
103 def copy_content(self, channel_id, path, node_ids=None, exclude_node_ids=None):
104 self._transfer(COPY_METHOD, channel_id, path=path, node_ids=node_ids, exclude_node_ids=exclude_node_ids)
105
106 def _transfer(self, method, channel_id, path=None, node_ids=None, exclude_node_ids=None, baseurl=None): # noqa: max-complexity=16
107
108 files_to_download, total_bytes_to_transfer = import_export_content.get_files_to_transfer(
109 channel_id, node_ids, exclude_node_ids, False)
110
111 downloaded_files = []
112 number_of_skipped_files = 0
113 file_checksums_to_annotate = []
114
115 with self.start_progress(total=total_bytes_to_transfer) as overall_progress_update:
116
117 for f in files_to_download:
118
119 if self.is_cancelled():
120 break
121
122 filename = f.get_filename()
123 dest = paths.get_content_storage_file_path(filename)
124
125 # if the file already exists, add its size to our overall progress, and skip
126 if os.path.isfile(dest) and os.path.getsize(dest) == f.file_size:
127 overall_progress_update(f.file_size)
128 file_checksums_to_annotate.append(f.id)
129 continue
130
131 # determine where we're downloading/copying from, and create appropriate transfer object
132 if method == DOWNLOAD_METHOD:
133 url = paths.get_content_storage_remote_url(filename, baseurl=baseurl)
134 filetransfer = transfer.FileDownload(url, dest)
135 elif method == COPY_METHOD:
136 srcpath = paths.get_content_storage_file_path(filename, datafolder=path)
137 filetransfer = transfer.FileCopy(srcpath, dest)
138
139 try:
140
141 with filetransfer:
142
143 with self.start_progress(total=filetransfer.total_size) as file_dl_progress_update:
144
145 for chunk in filetransfer:
146 if self.is_cancelled():
147 filetransfer.cancel()
148 break
149 length = len(chunk)
150 overall_progress_update(length)
151 file_dl_progress_update(length)
152 else:
153 # If the for loop didn't break, add this to downloaded files.
154 downloaded_files.append(dest)
155
156 file_checksums_to_annotate.append(f.id)
157
158 except HTTPError:
159 overall_progress_update(f.file_size)
160
161 except OSError:
162 number_of_skipped_files += 1
163 overall_progress_update(f.file_size)
164
165 if self.is_cancelled():
166 # Cancelled, clean up any already downloading files.
167 for dest in downloaded_files:
168 os.remove(dest)
169 self.cancel()
170 else:
171 if number_of_skipped_files > 0:
172 logging.warning(
173 "{} files are skipped, because they are not found in the given external drive.".format(number_of_skipped_files))
174
175 annotation.set_availability(channel_id, file_checksums_to_annotate)
176
177 def handle_async(self, *args, **options):
178 if options['command'] == 'network':
179 self.download_content(options["channel_id"],
180 node_ids=options["node_ids"],
181 exclude_node_ids=options['exclude_node_ids'],
182 baseurl=options["baseurl"])
183 elif options['command'] == 'disk':
184 self.copy_content(options["channel_id"],
185 options["directory"],
186 node_ids=options["node_ids"],
187 exclude_node_ids=options["exclude_node_ids"])
188 else:
189 self._parser.print_help()
190 raise CommandError("Please give a valid subcommand. You gave: {}".format(options["command"]))
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/content/management/commands/importcontent.py b/kolibri/content/management/commands/importcontent.py
--- a/kolibri/content/management/commands/importcontent.py
+++ b/kolibri/content/management/commands/importcontent.py
@@ -5,7 +5,7 @@
from kolibri.tasks.management.commands.base import AsyncCommand
from requests.exceptions import HTTPError
-from ...utils import annotation, paths, transfer, import_export_content
+from ...utils import annotation, import_export_content, paths, transfer
# constants to specify the transfer method to be used
DOWNLOAD_METHOD = "download"
@@ -108,7 +108,6 @@
files_to_download, total_bytes_to_transfer = import_export_content.get_files_to_transfer(
channel_id, node_ids, exclude_node_ids, False)
- downloaded_files = []
number_of_skipped_files = 0
file_checksums_to_annotate = []
@@ -149,9 +148,6 @@
length = len(chunk)
overall_progress_update(length)
file_dl_progress_update(length)
- else:
- # If the for loop didn't break, add this to downloaded files.
- downloaded_files.append(dest)
file_checksums_to_annotate.append(f.id)
@@ -162,17 +158,15 @@
number_of_skipped_files += 1
overall_progress_update(f.file_size)
+ annotation.set_availability(channel_id, file_checksums_to_annotate)
+
+ if number_of_skipped_files > 0:
+ logging.warning(
+ "{} files are skipped, because they are not found in the given external drive.".format(
+ number_of_skipped_files))
+
if self.is_cancelled():
- # Cancelled, clean up any already downloading files.
- for dest in downloaded_files:
- os.remove(dest)
self.cancel()
- else:
- if number_of_skipped_files > 0:
- logging.warning(
- "{} files are skipped, because they are not found in the given external drive.".format(number_of_skipped_files))
-
- annotation.set_availability(channel_id, file_checksums_to_annotate)
def handle_async(self, *args, **options):
if options['command'] == 'network':
| {"golden_diff": "diff --git a/kolibri/content/management/commands/importcontent.py b/kolibri/content/management/commands/importcontent.py\n--- a/kolibri/content/management/commands/importcontent.py\n+++ b/kolibri/content/management/commands/importcontent.py\n@@ -5,7 +5,7 @@\n from kolibri.tasks.management.commands.base import AsyncCommand\n from requests.exceptions import HTTPError\n \n-from ...utils import annotation, paths, transfer, import_export_content\n+from ...utils import annotation, import_export_content, paths, transfer\n \n # constants to specify the transfer method to be used\n DOWNLOAD_METHOD = \"download\"\n@@ -108,7 +108,6 @@\n files_to_download, total_bytes_to_transfer = import_export_content.get_files_to_transfer(\n channel_id, node_ids, exclude_node_ids, False)\n \n- downloaded_files = []\n number_of_skipped_files = 0\n file_checksums_to_annotate = []\n \n@@ -149,9 +148,6 @@\n length = len(chunk)\n overall_progress_update(length)\n file_dl_progress_update(length)\n- else:\n- # If the for loop didn't break, add this to downloaded files.\n- downloaded_files.append(dest)\n \n file_checksums_to_annotate.append(f.id)\n \n@@ -162,17 +158,15 @@\n number_of_skipped_files += 1\n overall_progress_update(f.file_size)\n \n+ annotation.set_availability(channel_id, file_checksums_to_annotate)\n+\n+ if number_of_skipped_files > 0:\n+ logging.warning(\n+ \"{} files are skipped, because they are not found in the given external drive.\".format(\n+ number_of_skipped_files))\n+\n if self.is_cancelled():\n- # Cancelled, clean up any already downloading files.\n- for dest in downloaded_files:\n- os.remove(dest)\n self.cancel()\n- else:\n- if number_of_skipped_files > 0:\n- logging.warning(\n- \"{} files are skipped, because they are not found in the given external drive.\".format(number_of_skipped_files))\n-\n- annotation.set_availability(channel_id, file_checksums_to_annotate)\n \n def handle_async(self, *args, **options):\n if options['command'] == 'network':\n", "issue": "no mastery achieved on pdf's\n### Observed behavior\r\n\r\nMastery is not achieved (no complete icon, and no points earned) after spending time scrolling through the pdf.\r\n\r\n### Expected behavior\r\n\r\nPDF is 9 pages long. Mastery is expected after 4.5 minutes. The 'in progress' icon appears but mastery is never achieved.\r\n\r\n### Steps to reproduce\r\n\r\n1. Select a pdf content, like this one: https://instantschools-v2.learningequality.org/learn/#/topics/c/2d902e00e30245799081fda4da5cabd0\r\n2. Keep pdf open for 5+ minutes\r\n3. No mastery achieved \r\n\r\n\r\n\r\n### Context\r\n\r\n* Kolibri version: 0.6.1 pex on online demo server\r\n* Operating system: pex\r\n* Browser: Chrome\r\nURL: https://instantschools-v2.learningequality.org/learn/#/topics/c/2d902e00e30245799081fda4da5cabd0\r\n\n", "before_files": [{"content": "import os\nimport logging as logger\nfrom django.conf import settings\nfrom django.core.management.base import CommandError\nfrom kolibri.tasks.management.commands.base import AsyncCommand\nfrom requests.exceptions import HTTPError\n\nfrom ...utils import annotation, paths, transfer, import_export_content\n\n# constants to specify the transfer method to be used\nDOWNLOAD_METHOD = \"download\"\nCOPY_METHOD = \"copy\"\n\nlogging = logger.getLogger(__name__)\n\n\nclass Command(AsyncCommand):\n\n def add_arguments(self, parser):\n # let's save the parser in case we need to print a help statement\n self._parser = parser\n\n # we want two groups of arguments. 
One group is when the\n # 'importcontent disk' command is given, where we'll expect a file\n # directory to be given. Another is the 'importcontent network'\n # command to be given, where we'll expect a channel.\n\n # However, some optional arguments apply to both groups. Add them here!\n node_ids_help_text = \"\"\"\n Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.\n\n e.g.\n\n kolibri manage importcontent --node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--node_ids\", \"-n\",\n # Split the comma separated string we get, into a list of strings\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"node_ids\",\n help=node_ids_help_text,\n )\n\n exclude_node_ids_help_text = \"\"\"\n Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.\n\n e.g.\n\n kolibri manage importcontent --exclude_node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--exclude_node_ids\",\n # Split the comma separated string we get, into a list of string\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"exclude_node_ids\",\n help=exclude_node_ids_help_text\n )\n\n # to implement these two groups of commands and their corresponding\n # arguments, we'll need argparse.subparsers.\n subparsers = parser.add_subparsers(dest='command', help=\"The following subcommands are available.\")\n\n # the network command has a channel id required positional argument,\n # and some optional content_id arguments.\n\n # TODO: implement a --content-domain parameter, for optionally\n # specifying the domain for the curation server.\n\n # Note: cmd should be the management command instance, as though the\n # interface for adding arguments is argparse, Django overrides the\n # parser object with its own thing, hence why we need to add cmd. 
See\n # http://stackoverflow.com/questions/36706220/is-it-possible-to-create-subparsers-in-a-django-management-command\n network_subparser = subparsers.add_parser(\n name='network',\n cmd=self,\n help=\"Download the given channel through the network.\",\n )\n network_subparser.add_argument('channel_id', type=str)\n\n default_studio_url = settings.CENTRAL_CONTENT_DOWNLOAD_BASE_URL\n network_subparser.add_argument(\n \"--baseurl\",\n type=str,\n default=default_studio_url,\n dest=\"baseurl\",\n )\n\n disk_subparser = subparsers.add_parser(\n name='disk',\n cmd=self,\n help='Copy the content from the given folder.'\n )\n disk_subparser.add_argument('channel_id', type=str)\n disk_subparser.add_argument('directory', type=str)\n\n def download_content(self, channel_id, node_ids=None, exclude_node_ids=None, baseurl=None):\n self._transfer(DOWNLOAD_METHOD, channel_id, node_ids=node_ids, exclude_node_ids=exclude_node_ids, baseurl=baseurl)\n\n def copy_content(self, channel_id, path, node_ids=None, exclude_node_ids=None):\n self._transfer(COPY_METHOD, channel_id, path=path, node_ids=node_ids, exclude_node_ids=exclude_node_ids)\n\n def _transfer(self, method, channel_id, path=None, node_ids=None, exclude_node_ids=None, baseurl=None): # noqa: max-complexity=16\n\n files_to_download, total_bytes_to_transfer = import_export_content.get_files_to_transfer(\n channel_id, node_ids, exclude_node_ids, False)\n\n downloaded_files = []\n number_of_skipped_files = 0\n file_checksums_to_annotate = []\n\n with self.start_progress(total=total_bytes_to_transfer) as overall_progress_update:\n\n for f in files_to_download:\n\n if self.is_cancelled():\n break\n\n filename = f.get_filename()\n dest = paths.get_content_storage_file_path(filename)\n\n # if the file already exists, add its size to our overall progress, and skip\n if os.path.isfile(dest) and os.path.getsize(dest) == f.file_size:\n overall_progress_update(f.file_size)\n file_checksums_to_annotate.append(f.id)\n continue\n\n # determine where we're downloading/copying from, and create appropriate transfer object\n if method == DOWNLOAD_METHOD:\n url = paths.get_content_storage_remote_url(filename, baseurl=baseurl)\n filetransfer = transfer.FileDownload(url, dest)\n elif method == COPY_METHOD:\n srcpath = paths.get_content_storage_file_path(filename, datafolder=path)\n filetransfer = transfer.FileCopy(srcpath, dest)\n\n try:\n\n with filetransfer:\n\n with self.start_progress(total=filetransfer.total_size) as file_dl_progress_update:\n\n for chunk in filetransfer:\n if self.is_cancelled():\n filetransfer.cancel()\n break\n length = len(chunk)\n overall_progress_update(length)\n file_dl_progress_update(length)\n else:\n # If the for loop didn't break, add this to downloaded files.\n downloaded_files.append(dest)\n\n file_checksums_to_annotate.append(f.id)\n\n except HTTPError:\n overall_progress_update(f.file_size)\n\n except OSError:\n number_of_skipped_files += 1\n overall_progress_update(f.file_size)\n\n if self.is_cancelled():\n # Cancelled, clean up any already downloading files.\n for dest in downloaded_files:\n os.remove(dest)\n self.cancel()\n else:\n if number_of_skipped_files > 0:\n logging.warning(\n \"{} files are skipped, because they are not found in the given external drive.\".format(number_of_skipped_files))\n\n annotation.set_availability(channel_id, file_checksums_to_annotate)\n\n def handle_async(self, *args, **options):\n if options['command'] == 'network':\n self.download_content(options[\"channel_id\"],\n node_ids=options[\"node_ids\"],\n 
exclude_node_ids=options['exclude_node_ids'],\n baseurl=options[\"baseurl\"])\n elif options['command'] == 'disk':\n self.copy_content(options[\"channel_id\"],\n options[\"directory\"],\n node_ids=options[\"node_ids\"],\n exclude_node_ids=options[\"exclude_node_ids\"])\n else:\n self._parser.print_help()\n raise CommandError(\"Please give a valid subcommand. You gave: {}\".format(options[\"command\"]))\n", "path": "kolibri/content/management/commands/importcontent.py"}], "after_files": [{"content": "import os\nimport logging as logger\nfrom django.conf import settings\nfrom django.core.management.base import CommandError\nfrom kolibri.tasks.management.commands.base import AsyncCommand\nfrom requests.exceptions import HTTPError\n\nfrom ...utils import annotation, import_export_content, paths, transfer\n\n# constants to specify the transfer method to be used\nDOWNLOAD_METHOD = \"download\"\nCOPY_METHOD = \"copy\"\n\nlogging = logger.getLogger(__name__)\n\n\nclass Command(AsyncCommand):\n\n def add_arguments(self, parser):\n # let's save the parser in case we need to print a help statement\n self._parser = parser\n\n # we want two groups of arguments. One group is when the\n # 'importcontent disk' command is given, where we'll expect a file\n # directory to be given. Another is the 'importcontent network'\n # command to be given, where we'll expect a channel.\n\n # However, some optional arguments apply to both groups. Add them here!\n node_ids_help_text = \"\"\"\n Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.\n\n e.g.\n\n kolibri manage importcontent --node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--node_ids\", \"-n\",\n # Split the comma separated string we get, into a list of strings\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"node_ids\",\n help=node_ids_help_text,\n )\n\n exclude_node_ids_help_text = \"\"\"\n Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.\n\n e.g.\n\n kolibri manage importcontent --exclude_node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--exclude_node_ids\",\n # Split the comma separated string we get, into a list of string\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"exclude_node_ids\",\n help=exclude_node_ids_help_text\n )\n\n # to implement these two groups of commands and their corresponding\n # arguments, we'll need argparse.subparsers.\n subparsers = parser.add_subparsers(dest='command', help=\"The following subcommands are available.\")\n\n # the network command has a channel id required positional argument,\n # and some optional content_id arguments.\n\n # TODO: implement a --content-domain parameter, for optionally\n # specifying the domain for the curation server.\n\n # Note: cmd should be the management command instance, as though the\n # interface for adding arguments is argparse, Django overrides the\n # parser object with its own thing, hence why we need to add cmd. 
See\n # http://stackoverflow.com/questions/36706220/is-it-possible-to-create-subparsers-in-a-django-management-command\n network_subparser = subparsers.add_parser(\n name='network',\n cmd=self,\n help=\"Download the given channel through the network.\",\n )\n network_subparser.add_argument('channel_id', type=str)\n\n default_studio_url = settings.CENTRAL_CONTENT_DOWNLOAD_BASE_URL\n network_subparser.add_argument(\n \"--baseurl\",\n type=str,\n default=default_studio_url,\n dest=\"baseurl\",\n )\n\n disk_subparser = subparsers.add_parser(\n name='disk',\n cmd=self,\n help='Copy the content from the given folder.'\n )\n disk_subparser.add_argument('channel_id', type=str)\n disk_subparser.add_argument('directory', type=str)\n\n def download_content(self, channel_id, node_ids=None, exclude_node_ids=None, baseurl=None):\n self._transfer(DOWNLOAD_METHOD, channel_id, node_ids=node_ids, exclude_node_ids=exclude_node_ids, baseurl=baseurl)\n\n def copy_content(self, channel_id, path, node_ids=None, exclude_node_ids=None):\n self._transfer(COPY_METHOD, channel_id, path=path, node_ids=node_ids, exclude_node_ids=exclude_node_ids)\n\n def _transfer(self, method, channel_id, path=None, node_ids=None, exclude_node_ids=None, baseurl=None): # noqa: max-complexity=16\n\n files_to_download, total_bytes_to_transfer = import_export_content.get_files_to_transfer(\n channel_id, node_ids, exclude_node_ids, False)\n\n number_of_skipped_files = 0\n file_checksums_to_annotate = []\n\n with self.start_progress(total=total_bytes_to_transfer) as overall_progress_update:\n\n for f in files_to_download:\n\n if self.is_cancelled():\n break\n\n filename = f.get_filename()\n dest = paths.get_content_storage_file_path(filename)\n\n # if the file already exists, add its size to our overall progress, and skip\n if os.path.isfile(dest) and os.path.getsize(dest) == f.file_size:\n overall_progress_update(f.file_size)\n file_checksums_to_annotate.append(f.id)\n continue\n\n # determine where we're downloading/copying from, and create appropriate transfer object\n if method == DOWNLOAD_METHOD:\n url = paths.get_content_storage_remote_url(filename, baseurl=baseurl)\n filetransfer = transfer.FileDownload(url, dest)\n elif method == COPY_METHOD:\n srcpath = paths.get_content_storage_file_path(filename, datafolder=path)\n filetransfer = transfer.FileCopy(srcpath, dest)\n\n try:\n\n with filetransfer:\n\n with self.start_progress(total=filetransfer.total_size) as file_dl_progress_update:\n\n for chunk in filetransfer:\n if self.is_cancelled():\n filetransfer.cancel()\n break\n length = len(chunk)\n overall_progress_update(length)\n file_dl_progress_update(length)\n\n file_checksums_to_annotate.append(f.id)\n\n except HTTPError:\n overall_progress_update(f.file_size)\n\n except OSError:\n number_of_skipped_files += 1\n overall_progress_update(f.file_size)\n\n annotation.set_availability(channel_id, file_checksums_to_annotate)\n\n if number_of_skipped_files > 0:\n logging.warning(\n \"{} files are skipped, because they are not found in the given external drive.\".format(\n number_of_skipped_files))\n\n if self.is_cancelled():\n self.cancel()\n\n def handle_async(self, *args, **options):\n if options['command'] == 'network':\n self.download_content(options[\"channel_id\"],\n node_ids=options[\"node_ids\"],\n exclude_node_ids=options['exclude_node_ids'],\n baseurl=options[\"baseurl\"])\n elif options['command'] == 'disk':\n self.copy_content(options[\"channel_id\"],\n options[\"directory\"],\n node_ids=options[\"node_ids\"],\n 
exclude_node_ids=options[\"exclude_node_ids\"])\n else:\n self._parser.print_help()\n raise CommandError(\"Please give a valid subcommand. You gave: {}\".format(options[\"command\"]))\n", "path": "kolibri/content/management/commands/importcontent.py"}]} | 2,595 | 498 |
gh_patches_debug_29650 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1231 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pydantic conversion fails for Union types
If you include a field in a Pydantic object with a `Union[A, B]` type, where A and B are Pydantic objects, then conversion fails with `AttributeError: 'StrawberryUnion' object has no attribute '_type_definition'`. We need to handle the branches of the union separately to get the right behavior here.
```
from typing import Union

import pydantic
import strawberry

class BranchA(pydantic.BaseModel):
    field_a: str

class BranchB(pydantic.BaseModel):
    field_b: int

class User(pydantic.BaseModel):
    age: int
    union_field: Union[BranchA, BranchB]

@strawberry.experimental.pydantic.type(BranchA, fields=["field_a"])
class BranchAType:
    pass

@strawberry.experimental.pydantic.type(BranchB, fields=["field_b"])
class BranchBType:
    pass

@strawberry.experimental.pydantic.type(User, fields=["age", "union_field"])
class UserType:
    pass

origin_user = User(age=1, union_field=BranchA(field_a="abc"))
user = UserType.from_pydantic(origin_user)
# raises AttributeError, should return UserType(age=1, union_field=BranchAType(field_a="abc"))
```
--- END ISSUE ---
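For illustration, a minimal sketch of the missing union handling (assumptions: `StrawberryUnion` is importable from `strawberry.union` and exposes its member types as `.types`; the `_strawberry_type` back-reference that the decorator stores on the Pydantic model, visible in `object_type.py` below, identifies which branch a given instance belongs to):
```python
from strawberry.union import StrawberryUnion  # assumed import path

from strawberry.experimental.pydantic.conversion import (
    _convert_from_pydantic_to_strawberry_type,  # helper shown in conversion.py below
)


def _convert_union(type_: StrawberryUnion, data, extra=None):
    # Pick the union member that was generated from the Pydantic class of `data`,
    # then recurse into the normal conversion path for that member.
    for member in type_.types:  # assumes the union exposes its members here
        if getattr(type(data), "_strawberry_type", None) is member:
            return _convert_from_pydantic_to_strawberry_type(
                member, data_from_model=data, extra=extra
            )
    raise TypeError(f"No Strawberry type registered for {type(data)!r}")
```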
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/experimental/pydantic/conversion.py`
Content:
```
1 from typing import Union, cast
2
3 from strawberry.field import StrawberryField
4 from strawberry.scalars import is_scalar
5 from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType
6
7
8 def _convert_from_pydantic_to_strawberry_type(
9 type_: Union[StrawberryType, type], data_from_model=None, extra=None
10 ):
11 data = data_from_model if data_from_model is not None else extra
12
13 if isinstance(type_, StrawberryOptional):
14 if data is None:
15 return data
16 return _convert_from_pydantic_to_strawberry_type(
17 type_.of_type, data_from_model=data, extra=extra
18 )
19 if isinstance(type_, StrawberryList):
20 items = []
21 for index, item in enumerate(data):
22 items.append(
23 _convert_from_pydantic_to_strawberry_type(
24 type_.of_type,
25 data_from_model=item,
26 extra=extra[index] if extra else None,
27 )
28 )
29
30 return items
31 elif is_scalar(type_):
32 return data
33 else:
34 return convert_pydantic_model_to_strawberry_class(
35 type_, model_instance=data_from_model, extra=extra
36 )
37
38
39 def convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):
40 extra = extra or {}
41 kwargs = {}
42
43 for field in cls._type_definition.fields:
44 field = cast(StrawberryField, field)
45 python_name = field.python_name
46
47 data_from_extra = extra.get(python_name, None)
48 data_from_model = (
49 getattr(model_instance, python_name, None) if model_instance else None
50 )
51 kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(
52 field.type, data_from_model, extra=data_from_extra
53 )
54
55 return cls(**kwargs)
56
```
Path: `strawberry/experimental/pydantic/object_type.py`
Content:
```
1 import builtins
2 import dataclasses
3 from functools import partial
4 from typing import Any, Dict, List, Optional, Tuple, Type, cast
5
6 from pydantic import BaseModel
7 from pydantic.fields import ModelField
8
9 from strawberry.arguments import UNSET
10 from strawberry.experimental.pydantic.conversion import (
11 convert_pydantic_model_to_strawberry_class,
12 )
13 from strawberry.experimental.pydantic.fields import get_basic_type
14 from strawberry.field import StrawberryField
15 from strawberry.object_type import _process_type, _wrap_dataclass
16 from strawberry.private import Private
17 from strawberry.types.type_resolver import _get_fields
18 from strawberry.types.types import FederationTypeParams, TypeDefinition
19
20 from .exceptions import MissingFieldsListError, UnregisteredTypeException
21
22
23 def replace_pydantic_types(type_: Any):
24 if hasattr(type_, "__args__"):
25 new_type = type_.copy_with(
26 tuple(replace_pydantic_types(t) for t in type_.__args__)
27 )
28
29 if isinstance(new_type, TypeDefinition):
30 # TODO: Not sure if this is necessary. No coverage in tests
31 # TODO: Unnecessary with StrawberryObject
32
33 new_type = builtins.type(
34 new_type.name,
35 (),
36 {"_type_definition": new_type},
37 )
38
39 return new_type
40
41 if issubclass(type_, BaseModel):
42 if hasattr(type_, "_strawberry_type"):
43 return type_._strawberry_type
44 else:
45 raise UnregisteredTypeException(type_)
46
47 return type_
48
49
50 def get_type_for_field(field: ModelField):
51 type_ = field.outer_type_
52 type_ = get_basic_type(type_)
53 type_ = replace_pydantic_types(type_)
54
55 if not field.required:
56 type_ = Optional[type_]
57
58 return type_
59
60
61 def _get_private_fields(cls: Type) -> List[dataclasses.Field]:
62 private_fields: List[dataclasses.Field] = []
63 for field in dataclasses.fields(cls):
64 if isinstance(field.type, Private):
65 private_fields.append(field)
66 return private_fields
67
68
69 def type(
70 model: Type[BaseModel],
71 *,
72 fields: List[str],
73 name: Optional[str] = None,
74 is_input: bool = False,
75 is_interface: bool = False,
76 description: Optional[str] = None,
77 federation: Optional[FederationTypeParams] = None,
78 ):
79 def wrap(cls):
80 if not fields:
81 raise MissingFieldsListError(model)
82
83 model_fields = model.__fields__
84 fields_set = set(fields)
85
86 all_fields: List[Tuple[str, Any, dataclasses.Field]] = [
87 (
88 name,
89 get_type_for_field(field),
90 StrawberryField(
91 python_name=field.name,
92 graphql_name=field.alias if field.has_alias else None,
93 default=field.default if not field.required else UNSET,
94 default_factory=(
95 field.default_factory if field.default_factory else UNSET
96 ),
97 type_annotation=get_type_for_field(field),
98 ),
99 )
100 for name, field in model_fields.items()
101 if name in fields_set
102 ]
103
104 wrapped = _wrap_dataclass(cls)
105 extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))
106 private_fields = _get_private_fields(wrapped)
107
108 all_fields.extend(
109 (
110 (
111 field.name,
112 field.type,
113 field,
114 )
115 for field in extra_fields + private_fields
116 )
117 )
118
119 # Sort fields so that fields with missing defaults go first
120 # because dataclasses require that fields with no defaults are defined
121 # first
122 missing_default = []
123 has_default = []
124 for field in all_fields:
125 if field[2].default is dataclasses.MISSING:
126 missing_default.append(field)
127 else:
128 has_default.append(field)
129
130 sorted_fields = missing_default + has_default
131
132 cls = dataclasses.make_dataclass(
133 cls.__name__,
134 sorted_fields,
135 )
136
137 _process_type(
138 cls,
139 name=name,
140 is_input=is_input,
141 is_interface=is_interface,
142 description=description,
143 federation=federation,
144 )
145
146 model._strawberry_type = cls # type: ignore
147
148 def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:
149 return convert_pydantic_model_to_strawberry_class(
150 cls=cls, model_instance=instance, extra=extra
151 )
152
153 def to_pydantic(self) -> Any:
154 instance_kwargs = dataclasses.asdict(self)
155
156 return model(**instance_kwargs)
157
158 cls.from_pydantic = staticmethod(from_pydantic)
159 cls.to_pydantic = to_pydantic
160
161 return cls
162
163 return wrap
164
165
166 input = partial(type, is_input=True)
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py
--- a/strawberry/experimental/pydantic/conversion.py
+++ b/strawberry/experimental/pydantic/conversion.py
@@ -3,6 +3,7 @@
from strawberry.field import StrawberryField
from strawberry.scalars import is_scalar
from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType
+from strawberry.union import StrawberryUnion
def _convert_from_pydantic_to_strawberry_type(
@@ -16,6 +17,16 @@
return _convert_from_pydantic_to_strawberry_type(
type_.of_type, data_from_model=data, extra=extra
)
+ if isinstance(type_, StrawberryUnion):
+ for option_type in type_.types:
+ if hasattr(option_type, "_pydantic_type"):
+ source_type = option_type._pydantic_type # type: ignore
+ else:
+ source_type = cast(type, option_type)
+ if isinstance(data, source_type):
+ return _convert_from_pydantic_to_strawberry_type(
+ option_type, data_from_model=data, extra=extra
+ )
if isinstance(type_, StrawberryList):
items = []
for index, item in enumerate(data):
diff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py
--- a/strawberry/experimental/pydantic/object_type.py
+++ b/strawberry/experimental/pydantic/object_type.py
@@ -144,6 +144,7 @@
)
model._strawberry_type = cls # type: ignore
+ cls._pydantic_type = model # type: ignore
def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:
return convert_pydantic_model_to_strawberry_class(
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py\n--- a/strawberry/experimental/pydantic/conversion.py\n+++ b/strawberry/experimental/pydantic/conversion.py\n@@ -3,6 +3,7 @@\n from strawberry.field import StrawberryField\n from strawberry.scalars import is_scalar\n from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\n+from strawberry.union import StrawberryUnion\n \n \n def _convert_from_pydantic_to_strawberry_type(\n@@ -16,6 +17,16 @@\n return _convert_from_pydantic_to_strawberry_type(\n type_.of_type, data_from_model=data, extra=extra\n )\n+ if isinstance(type_, StrawberryUnion):\n+ for option_type in type_.types:\n+ if hasattr(option_type, \"_pydantic_type\"):\n+ source_type = option_type._pydantic_type # type: ignore\n+ else:\n+ source_type = cast(type, option_type)\n+ if isinstance(data, source_type):\n+ return _convert_from_pydantic_to_strawberry_type(\n+ option_type, data_from_model=data, extra=extra\n+ )\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\ndiff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py\n--- a/strawberry/experimental/pydantic/object_type.py\n+++ b/strawberry/experimental/pydantic/object_type.py\n@@ -144,6 +144,7 @@\n )\n \n model._strawberry_type = cls # type: ignore\n+ cls._pydantic_type = model # type: ignore\n \n def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:\n return convert_pydantic_model_to_strawberry_class(\n", "issue": "Pydantic conversion fails for Union types\nIf you include a field in a Pydantic object with a `Union[A, B]` type, where A and B are Pydantic objects, then conversion fails with `AttributeError: 'StrawberryUnion' object has no attribute '_type_definition'`. 
We need to handle the branches of the union separately to get the right behavior here.\r\n\r\n```\r\nclass BranchA(pydantic.BaseModel):\r\n field_a: str\r\n\r\nclass BranchB(pydantic.BaseModel):\r\n field_b: int\r\n\r\nclass User(pydantic.BaseModel):\r\n age: int\r\n union_field: Union[BranchA, BranchB]\r\n\r\[email protected](BranchA, fields=[\"field_a\"])\r\nclass BranchAType:\r\n pass\r\n\r\[email protected](BranchB, fields=[\"field_b\"])\r\nclass BranchBType:\r\n pass\r\n\r\[email protected](User, fields=[\"age\", \"union_field\"])\r\nclass UserType:\r\n pass\r\n\r\norigin_user = User(age=1, union_field=BranchA(field_a=\"abc\"))\r\nuser = UserType.from_pydantic(origin_user)\r\n# raises AttributeError, should return UserType(age=1, union_field=BranchAType(field_a=\"abc\"))\r\n```\n", "before_files": [{"content": "from typing import Union, cast\n\nfrom strawberry.field import StrawberryField\nfrom strawberry.scalars import is_scalar\nfrom strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\n\n\ndef _convert_from_pydantic_to_strawberry_type(\n type_: Union[StrawberryType, type], data_from_model=None, extra=None\n):\n data = data_from_model if data_from_model is not None else extra\n\n if isinstance(type_, StrawberryOptional):\n if data is None:\n return data\n return _convert_from_pydantic_to_strawberry_type(\n type_.of_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n items.append(\n _convert_from_pydantic_to_strawberry_type(\n type_.of_type,\n data_from_model=item,\n extra=extra[index] if extra else None,\n )\n )\n\n return items\n elif is_scalar(type_):\n return data\n else:\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\n\n\ndef convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):\n extra = extra or {}\n kwargs = {}\n\n for field in cls._type_definition.fields:\n field = cast(StrawberryField, field)\n python_name = field.python_name\n\n data_from_extra = extra.get(python_name, None)\n data_from_model = (\n getattr(model_instance, python_name, None) if model_instance else None\n )\n kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(\n field.type, data_from_model, extra=data_from_extra\n )\n\n return cls(**kwargs)\n", "path": "strawberry/experimental/pydantic/conversion.py"}, {"content": "import builtins\nimport dataclasses\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional, Tuple, Type, cast\n\nfrom pydantic import BaseModel\nfrom pydantic.fields import ModelField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.experimental.pydantic.conversion import (\n convert_pydantic_model_to_strawberry_class,\n)\nfrom strawberry.experimental.pydantic.fields import get_basic_type\nfrom strawberry.field import StrawberryField\nfrom strawberry.object_type import _process_type, _wrap_dataclass\nfrom strawberry.private import Private\nfrom strawberry.types.type_resolver import _get_fields\nfrom strawberry.types.types import FederationTypeParams, TypeDefinition\n\nfrom .exceptions import MissingFieldsListError, UnregisteredTypeException\n\n\ndef replace_pydantic_types(type_: Any):\n if hasattr(type_, \"__args__\"):\n new_type = type_.copy_with(\n tuple(replace_pydantic_types(t) for t in type_.__args__)\n )\n\n if isinstance(new_type, TypeDefinition):\n # TODO: Not sure if this is necessary. 
No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n\n new_type = builtins.type(\n new_type.name,\n (),\n {\"_type_definition\": new_type},\n )\n\n return new_type\n\n if issubclass(type_, BaseModel):\n if hasattr(type_, \"_strawberry_type\"):\n return type_._strawberry_type\n else:\n raise UnregisteredTypeException(type_)\n\n return type_\n\n\ndef get_type_for_field(field: ModelField):\n type_ = field.outer_type_\n type_ = get_basic_type(type_)\n type_ = replace_pydantic_types(type_)\n\n if not field.required:\n type_ = Optional[type_]\n\n return type_\n\n\ndef _get_private_fields(cls: Type) -> List[dataclasses.Field]:\n private_fields: List[dataclasses.Field] = []\n for field in dataclasses.fields(cls):\n if isinstance(field.type, Private):\n private_fields.append(field)\n return private_fields\n\n\ndef type(\n model: Type[BaseModel],\n *,\n fields: List[str],\n name: Optional[str] = None,\n is_input: bool = False,\n is_interface: bool = False,\n description: Optional[str] = None,\n federation: Optional[FederationTypeParams] = None,\n):\n def wrap(cls):\n if not fields:\n raise MissingFieldsListError(model)\n\n model_fields = model.__fields__\n fields_set = set(fields)\n\n all_fields: List[Tuple[str, Any, dataclasses.Field]] = [\n (\n name,\n get_type_for_field(field),\n StrawberryField(\n python_name=field.name,\n graphql_name=field.alias if field.has_alias else None,\n default=field.default if not field.required else UNSET,\n default_factory=(\n field.default_factory if field.default_factory else UNSET\n ),\n type_annotation=get_type_for_field(field),\n ),\n )\n for name, field in model_fields.items()\n if name in fields_set\n ]\n\n wrapped = _wrap_dataclass(cls)\n extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))\n private_fields = _get_private_fields(wrapped)\n\n all_fields.extend(\n (\n (\n field.name,\n field.type,\n field,\n )\n for field in extra_fields + private_fields\n )\n )\n\n # Sort fields so that fields with missing defaults go first\n # because dataclasses require that fields with no defaults are defined\n # first\n missing_default = []\n has_default = []\n for field in all_fields:\n if field[2].default is dataclasses.MISSING:\n missing_default.append(field)\n else:\n has_default.append(field)\n\n sorted_fields = missing_default + has_default\n\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n )\n\n _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n federation=federation,\n )\n\n model._strawberry_type = cls # type: ignore\n\n def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:\n return convert_pydantic_model_to_strawberry_class(\n cls=cls, model_instance=instance, extra=extra\n )\n\n def to_pydantic(self) -> Any:\n instance_kwargs = dataclasses.asdict(self)\n\n return model(**instance_kwargs)\n\n cls.from_pydantic = staticmethod(from_pydantic)\n cls.to_pydantic = to_pydantic\n\n return cls\n\n return wrap\n\n\ninput = partial(type, is_input=True)\n", "path": "strawberry/experimental/pydantic/object_type.py"}], "after_files": [{"content": "from typing import Union, cast\n\nfrom strawberry.field import StrawberryField\nfrom strawberry.scalars import is_scalar\nfrom strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\nfrom strawberry.union import StrawberryUnion\n\n\ndef _convert_from_pydantic_to_strawberry_type(\n type_: Union[StrawberryType, type], data_from_model=None, extra=None\n):\n data = data_from_model if 
data_from_model is not None else extra\n\n if isinstance(type_, StrawberryOptional):\n if data is None:\n return data\n return _convert_from_pydantic_to_strawberry_type(\n type_.of_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryUnion):\n for option_type in type_.types:\n if hasattr(option_type, \"_pydantic_type\"):\n source_type = option_type._pydantic_type # type: ignore\n else:\n source_type = cast(type, option_type)\n if isinstance(data, source_type):\n return _convert_from_pydantic_to_strawberry_type(\n option_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n items.append(\n _convert_from_pydantic_to_strawberry_type(\n type_.of_type,\n data_from_model=item,\n extra=extra[index] if extra else None,\n )\n )\n\n return items\n elif is_scalar(type_):\n return data\n else:\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\n\n\ndef convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):\n extra = extra or {}\n kwargs = {}\n\n for field in cls._type_definition.fields:\n field = cast(StrawberryField, field)\n python_name = field.python_name\n\n data_from_extra = extra.get(python_name, None)\n data_from_model = (\n getattr(model_instance, python_name, None) if model_instance else None\n )\n kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(\n field.type, data_from_model, extra=data_from_extra\n )\n\n return cls(**kwargs)\n", "path": "strawberry/experimental/pydantic/conversion.py"}, {"content": "import builtins\nimport dataclasses\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional, Tuple, Type, cast\n\nfrom pydantic import BaseModel\nfrom pydantic.fields import ModelField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.experimental.pydantic.conversion import (\n convert_pydantic_model_to_strawberry_class,\n)\nfrom strawberry.experimental.pydantic.fields import get_basic_type\nfrom strawberry.field import StrawberryField\nfrom strawberry.object_type import _process_type, _wrap_dataclass\nfrom strawberry.private import Private\nfrom strawberry.types.type_resolver import _get_fields\nfrom strawberry.types.types import FederationTypeParams, TypeDefinition\n\nfrom .exceptions import MissingFieldsListError, UnregisteredTypeException\n\n\ndef replace_pydantic_types(type_: Any):\n if hasattr(type_, \"__args__\"):\n new_type = type_.copy_with(\n tuple(replace_pydantic_types(t) for t in type_.__args__)\n )\n\n if isinstance(new_type, TypeDefinition):\n # TODO: Not sure if this is necessary. 
No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n\n new_type = builtins.type(\n new_type.name,\n (),\n {\"_type_definition\": new_type},\n )\n\n return new_type\n\n if issubclass(type_, BaseModel):\n if hasattr(type_, \"_strawberry_type\"):\n return type_._strawberry_type\n else:\n raise UnregisteredTypeException(type_)\n\n return type_\n\n\ndef get_type_for_field(field: ModelField):\n type_ = field.outer_type_\n type_ = get_basic_type(type_)\n type_ = replace_pydantic_types(type_)\n\n if not field.required:\n type_ = Optional[type_]\n\n return type_\n\n\ndef _get_private_fields(cls: Type) -> List[dataclasses.Field]:\n private_fields: List[dataclasses.Field] = []\n for field in dataclasses.fields(cls):\n if isinstance(field.type, Private):\n private_fields.append(field)\n return private_fields\n\n\ndef type(\n model: Type[BaseModel],\n *,\n fields: List[str],\n name: Optional[str] = None,\n is_input: bool = False,\n is_interface: bool = False,\n description: Optional[str] = None,\n federation: Optional[FederationTypeParams] = None,\n):\n def wrap(cls):\n if not fields:\n raise MissingFieldsListError(model)\n\n model_fields = model.__fields__\n fields_set = set(fields)\n\n all_fields: List[Tuple[str, Any, dataclasses.Field]] = [\n (\n name,\n get_type_for_field(field),\n StrawberryField(\n python_name=field.name,\n graphql_name=field.alias if field.has_alias else None,\n default=field.default if not field.required else UNSET,\n default_factory=(\n field.default_factory if field.default_factory else UNSET\n ),\n type_annotation=get_type_for_field(field),\n ),\n )\n for name, field in model_fields.items()\n if name in fields_set\n ]\n\n wrapped = _wrap_dataclass(cls)\n extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))\n private_fields = _get_private_fields(wrapped)\n\n all_fields.extend(\n (\n (\n field.name,\n field.type,\n field,\n )\n for field in extra_fields + private_fields\n )\n )\n\n # Sort fields so that fields with missing defaults go first\n # because dataclasses require that fields with no defaults are defined\n # first\n missing_default = []\n has_default = []\n for field in all_fields:\n if field[2].default is dataclasses.MISSING:\n missing_default.append(field)\n else:\n has_default.append(field)\n\n sorted_fields = missing_default + has_default\n\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n )\n\n _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n federation=federation,\n )\n\n model._strawberry_type = cls # type: ignore\n cls._pydantic_type = model # type: ignore\n\n def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:\n return convert_pydantic_model_to_strawberry_class(\n cls=cls, model_instance=instance, extra=extra\n )\n\n def to_pydantic(self) -> Any:\n instance_kwargs = dataclasses.asdict(self)\n\n return model(**instance_kwargs)\n\n cls.from_pydantic = staticmethod(from_pydantic)\n cls.to_pydantic = to_pydantic\n\n return cls\n\n return wrap\n\n\ninput = partial(type, is_input=True)\n", "path": "strawberry/experimental/pydantic/object_type.py"}]} | 2,466 | 435 |
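The golden diff above resolves a `StrawberryUnion` by matching the runtime value against each member type's registered pydantic model (`_pydantic_type`) with `isinstance`, then recursing into the matching branch. A minimal sketch of that dispatch idea, detached from Strawberry's internals (`option_types` and `convert` are hypothetical stand-ins):

```python
def convert_union(option_types, data, convert):
    """Pick the union member whose source pydantic model matches `data`."""
    for option_type in option_types:
        # Generated strawberry types remember their source model as _pydantic_type;
        # fall back to the type itself when that attribute is absent.
        source_type = getattr(option_type, "_pydantic_type", option_type)
        if isinstance(data, source_type):
            return convert(option_type, data)
    raise TypeError(f"no union member matches {type(data)!r}")
```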
gh_patches_debug_36636 | rasdani/github-patches | git_diff | falconry__falcon-541 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Has compile_uri_template been removed?
I can't see it in the code any more.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/routing/util.py`
Content:
```
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from falcon import HTTP_METHODS, responders
16 from falcon.hooks import _wrap_with_hooks
17
18
19 def create_http_method_map(resource, before, after):
20 """Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.
21
22 Args:
23 resource: An object with *responder* methods, following the naming
24 convention *on_\**, that correspond to each method the resource
25 supports. For example, if a resource supports GET and POST, it
26 should define ``on_get(self, req, resp)`` and
27 ``on_post(self, req, resp)``.
28 before: An action hook or ``list`` of hooks to be called before each
29 *on_\** responder defined by the resource.
30 after: An action hook or ``list`` of hooks to be called after each
31 *on_\** responder defined by the resource.
32
33 Returns:
34 dict: A mapping of HTTP methods to responders.
35
36 """
37
38 method_map = {}
39
40 for method in HTTP_METHODS:
41 try:
42 responder = getattr(resource, 'on_' + method.lower())
43 except AttributeError:
44 # resource does not implement this method
45 pass
46 else:
47 # Usually expect a method, but any callable will do
48 if callable(responder):
49 responder = _wrap_with_hooks(
50 before, after, responder, resource)
51 method_map[method] = responder
52
53 # Attach a resource for unsupported HTTP methods
54 allowed_methods = sorted(list(method_map.keys()))
55
56 # NOTE(sebasmagri): We want the OPTIONS and 405 (Not Allowed) methods
57 # responders to be wrapped on global hooks
58 if 'OPTIONS' not in method_map:
59 # OPTIONS itself is intentionally excluded from the Allow header
60 responder = responders.create_default_options(
61 allowed_methods)
62 method_map['OPTIONS'] = _wrap_with_hooks(
63 before, after, responder, resource)
64 allowed_methods.append('OPTIONS')
65
66 na_responder = responders.create_method_not_allowed(allowed_methods)
67
68 for method in HTTP_METHODS:
69 if method not in allowed_methods:
70 method_map[method] = _wrap_with_hooks(
71 before, after, na_responder, resource)
72
73 return method_map
74
```
Path: `falcon/routing/__init__.py`
Content:
```
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from falcon.routing.compiled import CompiledRouter
16 from falcon.routing.util import create_http_method_map # NOQA
17
18
19 DefaultRouter = CompiledRouter
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/falcon/routing/__init__.py b/falcon/routing/__init__.py
--- a/falcon/routing/__init__.py
+++ b/falcon/routing/__init__.py
@@ -14,6 +14,7 @@
from falcon.routing.compiled import CompiledRouter
from falcon.routing.util import create_http_method_map # NOQA
+from falcon.routing.util import compile_uri_template # NOQA
DefaultRouter = CompiledRouter
diff --git a/falcon/routing/util.py b/falcon/routing/util.py
--- a/falcon/routing/util.py
+++ b/falcon/routing/util.py
@@ -12,10 +12,72 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import re
+
+import six
+
from falcon import HTTP_METHODS, responders
from falcon.hooks import _wrap_with_hooks
+# NOTE(kgriffs): Published method; take care to avoid breaking changes.
+def compile_uri_template(template):
+ """Compile the given URI template string into a pattern matcher.
+
+ This function can be used to construct custom routing engines that
+ iterate through a list of possible routes, attempting to match
+ an incoming request against each route's compiled regular expression.
+
+ Each field is converted to a named group, so that when a match
+ is found, the fields can be easily extracted using
+ :py:meth:`re.MatchObject.groupdict`.
+
+ This function does not support the more flexible templating
+ syntax used in the default router. Only simple paths with bracketed
+ field expressions are recognized. For example::
+
+ /
+ /books
+ /books/{isbn}
+ /books/{isbn}/characters
+ /books/{isbn}/characters/{name}
+
+ Also, note that if the template contains a trailing slash character,
+ it will be stripped in order to normalize the routing logic.
+
+ Args:
+ template(str): The template to compile. Note that field names are
+ restricted to ASCII a-z, A-Z, and the underscore character.
+
+ Returns:
+ tuple: (template_field_names, template_regex)
+ """
+
+ if not isinstance(template, six.string_types):
+ raise TypeError('uri_template is not a string')
+
+ if not template.startswith('/'):
+ raise ValueError("uri_template must start with '/'")
+
+ if '//' in template:
+ raise ValueError("uri_template may not contain '//'")
+
+ if template != '/' and template.endswith('/'):
+ template = template[:-1]
+
+ expression_pattern = r'{([a-zA-Z][a-zA-Z_]*)}'
+
+ # Get a list of field names
+ fields = set(re.findall(expression_pattern, template))
+
+ # Convert Level 1 var patterns to equivalent named regex groups
+ escaped = re.sub(r'[\.\(\)\[\]\?\*\+\^\|]', r'\\\g<0>', template)
+ pattern = re.sub(expression_pattern, r'(?P<\1>[^/]+)', escaped)
+ pattern = r'\A' + pattern + r'\Z'
+
+ return fields, re.compile(pattern, re.IGNORECASE)
+
+
def create_http_method_map(resource, before, after):
"""Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.
| {"golden_diff": "diff --git a/falcon/routing/__init__.py b/falcon/routing/__init__.py\n--- a/falcon/routing/__init__.py\n+++ b/falcon/routing/__init__.py\n@@ -14,6 +14,7 @@\n \n from falcon.routing.compiled import CompiledRouter\n from falcon.routing.util import create_http_method_map # NOQA\n+from falcon.routing.util import compile_uri_template # NOQA\n \n \n DefaultRouter = CompiledRouter\ndiff --git a/falcon/routing/util.py b/falcon/routing/util.py\n--- a/falcon/routing/util.py\n+++ b/falcon/routing/util.py\n@@ -12,10 +12,72 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import re\n+\n+import six\n+\n from falcon import HTTP_METHODS, responders\n from falcon.hooks import _wrap_with_hooks\n \n \n+# NOTE(kgriffs): Published method; take care to avoid breaking changes.\n+def compile_uri_template(template):\n+ \"\"\"Compile the given URI template string into a pattern matcher.\n+\n+ This function can be used to construct custom routing engines that\n+ iterate through a list of possible routes, attempting to match\n+ an incoming request against each route's compiled regular expression.\n+\n+ Each field is converted to a named group, so that when a match\n+ is found, the fields can be easily extracted using\n+ :py:meth:`re.MatchObject.groupdict`.\n+\n+ This function does not support the more flexible templating\n+ syntax used in the default router. Only simple paths with bracketed\n+ field expressions are recognized. For example::\n+\n+ /\n+ /books\n+ /books/{isbn}\n+ /books/{isbn}/characters\n+ /books/{isbn}/characters/{name}\n+\n+ Also, note that if the template contains a trailing slash character,\n+ it will be stripped in order to normalize the routing logic.\n+\n+ Args:\n+ template(str): The template to compile. 
Note that field names are\n+ restricted to ASCII a-z, A-Z, and the underscore character.\n+\n+ Returns:\n+ tuple: (template_field_names, template_regex)\n+ \"\"\"\n+\n+ if not isinstance(template, six.string_types):\n+ raise TypeError('uri_template is not a string')\n+\n+ if not template.startswith('/'):\n+ raise ValueError(\"uri_template must start with '/'\")\n+\n+ if '//' in template:\n+ raise ValueError(\"uri_template may not contain '//'\")\n+\n+ if template != '/' and template.endswith('/'):\n+ template = template[:-1]\n+\n+ expression_pattern = r'{([a-zA-Z][a-zA-Z_]*)}'\n+\n+ # Get a list of field names\n+ fields = set(re.findall(expression_pattern, template))\n+\n+ # Convert Level 1 var patterns to equivalent named regex groups\n+ escaped = re.sub(r'[\\.\\(\\)\\[\\]\\?\\*\\+\\^\\|]', r'\\\\\\g<0>', template)\n+ pattern = re.sub(expression_pattern, r'(?P<\\1>[^/]+)', escaped)\n+ pattern = r'\\A' + pattern + r'\\Z'\n+\n+ return fields, re.compile(pattern, re.IGNORECASE)\n+\n+\n def create_http_method_map(resource, before, after):\n \"\"\"Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.\n", "issue": "Has compile_uri_template been removed?\nI can't see it in the code any more.\n\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon import HTTP_METHODS, responders\nfrom falcon.hooks import _wrap_with_hooks\n\n\ndef create_http_method_map(resource, before, after):\n \"\"\"Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.\n\n Args:\n resource: An object with *responder* methods, following the naming\n convention *on_\\**, that correspond to each method the resource\n supports. 
For example, if a resource supports GET and POST, it\n should define ``on_get(self, req, resp)`` and\n ``on_post(self, req, resp)``.\n before: An action hook or ``list`` of hooks to be called before each\n *on_\\** responder defined by the resource.\n after: An action hook or ``list`` of hooks to be called after each\n *on_\\** responder defined by the resource.\n\n Returns:\n dict: A mapping of HTTP methods to responders.\n\n \"\"\"\n\n method_map = {}\n\n for method in HTTP_METHODS:\n try:\n responder = getattr(resource, 'on_' + method.lower())\n except AttributeError:\n # resource does not implement this method\n pass\n else:\n # Usually expect a method, but any callable will do\n if callable(responder):\n responder = _wrap_with_hooks(\n before, after, responder, resource)\n method_map[method] = responder\n\n # Attach a resource for unsupported HTTP methods\n allowed_methods = sorted(list(method_map.keys()))\n\n # NOTE(sebasmagri): We want the OPTIONS and 405 (Not Allowed) methods\n # responders to be wrapped on global hooks\n if 'OPTIONS' not in method_map:\n # OPTIONS itself is intentionally excluded from the Allow header\n responder = responders.create_default_options(\n allowed_methods)\n method_map['OPTIONS'] = _wrap_with_hooks(\n before, after, responder, resource)\n allowed_methods.append('OPTIONS')\n\n na_responder = responders.create_method_not_allowed(allowed_methods)\n\n for method in HTTP_METHODS:\n if method not in allowed_methods:\n method_map[method] = _wrap_with_hooks(\n before, after, na_responder, resource)\n\n return method_map\n", "path": "falcon/routing/util.py"}, {"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.routing.compiled import CompiledRouter\nfrom falcon.routing.util import create_http_method_map # NOQA\n\n\nDefaultRouter = CompiledRouter\n", "path": "falcon/routing/__init__.py"}], "after_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport re\n\nimport six\n\nfrom falcon import HTTP_METHODS, responders\nfrom falcon.hooks import _wrap_with_hooks\n\n\n# NOTE(kgriffs): Published method; take care to avoid breaking changes.\ndef compile_uri_template(template):\n \"\"\"Compile the given URI template string into a pattern matcher.\n\n This function can be used to construct custom routing engines that\n iterate through a list of possible routes, attempting to match\n an incoming request against each route's compiled regular 
expression.\n\n Each field is converted to a named group, so that when a match\n is found, the fields can be easily extracted using\n :py:meth:`re.MatchObject.groupdict`.\n\n This function does not support the more flexible templating\n syntax used in the default router. Only simple paths with bracketed\n field expressions are recognized. For example::\n\n /\n /books\n /books/{isbn}\n /books/{isbn}/characters\n /books/{isbn}/characters/{name}\n\n Also, note that if the template contains a trailing slash character,\n it will be stripped in order to normalize the routing logic.\n\n Args:\n template(str): The template to compile. Note that field names are\n restricted to ASCII a-z, A-Z, and the underscore character.\n\n Returns:\n tuple: (template_field_names, template_regex)\n \"\"\"\n\n if not isinstance(template, six.string_types):\n raise TypeError('uri_template is not a string')\n\n if not template.startswith('/'):\n raise ValueError(\"uri_template must start with '/'\")\n\n if '//' in template:\n raise ValueError(\"uri_template may not contain '//'\")\n\n if template != '/' and template.endswith('/'):\n template = template[:-1]\n\n expression_pattern = r'{([a-zA-Z][a-zA-Z_]*)}'\n\n # Get a list of field names\n fields = set(re.findall(expression_pattern, template))\n\n # Convert Level 1 var patterns to equivalent named regex groups\n escaped = re.sub(r'[\\.\\(\\)\\[\\]\\?\\*\\+\\^\\|]', r'\\\\\\g<0>', template)\n pattern = re.sub(expression_pattern, r'(?P<\\1>[^/]+)', escaped)\n pattern = r'\\A' + pattern + r'\\Z'\n\n return fields, re.compile(pattern, re.IGNORECASE)\n\n\ndef create_http_method_map(resource, before, after):\n \"\"\"Maps HTTP methods (e.g., 'GET', 'POST') to methods of a resource object.\n\n Args:\n resource: An object with *responder* methods, following the naming\n convention *on_\\**, that correspond to each method the resource\n supports. 
For example, if a resource supports GET and POST, it\n should define ``on_get(self, req, resp)`` and\n ``on_post(self, req, resp)``.\n before: An action hook or ``list`` of hooks to be called before each\n *on_\\** responder defined by the resource.\n after: An action hook or ``list`` of hooks to be called after each\n *on_\\** responder defined by the resource.\n\n Returns:\n dict: A mapping of HTTP methods to responders.\n\n \"\"\"\n\n method_map = {}\n\n for method in HTTP_METHODS:\n try:\n responder = getattr(resource, 'on_' + method.lower())\n except AttributeError:\n # resource does not implement this method\n pass\n else:\n # Usually expect a method, but any callable will do\n if callable(responder):\n responder = _wrap_with_hooks(\n before, after, responder, resource)\n method_map[method] = responder\n\n # Attach a resource for unsupported HTTP methods\n allowed_methods = sorted(list(method_map.keys()))\n\n # NOTE(sebasmagri): We want the OPTIONS and 405 (Not Allowed) methods\n # responders to be wrapped on global hooks\n if 'OPTIONS' not in method_map:\n # OPTIONS itself is intentionally excluded from the Allow header\n responder = responders.create_default_options(\n allowed_methods)\n method_map['OPTIONS'] = _wrap_with_hooks(\n before, after, responder, resource)\n allowed_methods.append('OPTIONS')\n\n na_responder = responders.create_method_not_allowed(allowed_methods)\n\n for method in HTTP_METHODS:\n if method not in allowed_methods:\n method_map[method] = _wrap_with_hooks(\n before, after, na_responder, resource)\n\n return method_map\n", "path": "falcon/routing/util.py"}, {"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.routing.compiled import CompiledRouter\nfrom falcon.routing.util import create_http_method_map # NOQA\nfrom falcon.routing.util import compile_uri_template # NOQA\n\n\nDefaultRouter = CompiledRouter\n", "path": "falcon/routing/__init__.py"}]} | 1,253 | 752 |
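The restored `compile_uri_template` above converts each `{field}` segment into a named regex group. A rough usage sketch follows, with the helper re-stated in simplified form (the trailing-slash normalization and input validation from the patch are dropped):

```python
import re

def compile_uri_template(template):
    # Simplified re-statement of the patched helper above.
    expression_pattern = r'{([a-zA-Z][a-zA-Z_]*)}'
    fields = set(re.findall(expression_pattern, template))
    escaped = re.sub(r'[\.\(\)\[\]\?\*\+\^\|]', r'\\\g<0>', template)
    pattern = re.sub(expression_pattern, r'(?P<\1>[^/]+)', escaped)
    return fields, re.compile(r'\A' + pattern + r'\Z', re.IGNORECASE)

fields, regex = compile_uri_template('/books/{isbn}/characters/{name}')
match = regex.match('/books/0765350386/characters/kellhus')
print(sorted(fields))        # ['isbn', 'name']
print(match.groupdict())     # {'isbn': '0765350386', 'name': 'kellhus'}
```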
gh_patches_debug_9545 | rasdani/github-patches | git_diff | fossasia__open-event-server-4310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add email to valid types in custom-form
**Current**
Currently we are not able to set an email type on the custom-form, which leads to `Error: 422`.
**Expected**
email should be a valid type for the custom-form
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/custom_forms.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from marshmallow_jsonapi import fields
4 import marshmallow.validate as validate
5 from app.api.helpers.permissions import jwt_required
6 from flask_rest_jsonapi.exceptions import ObjectNotFound
7
8 from app.api.bootstrap import api
9 from app.api.helpers.utilities import dasherize
10 from app.models import db
11 from app.models.custom_form import CustomForms
12 from app.models.event import Event
13 from app.api.helpers.db import safe_query
14 from app.api.helpers.utilities import require_relationship
15 from app.api.helpers.permission_manager import has_access
16 from app.api.helpers.query import event_query
17
18
19 class CustomFormSchema(Schema):
20 """
21 API Schema for Custom Forms database model
22 """
23 class Meta:
24 """
25 Meta class for CustomForm Schema
26 """
27 type_ = 'custom-form'
28 self_view = 'v1.custom_form_detail'
29 self_view_kwargs = {'id': '<id>'}
30 inflect = dasherize
31
32 id = fields.Integer(dump_only=True)
33 field_identifier = fields.Str(required=True)
34 form = fields.Str(required=True)
35 type = fields.Str(default="text", validate=validate.OneOf(
36 choices=["text", "checkbox", "select", "file", "image"]))
37 is_required = fields.Boolean(default=False)
38 is_included = fields.Boolean(default=False)
39 is_fixed = fields.Boolean(default=False)
40 event = Relationship(attribute='event',
41 self_view='v1.custom_form_event',
42 self_view_kwargs={'id': '<id>'},
43 related_view='v1.event_detail',
44 related_view_kwargs={'custom_form_id': '<id>'},
45 schema='EventSchema',
46 type_='event')
47
48
49 class CustomFormListPost(ResourceList):
50 """
51 Create and List Custom Forms
52 """
53
54 def before_post(self, args, kwargs, data):
55 """
56 method to check for required relationship with event
57 :param args:
58 :param kwargs:
59 :param data:
60 :return:
61 """
62 require_relationship(['event'], data)
63 if not has_access('is_coorganizer', event_id=data['event']):
64 raise ObjectNotFound({'parameter': 'event_id'},
65 "Event: {} not found".format(data['event_id']))
66
67 schema = CustomFormSchema
68 methods = ['POST', ]
69 data_layer = {'session': db.session,
70 'model': CustomForms
71 }
72
73
74 class CustomFormList(ResourceList):
75 """
76 Create and List Custom Forms
77 """
78 def query(self, view_kwargs):
79 """
80 query method for different view_kwargs
81 :param view_kwargs:
82 :return:
83 """
84 query_ = self.session.query(CustomForms)
85 query_ = event_query(self, query_, view_kwargs)
86 return query_
87
88 view_kwargs = True
89 decorators = (jwt_required, )
90 methods = ['GET', ]
91 schema = CustomFormSchema
92 data_layer = {'session': db.session,
93 'model': CustomForms,
94 'methods': {
95 'query': query
96 }}
97
98
99 class CustomFormDetail(ResourceDetail):
100 """
101 CustomForm Resource
102 """
103
104 def before_get_object(self, view_kwargs):
105 """
106 before get method
107 :param view_kwargs:
108 :return:
109 """
110 event = None
111 if view_kwargs.get('event_id'):
112 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
113 elif view_kwargs.get('event_identifier'):
114 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
115
116 if event:
117 custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')
118 view_kwargs['id'] = custom_form.id
119
120 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
121 fetch_as="event_id", model=CustomForms, methods="PATCH,DELETE"), )
122 schema = CustomFormSchema
123 data_layer = {'session': db.session,
124 'model': CustomForms}
125
126
127 class CustomFormRelationshipRequired(ResourceRelationship):
128 """
129 CustomForm Relationship (Required)
130 """
131 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
132 fetch_as="event_id", model=CustomForms, methods="PATCH"),)
133 methods = ['GET', 'PATCH']
134 schema = CustomFormSchema
135 data_layer = {'session': db.session,
136 'model': CustomForms}
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py
--- a/app/api/custom_forms.py
+++ b/app/api/custom_forms.py
@@ -33,7 +33,7 @@
field_identifier = fields.Str(required=True)
form = fields.Str(required=True)
type = fields.Str(default="text", validate=validate.OneOf(
- choices=["text", "checkbox", "select", "file", "image"]))
+ choices=["text", "checkbox", "select", "file", "image", "email"]))
is_required = fields.Boolean(default=False)
is_included = fields.Boolean(default=False)
is_fixed = fields.Boolean(default=False)
| {"golden_diff": "diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py\n--- a/app/api/custom_forms.py\n+++ b/app/api/custom_forms.py\n@@ -33,7 +33,7 @@\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n- choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n+ choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\", \"email\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n", "issue": "Add email to valid types in custom-form\n**Current**\r\nCurrently we are not able to set an email type to the custom-form which leads to `Error: 422`.\r\n\r\n**Expected**\r\nemail should be a valid type for the custom-form\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nimport marshmallow.validate as validate\nfrom app.api.helpers.permissions import jwt_required\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.custom_form import CustomForms\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\n\n\nclass CustomFormSchema(Schema):\n \"\"\"\n API Schema for Custom Forms database model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n type_ = 'custom-form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Integer(dump_only=True)\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.custom_form_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'custom_form_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass CustomFormListPost(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to check for required relationship with event\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n schema = CustomFormSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': CustomForms\n }\n\n\nclass CustomFormList(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(CustomForms)\n query_ = event_query(self, query_, view_kwargs)\n return query_\n\n view_kwargs = True\n decorators = (jwt_required, )\n methods = ['GET', ]\n schema = CustomFormSchema\n 
data_layer = {'session': db.session,\n 'model': CustomForms,\n 'methods': {\n 'query': query\n }}\n\n\nclass CustomFormDetail(ResourceDetail):\n \"\"\"\n CustomForm Resource\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method\n :param view_kwargs:\n :return:\n \"\"\"\n event = None\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n\n if event:\n custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')\n view_kwargs['id'] = custom_form.id\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH,DELETE\"), )\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n\n\nclass CustomFormRelationshipRequired(ResourceRelationship):\n \"\"\"\n CustomForm Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n", "path": "app/api/custom_forms.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nimport marshmallow.validate as validate\nfrom app.api.helpers.permissions import jwt_required\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.custom_form import CustomForms\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\n\n\nclass CustomFormSchema(Schema):\n \"\"\"\n API Schema for Custom Forms database model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n type_ = 'custom-form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Integer(dump_only=True)\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\", \"email\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.custom_form_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'custom_form_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass CustomFormListPost(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to check for required relationship with event\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n schema = 
CustomFormSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': CustomForms\n }\n\n\nclass CustomFormList(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(CustomForms)\n query_ = event_query(self, query_, view_kwargs)\n return query_\n\n view_kwargs = True\n decorators = (jwt_required, )\n methods = ['GET', ]\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms,\n 'methods': {\n 'query': query\n }}\n\n\nclass CustomFormDetail(ResourceDetail):\n \"\"\"\n CustomForm Resource\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method\n :param view_kwargs:\n :return:\n \"\"\"\n event = None\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n\n if event:\n custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')\n view_kwargs['id'] = custom_form.id\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH,DELETE\"), )\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n\n\nclass CustomFormRelationshipRequired(ResourceRelationship):\n \"\"\"\n CustomForm Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n", "path": "app/api/custom_forms.py"}]} | 1,571 | 143 |
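The fix above only widens the `validate.OneOf` choices. A quick illustration of how that validator behaves, using plain marshmallow (v3-style `load`) rather than the marshmallow-jsonapi schema from the record; the field name mirrors the schema but the snippet is otherwise hypothetical:

```python
from marshmallow import Schema, ValidationError, fields, validate

class CustomFormTypeSchema(Schema):
    type = fields.Str(required=True, validate=validate.OneOf(
        choices=["text", "checkbox", "select", "file", "image", "email"]))

schema = CustomFormTypeSchema()
print(schema.load({"type": "email"}))   # accepted once "email" is in the choices
try:
    schema.load({"type": "telephone"})  # still rejected -> surfaces as 422 upstream
except ValidationError as err:
    print(err.messages)
```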
gh_patches_debug_26322 | rasdani/github-patches | git_diff | cobbler__cobbler-3065 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Backport] [mkloaders] Add the missing 'syslinux/libutil.c32'
### Original feature issue
<!--- (if present): Describe the feature --->
- PR: #3024
### Target release
- [x] release33
- [ ] release32
- [ ] release30
### Reason
In Uyuni/SUSE Manager we need a stable version of Cobbler which is suitable for long term support. This issue is part of the effort to stabilize the Release V3.3.x series for this usecase.
--- END ISSUE ---
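As a hedged aside (this is not Cobbler's own code, only a minimal standalone sketch of the behaviour the backport asks for): on Syslinux 5 and newer, `menu.c32` depends on `libutil.c32`, so that module needs to be symlinked into the bootloader directory alongside `menu.c32` whenever the installed Syslinux provides it. The helper name and parameters below are invented purely for illustration.

```python
# Standalone illustration only -- not the project's module.
# Symlink the syslinux modules served to clients, adding libutil.c32 when
# Syslinux >= 5 ships it (menu.c32 depends on it from version 5 onwards).
import pathlib


def link_syslinux_modules(syslinux_dir: pathlib.Path,
                          bootloaders_dir: pathlib.Path,
                          syslinux_version: int) -> None:
    targets = ["menu.c32"]
    # https://wiki.syslinux.org/wiki/index.php?title=Library_modules
    if syslinux_version > 4 and syslinux_dir.joinpath("libutil.c32").exists():
        targets.append("libutil.c32")
    for name in targets:
        link = bootloaders_dir / name
        if not link.exists():  # mirror the "skip existing" behaviour
            link.symlink_to(syslinux_dir / name)
```

The accompanying diff folds the same check into `MkLoaders.make_syslinux()` and reuses `get_syslinux_version()` rather than taking the version as an argument.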
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/actions/mkloaders.py`
Content:
```
1 """Cobbler action to create bootable Grub2 images.
2
3 This action calls grub2-mkimage for all bootloader formats configured in
4 Cobbler's settings. See man(1) grub2-mkimage for available formats.
5 """
6 import logging
7 import pathlib
8 import re
9 import subprocess
10 import sys
11 import typing
12
13 from cobbler import utils
14
15
16 # NOTE: does not warrant being a class, but all Cobbler actions use a class's ".run()" as the entrypoint
17 class MkLoaders:
18 """
19 Action to create bootloader images.
20 """
21
22 def __init__(self, api):
23 """
24 MkLoaders constructor.
25
26 :param api: CobblerAPI instance for accessing settings
27 """
28 self.logger = logging.getLogger()
29 self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)
30 # GRUB 2
31 self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)
32 self.boot_loaders_formats: typing.Dict = api.settings().bootloaders_formats
33 self.modules: typing.List = api.settings().bootloaders_modules
34 # Syslinux
35 self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)
36 self.syslinux_memdisk_folder = pathlib.Path(api.settings().syslinux_memdisk_folder)
37 self.syslinux_pxelinux_folder = pathlib.Path(api.settings().syslinux_pxelinux_folder)
38 # Shim
39 self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)
40 self.shim_regex = re.compile(api.settings().bootloaders_shim_file)
41 # iPXE
42 self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)
43
44 def run(self):
45 """
46 Run GrubImages action. If the files or executables for the bootloader is not available we bail out and skip the
47 creation after it is logged that this is not available.
48 """
49 self.create_directories()
50
51 self.make_shim()
52 self.make_ipxe()
53 self.make_syslinux()
54 self.make_grub()
55
56 def make_shim(self):
57 """
58 Create symlink of the shim bootloader in case it is available on the system.
59 """
60 # Check well-known locations
61 # Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773
62 parts = self.shim_glob.parts
63 start_at = 1 if self.shim_glob.is_absolute() else 0
64 bootloader_path_parts = pathlib.Path(*parts[start_at:])
65 results = sorted(pathlib.Path(self.shim_glob.root).glob(str(bootloader_path_parts)))
66 # If no match, then report and bail out.
67 if len(results) <= 0:
68 self.logger.info('Unable to find the folder which should be scanned for "shim.efi"! Bailing out of linking '
69 'the shim!')
70 return
71 # Now scan the folders with the regex
72 target_shim = None
73 for possible_folder in results:
74 for child in possible_folder.iterdir():
75 if self.shim_regex.search(str(child)):
76 target_shim = child.resolve()
77 break
78 # If no match is found report and return
79 if target_shim is None:
80 self.logger.info('Unable to find "shim.efi" file. Please adjust "bootloaders_shim_file" regex. Bailing out '
81 'of linking the shim!')
82 return
83 # Symlink the absolute target of the match
84 symlink(
85 target_shim,
86 self.bootloaders_dir.joinpath(pathlib.Path("grub/shim.efi")),
87 skip_existing=True
88 )
89
90 def make_ipxe(self):
91 """
92 Create symlink of the iPXE bootloader in case it is available on the system.
93 """
94 if not self.ipxe_folder.exists():
95 self.logger.info('ipxe directory did not exist. Please adjust the "bootloaders_ipxe_folder". Bailing out '
96 'of iPXE setup!')
97 return
98 symlink(
99 self.ipxe_folder.joinpath("undionly.kpxe"),
100 self.bootloaders_dir.joinpath(pathlib.Path("undionly.pxe")),
101 skip_existing=True
102 )
103
104 def make_syslinux(self):
105 """
106 Create symlink of the important syslinux bootloader files in case they are available on the system.
107 """
108 if not utils.command_existing("syslinux"):
109 self.logger.info("syslinux command not available. Bailing out of syslinux setup!")
110 return
111 # Make modules
112 symlink(
113 self.syslinux_folder.joinpath("menu.c32"),
114 self.bootloaders_dir.joinpath("menu.c32"),
115 skip_existing=True
116 )
117 if get_syslinux_version() < 5:
118 # This file is only required for Syslinux 5 and newer.
119 # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules
120 self.logger.info('syslinux version 4 detected! Skip making symlink of "ldlinux.c32" file!')
121 else:
122 symlink(
123 self.syslinux_folder.joinpath("ldlinux.c32"),
124 self.bootloaders_dir.joinpath("ldlinux.c32"),
125 skip_existing=True
126 )
127 # Make memdisk
128 symlink(
129 self.syslinux_memdisk_folder.joinpath("memdisk"),
130 self.bootloaders_dir.joinpath("memdisk"),
131 skip_existing=True
132 )
133 # Make pxelinux.0
134 symlink(
135 self.syslinux_pxelinux_folder.joinpath("pxelinux.0"),
136 self.bootloaders_dir.joinpath("pxelinux.0"),
137 skip_existing=True
138 )
139
140 def make_grub(self):
141 """
142 Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders
143 for other architectures if the modules to do so are available.
144 """
145 if not utils.command_existing("grub2-mkimage"):
146 self.logger.info("grub2-mkimage command not available. Bailing out of GRUB2 generation!")
147 return
148
149 for image_format, options in self.boot_loaders_formats.items():
150 bl_mod_dir = options.get("mod_dir", image_format)
151 mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)
152 if not mod_dir.exists():
153 self.logger.info(
154 'GRUB2 modules directory for arch "%s" did no exist. Skipping GRUB2 creation',
155 image_format
156 )
157 continue
158 try:
159 mkimage(
160 image_format,
161 self.bootloaders_dir.joinpath("grub", options["binary_name"]),
162 self.modules + options.get("extra_modules", []),
163 )
164 except subprocess.CalledProcessError:
165 self.logger.info('grub2-mkimage failed for arch "%s"! Maybe you did forget to install the grub modules '
166 'for the architecture?', image_format)
167 utils.log_exc()
168 # don't create module symlinks if grub2-mkimage is unsuccessful
169 continue
170 self.logger.info('Successfully built bootloader for arch "%s"!', image_format)
171
172 # Create a symlink for GRUB 2 modules
173 # assumes a single GRUB can be used to boot all kinds of distros
174 # if this assumption turns out incorrect, individual "grub" subdirectories are needed
175 symlink(
176 mod_dir,
177 self.bootloaders_dir.joinpath("grub", bl_mod_dir),
178 skip_existing=True
179 )
180
181 def create_directories(self):
182 """
183 Create the required directories so that this succeeds. If existing, do nothing. This should create the tree for
184 all supported bootloaders, regardless of the capabilities to symlink/install/build them.
185 """
186 if not self.bootloaders_dir.exists():
187 raise FileNotFoundError("Main bootloader directory not found! Please create it yourself!")
188
189 grub_dir = self.bootloaders_dir.joinpath("grub")
190 if not grub_dir.exists():
191 grub_dir.mkdir(mode=0o644)
192
193
194 # NOTE: move this to cobbler.utils?
195 # cobbler.utils.linkfile does a lot of things, it might be worth it to have a
196 # function just for symbolic links
197 def symlink(target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False):
198 """Create a symlink LINK pointing to TARGET.
199
200 :param target: File/directory that the link will point to. The file/directory must exist.
201 :param link: Filename for the link.
202 :param skip_existing: Controls if existing links are skipped, defaults to False.
203 :raises FileNotFoundError: ``target`` is not an existing file.
204 :raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.
205 """
206
207 if not target.exists():
208 raise FileNotFoundError(
209 f"{target} does not exist, can't create a symlink to it."
210 )
211 try:
212 link.symlink_to(target)
213 except FileExistsError:
214 if not skip_existing:
215 raise
216
217
218 def mkimage(image_format: str, image_filename: pathlib.Path, modules: typing.List):
219 """Create a bootable image of GRUB using grub2-mkimage.
220
221 :param image_format: Format of the image that is being created. See man(1)
222 grub2-mkimage for a list of supported formats.
223 :param image_filename: Location of the image that is being created.
224 :param modules: List of GRUB modules to include into the image
225 :raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.
226 """
227
228 if not image_filename.parent.exists():
229 image_filename.parent.mkdir(parents=True)
230
231 cmd = ["grub2-mkimage"]
232 cmd.extend(("--format", image_format))
233 cmd.extend(("--output", str(image_filename)))
234 cmd.append("--prefix=")
235 cmd.extend(modules)
236
237 # The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our
238 # own custom exception together with cobbler.utils.subprocess_* functions
239 subprocess.run(cmd, check=True)
240
241
242 def get_syslinux_version() -> int:
243 """
244 This calls syslinux and asks for the version number.
245
246 :return: The major syslinux release number.
247 :raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.
248 """
249 # Example output: "syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al"
250 cmd = ["syslinux", "-v"]
251 completed_process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
252 encoding=sys.getdefaultencoding())
253 output = completed_process.stdout.split()
254 return int(float(output[1]))
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cobbler/actions/mkloaders.py b/cobbler/actions/mkloaders.py
--- a/cobbler/actions/mkloaders.py
+++ b/cobbler/actions/mkloaders.py
@@ -108,13 +108,23 @@
if not utils.command_existing("syslinux"):
self.logger.info("syslinux command not available. Bailing out of syslinux setup!")
return
+ syslinux_version = get_syslinux_version()
# Make modules
symlink(
self.syslinux_folder.joinpath("menu.c32"),
self.bootloaders_dir.joinpath("menu.c32"),
skip_existing=True
)
- if get_syslinux_version() < 5:
+ # According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,
+ # 'menu.c32' depends on 'libutil.c32'.
+ libutil_c32_path = self.syslinux_folder.joinpath("libutil.c32")
+ if syslinux_version > 4 and libutil_c32_path.exists():
+ symlink(
+ libutil_c32_path,
+ self.bootloaders_dir.joinpath("libutil.c32"),
+ skip_existing=True,
+ )
+ if syslinux_version < 5:
# This file is only required for Syslinux 5 and newer.
# Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules
self.logger.info('syslinux version 4 detected! Skip making symlink of "ldlinux.c32" file!')
| {"golden_diff": "diff --git a/cobbler/actions/mkloaders.py b/cobbler/actions/mkloaders.py\n--- a/cobbler/actions/mkloaders.py\n+++ b/cobbler/actions/mkloaders.py\n@@ -108,13 +108,23 @@\n if not utils.command_existing(\"syslinux\"):\n self.logger.info(\"syslinux command not available. Bailing out of syslinux setup!\")\n return\n+ syslinux_version = get_syslinux_version()\n # Make modules\n symlink(\n self.syslinux_folder.joinpath(\"menu.c32\"),\n self.bootloaders_dir.joinpath(\"menu.c32\"),\n skip_existing=True\n )\n- if get_syslinux_version() < 5:\n+ # According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,\n+ # 'menu.c32' depends on 'libutil.c32'.\n+ libutil_c32_path = self.syslinux_folder.joinpath(\"libutil.c32\")\n+ if syslinux_version > 4 and libutil_c32_path.exists():\n+ symlink(\n+ libutil_c32_path,\n+ self.bootloaders_dir.joinpath(\"libutil.c32\"),\n+ skip_existing=True,\n+ )\n+ if syslinux_version < 5:\n # This file is only required for Syslinux 5 and newer.\n # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules\n self.logger.info('syslinux version 4 detected! Skip making symlink of \"ldlinux.c32\" file!')\n", "issue": "[Backport] [mkloaders] Add the missing 'syslinux/libutil.c32'\n### Original feature issue\r\n\r\n<!--- (if present): Describe the feature --->\r\n- PR: #3024\r\n\r\n### Target release\r\n\r\n- [x] release33\r\n- [ ] release32\r\n- [ ] release30\r\n\r\n### Reason\r\n\r\nIn Uyuni/SUSE Manager we need a stable version of Cobbler which is suitable for long term support. This issue is part of the effort to stabilize the Release V3.3.x series for this usecase.\r\n\n", "before_files": [{"content": "\"\"\"Cobbler action to create bootable Grub2 images.\n\nThis action calls grub2-mkimage for all bootloader formats configured in\nCobbler's settings. See man(1) grub2-mkimage for available formats.\n\"\"\"\nimport logging\nimport pathlib\nimport re\nimport subprocess\nimport sys\nimport typing\n\nfrom cobbler import utils\n\n\n# NOTE: does not warrant being a class, but all Cobbler actions use a class's \".run()\" as the entrypoint\nclass MkLoaders:\n \"\"\"\n Action to create bootloader images.\n \"\"\"\n\n def __init__(self, api):\n \"\"\"\n MkLoaders constructor.\n\n :param api: CobblerAPI instance for accessing settings\n \"\"\"\n self.logger = logging.getLogger()\n self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)\n # GRUB 2\n self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)\n self.boot_loaders_formats: typing.Dict = api.settings().bootloaders_formats\n self.modules: typing.List = api.settings().bootloaders_modules\n # Syslinux\n self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)\n self.syslinux_memdisk_folder = pathlib.Path(api.settings().syslinux_memdisk_folder)\n self.syslinux_pxelinux_folder = pathlib.Path(api.settings().syslinux_pxelinux_folder)\n # Shim\n self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)\n self.shim_regex = re.compile(api.settings().bootloaders_shim_file)\n # iPXE\n self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)\n\n def run(self):\n \"\"\"\n Run GrubImages action. 
If the files or executables for the bootloader is not available we bail out and skip the\n creation after it is logged that this is not available.\n \"\"\"\n self.create_directories()\n\n self.make_shim()\n self.make_ipxe()\n self.make_syslinux()\n self.make_grub()\n\n def make_shim(self):\n \"\"\"\n Create symlink of the shim bootloader in case it is available on the system.\n \"\"\"\n # Check well-known locations\n # Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773\n parts = self.shim_glob.parts\n start_at = 1 if self.shim_glob.is_absolute() else 0\n bootloader_path_parts = pathlib.Path(*parts[start_at:])\n results = sorted(pathlib.Path(self.shim_glob.root).glob(str(bootloader_path_parts)))\n # If no match, then report and bail out.\n if len(results) <= 0:\n self.logger.info('Unable to find the folder which should be scanned for \"shim.efi\"! Bailing out of linking '\n 'the shim!')\n return\n # Now scan the folders with the regex\n target_shim = None\n for possible_folder in results:\n for child in possible_folder.iterdir():\n if self.shim_regex.search(str(child)):\n target_shim = child.resolve()\n break\n # If no match is found report and return\n if target_shim is None:\n self.logger.info('Unable to find \"shim.efi\" file. Please adjust \"bootloaders_shim_file\" regex. Bailing out '\n 'of linking the shim!')\n return\n # Symlink the absolute target of the match\n symlink(\n target_shim,\n self.bootloaders_dir.joinpath(pathlib.Path(\"grub/shim.efi\")),\n skip_existing=True\n )\n\n def make_ipxe(self):\n \"\"\"\n Create symlink of the iPXE bootloader in case it is available on the system.\n \"\"\"\n if not self.ipxe_folder.exists():\n self.logger.info('ipxe directory did not exist. Please adjust the \"bootloaders_ipxe_folder\". Bailing out '\n 'of iPXE setup!')\n return\n symlink(\n self.ipxe_folder.joinpath(\"undionly.kpxe\"),\n self.bootloaders_dir.joinpath(pathlib.Path(\"undionly.pxe\")),\n skip_existing=True\n )\n\n def make_syslinux(self):\n \"\"\"\n Create symlink of the important syslinux bootloader files in case they are available on the system.\n \"\"\"\n if not utils.command_existing(\"syslinux\"):\n self.logger.info(\"syslinux command not available. Bailing out of syslinux setup!\")\n return\n # Make modules\n symlink(\n self.syslinux_folder.joinpath(\"menu.c32\"),\n self.bootloaders_dir.joinpath(\"menu.c32\"),\n skip_existing=True\n )\n if get_syslinux_version() < 5:\n # This file is only required for Syslinux 5 and newer.\n # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules\n self.logger.info('syslinux version 4 detected! Skip making symlink of \"ldlinux.c32\" file!')\n else:\n symlink(\n self.syslinux_folder.joinpath(\"ldlinux.c32\"),\n self.bootloaders_dir.joinpath(\"ldlinux.c32\"),\n skip_existing=True\n )\n # Make memdisk\n symlink(\n self.syslinux_memdisk_folder.joinpath(\"memdisk\"),\n self.bootloaders_dir.joinpath(\"memdisk\"),\n skip_existing=True\n )\n # Make pxelinux.0\n symlink(\n self.syslinux_pxelinux_folder.joinpath(\"pxelinux.0\"),\n self.bootloaders_dir.joinpath(\"pxelinux.0\"),\n skip_existing=True\n )\n\n def make_grub(self):\n \"\"\"\n Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders\n for other architectures if the modules to do so are available.\n \"\"\"\n if not utils.command_existing(\"grub2-mkimage\"):\n self.logger.info(\"grub2-mkimage command not available. 
Bailing out of GRUB2 generation!\")\n return\n\n for image_format, options in self.boot_loaders_formats.items():\n bl_mod_dir = options.get(\"mod_dir\", image_format)\n mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)\n if not mod_dir.exists():\n self.logger.info(\n 'GRUB2 modules directory for arch \"%s\" did no exist. Skipping GRUB2 creation',\n image_format\n )\n continue\n try:\n mkimage(\n image_format,\n self.bootloaders_dir.joinpath(\"grub\", options[\"binary_name\"]),\n self.modules + options.get(\"extra_modules\", []),\n )\n except subprocess.CalledProcessError:\n self.logger.info('grub2-mkimage failed for arch \"%s\"! Maybe you did forget to install the grub modules '\n 'for the architecture?', image_format)\n utils.log_exc()\n # don't create module symlinks if grub2-mkimage is unsuccessful\n continue\n self.logger.info('Successfully built bootloader for arch \"%s\"!', image_format)\n\n # Create a symlink for GRUB 2 modules\n # assumes a single GRUB can be used to boot all kinds of distros\n # if this assumption turns out incorrect, individual \"grub\" subdirectories are needed\n symlink(\n mod_dir,\n self.bootloaders_dir.joinpath(\"grub\", bl_mod_dir),\n skip_existing=True\n )\n\n def create_directories(self):\n \"\"\"\n Create the required directories so that this succeeds. If existing, do nothing. This should create the tree for\n all supported bootloaders, regardless of the capabilities to symlink/install/build them.\n \"\"\"\n if not self.bootloaders_dir.exists():\n raise FileNotFoundError(\"Main bootloader directory not found! Please create it yourself!\")\n\n grub_dir = self.bootloaders_dir.joinpath(\"grub\")\n if not grub_dir.exists():\n grub_dir.mkdir(mode=0o644)\n\n\n# NOTE: move this to cobbler.utils?\n# cobbler.utils.linkfile does a lot of things, it might be worth it to have a\n# function just for symbolic links\ndef symlink(target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False):\n \"\"\"Create a symlink LINK pointing to TARGET.\n\n :param target: File/directory that the link will point to. The file/directory must exist.\n :param link: Filename for the link.\n :param skip_existing: Controls if existing links are skipped, defaults to False.\n :raises FileNotFoundError: ``target`` is not an existing file.\n :raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.\n \"\"\"\n\n if not target.exists():\n raise FileNotFoundError(\n f\"{target} does not exist, can't create a symlink to it.\"\n )\n try:\n link.symlink_to(target)\n except FileExistsError:\n if not skip_existing:\n raise\n\n\ndef mkimage(image_format: str, image_filename: pathlib.Path, modules: typing.List):\n \"\"\"Create a bootable image of GRUB using grub2-mkimage.\n\n :param image_format: Format of the image that is being created. 
See man(1)\n grub2-mkimage for a list of supported formats.\n :param image_filename: Location of the image that is being created.\n :param modules: List of GRUB modules to include into the image\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.\n \"\"\"\n\n if not image_filename.parent.exists():\n image_filename.parent.mkdir(parents=True)\n\n cmd = [\"grub2-mkimage\"]\n cmd.extend((\"--format\", image_format))\n cmd.extend((\"--output\", str(image_filename)))\n cmd.append(\"--prefix=\")\n cmd.extend(modules)\n\n # The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our\n # own custom exception together with cobbler.utils.subprocess_* functions\n subprocess.run(cmd, check=True)\n\n\ndef get_syslinux_version() -> int:\n \"\"\"\n This calls syslinux and asks for the version number.\n\n :return: The major syslinux release number.\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.\n \"\"\"\n # Example output: \"syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al\"\n cmd = [\"syslinux\", \"-v\"]\n completed_process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n encoding=sys.getdefaultencoding())\n output = completed_process.stdout.split()\n return int(float(output[1]))\n", "path": "cobbler/actions/mkloaders.py"}], "after_files": [{"content": "\"\"\"Cobbler action to create bootable Grub2 images.\n\nThis action calls grub2-mkimage for all bootloader formats configured in\nCobbler's settings. See man(1) grub2-mkimage for available formats.\n\"\"\"\nimport logging\nimport pathlib\nimport re\nimport subprocess\nimport sys\nimport typing\n\nfrom cobbler import utils\n\n\n# NOTE: does not warrant being a class, but all Cobbler actions use a class's \".run()\" as the entrypoint\nclass MkLoaders:\n \"\"\"\n Action to create bootloader images.\n \"\"\"\n\n def __init__(self, api):\n \"\"\"\n MkLoaders constructor.\n\n :param api: CobblerAPI instance for accessing settings\n \"\"\"\n self.logger = logging.getLogger()\n self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)\n # GRUB 2\n self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)\n self.boot_loaders_formats: typing.Dict = api.settings().bootloaders_formats\n self.modules: typing.List = api.settings().bootloaders_modules\n # Syslinux\n self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)\n self.syslinux_memdisk_folder = pathlib.Path(api.settings().syslinux_memdisk_folder)\n self.syslinux_pxelinux_folder = pathlib.Path(api.settings().syslinux_pxelinux_folder)\n # Shim\n self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)\n self.shim_regex = re.compile(api.settings().bootloaders_shim_file)\n # iPXE\n self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)\n\n def run(self):\n \"\"\"\n Run GrubImages action. 
If the files or executables for the bootloader is not available we bail out and skip the\n creation after it is logged that this is not available.\n \"\"\"\n self.create_directories()\n\n self.make_shim()\n self.make_ipxe()\n self.make_syslinux()\n self.make_grub()\n\n def make_shim(self):\n \"\"\"\n Create symlink of the shim bootloader in case it is available on the system.\n \"\"\"\n # Check well-known locations\n # Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773\n parts = self.shim_glob.parts\n start_at = 1 if self.shim_glob.is_absolute() else 0\n bootloader_path_parts = pathlib.Path(*parts[start_at:])\n results = sorted(pathlib.Path(self.shim_glob.root).glob(str(bootloader_path_parts)))\n # If no match, then report and bail out.\n if len(results) <= 0:\n self.logger.info('Unable to find the folder which should be scanned for \"shim.efi\"! Bailing out of linking '\n 'the shim!')\n return\n # Now scan the folders with the regex\n target_shim = None\n for possible_folder in results:\n for child in possible_folder.iterdir():\n if self.shim_regex.search(str(child)):\n target_shim = child.resolve()\n break\n # If no match is found report and return\n if target_shim is None:\n self.logger.info('Unable to find \"shim.efi\" file. Please adjust \"bootloaders_shim_file\" regex. Bailing out '\n 'of linking the shim!')\n return\n # Symlink the absolute target of the match\n symlink(\n target_shim,\n self.bootloaders_dir.joinpath(pathlib.Path(\"grub/shim.efi\")),\n skip_existing=True\n )\n\n def make_ipxe(self):\n \"\"\"\n Create symlink of the iPXE bootloader in case it is available on the system.\n \"\"\"\n if not self.ipxe_folder.exists():\n self.logger.info('ipxe directory did not exist. Please adjust the \"bootloaders_ipxe_folder\". Bailing out '\n 'of iPXE setup!')\n return\n symlink(\n self.ipxe_folder.joinpath(\"undionly.kpxe\"),\n self.bootloaders_dir.joinpath(pathlib.Path(\"undionly.pxe\")),\n skip_existing=True\n )\n\n def make_syslinux(self):\n \"\"\"\n Create symlink of the important syslinux bootloader files in case they are available on the system.\n \"\"\"\n if not utils.command_existing(\"syslinux\"):\n self.logger.info(\"syslinux command not available. Bailing out of syslinux setup!\")\n return\n syslinux_version = get_syslinux_version()\n # Make modules\n symlink(\n self.syslinux_folder.joinpath(\"menu.c32\"),\n self.bootloaders_dir.joinpath(\"menu.c32\"),\n skip_existing=True\n )\n # According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,\n # 'menu.c32' depends on 'libutil.c32'.\n libutil_c32_path = self.syslinux_folder.joinpath(\"libutil.c32\")\n if syslinux_version > 4 and libutil_c32_path.exists():\n symlink(\n libutil_c32_path,\n self.bootloaders_dir.joinpath(\"libutil.c32\"),\n skip_existing=True,\n )\n if syslinux_version < 5:\n # This file is only required for Syslinux 5 and newer.\n # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules\n self.logger.info('syslinux version 4 detected! 
Skip making symlink of \"ldlinux.c32\" file!')\n else:\n symlink(\n self.syslinux_folder.joinpath(\"ldlinux.c32\"),\n self.bootloaders_dir.joinpath(\"ldlinux.c32\"),\n skip_existing=True\n )\n # Make memdisk\n symlink(\n self.syslinux_memdisk_folder.joinpath(\"memdisk\"),\n self.bootloaders_dir.joinpath(\"memdisk\"),\n skip_existing=True\n )\n # Make pxelinux.0\n symlink(\n self.syslinux_pxelinux_folder.joinpath(\"pxelinux.0\"),\n self.bootloaders_dir.joinpath(\"pxelinux.0\"),\n skip_existing=True\n )\n\n def make_grub(self):\n \"\"\"\n Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders\n for other architectures if the modules to do so are available.\n \"\"\"\n if not utils.command_existing(\"grub2-mkimage\"):\n self.logger.info(\"grub2-mkimage command not available. Bailing out of GRUB2 generation!\")\n return\n\n for image_format, options in self.boot_loaders_formats.items():\n bl_mod_dir = options.get(\"mod_dir\", image_format)\n mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)\n if not mod_dir.exists():\n self.logger.info(\n 'GRUB2 modules directory for arch \"%s\" did no exist. Skipping GRUB2 creation',\n image_format\n )\n continue\n try:\n mkimage(\n image_format,\n self.bootloaders_dir.joinpath(\"grub\", options[\"binary_name\"]),\n self.modules + options.get(\"extra_modules\", []),\n )\n except subprocess.CalledProcessError:\n self.logger.info('grub2-mkimage failed for arch \"%s\"! Maybe you did forget to install the grub modules '\n 'for the architecture?', image_format)\n utils.log_exc()\n # don't create module symlinks if grub2-mkimage is unsuccessful\n continue\n self.logger.info('Successfully built bootloader for arch \"%s\"!', image_format)\n\n # Create a symlink for GRUB 2 modules\n # assumes a single GRUB can be used to boot all kinds of distros\n # if this assumption turns out incorrect, individual \"grub\" subdirectories are needed\n symlink(\n mod_dir,\n self.bootloaders_dir.joinpath(\"grub\", bl_mod_dir),\n skip_existing=True\n )\n\n def create_directories(self):\n \"\"\"\n Create the required directories so that this succeeds. If existing, do nothing. This should create the tree for\n all supported bootloaders, regardless of the capabilities to symlink/install/build them.\n \"\"\"\n if not self.bootloaders_dir.exists():\n raise FileNotFoundError(\"Main bootloader directory not found! Please create it yourself!\")\n\n grub_dir = self.bootloaders_dir.joinpath(\"grub\")\n if not grub_dir.exists():\n grub_dir.mkdir(mode=0o644)\n\n\n# NOTE: move this to cobbler.utils?\n# cobbler.utils.linkfile does a lot of things, it might be worth it to have a\n# function just for symbolic links\ndef symlink(target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False):\n \"\"\"Create a symlink LINK pointing to TARGET.\n\n :param target: File/directory that the link will point to. 
The file/directory must exist.\n :param link: Filename for the link.\n :param skip_existing: Controls if existing links are skipped, defaults to False.\n :raises FileNotFoundError: ``target`` is not an existing file.\n :raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.\n \"\"\"\n\n if not target.exists():\n raise FileNotFoundError(\n f\"{target} does not exist, can't create a symlink to it.\"\n )\n try:\n link.symlink_to(target)\n except FileExistsError:\n if not skip_existing:\n raise\n\n\ndef mkimage(image_format: str, image_filename: pathlib.Path, modules: typing.List):\n \"\"\"Create a bootable image of GRUB using grub2-mkimage.\n\n :param image_format: Format of the image that is being created. See man(1)\n grub2-mkimage for a list of supported formats.\n :param image_filename: Location of the image that is being created.\n :param modules: List of GRUB modules to include into the image\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.\n \"\"\"\n\n if not image_filename.parent.exists():\n image_filename.parent.mkdir(parents=True)\n\n cmd = [\"grub2-mkimage\"]\n cmd.extend((\"--format\", image_format))\n cmd.extend((\"--output\", str(image_filename)))\n cmd.append(\"--prefix=\")\n cmd.extend(modules)\n\n # The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our\n # own custom exception together with cobbler.utils.subprocess_* functions\n subprocess.run(cmd, check=True)\n\n\ndef get_syslinux_version() -> int:\n \"\"\"\n This calls syslinux and asks for the version number.\n\n :return: The major syslinux release number.\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.\n \"\"\"\n # Example output: \"syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al\"\n cmd = [\"syslinux\", \"-v\"]\n completed_process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n encoding=sys.getdefaultencoding())\n output = completed_process.stdout.split()\n return int(float(output[1]))\n", "path": "cobbler/actions/mkloaders.py"}]} | 3,355 | 346 |
gh_patches_debug_32847 | rasdani/github-patches | git_diff | mdn__kuma-5652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
wiki.views.translate with `tolocale` invalid causes ISE
E.g. https://sentry.prod.mozaws.net/operations/mdn-prod/issues/6147484/?environment=oregon%3Aprod
```
KeyError: u'he?async'
File "django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "newrelic/hooks/framework_django.py", line 544, in wrapper
return wrapped(*args, **kwargs)
File "kuma/core/decorators.py", line 213, in wrapped
return func(request, *args, **kwargs)
File "django/views/decorators/cache.py", line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "kuma/core/decorators.py", line 148, in agent_blocked_view
return view_func(request, *args, **kwargs)
File "kuma/core/decorators.py", line 78, in _wrapped_view
return view_fn(request, *args, **kwargs)
File "csp/decorators.py", line 19, in _wrapped
r = f(*a, **kw)
File "kuma/wiki/decorators.py", line 105, in process
return func(request, *args, **kwargs)
File "kuma/wiki/decorators.py", line 48, in _check_readonly
return view(request, *args, **kwargs)
File "kuma/wiki/decorators.py", line 21, in _added_header
response = func(request, *args, **kwargs)
File "kuma/wiki/views/translate.py", line 265, in translate
language = language_mapping[document_locale.lower()]
```
--- END ISSUE ---
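To make the failure mode concrete, here is a hedged, self-contained sketch (not the real Django view; the mapping contents are placeholders for whatever `get_language_mapping()` returns): the `tolocale` query value arrives as free text such as `'he?async'`, the view uses it directly as a dictionary key, and the lookup raises `KeyError`. Validating the value against the mapping first, as the accompanying diff does with `Http404`, turns the 500 into a 404.

```python
# Illustration only -- placeholder data, ValueError standing in for django.http.Http404.
language_mapping = {"en-us": "English (US)", "he": "Hebrew", "fr": "French"}


def pick_language(tolocale: str) -> str:
    # Buggy shape of the lookup: unknown values blow up with KeyError.
    return language_mapping[tolocale.lower()]


def pick_language_guarded(tolocale: str) -> str:
    # Guarded shape: reject unknown locales before the lookup.
    if tolocale.lower() not in language_mapping:
        raise ValueError("unknown 'tolocale' -> the view should answer 404")
    return language_mapping[tolocale.lower()]


if __name__ == "__main__":
    try:
        pick_language("he?async")
    except KeyError as exc:
        print("unguarded lookup fails:", exc)      # KeyError: 'he?async'
    try:
        pick_language_guarded("he?async")
    except ValueError as exc:
        print("guarded lookup rejects it:", exc)
```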
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/wiki/views/translate.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from csp.decorators import csp_update
3 from django.conf import settings
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.http import JsonResponse
6 from django.shortcuts import get_object_or_404, redirect, render
7 from django.utils.safestring import mark_safe
8 from django.utils.six.moves.urllib.parse import urlencode
9 from django.utils.translation import ugettext_lazy as _
10 from django.views.decorators.cache import never_cache
11
12 import kuma.wiki.content
13 from kuma.attachments.forms import AttachmentRevisionForm
14 from kuma.core.decorators import (block_user_agents, ensure_wiki_domain,
15 login_required)
16 from kuma.core.i18n import get_language_mapping
17 from kuma.core.urlresolvers import reverse
18 from kuma.core.utils import get_object_or_none, smart_int, urlparams
19
20 from .utils import document_form_initial, split_slug
21 from ..decorators import (check_readonly, prevent_indexing,
22 process_document_path)
23 from ..forms import DocumentForm, RevisionForm
24 from ..models import Document, Revision
25
26
27 @ensure_wiki_domain
28 @never_cache
29 @block_user_agents
30 @login_required
31 @process_document_path
32 def select_locale(request, document_slug, document_locale):
33 """
34 Select a locale to translate the document to.
35 """
36 doc = get_object_or_404(Document,
37 locale=document_locale,
38 slug=document_slug)
39 return render(request, 'wiki/select_locale.html', {'document': doc})
40
41
42 @ensure_wiki_domain
43 @never_cache
44 @block_user_agents
45 @login_required
46 @csp_update(SCRIPT_SRC="'unsafe-eval'") # Required until CKEditor 4.7
47 @process_document_path
48 @check_readonly
49 @prevent_indexing
50 def translate(request, document_slug, document_locale):
51 """
52 Create a new translation of a wiki document.
53
54 * document_slug is for the default locale
55 * translation is to the request locale
56 """
57 # TODO: Refactor this view into two views? (new, edit)
58 # That might help reduce the headache-inducing branchiness.
59
60 # The parent document to translate from
61 parent_doc = get_object_or_404(Document,
62 locale=settings.WIKI_DEFAULT_LANGUAGE,
63 slug=document_slug)
64
65 # HACK: Seems weird, but sticking the translate-to locale in a query
66 # param is the best way to avoid the MindTouch-legacy locale
67 # redirection logic.
68 document_locale = request.GET.get('tolocale', document_locale)
69
70 # Set a "Discard Changes" page
71 discard_href = ''
72
73 if settings.WIKI_DEFAULT_LANGUAGE == document_locale:
74 # Don't translate to the default language.
75 return redirect(reverse(
76 'wiki.edit', locale=settings.WIKI_DEFAULT_LANGUAGE,
77 args=[parent_doc.slug]))
78
79 if not parent_doc.is_localizable:
80 message = _(u'You cannot translate this document.')
81 context = {'message': message}
82 return render(request, 'handlers/400.html', context, status=400)
83
84 based_on_rev = parent_doc.current_or_latest_revision()
85
86 disclose_description = bool(request.GET.get('opendescription'))
87
88 try:
89 doc = parent_doc.translations.get(locale=document_locale)
90 slug_dict = split_slug(doc.slug)
91 except Document.DoesNotExist:
92 doc = None
93 disclose_description = True
94 slug_dict = split_slug(document_slug)
95
96 # Find the "real" parent topic, which is its translation
97 if parent_doc.parent_topic:
98 try:
99 parent_topic_translated_doc = (parent_doc.parent_topic
100 .translations
101 .get(locale=document_locale))
102 slug_dict = split_slug(parent_topic_translated_doc.slug +
103 '/' +
104 slug_dict['specific'])
105 except ObjectDoesNotExist:
106 pass
107
108 user_has_doc_perm = (not doc) or (doc and doc.allows_editing_by(request.user))
109
110 doc_form = None
111 if user_has_doc_perm:
112 if doc:
113 # If there's an existing doc, populate form from it.
114 discard_href = doc.get_absolute_url()
115 doc.slug = slug_dict['specific']
116 doc_initial = document_form_initial(doc)
117 else:
118 # If no existing doc, bring over the original title and slug.
119 discard_href = parent_doc.get_absolute_url()
120 doc_initial = {'title': based_on_rev.title,
121 'slug': slug_dict['specific']}
122 doc_form = DocumentForm(initial=doc_initial,
123 parent_slug=slug_dict['parent'])
124
125 initial = {
126 'based_on': based_on_rev.id,
127 'current_rev': doc.current_or_latest_revision().id if doc else None,
128 'comment': '',
129 'toc_depth': based_on_rev.toc_depth,
130 'localization_tags': ['inprogress'],
131 }
132 content = None
133 if not doc:
134 content = based_on_rev.content
135 if content:
136 # TODO: There will be no need to "filterEditorSafety" when the code
137 # that calls "clean_content" on Revision.save is deployed to
138 # production, AND the current revisions of all docs have had
139 # their content cleaned with "clean_content".
140 initial.update(content=kuma.wiki.content.parse(content)
141 .filterEditorSafety()
142 .serialize())
143 instance = doc and doc.current_or_latest_revision()
144 rev_form = RevisionForm(request=request,
145 instance=instance,
146 initial=initial,
147 parent_slug=slug_dict['parent'])
148
149 if request.method == 'POST':
150 which_form = request.POST.get('form-type', 'both')
151 doc_form_invalid = False
152
153 # Grab the posted slug value in case it's invalid
154 posted_slug = request.POST.get('slug', slug_dict['specific'])
155
156 if user_has_doc_perm and which_form in ['doc', 'both']:
157 disclose_description = True
158 post_data = request.POST.copy()
159
160 post_data.update({'locale': document_locale})
161
162 doc_form = DocumentForm(post_data, instance=doc,
163 parent_slug=slug_dict['parent'])
164 doc_form.instance.locale = document_locale
165 doc_form.instance.parent = parent_doc
166
167 if which_form == 'both':
168 # Sending a new copy of post so the slug change above
169 # doesn't cause problems during validation
170 rev_form = RevisionForm(request=request,
171 data=post_data,
172 parent_slug=slug_dict['parent'])
173
174 # If we are submitting the whole form, we need to check that
175 # the Revision is valid before saving the Document.
176 if doc_form.is_valid() and (which_form == 'doc' or
177 rev_form.is_valid()):
178 doc = doc_form.save(parent=parent_doc)
179
180 if which_form == 'doc':
181 url = urlparams(doc.get_edit_url(), opendescription=1)
182 return redirect(url)
183 else:
184 doc_form.data['slug'] = posted_slug
185 doc_form_invalid = True
186
187 if doc and which_form in ['rev', 'both']:
188 post_data = request.POST.copy()
189 if 'slug' not in post_data:
190 post_data['slug'] = posted_slug
191
192 # update the post data with the toc_depth of original
193 post_data['toc_depth'] = based_on_rev.toc_depth
194
195 # Pass in the locale for the akistmet "blog_lang".
196 post_data['locale'] = document_locale
197
198 rev_form = RevisionForm(request=request,
199 data=post_data,
200 parent_slug=slug_dict['parent'])
201 rev_form.instance.document = doc # for rev_form.clean()
202
203 if rev_form.is_valid() and not doc_form_invalid:
204 parent_id = request.POST.get('parent_id', '')
205
206 # Attempt to set a parent
207 if parent_id:
208 try:
209 parent_doc = get_object_or_404(Document, id=parent_id)
210 rev_form.instance.document.parent = parent_doc
211 doc.parent = parent_doc
212 rev_form.instance.based_on.document = doc.original
213 except Document.DoesNotExist:
214 pass
215
216 rev_form.save(doc)
217 # If this is an Ajax POST, then return a JsonResponse
218 if request.is_ajax():
219 data = {
220 'error': False,
221 'new_revision_id': rev_form.instance.id,
222 }
223
224 return JsonResponse(data)
225
226 # Construct the redirect URL, adding any needed parameters
227 url = doc.get_absolute_url()
228 params = {}
229 # Parameter for the document saved, so that we can delete the cached draft on load
230 params['rev_saved'] = request.POST.get('current_rev', '')
231 url = '%s?%s' % (url, urlencode(params))
232 return redirect(url)
233 else:
234 # If this is an Ajax POST, then return a JsonResponse with error
235 if request.is_ajax():
236 if 'current_rev' in rev_form._errors:
237 # Make the error message safe so the '<' and '>' don't
238 # get turned into '<' and '>', respectively
239 rev_form.errors['current_rev'][0] = mark_safe(
240 rev_form.errors['current_rev'][0])
241 errors = [rev_form.errors[key][0] for key in rev_form.errors.keys()]
242 data = {
243 "error": True,
244 "error_message": errors,
245 "new_revision_id": rev_form.instance.id,
246 }
247 return JsonResponse(data=data)
248
249 if doc:
250 from_id = smart_int(request.GET.get('from'), None)
251 to_id = smart_int(request.GET.get('to'), None)
252
253 revision_from = get_object_or_none(Revision,
254 pk=from_id,
255 document=doc.parent)
256 revision_to = get_object_or_none(Revision,
257 pk=to_id,
258 document=doc.parent)
259 else:
260 revision_from = revision_to = None
261
262 parent_split = split_slug(parent_doc.slug)
263
264 language_mapping = get_language_mapping()
265 language = language_mapping[document_locale.lower()]
266 default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]
267
268 context = {
269 'parent': parent_doc,
270 'document': doc,
271 'document_form': doc_form,
272 'revision_form': rev_form,
273 'locale': document_locale,
274 'default_locale': default_locale,
275 'language': language,
276 'based_on': based_on_rev,
277 'disclose_description': disclose_description,
278 'discard_href': discard_href,
279 'attachment_form': AttachmentRevisionForm(),
280 'specific_slug': parent_split['specific'],
281 'parent_slug': parent_split['parent'],
282 'revision_from': revision_from,
283 'revision_to': revision_to,
284 }
285 return render(request, 'wiki/translate.html', context)
286
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kuma/wiki/views/translate.py b/kuma/wiki/views/translate.py
--- a/kuma/wiki/views/translate.py
+++ b/kuma/wiki/views/translate.py
@@ -2,7 +2,7 @@
from csp.decorators import csp_update
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
-from django.http import JsonResponse
+from django.http import Http404, JsonResponse
from django.shortcuts import get_object_or_404, redirect, render
from django.utils.safestring import mark_safe
from django.utils.six.moves.urllib.parse import urlencode
@@ -62,10 +62,18 @@
locale=settings.WIKI_DEFAULT_LANGUAGE,
slug=document_slug)
+ # Get the mapping here and now so it can be used for input validation
+ language_mapping = get_language_mapping()
+
# HACK: Seems weird, but sticking the translate-to locale in a query
# param is the best way to avoid the MindTouch-legacy locale
# redirection logic.
document_locale = request.GET.get('tolocale', document_locale)
+ if document_locale.lower() not in language_mapping:
+ # The 'tolocale' query string parameters aren't free-text. They're
+ # explicitly listed on the "Select language" page (`...$locales`)
+ # If a locale was entered that wasn't a link it's a user bug.
+ raise Http404
# Set a "Discard Changes" page
discard_href = ''
@@ -261,7 +269,6 @@
parent_split = split_slug(parent_doc.slug)
- language_mapping = get_language_mapping()
language = language_mapping[document_locale.lower()]
default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]
| {"golden_diff": "diff --git a/kuma/wiki/views/translate.py b/kuma/wiki/views/translate.py\n--- a/kuma/wiki/views/translate.py\n+++ b/kuma/wiki/views/translate.py\n@@ -2,7 +2,7 @@\n from csp.decorators import csp_update\n from django.conf import settings\n from django.core.exceptions import ObjectDoesNotExist\n-from django.http import JsonResponse\n+from django.http import Http404, JsonResponse\n from django.shortcuts import get_object_or_404, redirect, render\n from django.utils.safestring import mark_safe\n from django.utils.six.moves.urllib.parse import urlencode\n@@ -62,10 +62,18 @@\n locale=settings.WIKI_DEFAULT_LANGUAGE,\n slug=document_slug)\n \n+ # Get the mapping here and now so it can be used for input validation\n+ language_mapping = get_language_mapping()\n+\n # HACK: Seems weird, but sticking the translate-to locale in a query\n # param is the best way to avoid the MindTouch-legacy locale\n # redirection logic.\n document_locale = request.GET.get('tolocale', document_locale)\n+ if document_locale.lower() not in language_mapping:\n+ # The 'tolocale' query string parameters aren't free-text. They're\n+ # explicitly listed on the \"Select language\" page (`...$locales`)\n+ # If a locale was entered that wasn't a link it's a user bug.\n+ raise Http404\n \n # Set a \"Discard Changes\" page\n discard_href = ''\n@@ -261,7 +269,6 @@\n \n parent_split = split_slug(parent_doc.slug)\n \n- language_mapping = get_language_mapping()\n language = language_mapping[document_locale.lower()]\n default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]\n", "issue": "wiki.views.translate with `tolocale` invalid causes ISE\nE.g. https://sentry.prod.mozaws.net/operations/mdn-prod/issues/6147484/?environment=oregon%3Aprod\r\n\r\n\r\n```\r\nKeyError: u'he?async'\r\n File \"django/core/handlers/exception.py\", line 41, in inner\r\n response = get_response(request)\r\n File \"django/core/handlers/base.py\", line 187, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"django/core/handlers/base.py\", line 185, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"newrelic/hooks/framework_django.py\", line 544, in wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"kuma/core/decorators.py\", line 213, in wrapped\r\n return func(request, *args, **kwargs)\r\n File \"django/views/decorators/cache.py\", line 57, in _wrapped_view_func\r\n response = view_func(request, *args, **kwargs)\r\n File \"kuma/core/decorators.py\", line 148, in agent_blocked_view\r\n return view_func(request, *args, **kwargs)\r\n File \"kuma/core/decorators.py\", line 78, in _wrapped_view\r\n return view_fn(request, *args, **kwargs)\r\n File \"csp/decorators.py\", line 19, in _wrapped\r\n r = f(*a, **kw)\r\n File \"kuma/wiki/decorators.py\", line 105, in process\r\n return func(request, *args, **kwargs)\r\n File \"kuma/wiki/decorators.py\", line 48, in _check_readonly\r\n return view(request, *args, **kwargs)\r\n File \"kuma/wiki/decorators.py\", line 21, in _added_header\r\n response = func(request, *args, **kwargs)\r\n File \"kuma/wiki/views/translate.py\", line 265, in translate\r\n language = language_mapping[document_locale.lower()]\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom csp.decorators import csp_update\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404, redirect, 
render\nfrom django.utils.safestring import mark_safe\nfrom django.utils.six.moves.urllib.parse import urlencode\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.decorators.cache import never_cache\n\nimport kuma.wiki.content\nfrom kuma.attachments.forms import AttachmentRevisionForm\nfrom kuma.core.decorators import (block_user_agents, ensure_wiki_domain,\n login_required)\nfrom kuma.core.i18n import get_language_mapping\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import get_object_or_none, smart_int, urlparams\n\nfrom .utils import document_form_initial, split_slug\nfrom ..decorators import (check_readonly, prevent_indexing,\n process_document_path)\nfrom ..forms import DocumentForm, RevisionForm\nfrom ..models import Document, Revision\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@process_document_path\ndef select_locale(request, document_slug, document_locale):\n \"\"\"\n Select a locale to translate the document to.\n \"\"\"\n doc = get_object_or_404(Document,\n locale=document_locale,\n slug=document_slug)\n return render(request, 'wiki/select_locale.html', {'document': doc})\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@csp_update(SCRIPT_SRC=\"'unsafe-eval'\") # Required until CKEditor 4.7\n@process_document_path\n@check_readonly\n@prevent_indexing\ndef translate(request, document_slug, document_locale):\n \"\"\"\n Create a new translation of a wiki document.\n\n * document_slug is for the default locale\n * translation is to the request locale\n \"\"\"\n # TODO: Refactor this view into two views? (new, edit)\n # That might help reduce the headache-inducing branchiness.\n\n # The parent document to translate from\n parent_doc = get_object_or_404(Document,\n locale=settings.WIKI_DEFAULT_LANGUAGE,\n slug=document_slug)\n\n # HACK: Seems weird, but sticking the translate-to locale in a query\n # param is the best way to avoid the MindTouch-legacy locale\n # redirection logic.\n document_locale = request.GET.get('tolocale', document_locale)\n\n # Set a \"Discard Changes\" page\n discard_href = ''\n\n if settings.WIKI_DEFAULT_LANGUAGE == document_locale:\n # Don't translate to the default language.\n return redirect(reverse(\n 'wiki.edit', locale=settings.WIKI_DEFAULT_LANGUAGE,\n args=[parent_doc.slug]))\n\n if not parent_doc.is_localizable:\n message = _(u'You cannot translate this document.')\n context = {'message': message}\n return render(request, 'handlers/400.html', context, status=400)\n\n based_on_rev = parent_doc.current_or_latest_revision()\n\n disclose_description = bool(request.GET.get('opendescription'))\n\n try:\n doc = parent_doc.translations.get(locale=document_locale)\n slug_dict = split_slug(doc.slug)\n except Document.DoesNotExist:\n doc = None\n disclose_description = True\n slug_dict = split_slug(document_slug)\n\n # Find the \"real\" parent topic, which is its translation\n if parent_doc.parent_topic:\n try:\n parent_topic_translated_doc = (parent_doc.parent_topic\n .translations\n .get(locale=document_locale))\n slug_dict = split_slug(parent_topic_translated_doc.slug +\n '/' +\n slug_dict['specific'])\n except ObjectDoesNotExist:\n pass\n\n user_has_doc_perm = (not doc) or (doc and doc.allows_editing_by(request.user))\n\n doc_form = None\n if user_has_doc_perm:\n if doc:\n # If there's an existing doc, populate form from it.\n discard_href = doc.get_absolute_url()\n doc.slug = slug_dict['specific']\n doc_initial = document_form_initial(doc)\n else:\n # 
If no existing doc, bring over the original title and slug.\n discard_href = parent_doc.get_absolute_url()\n doc_initial = {'title': based_on_rev.title,\n 'slug': slug_dict['specific']}\n doc_form = DocumentForm(initial=doc_initial,\n parent_slug=slug_dict['parent'])\n\n initial = {\n 'based_on': based_on_rev.id,\n 'current_rev': doc.current_or_latest_revision().id if doc else None,\n 'comment': '',\n 'toc_depth': based_on_rev.toc_depth,\n 'localization_tags': ['inprogress'],\n }\n content = None\n if not doc:\n content = based_on_rev.content\n if content:\n # TODO: There will be no need to \"filterEditorSafety\" when the code\n # that calls \"clean_content\" on Revision.save is deployed to\n # production, AND the current revisions of all docs have had\n # their content cleaned with \"clean_content\".\n initial.update(content=kuma.wiki.content.parse(content)\n .filterEditorSafety()\n .serialize())\n instance = doc and doc.current_or_latest_revision()\n rev_form = RevisionForm(request=request,\n instance=instance,\n initial=initial,\n parent_slug=slug_dict['parent'])\n\n if request.method == 'POST':\n which_form = request.POST.get('form-type', 'both')\n doc_form_invalid = False\n\n # Grab the posted slug value in case it's invalid\n posted_slug = request.POST.get('slug', slug_dict['specific'])\n\n if user_has_doc_perm and which_form in ['doc', 'both']:\n disclose_description = True\n post_data = request.POST.copy()\n\n post_data.update({'locale': document_locale})\n\n doc_form = DocumentForm(post_data, instance=doc,\n parent_slug=slug_dict['parent'])\n doc_form.instance.locale = document_locale\n doc_form.instance.parent = parent_doc\n\n if which_form == 'both':\n # Sending a new copy of post so the slug change above\n # doesn't cause problems during validation\n rev_form = RevisionForm(request=request,\n data=post_data,\n parent_slug=slug_dict['parent'])\n\n # If we are submitting the whole form, we need to check that\n # the Revision is valid before saving the Document.\n if doc_form.is_valid() and (which_form == 'doc' or\n rev_form.is_valid()):\n doc = doc_form.save(parent=parent_doc)\n\n if which_form == 'doc':\n url = urlparams(doc.get_edit_url(), opendescription=1)\n return redirect(url)\n else:\n doc_form.data['slug'] = posted_slug\n doc_form_invalid = True\n\n if doc and which_form in ['rev', 'both']:\n post_data = request.POST.copy()\n if 'slug' not in post_data:\n post_data['slug'] = posted_slug\n\n # update the post data with the toc_depth of original\n post_data['toc_depth'] = based_on_rev.toc_depth\n\n # Pass in the locale for the akistmet \"blog_lang\".\n post_data['locale'] = document_locale\n\n rev_form = RevisionForm(request=request,\n data=post_data,\n parent_slug=slug_dict['parent'])\n rev_form.instance.document = doc # for rev_form.clean()\n\n if rev_form.is_valid() and not doc_form_invalid:\n parent_id = request.POST.get('parent_id', '')\n\n # Attempt to set a parent\n if parent_id:\n try:\n parent_doc = get_object_or_404(Document, id=parent_id)\n rev_form.instance.document.parent = parent_doc\n doc.parent = parent_doc\n rev_form.instance.based_on.document = doc.original\n except Document.DoesNotExist:\n pass\n\n rev_form.save(doc)\n # If this is an Ajax POST, then return a JsonResponse\n if request.is_ajax():\n data = {\n 'error': False,\n 'new_revision_id': rev_form.instance.id,\n }\n\n return JsonResponse(data)\n\n # Construct the redirect URL, adding any needed parameters\n url = doc.get_absolute_url()\n params = {}\n # Parameter for the document saved, so that we 
can delete the cached draft on load\n params['rev_saved'] = request.POST.get('current_rev', '')\n url = '%s?%s' % (url, urlencode(params))\n return redirect(url)\n else:\n # If this is an Ajax POST, then return a JsonResponse with error\n if request.is_ajax():\n if 'current_rev' in rev_form._errors:\n # Make the error message safe so the '<' and '>' don't\n # get turned into '<' and '>', respectively\n rev_form.errors['current_rev'][0] = mark_safe(\n rev_form.errors['current_rev'][0])\n errors = [rev_form.errors[key][0] for key in rev_form.errors.keys()]\n data = {\n \"error\": True,\n \"error_message\": errors,\n \"new_revision_id\": rev_form.instance.id,\n }\n return JsonResponse(data=data)\n\n if doc:\n from_id = smart_int(request.GET.get('from'), None)\n to_id = smart_int(request.GET.get('to'), None)\n\n revision_from = get_object_or_none(Revision,\n pk=from_id,\n document=doc.parent)\n revision_to = get_object_or_none(Revision,\n pk=to_id,\n document=doc.parent)\n else:\n revision_from = revision_to = None\n\n parent_split = split_slug(parent_doc.slug)\n\n language_mapping = get_language_mapping()\n language = language_mapping[document_locale.lower()]\n default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]\n\n context = {\n 'parent': parent_doc,\n 'document': doc,\n 'document_form': doc_form,\n 'revision_form': rev_form,\n 'locale': document_locale,\n 'default_locale': default_locale,\n 'language': language,\n 'based_on': based_on_rev,\n 'disclose_description': disclose_description,\n 'discard_href': discard_href,\n 'attachment_form': AttachmentRevisionForm(),\n 'specific_slug': parent_split['specific'],\n 'parent_slug': parent_split['parent'],\n 'revision_from': revision_from,\n 'revision_to': revision_to,\n }\n return render(request, 'wiki/translate.html', context)\n", "path": "kuma/wiki/views/translate.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom csp.decorators import csp_update\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.http import Http404, JsonResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.safestring import mark_safe\nfrom django.utils.six.moves.urllib.parse import urlencode\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.decorators.cache import never_cache\n\nimport kuma.wiki.content\nfrom kuma.attachments.forms import AttachmentRevisionForm\nfrom kuma.core.decorators import (block_user_agents, ensure_wiki_domain,\n login_required)\nfrom kuma.core.i18n import get_language_mapping\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import get_object_or_none, smart_int, urlparams\n\nfrom .utils import document_form_initial, split_slug\nfrom ..decorators import (check_readonly, prevent_indexing,\n process_document_path)\nfrom ..forms import DocumentForm, RevisionForm\nfrom ..models import Document, Revision\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@process_document_path\ndef select_locale(request, document_slug, document_locale):\n \"\"\"\n Select a locale to translate the document to.\n \"\"\"\n doc = get_object_or_404(Document,\n locale=document_locale,\n slug=document_slug)\n return render(request, 'wiki/select_locale.html', {'document': doc})\n\n\n@ensure_wiki_domain\n@never_cache\n@block_user_agents\n@login_required\n@csp_update(SCRIPT_SRC=\"'unsafe-eval'\") # Required until CKEditor 4.7\n@process_document_path\n@check_readonly\n@prevent_indexing\ndef 
translate(request, document_slug, document_locale):\n \"\"\"\n Create a new translation of a wiki document.\n\n * document_slug is for the default locale\n * translation is to the request locale\n \"\"\"\n # TODO: Refactor this view into two views? (new, edit)\n # That might help reduce the headache-inducing branchiness.\n\n # The parent document to translate from\n parent_doc = get_object_or_404(Document,\n locale=settings.WIKI_DEFAULT_LANGUAGE,\n slug=document_slug)\n\n # Get the mapping here and now so it can be used for input validation\n language_mapping = get_language_mapping()\n\n # HACK: Seems weird, but sticking the translate-to locale in a query\n # param is the best way to avoid the MindTouch-legacy locale\n # redirection logic.\n document_locale = request.GET.get('tolocale', document_locale)\n if document_locale.lower() not in language_mapping:\n # The 'tolocale' query string parameters aren't free-text. They're\n # explicitly listed on the \"Select language\" page (`...$locales`)\n # If a locale was entered that wasn't a link it's a user bug.\n raise Http404\n\n # Set a \"Discard Changes\" page\n discard_href = ''\n\n if settings.WIKI_DEFAULT_LANGUAGE == document_locale:\n # Don't translate to the default language.\n return redirect(reverse(\n 'wiki.edit', locale=settings.WIKI_DEFAULT_LANGUAGE,\n args=[parent_doc.slug]))\n\n if not parent_doc.is_localizable:\n message = _(u'You cannot translate this document.')\n context = {'message': message}\n return render(request, 'handlers/400.html', context, status=400)\n\n based_on_rev = parent_doc.current_or_latest_revision()\n\n disclose_description = bool(request.GET.get('opendescription'))\n\n try:\n doc = parent_doc.translations.get(locale=document_locale)\n slug_dict = split_slug(doc.slug)\n except Document.DoesNotExist:\n doc = None\n disclose_description = True\n slug_dict = split_slug(document_slug)\n\n # Find the \"real\" parent topic, which is its translation\n if parent_doc.parent_topic:\n try:\n parent_topic_translated_doc = (parent_doc.parent_topic\n .translations\n .get(locale=document_locale))\n slug_dict = split_slug(parent_topic_translated_doc.slug +\n '/' +\n slug_dict['specific'])\n except ObjectDoesNotExist:\n pass\n\n user_has_doc_perm = (not doc) or (doc and doc.allows_editing_by(request.user))\n\n doc_form = None\n if user_has_doc_perm:\n if doc:\n # If there's an existing doc, populate form from it.\n discard_href = doc.get_absolute_url()\n doc.slug = slug_dict['specific']\n doc_initial = document_form_initial(doc)\n else:\n # If no existing doc, bring over the original title and slug.\n discard_href = parent_doc.get_absolute_url()\n doc_initial = {'title': based_on_rev.title,\n 'slug': slug_dict['specific']}\n doc_form = DocumentForm(initial=doc_initial,\n parent_slug=slug_dict['parent'])\n\n initial = {\n 'based_on': based_on_rev.id,\n 'current_rev': doc.current_or_latest_revision().id if doc else None,\n 'comment': '',\n 'toc_depth': based_on_rev.toc_depth,\n 'localization_tags': ['inprogress'],\n }\n content = None\n if not doc:\n content = based_on_rev.content\n if content:\n # TODO: There will be no need to \"filterEditorSafety\" when the code\n # that calls \"clean_content\" on Revision.save is deployed to\n # production, AND the current revisions of all docs have had\n # their content cleaned with \"clean_content\".\n initial.update(content=kuma.wiki.content.parse(content)\n .filterEditorSafety()\n .serialize())\n instance = doc and doc.current_or_latest_revision()\n rev_form = 
RevisionForm(request=request,\n instance=instance,\n initial=initial,\n parent_slug=slug_dict['parent'])\n\n if request.method == 'POST':\n which_form = request.POST.get('form-type', 'both')\n doc_form_invalid = False\n\n # Grab the posted slug value in case it's invalid\n posted_slug = request.POST.get('slug', slug_dict['specific'])\n\n if user_has_doc_perm and which_form in ['doc', 'both']:\n disclose_description = True\n post_data = request.POST.copy()\n\n post_data.update({'locale': document_locale})\n\n doc_form = DocumentForm(post_data, instance=doc,\n parent_slug=slug_dict['parent'])\n doc_form.instance.locale = document_locale\n doc_form.instance.parent = parent_doc\n\n if which_form == 'both':\n # Sending a new copy of post so the slug change above\n # doesn't cause problems during validation\n rev_form = RevisionForm(request=request,\n data=post_data,\n parent_slug=slug_dict['parent'])\n\n # If we are submitting the whole form, we need to check that\n # the Revision is valid before saving the Document.\n if doc_form.is_valid() and (which_form == 'doc' or\n rev_form.is_valid()):\n doc = doc_form.save(parent=parent_doc)\n\n if which_form == 'doc':\n url = urlparams(doc.get_edit_url(), opendescription=1)\n return redirect(url)\n else:\n doc_form.data['slug'] = posted_slug\n doc_form_invalid = True\n\n if doc and which_form in ['rev', 'both']:\n post_data = request.POST.copy()\n if 'slug' not in post_data:\n post_data['slug'] = posted_slug\n\n # update the post data with the toc_depth of original\n post_data['toc_depth'] = based_on_rev.toc_depth\n\n # Pass in the locale for the akistmet \"blog_lang\".\n post_data['locale'] = document_locale\n\n rev_form = RevisionForm(request=request,\n data=post_data,\n parent_slug=slug_dict['parent'])\n rev_form.instance.document = doc # for rev_form.clean()\n\n if rev_form.is_valid() and not doc_form_invalid:\n parent_id = request.POST.get('parent_id', '')\n\n # Attempt to set a parent\n if parent_id:\n try:\n parent_doc = get_object_or_404(Document, id=parent_id)\n rev_form.instance.document.parent = parent_doc\n doc.parent = parent_doc\n rev_form.instance.based_on.document = doc.original\n except Document.DoesNotExist:\n pass\n\n rev_form.save(doc)\n # If this is an Ajax POST, then return a JsonResponse\n if request.is_ajax():\n data = {\n 'error': False,\n 'new_revision_id': rev_form.instance.id,\n }\n\n return JsonResponse(data)\n\n # Construct the redirect URL, adding any needed parameters\n url = doc.get_absolute_url()\n params = {}\n # Parameter for the document saved, so that we can delete the cached draft on load\n params['rev_saved'] = request.POST.get('current_rev', '')\n url = '%s?%s' % (url, urlencode(params))\n return redirect(url)\n else:\n # If this is an Ajax POST, then return a JsonResponse with error\n if request.is_ajax():\n if 'current_rev' in rev_form._errors:\n # Make the error message safe so the '<' and '>' don't\n # get turned into '<' and '>', respectively\n rev_form.errors['current_rev'][0] = mark_safe(\n rev_form.errors['current_rev'][0])\n errors = [rev_form.errors[key][0] for key in rev_form.errors.keys()]\n data = {\n \"error\": True,\n \"error_message\": errors,\n \"new_revision_id\": rev_form.instance.id,\n }\n return JsonResponse(data=data)\n\n if doc:\n from_id = smart_int(request.GET.get('from'), None)\n to_id = smart_int(request.GET.get('to'), None)\n\n revision_from = get_object_or_none(Revision,\n pk=from_id,\n document=doc.parent)\n revision_to = get_object_or_none(Revision,\n pk=to_id,\n 
document=doc.parent)\n else:\n revision_from = revision_to = None\n\n parent_split = split_slug(parent_doc.slug)\n\n language = language_mapping[document_locale.lower()]\n default_locale = language_mapping[settings.WIKI_DEFAULT_LANGUAGE.lower()]\n\n context = {\n 'parent': parent_doc,\n 'document': doc,\n 'document_form': doc_form,\n 'revision_form': rev_form,\n 'locale': document_locale,\n 'default_locale': default_locale,\n 'language': language,\n 'based_on': based_on_rev,\n 'disclose_description': disclose_description,\n 'discard_href': discard_href,\n 'attachment_form': AttachmentRevisionForm(),\n 'specific_slug': parent_split['specific'],\n 'parent_slug': parent_split['parent'],\n 'revision_from': revision_from,\n 'revision_to': revision_to,\n }\n return render(request, 'wiki/translate.html', context)\n", "path": "kuma/wiki/views/translate.py"}]} | 3,762 | 390 |
gh_patches_debug_34239 | rasdani/github-patches | git_diff | easybuilders__easybuild-framework-3140 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python RuntimeError during sanity check with python3
When installing easybuild with `pip install easybuild` using python3 (I tried with python3.5 and python3.6), the build process fails during the sanity checking step with
```
RuntimeError: dictionary changed size during iteration
```
which happens in [the block at lines 173 to 177 of easybuild/tools/environment.py](https://github.com/easybuilders/easybuild-framework/blob/2e5d9f00f9f83e1f27b38d1aa17e1b614f2c4aad/easybuild/tools/environment.py#L173-L177). I have attached the [full traceback](https://github.com/easybuilders/easybuild-framework/files/3987846/traceback.txt) because it is a bit too long to paste here.
I only tried a couple of easyconfigs, for instance [Julia-1.2.0-linux-x86_64.eb](https://github.com/easybuilders/easybuild-easyconfigs/blob/master/easybuild/easyconfigs/j/Julia/Julia-1.2.0-linux-x86_64.eb). The error can also be reproduced with `eb --install-latest-eb-release`.
--- END ISSUE ---
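For context, the failure mode is easy to reproduce outside of EasyBuild. The sketch below is a hypothetical, self-contained illustration (the dictionary names and values are made up): deleting keys while iterating over a live key view raises exactly this RuntimeError on Python 3, and snapshotting the keys with `list()` first avoids it, which matches the approach taken by the accepted patch further down this row. In the report above, the mapping involved is `os.environ` on Python 3.5/3.6.

```python
# Hypothetical, standalone sketch; names and values are illustrative only.
old_env = {"PATH": "/usr/bin", "EBROOTFOO": "/apps/foo", "EBROOTBAR": "/apps/bar"}
new_env = {"PATH": "/usr/bin"}

try:
    for key in old_env.keys():      # live view over the dict being modified
        if key not in new_env:
            del old_env[key]        # mutating the dict during iteration
except RuntimeError as err:
    print(err)                      # dictionary changed size during iteration

# Fix: snapshot the keys before mutating.
old_env = {"PATH": "/usr/bin", "EBROOTFOO": "/apps/foo", "EBROOTBAR": "/apps/bar"}
for key in list(old_env.keys()):
    if key not in new_env:
        del old_env[key]
print(old_env)                      # {'PATH': '/usr/bin'}
```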
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/tools/environment.py`
Content:
```
1 ##
2 # Copyright 2012-2019 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 Utility module for modifying os.environ
27
28 :author: Toon Willems (Ghent University)
29 :author: Ward Poelmans (Ghent University)
30 """
31 import copy
32 import os
33
34 from easybuild.base import fancylogger
35 from easybuild.tools.build_log import EasyBuildError, dry_run_msg
36 from easybuild.tools.config import build_option
37 from easybuild.tools.utilities import shell_quote
38
39
40 # take copy of original environemt, so we can restore (parts of) it later
41 ORIG_OS_ENVIRON = copy.deepcopy(os.environ)
42
43
44 _log = fancylogger.getLogger('environment', fname=False)
45
46 _changes = {}
47
48
49 def write_changes(filename):
50 """
51 Write current changes to filename and reset environment afterwards
52 """
53 script = None
54 try:
55 script = open(filename, 'w')
56
57 for key in _changes:
58 script.write('export %s=%s\n' % (key, shell_quote(_changes[key])))
59
60 script.close()
61 except IOError as err:
62 if script is not None:
63 script.close()
64 raise EasyBuildError("Failed to write to %s: %s", filename, err)
65 reset_changes()
66
67
68 def reset_changes():
69 """
70 Reset the changes tracked by this module
71 """
72 global _changes
73 _changes = {}
74
75
76 def get_changes():
77 """
78 Return tracked changes made in environment.
79 """
80 return _changes
81
82
83 def setvar(key, value, verbose=True):
84 """
85 put key in the environment with value
86 tracks added keys until write_changes has been called
87
88 :param verbose: include message in dry run output for defining this environment variable
89 """
90 if key in os.environ:
91 oldval_info = "previous value: '%s'" % os.environ[key]
92 else:
93 oldval_info = "previously undefined"
94 # os.putenv() is not necessary. os.environ will call this.
95 os.environ[key] = value
96 _changes[key] = value
97 _log.info("Environment variable %s set to %s (%s)", key, value, oldval_info)
98
99 if verbose and build_option('extended_dry_run'):
100 quoted_value = shell_quote(value)
101 if quoted_value[0] not in ['"', "'"]:
102 quoted_value = '"%s"' % quoted_value
103 dry_run_msg(" export %s=%s" % (key, quoted_value), silent=build_option('silent'))
104
105
106 def unset_env_vars(keys, verbose=True):
107 """
108 Unset the keys given in the environment
109 Returns a dict with the old values of the unset keys
110 """
111 old_environ = {}
112
113 if keys and verbose and build_option('extended_dry_run'):
114 dry_run_msg("Undefining environment variables:\n", silent=build_option('silent'))
115
116 for key in keys:
117 if key in os.environ:
118 _log.info("Unsetting environment variable %s (value: %s)" % (key, os.environ[key]))
119 old_environ[key] = os.environ[key]
120 del os.environ[key]
121 if verbose and build_option('extended_dry_run'):
122 dry_run_msg(" unset %s # value was: %s" % (key, old_environ[key]), silent=build_option('silent'))
123
124 return old_environ
125
126
127 def restore_env_vars(env_keys):
128 """
129 Restore the environment by setting the keys in the env_keys dict again with their old value
130 """
131 for key in env_keys:
132 if env_keys[key] is not None:
133 _log.info("Restoring environment variable %s (value: %s)" % (key, env_keys[key]))
134 os.environ[key] = env_keys[key]
135
136
137 def read_environment(env_vars, strict=False):
138 """
139 Read variables from the environment
140 :param env_vars: a dict with key a name, value a environment variable name
141 :param strict: boolean, if True enforces that all specified environment variables are found
142 """
143 result = dict([(k, os.environ.get(v)) for k, v in env_vars.items() if v in os.environ])
144
145 if not len(env_vars) == len(result):
146 missing = ','.join(["%s / %s" % (k, v) for k, v in env_vars.items() if k not in result])
147 msg = 'Following name/variable not found in environment: %s' % missing
148 if strict:
149 raise EasyBuildError(msg)
150 else:
151 _log.debug(msg)
152
153 return result
154
155
156 def modify_env(old, new, verbose=True):
157 """
158 Compares 2 os.environ dumps. Adapts final environment.
159 """
160 oldKeys = old.keys()
161 newKeys = new.keys()
162 for key in newKeys:
163 # set them all. no smart checking for changed/identical values
164 if key in oldKeys:
165 # hmm, smart checking with debug logging
166 if not new[key] == old[key]:
167 _log.debug("Key in new environment found that is different from old one: %s (%s)" % (key, new[key]))
168 setvar(key, new[key], verbose=verbose)
169 else:
170 _log.debug("Key in new environment found that is not in old one: %s (%s)" % (key, new[key]))
171 setvar(key, new[key], verbose=verbose)
172
173 for key in oldKeys:
174 if key not in newKeys:
175 _log.debug("Key in old environment found that is not in new one: %s (%s)" % (key, old[key]))
176 os.unsetenv(key)
177 del os.environ[key]
178
179
180 def restore_env(env):
181 """
182 Restore active environment based on specified dictionary.
183 """
184 modify_env(os.environ, env, verbose=False)
185
186
187 def sanitize_env():
188 """
189 Sanitize environment.
190
191 This function undefines all $PYTHON* environment variables,
192 since they may affect the build/install procedure of Python packages.
193
194 cfr. https://docs.python.org/2/using/cmdline.html#environment-variables
195
196 While the $PYTHON* environment variables may be relevant/required for EasyBuild itself,
197 and for any non-stdlib Python packages it uses,
198 they are irrelevant (and potentially harmful) when installing Python packages.
199
200 Note that this is not an airtight protection against the Python being used in the build/install procedure
201 picking up non-stdlib Python packages (e.g., setuptools, vsc-base, ...), thanks to the magic of .pth files,
202 cfr. https://docs.python.org/2/library/site.html .
203 """
204 keys_to_unset = [key for key in os.environ if key.startswith('PYTHON')]
205 unset_env_vars(keys_to_unset, verbose=False)
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/easybuild/tools/environment.py b/easybuild/tools/environment.py
--- a/easybuild/tools/environment.py
+++ b/easybuild/tools/environment.py
@@ -113,7 +113,7 @@
if keys and verbose and build_option('extended_dry_run'):
dry_run_msg("Undefining environment variables:\n", silent=build_option('silent'))
- for key in keys:
+ for key in list(keys):
if key in os.environ:
_log.info("Unsetting environment variable %s (value: %s)" % (key, os.environ[key]))
old_environ[key] = os.environ[key]
@@ -155,24 +155,25 @@
def modify_env(old, new, verbose=True):
"""
- Compares 2 os.environ dumps. Adapts final environment.
+ Compares two os.environ dumps. Adapts final environment.
"""
- oldKeys = old.keys()
- newKeys = new.keys()
- for key in newKeys:
+ old_keys = list(old.keys())
+ new_keys = list(new.keys())
+
+ for key in new_keys:
# set them all. no smart checking for changed/identical values
- if key in oldKeys:
+ if key in old_keys:
# hmm, smart checking with debug logging
if not new[key] == old[key]:
- _log.debug("Key in new environment found that is different from old one: %s (%s)" % (key, new[key]))
+ _log.debug("Key in new environment found that is different from old one: %s (%s)", key, new[key])
setvar(key, new[key], verbose=verbose)
else:
- _log.debug("Key in new environment found that is not in old one: %s (%s)" % (key, new[key]))
+ _log.debug("Key in new environment found that is not in old one: %s (%s)", key, new[key])
setvar(key, new[key], verbose=verbose)
- for key in oldKeys:
- if key not in newKeys:
- _log.debug("Key in old environment found that is not in new one: %s (%s)" % (key, old[key]))
+ for key in old_keys:
+ if key not in new_keys:
+ _log.debug("Key in old environment found that is not in new one: %s (%s)", key, old[key])
os.unsetenv(key)
del os.environ[key]
| {"golden_diff": "diff --git a/easybuild/tools/environment.py b/easybuild/tools/environment.py\n--- a/easybuild/tools/environment.py\n+++ b/easybuild/tools/environment.py\n@@ -113,7 +113,7 @@\n if keys and verbose and build_option('extended_dry_run'):\n dry_run_msg(\"Undefining environment variables:\\n\", silent=build_option('silent'))\n \n- for key in keys:\n+ for key in list(keys):\n if key in os.environ:\n _log.info(\"Unsetting environment variable %s (value: %s)\" % (key, os.environ[key]))\n old_environ[key] = os.environ[key]\n@@ -155,24 +155,25 @@\n \n def modify_env(old, new, verbose=True):\n \"\"\"\n- Compares 2 os.environ dumps. Adapts final environment.\n+ Compares two os.environ dumps. Adapts final environment.\n \"\"\"\n- oldKeys = old.keys()\n- newKeys = new.keys()\n- for key in newKeys:\n+ old_keys = list(old.keys())\n+ new_keys = list(new.keys())\n+\n+ for key in new_keys:\n # set them all. no smart checking for changed/identical values\n- if key in oldKeys:\n+ if key in old_keys:\n # hmm, smart checking with debug logging\n if not new[key] == old[key]:\n- _log.debug(\"Key in new environment found that is different from old one: %s (%s)\" % (key, new[key]))\n+ _log.debug(\"Key in new environment found that is different from old one: %s (%s)\", key, new[key])\n setvar(key, new[key], verbose=verbose)\n else:\n- _log.debug(\"Key in new environment found that is not in old one: %s (%s)\" % (key, new[key]))\n+ _log.debug(\"Key in new environment found that is not in old one: %s (%s)\", key, new[key])\n setvar(key, new[key], verbose=verbose)\n \n- for key in oldKeys:\n- if key not in newKeys:\n- _log.debug(\"Key in old environment found that is not in new one: %s (%s)\" % (key, old[key]))\n+ for key in old_keys:\n+ if key not in new_keys:\n+ _log.debug(\"Key in old environment found that is not in new one: %s (%s)\", key, old[key])\n os.unsetenv(key)\n del os.environ[key]\n", "issue": "Python RuntimeError during sanity check with python3\nWhen installing easybuild with `pip install easybuild` using python3 (I tried with python3.5 and python3.6), the built process fails during the sanity checking step with\r\n```\r\nRuntimeError: dictionary changed size during iteration\r\n```\r\nthat happens in [the block line 173 to 177 of easybuild/tools/environment.py](https://github.com/easybuilders/easybuild-framework/blob/2e5d9f00f9f83e1f27b38d1aa17e1b614f2c4aad/easybuild/tools/environment.py#L173-L177). I put the [full traceback](https://github.com/easybuilders/easybuild-framework/files/3987846/traceback.txt) in attachment because maybe it's a bit too long.\r\n\r\nI tried only with a couple of easyconfigs, for instance [Julia-1.2.0-linux-x86_64.eb](https://github.com/easybuilders/easybuild-easyconfigs/blob/master/easybuild/easyconfigs/j/Julia/Julia-1.2.0-linux-x86_64.eb). 
It can be reproduced with `eb --install-latest-eb-release` as well.\n", "before_files": [{"content": "##\n# Copyright 2012-2019 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nUtility module for modifying os.environ\n\n:author: Toon Willems (Ghent University)\n:author: Ward Poelmans (Ghent University)\n\"\"\"\nimport copy\nimport os\n\nfrom easybuild.base import fancylogger\nfrom easybuild.tools.build_log import EasyBuildError, dry_run_msg\nfrom easybuild.tools.config import build_option\nfrom easybuild.tools.utilities import shell_quote\n\n\n# take copy of original environemt, so we can restore (parts of) it later\nORIG_OS_ENVIRON = copy.deepcopy(os.environ)\n\n\n_log = fancylogger.getLogger('environment', fname=False)\n\n_changes = {}\n\n\ndef write_changes(filename):\n \"\"\"\n Write current changes to filename and reset environment afterwards\n \"\"\"\n script = None\n try:\n script = open(filename, 'w')\n\n for key in _changes:\n script.write('export %s=%s\\n' % (key, shell_quote(_changes[key])))\n\n script.close()\n except IOError as err:\n if script is not None:\n script.close()\n raise EasyBuildError(\"Failed to write to %s: %s\", filename, err)\n reset_changes()\n\n\ndef reset_changes():\n \"\"\"\n Reset the changes tracked by this module\n \"\"\"\n global _changes\n _changes = {}\n\n\ndef get_changes():\n \"\"\"\n Return tracked changes made in environment.\n \"\"\"\n return _changes\n\n\ndef setvar(key, value, verbose=True):\n \"\"\"\n put key in the environment with value\n tracks added keys until write_changes has been called\n\n :param verbose: include message in dry run output for defining this environment variable\n \"\"\"\n if key in os.environ:\n oldval_info = \"previous value: '%s'\" % os.environ[key]\n else:\n oldval_info = \"previously undefined\"\n # os.putenv() is not necessary. 
os.environ will call this.\n os.environ[key] = value\n _changes[key] = value\n _log.info(\"Environment variable %s set to %s (%s)\", key, value, oldval_info)\n\n if verbose and build_option('extended_dry_run'):\n quoted_value = shell_quote(value)\n if quoted_value[0] not in ['\"', \"'\"]:\n quoted_value = '\"%s\"' % quoted_value\n dry_run_msg(\" export %s=%s\" % (key, quoted_value), silent=build_option('silent'))\n\n\ndef unset_env_vars(keys, verbose=True):\n \"\"\"\n Unset the keys given in the environment\n Returns a dict with the old values of the unset keys\n \"\"\"\n old_environ = {}\n\n if keys and verbose and build_option('extended_dry_run'):\n dry_run_msg(\"Undefining environment variables:\\n\", silent=build_option('silent'))\n\n for key in keys:\n if key in os.environ:\n _log.info(\"Unsetting environment variable %s (value: %s)\" % (key, os.environ[key]))\n old_environ[key] = os.environ[key]\n del os.environ[key]\n if verbose and build_option('extended_dry_run'):\n dry_run_msg(\" unset %s # value was: %s\" % (key, old_environ[key]), silent=build_option('silent'))\n\n return old_environ\n\n\ndef restore_env_vars(env_keys):\n \"\"\"\n Restore the environment by setting the keys in the env_keys dict again with their old value\n \"\"\"\n for key in env_keys:\n if env_keys[key] is not None:\n _log.info(\"Restoring environment variable %s (value: %s)\" % (key, env_keys[key]))\n os.environ[key] = env_keys[key]\n\n\ndef read_environment(env_vars, strict=False):\n \"\"\"\n Read variables from the environment\n :param env_vars: a dict with key a name, value a environment variable name\n :param strict: boolean, if True enforces that all specified environment variables are found\n \"\"\"\n result = dict([(k, os.environ.get(v)) for k, v in env_vars.items() if v in os.environ])\n\n if not len(env_vars) == len(result):\n missing = ','.join([\"%s / %s\" % (k, v) for k, v in env_vars.items() if k not in result])\n msg = 'Following name/variable not found in environment: %s' % missing\n if strict:\n raise EasyBuildError(msg)\n else:\n _log.debug(msg)\n\n return result\n\n\ndef modify_env(old, new, verbose=True):\n \"\"\"\n Compares 2 os.environ dumps. Adapts final environment.\n \"\"\"\n oldKeys = old.keys()\n newKeys = new.keys()\n for key in newKeys:\n # set them all. no smart checking for changed/identical values\n if key in oldKeys:\n # hmm, smart checking with debug logging\n if not new[key] == old[key]:\n _log.debug(\"Key in new environment found that is different from old one: %s (%s)\" % (key, new[key]))\n setvar(key, new[key], verbose=verbose)\n else:\n _log.debug(\"Key in new environment found that is not in old one: %s (%s)\" % (key, new[key]))\n setvar(key, new[key], verbose=verbose)\n\n for key in oldKeys:\n if key not in newKeys:\n _log.debug(\"Key in old environment found that is not in new one: %s (%s)\" % (key, old[key]))\n os.unsetenv(key)\n del os.environ[key]\n\n\ndef restore_env(env):\n \"\"\"\n Restore active environment based on specified dictionary.\n \"\"\"\n modify_env(os.environ, env, verbose=False)\n\n\ndef sanitize_env():\n \"\"\"\n Sanitize environment.\n\n This function undefines all $PYTHON* environment variables,\n since they may affect the build/install procedure of Python packages.\n\n cfr. 
https://docs.python.org/2/using/cmdline.html#environment-variables\n\n While the $PYTHON* environment variables may be relevant/required for EasyBuild itself,\n and for any non-stdlib Python packages it uses,\n they are irrelevant (and potentially harmful) when installing Python packages.\n\n Note that this is not an airtight protection against the Python being used in the build/install procedure\n picking up non-stdlib Python packages (e.g., setuptools, vsc-base, ...), thanks to the magic of .pth files,\n cfr. https://docs.python.org/2/library/site.html .\n \"\"\"\n keys_to_unset = [key for key in os.environ if key.startswith('PYTHON')]\n unset_env_vars(keys_to_unset, verbose=False)\n", "path": "easybuild/tools/environment.py"}], "after_files": [{"content": "##\n# Copyright 2012-2019 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nUtility module for modifying os.environ\n\n:author: Toon Willems (Ghent University)\n:author: Ward Poelmans (Ghent University)\n\"\"\"\nimport copy\nimport os\n\nfrom easybuild.base import fancylogger\nfrom easybuild.tools.build_log import EasyBuildError, dry_run_msg\nfrom easybuild.tools.config import build_option\nfrom easybuild.tools.utilities import shell_quote\n\n\n# take copy of original environemt, so we can restore (parts of) it later\nORIG_OS_ENVIRON = copy.deepcopy(os.environ)\n\n\n_log = fancylogger.getLogger('environment', fname=False)\n\n_changes = {}\n\n\ndef write_changes(filename):\n \"\"\"\n Write current changes to filename and reset environment afterwards\n \"\"\"\n script = None\n try:\n script = open(filename, 'w')\n\n for key in _changes:\n script.write('export %s=%s\\n' % (key, shell_quote(_changes[key])))\n\n script.close()\n except IOError as err:\n if script is not None:\n script.close()\n raise EasyBuildError(\"Failed to write to %s: %s\", filename, err)\n reset_changes()\n\n\ndef reset_changes():\n \"\"\"\n Reset the changes tracked by this module\n \"\"\"\n global _changes\n _changes = {}\n\n\ndef get_changes():\n \"\"\"\n Return tracked changes made in environment.\n \"\"\"\n return _changes\n\n\ndef setvar(key, value, verbose=True):\n \"\"\"\n put key in the environment with value\n tracks added keys until write_changes has been called\n\n :param verbose: include message in dry run output for defining this environment variable\n \"\"\"\n if key in os.environ:\n oldval_info = \"previous value: '%s'\" % os.environ[key]\n else:\n oldval_info = \"previously undefined\"\n # os.putenv() is not necessary. 
os.environ will call this.\n os.environ[key] = value\n _changes[key] = value\n _log.info(\"Environment variable %s set to %s (%s)\", key, value, oldval_info)\n\n if verbose and build_option('extended_dry_run'):\n quoted_value = shell_quote(value)\n if quoted_value[0] not in ['\"', \"'\"]:\n quoted_value = '\"%s\"' % quoted_value\n dry_run_msg(\" export %s=%s\" % (key, quoted_value), silent=build_option('silent'))\n\n\ndef unset_env_vars(keys, verbose=True):\n \"\"\"\n Unset the keys given in the environment\n Returns a dict with the old values of the unset keys\n \"\"\"\n old_environ = {}\n\n if keys and verbose and build_option('extended_dry_run'):\n dry_run_msg(\"Undefining environment variables:\\n\", silent=build_option('silent'))\n\n for key in list(keys):\n if key in os.environ:\n _log.info(\"Unsetting environment variable %s (value: %s)\" % (key, os.environ[key]))\n old_environ[key] = os.environ[key]\n del os.environ[key]\n if verbose and build_option('extended_dry_run'):\n dry_run_msg(\" unset %s # value was: %s\" % (key, old_environ[key]), silent=build_option('silent'))\n\n return old_environ\n\n\ndef restore_env_vars(env_keys):\n \"\"\"\n Restore the environment by setting the keys in the env_keys dict again with their old value\n \"\"\"\n for key in env_keys:\n if env_keys[key] is not None:\n _log.info(\"Restoring environment variable %s (value: %s)\" % (key, env_keys[key]))\n os.environ[key] = env_keys[key]\n\n\ndef read_environment(env_vars, strict=False):\n \"\"\"\n Read variables from the environment\n :param env_vars: a dict with key a name, value a environment variable name\n :param strict: boolean, if True enforces that all specified environment variables are found\n \"\"\"\n result = dict([(k, os.environ.get(v)) for k, v in env_vars.items() if v in os.environ])\n\n if not len(env_vars) == len(result):\n missing = ','.join([\"%s / %s\" % (k, v) for k, v in env_vars.items() if k not in result])\n msg = 'Following name/variable not found in environment: %s' % missing\n if strict:\n raise EasyBuildError(msg)\n else:\n _log.debug(msg)\n\n return result\n\n\ndef modify_env(old, new, verbose=True):\n \"\"\"\n Compares two os.environ dumps. Adapts final environment.\n \"\"\"\n old_keys = list(old.keys())\n new_keys = list(new.keys())\n\n for key in new_keys:\n # set them all. no smart checking for changed/identical values\n if key in old_keys:\n # hmm, smart checking with debug logging\n if not new[key] == old[key]:\n _log.debug(\"Key in new environment found that is different from old one: %s (%s)\", key, new[key])\n setvar(key, new[key], verbose=verbose)\n else:\n _log.debug(\"Key in new environment found that is not in old one: %s (%s)\", key, new[key])\n setvar(key, new[key], verbose=verbose)\n\n for key in old_keys:\n if key not in new_keys:\n _log.debug(\"Key in old environment found that is not in new one: %s (%s)\", key, old[key])\n os.unsetenv(key)\n del os.environ[key]\n\n\ndef restore_env(env):\n \"\"\"\n Restore active environment based on specified dictionary.\n \"\"\"\n modify_env(os.environ, env, verbose=False)\n\n\ndef sanitize_env():\n \"\"\"\n Sanitize environment.\n\n This function undefines all $PYTHON* environment variables,\n since they may affect the build/install procedure of Python packages.\n\n cfr. 
https://docs.python.org/2/using/cmdline.html#environment-variables\n\n While the $PYTHON* environment variables may be relevant/required for EasyBuild itself,\n and for any non-stdlib Python packages it uses,\n they are irrelevant (and potentially harmful) when installing Python packages.\n\n Note that this is not an airtight protection against the Python being used in the build/install procedure\n picking up non-stdlib Python packages (e.g., setuptools, vsc-base, ...), thanks to the magic of .pth files,\n cfr. https://docs.python.org/2/library/site.html .\n \"\"\"\n keys_to_unset = [key for key in os.environ if key.startswith('PYTHON')]\n unset_env_vars(keys_to_unset, verbose=False)\n", "path": "easybuild/tools/environment.py"}]} | 2,766 | 557 |
gh_patches_debug_19905 | rasdani/github-patches | git_diff | google__TensorNetwork-456 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`auto` calls `branch` with incorrect number of parameters when n >= 5 and n < 7
https://github.com/google/TensorNetwork/blob/003fdc789afa006e08f12818ad76c5f31828aa69/tensornetwork/contractors/opt_einsum_paths/path_contractors.py#L243
The `branch` method accepts 5 parameters, but here it is called with only 4 positional arguments. As a result, the value intended for `ignore_edge_order` is passed into `branch` as `nbranch`, and there is no way to set `ignore_edge_order` when `n >= 5 and n < 7`.
--- END ISSUE ---
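To make the misbinding concrete, here is a minimal, hypothetical sketch: a stub that only mirrors the parameter order of `branch` as shown in the file below (it is not the real TensorNetwork function), demonstrating how the fourth positional argument lands in `nbranch` while `ignore_edge_order` silently keeps its default, and how keyword arguments avoid that.

```python
# Stub mirroring branch(nodes, output_edge_order=None, memory_limit=None,
#                       nbranch=None, ignore_edge_order=False); illustration only.
def branch(nodes, output_edge_order=None, memory_limit=None,
           nbranch=None, ignore_edge_order=False):
    return {"nbranch": nbranch, "ignore_edge_order": ignore_edge_order}

nodes = ["n1", "n2", "n3"]

# Call shape used by auto() for 5 <= n < 7: four positional arguments,
# so the flag meant for ignore_edge_order binds to nbranch instead.
print(branch(nodes, None, None, True))
# -> {'nbranch': True, 'ignore_edge_order': False}

# Passing keywords keeps every value in its intended parameter.
print(branch(nodes, output_edge_order=None, memory_limit=None,
             ignore_edge_order=True))
# -> {'nbranch': None, 'ignore_edge_order': True}
```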
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensornetwork/contractors/opt_einsum_paths/path_contractors.py`
Content:
```
1 # pylint: disable=cyclic-import
2 # Copyright 2019 The TensorNetwork Authors
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Contractors based on `opt_einsum`'s path algorithms."""
16
17 import functools
18 import opt_einsum
19 # pylint: disable=line-too-long
20 from tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling
21 # pylint: disable=line-too-long
22 from tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between
23 from tensornetwork.network_components import Edge, BaseNode
24 from tensornetwork.contractors.opt_einsum_paths import utils
25 from typing import Any, Optional, Sequence, Iterable
26
27 #TODO (martin): add return types of functions back once TensorNetwork is gone
28 # remove _base_network
29 # _base_nodes -> base
30
31
32 def base(nodes: Iterable[BaseNode],
33 algorithm: utils.Algorithm,
34 output_edge_order: Optional[Sequence[Edge]] = None,
35 ignore_edge_order: bool = False) -> BaseNode:
36 """Base method for all `opt_einsum` contractors.
37
38 Args:
39 nodes: A collection of connected nodes.
40 algorithm: `opt_einsum` contraction method to use.
41 output_edge_order: An optional list of edges. Edges of the
42 final node in `nodes_set`
43 are reordered into `output_edge_order`;
44 if final node has more than one edge,
45 `output_edge_order` must be pronvided.
46 ignore_edge_order: An option to ignore the output edge
47 order.
48
49 Returns:
50 Final node after full contraction.
51 """
52 nodes_set = set(nodes)
53 edges = get_all_edges(nodes_set)
54 #output edge order has to be determinded before any contraction
55 #(edges are refreshed after contractions)
56
57 if not ignore_edge_order:
58 if output_edge_order is None:
59 output_edge_order = list(get_subgraph_dangling(nodes))
60 if len(output_edge_order) > 1:
61 raise ValueError("The final node after contraction has more than "
62 "one remaining edge. In this case `output_edge_order` "
63 "has to be provided.")
64
65 if set(output_edge_order) != get_subgraph_dangling(nodes):
66 raise ValueError(
67 "output edges are not equal to the remaining "
68 "non-contracted edges of the final node."
69 )
70
71 for edge in edges:
72 if not edge.is_disabled: #if its disabled we already contracted it
73 if edge.is_trace():
74 nodes_set.remove(edge.node1)
75 nodes_set.add(contract_parallel(edge))
76
77 if len(nodes_set) == 1:
78 # There's nothing to contract.
79 if ignore_edge_order:
80 return list(nodes_set)[0]
81 return list(nodes_set)[0].reorder_edges(output_edge_order)
82
83 # Then apply `opt_einsum`'s algorithm
84 path, nodes = utils.get_path(nodes_set, algorithm)
85 for a, b in path:
86 new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)
87 nodes.append(new_node)
88 nodes = utils.multi_remove(nodes, [a, b])
89
90 # if the final node has more than one edge,
91 # output_edge_order has to be specified
92 final_node = nodes[0] # nodes were connected, we checked this
93 if not ignore_edge_order:
94 final_node.reorder_edges(output_edge_order)
95 return final_node
96
97
98 def optimal(
99 nodes: Iterable[BaseNode],
100 output_edge_order: Optional[Sequence[Edge]] = None,
101 memory_limit: Optional[int] = None,
102 ignore_edge_order: bool = False) -> BaseNode:
103 """Optimal contraction order via `opt_einsum`.
104
105 This method will find the truly optimal contraction order via
106 `opt_einsum`'s depth first search algorithm. Since this search is
107 exhaustive, if your network is large (n>10), then the search may
108 take longer than just contracting in a suboptimal way.
109
110 Args:
111 nodes: an iterable of Nodes
112 output_edge_order: An optional list of edges.
113 Edges of the final node in `nodes_set`
114 are reordered into `output_edge_order`;
115 if final node has more than one edge,
116 `output_edge_order` must be provided.
117 memory_limit: Maximum number of elements in an array during contractions.
118 ignore_edge_order: An option to ignore the output edge order.
119
120 Returns:
121 The final node after full contraction.
122 """
123 alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)
124 return base(nodes, alg, output_edge_order, ignore_edge_order)
125
126
127 def branch(nodes: Iterable[BaseNode],
128 output_edge_order: Optional[Sequence[Edge]] = None,
129 memory_limit: Optional[int] = None,
130 nbranch: Optional[int] = None,
131 ignore_edge_order: bool = False) -> BaseNode:
132 """Branch contraction path via `opt_einsum`.
133
134 This method uses the DFS approach of `optimal` while sorting potential
135 contractions based on a heuristic cost, in order to reduce time spent
136 in exploring paths which are unlikely to be optimal.
137 For more details:
138 https://optimized-einsum.readthedocs.io/en/latest/branching_path.html
139
140 Args:
141 nodes: an iterable of Nodes
142 output_edge_order: An optional list of edges.
143 Edges of the final node in `nodes_set`
144 are reordered into `output_edge_order`;
145 if final node has more than one edge,
146 `output_edge_order` must be provided.
147 memory_limit: Maximum number of elements in an array during contractions.
148 nbranch: Number of best contractions to explore.
149 If None it explores all inner products starting with those that
150 have the best cost heuristic.
151 ignore_edge_order: An option to ignore the output edge order.
152
153 Returns:
154 The final node after full contraction.
155 """
156 alg = functools.partial(
157 opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)
158 return base(nodes, alg, output_edge_order, ignore_edge_order)
159
160
161 def greedy(
162 nodes: Iterable[BaseNode],
163 output_edge_order: Optional[Sequence[Edge]] = None,
164 memory_limit: Optional[int] = None,
165 ignore_edge_order: bool = False) -> BaseNode:
166 """Greedy contraction path via `opt_einsum`.
167
168 This provides a more efficient strategy than `optimal` for finding
169 contraction paths in large networks. First contracts pairs of tensors
170 by finding the pair with the lowest cost at each step. Then it performs
171 the outer products.
172 For more details:
173 https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html
174
175 Args:
176 nodes: an iterable of Nodes
177 output_edge_order: An optional list of edges.
178 Edges of the final node in `nodes_set`
179 are reordered into `output_edge_order`;
180 if final node has more than one edge,
181 `output_edge_order` must be provided.
182 memory_limit: Maximum number of elements in an array during contractions.
183 ignore_edge_order: An option to ignore the output edge order.
184
185 Returns:
186 The final node after full contraction.
187 """
188 alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)
189 return base(nodes, alg, output_edge_order, ignore_edge_order)
190
191
192 # pylint: disable=too-many-return-statements
193 def auto(
194 nodes: Iterable[BaseNode],
195 output_edge_order: Optional[Sequence[Edge]] = None,
196 memory_limit: Optional[int] = None,
197 ignore_edge_order: bool = False) -> BaseNode:
198 """Chooses one of the above algorithms according to network size.
199
200 Default behavior is based on `opt_einsum`'s `auto` contractor.
201
202 Args:
203 nodes: A collection of connected nodes.
204 output_edge_order: An optional list of edges.
205 Edges of the final node in `nodes_set`
206 are reordered into `output_edge_order`;
207 if final node has more than one edge,
208 `output_edge_order` must be provided.
209 memory_limit: Maximum number of elements in an array during contractions.
210 ignore_edge_order: An option to ignore the output edge order.
211
212 Returns:
213 Final node after full contraction.
214 """
215
216 n = len(list(nodes)) #pytype thing
217 _nodes = nodes
218 if n <= 0:
219 raise ValueError("Cannot contract empty tensor network.")
220 if n == 1:
221 if not ignore_edge_order:
222 if output_edge_order is None:
223 output_edge_order = list(
224 (get_all_edges(_nodes) - get_all_nondangling(_nodes)))
225 if len(output_edge_order) > 1:
226 raise ValueError("The final node after contraction has more than "
227 "one dangling edge. In this case `output_edge_order` "
228 "has to be provided.")
229
230 edges = get_all_nondangling(_nodes)
231 if edges:
232 final_node = contract_parallel(edges.pop())
233 else:
234 final_node = list(_nodes)[0]
235 final_node.reorder_edges(output_edge_order)
236 if not ignore_edge_order:
237 final_node.reorder_edges(output_edge_order)
238 return final_node
239
240 if n < 5:
241 return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)
242 if n < 7:
243 return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)
244 if n < 9:
245 return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)
246 if n < 15:
247 return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)
248 return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)
249
250
251 def custom(
252 nodes: Iterable[BaseNode],
253 optimizer: Any,
254 output_edge_order: Sequence[Edge] = None,
255 memory_limit: Optional[int] = None,
256 ignore_edge_order: bool = False) -> BaseNode:
257 """Uses a custom path optimizer created by the user to calculate paths.
258
259 The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.
260 For more details:
261 https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html
262
263 Args:
264 nodes: an iterable of Nodes
265 output_edge_order: An optional list of edges.
266 Edges of the final node in `nodes_set`
267 are reordered into `output_edge_order`;
268 if final node has more than one edge,
269 output_edge_order` must be provided.
270 optimizer: A custom `opt_einsum.PathOptimizer` object.
271 memory_limit: Maximum number of elements in an array during contractions.
272 ignore_edge_order: An option to ignore the output edge order.
273
274 Returns:
275 Final node after full contraction.
276 """
277 alg = functools.partial(optimizer, memory_limit=memory_limit)
278 return base(nodes, alg, output_edge_order, ignore_edge_order)
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
--- a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
+++ b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
@@ -240,11 +240,21 @@
if n < 5:
return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)
if n < 7:
- return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)
+ return branch(nodes,
+ output_edge_order=output_edge_order,
+ memory_limit=memory_limit,
+ ignore_edge_order=ignore_edge_order)
if n < 9:
- return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)
+ return branch(nodes,
+ output_edge_order=output_edge_order,
+ memory_limit=memory_limit,
+ nbranch=2,
+ ignore_edge_order=ignore_edge_order)
if n < 15:
- return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)
+ return branch(nodes,
+ output_edge_order=output_edge_order,
+ nbranch=1,
+ ignore_edge_order=ignore_edge_order)
return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)
| {"golden_diff": "diff --git a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n--- a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n+++ b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n@@ -240,11 +240,21 @@\n if n < 5:\n return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 7:\n- return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)\n+ return branch(nodes,\n+ output_edge_order=output_edge_order,\n+ memory_limit=memory_limit,\n+ ignore_edge_order=ignore_edge_order)\n if n < 9:\n- return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)\n+ return branch(nodes,\n+ output_edge_order=output_edge_order,\n+ memory_limit=memory_limit,\n+ nbranch=2,\n+ ignore_edge_order=ignore_edge_order)\n if n < 15:\n- return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)\n+ return branch(nodes,\n+ output_edge_order=output_edge_order,\n+ nbranch=1,\n+ ignore_edge_order=ignore_edge_order)\n return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)\n", "issue": "`auto` calls `branch` with incorrect number of parameters when n >= 5 and n < 7\nhttps://github.com/google/TensorNetwork/blob/003fdc789afa006e08f12818ad76c5f31828aa69/tensornetwork/contractors/opt_einsum_paths/path_contractors.py#L243\r\n\r\nThe `branch` method accepts 5 parameters, but here is called with 4. This results in the `ignore_edge_order` parameter being passed into `branch` as `nbranch` and an inability to set `ignore_edge_order` when `n >=5 and n < 7`\r\n\n", "before_files": [{"content": "# pylint: disable=cyclic-import\n# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Contractors based on `opt_einsum`'s path algorithms.\"\"\"\n\nimport functools\nimport opt_einsum\n# pylint: disable=line-too-long\nfrom tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling\n# pylint: disable=line-too-long\nfrom tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between\nfrom tensornetwork.network_components import Edge, BaseNode\nfrom tensornetwork.contractors.opt_einsum_paths import utils\nfrom typing import Any, Optional, Sequence, Iterable\n\n#TODO (martin): add return types of functions back once TensorNetwork is gone\n# remove _base_network\n# _base_nodes -> base\n\n\ndef base(nodes: Iterable[BaseNode],\n algorithm: utils.Algorithm,\n output_edge_order: Optional[Sequence[Edge]] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Base method for all `opt_einsum` contractors.\n\n Args:\n nodes: A collection of connected nodes.\n algorithm: `opt_einsum` contraction method to use.\n output_edge_order: An optional list of edges. 
Edges of the\n final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be pronvided.\n ignore_edge_order: An option to ignore the output edge\n order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n nodes_set = set(nodes)\n edges = get_all_edges(nodes_set)\n #output edge order has to be determinded before any contraction\n #(edges are refreshed after contractions)\n\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(get_subgraph_dangling(nodes))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one remaining edge. In this case `output_edge_order` \"\n \"has to be provided.\")\n\n if set(output_edge_order) != get_subgraph_dangling(nodes):\n raise ValueError(\n \"output edges are not equal to the remaining \"\n \"non-contracted edges of the final node.\"\n )\n\n for edge in edges:\n if not edge.is_disabled: #if its disabled we already contracted it\n if edge.is_trace():\n nodes_set.remove(edge.node1)\n nodes_set.add(contract_parallel(edge))\n\n if len(nodes_set) == 1:\n # There's nothing to contract.\n if ignore_edge_order:\n return list(nodes_set)[0]\n return list(nodes_set)[0].reorder_edges(output_edge_order)\n\n # Then apply `opt_einsum`'s algorithm\n path, nodes = utils.get_path(nodes_set, algorithm)\n for a, b in path:\n new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)\n nodes.append(new_node)\n nodes = utils.multi_remove(nodes, [a, b])\n\n # if the final node has more than one edge,\n # output_edge_order has to be specified\n final_node = nodes[0] # nodes were connected, we checked this\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n\ndef optimal(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Optimal contraction order via `opt_einsum`.\n\n This method will find the truly optimal contraction order via\n `opt_einsum`'s depth first search algorithm. 
Since this search is\n exhaustive, if your network is large (n>10), then the search may\n take longer than just contracting in a suboptimal way.\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef branch(nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n nbranch: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Branch contraction path via `opt_einsum`.\n\n This method uses the DFS approach of `optimal` while sorting potential\n contractions based on a heuristic cost, in order to reduce time spent\n in exploring paths which are unlikely to be optimal.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/branching_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n nbranch: Number of best contractions to explore.\n If None it explores all inner products starting with those that\n have the best cost heuristic.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(\n opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef greedy(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Greedy contraction path via `opt_einsum`.\n\n This provides a more efficient strategy than `optimal` for finding\n contraction paths in large networks. First contracts pairs of tensors\n by finding the pair with the lowest cost at each step. 
Then it performs\n the outer products.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\n# pylint: disable=too-many-return-statements\ndef auto(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Chooses one of the above algorithms according to network size.\n\n Default behavior is based on `opt_einsum`'s `auto` contractor.\n\n Args:\n nodes: A collection of connected nodes.\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n\n n = len(list(nodes)) #pytype thing\n _nodes = nodes\n if n <= 0:\n raise ValueError(\"Cannot contract empty tensor network.\")\n if n == 1:\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(\n (get_all_edges(_nodes) - get_all_nondangling(_nodes)))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one dangling edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n edges = get_all_nondangling(_nodes)\n if edges:\n final_node = contract_parallel(edges.pop())\n else:\n final_node = list(_nodes)[0]\n final_node.reorder_edges(output_edge_order)\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n if n < 5:\n return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 7:\n return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 9:\n return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)\n if n < 15:\n return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)\n return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)\n\n\ndef custom(\n nodes: Iterable[BaseNode],\n optimizer: Any,\n output_edge_order: Sequence[Edge] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Uses a custom path optimizer created by the user to calculate paths.\n\n The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n output_edge_order` must be provided.\n optimizer: A custom `opt_einsum.PathOptimizer` object.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n alg = functools.partial(optimizer, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n", "path": "tensornetwork/contractors/opt_einsum_paths/path_contractors.py"}], "after_files": [{"content": "# pylint: disable=cyclic-import\n# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Contractors based on `opt_einsum`'s path algorithms.\"\"\"\n\nimport functools\nimport opt_einsum\n# pylint: disable=line-too-long\nfrom tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling\n# pylint: disable=line-too-long\nfrom tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between\nfrom tensornetwork.network_components import Edge, BaseNode\nfrom tensornetwork.contractors.opt_einsum_paths import utils\nfrom typing import Any, Optional, Sequence, Iterable\n\n#TODO (martin): add return types of functions back once TensorNetwork is gone\n# remove _base_network\n# _base_nodes -> base\n\n\ndef base(nodes: Iterable[BaseNode],\n algorithm: utils.Algorithm,\n output_edge_order: Optional[Sequence[Edge]] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Base method for all `opt_einsum` contractors.\n\n Args:\n nodes: A 
collection of connected nodes.\n algorithm: `opt_einsum` contraction method to use.\n output_edge_order: An optional list of edges. Edges of the\n final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be pronvided.\n ignore_edge_order: An option to ignore the output edge\n order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n nodes_set = set(nodes)\n edges = get_all_edges(nodes_set)\n #output edge order has to be determinded before any contraction\n #(edges are refreshed after contractions)\n\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(get_subgraph_dangling(nodes))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one remaining edge. In this case `output_edge_order` \"\n \"has to be provided.\")\n\n if set(output_edge_order) != get_subgraph_dangling(nodes):\n raise ValueError(\n \"output edges are not equal to the remaining \"\n \"non-contracted edges of the final node.\"\n )\n\n for edge in edges:\n if not edge.is_disabled: #if its disabled we already contracted it\n if edge.is_trace():\n nodes_set.remove(edge.node1)\n nodes_set.add(contract_parallel(edge))\n\n if len(nodes_set) == 1:\n # There's nothing to contract.\n if ignore_edge_order:\n return list(nodes_set)[0]\n return list(nodes_set)[0].reorder_edges(output_edge_order)\n\n # Then apply `opt_einsum`'s algorithm\n path, nodes = utils.get_path(nodes_set, algorithm)\n for a, b in path:\n new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)\n nodes.append(new_node)\n nodes = utils.multi_remove(nodes, [a, b])\n\n # if the final node has more than one edge,\n # output_edge_order has to be specified\n final_node = nodes[0] # nodes were connected, we checked this\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n\ndef optimal(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Optimal contraction order via `opt_einsum`.\n\n This method will find the truly optimal contraction order via\n `opt_einsum`'s depth first search algorithm. 
Since this search is\n exhaustive, if your network is large (n>10), then the search may\n take longer than just contracting in a suboptimal way.\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef branch(nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n nbranch: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Branch contraction path via `opt_einsum`.\n\n This method uses the DFS approach of `optimal` while sorting potential\n contractions based on a heuristic cost, in order to reduce time spent\n in exploring paths which are unlikely to be optimal.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/branching_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n nbranch: Number of best contractions to explore.\n If None it explores all inner products starting with those that\n have the best cost heuristic.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(\n opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef greedy(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Greedy contraction path via `opt_einsum`.\n\n This provides a more efficient strategy than `optimal` for finding\n contraction paths in large networks. First contracts pairs of tensors\n by finding the pair with the lowest cost at each step. 
Then it performs\n the outer products.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\n# pylint: disable=too-many-return-statements\ndef auto(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Chooses one of the above algorithms according to network size.\n\n Default behavior is based on `opt_einsum`'s `auto` contractor.\n\n Args:\n nodes: A collection of connected nodes.\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n\n n = len(list(nodes)) #pytype thing\n _nodes = nodes\n if n <= 0:\n raise ValueError(\"Cannot contract empty tensor network.\")\n if n == 1:\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(\n (get_all_edges(_nodes) - get_all_nondangling(_nodes)))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one dangling edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n edges = get_all_nondangling(_nodes)\n if edges:\n final_node = contract_parallel(edges.pop())\n else:\n final_node = list(_nodes)[0]\n final_node.reorder_edges(output_edge_order)\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n if n < 5:\n return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 7:\n return branch(nodes,\n output_edge_order=output_edge_order,\n memory_limit=memory_limit,\n ignore_edge_order=ignore_edge_order)\n if n < 9:\n return branch(nodes,\n output_edge_order=output_edge_order,\n memory_limit=memory_limit,\n nbranch=2,\n ignore_edge_order=ignore_edge_order)\n if n < 15:\n return branch(nodes,\n output_edge_order=output_edge_order,\n nbranch=1,\n ignore_edge_order=ignore_edge_order)\n return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)\n\n\ndef custom(\n nodes: Iterable[BaseNode],\n optimizer: Any,\n output_edge_order: Sequence[Edge] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Uses a custom path optimizer created by the user to calculate paths.\n\n The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n output_edge_order` must be provided.\n optimizer: A custom `opt_einsum.PathOptimizer` object.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n alg = functools.partial(optimizer, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n", "path": "tensornetwork/contractors/opt_einsum_paths/path_contractors.py"}]} | 3,648 | 324 |
gh_patches_debug_32728 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-2596 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Easier customization of loadpaths
I have a case where I want to import a large shapefile, but filter the features according to certain attributes of each element. To avoid having to rewrite my own complete `loadpaths` command, it would be convenient to move the object filtering into a method of the command. The proposed patch follows...
--- END ISSUE ---
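For context, the request above (and the patch further down in this record) amounts to exposing the spatial filter as an overridable hook on the command class. Assuming such a hook exists under the name `should_import(feature, geom)`, a project-specific subclass could add an attribute filter without copying the whole command. The sketch below is illustrative only; the `STATUT`/`'valide'` attribute test is a made-up example, and the subclass would live in the user's own app's `management/commands/` directory.
```python
# Hypothetical subclass (not part of the repository) showing how an overridable
# filtering hook lets a project filter imported features by shapefile attributes.
from geotrek.core.management.commands.loadpaths import Command as LoadPathsCommand


class Command(LoadPathsCommand):
    """Import paths, keeping only features that pass a custom attribute filter."""

    def should_import(self, feature, geom):
        # Keep the stock spatial-extent check...
        if not super().should_import(feature, geom):
            return False
        # ...then add the project-specific attribute filter.
        # 'STATUT' and 'valide' are placeholder names for illustration only.
        return feature.get('STATUT') == 'valide'
```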
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/core/management/commands/loadpaths.py`
Content:
```
1 from django.contrib.gis.gdal import DataSource, GDALException
2 from geotrek.core.models import Path
3 from geotrek.authent.models import Structure
4 from django.contrib.gis.geos.collections import Polygon, LineString
5 from django.core.management.base import BaseCommand, CommandError
6 from django.conf import settings
7 from django.db.utils import IntegrityError, InternalError
8 from django.db import transaction
9
10
11 class Command(BaseCommand):
12 help = 'Load Paths from a file within the spatial extent\n'
13
14 def add_arguments(self, parser):
15 parser.add_argument('file_path', help="File's path of the paths")
16 parser.add_argument('--structure', action='store', dest='structure', help="Define the structure")
17 parser.add_argument('--name-attribute', '-n', action='store', dest='name', default='nom',
18 help="Name of the name's attribute inside the file")
19 parser.add_argument('--comments-attribute', '-c', nargs='*', action='store', dest='comment',
20 help="")
21 parser.add_argument('--encoding', '-e', action='store', dest='encoding', default='utf-8',
22 help='File encoding, default utf-8')
23 parser.add_argument('--srid', '-s', action='store', dest='srid', default=4326, type=int,
24 help="File's SRID")
25 parser.add_argument('--intersect', '-i', action='store_true', dest='intersect', default=False,
26 help="Check paths intersect spatial extent and not only within")
27 parser.add_argument('--fail', '-f', action='store_true', dest='fail', default=False,
28 help="Allows to grant fails")
29 parser.add_argument('--dry', '-d', action='store_true', dest='dry', default=False,
30 help="Do not change the database, dry run. Show the number of fail"
31 " and objects potentially created")
32
33 def handle(self, *args, **options):
34 verbosity = options.get('verbosity')
35 encoding = options.get('encoding')
36 file_path = options.get('file_path')
37 structure = options.get('structure')
38 name_column = options.get('name')
39 srid = options.get('srid')
40 do_intersect = options.get('intersect')
41 comments_columns = options.get('comment')
42 fail = options.get('fail')
43 dry = options.get('dry')
44
45 if dry:
46 fail = True
47
48 counter = 0
49 counter_fail = 0
50
51 if structure:
52 try:
53 structure = Structure.objects.get(name=structure)
54 except Structure.DoesNotExist:
55 raise CommandError("Structure does not match with instance's structures\n"
56 "Change your option --structure")
57 elif Structure.objects.count() == 1:
58 structure = Structure.objects.first()
59 else:
60 raise CommandError("There are more than 1 structure and you didn't define the option structure\n"
61 "Use --structure to define it")
62 if verbosity > 0:
63 self.stdout.write("All paths in DataSource will be linked to the structure : %s" % structure)
64
65 ds = DataSource(file_path, encoding=encoding)
66
67 bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)
68 bbox.srid = settings.SRID
69
70 sid = transaction.savepoint()
71
72 for layer in ds:
73 for feat in layer:
74 name = feat.get(name_column) if name_column in layer.fields else ''
75 comment_final_tab = []
76 if comments_columns:
77 for comment_column in comments_columns:
78 if comment_column in layer.fields:
79 comment_final_tab.append(feat.get(comment_column))
80 geom = feat.geom.geos
81 if not isinstance(geom, LineString):
82 if verbosity > 0:
83 self.stdout.write("%s's geometry is not a Linestring" % feat)
84 break
85 self.check_srid(srid, geom)
86 geom.dim = 2
87 if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):
88 try:
89 with transaction.atomic():
90 comment_final = '</br>'.join(comment_final_tab)
91 path = Path.objects.create(name=name,
92 structure=structure,
93 geom=geom,
94 comments=comment_final)
95 counter += 1
96 if verbosity > 0:
97 self.stdout.write('Create path with pk : {}'.format(path.pk))
98 if verbosity > 1:
99 self.stdout.write("The comment %s was added on %s" % (comment_final, name))
100 except (IntegrityError, InternalError):
101 if fail:
102 counter_fail += 1
103 self.stdout.write('Integrity Error on path : {}, {}'.format(name, geom))
104 else:
105 raise
106 if not dry:
107 transaction.savepoint_commit(sid)
108 if verbosity >= 2:
109 self.stdout.write(self.style.NOTICE(
110 "{0} objects created, {1} objects failed".format(counter, counter_fail)))
111 else:
112 transaction.savepoint_rollback(sid)
113 self.stdout.write(self.style.NOTICE(
114 "{0} objects will be create, {1} objects failed;".format(counter, counter_fail)))
115
116 def check_srid(self, srid, geom):
117 if not geom.srid:
118 geom.srid = srid
119 if geom.srid != settings.SRID:
120 try:
121 geom.transform(settings.SRID)
122 except GDALException:
123 raise CommandError("SRID is not well configurate, change/add option srid")
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geotrek/core/management/commands/loadpaths.py b/geotrek/core/management/commands/loadpaths.py
--- a/geotrek/core/management/commands/loadpaths.py
+++ b/geotrek/core/management/commands/loadpaths.py
@@ -37,7 +37,7 @@
structure = options.get('structure')
name_column = options.get('name')
srid = options.get('srid')
- do_intersect = options.get('intersect')
+ self.do_intersect = options.get('intersect')
comments_columns = options.get('comment')
fail = options.get('fail')
dry = options.get('dry')
@@ -64,8 +64,8 @@
ds = DataSource(file_path, encoding=encoding)
- bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)
- bbox.srid = settings.SRID
+ self.bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)
+ self.bbox.srid = settings.SRID
sid = transaction.savepoint()
@@ -84,7 +84,7 @@
break
self.check_srid(srid, geom)
geom.dim = 2
- if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):
+ if self.should_import(feat, geom):
try:
with transaction.atomic():
comment_final = '</br>'.join(comment_final_tab)
@@ -121,3 +121,9 @@
geom.transform(settings.SRID)
except GDALException:
raise CommandError("SRID is not well configurate, change/add option srid")
+
+ def should_import(self, feature, geom):
+ return (
+ self.do_intersect and self.bbox.intersects(geom)
+ or not self.do_intersect and geom.within(self.bbox)
+ )
| {"golden_diff": "diff --git a/geotrek/core/management/commands/loadpaths.py b/geotrek/core/management/commands/loadpaths.py\n--- a/geotrek/core/management/commands/loadpaths.py\n+++ b/geotrek/core/management/commands/loadpaths.py\n@@ -37,7 +37,7 @@\n structure = options.get('structure')\n name_column = options.get('name')\n srid = options.get('srid')\n- do_intersect = options.get('intersect')\n+ self.do_intersect = options.get('intersect')\n comments_columns = options.get('comment')\n fail = options.get('fail')\n dry = options.get('dry')\n@@ -64,8 +64,8 @@\n \n ds = DataSource(file_path, encoding=encoding)\n \n- bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n- bbox.srid = settings.SRID\n+ self.bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n+ self.bbox.srid = settings.SRID\n \n sid = transaction.savepoint()\n \n@@ -84,7 +84,7 @@\n break\n self.check_srid(srid, geom)\n geom.dim = 2\n- if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):\n+ if self.should_import(feat, geom):\n try:\n with transaction.atomic():\n comment_final = '</br>'.join(comment_final_tab)\n@@ -121,3 +121,9 @@\n geom.transform(settings.SRID)\n except GDALException:\n raise CommandError(\"SRID is not well configurate, change/add option srid\")\n+\n+ def should_import(self, feature, geom):\n+ return (\n+ self.do_intersect and self.bbox.intersects(geom)\n+ or not self.do_intersect and geom.within(self.bbox)\n+ )\n", "issue": "Personnalisation plus facile de loadpaths\nJ'ai un cas o\u00f9 j'aimerais importer un gros fichier Shape, mais o\u00f9 j'aimerais filtrer selon certains attributs de chaque \u00e9l\u00e9ment. Pour \u00e9viter de devoir r\u00e9\u00e9crire ma propre command `loadpaths` compl\u00e8te, il serait pratique de d\u00e9porter le filtrage des objets dans une m\u00e9thode de la commande. Le patch propos\u00e9 arrive...\n", "before_files": [{"content": "from django.contrib.gis.gdal import DataSource, GDALException\nfrom geotrek.core.models import Path\nfrom geotrek.authent.models import Structure\nfrom django.contrib.gis.geos.collections import Polygon, LineString\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.conf import settings\nfrom django.db.utils import IntegrityError, InternalError\nfrom django.db import transaction\n\n\nclass Command(BaseCommand):\n help = 'Load Paths from a file within the spatial extent\\n'\n\n def add_arguments(self, parser):\n parser.add_argument('file_path', help=\"File's path of the paths\")\n parser.add_argument('--structure', action='store', dest='structure', help=\"Define the structure\")\n parser.add_argument('--name-attribute', '-n', action='store', dest='name', default='nom',\n help=\"Name of the name's attribute inside the file\")\n parser.add_argument('--comments-attribute', '-c', nargs='*', action='store', dest='comment',\n help=\"\")\n parser.add_argument('--encoding', '-e', action='store', dest='encoding', default='utf-8',\n help='File encoding, default utf-8')\n parser.add_argument('--srid', '-s', action='store', dest='srid', default=4326, type=int,\n help=\"File's SRID\")\n parser.add_argument('--intersect', '-i', action='store_true', dest='intersect', default=False,\n help=\"Check paths intersect spatial extent and not only within\")\n parser.add_argument('--fail', '-f', action='store_true', dest='fail', default=False,\n help=\"Allows to grant fails\")\n parser.add_argument('--dry', '-d', action='store_true', dest='dry', default=False,\n help=\"Do not change the database, dry run. 
Show the number of fail\"\n \" and objects potentially created\")\n\n def handle(self, *args, **options):\n verbosity = options.get('verbosity')\n encoding = options.get('encoding')\n file_path = options.get('file_path')\n structure = options.get('structure')\n name_column = options.get('name')\n srid = options.get('srid')\n do_intersect = options.get('intersect')\n comments_columns = options.get('comment')\n fail = options.get('fail')\n dry = options.get('dry')\n\n if dry:\n fail = True\n\n counter = 0\n counter_fail = 0\n\n if structure:\n try:\n structure = Structure.objects.get(name=structure)\n except Structure.DoesNotExist:\n raise CommandError(\"Structure does not match with instance's structures\\n\"\n \"Change your option --structure\")\n elif Structure.objects.count() == 1:\n structure = Structure.objects.first()\n else:\n raise CommandError(\"There are more than 1 structure and you didn't define the option structure\\n\"\n \"Use --structure to define it\")\n if verbosity > 0:\n self.stdout.write(\"All paths in DataSource will be linked to the structure : %s\" % structure)\n\n ds = DataSource(file_path, encoding=encoding)\n\n bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n bbox.srid = settings.SRID\n\n sid = transaction.savepoint()\n\n for layer in ds:\n for feat in layer:\n name = feat.get(name_column) if name_column in layer.fields else ''\n comment_final_tab = []\n if comments_columns:\n for comment_column in comments_columns:\n if comment_column in layer.fields:\n comment_final_tab.append(feat.get(comment_column))\n geom = feat.geom.geos\n if not isinstance(geom, LineString):\n if verbosity > 0:\n self.stdout.write(\"%s's geometry is not a Linestring\" % feat)\n break\n self.check_srid(srid, geom)\n geom.dim = 2\n if do_intersect and bbox.intersects(geom) or not do_intersect and geom.within(bbox):\n try:\n with transaction.atomic():\n comment_final = '</br>'.join(comment_final_tab)\n path = Path.objects.create(name=name,\n structure=structure,\n geom=geom,\n comments=comment_final)\n counter += 1\n if verbosity > 0:\n self.stdout.write('Create path with pk : {}'.format(path.pk))\n if verbosity > 1:\n self.stdout.write(\"The comment %s was added on %s\" % (comment_final, name))\n except (IntegrityError, InternalError):\n if fail:\n counter_fail += 1\n self.stdout.write('Integrity Error on path : {}, {}'.format(name, geom))\n else:\n raise\n if not dry:\n transaction.savepoint_commit(sid)\n if verbosity >= 2:\n self.stdout.write(self.style.NOTICE(\n \"{0} objects created, {1} objects failed\".format(counter, counter_fail)))\n else:\n transaction.savepoint_rollback(sid)\n self.stdout.write(self.style.NOTICE(\n \"{0} objects will be create, {1} objects failed;\".format(counter, counter_fail)))\n\n def check_srid(self, srid, geom):\n if not geom.srid:\n geom.srid = srid\n if geom.srid != settings.SRID:\n try:\n geom.transform(settings.SRID)\n except GDALException:\n raise CommandError(\"SRID is not well configurate, change/add option srid\")\n", "path": "geotrek/core/management/commands/loadpaths.py"}], "after_files": [{"content": "from django.contrib.gis.gdal import DataSource, GDALException\nfrom geotrek.core.models import Path\nfrom geotrek.authent.models import Structure\nfrom django.contrib.gis.geos.collections import Polygon, LineString\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.conf import settings\nfrom django.db.utils import IntegrityError, InternalError\nfrom django.db import transaction\n\n\nclass Command(BaseCommand):\n help = 
'Load Paths from a file within the spatial extent\\n'\n\n def add_arguments(self, parser):\n parser.add_argument('file_path', help=\"File's path of the paths\")\n parser.add_argument('--structure', action='store', dest='structure', help=\"Define the structure\")\n parser.add_argument('--name-attribute', '-n', action='store', dest='name', default='nom',\n help=\"Name of the name's attribute inside the file\")\n parser.add_argument('--comments-attribute', '-c', nargs='*', action='store', dest='comment',\n help=\"\")\n parser.add_argument('--encoding', '-e', action='store', dest='encoding', default='utf-8',\n help='File encoding, default utf-8')\n parser.add_argument('--srid', '-s', action='store', dest='srid', default=4326, type=int,\n help=\"File's SRID\")\n parser.add_argument('--intersect', '-i', action='store_true', dest='intersect', default=False,\n help=\"Check paths intersect spatial extent and not only within\")\n parser.add_argument('--fail', '-f', action='store_true', dest='fail', default=False,\n help=\"Allows to grant fails\")\n parser.add_argument('--dry', '-d', action='store_true', dest='dry', default=False,\n help=\"Do not change the database, dry run. Show the number of fail\"\n \" and objects potentially created\")\n\n def handle(self, *args, **options):\n verbosity = options.get('verbosity')\n encoding = options.get('encoding')\n file_path = options.get('file_path')\n structure = options.get('structure')\n name_column = options.get('name')\n srid = options.get('srid')\n self.do_intersect = options.get('intersect')\n comments_columns = options.get('comment')\n fail = options.get('fail')\n dry = options.get('dry')\n\n if dry:\n fail = True\n\n counter = 0\n counter_fail = 0\n\n if structure:\n try:\n structure = Structure.objects.get(name=structure)\n except Structure.DoesNotExist:\n raise CommandError(\"Structure does not match with instance's structures\\n\"\n \"Change your option --structure\")\n elif Structure.objects.count() == 1:\n structure = Structure.objects.first()\n else:\n raise CommandError(\"There are more than 1 structure and you didn't define the option structure\\n\"\n \"Use --structure to define it\")\n if verbosity > 0:\n self.stdout.write(\"All paths in DataSource will be linked to the structure : %s\" % structure)\n\n ds = DataSource(file_path, encoding=encoding)\n\n self.bbox = Polygon.from_bbox(settings.SPATIAL_EXTENT)\n self.bbox.srid = settings.SRID\n\n sid = transaction.savepoint()\n\n for layer in ds:\n for feat in layer:\n name = feat.get(name_column) if name_column in layer.fields else ''\n comment_final_tab = []\n if comments_columns:\n for comment_column in comments_columns:\n if comment_column in layer.fields:\n comment_final_tab.append(feat.get(comment_column))\n geom = feat.geom.geos\n if not isinstance(geom, LineString):\n if verbosity > 0:\n self.stdout.write(\"%s's geometry is not a Linestring\" % feat)\n break\n self.check_srid(srid, geom)\n geom.dim = 2\n if self.should_import(feat, geom):\n try:\n with transaction.atomic():\n comment_final = '</br>'.join(comment_final_tab)\n path = Path.objects.create(name=name,\n structure=structure,\n geom=geom,\n comments=comment_final)\n counter += 1\n if verbosity > 0:\n self.stdout.write('Create path with pk : {}'.format(path.pk))\n if verbosity > 1:\n self.stdout.write(\"The comment %s was added on %s\" % (comment_final, name))\n except (IntegrityError, InternalError):\n if fail:\n counter_fail += 1\n self.stdout.write('Integrity Error on path : {}, {}'.format(name, geom))\n else:\n raise\n if 
not dry:\n transaction.savepoint_commit(sid)\n if verbosity >= 2:\n self.stdout.write(self.style.NOTICE(\n \"{0} objects created, {1} objects failed\".format(counter, counter_fail)))\n else:\n transaction.savepoint_rollback(sid)\n self.stdout.write(self.style.NOTICE(\n \"{0} objects will be create, {1} objects failed;\".format(counter, counter_fail)))\n\n def check_srid(self, srid, geom):\n if not geom.srid:\n geom.srid = srid\n if geom.srid != settings.SRID:\n try:\n geom.transform(settings.SRID)\n except GDALException:\n raise CommandError(\"SRID is not well configurate, change/add option srid\")\n\n def should_import(self, feature, geom):\n return (\n self.do_intersect and self.bbox.intersects(geom)\n or not self.do_intersect and geom.within(self.bbox)\n )\n", "path": "geotrek/core/management/commands/loadpaths.py"}]} | 1,762 | 405 |
gh_patches_debug_23542 | rasdani/github-patches | git_diff | qutip__qutip-362 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fidelity greater than one
fidelity() returns >1 by a significant amount for certain pure states. States are normalised, and taking the inner product returns 1. Code below provides two examples of states which give fidelity >1 with themselves.
```
#to test qutip's fidelity routine
import qutip as qu
psi0=qu.Qobj()
i=0
while i<=1:
j=0
while j<=1:
psi0+=qu.state_number_qobj([2,2],[i,j]) #even superposition of qubit states
j+=1
i+=1
psi0=psi0.unit() #normalise
print(qu.fidelity(psi0,psi0))
print((psi0.dag()*psi0).norm()**2)
print("\n")
psi0=qu.tensor(psi0,qu.basis(10,1)) #tensor product with Fock state
print(qu.fidelity(psi0,psi0))
print((psi0.dag()*psi0).norm()**2)
```
Output:
```
1.00000002107
1.0
1.00000003485
1.0
```
--- END ISSUE ---
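The numbers above are a floating-point artifact rather than a physics problem: for a pure state the product `A * (B * A)` is nearly singular, and taking a full matrix square root of it (as `fidelity` does via `sqrtm()`) lets round-off in the near-zero eigenvalues leak into the trace. A more stable route is to work with the eigenvalues of √ρ σ √ρ directly, discard the tiny negative ones, and sum their square roots. The standalone NumPy sketch below illustrates that idea; it mirrors the patch at the end of this record but does not reproduce QuTiP's API.
```python
# Minimal NumPy illustration of the eigenvalue-based fidelity computation.
import numpy as np

def fidelity_from_eigs(rho, sigma):
    # sqrt(rho) via eigendecomposition (rho is Hermitian, positive semidefinite)
    vals, vecs = np.linalg.eigh(rho)
    sqrt_rho = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T
    prod = sqrt_rho @ sigma @ sqrt_rho
    eig_vals = np.linalg.eigvalsh(prod)
    # Truncate small negative eigenvalues before taking the square root.
    return np.sqrt(eig_vals[eig_vals > 0]).sum()

# Even superposition of two qubits, as in the report above.
psi = np.ones(4) / 2.0
rho = np.outer(psi, psi.conj())
print(fidelity_from_eigs(rho, rho))  # stays at 1.0 to machine precision
```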
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutip/metrics.py`
Content:
```
1 # This file is part of QuTiP: Quantum Toolbox in Python.
2 #
3 # Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are
8 # met:
9 #
10 # 1. Redistributions of source code must retain the above copyright notice,
11 # this list of conditions and the following disclaimer.
12 #
13 # 2. Redistributions in binary form must reproduce the above copyright
14 # notice, this list of conditions and the following disclaimer in the
15 # documentation and/or other materials provided with the distribution.
16 #
17 # 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names
18 # of its contributors may be used to endorse or promote products derived
19 # from this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
22 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
23 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
24 # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
25 # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
26 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
27 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
28 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
29 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
30 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
31 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
32 ###############################################################################
33 """
34 This module contains a collection of functions for calculating metrics
35 (distance measures) between states and operators.
36 """
37
38 __all__ = ['fidelity', 'tracedist', 'bures_dist', 'bures_angle',
39 'hilbert_dist', 'average_gate_fidelity', 'process_fidelity',
40 'unitarity']
41
42 import numpy as np
43 from qutip.sparse import sp_eigs
44 from qutip.states import ket2dm
45 from qutip.superop_reps import to_kraus, _super_to_superpauli
46
47
48 def fidelity(A, B):
49 """
50 Calculates the fidelity (pseudo-metric) between two density matrices.
51 See: Nielsen & Chuang, "Quantum Computation and Quantum Information"
52
53 Parameters
54 ----------
55 A : qobj
56 Density matrix or state vector.
57 B : qobj
58 Density matrix or state vector with same dimensions as A.
59
60 Returns
61 -------
62 fid : float
63 Fidelity pseudo-metric between A and B.
64
65 Examples
66 --------
67 >>> x = fock_dm(5,3)
68 >>> y = coherent_dm(5,1)
69 >>> fidelity(x,y)
70 0.24104350624628332
71
72 """
73 if A.isket or A.isbra:
74 A = ket2dm(A)
75 if B.isket or B.isbra:
76 B = ket2dm(B)
77
78 if A.dims != B.dims:
79 raise TypeError('Density matrices do not have same dimensions.')
80
81 A = A.sqrtm()
82 return float(np.real((A * (B * A)).sqrtm().tr()))
83
84
85 def process_fidelity(U1, U2, normalize=True):
86 """
87 Calculate the process fidelity given two process operators.
88 """
89 if normalize:
90 return (U1 * U2).tr() / (U1.tr() * U2.tr())
91 else:
92 return (U1 * U2).tr()
93
94
95 def average_gate_fidelity(oper):
96 """
97 Given a Qobj representing the supermatrix form of a map, returns the
98 average gate fidelity (pseudo-metric) of that map.
99
100 Parameters
101 ----------
102 A : Qobj
103 Quantum object representing a superoperator.
104
105 Returns
106 -------
107 fid : float
108 Fidelity pseudo-metric between A and the identity superoperator.
109 """
110 kraus_form = to_kraus(oper)
111 d = kraus_form[0].shape[0]
112
113 if kraus_form[0].shape[1] != d:
114 return TypeError("Average gate fielity only implemented for square "
115 "superoperators.")
116
117 return (d + np.sum([np.abs(A_k.tr())**2
118 for A_k in kraus_form])) / (d**2 + d)
119
120
121 def tracedist(A, B, sparse=False, tol=0):
122 """
123 Calculates the trace distance between two density matrices..
124 See: Nielsen & Chuang, "Quantum Computation and Quantum Information"
125
126 Parameters
127 ----------!=
128 A : qobj
129 Density matrix or state vector.
130 B : qobj
131 Density matrix or state vector with same dimensions as A.
132 tol : float
133 Tolerance used by sparse eigensolver, if used. (0=Machine precision)
134 sparse : {False, True}
135 Use sparse eigensolver.
136
137 Returns
138 -------
139 tracedist : float
140 Trace distance between A and B.
141
142 Examples
143 --------
144 >>> x=fock_dm(5,3)
145 >>> y=coherent_dm(5,1)
146 >>> tracedist(x,y)
147 0.9705143161472971
148
149 """
150 if A.isket or A.isbra:
151 A = ket2dm(A)
152 if B.isket or B.isbra:
153 B = ket2dm(B)
154
155 if A.dims != B.dims:
156 raise TypeError("A and B do not have same dimensions.")
157
158 diff = A - B
159 diff = diff.dag() * diff
160 vals = sp_eigs(diff.data, diff.isherm, vecs=False, sparse=sparse, tol=tol)
161 return float(np.real(0.5 * np.sum(np.sqrt(np.abs(vals)))))
162
163
164 def hilbert_dist(A, B):
165 """
166 Returns the Hilbert-Schmidt distance between two density matrices A & B.
167
168 Parameters
169 ----------
170 A : qobj
171 Density matrix or state vector.
172 B : qobj
173 Density matrix or state vector with same dimensions as A.
174
175 Returns
176 -------
177 dist : float
178 Hilbert-Schmidt distance between density matrices.
179
180 Notes
181 -----
182 See V. Vedral and M. B. Plenio, Phys. Rev. A 57, 1619 (1998).
183
184 """
185 if A.isket or A.isbra:
186 A = ket2dm(A)
187 if B.isket or B.isbra:
188 B = ket2dm(B)
189
190 if A.dims != B.dims:
191 raise TypeError('A and B do not have same dimensions.')
192
193 return ((A - B)**2).tr()
194
195
196 def bures_dist(A, B):
197 """
198 Returns the Bures distance between two density matrices A & B.
199
200 The Bures distance ranges from 0, for states with unit fidelity,
201 to sqrt(2).
202
203 Parameters
204 ----------
205 A : qobj
206 Density matrix or state vector.
207 B : qobj
208 Density matrix or state vector with same dimensions as A.
209
210 Returns
211 -------
212 dist : float
213 Bures distance between density matrices.
214 """
215 if A.isket or A.isbra:
216 A = ket2dm(A)
217 if B.isket or B.isbra:
218 B = ket2dm(B)
219
220 if A.dims != B.dims:
221 raise TypeError('A and B do not have same dimensions.')
222
223 dist = np.sqrt(2.0 * (1.0 - fidelity(A, B)))
224 return dist
225
226
227 def bures_angle(A, B):
228 """
229 Returns the Bures Angle between two density matrices A & B.
230
231 The Bures angle ranges from 0, for states with unit fidelity, to pi/2.
232
233 Parameters
234 ----------
235 A : qobj
236 Density matrix or state vector.
237 B : qobj
238 Density matrix or state vector with same dimensions as A.
239
240 Returns
241 -------
242 angle : float
243 Bures angle between density matrices.
244 """
245 if A.isket or A.isbra:
246 A = ket2dm(A)
247 if B.isket or B.isbra:
248 B = ket2dm(B)
249
250 if A.dims != B.dims:
251 raise TypeError('A and B do not have same dimensions.')
252
253 return np.arccos(fidelity(A, B))
254
255 def unitarity(oper):
256 """
257 Returns the unitarity of a quantum map, defined as the Frobenius norm
258 of the unital block of that map's superoperator representation.
259
260 Parameters
261 ----------
262 oper : Qobj
263 Quantum map under consideration.
264
265 Returns
266 -------
267 u : float
268 Unitarity of ``oper``.
269 """
270 Eu = _super_to_superpauli(oper).full()[1:, 1:]
271 #return np.real(np.trace(np.dot(Eu, Eu.conj().T))) / len(Eu)
272 return np.linalg.norm(Eu, 'fro')**2 / len(Eu)
273
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qutip/metrics.py b/qutip/metrics.py
--- a/qutip/metrics.py
+++ b/qutip/metrics.py
@@ -71,15 +71,31 @@
"""
if A.isket or A.isbra:
- A = ket2dm(A)
- if B.isket or B.isbra:
- B = ket2dm(B)
-
- if A.dims != B.dims:
+ # Take advantage of the fact that the density operator for A
+ # is a projector to avoid a sqrtm call.
+ sqrtmA = ket2dm(A)
+ # Check whether we have to turn B into a density operator, too.
+ if B.isket or B.isbra:
+ B = ket2dm(B)
+ else:
+ if B.isket or B.isbra:
+ # Swap the order so that we can take a more numerically
+ # stable square root of B.
+ return fidelity(B, A)
+ # If we made it here, both A and B are operators, so
+ # we have to take the sqrtm of one of them.
+ sqrtmA = A.sqrtm()
+
+ if sqrtmA.dims != B.dims:
raise TypeError('Density matrices do not have same dimensions.')
- A = A.sqrtm()
- return float(np.real((A * (B * A)).sqrtm().tr()))
+ # We don't actually need the whole matrix here, just the trace
+ # of its square root, so let's just get its eigenenergies instead.
+ # We also truncate negative eigenvalues to avoid nan propagation;
+ # even for positive semidefinite matrices, small negative eigenvalues
+ # can be reported.
+ eig_vals = (sqrtmA * B * sqrtmA).eigenenergies()
+ return float(np.real(np.sqrt(eig_vals[eig_vals > 0]).sum()))
def process_fidelity(U1, U2, normalize=True):
| {"golden_diff": "diff --git a/qutip/metrics.py b/qutip/metrics.py\n--- a/qutip/metrics.py\n+++ b/qutip/metrics.py\n@@ -71,15 +71,31 @@\n \n \"\"\"\n if A.isket or A.isbra:\n- A = ket2dm(A)\n- if B.isket or B.isbra:\n- B = ket2dm(B)\n-\n- if A.dims != B.dims:\n+ # Take advantage of the fact that the density operator for A\n+ # is a projector to avoid a sqrtm call.\n+ sqrtmA = ket2dm(A)\n+ # Check whether we have to turn B into a density operator, too.\n+ if B.isket or B.isbra:\n+ B = ket2dm(B)\n+ else:\n+ if B.isket or B.isbra:\n+ # Swap the order so that we can take a more numerically\n+ # stable square root of B.\n+ return fidelity(B, A)\n+ # If we made it here, both A and B are operators, so\n+ # we have to take the sqrtm of one of them.\n+ sqrtmA = A.sqrtm()\n+\n+ if sqrtmA.dims != B.dims:\n raise TypeError('Density matrices do not have same dimensions.')\n \n- A = A.sqrtm()\n- return float(np.real((A * (B * A)).sqrtm().tr()))\n+ # We don't actually need the whole matrix here, just the trace\n+ # of its square root, so let's just get its eigenenergies instead.\n+ # We also truncate negative eigenvalues to avoid nan propagation;\n+ # even for positive semidefinite matrices, small negative eigenvalues\n+ # can be reported.\n+ eig_vals = (sqrtmA * B * sqrtmA).eigenenergies()\n+ return float(np.real(np.sqrt(eig_vals[eig_vals > 0]).sum()))\n \n \n def process_fidelity(U1, U2, normalize=True):\n", "issue": "Fidelity greater than one\nfidelity() returns >1 by a significant amount for certain pure states. States are normalised, and taking the inner product returns 1. Code below provides two examples of states which give fidelity >1 with themselves.\n\n```\n#to test qutip's fidelity routine\nimport qutip as qu\n\npsi0=qu.Qobj()\n\ni=0\nwhile i<=1:\n j=0\n while j<=1:\n psi0+=qu.state_number_qobj([2,2],[i,j]) #even superposition of qubit states\n j+=1\n i+=1\n\npsi0=psi0.unit() #normalise\n\nprint(qu.fidelity(psi0,psi0))\nprint((psi0.dag()*psi0).norm()**2)\nprint(\"\\n\")\n\npsi0=qu.tensor(psi0,qu.basis(10,1)) #tensor product with Fock state\n\nprint(qu.fidelity(psi0,psi0))\nprint((psi0.dag()*psi0).norm()**2)\n\n```\n\nOutput:\n\n```\n1.00000002107\n1.0\n\n\n1.00000003485\n1.0\n```\n\n", "before_files": [{"content": "# This file is part of QuTiP: Quantum Toolbox in Python.\n#\n# Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n# 1. Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n#\n# 2. Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n#\n# 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names\n# of its contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\n# PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\"\"\"\nThis module contains a collection of functions for calculating metrics\n(distance measures) between states and operators.\n\"\"\"\n\n__all__ = ['fidelity', 'tracedist', 'bures_dist', 'bures_angle',\n 'hilbert_dist', 'average_gate_fidelity', 'process_fidelity',\n 'unitarity']\n\nimport numpy as np\nfrom qutip.sparse import sp_eigs\nfrom qutip.states import ket2dm\nfrom qutip.superop_reps import to_kraus, _super_to_superpauli\n\n\ndef fidelity(A, B):\n \"\"\"\n Calculates the fidelity (pseudo-metric) between two density matrices.\n See: Nielsen & Chuang, \"Quantum Computation and Quantum Information\"\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n fid : float\n Fidelity pseudo-metric between A and B.\n\n Examples\n --------\n >>> x = fock_dm(5,3)\n >>> y = coherent_dm(5,1)\n >>> fidelity(x,y)\n 0.24104350624628332\n\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('Density matrices do not have same dimensions.')\n\n A = A.sqrtm()\n return float(np.real((A * (B * A)).sqrtm().tr()))\n\n\ndef process_fidelity(U1, U2, normalize=True):\n \"\"\"\n Calculate the process fidelity given two process operators.\n \"\"\"\n if normalize:\n return (U1 * U2).tr() / (U1.tr() * U2.tr())\n else:\n return (U1 * U2).tr()\n\n\ndef average_gate_fidelity(oper):\n \"\"\"\n Given a Qobj representing the supermatrix form of a map, returns the\n average gate fidelity (pseudo-metric) of that map.\n\n Parameters\n ----------\n A : Qobj\n Quantum object representing a superoperator.\n\n Returns\n -------\n fid : float\n Fidelity pseudo-metric between A and the identity superoperator.\n \"\"\"\n kraus_form = to_kraus(oper)\n d = kraus_form[0].shape[0]\n\n if kraus_form[0].shape[1] != d:\n return TypeError(\"Average gate fielity only implemented for square \"\n \"superoperators.\")\n\n return (d + np.sum([np.abs(A_k.tr())**2\n for A_k in kraus_form])) / (d**2 + d)\n\n\ndef tracedist(A, B, sparse=False, tol=0):\n \"\"\"\n Calculates the trace distance between two density matrices..\n See: Nielsen & Chuang, \"Quantum Computation and Quantum Information\"\n\n Parameters\n ----------!=\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n tol : float\n Tolerance used by sparse eigensolver, if used. 
(0=Machine precision)\n sparse : {False, True}\n Use sparse eigensolver.\n\n Returns\n -------\n tracedist : float\n Trace distance between A and B.\n\n Examples\n --------\n >>> x=fock_dm(5,3)\n >>> y=coherent_dm(5,1)\n >>> tracedist(x,y)\n 0.9705143161472971\n\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError(\"A and B do not have same dimensions.\")\n\n diff = A - B\n diff = diff.dag() * diff\n vals = sp_eigs(diff.data, diff.isherm, vecs=False, sparse=sparse, tol=tol)\n return float(np.real(0.5 * np.sum(np.sqrt(np.abs(vals)))))\n\n\ndef hilbert_dist(A, B):\n \"\"\"\n Returns the Hilbert-Schmidt distance between two density matrices A & B.\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n dist : float\n Hilbert-Schmidt distance between density matrices.\n\n Notes\n -----\n See V. Vedral and M. B. Plenio, Phys. Rev. A 57, 1619 (1998).\n\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('A and B do not have same dimensions.')\n\n return ((A - B)**2).tr()\n\n\ndef bures_dist(A, B):\n \"\"\"\n Returns the Bures distance between two density matrices A & B.\n\n The Bures distance ranges from 0, for states with unit fidelity,\n to sqrt(2).\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n dist : float\n Bures distance between density matrices.\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('A and B do not have same dimensions.')\n\n dist = np.sqrt(2.0 * (1.0 - fidelity(A, B)))\n return dist\n\n\ndef bures_angle(A, B):\n \"\"\"\n Returns the Bures Angle between two density matrices A & B.\n\n The Bures angle ranges from 0, for states with unit fidelity, to pi/2.\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n angle : float\n Bures angle between density matrices.\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('A and B do not have same dimensions.')\n\n return np.arccos(fidelity(A, B))\n\ndef unitarity(oper):\n \"\"\"\n Returns the unitarity of a quantum map, defined as the Frobenius norm\n of the unital block of that map's superoperator representation.\n\n Parameters\n ----------\n oper : Qobj\n Quantum map under consideration.\n\n Returns\n -------\n u : float\n Unitarity of ``oper``.\n \"\"\"\n Eu = _super_to_superpauli(oper).full()[1:, 1:]\n #return np.real(np.trace(np.dot(Eu, Eu.conj().T))) / len(Eu)\n return np.linalg.norm(Eu, 'fro')**2 / len(Eu)\n", "path": "qutip/metrics.py"}], "after_files": [{"content": "# This file is part of QuTiP: Quantum Toolbox in Python.\n#\n# Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n# 1. Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n#\n# 2. 
Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n#\n# 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names\n# of its contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\n# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\"\"\"\nThis module contains a collection of functions for calculating metrics\n(distance measures) between states and operators.\n\"\"\"\n\n__all__ = ['fidelity', 'tracedist', 'bures_dist', 'bures_angle',\n 'hilbert_dist', 'average_gate_fidelity', 'process_fidelity',\n 'unitarity']\n\nimport numpy as np\nfrom qutip.sparse import sp_eigs\nfrom qutip.states import ket2dm\nfrom qutip.superop_reps import to_kraus, _super_to_superpauli\n\n\ndef fidelity(A, B):\n \"\"\"\n Calculates the fidelity (pseudo-metric) between two density matrices.\n See: Nielsen & Chuang, \"Quantum Computation and Quantum Information\"\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n fid : float\n Fidelity pseudo-metric between A and B.\n\n Examples\n --------\n >>> x = fock_dm(5,3)\n >>> y = coherent_dm(5,1)\n >>> fidelity(x,y)\n 0.24104350624628332\n\n \"\"\"\n if A.isket or A.isbra:\n # Take advantage of the fact that the density operator for A\n # is a projector to avoid a sqrtm call.\n sqrtmA = ket2dm(A)\n # Check whether we have to turn B into a density operator, too.\n if B.isket or B.isbra:\n B = ket2dm(B)\n else:\n if B.isket or B.isbra:\n # Swap the order so that we can take a more numerically\n # stable square root of B.\n return fidelity(B, A)\n # If we made it here, both A and B are operators, so\n # we have to take the sqrtm of one of them.\n sqrtmA = A.sqrtm()\n\n if sqrtmA.dims != B.dims:\n raise TypeError('Density matrices do not have same dimensions.')\n\n # We don't actually need the whole matrix here, just the trace\n # of its square root, so let's just get its eigenenergies instead.\n # We also truncate negative eigenvalues to avoid nan propagation;\n # even for positive semidefinite matrices, small negative eigenvalues\n # can be reported.\n eig_vals = (sqrtmA * B * sqrtmA).eigenenergies()\n return float(np.real(np.sqrt(eig_vals[eig_vals > 0]).sum()))\n\n\ndef process_fidelity(U1, U2, normalize=True):\n \"\"\"\n Calculate the process fidelity given two process operators.\n \"\"\"\n if normalize:\n return (U1 * U2).tr() / (U1.tr() * U2.tr())\n else:\n return (U1 * U2).tr()\n\n\ndef 
average_gate_fidelity(oper):\n \"\"\"\n Given a Qobj representing the supermatrix form of a map, returns the\n average gate fidelity (pseudo-metric) of that map.\n\n Parameters\n ----------\n A : Qobj\n Quantum object representing a superoperator.\n\n Returns\n -------\n fid : float\n Fidelity pseudo-metric between A and the identity superoperator.\n \"\"\"\n kraus_form = to_kraus(oper)\n d = kraus_form[0].shape[0]\n\n if kraus_form[0].shape[1] != d:\n return TypeError(\"Average gate fielity only implemented for square \"\n \"superoperators.\")\n\n return (d + np.sum([np.abs(A_k.tr())**2\n for A_k in kraus_form])) / (d**2 + d)\n\n\ndef tracedist(A, B, sparse=False, tol=0):\n \"\"\"\n Calculates the trace distance between two density matrices..\n See: Nielsen & Chuang, \"Quantum Computation and Quantum Information\"\n\n Parameters\n ----------!=\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n tol : float\n Tolerance used by sparse eigensolver, if used. (0=Machine precision)\n sparse : {False, True}\n Use sparse eigensolver.\n\n Returns\n -------\n tracedist : float\n Trace distance between A and B.\n\n Examples\n --------\n >>> x=fock_dm(5,3)\n >>> y=coherent_dm(5,1)\n >>> tracedist(x,y)\n 0.9705143161472971\n\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError(\"A and B do not have same dimensions.\")\n\n diff = A - B\n diff = diff.dag() * diff\n vals = sp_eigs(diff.data, diff.isherm, vecs=False, sparse=sparse, tol=tol)\n return float(np.real(0.5 * np.sum(np.sqrt(np.abs(vals)))))\n\n\ndef hilbert_dist(A, B):\n \"\"\"\n Returns the Hilbert-Schmidt distance between two density matrices A & B.\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n dist : float\n Hilbert-Schmidt distance between density matrices.\n\n Notes\n -----\n See V. Vedral and M. B. Plenio, Phys. Rev. 
A 57, 1619 (1998).\n\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('A and B do not have same dimensions.')\n\n return ((A - B)**2).tr()\n\n\ndef bures_dist(A, B):\n \"\"\"\n Returns the Bures distance between two density matrices A & B.\n\n The Bures distance ranges from 0, for states with unit fidelity,\n to sqrt(2).\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n dist : float\n Bures distance between density matrices.\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('A and B do not have same dimensions.')\n\n dist = np.sqrt(2.0 * (1.0 - fidelity(A, B)))\n return dist\n\n\ndef bures_angle(A, B):\n \"\"\"\n Returns the Bures Angle between two density matrices A & B.\n\n The Bures angle ranges from 0, for states with unit fidelity, to pi/2.\n\n Parameters\n ----------\n A : qobj\n Density matrix or state vector.\n B : qobj\n Density matrix or state vector with same dimensions as A.\n\n Returns\n -------\n angle : float\n Bures angle between density matrices.\n \"\"\"\n if A.isket or A.isbra:\n A = ket2dm(A)\n if B.isket or B.isbra:\n B = ket2dm(B)\n\n if A.dims != B.dims:\n raise TypeError('A and B do not have same dimensions.')\n\n return np.arccos(fidelity(A, B))\n\ndef unitarity(oper):\n \"\"\"\n Returns the unitarity of a quantum map, defined as the Frobenius norm\n of the unital block of that map's superoperator representation.\n\n Parameters\n ----------\n oper : Qobj\n Quantum map under consideration.\n\n Returns\n -------\n u : float\n Unitarity of ``oper``.\n \"\"\"\n Eu = _super_to_superpauli(oper).full()[1:, 1:]\n #return np.real(np.trace(np.dot(Eu, Eu.conj().T))) / len(Eu)\n return np.linalg.norm(Eu, 'fro')**2 / len(Eu)\n", "path": "qutip/metrics.py"}]} | 3,272 | 448 |
gh_patches_debug_12303 | rasdani/github-patches | git_diff | openshift__openshift-ansible-5874 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
installer status callback plugin should be scoped to installer playbooks
#### Description
> When running non-install playbooks (health checks, re-certs, restarts, upgrades...) the callback still displays as if there is an install in progress. This could be confusing for users.
##### Version
origin master
##### Steps To Reproduce
1. `ansible-playbook -i hosts playbooks/byo/openshift-checks/pre-install.yml`
##### Observed Results
Describe what is actually happening.
```
...
INSTALLER STATUS **************************************************************************************************************************************************************************************************
Initialization : Complete
etcd Install : Not Started
NFS Install : Not Started
Load balancer Install : Not Started
Master Install : Not Started
Master Additional Install : Not Started
Node Install : Not Started
GlusterFS Install : Not Started
Hosted Install : Not Started
Metrics Install : Not Started
Logging Install : Not Started
Service Catalog Install : Not Started
```
--- END ISSUE ---
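The direction the report points at, reporting only the phases that the current run actually recorded instead of padding the banner with "Not Started" entries, can be sketched independently of the plugin. The helper below and its argument names are illustrative stand-ins rather than the callback's real API; it only demonstrates the filtering idea on a plain dict shaped like Ansible's `stats.custom`.
```python
def installer_status_lines(custom_stats, phase_titles):
    """Build summary lines only for phases the current run recorded."""
    run = custom_stats.get('_run', {})
    lines = []
    for phase, title in phase_titles.items():
        if phase in run:  # phases that never ran are simply omitted
            lines.append('{} : {}'.format(title, run[phase]))
    return lines


# A health-check or restart playbook records no phases, so nothing is shown:
print(installer_status_lines({}, {'installer_phase_etcd': 'etcd Install'}))
# -> []

# An install run reports only what it actually touched:
print(installer_status_lines(
    {'_run': {'installer_phase_etcd': 'Complete'}},
    {'installer_phase_etcd': 'etcd Install', 'installer_phase_nfs': 'NFS Install'},
))
# -> ['etcd Install : Complete']
```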
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `roles/installer_checkpoint/callback_plugins/installer_checkpoint.py`
Content:
```
1 """Ansible callback plugin to print a summary completion status of installation
2 phases.
3 """
4 from ansible.plugins.callback import CallbackBase
5 from ansible import constants as C
6
7 DOCUMENTATION = '''
8
9 '''
10
11 EXAMPLES = '''
12 ---------------------------------------------
13 Example display of a successful playbook run:
14
15 PLAY RECAP *********************************************************************
16 master01.example.com : ok=158 changed=16 unreachable=0 failed=0
17 node01.example.com : ok=469 changed=74 unreachable=0 failed=0
18 node02.example.com : ok=157 changed=17 unreachable=0 failed=0
19 localhost : ok=24 changed=0 unreachable=0 failed=0
20
21
22 INSTALLER STATUS ***************************************************************
23 Initialization : Complete
24 etcd Install : Complete
25 NFS Install : Not Started
26 Load balancer Install : Not Started
27 Master Install : Complete
28 Master Additional Install : Complete
29 Node Install : Complete
30 GlusterFS Install : Not Started
31 Hosted Install : Complete
32 Metrics Install : Not Started
33 Logging Install : Not Started
34 Service Catalog Install : Not Started
35
36 -----------------------------------------------------
37 Example display if a failure occurs during execution:
38
39 INSTALLER STATUS ***************************************************************
40 Initialization : Complete
41 etcd Install : Complete
42 NFS Install : Not Started
43 Load balancer Install : Not Started
44 Master Install : In Progress
45 This phase can be restarted by running: playbooks/byo/openshift-master/config.yml
46 Master Additional Install : Not Started
47 Node Install : Not Started
48 GlusterFS Install : Not Started
49 Hosted Install : Not Started
50 Metrics Install : Not Started
51 Logging Install : Not Started
52 Service Catalog Install : Not Started
53
54 '''
55
56
57 class CallbackModule(CallbackBase):
58 """This callback summarizes installation phase status."""
59
60 CALLBACK_VERSION = 2.0
61 CALLBACK_TYPE = 'aggregate'
62 CALLBACK_NAME = 'installer_checkpoint'
63 CALLBACK_NEEDS_WHITELIST = False
64
65 def __init__(self):
66 super(CallbackModule, self).__init__()
67
68 def v2_playbook_on_stats(self, stats):
69
70 # Set the order of the installer phases
71 installer_phases = [
72 'installer_phase_initialize',
73 'installer_phase_etcd',
74 'installer_phase_nfs',
75 'installer_phase_loadbalancer',
76 'installer_phase_master',
77 'installer_phase_master_additional',
78 'installer_phase_node',
79 'installer_phase_glusterfs',
80 'installer_phase_hosted',
81 'installer_phase_metrics',
82 'installer_phase_logging',
83 'installer_phase_servicecatalog',
84 'installer_phase_management',
85 ]
86
87 # Define the attributes of the installer phases
88 phase_attributes = {
89 'installer_phase_initialize': {
90 'title': 'Initialization',
91 'playbook': ''
92 },
93 'installer_phase_etcd': {
94 'title': 'etcd Install',
95 'playbook': 'playbooks/byo/openshift-etcd/config.yml'
96 },
97 'installer_phase_nfs': {
98 'title': 'NFS Install',
99 'playbook': 'playbooks/byo/openshift-nfs/config.yml'
100 },
101 'installer_phase_loadbalancer': {
102 'title': 'Load balancer Install',
103 'playbook': 'playbooks/byo/openshift-loadbalancer/config.yml'
104 },
105 'installer_phase_master': {
106 'title': 'Master Install',
107 'playbook': 'playbooks/byo/openshift-master/config.yml'
108 },
109 'installer_phase_master_additional': {
110 'title': 'Master Additional Install',
111 'playbook': 'playbooks/byo/openshift-master/additional_config.yml'
112 },
113 'installer_phase_node': {
114 'title': 'Node Install',
115 'playbook': 'playbooks/byo/openshift-node/config.yml'
116 },
117 'installer_phase_glusterfs': {
118 'title': 'GlusterFS Install',
119 'playbook': 'playbooks/byo/openshift-glusterfs/config.yml'
120 },
121 'installer_phase_hosted': {
122 'title': 'Hosted Install',
123 'playbook': 'playbooks/byo/openshift-cluster/openshift-hosted.yml'
124 },
125 'installer_phase_metrics': {
126 'title': 'Metrics Install',
127 'playbook': 'playbooks/byo/openshift-cluster/openshift-metrics.yml'
128 },
129 'installer_phase_logging': {
130 'title': 'Logging Install',
131 'playbook': 'playbooks/byo/openshift-cluster/openshift-logging.yml'
132 },
133 'installer_phase_servicecatalog': {
134 'title': 'Service Catalog Install',
135 'playbook': 'playbooks/byo/openshift-cluster/service-catalog.yml'
136 },
137 'installer_phase_management': {
138 'title': 'Management Install',
139 'playbook': 'playbooks/byo/openshift-management/config.yml'
140 },
141 }
142
143 # Find the longest phase title
144 max_column = 0
145 for phase in phase_attributes:
146 max_column = max(max_column, len(phase_attributes[phase]['title']))
147
148 if '_run' in stats.custom:
149 self._display.banner('INSTALLER STATUS')
150 for phase in installer_phases:
151 phase_title = phase_attributes[phase]['title']
152 padding = max_column - len(phase_title) + 2
153 if phase in stats.custom['_run']:
154 phase_status = stats.custom['_run'][phase]
155 self._display.display(
156 '{}{}: {}'.format(phase_title, ' ' * padding, phase_status),
157 color=self.phase_color(phase_status))
158 if phase_status == 'In Progress' and phase != 'installer_phase_initialize':
159 self._display.display(
160 '\tThis phase can be restarted by running: {}'.format(
161 phase_attributes[phase]['playbook']))
162 else:
163 # Phase was not found in custom stats
164 self._display.display(
165 '{}{}: {}'.format(phase_title, ' ' * padding, 'Not Started'),
166 color=C.COLOR_SKIP)
167
168 self._display.display("", screen_only=True)
169
170 def phase_color(self, status):
171 """ Return color code for installer phase"""
172 valid_status = [
173 'In Progress',
174 'Complete',
175 ]
176
177 if status not in valid_status:
178 self._display.warning('Invalid phase status defined: {}'.format(status))
179
180 if status == 'Complete':
181 phase_color = C.COLOR_OK
182 elif status == 'In Progress':
183 phase_color = C.COLOR_ERROR
184 else:
185 phase_color = C.COLOR_WARN
186
187 return phase_color
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py b/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py
--- a/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py
+++ b/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py
@@ -159,11 +159,6 @@
self._display.display(
'\tThis phase can be restarted by running: {}'.format(
phase_attributes[phase]['playbook']))
- else:
- # Phase was not found in custom stats
- self._display.display(
- '{}{}: {}'.format(phase_title, ' ' * padding, 'Not Started'),
- color=C.COLOR_SKIP)
self._display.display("", screen_only=True)
| {"golden_diff": "diff --git a/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py b/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py\n--- a/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py\n+++ b/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py\n@@ -159,11 +159,6 @@\n self._display.display(\n '\\tThis phase can be restarted by running: {}'.format(\n phase_attributes[phase]['playbook']))\n- else:\n- # Phase was not found in custom stats\n- self._display.display(\n- '{}{}: {}'.format(phase_title, ' ' * padding, 'Not Started'),\n- color=C.COLOR_SKIP)\n \n self._display.display(\"\", screen_only=True)\n", "issue": "installer status callback plugin should be scoped to installer playbooks\n#### Description\r\n\r\n> When running non-install playbooks (health checks, re-certs, restarts, upgrades...) the callback still displays as if there is an install in progress. This could be confusing for users.\r\n\r\n\r\n##### Version\r\n\r\norigin master\r\n\r\n##### Steps To Reproduce\r\n1. `ansible-playbook -i hosts playbooks/byo/openshift-checks/pre-install.yml`\r\n\r\n##### Observed Results\r\nDescribe what is actually happening.\r\n\r\n```\r\n...\r\nINSTALLER STATUS **************************************************************************************************************************************************************************************************\r\nInitialization : Complete\r\netcd Install : Not Started\r\nNFS Install : Not Started\r\nLoad balancer Install : Not Started\r\nMaster Install : Not Started\r\nMaster Additional Install : Not Started\r\nNode Install : Not Started\r\nGlusterFS Install : Not Started\r\nHosted Install : Not Started\r\nMetrics Install : Not Started\r\nLogging Install : Not Started\r\nService Catalog Install : Not Started\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Ansible callback plugin to print a summary completion status of installation\nphases.\n\"\"\"\nfrom ansible.plugins.callback import CallbackBase\nfrom ansible import constants as C\n\nDOCUMENTATION = '''\n\n'''\n\nEXAMPLES = '''\n---------------------------------------------\nExample display of a successful playbook run:\n\nPLAY RECAP *********************************************************************\nmaster01.example.com : ok=158 changed=16 unreachable=0 failed=0\nnode01.example.com : ok=469 changed=74 unreachable=0 failed=0\nnode02.example.com : ok=157 changed=17 unreachable=0 failed=0\nlocalhost : ok=24 changed=0 unreachable=0 failed=0\n\n\nINSTALLER STATUS ***************************************************************\nInitialization : Complete\netcd Install : Complete\nNFS Install : Not Started\nLoad balancer Install : Not Started\nMaster Install : Complete\nMaster Additional Install : Complete\nNode Install : Complete\nGlusterFS Install : Not Started\nHosted Install : Complete\nMetrics Install : Not Started\nLogging Install : Not Started\nService Catalog Install : Not Started\n\n-----------------------------------------------------\nExample display if a failure occurs during execution:\n\nINSTALLER STATUS ***************************************************************\nInitialization : Complete\netcd Install : Complete\nNFS Install : Not Started\nLoad balancer Install : Not Started\nMaster Install : In Progress\n This phase can be restarted by running: playbooks/byo/openshift-master/config.yml\nMaster Additional Install : Not Started\nNode Install : Not Started\nGlusterFS Install : Not Started\nHosted Install : Not 
Started\nMetrics Install : Not Started\nLogging Install : Not Started\nService Catalog Install : Not Started\n\n'''\n\n\nclass CallbackModule(CallbackBase):\n \"\"\"This callback summarizes installation phase status.\"\"\"\n\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'aggregate'\n CALLBACK_NAME = 'installer_checkpoint'\n CALLBACK_NEEDS_WHITELIST = False\n\n def __init__(self):\n super(CallbackModule, self).__init__()\n\n def v2_playbook_on_stats(self, stats):\n\n # Set the order of the installer phases\n installer_phases = [\n 'installer_phase_initialize',\n 'installer_phase_etcd',\n 'installer_phase_nfs',\n 'installer_phase_loadbalancer',\n 'installer_phase_master',\n 'installer_phase_master_additional',\n 'installer_phase_node',\n 'installer_phase_glusterfs',\n 'installer_phase_hosted',\n 'installer_phase_metrics',\n 'installer_phase_logging',\n 'installer_phase_servicecatalog',\n 'installer_phase_management',\n ]\n\n # Define the attributes of the installer phases\n phase_attributes = {\n 'installer_phase_initialize': {\n 'title': 'Initialization',\n 'playbook': ''\n },\n 'installer_phase_etcd': {\n 'title': 'etcd Install',\n 'playbook': 'playbooks/byo/openshift-etcd/config.yml'\n },\n 'installer_phase_nfs': {\n 'title': 'NFS Install',\n 'playbook': 'playbooks/byo/openshift-nfs/config.yml'\n },\n 'installer_phase_loadbalancer': {\n 'title': 'Load balancer Install',\n 'playbook': 'playbooks/byo/openshift-loadbalancer/config.yml'\n },\n 'installer_phase_master': {\n 'title': 'Master Install',\n 'playbook': 'playbooks/byo/openshift-master/config.yml'\n },\n 'installer_phase_master_additional': {\n 'title': 'Master Additional Install',\n 'playbook': 'playbooks/byo/openshift-master/additional_config.yml'\n },\n 'installer_phase_node': {\n 'title': 'Node Install',\n 'playbook': 'playbooks/byo/openshift-node/config.yml'\n },\n 'installer_phase_glusterfs': {\n 'title': 'GlusterFS Install',\n 'playbook': 'playbooks/byo/openshift-glusterfs/config.yml'\n },\n 'installer_phase_hosted': {\n 'title': 'Hosted Install',\n 'playbook': 'playbooks/byo/openshift-cluster/openshift-hosted.yml'\n },\n 'installer_phase_metrics': {\n 'title': 'Metrics Install',\n 'playbook': 'playbooks/byo/openshift-cluster/openshift-metrics.yml'\n },\n 'installer_phase_logging': {\n 'title': 'Logging Install',\n 'playbook': 'playbooks/byo/openshift-cluster/openshift-logging.yml'\n },\n 'installer_phase_servicecatalog': {\n 'title': 'Service Catalog Install',\n 'playbook': 'playbooks/byo/openshift-cluster/service-catalog.yml'\n },\n 'installer_phase_management': {\n 'title': 'Management Install',\n 'playbook': 'playbooks/byo/openshift-management/config.yml'\n },\n }\n\n # Find the longest phase title\n max_column = 0\n for phase in phase_attributes:\n max_column = max(max_column, len(phase_attributes[phase]['title']))\n\n if '_run' in stats.custom:\n self._display.banner('INSTALLER STATUS')\n for phase in installer_phases:\n phase_title = phase_attributes[phase]['title']\n padding = max_column - len(phase_title) + 2\n if phase in stats.custom['_run']:\n phase_status = stats.custom['_run'][phase]\n self._display.display(\n '{}{}: {}'.format(phase_title, ' ' * padding, phase_status),\n color=self.phase_color(phase_status))\n if phase_status == 'In Progress' and phase != 'installer_phase_initialize':\n self._display.display(\n '\\tThis phase can be restarted by running: {}'.format(\n phase_attributes[phase]['playbook']))\n else:\n # Phase was not found in custom stats\n self._display.display(\n '{}{}: {}'.format(phase_title, ' ' * 
padding, 'Not Started'),\n color=C.COLOR_SKIP)\n\n self._display.display(\"\", screen_only=True)\n\n def phase_color(self, status):\n \"\"\" Return color code for installer phase\"\"\"\n valid_status = [\n 'In Progress',\n 'Complete',\n ]\n\n if status not in valid_status:\n self._display.warning('Invalid phase status defined: {}'.format(status))\n\n if status == 'Complete':\n phase_color = C.COLOR_OK\n elif status == 'In Progress':\n phase_color = C.COLOR_ERROR\n else:\n phase_color = C.COLOR_WARN\n\n return phase_color\n", "path": "roles/installer_checkpoint/callback_plugins/installer_checkpoint.py"}], "after_files": [{"content": "\"\"\"Ansible callback plugin to print a summary completion status of installation\nphases.\n\"\"\"\nfrom ansible.plugins.callback import CallbackBase\nfrom ansible import constants as C\n\nDOCUMENTATION = '''\n\n'''\n\nEXAMPLES = '''\n---------------------------------------------\nExample display of a successful playbook run:\n\nPLAY RECAP *********************************************************************\nmaster01.example.com : ok=158 changed=16 unreachable=0 failed=0\nnode01.example.com : ok=469 changed=74 unreachable=0 failed=0\nnode02.example.com : ok=157 changed=17 unreachable=0 failed=0\nlocalhost : ok=24 changed=0 unreachable=0 failed=0\n\n\nINSTALLER STATUS ***************************************************************\nInitialization : Complete\netcd Install : Complete\nNFS Install : Not Started\nLoad balancer Install : Not Started\nMaster Install : Complete\nMaster Additional Install : Complete\nNode Install : Complete\nGlusterFS Install : Not Started\nHosted Install : Complete\nMetrics Install : Not Started\nLogging Install : Not Started\nService Catalog Install : Not Started\n\n-----------------------------------------------------\nExample display if a failure occurs during execution:\n\nINSTALLER STATUS ***************************************************************\nInitialization : Complete\netcd Install : Complete\nNFS Install : Not Started\nLoad balancer Install : Not Started\nMaster Install : In Progress\n This phase can be restarted by running: playbooks/byo/openshift-master/config.yml\nMaster Additional Install : Not Started\nNode Install : Not Started\nGlusterFS Install : Not Started\nHosted Install : Not Started\nMetrics Install : Not Started\nLogging Install : Not Started\nService Catalog Install : Not Started\n\n'''\n\n\nclass CallbackModule(CallbackBase):\n \"\"\"This callback summarizes installation phase status.\"\"\"\n\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'aggregate'\n CALLBACK_NAME = 'installer_checkpoint'\n CALLBACK_NEEDS_WHITELIST = False\n\n def __init__(self):\n super(CallbackModule, self).__init__()\n\n def v2_playbook_on_stats(self, stats):\n\n # Set the order of the installer phases\n installer_phases = [\n 'installer_phase_initialize',\n 'installer_phase_etcd',\n 'installer_phase_nfs',\n 'installer_phase_loadbalancer',\n 'installer_phase_master',\n 'installer_phase_master_additional',\n 'installer_phase_node',\n 'installer_phase_glusterfs',\n 'installer_phase_hosted',\n 'installer_phase_metrics',\n 'installer_phase_logging',\n 'installer_phase_servicecatalog',\n 'installer_phase_management',\n ]\n\n # Define the attributes of the installer phases\n phase_attributes = {\n 'installer_phase_initialize': {\n 'title': 'Initialization',\n 'playbook': ''\n },\n 'installer_phase_etcd': {\n 'title': 'etcd Install',\n 'playbook': 'playbooks/byo/openshift-etcd/config.yml'\n },\n 'installer_phase_nfs': {\n 'title': 'NFS 
Install',\n 'playbook': 'playbooks/byo/openshift-nfs/config.yml'\n },\n 'installer_phase_loadbalancer': {\n 'title': 'Load balancer Install',\n 'playbook': 'playbooks/byo/openshift-loadbalancer/config.yml'\n },\n 'installer_phase_master': {\n 'title': 'Master Install',\n 'playbook': 'playbooks/byo/openshift-master/config.yml'\n },\n 'installer_phase_master_additional': {\n 'title': 'Master Additional Install',\n 'playbook': 'playbooks/byo/openshift-master/additional_config.yml'\n },\n 'installer_phase_node': {\n 'title': 'Node Install',\n 'playbook': 'playbooks/byo/openshift-node/config.yml'\n },\n 'installer_phase_glusterfs': {\n 'title': 'GlusterFS Install',\n 'playbook': 'playbooks/byo/openshift-glusterfs/config.yml'\n },\n 'installer_phase_hosted': {\n 'title': 'Hosted Install',\n 'playbook': 'playbooks/byo/openshift-cluster/openshift-hosted.yml'\n },\n 'installer_phase_metrics': {\n 'title': 'Metrics Install',\n 'playbook': 'playbooks/byo/openshift-cluster/openshift-metrics.yml'\n },\n 'installer_phase_logging': {\n 'title': 'Logging Install',\n 'playbook': 'playbooks/byo/openshift-cluster/openshift-logging.yml'\n },\n 'installer_phase_servicecatalog': {\n 'title': 'Service Catalog Install',\n 'playbook': 'playbooks/byo/openshift-cluster/service-catalog.yml'\n },\n 'installer_phase_management': {\n 'title': 'Management Install',\n 'playbook': 'playbooks/byo/openshift-management/config.yml'\n },\n }\n\n # Find the longest phase title\n max_column = 0\n for phase in phase_attributes:\n max_column = max(max_column, len(phase_attributes[phase]['title']))\n\n if '_run' in stats.custom:\n self._display.banner('INSTALLER STATUS')\n for phase in installer_phases:\n phase_title = phase_attributes[phase]['title']\n padding = max_column - len(phase_title) + 2\n if phase in stats.custom['_run']:\n phase_status = stats.custom['_run'][phase]\n self._display.display(\n '{}{}: {}'.format(phase_title, ' ' * padding, phase_status),\n color=self.phase_color(phase_status))\n if phase_status == 'In Progress' and phase != 'installer_phase_initialize':\n self._display.display(\n '\\tThis phase can be restarted by running: {}'.format(\n phase_attributes[phase]['playbook']))\n\n self._display.display(\"\", screen_only=True)\n\n def phase_color(self, status):\n \"\"\" Return color code for installer phase\"\"\"\n valid_status = [\n 'In Progress',\n 'Complete',\n ]\n\n if status not in valid_status:\n self._display.warning('Invalid phase status defined: {}'.format(status))\n\n if status == 'Complete':\n phase_color = C.COLOR_OK\n elif status == 'In Progress':\n phase_color = C.COLOR_ERROR\n else:\n phase_color = C.COLOR_WARN\n\n return phase_color\n", "path": "roles/installer_checkpoint/callback_plugins/installer_checkpoint.py"}]} | 2,333 | 159 |
gh_patches_debug_30260 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1368 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Costa Rica has a new generation type and gives too many warnings
There seems to be a new generation type for Costa Rica that is causing a warning.
https://kibana.electricitymap.org/app/kibana#/doc/93e631f0-245f-11e8-a779-9d01de8d7a71/logstash-2018.04.28/doc?id=N-7nDGMBoL7AEh1EXs9P&_g=()
```unknown is not mapped to generation type```
However, looking at the logs for the past week [here](https://kibana.electricitymap.org/app/kibana#/discover/1710fdd0-2460-11e8-a779-9d01de8d7a71?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now%2Fw,mode:quick,to:now%2Fw))&_a=(columns:!(level,extra.key,message),filters:!(('$state':(store:appState),exists:(field:level),meta:(alias:!n,disabled:!f,index:'93e631f0-245f-11e8-a779-9d01de8d7a71',key:level,negate:!f,type:exists,value:exists)),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'93e631f0-245f-11e8-a779-9d01de8d7a71',key:level,negate:!t,params:(query:INFO,type:phrase),type:phrase,value:INFO),query:(match:(level:(query:INFO,type:phrase))))),index:'93e631f0-245f-11e8-a779-9d01de8d7a71',interval:auto,query:(language:lucene,query:CR),sort:!('@timestamp',desc))), the warning happens multiple times in one run. If I remember rightly, the parser gets data for each hour available, so we should alter it to warn only once per run to reduce clutter.
--- END ISSUE ---
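The warn-once-per-run idea suggested above can be sketched before looking at the parser itself: while iterating over the hourly data, remember unmapped plant names in a set and log each one a single time afterwards. The function below is a simplified stand-in for the parser's loop, with made-up argument names, purely to show the deduplication pattern.
```python
import logging

logger = logging.getLogger(__name__)


def tally_sources(hourly_plants, power_plants, zone_key='CR'):
    """Accumulate production per source, warning once per unknown plant."""
    totals = {}
    unknown_plants = set()
    for plants in hourly_plants:              # one entry per hour
        for name, value in plants.items():
            source = power_plants.get(name)
            if source is None:
                source = 'unknown'
                unknown_plants.add(name)      # remember it, don't warn yet
            totals[source] = totals.get(source, 0.0) + max(0.0, value)
    for name in unknown_plants:               # a single warning per plant
        logger.warning('%s is not mapped to generation type', name,
                       extra={'key': zone_key})
    return totals
```
With 24 hourly columns in a day, an unmapped plant then produces one log line per run instead of one per hour.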
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/CR.py`
Content:
```
1 #!/usr/bin/env python3
2 # coding=utf-8
3
4 import logging
5
6 import arrow
7 import pandas as pd
8 import requests
9 from bs4 import BeautifulSoup
10
11 TIMEZONE = 'America/Costa_Rica'
12 DATE_FORMAT = 'DD/MM/YYYY'
13 MONTH_FORMAT = 'MM/YYYY'
14 POWER_PLANTS = {
15 u'Aeroenergía': 'wind',
16 u'Altamira': 'wind',
17 u'Angostura': 'hydro',
18 u'Arenal': 'hydro',
19 u'Balsa Inferior': 'hydro',
20 u'Barranca': 'unknown',
21 u'Barro Morado': 'geothermal',
22 u'Bijagua': 'hydro',
23 u'Birris12': 'hydro',
24 u'Birris3': 'hydro',
25 u'Boca de Pozo': 'hydro',
26 u'CNFL': 'unknown',
27 u'Cachí': 'hydro',
28 u'Campos Azules': 'wind',
29 u'Canalete': 'unknown',
30 u'Cariblanco': 'hydro',
31 u'Carrillos': 'hydro',
32 u'Caño Grande': 'hydro',
33 u'Caño Grande III': 'hydro',
34 u'Chiripa': 'wind',
35 u'Chocosuelas': 'hydro',
36 u'Chucás': 'hydro',
37 u'Cubujuquí': 'hydro',
38 u'Daniel Gutiérrez': 'hydro',
39 u'Dengo': 'hydro',
40 u'Don Pedro': 'hydro',
41 u'Doña Julia': 'hydro',
42 u'Echandi': 'hydro',
43 u'El Angel': 'hydro',
44 u'El Angel Ampliación': 'hydro',
45 u'El Embalse': 'hydro',
46 u'El General': 'hydro',
47 u'El Viejo': 'biomass',
48 u'Garabito': 'oil',
49 u'Garita': 'hydro',
50 u'Guápiles': 'oil',
51 u'Hidrozarcas': 'hydro',
52 u'La Esperanza (CoopeL)': 'hydro',
53 u'La Joya': 'hydro',
54 u'Los Negros': 'hydro',
55 u'Los Santos': 'wind',
56 u'MOVASA': 'wind',
57 u'Matamoros': 'unknown',
58 u'Miravalles I': 'geothermal',
59 u'Miravalles II': 'geothermal',
60 u'Miravalles III': 'geothermal',
61 u'Miravalles V': 'geothermal',
62 u'Moín II': 'oil',
63 u'Moín III': 'oil',
64 u'Orosí': 'wind',
65 u'Orotina': 'unknown',
66 u'Otros': 'unknown',
67 u'PE Mogote': 'wind',
68 u'PEG': 'wind',
69 u'Pailas': 'geothermal',
70 u'Parque Solar Juanilama': 'solar',
71 u'Parque Solar Miravalles': 'solar',
72 u'Peñas Blancas': 'hydro',
73 u'Pirrís': 'hydro',
74 u'Plantas Eólicas': 'wind',
75 u'Platanar': 'hydro',
76 u'Pocosol': 'hydro',
77 u'Poás I y II': 'hydro',
78 u'Reventazón': 'hydro',
79 u'Río Lajas': 'hydro',
80 u'Río Macho': 'hydro',
81 u'San Antonio': 'oil',
82 u'San Lorenzo (C)': 'hydro',
83 u'Sandillal': 'hydro',
84 u'Suerkata': 'hydro',
85 u'Taboga': 'biomass',
86 u'Tacares': 'hydro',
87 u'Tejona': 'wind',
88 u'Tilawind': 'wind',
89 u'Torito': 'hydro',
90 u'Toro I': 'hydro',
91 u'Toro II': 'hydro',
92 u'Toro III': 'hydro',
93 u'Tuis (JASEC)': 'hydro',
94 u'Valle Central': 'wind',
95 u'Vara Blanca': 'hydro',
96 u'Ventanas-Garita': 'hydro',
97 u'Vientos de La Perla': 'wind',
98 u'Vientos de Miramar': 'wind',
99 u'Vientos del Este': 'wind',
100 u'Volcán': 'hydro',
101 }
102
103 CHARACTERISTIC_NAME = 'Angostura'
104
105
106 def empty_record(zone_key):
107 return {
108 'zoneKey': zone_key,
109 'capacity': {},
110 'production': {
111 'biomass': 0.0,
112 'coal': 0.0,
113 'gas': 0.0,
114 'hydro': 0.0,
115 'nuclear': 0.0,
116 'oil': 0.0,
117 'solar': 0.0,
118 'wind': 0.0,
119 'geothermal': 0.0,
120 'unknown': 0.0
121 },
122 'storage': {},
123 'source': 'grupoice.com'
124 }
125
126
127 def df_to_data(zone_key, day, df, logger):
128 df = df.dropna(axis=1, how='any')
129 # Check for empty dataframe
130 if df.shape == (1, 1):
131 return []
132 df = df.drop(['Intercambio Sur', 'Intercambio Norte', 'Total'], errors='ignore')
133 df = df.iloc[:, :-1]
134
135 results = []
136 hour = 0
137 for column in df:
138 data = empty_record(zone_key)
139 data_time = day.replace(hour=hour, minute=0, second=0, microsecond=0).datetime
140 for index, value in df[column].items():
141 source = POWER_PLANTS.get(index)
142 if not source:
143 source = 'unknown'
144 logger.warning('{} is not mapped to generation type'.format(source),
145 extra={'key': zone_key})
146 data['datetime'] = data_time
147 data['production'][source] += max(0.0, value)
148 hour += 1
149 results.append(data)
150
151 return results
152
153
154 def fetch_production(zone_key='CR', session=None,
155 target_datetime=None, logger=logging.getLogger(__name__)):
156 # ensure we have an arrow object. if no target_datetime is specified, this defaults to now.
157 target_datetime = arrow.get(target_datetime).to(TIMEZONE)
158
159 if target_datetime < arrow.get('2012-07-01'):
160 # data availability limit found by manual trial and error
161 logger.error('CR API does not provide data before 2012-07-01, '
162 '{} was requested'.format(target_datetime),
163 extra={"key": zone_key})
164 return None
165
166 # Do not use existing session as some amount of cache is taking place
167 r = requests.session()
168 url = 'https://appcenter.grupoice.com/CenceWeb/CencePosdespachoNacional.jsf'
169 response = r.get(url)
170
171 soup = BeautifulSoup(response.text, 'html.parser')
172 jsf_view_state = soup.select('#javax.faces.ViewState')[0]['value']
173
174 data = [
175 ('formPosdespacho', 'formPosdespacho'),
176 ('formPosdespacho:txtFechaInicio_input', target_datetime.format(DATE_FORMAT)),
177 ('formPosdespacho:pickFecha', ''),
178 ('formPosdespacho:j_idt60_selection', ''),
179 ('formPosdespacho:j_idt60_scrollState', '0,1915'),
180 ('javax.faces.ViewState', jsf_view_state),
181 ]
182 response = r.post(url, cookies={}, data=data)
183
184 # tell pandas which table to use by providing CHARACTERISTIC_NAME
185 df = pd.read_html(response.text, match=CHARACTERISTIC_NAME, skiprows=1, index_col=0)[0]
186
187 results = df_to_data(zone_key, target_datetime, df, logger)
188
189 return results
190
191
192 def fetch_exchange(zone_key1='CR', zone_key2='NI', session=None, target_datetime=None, logger=None):
193 """Requests the last known power exchange (in MW) between two regions
194
195 Arguments:
196 zone_key1 -- the first country code
197 zone_key2 -- the second country code; order of the two codes in params doesn't matter
198 session (optional) -- request session passed in order to re-use an existing session
199
200 Return:
201 A dictionary in the form:
202 {
203 'sortedZoneKeys': 'DK->NO',
204 'datetime': '2017-01-01T00:00:00Z',
205 'netFlow': 0.0,
206 'source': 'mysource.com'
207 }
208
209 where net flow is from DK into NO
210 """
211 if target_datetime:
212 raise NotImplementedError('This parser is not yet able to parse past dates')
213
214 sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))
215
216 df = pd.read_csv('http://www.enteoperador.org/newsite/flash/data.csv', index_col=False)
217
218 if sorted_zone_keys == 'CR->NI':
219 flow = df['NICR'][0]
220 elif sorted_zone_keys == 'CR->PA':
221 flow = -1 * df['CRPA'][0]
222 else:
223 raise NotImplementedError('This exchange pair is not implemented')
224
225 data = {
226 'datetime': arrow.now(TIMEZONE).datetime,
227 'sortedZoneKeys': sorted_zone_keys,
228 'netFlow': flow,
229 'source': 'enteoperador.org'
230 }
231
232 return data
233
234
235 if __name__ == '__main__':
236 """Main method, never used by the Electricity Map backend, but handy for testing."""
237
238 from pprint import pprint
239
240 print('fetch_production() ->')
241 pprint(fetch_production())
242
243 print('fetch_production(target_datetime=arrow.get("2018-03-13T12:00Z") ->')
244 pprint(fetch_production(target_datetime=arrow.get('2018-03-13T12:00Z')))
245
246 # this should work
247 print('fetch_production(target_datetime=arrow.get("2013-03-13T12:00Z") ->')
248 pprint(fetch_production(target_datetime=arrow.get('2013-03-13T12:00Z')))
249
250 # this should return None
251 print('fetch_production(target_datetime=arrow.get("2007-03-13T12:00Z") ->')
252 pprint(fetch_production(target_datetime=arrow.get('2007-03-13T12:00Z')))
253
254 print('fetch_exchange() ->')
255 print(fetch_exchange())
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsers/CR.py b/parsers/CR.py
--- a/parsers/CR.py
+++ b/parsers/CR.py
@@ -52,6 +52,7 @@
u'La Esperanza (CoopeL)': 'hydro',
u'La Joya': 'hydro',
u'Los Negros': 'hydro',
+ u'Los Negros II': 'hydro',
u'Los Santos': 'wind',
u'MOVASA': 'wind',
u'Matamoros': 'unknown',
@@ -59,6 +60,7 @@
u'Miravalles II': 'geothermal',
u'Miravalles III': 'geothermal',
u'Miravalles V': 'geothermal',
+ u'Moín I': 'oil',
u'Moín II': 'oil',
u'Moín III': 'oil',
u'Orosí': 'wind',
@@ -133,6 +135,7 @@
df = df.iloc[:, :-1]
results = []
+ unknown_plants = set()
hour = 0
for column in df:
data = empty_record(zone_key)
@@ -141,13 +144,16 @@
source = POWER_PLANTS.get(index)
if not source:
source = 'unknown'
- logger.warning('{} is not mapped to generation type'.format(source),
- extra={'key': zone_key})
+ unknown_plants.add(index)
data['datetime'] = data_time
data['production'][source] += max(0.0, value)
hour += 1
results.append(data)
+ for plant in unknown_plants:
+ logger.warning('{} is not mapped to generation type'.format(plant),
+ extra={'key': zone_key})
+
return results
| {"golden_diff": "diff --git a/parsers/CR.py b/parsers/CR.py\n--- a/parsers/CR.py\n+++ b/parsers/CR.py\n@@ -52,6 +52,7 @@\n u'La Esperanza (CoopeL)': 'hydro',\n u'La Joya': 'hydro',\n u'Los Negros': 'hydro',\n+ u'Los Negros II': 'hydro',\n u'Los Santos': 'wind',\n u'MOVASA': 'wind',\n u'Matamoros': 'unknown',\n@@ -59,6 +60,7 @@\n u'Miravalles II': 'geothermal',\n u'Miravalles III': 'geothermal',\n u'Miravalles V': 'geothermal',\n+ u'Mo\u00edn I': 'oil',\n u'Mo\u00edn II': 'oil',\n u'Mo\u00edn III': 'oil',\n u'Oros\u00ed': 'wind',\n@@ -133,6 +135,7 @@\n df = df.iloc[:, :-1]\n \n results = []\n+ unknown_plants = set()\n hour = 0\n for column in df:\n data = empty_record(zone_key)\n@@ -141,13 +144,16 @@\n source = POWER_PLANTS.get(index)\n if not source:\n source = 'unknown'\n- logger.warning('{} is not mapped to generation type'.format(source),\n- extra={'key': zone_key})\n+ unknown_plants.add(index)\n data['datetime'] = data_time\n data['production'][source] += max(0.0, value)\n hour += 1\n results.append(data)\n \n+ for plant in unknown_plants:\n+ logger.warning('{} is not mapped to generation type'.format(plant),\n+ extra={'key': zone_key})\n+\n return results\n", "issue": "Costa Rica has a new generation type and gives too many warnings\nThere seems to be a new generation type for Costa Rica that is causing a warning.\r\n\r\nhttps://kibana.electricitymap.org/app/kibana#/doc/93e631f0-245f-11e8-a779-9d01de8d7a71/logstash-2018.04.28/doc?id=N-7nDGMBoL7AEh1EXs9P&_g=()\r\n\r\n```unknown is not mapped to generation type```\r\n\r\nHowever looking at the logs for the past week [here](https://kibana.electricitymap.org/app/kibana#/discover/1710fdd0-2460-11e8-a779-9d01de8d7a71?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now%2Fw,mode:quick,to:now%2Fw))&_a=(columns:!(level,extra.key,message),filters:!(('$state':(store:appState),exists:(field:level),meta:(alias:!n,disabled:!f,index:'93e631f0-245f-11e8-a779-9d01de8d7a71',key:level,negate:!f,type:exists,value:exists)),('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'93e631f0-245f-11e8-a779-9d01de8d7a71',key:level,negate:!t,params:(query:INFO,type:phrase),type:phrase,value:INFO),query:(match:(level:(query:INFO,type:phrase))))),index:'93e631f0-245f-11e8-a779-9d01de8d7a71',interval:auto,query:(language:lucene,query:CR),sort:!('@timestamp',desc))) the warning happens multiple times on one run. 
If I remember rightly the parser gets data for each hour available, we should alter it to only warn once per run to reduce clutter.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# coding=utf-8\n\nimport logging\n\nimport arrow\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup\n\nTIMEZONE = 'America/Costa_Rica'\nDATE_FORMAT = 'DD/MM/YYYY'\nMONTH_FORMAT = 'MM/YYYY'\nPOWER_PLANTS = {\n u'Aeroenerg\u00eda': 'wind',\n u'Altamira': 'wind',\n u'Angostura': 'hydro',\n u'Arenal': 'hydro',\n u'Balsa Inferior': 'hydro',\n u'Barranca': 'unknown',\n u'Barro Morado': 'geothermal',\n u'Bijagua': 'hydro',\n u'Birris12': 'hydro',\n u'Birris3': 'hydro',\n u'Boca de Pozo': 'hydro',\n u'CNFL': 'unknown',\n u'Cach\u00ed': 'hydro',\n u'Campos Azules': 'wind',\n u'Canalete': 'unknown',\n u'Cariblanco': 'hydro',\n u'Carrillos': 'hydro',\n u'Ca\u00f1o Grande': 'hydro',\n u'Ca\u00f1o Grande III': 'hydro',\n u'Chiripa': 'wind',\n u'Chocosuelas': 'hydro',\n u'Chuc\u00e1s': 'hydro',\n u'Cubujuqu\u00ed': 'hydro',\n u'Daniel Guti\u00e9rrez': 'hydro',\n u'Dengo': 'hydro',\n u'Don Pedro': 'hydro',\n u'Do\u00f1a Julia': 'hydro',\n u'Echandi': 'hydro',\n u'El Angel': 'hydro',\n u'El Angel Ampliaci\u00f3n': 'hydro',\n u'El Embalse': 'hydro',\n u'El General': 'hydro',\n u'El Viejo': 'biomass',\n u'Garabito': 'oil',\n u'Garita': 'hydro',\n u'Gu\u00e1piles': 'oil',\n u'Hidrozarcas': 'hydro',\n u'La Esperanza (CoopeL)': 'hydro',\n u'La Joya': 'hydro',\n u'Los Negros': 'hydro',\n u'Los Santos': 'wind',\n u'MOVASA': 'wind',\n u'Matamoros': 'unknown',\n u'Miravalles I': 'geothermal',\n u'Miravalles II': 'geothermal',\n u'Miravalles III': 'geothermal',\n u'Miravalles V': 'geothermal',\n u'Mo\u00edn II': 'oil',\n u'Mo\u00edn III': 'oil',\n u'Oros\u00ed': 'wind',\n u'Orotina': 'unknown',\n u'Otros': 'unknown',\n u'PE Mogote': 'wind',\n u'PEG': 'wind',\n u'Pailas': 'geothermal',\n u'Parque Solar Juanilama': 'solar',\n u'Parque Solar Miravalles': 'solar',\n u'Pe\u00f1as Blancas': 'hydro',\n u'Pirr\u00eds': 'hydro',\n u'Plantas E\u00f3licas': 'wind',\n u'Platanar': 'hydro',\n u'Pocosol': 'hydro',\n u'Po\u00e1s I y II': 'hydro',\n u'Reventaz\u00f3n': 'hydro',\n u'R\u00edo Lajas': 'hydro',\n u'R\u00edo Macho': 'hydro',\n u'San Antonio': 'oil',\n u'San Lorenzo (C)': 'hydro',\n u'Sandillal': 'hydro',\n u'Suerkata': 'hydro',\n u'Taboga': 'biomass',\n u'Tacares': 'hydro',\n u'Tejona': 'wind',\n u'Tilawind': 'wind',\n u'Torito': 'hydro',\n u'Toro I': 'hydro',\n u'Toro II': 'hydro',\n u'Toro III': 'hydro',\n u'Tuis (JASEC)': 'hydro',\n u'Valle Central': 'wind',\n u'Vara Blanca': 'hydro',\n u'Ventanas-Garita': 'hydro',\n u'Vientos de La Perla': 'wind',\n u'Vientos de Miramar': 'wind',\n u'Vientos del Este': 'wind',\n u'Volc\u00e1n': 'hydro',\n}\n\nCHARACTERISTIC_NAME = 'Angostura'\n\n\ndef empty_record(zone_key):\n return {\n 'zoneKey': zone_key,\n 'capacity': {},\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': 0.0,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {},\n 'source': 'grupoice.com'\n }\n\n\ndef df_to_data(zone_key, day, df, logger):\n df = df.dropna(axis=1, how='any')\n # Check for empty dataframe\n if df.shape == (1, 1):\n return []\n df = df.drop(['Intercambio Sur', 'Intercambio Norte', 'Total'], errors='ignore')\n df = df.iloc[:, :-1]\n\n results = []\n hour = 0\n for column in df:\n data = empty_record(zone_key)\n data_time = day.replace(hour=hour, minute=0, second=0, microsecond=0).datetime\n for 
index, value in df[column].items():\n source = POWER_PLANTS.get(index)\n if not source:\n source = 'unknown'\n logger.warning('{} is not mapped to generation type'.format(source),\n extra={'key': zone_key})\n data['datetime'] = data_time\n data['production'][source] += max(0.0, value)\n hour += 1\n results.append(data)\n\n return results\n\n\ndef fetch_production(zone_key='CR', session=None,\n target_datetime=None, logger=logging.getLogger(__name__)):\n # ensure we have an arrow object. if no target_datetime is specified, this defaults to now.\n target_datetime = arrow.get(target_datetime).to(TIMEZONE)\n\n if target_datetime < arrow.get('2012-07-01'):\n # data availability limit found by manual trial and error\n logger.error('CR API does not provide data before 2012-07-01, '\n '{} was requested'.format(target_datetime),\n extra={\"key\": zone_key})\n return None\n\n # Do not use existing session as some amount of cache is taking place\n r = requests.session()\n url = 'https://appcenter.grupoice.com/CenceWeb/CencePosdespachoNacional.jsf'\n response = r.get(url)\n\n soup = BeautifulSoup(response.text, 'html.parser')\n jsf_view_state = soup.select('#javax.faces.ViewState')[0]['value']\n\n data = [\n ('formPosdespacho', 'formPosdespacho'),\n ('formPosdespacho:txtFechaInicio_input', target_datetime.format(DATE_FORMAT)),\n ('formPosdespacho:pickFecha', ''),\n ('formPosdespacho:j_idt60_selection', ''),\n ('formPosdespacho:j_idt60_scrollState', '0,1915'),\n ('javax.faces.ViewState', jsf_view_state),\n ]\n response = r.post(url, cookies={}, data=data)\n\n # tell pandas which table to use by providing CHARACTERISTIC_NAME\n df = pd.read_html(response.text, match=CHARACTERISTIC_NAME, skiprows=1, index_col=0)[0]\n\n results = df_to_data(zone_key, target_datetime, df, logger)\n\n return results\n\n\ndef fetch_exchange(zone_key1='CR', zone_key2='NI', session=None, target_datetime=None, logger=None):\n \"\"\"Requests the last known power exchange (in MW) between two regions\n\n Arguments:\n zone_key1 -- the first country code\n zone_key2 -- the second country code; order of the two codes in params doesn't matter\n session (optional) -- request session passed in order to re-use an existing session\n\n Return:\n A dictionary in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n\n where net flow is from DK into NO\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n df = pd.read_csv('http://www.enteoperador.org/newsite/flash/data.csv', index_col=False)\n\n if sorted_zone_keys == 'CR->NI':\n flow = df['NICR'][0]\n elif sorted_zone_keys == 'CR->PA':\n flow = -1 * df['CRPA'][0]\n else:\n raise NotImplementedError('This exchange pair is not implemented')\n\n data = {\n 'datetime': arrow.now(TIMEZONE).datetime,\n 'sortedZoneKeys': sorted_zone_keys,\n 'netFlow': flow,\n 'source': 'enteoperador.org'\n }\n\n return data\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n from pprint import pprint\n\n print('fetch_production() ->')\n pprint(fetch_production())\n\n print('fetch_production(target_datetime=arrow.get(\"2018-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get('2018-03-13T12:00Z')))\n\n # this should work\n print('fetch_production(target_datetime=arrow.get(\"2013-03-13T12:00Z\") ->')\n 
pprint(fetch_production(target_datetime=arrow.get('2013-03-13T12:00Z')))\n\n # this should return None\n print('fetch_production(target_datetime=arrow.get(\"2007-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get('2007-03-13T12:00Z')))\n\n print('fetch_exchange() ->')\n print(fetch_exchange())\n", "path": "parsers/CR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# coding=utf-8\n\nimport logging\n\nimport arrow\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup\n\nTIMEZONE = 'America/Costa_Rica'\nDATE_FORMAT = 'DD/MM/YYYY'\nMONTH_FORMAT = 'MM/YYYY'\nPOWER_PLANTS = {\n u'Aeroenerg\u00eda': 'wind',\n u'Altamira': 'wind',\n u'Angostura': 'hydro',\n u'Arenal': 'hydro',\n u'Balsa Inferior': 'hydro',\n u'Barranca': 'unknown',\n u'Barro Morado': 'geothermal',\n u'Bijagua': 'hydro',\n u'Birris12': 'hydro',\n u'Birris3': 'hydro',\n u'Boca de Pozo': 'hydro',\n u'CNFL': 'unknown',\n u'Cach\u00ed': 'hydro',\n u'Campos Azules': 'wind',\n u'Canalete': 'unknown',\n u'Cariblanco': 'hydro',\n u'Carrillos': 'hydro',\n u'Ca\u00f1o Grande': 'hydro',\n u'Ca\u00f1o Grande III': 'hydro',\n u'Chiripa': 'wind',\n u'Chocosuelas': 'hydro',\n u'Chuc\u00e1s': 'hydro',\n u'Cubujuqu\u00ed': 'hydro',\n u'Daniel Guti\u00e9rrez': 'hydro',\n u'Dengo': 'hydro',\n u'Don Pedro': 'hydro',\n u'Do\u00f1a Julia': 'hydro',\n u'Echandi': 'hydro',\n u'El Angel': 'hydro',\n u'El Angel Ampliaci\u00f3n': 'hydro',\n u'El Embalse': 'hydro',\n u'El General': 'hydro',\n u'El Viejo': 'biomass',\n u'Garabito': 'oil',\n u'Garita': 'hydro',\n u'Gu\u00e1piles': 'oil',\n u'Hidrozarcas': 'hydro',\n u'La Esperanza (CoopeL)': 'hydro',\n u'La Joya': 'hydro',\n u'Los Negros': 'hydro',\n u'Los Negros II': 'hydro',\n u'Los Santos': 'wind',\n u'MOVASA': 'wind',\n u'Matamoros': 'unknown',\n u'Miravalles I': 'geothermal',\n u'Miravalles II': 'geothermal',\n u'Miravalles III': 'geothermal',\n u'Miravalles V': 'geothermal',\n u'Mo\u00edn I': 'oil',\n u'Mo\u00edn II': 'oil',\n u'Mo\u00edn III': 'oil',\n u'Oros\u00ed': 'wind',\n u'Orotina': 'unknown',\n u'Otros': 'unknown',\n u'PE Mogote': 'wind',\n u'PEG': 'wind',\n u'Pailas': 'geothermal',\n u'Parque Solar Juanilama': 'solar',\n u'Parque Solar Miravalles': 'solar',\n u'Pe\u00f1as Blancas': 'hydro',\n u'Pirr\u00eds': 'hydro',\n u'Plantas E\u00f3licas': 'wind',\n u'Platanar': 'hydro',\n u'Pocosol': 'hydro',\n u'Po\u00e1s I y II': 'hydro',\n u'Reventaz\u00f3n': 'hydro',\n u'R\u00edo Lajas': 'hydro',\n u'R\u00edo Macho': 'hydro',\n u'San Antonio': 'oil',\n u'San Lorenzo (C)': 'hydro',\n u'Sandillal': 'hydro',\n u'Suerkata': 'hydro',\n u'Taboga': 'biomass',\n u'Tacares': 'hydro',\n u'Tejona': 'wind',\n u'Tilawind': 'wind',\n u'Torito': 'hydro',\n u'Toro I': 'hydro',\n u'Toro II': 'hydro',\n u'Toro III': 'hydro',\n u'Tuis (JASEC)': 'hydro',\n u'Valle Central': 'wind',\n u'Vara Blanca': 'hydro',\n u'Ventanas-Garita': 'hydro',\n u'Vientos de La Perla': 'wind',\n u'Vientos de Miramar': 'wind',\n u'Vientos del Este': 'wind',\n u'Volc\u00e1n': 'hydro',\n}\n\nCHARACTERISTIC_NAME = 'Angostura'\n\n\ndef empty_record(zone_key):\n return {\n 'zoneKey': zone_key,\n 'capacity': {},\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': 0.0,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {},\n 'source': 'grupoice.com'\n }\n\n\ndef df_to_data(zone_key, day, df, logger):\n df = df.dropna(axis=1, how='any')\n # Check for empty dataframe\n if df.shape == (1, 1):\n return []\n df = 
df.drop(['Intercambio Sur', 'Intercambio Norte', 'Total'], errors='ignore')\n df = df.iloc[:, :-1]\n\n results = []\n unknown_plants = set()\n hour = 0\n for column in df:\n data = empty_record(zone_key)\n data_time = day.replace(hour=hour, minute=0, second=0, microsecond=0).datetime\n for index, value in df[column].items():\n source = POWER_PLANTS.get(index)\n if not source:\n source = 'unknown'\n unknown_plants.add(index)\n data['datetime'] = data_time\n data['production'][source] += max(0.0, value)\n hour += 1\n results.append(data)\n\n for plant in unknown_plants:\n logger.warning('{} is not mapped to generation type'.format(plant),\n extra={'key': zone_key})\n\n return results\n\n\ndef fetch_production(zone_key='CR', session=None,\n target_datetime=None, logger=logging.getLogger(__name__)):\n # ensure we have an arrow object. if no target_datetime is specified, this defaults to now.\n target_datetime = arrow.get(target_datetime).to(TIMEZONE)\n\n if target_datetime < arrow.get('2012-07-01'):\n # data availability limit found by manual trial and error\n logger.error('CR API does not provide data before 2012-07-01, '\n '{} was requested'.format(target_datetime),\n extra={\"key\": zone_key})\n return None\n\n # Do not use existing session as some amount of cache is taking place\n r = requests.session()\n url = 'https://appcenter.grupoice.com/CenceWeb/CencePosdespachoNacional.jsf'\n response = r.get(url)\n\n soup = BeautifulSoup(response.text, 'html.parser')\n jsf_view_state = soup.select('#javax.faces.ViewState')[0]['value']\n\n data = [\n ('formPosdespacho', 'formPosdespacho'),\n ('formPosdespacho:txtFechaInicio_input', target_datetime.format(DATE_FORMAT)),\n ('formPosdespacho:pickFecha', ''),\n ('formPosdespacho:j_idt60_selection', ''),\n ('formPosdespacho:j_idt60_scrollState', '0,1915'),\n ('javax.faces.ViewState', jsf_view_state),\n ]\n response = r.post(url, cookies={}, data=data)\n\n # tell pandas which table to use by providing CHARACTERISTIC_NAME\n df = pd.read_html(response.text, match=CHARACTERISTIC_NAME, skiprows=1, index_col=0)[0]\n\n results = df_to_data(zone_key, target_datetime, df, logger)\n\n return results\n\n\ndef fetch_exchange(zone_key1='CR', zone_key2='NI', session=None, target_datetime=None, logger=None):\n \"\"\"Requests the last known power exchange (in MW) between two regions\n\n Arguments:\n zone_key1 -- the first country code\n zone_key2 -- the second country code; order of the two codes in params doesn't matter\n session (optional) -- request session passed in order to re-use an existing session\n\n Return:\n A dictionary in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n\n where net flow is from DK into NO\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n df = pd.read_csv('http://www.enteoperador.org/newsite/flash/data.csv', index_col=False)\n\n if sorted_zone_keys == 'CR->NI':\n flow = df['NICR'][0]\n elif sorted_zone_keys == 'CR->PA':\n flow = -1 * df['CRPA'][0]\n else:\n raise NotImplementedError('This exchange pair is not implemented')\n\n data = {\n 'datetime': arrow.now(TIMEZONE).datetime,\n 'sortedZoneKeys': sorted_zone_keys,\n 'netFlow': flow,\n 'source': 'enteoperador.org'\n }\n\n return data\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n from pprint import 
pprint\n\n print('fetch_production() ->')\n pprint(fetch_production())\n\n print('fetch_production(target_datetime=arrow.get(\"2018-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get('2018-03-13T12:00Z')))\n\n # this should work\n print('fetch_production(target_datetime=arrow.get(\"2013-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get('2013-03-13T12:00Z')))\n\n # this should return None\n print('fetch_production(target_datetime=arrow.get(\"2007-03-13T12:00Z\") ->')\n pprint(fetch_production(target_datetime=arrow.get('2007-03-13T12:00Z')))\n\n print('fetch_exchange() ->')\n print(fetch_exchange())\n", "path": "parsers/CR.py"}]} | 3,783 | 408 |
gh_patches_debug_33518 | rasdani/github-patches | git_diff | qtile__qtile-1696 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
python compatibility about timezone parameter for widget.Clock
# Issue description
The following widget configuration doesn't work for python 3.8.2:
```
widget.Clock( format="%H:%M:%S", timezone="Asia/Taipei")
```
I made a workaround for this:
```
from dateutil.tz import *
widget.Clock( format="%H:%M:%S", timezone=gettz("Asia/Taipei"))
```
This error is related to the code snippets in `libqtile/widget/clock.py`:
```
def poll(self):
if self.timezone:
now = datetime.now(timezone.utc).astimezone(self.timezone)
else:
now = datetime.now(timezone.utc).astimezone()
return (now + self.DELTA).strftime(self.format)
```
It seems Python 3.6+ has a compatibility issue with the timezone parameter: the standard library doesn't resolve timezone names like "Asia/Tokyo" or "Europe/Warsaw" on its own. For now I include `dateutil` to work around the error.
# Qtile version
qtile 0.15.1-1 (ArchLinux)
--- END ISSUE ---
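The shape of a fix that keeps plain string timezones working can be sketched as follows: when the configured value is a string, try pytz first and fall back to `dateutil.tz.gettz` (the workaround above) when only python-dateutil is available. This is a standalone sketch rather than the widget's actual code; the `resolve_timezone` helper is invented for illustration.
```python
import sys

try:
    import pytz
except ImportError:
    pass

try:
    import dateutil.tz
except ImportError:
    pass


def resolve_timezone(tz):
    """Turn a name like "Asia/Taipei" into a tzinfo; pass tzinfo/None through."""
    if not isinstance(tz, str):
        return tz                      # already a tzinfo object, or None
    if 'pytz' in sys.modules:
        return pytz.timezone(tz)
    if 'dateutil.tz' in sys.modules:
        return dateutil.tz.gettz(tz)
    return None                        # no lookup library available
```
With something along these lines inside the widget, a config such as `widget.Clock(format="%H:%M:%S", timezone="Asia/Taipei")` would work whichever of the two libraries happens to be installed.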
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/clock.py`
Content:
```
1 # Copyright (c) 2010 Aldo Cortesi
2 # Copyright (c) 2012 Andrew Grigorev
3 # Copyright (c) 2014 Sean Vig
4 # Copyright (c) 2014 Tycho Andersen
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
22 # SOFTWARE.
23
24 import sys
25 import time
26 from datetime import datetime, timedelta, timezone
27
28 from libqtile.log_utils import logger
29 from libqtile.widget import base
30
31 try:
32 import pytz
33 except ImportError:
34 pass
35
36
37 class Clock(base.InLoopPollText):
38 """A simple but flexible text-based clock"""
39 orientations = base.ORIENTATION_HORIZONTAL
40 defaults = [
41 ('format', '%H:%M', 'A Python datetime format string'),
42 ('update_interval', 1., 'Update interval for the clock'),
43 ('timezone', None, 'The timezone to use for this clock, either as'
44 ' string if pytz is installed (e.g. "US/Central" or anything in'
45 ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'
46 ' None means the system local timezone and is the default.')
47 ]
48 DELTA = timedelta(seconds=0.5)
49
50 def __init__(self, **config):
51 base.InLoopPollText.__init__(self, **config)
52 self.add_defaults(Clock.defaults)
53 if isinstance(self.timezone, str):
54 if "pytz" in sys.modules:
55 self.timezone = pytz.timezone(self.timezone)
56 else:
57 logger.warning('Clock widget can not infer its timezone from a'
58 ' string without the pytz library. Install pytz'
59 ' or give it a datetime.tzinfo instance.')
60 if self.timezone is None:
61 logger.info('Defaulting to the system local timezone.')
62
63 def tick(self):
64 self.update(self.poll())
65 return self.update_interval - time.time() % self.update_interval
66
67 # adding .5 to get a proper seconds value because glib could
68 # theoreticaly call our method too early and we could get something
69 # like (x-1).999 instead of x.000
70 def poll(self):
71 if self.timezone:
72 now = datetime.now(timezone.utc).astimezone(self.timezone)
73 else:
74 now = datetime.now(timezone.utc).astimezone()
75 return (now + self.DELTA).strftime(self.format)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/widget/clock.py b/libqtile/widget/clock.py
--- a/libqtile/widget/clock.py
+++ b/libqtile/widget/clock.py
@@ -33,6 +33,11 @@
except ImportError:
pass
+try:
+ import dateutil.tz
+except ImportError:
+ pass
+
class Clock(base.InLoopPollText):
"""A simple but flexible text-based clock"""
@@ -41,9 +46,10 @@
('format', '%H:%M', 'A Python datetime format string'),
('update_interval', 1., 'Update interval for the clock'),
('timezone', None, 'The timezone to use for this clock, either as'
- ' string if pytz is installed (e.g. "US/Central" or anything in'
- ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'
- ' None means the system local timezone and is the default.')
+ ' string if pytz or dateutil is installed (e.g. "US/Central" or'
+ ' anything in /usr/share/zoneinfo), or as tzinfo (e.g.'
+ ' datetime.timezone.utc). None means the system local timezone and is'
+ ' the default.')
]
DELTA = timedelta(seconds=0.5)
@@ -53,10 +59,13 @@
if isinstance(self.timezone, str):
if "pytz" in sys.modules:
self.timezone = pytz.timezone(self.timezone)
+ elif "dateutil" in sys.modules:
+ self.timezone = dateutil.tz.gettz(self.timezone)
else:
logger.warning('Clock widget can not infer its timezone from a'
- ' string without the pytz library. Install pytz'
- ' or give it a datetime.tzinfo instance.')
+ ' string without pytz or dateutil. Install one'
+ ' of these libraries, or give it a'
+ ' datetime.tzinfo instance.')
if self.timezone is None:
logger.info('Defaulting to the system local timezone.')
| {"golden_diff": "diff --git a/libqtile/widget/clock.py b/libqtile/widget/clock.py\n--- a/libqtile/widget/clock.py\n+++ b/libqtile/widget/clock.py\n@@ -33,6 +33,11 @@\n except ImportError:\n pass\n \n+try:\n+ import dateutil.tz\n+except ImportError:\n+ pass\n+\n \n class Clock(base.InLoopPollText):\n \"\"\"A simple but flexible text-based clock\"\"\"\n@@ -41,9 +46,10 @@\n ('format', '%H:%M', 'A Python datetime format string'),\n ('update_interval', 1., 'Update interval for the clock'),\n ('timezone', None, 'The timezone to use for this clock, either as'\n- ' string if pytz is installed (e.g. \"US/Central\" or anything in'\n- ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'\n- ' None means the system local timezone and is the default.')\n+ ' string if pytz or dateutil is installed (e.g. \"US/Central\" or'\n+ ' anything in /usr/share/zoneinfo), or as tzinfo (e.g.'\n+ ' datetime.timezone.utc). None means the system local timezone and is'\n+ ' the default.')\n ]\n DELTA = timedelta(seconds=0.5)\n \n@@ -53,10 +59,13 @@\n if isinstance(self.timezone, str):\n if \"pytz\" in sys.modules:\n self.timezone = pytz.timezone(self.timezone)\n+ elif \"dateutil\" in sys.modules:\n+ self.timezone = dateutil.tz.gettz(self.timezone)\n else:\n logger.warning('Clock widget can not infer its timezone from a'\n- ' string without the pytz library. Install pytz'\n- ' or give it a datetime.tzinfo instance.')\n+ ' string without pytz or dateutil. Install one'\n+ ' of these libraries, or give it a'\n+ ' datetime.tzinfo instance.')\n if self.timezone is None:\n logger.info('Defaulting to the system local timezone.')\n", "issue": "python compatibility about timezone parameter for widget.Clock\n# Issue description\r\n\r\nThe following widget configuration doesn't work for python 3.8.2:\r\n```\r\nwidget.Clock( format=\"%H:%M:%S\", timezone=\"Asia/Taipei\")\r\n```\r\n\r\nI made a workaround for this:\r\n```\r\nfrom dateutil.tz import *\r\nwidget.Clock( format=\"%H:%M:%S\", timezone=gettz(\"Asia/Taipei\"))\r\n```\r\n\r\nThis error is related to the code snippets in `libqtile/widget/clock.py`:\r\n```\r\n def poll(self):\r\n if self.timezone:\r\n now = datetime.now(timezone.utc).astimezone(self.timezone)\r\n else:\r\n now = datetime.now(timezone.utc).astimezone()\r\n return (now + self.DELTA).strftime(self.format)\r\n```\r\n\r\nIt seems python 3.6+ has compatibility issue of timezone parameters, and native python doesn't support timezone locale like \"Asia/Tokyo\",\"Europe/Warsaw\", ... or so. 
Currently I include `dateutil` to bypass the syntax error\r\n\r\n\r\n# Qtile version\r\nqtile 0.15.1-1 (ArchLinux)\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2012 Andrew Grigorev\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport sys\nimport time\nfrom datetime import datetime, timedelta, timezone\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\ntry:\n import pytz\nexcept ImportError:\n pass\n\n\nclass Clock(base.InLoopPollText):\n \"\"\"A simple but flexible text-based clock\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('format', '%H:%M', 'A Python datetime format string'),\n ('update_interval', 1., 'Update interval for the clock'),\n ('timezone', None, 'The timezone to use for this clock, either as'\n ' string if pytz is installed (e.g. \"US/Central\" or anything in'\n ' /usr/share/zoneinfo), or as tzinfo (e.g. datetime.timezone.utc).'\n ' None means the system local timezone and is the default.')\n ]\n DELTA = timedelta(seconds=0.5)\n\n def __init__(self, **config):\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(Clock.defaults)\n if isinstance(self.timezone, str):\n if \"pytz\" in sys.modules:\n self.timezone = pytz.timezone(self.timezone)\n else:\n logger.warning('Clock widget can not infer its timezone from a'\n ' string without the pytz library. 
Install pytz'\n ' or give it a datetime.tzinfo instance.')\n if self.timezone is None:\n logger.info('Defaulting to the system local timezone.')\n\n def tick(self):\n self.update(self.poll())\n return self.update_interval - time.time() % self.update_interval\n\n # adding .5 to get a proper seconds value because glib could\n # theoreticaly call our method too early and we could get something\n # like (x-1).999 instead of x.000\n def poll(self):\n if self.timezone:\n now = datetime.now(timezone.utc).astimezone(self.timezone)\n else:\n now = datetime.now(timezone.utc).astimezone()\n return (now + self.DELTA).strftime(self.format)\n", "path": "libqtile/widget/clock.py"}], "after_files": [{"content": "# Copyright (c) 2010 Aldo Cortesi\n# Copyright (c) 2012 Andrew Grigorev\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Tycho Andersen\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport sys\nimport time\nfrom datetime import datetime, timedelta, timezone\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\ntry:\n import pytz\nexcept ImportError:\n pass\n\ntry:\n import dateutil.tz\nexcept ImportError:\n pass\n\n\nclass Clock(base.InLoopPollText):\n \"\"\"A simple but flexible text-based clock\"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('format', '%H:%M', 'A Python datetime format string'),\n ('update_interval', 1., 'Update interval for the clock'),\n ('timezone', None, 'The timezone to use for this clock, either as'\n ' string if pytz or dateutil is installed (e.g. \"US/Central\" or'\n ' anything in /usr/share/zoneinfo), or as tzinfo (e.g.'\n ' datetime.timezone.utc). None means the system local timezone and is'\n ' the default.')\n ]\n DELTA = timedelta(seconds=0.5)\n\n def __init__(self, **config):\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(Clock.defaults)\n if isinstance(self.timezone, str):\n if \"pytz\" in sys.modules:\n self.timezone = pytz.timezone(self.timezone)\n elif \"dateutil\" in sys.modules:\n self.timezone = dateutil.tz.gettz(self.timezone)\n else:\n logger.warning('Clock widget can not infer its timezone from a'\n ' string without pytz or dateutil. 
Install one'\n ' of these libraries, or give it a'\n ' datetime.tzinfo instance.')\n if self.timezone is None:\n logger.info('Defaulting to the system local timezone.')\n\n def tick(self):\n self.update(self.poll())\n return self.update_interval - time.time() % self.update_interval\n\n # adding .5 to get a proper seconds value because glib could\n # theoreticaly call our method too early and we could get something\n # like (x-1).999 instead of x.000\n def poll(self):\n if self.timezone:\n now = datetime.now(timezone.utc).astimezone(self.timezone)\n else:\n now = datetime.now(timezone.utc).astimezone()\n return (now + self.DELTA).strftime(self.format)\n", "path": "libqtile/widget/clock.py"}]} | 1,376 | 463 |
gh_patches_debug_401 | rasdani/github-patches | git_diff | getmoto__moto-698 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to create a key with a trailing slash using OrdinaryCallingFormat
When using OrdinaryCallingFormat, it's not possible to create a key ending with a slash (e.g. when mimicking directory creation), since this is stripped off when parsing the key name. I can't comment on S3, but this is at least different behaviour from Ceph.
For example, the below fails as is, but works if the connection uses SubdomainCallingFormat instead.
```
import boto
import moto
import unittest
class TestCreatingKeyEndingWithSlash(unittest.TestCase):
@moto.mock_s3
def test_ordinary_calling_format(self):
bucket_name = 'testbucket'
key_name = 'key_ending_with_slash/'
conn = boto.connect_s3('access_key', 'secret_key',
calling_format=boto.s3.connection.OrdinaryCallingFormat())
bucket = conn.create_bucket(bucket_name)
key = boto.s3.key.Key(bucket)
key.key = key_name
key.set_contents_from_string('')
self.assertIn(key_name, [k.name for k in bucket.get_all_keys()])
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/s3bucket_path/utils.py`
Content:
```
1 from __future__ import unicode_literals
2 from six.moves.urllib.parse import urlparse
3
4
5 def bucket_name_from_url(url):
6 pth = urlparse(url).path.lstrip("/")
7
8 l = pth.lstrip("/").split("/")
9 if len(l) == 0 or l[0] == "":
10 return None
11 return l[0]
12
13
14 def parse_key_name(path):
15 return "/".join(path.rstrip("/").split("/")[2:])
16
17
18 def is_delete_keys(request, path, bucket_name):
19 return (
20 path == u'/' + bucket_name + u'/?delete' or
21 path == u'/' + bucket_name + u'?delete' or
22 (path == u'/' + bucket_name and
23 getattr(request, "query_string", "") == "delete")
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/moto/s3bucket_path/utils.py b/moto/s3bucket_path/utils.py
--- a/moto/s3bucket_path/utils.py
+++ b/moto/s3bucket_path/utils.py
@@ -12,7 +12,7 @@
def parse_key_name(path):
- return "/".join(path.rstrip("/").split("/")[2:])
+ return "/".join(path.split("/")[2:])
def is_delete_keys(request, path, bucket_name):
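To make the effect of the patch concrete, here is the parsing of a path-style request for the key from the issue, worked by hand:
```python
path = "/testbucket/key_ending_with_slash/"

# before the patch: rstrip("/") drops the trailing slash, so the key name is mangled
"/".join(path.rstrip("/").split("/")[2:])  # -> 'key_ending_with_slash'

# after the patch: the trailing slash is preserved
"/".join(path.split("/")[2:])              # -> 'key_ending_with_slash/'
```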
| {"golden_diff": "diff --git a/moto/s3bucket_path/utils.py b/moto/s3bucket_path/utils.py\n--- a/moto/s3bucket_path/utils.py\n+++ b/moto/s3bucket_path/utils.py\n@@ -12,7 +12,7 @@\n \n \n def parse_key_name(path):\n- return \"/\".join(path.rstrip(\"/\").split(\"/\")[2:])\n+ return \"/\".join(path.split(\"/\")[2:])\n \n \n def is_delete_keys(request, path, bucket_name):\n", "issue": "Unable to create a key with a trailing slash using OrdinaryCallingFormat\nWhen using OrdinaryCallingFormat, it's not possible to create a key ending with a slash (e.g. when mimicking directory creation), since this is stripped off when parsing the key name. I can't comment on S3, but this is at least different behaviour from Ceph.\n\nFor example, the below fails as is, but works if the connection uses SubdomainCallingFormat instead.\n\n```\nimport boto\nimport moto\nimport unittest\n\n\nclass TestCreatingKeyEndingWithSlash(unittest.TestCase):\n\n @moto.mock_s3\n def test_ordinary_calling_format(self):\n bucket_name = 'testbucket'\n key_name = 'key_ending_with_slash/'\n\n conn = boto.connect_s3('access_key', 'secret_key',\n calling_format=boto.s3.connection.OrdinaryCallingFormat())\n bucket = conn.create_bucket(bucket_name)\n\n key = boto.s3.key.Key(bucket)\n key.key = key_name\n key.set_contents_from_string('')\n\n self.assertIn(key_name, [k.name for k in bucket.get_all_keys()])\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom six.moves.urllib.parse import urlparse\n\n\ndef bucket_name_from_url(url):\n pth = urlparse(url).path.lstrip(\"/\")\n\n l = pth.lstrip(\"/\").split(\"/\")\n if len(l) == 0 or l[0] == \"\":\n return None\n return l[0]\n\n\ndef parse_key_name(path):\n return \"/\".join(path.rstrip(\"/\").split(\"/\")[2:])\n\n\ndef is_delete_keys(request, path, bucket_name):\n return (\n path == u'/' + bucket_name + u'/?delete' or\n path == u'/' + bucket_name + u'?delete' or\n (path == u'/' + bucket_name and\n getattr(request, \"query_string\", \"\") == \"delete\")\n )\n", "path": "moto/s3bucket_path/utils.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom six.moves.urllib.parse import urlparse\n\n\ndef bucket_name_from_url(url):\n pth = urlparse(url).path.lstrip(\"/\")\n\n l = pth.lstrip(\"/\").split(\"/\")\n if len(l) == 0 or l[0] == \"\":\n return None\n return l[0]\n\n\ndef parse_key_name(path):\n return \"/\".join(path.split(\"/\")[2:])\n\n\ndef is_delete_keys(request, path, bucket_name):\n return (\n path == u'/' + bucket_name + u'/?delete' or\n path == u'/' + bucket_name + u'?delete' or\n (path == u'/' + bucket_name and\n getattr(request, \"query_string\", \"\") == \"delete\")\n )\n", "path": "moto/s3bucket_path/utils.py"}]} | 709 | 102 |
gh_patches_debug_22508 | rasdani/github-patches | git_diff | getredash__redash-3304 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JQL: add support for fetching all the results by way of pagination
### Issue Summary
The JQL integration returns only the first 50 issues, which is the default page size of the JIRA REST API. A mechanism should be implemented where the query is executed multiple times to fetch the remaining issues from JIRA.
### Steps to Reproduce
1. Configure JIRA integration
2. Create any JQL query which returns more than 50 issues
3. Execute the query
Expected: More than 50 issues returned.
Actual: Only 50 issues returned.
### Technical details:
* Redash Version: 0.12.0+b2449
* Browser/OS: Ubuntu 16.4
* How did you install Redash: Provisioning script from https://redash.io/help-onpremise/setup/setting-up-redash-instance.html
--- END ISSUE ---
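For reference, a minimal, generic sketch of the pagination mechanism the issue asks for, driven by the `startAt`, `maxResults`, and `total` fields of JIRA's `/rest/api/2/search` response (the function and its parameters are illustrative, not existing Redash code):
```python
import requests


def search_all_issues(base_url, jql, auth, page_size=50):
    """Page through JIRA search results until every matching issue is fetched."""
    issues, start_at = [], 0
    while True:
        response = requests.get(
            "{}/rest/api/2/search".format(base_url),
            params={"jql": jql, "startAt": start_at, "maxResults": page_size},
            auth=auth,
        )
        response.raise_for_status()
        data = response.json()
        issues.extend(data["issues"])
        start_at = data["startAt"] + data["maxResults"]
        if start_at >= data["total"]:
            return issues
```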
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/query_runner/jql.py`
Content:
```
1 import re
2 from collections import OrderedDict
3
4 from redash.query_runner import *
5 from redash.utils import json_dumps, json_loads
6
7
8 # TODO: make this more general and move into __init__.py
9 class ResultSet(object):
10 def __init__(self):
11 self.columns = OrderedDict()
12 self.rows = []
13
14 def add_row(self, row):
15 for key in row.keys():
16 self.add_column(key)
17
18 self.rows.append(row)
19
20 def add_column(self, column, column_type=TYPE_STRING):
21 if column not in self.columns:
22 self.columns[column] = {'name': column, 'type': column_type, 'friendly_name': column}
23
24 def to_json(self):
25 return json_dumps({'rows': self.rows, 'columns': self.columns.values()})
26
27
28 def parse_issue(issue, field_mapping):
29 result = OrderedDict()
30 result['key'] = issue['key']
31
32 for k, v in issue['fields'].iteritems():#
33 output_name = field_mapping.get_output_field_name(k)
34 member_names = field_mapping.get_dict_members(k)
35
36 if isinstance(v, dict):
37 if len(member_names) > 0:
38 # if field mapping with dict member mappings defined get value of each member
39 for member_name in member_names:
40 if member_name in v:
41 result[field_mapping.get_dict_output_field_name(k, member_name)] = v[member_name]
42
43 else:
44 # these special mapping rules are kept for backwards compatibility
45 if 'key' in v:
46 result['{}_key'.format(output_name)] = v['key']
47 if 'name' in v:
48 result['{}_name'.format(output_name)] = v['name']
49
50 if k in v:
51 result[output_name] = v[k]
52
53 if 'watchCount' in v:
54 result[output_name] = v['watchCount']
55
56 elif isinstance(v, list):
57 if len(member_names) > 0:
58 # if field mapping with dict member mappings defined get value of each member
59 for member_name in member_names:
60 listValues = []
61 for listItem in v:
62 if isinstance(listItem, dict):
63 if member_name in listItem:
64 listValues.append(listItem[member_name])
65 if len(listValues) > 0:
66 result[field_mapping.get_dict_output_field_name(k, member_name)] = ','.join(listValues)
67
68 else:
69 # otherwise support list values only for non-dict items
70 listValues = []
71 for listItem in v:
72 if not isinstance(listItem, dict):
73 listValues.append(listItem)
74 if len(listValues) > 0:
75 result[output_name] = ','.join(listValues)
76
77 else:
78 result[output_name] = v
79
80 return result
81
82
83 def parse_issues(data, field_mapping):
84 results = ResultSet()
85
86 for issue in data['issues']:
87 results.add_row(parse_issue(issue, field_mapping))
88
89 return results
90
91
92 def parse_count(data):
93 results = ResultSet()
94 results.add_row({'count': data['total']})
95 return results
96
97
98 class FieldMapping:
99
100 def __init__(cls, query_field_mapping):
101 cls.mapping = []
102 for k, v in query_field_mapping.iteritems():
103 field_name = k
104 member_name = None
105
106 # check for member name contained in field name
107 member_parser = re.search('(\w+)\.(\w+)', k)
108 if (member_parser):
109 field_name = member_parser.group(1)
110 member_name = member_parser.group(2)
111
112 cls.mapping.append({
113 'field_name': field_name,
114 'member_name': member_name,
115 'output_field_name': v
116 })
117
118 def get_output_field_name(cls,field_name):
119 for item in cls.mapping:
120 if item['field_name'] == field_name and not item['member_name']:
121 return item['output_field_name']
122 return field_name
123
124 def get_dict_members(cls,field_name):
125 member_names = []
126 for item in cls.mapping:
127 if item['field_name'] == field_name and item['member_name']:
128 member_names.append(item['member_name'])
129 return member_names
130
131 def get_dict_output_field_name(cls,field_name, member_name):
132 for item in cls.mapping:
133 if item['field_name'] == field_name and item['member_name'] == member_name:
134 return item['output_field_name']
135 return None
136
137
138 class JiraJQL(BaseHTTPQueryRunner):
139 noop_query = '{"queryType": "count"}'
140 response_error = "JIRA returned unexpected status code"
141 requires_authentication = True
142 url_title = 'JIRA URL'
143 username_title = 'Username'
144 password_title = 'Password'
145
146 @classmethod
147 def name(cls):
148 return "JIRA (JQL)"
149
150 @classmethod
151 def annotate_query(cls):
152 return False
153
154 def __init__(self, configuration):
155 super(JiraJQL, self).__init__(configuration)
156 self.syntax = 'json'
157
158 def run_query(self, query, user):
159 jql_url = '{}/rest/api/2/search'.format(self.configuration["url"])
160
161 try:
162 query = json_loads(query)
163 query_type = query.pop('queryType', 'select')
164 field_mapping = FieldMapping(query.pop('fieldMapping', {}))
165
166 if query_type == 'count':
167 query['maxResults'] = 1
168 query['fields'] = ''
169 else:
170 query['maxResults'] = query.get('maxResults', 1000)
171
172 response, error = self.get_response(jql_url, params=query)
173 if error is not None:
174 return None, error
175
176 data = response.json()
177
178 if query_type == 'count':
179 results = parse_count(data)
180 else:
181 results = parse_issues(data, field_mapping)
182
183 return results.to_json(), None
184 except KeyboardInterrupt:
185 return None, "Query cancelled by user."
186
187 register(JiraJQL)
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/query_runner/jql.py b/redash/query_runner/jql.py
--- a/redash/query_runner/jql.py
+++ b/redash/query_runner/jql.py
@@ -24,6 +24,8 @@
def to_json(self):
return json_dumps({'rows': self.rows, 'columns': self.columns.values()})
+ def merge(self, set):
+ self.rows = self.rows + set.rows
def parse_issue(issue, field_mapping):
result = OrderedDict()
@@ -179,6 +181,19 @@
results = parse_count(data)
else:
results = parse_issues(data, field_mapping)
+ index = data['startAt'] + data['maxResults']
+
+ while data['total'] > index:
+ query['startAt'] = index
+ response, error = self.get_response(jql_url, params=query)
+ if error is not None:
+ return None, error
+
+ data = response.json()
+ index = data['startAt'] + data['maxResults']
+
+ addl_results = parse_issues(data, field_mapping)
+ results.merge(addl_results)
return results.to_json(), None
except KeyboardInterrupt:
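As a quick hand check of the loop bounds in this patch, assume the server caps pages at 50 results (`maxResults = 50`) and the query matches `total = 120` issues:
```python
# request 1: startAt =   0 -> issues   0-49,  index = 0   + 50 = 50   (120 > 50,  continue)
# request 2: startAt =  50 -> issues  50-99,  index = 50  + 50 = 100  (120 > 100, continue)
# request 3: startAt = 100 -> issues 100-119, index = 100 + 50 = 150  (120 > 150 is false, stop)
```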
| {"golden_diff": "diff --git a/redash/query_runner/jql.py b/redash/query_runner/jql.py\n--- a/redash/query_runner/jql.py\n+++ b/redash/query_runner/jql.py\n@@ -24,6 +24,8 @@\n def to_json(self):\n return json_dumps({'rows': self.rows, 'columns': self.columns.values()})\n \n+ def merge(self, set):\n+ self.rows = self.rows + set.rows\n \n def parse_issue(issue, field_mapping):\n result = OrderedDict()\n@@ -179,6 +181,19 @@\n results = parse_count(data)\n else:\n results = parse_issues(data, field_mapping)\n+ index = data['startAt'] + data['maxResults']\n+\n+ while data['total'] > index:\n+ query['startAt'] = index\n+ response, error = self.get_response(jql_url, params=query)\n+ if error is not None:\n+ return None, error\n+\n+ data = response.json()\n+ index = data['startAt'] + data['maxResults']\n+\n+ addl_results = parse_issues(data, field_mapping)\n+ results.merge(addl_results)\n \n return results.to_json(), None\n except KeyboardInterrupt:\n", "issue": "JQL: add support for fetching all the results by way of pagination\n### Issue Summary\r\n\r\nThe JQL integration returns only the first 50 issues. This is the default number of issues returned via JIRA REST API. A mechanism should be implemented where a query is executed multiple times to fetch subsequent issues from JIRA.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Configure JIRA integration\r\n2. Create any JQL query which returns more than 50 issues\r\n3. Execute the query\r\n\r\nExpected: More than 50 issues returned.\r\nActual: Only 50 issues returned.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 0.12.0+b2449\r\n* Browser/OS: Ubuntu 16.4\r\n* How did you install Redash: Provisioning script from https://redash.io/help-onpremise/setup/setting-up-redash-instance.html\nJQL: add support for fetching all the results by way of pagination\n### Issue Summary\r\n\r\nThe JQL integration returns only the first 50 issues. This is the default number of issues returned via JIRA REST API. A mechanism should be implemented where a query is executed multiple times to fetch subsequent issues from JIRA.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Configure JIRA integration\r\n2. Create any JQL query which returns more than 50 issues\r\n3. 
Execute the query\r\n\r\nExpected: More than 50 issues returned.\r\nActual: Only 50 issues returned.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 0.12.0+b2449\r\n* Browser/OS: Ubuntu 16.4\r\n* How did you install Redash: Provisioning script from https://redash.io/help-onpremise/setup/setting-up-redash-instance.html\n", "before_files": [{"content": "import re\nfrom collections import OrderedDict\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\n\n# TODO: make this more general and move into __init__.py\nclass ResultSet(object):\n def __init__(self):\n self.columns = OrderedDict()\n self.rows = []\n\n def add_row(self, row):\n for key in row.keys():\n self.add_column(key)\n\n self.rows.append(row)\n\n def add_column(self, column, column_type=TYPE_STRING):\n if column not in self.columns:\n self.columns[column] = {'name': column, 'type': column_type, 'friendly_name': column}\n\n def to_json(self):\n return json_dumps({'rows': self.rows, 'columns': self.columns.values()})\n\n\ndef parse_issue(issue, field_mapping):\n result = OrderedDict()\n result['key'] = issue['key']\n\n for k, v in issue['fields'].iteritems():#\n output_name = field_mapping.get_output_field_name(k)\n member_names = field_mapping.get_dict_members(k)\n\n if isinstance(v, dict):\n if len(member_names) > 0:\n # if field mapping with dict member mappings defined get value of each member\n for member_name in member_names:\n if member_name in v:\n result[field_mapping.get_dict_output_field_name(k, member_name)] = v[member_name]\n\n else:\n # these special mapping rules are kept for backwards compatibility\n if 'key' in v:\n result['{}_key'.format(output_name)] = v['key']\n if 'name' in v:\n result['{}_name'.format(output_name)] = v['name']\n\n if k in v:\n result[output_name] = v[k]\n\n if 'watchCount' in v:\n result[output_name] = v['watchCount']\n\n elif isinstance(v, list):\n if len(member_names) > 0:\n # if field mapping with dict member mappings defined get value of each member\n for member_name in member_names:\n listValues = []\n for listItem in v:\n if isinstance(listItem, dict):\n if member_name in listItem:\n listValues.append(listItem[member_name])\n if len(listValues) > 0:\n result[field_mapping.get_dict_output_field_name(k, member_name)] = ','.join(listValues)\n\n else:\n # otherwise support list values only for non-dict items\n listValues = []\n for listItem in v:\n if not isinstance(listItem, dict):\n listValues.append(listItem)\n if len(listValues) > 0:\n result[output_name] = ','.join(listValues)\n\n else:\n result[output_name] = v\n\n return result\n\n\ndef parse_issues(data, field_mapping):\n results = ResultSet()\n\n for issue in data['issues']:\n results.add_row(parse_issue(issue, field_mapping))\n\n return results\n\n\ndef parse_count(data):\n results = ResultSet()\n results.add_row({'count': data['total']})\n return results\n\n\nclass FieldMapping:\n\n def __init__(cls, query_field_mapping):\n cls.mapping = []\n for k, v in query_field_mapping.iteritems():\n field_name = k\n member_name = None\n\n # check for member name contained in field name\n member_parser = re.search('(\\w+)\\.(\\w+)', k)\n if (member_parser):\n field_name = member_parser.group(1)\n member_name = member_parser.group(2)\n\n cls.mapping.append({\n 'field_name': field_name,\n 'member_name': member_name,\n 'output_field_name': v\n })\n\n def get_output_field_name(cls,field_name):\n for item in cls.mapping:\n if item['field_name'] == field_name and not item['member_name']:\n return 
item['output_field_name']\n return field_name\n\n def get_dict_members(cls,field_name):\n member_names = []\n for item in cls.mapping:\n if item['field_name'] == field_name and item['member_name']:\n member_names.append(item['member_name'])\n return member_names\n\n def get_dict_output_field_name(cls,field_name, member_name):\n for item in cls.mapping:\n if item['field_name'] == field_name and item['member_name'] == member_name:\n return item['output_field_name']\n return None\n\n\nclass JiraJQL(BaseHTTPQueryRunner):\n noop_query = '{\"queryType\": \"count\"}'\n response_error = \"JIRA returned unexpected status code\"\n requires_authentication = True\n url_title = 'JIRA URL'\n username_title = 'Username'\n password_title = 'Password'\n\n @classmethod\n def name(cls):\n return \"JIRA (JQL)\"\n\n @classmethod\n def annotate_query(cls):\n return False\n\n def __init__(self, configuration):\n super(JiraJQL, self).__init__(configuration)\n self.syntax = 'json'\n\n def run_query(self, query, user):\n jql_url = '{}/rest/api/2/search'.format(self.configuration[\"url\"])\n\n try:\n query = json_loads(query)\n query_type = query.pop('queryType', 'select')\n field_mapping = FieldMapping(query.pop('fieldMapping', {}))\n\n if query_type == 'count':\n query['maxResults'] = 1\n query['fields'] = ''\n else:\n query['maxResults'] = query.get('maxResults', 1000)\n\n response, error = self.get_response(jql_url, params=query)\n if error is not None:\n return None, error\n\n data = response.json()\n\n if query_type == 'count':\n results = parse_count(data)\n else:\n results = parse_issues(data, field_mapping)\n\n return results.to_json(), None\n except KeyboardInterrupt:\n return None, \"Query cancelled by user.\"\n\nregister(JiraJQL)\n", "path": "redash/query_runner/jql.py"}], "after_files": [{"content": "import re\nfrom collections import OrderedDict\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\n\n# TODO: make this more general and move into __init__.py\nclass ResultSet(object):\n def __init__(self):\n self.columns = OrderedDict()\n self.rows = []\n\n def add_row(self, row):\n for key in row.keys():\n self.add_column(key)\n\n self.rows.append(row)\n\n def add_column(self, column, column_type=TYPE_STRING):\n if column not in self.columns:\n self.columns[column] = {'name': column, 'type': column_type, 'friendly_name': column}\n\n def to_json(self):\n return json_dumps({'rows': self.rows, 'columns': self.columns.values()})\n\n def merge(self, set):\n self.rows = self.rows + set.rows\n\ndef parse_issue(issue, field_mapping):\n result = OrderedDict()\n result['key'] = issue['key']\n\n for k, v in issue['fields'].iteritems():#\n output_name = field_mapping.get_output_field_name(k)\n member_names = field_mapping.get_dict_members(k)\n\n if isinstance(v, dict):\n if len(member_names) > 0:\n # if field mapping with dict member mappings defined get value of each member\n for member_name in member_names:\n if member_name in v:\n result[field_mapping.get_dict_output_field_name(k, member_name)] = v[member_name]\n\n else:\n # these special mapping rules are kept for backwards compatibility\n if 'key' in v:\n result['{}_key'.format(output_name)] = v['key']\n if 'name' in v:\n result['{}_name'.format(output_name)] = v['name']\n\n if k in v:\n result[output_name] = v[k]\n\n if 'watchCount' in v:\n result[output_name] = v['watchCount']\n\n elif isinstance(v, list):\n if len(member_names) > 0:\n # if field mapping with dict member mappings defined get value of each member\n for 
member_name in member_names:\n listValues = []\n for listItem in v:\n if isinstance(listItem, dict):\n if member_name in listItem:\n listValues.append(listItem[member_name])\n if len(listValues) > 0:\n result[field_mapping.get_dict_output_field_name(k, member_name)] = ','.join(listValues)\n\n else:\n # otherwise support list values only for non-dict items\n listValues = []\n for listItem in v:\n if not isinstance(listItem, dict):\n listValues.append(listItem)\n if len(listValues) > 0:\n result[output_name] = ','.join(listValues)\n\n else:\n result[output_name] = v\n\n return result\n\n\ndef parse_issues(data, field_mapping):\n results = ResultSet()\n\n for issue in data['issues']:\n results.add_row(parse_issue(issue, field_mapping))\n\n return results\n\n\ndef parse_count(data):\n results = ResultSet()\n results.add_row({'count': data['total']})\n return results\n\n\nclass FieldMapping:\n\n def __init__(cls, query_field_mapping):\n cls.mapping = []\n for k, v in query_field_mapping.iteritems():\n field_name = k\n member_name = None\n\n # check for member name contained in field name\n member_parser = re.search('(\\w+)\\.(\\w+)', k)\n if (member_parser):\n field_name = member_parser.group(1)\n member_name = member_parser.group(2)\n\n cls.mapping.append({\n 'field_name': field_name,\n 'member_name': member_name,\n 'output_field_name': v\n })\n\n def get_output_field_name(cls,field_name):\n for item in cls.mapping:\n if item['field_name'] == field_name and not item['member_name']:\n return item['output_field_name']\n return field_name\n\n def get_dict_members(cls,field_name):\n member_names = []\n for item in cls.mapping:\n if item['field_name'] == field_name and item['member_name']:\n member_names.append(item['member_name'])\n return member_names\n\n def get_dict_output_field_name(cls,field_name, member_name):\n for item in cls.mapping:\n if item['field_name'] == field_name and item['member_name'] == member_name:\n return item['output_field_name']\n return None\n\n\nclass JiraJQL(BaseHTTPQueryRunner):\n noop_query = '{\"queryType\": \"count\"}'\n response_error = \"JIRA returned unexpected status code\"\n requires_authentication = True\n url_title = 'JIRA URL'\n username_title = 'Username'\n password_title = 'Password'\n\n @classmethod\n def name(cls):\n return \"JIRA (JQL)\"\n\n @classmethod\n def annotate_query(cls):\n return False\n\n def __init__(self, configuration):\n super(JiraJQL, self).__init__(configuration)\n self.syntax = 'json'\n\n def run_query(self, query, user):\n jql_url = '{}/rest/api/2/search'.format(self.configuration[\"url\"])\n\n try:\n query = json_loads(query)\n query_type = query.pop('queryType', 'select')\n field_mapping = FieldMapping(query.pop('fieldMapping', {}))\n\n if query_type == 'count':\n query['maxResults'] = 1\n query['fields'] = ''\n else:\n query['maxResults'] = query.get('maxResults', 1000)\n\n response, error = self.get_response(jql_url, params=query)\n if error is not None:\n return None, error\n\n data = response.json()\n\n if query_type == 'count':\n results = parse_count(data)\n else:\n results = parse_issues(data, field_mapping)\n index = data['startAt'] + data['maxResults']\n\n while data['total'] > index:\n query['startAt'] = index\n response, error = self.get_response(jql_url, params=query)\n if error is not None:\n return None, error\n\n data = response.json()\n index = data['startAt'] + data['maxResults']\n\n addl_results = parse_issues(data, field_mapping)\n results.merge(addl_results)\n\n return results.to_json(), None\n except 
KeyboardInterrupt:\n return None, \"Query cancelled by user.\"\n\nregister(JiraJQL)\n", "path": "redash/query_runner/jql.py"}]} | 2,377 | 269 |
gh_patches_debug_32018 | rasdani/github-patches | git_diff | cal-itp__benefits-680 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add tests for enrollment API
Broken out from https://github.com/cal-itp/benefits/issues/413.
--- END ISSUE ---
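A minimal sketch of where such tests could start, exercising the response-parsing classes in `benefits/enrollment/api.py` (shown below). `StubResponse` is a hypothetical test double standing in for `requests.Response`, and the sketch assumes the package is importable in the test environment:
```python
from benefits.enrollment.api import AccessTokenResponse, GroupResponse


class StubResponse:
    """Test double exposing only the .json() method the response classes use."""

    def __init__(self, payload):
        self._payload = payload

    def json(self):
        return self._payload


def test_access_token_response_parses_token_and_expiry():
    stub = StubResponse({"access_token": "t0k3n", "token_type": "Bearer", "expires_in": 300})
    token = AccessTokenResponse(stub)
    assert token.access_token == "t0k3n"
    assert token.expiry is not None


def test_group_response_success():
    # GroupResponse treats the response body as the list of enrolled customer ids.
    group = GroupResponse(StubResponse(["customer-1"]), "customer-1")
    assert group.success


def test_group_response_duplicate_customer_counts_as_success():
    # The Group API reports an already-enrolled customer via an error body.
    error_body = {"errors": [{"detail": "Duplicate customer customer-1 in group"}]}
    group = GroupResponse(StubResponse(error_body), "customer-1", payload=["customer-1"])
    assert group.success
```
Tests of the `Client` HTTP flow would additionally need agency/payment-processor fixtures and a mocked transport, which is beyond this sketch.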
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/enrollment/api.py`
Content:
```
1 """
2 The enrollment application: Benefits Enrollment API implementation.
3 """
4 import logging
5 from tempfile import NamedTemporaryFile
6 import time
7
8 import requests
9
10
11 logger = logging.getLogger(__name__)
12
13
14 class ApiError(Exception):
15 """Error calling the enrollment APIs."""
16
17 pass
18
19
20 class AccessTokenResponse:
21 """Benefits Enrollment API Access Token response."""
22
23 def __init__(self, response):
24 logger.info("Read access token from response")
25
26 try:
27 payload = response.json()
28 except ValueError:
29 raise ApiError("Invalid response format")
30
31 self.access_token = payload.get("access_token")
32 self.token_type = payload.get("token_type")
33 self.expires_in = payload.get("expires_in")
34 if self.expires_in is not None:
35 logger.debug("Access token has expiry")
36 self.expiry = time.time() + self.expires_in
37 else:
38 logger.debug("Access token has no expiry")
39 self.expiry = None
40
41 logger.info("Access token successfully read from response")
42
43
44 class CustomerResponse:
45 """Benefits Enrollment Customer API response."""
46
47 def __init__(self, response):
48 logger.info("Read customer details from response")
49
50 try:
51 payload = response.json()
52 self.id = payload["id"]
53 except (KeyError, ValueError):
54 raise ApiError("Invalid response format")
55
56 if self.id is None:
57 raise ApiError("Invalid response format")
58
59 self.is_registered = str(payload.get("is_registered", "false")).lower() == "true"
60
61 logger.info("Customer details successfully read from response")
62
63
64 class GroupResponse:
65 """Benefits Enrollment Customer Group API response."""
66
67 def __init__(self, response, requested_id, payload=None):
68 if payload is None:
69 try:
70 payload = response.json()
71 except ValueError:
72 raise ApiError("Invalid response format")
73 else:
74 try:
75 # Group API uses an error response (500) to indicate that the customer already exists in the group (!!!)
76 # The error message should contain the customer ID we sent via payload and start with "Duplicate"
77 error = response.json()["errors"][0]
78 customer_id = payload[0]
79 detail = error["detail"]
80
81 failure = (
82 customer_id is None,
83 detail is None,
84 customer_id not in detail,
85 customer_id in detail and not detail.startswith("Duplicate"),
86 )
87
88 if any(failure):
89 raise ApiError("Invalid response format")
90 except (KeyError, ValueError):
91 raise ApiError("Invalid response format")
92
93 self.customer_ids = list(payload)
94 self.updated_customer_id = self.customer_ids[0] if len(self.customer_ids) == 1 else None
95 self.success = requested_id == self.updated_customer_id
96 self.message = "Updated customer_id does not match enrolled customer_id" if not self.success else ""
97
98
99 class Client:
100 """Benefits Enrollment API client."""
101
102 def __init__(self, agency):
103 logger.debug("Initialize Benefits Enrollment API Client")
104
105 if agency is None:
106 raise ValueError("agency")
107 if agency.payment_processor is None:
108 raise ValueError("agency.payment_processor")
109
110 self.agency = agency
111 self.payment_processor = agency.payment_processor
112 self.headers = {"Accept": "application/json", "Content-type": "application/json"}
113
114 def _headers(self, headers=None):
115 h = dict(self.headers)
116 if headers:
117 h.update(headers)
118 return h
119
120 def _make_url(self, *parts):
121 return "/".join((self.payment_processor.api_base_url, self.agency.merchant_id, *parts))
122
123 def _get(self, url, payload, headers=None):
124 h = self._headers(headers)
125 return self._cert_request(lambda verify, cert: requests.get(url, headers=h, params=payload, verify=verify, cert=cert))
126
127 def _patch(self, url, payload, headers=None):
128 h = self._headers(headers)
129 return self._cert_request(lambda verify, cert: requests.patch(url, headers=h, json=payload, verify=verify, cert=cert))
130
131 def _post(self, url, payload, headers=None):
132 h = self._headers(headers)
133 return self._cert_request(lambda verify, cert: requests.post(url, headers=h, json=payload, verify=verify, cert=cert))
134
135 def _cert_request(self, request_func):
136 """
137 Creates named (on-disk) temp files for client cert auth.
138 * request_func: curried callable from `requests` library (e.g. `requests.get`).
139 """
140 # requests library reads temp files from file path
141 # The "with" context destroys temp files when response comes back
142 with NamedTemporaryFile("w+") as cert, NamedTemporaryFile("w+") as key, NamedTemporaryFile("w+") as ca:
143 # write client cert data to temp files
144 # resetting so they can be read again by requests
145 cert.write(self.payment_processor.client_cert.text)
146 cert.seek(0)
147
148 key.write(self.payment_processor.client_cert_private_key.text)
149 key.seek(0)
150
151 ca.write(self.payment_processor.client_cert_root_ca.text)
152 ca.seek(0)
153
154 # request using temp file paths
155 return request_func(verify=ca.name, cert=(cert.name, key.name))
156
157 def _get_customer(self, token):
158 """Get a customer record from Payment Processor's system"""
159 logger.info("Check for existing customer record")
160
161 if token is None:
162 raise ValueError("token")
163
164 url = self._make_url(self.payment_processor.customers_endpoint)
165 payload = {"token": token}
166
167 try:
168 r = self._get(url, payload)
169 if r.status_code == 200:
170 logger.debug("Customer record exists")
171 customer = CustomerResponse(r)
172 if customer.is_registered:
173 logger.debug("Customer is registered, skip update")
174 return customer
175 else:
176 logger.debug("Customer is not registered, update")
177 return self._update_customer(customer.id)
178 else:
179 r.raise_for_status()
180 except requests.ConnectionError:
181 raise ApiError("Connection to enrollment server failed")
182 except requests.Timeout:
183 raise ApiError("Connection to enrollment server timed out")
184 except requests.TooManyRedirects:
185 raise ApiError("Too many redirects to enrollment server")
186 except requests.HTTPError as e:
187 raise ApiError(e)
188
189 def _update_customer(self, customer_id):
190 """Update a customer using their unique info."""
191 logger.info("Update existing customer record")
192
193 if customer_id is None:
194 raise ValueError("customer_id")
195
196 url = self._make_url(self.payment_processor.customer_endpoint, customer_id)
197 payload = {"is_registered": True, "id": customer_id}
198
199 r = self._patch(url, payload)
200 r.raise_for_status()
201
202 return CustomerResponse(r)
203
204 def access_token(self):
205 """Obtain an access token to use for integrating with other APIs."""
206 logger.info("Get new access token")
207
208 url = self._make_url(self.payment_processor.api_access_token_endpoint)
209 payload = {self.payment_processor.api_access_token_request_key: self.payment_processor.api_access_token_request_val}
210
211 try:
212 r = self._post(url, payload)
213 r.raise_for_status()
214 except requests.ConnectionError:
215 raise ApiError("Connection to enrollment server failed")
216 except requests.Timeout:
217 raise ApiError("Connection to enrollment server timed out")
218 except requests.TooManyRedirects:
219 raise ApiError("Too many redirects to enrollment server")
220 except requests.HTTPError as e:
221 raise ApiError(e)
222
223 return AccessTokenResponse(r)
224
225 def enroll(self, customer_token, group_id):
226 """Enroll a customer in a product group using the token that represents that customer."""
227 logger.info("Enroll customer in product group")
228
229 if customer_token is None:
230 raise ValueError("customer_token")
231 if group_id is None:
232 raise ValueError("group_id")
233
234 customer = self._get_customer(customer_token)
235 url = self._make_url(self.payment_processor.group_endpoint, group_id)
236 payload = [customer.id]
237
238 try:
239 r = self._patch(url, payload)
240
241 if r.status_code in (200, 201):
242 logger.info("Customer enrolled in group")
243 return GroupResponse(r, customer.id)
244 elif r.status_code == 500:
245 logger.info("Customer already exists in group")
246 return GroupResponse(r, customer.id, payload=payload)
247 else:
248 r.raise_for_status()
249 except requests.ConnectionError:
250 raise ApiError("Connection to enrollment server failed")
251 except requests.Timeout:
252 raise ApiError("Connection to enrollment server timed out")
253 except requests.TooManyRedirects:
254 raise ApiError("Too many redirects to enrollment server")
255 except requests.HTTPError as e:
256 raise ApiError(e)
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benefits/enrollment/api.py b/benefits/enrollment/api.py
--- a/benefits/enrollment/api.py
+++ b/benefits/enrollment/api.py
@@ -79,13 +79,14 @@
detail = error["detail"]
failure = (
- customer_id is None,
- detail is None,
- customer_id not in detail,
- customer_id in detail and not detail.startswith("Duplicate"),
+ customer_id is None
+ or detail is None
+ or customer_id not in detail
+ or customer_id in detail
+ and not detail.startswith("Duplicate")
)
- if any(failure):
+ if failure:
raise ApiError("Invalid response format")
except (KeyError, ValueError):
raise ApiError("Invalid response format")
@@ -166,17 +167,17 @@
try:
r = self._get(url, payload)
- if r.status_code == 200:
- logger.debug("Customer record exists")
- customer = CustomerResponse(r)
- if customer.is_registered:
- logger.debug("Customer is registered, skip update")
- return customer
- else:
- logger.debug("Customer is not registered, update")
- return self._update_customer(customer.id)
+ r.raise_for_status()
+
+ logger.debug("Customer record exists")
+ customer = CustomerResponse(r)
+ if customer.is_registered:
+ logger.debug("Customer is registered, skip update")
+ return customer
else:
- r.raise_for_status()
+ logger.debug("Customer is not registered, update")
+ return self._update_customer(customer.id)
+
except requests.ConnectionError:
raise ApiError("Connection to enrollment server failed")
except requests.Timeout:
| {"golden_diff": "diff --git a/benefits/enrollment/api.py b/benefits/enrollment/api.py\n--- a/benefits/enrollment/api.py\n+++ b/benefits/enrollment/api.py\n@@ -79,13 +79,14 @@\n detail = error[\"detail\"]\n \n failure = (\n- customer_id is None,\n- detail is None,\n- customer_id not in detail,\n- customer_id in detail and not detail.startswith(\"Duplicate\"),\n+ customer_id is None\n+ or detail is None\n+ or customer_id not in detail\n+ or customer_id in detail\n+ and not detail.startswith(\"Duplicate\")\n )\n \n- if any(failure):\n+ if failure:\n raise ApiError(\"Invalid response format\")\n except (KeyError, ValueError):\n raise ApiError(\"Invalid response format\")\n@@ -166,17 +167,17 @@\n \n try:\n r = self._get(url, payload)\n- if r.status_code == 200:\n- logger.debug(\"Customer record exists\")\n- customer = CustomerResponse(r)\n- if customer.is_registered:\n- logger.debug(\"Customer is registered, skip update\")\n- return customer\n- else:\n- logger.debug(\"Customer is not registered, update\")\n- return self._update_customer(customer.id)\n+ r.raise_for_status()\n+\n+ logger.debug(\"Customer record exists\")\n+ customer = CustomerResponse(r)\n+ if customer.is_registered:\n+ logger.debug(\"Customer is registered, skip update\")\n+ return customer\n else:\n- r.raise_for_status()\n+ logger.debug(\"Customer is not registered, update\")\n+ return self._update_customer(customer.id)\n+\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n", "issue": "Add tests for enrollment API\nBroken out from https://github.com/cal-itp/benefits/issues/413.\n", "before_files": [{"content": "\"\"\"\nThe enrollment application: Benefits Enrollment API implementation.\n\"\"\"\nimport logging\nfrom tempfile import NamedTemporaryFile\nimport time\n\nimport requests\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApiError(Exception):\n \"\"\"Error calling the enrollment APIs.\"\"\"\n\n pass\n\n\nclass AccessTokenResponse:\n \"\"\"Benefits Enrollment API Access Token response.\"\"\"\n\n def __init__(self, response):\n logger.info(\"Read access token from response\")\n\n try:\n payload = response.json()\n except ValueError:\n raise ApiError(\"Invalid response format\")\n\n self.access_token = payload.get(\"access_token\")\n self.token_type = payload.get(\"token_type\")\n self.expires_in = payload.get(\"expires_in\")\n if self.expires_in is not None:\n logger.debug(\"Access token has expiry\")\n self.expiry = time.time() + self.expires_in\n else:\n logger.debug(\"Access token has no expiry\")\n self.expiry = None\n\n logger.info(\"Access token successfully read from response\")\n\n\nclass CustomerResponse:\n \"\"\"Benefits Enrollment Customer API response.\"\"\"\n\n def __init__(self, response):\n logger.info(\"Read customer details from response\")\n\n try:\n payload = response.json()\n self.id = payload[\"id\"]\n except (KeyError, ValueError):\n raise ApiError(\"Invalid response format\")\n\n if self.id is None:\n raise ApiError(\"Invalid response format\")\n\n self.is_registered = str(payload.get(\"is_registered\", \"false\")).lower() == \"true\"\n\n logger.info(\"Customer details successfully read from response\")\n\n\nclass GroupResponse:\n \"\"\"Benefits Enrollment Customer Group API response.\"\"\"\n\n def __init__(self, response, requested_id, payload=None):\n if payload is None:\n try:\n payload = response.json()\n except ValueError:\n raise ApiError(\"Invalid response format\")\n else:\n try:\n # Group API uses an error 
response (500) to indicate that the customer already exists in the group (!!!)\n # The error message should contain the customer ID we sent via payload and start with \"Duplicate\"\n error = response.json()[\"errors\"][0]\n customer_id = payload[0]\n detail = error[\"detail\"]\n\n failure = (\n customer_id is None,\n detail is None,\n customer_id not in detail,\n customer_id in detail and not detail.startswith(\"Duplicate\"),\n )\n\n if any(failure):\n raise ApiError(\"Invalid response format\")\n except (KeyError, ValueError):\n raise ApiError(\"Invalid response format\")\n\n self.customer_ids = list(payload)\n self.updated_customer_id = self.customer_ids[0] if len(self.customer_ids) == 1 else None\n self.success = requested_id == self.updated_customer_id\n self.message = \"Updated customer_id does not match enrolled customer_id\" if not self.success else \"\"\n\n\nclass Client:\n \"\"\"Benefits Enrollment API client.\"\"\"\n\n def __init__(self, agency):\n logger.debug(\"Initialize Benefits Enrollment API Client\")\n\n if agency is None:\n raise ValueError(\"agency\")\n if agency.payment_processor is None:\n raise ValueError(\"agency.payment_processor\")\n\n self.agency = agency\n self.payment_processor = agency.payment_processor\n self.headers = {\"Accept\": \"application/json\", \"Content-type\": \"application/json\"}\n\n def _headers(self, headers=None):\n h = dict(self.headers)\n if headers:\n h.update(headers)\n return h\n\n def _make_url(self, *parts):\n return \"/\".join((self.payment_processor.api_base_url, self.agency.merchant_id, *parts))\n\n def _get(self, url, payload, headers=None):\n h = self._headers(headers)\n return self._cert_request(lambda verify, cert: requests.get(url, headers=h, params=payload, verify=verify, cert=cert))\n\n def _patch(self, url, payload, headers=None):\n h = self._headers(headers)\n return self._cert_request(lambda verify, cert: requests.patch(url, headers=h, json=payload, verify=verify, cert=cert))\n\n def _post(self, url, payload, headers=None):\n h = self._headers(headers)\n return self._cert_request(lambda verify, cert: requests.post(url, headers=h, json=payload, verify=verify, cert=cert))\n\n def _cert_request(self, request_func):\n \"\"\"\n Creates named (on-disk) temp files for client cert auth.\n * request_func: curried callable from `requests` library (e.g. 
`requests.get`).\n \"\"\"\n # requests library reads temp files from file path\n # The \"with\" context destroys temp files when response comes back\n with NamedTemporaryFile(\"w+\") as cert, NamedTemporaryFile(\"w+\") as key, NamedTemporaryFile(\"w+\") as ca:\n # write client cert data to temp files\n # resetting so they can be read again by requests\n cert.write(self.payment_processor.client_cert.text)\n cert.seek(0)\n\n key.write(self.payment_processor.client_cert_private_key.text)\n key.seek(0)\n\n ca.write(self.payment_processor.client_cert_root_ca.text)\n ca.seek(0)\n\n # request using temp file paths\n return request_func(verify=ca.name, cert=(cert.name, key.name))\n\n def _get_customer(self, token):\n \"\"\"Get a customer record from Payment Processor's system\"\"\"\n logger.info(\"Check for existing customer record\")\n\n if token is None:\n raise ValueError(\"token\")\n\n url = self._make_url(self.payment_processor.customers_endpoint)\n payload = {\"token\": token}\n\n try:\n r = self._get(url, payload)\n if r.status_code == 200:\n logger.debug(\"Customer record exists\")\n customer = CustomerResponse(r)\n if customer.is_registered:\n logger.debug(\"Customer is registered, skip update\")\n return customer\n else:\n logger.debug(\"Customer is not registered, update\")\n return self._update_customer(customer.id)\n else:\n r.raise_for_status()\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to enrollment server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to enrollment server\")\n except requests.HTTPError as e:\n raise ApiError(e)\n\n def _update_customer(self, customer_id):\n \"\"\"Update a customer using their unique info.\"\"\"\n logger.info(\"Update existing customer record\")\n\n if customer_id is None:\n raise ValueError(\"customer_id\")\n\n url = self._make_url(self.payment_processor.customer_endpoint, customer_id)\n payload = {\"is_registered\": True, \"id\": customer_id}\n\n r = self._patch(url, payload)\n r.raise_for_status()\n\n return CustomerResponse(r)\n\n def access_token(self):\n \"\"\"Obtain an access token to use for integrating with other APIs.\"\"\"\n logger.info(\"Get new access token\")\n\n url = self._make_url(self.payment_processor.api_access_token_endpoint)\n payload = {self.payment_processor.api_access_token_request_key: self.payment_processor.api_access_token_request_val}\n\n try:\n r = self._post(url, payload)\n r.raise_for_status()\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to enrollment server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to enrollment server\")\n except requests.HTTPError as e:\n raise ApiError(e)\n\n return AccessTokenResponse(r)\n\n def enroll(self, customer_token, group_id):\n \"\"\"Enroll a customer in a product group using the token that represents that customer.\"\"\"\n logger.info(\"Enroll customer in product group\")\n\n if customer_token is None:\n raise ValueError(\"customer_token\")\n if group_id is None:\n raise ValueError(\"group_id\")\n\n customer = self._get_customer(customer_token)\n url = self._make_url(self.payment_processor.group_endpoint, group_id)\n payload = [customer.id]\n\n try:\n r = self._patch(url, payload)\n\n if r.status_code in (200, 201):\n logger.info(\"Customer enrolled in group\")\n return GroupResponse(r, 
customer.id)\n elif r.status_code == 500:\n logger.info(\"Customer already exists in group\")\n return GroupResponse(r, customer.id, payload=payload)\n else:\n r.raise_for_status()\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to enrollment server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to enrollment server\")\n except requests.HTTPError as e:\n raise ApiError(e)\n", "path": "benefits/enrollment/api.py"}], "after_files": [{"content": "\"\"\"\nThe enrollment application: Benefits Enrollment API implementation.\n\"\"\"\nimport logging\nfrom tempfile import NamedTemporaryFile\nimport time\n\nimport requests\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApiError(Exception):\n \"\"\"Error calling the enrollment APIs.\"\"\"\n\n pass\n\n\nclass AccessTokenResponse:\n \"\"\"Benefits Enrollment API Access Token response.\"\"\"\n\n def __init__(self, response):\n logger.info(\"Read access token from response\")\n\n try:\n payload = response.json()\n except ValueError:\n raise ApiError(\"Invalid response format\")\n\n self.access_token = payload.get(\"access_token\")\n self.token_type = payload.get(\"token_type\")\n self.expires_in = payload.get(\"expires_in\")\n if self.expires_in is not None:\n logger.debug(\"Access token has expiry\")\n self.expiry = time.time() + self.expires_in\n else:\n logger.debug(\"Access token has no expiry\")\n self.expiry = None\n\n logger.info(\"Access token successfully read from response\")\n\n\nclass CustomerResponse:\n \"\"\"Benefits Enrollment Customer API response.\"\"\"\n\n def __init__(self, response):\n logger.info(\"Read customer details from response\")\n\n try:\n payload = response.json()\n self.id = payload[\"id\"]\n except (KeyError, ValueError):\n raise ApiError(\"Invalid response format\")\n\n if self.id is None:\n raise ApiError(\"Invalid response format\")\n\n self.is_registered = str(payload.get(\"is_registered\", \"false\")).lower() == \"true\"\n\n logger.info(\"Customer details successfully read from response\")\n\n\nclass GroupResponse:\n \"\"\"Benefits Enrollment Customer Group API response.\"\"\"\n\n def __init__(self, response, requested_id, payload=None):\n if payload is None:\n try:\n payload = response.json()\n except ValueError:\n raise ApiError(\"Invalid response format\")\n else:\n try:\n # Group API uses an error response (500) to indicate that the customer already exists in the group (!!!)\n # The error message should contain the customer ID we sent via payload and start with \"Duplicate\"\n error = response.json()[\"errors\"][0]\n customer_id = payload[0]\n detail = error[\"detail\"]\n\n failure = (\n customer_id is None\n or detail is None\n or customer_id not in detail\n or customer_id in detail\n and not detail.startswith(\"Duplicate\")\n )\n\n if failure:\n raise ApiError(\"Invalid response format\")\n except (KeyError, ValueError):\n raise ApiError(\"Invalid response format\")\n\n self.customer_ids = list(payload)\n self.updated_customer_id = self.customer_ids[0] if len(self.customer_ids) == 1 else None\n self.success = requested_id == self.updated_customer_id\n self.message = \"Updated customer_id does not match enrolled customer_id\" if not self.success else \"\"\n\n\nclass Client:\n \"\"\"Benefits Enrollment API client.\"\"\"\n\n def __init__(self, agency):\n logger.debug(\"Initialize Benefits Enrollment API Client\")\n\n if agency is None:\n raise 
ValueError(\"agency\")\n if agency.payment_processor is None:\n raise ValueError(\"agency.payment_processor\")\n\n self.agency = agency\n self.payment_processor = agency.payment_processor\n self.headers = {\"Accept\": \"application/json\", \"Content-type\": \"application/json\"}\n\n def _headers(self, headers=None):\n h = dict(self.headers)\n if headers:\n h.update(headers)\n return h\n\n def _make_url(self, *parts):\n return \"/\".join((self.payment_processor.api_base_url, self.agency.merchant_id, *parts))\n\n def _get(self, url, payload, headers=None):\n h = self._headers(headers)\n return self._cert_request(lambda verify, cert: requests.get(url, headers=h, params=payload, verify=verify, cert=cert))\n\n def _patch(self, url, payload, headers=None):\n h = self._headers(headers)\n return self._cert_request(lambda verify, cert: requests.patch(url, headers=h, json=payload, verify=verify, cert=cert))\n\n def _post(self, url, payload, headers=None):\n h = self._headers(headers)\n return self._cert_request(lambda verify, cert: requests.post(url, headers=h, json=payload, verify=verify, cert=cert))\n\n def _cert_request(self, request_func):\n \"\"\"\n Creates named (on-disk) temp files for client cert auth.\n * request_func: curried callable from `requests` library (e.g. `requests.get`).\n \"\"\"\n # requests library reads temp files from file path\n # The \"with\" context destroys temp files when response comes back\n with NamedTemporaryFile(\"w+\") as cert, NamedTemporaryFile(\"w+\") as key, NamedTemporaryFile(\"w+\") as ca:\n # write client cert data to temp files\n # resetting so they can be read again by requests\n cert.write(self.payment_processor.client_cert.text)\n cert.seek(0)\n\n key.write(self.payment_processor.client_cert_private_key.text)\n key.seek(0)\n\n ca.write(self.payment_processor.client_cert_root_ca.text)\n ca.seek(0)\n\n # request using temp file paths\n return request_func(verify=ca.name, cert=(cert.name, key.name))\n\n def _get_customer(self, token):\n \"\"\"Get a customer record from Payment Processor's system\"\"\"\n logger.info(\"Check for existing customer record\")\n\n if token is None:\n raise ValueError(\"token\")\n\n url = self._make_url(self.payment_processor.customers_endpoint)\n payload = {\"token\": token}\n\n try:\n r = self._get(url, payload)\n r.raise_for_status()\n\n logger.debug(\"Customer record exists\")\n customer = CustomerResponse(r)\n if customer.is_registered:\n logger.debug(\"Customer is registered, skip update\")\n return customer\n else:\n logger.debug(\"Customer is not registered, update\")\n return self._update_customer(customer.id)\n\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to enrollment server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to enrollment server\")\n except requests.HTTPError as e:\n raise ApiError(e)\n\n def _update_customer(self, customer_id):\n \"\"\"Update a customer using their unique info.\"\"\"\n logger.info(\"Update existing customer record\")\n\n if customer_id is None:\n raise ValueError(\"customer_id\")\n\n url = self._make_url(self.payment_processor.customer_endpoint, customer_id)\n payload = {\"is_registered\": True, \"id\": customer_id}\n\n r = self._patch(url, payload)\n r.raise_for_status()\n\n return CustomerResponse(r)\n\n def access_token(self):\n \"\"\"Obtain an access token to use for integrating with other APIs.\"\"\"\n logger.info(\"Get new access 
token\")\n\n url = self._make_url(self.payment_processor.api_access_token_endpoint)\n payload = {self.payment_processor.api_access_token_request_key: self.payment_processor.api_access_token_request_val}\n\n try:\n r = self._post(url, payload)\n r.raise_for_status()\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to enrollment server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to enrollment server\")\n except requests.HTTPError as e:\n raise ApiError(e)\n\n return AccessTokenResponse(r)\n\n def enroll(self, customer_token, group_id):\n \"\"\"Enroll a customer in a product group using the token that represents that customer.\"\"\"\n logger.info(\"Enroll customer in product group\")\n\n if customer_token is None:\n raise ValueError(\"customer_token\")\n if group_id is None:\n raise ValueError(\"group_id\")\n\n customer = self._get_customer(customer_token)\n url = self._make_url(self.payment_processor.group_endpoint, group_id)\n payload = [customer.id]\n\n try:\n r = self._patch(url, payload)\n\n if r.status_code in (200, 201):\n logger.info(\"Customer enrolled in group\")\n return GroupResponse(r, customer.id)\n elif r.status_code == 500:\n logger.info(\"Customer already exists in group\")\n return GroupResponse(r, customer.id, payload=payload)\n else:\n r.raise_for_status()\n except requests.ConnectionError:\n raise ApiError(\"Connection to enrollment server failed\")\n except requests.Timeout:\n raise ApiError(\"Connection to enrollment server timed out\")\n except requests.TooManyRedirects:\n raise ApiError(\"Too many redirects to enrollment server\")\n except requests.HTTPError as e:\n raise ApiError(e)\n", "path": "benefits/enrollment/api.py"}]} | 2,838 | 394 |
gh_patches_debug_14293 | rasdani/github-patches | git_diff | psychopy__psychopy-569 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feature request: develop a pylint rule-set
pylint is a code analysis tool, and will err on the side of being super duper ultra nitpicky (which is great). You then just turn some things off to see the signal in the noise. For example, I've been bitten by mutable default arguments to a method / function, and it will catch this. It flags bare excepts -- lots of useful stuff.
If anyone has experience with pylint, it would be great to have advice on what works well, and what is likely to work well for PsychoPy given its history and current conventions. If it's counterproductive to start using pylint with a codebase this large, that would be helpful to know.
I'm thinking that even if it's never run as part of the build process, it might be nice to have a project-wide pylintrc file that makes explicit what style conventions are expected (long lines OK, variable name conventions, etc.). This seems like a powerful way to communicate the conventions.
PsychoPy currently has lots of bare excepts, bad indentations, unused variables, redefined builtins, unused imports, and so on -- seemingly all good targets for clean-up work.
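For modules that exist only to re-export names for backwards compatibility (where "unused import" warnings are expected and harmless), the check could be silenced locally instead of globally; a minimal sketch of what I mean, using an illustrative module rather than a real PsychoPy file:

```python
# compat_shim.py -- illustrative re-export-only module (not an actual PsychoPy file)
# pylint: disable=W0611
# W0611 = Unused import %s

from numpy import radians, degrees
```

That way a strict project-wide pylintrc could stay strict, and the exceptions would be visible right where they apply.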
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/misc.py`
Content:
```
1 #!/usr/bin/env python2
2
3 # Part of the PsychoPy library
4 # Copyright (C) 2014 Jonathan Peirce
5 # Distributed under the terms of the GNU General Public License (GPL).
6
7 '''Wrapper for all miscellaneous functions and classes from psychopy.tools'''
8
9 from psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,
10 ratioRange, shuffleArray, val2array)
11
12 from psychopy.tools.attributetools import attributeSetter, setWithOperation
13
14 from psychopy.tools.colorspacetools import (dkl2rgb, dklCart2rgb,
15 hsv2rgb, lms2rgb,
16 rgb2dklCart, rgb2lms)
17
18 from psychopy.tools.coordinatetools import (cart2pol, pol2cart,
19 cart2sph, sph2cart)
20
21 from psychopy.tools.fileerrortools import handleFileCollision
22
23 from psychopy.tools.filetools import toFile, fromFile, mergeFolder
24
25 from psychopy.tools.imagetools import array2image, image2array, makeImageAuto
26
27 from psychopy.tools.monitorunittools import (cm2deg, deg2cm, cm2pix, pix2cm,
28 deg2pix, pix2deg, convertToPix)
29
30 from psychopy.tools.plottools import plotFrameIntervals
31
32 from psychopy.tools.typetools import float_uint8, float_uint16, uint8_float
33
34 from numpy import radians, degrees
35
```
Path: `psychopy/tools/unittools.py`
Content:
```
1 #!/usr/bin/env python2
2
3 # Part of the PsychoPy library
4 # Copyright (C) 2014 Jonathan Peirce
5 # Distributed under the terms of the GNU General Public License (GPL).
6
7 '''Functions and classes related to unit conversion'''
8
9 # This module is not used by psychopy; retained for backwards compatibility
10 # for user-scripts.
11
12 from numpy import radians, degrees
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/psychopy/misc.py b/psychopy/misc.py
--- a/psychopy/misc.py
+++ b/psychopy/misc.py
@@ -6,6 +6,9 @@
'''Wrapper for all miscellaneous functions and classes from psychopy.tools'''
+# pylint: disable=W0611
+# W0611 = Unused import %s
+
from psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,
ratioRange, shuffleArray, val2array)
diff --git a/psychopy/tools/unittools.py b/psychopy/tools/unittools.py
--- a/psychopy/tools/unittools.py
+++ b/psychopy/tools/unittools.py
@@ -9,4 +9,7 @@
# This module is not used by psychopy; retained for backwards compatibility
# for user-scripts.
+# pylint: disable=W0611
+# W0611 = Unused import %s
+
from numpy import radians, degrees
| {"golden_diff": "diff --git a/psychopy/misc.py b/psychopy/misc.py\n--- a/psychopy/misc.py\n+++ b/psychopy/misc.py\n@@ -6,6 +6,9 @@\n \n '''Wrapper for all miscellaneous functions and classes from psychopy.tools'''\n \n+# pylint: disable=W0611\n+# W0611 = Unused import %s\n+\n from psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,\n ratioRange, shuffleArray, val2array)\n \ndiff --git a/psychopy/tools/unittools.py b/psychopy/tools/unittools.py\n--- a/psychopy/tools/unittools.py\n+++ b/psychopy/tools/unittools.py\n@@ -9,4 +9,7 @@\n # This module is not used by psychopy; retained for backwards compatibility\n # for user-scripts.\n \n+# pylint: disable=W0611\n+# W0611 = Unused import %s\n+\n from numpy import radians, degrees\n", "issue": "feature request: develop a pylint rule-set\npylint is a code analysis tool, and will err of the side of being super duper ultra nitpicky (which is great). You just then turn some things off to see the signal in the noise. For example, I've been bitten by mutable default values to a method / function, and it will catch this. It flags bare excepts -- lots of useful stuff.\n\nIf anyone has experience with pylint, it would be great to have advice on what works well, and what is likely to work well for PsychoPy given its history and current conventions. If its counterproductive to start using pylint with a codebase this large, that would be helpful to know.\n\nI'm thinking that even if its never run as part of the build process, it might be nice to have a project-wide pylintrc file that makes explicit what style conventions are expected (long lines ok, variable name conventions, etc). This seems like a powerful way to communicate the conventions. \n\nPsychoPy currently has lots of bare excepts, bad indentations, unused variables, redefined builtins, unused imports, and so on -- seemingly all good targets for clean-up work.\n\n", "before_files": [{"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n'''Wrapper for all miscellaneous functions and classes from psychopy.tools'''\n\nfrom psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,\n ratioRange, shuffleArray, val2array)\n\nfrom psychopy.tools.attributetools import attributeSetter, setWithOperation\n\nfrom psychopy.tools.colorspacetools import (dkl2rgb, dklCart2rgb,\n hsv2rgb, lms2rgb,\n rgb2dklCart, rgb2lms)\n\nfrom psychopy.tools.coordinatetools import (cart2pol, pol2cart,\n cart2sph, sph2cart)\n\nfrom psychopy.tools.fileerrortools import handleFileCollision\n\nfrom psychopy.tools.filetools import toFile, fromFile, mergeFolder\n\nfrom psychopy.tools.imagetools import array2image, image2array, makeImageAuto\n\nfrom psychopy.tools.monitorunittools import (cm2deg, deg2cm, cm2pix, pix2cm,\n deg2pix, pix2deg, convertToPix)\n\nfrom psychopy.tools.plottools import plotFrameIntervals\n\nfrom psychopy.tools.typetools import float_uint8, float_uint16, uint8_float\n\nfrom numpy import radians, degrees\n", "path": "psychopy/misc.py"}, {"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n'''Functions and classes related to unit conversion'''\n\n# This module is not used by psychopy; retained for backwards compatibility\n# for user-scripts.\n\nfrom numpy import radians, degrees\n", "path": "psychopy/tools/unittools.py"}], 
"after_files": [{"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n'''Wrapper for all miscellaneous functions and classes from psychopy.tools'''\n\n# pylint: disable=W0611\n# W0611 = Unused import %s\n\nfrom psychopy.tools.arraytools import (createXYs, extendArr, makeRadialMatrix,\n ratioRange, shuffleArray, val2array)\n\nfrom psychopy.tools.attributetools import attributeSetter, setWithOperation\n\nfrom psychopy.tools.colorspacetools import (dkl2rgb, dklCart2rgb,\n hsv2rgb, lms2rgb,\n rgb2dklCart, rgb2lms)\n\nfrom psychopy.tools.coordinatetools import (cart2pol, pol2cart,\n cart2sph, sph2cart)\n\nfrom psychopy.tools.fileerrortools import handleFileCollision\n\nfrom psychopy.tools.filetools import toFile, fromFile, mergeFolder\n\nfrom psychopy.tools.imagetools import array2image, image2array, makeImageAuto\n\nfrom psychopy.tools.monitorunittools import (cm2deg, deg2cm, cm2pix, pix2cm,\n deg2pix, pix2deg, convertToPix)\n\nfrom psychopy.tools.plottools import plotFrameIntervals\n\nfrom psychopy.tools.typetools import float_uint8, float_uint16, uint8_float\n\nfrom numpy import radians, degrees\n", "path": "psychopy/misc.py"}, {"content": "#!/usr/bin/env python2\n\n# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\n'''Functions and classes related to unit conversion'''\n\n# This module is not used by psychopy; retained for backwards compatibility\n# for user-scripts.\n\n# pylint: disable=W0611\n# W0611 = Unused import %s\n\nfrom numpy import radians, degrees\n", "path": "psychopy/tools/unittools.py"}]} | 999 | 213 |
gh_patches_debug_12247 | rasdani/github-patches | git_diff | conan-io__conan-14167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[feature] Add '--strip' option to cmake install
### What is your suggestion?
Can you add a parameter to the `cmake.install` function in order to be able to strip binaries before install?
install and strip were added to CMake about 4 years ago, in version 3.15 if I got it correctly:
https://gitlab.kitware.com/cmake/cmake/-/merge_requests/3069
Strip is very useful to remove debug symbols (actually all the DWARF sections) from binaries, especially release binaries.
I tried many options to make it work with Conan, but without success, so I think adding this option as a parameter to the `install` function in the `cmake` tool would be the best solution instead of some workarounds.
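For example, I would like to be able to write something like the following in a recipe; the `strip` parameter below is hypothetical, it is exactly what I am asking for:

```python
# sketch of the requested usage; cmake.install(strip=True) does not exist yet
from conan import ConanFile
from conan.tools.cmake import CMake


class PkgConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    generators = "CMakeToolchain"

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        cmake = CMake(self)
        # hypothetical parameter: would forward "--strip" to "cmake --install"
        # (supported by CMake >= 3.15)
        cmake.install(strip=True)
```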
### Have you read the CONTRIBUTING guide?
- [X] I've read the CONTRIBUTING guide
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conan/tools/cmake/cmake.py`
Content:
```
1 import os
2
3 from conan.tools.build import build_jobs
4 from conan.tools.cmake.presets import load_cmake_presets
5 from conan.tools.cmake.utils import is_multi_configuration
6 from conan.tools.files import chdir, mkdir
7 from conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg
8 from conans.client.tools.oss import args_to_string
9 from conans.errors import ConanException
10
11
12 def _validate_recipe(conanfile):
13 forbidden_generators = ["cmake", "cmake_multi"]
14 if any(it in conanfile.generators for it in forbidden_generators):
15 raise ConanException("Usage of toolchain is only supported with 'cmake_find_package'"
16 " or 'cmake_find_package_multi' generators")
17
18
19 def _cmake_cmd_line_args(conanfile, generator):
20 args = []
21 if not generator:
22 return args
23
24 # Arguments related to parallel
25 njobs = build_jobs(conanfile)
26 if njobs and ("Makefiles" in generator or "Ninja" in generator) and "NMake" not in generator:
27 args.append("-j{}".format(njobs))
28
29 maxcpucount = conanfile.conf.get("tools.microsoft.msbuild:max_cpu_count", check_type=int)
30 if maxcpucount and "Visual Studio" in generator:
31 args.append("/m:{}".format(njobs))
32
33 # Arguments for verbosity
34 if "Visual Studio" in generator:
35 verbosity = msbuild_verbosity_cmd_line_arg(conanfile)
36 if verbosity:
37 args.append(verbosity)
38
39 return args
40
41
42 class CMake(object):
43 """ CMake helper to use together with the toolchain feature. It implements a very simple
44 wrapper to call the cmake executable, but without passing compile flags, preprocessor
45 definitions... all that is set by the toolchain. Only the generator and the CMAKE_TOOLCHAIN_FILE
46 are passed to the command line, plus the ``--config Release`` for builds in multi-config
47 """
48
49 def __init__(self, conanfile):
50 _validate_recipe(conanfile)
51
52 # Store a reference to useful data
53 self._conanfile = conanfile
54
55 cmake_presets = load_cmake_presets(conanfile.generators_folder)
56 # Conan generated presets will have exactly 1 configurePresets, no more
57 configure_preset = cmake_presets["configurePresets"][0]
58
59 self._generator = configure_preset["generator"]
60 self._toolchain_file = configure_preset.get("toolchainFile")
61 self._cache_variables = configure_preset["cacheVariables"]
62
63 self._cmake_program = "cmake" # Path to CMake should be handled by environment
64
65 def configure(self, variables=None, build_script_folder=None, cli_args=None):
66 cmakelist_folder = self._conanfile.source_folder
67 if build_script_folder:
68 cmakelist_folder = os.path.join(self._conanfile.source_folder, build_script_folder)
69
70 build_folder = self._conanfile.build_folder
71 generator_folder = self._conanfile.generators_folder
72
73 mkdir(self._conanfile, build_folder)
74
75 arg_list = [self._cmake_program]
76 if self._generator:
77 arg_list.append('-G "{}"'.format(self._generator))
78 if self._toolchain_file:
79 if os.path.isabs(self._toolchain_file):
80 toolpath = self._toolchain_file
81 else:
82 toolpath = os.path.join(generator_folder, self._toolchain_file)
83 arg_list.append('-DCMAKE_TOOLCHAIN_FILE="{}"'.format(toolpath.replace("\\", "/")))
84 if self._conanfile.package_folder:
85 pkg_folder = self._conanfile.package_folder.replace("\\", "/")
86 arg_list.append('-DCMAKE_INSTALL_PREFIX="{}"'.format(pkg_folder))
87
88 if not variables:
89 variables = {}
90 self._cache_variables.update(variables)
91
92 arg_list.extend(['-D{}="{}"'.format(k, v) for k, v in self._cache_variables.items()])
93 arg_list.append('"{}"'.format(cmakelist_folder))
94
95 if cli_args:
96 arg_list.extend(cli_args)
97
98 command = " ".join(arg_list)
99 self._conanfile.output.info("CMake command: %s" % command)
100 with chdir(self, build_folder):
101 self._conanfile.run(command)
102
103 def _build(self, build_type=None, target=None, cli_args=None, build_tool_args=None, env=""):
104 bf = self._conanfile.build_folder
105 is_multi = is_multi_configuration(self._generator)
106 if build_type and not is_multi:
107 self._conanfile.output.error("Don't specify 'build_type' at build time for "
108 "single-config build systems")
109
110 bt = build_type or self._conanfile.settings.get_safe("build_type")
111 if not bt:
112 raise ConanException("build_type setting should be defined.")
113 build_config = "--config {}".format(bt) if bt and is_multi else ""
114
115 args = []
116 if target is not None:
117 args = ["--target", target]
118 if cli_args:
119 args.extend(cli_args)
120
121 cmd_line_args = _cmake_cmd_line_args(self._conanfile, self._generator)
122 if build_tool_args:
123 cmd_line_args.extend(build_tool_args)
124 if cmd_line_args:
125 args += ['--'] + cmd_line_args
126
127 arg_list = ['"{}"'.format(bf), build_config, args_to_string(args)]
128 arg_list = " ".join(filter(None, arg_list))
129 command = "%s --build %s" % (self._cmake_program, arg_list)
130 self._conanfile.output.info("CMake command: %s" % command)
131 self._conanfile.run(command, env=env)
132
133 def build(self, build_type=None, target=None, cli_args=None, build_tool_args=None):
134 self._build(build_type, target, cli_args, build_tool_args)
135
136 def install(self, build_type=None, component=None):
137 mkdir(self._conanfile, self._conanfile.package_folder)
138
139 bt = build_type or self._conanfile.settings.get_safe("build_type")
140 if not bt:
141 raise ConanException("build_type setting should be defined.")
142 is_multi = is_multi_configuration(self._generator)
143 build_config = "--config {}".format(bt) if bt and is_multi else ""
144
145 pkg_folder = '"{}"'.format(self._conanfile.package_folder.replace("\\", "/"))
146 build_folder = '"{}"'.format(self._conanfile.build_folder)
147 arg_list = ["--install", build_folder, build_config, "--prefix", pkg_folder]
148 if component:
149 arg_list.extend(["--component", component])
150 arg_list = " ".join(filter(None, arg_list))
151 command = "%s %s" % (self._cmake_program, arg_list)
152 self._conanfile.output.info("CMake command: %s" % command)
153 self._conanfile.run(command)
154
155 def test(self, build_type=None, target=None, cli_args=None, build_tool_args=None, env=""):
156 if self._conanfile.conf.get("tools.build:skip_test", check_type=bool):
157 return
158 if not target:
159 is_multi = is_multi_configuration(self._generator)
160 is_ninja = "Ninja" in self._generator
161 target = "RUN_TESTS" if is_multi and not is_ninja else "test"
162
163 # The default for ``test()`` is both the buildenv and the runenv
164 env = ["conanbuild", "conanrun"] if env == "" else env
165 self._build(build_type=build_type, target=target, cli_args=cli_args,
166 build_tool_args=build_tool_args, env=env)
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conan/tools/cmake/cmake.py b/conan/tools/cmake/cmake.py
--- a/conan/tools/cmake/cmake.py
+++ b/conan/tools/cmake/cmake.py
@@ -147,6 +147,10 @@
arg_list = ["--install", build_folder, build_config, "--prefix", pkg_folder]
if component:
arg_list.extend(["--component", component])
+
+ do_strip = self._conanfile.conf.get("tools.cmake:install_strip", check_type=bool)
+ if do_strip:
+ arg_list.append("--strip")
arg_list = " ".join(filter(None, arg_list))
command = "%s %s" % (self._cmake_program, arg_list)
self._conanfile.output.info("CMake command: %s" % command)
| {"golden_diff": "diff --git a/conan/tools/cmake/cmake.py b/conan/tools/cmake/cmake.py\n--- a/conan/tools/cmake/cmake.py\n+++ b/conan/tools/cmake/cmake.py\n@@ -147,6 +147,10 @@\n arg_list = [\"--install\", build_folder, build_config, \"--prefix\", pkg_folder]\n if component:\n arg_list.extend([\"--component\", component])\n+\n+ do_strip = self._conanfile.conf.get(\"tools.cmake:install_strip\", check_type=bool)\n+ if do_strip:\n+ arg_list.append(\"--strip\")\n arg_list = \" \".join(filter(None, arg_list))\n command = \"%s %s\" % (self._cmake_program, arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n", "issue": "[feature] Add '--strip' option to cmake install\n### What is your suggestion?\n\nCan you add a parameter to `cmake.install` function in order to be able to strip binaries before install ?\r\n\r\ninstall and strip were added to CMake about 4 years ago, and if I got it correctly in version 3.15\r\nhttps://gitlab.kitware.com/cmake/cmake/-/merge_requests/3069\r\n\r\nStrip is very useful to remove debug symbols (actually all DWARF extension) from binaries and especially release binaries.\r\n\r\nI tried many options to make it work with conan, but without success, so I think adding this option as a parameter to `install` function in `cmake` tool will be the best solution instead of some workarounds.\n\n### Have you read the CONTRIBUTING guide?\n\n- [X] I've read the CONTRIBUTING guide\n", "before_files": [{"content": "import os\n\nfrom conan.tools.build import build_jobs\nfrom conan.tools.cmake.presets import load_cmake_presets\nfrom conan.tools.cmake.utils import is_multi_configuration\nfrom conan.tools.files import chdir, mkdir\nfrom conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg\nfrom conans.client.tools.oss import args_to_string\nfrom conans.errors import ConanException\n\n\ndef _validate_recipe(conanfile):\n forbidden_generators = [\"cmake\", \"cmake_multi\"]\n if any(it in conanfile.generators for it in forbidden_generators):\n raise ConanException(\"Usage of toolchain is only supported with 'cmake_find_package'\"\n \" or 'cmake_find_package_multi' generators\")\n\n\ndef _cmake_cmd_line_args(conanfile, generator):\n args = []\n if not generator:\n return args\n\n # Arguments related to parallel\n njobs = build_jobs(conanfile)\n if njobs and (\"Makefiles\" in generator or \"Ninja\" in generator) and \"NMake\" not in generator:\n args.append(\"-j{}\".format(njobs))\n\n maxcpucount = conanfile.conf.get(\"tools.microsoft.msbuild:max_cpu_count\", check_type=int)\n if maxcpucount and \"Visual Studio\" in generator:\n args.append(\"/m:{}\".format(njobs))\n\n # Arguments for verbosity\n if \"Visual Studio\" in generator:\n verbosity = msbuild_verbosity_cmd_line_arg(conanfile)\n if verbosity:\n args.append(verbosity)\n\n return args\n\n\nclass CMake(object):\n \"\"\" CMake helper to use together with the toolchain feature. It implements a very simple\n wrapper to call the cmake executable, but without passing compile flags, preprocessor\n definitions... all that is set by the toolchain. 
Only the generator and the CMAKE_TOOLCHAIN_FILE\n are passed to the command line, plus the ``--config Release`` for builds in multi-config\n \"\"\"\n\n def __init__(self, conanfile):\n _validate_recipe(conanfile)\n\n # Store a reference to useful data\n self._conanfile = conanfile\n\n cmake_presets = load_cmake_presets(conanfile.generators_folder)\n # Conan generated presets will have exactly 1 configurePresets, no more\n configure_preset = cmake_presets[\"configurePresets\"][0]\n\n self._generator = configure_preset[\"generator\"]\n self._toolchain_file = configure_preset.get(\"toolchainFile\")\n self._cache_variables = configure_preset[\"cacheVariables\"]\n\n self._cmake_program = \"cmake\" # Path to CMake should be handled by environment\n\n def configure(self, variables=None, build_script_folder=None, cli_args=None):\n cmakelist_folder = self._conanfile.source_folder\n if build_script_folder:\n cmakelist_folder = os.path.join(self._conanfile.source_folder, build_script_folder)\n\n build_folder = self._conanfile.build_folder\n generator_folder = self._conanfile.generators_folder\n\n mkdir(self._conanfile, build_folder)\n\n arg_list = [self._cmake_program]\n if self._generator:\n arg_list.append('-G \"{}\"'.format(self._generator))\n if self._toolchain_file:\n if os.path.isabs(self._toolchain_file):\n toolpath = self._toolchain_file\n else:\n toolpath = os.path.join(generator_folder, self._toolchain_file)\n arg_list.append('-DCMAKE_TOOLCHAIN_FILE=\"{}\"'.format(toolpath.replace(\"\\\\\", \"/\")))\n if self._conanfile.package_folder:\n pkg_folder = self._conanfile.package_folder.replace(\"\\\\\", \"/\")\n arg_list.append('-DCMAKE_INSTALL_PREFIX=\"{}\"'.format(pkg_folder))\n\n if not variables:\n variables = {}\n self._cache_variables.update(variables)\n\n arg_list.extend(['-D{}=\"{}\"'.format(k, v) for k, v in self._cache_variables.items()])\n arg_list.append('\"{}\"'.format(cmakelist_folder))\n\n if cli_args:\n arg_list.extend(cli_args)\n\n command = \" \".join(arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n with chdir(self, build_folder):\n self._conanfile.run(command)\n\n def _build(self, build_type=None, target=None, cli_args=None, build_tool_args=None, env=\"\"):\n bf = self._conanfile.build_folder\n is_multi = is_multi_configuration(self._generator)\n if build_type and not is_multi:\n self._conanfile.output.error(\"Don't specify 'build_type' at build time for \"\n \"single-config build systems\")\n\n bt = build_type or self._conanfile.settings.get_safe(\"build_type\")\n if not bt:\n raise ConanException(\"build_type setting should be defined.\")\n build_config = \"--config {}\".format(bt) if bt and is_multi else \"\"\n\n args = []\n if target is not None:\n args = [\"--target\", target]\n if cli_args:\n args.extend(cli_args)\n\n cmd_line_args = _cmake_cmd_line_args(self._conanfile, self._generator)\n if build_tool_args:\n cmd_line_args.extend(build_tool_args)\n if cmd_line_args:\n args += ['--'] + cmd_line_args\n\n arg_list = ['\"{}\"'.format(bf), build_config, args_to_string(args)]\n arg_list = \" \".join(filter(None, arg_list))\n command = \"%s --build %s\" % (self._cmake_program, arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n self._conanfile.run(command, env=env)\n\n def build(self, build_type=None, target=None, cli_args=None, build_tool_args=None):\n self._build(build_type, target, cli_args, build_tool_args)\n\n def install(self, build_type=None, component=None):\n mkdir(self._conanfile, self._conanfile.package_folder)\n\n 
bt = build_type or self._conanfile.settings.get_safe(\"build_type\")\n if not bt:\n raise ConanException(\"build_type setting should be defined.\")\n is_multi = is_multi_configuration(self._generator)\n build_config = \"--config {}\".format(bt) if bt and is_multi else \"\"\n\n pkg_folder = '\"{}\"'.format(self._conanfile.package_folder.replace(\"\\\\\", \"/\"))\n build_folder = '\"{}\"'.format(self._conanfile.build_folder)\n arg_list = [\"--install\", build_folder, build_config, \"--prefix\", pkg_folder]\n if component:\n arg_list.extend([\"--component\", component])\n arg_list = \" \".join(filter(None, arg_list))\n command = \"%s %s\" % (self._cmake_program, arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n self._conanfile.run(command)\n\n def test(self, build_type=None, target=None, cli_args=None, build_tool_args=None, env=\"\"):\n if self._conanfile.conf.get(\"tools.build:skip_test\", check_type=bool):\n return\n if not target:\n is_multi = is_multi_configuration(self._generator)\n is_ninja = \"Ninja\" in self._generator\n target = \"RUN_TESTS\" if is_multi and not is_ninja else \"test\"\n\n # The default for ``test()`` is both the buildenv and the runenv\n env = [\"conanbuild\", \"conanrun\"] if env == \"\" else env\n self._build(build_type=build_type, target=target, cli_args=cli_args,\n build_tool_args=build_tool_args, env=env)\n", "path": "conan/tools/cmake/cmake.py"}], "after_files": [{"content": "import os\n\nfrom conan.tools.build import build_jobs\nfrom conan.tools.cmake.presets import load_cmake_presets\nfrom conan.tools.cmake.utils import is_multi_configuration\nfrom conan.tools.files import chdir, mkdir\nfrom conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg\nfrom conans.client.tools.oss import args_to_string\nfrom conans.errors import ConanException\n\n\ndef _validate_recipe(conanfile):\n forbidden_generators = [\"cmake\", \"cmake_multi\"]\n if any(it in conanfile.generators for it in forbidden_generators):\n raise ConanException(\"Usage of toolchain is only supported with 'cmake_find_package'\"\n \" or 'cmake_find_package_multi' generators\")\n\n\ndef _cmake_cmd_line_args(conanfile, generator):\n args = []\n if not generator:\n return args\n\n # Arguments related to parallel\n njobs = build_jobs(conanfile)\n if njobs and (\"Makefiles\" in generator or \"Ninja\" in generator) and \"NMake\" not in generator:\n args.append(\"-j{}\".format(njobs))\n\n maxcpucount = conanfile.conf.get(\"tools.microsoft.msbuild:max_cpu_count\", check_type=int)\n if maxcpucount and \"Visual Studio\" in generator:\n args.append(\"/m:{}\".format(njobs))\n\n # Arguments for verbosity\n if \"Visual Studio\" in generator:\n verbosity = msbuild_verbosity_cmd_line_arg(conanfile)\n if verbosity:\n args.append(verbosity)\n\n return args\n\n\nclass CMake(object):\n \"\"\" CMake helper to use together with the toolchain feature. It implements a very simple\n wrapper to call the cmake executable, but without passing compile flags, preprocessor\n definitions... all that is set by the toolchain. 
Only the generator and the CMAKE_TOOLCHAIN_FILE\n are passed to the command line, plus the ``--config Release`` for builds in multi-config\n \"\"\"\n\n def __init__(self, conanfile):\n _validate_recipe(conanfile)\n\n # Store a reference to useful data\n self._conanfile = conanfile\n\n cmake_presets = load_cmake_presets(conanfile.generators_folder)\n # Conan generated presets will have exactly 1 configurePresets, no more\n configure_preset = cmake_presets[\"configurePresets\"][0]\n\n self._generator = configure_preset[\"generator\"]\n self._toolchain_file = configure_preset.get(\"toolchainFile\")\n self._cache_variables = configure_preset[\"cacheVariables\"]\n\n self._cmake_program = \"cmake\" # Path to CMake should be handled by environment\n\n def configure(self, variables=None, build_script_folder=None, cli_args=None):\n cmakelist_folder = self._conanfile.source_folder\n if build_script_folder:\n cmakelist_folder = os.path.join(self._conanfile.source_folder, build_script_folder)\n\n build_folder = self._conanfile.build_folder\n generator_folder = self._conanfile.generators_folder\n\n mkdir(self._conanfile, build_folder)\n\n arg_list = [self._cmake_program]\n if self._generator:\n arg_list.append('-G \"{}\"'.format(self._generator))\n if self._toolchain_file:\n if os.path.isabs(self._toolchain_file):\n toolpath = self._toolchain_file\n else:\n toolpath = os.path.join(generator_folder, self._toolchain_file)\n arg_list.append('-DCMAKE_TOOLCHAIN_FILE=\"{}\"'.format(toolpath.replace(\"\\\\\", \"/\")))\n if self._conanfile.package_folder:\n pkg_folder = self._conanfile.package_folder.replace(\"\\\\\", \"/\")\n arg_list.append('-DCMAKE_INSTALL_PREFIX=\"{}\"'.format(pkg_folder))\n\n if not variables:\n variables = {}\n self._cache_variables.update(variables)\n\n arg_list.extend(['-D{}=\"{}\"'.format(k, v) for k, v in self._cache_variables.items()])\n arg_list.append('\"{}\"'.format(cmakelist_folder))\n\n if cli_args:\n arg_list.extend(cli_args)\n\n command = \" \".join(arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n with chdir(self, build_folder):\n self._conanfile.run(command)\n\n def _build(self, build_type=None, target=None, cli_args=None, build_tool_args=None, env=\"\"):\n bf = self._conanfile.build_folder\n is_multi = is_multi_configuration(self._generator)\n if build_type and not is_multi:\n self._conanfile.output.error(\"Don't specify 'build_type' at build time for \"\n \"single-config build systems\")\n\n bt = build_type or self._conanfile.settings.get_safe(\"build_type\")\n if not bt:\n raise ConanException(\"build_type setting should be defined.\")\n build_config = \"--config {}\".format(bt) if bt and is_multi else \"\"\n\n args = []\n if target is not None:\n args = [\"--target\", target]\n if cli_args:\n args.extend(cli_args)\n\n cmd_line_args = _cmake_cmd_line_args(self._conanfile, self._generator)\n if build_tool_args:\n cmd_line_args.extend(build_tool_args)\n if cmd_line_args:\n args += ['--'] + cmd_line_args\n\n arg_list = ['\"{}\"'.format(bf), build_config, args_to_string(args)]\n arg_list = \" \".join(filter(None, arg_list))\n command = \"%s --build %s\" % (self._cmake_program, arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n self._conanfile.run(command, env=env)\n\n def build(self, build_type=None, target=None, cli_args=None, build_tool_args=None):\n self._build(build_type, target, cli_args, build_tool_args)\n\n def install(self, build_type=None, component=None):\n mkdir(self._conanfile, self._conanfile.package_folder)\n\n 
bt = build_type or self._conanfile.settings.get_safe(\"build_type\")\n if not bt:\n raise ConanException(\"build_type setting should be defined.\")\n is_multi = is_multi_configuration(self._generator)\n build_config = \"--config {}\".format(bt) if bt and is_multi else \"\"\n\n pkg_folder = '\"{}\"'.format(self._conanfile.package_folder.replace(\"\\\\\", \"/\"))\n build_folder = '\"{}\"'.format(self._conanfile.build_folder)\n arg_list = [\"--install\", build_folder, build_config, \"--prefix\", pkg_folder]\n if component:\n arg_list.extend([\"--component\", component])\n\n do_strip = self._conanfile.conf.get(\"tools.cmake:install_strip\", check_type=bool)\n if do_strip:\n arg_list.append(\"--strip\")\n arg_list = \" \".join(filter(None, arg_list))\n command = \"%s %s\" % (self._cmake_program, arg_list)\n self._conanfile.output.info(\"CMake command: %s\" % command)\n self._conanfile.run(command)\n\n def test(self, build_type=None, target=None, cli_args=None, build_tool_args=None, env=\"\"):\n if self._conanfile.conf.get(\"tools.build:skip_test\", check_type=bool):\n return\n if not target:\n is_multi = is_multi_configuration(self._generator)\n is_ninja = \"Ninja\" in self._generator\n target = \"RUN_TESTS\" if is_multi and not is_ninja else \"test\"\n\n # The default for ``test()`` is both the buildenv and the runenv\n env = [\"conanbuild\", \"conanrun\"] if env == \"\" else env\n self._build(build_type=build_type, target=target, cli_args=cli_args,\n build_tool_args=build_tool_args, env=env)\n", "path": "conan/tools/cmake/cmake.py"}]} | 2,533 | 183 |
gh_patches_debug_10732 | rasdani/github-patches | git_diff | streamlink__streamlink-5376 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.mediavitrina: no playable streams found on player URLs
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
Since January, streamlink can't handle Gazprom-Media mediavitrina URLs like:
https://player.mediavitrina.ru/gpm_tnt_v2/tnt/vitrinatv_web/player.html
https://player.mediavitrina.ru/gpm_friday_v2/friday/vitrinatv_web/player.html
https://player.mediavitrina.ru/tv3_v2/tv3/vitrinatv_web/player.html
The reason is that mediavitrina can't open a required JSON file like
https://media.mediavitrina.ru/api/v3/gpm-tnt/playlist/tnt_as_array.json?application_id=&player_referer_hostname=vitrina.tv&config_checksum_sha256=&egress_version_id=1950111
What I know:
When I try to open this JSON file directly in the browser it fails, but when I specify a referer "https://player.mediavitrina.ru/" for the media.mediavitrina.ru URL using a Firefox extension it opens perfectly.
So I think the mediavitrina plugin does not send this referer when requesting the JSON from the media.mediavitrina.ru URL; it sends the referer only for player.mediavitrina.ru URLs.
please fix this issue
P.S.:
it would be future-proof if this plugin could just handle https://media.mediavitrina.ru/api/v1/gpm-tnt/playlist/tnt_as_array.json URLs directly
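Roughly the kind of URL shape I mean, as a sketch only (the regex below is just my guess from the example URLs above, not something I have tested broadly):

```python
import re

# illustrative pattern for the playlist JSON URLs shown above
MEDIA_JSON_URL = re.compile(
    r"https?://media\.mediavitrina\.ru/api/v\d+/[\w-]+/playlist/[\w-]+_as_array\.json"
)

print(bool(MEDIA_JSON_URL.match(
    "https://media.mediavitrina.ru/api/v1/gpm-tnt/playlist/tnt_as_array.json"
)))  # True
```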
### Debug log
```text
[cli][info] Found matching plugin mediavitrina for URL https://player.mediavitrina.ru/gpm_tnt_v2/tnt/vitrinatv_web/player.html
error: No playable streams found on this URL: https://player.mediavitrina.ru/gpm_tnt_v2/tnt/vitrinatv_web/player.html
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/mediavitrina.py`
Content:
```
1 """
2 $description Russian live streaming platform hosting various Russian live TV channels.
3 $url mediavitrina.ru
4 $type live
5 $region Russia
6 """
7
8 import logging
9 import re
10 from urllib.parse import urlparse
11
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream.hls import HLSStream
15 from streamlink.utils.url import update_qsd
16
17
18 log = logging.getLogger(__name__)
19
20
21 @pluginmatcher(re.compile(r"""https?://(?:www\.)?(?:
22 chetv
23 |
24 ctc(?:love)?
25 |
26 domashniy
27 )\.ru/(?:live|online)""", re.VERBOSE))
28 @pluginmatcher(re.compile(r"https?://player\.mediavitrina\.ru/.+/player\.html"))
29 class MediaVitrina(Plugin):
30 _re_url_json = re.compile(r"https://media\.mediavitrina\.ru/(?:proxy)?api/v3/\w+/playlist/[\w-]+_as_array\.json[^\"']+")
31
32 def _get_streams(self):
33 self.session.http.headers.update({"Referer": self.url})
34
35 p_netloc = urlparse(self.url).netloc
36 if p_netloc == "player.mediavitrina.ru":
37 # https://player.mediavitrina.ru/
38 url_player = self.url
39 elif p_netloc.endswith("ctc.ru"):
40 # https://ctc.ru/online/
41 url_player = self.session.http.get(
42 "https://ctc.ru/api/page/v1/online/",
43 schema=validate.Schema(
44 validate.parse_json(),
45 {"content": validate.all(
46 [dict],
47 validate.filter(lambda n: n.get("type") == "on-air"),
48 [{"onAirLink": validate.url(netloc="player.mediavitrina.ru")}],
49 validate.get((0, "onAirLink")),
50 )},
51 validate.get("content"),
52 ),
53 )
54 else:
55 # https://chetv.ru/online/
56 # https://ctclove.ru/online/
57 # https://domashniy.ru/online/
58 url_player = self.session.http.get(self.url, schema=validate.Schema(
59 validate.parse_html(),
60 validate.xml_xpath_string(".//iframe[starts-with(@src,'https://player.mediavitrina.ru/')]/@src"),
61 ), acceptable_status=(200, 403, 404))
62
63 if not url_player:
64 return
65
66 log.debug(f"url_player={url_player}")
67 script_data = self.session.http.get(url_player, schema=validate.Schema(
68 validate.parse_html(),
69 validate.xml_xpath_string(".//script[contains(text(),'media.mediavitrina.ru/')]/text()"),
70 ))
71 if not script_data:
72 log.debug("invalid script_data")
73 return
74
75 m = self._re_url_json.search(script_data)
76 if not m:
77 log.debug("invalid url_json")
78 return
79
80 url_json = m.group(0)
81 log.debug(f"url_json={url_json}")
82 url_json = re.sub(r"\{\{PLAYER_REFERER_HOSTNAME\}\}", "mediavitrina.ru", url_json)
83 url_json = re.sub(r"\{\{[A-Za-z_]+\}\}", "", url_json)
84
85 res_token = self.session.http.get(
86 "https://media.mediavitrina.ru/get_token",
87 schema=validate.Schema(
88 validate.parse_json(),
89 {"result": {"token": str}},
90 validate.get("result"),
91 ))
92 url = self.session.http.get(
93 update_qsd(url_json, qsd=res_token),
94 schema=validate.Schema(
95 validate.parse_json(),
96 {"hls": [validate.url()]},
97 validate.get(("hls", 0)),
98 ))
99
100 if not url:
101 return
102
103 if "georestrictions" in url:
104 log.error("Stream is geo-restricted")
105 return
106
107 return HLSStream.parse_variant_playlist(self.session, url, name_fmt="{pixels}_{bitrate}")
108
109
110 __plugin__ = MediaVitrina
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/mediavitrina.py b/src/streamlink/plugins/mediavitrina.py
--- a/src/streamlink/plugins/mediavitrina.py
+++ b/src/streamlink/plugins/mediavitrina.py
@@ -27,7 +27,7 @@
)\.ru/(?:live|online)""", re.VERBOSE))
@pluginmatcher(re.compile(r"https?://player\.mediavitrina\.ru/.+/player\.html"))
class MediaVitrina(Plugin):
- _re_url_json = re.compile(r"https://media\.mediavitrina\.ru/(?:proxy)?api/v3/\w+/playlist/[\w-]+_as_array\.json[^\"']+")
+ _re_url_json = re.compile(r"https://media\.mediavitrina\.ru/(?:proxy)?api/v3/[\w-]+/playlist/[\w-]+_as_array\.json[^\"']+")
def _get_streams(self):
self.session.http.headers.update({"Referer": self.url})
| {"golden_diff": "diff --git a/src/streamlink/plugins/mediavitrina.py b/src/streamlink/plugins/mediavitrina.py\n--- a/src/streamlink/plugins/mediavitrina.py\n+++ b/src/streamlink/plugins/mediavitrina.py\n@@ -27,7 +27,7 @@\n )\\.ru/(?:live|online)\"\"\", re.VERBOSE))\n @pluginmatcher(re.compile(r\"https?://player\\.mediavitrina\\.ru/.+/player\\.html\"))\n class MediaVitrina(Plugin):\n- _re_url_json = re.compile(r\"https://media\\.mediavitrina\\.ru/(?:proxy)?api/v3/\\w+/playlist/[\\w-]+_as_array\\.json[^\\\"']+\")\n+ _re_url_json = re.compile(r\"https://media\\.mediavitrina\\.ru/(?:proxy)?api/v3/[\\w-]+/playlist/[\\w-]+_as_array\\.json[^\\\"']+\")\n \n def _get_streams(self):\n self.session.http.headers.update({\"Referer\": self.url})\n", "issue": "plugins.mediavitrina: no playable streams found on player URLs\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nSince january streamlink can't handle gazprom-media mediavitrina urls like:\r\n\r\nhttps://player.mediavitrina.ru/gpm_tnt_v2/tnt/vitrinatv_web/player.html\r\nhttps://player.mediavitrina.ru/gpm_friday_v2/friday/vitrinatv_web/player.html\r\nhttps://player.mediavitrina.ru/tv3_v2/tv3/vitrinatv_web/player.html\r\n\r\nThe reason for that is beause mediavitrina can't open a required json file like\r\nhttps://media.mediavitrina.ru/api/v3/gpm-tnt/playlist/tnt_as_array.json?application_id=&player_referer_hostname=vitrina.tv&config_checksum_sha256=&egress_version_id=1950111\r\n\r\nwhat i know:\r\nwhen i try to open this json file directly in browser it fails but when i specify a referer \"https://player.mediavitrina.ru/\" for media.mediavitrina.ru url using firefox extension it opens perfectly\r\nso i think mediavitrina plugin does not send this referer requesting json from media.mediavitrina.ru URL, it sends referer only for player.mediavitrina.ru URLs\r\n\r\nplease fix this issue\r\nP.S.:\r\nit would be futureproof if this plugin just could handle https://media.mediavitrina.ru/api/v1/gpm-tnt/playlist/tnt_as_array.json URLs directly\n\n### Debug log\n\n```text\n[cli][info] Found matching plugin mediavitrina for URL https://player.mediavitri\r\nna.ru/gpm_tnt_v2/tnt/vitrinatv_web/player.html\r\nerror: No playable streams found on this URL: https://player.mediavitrina.ru/gpm\r\n_tnt_v2/tnt/vitrinatv_web/player.html\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Russian live streaming platform hosting various Russian live TV channels.\n$url mediavitrina.ru\n$type live\n$region Russia\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.url import update_qsd\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"https?://(?:www\\.)?(?:\n chetv\n |\n ctc(?:love)?\n |\n domashniy\n)\\.ru/(?:live|online)\"\"\", 
re.VERBOSE))\n@pluginmatcher(re.compile(r\"https?://player\\.mediavitrina\\.ru/.+/player\\.html\"))\nclass MediaVitrina(Plugin):\n _re_url_json = re.compile(r\"https://media\\.mediavitrina\\.ru/(?:proxy)?api/v3/\\w+/playlist/[\\w-]+_as_array\\.json[^\\\"']+\")\n\n def _get_streams(self):\n self.session.http.headers.update({\"Referer\": self.url})\n\n p_netloc = urlparse(self.url).netloc\n if p_netloc == \"player.mediavitrina.ru\":\n # https://player.mediavitrina.ru/\n url_player = self.url\n elif p_netloc.endswith(\"ctc.ru\"):\n # https://ctc.ru/online/\n url_player = self.session.http.get(\n \"https://ctc.ru/api/page/v1/online/\",\n schema=validate.Schema(\n validate.parse_json(),\n {\"content\": validate.all(\n [dict],\n validate.filter(lambda n: n.get(\"type\") == \"on-air\"),\n [{\"onAirLink\": validate.url(netloc=\"player.mediavitrina.ru\")}],\n validate.get((0, \"onAirLink\")),\n )},\n validate.get(\"content\"),\n ),\n )\n else:\n # https://chetv.ru/online/\n # https://ctclove.ru/online/\n # https://domashniy.ru/online/\n url_player = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//iframe[starts-with(@src,'https://player.mediavitrina.ru/')]/@src\"),\n ), acceptable_status=(200, 403, 404))\n\n if not url_player:\n return\n\n log.debug(f\"url_player={url_player}\")\n script_data = self.session.http.get(url_player, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'media.mediavitrina.ru/')]/text()\"),\n ))\n if not script_data:\n log.debug(\"invalid script_data\")\n return\n\n m = self._re_url_json.search(script_data)\n if not m:\n log.debug(\"invalid url_json\")\n return\n\n url_json = m.group(0)\n log.debug(f\"url_json={url_json}\")\n url_json = re.sub(r\"\\{\\{PLAYER_REFERER_HOSTNAME\\}\\}\", \"mediavitrina.ru\", url_json)\n url_json = re.sub(r\"\\{\\{[A-Za-z_]+\\}\\}\", \"\", url_json)\n\n res_token = self.session.http.get(\n \"https://media.mediavitrina.ru/get_token\",\n schema=validate.Schema(\n validate.parse_json(),\n {\"result\": {\"token\": str}},\n validate.get(\"result\"),\n ))\n url = self.session.http.get(\n update_qsd(url_json, qsd=res_token),\n schema=validate.Schema(\n validate.parse_json(),\n {\"hls\": [validate.url()]},\n validate.get((\"hls\", 0)),\n ))\n\n if not url:\n return\n\n if \"georestrictions\" in url:\n log.error(\"Stream is geo-restricted\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, url, name_fmt=\"{pixels}_{bitrate}\")\n\n\n__plugin__ = MediaVitrina\n", "path": "src/streamlink/plugins/mediavitrina.py"}], "after_files": [{"content": "\"\"\"\n$description Russian live streaming platform hosting various Russian live TV channels.\n$url mediavitrina.ru\n$type live\n$region Russia\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.url import update_qsd\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"https?://(?:www\\.)?(?:\n chetv\n |\n ctc(?:love)?\n |\n domashniy\n)\\.ru/(?:live|online)\"\"\", re.VERBOSE))\n@pluginmatcher(re.compile(r\"https?://player\\.mediavitrina\\.ru/.+/player\\.html\"))\nclass MediaVitrina(Plugin):\n _re_url_json = re.compile(r\"https://media\\.mediavitrina\\.ru/(?:proxy)?api/v3/[\\w-]+/playlist/[\\w-]+_as_array\\.json[^\\\"']+\")\n\n def _get_streams(self):\n 
self.session.http.headers.update({\"Referer\": self.url})\n\n p_netloc = urlparse(self.url).netloc\n if p_netloc == \"player.mediavitrina.ru\":\n # https://player.mediavitrina.ru/\n url_player = self.url\n elif p_netloc.endswith(\"ctc.ru\"):\n # https://ctc.ru/online/\n url_player = self.session.http.get(\n \"https://ctc.ru/api/page/v1/online/\",\n schema=validate.Schema(\n validate.parse_json(),\n {\"content\": validate.all(\n [dict],\n validate.filter(lambda n: n.get(\"type\") == \"on-air\"),\n [{\"onAirLink\": validate.url(netloc=\"player.mediavitrina.ru\")}],\n validate.get((0, \"onAirLink\")),\n )},\n validate.get(\"content\"),\n ),\n )\n else:\n # https://chetv.ru/online/\n # https://ctclove.ru/online/\n # https://domashniy.ru/online/\n url_player = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//iframe[starts-with(@src,'https://player.mediavitrina.ru/')]/@src\"),\n ), acceptable_status=(200, 403, 404))\n\n if not url_player:\n return\n\n log.debug(f\"url_player={url_player}\")\n script_data = self.session.http.get(url_player, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'media.mediavitrina.ru/')]/text()\"),\n ))\n if not script_data:\n log.debug(\"invalid script_data\")\n return\n\n m = self._re_url_json.search(script_data)\n if not m:\n log.debug(\"invalid url_json\")\n return\n\n url_json = m.group(0)\n log.debug(f\"url_json={url_json}\")\n url_json = re.sub(r\"\\{\\{PLAYER_REFERER_HOSTNAME\\}\\}\", \"mediavitrina.ru\", url_json)\n url_json = re.sub(r\"\\{\\{[A-Za-z_]+\\}\\}\", \"\", url_json)\n\n res_token = self.session.http.get(\n \"https://media.mediavitrina.ru/get_token\",\n schema=validate.Schema(\n validate.parse_json(),\n {\"result\": {\"token\": str}},\n validate.get(\"result\"),\n ))\n url = self.session.http.get(\n update_qsd(url_json, qsd=res_token),\n schema=validate.Schema(\n validate.parse_json(),\n {\"hls\": [validate.url()]},\n validate.get((\"hls\", 0)),\n ))\n\n if not url:\n return\n\n if \"georestrictions\" in url:\n log.error(\"Stream is geo-restricted\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, url, name_fmt=\"{pixels}_{bitrate}\")\n\n\n__plugin__ = MediaVitrina\n", "path": "src/streamlink/plugins/mediavitrina.py"}]} | 1,918 | 224 |
gh_patches_debug_5017 | rasdani/github-patches | git_diff | kserve__kserve-1463 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Storage initializer download tar.gz or zip from uri with query params fails
/kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
1. actually i am using Seldon with KFServing Storage Initializer
2. use v0.5.0 storage initializer image
3. specify a storage uri with query params, e.g. http://foo.bar/model.tar.gz?Signature=kewliewu
4. the storage initializer raises an error, expecting `Content-Type: application/octet-stream`
5. the download from the uri fails
**What did you expect to happen:**
1. the *.tar.gz file is downloaded and extracted
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
The bug is caused by passing the full URL, including its query params, to `mimetypes.guess_type`, which then cannot guess the type:
```python
# uri = http://foo.bar/model.tar.gz?Signature=k32k32
# mimetype=None, encoding=None
mimetype, encoding = mimetypes.guess_type(uri)
```
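
For reference, a quick standalone check (an illustration only, not part of the KFServing code) shows that guessing on the path component alone recovers the expected values; the expected output is noted in the comments:

```python
from urllib.parse import urlparse
import mimetypes

uri = "http://foo.bar/model.tar.gz?Signature=k32k32"

# The query string hides the ".tar.gz" suffix from the guesser.
print(mimetypes.guess_type(uri))                 # (None, None)

# Guessing on the path component only recovers both values.
print(mimetypes.guess_type(urlparse(uri).path))  # ('application/x-tar', 'gzip')
```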
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version:
- Kubeflow version:
- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kfserving/kfserving/storage.py`
Content:
```
1 # Copyright 2020 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import logging
17 import tempfile
18 import mimetypes
19 import os
20 import re
21 import json
22 import shutil
23 import tarfile
24 import zipfile
25 import gzip
26 from urllib.parse import urlparse
27 import requests
28 from azure.storage.blob import BlockBlobService
29 from google.auth import exceptions
30 from google.cloud import storage
31 from minio import Minio
32 from kfserving.kfmodel_repository import MODEL_MOUNT_DIRS
33
34 _GCS_PREFIX = "gs://"
35 _S3_PREFIX = "s3://"
36 _BLOB_RE = "https://(.+?).blob.core.windows.net/(.+)"
37 _LOCAL_PREFIX = "file://"
38 _URI_RE = "https?://(.+)/(.+)"
39 _HTTP_PREFIX = "http(s)://"
40 _HEADERS_SUFFIX = "-headers"
41
42 class Storage(object): # pylint: disable=too-few-public-methods
43 @staticmethod
44 def download(uri: str, out_dir: str = None) -> str:
45 logging.info("Copying contents of %s to local", uri)
46
47 is_local = False
48 if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri):
49 is_local = True
50
51 if out_dir is None:
52 if is_local:
53 # noop if out_dir is not set and the path is local
54 return Storage._download_local(uri)
55 out_dir = tempfile.mkdtemp()
56 elif not os.path.exists(out_dir):
57 os.mkdir(out_dir)
58
59 if uri.startswith(_GCS_PREFIX):
60 Storage._download_gcs(uri, out_dir)
61 elif uri.startswith(_S3_PREFIX):
62 Storage._download_s3(uri, out_dir)
63 elif re.search(_BLOB_RE, uri):
64 Storage._download_blob(uri, out_dir)
65 elif is_local:
66 return Storage._download_local(uri, out_dir)
67 elif re.search(_URI_RE, uri):
68 return Storage._download_from_uri(uri, out_dir)
69 elif uri.startswith(MODEL_MOUNT_DIRS):
70 # Don't need to download models if this InferenceService is running in the multi-model
71 # serving mode. The model agent will download models.
72 return out_dir
73 else:
74 raise Exception("Cannot recognize storage type for " + uri +
75 "\n'%s', '%s', '%s', and '%s' are the current available storage type." %
76 (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX, _HTTP_PREFIX))
77
78 logging.info("Successfully copied %s to %s", uri, out_dir)
79 return out_dir
80
81 @staticmethod
82 def _download_s3(uri, temp_dir: str):
83 client = Storage._create_minio_client()
84 bucket_args = uri.replace(_S3_PREFIX, "", 1).split("/", 1)
85 bucket_name = bucket_args[0]
86 bucket_path = bucket_args[1] if len(bucket_args) > 1 else ""
87 objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True)
88 count = 0
89 for obj in objects:
90 # Replace any prefix from the object key with temp_dir
91 subdir_object_key = obj.object_name.replace(bucket_path, "", 1).strip("/")
92 # fget_object handles directory creation if does not exist
93 if not obj.is_dir:
94 if subdir_object_key == "":
95 subdir_object_key = obj.object_name
96 client.fget_object(bucket_name, obj.object_name,
97 os.path.join(temp_dir, subdir_object_key))
98 count = count + 1
99 if count == 0:
100 raise RuntimeError("Failed to fetch model. \
101 The path or model %s does not exist." % (uri))
102
103 @staticmethod
104 def _download_gcs(uri, temp_dir: str):
105 try:
106 storage_client = storage.Client()
107 except exceptions.DefaultCredentialsError:
108 storage_client = storage.Client.create_anonymous_client()
109 bucket_args = uri.replace(_GCS_PREFIX, "", 1).split("/", 1)
110 bucket_name = bucket_args[0]
111 bucket_path = bucket_args[1] if len(bucket_args) > 1 else ""
112 bucket = storage_client.bucket(bucket_name)
113 prefix = bucket_path
114 if not prefix.endswith("/"):
115 prefix = prefix + "/"
116 blobs = bucket.list_blobs(prefix=prefix)
117 count = 0
118 for blob in blobs:
119 # Replace any prefix from the object key with temp_dir
120 subdir_object_key = blob.name.replace(bucket_path, "", 1).strip("/")
121
122 # Create necessary subdirectory to store the object locally
123 if "/" in subdir_object_key:
124 local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit("/", 1)[0])
125 if not os.path.isdir(local_object_dir):
126 os.makedirs(local_object_dir, exist_ok=True)
127 if subdir_object_key.strip() != "":
128 dest_path = os.path.join(temp_dir, subdir_object_key)
129 logging.info("Downloading: %s", dest_path)
130 blob.download_to_filename(dest_path)
131 count = count + 1
132 if count == 0:
133 raise RuntimeError("Failed to fetch model. \
134 The path or model %s does not exist." % (uri))
135
136 @staticmethod
137 def _download_blob(uri, out_dir: str): # pylint: disable=too-many-locals
138 match = re.search(_BLOB_RE, uri)
139 account_name = match.group(1)
140 storage_url = match.group(2)
141 container_name, prefix = storage_url.split("/", 1)
142
143 logging.info("Connecting to BLOB account: [%s], container: [%s], prefix: [%s]",
144 account_name,
145 container_name,
146 prefix)
147 try:
148 block_blob_service = BlockBlobService(account_name=account_name)
149 blobs = block_blob_service.list_blobs(container_name, prefix=prefix)
150 except Exception: # pylint: disable=broad-except
151 token = Storage._get_azure_storage_token()
152 if token is None:
153 logging.warning("Azure credentials not found, retrying anonymous access")
154 block_blob_service = BlockBlobService(account_name=account_name, token_credential=token)
155 blobs = block_blob_service.list_blobs(container_name, prefix=prefix)
156 count = 0
157 for blob in blobs:
158 dest_path = os.path.join(out_dir, blob.name)
159 if "/" in blob.name:
160 head, tail = os.path.split(blob.name)
161 if prefix is not None:
162 head = head[len(prefix):]
163 if head.startswith('/'):
164 head = head[1:]
165 dir_path = os.path.join(out_dir, head)
166 dest_path = os.path.join(dir_path, tail)
167 if not os.path.isdir(dir_path):
168 os.makedirs(dir_path)
169
170 logging.info("Downloading: %s to %s", blob.name, dest_path)
171 block_blob_service.get_blob_to_path(container_name, blob.name, dest_path)
172 count = count + 1
173 if count == 0:
174 raise RuntimeError("Failed to fetch model. \
175 The path or model %s does not exist." % (uri))
176
177 @staticmethod
178 def _get_azure_storage_token():
179 tenant_id = os.getenv("AZ_TENANT_ID", "")
180 client_id = os.getenv("AZ_CLIENT_ID", "")
181 client_secret = os.getenv("AZ_CLIENT_SECRET", "")
182 subscription_id = os.getenv("AZ_SUBSCRIPTION_ID", "")
183
184 if tenant_id == "" or client_id == "" or client_secret == "" or subscription_id == "":
185 return None
186
187 # note the SP must have "Storage Blob Data Owner" perms for this to work
188 import adal
189 from azure.storage.common import TokenCredential
190
191 authority_url = "https://login.microsoftonline.com/" + tenant_id
192
193 context = adal.AuthenticationContext(authority_url)
194
195 token = context.acquire_token_with_client_credentials(
196 "https://storage.azure.com/",
197 client_id,
198 client_secret)
199
200 token_credential = TokenCredential(token["accessToken"])
201
202 logging.info("Retrieved SP token credential for client_id: %s", client_id)
203
204 return token_credential
205
206 @staticmethod
207 def _download_local(uri, out_dir=None):
208 local_path = uri.replace(_LOCAL_PREFIX, "", 1)
209 if not os.path.exists(local_path):
210 raise RuntimeError("Local path %s does not exist." % (uri))
211
212 if out_dir is None:
213 return local_path
214 elif not os.path.isdir(out_dir):
215 os.makedirs(out_dir)
216
217 if os.path.isdir(local_path):
218 local_path = os.path.join(local_path, "*")
219
220 for src in glob.glob(local_path):
221 _, tail = os.path.split(src)
222 dest_path = os.path.join(out_dir, tail)
223 logging.info("Linking: %s to %s", src, dest_path)
224 os.symlink(src, dest_path)
225 return out_dir
226
227 @staticmethod
228 def _download_from_uri(uri, out_dir=None):
229 url = urlparse(uri)
230 filename = os.path.basename(url.path)
231 mimetype, encoding = mimetypes.guess_type(uri)
232 local_path = os.path.join(out_dir, filename)
233
234 if filename == '':
235 raise ValueError('No filename contained in URI: %s' % (uri))
236
237 # Get header information from host url
238 headers = {}
239 host_uri = url.hostname
240
241 headers_json = os.getenv(host_uri + _HEADERS_SUFFIX, "{}")
242 headers = json.loads(headers_json)
243
244 with requests.get(uri, stream=True, headers=headers) as response:
245 if response.status_code != 200:
246 raise RuntimeError("URI: %s returned a %s response code." % (uri, response.status_code))
247 if mimetype == 'application/zip' and not response.headers.get('Content-Type', '').startswith('application/zip'):
248 raise RuntimeError("URI: %s did not respond with \'Content-Type\': \'application/zip\'" % (uri))
249 if mimetype == 'application/x-tar' and not response.headers.get('Content-Type', '').startswith('application/x-tar'):
250 raise RuntimeError("URI: %s did not respond with \'Content-Type\': \'application/x-tar\'" % (uri))
251 if (mimetype != 'application/zip' and mimetype != 'application/x-tar') and not response.headers.get('Content-Type', '').startswith('application/octet-stream'):
252 raise RuntimeError("URI: %s did not respond with \'Content-Type\': \'application/octet-stream\'" % (uri))
253
254 if encoding == 'gzip':
255 stream = gzip.GzipFile(fileobj=response.raw)
256 local_path = os.path.join(out_dir, f'{filename}.tar')
257 else:
258 stream = response.raw
259 with open(local_path, 'wb') as out:
260 shutil.copyfileobj(stream, out)
261
262 if mimetype in ["application/x-tar", "application/zip"]:
263 if mimetype == "application/x-tar":
264 archive = tarfile.open(local_path, 'r', encoding='utf-8')
265 else:
266 archive = zipfile.ZipFile(local_path, 'r')
267 archive.extractall(out_dir)
268 archive.close()
269 os.remove(local_path)
270
271 return out_dir
272
273 @staticmethod
274 def _create_minio_client():
275 # Adding prefixing "http" in urlparse is necessary for it to be the netloc
276 url = urlparse(os.getenv("AWS_ENDPOINT_URL", "http://s3.amazonaws.com"))
277 use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv("S3_USE_HTTPS", "true"))
278 return Minio(url.netloc,
279 access_key=os.getenv("AWS_ACCESS_KEY_ID", ""),
280 secret_key=os.getenv("AWS_SECRET_ACCESS_KEY", ""),
281 region=os.getenv("AWS_REGION", ""),
282 secure=use_ssl)
283
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/kfserving/kfserving/storage.py b/python/kfserving/kfserving/storage.py
--- a/python/kfserving/kfserving/storage.py
+++ b/python/kfserving/kfserving/storage.py
@@ -228,7 +228,7 @@
def _download_from_uri(uri, out_dir=None):
url = urlparse(uri)
filename = os.path.basename(url.path)
- mimetype, encoding = mimetypes.guess_type(uri)
+ mimetype, encoding = mimetypes.guess_type(url.path)
local_path = os.path.join(out_dir, filename)
if filename == '':
| {"golden_diff": "diff --git a/python/kfserving/kfserving/storage.py b/python/kfserving/kfserving/storage.py\n--- a/python/kfserving/kfserving/storage.py\n+++ b/python/kfserving/kfserving/storage.py\n@@ -228,7 +228,7 @@\n def _download_from_uri(uri, out_dir=None):\n url = urlparse(uri)\n filename = os.path.basename(url.path)\n- mimetype, encoding = mimetypes.guess_type(uri)\n+ mimetype, encoding = mimetypes.guess_type(url.path)\n local_path = os.path.join(out_dir, filename)\n \n if filename == '':\n", "issue": "Storage initializer download tar.gz or zip from uri with query params fails\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\n[A clear and concise description of what the bug is.]\r\n\r\n1. actually i am using Seldon with KFServing Storage Initializer\r\n2. use v0.5.0 storage initializer image\r\n3. specify storage uri with query params, eg, http://foo.bar/model.tar.gz?Signature=kewliewu\r\n4. the storage initializer will raise error , expecting Content-Type:application/octet-stream\r\n5. fail to download from uri \r\n\r\n**What did you expect to happen:**\r\n\r\n1. the *.tar.gz file is download and extracted\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\nThe bug is caused by using the full url with query params to guess\r\n```python\r\n# uri = http://foo.bar/model.tar.gz?Signature=k32k32\r\n# mimetype=None, encoding=None\r\nmimetype, encoding = mimetypes.guess_type(uri) \r\n```\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version:\r\n- Kubeflow version:\r\n- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. 
from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2020 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport glob\nimport logging\nimport tempfile\nimport mimetypes\nimport os\nimport re\nimport json\nimport shutil\nimport tarfile\nimport zipfile\nimport gzip\nfrom urllib.parse import urlparse\nimport requests \nfrom azure.storage.blob import BlockBlobService\nfrom google.auth import exceptions\nfrom google.cloud import storage\nfrom minio import Minio\nfrom kfserving.kfmodel_repository import MODEL_MOUNT_DIRS\n\n_GCS_PREFIX = \"gs://\"\n_S3_PREFIX = \"s3://\"\n_BLOB_RE = \"https://(.+?).blob.core.windows.net/(.+)\"\n_LOCAL_PREFIX = \"file://\"\n_URI_RE = \"https?://(.+)/(.+)\"\n_HTTP_PREFIX = \"http(s)://\"\n_HEADERS_SUFFIX = \"-headers\"\n\nclass Storage(object): # pylint: disable=too-few-public-methods\n @staticmethod\n def download(uri: str, out_dir: str = None) -> str:\n logging.info(\"Copying contents of %s to local\", uri)\n\n is_local = False\n if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri):\n is_local = True\n\n if out_dir is None:\n if is_local:\n # noop if out_dir is not set and the path is local\n return Storage._download_local(uri)\n out_dir = tempfile.mkdtemp()\n elif not os.path.exists(out_dir):\n os.mkdir(out_dir)\n\n if uri.startswith(_GCS_PREFIX):\n Storage._download_gcs(uri, out_dir)\n elif uri.startswith(_S3_PREFIX):\n Storage._download_s3(uri, out_dir)\n elif re.search(_BLOB_RE, uri):\n Storage._download_blob(uri, out_dir)\n elif is_local:\n return Storage._download_local(uri, out_dir)\n elif re.search(_URI_RE, uri):\n return Storage._download_from_uri(uri, out_dir)\n elif uri.startswith(MODEL_MOUNT_DIRS):\n # Don't need to download models if this InferenceService is running in the multi-model\n # serving mode. The model agent will download models.\n return out_dir\n else:\n raise Exception(\"Cannot recognize storage type for \" + uri +\n \"\\n'%s', '%s', '%s', and '%s' are the current available storage type.\" %\n (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX, _HTTP_PREFIX))\n\n logging.info(\"Successfully copied %s to %s\", uri, out_dir)\n return out_dir\n\n @staticmethod\n def _download_s3(uri, temp_dir: str):\n client = Storage._create_minio_client()\n bucket_args = uri.replace(_S3_PREFIX, \"\", 1).split(\"/\", 1)\n bucket_name = bucket_args[0]\n bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True)\n count = 0\n for obj in objects:\n # Replace any prefix from the object key with temp_dir\n subdir_object_key = obj.object_name.replace(bucket_path, \"\", 1).strip(\"/\")\n # fget_object handles directory creation if does not exist\n if not obj.is_dir:\n if subdir_object_key == \"\":\n subdir_object_key = obj.object_name\n client.fget_object(bucket_name, obj.object_name,\n os.path.join(temp_dir, subdir_object_key))\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. 
\\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _download_gcs(uri, temp_dir: str):\n try:\n storage_client = storage.Client()\n except exceptions.DefaultCredentialsError:\n storage_client = storage.Client.create_anonymous_client()\n bucket_args = uri.replace(_GCS_PREFIX, \"\", 1).split(\"/\", 1)\n bucket_name = bucket_args[0]\n bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n bucket = storage_client.bucket(bucket_name)\n prefix = bucket_path\n if not prefix.endswith(\"/\"):\n prefix = prefix + \"/\"\n blobs = bucket.list_blobs(prefix=prefix)\n count = 0\n for blob in blobs:\n # Replace any prefix from the object key with temp_dir\n subdir_object_key = blob.name.replace(bucket_path, \"\", 1).strip(\"/\")\n\n # Create necessary subdirectory to store the object locally\n if \"/\" in subdir_object_key:\n local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit(\"/\", 1)[0])\n if not os.path.isdir(local_object_dir):\n os.makedirs(local_object_dir, exist_ok=True)\n if subdir_object_key.strip() != \"\":\n dest_path = os.path.join(temp_dir, subdir_object_key)\n logging.info(\"Downloading: %s\", dest_path)\n blob.download_to_filename(dest_path)\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. \\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _download_blob(uri, out_dir: str): # pylint: disable=too-many-locals\n match = re.search(_BLOB_RE, uri)\n account_name = match.group(1)\n storage_url = match.group(2)\n container_name, prefix = storage_url.split(\"/\", 1)\n\n logging.info(\"Connecting to BLOB account: [%s], container: [%s], prefix: [%s]\",\n account_name,\n container_name,\n prefix)\n try:\n block_blob_service = BlockBlobService(account_name=account_name)\n blobs = block_blob_service.list_blobs(container_name, prefix=prefix)\n except Exception: # pylint: disable=broad-except\n token = Storage._get_azure_storage_token()\n if token is None:\n logging.warning(\"Azure credentials not found, retrying anonymous access\")\n block_blob_service = BlockBlobService(account_name=account_name, token_credential=token)\n blobs = block_blob_service.list_blobs(container_name, prefix=prefix)\n count = 0\n for blob in blobs:\n dest_path = os.path.join(out_dir, blob.name)\n if \"/\" in blob.name:\n head, tail = os.path.split(blob.name)\n if prefix is not None:\n head = head[len(prefix):]\n if head.startswith('/'):\n head = head[1:]\n dir_path = os.path.join(out_dir, head)\n dest_path = os.path.join(dir_path, tail)\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n logging.info(\"Downloading: %s to %s\", blob.name, dest_path)\n block_blob_service.get_blob_to_path(container_name, blob.name, dest_path)\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. 
\\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _get_azure_storage_token():\n tenant_id = os.getenv(\"AZ_TENANT_ID\", \"\")\n client_id = os.getenv(\"AZ_CLIENT_ID\", \"\")\n client_secret = os.getenv(\"AZ_CLIENT_SECRET\", \"\")\n subscription_id = os.getenv(\"AZ_SUBSCRIPTION_ID\", \"\")\n\n if tenant_id == \"\" or client_id == \"\" or client_secret == \"\" or subscription_id == \"\":\n return None\n\n # note the SP must have \"Storage Blob Data Owner\" perms for this to work\n import adal\n from azure.storage.common import TokenCredential\n\n authority_url = \"https://login.microsoftonline.com/\" + tenant_id\n\n context = adal.AuthenticationContext(authority_url)\n\n token = context.acquire_token_with_client_credentials(\n \"https://storage.azure.com/\",\n client_id,\n client_secret)\n\n token_credential = TokenCredential(token[\"accessToken\"])\n\n logging.info(\"Retrieved SP token credential for client_id: %s\", client_id)\n\n return token_credential\n\n @staticmethod\n def _download_local(uri, out_dir=None):\n local_path = uri.replace(_LOCAL_PREFIX, \"\", 1)\n if not os.path.exists(local_path):\n raise RuntimeError(\"Local path %s does not exist.\" % (uri))\n\n if out_dir is None:\n return local_path\n elif not os.path.isdir(out_dir):\n os.makedirs(out_dir)\n\n if os.path.isdir(local_path):\n local_path = os.path.join(local_path, \"*\")\n\n for src in glob.glob(local_path):\n _, tail = os.path.split(src)\n dest_path = os.path.join(out_dir, tail)\n logging.info(\"Linking: %s to %s\", src, dest_path)\n os.symlink(src, dest_path)\n return out_dir\n\n @staticmethod\n def _download_from_uri(uri, out_dir=None):\n url = urlparse(uri)\n filename = os.path.basename(url.path)\n mimetype, encoding = mimetypes.guess_type(uri)\n local_path = os.path.join(out_dir, filename)\n\n if filename == '':\n raise ValueError('No filename contained in URI: %s' % (uri))\n\n # Get header information from host url\n headers = {}\n host_uri = url.hostname\n\n headers_json = os.getenv(host_uri + _HEADERS_SUFFIX, \"{}\")\n headers = json.loads(headers_json)\n\n with requests.get(uri, stream=True, headers=headers) as response:\n if response.status_code != 200:\n raise RuntimeError(\"URI: %s returned a %s response code.\" % (uri, response.status_code))\n if mimetype == 'application/zip' and not response.headers.get('Content-Type', '').startswith('application/zip'):\n raise RuntimeError(\"URI: %s did not respond with \\'Content-Type\\': \\'application/zip\\'\" % (uri))\n if mimetype == 'application/x-tar' and not response.headers.get('Content-Type', '').startswith('application/x-tar'):\n raise RuntimeError(\"URI: %s did not respond with \\'Content-Type\\': \\'application/x-tar\\'\" % (uri))\n if (mimetype != 'application/zip' and mimetype != 'application/x-tar') and not response.headers.get('Content-Type', '').startswith('application/octet-stream'):\n raise RuntimeError(\"URI: %s did not respond with \\'Content-Type\\': \\'application/octet-stream\\'\" % (uri))\n\n if encoding == 'gzip':\n stream = gzip.GzipFile(fileobj=response.raw)\n local_path = os.path.join(out_dir, f'{filename}.tar')\n else:\n stream = response.raw\n with open(local_path, 'wb') as out:\n shutil.copyfileobj(stream, out)\n \n if mimetype in [\"application/x-tar\", \"application/zip\"]:\n if mimetype == \"application/x-tar\":\n archive = tarfile.open(local_path, 'r', encoding='utf-8')\n else:\n archive = zipfile.ZipFile(local_path, 'r')\n archive.extractall(out_dir)\n archive.close()\n os.remove(local_path)\n\n return 
out_dir\n\n @staticmethod\n def _create_minio_client():\n # Adding prefixing \"http\" in urlparse is necessary for it to be the netloc\n url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"http://s3.amazonaws.com\"))\n use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv(\"S3_USE_HTTPS\", \"true\"))\n return Minio(url.netloc,\n access_key=os.getenv(\"AWS_ACCESS_KEY_ID\", \"\"),\n secret_key=os.getenv(\"AWS_SECRET_ACCESS_KEY\", \"\"),\n region=os.getenv(\"AWS_REGION\", \"\"),\n secure=use_ssl)\n", "path": "python/kfserving/kfserving/storage.py"}], "after_files": [{"content": "# Copyright 2020 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport glob\nimport logging\nimport tempfile\nimport mimetypes\nimport os\nimport re\nimport json\nimport shutil\nimport tarfile\nimport zipfile\nimport gzip\nfrom urllib.parse import urlparse\nimport requests \nfrom azure.storage.blob import BlockBlobService\nfrom google.auth import exceptions\nfrom google.cloud import storage\nfrom minio import Minio\nfrom kfserving.kfmodel_repository import MODEL_MOUNT_DIRS\n\n_GCS_PREFIX = \"gs://\"\n_S3_PREFIX = \"s3://\"\n_BLOB_RE = \"https://(.+?).blob.core.windows.net/(.+)\"\n_LOCAL_PREFIX = \"file://\"\n_URI_RE = \"https?://(.+)/(.+)\"\n_HTTP_PREFIX = \"http(s)://\"\n_HEADERS_SUFFIX = \"-headers\"\n\nclass Storage(object): # pylint: disable=too-few-public-methods\n @staticmethod\n def download(uri: str, out_dir: str = None) -> str:\n logging.info(\"Copying contents of %s to local\", uri)\n\n is_local = False\n if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri):\n is_local = True\n\n if out_dir is None:\n if is_local:\n # noop if out_dir is not set and the path is local\n return Storage._download_local(uri)\n out_dir = tempfile.mkdtemp()\n elif not os.path.exists(out_dir):\n os.mkdir(out_dir)\n\n if uri.startswith(_GCS_PREFIX):\n Storage._download_gcs(uri, out_dir)\n elif uri.startswith(_S3_PREFIX):\n Storage._download_s3(uri, out_dir)\n elif re.search(_BLOB_RE, uri):\n Storage._download_blob(uri, out_dir)\n elif is_local:\n return Storage._download_local(uri, out_dir)\n elif re.search(_URI_RE, uri):\n return Storage._download_from_uri(uri, out_dir)\n elif uri.startswith(MODEL_MOUNT_DIRS):\n # Don't need to download models if this InferenceService is running in the multi-model\n # serving mode. 
The model agent will download models.\n return out_dir\n else:\n raise Exception(\"Cannot recognize storage type for \" + uri +\n \"\\n'%s', '%s', '%s', and '%s' are the current available storage type.\" %\n (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX, _HTTP_PREFIX))\n\n logging.info(\"Successfully copied %s to %s\", uri, out_dir)\n return out_dir\n\n @staticmethod\n def _download_s3(uri, temp_dir: str):\n client = Storage._create_minio_client()\n bucket_args = uri.replace(_S3_PREFIX, \"\", 1).split(\"/\", 1)\n bucket_name = bucket_args[0]\n bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True)\n count = 0\n for obj in objects:\n # Replace any prefix from the object key with temp_dir\n subdir_object_key = obj.object_name.replace(bucket_path, \"\", 1).strip(\"/\")\n # fget_object handles directory creation if does not exist\n if not obj.is_dir:\n if subdir_object_key == \"\":\n subdir_object_key = obj.object_name\n client.fget_object(bucket_name, obj.object_name,\n os.path.join(temp_dir, subdir_object_key))\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. \\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _download_gcs(uri, temp_dir: str):\n try:\n storage_client = storage.Client()\n except exceptions.DefaultCredentialsError:\n storage_client = storage.Client.create_anonymous_client()\n bucket_args = uri.replace(_GCS_PREFIX, \"\", 1).split(\"/\", 1)\n bucket_name = bucket_args[0]\n bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n bucket = storage_client.bucket(bucket_name)\n prefix = bucket_path\n if not prefix.endswith(\"/\"):\n prefix = prefix + \"/\"\n blobs = bucket.list_blobs(prefix=prefix)\n count = 0\n for blob in blobs:\n # Replace any prefix from the object key with temp_dir\n subdir_object_key = blob.name.replace(bucket_path, \"\", 1).strip(\"/\")\n\n # Create necessary subdirectory to store the object locally\n if \"/\" in subdir_object_key:\n local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit(\"/\", 1)[0])\n if not os.path.isdir(local_object_dir):\n os.makedirs(local_object_dir, exist_ok=True)\n if subdir_object_key.strip() != \"\":\n dest_path = os.path.join(temp_dir, subdir_object_key)\n logging.info(\"Downloading: %s\", dest_path)\n blob.download_to_filename(dest_path)\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. 
\\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _download_blob(uri, out_dir: str): # pylint: disable=too-many-locals\n match = re.search(_BLOB_RE, uri)\n account_name = match.group(1)\n storage_url = match.group(2)\n container_name, prefix = storage_url.split(\"/\", 1)\n\n logging.info(\"Connecting to BLOB account: [%s], container: [%s], prefix: [%s]\",\n account_name,\n container_name,\n prefix)\n try:\n block_blob_service = BlockBlobService(account_name=account_name)\n blobs = block_blob_service.list_blobs(container_name, prefix=prefix)\n except Exception: # pylint: disable=broad-except\n token = Storage._get_azure_storage_token()\n if token is None:\n logging.warning(\"Azure credentials not found, retrying anonymous access\")\n block_blob_service = BlockBlobService(account_name=account_name, token_credential=token)\n blobs = block_blob_service.list_blobs(container_name, prefix=prefix)\n count = 0\n for blob in blobs:\n dest_path = os.path.join(out_dir, blob.name)\n if \"/\" in blob.name:\n head, tail = os.path.split(blob.name)\n if prefix is not None:\n head = head[len(prefix):]\n if head.startswith('/'):\n head = head[1:]\n dir_path = os.path.join(out_dir, head)\n dest_path = os.path.join(dir_path, tail)\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n logging.info(\"Downloading: %s to %s\", blob.name, dest_path)\n block_blob_service.get_blob_to_path(container_name, blob.name, dest_path)\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. \\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _get_azure_storage_token():\n tenant_id = os.getenv(\"AZ_TENANT_ID\", \"\")\n client_id = os.getenv(\"AZ_CLIENT_ID\", \"\")\n client_secret = os.getenv(\"AZ_CLIENT_SECRET\", \"\")\n subscription_id = os.getenv(\"AZ_SUBSCRIPTION_ID\", \"\")\n\n if tenant_id == \"\" or client_id == \"\" or client_secret == \"\" or subscription_id == \"\":\n return None\n\n # note the SP must have \"Storage Blob Data Owner\" perms for this to work\n import adal\n from azure.storage.common import TokenCredential\n\n authority_url = \"https://login.microsoftonline.com/\" + tenant_id\n\n context = adal.AuthenticationContext(authority_url)\n\n token = context.acquire_token_with_client_credentials(\n \"https://storage.azure.com/\",\n client_id,\n client_secret)\n\n token_credential = TokenCredential(token[\"accessToken\"])\n\n logging.info(\"Retrieved SP token credential for client_id: %s\", client_id)\n\n return token_credential\n\n @staticmethod\n def _download_local(uri, out_dir=None):\n local_path = uri.replace(_LOCAL_PREFIX, \"\", 1)\n if not os.path.exists(local_path):\n raise RuntimeError(\"Local path %s does not exist.\" % (uri))\n\n if out_dir is None:\n return local_path\n elif not os.path.isdir(out_dir):\n os.makedirs(out_dir)\n\n if os.path.isdir(local_path):\n local_path = os.path.join(local_path, \"*\")\n\n for src in glob.glob(local_path):\n _, tail = os.path.split(src)\n dest_path = os.path.join(out_dir, tail)\n logging.info(\"Linking: %s to %s\", src, dest_path)\n os.symlink(src, dest_path)\n return out_dir\n\n @staticmethod\n def _download_from_uri(uri, out_dir=None):\n url = urlparse(uri)\n filename = os.path.basename(url.path)\n mimetype, encoding = mimetypes.guess_type(url.path)\n local_path = os.path.join(out_dir, filename)\n\n if filename == '':\n raise ValueError('No filename contained in URI: %s' % (uri))\n\n # Get header information from host url\n headers = {}\n host_uri = url.hostname\n\n headers_json = 
os.getenv(host_uri + _HEADERS_SUFFIX, \"{}\")\n headers = json.loads(headers_json)\n\n with requests.get(uri, stream=True, headers=headers) as response:\n if response.status_code != 200:\n raise RuntimeError(\"URI: %s returned a %s response code.\" % (uri, response.status_code))\n if mimetype == 'application/zip' and not response.headers.get('Content-Type', '').startswith('application/zip'):\n raise RuntimeError(\"URI: %s did not respond with \\'Content-Type\\': \\'application/zip\\'\" % (uri))\n if mimetype == 'application/x-tar' and not response.headers.get('Content-Type', '').startswith('application/x-tar'):\n raise RuntimeError(\"URI: %s did not respond with \\'Content-Type\\': \\'application/x-tar\\'\" % (uri))\n if (mimetype != 'application/zip' and mimetype != 'application/x-tar') and not response.headers.get('Content-Type', '').startswith('application/octet-stream'):\n raise RuntimeError(\"URI: %s did not respond with \\'Content-Type\\': \\'application/octet-stream\\'\" % (uri))\n\n if encoding == 'gzip':\n stream = gzip.GzipFile(fileobj=response.raw)\n local_path = os.path.join(out_dir, f'{filename}.tar')\n else:\n stream = response.raw\n with open(local_path, 'wb') as out:\n shutil.copyfileobj(stream, out)\n \n if mimetype in [\"application/x-tar\", \"application/zip\"]:\n if mimetype == \"application/x-tar\":\n archive = tarfile.open(local_path, 'r', encoding='utf-8')\n else:\n archive = zipfile.ZipFile(local_path, 'r')\n archive.extractall(out_dir)\n archive.close()\n os.remove(local_path)\n\n return out_dir\n\n @staticmethod\n def _create_minio_client():\n # Adding prefixing \"http\" in urlparse is necessary for it to be the netloc\n url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"http://s3.amazonaws.com\"))\n use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv(\"S3_USE_HTTPS\", \"true\"))\n return Minio(url.netloc,\n access_key=os.getenv(\"AWS_ACCESS_KEY_ID\", \"\"),\n secret_key=os.getenv(\"AWS_SECRET_ACCESS_KEY\", \"\"),\n region=os.getenv(\"AWS_REGION\", \"\"),\n secure=use_ssl)\n", "path": "python/kfserving/kfserving/storage.py"}]} | 3,947 | 134 |
gh_patches_debug_24058 | rasdani/github-patches | git_diff | avocado-framework__avocado-4726 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Avocado crashes unexpectedly on SIGINT
When SIGINT is sent to Avocado in the early stages of a run, Avocado crashes.
This happens with both the legacy runner and the nrunner.
```
avocado run /bin/true
JOB ID : ee66540de61211c164d9d9cb5b0e9aaf65dca8a2
JOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T16.36-ee66540/job.log
^CAvocado crashed unexpectedly:
You can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_16:36:38-_m3ikjhl.log
```
```
avocado run --test-runner=nrunner /bin/true
JOB ID : da09a60ab32ff647c79d919781f82db3543e107f
JOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T15.09-da09a60/job.log
^CAvocado crashed unexpectedly:
You can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_15:09:37-my68_dsy.log
```
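
For context, a minimal standalone sketch (independent of Avocado, with a hypothetical exit code) shows why a Ctrl-C is reported as a crash: the `KeyboardInterrupt` raised by SIGINT is just another uncaught exception from the point of view of a custom `sys.excepthook`, so it has to be special-cased there:

```python
import sys

def handle_exception(exc_type, exc_value, exc_tb):
    # Without this check, a plain Ctrl-C is reported like any other crash.
    if exc_type is KeyboardInterrupt:
        print("Job interrupted by the user.")
        sys.exit(4)  # hypothetical "interrupted" exit code
    print(f"Crashed unexpectedly: {exc_value}")
    sys.exit(-1)

sys.excepthook = handle_exception

if __name__ == "__main__":
    # Pressing Ctrl-C here raises KeyboardInterrupt, which reaches the hook.
    input("press Ctrl-C: ")
```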
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `avocado/core/main.py`
Content:
```
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; specifically version 2 of the License.
4 #
5 # This program is distributed in the hope that it will be useful,
6 # but WITHOUT ANY WARRANTY; without even the implied warranty of
7 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
8 #
9 # See LICENSE for more details.
10 #
11 # Copyright: RedHat 2013-2014
12 # Author: Lucas Meneghel Rodrigues <[email protected]>
13
14
15 import os
16 import sys
17 import tempfile
18 import time
19 import traceback
20
21 try:
22 from avocado.core.settings import settings
23 except ImportError:
24 sys.stderr.write("Unable to import Avocado libraries, please verify "
25 "your installation, and if necessary reinstall it.\n")
26 # This exit code is replicated from avocado/core/exit_codes.py and not
27 # imported because we are dealing with import failures
28 sys.exit(-1)
29
30
31 def get_crash_dir():
32 config = settings.as_dict()
33 crash_dir_path = os.path.join(config.get('datadir.paths.data_dir'),
34 "crashes")
35 try:
36 os.makedirs(crash_dir_path)
37 except OSError:
38 pass
39 return crash_dir_path
40
41
42 def handle_exception(*exc_info):
43 # Print traceback if AVOCADO_LOG_DEBUG environment variable is set
44 msg = "Avocado crashed:\n" + "".join(traceback.format_exception(*exc_info))
45 msg += "\n"
46 if os.environ.get("AVOCADO_LOG_DEBUG"):
47 os.write(2, msg.encode('utf-8'))
48 # Store traceback in data_dir or TMPDIR
49 prefix = "avocado-traceback-"
50 prefix += time.strftime("%F_%T") + "-"
51 tmp, name = tempfile.mkstemp(".log", prefix, get_crash_dir())
52 os.write(tmp, msg.encode('utf-8'))
53 os.close(tmp)
54 # Print friendly message in console-like output
55 msg = ("Avocado crashed unexpectedly: %s\nYou can find details in %s\n"
56 % (exc_info[1], name))
57 os.write(2, msg.encode('utf-8'))
58 # This exit code is replicated from avocado/core/exit_codes.py and not
59 # imported because we are dealing with import failures
60 sys.exit(-1)
61
62
63 def main():
64 sys.excepthook = handle_exception
65 from avocado.core.app import AvocadoApp # pylint: disable=E0611
66
67 # Override tmp in case it's not set in env
68 for attr in ("TMP", "TEMP", "TMPDIR"):
69 if attr in os.environ:
70 break
71 else: # TMP not set by user, use /var/tmp if exists
72 # TMP not set by user in environment. Try to use /var/tmp to avoid
73 # possible problems with "/tmp" being mounted as TMPFS without the
74 # support for O_DIRECT
75 if os.path.exists("/var/tmp"):
76 os.environ["TMP"] = "/var/tmp"
77 app = AvocadoApp()
78 return app.run()
79
80
81 if __name__ == '__main__':
82 sys.exit(main())
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/avocado/core/main.py b/avocado/core/main.py
--- a/avocado/core/main.py
+++ b/avocado/core/main.py
@@ -19,6 +19,7 @@
import traceback
try:
+ from avocado.core import exit_codes
from avocado.core.settings import settings
except ImportError:
sys.stderr.write("Unable to import Avocado libraries, please verify "
@@ -51,13 +52,16 @@
tmp, name = tempfile.mkstemp(".log", prefix, get_crash_dir())
os.write(tmp, msg.encode('utf-8'))
os.close(tmp)
- # Print friendly message in console-like output
- msg = ("Avocado crashed unexpectedly: %s\nYou can find details in %s\n"
- % (exc_info[1], name))
+ if exc_info[0] is KeyboardInterrupt:
+ msg = "%s\nYou can find details in %s\n" % (exc_info[0].__doc__, name)
+ exit_code = exit_codes.AVOCADO_JOB_INTERRUPTED
+ else:
+ # Print friendly message in console-like output
+ msg = ("Avocado crashed unexpectedly: %s\nYou can find details in %s\n"
+ % (exc_info[1], name))
+ exit_code = exit_codes.AVOCADO_GENERIC_CRASH
os.write(2, msg.encode('utf-8'))
- # This exit code is replicated from avocado/core/exit_codes.py and not
- # imported because we are dealing with import failures
- sys.exit(-1)
+ sys.exit(exit_code)
def main():
| {"golden_diff": "diff --git a/avocado/core/main.py b/avocado/core/main.py\n--- a/avocado/core/main.py\n+++ b/avocado/core/main.py\n@@ -19,6 +19,7 @@\n import traceback\n \n try:\n+ from avocado.core import exit_codes\n from avocado.core.settings import settings\n except ImportError:\n sys.stderr.write(\"Unable to import Avocado libraries, please verify \"\n@@ -51,13 +52,16 @@\n tmp, name = tempfile.mkstemp(\".log\", prefix, get_crash_dir())\n os.write(tmp, msg.encode('utf-8'))\n os.close(tmp)\n- # Print friendly message in console-like output\n- msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n- % (exc_info[1], name))\n+ if exc_info[0] is KeyboardInterrupt:\n+ msg = \"%s\\nYou can find details in %s\\n\" % (exc_info[0].__doc__, name)\n+ exit_code = exit_codes.AVOCADO_JOB_INTERRUPTED\n+ else:\n+ # Print friendly message in console-like output\n+ msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n+ % (exc_info[1], name))\n+ exit_code = exit_codes.AVOCADO_GENERIC_CRASH\n os.write(2, msg.encode('utf-8'))\n- # This exit code is replicated from avocado/core/exit_codes.py and not\n- # imported because we are dealing with import failures\n- sys.exit(-1)\n+ sys.exit(exit_code)\n \n \n def main():\n", "issue": "Avocado crashed unexpectedly with the SIGINT\nWhen the SIGINT is sent to the avocado in the early stages the avocado will crash.\r\nThis is happening on both runner legacy and nrunner. \r\n\r\n```\r\navocado run /bin/true\r\nJOB ID : ee66540de61211c164d9d9cb5b0e9aaf65dca8a2\r\nJOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T16.36-ee66540/job.log\r\n^CAvocado crashed unexpectedly:\r\nYou can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_16:36:38-_m3ikjhl.log\r\n```\r\n\r\n```\r\navocado run --test-runner=nrunner /bin/true\r\nJOB ID : da09a60ab32ff647c79d919781f82db3543e107f\r\nJOB LOG : /home/jarichte/avocado/job-results/job-2021-05-25T15.09-da09a60/job.log\r\n^CAvocado crashed unexpectedly:\r\nYou can find details in /var/lib/avocado/data/crashes/avocado-traceback-2021-05-25_15:09:37-my68_dsy.log\r\n```\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; specifically version 2 of the License.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: RedHat 2013-2014\n# Author: Lucas Meneghel Rodrigues <[email protected]>\n\n\nimport os\nimport sys\nimport tempfile\nimport time\nimport traceback\n\ntry:\n from avocado.core.settings import settings\nexcept ImportError:\n sys.stderr.write(\"Unable to import Avocado libraries, please verify \"\n \"your installation, and if necessary reinstall it.\\n\")\n # This exit code is replicated from avocado/core/exit_codes.py and not\n # imported because we are dealing with import failures\n sys.exit(-1)\n\n\ndef get_crash_dir():\n config = settings.as_dict()\n crash_dir_path = os.path.join(config.get('datadir.paths.data_dir'),\n \"crashes\")\n try:\n os.makedirs(crash_dir_path)\n except OSError:\n pass\n return crash_dir_path\n\n\ndef handle_exception(*exc_info):\n # Print traceback if AVOCADO_LOG_DEBUG environment variable is set\n msg = \"Avocado crashed:\\n\" + \"\".join(traceback.format_exception(*exc_info))\n msg += 
\"\\n\"\n if os.environ.get(\"AVOCADO_LOG_DEBUG\"):\n os.write(2, msg.encode('utf-8'))\n # Store traceback in data_dir or TMPDIR\n prefix = \"avocado-traceback-\"\n prefix += time.strftime(\"%F_%T\") + \"-\"\n tmp, name = tempfile.mkstemp(\".log\", prefix, get_crash_dir())\n os.write(tmp, msg.encode('utf-8'))\n os.close(tmp)\n # Print friendly message in console-like output\n msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n % (exc_info[1], name))\n os.write(2, msg.encode('utf-8'))\n # This exit code is replicated from avocado/core/exit_codes.py and not\n # imported because we are dealing with import failures\n sys.exit(-1)\n\n\ndef main():\n sys.excepthook = handle_exception\n from avocado.core.app import AvocadoApp # pylint: disable=E0611\n\n # Override tmp in case it's not set in env\n for attr in (\"TMP\", \"TEMP\", \"TMPDIR\"):\n if attr in os.environ:\n break\n else: # TMP not set by user, use /var/tmp if exists\n # TMP not set by user in environment. Try to use /var/tmp to avoid\n # possible problems with \"/tmp\" being mounted as TMPFS without the\n # support for O_DIRECT\n if os.path.exists(\"/var/tmp\"):\n os.environ[\"TMP\"] = \"/var/tmp\"\n app = AvocadoApp()\n return app.run()\n\n\nif __name__ == '__main__':\n sys.exit(main())\n", "path": "avocado/core/main.py"}], "after_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; specifically version 2 of the License.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: RedHat 2013-2014\n# Author: Lucas Meneghel Rodrigues <[email protected]>\n\n\nimport os\nimport sys\nimport tempfile\nimport time\nimport traceback\n\ntry:\n from avocado.core import exit_codes\n from avocado.core.settings import settings\nexcept ImportError:\n sys.stderr.write(\"Unable to import Avocado libraries, please verify \"\n \"your installation, and if necessary reinstall it.\\n\")\n # This exit code is replicated from avocado/core/exit_codes.py and not\n # imported because we are dealing with import failures\n sys.exit(-1)\n\n\ndef get_crash_dir():\n config = settings.as_dict()\n crash_dir_path = os.path.join(config.get('datadir.paths.data_dir'),\n \"crashes\")\n try:\n os.makedirs(crash_dir_path)\n except OSError:\n pass\n return crash_dir_path\n\n\ndef handle_exception(*exc_info):\n # Print traceback if AVOCADO_LOG_DEBUG environment variable is set\n msg = \"Avocado crashed:\\n\" + \"\".join(traceback.format_exception(*exc_info))\n msg += \"\\n\"\n if os.environ.get(\"AVOCADO_LOG_DEBUG\"):\n os.write(2, msg.encode('utf-8'))\n # Store traceback in data_dir or TMPDIR\n prefix = \"avocado-traceback-\"\n prefix += time.strftime(\"%F_%T\") + \"-\"\n tmp, name = tempfile.mkstemp(\".log\", prefix, get_crash_dir())\n os.write(tmp, msg.encode('utf-8'))\n os.close(tmp)\n if exc_info[0] is KeyboardInterrupt:\n msg = \"%s\\nYou can find details in %s\\n\" % (exc_info[0].__doc__, name)\n exit_code = exit_codes.AVOCADO_JOB_INTERRUPTED\n else:\n # Print friendly message in console-like output\n msg = (\"Avocado crashed unexpectedly: %s\\nYou can find details in %s\\n\"\n % (exc_info[1], name))\n exit_code = exit_codes.AVOCADO_GENERIC_CRASH\n os.write(2, msg.encode('utf-8'))\n sys.exit(exit_code)\n\n\ndef 
main():\n sys.excepthook = handle_exception\n from avocado.core.app import AvocadoApp # pylint: disable=E0611\n\n # Override tmp in case it's not set in env\n for attr in (\"TMP\", \"TEMP\", \"TMPDIR\"):\n if attr in os.environ:\n break\n else: # TMP not set by user, use /var/tmp if exists\n # TMP not set by user in environment. Try to use /var/tmp to avoid\n # possible problems with \"/tmp\" being mounted as TMPFS without the\n # support for O_DIRECT\n if os.path.exists(\"/var/tmp\"):\n os.environ[\"TMP\"] = \"/var/tmp\"\n app = AvocadoApp()\n return app.run()\n\n\nif __name__ == '__main__':\n sys.exit(main())\n", "path": "avocado/core/main.py"}]} | 1,450 | 360 |
gh_patches_debug_13902 | rasdani/github-patches | git_diff | pyro-ppl__numpyro-476 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regression in fori_collect when progress bar disabled
I am finding that, with the progress bar disabled, `fori_collect` does not return the same results for the sparse regression example:
- [x] It fetches the wrong `init_state` and runs adaptation twice.
- [ ] Results returned when the progress bar is enabled/disabled are different (see the comparison sketch below).
- [x] When the progress bar is disabled, the run takes almost twice as long. This seems to be a regression from earlier, when disabling the progress bar was much faster.
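
A minimal comparison sketch along these lines (an assumed setup, not taken from the NumPyro test suite) runs the same deterministic loop through `fori_collect` with and without the progress bar and checks that the collected values and the returned start states agree:

```python
import jax.numpy as np  # this NumPyro version aliases jax.numpy as np
from numpyro.util import fori_collect

body_fn = lambda x: x + 1.0  # deterministic body, so both code paths should agree

with_bar, state_bar = fori_collect(2, 10, body_fn, np.array(0.),
                                   progbar=True, return_init_state=True)
no_bar, state_no_bar = fori_collect(2, 10, body_fn, np.array(0.),
                                    progbar=False, return_init_state=True)

print(np.allclose(with_bar, no_bar))         # expected True once fixed
print(np.allclose(state_bar, state_no_bar))  # expected True once fixed
```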
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpyro/util.py`
Content:
```
1 from collections import namedtuple
2 from contextlib import contextmanager
3 import os
4 import random
5 import re
6
7 import numpy as onp
8 import tqdm
9
10 import jax
11 from jax import jit, lax, ops, vmap
12 from jax.interpreters.batching import BatchTracer
13 from jax.interpreters.partial_eval import JaxprTracer
14 from jax.dtypes import canonicalize_dtype
15 import jax.numpy as np
16 from jax.tree_util import tree_flatten, tree_map, tree_unflatten
17
18 _DATA_TYPES = {}
19 _DISABLE_CONTROL_FLOW_PRIM = False
20
21
22 def set_rng_seed(rng_seed):
23 """
24 Initializes internal state for the Python and NumPy random number generators.
25
26 :param int rng_seed: seed for Python and NumPy random states.
27 """
28 random.seed(rng_seed)
29 onp.random.seed(rng_seed)
30
31
32 def enable_x64(use_x64=True):
33 """
34 Changes the default array type to use 64 bit precision as in NumPy.
35
36 :param bool use_x64: when `True`, JAX arrays will use 64 bits by default;
37 else 32 bits.
38 """
39 if not use_x64:
40 use_x64 = os.getenv('JAX_ENABLE_X64', 0)
41 jax.config.update('jax_enable_x64', use_x64)
42
43
44 def set_platform(platform=None):
45 """
46 Changes platform to CPU, GPU, or TPU. This utility only takes
47 effect at the beginning of your program.
48
49 :param str platform: either 'cpu', 'gpu', or 'tpu'.
50 """
51 if platform is None:
52 platform = os.getenv('JAX_PLATFORM_NAME', 'cpu')
53 jax.config.update('jax_platform_name', platform)
54
55
56 def set_host_device_count(n):
57 """
58 By default, XLA considers all CPU cores as one device. This utility tells XLA
59 that there are `n` host (CPU) devices available to use. As a consequence, this
60 allows parallel mapping in JAX :func:`jax.pmap` to work in CPU platform.
61
62 .. note:: This utility only takes effect at the beginning of your program.
63 Under the hood, this sets the environment variable
64 `XLA_FLAGS=--xla_force_host_platform_device_count=[num_devices]`, where
65 `[num_device]` is the desired number of CPU devices `n`.
66
67 .. warning:: Our understanding of the side effects of using the
68 `xla_force_host_platform_device_count` flag in XLA is incomplete. If you
69 observe some strange phenomenon when using this utility, please let us
70 know through our issue or forum page. More information is available in this
71 `JAX issue <https://github.com/google/jax/issues/1408>`_.
72
73 :param int n: number of CPU devices to use.
74 """
75 xla_flags = os.getenv('XLA_FLAGS', '').lstrip('--')
76 xla_flags = re.sub(r'xla_force_host_platform_device_count=.+\s', '', xla_flags).split()
77 os.environ['XLA_FLAGS'] = ' '.join(['--xla_force_host_platform_device_count={}'.format(n)]
78 + xla_flags)
79
80
81 @contextmanager
82 def optional(condition, context_manager):
83 """
84 Optionally wrap inside `context_manager` if condition is `True`.
85 """
86 if condition:
87 with context_manager:
88 yield
89 else:
90 yield
91
92
93 @contextmanager
94 def control_flow_prims_disabled():
95 global _DISABLE_CONTROL_FLOW_PRIM
96 stored_flag = _DISABLE_CONTROL_FLOW_PRIM
97 try:
98 _DISABLE_CONTROL_FLOW_PRIM = True
99 yield
100 finally:
101 _DISABLE_CONTROL_FLOW_PRIM = stored_flag
102
103
104 def cond(pred, true_operand, true_fun, false_operand, false_fun):
105 if _DISABLE_CONTROL_FLOW_PRIM:
106 if pred:
107 return true_fun(true_operand)
108 else:
109 return false_fun(false_operand)
110 else:
111 return lax.cond(pred, true_operand, true_fun, false_operand, false_fun)
112
113
114 def while_loop(cond_fun, body_fun, init_val):
115 if _DISABLE_CONTROL_FLOW_PRIM:
116 val = init_val
117 while cond_fun(val):
118 val = body_fun(val)
119 return val
120 else:
121 return lax.while_loop(cond_fun, body_fun, init_val)
122
123
124 def fori_loop(lower, upper, body_fun, init_val):
125 if _DISABLE_CONTROL_FLOW_PRIM:
126 val = init_val
127 for i in range(int(lower), int(upper)):
128 val = body_fun(i, val)
129 return val
130 else:
131 return lax.fori_loop(lower, upper, body_fun, init_val)
132
133
134 def not_jax_tracer(x):
135 """
136 Checks if `x` is not an array generated inside `jit`, `pmap`, `vmap`, or `lax_control_flow`.
137 """
138 return not isinstance(x, (JaxprTracer, BatchTracer))
139
140
141 def identity(x):
142 return x
143
144
145 def fori_collect(lower, upper, body_fun, init_val, transform=identity,
146 progbar=True, return_init_state=False, **progbar_opts):
147 """
148 This looping construct works like :func:`~jax.lax.fori_loop` but with the additional
149 effect of collecting values from the loop body. In addition, this allows for
150 post-processing of these samples via `transform`, and progress bar updates.
151 Note that, `progbar=False` will be faster, especially when collecting a
152 lot of samples. Refer to example usage in :func:`~numpyro.infer.mcmc.hmc`.
153
154 :param int lower: the index to start the collective work. In other words,
155 we will skip collecting the first `lower` values.
156 :param int upper: number of times to run the loop body.
157 :param body_fun: a callable that takes a collection of
158 `np.ndarray` and returns a collection with the same shape and
159 `dtype`.
160 :param init_val: initial value to pass as argument to `body_fun`. Can
161 be any Python collection type containing `np.ndarray` objects.
162 :param transform: a callable to post-process the values returned by `body_fn`.
163 :param progbar: whether to post progress bar updates.
164 :param bool return_init_state: If `True`, the state at iteration `lower-1`,
165 where the collection begins, is also returned. This has the same type
166 as `init_val`.
167 :param `**progbar_opts`: optional additional progress bar arguments. A
168 `diagnostics_fn` can be supplied which when passed the current value
169 from `body_fun` returns a string that is used to update the progress
170 bar postfix. Also a `progbar_desc` keyword argument can be supplied
171 which is used to label the progress bar.
172 :return: collection with the same type as `init_val` with values
173 collected along the leading axis of `np.ndarray` objects.
174 """
175 assert lower <= upper
176 init_val_flat, unravel_fn = ravel_pytree(transform(init_val))
177 ravel_fn = lambda x: ravel_pytree(transform(x))[0] # noqa: E731
178
179 if not progbar:
180 collection = np.zeros((upper - lower,) + init_val_flat.shape)
181
182 def _body_fn(i, vals):
183 val, collection, start_state = vals
184 val = body_fun(val)
185 i = np.where(i >= lower, i - lower, 0)
186 start_state = lax.cond(i == lower-1,
187 start_state, lambda _: val,
188 start_state, lambda x: x)
189 collection = ops.index_update(collection, i, ravel_fn(val))
190 return val, collection, start_state
191
192 _, collection, start_state = fori_loop(0, upper, _body_fn, (init_val, collection, init_val))
193 else:
194 diagnostics_fn = progbar_opts.pop('diagnostics_fn', None)
195 progbar_desc = progbar_opts.pop('progbar_desc', lambda x: '')
196 collection = []
197
198 val, start_state = init_val, init_val
199 with tqdm.trange(upper) as t:
200 for i in t:
201 val = jit(body_fun)(val)
202 if i == lower - 1:
203 start_state = val
204 elif i >= lower:
205 collection.append(jit(ravel_fn)(val))
206 t.set_description(progbar_desc(i), refresh=False)
207 if diagnostics_fn:
208 t.set_postfix_str(diagnostics_fn(val), refresh=False)
209
210 collection = np.stack(collection) if len(collection) > 0 else \
211 np.zeros((upper - lower,) + init_val_flat.shape)
212
213 unravel_collection = vmap(unravel_fn)(collection)
214 return (unravel_collection, start_state) if return_init_state else unravel_collection
215
216
217 def copy_docs_from(source_class, full_text=False):
218 """
219 Decorator to copy class and method docs from source to destin class.
220 """
221
222 def decorator(destin_class):
223 # This works only in python 3.3+:
224 # if not destin_class.__doc__:
225 # destin_class.__doc__ = source_class.__doc__
226 for name in dir(destin_class):
227 if name.startswith('_'):
228 continue
229 destin_attr = getattr(destin_class, name)
230 destin_attr = getattr(destin_attr, '__func__', destin_attr)
231 source_attr = getattr(source_class, name, None)
232 source_doc = getattr(source_attr, '__doc__', None)
233 if source_doc and not getattr(destin_attr, '__doc__', None):
234 if full_text or source_doc.startswith('See '):
235 destin_doc = source_doc
236 else:
237 destin_doc = 'See :meth:`{}.{}.{}`'.format(
238 source_class.__module__, source_class.__name__, name)
239 if isinstance(destin_attr, property):
240 # Set docs for object properties.
241 # Since __doc__ is read-only, we need to reset the property
242 # with the updated doc.
243 updated_property = property(destin_attr.fget,
244 destin_attr.fset,
245 destin_attr.fdel,
246 destin_doc)
247 setattr(destin_class, name, updated_property)
248 else:
249 destin_attr.__doc__ = destin_doc
250 return destin_class
251
252 return decorator
253
254
255 pytree_metadata = namedtuple('pytree_metadata', ['flat', 'shape', 'size', 'dtype'])
256
257
258 def _ravel_list(*leaves):
259 leaves_metadata = tree_map(lambda l: pytree_metadata(
260 np.ravel(l), np.shape(l), np.size(l), canonicalize_dtype(lax.dtype(l))), leaves)
261 leaves_idx = np.cumsum(np.array((0,) + tuple(d.size for d in leaves_metadata)))
262
263 def unravel_list(arr):
264 return [np.reshape(lax.dynamic_slice_in_dim(arr, leaves_idx[i], m.size),
265 m.shape).astype(m.dtype)
266 for i, m in enumerate(leaves_metadata)]
267
268 flat = np.concatenate([m.flat for m in leaves_metadata]) if leaves_metadata else np.array([])
269 return flat, unravel_list
270
271
272 def ravel_pytree(pytree):
273 leaves, treedef = tree_flatten(pytree)
274 flat, unravel_list = _ravel_list(*leaves)
275
276 def unravel_pytree(arr):
277 return tree_unflatten(treedef, unravel_list(arr))
278
279 return flat, unravel_pytree
280
```
--- END FILES ---
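As an aside, `fori_collect` above is the helper at the heart of this issue; a rough usage sketch of how it is typically driven with the progress bar disabled (hypothetical body function, assuming the signature shown in the listing):

```python
import jax.numpy as jnp
from numpyro.util import fori_collect

# Hypothetical body function: a deterministic "sampler" that just increments a counter.
def body_fn(state):
    return {"x": state["x"] + 1.0}

# Skip the first 3 iterations (e.g. warmup), collect the remaining 7.
samples = fori_collect(3, 10, body_fn, {"x": jnp.zeros(())}, progbar=False)
print(samples["x"].shape)  # (7,) -- the leading axis has length upper - lower
```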
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpyro/util.py b/numpyro/util.py
--- a/numpyro/util.py
+++ b/numpyro/util.py
@@ -181,11 +181,11 @@
def _body_fn(i, vals):
val, collection, start_state = vals
- val = body_fun(val)
- i = np.where(i >= lower, i - lower, 0)
- start_state = lax.cond(i == lower-1,
- start_state, lambda _: val,
+ val = jit(body_fun)(val)
+ start_state = lax.cond(i < lower,
+ val, lambda x: x,
start_state, lambda x: x)
+ i = np.where(i >= lower, i - lower, 0)
collection = ops.index_update(collection, i, ravel_fn(val))
return val, collection, start_state
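For readers tracing the patch: the non-progbar branch must capture `start_state` as the value produced at iteration `lower - 1`, but the original code shifted the loop index with `np.where` before testing it, so the `lax.cond` fired at the wrong step (and the body was not jitted, unlike the progress-bar branch). A minimal pure-Python model of the patched control flow — not the JAX code itself — that makes the intended ordering explicit:

```python
# Plain-Python model of the patched _body_fn logic (no JAX required).
def fori_collect_model(lower, upper, body_fun, init_val):
    collection = []
    val, start_state = init_val, init_val
    for i in range(upper):
        val = body_fun(val)
        if i < lower:       # patched condition: track start_state until collection begins
            start_state = val
        if i >= lower:      # the index shift / collection write happens only after that update
            collection.append(val)
    return collection, start_state

# Example: the body doubles the value; with lower=2, upper=5 we collect
# [8, 16, 32] and start_state is 4, the value produced at iteration index 1.
collection, start_state = fori_collect_model(2, 5, lambda x: 2 * x, 1)
assert collection == [8, 16, 32]
assert start_state == 4
```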
| {"golden_diff": "diff --git a/numpyro/util.py b/numpyro/util.py\n--- a/numpyro/util.py\n+++ b/numpyro/util.py\n@@ -181,11 +181,11 @@\n \n def _body_fn(i, vals):\n val, collection, start_state = vals\n- val = body_fun(val)\n- i = np.where(i >= lower, i - lower, 0)\n- start_state = lax.cond(i == lower-1,\n- start_state, lambda _: val,\n+ val = jit(body_fun)(val)\n+ start_state = lax.cond(i < lower,\n+ val, lambda x: x,\n start_state, lambda x: x)\n+ i = np.where(i >= lower, i - lower, 0)\n collection = ops.index_update(collection, i, ravel_fn(val))\n return val, collection, start_state\n", "issue": "Regression in fori_collect when progress bar disabled\nI am finding that with progress bar disabled, `fori_collect` is not returning the same results for the sparse regression example:\r\n - [x] It fetches the wrong `init_state` and runs adaptation twice.\r\n - [ ] Results returned when progress bar is enabled/disabled are different.\r\n - [x] When progress bar is disabled, the time taken is almost twice. This seems to be a regression from earlier when disabling the progress bar was much faster. \n", "before_files": [{"content": "from collections import namedtuple\nfrom contextlib import contextmanager\nimport os\nimport random\nimport re\n\nimport numpy as onp\nimport tqdm\n\nimport jax\nfrom jax import jit, lax, ops, vmap\nfrom jax.interpreters.batching import BatchTracer\nfrom jax.interpreters.partial_eval import JaxprTracer\nfrom jax.dtypes import canonicalize_dtype\nimport jax.numpy as np\nfrom jax.tree_util import tree_flatten, tree_map, tree_unflatten\n\n_DATA_TYPES = {}\n_DISABLE_CONTROL_FLOW_PRIM = False\n\n\ndef set_rng_seed(rng_seed):\n \"\"\"\n Initializes internal state for the Python and NumPy random number generators.\n\n :param int rng_seed: seed for Python and NumPy random states.\n \"\"\"\n random.seed(rng_seed)\n onp.random.seed(rng_seed)\n\n\ndef enable_x64(use_x64=True):\n \"\"\"\n Changes the default array type to use 64 bit precision as in NumPy.\n\n :param bool use_x64: when `True`, JAX arrays will use 64 bits by default;\n else 32 bits.\n \"\"\"\n if not use_x64:\n use_x64 = os.getenv('JAX_ENABLE_X64', 0)\n jax.config.update('jax_enable_x64', use_x64)\n\n\ndef set_platform(platform=None):\n \"\"\"\n Changes platform to CPU, GPU, or TPU. This utility only takes\n effect at the beginning of your program.\n\n :param str platform: either 'cpu', 'gpu', or 'tpu'.\n \"\"\"\n if platform is None:\n platform = os.getenv('JAX_PLATFORM_NAME', 'cpu')\n jax.config.update('jax_platform_name', platform)\n\n\ndef set_host_device_count(n):\n \"\"\"\n By default, XLA considers all CPU cores as one device. This utility tells XLA\n that there are `n` host (CPU) devices available to use. As a consequence, this\n allows parallel mapping in JAX :func:`jax.pmap` to work in CPU platform.\n\n .. note:: This utility only takes effect at the beginning of your program.\n Under the hood, this sets the environment variable\n `XLA_FLAGS=--xla_force_host_platform_device_count=[num_devices]`, where\n `[num_device]` is the desired number of CPU devices `n`.\n\n .. warning:: Our understanding of the side effects of using the\n `xla_force_host_platform_device_count` flag in XLA is incomplete. If you\n observe some strange phenomenon when using this utility, please let us\n know through our issue or forum page. 
More information is available in this\n `JAX issue <https://github.com/google/jax/issues/1408>`_.\n\n :param int n: number of CPU devices to use.\n \"\"\"\n xla_flags = os.getenv('XLA_FLAGS', '').lstrip('--')\n xla_flags = re.sub(r'xla_force_host_platform_device_count=.+\\s', '', xla_flags).split()\n os.environ['XLA_FLAGS'] = ' '.join(['--xla_force_host_platform_device_count={}'.format(n)]\n + xla_flags)\n\n\n@contextmanager\ndef optional(condition, context_manager):\n \"\"\"\n Optionally wrap inside `context_manager` if condition is `True`.\n \"\"\"\n if condition:\n with context_manager:\n yield\n else:\n yield\n\n\n@contextmanager\ndef control_flow_prims_disabled():\n global _DISABLE_CONTROL_FLOW_PRIM\n stored_flag = _DISABLE_CONTROL_FLOW_PRIM\n try:\n _DISABLE_CONTROL_FLOW_PRIM = True\n yield\n finally:\n _DISABLE_CONTROL_FLOW_PRIM = stored_flag\n\n\ndef cond(pred, true_operand, true_fun, false_operand, false_fun):\n if _DISABLE_CONTROL_FLOW_PRIM:\n if pred:\n return true_fun(true_operand)\n else:\n return false_fun(false_operand)\n else:\n return lax.cond(pred, true_operand, true_fun, false_operand, false_fun)\n\n\ndef while_loop(cond_fun, body_fun, init_val):\n if _DISABLE_CONTROL_FLOW_PRIM:\n val = init_val\n while cond_fun(val):\n val = body_fun(val)\n return val\n else:\n return lax.while_loop(cond_fun, body_fun, init_val)\n\n\ndef fori_loop(lower, upper, body_fun, init_val):\n if _DISABLE_CONTROL_FLOW_PRIM:\n val = init_val\n for i in range(int(lower), int(upper)):\n val = body_fun(i, val)\n return val\n else:\n return lax.fori_loop(lower, upper, body_fun, init_val)\n\n\ndef not_jax_tracer(x):\n \"\"\"\n Checks if `x` is not an array generated inside `jit`, `pmap`, `vmap`, or `lax_control_flow`.\n \"\"\"\n return not isinstance(x, (JaxprTracer, BatchTracer))\n\n\ndef identity(x):\n return x\n\n\ndef fori_collect(lower, upper, body_fun, init_val, transform=identity,\n progbar=True, return_init_state=False, **progbar_opts):\n \"\"\"\n This looping construct works like :func:`~jax.lax.fori_loop` but with the additional\n effect of collecting values from the loop body. In addition, this allows for\n post-processing of these samples via `transform`, and progress bar updates.\n Note that, `progbar=False` will be faster, especially when collecting a\n lot of samples. Refer to example usage in :func:`~numpyro.infer.mcmc.hmc`.\n\n :param int lower: the index to start the collective work. In other words,\n we will skip collecting the first `lower` values.\n :param int upper: number of times to run the loop body.\n :param body_fun: a callable that takes a collection of\n `np.ndarray` and returns a collection with the same shape and\n `dtype`.\n :param init_val: initial value to pass as argument to `body_fun`. Can\n be any Python collection type containing `np.ndarray` objects.\n :param transform: a callable to post-process the values returned by `body_fn`.\n :param progbar: whether to post progress bar updates.\n :param bool return_init_state: If `True`, the state at iteration `lower-1`,\n where the collection begins, is also returned. This has the same type\n as `init_val`.\n :param `**progbar_opts`: optional additional progress bar arguments. A\n `diagnostics_fn` can be supplied which when passed the current value\n from `body_fun` returns a string that is used to update the progress\n bar postfix. 
Also a `progbar_desc` keyword argument can be supplied\n which is used to label the progress bar.\n :return: collection with the same type as `init_val` with values\n collected along the leading axis of `np.ndarray` objects.\n \"\"\"\n assert lower <= upper\n init_val_flat, unravel_fn = ravel_pytree(transform(init_val))\n ravel_fn = lambda x: ravel_pytree(transform(x))[0] # noqa: E731\n\n if not progbar:\n collection = np.zeros((upper - lower,) + init_val_flat.shape)\n\n def _body_fn(i, vals):\n val, collection, start_state = vals\n val = body_fun(val)\n i = np.where(i >= lower, i - lower, 0)\n start_state = lax.cond(i == lower-1,\n start_state, lambda _: val,\n start_state, lambda x: x)\n collection = ops.index_update(collection, i, ravel_fn(val))\n return val, collection, start_state\n\n _, collection, start_state = fori_loop(0, upper, _body_fn, (init_val, collection, init_val))\n else:\n diagnostics_fn = progbar_opts.pop('diagnostics_fn', None)\n progbar_desc = progbar_opts.pop('progbar_desc', lambda x: '')\n collection = []\n\n val, start_state = init_val, init_val\n with tqdm.trange(upper) as t:\n for i in t:\n val = jit(body_fun)(val)\n if i == lower - 1:\n start_state = val\n elif i >= lower:\n collection.append(jit(ravel_fn)(val))\n t.set_description(progbar_desc(i), refresh=False)\n if diagnostics_fn:\n t.set_postfix_str(diagnostics_fn(val), refresh=False)\n\n collection = np.stack(collection) if len(collection) > 0 else \\\n np.zeros((upper - lower,) + init_val_flat.shape)\n\n unravel_collection = vmap(unravel_fn)(collection)\n return (unravel_collection, start_state) if return_init_state else unravel_collection\n\n\ndef copy_docs_from(source_class, full_text=False):\n \"\"\"\n Decorator to copy class and method docs from source to destin class.\n \"\"\"\n\n def decorator(destin_class):\n # This works only in python 3.3+:\n # if not destin_class.__doc__:\n # destin_class.__doc__ = source_class.__doc__\n for name in dir(destin_class):\n if name.startswith('_'):\n continue\n destin_attr = getattr(destin_class, name)\n destin_attr = getattr(destin_attr, '__func__', destin_attr)\n source_attr = getattr(source_class, name, None)\n source_doc = getattr(source_attr, '__doc__', None)\n if source_doc and not getattr(destin_attr, '__doc__', None):\n if full_text or source_doc.startswith('See '):\n destin_doc = source_doc\n else:\n destin_doc = 'See :meth:`{}.{}.{}`'.format(\n source_class.__module__, source_class.__name__, name)\n if isinstance(destin_attr, property):\n # Set docs for object properties.\n # Since __doc__ is read-only, we need to reset the property\n # with the updated doc.\n updated_property = property(destin_attr.fget,\n destin_attr.fset,\n destin_attr.fdel,\n destin_doc)\n setattr(destin_class, name, updated_property)\n else:\n destin_attr.__doc__ = destin_doc\n return destin_class\n\n return decorator\n\n\npytree_metadata = namedtuple('pytree_metadata', ['flat', 'shape', 'size', 'dtype'])\n\n\ndef _ravel_list(*leaves):\n leaves_metadata = tree_map(lambda l: pytree_metadata(\n np.ravel(l), np.shape(l), np.size(l), canonicalize_dtype(lax.dtype(l))), leaves)\n leaves_idx = np.cumsum(np.array((0,) + tuple(d.size for d in leaves_metadata)))\n\n def unravel_list(arr):\n return [np.reshape(lax.dynamic_slice_in_dim(arr, leaves_idx[i], m.size),\n m.shape).astype(m.dtype)\n for i, m in enumerate(leaves_metadata)]\n\n flat = np.concatenate([m.flat for m in leaves_metadata]) if leaves_metadata else np.array([])\n return flat, unravel_list\n\n\ndef ravel_pytree(pytree):\n leaves, 
treedef = tree_flatten(pytree)\n flat, unravel_list = _ravel_list(*leaves)\n\n def unravel_pytree(arr):\n return tree_unflatten(treedef, unravel_list(arr))\n\n return flat, unravel_pytree\n", "path": "numpyro/util.py"}], "after_files": [{"content": "from collections import namedtuple\nfrom contextlib import contextmanager\nimport os\nimport random\nimport re\n\nimport numpy as onp\nimport tqdm\n\nimport jax\nfrom jax import jit, lax, ops, vmap\nfrom jax.interpreters.batching import BatchTracer\nfrom jax.interpreters.partial_eval import JaxprTracer\nfrom jax.dtypes import canonicalize_dtype\nimport jax.numpy as np\nfrom jax.tree_util import tree_flatten, tree_map, tree_unflatten\n\n_DATA_TYPES = {}\n_DISABLE_CONTROL_FLOW_PRIM = False\n\n\ndef set_rng_seed(rng_seed):\n \"\"\"\n Initializes internal state for the Python and NumPy random number generators.\n\n :param int rng_seed: seed for Python and NumPy random states.\n \"\"\"\n random.seed(rng_seed)\n onp.random.seed(rng_seed)\n\n\ndef enable_x64(use_x64=True):\n \"\"\"\n Changes the default array type to use 64 bit precision as in NumPy.\n\n :param bool use_x64: when `True`, JAX arrays will use 64 bits by default;\n else 32 bits.\n \"\"\"\n if not use_x64:\n use_x64 = os.getenv('JAX_ENABLE_X64', 0)\n jax.config.update('jax_enable_x64', use_x64)\n\n\ndef set_platform(platform=None):\n \"\"\"\n Changes platform to CPU, GPU, or TPU. This utility only takes\n effect at the beginning of your program.\n\n :param str platform: either 'cpu', 'gpu', or 'tpu'.\n \"\"\"\n if platform is None:\n platform = os.getenv('JAX_PLATFORM_NAME', 'cpu')\n jax.config.update('jax_platform_name', platform)\n\n\ndef set_host_device_count(n):\n \"\"\"\n By default, XLA considers all CPU cores as one device. This utility tells XLA\n that there are `n` host (CPU) devices available to use. As a consequence, this\n allows parallel mapping in JAX :func:`jax.pmap` to work in CPU platform.\n\n .. note:: This utility only takes effect at the beginning of your program.\n Under the hood, this sets the environment variable\n `XLA_FLAGS=--xla_force_host_platform_device_count=[num_devices]`, where\n `[num_device]` is the desired number of CPU devices `n`.\n\n .. warning:: Our understanding of the side effects of using the\n `xla_force_host_platform_device_count` flag in XLA is incomplete. If you\n observe some strange phenomenon when using this utility, please let us\n know through our issue or forum page. 
More information is available in this\n `JAX issue <https://github.com/google/jax/issues/1408>`_.\n\n :param int n: number of CPU devices to use.\n \"\"\"\n xla_flags = os.getenv('XLA_FLAGS', '').lstrip('--')\n xla_flags = re.sub(r'xla_force_host_platform_device_count=.+\\s', '', xla_flags).split()\n os.environ['XLA_FLAGS'] = ' '.join(['--xla_force_host_platform_device_count={}'.format(n)]\n + xla_flags)\n\n\n@contextmanager\ndef optional(condition, context_manager):\n \"\"\"\n Optionally wrap inside `context_manager` if condition is `True`.\n \"\"\"\n if condition:\n with context_manager:\n yield\n else:\n yield\n\n\n@contextmanager\ndef control_flow_prims_disabled():\n global _DISABLE_CONTROL_FLOW_PRIM\n stored_flag = _DISABLE_CONTROL_FLOW_PRIM\n try:\n _DISABLE_CONTROL_FLOW_PRIM = True\n yield\n finally:\n _DISABLE_CONTROL_FLOW_PRIM = stored_flag\n\n\ndef cond(pred, true_operand, true_fun, false_operand, false_fun):\n if _DISABLE_CONTROL_FLOW_PRIM:\n if pred:\n return true_fun(true_operand)\n else:\n return false_fun(false_operand)\n else:\n return lax.cond(pred, true_operand, true_fun, false_operand, false_fun)\n\n\ndef while_loop(cond_fun, body_fun, init_val):\n if _DISABLE_CONTROL_FLOW_PRIM:\n val = init_val\n while cond_fun(val):\n val = body_fun(val)\n return val\n else:\n return lax.while_loop(cond_fun, body_fun, init_val)\n\n\ndef fori_loop(lower, upper, body_fun, init_val):\n if _DISABLE_CONTROL_FLOW_PRIM:\n val = init_val\n for i in range(int(lower), int(upper)):\n val = body_fun(i, val)\n return val\n else:\n return lax.fori_loop(lower, upper, body_fun, init_val)\n\n\ndef not_jax_tracer(x):\n \"\"\"\n Checks if `x` is not an array generated inside `jit`, `pmap`, `vmap`, or `lax_control_flow`.\n \"\"\"\n return not isinstance(x, (JaxprTracer, BatchTracer))\n\n\ndef identity(x):\n return x\n\n\ndef fori_collect(lower, upper, body_fun, init_val, transform=identity,\n progbar=True, return_init_state=False, **progbar_opts):\n \"\"\"\n This looping construct works like :func:`~jax.lax.fori_loop` but with the additional\n effect of collecting values from the loop body. In addition, this allows for\n post-processing of these samples via `transform`, and progress bar updates.\n Note that, `progbar=False` will be faster, especially when collecting a\n lot of samples. Refer to example usage in :func:`~numpyro.infer.mcmc.hmc`.\n\n :param int lower: the index to start the collective work. In other words,\n we will skip collecting the first `lower` values.\n :param int upper: number of times to run the loop body.\n :param body_fun: a callable that takes a collection of\n `np.ndarray` and returns a collection with the same shape and\n `dtype`.\n :param init_val: initial value to pass as argument to `body_fun`. Can\n be any Python collection type containing `np.ndarray` objects.\n :param transform: a callable to post-process the values returned by `body_fn`.\n :param progbar: whether to post progress bar updates.\n :param bool return_init_state: If `True`, the state at iteration `lower-1`,\n where the collection begins, is also returned. This has the same type\n as `init_val`.\n :param `**progbar_opts`: optional additional progress bar arguments. A\n `diagnostics_fn` can be supplied which when passed the current value\n from `body_fun` returns a string that is used to update the progress\n bar postfix. 
Also a `progbar_desc` keyword argument can be supplied\n which is used to label the progress bar.\n :return: collection with the same type as `init_val` with values\n collected along the leading axis of `np.ndarray` objects.\n \"\"\"\n assert lower <= upper\n init_val_flat, unravel_fn = ravel_pytree(transform(init_val))\n ravel_fn = lambda x: ravel_pytree(transform(x))[0] # noqa: E731\n\n if not progbar:\n collection = np.zeros((upper - lower,) + init_val_flat.shape)\n\n def _body_fn(i, vals):\n val, collection, start_state = vals\n val = jit(body_fun)(val)\n start_state = lax.cond(i < lower,\n val, lambda x: x,\n start_state, lambda x: x)\n i = np.where(i >= lower, i - lower, 0)\n collection = ops.index_update(collection, i, ravel_fn(val))\n return val, collection, start_state\n\n _, collection, start_state = fori_loop(0, upper, _body_fn, (init_val, collection, init_val))\n else:\n diagnostics_fn = progbar_opts.pop('diagnostics_fn', None)\n progbar_desc = progbar_opts.pop('progbar_desc', lambda x: '')\n collection = []\n\n val, start_state = init_val, init_val\n with tqdm.trange(upper) as t:\n for i in t:\n val = jit(body_fun)(val)\n if i == lower - 1:\n start_state = val\n elif i >= lower:\n collection.append(jit(ravel_fn)(val))\n t.set_description(progbar_desc(i), refresh=False)\n if diagnostics_fn:\n t.set_postfix_str(diagnostics_fn(val), refresh=False)\n\n collection = np.stack(collection) if len(collection) > 0 else \\\n np.zeros((upper - lower,) + init_val_flat.shape)\n\n unravel_collection = vmap(unravel_fn)(collection)\n return (unravel_collection, start_state) if return_init_state else unravel_collection\n\n\ndef copy_docs_from(source_class, full_text=False):\n \"\"\"\n Decorator to copy class and method docs from source to destin class.\n \"\"\"\n\n def decorator(destin_class):\n # This works only in python 3.3+:\n # if not destin_class.__doc__:\n # destin_class.__doc__ = source_class.__doc__\n for name in dir(destin_class):\n if name.startswith('_'):\n continue\n destin_attr = getattr(destin_class, name)\n destin_attr = getattr(destin_attr, '__func__', destin_attr)\n source_attr = getattr(source_class, name, None)\n source_doc = getattr(source_attr, '__doc__', None)\n if source_doc and not getattr(destin_attr, '__doc__', None):\n if full_text or source_doc.startswith('See '):\n destin_doc = source_doc\n else:\n destin_doc = 'See :meth:`{}.{}.{}`'.format(\n source_class.__module__, source_class.__name__, name)\n if isinstance(destin_attr, property):\n # Set docs for object properties.\n # Since __doc__ is read-only, we need to reset the property\n # with the updated doc.\n updated_property = property(destin_attr.fget,\n destin_attr.fset,\n destin_attr.fdel,\n destin_doc)\n setattr(destin_class, name, updated_property)\n else:\n destin_attr.__doc__ = destin_doc\n return destin_class\n\n return decorator\n\n\npytree_metadata = namedtuple('pytree_metadata', ['flat', 'shape', 'size', 'dtype'])\n\n\ndef _ravel_list(*leaves):\n leaves_metadata = tree_map(lambda l: pytree_metadata(\n np.ravel(l), np.shape(l), np.size(l), canonicalize_dtype(lax.dtype(l))), leaves)\n leaves_idx = np.cumsum(np.array((0,) + tuple(d.size for d in leaves_metadata)))\n\n def unravel_list(arr):\n return [np.reshape(lax.dynamic_slice_in_dim(arr, leaves_idx[i], m.size),\n m.shape).astype(m.dtype)\n for i, m in enumerate(leaves_metadata)]\n\n flat = np.concatenate([m.flat for m in leaves_metadata]) if leaves_metadata else np.array([])\n return flat, unravel_list\n\n\ndef ravel_pytree(pytree):\n leaves, treedef = 
tree_flatten(pytree)\n flat, unravel_list = _ravel_list(*leaves)\n\n def unravel_pytree(arr):\n return tree_unflatten(treedef, unravel_list(arr))\n\n return flat, unravel_pytree\n", "path": "numpyro/util.py"}]} | 3,562 | 195 |
gh_patches_debug_37094 | rasdani/github-patches | git_diff | pantsbuild__pants-9783 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid project names of projects generated with `./pants idea-plugin`
Since the switch to the `.idea` project type (https://github.com/pantsbuild/pants/commit/bc8b6d2de121458aeab41c6b9afb45dea2b450f6), the name of the IntelliJ project generated with `./pants idea-plugin` is the same as the folder name in which the generated project files are located. Before the change it was more descriptive: the same as the imported target spec.
--- END ISSUE ---
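To make the regression concrete: the descriptive name the reporter misses comes from `IdeaPluginGen.get_project_name` in the file below, which escapes the target specs. A small standalone reproduction (hypothetical target spec, escaping logic copied from the listing):

```python
import re

PROJECT_NAME_LIMIT = 200

def get_project_name(target_specs):
    # Same escaping as IdeaPluginGen.get_project_name: squash anything that is
    # not alphanumeric, ':' or '_' into a dot, then truncate for the filesystem.
    escaped_name = re.sub("[^0-9a-zA-Z:_]+", ".", "__".join(target_specs))
    return escaped_name[:PROJECT_NAME_LIMIT]

print(get_project_name(["examples/src/scala/org/pantsbuild/example/hello::"]))
# examples.src.scala.org.pantsbuild.example.hello::
```

Before the `.idea` switch this string ended up as the project name shown in IntelliJ; afterwards the generated temporary folder name is shown instead.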
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/project_info/tasks/idea_plugin_gen.py`
Content:
```
1 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import json
5 import logging
6 import os
7 import pkgutil
8 import re
9 import shutil
10 import subprocess
11
12 from pants.backend.jvm.targets.jvm_target import JvmTarget
13 from pants.backend.python.targets.python_target import PythonTarget
14 from pants.base.build_environment import get_buildroot, get_scm
15 from pants.base.exceptions import TaskError
16 from pants.base.generator import Generator, TemplateData
17 from pants.task.console_task import ConsoleTask
18 from pants.util import desktop
19 from pants.util.contextutil import temporary_dir, temporary_file
20 from pants.util.dirutil import safe_mkdir
21
22 _TEMPLATE_BASEDIR = "templates/idea"
23
24 # Follow `export.py` for versioning strategy.
25 IDEA_PLUGIN_VERSION = "0.0.4"
26
27
28 class IdeaPluginGen(ConsoleTask):
29 """Invoke IntelliJ Pants plugin (installation required) to create a project.
30
31 The ideal workflow is to programmatically open idea -> select import -> import as pants project -> select project
32 path, but IDEA does not have CLI support for "select import" and "import as pants project" once it is opened.
33
34 Therefore, this task takes another approach to embed the target specs into a `iws` workspace file along
35 with an skeleton `ipr` project file.
36
37 Sample `iws`:
38 ********************************************************
39 <?xml version="1.0"?>
40 <project version="4">
41 <component name="PropertiesComponent">
42 <property name="targets" value="["/Users/me/workspace/pants/testprojects/tests/scala/org/pantsbuild/testproject/cp-directories/::"]" />
43 <property name="project_path" value="/Users/me/workspace/pants/testprojects/tests/scala/org/pantsbuild/testproject/cp-directories/" />
44 </component>
45 </project>
46 ********************************************************
47
48 Once pants plugin sees `targets` and `project_path`, it will simulate the import process on and populate the
49 existing skeleton project into a Pants project as if user is importing these targets.
50 """
51
52 PROJECT_NAME_LIMIT = 200
53
54 _register_console_transitivity_option = False
55
56 @classmethod
57 def register_options(cls, register):
58 super().register_options(register)
59 # TODO: https://github.com/pantsbuild/pants/issues/3198
60 # scala/java-language level should use what Pants already knows.
61 register(
62 "--open",
63 type=bool,
64 default=True,
65 help="Attempts to open the generated project in IDEA.",
66 )
67 register(
68 "--incremental-import",
69 type=int,
70 default=None,
71 help="Enable incremental import of targets with the given graph depth. Supported "
72 "by IntelliJ Pants plugin versions `>= 1.9.2`.",
73 )
74 register(
75 "--dep-as-jar",
76 type=bool,
77 default=False,
78 help="If true, treat source dependencies as 3rdparty jars.",
79 )
80 register(
81 "--java-encoding",
82 default="UTF-8",
83 help="Sets the file encoding for java files in this project.",
84 )
85 register(
86 "--open-with",
87 type=str,
88 default=None,
89 recursive=True,
90 help="Program used to open the generated IntelliJ project.",
91 )
92 register(
93 "--debug_port",
94 type=int,
95 default=5005,
96 help="Port to use for launching tasks under the debugger.",
97 )
98 register(
99 "--java-jdk-name",
100 default=None,
101 help="Sets the jdk used to compile the project's java sources. If unset the default "
102 "jdk name for the --java-language-level is used",
103 )
104 register(
105 "--java-language-level",
106 type=int,
107 default=8,
108 help="Sets the java language and jdk used to compile the project's java sources.",
109 )
110 register(
111 "--possible-paths",
112 type=list,
113 default=["/Applications/IntelliJ IDEA CE.app", "/Applications/IntelliJ IDEA.app"],
114 help="Sets the the list of paths for IntelliJ lookup.",
115 )
116
117 @property
118 def act_transitively(self):
119 return True
120
121 def __init__(self, *args, **kwargs):
122 super().__init__(*args, **kwargs)
123
124 self.open = self.get_options().open
125
126 self.java_encoding = self.get_options().java_encoding
127 self.idea_modules_template = os.path.join(_TEMPLATE_BASEDIR, "modules-12.mustache")
128 self.idea_workspace_template = os.path.join(_TEMPLATE_BASEDIR, "workspace-12.mustache")
129 self.java_language_level = self.get_options().java_language_level
130 self.possible_paths = self.get_options().possible_paths
131
132 if self.get_options().java_jdk_name:
133 self.java_jdk = self.get_options().java_jdk_name
134 else:
135 self.java_jdk = "1.{}".format(self.java_language_level)
136
137 output_dir = os.path.join(get_buildroot(), ".idea", self.__class__.__name__)
138 safe_mkdir(output_dir)
139
140 with temporary_dir(root_dir=output_dir, cleanup=False) as output_project_dir:
141 self.gen_project_workdir = output_project_dir
142 self.idea_workspace_filename = os.path.join(
143 self.gen_project_workdir, ".idea", "workspace.xml"
144 )
145 self.idea_modules_filename = os.path.join(
146 self.gen_project_workdir, ".idea", "modules.xml"
147 )
148 self.intellij_output_dir = os.path.join(self.gen_project_workdir, "out")
149 self.intellij_idea_dir = os.path.join(self.gen_project_workdir, ".idea")
150
151 @classmethod
152 def get_project_name(cls, target_specs):
153 escaped_name = re.sub("[^0-9a-zA-Z:_]+", ".", "__".join(target_specs))
154 # take up to PROJECT_NAME_LIMIT chars as project file name due to filesystem constraint.
155 return escaped_name[: cls.PROJECT_NAME_LIMIT]
156
157 # TODO: https://github.com/pantsbuild/pants/issues/3198
158 def generate_project(self):
159 outdir = os.path.abspath(self.intellij_output_dir)
160 if not os.path.exists(outdir):
161 os.makedirs(outdir)
162
163 scm = get_scm()
164 configured_project = TemplateData(
165 root_dir=get_buildroot(),
166 outdir=outdir,
167 git_root=scm.worktree if scm else None,
168 java=TemplateData(
169 encoding=self.java_encoding,
170 jdk=self.java_jdk,
171 language_level="JDK_1_{}".format(self.java_language_level),
172 ),
173 debug_port=self.get_options().debug_port,
174 )
175
176 abs_target_specs = [
177 os.path.join(get_buildroot(), spec) for spec in self.context.options.specs
178 ]
179 configured_workspace = TemplateData(
180 targets=json.dumps(abs_target_specs),
181 project_path=os.path.join(get_buildroot(), abs_target_specs[0].split(":")[0]),
182 idea_plugin_version=IDEA_PLUGIN_VERSION,
183 incremental_import=self.get_options().incremental_import,
184 dep_as_jar=self.get_options().dep_as_jar,
185 )
186
187 # Generate (without merging in any extra components).
188 safe_mkdir(os.path.abspath(self.intellij_output_dir))
189 safe_mkdir(os.path.abspath(self.intellij_idea_dir))
190
191 def gen_file(template_file_name, **mustache_kwargs):
192 return self._generate_to_tempfile(
193 Generator(
194 pkgutil.get_data(__name__, template_file_name).decode(), **mustache_kwargs
195 )
196 )
197
198 idea_ws = gen_file(self.idea_workspace_template, workspace=configured_workspace)
199 idea_modules = gen_file(self.idea_modules_template, project=configured_project)
200
201 shutil.move(idea_ws, self.idea_workspace_filename)
202 shutil.move(idea_modules, self.idea_modules_filename)
203
204 return self.gen_project_workdir
205
206 def _generate_to_tempfile(self, generator):
207 """Applies the specified generator to a temp file and returns the path to that file.
208
209 We generate into a temp file so that we don't lose any manual customizations on error.
210 """
211 with temporary_file(cleanup=False, binary_mode=False) as output:
212 generator.write(output)
213 return output.name
214
215 def console_output(self, _targets):
216 if not self.context.options.specs:
217 raise TaskError("No targets specified.")
218
219 # Heuristics to guess whether user tries to load a python project,
220 # in which case intellij project sdk has to be set up manually.
221 jvm_target_num = len([x for x in self.context.target_roots if isinstance(x, JvmTarget)])
222 python_target_num = len(
223 [x for x in self.context.target_roots if isinstance(x, PythonTarget)]
224 )
225 if python_target_num > jvm_target_num:
226 logging.warn(
227 "This is likely a python project. Please make sure to "
228 "select the proper python interpreter as Project SDK in IntelliJ."
229 )
230
231 ide_file = self.generate_project()
232 yield self.gen_project_workdir
233
234 if ide_file and self.get_options().open:
235 open_with = self.get_options().open_with
236 if open_with:
237 null = open(os.devnull, "wb")
238 subprocess.Popen([open_with, ide_file], stdout=null, stderr=null)
239 else:
240 try:
241 desktop.idea_open(ide_file, self.possible_paths[::-1])
242 except desktop.OpenError as e:
243 raise TaskError(e)
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py b/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py
--- a/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py
+++ b/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py
@@ -138,6 +138,7 @@
safe_mkdir(output_dir)
with temporary_dir(root_dir=output_dir, cleanup=False) as output_project_dir:
+ self.project_name = self.get_project_name(self.context.options.specs)
self.gen_project_workdir = output_project_dir
self.idea_workspace_filename = os.path.join(
self.gen_project_workdir, ".idea", "workspace.xml"
@@ -145,6 +146,7 @@
self.idea_modules_filename = os.path.join(
self.gen_project_workdir, ".idea", "modules.xml"
)
+ self.idea_name_filename = os.path.join(self.gen_project_workdir, ".idea", ".name")
self.intellij_output_dir = os.path.join(self.gen_project_workdir, "out")
self.intellij_idea_dir = os.path.join(self.gen_project_workdir, ".idea")
@@ -197,9 +199,11 @@
idea_ws = gen_file(self.idea_workspace_template, workspace=configured_workspace)
idea_modules = gen_file(self.idea_modules_template, project=configured_project)
+ idea_dotname = self._write_to_tempfile(self.project_name)
shutil.move(idea_ws, self.idea_workspace_filename)
shutil.move(idea_modules, self.idea_modules_filename)
+ shutil.move(idea_dotname, self.idea_name_filename)
return self.gen_project_workdir
@@ -212,6 +216,12 @@
generator.write(output)
return output.name
+ def _write_to_tempfile(self, content):
+ """Writes coontent to a temp file and returns the path to that file."""
+ with temporary_file(cleanup=False, binary_mode=False) as output:
+ output.write(content)
+ return output.name
+
def console_output(self, _targets):
if not self.context.options.specs:
raise TaskError("No targets specified.")
| {"golden_diff": "diff --git a/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py b/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py\n--- a/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py\n+++ b/src/python/pants/backend/project_info/tasks/idea_plugin_gen.py\n@@ -138,6 +138,7 @@\n safe_mkdir(output_dir)\n \n with temporary_dir(root_dir=output_dir, cleanup=False) as output_project_dir:\n+ self.project_name = self.get_project_name(self.context.options.specs)\n self.gen_project_workdir = output_project_dir\n self.idea_workspace_filename = os.path.join(\n self.gen_project_workdir, \".idea\", \"workspace.xml\"\n@@ -145,6 +146,7 @@\n self.idea_modules_filename = os.path.join(\n self.gen_project_workdir, \".idea\", \"modules.xml\"\n )\n+ self.idea_name_filename = os.path.join(self.gen_project_workdir, \".idea\", \".name\")\n self.intellij_output_dir = os.path.join(self.gen_project_workdir, \"out\")\n self.intellij_idea_dir = os.path.join(self.gen_project_workdir, \".idea\")\n \n@@ -197,9 +199,11 @@\n \n idea_ws = gen_file(self.idea_workspace_template, workspace=configured_workspace)\n idea_modules = gen_file(self.idea_modules_template, project=configured_project)\n+ idea_dotname = self._write_to_tempfile(self.project_name)\n \n shutil.move(idea_ws, self.idea_workspace_filename)\n shutil.move(idea_modules, self.idea_modules_filename)\n+ shutil.move(idea_dotname, self.idea_name_filename)\n \n return self.gen_project_workdir\n \n@@ -212,6 +216,12 @@\n generator.write(output)\n return output.name\n \n+ def _write_to_tempfile(self, content):\n+ \"\"\"Writes coontent to a temp file and returns the path to that file.\"\"\"\n+ with temporary_file(cleanup=False, binary_mode=False) as output:\n+ output.write(content)\n+ return output.name\n+\n def console_output(self, _targets):\n if not self.context.options.specs:\n raise TaskError(\"No targets specified.\")\n", "issue": "Invalid project names of projects generated with `./pants idea-plugin` \nSince the switch to `.idea` project type (https://github.com/pantsbuild/pants/commit/bc8b6d2de121458aeab41c6b9afb45dea2b450f6), the name of IntelliJ project generated with `./pants idea-plugin` is the same as the folder name where the generated project files are located. 
Before the change, it was more descriptive, same as the imported target spec\n", "before_files": [{"content": "# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport json\nimport logging\nimport os\nimport pkgutil\nimport re\nimport shutil\nimport subprocess\n\nfrom pants.backend.jvm.targets.jvm_target import JvmTarget\nfrom pants.backend.python.targets.python_target import PythonTarget\nfrom pants.base.build_environment import get_buildroot, get_scm\nfrom pants.base.exceptions import TaskError\nfrom pants.base.generator import Generator, TemplateData\nfrom pants.task.console_task import ConsoleTask\nfrom pants.util import desktop\nfrom pants.util.contextutil import temporary_dir, temporary_file\nfrom pants.util.dirutil import safe_mkdir\n\n_TEMPLATE_BASEDIR = \"templates/idea\"\n\n# Follow `export.py` for versioning strategy.\nIDEA_PLUGIN_VERSION = \"0.0.4\"\n\n\nclass IdeaPluginGen(ConsoleTask):\n \"\"\"Invoke IntelliJ Pants plugin (installation required) to create a project.\n\n The ideal workflow is to programmatically open idea -> select import -> import as pants project -> select project\n path, but IDEA does not have CLI support for \"select import\" and \"import as pants project\" once it is opened.\n\n Therefore, this task takes another approach to embed the target specs into a `iws` workspace file along\n with an skeleton `ipr` project file.\n\n Sample `iws`:\n ********************************************************\n <?xml version=\"1.0\"?>\n <project version=\"4\">\n <component name=\"PropertiesComponent\">\n <property name=\"targets\" value=\"["/Users/me/workspace/pants/testprojects/tests/scala/org/pantsbuild/testproject/cp-directories/::"]\" />\n <property name=\"project_path\" value=\"/Users/me/workspace/pants/testprojects/tests/scala/org/pantsbuild/testproject/cp-directories/\" />\n </component>\n </project>\n ********************************************************\n\n Once pants plugin sees `targets` and `project_path`, it will simulate the import process on and populate the\n existing skeleton project into a Pants project as if user is importing these targets.\n \"\"\"\n\n PROJECT_NAME_LIMIT = 200\n\n _register_console_transitivity_option = False\n\n @classmethod\n def register_options(cls, register):\n super().register_options(register)\n # TODO: https://github.com/pantsbuild/pants/issues/3198\n # scala/java-language level should use what Pants already knows.\n register(\n \"--open\",\n type=bool,\n default=True,\n help=\"Attempts to open the generated project in IDEA.\",\n )\n register(\n \"--incremental-import\",\n type=int,\n default=None,\n help=\"Enable incremental import of targets with the given graph depth. Supported \"\n \"by IntelliJ Pants plugin versions `>= 1.9.2`.\",\n )\n register(\n \"--dep-as-jar\",\n type=bool,\n default=False,\n help=\"If true, treat source dependencies as 3rdparty jars.\",\n )\n register(\n \"--java-encoding\",\n default=\"UTF-8\",\n help=\"Sets the file encoding for java files in this project.\",\n )\n register(\n \"--open-with\",\n type=str,\n default=None,\n recursive=True,\n help=\"Program used to open the generated IntelliJ project.\",\n )\n register(\n \"--debug_port\",\n type=int,\n default=5005,\n help=\"Port to use for launching tasks under the debugger.\",\n )\n register(\n \"--java-jdk-name\",\n default=None,\n help=\"Sets the jdk used to compile the project's java sources. 
If unset the default \"\n \"jdk name for the --java-language-level is used\",\n )\n register(\n \"--java-language-level\",\n type=int,\n default=8,\n help=\"Sets the java language and jdk used to compile the project's java sources.\",\n )\n register(\n \"--possible-paths\",\n type=list,\n default=[\"/Applications/IntelliJ IDEA CE.app\", \"/Applications/IntelliJ IDEA.app\"],\n help=\"Sets the the list of paths for IntelliJ lookup.\",\n )\n\n @property\n def act_transitively(self):\n return True\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.open = self.get_options().open\n\n self.java_encoding = self.get_options().java_encoding\n self.idea_modules_template = os.path.join(_TEMPLATE_BASEDIR, \"modules-12.mustache\")\n self.idea_workspace_template = os.path.join(_TEMPLATE_BASEDIR, \"workspace-12.mustache\")\n self.java_language_level = self.get_options().java_language_level\n self.possible_paths = self.get_options().possible_paths\n\n if self.get_options().java_jdk_name:\n self.java_jdk = self.get_options().java_jdk_name\n else:\n self.java_jdk = \"1.{}\".format(self.java_language_level)\n\n output_dir = os.path.join(get_buildroot(), \".idea\", self.__class__.__name__)\n safe_mkdir(output_dir)\n\n with temporary_dir(root_dir=output_dir, cleanup=False) as output_project_dir:\n self.gen_project_workdir = output_project_dir\n self.idea_workspace_filename = os.path.join(\n self.gen_project_workdir, \".idea\", \"workspace.xml\"\n )\n self.idea_modules_filename = os.path.join(\n self.gen_project_workdir, \".idea\", \"modules.xml\"\n )\n self.intellij_output_dir = os.path.join(self.gen_project_workdir, \"out\")\n self.intellij_idea_dir = os.path.join(self.gen_project_workdir, \".idea\")\n\n @classmethod\n def get_project_name(cls, target_specs):\n escaped_name = re.sub(\"[^0-9a-zA-Z:_]+\", \".\", \"__\".join(target_specs))\n # take up to PROJECT_NAME_LIMIT chars as project file name due to filesystem constraint.\n return escaped_name[: cls.PROJECT_NAME_LIMIT]\n\n # TODO: https://github.com/pantsbuild/pants/issues/3198\n def generate_project(self):\n outdir = os.path.abspath(self.intellij_output_dir)\n if not os.path.exists(outdir):\n os.makedirs(outdir)\n\n scm = get_scm()\n configured_project = TemplateData(\n root_dir=get_buildroot(),\n outdir=outdir,\n git_root=scm.worktree if scm else None,\n java=TemplateData(\n encoding=self.java_encoding,\n jdk=self.java_jdk,\n language_level=\"JDK_1_{}\".format(self.java_language_level),\n ),\n debug_port=self.get_options().debug_port,\n )\n\n abs_target_specs = [\n os.path.join(get_buildroot(), spec) for spec in self.context.options.specs\n ]\n configured_workspace = TemplateData(\n targets=json.dumps(abs_target_specs),\n project_path=os.path.join(get_buildroot(), abs_target_specs[0].split(\":\")[0]),\n idea_plugin_version=IDEA_PLUGIN_VERSION,\n incremental_import=self.get_options().incremental_import,\n dep_as_jar=self.get_options().dep_as_jar,\n )\n\n # Generate (without merging in any extra components).\n safe_mkdir(os.path.abspath(self.intellij_output_dir))\n safe_mkdir(os.path.abspath(self.intellij_idea_dir))\n\n def gen_file(template_file_name, **mustache_kwargs):\n return self._generate_to_tempfile(\n Generator(\n pkgutil.get_data(__name__, template_file_name).decode(), **mustache_kwargs\n )\n )\n\n idea_ws = gen_file(self.idea_workspace_template, workspace=configured_workspace)\n idea_modules = gen_file(self.idea_modules_template, project=configured_project)\n\n shutil.move(idea_ws, 
self.idea_workspace_filename)\n shutil.move(idea_modules, self.idea_modules_filename)\n\n return self.gen_project_workdir\n\n def _generate_to_tempfile(self, generator):\n \"\"\"Applies the specified generator to a temp file and returns the path to that file.\n\n We generate into a temp file so that we don't lose any manual customizations on error.\n \"\"\"\n with temporary_file(cleanup=False, binary_mode=False) as output:\n generator.write(output)\n return output.name\n\n def console_output(self, _targets):\n if not self.context.options.specs:\n raise TaskError(\"No targets specified.\")\n\n # Heuristics to guess whether user tries to load a python project,\n # in which case intellij project sdk has to be set up manually.\n jvm_target_num = len([x for x in self.context.target_roots if isinstance(x, JvmTarget)])\n python_target_num = len(\n [x for x in self.context.target_roots if isinstance(x, PythonTarget)]\n )\n if python_target_num > jvm_target_num:\n logging.warn(\n \"This is likely a python project. Please make sure to \"\n \"select the proper python interpreter as Project SDK in IntelliJ.\"\n )\n\n ide_file = self.generate_project()\n yield self.gen_project_workdir\n\n if ide_file and self.get_options().open:\n open_with = self.get_options().open_with\n if open_with:\n null = open(os.devnull, \"wb\")\n subprocess.Popen([open_with, ide_file], stdout=null, stderr=null)\n else:\n try:\n desktop.idea_open(ide_file, self.possible_paths[::-1])\n except desktop.OpenError as e:\n raise TaskError(e)\n", "path": "src/python/pants/backend/project_info/tasks/idea_plugin_gen.py"}], "after_files": [{"content": "# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport json\nimport logging\nimport os\nimport pkgutil\nimport re\nimport shutil\nimport subprocess\n\nfrom pants.backend.jvm.targets.jvm_target import JvmTarget\nfrom pants.backend.python.targets.python_target import PythonTarget\nfrom pants.base.build_environment import get_buildroot, get_scm\nfrom pants.base.exceptions import TaskError\nfrom pants.base.generator import Generator, TemplateData\nfrom pants.task.console_task import ConsoleTask\nfrom pants.util import desktop\nfrom pants.util.contextutil import temporary_dir, temporary_file\nfrom pants.util.dirutil import safe_mkdir\n\n_TEMPLATE_BASEDIR = \"templates/idea\"\n\n# Follow `export.py` for versioning strategy.\nIDEA_PLUGIN_VERSION = \"0.0.4\"\n\n\nclass IdeaPluginGen(ConsoleTask):\n \"\"\"Invoke IntelliJ Pants plugin (installation required) to create a project.\n\n The ideal workflow is to programmatically open idea -> select import -> import as pants project -> select project\n path, but IDEA does not have CLI support for \"select import\" and \"import as pants project\" once it is opened.\n\n Therefore, this task takes another approach to embed the target specs into a `iws` workspace file along\n with an skeleton `ipr` project file.\n\n Sample `iws`:\n ********************************************************\n <?xml version=\"1.0\"?>\n <project version=\"4\">\n <component name=\"PropertiesComponent\">\n <property name=\"targets\" value=\"["/Users/me/workspace/pants/testprojects/tests/scala/org/pantsbuild/testproject/cp-directories/::"]\" />\n <property name=\"project_path\" value=\"/Users/me/workspace/pants/testprojects/tests/scala/org/pantsbuild/testproject/cp-directories/\" />\n </component>\n </project>\n ********************************************************\n\n Once pants plugin sees 
`targets` and `project_path`, it will simulate the import process on and populate the\n existing skeleton project into a Pants project as if user is importing these targets.\n \"\"\"\n\n PROJECT_NAME_LIMIT = 200\n\n _register_console_transitivity_option = False\n\n @classmethod\n def register_options(cls, register):\n super().register_options(register)\n # TODO: https://github.com/pantsbuild/pants/issues/3198\n # scala/java-language level should use what Pants already knows.\n register(\n \"--open\",\n type=bool,\n default=True,\n help=\"Attempts to open the generated project in IDEA.\",\n )\n register(\n \"--incremental-import\",\n type=int,\n default=None,\n help=\"Enable incremental import of targets with the given graph depth. Supported \"\n \"by IntelliJ Pants plugin versions `>= 1.9.2`.\",\n )\n register(\n \"--dep-as-jar\",\n type=bool,\n default=False,\n help=\"If true, treat source dependencies as 3rdparty jars.\",\n )\n register(\n \"--java-encoding\",\n default=\"UTF-8\",\n help=\"Sets the file encoding for java files in this project.\",\n )\n register(\n \"--open-with\",\n type=str,\n default=None,\n recursive=True,\n help=\"Program used to open the generated IntelliJ project.\",\n )\n register(\n \"--debug_port\",\n type=int,\n default=5005,\n help=\"Port to use for launching tasks under the debugger.\",\n )\n register(\n \"--java-jdk-name\",\n default=None,\n help=\"Sets the jdk used to compile the project's java sources. If unset the default \"\n \"jdk name for the --java-language-level is used\",\n )\n register(\n \"--java-language-level\",\n type=int,\n default=8,\n help=\"Sets the java language and jdk used to compile the project's java sources.\",\n )\n register(\n \"--possible-paths\",\n type=list,\n default=[\"/Applications/IntelliJ IDEA CE.app\", \"/Applications/IntelliJ IDEA.app\"],\n help=\"Sets the the list of paths for IntelliJ lookup.\",\n )\n\n @property\n def act_transitively(self):\n return True\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.open = self.get_options().open\n\n self.java_encoding = self.get_options().java_encoding\n self.idea_modules_template = os.path.join(_TEMPLATE_BASEDIR, \"modules-12.mustache\")\n self.idea_workspace_template = os.path.join(_TEMPLATE_BASEDIR, \"workspace-12.mustache\")\n self.java_language_level = self.get_options().java_language_level\n self.possible_paths = self.get_options().possible_paths\n\n if self.get_options().java_jdk_name:\n self.java_jdk = self.get_options().java_jdk_name\n else:\n self.java_jdk = \"1.{}\".format(self.java_language_level)\n\n output_dir = os.path.join(get_buildroot(), \".idea\", self.__class__.__name__)\n safe_mkdir(output_dir)\n\n with temporary_dir(root_dir=output_dir, cleanup=False) as output_project_dir:\n self.project_name = self.get_project_name(self.context.options.specs)\n self.gen_project_workdir = output_project_dir\n self.idea_workspace_filename = os.path.join(\n self.gen_project_workdir, \".idea\", \"workspace.xml\"\n )\n self.idea_modules_filename = os.path.join(\n self.gen_project_workdir, \".idea\", \"modules.xml\"\n )\n self.idea_name_filename = os.path.join(self.gen_project_workdir, \".idea\", \".name\")\n self.intellij_output_dir = os.path.join(self.gen_project_workdir, \"out\")\n self.intellij_idea_dir = os.path.join(self.gen_project_workdir, \".idea\")\n\n @classmethod\n def get_project_name(cls, target_specs):\n escaped_name = re.sub(\"[^0-9a-zA-Z:_]+\", \".\", \"__\".join(target_specs))\n # take up to PROJECT_NAME_LIMIT chars as project 
file name due to filesystem constraint.\n return escaped_name[: cls.PROJECT_NAME_LIMIT]\n\n # TODO: https://github.com/pantsbuild/pants/issues/3198\n def generate_project(self):\n outdir = os.path.abspath(self.intellij_output_dir)\n if not os.path.exists(outdir):\n os.makedirs(outdir)\n\n scm = get_scm()\n configured_project = TemplateData(\n root_dir=get_buildroot(),\n outdir=outdir,\n git_root=scm.worktree if scm else None,\n java=TemplateData(\n encoding=self.java_encoding,\n jdk=self.java_jdk,\n language_level=\"JDK_1_{}\".format(self.java_language_level),\n ),\n debug_port=self.get_options().debug_port,\n )\n\n abs_target_specs = [\n os.path.join(get_buildroot(), spec) for spec in self.context.options.specs\n ]\n configured_workspace = TemplateData(\n targets=json.dumps(abs_target_specs),\n project_path=os.path.join(get_buildroot(), abs_target_specs[0].split(\":\")[0]),\n idea_plugin_version=IDEA_PLUGIN_VERSION,\n incremental_import=self.get_options().incremental_import,\n dep_as_jar=self.get_options().dep_as_jar,\n )\n\n # Generate (without merging in any extra components).\n safe_mkdir(os.path.abspath(self.intellij_output_dir))\n safe_mkdir(os.path.abspath(self.intellij_idea_dir))\n\n def gen_file(template_file_name, **mustache_kwargs):\n return self._generate_to_tempfile(\n Generator(\n pkgutil.get_data(__name__, template_file_name).decode(), **mustache_kwargs\n )\n )\n\n idea_ws = gen_file(self.idea_workspace_template, workspace=configured_workspace)\n idea_modules = gen_file(self.idea_modules_template, project=configured_project)\n idea_dotname = self._write_to_tempfile(self.project_name)\n\n shutil.move(idea_ws, self.idea_workspace_filename)\n shutil.move(idea_modules, self.idea_modules_filename)\n shutil.move(idea_dotname, self.idea_name_filename)\n\n return self.gen_project_workdir\n\n def _generate_to_tempfile(self, generator):\n \"\"\"Applies the specified generator to a temp file and returns the path to that file.\n\n We generate into a temp file so that we don't lose any manual customizations on error.\n \"\"\"\n with temporary_file(cleanup=False, binary_mode=False) as output:\n generator.write(output)\n return output.name\n\n def _write_to_tempfile(self, content):\n \"\"\"Writes coontent to a temp file and returns the path to that file.\"\"\"\n with temporary_file(cleanup=False, binary_mode=False) as output:\n output.write(content)\n return output.name\n\n def console_output(self, _targets):\n if not self.context.options.specs:\n raise TaskError(\"No targets specified.\")\n\n # Heuristics to guess whether user tries to load a python project,\n # in which case intellij project sdk has to be set up manually.\n jvm_target_num = len([x for x in self.context.target_roots if isinstance(x, JvmTarget)])\n python_target_num = len(\n [x for x in self.context.target_roots if isinstance(x, PythonTarget)]\n )\n if python_target_num > jvm_target_num:\n logging.warn(\n \"This is likely a python project. 
Please make sure to \"\n \"select the proper python interpreter as Project SDK in IntelliJ.\"\n )\n\n ide_file = self.generate_project()\n yield self.gen_project_workdir\n\n if ide_file and self.get_options().open:\n open_with = self.get_options().open_with\n if open_with:\n null = open(os.devnull, \"wb\")\n subprocess.Popen([open_with, ide_file], stdout=null, stderr=null)\n else:\n try:\n desktop.idea_open(ide_file, self.possible_paths[::-1])\n except desktop.OpenError as e:\n raise TaskError(e)\n", "path": "src/python/pants/backend/project_info/tasks/idea_plugin_gen.py"}]} | 3,025 | 478 |
gh_patches_debug_31651 | rasdani/github-patches | git_diff | pytorch__ignite-626 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch website theme to PyTorch?
I am updating the docs theme to leverage [pytorch_sphinx_theme](https://github.com/pytorch/pytorch_sphinx_theme) so that Ignite docs look like PyTorch's. The diffs are minimal.
Pros:
- closer to PyTorch
- darkifiable with existing Userstyles (see screenshots)
Caveats:
- pytorch_sphinx_theme comes with footers that are not really relevant to Ignite, and I could not yet find a way to alter them.
- the links to various Ignite versions are eaten by some monster somewhere
Here are some screenshots (built with `make html` or with `sphinx-versioning ...`). What is your opinion? Should I open a PR?
## some text and code (that look nice IMO)
<img width="1348" alt="Screenshot 2019-09-14 at 20 00 54" src="https://user-images.githubusercontent.com/1936828/64912083-697e6480-d72a-11e9-8712-1bbbe64aab4b.png">
<img width="1348" alt="Screenshot 2019-09-14 at 19 44 04" src="https://user-images.githubusercontent.com/1936828/64912055-1f957e80-d72a-11e9-9478-2ae7e4891a50.png">
## the main caveat: PyTorch footer (irrelevant to Ignite)
<img width="1348" alt="Screenshot 2019-09-14 at 20 02 23" src="https://user-images.githubusercontent.com/1936828/64912094-9763a900-d72a-11e9-9eae-30cc55e67c59.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 import os
16 import sys
17 sys.path.insert(0, os.path.abspath('../..'))
18 import ignite
19 import sphinx_rtd_theme
20
21 # -- Project information -----------------------------------------------------
22
23 project = 'ignite'
24 copyright = '2018, Torch Contributors'
25 author = 'Torch Contributors'
26
27 # The short X.Y version
28 try:
29 version = os.environ['code_version']
30 if 'master' in version:
31 version = 'master (' + ignite.__version__ + ')'
32 else:
33 version = version.replace('v', '')
34 except KeyError:
35 version = ignite.__version__
36
37 # The full version, including alpha/beta/rc tags
38 release = 'master'
39
40
41 # -- General configuration ---------------------------------------------------
42
43 # If your documentation needs a minimal Sphinx version, state it here.
44 #
45 # needs_sphinx = '1.0'
46
47 # Add any Sphinx extension module names here, as strings. They can be
48 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
49 # ones.
50 extensions = [
51 'sphinx.ext.autosummary',
52 'sphinx.ext.doctest',
53 'sphinx.ext.intersphinx',
54 'sphinx.ext.todo',
55 'sphinx.ext.coverage',
56 'sphinx.ext.mathjax',
57 'sphinx.ext.napoleon',
58 'sphinx.ext.viewcode'
59 ]
60
61 # Add any paths that contain templates here, relative to this directory.
62 templates_path = ['_templates']
63
64 # The suffix(es) of source filenames.
65 # You can specify multiple suffix as a list of string:
66 #
67 # source_suffix = ['.rst', '.md']
68 source_suffix = '.rst'
69
70 # The master toctree document.
71 master_doc = 'index'
72
73 # The language for content autogenerated by Sphinx. Refer to documentation
74 # for a list of supported languages.
75 #
76 # This is also used if you do content translation via gettext catalogs.
77 # Usually you set "language" from the command line for these cases.
78 language = None
79
80 # List of patterns, relative to source directory, that match files and
81 # directories to ignore when looking for source files.
82 # This pattern also affects html_static_path and html_extra_path .
83 exclude_patterns = []
84
85 # The name of the Pygments (syntax highlighting) style to use.
86 pygments_style = 'sphinx'
87
88
89 # -- Options for HTML output -------------------------------------------------
90
91 # The theme to use for HTML and HTML Help pages. See the documentation for
92 # a list of builtin themes.
93 #
94 html_theme = 'sphinx_rtd_theme'
95 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
96
97 html_theme_options = {
98 'collapse_navigation': False,
99 'display_version': True,
100 'logo_only': True,
101 }
102
103 html_logo = '_static/img/ignite-logo-dark.svg'
104
105 # Theme options are theme-specific and customize the look and feel of a theme
106 # further. For a list of options available for each theme, see the
107 # documentation.
108 #
109 # html_theme_options = {}
110
111 # Add any paths that contain custom static files (such as style sheets) here,
112 # relative to this directory. They are copied after the builtin static files,
113 # so a file named "default.css" will overwrite the builtin "default.css".
114 html_static_path = ['_static']
115
116 html_context = {
117 'css_files': [
118 'https://fonts.googleapis.com/css?family=Lato',
119 '_static/css/pytorch_theme.css'
120 ],
121 }
122
123
124 # -- Options for HTMLHelp output ---------------------------------------------
125
126 # Output file base name for HTML help builder.
127 htmlhelp_basename = 'ignitedoc'
128
129
130 # -- Options for LaTeX output ------------------------------------------------
131
132 latex_elements = {
133 # The paper size ('letterpaper' or 'a4paper').
134 #
135 # 'papersize': 'letterpaper',
136
137 # The font size ('10pt', '11pt' or '12pt').
138 #
139 # 'pointsize': '10pt',
140
141 # Additional stuff for the LaTeX preamble.
142 #
143 # 'preamble': '',
144
145 # Latex figure (float) alignment
146 #
147 # 'figure_align': 'htbp',
148 }
149
150 # Grouping the document tree into LaTeX files. List of tuples
151 # (source start file, target name, title,
152 # author, documentclass [howto, manual, or own class]).
153 latex_documents = [
154 (master_doc, 'ignite.tex', 'ignite Documentation',
155 'Torch Contributors', 'manual'),
156 ]
157
158
159 # -- Options for manual page output ------------------------------------------
160
161 # One entry per manual page. List of tuples
162 # (source start file, name, description, authors, manual section).
163 man_pages = [
164 (master_doc, 'ignite', 'ignite Documentation',
165 [author], 1)
166 ]
167
168
169 # -- Options for Texinfo output ----------------------------------------------
170
171 # Grouping the document tree into Texinfo files. List of tuples
172 # (source start file, target name, title, author,
173 # dir menu entry, description, category)
174 texinfo_documents = [
175 (master_doc, 'ignite', 'ignite Documentation',
176 author, 'ignite', 'One line description of project.',
177 'Miscellaneous'),
178 ]
179
180
181 # -- Extension configuration -------------------------------------------------
182
183 # -- Options for intersphinx extension ---------------------------------------
184
185 # Example configuration for intersphinx: refer to the Python standard library.
186 intersphinx_mapping = {'https://docs.python.org/': None}
187
188 # -- Options for todo extension ----------------------------------------------
189
190 # If true, `todo` and `todoList` produce output, else they produce nothing.
191 todo_include_todos = True
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -16,12 +16,12 @@
import sys
sys.path.insert(0, os.path.abspath('../..'))
import ignite
-import sphinx_rtd_theme
+import pytorch_sphinx_theme
# -- Project information -----------------------------------------------------
project = 'ignite'
-copyright = '2018, Torch Contributors'
+copyright = '2019, Torch Contributors'
author = 'Torch Contributors'
# The short X.Y version
@@ -91,10 +91,11 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'sphinx_rtd_theme'
-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
+html_theme = 'pytorch_sphinx_theme'
+html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
html_theme_options = {
+ 'canonical_url': 'https://pytorch.org/ignite/index.html',
'collapse_navigation': False,
'display_version': True,
'logo_only': True,
@@ -111,12 +112,13 @@
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = ['_static', '_templates/_static']
html_context = {
'css_files': [
- 'https://fonts.googleapis.com/css?family=Lato',
- '_static/css/pytorch_theme.css'
+ # 'https://fonts.googleapis.com/css?family=Lato',
+ # '_static/css/pytorch_theme.css'
+ '_static/css/ignite_theme.css'
],
}
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -16,12 +16,12 @@\n import sys\n sys.path.insert(0, os.path.abspath('../..'))\n import ignite\n-import sphinx_rtd_theme\n+import pytorch_sphinx_theme\n \n # -- Project information -----------------------------------------------------\n \n project = 'ignite'\n-copyright = '2018, Torch Contributors'\n+copyright = '2019, Torch Contributors'\n author = 'Torch Contributors'\n \n # The short X.Y version\n@@ -91,10 +91,11 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n-html_theme = 'sphinx_rtd_theme'\n-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n+html_theme = 'pytorch_sphinx_theme'\n+html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n \n html_theme_options = {\n+ 'canonical_url': 'https://pytorch.org/ignite/index.html',\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': True,\n@@ -111,12 +112,13 @@\n # Add any paths that contain custom static files (such as style sheets) here,\n # relative to this directory. They are copied after the builtin static files,\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n-html_static_path = ['_static']\n+html_static_path = ['_static', '_templates/_static']\n \n html_context = {\n 'css_files': [\n- 'https://fonts.googleapis.com/css?family=Lato',\n- '_static/css/pytorch_theme.css'\n+ # 'https://fonts.googleapis.com/css?family=Lato',\n+ # '_static/css/pytorch_theme.css'\n+ '_static/css/ignite_theme.css'\n ],\n }\n", "issue": "Switch website theme to PyTorch?\nI am updating the docs theme to leverage [pytorch_sphinx_theme](https://github.com/pytorch/pytorch_sphinx_theme) so that Ignite docs look like PyTorch's. The diffs are minimal.\r\n\r\nPros:\r\n- closer to PyTorch\r\n- darkifiable with existing Userstyles (see screenshots)\r\n\r\nCaveats:\r\n- pytorch_sphinx_theme comes with footers that are not really relevant to Ignite, and I could not yet find a way to alter them.\r\n- the links to various Ignite versions are eaten by some monster somewhere\r\n\r\nHere are some screenshots (built with `make html` or with `sphinx-versioning ...`). What is your opinion? Should I open a PR?\r\n\r\n## some text and code (that look nice IMO)\r\n\r\n<img width=\"1348\" alt=\"Screenshot 2019-09-14 at 20 00 54\" src=\"https://user-images.githubusercontent.com/1936828/64912083-697e6480-d72a-11e9-8712-1bbbe64aab4b.png\">\r\n\r\n<img width=\"1348\" alt=\"Screenshot 2019-09-14 at 19 44 04\" src=\"https://user-images.githubusercontent.com/1936828/64912055-1f957e80-d72a-11e9-9478-2ae7e4891a50.png\">\r\n\r\n## the main caveat: PyTorch footer (irrelevant to Ignite)\r\n\r\n<img width=\"1348\" alt=\"Screenshot 2019-09-14 at 20 02 23\" src=\"https://user-images.githubusercontent.com/1936828/64912094-9763a900-d72a-11e9-9eae-30cc55e67c59.png\">\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport ignite\nimport sphinx_rtd_theme\n\n# -- Project information -----------------------------------------------------\n\nproject = 'ignite'\ncopyright = '2018, Torch Contributors'\nauthor = 'Torch Contributors'\n\n# The short X.Y version\ntry:\n version = os.environ['code_version']\n if 'master' in version:\n version = 'master (' + ignite.__version__ + ')'\n else:\n version = version.replace('v', '')\nexcept KeyError:\n version = ignite.__version__\n\n# The full version, including alpha/beta/rc tags\nrelease = 'master'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': True,\n}\n\nhtml_logo = '_static/img/ignite-logo-dark.svg'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_context = {\n 'css_files': [\n 'https://fonts.googleapis.com/css?family=Lato',\n '_static/css/pytorch_theme.css'\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ignitedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'ignite.tex', 'ignite Documentation',\n 'Torch Contributors', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'ignite', 'ignite Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'ignite', 'ignite Documentation',\n author, 'ignite', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../..'))\nimport ignite\nimport pytorch_sphinx_theme\n\n# -- Project information -----------------------------------------------------\n\nproject = 'ignite'\ncopyright = '2019, Torch Contributors'\nauthor = 'Torch Contributors'\n\n# The short X.Y version\ntry:\n version = os.environ['code_version']\n if 'master' in version:\n version = 'master (' + ignite.__version__ + ')'\n else:\n version = version.replace('v', '')\nexcept KeyError:\n version = ignite.__version__\n\n# The full version, including alpha/beta/rc tags\nrelease = 'master'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'pytorch_sphinx_theme'\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n 'canonical_url': 'https://pytorch.org/ignite/index.html',\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': True,\n}\n\nhtml_logo = '_static/img/ignite-logo-dark.svg'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static', '_templates/_static']\n\nhtml_context = {\n 'css_files': [\n # 'https://fonts.googleapis.com/css?family=Lato',\n # '_static/css/pytorch_theme.css'\n '_static/css/ignite_theme.css'\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ignitedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'ignite.tex', 'ignite Documentation',\n 'Torch Contributors', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'ignite', 'ignite Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'ignite', 'ignite Documentation',\n author, 'ignite', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/source/conf.py"}]} | 2,480 | 421 |
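
Condensed from the patch above, the theme switch itself is only a handful of lines in `docs/source/conf.py`; the fragment below is a sketch that assumes `pytorch_sphinx_theme` is importable in the docs build environment:

```python
# docs/source/conf.py (excerpt): swap sphinx_rtd_theme for the PyTorch theme.
import pytorch_sphinx_theme

html_theme = "pytorch_sphinx_theme"
html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]

html_theme_options = {
    "canonical_url": "https://pytorch.org/ignite/index.html",
    "collapse_navigation": False,
    "display_version": True,
    "logo_only": True,
}

# Ship an Ignite-specific stylesheet instead of the old pytorch_theme.css.
html_static_path = ["_static", "_templates/_static"]
html_context = {"css_files": ["_static/css/ignite_theme.css"]}
```

The footer caveat raised in the issue is not touched by this change; it presumably lives in the theme's templates rather than in `conf.py`.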
gh_patches_debug_9856 | rasdani/github-patches | git_diff | Mailu__Mailu-837 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto-forward email validation bug
Getting `Invalid email address` when trying to set auto-forward destination to [email protected].
Following the code, I got to this regex, which I think does the trick.
https://github.com/Mailu/Mailu/blob/fa1e4b80231e4d52485b0d257227a02f31a48ac8/core/admin/mailu/ui/forms.py#L40
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/ui/forms.py`
Content:
```
1 from wtforms import validators, fields, widgets
2 from wtforms_components import fields as fields_
3 from flask_babel import lazy_gettext as _
4
5 import flask_login
6 import flask_wtf
7 import re
8
9 LOCALPART_REGEX = "^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$"
10
11 class DestinationField(fields.SelectMultipleField):
12 """ Allow for multiple emails selection from current user choices and
13 additional email addresses.
14 """
15
16 validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)
17
18 def iter_choices(self):
19 managed = [
20 str(email)
21 for email in flask_login.current_user.get_managed_emails()
22 ]
23 for email in managed:
24 selected = self.data is not None and self.coerce(email) in self.data
25 yield (email, email, selected)
26 for email in self.data or ():
27 if email not in managed:
28 yield (email, email, True)
29
30 def pre_validate(self, form):
31 for item in self.data:
32 if not self.validator.match(item):
33 raise validators.ValidationError(_('Invalid email address.'))
34
35 class MultipleEmailAddressesVerify(object):
36 def __init__(self,message=_('Invalid email address.')):
37 self.message = message
38
39 def __call__(self, form, field):
40 pattern = re.compile(r'^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4})(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4}))*$')
41 if not pattern.match(field.data.replace(" ", "")):
42 raise validators.ValidationError(self.message)
43
44 class ConfirmationForm(flask_wtf.FlaskForm):
45 submit = fields.SubmitField(_('Confirm'))
46
47
48 class LoginForm(flask_wtf.FlaskForm):
49 email = fields.StringField(_('E-mail'), [validators.Email()])
50 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
51 submit = fields.SubmitField(_('Sign in'))
52
53
54 class DomainForm(flask_wtf.FlaskForm):
55 name = fields.StringField(_('Domain name'), [validators.DataRequired()])
56 max_users = fields_.IntegerField(_('Maximum user count'), [validators.NumberRange(min=-1)], default=10)
57 max_aliases = fields_.IntegerField(_('Maximum alias count'), [validators.NumberRange(min=-1)], default=10)
58 max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)
59 signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)
60 comment = fields.StringField(_('Comment'))
61 submit = fields.SubmitField(_('Save'))
62
63
64 class DomainSignupForm(flask_wtf.FlaskForm):
65 name = fields.StringField(_('Domain name'), [validators.DataRequired()])
66 localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()])
67 pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])
68 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])
69 captcha = flask_wtf.RecaptchaField()
70 submit = fields.SubmitField(_('Create'))
71
72
73 class AlternativeForm(flask_wtf.FlaskForm):
74 name = fields.StringField(_('Alternative name'), [validators.DataRequired()])
75 submit = fields.SubmitField(_('Save'))
76
77
78 class RelayForm(flask_wtf.FlaskForm):
79 name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])
80 smtp = fields.StringField(_('Remote host'))
81 comment = fields.StringField(_('Comment'))
82 submit = fields.SubmitField(_('Save'))
83
84
85 class UserForm(flask_wtf.FlaskForm):
86 localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])
87 pw = fields.PasswordField(_('Password'))
88 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])
89 quota_bytes = fields_.IntegerSliderField(_('Quota'), default=1000000000)
90 enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)
91 enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)
92 displayed_name = fields.StringField(_('Displayed name'))
93 comment = fields.StringField(_('Comment'))
94 enabled = fields.BooleanField(_('Enabled'), default=True)
95 submit = fields.SubmitField(_('Save'))
96
97
98 class UserSignupForm(flask_wtf.FlaskForm):
99 localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])
100 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
101 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])
102 submit = fields.SubmitField(_('Sign up'))
103
104 class UserSignupFormCaptcha(UserSignupForm):
105 captcha = flask_wtf.RecaptchaField()
106
107 class UserSettingsForm(flask_wtf.FlaskForm):
108 displayed_name = fields.StringField(_('Displayed name'))
109 spam_enabled = fields.BooleanField(_('Enable spam filter'))
110 spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))
111 forward_enabled = fields.BooleanField(_('Enable forwarding'))
112 forward_keep = fields.BooleanField(_('Keep a copy of the emails'))
113 forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])
114 submit = fields.SubmitField(_('Save settings'))
115
116
117 class UserPasswordForm(flask_wtf.FlaskForm):
118 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
119 pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])
120 submit = fields.SubmitField(_('Update password'))
121
122
123 class UserReplyForm(flask_wtf.FlaskForm):
124 reply_enabled = fields.BooleanField(_('Enable automatic reply'))
125 reply_subject = fields.StringField(_('Reply subject'))
126 reply_body = fields.StringField(_('Reply body'),
127 widget=widgets.TextArea())
128 reply_startdate = fields.html5.DateField(_('Start of vacation'))
129 reply_enddate = fields.html5.DateField(_('End of vacation'))
130 submit = fields.SubmitField(_('Update'))
131
132
133 class TokenForm(flask_wtf.FlaskForm):
134 displayed_password = fields.StringField(
135 _('Your token (write it down, as it will never be displayed again)')
136 )
137 raw_password = fields.HiddenField([validators.DataRequired()])
138 comment = fields.StringField(_('Comment'))
139 ip = fields.StringField(
140 _('Authorized IP'), [validators.Optional(), validators.IPAddress()]
141 )
142 submit = fields.SubmitField(_('Save'))
143
144
145 class AliasForm(flask_wtf.FlaskForm):
146 localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])
147 wildcard = fields.BooleanField(
148 _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))
149 destination = DestinationField(_('Destination'))
150 comment = fields.StringField(_('Comment'))
151 submit = fields.SubmitField(_('Save'))
152
153
154 class AdminForm(flask_wtf.FlaskForm):
155 admin = fields.SelectField(_('Admin email'), choices=[])
156 submit = fields.SubmitField(_('Submit'))
157
158
159 class ManagerForm(flask_wtf.FlaskForm):
160 manager = fields.SelectField(_('Manager email'))
161 submit = fields.SubmitField(_('Submit'))
162
163
164 class FetchForm(flask_wtf.FlaskForm):
165 protocol = fields.SelectField(_('Protocol'), choices=[
166 ('imap', 'IMAP'), ('pop3', 'POP3')
167 ])
168 host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])
169 port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])
170 tls = fields.BooleanField(_('Enable TLS'))
171 username = fields.StringField(_('Username'), [validators.DataRequired()])
172 password = fields.PasswordField(_('Password'))
173 keep = fields.BooleanField(_('Keep emails on the server'))
174 submit = fields.SubmitField(_('Submit'))
175
176
177 class AnnouncementForm(flask_wtf.FlaskForm):
178 announcement_subject = fields.StringField(_('Announcement subject'),
179 [validators.DataRequired()])
180 announcement_body = fields.StringField(_('Announcement body'),
181 [validators.DataRequired()], widget=widgets.TextArea())
182 submit = fields.SubmitField(_('Send'))
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/ui/forms.py b/core/admin/mailu/ui/forms.py
--- a/core/admin/mailu/ui/forms.py
+++ b/core/admin/mailu/ui/forms.py
@@ -37,7 +37,7 @@
self.message = message
def __call__(self, form, field):
- pattern = re.compile(r'^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4})(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4}))*$')
+ pattern = re.compile(r'^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,})(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,}))*$')
if not pattern.match(field.data.replace(" ", "")):
raise validators.ValidationError(self.message)
| {"golden_diff": "diff --git a/core/admin/mailu/ui/forms.py b/core/admin/mailu/ui/forms.py\n--- a/core/admin/mailu/ui/forms.py\n+++ b/core/admin/mailu/ui/forms.py\n@@ -37,7 +37,7 @@\n self.message = message\n \n def __call__(self, form, field):\n- pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4}))*$')\n+ pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n", "issue": "Auto-forward email validation bug\nGetting `Invalid email address` when trying to set auto-forward destination to [email protected].\r\n\r\nFollowing the code i got to this regex which i think does the trick.\r\nhttps://github.com/Mailu/Mailu/blob/fa1e4b80231e4d52485b0d257227a02f31a48ac8/core/admin/mailu/ui/forms.py#L40\n", "before_files": [{"content": "from wtforms import validators, fields, widgets\nfrom wtforms_components import fields as fields_\nfrom flask_babel import lazy_gettext as _\n\nimport flask_login\nimport flask_wtf\nimport re\n\nLOCALPART_REGEX = \"^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$\"\n\nclass DestinationField(fields.SelectMultipleField):\n \"\"\" Allow for multiple emails selection from current user choices and\n additional email addresses.\n \"\"\"\n\n validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)\n\n def iter_choices(self):\n managed = [\n str(email)\n for email in flask_login.current_user.get_managed_emails()\n ]\n for email in managed:\n selected = self.data is not None and self.coerce(email) in self.data\n yield (email, email, selected)\n for email in self.data or ():\n if email not in managed:\n yield (email, email, True)\n\n def pre_validate(self, form):\n for item in self.data:\n if not self.validator.match(item):\n raise validators.ValidationError(_('Invalid email address.'))\n\nclass MultipleEmailAddressesVerify(object):\n def __init__(self,message=_('Invalid email address.')):\n self.message = message\n\n def __call__(self, form, field):\n pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n\nclass ConfirmationForm(flask_wtf.FlaskForm):\n submit = fields.SubmitField(_('Confirm'))\n\n\nclass LoginForm(flask_wtf.FlaskForm):\n email = fields.StringField(_('E-mail'), [validators.Email()])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Sign in'))\n\n\nclass DomainForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n max_users = fields_.IntegerField(_('Maximum user count'), [validators.NumberRange(min=-1)], default=10)\n max_aliases = fields_.IntegerField(_('Maximum alias count'), [validators.NumberRange(min=-1)], default=10)\n max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)\n signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass DomainSignupForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n localpart 
= fields.StringField(_('Initial admin'), [validators.DataRequired()])\n pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n captcha = flask_wtf.RecaptchaField()\n submit = fields.SubmitField(_('Create'))\n\n\nclass AlternativeForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Alternative name'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Save'))\n\n\nclass RelayForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])\n smtp = fields.StringField(_('Remote host'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'))\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n quota_bytes = fields_.IntegerSliderField(_('Quota'), default=1000000000)\n enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)\n enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)\n displayed_name = fields.StringField(_('Displayed name'))\n comment = fields.StringField(_('Comment'))\n enabled = fields.BooleanField(_('Enabled'), default=True)\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserSignupForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n submit = fields.SubmitField(_('Sign up'))\n\nclass UserSignupFormCaptcha(UserSignupForm):\n captcha = flask_wtf.RecaptchaField()\n\nclass UserSettingsForm(flask_wtf.FlaskForm):\n displayed_name = fields.StringField(_('Displayed name'))\n spam_enabled = fields.BooleanField(_('Enable spam filter'))\n spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))\n forward_enabled = fields.BooleanField(_('Enable forwarding'))\n forward_keep = fields.BooleanField(_('Keep a copy of the emails'))\n forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])\n submit = fields.SubmitField(_('Save settings'))\n\n\nclass UserPasswordForm(flask_wtf.FlaskForm):\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Update password'))\n\n\nclass UserReplyForm(flask_wtf.FlaskForm):\n reply_enabled = fields.BooleanField(_('Enable automatic reply'))\n reply_subject = fields.StringField(_('Reply subject'))\n reply_body = fields.StringField(_('Reply body'),\n widget=widgets.TextArea())\n reply_startdate = fields.html5.DateField(_('Start of vacation'))\n reply_enddate = fields.html5.DateField(_('End of vacation'))\n submit = fields.SubmitField(_('Update'))\n\n\nclass TokenForm(flask_wtf.FlaskForm):\n displayed_password = fields.StringField(\n _('Your token (write it down, as it will never be displayed again)')\n )\n raw_password = fields.HiddenField([validators.DataRequired()])\n comment = fields.StringField(_('Comment'))\n ip = fields.StringField(\n _('Authorized IP'), [validators.Optional(), validators.IPAddress()]\n )\n submit = fields.SubmitField(_('Save'))\n\n\nclass 
AliasForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n wildcard = fields.BooleanField(\n _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))\n destination = DestinationField(_('Destination'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass AdminForm(flask_wtf.FlaskForm):\n admin = fields.SelectField(_('Admin email'), choices=[])\n submit = fields.SubmitField(_('Submit'))\n\n\nclass ManagerForm(flask_wtf.FlaskForm):\n manager = fields.SelectField(_('Manager email'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass FetchForm(flask_wtf.FlaskForm):\n protocol = fields.SelectField(_('Protocol'), choices=[\n ('imap', 'IMAP'), ('pop3', 'POP3')\n ])\n host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])\n port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])\n tls = fields.BooleanField(_('Enable TLS'))\n username = fields.StringField(_('Username'), [validators.DataRequired()])\n password = fields.PasswordField(_('Password'))\n keep = fields.BooleanField(_('Keep emails on the server'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass AnnouncementForm(flask_wtf.FlaskForm):\n announcement_subject = fields.StringField(_('Announcement subject'),\n [validators.DataRequired()])\n announcement_body = fields.StringField(_('Announcement body'),\n [validators.DataRequired()], widget=widgets.TextArea())\n submit = fields.SubmitField(_('Send'))\n", "path": "core/admin/mailu/ui/forms.py"}], "after_files": [{"content": "from wtforms import validators, fields, widgets\nfrom wtforms_components import fields as fields_\nfrom flask_babel import lazy_gettext as _\n\nimport flask_login\nimport flask_wtf\nimport re\n\nLOCALPART_REGEX = \"^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$\"\n\nclass DestinationField(fields.SelectMultipleField):\n \"\"\" Allow for multiple emails selection from current user choices and\n additional email addresses.\n \"\"\"\n\n validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)\n\n def iter_choices(self):\n managed = [\n str(email)\n for email in flask_login.current_user.get_managed_emails()\n ]\n for email in managed:\n selected = self.data is not None and self.coerce(email) in self.data\n yield (email, email, selected)\n for email in self.data or ():\n if email not in managed:\n yield (email, email, True)\n\n def pre_validate(self, form):\n for item in self.data:\n if not self.validator.match(item):\n raise validators.ValidationError(_('Invalid email address.'))\n\nclass MultipleEmailAddressesVerify(object):\n def __init__(self,message=_('Invalid email address.')):\n self.message = message\n\n def __call__(self, form, field):\n pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n\nclass ConfirmationForm(flask_wtf.FlaskForm):\n submit = fields.SubmitField(_('Confirm'))\n\n\nclass LoginForm(flask_wtf.FlaskForm):\n email = fields.StringField(_('E-mail'), [validators.Email()])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Sign in'))\n\n\nclass DomainForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n 
max_users = fields_.IntegerField(_('Maximum user count'), [validators.NumberRange(min=-1)], default=10)\n max_aliases = fields_.IntegerField(_('Maximum alias count'), [validators.NumberRange(min=-1)], default=10)\n max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)\n signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass DomainSignupForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()])\n pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n captcha = flask_wtf.RecaptchaField()\n submit = fields.SubmitField(_('Create'))\n\n\nclass AlternativeForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Alternative name'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Save'))\n\n\nclass RelayForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])\n smtp = fields.StringField(_('Remote host'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'))\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n quota_bytes = fields_.IntegerSliderField(_('Quota'), default=1000000000)\n enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)\n enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)\n displayed_name = fields.StringField(_('Displayed name'))\n comment = fields.StringField(_('Comment'))\n enabled = fields.BooleanField(_('Enabled'), default=True)\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserSignupForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n submit = fields.SubmitField(_('Sign up'))\n\nclass UserSignupFormCaptcha(UserSignupForm):\n captcha = flask_wtf.RecaptchaField()\n\nclass UserSettingsForm(flask_wtf.FlaskForm):\n displayed_name = fields.StringField(_('Displayed name'))\n spam_enabled = fields.BooleanField(_('Enable spam filter'))\n spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))\n forward_enabled = fields.BooleanField(_('Enable forwarding'))\n forward_keep = fields.BooleanField(_('Keep a copy of the emails'))\n forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])\n submit = fields.SubmitField(_('Save settings'))\n\n\nclass UserPasswordForm(flask_wtf.FlaskForm):\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Update password'))\n\n\nclass UserReplyForm(flask_wtf.FlaskForm):\n reply_enabled = fields.BooleanField(_('Enable automatic reply'))\n reply_subject = fields.StringField(_('Reply subject'))\n reply_body = fields.StringField(_('Reply body'),\n widget=widgets.TextArea())\n reply_startdate = 
fields.html5.DateField(_('Start of vacation'))\n reply_enddate = fields.html5.DateField(_('End of vacation'))\n submit = fields.SubmitField(_('Update'))\n\n\nclass TokenForm(flask_wtf.FlaskForm):\n displayed_password = fields.StringField(\n _('Your token (write it down, as it will never be displayed again)')\n )\n raw_password = fields.HiddenField([validators.DataRequired()])\n comment = fields.StringField(_('Comment'))\n ip = fields.StringField(\n _('Authorized IP'), [validators.Optional(), validators.IPAddress()]\n )\n submit = fields.SubmitField(_('Save'))\n\n\nclass AliasForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n wildcard = fields.BooleanField(\n _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))\n destination = DestinationField(_('Destination'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass AdminForm(flask_wtf.FlaskForm):\n admin = fields.SelectField(_('Admin email'), choices=[])\n submit = fields.SubmitField(_('Submit'))\n\n\nclass ManagerForm(flask_wtf.FlaskForm):\n manager = fields.SelectField(_('Manager email'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass FetchForm(flask_wtf.FlaskForm):\n protocol = fields.SelectField(_('Protocol'), choices=[\n ('imap', 'IMAP'), ('pop3', 'POP3')\n ])\n host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])\n port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])\n tls = fields.BooleanField(_('Enable TLS'))\n username = fields.StringField(_('Username'), [validators.DataRequired()])\n password = fields.PasswordField(_('Password'))\n keep = fields.BooleanField(_('Keep emails on the server'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass AnnouncementForm(flask_wtf.FlaskForm):\n announcement_subject = fields.StringField(_('Announcement subject'),\n [validators.DataRequired()])\n announcement_body = fields.StringField(_('Announcement body'),\n [validators.DataRequired()], widget=widgets.TextArea())\n submit = fields.SubmitField(_('Send'))\n", "path": "core/admin/mailu/ui/forms.py"}]} | 2,572 | 282 |
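
The practical effect of loosening `{2,4}` to `{2,}` in the patch above is easy to check directly; the snippet below is illustrative (the address is made up) and simply runs both patterns against a destination whose TLD is longer than four letters:

```python
import re

OLD = re.compile(r"^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4})"
                 r"(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4}))*$")
NEW = re.compile(r"^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,})"
                 r"(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,}))*$")

destination = "user@example.photography"   # modern gTLDs exceed four letters
print(bool(OLD.match(destination)))        # False -> the "Invalid email address." error
print(bool(NEW.match(destination)))        # True
```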
gh_patches_debug_44635 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1019 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Profile Page is showing Server Error (500)
### Description
As we click on the button to navigate to the profile section of the site, it throws a Server Error (500) even though the user is logged in.
Here are the logs for the same.
```
Internal Server Error: /profile
AttributeError at /profile
'Profile' object has no attribute 'full_account'
```
### What should have happened?
account.html should be returned by the view, and hence the user should see the page.
### What browser(s) are you seeing the problem on?
Chrome
### Further details

--- END ISSUE ---
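The decorators and permission classes included below are where the missing attribute is read: `full_account(...)` in `core/custom_decorators.py` and `IsProfileOwnerOrDuringRegistrationOrReadOnly` both access `.full_account` on a `Profile`, so any request routed through them raises the `AttributeError` above, and Django turns the unhandled exception into the Server Error (500). A hypothetical Django-shell reproduction, assuming the current `Profile` model no longer defines such a field or property:

```python
# python manage.py shell  (illustrative; import path as used in the files below)
from accounts.models import Profile

profile = Profile.objects.first()
profile.full_account   # AttributeError: 'Profile' object has no attribute 'full_account'
```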
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `project/frontend_views/views.py`
Content:
```
1 import json
2
3 from django.conf import settings
4 from django.contrib.auth.decorators import user_passes_test
5 from django.views.decorators.csrf import csrf_exempt
6 from django.contrib.auth.models import User
7 from django.db.models import F
8 from django.http import HttpResponse, HttpResponseRedirect
9 from django.template.response import TemplateResponse
10
11
12 from api.models import Category, Thread, Civi, Activity
13 from accounts.models import Profile
14 from accounts.forms import UpdateProfile
15 from api.forms import UpdateProfileImage
16 from core.constants import US_STATES
17 from core.custom_decorators import login_required, full_account
18
19
20 def base_view(request):
21 if not request.user.is_authenticated:
22 return TemplateResponse(request, "static_templates/landing.html", {})
23
24 a = Profile.objects.get(user=request.user)
25 if "login_user_image" not in request.session.keys():
26 request.session["login_user_image"] = a.profile_image_thumb_url
27
28 categories = [{"id": c.id, "name": c.name} for c in Category.objects.all()]
29
30 all_categories = list(Category.objects.values_list("id", flat=True))
31 user_categories = list(a.categories.values_list("id", flat=True)) or all_categories
32
33 feed_threads = [
34 Thread.objects.summarize(t)
35 for t in Thread.objects.exclude(is_draft=True).order_by("-created")
36 ]
37 top5_threads = list(
38 Thread.objects.filter(is_draft=False)
39 .order_by("-num_views")[:5]
40 .values("id", "title")
41 )
42 my_draft_threads = [
43 Thread.objects.summarize(t)
44 for t in Thread.objects.filter(author_id=a.id)
45 .exclude(is_draft=False)
46 .order_by("-created")
47 ]
48
49 states = sorted(US_STATES, key=lambda s: s[1])
50 data = {
51 "categories": categories,
52 "states": states,
53 "user_categories": user_categories,
54 "threads": feed_threads,
55 "trending": top5_threads,
56 "draft_threads": my_draft_threads,
57 }
58
59 return TemplateResponse(request, "feed.html", {"data": json.dumps(data)})
60
61
62 @login_required
63 @full_account
64 def user_profile(request, username=None):
65 if request.method == "GET":
66 if not username:
67 return HttpResponseRedirect("/profile/{0}".format(request.user))
68 else:
69 is_owner = username == request.user.username
70 try:
71 user = User.objects.get(username=username)
72 account = user.account_set.first()
73 except User.DoesNotExist:
74 return HttpResponseRedirect("/404")
75
76 form = UpdateProfile(
77 initial={
78 "username": user.username,
79 "email": user.email,
80 "first_name": account.first_name or None,
81 "last_name": account.last_name or None,
82 "about_me": account.about_me or None,
83 },
84 readonly=True,
85 )
86 data = {
87 "username": user,
88 "profile_image_form": UpdateProfileImage,
89 "form": form if is_owner else None,
90 "readonly": True,
91 }
92 return TemplateResponse(request, "account.html", data)
93
94
95 @login_required
96 def user_setup(request):
97 a = Profile.objects.get(user=request.user)
98 if a.full_account:
99 return HttpResponseRedirect("/")
100 # start temp rep rendering TODO: REMOVE THIS
101 else:
102 data = {
103 "username": request.user.username,
104 "email": request.user.email,
105 }
106 return TemplateResponse(request, "user-setup.html", data)
107
108
109 @login_required
110 @full_account
111 def issue_thread(request, thread_id=None):
112 if not thread_id:
113 return HttpResponseRedirect("/404")
114
115 req_acct = Profile.objects.get(user=request.user)
116 t = Thread.objects.get(id=thread_id)
117 c_qs = Civi.objects.filter(thread_id=thread_id).exclude(c_type="response")
118 c_scored = [c.dict_with_score(req_acct.id) for c in c_qs]
119 civis = sorted(c_scored, key=lambda c: c["score"], reverse=True)
120
121 # modify thread view count
122 t.num_civis = len(civis)
123 t.num_views = F("num_views") + 1
124 t.save()
125 t.refresh_from_db()
126
127 thread_wiki_data = {
128 "thread_id": thread_id,
129 "title": t.title,
130 "summary": t.summary,
131 "image": t.image_url,
132 "author": {
133 "username": t.author.user.username,
134 "profile_image": t.author.profile_image_url,
135 "first_name": t.author.first_name,
136 "last_name": t.author.last_name,
137 },
138 "contributors": [
139 Profile.objects.chip_summarize(a)
140 for a in Profile.objects.filter(
141 pk__in=civis.distinct("author").values_list("author", flat=True)
142 )
143 ],
144 "category": {"id": t.category.id, "name": t.category.name},
145 "categories": [{"id": c.id, "name": c.name} for c in Category.objects.all()],
146 "states": sorted(US_STATES, key=lambda s: s[1]),
147 "created": t.created_date_str,
148 "level": t.level,
149 "state": t.state if t.level == "state" else "",
150 "location": t.level if not t.state else dict(US_STATES).get(t.state),
151 "num_civis": t.num_civis,
152 "num_views": t.num_views,
153 "user_votes": [
154 {
155 "civi_id": act.civi.id,
156 "activity_type": act.activity_type,
157 "c_type": act.civi.c_type,
158 }
159 for act in Activity.objects.filter(thread=t.id, account=req_acct.id)
160 ],
161 }
162 thread_body_data = {
163 "civis": civis,
164 }
165
166 data = {
167 "thread_id": thread_id,
168 "is_draft": t.is_draft,
169 "thread_wiki_data": json.dumps(thread_wiki_data),
170 "thread_body_data": json.dumps(thread_body_data),
171 }
172 return TemplateResponse(request, "thread.html", data)
173
174
175 @login_required
176 @full_account
177 def create_group(request):
178 return TemplateResponse(request, "newgroup.html", {})
179
180
181 def declaration(request):
182 return TemplateResponse(request, "declaration.html", {})
183
184
185 def landing_view(request):
186 return TemplateResponse(request, "static_templates/landing.html", {})
187
188
189 def how_it_works_view(request):
190 return TemplateResponse(request, "static_templates/how_it_works.html", {})
191
192
193 def about_view(request):
194 return TemplateResponse(request, "static_templates/about.html", {})
195
196
197 def support_us_view(request):
198 return TemplateResponse(request, "static_templates/support_us.html", {})
199
200
201 """ CSV export function. Thread ID goes in, CSV HTTP response attachment goes out. """
202
203
204 @csrf_exempt
205 def civi2csv(request, thread_id):
206 import csv
207
208 thread = thread_id
209 response = HttpResponse(content_type="text/csv")
210 response["Content-Disposition"] = "attachment; filename=" + thread + ".csv"
211 writer = csv.writer(response, delimiter=",")
212 for card in Civi.objects.filter(thread_id=thread):
213 data = []
214 for key, value in card.dict_with_score().items():
215 if value:
216 data.append(value)
217 writer.writerow(data)
218 return response
219
```
Path: `project/api/permissions.py`
Content:
```
1 from rest_framework.permissions import BasePermission, SAFE_METHODS
2 from .utils import get_account
3
4
5 class IsOwnerOrReadOnly(BasePermission):
6 """ Custom API permission to check if request user is the owner of the model """
7
8 def has_object_permission(self, request, view, obj):
9 return (request.method in SAFE_METHODS) or (
10 obj.author == get_account(user=request.user)
11 )
12
13
14 class IsProfileOwnerOrReadOnly(BasePermission):
15 """ Custom API permission to check if request user is the owner of the account """
16
17 def has_object_permission(self, request, view, obj):
18 return (request.method in SAFE_METHODS) or (obj.user == request.user)
19
20
21 class IsProfileOwnerOrDuringRegistrationOrReadOnly(IsProfileOwnerOrReadOnly):
22 """ """
23 def has_object_permission(self, request, view, obj):
24 if obj.full_account:
25 return super(
26 IsProfileOwnerOrDuringRegistrationOrReadOnly, self
27 ).has_object_permission(request, view, obj)
28 return True
29
```
Path: `project/core/custom_decorators.py`
Content:
```
1 from functools import wraps
2 from django.http import HttpResponseBadRequest, HttpResponseRedirect
3 from accounts.models import Profile
4
5 """
6 USAGE:
7 @require_post_params(params=['we', 'are', 'required'])
8
9 returns a bad request if all required parameters are not present in the POST
10 """
11
12
13 def require_post_params(params):
14 def decorator(func):
15 @wraps(func)
16 def inner(request, *args, **kwargs):
17 if not all(param in request.POST for param in params):
18 missing_params = " ".join([p for p in params if p not in request.POST])
19 reason = "Missing required parameter(s): {p}".format(p=missing_params)
20 return HttpResponseBadRequest(reason=reason)
21 return func(request, *args, **kwargs)
22
23 return inner
24
25 return decorator
26
27
28 def full_account(func):
29 @wraps(func)
30 def inner(request, *args, **kwargs):
31 account = Profile.objects.get(user=request.user)
32 if not account.full_account:
33 return HttpResponseRedirect("/setup")
34 return func(request, *args, **kwargs)
35
36 return inner
37
38
39 def login_required(func):
40 @wraps(func)
41 def inner(request, *args, **kwargs):
42 if not request.user.is_authenticated:
43 return HttpResponseRedirect("/")
44 return func(request, *args, **kwargs)
45
46 return inner
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/project/api/permissions.py b/project/api/permissions.py
--- a/project/api/permissions.py
+++ b/project/api/permissions.py
@@ -21,7 +21,7 @@
class IsProfileOwnerOrDuringRegistrationOrReadOnly(IsProfileOwnerOrReadOnly):
""" """
def has_object_permission(self, request, view, obj):
- if obj.full_account:
+ if obj.full_profile:
return super(
IsProfileOwnerOrDuringRegistrationOrReadOnly, self
).has_object_permission(request, view, obj)
diff --git a/project/core/custom_decorators.py b/project/core/custom_decorators.py
--- a/project/core/custom_decorators.py
+++ b/project/core/custom_decorators.py
@@ -25,11 +25,11 @@
return decorator
-def full_account(func):
+def full_profile(func):
@wraps(func)
def inner(request, *args, **kwargs):
- account = Profile.objects.get(user=request.user)
- if not account.full_account:
+ profile = Profile.objects.get(user=request.user)
+ if not profile.full_profile:
return HttpResponseRedirect("/setup")
return func(request, *args, **kwargs)
diff --git a/project/frontend_views/views.py b/project/frontend_views/views.py
--- a/project/frontend_views/views.py
+++ b/project/frontend_views/views.py
@@ -3,18 +3,17 @@
from django.conf import settings
from django.contrib.auth.decorators import user_passes_test
from django.views.decorators.csrf import csrf_exempt
-from django.contrib.auth.models import User
from django.db.models import F
from django.http import HttpResponse, HttpResponseRedirect
from django.template.response import TemplateResponse
-
+from django.contrib.auth import get_user_model
from api.models import Category, Thread, Civi, Activity
from accounts.models import Profile
from accounts.forms import UpdateProfile
from api.forms import UpdateProfileImage
from core.constants import US_STATES
-from core.custom_decorators import login_required, full_account
+from core.custom_decorators import login_required, full_profile
def base_view(request):
@@ -60,8 +59,9 @@
@login_required
-@full_account
+@full_profile
def user_profile(request, username=None):
+ User = get_user_model()
if request.method == "GET":
if not username:
return HttpResponseRedirect("/profile/{0}".format(request.user))
@@ -69,7 +69,7 @@
is_owner = username == request.user.username
try:
user = User.objects.get(username=username)
- account = user.account_set.first()
+ profile = user.profile_set.first()
except User.DoesNotExist:
return HttpResponseRedirect("/404")
@@ -77,9 +77,9 @@
initial={
"username": user.username,
"email": user.email,
- "first_name": account.first_name or None,
- "last_name": account.last_name or None,
- "about_me": account.about_me or None,
+ "first_name": profile.first_name or None,
+ "last_name": profile.last_name or None,
+ "about_me": profile.about_me or None,
},
readonly=True,
)
@@ -94,8 +94,8 @@
@login_required
def user_setup(request):
- a = Profile.objects.get(user=request.user)
- if a.full_account:
+ profile = Profile.objects.get(user=request.user)
+ if profile.full_profile:
return HttpResponseRedirect("/")
# start temp rep rendering TODO: REMOVE THIS
else:
@@ -107,7 +107,7 @@
@login_required
-@full_account
+@full_profile
def issue_thread(request, thread_id=None):
if not thread_id:
return HttpResponseRedirect("/404")
@@ -173,7 +173,7 @@
@login_required
-@full_account
+@full_profile
def create_group(request):
return TemplateResponse(request, "newgroup.html", {})
| {"golden_diff": "diff --git a/project/api/permissions.py b/project/api/permissions.py\n--- a/project/api/permissions.py\n+++ b/project/api/permissions.py\n@@ -21,7 +21,7 @@\n class IsProfileOwnerOrDuringRegistrationOrReadOnly(IsProfileOwnerOrReadOnly):\n \"\"\" \"\"\"\n def has_object_permission(self, request, view, obj):\n- if obj.full_account:\n+ if obj.full_profile:\n return super(\n IsProfileOwnerOrDuringRegistrationOrReadOnly, self\n ).has_object_permission(request, view, obj)\ndiff --git a/project/core/custom_decorators.py b/project/core/custom_decorators.py\n--- a/project/core/custom_decorators.py\n+++ b/project/core/custom_decorators.py\n@@ -25,11 +25,11 @@\n return decorator\n \n \n-def full_account(func):\n+def full_profile(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n- account = Profile.objects.get(user=request.user)\n- if not account.full_account:\n+ profile = Profile.objects.get(user=request.user)\n+ if not profile.full_profile:\n return HttpResponseRedirect(\"/setup\")\n return func(request, *args, **kwargs)\n \ndiff --git a/project/frontend_views/views.py b/project/frontend_views/views.py\n--- a/project/frontend_views/views.py\n+++ b/project/frontend_views/views.py\n@@ -3,18 +3,17 @@\n from django.conf import settings\n from django.contrib.auth.decorators import user_passes_test\n from django.views.decorators.csrf import csrf_exempt\n-from django.contrib.auth.models import User\n from django.db.models import F\n from django.http import HttpResponse, HttpResponseRedirect\n from django.template.response import TemplateResponse\n-\n+from django.contrib.auth import get_user_model\n \n from api.models import Category, Thread, Civi, Activity\n from accounts.models import Profile\n from accounts.forms import UpdateProfile\n from api.forms import UpdateProfileImage\n from core.constants import US_STATES\n-from core.custom_decorators import login_required, full_account\n+from core.custom_decorators import login_required, full_profile\n \n \n def base_view(request):\n@@ -60,8 +59,9 @@\n \n \n @login_required\n-@full_account\n+@full_profile\n def user_profile(request, username=None):\n+ User = get_user_model()\n if request.method == \"GET\":\n if not username:\n return HttpResponseRedirect(\"/profile/{0}\".format(request.user))\n@@ -69,7 +69,7 @@\n is_owner = username == request.user.username\n try:\n user = User.objects.get(username=username)\n- account = user.account_set.first()\n+ profile = user.profile_set.first()\n except User.DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n \n@@ -77,9 +77,9 @@\n initial={\n \"username\": user.username,\n \"email\": user.email,\n- \"first_name\": account.first_name or None,\n- \"last_name\": account.last_name or None,\n- \"about_me\": account.about_me or None,\n+ \"first_name\": profile.first_name or None,\n+ \"last_name\": profile.last_name or None,\n+ \"about_me\": profile.about_me or None,\n },\n readonly=True,\n )\n@@ -94,8 +94,8 @@\n \n @login_required\n def user_setup(request):\n- a = Profile.objects.get(user=request.user)\n- if a.full_account:\n+ profile = Profile.objects.get(user=request.user)\n+ if profile.full_profile:\n return HttpResponseRedirect(\"/\")\n # start temp rep rendering TODO: REMOVE THIS\n else:\n@@ -107,7 +107,7 @@\n \n \n @login_required\n-@full_account\n+@full_profile\n def issue_thread(request, thread_id=None):\n if not thread_id:\n return HttpResponseRedirect(\"/404\")\n@@ -173,7 +173,7 @@\n \n \n @login_required\n-@full_account\n+@full_profile\n def create_group(request):\n return 
TemplateResponse(request, \"newgroup.html\", {})\n", "issue": "Profile Page is showing Server Error (500)\n### Description\n\nAs we click on the button to navigate towards profile section of the site it throws a Server error (500) although the user is logged in.\r\nHere are logs the logs for the same.\r\n```\r\nInternal Server Error: /profile\r\n\r\nAttributeError at /profile\r\n'Profile' object has no attribute 'full_account'\r\n```the the \n\n### What should have happened?\n\naccount.html should be returned to the view and hence the user must see the page.\n\n### What browser(s) are you seeing the problem on?\n\nChrome\n\n### Further details\n\n\r\n\n", "before_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import user_passes_test\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.contrib.auth.models import User\nfrom django.db.models import F\nfrom django.http import HttpResponse, HttpResponseRedirect\nfrom django.template.response import TemplateResponse\n\n\nfrom api.models import Category, Thread, Civi, Activity\nfrom accounts.models import Profile\nfrom accounts.forms import UpdateProfile\nfrom api.forms import UpdateProfileImage\nfrom core.constants import US_STATES\nfrom core.custom_decorators import login_required, full_account\n\n\ndef base_view(request):\n if not request.user.is_authenticated:\n return TemplateResponse(request, \"static_templates/landing.html\", {})\n\n a = Profile.objects.get(user=request.user)\n if \"login_user_image\" not in request.session.keys():\n request.session[\"login_user_image\"] = a.profile_image_thumb_url\n\n categories = [{\"id\": c.id, \"name\": c.name} for c in Category.objects.all()]\n\n all_categories = list(Category.objects.values_list(\"id\", flat=True))\n user_categories = list(a.categories.values_list(\"id\", flat=True)) or all_categories\n\n feed_threads = [\n Thread.objects.summarize(t)\n for t in Thread.objects.exclude(is_draft=True).order_by(\"-created\")\n ]\n top5_threads = list(\n Thread.objects.filter(is_draft=False)\n .order_by(\"-num_views\")[:5]\n .values(\"id\", \"title\")\n )\n my_draft_threads = [\n Thread.objects.summarize(t)\n for t in Thread.objects.filter(author_id=a.id)\n .exclude(is_draft=False)\n .order_by(\"-created\")\n ]\n\n states = sorted(US_STATES, key=lambda s: s[1])\n data = {\n \"categories\": categories,\n \"states\": states,\n \"user_categories\": user_categories,\n \"threads\": feed_threads,\n \"trending\": top5_threads,\n \"draft_threads\": my_draft_threads,\n }\n\n return TemplateResponse(request, \"feed.html\", {\"data\": json.dumps(data)})\n\n\n@login_required\n@full_account\ndef user_profile(request, username=None):\n if request.method == \"GET\":\n if not username:\n return HttpResponseRedirect(\"/profile/{0}\".format(request.user))\n else:\n is_owner = username == request.user.username\n try:\n user = User.objects.get(username=username)\n account = user.account_set.first()\n except User.DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n\n form = UpdateProfile(\n initial={\n \"username\": user.username,\n \"email\": user.email,\n \"first_name\": account.first_name or None,\n \"last_name\": account.last_name or None,\n \"about_me\": account.about_me or None,\n },\n readonly=True,\n )\n data = {\n \"username\": user,\n \"profile_image_form\": UpdateProfileImage,\n \"form\": form if is_owner else None,\n \"readonly\": True,\n }\n return TemplateResponse(request, \"account.html\", data)\n\n\n@login_required\ndef 
user_setup(request):\n a = Profile.objects.get(user=request.user)\n if a.full_account:\n return HttpResponseRedirect(\"/\")\n # start temp rep rendering TODO: REMOVE THIS\n else:\n data = {\n \"username\": request.user.username,\n \"email\": request.user.email,\n }\n return TemplateResponse(request, \"user-setup.html\", data)\n\n\n@login_required\n@full_account\ndef issue_thread(request, thread_id=None):\n if not thread_id:\n return HttpResponseRedirect(\"/404\")\n\n req_acct = Profile.objects.get(user=request.user)\n t = Thread.objects.get(id=thread_id)\n c_qs = Civi.objects.filter(thread_id=thread_id).exclude(c_type=\"response\")\n c_scored = [c.dict_with_score(req_acct.id) for c in c_qs]\n civis = sorted(c_scored, key=lambda c: c[\"score\"], reverse=True)\n\n # modify thread view count\n t.num_civis = len(civis)\n t.num_views = F(\"num_views\") + 1\n t.save()\n t.refresh_from_db()\n\n thread_wiki_data = {\n \"thread_id\": thread_id,\n \"title\": t.title,\n \"summary\": t.summary,\n \"image\": t.image_url,\n \"author\": {\n \"username\": t.author.user.username,\n \"profile_image\": t.author.profile_image_url,\n \"first_name\": t.author.first_name,\n \"last_name\": t.author.last_name,\n },\n \"contributors\": [\n Profile.objects.chip_summarize(a)\n for a in Profile.objects.filter(\n pk__in=civis.distinct(\"author\").values_list(\"author\", flat=True)\n )\n ],\n \"category\": {\"id\": t.category.id, \"name\": t.category.name},\n \"categories\": [{\"id\": c.id, \"name\": c.name} for c in Category.objects.all()],\n \"states\": sorted(US_STATES, key=lambda s: s[1]),\n \"created\": t.created_date_str,\n \"level\": t.level,\n \"state\": t.state if t.level == \"state\" else \"\",\n \"location\": t.level if not t.state else dict(US_STATES).get(t.state),\n \"num_civis\": t.num_civis,\n \"num_views\": t.num_views,\n \"user_votes\": [\n {\n \"civi_id\": act.civi.id,\n \"activity_type\": act.activity_type,\n \"c_type\": act.civi.c_type,\n }\n for act in Activity.objects.filter(thread=t.id, account=req_acct.id)\n ],\n }\n thread_body_data = {\n \"civis\": civis,\n }\n\n data = {\n \"thread_id\": thread_id,\n \"is_draft\": t.is_draft,\n \"thread_wiki_data\": json.dumps(thread_wiki_data),\n \"thread_body_data\": json.dumps(thread_body_data),\n }\n return TemplateResponse(request, \"thread.html\", data)\n\n\n@login_required\n@full_account\ndef create_group(request):\n return TemplateResponse(request, \"newgroup.html\", {})\n\n\ndef declaration(request):\n return TemplateResponse(request, \"declaration.html\", {})\n\n\ndef landing_view(request):\n return TemplateResponse(request, \"static_templates/landing.html\", {})\n\n\ndef how_it_works_view(request):\n return TemplateResponse(request, \"static_templates/how_it_works.html\", {})\n\n\ndef about_view(request):\n return TemplateResponse(request, \"static_templates/about.html\", {})\n\n\ndef support_us_view(request):\n return TemplateResponse(request, \"static_templates/support_us.html\", {})\n\n\n\"\"\" CSV export function. Thread ID goes in, CSV HTTP response attachment goes out. 
\"\"\"\n\n\n@csrf_exempt\ndef civi2csv(request, thread_id):\n import csv\n\n thread = thread_id\n response = HttpResponse(content_type=\"text/csv\")\n response[\"Content-Disposition\"] = \"attachment; filename=\" + thread + \".csv\"\n writer = csv.writer(response, delimiter=\",\")\n for card in Civi.objects.filter(thread_id=thread):\n data = []\n for key, value in card.dict_with_score().items():\n if value:\n data.append(value)\n writer.writerow(data)\n return response\n", "path": "project/frontend_views/views.py"}, {"content": "from rest_framework.permissions import BasePermission, SAFE_METHODS\nfrom .utils import get_account\n\n\nclass IsOwnerOrReadOnly(BasePermission):\n \"\"\" Custom API permission to check if request user is the owner of the model \"\"\"\n\n def has_object_permission(self, request, view, obj):\n return (request.method in SAFE_METHODS) or (\n obj.author == get_account(user=request.user)\n )\n\n\nclass IsProfileOwnerOrReadOnly(BasePermission):\n \"\"\" Custom API permission to check if request user is the owner of the account \"\"\"\n\n def has_object_permission(self, request, view, obj):\n return (request.method in SAFE_METHODS) or (obj.user == request.user)\n\n\nclass IsProfileOwnerOrDuringRegistrationOrReadOnly(IsProfileOwnerOrReadOnly):\n \"\"\" \"\"\"\n def has_object_permission(self, request, view, obj):\n if obj.full_account:\n return super(\n IsProfileOwnerOrDuringRegistrationOrReadOnly, self\n ).has_object_permission(request, view, obj)\n return True\n", "path": "project/api/permissions.py"}, {"content": "from functools import wraps\nfrom django.http import HttpResponseBadRequest, HttpResponseRedirect\nfrom accounts.models import Profile\n\n\"\"\"\nUSAGE:\n @require_post_params(params=['we', 'are', 'required'])\n\n returns a bad request if all required parameters are not present in the POST\n\"\"\"\n\n\ndef require_post_params(params):\n def decorator(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n if not all(param in request.POST for param in params):\n missing_params = \" \".join([p for p in params if p not in request.POST])\n reason = \"Missing required parameter(s): {p}\".format(p=missing_params)\n return HttpResponseBadRequest(reason=reason)\n return func(request, *args, **kwargs)\n\n return inner\n\n return decorator\n\n\ndef full_account(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n account = Profile.objects.get(user=request.user)\n if not account.full_account:\n return HttpResponseRedirect(\"/setup\")\n return func(request, *args, **kwargs)\n\n return inner\n\n\ndef login_required(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n if not request.user.is_authenticated:\n return HttpResponseRedirect(\"/\")\n return func(request, *args, **kwargs)\n\n return inner\n", "path": "project/core/custom_decorators.py"}], "after_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import user_passes_test\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.db.models import F\nfrom django.http import HttpResponse, HttpResponseRedirect\nfrom django.template.response import TemplateResponse\nfrom django.contrib.auth import get_user_model\n\nfrom api.models import Category, Thread, Civi, Activity\nfrom accounts.models import Profile\nfrom accounts.forms import UpdateProfile\nfrom api.forms import UpdateProfileImage\nfrom core.constants import US_STATES\nfrom core.custom_decorators import login_required, full_profile\n\n\ndef base_view(request):\n if not 
request.user.is_authenticated:\n return TemplateResponse(request, \"static_templates/landing.html\", {})\n\n a = Profile.objects.get(user=request.user)\n if \"login_user_image\" not in request.session.keys():\n request.session[\"login_user_image\"] = a.profile_image_thumb_url\n\n categories = [{\"id\": c.id, \"name\": c.name} for c in Category.objects.all()]\n\n all_categories = list(Category.objects.values_list(\"id\", flat=True))\n user_categories = list(a.categories.values_list(\"id\", flat=True)) or all_categories\n\n feed_threads = [\n Thread.objects.summarize(t)\n for t in Thread.objects.exclude(is_draft=True).order_by(\"-created\")\n ]\n top5_threads = list(\n Thread.objects.filter(is_draft=False)\n .order_by(\"-num_views\")[:5]\n .values(\"id\", \"title\")\n )\n my_draft_threads = [\n Thread.objects.summarize(t)\n for t in Thread.objects.filter(author_id=a.id)\n .exclude(is_draft=False)\n .order_by(\"-created\")\n ]\n\n states = sorted(US_STATES, key=lambda s: s[1])\n data = {\n \"categories\": categories,\n \"states\": states,\n \"user_categories\": user_categories,\n \"threads\": feed_threads,\n \"trending\": top5_threads,\n \"draft_threads\": my_draft_threads,\n }\n\n return TemplateResponse(request, \"feed.html\", {\"data\": json.dumps(data)})\n\n\n@login_required\n@full_profile\ndef user_profile(request, username=None):\n User = get_user_model()\n if request.method == \"GET\":\n if not username:\n return HttpResponseRedirect(\"/profile/{0}\".format(request.user))\n else:\n is_owner = username == request.user.username\n try:\n user = User.objects.get(username=username)\n profile = user.profile_set.first()\n except User.DoesNotExist:\n return HttpResponseRedirect(\"/404\")\n\n form = UpdateProfile(\n initial={\n \"username\": user.username,\n \"email\": user.email,\n \"first_name\": profile.first_name or None,\n \"last_name\": profile.last_name or None,\n \"about_me\": profile.about_me or None,\n },\n readonly=True,\n )\n data = {\n \"username\": user,\n \"profile_image_form\": UpdateProfileImage,\n \"form\": form if is_owner else None,\n \"readonly\": True,\n }\n return TemplateResponse(request, \"account.html\", data)\n\n\n@login_required\ndef user_setup(request):\n profile = Profile.objects.get(user=request.user)\n if profile.full_profile:\n return HttpResponseRedirect(\"/\")\n # start temp rep rendering TODO: REMOVE THIS\n else:\n data = {\n \"username\": request.user.username,\n \"email\": request.user.email,\n }\n return TemplateResponse(request, \"user-setup.html\", data)\n\n\n@login_required\n@full_profile\ndef issue_thread(request, thread_id=None):\n if not thread_id:\n return HttpResponseRedirect(\"/404\")\n\n req_acct = Profile.objects.get(user=request.user)\n t = Thread.objects.get(id=thread_id)\n c_qs = Civi.objects.filter(thread_id=thread_id).exclude(c_type=\"response\")\n c_scored = [c.dict_with_score(req_acct.id) for c in c_qs]\n civis = sorted(c_scored, key=lambda c: c[\"score\"], reverse=True)\n\n # modify thread view count\n t.num_civis = len(civis)\n t.num_views = F(\"num_views\") + 1\n t.save()\n t.refresh_from_db()\n\n thread_wiki_data = {\n \"thread_id\": thread_id,\n \"title\": t.title,\n \"summary\": t.summary,\n \"image\": t.image_url,\n \"author\": {\n \"username\": t.author.user.username,\n \"profile_image\": t.author.profile_image_url,\n \"first_name\": t.author.first_name,\n \"last_name\": t.author.last_name,\n },\n \"contributors\": [\n Profile.objects.chip_summarize(a)\n for a in Profile.objects.filter(\n 
pk__in=civis.distinct(\"author\").values_list(\"author\", flat=True)\n )\n ],\n \"category\": {\"id\": t.category.id, \"name\": t.category.name},\n \"categories\": [{\"id\": c.id, \"name\": c.name} for c in Category.objects.all()],\n \"states\": sorted(US_STATES, key=lambda s: s[1]),\n \"created\": t.created_date_str,\n \"level\": t.level,\n \"state\": t.state if t.level == \"state\" else \"\",\n \"location\": t.level if not t.state else dict(US_STATES).get(t.state),\n \"num_civis\": t.num_civis,\n \"num_views\": t.num_views,\n \"user_votes\": [\n {\n \"civi_id\": act.civi.id,\n \"activity_type\": act.activity_type,\n \"c_type\": act.civi.c_type,\n }\n for act in Activity.objects.filter(thread=t.id, account=req_acct.id)\n ],\n }\n thread_body_data = {\n \"civis\": civis,\n }\n\n data = {\n \"thread_id\": thread_id,\n \"is_draft\": t.is_draft,\n \"thread_wiki_data\": json.dumps(thread_wiki_data),\n \"thread_body_data\": json.dumps(thread_body_data),\n }\n return TemplateResponse(request, \"thread.html\", data)\n\n\n@login_required\n@full_profile\ndef create_group(request):\n return TemplateResponse(request, \"newgroup.html\", {})\n\n\ndef declaration(request):\n return TemplateResponse(request, \"declaration.html\", {})\n\n\ndef landing_view(request):\n return TemplateResponse(request, \"static_templates/landing.html\", {})\n\n\ndef how_it_works_view(request):\n return TemplateResponse(request, \"static_templates/how_it_works.html\", {})\n\n\ndef about_view(request):\n return TemplateResponse(request, \"static_templates/about.html\", {})\n\n\ndef support_us_view(request):\n return TemplateResponse(request, \"static_templates/support_us.html\", {})\n\n\n\"\"\" CSV export function. Thread ID goes in, CSV HTTP response attachment goes out. \"\"\"\n\n\n@csrf_exempt\ndef civi2csv(request, thread_id):\n import csv\n\n thread = thread_id\n response = HttpResponse(content_type=\"text/csv\")\n response[\"Content-Disposition\"] = \"attachment; filename=\" + thread + \".csv\"\n writer = csv.writer(response, delimiter=\",\")\n for card in Civi.objects.filter(thread_id=thread):\n data = []\n for key, value in card.dict_with_score().items():\n if value:\n data.append(value)\n writer.writerow(data)\n return response\n", "path": "project/frontend_views/views.py"}, {"content": "from rest_framework.permissions import BasePermission, SAFE_METHODS\nfrom .utils import get_account\n\n\nclass IsOwnerOrReadOnly(BasePermission):\n \"\"\" Custom API permission to check if request user is the owner of the model \"\"\"\n\n def has_object_permission(self, request, view, obj):\n return (request.method in SAFE_METHODS) or (\n obj.author == get_account(user=request.user)\n )\n\n\nclass IsProfileOwnerOrReadOnly(BasePermission):\n \"\"\" Custom API permission to check if request user is the owner of the account \"\"\"\n\n def has_object_permission(self, request, view, obj):\n return (request.method in SAFE_METHODS) or (obj.user == request.user)\n\n\nclass IsProfileOwnerOrDuringRegistrationOrReadOnly(IsProfileOwnerOrReadOnly):\n \"\"\" \"\"\"\n def has_object_permission(self, request, view, obj):\n if obj.full_profile:\n return super(\n IsProfileOwnerOrDuringRegistrationOrReadOnly, self\n ).has_object_permission(request, view, obj)\n return True\n", "path": "project/api/permissions.py"}, {"content": "from functools import wraps\nfrom django.http import HttpResponseBadRequest, HttpResponseRedirect\nfrom accounts.models import Profile\n\n\"\"\"\nUSAGE:\n @require_post_params(params=['we', 'are', 'required'])\n\n returns a bad 
request if all required parameters are not present in the POST\n\"\"\"\n\n\ndef require_post_params(params):\n def decorator(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n if not all(param in request.POST for param in params):\n missing_params = \" \".join([p for p in params if p not in request.POST])\n reason = \"Missing required parameter(s): {p}\".format(p=missing_params)\n return HttpResponseBadRequest(reason=reason)\n return func(request, *args, **kwargs)\n\n return inner\n\n return decorator\n\n\ndef full_profile(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n profile = Profile.objects.get(user=request.user)\n if not profile.full_profile:\n return HttpResponseRedirect(\"/setup\")\n return func(request, *args, **kwargs)\n\n return inner\n\n\ndef login_required(func):\n @wraps(func)\n def inner(request, *args, **kwargs):\n if not request.user.is_authenticated:\n return HttpResponseRedirect(\"/\")\n return func(request, *args, **kwargs)\n\n return inner\n", "path": "project/core/custom_decorators.py"}]} | 3,217 | 856 |
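A note on the row above: besides the `full_account` → `full_profile` rename, the accepted patch replaces the hard-coded `django.contrib.auth.models.User` import with `get_user_model()` and reads the related profile through `user.profile_set`. A minimal sketch of that lookup pattern follows; it is an illustration only, and the `Profile`-to-`User` foreign key without a `related_name` (which is what produces the `profile_set` reverse accessor) is an assumption carried over from the diff, not verified against the repository.

```python
# Hypothetical sketch of the lookup pattern used in the patched user_profile view.
from django.contrib.auth import get_user_model


def find_profile(username):
    User = get_user_model()  # resolves the active user model (custom or default)
    user = User.objects.get(username=username)  # may raise User.DoesNotExist
    return user.profile_set.first()  # reverse FK accessor assumed from the diff
```

Resolving the user model with `get_user_model()` keeps the view working if the project ever swaps in a custom `AUTH_USER_MODEL`, which is why the patch looks it up inside the view rather than importing `User` at module level.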
gh_patches_debug_48680 | rasdani/github-patches | git_diff | ethereum__web3.py-670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider adding Chain Id to library
* Version: 4.0.0-b
* Python: 3.6.3
* OS: linux
### What was wrong?
No clear way to access known chain ids.
### How can it be fixed?
Proposed syntax
```
>>> from web3 import Chains
>>> Chains.Ropsten.id
3
```
I ran into issues here: https://web3py.readthedocs.io/en/latest/web3.eth.account.html#sign-a-contract-transaction as `buildTransaction()` requires a `chainId`. I didn't even know the chains had ids.
```
>>> unicorn_txn = unicorns.functions.transfer(
... '0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359',
... 1,
... ).buildTransaction({
... 'chainId': 1,
... 'gas': 70000,
... 'gasPrice': w3.toWei('1', 'gwei'),
... 'nonce': nonce,
... })
```
### Maybe this will help others
According to this answer https://ethereum.stackexchange.com/a/17101/7187 the chain ids are as follows:
0: Olympic, Ethereum public pre-release testnet
1: Frontier, Homestead, Metropolis, the Ethereum public main network
1: Classic, the (un)forked public Ethereum Classic main network, chain ID 61
1: Expanse, an alternative Ethereum implementation, chain ID 2
2: Morden, the public Ethereum testnet, now Ethereum Classic testnet
3: Ropsten, the public cross-client Ethereum testnet
4: Rinkeby, the public Geth PoA testnet
42: Kovan, the public Parity PoA testnet
77: Sokol, the public POA Network testnet
99: Core, the public POA Network main network
7762959: Musicoin, the music blockchain
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/net.py`
Content:
```
1 from web3.module import (
2 Module,
3 )
4
5
6 class Net(Module):
7 @property
8 def listening(self):
9 return self.web3.manager.request_blocking("net_listening", [])
10
11 @property
12 def peerCount(self):
13 return self.web3.manager.request_blocking("net_peerCount", [])
14
15 @property
16 def version(self):
17 return self.web3.manager.request_blocking("net_version", [])
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/net.py b/web3/net.py
--- a/web3/net.py
+++ b/web3/net.py
@@ -12,6 +12,10 @@
def peerCount(self):
return self.web3.manager.request_blocking("net_peerCount", [])
+ @property
+ def chainId(self):
+ return self.version
+
@property
def version(self):
return self.web3.manager.request_blocking("net_version", [])
| {"golden_diff": "diff --git a/web3/net.py b/web3/net.py\n--- a/web3/net.py\n+++ b/web3/net.py\n@@ -12,6 +12,10 @@\n def peerCount(self):\n return self.web3.manager.request_blocking(\"net_peerCount\", [])\n \n+ @property\n+ def chainId(self):\n+ return self.version\n+\n @property\n def version(self):\n return self.web3.manager.request_blocking(\"net_version\", [])\n", "issue": "Consider adding Chain Id to library\n* Version: 4.0.0-b\r\n* Python: 3.6.3\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nNo clear way to access known chain ids.\r\n\r\n### How can it be fixed?\r\n\r\nProposed syntax\r\n\r\n```\r\n>>> from web3 import Chains\r\n>>> Chains.Ropsten.id\r\n3\r\n```\r\n\r\nI ran into issues here: https://web3py.readthedocs.io/en/latest/web3.eth.account.html#sign-a-contract-transaction as `buildTransaction()` requires a `chainId`. I didn't even know the chains had ids.\r\n\r\n```\r\n>>> unicorn_txn = unicorns.functions.transfer(\r\n... '0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359',\r\n... 1,\r\n... ).buildTransaction({\r\n... 'chainId': 1,\r\n... 'gas': 70000,\r\n... 'gasPrice': w3.toWei('1', 'gwei'),\r\n... 'nonce': nonce,\r\n... })\r\n```\r\n\r\n### Maybe this will help others\r\n\r\nAccording to this answer https://ethereum.stackexchange.com/a/17101/7187 the chain ids are as follows:\r\n\r\n0: Olympic, Ethereum public pre-release testnet\r\n1: Frontier, Homestead, Metropolis, the Ethereum public main network\r\n1: Classic, the (un)forked public Ethereum Classic main network, chain ID 61\r\n1: Expanse, an alternative Ethereum implementation, chain ID 2\r\n2: Morden, the public Ethereum testnet, now Ethereum Classic testnet\r\n3: Ropsten, the public cross-client Ethereum testnet\r\n4: Rinkeby, the public Geth PoA testnet\r\n42: Kovan, the public Parity PoA testnet\r\n77: Sokol, the public POA Network testnet\r\n99: Core, the public POA Network main network\r\n7762959: Musicoin, the music blockchain\r\n\nConsider adding Chain Id to library\n* Version: 4.0.0-b\r\n* Python: 3.6.3\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nNo clear way to access known chain ids.\r\n\r\n### How can it be fixed?\r\n\r\nProposed syntax\r\n\r\n```\r\n>>> from web3 import Chains\r\n>>> Chains.Ropsten.id\r\n3\r\n```\r\n\r\nI ran into issues here: https://web3py.readthedocs.io/en/latest/web3.eth.account.html#sign-a-contract-transaction as `buildTransaction()` requires a `chainId`. I didn't even know the chains had ids.\r\n\r\n```\r\n>>> unicorn_txn = unicorns.functions.transfer(\r\n... '0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359',\r\n... 1,\r\n... ).buildTransaction({\r\n... 'chainId': 1,\r\n... 'gas': 70000,\r\n... 'gasPrice': w3.toWei('1', 'gwei'),\r\n... 'nonce': nonce,\r\n... 
})\r\n```\r\n\r\n### Maybe this will help others\r\n\r\nAccording to this answer https://ethereum.stackexchange.com/a/17101/7187 the chain ids are as follows:\r\n\r\n0: Olympic, Ethereum public pre-release testnet\r\n1: Frontier, Homestead, Metropolis, the Ethereum public main network\r\n1: Classic, the (un)forked public Ethereum Classic main network, chain ID 61\r\n1: Expanse, an alternative Ethereum implementation, chain ID 2\r\n2: Morden, the public Ethereum testnet, now Ethereum Classic testnet\r\n3: Ropsten, the public cross-client Ethereum testnet\r\n4: Rinkeby, the public Geth PoA testnet\r\n42: Kovan, the public Parity PoA testnet\r\n77: Sokol, the public POA Network testnet\r\n99: Core, the public POA Network main network\r\n7762959: Musicoin, the music blockchain\r\n\n", "before_files": [{"content": "from web3.module import (\n Module,\n)\n\n\nclass Net(Module):\n @property\n def listening(self):\n return self.web3.manager.request_blocking(\"net_listening\", [])\n\n @property\n def peerCount(self):\n return self.web3.manager.request_blocking(\"net_peerCount\", [])\n\n @property\n def version(self):\n return self.web3.manager.request_blocking(\"net_version\", [])\n", "path": "web3/net.py"}], "after_files": [{"content": "from web3.module import (\n Module,\n)\n\n\nclass Net(Module):\n @property\n def listening(self):\n return self.web3.manager.request_blocking(\"net_listening\", [])\n\n @property\n def peerCount(self):\n return self.web3.manager.request_blocking(\"net_peerCount\", [])\n\n @property\n def chainId(self):\n return self.version\n\n @property\n def version(self):\n return self.web3.manager.request_blocking(\"net_version\", [])\n", "path": "web3/net.py"}]} | 1,265 | 103 |
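A note on the row above: the merged change simply exposes `net_version` under a second name, so `w3.net.chainId` returns the same string the connected node reports for its network. A rough usage sketch follows; the provider URL and nonce are placeholders, the recipient address is reused from the issue's own example, and the `int()` conversion reflects the fact that `net_version` comes back as a string.

```python
# Hypothetical sketch of reading the chain id instead of hard-coding it.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # placeholder endpoint
chain_id = int(w3.net.chainId)  # new property; proxies the net_version RPC call

tx = {
    "chainId": chain_id,
    "gas": 70000,
    "gasPrice": w3.toWei("1", "gwei"),
    "to": "0xfB6916095ca1df60bB79Ce92cE3Ea74c37c5d359",  # address from the issue example
    "value": 1,
    "nonce": 0,  # placeholder; normally w3.eth.getTransactionCount(sender)
}
```

Note that this only relays whatever the connected node reports; it does not consult a static table of well-known network ids, which is what the issue originally proposed.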
gh_patches_debug_36617 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-406 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use numpy.load and save instead of pickle
As @StanczakDominik put it:
> By the way, apparently pickle is unsafe due to allowing arbitrary code execution, and we're now including those in Langmuir samples. @jasperbeckers do you think we could transition to numpy.save and numpy.load .npz files? We're just storing two arrays in each of those anyway, right?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/examples/plot_langmuir_analysis.py`
Content:
```
1 # coding: utf-8
2 """
3 Langmuir probe data analysis
4 ============================
5
6 Let's analyze a few Langmuir probe characteristics using the
7 `diagnostics.langmuir` subpackage. First we need to import the module and some
8 basics.
9 """
10
11 from plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis
12 import astropy.units as u
13 import numpy as np
14 import pickle
15 import os
16
17 ######################################################
18 # The first characteristic we analyze is a simple single-probe measurement in
19 # a low (ion) temperature, low density plasma with a cylindrical probe. This
20 # allows us to utilize OML theory implemented in `swept_probe_analysis`.
21 # The data has been preprocessed with some smoothing, which allows us to obtain
22 # a Electron Energy Distribution Function (EEDF) as well.
23
24 # Load the bias and current values stored in the .p pickle file.
25 path = os.path.join("langmuir_samples", "Beckers2017.p")
26 bias, current = pickle.load(open(path, 'rb'))
27
28 # Create the Characteristic object, taking into account the correct units
29 characteristic = Characteristic(np.array(bias) * u.V,
30 np.array(current)*1e3 * u.mA)
31
32 # Calculate the cylindrical probe surface area
33 probe_length = 1.145 * u.mm
34 probe_diameter = 1.57 * u.mm
35 probe_area = (probe_length * np.pi * probe_diameter +
36 np.pi * 0.25 * probe_diameter**2)
37
38 ######################################################
39 # Now we can actually perform the analysis. Since the plasma is in Helium an
40 # ion mass number of 4 is entered. The results are visualized and the obtained
41 # EEDF is also shown.
42 print(swept_probe_analysis(characteristic,
43 probe_area, 4 * u.u,
44 visualize=True,
45 plot_EEDF=True))
46
47 ######################################################
48 # The cyan and yellow lines indicate the fitted electron and ion currents,
49 # respectively. The green line is the sum of these and agrees nicely with the
50 # data. This indicates a succesfull analysis.
51
52 ######################################################
53 # The next sample probe data is provided by David Pace. is also obtained from a low relatively ion
54 # temperature and density plasma, in Argon.
55
56 # Load the data from a file and create the Characteristic object
57 path = os.path.join("langmuir_samples", "Pace2015.p")
58 bias, current = pickle.load(open(path, 'rb'))
59 characteristic = Characteristic(np.array(bias) * u.V,
60 np.array(current) * 1e3 * u.mA)
61
62 ######################################################
63 # Initially the electrons are assumed to be Maxwellian. To check this the fit
64 # of the electron growth region will be plotted.
65 swept_probe_analysis(characteristic,
66 0.738 * u.cm**2,
67 40 * u.u,
68 bimaxwellian=False,
69 plot_electron_fit=True)
70
71 ######################################################
72 # It can be seen that this plasma is slightly bi-Maxwellian, as there are two
73 # distinct slopes in the exponential section. The analysis is now performed
74 # with bimaxwellian set to True, which yields improved results.
75 print(swept_probe_analysis(characteristic,
76 0.738 * u.cm**2,
77 40 * u.u,
78 bimaxwellian=True,
79 visualize=True,
80 plot_electron_fit=True))
81
82 ######################################################
83 # The probe current resolution of the raw data is relatively poor, but the
84 # analysis still performs well in the ion current region. The bi-Maxwellian
85 # properties are not significant but do make a difference. Check this analysis
86 # without setting `bimaxwellian` to True!
87 # This is reflected in the results, which indicate that the temperatures of
88 # the cold and hot electron population are indeed different, but relatively
89 # close.
90
91 ######################################################
92 # This Helium plasma is fully bi-Maxwellian.
93
94 # Import probe data and calculate probe surface area.
95 path = os.path.join("langmuir_samples", "Beckers2017b.p")
96 bias, current = pickle.load(open(path, 'rb'))
97 characteristic = Characteristic(np.array(bias) * u.V,
98 np.array(current) * 1e3 * u.mA)
99 probe_length = 1.145 * u.mm
100 probe_diameter = 1.57 * u.mm
101 probe_area = (probe_length * np.pi * probe_diameter +
102 np.pi * 0.25 * probe_diameter**2)
103
104 ######################################################
105 # `plot_electron_fit` is set to True to check the bi-Maxwellian properties.
106 # The fit converges nicely to the two slopes of the electron growth region.
107 print(swept_probe_analysis(characteristic,
108 probe_area,
109 4 * u.u,
110 bimaxwellian=True,
111 plot_electron_fit=True,
112 visualize=True))
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/examples/plot_langmuir_analysis.py b/plasmapy/examples/plot_langmuir_analysis.py
--- a/plasmapy/examples/plot_langmuir_analysis.py
+++ b/plasmapy/examples/plot_langmuir_analysis.py
@@ -11,7 +11,6 @@
from plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis
import astropy.units as u
import numpy as np
-import pickle
import os
######################################################
@@ -22,8 +21,8 @@
# a Electron Energy Distribution Function (EEDF) as well.
# Load the bias and current values stored in the .p pickle file.
-path = os.path.join("langmuir_samples", "Beckers2017.p")
-bias, current = pickle.load(open(path, 'rb'))
+path = os.path.join("langmuir_samples", "Beckers2017.npy")
+bias, current = np.load(path)
# Create the Characteristic object, taking into account the correct units
characteristic = Characteristic(np.array(bias) * u.V,
@@ -50,12 +49,12 @@
# data. This indicates a succesfull analysis.
######################################################
-# The next sample probe data is provided by David Pace. is also obtained from a low relatively ion
-# temperature and density plasma, in Argon.
+# The next sample probe data is provided by David Pace. It is also obtained
+# from a low relatively ion temperature and density plasma, in Argon.
# Load the data from a file and create the Characteristic object
-path = os.path.join("langmuir_samples", "Pace2015.p")
-bias, current = pickle.load(open(path, 'rb'))
+path = os.path.join("langmuir_samples", "Pace2015.npy")
+bias, current = np.load(path)
characteristic = Characteristic(np.array(bias) * u.V,
np.array(current) * 1e3 * u.mA)
@@ -92,8 +91,8 @@
# This Helium plasma is fully bi-Maxwellian.
# Import probe data and calculate probe surface area.
-path = os.path.join("langmuir_samples", "Beckers2017b.p")
-bias, current = pickle.load(open(path, 'rb'))
+path = os.path.join("langmuir_samples", "Beckers2017b.npy")
+bias, current = np.load(path)
characteristic = Characteristic(np.array(bias) * u.V,
np.array(current) * 1e3 * u.mA)
probe_length = 1.145 * u.mm
| {"golden_diff": "diff --git a/plasmapy/examples/plot_langmuir_analysis.py b/plasmapy/examples/plot_langmuir_analysis.py\n--- a/plasmapy/examples/plot_langmuir_analysis.py\n+++ b/plasmapy/examples/plot_langmuir_analysis.py\n@@ -11,7 +11,6 @@\n from plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis\n import astropy.units as u\n import numpy as np\n-import pickle\n import os\n \n ######################################################\n@@ -22,8 +21,8 @@\n # a Electron Energy Distribution Function (EEDF) as well.\n \n # Load the bias and current values stored in the .p pickle file.\n-path = os.path.join(\"langmuir_samples\", \"Beckers2017.p\")\n-bias, current = pickle.load(open(path, 'rb'))\n+path = os.path.join(\"langmuir_samples\", \"Beckers2017.npy\")\n+bias, current = np.load(path)\n \n # Create the Characteristic object, taking into account the correct units\n characteristic = Characteristic(np.array(bias) * u.V,\n@@ -50,12 +49,12 @@\n # data. This indicates a succesfull analysis.\n \n ######################################################\n-# The next sample probe data is provided by David Pace. is also obtained from a low relatively ion\n-# temperature and density plasma, in Argon.\n+# The next sample probe data is provided by David Pace. It is also obtained\n+# from a low relatively ion temperature and density plasma, in Argon.\n \n # Load the data from a file and create the Characteristic object\n-path = os.path.join(\"langmuir_samples\", \"Pace2015.p\")\n-bias, current = pickle.load(open(path, 'rb'))\n+path = os.path.join(\"langmuir_samples\", \"Pace2015.npy\")\n+bias, current = np.load(path)\n characteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n \n@@ -92,8 +91,8 @@\n # This Helium plasma is fully bi-Maxwellian.\n \n # Import probe data and calculate probe surface area.\n-path = os.path.join(\"langmuir_samples\", \"Beckers2017b.p\")\n-bias, current = pickle.load(open(path, 'rb'))\n+path = os.path.join(\"langmuir_samples\", \"Beckers2017b.npy\")\n+bias, current = np.load(path)\n characteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n probe_length = 1.145 * u.mm\n", "issue": "Use numpy.load and save instead of pickle\nAs @StanczakDominik put it:\r\n\r\n> By the way, apparently pickle is unsafe due to allowing arbitrary code execution, and we're now including those in Langmuir samples. @jasperbeckers do you think we could transition to numpy.save and numpy.load .npz files? We're just storing two arrays in each of those anyway, right?\n", "before_files": [{"content": "# coding: utf-8\n\"\"\"\nLangmuir probe data analysis\n============================\n\nLet's analyze a few Langmuir probe characteristics using the\n`diagnostics.langmuir` subpackage. First we need to import the module and some\nbasics.\n\"\"\"\n\nfrom plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis\nimport astropy.units as u\nimport numpy as np\nimport pickle\nimport os\n\n######################################################\n# The first characteristic we analyze is a simple single-probe measurement in\n# a low (ion) temperature, low density plasma with a cylindrical probe. 
This\n# allows us to utilize OML theory implemented in `swept_probe_analysis`.\n# The data has been preprocessed with some smoothing, which allows us to obtain\n# a Electron Energy Distribution Function (EEDF) as well.\n\n# Load the bias and current values stored in the .p pickle file.\npath = os.path.join(\"langmuir_samples\", \"Beckers2017.p\")\nbias, current = pickle.load(open(path, 'rb'))\n\n# Create the Characteristic object, taking into account the correct units\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current)*1e3 * u.mA)\n\n# Calculate the cylindrical probe surface area\nprobe_length = 1.145 * u.mm\nprobe_diameter = 1.57 * u.mm\nprobe_area = (probe_length * np.pi * probe_diameter +\n np.pi * 0.25 * probe_diameter**2)\n\n######################################################\n# Now we can actually perform the analysis. Since the plasma is in Helium an\n# ion mass number of 4 is entered. The results are visualized and the obtained\n# EEDF is also shown.\nprint(swept_probe_analysis(characteristic,\n probe_area, 4 * u.u,\n visualize=True,\n plot_EEDF=True))\n\n######################################################\n# The cyan and yellow lines indicate the fitted electron and ion currents,\n# respectively. The green line is the sum of these and agrees nicely with the\n# data. This indicates a succesfull analysis.\n\n######################################################\n# The next sample probe data is provided by David Pace. is also obtained from a low relatively ion\n# temperature and density plasma, in Argon.\n\n# Load the data from a file and create the Characteristic object\npath = os.path.join(\"langmuir_samples\", \"Pace2015.p\")\nbias, current = pickle.load(open(path, 'rb'))\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n\n######################################################\n# Initially the electrons are assumed to be Maxwellian. To check this the fit\n# of the electron growth region will be plotted.\nswept_probe_analysis(characteristic,\n 0.738 * u.cm**2,\n 40 * u.u,\n bimaxwellian=False,\n plot_electron_fit=True)\n\n######################################################\n# It can be seen that this plasma is slightly bi-Maxwellian, as there are two\n# distinct slopes in the exponential section. The analysis is now performed\n# with bimaxwellian set to True, which yields improved results.\nprint(swept_probe_analysis(characteristic,\n 0.738 * u.cm**2,\n 40 * u.u,\n bimaxwellian=True,\n visualize=True,\n plot_electron_fit=True))\n\n######################################################\n# The probe current resolution of the raw data is relatively poor, but the\n# analysis still performs well in the ion current region. The bi-Maxwellian\n# properties are not significant but do make a difference. 
Check this analysis\n# without setting `bimaxwellian` to True!\n# This is reflected in the results, which indicate that the temperatures of\n# the cold and hot electron population are indeed different, but relatively\n# close.\n\n######################################################\n# This Helium plasma is fully bi-Maxwellian.\n\n# Import probe data and calculate probe surface area.\npath = os.path.join(\"langmuir_samples\", \"Beckers2017b.p\")\nbias, current = pickle.load(open(path, 'rb'))\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\nprobe_length = 1.145 * u.mm\nprobe_diameter = 1.57 * u.mm\nprobe_area = (probe_length * np.pi * probe_diameter +\n np.pi * 0.25 * probe_diameter**2)\n\n######################################################\n# `plot_electron_fit` is set to True to check the bi-Maxwellian properties.\n# The fit converges nicely to the two slopes of the electron growth region.\nprint(swept_probe_analysis(characteristic,\n probe_area,\n 4 * u.u,\n bimaxwellian=True,\n plot_electron_fit=True,\n visualize=True))\n", "path": "plasmapy/examples/plot_langmuir_analysis.py"}], "after_files": [{"content": "# coding: utf-8\n\"\"\"\nLangmuir probe data analysis\n============================\n\nLet's analyze a few Langmuir probe characteristics using the\n`diagnostics.langmuir` subpackage. First we need to import the module and some\nbasics.\n\"\"\"\n\nfrom plasmapy.diagnostics.langmuir import Characteristic, swept_probe_analysis\nimport astropy.units as u\nimport numpy as np\nimport os\n\n######################################################\n# The first characteristic we analyze is a simple single-probe measurement in\n# a low (ion) temperature, low density plasma with a cylindrical probe. This\n# allows us to utilize OML theory implemented in `swept_probe_analysis`.\n# The data has been preprocessed with some smoothing, which allows us to obtain\n# a Electron Energy Distribution Function (EEDF) as well.\n\n# Load the bias and current values stored in the .p pickle file.\npath = os.path.join(\"langmuir_samples\", \"Beckers2017.npy\")\nbias, current = np.load(path)\n\n# Create the Characteristic object, taking into account the correct units\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current)*1e3 * u.mA)\n\n# Calculate the cylindrical probe surface area\nprobe_length = 1.145 * u.mm\nprobe_diameter = 1.57 * u.mm\nprobe_area = (probe_length * np.pi * probe_diameter +\n np.pi * 0.25 * probe_diameter**2)\n\n######################################################\n# Now we can actually perform the analysis. Since the plasma is in Helium an\n# ion mass number of 4 is entered. The results are visualized and the obtained\n# EEDF is also shown.\nprint(swept_probe_analysis(characteristic,\n probe_area, 4 * u.u,\n visualize=True,\n plot_EEDF=True))\n\n######################################################\n# The cyan and yellow lines indicate the fitted electron and ion currents,\n# respectively. The green line is the sum of these and agrees nicely with the\n# data. This indicates a succesfull analysis.\n\n######################################################\n# The next sample probe data is provided by David Pace. 
It is also obtained\n# from a low relatively ion temperature and density plasma, in Argon.\n\n# Load the data from a file and create the Characteristic object\npath = os.path.join(\"langmuir_samples\", \"Pace2015.npy\")\nbias, current = np.load(path)\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\n\n######################################################\n# Initially the electrons are assumed to be Maxwellian. To check this the fit\n# of the electron growth region will be plotted.\nswept_probe_analysis(characteristic,\n 0.738 * u.cm**2,\n 40 * u.u,\n bimaxwellian=False,\n plot_electron_fit=True)\n\n######################################################\n# It can be seen that this plasma is slightly bi-Maxwellian, as there are two\n# distinct slopes in the exponential section. The analysis is now performed\n# with bimaxwellian set to True, which yields improved results.\nprint(swept_probe_analysis(characteristic,\n 0.738 * u.cm**2,\n 40 * u.u,\n bimaxwellian=True,\n visualize=True,\n plot_electron_fit=True))\n\n######################################################\n# The probe current resolution of the raw data is relatively poor, but the\n# analysis still performs well in the ion current region. The bi-Maxwellian\n# properties are not significant but do make a difference. Check this analysis\n# without setting `bimaxwellian` to True!\n# This is reflected in the results, which indicate that the temperatures of\n# the cold and hot electron population are indeed different, but relatively\n# close.\n\n######################################################\n# This Helium plasma is fully bi-Maxwellian.\n\n# Import probe data and calculate probe surface area.\npath = os.path.join(\"langmuir_samples\", \"Beckers2017b.npy\")\nbias, current = np.load(path)\ncharacteristic = Characteristic(np.array(bias) * u.V,\n np.array(current) * 1e3 * u.mA)\nprobe_length = 1.145 * u.mm\nprobe_diameter = 1.57 * u.mm\nprobe_area = (probe_length * np.pi * probe_diameter +\n np.pi * 0.25 * probe_diameter**2)\n\n######################################################\n# `plot_electron_fit` is set to True to check the bi-Maxwellian properties.\n# The fit converges nicely to the two slopes of the electron growth region.\nprint(swept_probe_analysis(characteristic,\n probe_area,\n 4 * u.u,\n bimaxwellian=True,\n plot_electron_fit=True,\n visualize=True))\n", "path": "plasmapy/examples/plot_langmuir_analysis.py"}]} | 1,633 | 588 |
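A note on the row above: the patch assumes the pickled sample files have already been re-saved in NumPy's `.npy` format, but that conversion is not part of the diff. A one-off script along the following lines would produce files that the patched example can unpack with `bias, current = np.load(path)`; the sample names are taken from the example, everything else is an assumption.

```python
# Hypothetical one-time conversion of the pickled Langmuir samples to .npy files.
import os
import pickle

import numpy as np

for name in ("Beckers2017", "Pace2015", "Beckers2017b"):
    with open(os.path.join("langmuir_samples", name + ".p"), "rb") as f:
        bias, current = pickle.load(f)  # two same-length 1-D sequences per file
    stacked = np.stack([np.asarray(bias), np.asarray(current)])  # shape (2, N)
    np.save(os.path.join("langmuir_samples", name + ".npy"), stacked)
```

Saving the two traces as a single `(2, N)` array is what lets the patched example unpack them in one line, and it avoids the arbitrary-code-execution risk that motivated the move away from pickle in the first place.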
gh_patches_debug_20904 | rasdani/github-patches | git_diff | cupy__cupy-1911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[WIP] Fix assigning from complex to float (only test)
When a user assign complex value to float array, it causes an error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/core/_ufuncs.py`
Content:
```
1 from cupy.core._kernel import create_ufunc
2
3 elementwise_copy = create_ufunc(
4 'cupy_copy',
5 ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',
6 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),
7 'out0 = out0_type(in0)', default_casting='unsafe')
8 # complex numbers requires out0 = complex<T>(in0)
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/core/_ufuncs.py b/cupy/core/_ufuncs.py
--- a/cupy/core/_ufuncs.py
+++ b/cupy/core/_ufuncs.py
@@ -1,8 +1,30 @@
from cupy.core._kernel import create_ufunc
+
+_complex_cast_copy = '''
+template<typename T, typename U>
+__device__ void cast_copy(const U& x, T& y) {y = T(x);}
+template<typename T, typename U>
+__device__ void cast_copy(const complex<U>& x, complex<T>& y) {
+ y = complex<T>(x);
+}
+template<typename T, typename U>
+__device__ void cast_copy(const complex<U>& x, T& y) {y = T(x.real());}
+'''
+
+
elementwise_copy = create_ufunc(
'cupy_copy',
('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',
'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),
- 'out0 = out0_type(in0)', default_casting='unsafe')
+ 'cast_copy(in0, out0)',
+ preamble=_complex_cast_copy, default_casting='unsafe')
+
+
+elementwise_copy_where = create_ufunc(
+ 'cupy_copy_where',
+ ('??->?', 'b?->b', 'B?->B', 'h?->h', 'H?->H', 'i?->i', 'I?->I', 'l?->l',
+ 'L?->L', 'q?->q', 'Q?->Q', 'e?->e', 'f?->f', 'd?->d', 'F?->F', 'D?->D'),
+ 'if (in1) cast_copy(in0, out0)',
+ preamble=_complex_cast_copy, default_casting='unsafe')
# complex numbers requires out0 = complex<T>(in0)
| {"golden_diff": "diff --git a/cupy/core/_ufuncs.py b/cupy/core/_ufuncs.py\n--- a/cupy/core/_ufuncs.py\n+++ b/cupy/core/_ufuncs.py\n@@ -1,8 +1,30 @@\n from cupy.core._kernel import create_ufunc\n \n+\n+_complex_cast_copy = '''\n+template<typename T, typename U>\n+__device__ void cast_copy(const U& x, T& y) {y = T(x);}\n+template<typename T, typename U>\n+__device__ void cast_copy(const complex<U>& x, complex<T>& y) {\n+ y = complex<T>(x);\n+}\n+template<typename T, typename U>\n+__device__ void cast_copy(const complex<U>& x, T& y) {y = T(x.real());}\n+'''\n+\n+\n elementwise_copy = create_ufunc(\n 'cupy_copy',\n ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',\n 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),\n- 'out0 = out0_type(in0)', default_casting='unsafe')\n+ 'cast_copy(in0, out0)',\n+ preamble=_complex_cast_copy, default_casting='unsafe')\n+\n+\n+elementwise_copy_where = create_ufunc(\n+ 'cupy_copy_where',\n+ ('??->?', 'b?->b', 'B?->B', 'h?->h', 'H?->H', 'i?->i', 'I?->I', 'l?->l',\n+ 'L?->L', 'q?->q', 'Q?->Q', 'e?->e', 'f?->f', 'd?->d', 'F?->F', 'D?->D'),\n+ 'if (in1) cast_copy(in0, out0)',\n+ preamble=_complex_cast_copy, default_casting='unsafe')\n # complex numbers requires out0 = complex<T>(in0)\n", "issue": "[WIP] Fix assigning from complex to float (only test)\nWhen a user assign complex value to float array, it causes an error.\n", "before_files": [{"content": "from cupy.core._kernel import create_ufunc\n\nelementwise_copy = create_ufunc(\n 'cupy_copy',\n ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',\n 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),\n 'out0 = out0_type(in0)', default_casting='unsafe')\n# complex numbers requires out0 = complex<T>(in0)\n", "path": "cupy/core/_ufuncs.py"}], "after_files": [{"content": "from cupy.core._kernel import create_ufunc\n\n\n_complex_cast_copy = '''\ntemplate<typename T, typename U>\n__device__ void cast_copy(const U& x, T& y) {y = T(x);}\ntemplate<typename T, typename U>\n__device__ void cast_copy(const complex<U>& x, complex<T>& y) {\n y = complex<T>(x);\n}\ntemplate<typename T, typename U>\n__device__ void cast_copy(const complex<U>& x, T& y) {y = T(x.real());}\n'''\n\n\nelementwise_copy = create_ufunc(\n 'cupy_copy',\n ('?->?', 'b->b', 'B->B', 'h->h', 'H->H', 'i->i', 'I->I', 'l->l', 'L->L',\n 'q->q', 'Q->Q', 'e->e', 'f->f', 'd->d', 'F->F', 'D->D'),\n 'cast_copy(in0, out0)',\n preamble=_complex_cast_copy, default_casting='unsafe')\n\n\nelementwise_copy_where = create_ufunc(\n 'cupy_copy_where',\n ('??->?', 'b?->b', 'B?->B', 'h?->h', 'H?->H', 'i?->i', 'I?->I', 'l?->l',\n 'L?->L', 'q?->q', 'Q?->Q', 'e?->e', 'f?->f', 'd?->d', 'F?->F', 'D?->D'),\n 'if (in1) cast_copy(in0, out0)',\n preamble=_complex_cast_copy, default_casting='unsafe')\n# complex numbers requires out0 = complex<T>(in0)\n", "path": "cupy/core/_ufuncs.py"}]} | 438 | 480 |
gh_patches_debug_7704 | rasdani/github-patches | git_diff | searx__searx-2396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Startpage: The title result is showing the url

Just updated to the newest version. 0.18.0. The title in StartPage result is showing the url rather than the page title. Same happens to other public instance.
_Originally posted by @lucky13820 in https://github.com/searx/searx/pull/2385#issuecomment-746927618_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/startpage.py`
Content:
```
1 # Startpage (Web)
2 #
3 # @website https://startpage.com
4 # @provide-api no (nothing found)
5 #
6 # @using-api no
7 # @results HTML
8 # @stable no (HTML can change)
9 # @parse url, title, content
10 #
11 # @todo paging
12
13 from lxml import html
14 from dateutil import parser
15 from datetime import datetime, timedelta
16 import re
17 from unicodedata import normalize, combining
18 from babel import Locale
19 from babel.localedata import locale_identifiers
20 from searx.utils import extract_text, eval_xpath, match_language
21
22 # engine dependent config
23 categories = ['general']
24 # there is a mechanism to block "bot" search
25 # (probably the parameter qid), require
26 # storing of qid's between mulitble search-calls
27
28 paging = True
29 language_support = True
30 supported_languages_url = 'https://www.startpage.com/do/settings'
31
32 # search-url
33 base_url = 'https://startpage.com/'
34 search_url = base_url + 'do/search'
35
36 # specific xpath variables
37 # ads xpath //div[@id="results"]/div[@id="sponsored"]//div[@class="result"]
38 # not ads: div[@class="result"] are the direct childs of div[@id="results"]
39 results_xpath = '//div[@class="w-gl__result__main"]'
40 link_xpath = './/a[@class="w-gl__result-url result-link"]'
41 content_xpath = './/p[@class="w-gl__description"]'
42
43
44 # do search-request
45 def request(query, params):
46
47 params['url'] = search_url
48 params['method'] = 'POST'
49 params['data'] = {
50 'query': query,
51 'page': params['pageno'],
52 'cat': 'web',
53 'cmd': 'process_search',
54 'engine0': 'v1all',
55 }
56
57 # set language if specified
58 if params['language'] != 'all':
59 lang_code = match_language(params['language'], supported_languages, fallback=None)
60 if lang_code:
61 language_name = supported_languages[lang_code]['alias']
62 params['data']['language'] = language_name
63 params['data']['lui'] = language_name
64
65 return params
66
67
68 # get response from search-request
69 def response(resp):
70 results = []
71
72 dom = html.fromstring(resp.text)
73
74 # parse results
75 for result in eval_xpath(dom, results_xpath):
76 links = eval_xpath(result, link_xpath)
77 if not links:
78 continue
79 link = links[0]
80 url = link.attrib.get('href')
81
82 # block google-ad url's
83 if re.match(r"^http(s|)://(www\.)?google\.[a-z]+/aclk.*$", url):
84 continue
85
86 # block startpage search url's
87 if re.match(r"^http(s|)://(www\.)?startpage\.com/do/search\?.*$", url):
88 continue
89
90 title = extract_text(link)
91
92 if eval_xpath(result, content_xpath):
93 content = extract_text(eval_xpath(result, content_xpath))
94 else:
95 content = ''
96
97 published_date = None
98
99 # check if search result starts with something like: "2 Sep 2014 ... "
100 if re.match(r"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \.\.\. ", content):
101 date_pos = content.find('...') + 4
102 date_string = content[0:date_pos - 5]
103 # fix content string
104 content = content[date_pos:]
105
106 try:
107 published_date = parser.parse(date_string, dayfirst=True)
108 except ValueError:
109 pass
110
111 # check if search result starts with something like: "5 days ago ... "
112 elif re.match(r"^[0-9]+ days? ago \.\.\. ", content):
113 date_pos = content.find('...') + 4
114 date_string = content[0:date_pos - 5]
115
116 # calculate datetime
117 published_date = datetime.now() - timedelta(days=int(re.match(r'\d+', date_string).group()))
118
119 # fix content string
120 content = content[date_pos:]
121
122 if published_date:
123 # append result
124 results.append({'url': url,
125 'title': title,
126 'content': content,
127 'publishedDate': published_date})
128 else:
129 # append result
130 results.append({'url': url,
131 'title': title,
132 'content': content})
133
134 # return results
135 return results
136
137
138 # get supported languages from their site
139 def _fetch_supported_languages(resp):
140 # startpage's language selector is a mess
141 # each option has a displayed name and a value, either of which may represent the language name
142 # in the native script, the language name in English, an English transliteration of the native name,
143 # the English name of the writing script used by the language, or occasionally something else entirely.
144
145 # this cases are so special they need to be hardcoded, a couple of them are mispellings
146 language_names = {
147 'english_uk': 'en-GB',
148 'fantizhengwen': ['zh-TW', 'zh-HK'],
149 'hangul': 'ko',
150 'malayam': 'ml',
151 'norsk': 'nb',
152 'sinhalese': 'si',
153 'sudanese': 'su'
154 }
155
156 # get the English name of every language known by babel
157 language_names.update({name.lower(): lang_code for lang_code, name in Locale('en')._data['languages'].items()})
158
159 # get the native name of every language known by babel
160 for lang_code in filter(lambda lang_code: lang_code.find('_') == -1, locale_identifiers()):
161 native_name = Locale(lang_code).get_language_name().lower()
162 # add native name exactly as it is
163 language_names[native_name] = lang_code
164
165 # add "normalized" language name (i.e. français becomes francais and español becomes espanol)
166 unaccented_name = ''.join(filter(lambda c: not combining(c), normalize('NFKD', native_name)))
167 if len(unaccented_name) == len(unaccented_name.encode()):
168 # add only if result is ascii (otherwise "normalization" didn't work)
169 language_names[unaccented_name] = lang_code
170
171 dom = html.fromstring(resp.text)
172 sp_lang_names = []
173 for option in dom.xpath('//form[@id="settings-form"]//select[@name="language"]/option'):
174 sp_lang_names.append((option.get('value'), extract_text(option).lower()))
175
176 supported_languages = {}
177 for sp_option_value, sp_option_text in sp_lang_names:
178 lang_code = language_names.get(sp_option_value) or language_names.get(sp_option_text)
179 if isinstance(lang_code, str):
180 supported_languages[lang_code] = {'alias': sp_option_value}
181 elif isinstance(lang_code, list):
182 for lc in lang_code:
183 supported_languages[lc] = {'alias': sp_option_value}
184 else:
185 print('Unknown language option in Startpage: {} ({})'.format(sp_option_value, sp_option_text))
186
187 return supported_languages
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/startpage.py b/searx/engines/startpage.py
--- a/searx/engines/startpage.py
+++ b/searx/engines/startpage.py
@@ -37,7 +37,7 @@
# ads xpath //div[@id="results"]/div[@id="sponsored"]//div[@class="result"]
# not ads: div[@class="result"] are the direct childs of div[@id="results"]
results_xpath = '//div[@class="w-gl__result__main"]'
-link_xpath = './/a[@class="w-gl__result-url result-link"]'
+link_xpath = './/a[@class="w-gl__result-title result-link"]'
content_xpath = './/p[@class="w-gl__description"]'
| {"golden_diff": "diff --git a/searx/engines/startpage.py b/searx/engines/startpage.py\n--- a/searx/engines/startpage.py\n+++ b/searx/engines/startpage.py\n@@ -37,7 +37,7 @@\n # ads xpath //div[@id=\"results\"]/div[@id=\"sponsored\"]//div[@class=\"result\"]\n # not ads: div[@class=\"result\"] are the direct childs of div[@id=\"results\"]\n results_xpath = '//div[@class=\"w-gl__result__main\"]'\n-link_xpath = './/a[@class=\"w-gl__result-url result-link\"]'\n+link_xpath = './/a[@class=\"w-gl__result-title result-link\"]'\n content_xpath = './/p[@class=\"w-gl__description\"]'\n", "issue": "Startpage: The title result is showing the url\n\r\nJust updated to the newest version. 0.18.0. The title in StartPage result is showing the url rather than the page title. Same happens to other public instance.\n\n_Originally posted by @lucky13820 in https://github.com/searx/searx/pull/2385#issuecomment-746927618_\n", "before_files": [{"content": "# Startpage (Web)\n#\n# @website https://startpage.com\n# @provide-api no (nothing found)\n#\n# @using-api no\n# @results HTML\n# @stable no (HTML can change)\n# @parse url, title, content\n#\n# @todo paging\n\nfrom lxml import html\nfrom dateutil import parser\nfrom datetime import datetime, timedelta\nimport re\nfrom unicodedata import normalize, combining\nfrom babel import Locale\nfrom babel.localedata import locale_identifiers\nfrom searx.utils import extract_text, eval_xpath, match_language\n\n# engine dependent config\ncategories = ['general']\n# there is a mechanism to block \"bot\" search\n# (probably the parameter qid), require\n# storing of qid's between mulitble search-calls\n\npaging = True\nlanguage_support = True\nsupported_languages_url = 'https://www.startpage.com/do/settings'\n\n# search-url\nbase_url = 'https://startpage.com/'\nsearch_url = base_url + 'do/search'\n\n# specific xpath variables\n# ads xpath //div[@id=\"results\"]/div[@id=\"sponsored\"]//div[@class=\"result\"]\n# not ads: div[@class=\"result\"] are the direct childs of div[@id=\"results\"]\nresults_xpath = '//div[@class=\"w-gl__result__main\"]'\nlink_xpath = './/a[@class=\"w-gl__result-url result-link\"]'\ncontent_xpath = './/p[@class=\"w-gl__description\"]'\n\n\n# do search-request\ndef request(query, params):\n\n params['url'] = search_url\n params['method'] = 'POST'\n params['data'] = {\n 'query': query,\n 'page': params['pageno'],\n 'cat': 'web',\n 'cmd': 'process_search',\n 'engine0': 'v1all',\n }\n\n # set language if specified\n if params['language'] != 'all':\n lang_code = match_language(params['language'], supported_languages, fallback=None)\n if lang_code:\n language_name = supported_languages[lang_code]['alias']\n params['data']['language'] = language_name\n params['data']['lui'] = language_name\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n # parse results\n for result in eval_xpath(dom, results_xpath):\n links = eval_xpath(result, link_xpath)\n if not links:\n continue\n link = links[0]\n url = link.attrib.get('href')\n\n # block google-ad url's\n if re.match(r\"^http(s|)://(www\\.)?google\\.[a-z]+/aclk.*$\", url):\n continue\n\n # block startpage search url's\n if re.match(r\"^http(s|)://(www\\.)?startpage\\.com/do/search\\?.*$\", url):\n continue\n\n title = extract_text(link)\n\n if eval_xpath(result, content_xpath):\n content = extract_text(eval_xpath(result, content_xpath))\n else:\n content = ''\n\n published_date = None\n\n # check if search result starts with something 
like: \"2 Sep 2014 ... \"\n if re.match(r\"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \\.\\.\\. \", content):\n date_pos = content.find('...') + 4\n date_string = content[0:date_pos - 5]\n # fix content string\n content = content[date_pos:]\n\n try:\n published_date = parser.parse(date_string, dayfirst=True)\n except ValueError:\n pass\n\n # check if search result starts with something like: \"5 days ago ... \"\n elif re.match(r\"^[0-9]+ days? ago \\.\\.\\. \", content):\n date_pos = content.find('...') + 4\n date_string = content[0:date_pos - 5]\n\n # calculate datetime\n published_date = datetime.now() - timedelta(days=int(re.match(r'\\d+', date_string).group()))\n\n # fix content string\n content = content[date_pos:]\n\n if published_date:\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content,\n 'publishedDate': published_date})\n else:\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content})\n\n # return results\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n # startpage's language selector is a mess\n # each option has a displayed name and a value, either of which may represent the language name\n # in the native script, the language name in English, an English transliteration of the native name,\n # the English name of the writing script used by the language, or occasionally something else entirely.\n\n # this cases are so special they need to be hardcoded, a couple of them are mispellings\n language_names = {\n 'english_uk': 'en-GB',\n 'fantizhengwen': ['zh-TW', 'zh-HK'],\n 'hangul': 'ko',\n 'malayam': 'ml',\n 'norsk': 'nb',\n 'sinhalese': 'si',\n 'sudanese': 'su'\n }\n\n # get the English name of every language known by babel\n language_names.update({name.lower(): lang_code for lang_code, name in Locale('en')._data['languages'].items()})\n\n # get the native name of every language known by babel\n for lang_code in filter(lambda lang_code: lang_code.find('_') == -1, locale_identifiers()):\n native_name = Locale(lang_code).get_language_name().lower()\n # add native name exactly as it is\n language_names[native_name] = lang_code\n\n # add \"normalized\" language name (i.e. 
fran\u00e7ais becomes francais and espa\u00f1ol becomes espanol)\n unaccented_name = ''.join(filter(lambda c: not combining(c), normalize('NFKD', native_name)))\n if len(unaccented_name) == len(unaccented_name.encode()):\n # add only if result is ascii (otherwise \"normalization\" didn't work)\n language_names[unaccented_name] = lang_code\n\n dom = html.fromstring(resp.text)\n sp_lang_names = []\n for option in dom.xpath('//form[@id=\"settings-form\"]//select[@name=\"language\"]/option'):\n sp_lang_names.append((option.get('value'), extract_text(option).lower()))\n\n supported_languages = {}\n for sp_option_value, sp_option_text in sp_lang_names:\n lang_code = language_names.get(sp_option_value) or language_names.get(sp_option_text)\n if isinstance(lang_code, str):\n supported_languages[lang_code] = {'alias': sp_option_value}\n elif isinstance(lang_code, list):\n for lc in lang_code:\n supported_languages[lc] = {'alias': sp_option_value}\n else:\n print('Unknown language option in Startpage: {} ({})'.format(sp_option_value, sp_option_text))\n\n return supported_languages\n", "path": "searx/engines/startpage.py"}], "after_files": [{"content": "# Startpage (Web)\n#\n# @website https://startpage.com\n# @provide-api no (nothing found)\n#\n# @using-api no\n# @results HTML\n# @stable no (HTML can change)\n# @parse url, title, content\n#\n# @todo paging\n\nfrom lxml import html\nfrom dateutil import parser\nfrom datetime import datetime, timedelta\nimport re\nfrom unicodedata import normalize, combining\nfrom babel import Locale\nfrom babel.localedata import locale_identifiers\nfrom searx.utils import extract_text, eval_xpath, match_language\n\n# engine dependent config\ncategories = ['general']\n# there is a mechanism to block \"bot\" search\n# (probably the parameter qid), require\n# storing of qid's between mulitble search-calls\n\npaging = True\nlanguage_support = True\nsupported_languages_url = 'https://www.startpage.com/do/settings'\n\n# search-url\nbase_url = 'https://startpage.com/'\nsearch_url = base_url + 'do/search'\n\n# specific xpath variables\n# ads xpath //div[@id=\"results\"]/div[@id=\"sponsored\"]//div[@class=\"result\"]\n# not ads: div[@class=\"result\"] are the direct childs of div[@id=\"results\"]\nresults_xpath = '//div[@class=\"w-gl__result__main\"]'\nlink_xpath = './/a[@class=\"w-gl__result-title result-link\"]'\ncontent_xpath = './/p[@class=\"w-gl__description\"]'\n\n\n# do search-request\ndef request(query, params):\n\n params['url'] = search_url\n params['method'] = 'POST'\n params['data'] = {\n 'query': query,\n 'page': params['pageno'],\n 'cat': 'web',\n 'cmd': 'process_search',\n 'engine0': 'v1all',\n }\n\n # set language if specified\n if params['language'] != 'all':\n lang_code = match_language(params['language'], supported_languages, fallback=None)\n if lang_code:\n language_name = supported_languages[lang_code]['alias']\n params['data']['language'] = language_name\n params['data']['lui'] = language_name\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n # parse results\n for result in eval_xpath(dom, results_xpath):\n links = eval_xpath(result, link_xpath)\n if not links:\n continue\n link = links[0]\n url = link.attrib.get('href')\n\n # block google-ad url's\n if re.match(r\"^http(s|)://(www\\.)?google\\.[a-z]+/aclk.*$\", url):\n continue\n\n # block startpage search url's\n if re.match(r\"^http(s|)://(www\\.)?startpage\\.com/do/search\\?.*$\", url):\n continue\n\n title = 
extract_text(link)\n\n if eval_xpath(result, content_xpath):\n content = extract_text(eval_xpath(result, content_xpath))\n else:\n content = ''\n\n published_date = None\n\n # check if search result starts with something like: \"2 Sep 2014 ... \"\n if re.match(r\"^([1-9]|[1-2][0-9]|3[0-1]) [A-Z][a-z]{2} [0-9]{4} \\.\\.\\. \", content):\n date_pos = content.find('...') + 4\n date_string = content[0:date_pos - 5]\n # fix content string\n content = content[date_pos:]\n\n try:\n published_date = parser.parse(date_string, dayfirst=True)\n except ValueError:\n pass\n\n # check if search result starts with something like: \"5 days ago ... \"\n elif re.match(r\"^[0-9]+ days? ago \\.\\.\\. \", content):\n date_pos = content.find('...') + 4\n date_string = content[0:date_pos - 5]\n\n # calculate datetime\n published_date = datetime.now() - timedelta(days=int(re.match(r'\\d+', date_string).group()))\n\n # fix content string\n content = content[date_pos:]\n\n if published_date:\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content,\n 'publishedDate': published_date})\n else:\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content})\n\n # return results\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n # startpage's language selector is a mess\n # each option has a displayed name and a value, either of which may represent the language name\n # in the native script, the language name in English, an English transliteration of the native name,\n # the English name of the writing script used by the language, or occasionally something else entirely.\n\n # this cases are so special they need to be hardcoded, a couple of them are mispellings\n language_names = {\n 'english_uk': 'en-GB',\n 'fantizhengwen': ['zh-TW', 'zh-HK'],\n 'hangul': 'ko',\n 'malayam': 'ml',\n 'norsk': 'nb',\n 'sinhalese': 'si',\n 'sudanese': 'su'\n }\n\n # get the English name of every language known by babel\n language_names.update({name.lower(): lang_code for lang_code, name in Locale('en')._data['languages'].items()})\n\n # get the native name of every language known by babel\n for lang_code in filter(lambda lang_code: lang_code.find('_') == -1, locale_identifiers()):\n native_name = Locale(lang_code).get_language_name().lower()\n # add native name exactly as it is\n language_names[native_name] = lang_code\n\n # add \"normalized\" language name (i.e. 
fran\u00e7ais becomes francais and espa\u00f1ol becomes espanol)\n unaccented_name = ''.join(filter(lambda c: not combining(c), normalize('NFKD', native_name)))\n if len(unaccented_name) == len(unaccented_name.encode()):\n # add only if result is ascii (otherwise \"normalization\" didn't work)\n language_names[unaccented_name] = lang_code\n\n dom = html.fromstring(resp.text)\n sp_lang_names = []\n for option in dom.xpath('//form[@id=\"settings-form\"]//select[@name=\"language\"]/option'):\n sp_lang_names.append((option.get('value'), extract_text(option).lower()))\n\n supported_languages = {}\n for sp_option_value, sp_option_text in sp_lang_names:\n lang_code = language_names.get(sp_option_value) or language_names.get(sp_option_text)\n if isinstance(lang_code, str):\n supported_languages[lang_code] = {'alias': sp_option_value}\n elif isinstance(lang_code, list):\n for lc in lang_code:\n supported_languages[lc] = {'alias': sp_option_value}\n else:\n print('Unknown language option in Startpage: {} ({})'.format(sp_option_value, sp_option_text))\n\n return supported_languages\n", "path": "searx/engines/startpage.py"}]} | 2,481 | 173 |
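
Editorial note on the record above (not part of the dataset row): the one-line fix changes which anchor the engine reads the title from, `w-gl__result-url` to `w-gl__result-title`, so `extract_text(link)` returns the page title instead of the displayed URL. Selectors like this break whenever Startpage changes its markup; a small regression test in the spirit of the fix could look like the sketch below. The fixture HTML is invented for illustration and is not taken from the searx test suite.

```python
from lxml import html

# Hypothetical result block mimicking the class names referenced in the diff.
FIXTURE = """
<div class="w-gl__result__main">
  <a class="w-gl__result-title result-link" href="https://example.org">Example Domain</a>
  <a class="w-gl__result-url result-link" href="https://example.org">https://example.org</a>
  <p class="w-gl__description">Example description.</p>
</div>
"""


def test_title_xpath_selects_title_anchor():
    dom = html.fromstring(FIXTURE)
    links = dom.xpath('.//a[@class="w-gl__result-title result-link"]')
    assert links, "title anchor not found"
    assert links[0].text_content() == "Example Domain"
```
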
gh_patches_debug_14860 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-963 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
W2030 Default value required on conditionally included property
*cfn-lint version: 0.21.3*
CloudFormation provides the AWS::NoValue pseudo-parameter, which allows for a property to be included based on a given Condition. However, cfn-lint will still validate the potential value provided for the property, even if it will not actually be used in the deployment.
Example template:
```yaml
Parameters:
Retention:
Type: Number
Description: Retention in days for the log group (-1 for no retention)
Default: -1
Conditions:
IsRetention:
!Not [!Equals [!Ref 'Retention', '-1']]
Resources:
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
LogGroupName: 'some-log-group'
RetentionInDays: !If [IsRetention, !Ref Retention, !Ref 'AWS::NoValue']
```
This template allows the user to specify the retention on a log group, or use the number -1 if they wish to have unlimited retention. This is achieved via a Condition as well as an If block that conditionally includes the property.
This leads to the following linter output:
```
cfn-lint --template template.yaml
W2030 You must specify a valid Default value for Retention (-1).
Valid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']
cloudformation/template.yaml:5:5
```
This can of course be avoided by disabling this specific check in the template Metadata block. Unfortunately it cannot be disabled in the resource Metadata, as the validation error happens on the Parameter:
```yaml
Metadata:
cfn-lint:
config:
ignore_checks:
- W2030
```
This might be a difficult situation to account for, since it would require the Condition to be evaluated to determine whether the property itself should even be checked.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/parameters/AllowedValue.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import six
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21 from cfnlint.helpers import RESOURCE_SPECS
22
23
24 class AllowedValue(CloudFormationLintRule):
25 """Check if parameters have a valid value"""
26 id = 'W2030'
27 shortdesc = 'Check if parameters have a valid value'
28 description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'
29 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
30 tags = ['resources', 'property', 'allowed value']
31
32 def initialize(self, cfn):
33 """Initialize the rule"""
34 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
35 self.resource_property_types.append(resource_type_spec)
36 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
37 self.resource_sub_property_types.append(property_type_spec)
38
39 def check_value_ref(self, value, **kwargs):
40 """Check Ref"""
41 matches = []
42
43 allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
44 cfn = kwargs.get('cfn')
45
46 if allowed_value_specs:
47 if value in cfn.template.get('Parameters', {}):
48 param = cfn.template.get('Parameters').get(value, {})
49 parameter_values = param.get('AllowedValues')
50 default_value = param.get('Default')
51 parameter_type = param.get('Type')
52 if isinstance(parameter_type, six.string_types):
53 if ((not parameter_type.startswith('List<')) and
54 (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and
55 parameter_type not in ['CommaDelimitedList', 'List<String>']):
56 # Check Allowed Values
57 if parameter_values:
58 for index, allowed_value in enumerate(parameter_values):
59 if str(allowed_value) not in allowed_value_specs:
60 param_path = ['Parameters', value, 'AllowedValues', index]
61 message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
62 matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
63 if default_value:
64 # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
65 if str(default_value) not in allowed_value_specs:
66 param_path = ['Parameters', value, 'Default']
67 message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
68 matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
69
70 return matches
71
72 def check(self, cfn, properties, value_specs, property_specs, path):
73 """Check itself"""
74 matches = list()
75 for p_value, p_path in properties.items_safe(path[:]):
76 for prop in p_value:
77 if prop in value_specs:
78 value = value_specs.get(prop).get('Value', {})
79 if value:
80 value_type = value.get('ValueType', '')
81 property_type = property_specs.get('Properties').get(prop).get('Type')
82 matches.extend(
83 cfn.check_value(
84 p_value, prop, p_path,
85 check_ref=self.check_value_ref,
86 value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
87 cfn=cfn, property_type=property_type, property_name=prop
88 )
89 )
90
91 return matches
92
93 def match_resource_sub_properties(self, properties, property_type, path, cfn):
94 """Match for sub properties"""
95 matches = list()
96
97 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
98 property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
99 matches.extend(self.check(cfn, properties, specs, property_specs, path))
100
101 return matches
102
103 def match_resource_properties(self, properties, resource_type, path, cfn):
104 """Check CloudFormation Properties"""
105 matches = list()
106
107 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
108 resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
109 matches.extend(self.check(cfn, properties, specs, resource_specs, path))
110
111 return matches
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py
--- a/src/cfnlint/rules/parameters/AllowedValue.py
+++ b/src/cfnlint/rules/parameters/AllowedValue.py
@@ -36,10 +36,14 @@
for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
self.resource_sub_property_types.append(property_type_spec)
- def check_value_ref(self, value, **kwargs):
+ def check_value_ref(self, value, path, **kwargs):
"""Check Ref"""
matches = []
+ if 'Fn::If' in path:
+ self.logger.debug('Not able to guarentee that the default value hasn\'t been conditioned out')
+ return matches
+
allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
cfn = kwargs.get('cfn')
| {"golden_diff": "diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py\n--- a/src/cfnlint/rules/parameters/AllowedValue.py\n+++ b/src/cfnlint/rules/parameters/AllowedValue.py\n@@ -36,10 +36,14 @@\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n \n- def check_value_ref(self, value, **kwargs):\n+ def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n \n+ if 'Fn::If' in path:\n+ self.logger.debug('Not able to guarentee that the default value hasn\\'t been conditioned out')\n+ return matches\n+\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n", "issue": "W2030 Default value required on conditionally included property\n*cfn-lint version: 0.21.3*\r\n\r\nCloudFormation provides the AWS::NoValue pseudo-parameter, which allows for a property to be included based on a given Condition. However, cfn-lint will still validate the potential value provided for the property, even if it will not actually be used in the deployment.\r\n\r\nExample template:\r\n\r\n```yaml\r\nParameters:\r\n Retention:\r\n Type: Number\r\n Description: Retention in days for the log group (-1 for no retention)\r\n Default: -1\r\nConditions:\r\n IsRetention: \r\n !Not [!Equals [!Ref 'Retention', '-1']]\r\nResources:\r\n LogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties:\r\n LogGroupName: 'some-log-group'\r\n RetentionInDays: !If [IsRetention, !Ref Retention, !Ref 'AWS::NoValue']\r\n```\r\n\r\nThis template allows the user to specify the retention on a log group, or use the number -1 if they wish to have unlimited retention. This is achieved via a Condition as well as an If block that conditionally includes the property.\r\n\r\nThis leads to the following linter output:\r\n\r\n```\r\ncfn-lint --template template.yaml\r\nW2030 You must specify a valid Default value for Retention (-1). \r\nValid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']\r\ncloudformation/template.yaml:5:5\r\n```\r\n\r\nThis can of course be avoided by disabling this specific check in the template Metadata block. Unfortunately it cannot be disabled in the resource Metadata, as the validation error happens on the Parameter:\r\n\r\n```yaml\r\nMetadata:\r\n cfn-lint:\r\n config:\r\n ignore_checks:\r\n - W2030\r\n```\r\n\r\nThis might be a difficult situation to account for, since it would require the Condition to be evaluated to determine whether the property itself should even be checked.\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList', 'List<String>']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n 
\"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. 
The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n if 'Fn::If' in path:\n self.logger.debug('Not able to guarentee that the default value hasn\\'t been conditioned out')\n return matches\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList', 'List<String>']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs 
= RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}]} | 2,170 | 210 |
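
Editorial note on the record above (not part of the dataset row): the accepted patch passes the property `path` into `check_value_ref` and returns early when `'Fn::If'` appears in that path, because the linter cannot know which branch of the condition will survive (the value may resolve to `AWS::NoValue`). A stripped-down sketch of that guard, using hypothetical helper structures rather than the real cfn-lint objects, is:

```python
def check_default_against_allowed(ref_name, path, allowed_value_specs, parameters):
    """Conceptual version of the W2030 guard added in the patch above."""
    if "Fn::If" in path:
        # The Ref sits under a condition, so the parameter's default value
        # may never reach this property; skip the check entirely.
        return []

    matches = []
    param = parameters.get(ref_name, {})
    default = param.get("Default")
    if allowed_value_specs and default is not None:
        if str(default) not in allowed_value_specs:
            matches.append(
                f"Default {default!r} of parameter {ref_name!r} "
                f"is not one of {sorted(allowed_value_specs)}"
            )
    return matches


# Example: a Ref reached through Fn::If is skipped, a direct Ref is checked.
params = {"Retention": {"Type": "Number", "Default": -1}}
allowed = {"1", "3", "5", "7", "14", "30"}
path_conditional = ["Resources", "LogGroup", "Properties", "RetentionInDays", "Fn::If", 1]
path_direct = ["Resources", "LogGroup", "Properties", "RetentionInDays"]

print(check_default_against_allowed("Retention", path_conditional, allowed, params))  # []
print(check_default_against_allowed("Retention", path_direct, allowed, params))       # one finding
```
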
gh_patches_debug_36592 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable input normalization in SemanticSegmentationData module
## 🚀 Feature
Add the possibility to normalize Input images in SemanticSegmentationData module
### Motivation
Enable effortless normalization, as already implemented by ImageClassificationData: optionally configurable by doing:
```python
dm = SemanticSegmentationData.from_folders(
# ...
args_transforms=dict(mean=mean,std=std)
)
```
### Pitch
Change [/flash/image/segmentation/input_transform.py:43](https://github.com/Lightning-AI/lightning-flash/blob/master/flash/image/segmentation/input_transform.py#L43)
```python
@dataclass
class SemanticSegmentationInputTransform(InputTransform):
image_size: Tuple[int, int] = (128, 128)
def train_per_sample_transform(self) -> Callable:
return ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(
K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
),
)
def per_sample_transform(self) -> Callable:
return ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
)
def predict_per_sample_transform(self) -> Callable:
return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))
```
into this
```python
@dataclass
class SemanticSegmentationInputTransform(InputTransform):
image_size: Tuple[int, int] = (128, 128)
mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)
def train_per_sample_transform(self) -> Callable:
return T.Compose(
[
ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(
K.geometry.Resize(self.image_size, interpolation="nearest"),
)
),
ApplyToKeys(
[DataKeys.INPUT],
K.augmentation.Normalize(mean=mean, std=std)
),
]
)
def per_sample_transform(self) -> Callable:
return T.Compose(
[
ApplyToKeys(
[DataKeys.INPUT, DataKeys.TARGET],
KorniaParallelTransforms(
K.geometry.Resize(self.image_size, interpolation="nearest"),
)
),
ApplyToKeys(
[DataKeys.INPUT],
K.augmentation.Normalize(mean=mean, std=std)
),
]
)
def predict_per_sample_transform(self) -> Callable:
return ApplyToKeys(
DataKeys.INPUT,
K.geometry.Resize(self.image_size, interpolation="nearest"),
K.augmentation.Normalize(mean=mean, std=std)
)
```
### Alternatives
The alternative is to write a custom InputTransform object every time.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash/image/segmentation/input_transform.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from dataclasses import dataclass
15 from typing import Any, Callable, Dict, Tuple
16
17 from flash.core.data.io.input import DataKeys
18 from flash.core.data.io.input_transform import InputTransform
19 from flash.core.data.transforms import ApplyToKeys, kornia_collate, KorniaParallelTransforms
20 from flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE
21
22 if _KORNIA_AVAILABLE:
23 import kornia as K
24
25 if _TORCHVISION_AVAILABLE:
26 from torchvision import transforms as T
27
28
29 def prepare_target(batch: Dict[str, Any]) -> Dict[str, Any]:
30 """Convert the target mask to long and remove the channel dimension."""
31 if DataKeys.TARGET in batch:
32 batch[DataKeys.TARGET] = batch[DataKeys.TARGET].long().squeeze(1)
33 return batch
34
35
36 def remove_extra_dimensions(batch: Dict[str, Any]):
37 if isinstance(batch[DataKeys.INPUT], list):
38 assert len(batch[DataKeys.INPUT]) == 1
39 batch[DataKeys.INPUT] = batch[DataKeys.INPUT][0]
40 return batch
41
42
43 @dataclass
44 class SemanticSegmentationInputTransform(InputTransform):
45
46 image_size: Tuple[int, int] = (128, 128)
47
48 def train_per_sample_transform(self) -> Callable:
49 return ApplyToKeys(
50 [DataKeys.INPUT, DataKeys.TARGET],
51 KorniaParallelTransforms(
52 K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
53 ),
54 )
55
56 def per_sample_transform(self) -> Callable:
57 return ApplyToKeys(
58 [DataKeys.INPUT, DataKeys.TARGET],
59 KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
60 )
61
62 def predict_per_sample_transform(self) -> Callable:
63 return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))
64
65 def collate(self) -> Callable:
66 return kornia_collate
67
68 def per_batch_transform(self) -> Callable:
69 return T.Compose([prepare_target, remove_extra_dimensions])
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flash/image/segmentation/input_transform.py b/flash/image/segmentation/input_transform.py
--- a/flash/image/segmentation/input_transform.py
+++ b/flash/image/segmentation/input_transform.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
-from typing import Any, Callable, Dict, Tuple
+from typing import Any, Callable, Dict, Tuple, Union
from flash.core.data.io.input import DataKeys
from flash.core.data.io.input_transform import InputTransform
@@ -44,23 +44,43 @@
class SemanticSegmentationInputTransform(InputTransform):
image_size: Tuple[int, int] = (128, 128)
+ mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
+ std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)
def train_per_sample_transform(self) -> Callable:
- return ApplyToKeys(
- [DataKeys.INPUT, DataKeys.TARGET],
- KorniaParallelTransforms(
- K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
- ),
+ return T.Compose(
+ [
+ ApplyToKeys(
+ [DataKeys.INPUT, DataKeys.TARGET],
+ KorniaParallelTransforms(
+ K.geometry.Resize(self.image_size, interpolation="nearest"),
+ K.augmentation.RandomHorizontalFlip(p=0.5),
+ ),
+ ),
+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),
+ ]
)
def per_sample_transform(self) -> Callable:
- return ApplyToKeys(
- [DataKeys.INPUT, DataKeys.TARGET],
- KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
+ return T.Compose(
+ [
+ ApplyToKeys(
+ [DataKeys.INPUT, DataKeys.TARGET],
+ KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
+ ),
+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),
+ ]
)
def predict_per_sample_transform(self) -> Callable:
- return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))
+ return ApplyToKeys(
+ DataKeys.INPUT,
+ K.geometry.Resize(
+ self.image_size,
+ interpolation="nearest",
+ ),
+ K.augmentation.Normalize(mean=self.mean, std=self.std),
+ )
def collate(self) -> Callable:
return kornia_collate
| {"golden_diff": "diff --git a/flash/image/segmentation/input_transform.py b/flash/image/segmentation/input_transform.py\n--- a/flash/image/segmentation/input_transform.py\n+++ b/flash/image/segmentation/input_transform.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n from dataclasses import dataclass\n-from typing import Any, Callable, Dict, Tuple\n+from typing import Any, Callable, Dict, Tuple, Union\n \n from flash.core.data.io.input import DataKeys\n from flash.core.data.io.input_transform import InputTransform\n@@ -44,23 +44,43 @@\n class SemanticSegmentationInputTransform(InputTransform):\n \n image_size: Tuple[int, int] = (128, 128)\n+ mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\n+ std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\n \n def train_per_sample_transform(self) -> Callable:\n- return ApplyToKeys(\n- [DataKeys.INPUT, DataKeys.TARGET],\n- KorniaParallelTransforms(\n- K.geometry.Resize(self.image_size, interpolation=\"nearest\"), K.augmentation.RandomHorizontalFlip(p=0.5)\n- ),\n+ return T.Compose(\n+ [\n+ ApplyToKeys(\n+ [DataKeys.INPUT, DataKeys.TARGET],\n+ KorniaParallelTransforms(\n+ K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\n+ K.augmentation.RandomHorizontalFlip(p=0.5),\n+ ),\n+ ),\n+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),\n+ ]\n )\n \n def per_sample_transform(self) -> Callable:\n- return ApplyToKeys(\n- [DataKeys.INPUT, DataKeys.TARGET],\n- KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n+ return T.Compose(\n+ [\n+ ApplyToKeys(\n+ [DataKeys.INPUT, DataKeys.TARGET],\n+ KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n+ ),\n+ ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),\n+ ]\n )\n \n def predict_per_sample_transform(self) -> Callable:\n- return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation=\"nearest\"))\n+ return ApplyToKeys(\n+ DataKeys.INPUT,\n+ K.geometry.Resize(\n+ self.image_size,\n+ interpolation=\"nearest\",\n+ ),\n+ K.augmentation.Normalize(mean=self.mean, std=self.std),\n+ )\n \n def collate(self) -> Callable:\n return kornia_collate\n", "issue": "Enable input normalization in SemanticSegmentationData module\n## \ud83d\ude80 Feature\r\nAdd the possibility to normalize Input images in SemanticSegmentationData module\r\n\r\n### Motivation\r\nEnable effortless normalization, as already implemented by ImageClassificationData: optionally configurable by doing: \r\n```python\r\ndm = SemanticSegmentationData.from_folders(\r\n # ...\r\n args_transforms=dict(mean=mean,std=std)\r\n)\r\n```\r\n\r\n### Pitch\r\nChange [/flash/image/segmentation/input_transform.py:43](https://github.com/Lightning-AI/lightning-flash/blob/master/flash/image/segmentation/input_transform.py#L43)\r\n\r\n```python\r\n\r\n@dataclass\r\nclass SemanticSegmentationInputTransform(InputTransform):\r\n\r\n image_size: Tuple[int, int] = (128, 128)\r\n\r\n def train_per_sample_transform(self) -> Callable:\r\n return ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n KorniaParallelTransforms(\r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"), K.augmentation.RandomHorizontalFlip(p=0.5)\r\n ),\r\n )\r\n\r\n def per_sample_transform(self) -> Callable:\r\n return ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n 
KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\r\n )\r\n\r\n def predict_per_sample_transform(self) -> Callable:\r\n return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation=\"nearest\"))\r\n```\r\n\r\ninto this\r\n\r\n```python\r\n@dataclass\r\nclass SemanticSegmentationInputTransform(InputTransform):\r\n\r\n image_size: Tuple[int, int] = (128, 128)\r\n mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\r\n std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\r\n\r\n\r\n def train_per_sample_transform(self) -> Callable:\r\n return T.Compose(\r\n [\r\n ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n KorniaParallelTransforms(\r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\r\n )\r\n ),\r\n ApplyToKeys(\r\n [DataKeys.INPUT],\r\n K.augmentation.Normalize(mean=mean, std=std)\r\n \r\n ),\r\n ]\r\n )\r\n\r\n def per_sample_transform(self) -> Callable:\r\n return T.Compose(\r\n [\r\n ApplyToKeys(\r\n [DataKeys.INPUT, DataKeys.TARGET],\r\n KorniaParallelTransforms(\r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\r\n )\r\n ),\r\n ApplyToKeys(\r\n [DataKeys.INPUT],\r\n K.augmentation.Normalize(mean=mean, std=std)\r\n \r\n ),\r\n ]\r\n )\r\n\r\n def predict_per_sample_transform(self) -> Callable: \r\n return ApplyToKeys(\r\n DataKeys.INPUT, \r\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"), \r\n K.augmentation.Normalize(mean=mean, std=std)\r\n )\r\n\r\n```\r\n\r\n### Alternatives\r\nThe alternative is to write a custom InputTransform object every time.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Dict, Tuple\n\nfrom flash.core.data.io.input import DataKeys\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.data.transforms import ApplyToKeys, kornia_collate, KorniaParallelTransforms\nfrom flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE\n\nif _KORNIA_AVAILABLE:\n import kornia as K\n\nif _TORCHVISION_AVAILABLE:\n from torchvision import transforms as T\n\n\ndef prepare_target(batch: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Convert the target mask to long and remove the channel dimension.\"\"\"\n if DataKeys.TARGET in batch:\n batch[DataKeys.TARGET] = batch[DataKeys.TARGET].long().squeeze(1)\n return batch\n\n\ndef remove_extra_dimensions(batch: Dict[str, Any]):\n if isinstance(batch[DataKeys.INPUT], list):\n assert len(batch[DataKeys.INPUT]) == 1\n batch[DataKeys.INPUT] = batch[DataKeys.INPUT][0]\n return batch\n\n\n@dataclass\nclass SemanticSegmentationInputTransform(InputTransform):\n\n image_size: Tuple[int, int] = (128, 128)\n\n def train_per_sample_transform(self) -> Callable:\n return ApplyToKeys(\n [DataKeys.INPUT, DataKeys.TARGET],\n KorniaParallelTransforms(\n K.geometry.Resize(self.image_size, 
interpolation=\"nearest\"), K.augmentation.RandomHorizontalFlip(p=0.5)\n ),\n )\n\n def per_sample_transform(self) -> Callable:\n return ApplyToKeys(\n [DataKeys.INPUT, DataKeys.TARGET],\n KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n )\n\n def predict_per_sample_transform(self) -> Callable:\n return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation=\"nearest\"))\n\n def collate(self) -> Callable:\n return kornia_collate\n\n def per_batch_transform(self) -> Callable:\n return T.Compose([prepare_target, remove_extra_dimensions])\n", "path": "flash/image/segmentation/input_transform.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Dict, Tuple, Union\n\nfrom flash.core.data.io.input import DataKeys\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.data.transforms import ApplyToKeys, kornia_collate, KorniaParallelTransforms\nfrom flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE\n\nif _KORNIA_AVAILABLE:\n import kornia as K\n\nif _TORCHVISION_AVAILABLE:\n from torchvision import transforms as T\n\n\ndef prepare_target(batch: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Convert the target mask to long and remove the channel dimension.\"\"\"\n if DataKeys.TARGET in batch:\n batch[DataKeys.TARGET] = batch[DataKeys.TARGET].long().squeeze(1)\n return batch\n\n\ndef remove_extra_dimensions(batch: Dict[str, Any]):\n if isinstance(batch[DataKeys.INPUT], list):\n assert len(batch[DataKeys.INPUT]) == 1\n batch[DataKeys.INPUT] = batch[DataKeys.INPUT][0]\n return batch\n\n\n@dataclass\nclass SemanticSegmentationInputTransform(InputTransform):\n\n image_size: Tuple[int, int] = (128, 128)\n mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\n std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\n\n def train_per_sample_transform(self) -> Callable:\n return T.Compose(\n [\n ApplyToKeys(\n [DataKeys.INPUT, DataKeys.TARGET],\n KorniaParallelTransforms(\n K.geometry.Resize(self.image_size, interpolation=\"nearest\"),\n K.augmentation.RandomHorizontalFlip(p=0.5),\n ),\n ),\n ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),\n ]\n )\n\n def per_sample_transform(self) -> Callable:\n return T.Compose(\n [\n ApplyToKeys(\n [DataKeys.INPUT, DataKeys.TARGET],\n KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation=\"nearest\")),\n ),\n ApplyToKeys([DataKeys.INPUT], K.augmentation.Normalize(mean=self.mean, std=self.std)),\n ]\n )\n\n def predict_per_sample_transform(self) -> Callable:\n return ApplyToKeys(\n DataKeys.INPUT,\n K.geometry.Resize(\n self.image_size,\n interpolation=\"nearest\",\n ),\n K.augmentation.Normalize(mean=self.mean, std=self.std),\n )\n\n def collate(self) -> Callable:\n return kornia_collate\n\n def per_batch_transform(self) -> 
Callable:\n return T.Compose([prepare_target, remove_extra_dimensions])\n", "path": "flash/image/segmentation/input_transform.py"}]} | 1,654 | 643 |
gh_patches_debug_27268 | rasdani/github-patches | git_diff | fidals__shopelectro-1023
--- BEGIN ISSUE ---
Update prices separately
We update price files only after a successful catalog data update. Sometimes we struggle with the data update, but we still need to update the price files, otherwise we will get penalties from aggregators.
We should make the price files update independent of the catalog data update.
We can try these approaches:
1) Update files in a separate Celery cron task (see the sketch after this list)
2) Update files in a finally block of the update_catalog task
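
For illustration, approach 1 could look roughly like the sketch below. It reuses the helpers already defined in `shopelectro/tasks.py` and the beat layout in `shopelectro/celery.py` (both shown further down); the task name `update_prices` and the hourly schedule are assumptions, not something the issue specifies.
```python
# Sketch only — assumes the existing shopelectro/tasks.py module context
# (app, generate_price_files, generate_excel_file, collect_static).

# shopelectro/tasks.py: a standalone task so price files refresh even if update_catalog fails
@app.task(autoretry_for=(Exception,), max_retries=3, default_retry_delay=60 * 10)
def update_prices():
    return [
        generate_price_files(),
        generate_excel_file(),
        collect_static(),
    ]

# shopelectro/celery.py: run it on its own cron-style schedule
beat_schedule = {
    # ...existing 'update-catalog' and 'check-purchase' entries...
    'update-prices': {
        'task': 'shopelectro.tasks.update_prices',
        'schedule': timedelta(hours=1).total_seconds(),
    },
}
```
A matching `task_routes` entry would also be needed so the new task is routed to the existing `command` queue.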
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/tasks.py`
Content:
```
1 from contextlib import contextmanager
2
3 from django.conf import settings
4 from django.core.management import call_command
5 from selenium.common.exceptions import WebDriverException
6
7 from shopelectro import selenium
8 from shopelectro.celery import app
9 from shopelectro.report import TelegramReport
10 from shopelectro.models import CategoryPage
11 from shopelectro.management.commands._update_catalog import utils
12
13
14 @contextmanager
15 def report():
16 try:
17 yield
18 except Exception as error:
19 utils.report(str(error))
20 raise error
21
22
23 @app.task
24 def generate_price_files():
25 with report():
26 call_command('price')
27 print('Generate prices complete.')
28
29
30 @app.task
31 def generate_excel_file():
32 with report():
33 call_command('excel')
34 print('Generate excel complete.')
35
36
37 @app.task
38 def collect_static():
39 with report():
40 call_command('collectstatic', '--noinput')
41
42
43 @app.task
44 def update_catalog_command():
45 with report():
46 call_command('update_catalog')
47
48
49 @app.task
50 def update_default_templates():
51 with report():
52 call_command('update_default_templates')
53
54
55 @app.task(autoretry_for=(Exception,), max_retries=3, default_retry_delay=60*10) # Ignore PycodestyleBear (E226)
56 def update_catalog():
57 # http://docs.celeryproject.org/en/latest/userguide/canvas.html#map-starmap
58 return [
59 update_catalog_command(),
60 update_default_templates(),
61 generate_price_files(),
62 generate_excel_file(),
63 collect_static()
64 ]
65
66
67 @app.task(
68 bind=True,
69 autoretry_for=(WebDriverException, AssertionError),
70 retry_kwargs={'max_retries': settings.CHECK_PURCHASE_RETRIES},
71 )
72 def check_purchase(self):
73 try:
74 with selenium.SiteDriver(site_url=settings.BASE_URL) as driver:
75 category_page = selenium.CategoryPage(driver, CategoryPage.objects.first().slug)
76 category_page.load()
77 category_page.add_to_cart()
78
79 order_page = selenium.OrderPage(driver)
80 order_page.load()
81 order_page.fill_contacts()
82 order_page.make_order()
83
84 success_page = selenium.SuccessPage(driver)
85 assert success_page.is_success()
86 except (WebDriverException, AssertionError) as err:
87 if self.request.retries + 1 > self.max_retries:
88 # report on the last attempt
89 TelegramReport().send(f'Can\'t buy a product. Got the error: {err}')
90 raise err
91
```
Path: `shopelectro/celery.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2 from datetime import timedelta
3 import os
4
5 from celery import Celery
6 from kombu import Exchange, Queue
7
8 # set the default Django settings module for the 'celery' program.
9 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'shopelectro.settings.local')
10
11 app = Celery('shopelectro')
12
13 # Exchanges
14 default_exchange = Exchange('default', type='direct')
15 utils_exchange = Exchange('utils', type='direct')
16
17 # http://docs.celeryproject.org/en/latest/userguide/tasks.html
18 task_queues = (
19 Queue(
20 name='default',
21 exchange=default_exchange,
22 routing_key='default',
23 ),
24 Queue(
25 name='mail',
26 exchange=utils_exchange,
27 routing_key='utils.mail',
28 ),
29 Queue(
30 name='command',
31 exchange=utils_exchange,
32 routing_key='utils.command',
33 )
34 )
35
36 # http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html
37 beat_schedule = {
38 'update-catalog': {
39 'task': 'shopelectro.tasks.update_catalog',
40 'schedule': timedelta(hours=2).total_seconds(),
41 },
42 'check-purchase': {
43 'task': 'shopelectro.tasks.check_purchase',
44 'schedule': timedelta(days=1).total_seconds(),
45 },
46 }
47
48 # http://docs.celeryproject.org/en/master/userguide/routing.html
49 task_routes = {
50 'shopelectro.tasks.update_catalog': {
51 'queue': 'command',
52 'routing_key': 'utils.command',
53 'priority': 30,
54 },
55 'shopelectro.tasks.check_purchase': {
56 'queue': 'command',
57 'routing_key': 'utils.command',
58 'priority': 20,
59 },
60 'ecommerce.tasks.send_mail': {
61 'queue': 'mail',
62 'routing_key': 'utils.mail',
63 'priority': 50,
64 },
65 }
66
67 # Using a string here means the worker don't have to serialize
68 # the configuration object to child processes.
69 # - namespace='CELERY' means all celery-related configuration keys
70 # should have a `CELERY_` prefix.
71 app.config_from_object('django.conf:settings', namespace='CELERY')
72
73 # http://docs.celeryproject.org/en/latest/userguide/configuration.html
74
75 BROCKER_URL = (
76 f'amqp://{os.environ["RABBITMQ_DEFAULT_USER"]}:{os.environ["RABBITMQ_DEFAULT_PASS"]}'
77 f'@{os.environ["RABBITMQ_URL"]}:{os.environ["RABBITMQ_PORT"]}/'
78 )
79 app.conf.update(
80 broker_url=BROCKER_URL,
81 broker_heartbeat=30,
82 task_acks_late=True,
83 task_default_queue='default',
84 task_default_exchange='default',
85 task_default_routing_key='default',
86 worker_pool_restarts=True,
87 task_routes=task_routes,
88 task_queues=task_queues,
89 beat_schedule=beat_schedule,
90 worker_max_memory_per_child=200000, # after 250MB will restart
91 )
92
93 # Load task modules from all registered Django app configs.
94 app.autodiscover_tasks()
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/celery.py b/shopelectro/celery.py
--- a/shopelectro/celery.py
+++ b/shopelectro/celery.py
@@ -39,6 +39,10 @@
'task': 'shopelectro.tasks.update_catalog',
'schedule': timedelta(hours=2).total_seconds(),
},
+ 'update-prices': {
+ 'task': 'shopelectro.tasks.update_prices',
+ 'schedule': timedelta(hours=1).total_seconds(),
+ },
'check-purchase': {
'task': 'shopelectro.tasks.check_purchase',
'schedule': timedelta(days=1).total_seconds(),
@@ -52,6 +56,11 @@
'routing_key': 'utils.command',
'priority': 30,
},
+ 'shopelectro.tasks.update_prices': {
+ 'queue': 'command',
+ 'routing_key': 'utils.command',
+ 'priority': 50,
+ },
'shopelectro.tasks.check_purchase': {
'queue': 'command',
'routing_key': 'utils.command',
diff --git a/shopelectro/tasks.py b/shopelectro/tasks.py
--- a/shopelectro/tasks.py
+++ b/shopelectro/tasks.py
@@ -58,9 +58,16 @@
return [
update_catalog_command(),
update_default_templates(),
+ collect_static(),
+ ]
+
+
+@app.task(autoretry_for=(Exception,), max_retries=3, default_retry_delay=60*10)  # Ignore PycodestyleBear (E226)
+def update_prices():
+ return [
generate_price_files(),
generate_excel_file(),
- collect_static()
+ collect_static(),
]
| {"golden_diff": "diff --git a/shopelectro/celery.py b/shopelectro/celery.py\n--- a/shopelectro/celery.py\n+++ b/shopelectro/celery.py\n@@ -39,6 +39,10 @@\n 'task': 'shopelectro.tasks.update_catalog',\n 'schedule': timedelta(hours=2).total_seconds(),\n },\n+ 'update-prices': {\n+ 'task': 'shopelectro.tasks.update_prices',\n+ 'schedule': timedelta(hours=1).total_seconds(),\n+ },\n 'check-purchase': {\n 'task': 'shopelectro.tasks.check_purchase',\n 'schedule': timedelta(days=1).total_seconds(),\n@@ -52,6 +56,11 @@\n 'routing_key': 'utils.command',\n 'priority': 30,\n },\n+ 'shopelectro.tasks.update_prices': {\n+ 'queue': 'command',\n+ 'routing_key': 'utils.command',\n+ 'priority': 50,\n+ },\n 'shopelectro.tasks.check_purchase': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\ndiff --git a/shopelectro/tasks.py b/shopelectro/tasks.py\n--- a/shopelectro/tasks.py\n+++ b/shopelectro/tasks.py\n@@ -58,9 +58,16 @@\n return [\n update_catalog_command(),\n update_default_templates(),\n+ collect_static(),\n+ ]\n+\n+\[email protected](autoretry_for=(Exception,), max_retries=3, default_retry_delay=60*10) # Ignore PycodestyleBear (E226)\n+def update_prices():\n+ return [\n generate_price_files(),\n generate_excel_file(),\n- collect_static()\n+ collect_static(),\n ]\n", "issue": "Update prices separately\nWe update price files only after successful catalog data update. Sometimes we have struggle with data update, but we still need to update price files, otherwise we will get penalties from aggregators.\r\n\r\nWe should make the price files update independent of the catalog data update.\r\nWe can try these approaches:\r\n1) Update files in separate celery cron task\r\n2) Update files in finally block of update_catalog task \n", "before_files": [{"content": "from contextlib import contextmanager\n\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom selenium.common.exceptions import WebDriverException\n\nfrom shopelectro import selenium\nfrom shopelectro.celery import app\nfrom shopelectro.report import TelegramReport\nfrom shopelectro.models import CategoryPage\nfrom shopelectro.management.commands._update_catalog import utils\n\n\n@contextmanager\ndef report():\n try:\n yield\n except Exception as error:\n utils.report(str(error))\n raise error\n\n\[email protected]\ndef generate_price_files():\n with report():\n call_command('price')\n print('Generate prices complete.')\n\n\[email protected]\ndef generate_excel_file():\n with report():\n call_command('excel')\n print('Generate excel complete.')\n\n\[email protected]\ndef collect_static():\n with report():\n call_command('collectstatic', '--noinput')\n\n\[email protected]\ndef update_catalog_command():\n with report():\n call_command('update_catalog')\n\n\[email protected]\ndef update_default_templates():\n with report():\n call_command('update_default_templates')\n\n\[email protected](autoretry_for=(Exception,), max_retries=3, default_retry_delay=60*10) # Ignore PycodestyleBear (E226)\ndef update_catalog():\n # http://docs.celeryproject.org/en/latest/userguide/canvas.html#map-starmap\n return [\n update_catalog_command(),\n update_default_templates(),\n generate_price_files(),\n generate_excel_file(),\n collect_static()\n ]\n\n\[email protected](\n bind=True,\n autoretry_for=(WebDriverException, AssertionError),\n retry_kwargs={'max_retries': settings.CHECK_PURCHASE_RETRIES},\n)\ndef check_purchase(self):\n try:\n with selenium.SiteDriver(site_url=settings.BASE_URL) as driver:\n category_page = 
selenium.CategoryPage(driver, CategoryPage.objects.first().slug)\n category_page.load()\n category_page.add_to_cart()\n\n order_page = selenium.OrderPage(driver)\n order_page.load()\n order_page.fill_contacts()\n order_page.make_order()\n\n success_page = selenium.SuccessPage(driver)\n assert success_page.is_success()\n except (WebDriverException, AssertionError) as err:\n if self.request.retries + 1 > self.max_retries:\n # report on the last attempt\n TelegramReport().send(f'Can\\'t buy a product. Got the error: {err}')\n raise err\n", "path": "shopelectro/tasks.py"}, {"content": "from __future__ import absolute_import, unicode_literals\nfrom datetime import timedelta\nimport os\n\nfrom celery import Celery\nfrom kombu import Exchange, Queue\n\n# set the default Django settings module for the 'celery' program.\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'shopelectro.settings.local')\n\napp = Celery('shopelectro')\n\n# Exchanges\ndefault_exchange = Exchange('default', type='direct')\nutils_exchange = Exchange('utils', type='direct')\n\n# http://docs.celeryproject.org/en/latest/userguide/tasks.html\ntask_queues = (\n Queue(\n name='default',\n exchange=default_exchange,\n routing_key='default',\n ),\n Queue(\n name='mail',\n exchange=utils_exchange,\n routing_key='utils.mail',\n ),\n Queue(\n name='command',\n exchange=utils_exchange,\n routing_key='utils.command',\n )\n)\n\n# http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html\nbeat_schedule = {\n 'update-catalog': {\n 'task': 'shopelectro.tasks.update_catalog',\n 'schedule': timedelta(hours=2).total_seconds(),\n },\n 'check-purchase': {\n 'task': 'shopelectro.tasks.check_purchase',\n 'schedule': timedelta(days=1).total_seconds(),\n },\n}\n\n# http://docs.celeryproject.org/en/master/userguide/routing.html\ntask_routes = {\n 'shopelectro.tasks.update_catalog': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 30,\n },\n 'shopelectro.tasks.check_purchase': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 20,\n },\n 'ecommerce.tasks.send_mail': {\n 'queue': 'mail',\n 'routing_key': 'utils.mail',\n 'priority': 50,\n },\n}\n\n# Using a string here means the worker don't have to serialize\n# the configuration object to child processes.\n# - namespace='CELERY' means all celery-related configuration keys\n# should have a `CELERY_` prefix.\napp.config_from_object('django.conf:settings', namespace='CELERY')\n\n# http://docs.celeryproject.org/en/latest/userguide/configuration.html\n\nBROCKER_URL = (\n f'amqp://{os.environ[\"RABBITMQ_DEFAULT_USER\"]}:{os.environ[\"RABBITMQ_DEFAULT_PASS\"]}'\n f'@{os.environ[\"RABBITMQ_URL\"]}:{os.environ[\"RABBITMQ_PORT\"]}/'\n)\napp.conf.update(\n broker_url=BROCKER_URL,\n broker_heartbeat=30,\n task_acks_late=True,\n task_default_queue='default',\n task_default_exchange='default',\n task_default_routing_key='default',\n worker_pool_restarts=True,\n task_routes=task_routes,\n task_queues=task_queues,\n beat_schedule=beat_schedule,\n worker_max_memory_per_child=200000, # after 250MB will restart\n)\n\n# Load task modules from all registered Django app configs.\napp.autodiscover_tasks()\n", "path": "shopelectro/celery.py"}], "after_files": [{"content": "from contextlib import contextmanager\n\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom selenium.common.exceptions import WebDriverException\n\nfrom shopelectro import selenium\nfrom shopelectro.celery import app\nfrom shopelectro.report import 
TelegramReport\nfrom shopelectro.models import CategoryPage\nfrom shopelectro.management.commands._update_catalog import utils\n\n\n@contextmanager\ndef report():\n try:\n yield\n except Exception as error:\n utils.report(str(error))\n raise error\n\n\[email protected]\ndef generate_price_files():\n with report():\n call_command('price')\n print('Generate prices complete.')\n\n\[email protected]\ndef generate_excel_file():\n with report():\n call_command('excel')\n print('Generate excel complete.')\n\n\[email protected]\ndef collect_static():\n with report():\n call_command('collectstatic', '--noinput')\n\n\[email protected]\ndef update_catalog_command():\n with report():\n call_command('update_catalog')\n\n\[email protected]\ndef update_default_templates():\n with report():\n call_command('update_default_templates')\n\n\[email protected](autoretry_for=(Exception,), max_retries=3, default_retry_delay=60*10) # Ignore PycodestyleBear (E226)\ndef update_catalog():\n # http://docs.celeryproject.org/en/latest/userguide/canvas.html#map-starmap\n return [\n update_catalog_command(),\n update_default_templates(),\n collect_static(),\n ]\n\n\[email protected](autoretry_for=(Exception,), max_retries=3, default_retry_delay=60*10) # Ignore PycodestyleBear (E226)\ndef update_prices():\n return [\n generate_price_files(),\n generate_excel_file(),\n collect_static(),\n ]\n\n\[email protected](\n bind=True,\n autoretry_for=(WebDriverException, AssertionError),\n retry_kwargs={'max_retries': settings.CHECK_PURCHASE_RETRIES},\n)\ndef check_purchase(self):\n try:\n with selenium.SiteDriver(site_url=settings.BASE_URL) as driver:\n category_page = selenium.CategoryPage(driver, CategoryPage.objects.first().slug)\n category_page.load()\n category_page.add_to_cart()\n\n order_page = selenium.OrderPage(driver)\n order_page.load()\n order_page.fill_contacts()\n order_page.make_order()\n\n success_page = selenium.SuccessPage(driver)\n assert success_page.is_success()\n except (WebDriverException, AssertionError) as err:\n if self.request.retries + 1 > self.max_retries:\n # report on the last attempt\n TelegramReport().send(f'Can\\'t buy a product. 
Got the error: {err}')\n raise err\n", "path": "shopelectro/tasks.py"}, {"content": "from __future__ import absolute_import, unicode_literals\nfrom datetime import timedelta\nimport os\n\nfrom celery import Celery\nfrom kombu import Exchange, Queue\n\n# set the default Django settings module for the 'celery' program.\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'shopelectro.settings.local')\n\napp = Celery('shopelectro')\n\n# Exchanges\ndefault_exchange = Exchange('default', type='direct')\nutils_exchange = Exchange('utils', type='direct')\n\n# http://docs.celeryproject.org/en/latest/userguide/tasks.html\ntask_queues = (\n Queue(\n name='default',\n exchange=default_exchange,\n routing_key='default',\n ),\n Queue(\n name='mail',\n exchange=utils_exchange,\n routing_key='utils.mail',\n ),\n Queue(\n name='command',\n exchange=utils_exchange,\n routing_key='utils.command',\n )\n)\n\n# http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html\nbeat_schedule = {\n 'update-catalog': {\n 'task': 'shopelectro.tasks.update_catalog',\n 'schedule': timedelta(hours=2).total_seconds(),\n },\n 'update-prices': {\n 'task': 'shopelectro.tasks.update_prices',\n 'schedule': timedelta(hours=1).total_seconds(),\n },\n 'check-purchase': {\n 'task': 'shopelectro.tasks.check_purchase',\n 'schedule': timedelta(days=1).total_seconds(),\n },\n}\n\n# http://docs.celeryproject.org/en/master/userguide/routing.html\ntask_routes = {\n 'shopelectro.tasks.update_catalog': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 30,\n },\n 'shopelectro.tasks.update_prices': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 50,\n },\n 'shopelectro.tasks.check_purchase': {\n 'queue': 'command',\n 'routing_key': 'utils.command',\n 'priority': 20,\n },\n 'ecommerce.tasks.send_mail': {\n 'queue': 'mail',\n 'routing_key': 'utils.mail',\n 'priority': 50,\n },\n}\n\n# Using a string here means the worker don't have to serialize\n# the configuration object to child processes.\n# - namespace='CELERY' means all celery-related configuration keys\n# should have a `CELERY_` prefix.\napp.config_from_object('django.conf:settings', namespace='CELERY')\n\n# http://docs.celeryproject.org/en/latest/userguide/configuration.html\n\nBROCKER_URL = (\n f'amqp://{os.environ[\"RABBITMQ_DEFAULT_USER\"]}:{os.environ[\"RABBITMQ_DEFAULT_PASS\"]}'\n f'@{os.environ[\"RABBITMQ_URL\"]}:{os.environ[\"RABBITMQ_PORT\"]}/'\n)\napp.conf.update(\n broker_url=BROCKER_URL,\n broker_heartbeat=30,\n task_acks_late=True,\n task_default_queue='default',\n task_default_exchange='default',\n task_default_routing_key='default',\n worker_pool_restarts=True,\n task_routes=task_routes,\n task_queues=task_queues,\n beat_schedule=beat_schedule,\n worker_max_memory_per_child=200000, # after 250MB will restart\n)\n\n# Load task modules from all registered Django app configs.\napp.autodiscover_tasks()\n", "path": "shopelectro/celery.py"}]} | 1,930 | 400 |
gh_patches_debug_2744 | rasdani/github-patches | git_diff | pulp__pulpcore-3381 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Export is not locking on the exported repositories
SSIA
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/viewsets/exporter.py`
Content:
```
1 from django_filters.rest_framework import filters
2
3 from drf_spectacular.utils import extend_schema
4 from rest_framework import mixins
5
6 from pulpcore.app.models import (
7 Export,
8 Exporter,
9 FilesystemExport,
10 FilesystemExporter,
11 Publication,
12 PulpExport,
13 PulpExporter,
14 RepositoryVersion,
15 )
16
17 from pulpcore.app.serializers import (
18 AsyncOperationResponseSerializer,
19 ExportSerializer,
20 ExporterSerializer,
21 FilesystemExporterSerializer,
22 FilesystemExportSerializer,
23 PulpExporterSerializer,
24 PulpExportSerializer,
25 )
26
27 from pulpcore.app.tasks.export import fs_publication_export, fs_repo_version_export, pulp_export
28
29 from pulpcore.app.viewsets import (
30 AsyncRemoveMixin,
31 AsyncUpdateMixin,
32 BaseFilterSet,
33 NamedModelViewSet,
34 )
35 from pulpcore.app.viewsets.base import NAME_FILTER_OPTIONS
36 from pulpcore.plugin.tasking import dispatch
37 from pulpcore.app.response import OperationPostponedResponse
38
39
40 class ExporterFilter(BaseFilterSet):
41 """
42 Plugin file system exporter filter should:
43 - inherit from this class
44 - add any specific filters if needed
45 - define a `Meta` class which should:
46 - specify a plugin remote model for which filter is defined
47 - extend `fields` with specific ones
48 """
49
50 name = filters.CharFilter()
51
52 class Meta:
53 model = Exporter
54 fields = {
55 "name": NAME_FILTER_OPTIONS,
56 }
57
58
59 class ExporterViewSet(
60 NamedModelViewSet,
61 mixins.CreateModelMixin,
62 AsyncUpdateMixin,
63 mixins.RetrieveModelMixin,
64 mixins.ListModelMixin,
65 AsyncRemoveMixin,
66 ):
67 """
68 ViewSet for viewing exporters.
69 """
70
71 queryset = Exporter.objects.all()
72 serializer_class = ExporterSerializer
73 endpoint_name = "exporters"
74 router_lookup = "exporter"
75 filterset_class = ExporterFilter
76
77
78 class PulpExporterViewSet(ExporterViewSet):
79 """
80 ViewSet for viewing PulpExporters.
81 """
82
83 endpoint_name = "pulp"
84 serializer_class = PulpExporterSerializer
85 queryset = PulpExporter.objects.all()
86
87
88 class FilesystemExporterViewSet(ExporterViewSet):
89 """
90 Endpoint for managing FilesystemExporters. FilesystemExporters are provided as a tech preview.
91 """
92
93 endpoint_name = "filesystem"
94 serializer_class = FilesystemExporterSerializer
95 queryset = FilesystemExporter.objects.all()
96
97
98 class ExportViewSet(
99 NamedModelViewSet,
100 mixins.CreateModelMixin,
101 mixins.RetrieveModelMixin,
102 mixins.ListModelMixin,
103 mixins.DestroyModelMixin,
104 ):
105 """
106 ViewSet for viewing exports from an Exporter.
107 """
108
109 endpoint_name = "exports"
110 nest_prefix = "exporters"
111 router_lookup = "export"
112 lookup_field = "pk"
113 parent_lookup_kwargs = {"exporter_pk": "exporter__pk"}
114 serializer_class = ExportSerializer
115 queryset = Export.objects.all()
116 parent_viewset = ExporterViewSet
117
118
119 class PulpExportViewSet(ExportViewSet):
120 """
121 ViewSet for viewing exports from a PulpExporter.
122 """
123
124 parent_viewset = PulpExporterViewSet
125 serializer_class = PulpExportSerializer
126 queryset = PulpExport.objects.all()
127
128 @extend_schema(
129 request=PulpExportSerializer,
130 description="Trigger an asynchronous task to export a set of repositories",
131 responses={202: AsyncOperationResponseSerializer},
132 )
133 def create(self, request, exporter_pk):
134 """
135 Generates a Task to export the set of repositories assigned to a specific PulpExporter.
136 """
137 # Validate Exporter
138 exporter = PulpExporter.objects.get(pk=exporter_pk).cast()
139 ExporterSerializer.validate_path(exporter.path, check_is_dir=True)
140
141 # Validate Export
142 serializer = PulpExportSerializer(data=request.data, context={"exporter": exporter})
143 serializer.is_valid(raise_exception=True)
144
145 # Invoke the export
146 task = dispatch(
147 pulp_export,
148 exclusive_resources=[exporter],
149 kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
150 )
151
152 return OperationPostponedResponse(task, request)
153
154
155 class FilesystemExportViewSet(ExportViewSet):
156 """
157 Endpoint for managing FilesystemExports. This endpoint is provided as a tech preview.
158 """
159
160 parent_viewset = FilesystemExporterViewSet
161 serializer_class = FilesystemExportSerializer
162 queryset = FilesystemExport.objects.all()
163
164 @extend_schema(
165 request=FilesystemExportSerializer,
166 description="Trigger an asynchronous task to export files to the filesystem",
167 responses={202: AsyncOperationResponseSerializer},
168 )
169 def create(self, request, exporter_pk):
170 """
171 Generates a Task to export files to the filesystem.
172 """
173 # Validate Exporter
174 exporter = FilesystemExporter.objects.get(pk=exporter_pk).cast()
175 ExporterSerializer.validate_path(exporter.path, check_is_dir=True)
176
177 # Validate Export
178 serializer = FilesystemExportSerializer(data=request.data, context={"exporter": exporter})
179 serializer.is_valid(raise_exception=True)
180
181 if request.data.get("publication"):
182 publication = self.get_resource(request.data["publication"], Publication)
183
184 task = dispatch(
185 fs_publication_export,
186 exclusive_resources=[exporter],
187 kwargs={"exporter_pk": exporter.pk, "publication_pk": publication.pk},
188 )
189 else:
190 repo_version = self.get_resource(request.data["repository_version"], RepositoryVersion)
191
192 task = dispatch(
193 fs_repo_version_export,
194 exclusive_resources=[exporter],
195 kwargs={"exporter_pk": str(exporter.pk), "repo_version_pk": repo_version.pk},
196 )
197
198 return OperationPostponedResponse(task, request)
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -146,6 +146,7 @@
task = dispatch(
pulp_export,
exclusive_resources=[exporter],
+ shared_resources=exporter.repositories.all(),
kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
| {"golden_diff": "diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py\n--- a/pulpcore/app/viewsets/exporter.py\n+++ b/pulpcore/app/viewsets/exporter.py\n@@ -146,6 +146,7 @@\n task = dispatch(\n pulp_export,\n exclusive_resources=[exporter],\n+ shared_resources=exporter.repositories.all(),\n kwargs={\"exporter_pk\": str(exporter.pk), \"params\": request.data},\n )\n", "issue": "Export is not locking on the exported repositories\nSSIA\n", "before_files": [{"content": "from django_filters.rest_framework import filters\n\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\n\nfrom pulpcore.app.models import (\n Export,\n Exporter,\n FilesystemExport,\n FilesystemExporter,\n Publication,\n PulpExport,\n PulpExporter,\n RepositoryVersion,\n)\n\nfrom pulpcore.app.serializers import (\n AsyncOperationResponseSerializer,\n ExportSerializer,\n ExporterSerializer,\n FilesystemExporterSerializer,\n FilesystemExportSerializer,\n PulpExporterSerializer,\n PulpExportSerializer,\n)\n\nfrom pulpcore.app.tasks.export import fs_publication_export, fs_repo_version_export, pulp_export\n\nfrom pulpcore.app.viewsets import (\n AsyncRemoveMixin,\n AsyncUpdateMixin,\n BaseFilterSet,\n NamedModelViewSet,\n)\nfrom pulpcore.app.viewsets.base import NAME_FILTER_OPTIONS\nfrom pulpcore.plugin.tasking import dispatch\nfrom pulpcore.app.response import OperationPostponedResponse\n\n\nclass ExporterFilter(BaseFilterSet):\n \"\"\"\n Plugin file system exporter filter should:\n - inherit from this class\n - add any specific filters if needed\n - define a `Meta` class which should:\n - specify a plugin remote model for which filter is defined\n - extend `fields` with specific ones\n \"\"\"\n\n name = filters.CharFilter()\n\n class Meta:\n model = Exporter\n fields = {\n \"name\": NAME_FILTER_OPTIONS,\n }\n\n\nclass ExporterViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n AsyncUpdateMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n AsyncRemoveMixin,\n):\n \"\"\"\n ViewSet for viewing exporters.\n \"\"\"\n\n queryset = Exporter.objects.all()\n serializer_class = ExporterSerializer\n endpoint_name = \"exporters\"\n router_lookup = \"exporter\"\n filterset_class = ExporterFilter\n\n\nclass PulpExporterViewSet(ExporterViewSet):\n \"\"\"\n ViewSet for viewing PulpExporters.\n \"\"\"\n\n endpoint_name = \"pulp\"\n serializer_class = PulpExporterSerializer\n queryset = PulpExporter.objects.all()\n\n\nclass FilesystemExporterViewSet(ExporterViewSet):\n \"\"\"\n Endpoint for managing FilesystemExporters. 
FilesystemExporters are provided as a tech preview.\n \"\"\"\n\n endpoint_name = \"filesystem\"\n serializer_class = FilesystemExporterSerializer\n queryset = FilesystemExporter.objects.all()\n\n\nclass ExportViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n):\n \"\"\"\n ViewSet for viewing exports from an Exporter.\n \"\"\"\n\n endpoint_name = \"exports\"\n nest_prefix = \"exporters\"\n router_lookup = \"export\"\n lookup_field = \"pk\"\n parent_lookup_kwargs = {\"exporter_pk\": \"exporter__pk\"}\n serializer_class = ExportSerializer\n queryset = Export.objects.all()\n parent_viewset = ExporterViewSet\n\n\nclass PulpExportViewSet(ExportViewSet):\n \"\"\"\n ViewSet for viewing exports from a PulpExporter.\n \"\"\"\n\n parent_viewset = PulpExporterViewSet\n serializer_class = PulpExportSerializer\n queryset = PulpExport.objects.all()\n\n @extend_schema(\n request=PulpExportSerializer,\n description=\"Trigger an asynchronous task to export a set of repositories\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export the set of repositories assigned to a specific PulpExporter.\n \"\"\"\n # Validate Exporter\n exporter = PulpExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = PulpExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n # Invoke the export\n task = dispatch(\n pulp_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": str(exporter.pk), \"params\": request.data},\n )\n\n return OperationPostponedResponse(task, request)\n\n\nclass FilesystemExportViewSet(ExportViewSet):\n \"\"\"\n Endpoint for managing FilesystemExports. 
This endpoint is provided as a tech preview.\n \"\"\"\n\n parent_viewset = FilesystemExporterViewSet\n serializer_class = FilesystemExportSerializer\n queryset = FilesystemExport.objects.all()\n\n @extend_schema(\n request=FilesystemExportSerializer,\n description=\"Trigger an asynchronous task to export files to the filesystem\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export files to the filesystem.\n \"\"\"\n # Validate Exporter\n exporter = FilesystemExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = FilesystemExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n if request.data.get(\"publication\"):\n publication = self.get_resource(request.data[\"publication\"], Publication)\n\n task = dispatch(\n fs_publication_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": exporter.pk, \"publication_pk\": publication.pk},\n )\n else:\n repo_version = self.get_resource(request.data[\"repository_version\"], RepositoryVersion)\n\n task = dispatch(\n fs_repo_version_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": str(exporter.pk), \"repo_version_pk\": repo_version.pk},\n )\n\n return OperationPostponedResponse(task, request)\n", "path": "pulpcore/app/viewsets/exporter.py"}], "after_files": [{"content": "from django_filters.rest_framework import filters\n\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework import mixins\n\nfrom pulpcore.app.models import (\n Export,\n Exporter,\n FilesystemExport,\n FilesystemExporter,\n Publication,\n PulpExport,\n PulpExporter,\n RepositoryVersion,\n)\n\nfrom pulpcore.app.serializers import (\n AsyncOperationResponseSerializer,\n ExportSerializer,\n ExporterSerializer,\n FilesystemExporterSerializer,\n FilesystemExportSerializer,\n PulpExporterSerializer,\n PulpExportSerializer,\n)\n\nfrom pulpcore.app.tasks.export import fs_publication_export, fs_repo_version_export, pulp_export\n\nfrom pulpcore.app.viewsets import (\n AsyncRemoveMixin,\n AsyncUpdateMixin,\n BaseFilterSet,\n NamedModelViewSet,\n)\nfrom pulpcore.app.viewsets.base import NAME_FILTER_OPTIONS\nfrom pulpcore.plugin.tasking import dispatch\nfrom pulpcore.app.response import OperationPostponedResponse\n\n\nclass ExporterFilter(BaseFilterSet):\n \"\"\"\n Plugin file system exporter filter should:\n - inherit from this class\n - add any specific filters if needed\n - define a `Meta` class which should:\n - specify a plugin remote model for which filter is defined\n - extend `fields` with specific ones\n \"\"\"\n\n name = filters.CharFilter()\n\n class Meta:\n model = Exporter\n fields = {\n \"name\": NAME_FILTER_OPTIONS,\n }\n\n\nclass ExporterViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n AsyncUpdateMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n AsyncRemoveMixin,\n):\n \"\"\"\n ViewSet for viewing exporters.\n \"\"\"\n\n queryset = Exporter.objects.all()\n serializer_class = ExporterSerializer\n endpoint_name = \"exporters\"\n router_lookup = \"exporter\"\n filterset_class = ExporterFilter\n\n\nclass PulpExporterViewSet(ExporterViewSet):\n \"\"\"\n ViewSet for viewing PulpExporters.\n \"\"\"\n\n endpoint_name = \"pulp\"\n serializer_class = PulpExporterSerializer\n queryset = PulpExporter.objects.all()\n\n\nclass FilesystemExporterViewSet(ExporterViewSet):\n \"\"\"\n 
Endpoint for managing FilesystemExporters. FilesystemExporters are provided as a tech preview.\n \"\"\"\n\n endpoint_name = \"filesystem\"\n serializer_class = FilesystemExporterSerializer\n queryset = FilesystemExporter.objects.all()\n\n\nclass ExportViewSet(\n NamedModelViewSet,\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n mixins.DestroyModelMixin,\n):\n \"\"\"\n ViewSet for viewing exports from an Exporter.\n \"\"\"\n\n endpoint_name = \"exports\"\n nest_prefix = \"exporters\"\n router_lookup = \"export\"\n lookup_field = \"pk\"\n parent_lookup_kwargs = {\"exporter_pk\": \"exporter__pk\"}\n serializer_class = ExportSerializer\n queryset = Export.objects.all()\n parent_viewset = ExporterViewSet\n\n\nclass PulpExportViewSet(ExportViewSet):\n \"\"\"\n ViewSet for viewing exports from a PulpExporter.\n \"\"\"\n\n parent_viewset = PulpExporterViewSet\n serializer_class = PulpExportSerializer\n queryset = PulpExport.objects.all()\n\n @extend_schema(\n request=PulpExportSerializer,\n description=\"Trigger an asynchronous task to export a set of repositories\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export the set of repositories assigned to a specific PulpExporter.\n \"\"\"\n # Validate Exporter\n exporter = PulpExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = PulpExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n # Invoke the export\n task = dispatch(\n pulp_export,\n exclusive_resources=[exporter],\n shared_resources=exporter.repositories.all(),\n kwargs={\"exporter_pk\": str(exporter.pk), \"params\": request.data},\n )\n\n return OperationPostponedResponse(task, request)\n\n\nclass FilesystemExportViewSet(ExportViewSet):\n \"\"\"\n Endpoint for managing FilesystemExports. This endpoint is provided as a tech preview.\n \"\"\"\n\n parent_viewset = FilesystemExporterViewSet\n serializer_class = FilesystemExportSerializer\n queryset = FilesystemExport.objects.all()\n\n @extend_schema(\n request=FilesystemExportSerializer,\n description=\"Trigger an asynchronous task to export files to the filesystem\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request, exporter_pk):\n \"\"\"\n Generates a Task to export files to the filesystem.\n \"\"\"\n # Validate Exporter\n exporter = FilesystemExporter.objects.get(pk=exporter_pk).cast()\n ExporterSerializer.validate_path(exporter.path, check_is_dir=True)\n\n # Validate Export\n serializer = FilesystemExportSerializer(data=request.data, context={\"exporter\": exporter})\n serializer.is_valid(raise_exception=True)\n\n if request.data.get(\"publication\"):\n publication = self.get_resource(request.data[\"publication\"], Publication)\n\n task = dispatch(\n fs_publication_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": exporter.pk, \"publication_pk\": publication.pk},\n )\n else:\n repo_version = self.get_resource(request.data[\"repository_version\"], RepositoryVersion)\n\n task = dispatch(\n fs_repo_version_export,\n exclusive_resources=[exporter],\n kwargs={\"exporter_pk\": str(exporter.pk), \"repo_version_pk\": repo_version.pk},\n )\n\n return OperationPostponedResponse(task, request)\n", "path": "pulpcore/app/viewsets/exporter.py"}]} | 1,989 | 109 |
gh_patches_debug_2276 | rasdani/github-patches | git_diff | cloudtools__troposphere-1740 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SageMaker Model ContainerDefinition object does not support attribute Mode
Setting a `Mode` attribute within the ContainerDefinition for both the `PrimaryContainer` and `Containers` attributes when creating a Model resource keeps throwing the error `AttributeError: ContainerDefinition object does not support attribute Mode`.
Per the latest CloudFormation docs https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-model-containerdefinition.html the `Mode` attribute is supported.
Without this support, models with multiple containers cannot be created or updated.
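
For illustration only, once `ContainerDefinition` accepts `Mode`, a multi-container model could be declared roughly as in the sketch below; the resource title, role ARN, image URIs, and the `"SingleModel"` value are placeholders based on the CloudFormation docs linked above, not taken from this issue.
```python
# Sketch only: assumes a troposphere build where ContainerDefinition supports Mode.
# All ARNs, image URIs, and names below are placeholders.
from troposphere import Template
from troposphere.sagemaker import ContainerDefinition, Model

template = Template()
template.add_resource(
    Model(
        "InferenceModel",
        ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
        Containers=[
            ContainerDefinition(
                Image="111122223333.dkr.ecr.us-east-1.amazonaws.com/model-a:latest",
                Mode="SingleModel",
            ),
            ContainerDefinition(
                Image="111122223333.dkr.ecr.us-east-1.amazonaws.com/model-b:latest",
                Mode="SingleModel",
            ),
        ],
    )
)
print(template.to_json())
```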
Would you prefer I open a PR, or can I wait if it won't take much?
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/sagemaker.py`
Content:
```
1 # Copyright (c) 2012-2018, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7 from .validators import integer
8
9
10 class GitConfig(AWSProperty):
11 props = {
12 'Branch': (basestring, False),
13 'RepositoryUrl': (basestring, True),
14 'SecretArn': (basestring, False),
15 }
16
17
18 class CodeRepository(AWSObject):
19 resource_type = "AWS::SageMaker::CodeRepository"
20
21 props = {
22 'CodeRepositoryName': (basestring, False),
23 'GitConfig': (GitConfig, True)
24 }
25
26
27 class Endpoint(AWSObject):
28 resource_type = "AWS::SageMaker::Endpoint"
29
30 props = {
31 'EndpointName': (basestring, False),
32 'EndpointConfigName': (basestring, True),
33 'Tags': (Tags, True)
34 }
35
36
37 class ProductionVariant(AWSProperty):
38 props = {
39 'ModelName': (basestring, True),
40 'VariantName': (basestring, True),
41 'InitialInstanceCount': (integer, True),
42 'InstanceType': (basestring, True),
43 'InitialVariantWeight': (float, True)
44 }
45
46
47 class EndpointConfig(AWSObject):
48 resource_type = "AWS::SageMaker::EndpointConfig"
49
50 props = {
51 'EndpointConfigName': (basestring, False),
52 'ProductionVariants': ([ProductionVariant], True),
53 'KmsKeyId': (basestring, False),
54 'Tags': (Tags, True)
55 }
56
57
58 class ContainerDefinition(AWSProperty):
59 props = {
60 'ContainerHostname': (basestring, False),
61 'Environment': (dict, False),
62 'ModelDataUrl': (basestring, False),
63 'Image': (basestring, True)
64 }
65
66
67 class VpcConfig(AWSProperty):
68 props = {
69 'Subnets': ([basestring], True),
70 'SecurityGroupIds': ([basestring], True)
71 }
72
73
74 class Model(AWSObject):
75 resource_type = "AWS::SageMaker::Model"
76
77 props = {
78 'Containers': ([ContainerDefinition], False),
79 'ExecutionRoleArn': (basestring, True),
80 'ModelName': (basestring, False),
81 'PrimaryContainer': (ContainerDefinition, False),
82 'Tags': (Tags, False),
83 'VpcConfig': (VpcConfig, False),
84 }
85
86
87 class NotebookInstanceLifecycleHook(AWSProperty):
88 props = {
89 'Content': (basestring, False)
90 }
91
92
93 class NotebookInstanceLifecycleConfig(AWSObject):
94 resource_type = "AWS::SageMaker::NotebookInstanceLifecycleConfig"
95
96 props = {
97 'NotebookInstanceLifecycleConfigName': (basestring, False),
98 'OnCreate': ([NotebookInstanceLifecycleHook], False),
99 'OnStart': ([NotebookInstanceLifecycleHook], False)
100 }
101
102
103 class NotebookInstance(AWSObject):
104 resource_type = "AWS::SageMaker::NotebookInstance"
105
106 props = {
107 'AcceleratorTypes': ([basestring], False),
108 'AdditionalCodeRepositories': ([basestring], False),
109 'DefaultCodeRepository': (basestring, False),
110 'DirectInternetAccess': (basestring, False),
111 'InstanceType': (basestring, True),
112 'KmsKeyId': (basestring, False),
113 'LifecycleConfigName': (basestring, False),
114 'NotebookInstanceName': (basestring, False),
115 'RoleArn': (basestring, True),
116 'RootAccess': (basestring, False),
117 'SecurityGroupIds': ([basestring], False),
118 'SubnetId': (basestring, False),
119 'Tags': (Tags, False),
120 'VolumeSizeInGB': (integer, False),
121 }
122
123
124 class CognitoMemberDefinition(AWSProperty):
125 props = {
126 'CognitoClientId': (basestring, True),
127 'CognitoUserGroup': (basestring, True),
128 'CognitoUserPool': (basestring, True),
129 }
130
131
132 class MemberDefinition(AWSProperty):
133 props = {
134 'CognitoMemberDefinition': (CognitoMemberDefinition, True),
135 }
136
137
138 class NotificationConfiguration(AWSProperty):
139 props = {
140 'NotificationTopicArn': (basestring, True),
141 }
142
143
144 class Workteam(AWSObject):
145 resource_type = "AWS::SageMaker::Workteam"
146
147 props = {
148 'Description': (basestring, False),
149 'MemberDefinitions': ([MemberDefinition], False),
150 'NotificationConfiguration': (NotificationConfiguration, False),
151 'Tags': (Tags, False),
152 'WorkteamName': (basestring, False),
153 }
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/sagemaker.py b/troposphere/sagemaker.py
--- a/troposphere/sagemaker.py
+++ b/troposphere/sagemaker.py
@@ -59,6 +59,7 @@
props = {
'ContainerHostname': (basestring, False),
'Environment': (dict, False),
+ 'Mode': (basestring, False),
'ModelDataUrl': (basestring, False),
'Image': (basestring, True)
}
| {"golden_diff": "diff --git a/troposphere/sagemaker.py b/troposphere/sagemaker.py\n--- a/troposphere/sagemaker.py\n+++ b/troposphere/sagemaker.py\n@@ -59,6 +59,7 @@\n props = {\n 'ContainerHostname': (basestring, False),\n 'Environment': (dict, False),\n+ 'Mode': (basestring, False),\n 'ModelDataUrl': (basestring, False),\n 'Image': (basestring, True)\n }\n", "issue": "SageMaker Model ContainerDefinition object does not support attribute Mode\nSetting a `Mode` attribute within the ContainerDefinition for both the `PrimaryContainer` and `Containers` attributes for creating a Model resources keeps throwing error - `AttributeError: ContainerDefinition object does not support attribute Mode`.\r\n\r\nWithin the latest cloudformation docs https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-model-containerdefinition.html the `Mode` attribute is supported.\r\n\r\nWithout this support, multiple models container(s) creates/updates cannot be configured.\r\n\r\nWould you prefer I open a PR or can I wait if it won't take much.\r\n\r\nThanks.\n", "before_files": [{"content": "# Copyright (c) 2012-2018, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\nfrom .validators import integer\n\n\nclass GitConfig(AWSProperty):\n props = {\n 'Branch': (basestring, False),\n 'RepositoryUrl': (basestring, True),\n 'SecretArn': (basestring, False),\n }\n\n\nclass CodeRepository(AWSObject):\n resource_type = \"AWS::SageMaker::CodeRepository\"\n\n props = {\n 'CodeRepositoryName': (basestring, False),\n 'GitConfig': (GitConfig, True)\n }\n\n\nclass Endpoint(AWSObject):\n resource_type = \"AWS::SageMaker::Endpoint\"\n\n props = {\n 'EndpointName': (basestring, False),\n 'EndpointConfigName': (basestring, True),\n 'Tags': (Tags, True)\n }\n\n\nclass ProductionVariant(AWSProperty):\n props = {\n 'ModelName': (basestring, True),\n 'VariantName': (basestring, True),\n 'InitialInstanceCount': (integer, True),\n 'InstanceType': (basestring, True),\n 'InitialVariantWeight': (float, True)\n }\n\n\nclass EndpointConfig(AWSObject):\n resource_type = \"AWS::SageMaker::EndpointConfig\"\n\n props = {\n 'EndpointConfigName': (basestring, False),\n 'ProductionVariants': ([ProductionVariant], True),\n 'KmsKeyId': (basestring, False),\n 'Tags': (Tags, True)\n }\n\n\nclass ContainerDefinition(AWSProperty):\n props = {\n 'ContainerHostname': (basestring, False),\n 'Environment': (dict, False),\n 'ModelDataUrl': (basestring, False),\n 'Image': (basestring, True)\n }\n\n\nclass VpcConfig(AWSProperty):\n props = {\n 'Subnets': ([basestring], True),\n 'SecurityGroupIds': ([basestring], True)\n }\n\n\nclass Model(AWSObject):\n resource_type = \"AWS::SageMaker::Model\"\n\n props = {\n 'Containers': ([ContainerDefinition], False),\n 'ExecutionRoleArn': (basestring, True),\n 'ModelName': (basestring, False),\n 'PrimaryContainer': (ContainerDefinition, False),\n 'Tags': (Tags, False),\n 'VpcConfig': (VpcConfig, False),\n }\n\n\nclass NotebookInstanceLifecycleHook(AWSProperty):\n props = {\n 'Content': (basestring, False)\n }\n\n\nclass NotebookInstanceLifecycleConfig(AWSObject):\n resource_type = \"AWS::SageMaker::NotebookInstanceLifecycleConfig\"\n\n props = {\n 'NotebookInstanceLifecycleConfigName': (basestring, False),\n 'OnCreate': ([NotebookInstanceLifecycleHook], False),\n 'OnStart': ([NotebookInstanceLifecycleHook], False)\n }\n\n\nclass NotebookInstance(AWSObject):\n resource_type = 
\"AWS::SageMaker::NotebookInstance\"\n\n props = {\n 'AcceleratorTypes': ([basestring], False),\n 'AdditionalCodeRepositories': ([basestring], False),\n 'DefaultCodeRepository': (basestring, False),\n 'DirectInternetAccess': (basestring, False),\n 'InstanceType': (basestring, True),\n 'KmsKeyId': (basestring, False),\n 'LifecycleConfigName': (basestring, False),\n 'NotebookInstanceName': (basestring, False),\n 'RoleArn': (basestring, True),\n 'RootAccess': (basestring, False),\n 'SecurityGroupIds': ([basestring], False),\n 'SubnetId': (basestring, False),\n 'Tags': (Tags, False),\n 'VolumeSizeInGB': (integer, False),\n }\n\n\nclass CognitoMemberDefinition(AWSProperty):\n props = {\n 'CognitoClientId': (basestring, True),\n 'CognitoUserGroup': (basestring, True),\n 'CognitoUserPool': (basestring, True),\n }\n\n\nclass MemberDefinition(AWSProperty):\n props = {\n 'CognitoMemberDefinition': (CognitoMemberDefinition, True),\n }\n\n\nclass NotificationConfiguration(AWSProperty):\n props = {\n 'NotificationTopicArn': (basestring, True),\n }\n\n\nclass Workteam(AWSObject):\n resource_type = \"AWS::SageMaker::Workteam\"\n\n props = {\n 'Description': (basestring, False),\n 'MemberDefinitions': ([MemberDefinition], False),\n 'NotificationConfiguration': (NotificationConfiguration, False),\n 'Tags': (Tags, False),\n 'WorkteamName': (basestring, False),\n }\n", "path": "troposphere/sagemaker.py"}], "after_files": [{"content": "# Copyright (c) 2012-2018, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\nfrom .validators import integer\n\n\nclass GitConfig(AWSProperty):\n props = {\n 'Branch': (basestring, False),\n 'RepositoryUrl': (basestring, True),\n 'SecretArn': (basestring, False),\n }\n\n\nclass CodeRepository(AWSObject):\n resource_type = \"AWS::SageMaker::CodeRepository\"\n\n props = {\n 'CodeRepositoryName': (basestring, False),\n 'GitConfig': (GitConfig, True)\n }\n\n\nclass Endpoint(AWSObject):\n resource_type = \"AWS::SageMaker::Endpoint\"\n\n props = {\n 'EndpointName': (basestring, False),\n 'EndpointConfigName': (basestring, True),\n 'Tags': (Tags, True)\n }\n\n\nclass ProductionVariant(AWSProperty):\n props = {\n 'ModelName': (basestring, True),\n 'VariantName': (basestring, True),\n 'InitialInstanceCount': (integer, True),\n 'InstanceType': (basestring, True),\n 'InitialVariantWeight': (float, True)\n }\n\n\nclass EndpointConfig(AWSObject):\n resource_type = \"AWS::SageMaker::EndpointConfig\"\n\n props = {\n 'EndpointConfigName': (basestring, False),\n 'ProductionVariants': ([ProductionVariant], True),\n 'KmsKeyId': (basestring, False),\n 'Tags': (Tags, True)\n }\n\n\nclass ContainerDefinition(AWSProperty):\n props = {\n 'ContainerHostname': (basestring, False),\n 'Environment': (dict, False),\n 'Mode': (basestring, False),\n 'ModelDataUrl': (basestring, False),\n 'Image': (basestring, True)\n }\n\n\nclass VpcConfig(AWSProperty):\n props = {\n 'Subnets': ([basestring], True),\n 'SecurityGroupIds': ([basestring], True)\n }\n\n\nclass Model(AWSObject):\n resource_type = \"AWS::SageMaker::Model\"\n\n props = {\n 'Containers': ([ContainerDefinition], False),\n 'ExecutionRoleArn': (basestring, True),\n 'ModelName': (basestring, False),\n 'PrimaryContainer': (ContainerDefinition, False),\n 'Tags': (Tags, False),\n 'VpcConfig': (VpcConfig, False),\n }\n\n\nclass NotebookInstanceLifecycleHook(AWSProperty):\n props = {\n 'Content': (basestring, False)\n }\n\n\nclass 
NotebookInstanceLifecycleConfig(AWSObject):\n resource_type = \"AWS::SageMaker::NotebookInstanceLifecycleConfig\"\n\n props = {\n 'NotebookInstanceLifecycleConfigName': (basestring, False),\n 'OnCreate': ([NotebookInstanceLifecycleHook], False),\n 'OnStart': ([NotebookInstanceLifecycleHook], False)\n }\n\n\nclass NotebookInstance(AWSObject):\n resource_type = \"AWS::SageMaker::NotebookInstance\"\n\n props = {\n 'AcceleratorTypes': ([basestring], False),\n 'AdditionalCodeRepositories': ([basestring], False),\n 'DefaultCodeRepository': (basestring, False),\n 'DirectInternetAccess': (basestring, False),\n 'InstanceType': (basestring, True),\n 'KmsKeyId': (basestring, False),\n 'LifecycleConfigName': (basestring, False),\n 'NotebookInstanceName': (basestring, False),\n 'RoleArn': (basestring, True),\n 'RootAccess': (basestring, False),\n 'SecurityGroupIds': ([basestring], False),\n 'SubnetId': (basestring, False),\n 'Tags': (Tags, False),\n 'VolumeSizeInGB': (integer, False),\n }\n\n\nclass CognitoMemberDefinition(AWSProperty):\n props = {\n 'CognitoClientId': (basestring, True),\n 'CognitoUserGroup': (basestring, True),\n 'CognitoUserPool': (basestring, True),\n }\n\n\nclass MemberDefinition(AWSProperty):\n props = {\n 'CognitoMemberDefinition': (CognitoMemberDefinition, True),\n }\n\n\nclass NotificationConfiguration(AWSProperty):\n props = {\n 'NotificationTopicArn': (basestring, True),\n }\n\n\nclass Workteam(AWSObject):\n resource_type = \"AWS::SageMaker::Workteam\"\n\n props = {\n 'Description': (basestring, False),\n 'MemberDefinitions': ([MemberDefinition], False),\n 'NotificationConfiguration': (NotificationConfiguration, False),\n 'Tags': (Tags, False),\n 'WorkteamName': (basestring, False),\n }\n", "path": "troposphere/sagemaker.py"}]} | 1,790 | 111 |
gh_patches_debug_10482 | rasdani/github-patches | git_diff | encode__httpx-737 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version 0.11.0
This one is a big deal, since it reintroduces the sync client, and is essentially a 1.0 pre-release in terms of how the API looks.
```python
>>> import httpx
>>> httpx.get('https://www.example.com')
<Response [200 OK]>
```
🎉✨ **TA-DA!** ✨🎉
---
# Release notes
## 0.11.0 (January 9th, 2020)
The 0.11 release reintroduces our sync support, so that `httpx` now supports both a standard thread-concurrency API, and an async API.
Existing async `httpx` users that are upgrading to 0.11 should ensure that:
* Async codebases should always use a client instance to make requests, instead of the top-level API.
* The async client is named as `httpx.AsyncClient()`, instead of `httpx.Client()`.
* When instantiating proxy configurations use the `httpx.Proxy()` class, instead of the previous `httpx.HTTPProxy()`. This new configuration class works for configuring both sync and async clients.
We believe the API is now pretty much stable, and are aiming for a 1.0 release sometime on or before April 2020.
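
To make the migration concrete, here is a minimal sketch of the 0.11-style usage described above (the endpoint URL and the proxy address are placeholders, not values from the release notes):

```python
import asyncio

import httpx


# Synchronous usage: the top-level API and httpx.Client() are now plain blocking calls.
print(httpx.get("https://www.example.com"))


# Asynchronous usage: always go through an explicit httpx.AsyncClient() instance.
async def fetch() -> str:
    proxy = httpx.Proxy(url="http://localhost:8030")  # placeholder proxy URL
    async with httpx.AsyncClient(proxies=proxy) as client:
        response = await client.get("https://www.example.com")
        return response.text


asyncio.run(fetch())
```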
### Changed
- Top level API such as `httpx.get(url, ...)`, `httpx.post(url, ...)`, `httpx.request(method, url, ...)` becomes synchronous.
- Added `httpx.Client()` for synchronous clients, with `httpx.AsyncClient` being used for async clients.
- Switched to `proxies=httpx.Proxy(...)` for proxy configuration.
- Network connection errors are wrapped in `httpx.NetworkError`, rather than exposing lower-level exception types directly.
### Removed
- The `request.url.origin` property and `httpx.Origin` class are no longer available.
- The per-request `cert`, `verify`, and `trust_env` arguments are escalated from raising errors if used, to no longer being available. These arguments should be used on a per-client instance instead, or in the top-level API.
- The `stream` argument has escalated from raising an error when used, to no longer being available. Use the `client.stream(...)` or `httpx.stream()` streaming API instead.
### Fixed
- Redirect loop detection matches against `(method, url)` rather than `url`. (Pull #734)
---
# What's next...
I'd expect that we'll likely end up waiting for a period of time after this release, and then end up releasing a 1.0 with either no API changes, or only very minimal API changes. (The only remaining area I can see us still wanting to refine/change would be some review making sure we've got an exception hierarchy/naming that we're entirely happy to stick with for 1.0 onwards.)
---
# Checklist
- [x] Reintroduce `Client` as a sync client. #735
- [x] Reintroduce `WSGIDispatch`. #735
- [x] Top-level API becomes sync, not async. #735
- [x] Drop `Origin` from public API. #688
- [x] Use `httpx.Proxy()` for proxy configuration, not the `httpx.HTTPProxy` dispatch class. #713
- [ ] ~Consider switching `client.params`, `client.headers`, `client.cookies` so that they don't have a setter/getter mismatch.~ Refs #678 #274
- [ ] ~Consider dropping UDS support.~ #723
- [x] Wrap IO Exceptions in httpx exceptions. #707
- [x] Docs #727
- [x] `httpx.Auth` becomes public API. #732 #731
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import re
5 from pathlib import Path
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 version = Path(package, "__version__.py").read_text()
15 return re.search("__version__ = ['\"]([^'\"]+)['\"]", version).group(1)
16
17
18 def get_long_description():
19 """
20 Return the README.
21 """
22 long_description = ""
23 with open("README.md", encoding="utf8") as f:
24 long_description += f.read()
25 long_description += "\n\n"
26 with open("CHANGELOG.md", encoding="utf8") as f:
27 long_description += f.read()
28 return long_description
29
30
31 def get_packages(package):
32 """
33 Return root package and all sub-packages.
34 """
35 return [str(path.parent) for path in Path(package).glob("**/__init__.py")]
36
37
38 setup(
39 name="httpx",
40 python_requires=">=3.6",
41 version=get_version("httpx"),
42 url="https://github.com/encode/httpx",
43 license="BSD",
44 description="The next generation HTTP client.",
45 long_description=get_long_description(),
46 long_description_content_type="text/markdown",
47 author="Tom Christie",
48 author_email="[email protected]",
49 package_data={"httpx": ["py.typed"]},
50 packages=get_packages("httpx"),
51 include_package_data=True,
52 zip_safe=False,
53 install_requires=[
54 "certifi",
55 "hstspreload",
56 "chardet==3.*",
57 "h11>=0.8,<0.10",
58 "h2==3.*",
59 "idna==2.*",
60 "rfc3986>=1.3,<2",
61 "sniffio==1.*",
62 "urllib3==1.*",
63 ],
64 classifiers=[
65 "Development Status :: 3 - Alpha",
66 "Environment :: Web Environment",
67 "Intended Audience :: Developers",
68 "License :: OSI Approved :: BSD License",
69 "Operating System :: OS Independent",
70 "Topic :: Internet :: WWW/HTTP",
71 "Framework :: AsyncIO",
72 "Framework :: Trio",
73 "Programming Language :: Python :: 3",
74 "Programming Language :: Python :: 3.6",
75 "Programming Language :: Python :: 3.7",
76 "Programming Language :: Python :: 3.8",
77 ],
78 )
79
```
Path: `httpx/__version__.py`
Content:
```
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.10.1"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.10.1"
+__version__ = "0.11.0"
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -62,7 +62,7 @@
"urllib3==1.*",
],
classifiers=[
- "Development Status :: 3 - Alpha",
+ "Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
| {"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.10.1\"\n+__version__ = \"0.11.0\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -62,7 +62,7 @@\n \"urllib3==1.*\",\n ],\n classifiers=[\n- \"Development Status :: 3 - Alpha\",\n+ \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n", "issue": "Version 0.11.0\nThis one is a big deal, since it reintroduces the sync client, and is essentially a 1.0 pre-release in terms of how the API looks.\r\n\r\n```python\r\n>>> import httpx\r\n>>> httpx.get('https://www.example.com')\r\n<Response [200 OK]>\r\n```\r\n\r\n\ud83c\udf89\u2728 **TA-DA!** \u2728\ud83c\udf89\r\n\r\n---\r\n\r\n# Release notes\r\n\r\n## 0.11.0 (January 9th, 2019)\r\n\r\nThe 0.11 release reintroduces our sync support, so that `httpx` now supports both a standard thread-concurrency API, and an async API.\r\n\r\nExisting async `httpx` users that are upgrading to 0.11 should ensure that:\r\n\r\n* Async codebases should always use a client instance to make requests, instead of the top-level API.\r\n* The async client is named as `httpx.AsyncClient()`, instead of `httpx.Client()`.\r\n* When instantiating proxy configurations use the `httpx.Proxy()` class, instead of the previous `httpx.HTTPProxy()`. This new configuration class works for configuring both sync and async clients.\r\n\r\nWe believe the API is now pretty much stable, and are aiming for a 1.0 release sometime on or before April 2020.\r\n\r\n### Changed\r\n\r\n- Top level API such as `httpx.get(url, ...)`, `httpx.post(url, ...)`, `httpx.request(method, url, ...)` becomes synchronous.\r\n- Added `httpx.Client()` for synchronous clients, with `httpx.AsyncClient` being used for async clients.\r\n- Switched to `proxies=httpx.Proxy(...)` for proxy configuration.\r\n- Network connection errors are wrapped in `httpx.NetworkError`, rather than exposing lower-level exception types directly.\r\n\r\n### Removed\r\n\r\n- The `request.url.origin` property and `httpx.Origin` class are no longer available.\r\n- The per-request `cert`, `verify`, and `trust_env` arguments are escalated from raising errors if used, to no longer being available. These arguments should be used on a per-client instance instead, or in the top-level API.\r\n- The `stream` argument has escalated from raising an error when used, to no longer being available. Use the `client.stream(...)` or `httpx.stream()` streaming API instead.\r\n\r\n### Fixed\r\n\r\n- Redirect loop detection matches against `(method, url)` rather than `url`. (Pull #734)\r\n\r\n---\r\n\r\n# What's next...\r\n\r\nI'd expect that we'll likely end up waiting for a period of time after this release, and then end up releasing a 1.0 with either no API changes, or only very minimal API changes. (The only remaining area I can see us still wanting to refine/change, would be some review making sure we've got an exception heirarchy/naming that we're entirely happy to stick with for 1.0 onwards)\r\n\r\n---\r\n\r\n# Checklist\r\n\r\n- [x] Reintroduce `Client` as a sync client. #735\r\n- [x] Reintroduce `WSGIDispatch`. #735\r\n- [x] Top-level API becomes sync, not async. #735\r\n- [x] Drop `Origin` from public API. 
#688\r\n- [x] Use `httpx.Proxy()` for proxy configuration, not the `httpx.HTTPProxy` dispatch class. #713\r\n- [ ] ~Consider switching `client.params`, `client.headers`, `client.cookies` so that they don't have a setter/getter mismatch.~ Refs #678 #274\r\n- [ ] ~Consider dropping UDS support.~ #723\r\n- [x] Wrap IO Exceptions in httpx exceptions. #707\r\n- [x] Docs #727\r\n- [x] `httpx.Auth` becomes public API. #732 #731\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n version = Path(package, \"__version__.py\").read_text()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", version).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n long_description = \"\"\n with open(\"README.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n long_description += \"\\n\\n\"\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n return long_description\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [str(path.parent) for path in Path(package).glob(\"**/__init__.py\")]\n\n\nsetup(\n name=\"httpx\",\n python_requires=\">=3.6\",\n version=get_version(\"httpx\"),\n url=\"https://github.com/encode/httpx\",\n license=\"BSD\",\n description=\"The next generation HTTP client.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\n \"certifi\",\n \"hstspreload\",\n \"chardet==3.*\",\n \"h11>=0.8,<0.10\",\n \"h2==3.*\",\n \"idna==2.*\",\n \"rfc3986>=1.3,<2\",\n \"sniffio==1.*\",\n \"urllib3==1.*\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Framework :: AsyncIO\",\n \"Framework :: Trio\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}, {"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.10.1\"\n", "path": "httpx/__version__.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n version = Path(package, \"__version__.py\").read_text()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", version).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n long_description = \"\"\n with open(\"README.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n long_description += \"\\n\\n\"\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n return long_description\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return 
[str(path.parent) for path in Path(package).glob(\"**/__init__.py\")]\n\n\nsetup(\n name=\"httpx\",\n python_requires=\">=3.6\",\n version=get_version(\"httpx\"),\n url=\"https://github.com/encode/httpx\",\n license=\"BSD\",\n description=\"The next generation HTTP client.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\n \"certifi\",\n \"hstspreload\",\n \"chardet==3.*\",\n \"h11>=0.8,<0.10\",\n \"h2==3.*\",\n \"idna==2.*\",\n \"rfc3986>=1.3,<2\",\n \"sniffio==1.*\",\n \"urllib3==1.*\",\n ],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Framework :: AsyncIO\",\n \"Framework :: Trio\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}, {"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.11.0\"\n", "path": "httpx/__version__.py"}]} | 1,830 | 189 |
gh_patches_debug_22967 | rasdani/github-patches | git_diff | autogluon__autogluon-379 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Balanced accuracy error
(this is with 6b247bf)
Consider (from `tabular-indepth` tutorial):
```python
>>> y_test[:5]
0 Sales
1 Sales
2 Exec-managerial
3 Exec-managerial
4 Prof-specialty
Name: occupation, dtype: object
>>> y_pred[:5]
array([' Other-service', ' Craft-repair', ' Exec-managerial', ' Sales',
' Other-service'], dtype=object)
```
**with ag**
```python
>>> ag.utils.tabular.metrics.accuracy(y_test, y_pred)
0.3393387245368001
>>> ag.utils.tabular.metrics.balanced_accuracy(y_test, y_pred)
[error see stacktrace]
```
**with sklearn**
```python
>>> sklearn.metrics.balanced_accuracy_score(y_test, y_pred)
0.21896145445995055
```
### Reason
I believe the issue stems from this line: https://github.com/awslabs/autogluon/blob/6b247bfea9d504381cc512e36ba1909e6c54c0c3/autogluon/utils/tabular/metrics/classification_metrics.py#L21
which would be fine if `prediction` and `solution` had already been encoded to integer class indices, but not in general (in the example above they are string labels). The encoding should instead be built from the set of unique labels seen in both arrays. This requires a bit more than a one-line refactoring though, so I'll open a PR.
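
A minimal sketch of that idea, building the one-hot encoding from the union of observed labels instead of assuming integer-encoded classes (the function name and variable names here are illustrative, not the final refactoring):

```python
import numpy as np


def one_hot_shared_classes(solution, prediction):
    # Use the union of labels seen in either array, so string labels work too.
    classes = np.unique(np.concatenate((solution, prediction)))
    n = len(solution)
    sol_ohe = np.zeros((n, len(classes)))
    pred_ohe = np.zeros((n, len(classes)))
    # np.unique returns sorted labels, so searchsorted maps each label to its column.
    sol_ohe[np.arange(n), np.searchsorted(classes, solution)] = 1
    pred_ohe[np.arange(n), np.searchsorted(classes, prediction)] = 1
    return sol_ohe, pred_ohe
```

The per-class true-positive-rate averaging in `balanced_accuracy` can then operate on these arrays regardless of whether the labels were strings or integers.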
### Stacktrace
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ddcca1d23e69> in <module>
----> 1 ag.utils.tabular.metrics.balanced_accuracy(y_test, y_pred)
~/Desktop/github-aws/autogluon-public/autogluon/utils/tabular/metrics/__init__.py in __call__(self, y_true, y_pred, sample_weight)
87 else:
88 return self._sign * self._score_func(y_true, y_pred,
---> 89 **self._kwargs)
90
91
~/Desktop/github-aws/autogluon-public/autogluon/utils/tabular/metrics/classification_metrics.py in balanced_accuracy(solution, prediction)
19 elif y_type == 'multiclass':
20 # Need to create a multiclass solution and a multiclass predictions
---> 21 max_class = int(np.max((np.max(solution), np.max(prediction))))
22 solution_binary = np.zeros((len(solution), max_class + 1))
23 prediction_binary = np.zeros((len(prediction), max_class + 1))
<__array_function__ internals> in amax(*args, **kwargs)
~/Desktop/github-aws/ghaws/lib/python3.7/site-packages/numpy/core/fromnumeric.py in amax(a, axis, out, keepdims, initial, where)
2666 """
2667 return _wrapreduction(a, np.maximum, 'max', axis, None, out,
-> 2668 keepdims=keepdims, initial=initial, where=where)
2669
2670
~/Desktop/github-aws/ghaws/lib/python3.7/site-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
88 return reduction(axis=axis, out=out, **passkwargs)
89
---> 90 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
91
92
TypeError: cannot perform reduce with flexible type
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autogluon/utils/tabular/metrics/classification_metrics.py`
Content:
```
1 import logging
2
3 import numpy as np
4 from sklearn.metrics.classification import _check_targets, type_of_target
5
6 logger = logging.getLogger(__name__)
7
8
9 def balanced_accuracy(solution, prediction):
10 y_type, solution, prediction = _check_targets(solution, prediction)
11
12 if y_type not in ["binary", "multiclass", 'multilabel-indicator']:
13 raise ValueError(f"{y_type} is not supported")
14
15 if y_type == 'binary':
16 # Do not transform into any multiclass representation
17 pass
18
19 elif y_type == 'multiclass':
20 # Need to create a multiclass solution and a multiclass predictions
21 max_class = int(np.max((np.max(solution), np.max(prediction))))
22 solution_binary = np.zeros((len(solution), max_class + 1))
23 prediction_binary = np.zeros((len(prediction), max_class + 1))
24 for i in range(len(solution)):
25 solution_binary[i, int(solution[i])] = 1
26 prediction_binary[i, int(prediction[i])] = 1
27 solution = solution_binary
28 prediction = prediction_binary
29
30 elif y_type == 'multilabel-indicator':
31 solution = solution.toarray()
32 prediction = prediction.toarray()
33 else:
34 raise NotImplementedError(f'bac_metric does not support task type {y_type}')
35
36 fn = np.sum(np.multiply(solution, (1 - prediction)), axis=0, dtype=float)
37 tp = np.sum(np.multiply(solution, prediction), axis=0, dtype=float)
38 # Bounding to avoid division by 0
39 eps = 1e-15
40 tp = np.maximum(eps, tp)
41 pos_num = np.maximum(eps, tp + fn)
42 tpr = tp / pos_num # true positive rate (sensitivity)
43
44 if y_type in ('binary', 'multilabel-indicator'):
45 tn = np.sum(
46 np.multiply((1 - solution), (1 - prediction)),
47 axis=0, dtype=float
48 )
49 fp = np.sum(
50 np.multiply((1 - solution), prediction),
51 axis=0, dtype=float
52 )
53 tn = np.maximum(eps, tn)
54 neg_num = np.maximum(eps, tn + fp)
55 tnr = tn / neg_num # true negative rate (specificity)
56 bac = 0.5 * (tpr + tnr)
57 elif y_type == 'multiclass':
58 bac = tpr
59 else:
60 raise ValueError(y_type)
61
62 return np.mean(bac) # average over all classes
63
64
65 def pac_score(solution, prediction):
66 """
67 Probabilistic Accuracy based on log_loss metric.
68 We assume the solution is in {0, 1} and prediction in [0, 1].
69 Otherwise, run normalize_array.
70 :param solution:
71 :param prediction:
72 :param task:
73 :return:
74 """
75
76 def normalize_array(solution, prediction):
77 """
78 Use min and max of solution as scaling factors to normalize prediction,
79 then threshold it to [0, 1].
80 Binarize solution to {0, 1}. This allows applying classification
81 scores to all cases. In principle, this should not do anything to
82 properly formatted classification inputs and outputs.
83 :param solution:
84 :param prediction:
85 :return:
86 """
87 # Binarize solution
88 sol = np.ravel(solution) # convert to 1-d array
89 maxi = np.nanmax(sol[np.isfinite(sol)])
90 mini = np.nanmin(sol[np.isfinite(sol)])
91 if maxi == mini:
92 logger.debug('Warning: cannot normalize array')
93 return [solution, prediction]
94 diff = maxi - mini
95 mid = (maxi + mini) / 2.
96
97 solution[solution >= mid] = 1
98 solution[solution < mid] = 0
99 # Normalize and threshold predictions (takes effect only if solution not
100 # in {0, 1})
101
102 prediction -= float(mini)
103 prediction /= float(diff)
104
105 # and if predictions exceed the bounds [0, 1]
106 prediction[prediction > 1] = 1
107 prediction[prediction < 0] = 0
108 # Make probabilities smoother
109 # new_prediction = np.power(new_prediction, (1./10))
110
111 return [solution, prediction]
112
113 def log_loss(solution, prediction, task):
114 """Log loss for binary and multiclass."""
115 [sample_num, label_num] = solution.shape
116 # Lower gives problems with float32!
117 eps = 0.00000003
118
119 if (task == 'multiclass') and (label_num > 1):
120 # Make sure the lines add up to one for multi-class classification
121 norma = np.sum(prediction, axis=1)
122 for k in range(sample_num):
123 prediction[k, :] /= np.maximum(norma[k], eps)
124
125 sample_num = solution.shape[0]
126 for i in range(sample_num):
127 j = np.argmax(solution[i, :])
128 solution[i, :] = 0
129 solution[i, j] = 1
130
131 solution = solution.astype(np.int32, copy=False)
132 # For the base prediction, this solution is ridiculous in the
133 # multi-label case
134
135 # Bounding of predictions to avoid log(0),1/0,...
136 prediction = np.minimum(1 - eps, np.maximum(eps, prediction))
137 # Compute the log loss
138 pos_class_log_loss = -np.mean(solution * np.log(prediction), axis=0)
139 if (task != 'multiclass') or (label_num == 1):
140 # The multi-label case is a bunch of binary problems.
141 # The second class is the negative class for each column.
142 neg_class_log_loss = -np.mean(
143 (1 - solution) * np.log(1 - prediction),
144 axis=0
145 )
146 log_loss = pos_class_log_loss + neg_class_log_loss
147 # Each column is an independent problem, so we average.
148 # The probabilities in one line do not add up to one.
149 # log_loss = mvmean(log_loss)
150 # print('binary {}'.format(log_loss))
151 # In the multilabel case, the right thing i to AVERAGE not sum
152 # We return all the scores so we can normalize correctly later on
153 else:
154 # For the multiclass case the probabilities in one line add up one.
155 log_loss = pos_class_log_loss
156 # We sum the contributions of the columns.
157 log_loss = np.sum(log_loss)
158 # print('multiclass {}'.format(log_loss))
159 return log_loss
160
161 def prior_log_loss(frac_pos, task):
162 """Baseline log loss.
163 For multiple classes ot labels return the values for each column
164 """
165 eps = 1e-15
166 frac_pos_ = np.maximum(eps, frac_pos)
167 if task != 'multiclass': # binary case
168 frac_neg = 1 - frac_pos
169 frac_neg_ = np.maximum(eps, frac_neg)
170 pos_class_log_loss_ = -frac_pos * np.log(frac_pos_)
171 neg_class_log_loss_ = -frac_neg * np.log(frac_neg_)
172 base_log_loss = pos_class_log_loss_ + neg_class_log_loss_
173 # base_log_loss = mvmean(base_log_loss)
174 # print('binary {}'.format(base_log_loss))
175 # In the multilabel case, the right thing i to AVERAGE not sum
176 # We return all the scores so we can normalize correctly later on
177 else: # multiclass case
178 fp = frac_pos_ / sum(frac_pos_) # Need to renormalize the lines in multiclass case
179 # Only ONE label is 1 in the multiclass case active for each line
180 pos_class_log_loss_ = -frac_pos * np.log(fp)
181 base_log_loss = np.sum(pos_class_log_loss_)
182 return base_log_loss
183
184 y_type = type_of_target(solution)
185
186 if y_type == 'binary':
187 if len(solution.shape) == 1:
188 solution = solution.reshape((-1, 1))
189 if len(prediction.shape) == 1:
190 prediction = prediction.reshape((-1, 1))
191 if len(prediction.shape) == 2:
192 if prediction.shape[1] > 2:
193 raise ValueError(f'A prediction array with probability values '
194 f'for {prediction.shape[1]} classes is not a binary '
195 f'classification problem')
196 # Prediction will be copied into a new binary array - no copy
197 prediction = prediction[:, 1].reshape((-1, 1))
198 else:
199 raise ValueError(f'Invalid prediction shape {prediction.shape}')
200
201 elif y_type == 'multiclass':
202 if len(solution.shape) == 2:
203 if solution.shape[1] > 1:
204 raise ValueError(f'Solution array must only contain one class '
205 f'label, but contains {solution.shape[1]}')
206 elif len(solution.shape) == 1:
207 pass
208 else:
209 raise ValueError('Solution.shape %s' % solution.shape)
210
211 # Need to create a multiclass solution and a multiclass predictions
212 max_class = int(np.max((np.max(solution), np.max(prediction))))
213 solution_binary = np.zeros((len(solution), max_class + 1))
214 for i in range(len(solution)):
215 solution_binary[i, int(solution[i])] = 1
216 solution = solution_binary
217
218 elif y_type == 'multilabel-indicator':
219 solution = solution.copy()
220
221 else:
222 raise NotImplementedError(f'pac_score does not support task {y_type}')
223
224 solution, prediction = normalize_array(solution, prediction.copy())
225
226 sample_num, _ = solution.shape
227
228 eps = 1e-7
229 # Compute the base log loss (using the prior probabilities)
230 pos_num = 1. * np.sum(solution, axis=0, dtype=float) # float conversion!
231 frac_pos = pos_num / sample_num # prior proba of positive class
232 the_base_log_loss = prior_log_loss(frac_pos, y_type)
233 the_log_loss = log_loss(solution, prediction, y_type)
234
235 # Exponentiate to turn into an accuracy-like score.
236 # In the multi-label case, we need to average AFTER taking the exp
237 # because it is an NL operation
238 pac = np.mean(np.exp(-the_log_loss))
239 base_pac = np.mean(np.exp(-the_base_log_loss))
240 # Normalize: 0 for random, 1 for perfect
241 score = (pac - base_pac) / np.maximum(eps, (1 - base_pac))
242
243 return score
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/autogluon/utils/tabular/metrics/classification_metrics.py b/autogluon/utils/tabular/metrics/classification_metrics.py
--- a/autogluon/utils/tabular/metrics/classification_metrics.py
+++ b/autogluon/utils/tabular/metrics/classification_metrics.py
@@ -17,15 +17,19 @@
pass
elif y_type == 'multiclass':
- # Need to create a multiclass solution and a multiclass predictions
- max_class = int(np.max((np.max(solution), np.max(prediction))))
- solution_binary = np.zeros((len(solution), max_class + 1))
- prediction_binary = np.zeros((len(prediction), max_class + 1))
- for i in range(len(solution)):
- solution_binary[i, int(solution[i])] = 1
- prediction_binary[i, int(prediction[i])] = 1
- solution = solution_binary
- prediction = prediction_binary
+ n = len(solution)
+ unique_sol, encoded_sol = np.unique(solution, return_inverse=True)
+ unique_pred, encoded_pred = np.unique(prediction, return_inverse=True)
+ classes = np.unique(np.concatenate((unique_sol, unique_pred)))
+ map_sol = np.array([np.where(classes==c)[0][0] for c in unique_sol])
+ map_pred = np.array([np.where(classes==c)[0][0] for c in unique_pred])
+ # one hot encoding
+ sol_ohe = np.zeros((n, len(classes)))
+ pred_ohe = np.zeros((n, len(classes)))
+ sol_ohe[np.arange(n), map_sol[encoded_sol]] = 1
+ pred_ohe[np.arange(n), map_pred[encoded_pred]] = 1
+ solution = sol_ohe
+ prediction = pred_ohe
elif y_type == 'multilabel-indicator':
solution = solution.toarray()
| {"golden_diff": "diff --git a/autogluon/utils/tabular/metrics/classification_metrics.py b/autogluon/utils/tabular/metrics/classification_metrics.py\n--- a/autogluon/utils/tabular/metrics/classification_metrics.py\n+++ b/autogluon/utils/tabular/metrics/classification_metrics.py\n@@ -17,15 +17,19 @@\n pass\n \n elif y_type == 'multiclass':\n- # Need to create a multiclass solution and a multiclass predictions\n- max_class = int(np.max((np.max(solution), np.max(prediction))))\n- solution_binary = np.zeros((len(solution), max_class + 1))\n- prediction_binary = np.zeros((len(prediction), max_class + 1))\n- for i in range(len(solution)):\n- solution_binary[i, int(solution[i])] = 1\n- prediction_binary[i, int(prediction[i])] = 1\n- solution = solution_binary\n- prediction = prediction_binary\n+ n = len(solution)\n+ unique_sol, encoded_sol = np.unique(solution, return_inverse=True)\n+ unique_pred, encoded_pred = np.unique(prediction, return_inverse=True)\n+ classes = np.unique(np.concatenate((unique_sol, unique_pred)))\n+ map_sol = np.array([np.where(classes==c)[0][0] for c in unique_sol])\n+ map_pred = np.array([np.where(classes==c)[0][0] for c in unique_pred])\n+ # one hot encoding\n+ sol_ohe = np.zeros((n, len(classes)))\n+ pred_ohe = np.zeros((n, len(classes)))\n+ sol_ohe[np.arange(n), map_sol[encoded_sol]] = 1\n+ pred_ohe[np.arange(n), map_pred[encoded_pred]] = 1\n+ solution = sol_ohe\n+ prediction = pred_ohe\n \n elif y_type == 'multilabel-indicator':\n solution = solution.toarray()\n", "issue": "Balanced accuracy error\n(this is with 6b247bf)\r\n\r\nConsider (from `tabular-indepth` tutorial):\r\n\r\n```python\r\n>>> y_test[:5]\r\n0 Sales\r\n1 Sales\r\n2 Exec-managerial\r\n3 Exec-managerial\r\n4 Prof-specialty\r\nName: occupation, dtype: object\r\n\r\n>>> y_pred[:5]\r\narray([' Other-service', ' Craft-repair', ' Exec-managerial', ' Sales',\r\n ' Other-service'], dtype=object)\r\n```\r\n\r\n**with ag**\r\n\r\n```python\r\n>>> ag.utils.tabular.metrics.accuracy(y_test, y_pred)\r\n0.3393387245368001\r\n\r\n>>> ag.utils.tabular.metrics.balanced_accuracy(y_test, y_pred)\r\n[error see stacktrace]\r\n```\r\n\r\n**with sklearn**\r\n\r\n```python\r\n>>> sklearn.metrics.balanced_accuracy_score(y_test, y_pred)\r\n0.21896145445995055\r\n```\r\n\r\n### Reason\r\n\r\nI believe the issue stems from this line: https://github.com/awslabs/autogluon/blob/6b247bfea9d504381cc512e36ba1909e6c54c0c3/autogluon/utils/tabular/metrics/classification_metrics.py#L21 \r\n\r\nwhich would be ok if `prediction` and `solution` have been encoded to integer values but not in general. The max should be over the number of unique elements. This requires a bit more than a one line refactoring though, I'll open a PR. 
\r\n\r\n### Stacktrace\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-ddcca1d23e69> in <module>\r\n----> 1 ag.utils.tabular.metrics.balanced_accuracy(y_test, y_pred)\r\n\r\n~/Desktop/github-aws/autogluon-public/autogluon/utils/tabular/metrics/__init__.py in __call__(self, y_true, y_pred, sample_weight)\r\n 87 else:\r\n 88 return self._sign * self._score_func(y_true, y_pred,\r\n---> 89 **self._kwargs)\r\n 90 \r\n 91 \r\n\r\n~/Desktop/github-aws/autogluon-public/autogluon/utils/tabular/metrics/classification_metrics.py in balanced_accuracy(solution, prediction)\r\n 19 elif y_type == 'multiclass':\r\n 20 # Need to create a multiclass solution and a multiclass predictions\r\n---> 21 max_class = int(np.max((np.max(solution), np.max(prediction))))\r\n 22 solution_binary = np.zeros((len(solution), max_class + 1))\r\n 23 prediction_binary = np.zeros((len(prediction), max_class + 1))\r\n\r\n<__array_function__ internals> in amax(*args, **kwargs)\r\n\r\n~/Desktop/github-aws/ghaws/lib/python3.7/site-packages/numpy/core/fromnumeric.py in amax(a, axis, out, keepdims, initial, where)\r\n 2666 \"\"\"\r\n 2667 return _wrapreduction(a, np.maximum, 'max', axis, None, out,\r\n-> 2668 keepdims=keepdims, initial=initial, where=where)\r\n 2669 \r\n 2670 \r\n\r\n~/Desktop/github-aws/ghaws/lib/python3.7/site-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)\r\n 88 return reduction(axis=axis, out=out, **passkwargs)\r\n 89 \r\n---> 90 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)\r\n 91 \r\n 92 \r\n\r\nTypeError: cannot perform reduce with flexible type\r\n```\n", "before_files": [{"content": "import logging\n\nimport numpy as np\nfrom sklearn.metrics.classification import _check_targets, type_of_target\n\nlogger = logging.getLogger(__name__)\n\n\ndef balanced_accuracy(solution, prediction):\n y_type, solution, prediction = _check_targets(solution, prediction)\n\n if y_type not in [\"binary\", \"multiclass\", 'multilabel-indicator']:\n raise ValueError(f\"{y_type} is not supported\")\n\n if y_type == 'binary':\n # Do not transform into any multiclass representation\n pass\n\n elif y_type == 'multiclass':\n # Need to create a multiclass solution and a multiclass predictions\n max_class = int(np.max((np.max(solution), np.max(prediction))))\n solution_binary = np.zeros((len(solution), max_class + 1))\n prediction_binary = np.zeros((len(prediction), max_class + 1))\n for i in range(len(solution)):\n solution_binary[i, int(solution[i])] = 1\n prediction_binary[i, int(prediction[i])] = 1\n solution = solution_binary\n prediction = prediction_binary\n\n elif y_type == 'multilabel-indicator':\n solution = solution.toarray()\n prediction = prediction.toarray()\n else:\n raise NotImplementedError(f'bac_metric does not support task type {y_type}')\n\n fn = np.sum(np.multiply(solution, (1 - prediction)), axis=0, dtype=float)\n tp = np.sum(np.multiply(solution, prediction), axis=0, dtype=float)\n # Bounding to avoid division by 0\n eps = 1e-15\n tp = np.maximum(eps, tp)\n pos_num = np.maximum(eps, tp + fn)\n tpr = tp / pos_num # true positive rate (sensitivity)\n\n if y_type in ('binary', 'multilabel-indicator'):\n tn = np.sum(\n np.multiply((1 - solution), (1 - prediction)),\n axis=0, dtype=float\n )\n fp = np.sum(\n np.multiply((1 - solution), prediction),\n axis=0, dtype=float\n )\n tn = np.maximum(eps, tn)\n neg_num = np.maximum(eps, tn + fp)\n tnr = tn 
/ neg_num # true negative rate (specificity)\n bac = 0.5 * (tpr + tnr)\n elif y_type == 'multiclass':\n bac = tpr\n else:\n raise ValueError(y_type)\n\n return np.mean(bac) # average over all classes\n\n\ndef pac_score(solution, prediction):\n \"\"\"\n Probabilistic Accuracy based on log_loss metric.\n We assume the solution is in {0, 1} and prediction in [0, 1].\n Otherwise, run normalize_array.\n :param solution:\n :param prediction:\n :param task:\n :return:\n \"\"\"\n\n def normalize_array(solution, prediction):\n \"\"\"\n Use min and max of solution as scaling factors to normalize prediction,\n then threshold it to [0, 1].\n Binarize solution to {0, 1}. This allows applying classification\n scores to all cases. In principle, this should not do anything to\n properly formatted classification inputs and outputs.\n :param solution:\n :param prediction:\n :return:\n \"\"\"\n # Binarize solution\n sol = np.ravel(solution) # convert to 1-d array\n maxi = np.nanmax(sol[np.isfinite(sol)])\n mini = np.nanmin(sol[np.isfinite(sol)])\n if maxi == mini:\n logger.debug('Warning: cannot normalize array')\n return [solution, prediction]\n diff = maxi - mini\n mid = (maxi + mini) / 2.\n\n solution[solution >= mid] = 1\n solution[solution < mid] = 0\n # Normalize and threshold predictions (takes effect only if solution not\n # in {0, 1})\n\n prediction -= float(mini)\n prediction /= float(diff)\n\n # and if predictions exceed the bounds [0, 1]\n prediction[prediction > 1] = 1\n prediction[prediction < 0] = 0\n # Make probabilities smoother\n # new_prediction = np.power(new_prediction, (1./10))\n\n return [solution, prediction]\n\n def log_loss(solution, prediction, task):\n \"\"\"Log loss for binary and multiclass.\"\"\"\n [sample_num, label_num] = solution.shape\n # Lower gives problems with float32!\n eps = 0.00000003\n\n if (task == 'multiclass') and (label_num > 1):\n # Make sure the lines add up to one for multi-class classification\n norma = np.sum(prediction, axis=1)\n for k in range(sample_num):\n prediction[k, :] /= np.maximum(norma[k], eps)\n\n sample_num = solution.shape[0]\n for i in range(sample_num):\n j = np.argmax(solution[i, :])\n solution[i, :] = 0\n solution[i, j] = 1\n\n solution = solution.astype(np.int32, copy=False)\n # For the base prediction, this solution is ridiculous in the\n # multi-label case\n\n # Bounding of predictions to avoid log(0),1/0,...\n prediction = np.minimum(1 - eps, np.maximum(eps, prediction))\n # Compute the log loss\n pos_class_log_loss = -np.mean(solution * np.log(prediction), axis=0)\n if (task != 'multiclass') or (label_num == 1):\n # The multi-label case is a bunch of binary problems.\n # The second class is the negative class for each column.\n neg_class_log_loss = -np.mean(\n (1 - solution) * np.log(1 - prediction),\n axis=0\n )\n log_loss = pos_class_log_loss + neg_class_log_loss\n # Each column is an independent problem, so we average.\n # The probabilities in one line do not add up to one.\n # log_loss = mvmean(log_loss)\n # print('binary {}'.format(log_loss))\n # In the multilabel case, the right thing i to AVERAGE not sum\n # We return all the scores so we can normalize correctly later on\n else:\n # For the multiclass case the probabilities in one line add up one.\n log_loss = pos_class_log_loss\n # We sum the contributions of the columns.\n log_loss = np.sum(log_loss)\n # print('multiclass {}'.format(log_loss))\n return log_loss\n\n def prior_log_loss(frac_pos, task):\n \"\"\"Baseline log loss.\n For multiple classes ot labels return the 
values for each column\n \"\"\"\n eps = 1e-15\n frac_pos_ = np.maximum(eps, frac_pos)\n if task != 'multiclass': # binary case\n frac_neg = 1 - frac_pos\n frac_neg_ = np.maximum(eps, frac_neg)\n pos_class_log_loss_ = -frac_pos * np.log(frac_pos_)\n neg_class_log_loss_ = -frac_neg * np.log(frac_neg_)\n base_log_loss = pos_class_log_loss_ + neg_class_log_loss_\n # base_log_loss = mvmean(base_log_loss)\n # print('binary {}'.format(base_log_loss))\n # In the multilabel case, the right thing i to AVERAGE not sum\n # We return all the scores so we can normalize correctly later on\n else: # multiclass case\n fp = frac_pos_ / sum(frac_pos_) # Need to renormalize the lines in multiclass case\n # Only ONE label is 1 in the multiclass case active for each line\n pos_class_log_loss_ = -frac_pos * np.log(fp)\n base_log_loss = np.sum(pos_class_log_loss_)\n return base_log_loss\n\n y_type = type_of_target(solution)\n\n if y_type == 'binary':\n if len(solution.shape) == 1:\n solution = solution.reshape((-1, 1))\n if len(prediction.shape) == 1:\n prediction = prediction.reshape((-1, 1))\n if len(prediction.shape) == 2:\n if prediction.shape[1] > 2:\n raise ValueError(f'A prediction array with probability values '\n f'for {prediction.shape[1]} classes is not a binary '\n f'classification problem')\n # Prediction will be copied into a new binary array - no copy\n prediction = prediction[:, 1].reshape((-1, 1))\n else:\n raise ValueError(f'Invalid prediction shape {prediction.shape}')\n\n elif y_type == 'multiclass':\n if len(solution.shape) == 2:\n if solution.shape[1] > 1:\n raise ValueError(f'Solution array must only contain one class '\n f'label, but contains {solution.shape[1]}')\n elif len(solution.shape) == 1:\n pass\n else:\n raise ValueError('Solution.shape %s' % solution.shape)\n\n # Need to create a multiclass solution and a multiclass predictions\n max_class = int(np.max((np.max(solution), np.max(prediction))))\n solution_binary = np.zeros((len(solution), max_class + 1))\n for i in range(len(solution)):\n solution_binary[i, int(solution[i])] = 1\n solution = solution_binary\n\n elif y_type == 'multilabel-indicator':\n solution = solution.copy()\n\n else:\n raise NotImplementedError(f'pac_score does not support task {y_type}')\n\n solution, prediction = normalize_array(solution, prediction.copy())\n\n sample_num, _ = solution.shape\n\n eps = 1e-7\n # Compute the base log loss (using the prior probabilities)\n pos_num = 1. 
* np.sum(solution, axis=0, dtype=float) # float conversion!\n frac_pos = pos_num / sample_num # prior proba of positive class\n the_base_log_loss = prior_log_loss(frac_pos, y_type)\n the_log_loss = log_loss(solution, prediction, y_type)\n\n # Exponentiate to turn into an accuracy-like score.\n # In the multi-label case, we need to average AFTER taking the exp\n # because it is an NL operation\n pac = np.mean(np.exp(-the_log_loss))\n base_pac = np.mean(np.exp(-the_base_log_loss))\n # Normalize: 0 for random, 1 for perfect\n score = (pac - base_pac) / np.maximum(eps, (1 - base_pac))\n\n return score\n", "path": "autogluon/utils/tabular/metrics/classification_metrics.py"}], "after_files": [{"content": "import logging\n\nimport numpy as np\nfrom sklearn.metrics.classification import _check_targets, type_of_target\n\nlogger = logging.getLogger(__name__)\n\n\ndef balanced_accuracy(solution, prediction):\n y_type, solution, prediction = _check_targets(solution, prediction)\n\n if y_type not in [\"binary\", \"multiclass\", 'multilabel-indicator']:\n raise ValueError(f\"{y_type} is not supported\")\n\n if y_type == 'binary':\n # Do not transform into any multiclass representation\n pass\n\n elif y_type == 'multiclass':\n n = len(solution)\n unique_sol, encoded_sol = np.unique(solution, return_inverse=True)\n unique_pred, encoded_pred = np.unique(prediction, return_inverse=True)\n classes = np.unique(np.concatenate((unique_sol, unique_pred)))\n map_sol = np.array([np.where(classes==c)[0][0] for c in unique_sol])\n map_pred = np.array([np.where(classes==c)[0][0] for c in unique_pred])\n # one hot encoding\n sol_ohe = np.zeros((n, len(classes)))\n pred_ohe = np.zeros((n, len(classes)))\n sol_ohe[np.arange(n), map_sol[encoded_sol]] = 1\n pred_ohe[np.arange(n), map_pred[encoded_pred]] = 1\n solution = sol_ohe\n prediction = pred_ohe\n\n elif y_type == 'multilabel-indicator':\n solution = solution.toarray()\n prediction = prediction.toarray()\n else:\n raise NotImplementedError(f'bac_metric does not support task type {y_type}')\n\n fn = np.sum(np.multiply(solution, (1 - prediction)), axis=0, dtype=float)\n tp = np.sum(np.multiply(solution, prediction), axis=0, dtype=float)\n # Bounding to avoid division by 0\n eps = 1e-15\n tp = np.maximum(eps, tp)\n pos_num = np.maximum(eps, tp + fn)\n tpr = tp / pos_num # true positive rate (sensitivity)\n\n if y_type in ('binary', 'multilabel-indicator'):\n tn = np.sum(\n np.multiply((1 - solution), (1 - prediction)),\n axis=0, dtype=float\n )\n fp = np.sum(\n np.multiply((1 - solution), prediction),\n axis=0, dtype=float\n )\n tn = np.maximum(eps, tn)\n neg_num = np.maximum(eps, tn + fp)\n tnr = tn / neg_num # true negative rate (specificity)\n bac = 0.5 * (tpr + tnr)\n elif y_type == 'multiclass':\n bac = tpr\n else:\n raise ValueError(y_type)\n\n return np.mean(bac) # average over all classes\n\n\ndef pac_score(solution, prediction):\n \"\"\"\n Probabilistic Accuracy based on log_loss metric.\n We assume the solution is in {0, 1} and prediction in [0, 1].\n Otherwise, run normalize_array.\n :param solution:\n :param prediction:\n :param task:\n :return:\n \"\"\"\n\n def normalize_array(solution, prediction):\n \"\"\"\n Use min and max of solution as scaling factors to normalize prediction,\n then threshold it to [0, 1].\n Binarize solution to {0, 1}. This allows applying classification\n scores to all cases. 
In principle, this should not do anything to\n properly formatted classification inputs and outputs.\n :param solution:\n :param prediction:\n :return:\n \"\"\"\n # Binarize solution\n sol = np.ravel(solution) # convert to 1-d array\n maxi = np.nanmax(sol[np.isfinite(sol)])\n mini = np.nanmin(sol[np.isfinite(sol)])\n if maxi == mini:\n logger.debug('Warning: cannot normalize array')\n return [solution, prediction]\n diff = maxi - mini\n mid = (maxi + mini) / 2.\n\n solution[solution >= mid] = 1\n solution[solution < mid] = 0\n # Normalize and threshold predictions (takes effect only if solution not\n # in {0, 1})\n\n prediction -= float(mini)\n prediction /= float(diff)\n\n # and if predictions exceed the bounds [0, 1]\n prediction[prediction > 1] = 1\n prediction[prediction < 0] = 0\n # Make probabilities smoother\n # new_prediction = np.power(new_prediction, (1./10))\n\n return [solution, prediction]\n\n def log_loss(solution, prediction, task):\n \"\"\"Log loss for binary and multiclass.\"\"\"\n [sample_num, label_num] = solution.shape\n # Lower gives problems with float32!\n eps = 0.00000003\n\n if (task == 'multiclass') and (label_num > 1):\n # Make sure the lines add up to one for multi-class classification\n norma = np.sum(prediction, axis=1)\n for k in range(sample_num):\n prediction[k, :] /= np.maximum(norma[k], eps)\n\n sample_num = solution.shape[0]\n for i in range(sample_num):\n j = np.argmax(solution[i, :])\n solution[i, :] = 0\n solution[i, j] = 1\n\n solution = solution.astype(np.int32, copy=False)\n # For the base prediction, this solution is ridiculous in the\n # multi-label case\n\n # Bounding of predictions to avoid log(0),1/0,...\n prediction = np.minimum(1 - eps, np.maximum(eps, prediction))\n # Compute the log loss\n pos_class_log_loss = -np.mean(solution * np.log(prediction), axis=0)\n if (task != 'multiclass') or (label_num == 1):\n # The multi-label case is a bunch of binary problems.\n # The second class is the negative class for each column.\n neg_class_log_loss = -np.mean(\n (1 - solution) * np.log(1 - prediction),\n axis=0\n )\n log_loss = pos_class_log_loss + neg_class_log_loss\n # Each column is an independent problem, so we average.\n # The probabilities in one line do not add up to one.\n # log_loss = mvmean(log_loss)\n # print('binary {}'.format(log_loss))\n # In the multilabel case, the right thing i to AVERAGE not sum\n # We return all the scores so we can normalize correctly later on\n else:\n # For the multiclass case the probabilities in one line add up one.\n log_loss = pos_class_log_loss\n # We sum the contributions of the columns.\n log_loss = np.sum(log_loss)\n # print('multiclass {}'.format(log_loss))\n return log_loss\n\n def prior_log_loss(frac_pos, task):\n \"\"\"Baseline log loss.\n For multiple classes ot labels return the values for each column\n \"\"\"\n eps = 1e-15\n frac_pos_ = np.maximum(eps, frac_pos)\n if task != 'multiclass': # binary case\n frac_neg = 1 - frac_pos\n frac_neg_ = np.maximum(eps, frac_neg)\n pos_class_log_loss_ = -frac_pos * np.log(frac_pos_)\n neg_class_log_loss_ = -frac_neg * np.log(frac_neg_)\n base_log_loss = pos_class_log_loss_ + neg_class_log_loss_\n # base_log_loss = mvmean(base_log_loss)\n # print('binary {}'.format(base_log_loss))\n # In the multilabel case, the right thing i to AVERAGE not sum\n # We return all the scores so we can normalize correctly later on\n else: # multiclass case\n fp = frac_pos_ / sum(frac_pos_) # Need to renormalize the lines in multiclass case\n # Only ONE label is 1 in the 
multiclass case active for each line\n pos_class_log_loss_ = -frac_pos * np.log(fp)\n base_log_loss = np.sum(pos_class_log_loss_)\n return base_log_loss\n\n y_type = type_of_target(solution)\n\n if y_type == 'binary':\n if len(solution.shape) == 1:\n solution = solution.reshape((-1, 1))\n if len(prediction.shape) == 1:\n prediction = prediction.reshape((-1, 1))\n if len(prediction.shape) == 2:\n if prediction.shape[1] > 2:\n raise ValueError(f'A prediction array with probability values '\n f'for {prediction.shape[1]} classes is not a binary '\n f'classification problem')\n # Prediction will be copied into a new binary array - no copy\n prediction = prediction[:, 1].reshape((-1, 1))\n else:\n raise ValueError(f'Invalid prediction shape {prediction.shape}')\n\n elif y_type == 'multiclass':\n if len(solution.shape) == 2:\n if solution.shape[1] > 1:\n raise ValueError(f'Solution array must only contain one class '\n f'label, but contains {solution.shape[1]}')\n elif len(solution.shape) == 1:\n pass\n else:\n raise ValueError('Solution.shape %s' % solution.shape)\n\n # Need to create a multiclass solution and a multiclass predictions\n max_class = int(np.max((np.max(solution), np.max(prediction))))\n solution_binary = np.zeros((len(solution), max_class + 1))\n for i in range(len(solution)):\n solution_binary[i, int(solution[i])] = 1\n solution = solution_binary\n\n elif y_type == 'multilabel-indicator':\n solution = solution.copy()\n\n else:\n raise NotImplementedError(f'pac_score does not support task {y_type}')\n\n solution, prediction = normalize_array(solution, prediction.copy())\n\n sample_num, _ = solution.shape\n\n eps = 1e-7\n # Compute the base log loss (using the prior probabilities)\n pos_num = 1. * np.sum(solution, axis=0, dtype=float) # float conversion!\n frac_pos = pos_num / sample_num # prior proba of positive class\n the_base_log_loss = prior_log_loss(frac_pos, y_type)\n the_log_loss = log_loss(solution, prediction, y_type)\n\n # Exponentiate to turn into an accuracy-like score.\n # In the multi-label case, we need to average AFTER taking the exp\n # because it is an NL operation\n pac = np.mean(np.exp(-the_log_loss))\n base_pac = np.mean(np.exp(-the_base_log_loss))\n # Normalize: 0 for random, 1 for perfect\n score = (pac - base_pac) / np.maximum(eps, (1 - base_pac))\n\n return score\n", "path": "autogluon/utils/tabular/metrics/classification_metrics.py"}]} | 4,039 | 422 |
gh_patches_debug_8703 | rasdani/github-patches | git_diff | svthalia__concrexit-1836 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DeviceListView permission not checked
### Describe the bug
The `DeviceListView` of `api/v2` has a `IsAuthenticatedOwnerOrReadOnly` permission which is never checked as `get_object` is not used in the view.
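
For context: DRF only evaluates `has_object_permission` when a view calls `check_object_permissions(request, obj)`, which the generic views do inside `get_object()`; list and create flows never call `get_object()`, so the hook is silently skipped. A minimal sketch of such a permission class (illustrative only, not necessarily the project's exact implementation):

```python
from rest_framework import permissions


class IsAuthenticatedOwnerOrReadOnly(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        # Only reached via check_object_permissions(), i.e. from get_object() in
        # retrieve/update/destroy flows. ListAPIView and CreateAPIView never call
        # get_object(), so this method is never invoked on the list endpoint.
        if request.method in permissions.SAFE_METHODS:
            return True
        return obj.user == request.user
```

On the list endpoint the ownership guarantee instead comes from `get_queryset()` filtering on `request.user`, which makes the object-level permission redundant there.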
### How to reproduce
Steps to reproduce the behaviour:
1. Set a breakpoint in the `IsAuthenticatedOwnerOrReadOnly` class
2. Enable the debugger
3. See that the `has_object_permission` method is not called on a request to the corresponding endpoint
--- END ISSUE ---
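For background on the mechanism: in Django REST Framework, `has_object_permission` is only invoked when a view calls `get_object()` (which runs the object-permission check); a plain list/create view never does, so only `has_permission` is evaluated. The sketch below is a simplified, framework-free stand-in for that flow — the class and dict shapes are illustrative, not code from the project:

```python
class IsAuthenticatedOwnerOrReadOnly:
    """Simplified stand-in for the DRF permission named in the issue."""

    def has_permission(self, request, view):
        print("has_permission checked")            # runs for every request
        return True

    def has_object_permission(self, request, view, obj):
        print("has_object_permission checked")     # runs only via get_object()
        return obj.get("user") == request.get("user")


class DeviceListView:
    """List views check request-level permissions but never fetch one object."""

    permission_classes = [IsAuthenticatedOwnerOrReadOnly]

    def get(self, request, devices):
        for perm_cls in self.permission_classes:
            assert perm_cls().has_permission(request, self)
        return devices                              # get_object() is never called


class DeviceDetailView(DeviceListView):
    """Detail views call get_object(), which triggers the object-level check."""

    def get_object(self, request, device):
        for perm_cls in self.permission_classes:
            assert perm_cls().has_object_permission(request, self, device)
        return device


request = {"user": "alice"}
DeviceListView().get(request, [{"user": "alice"}, {"user": "bob"}])  # only has_permission runs
DeviceDetailView().get_object(request, {"user": "alice"})            # both checks run
```

Because the list view never reaches `has_object_permission`, dropping the owner permission from `DeviceListView` (as the patch below does) removes dead code rather than loosening access control — its `get_queryset` already filters to `request.user`.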
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/pushnotifications/api/v2/views.py`
Content:
```
1 from django.utils.translation import get_language_from_request
2 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
3 from rest_framework.filters import OrderingFilter
4 from rest_framework.generics import (
5 ListAPIView,
6 RetrieveAPIView,
7 CreateAPIView,
8 UpdateAPIView,
9 )
10
11 from pushnotifications.api.v2.filters import CategoryFilter
12 from pushnotifications.api.v2.permissions import IsAuthenticatedOwnerOrReadOnly
13 from pushnotifications.api.v2.serializers import (
14 DeviceSerializer,
15 MessageSerializer,
16 CategorySerializer,
17 )
18 from pushnotifications.models import Device, Category, Message
19 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
20
21
22 class DeviceListView(ListAPIView, CreateAPIView):
23 """Returns an overview of all devices that are owner by the user."""
24
25 permission_classes = [
26 IsAuthenticatedOrTokenHasScopeForMethod,
27 IsAuthenticatedOwnerOrReadOnly,
28 ]
29 serializer_class = DeviceSerializer
30 queryset = Device.objects.all()
31 required_scopes_per_method = {
32 "GET": ["pushnotifications:read"],
33 "POST": ["pushnotifications:write"],
34 }
35
36 def get_queryset(self):
37 if self.request.user:
38 return Device.objects.filter(user=self.request.user)
39 return super().get_queryset()
40
41 def perform_create(self, serializer):
42 language = get_language_from_request(self.request)
43
44 try:
45 serializer.instance = Device.objects.get(
46 user=self.request.user,
47 registration_id=serializer.validated_data["registration_id"],
48 )
49 except Device.DoesNotExist:
50 pass
51
52 data = serializer.validated_data
53 categories = [c.pk for c in Category.objects.all()]
54 if "receive_category" in data and len(data["receive_category"]) > 0:
55 categories = data["receive_category"] + ["general"]
56
57 serializer.save(
58 user=self.request.user, language=language, receive_category=categories
59 )
60
61
62 class DeviceDetailView(RetrieveAPIView, UpdateAPIView):
63 """Returns details of a device."""
64
65 permission_classes = [
66 IsAuthenticatedOrTokenHasScope,
67 IsAuthenticatedOwnerOrReadOnly,
68 ]
69 serializer_class = DeviceSerializer
70 required_scopes = ["pushnotifications:read", "pushnotifications:write"]
71 queryset = Device.objects.all()
72
73 def perform_update(self, serializer):
74 serializer.save(user=self.request.user)
75
76
77 class CategoryListView(ListAPIView):
78 """Returns an overview of all available categories for push notifications."""
79
80 serializer_class = CategorySerializer
81 queryset = Category.objects.all()
82 required_scopes = ["pushnotifications:read"]
83
84
85 class MessageListView(ListAPIView):
86 """Returns a list of message sent to the user."""
87
88 serializer_class = MessageSerializer
89 required_scopes = ["pushnotifications:read"]
90 permission_classes = [
91 IsAuthenticatedOrTokenHasScope,
92 ]
93 filter_backends = (OrderingFilter, CategoryFilter)
94 ordering_fields = ("sent",)
95
96 def get_queryset(self):
97 if self.request.user:
98 return Message.all_objects.filter(users=self.request.user)
99 return Message.all_objects.all()
100
101
102 class MessageDetailView(RetrieveAPIView):
103 """Returns a message."""
104
105 serializer_class = MessageSerializer
106 required_scopes = ["pushnotifications:read"]
107 permission_classes = [
108 IsAuthenticatedOrTokenHasScope,
109 ]
110
111 def get_queryset(self):
112 if self.request.user:
113 return Message.all_objects.filter(users=self.request.user)
114 return Message.all_objects.all()
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/pushnotifications/api/v2/views.py b/website/pushnotifications/api/v2/views.py
--- a/website/pushnotifications/api/v2/views.py
+++ b/website/pushnotifications/api/v2/views.py
@@ -22,10 +22,7 @@
class DeviceListView(ListAPIView, CreateAPIView):
"""Returns an overview of all devices that are owner by the user."""
- permission_classes = [
- IsAuthenticatedOrTokenHasScopeForMethod,
- IsAuthenticatedOwnerOrReadOnly,
- ]
+ permission_classes = [IsAuthenticatedOrTokenHasScopeForMethod]
serializer_class = DeviceSerializer
queryset = Device.objects.all()
required_scopes_per_method = {
| {"golden_diff": "diff --git a/website/pushnotifications/api/v2/views.py b/website/pushnotifications/api/v2/views.py\n--- a/website/pushnotifications/api/v2/views.py\n+++ b/website/pushnotifications/api/v2/views.py\n@@ -22,10 +22,7 @@\n class DeviceListView(ListAPIView, CreateAPIView):\n \"\"\"Returns an overview of all devices that are owner by the user.\"\"\"\n \n- permission_classes = [\n- IsAuthenticatedOrTokenHasScopeForMethod,\n- IsAuthenticatedOwnerOrReadOnly,\n- ]\n+ permission_classes = [IsAuthenticatedOrTokenHasScopeForMethod]\n serializer_class = DeviceSerializer\n queryset = Device.objects.all()\n required_scopes_per_method = {\n", "issue": "DeviceListView permission not checked\n### Describe the bug\r\nThe `DeviceListView` of `api/v2` has a `IsAuthenticatedOwnerOrReadOnly` permission which is never checked as `get_object` is not used in the view.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Set a breakpoint in the `IsAuthenticatedOwnerOrReadOnly` class\r\n2. Enable the debugger\r\n3. See that the `has_object_permission` method is not called on a request to the corresponding endpoint\r\n\n", "before_files": [{"content": "from django.utils.translation import get_language_from_request\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n CreateAPIView,\n UpdateAPIView,\n)\n\nfrom pushnotifications.api.v2.filters import CategoryFilter\nfrom pushnotifications.api.v2.permissions import IsAuthenticatedOwnerOrReadOnly\nfrom pushnotifications.api.v2.serializers import (\n DeviceSerializer,\n MessageSerializer,\n CategorySerializer,\n)\nfrom pushnotifications.models import Device, Category, Message\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass DeviceListView(ListAPIView, CreateAPIView):\n \"\"\"Returns an overview of all devices that are owner by the user.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n IsAuthenticatedOwnerOrReadOnly,\n ]\n serializer_class = DeviceSerializer\n queryset = Device.objects.all()\n required_scopes_per_method = {\n \"GET\": [\"pushnotifications:read\"],\n \"POST\": [\"pushnotifications:write\"],\n }\n\n def get_queryset(self):\n if self.request.user:\n return Device.objects.filter(user=self.request.user)\n return super().get_queryset()\n\n def perform_create(self, serializer):\n language = get_language_from_request(self.request)\n\n try:\n serializer.instance = Device.objects.get(\n user=self.request.user,\n registration_id=serializer.validated_data[\"registration_id\"],\n )\n except Device.DoesNotExist:\n pass\n\n data = serializer.validated_data\n categories = [c.pk for c in Category.objects.all()]\n if \"receive_category\" in data and len(data[\"receive_category\"]) > 0:\n categories = data[\"receive_category\"] + [\"general\"]\n\n serializer.save(\n user=self.request.user, language=language, receive_category=categories\n )\n\n\nclass DeviceDetailView(RetrieveAPIView, UpdateAPIView):\n \"\"\"Returns details of a device.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n IsAuthenticatedOwnerOrReadOnly,\n ]\n serializer_class = DeviceSerializer\n required_scopes = [\"pushnotifications:read\", \"pushnotifications:write\"]\n queryset = Device.objects.all()\n\n def perform_update(self, serializer):\n serializer.save(user=self.request.user)\n\n\nclass CategoryListView(ListAPIView):\n 
\"\"\"Returns an overview of all available categories for push notifications.\"\"\"\n\n serializer_class = CategorySerializer\n queryset = Category.objects.all()\n required_scopes = [\"pushnotifications:read\"]\n\n\nclass MessageListView(ListAPIView):\n \"\"\"Returns a list of message sent to the user.\"\"\"\n\n serializer_class = MessageSerializer\n required_scopes = [\"pushnotifications:read\"]\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n filter_backends = (OrderingFilter, CategoryFilter)\n ordering_fields = (\"sent\",)\n\n def get_queryset(self):\n if self.request.user:\n return Message.all_objects.filter(users=self.request.user)\n return Message.all_objects.all()\n\n\nclass MessageDetailView(RetrieveAPIView):\n \"\"\"Returns a message.\"\"\"\n\n serializer_class = MessageSerializer\n required_scopes = [\"pushnotifications:read\"]\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n\n def get_queryset(self):\n if self.request.user:\n return Message.all_objects.filter(users=self.request.user)\n return Message.all_objects.all()\n", "path": "website/pushnotifications/api/v2/views.py"}], "after_files": [{"content": "from django.utils.translation import get_language_from_request\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n CreateAPIView,\n UpdateAPIView,\n)\n\nfrom pushnotifications.api.v2.filters import CategoryFilter\nfrom pushnotifications.api.v2.permissions import IsAuthenticatedOwnerOrReadOnly\nfrom pushnotifications.api.v2.serializers import (\n DeviceSerializer,\n MessageSerializer,\n CategorySerializer,\n)\nfrom pushnotifications.models import Device, Category, Message\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass DeviceListView(ListAPIView, CreateAPIView):\n \"\"\"Returns an overview of all devices that are owner by the user.\"\"\"\n\n permission_classes = [IsAuthenticatedOrTokenHasScopeForMethod]\n serializer_class = DeviceSerializer\n queryset = Device.objects.all()\n required_scopes_per_method = {\n \"GET\": [\"pushnotifications:read\"],\n \"POST\": [\"pushnotifications:write\"],\n }\n\n def get_queryset(self):\n if self.request.user:\n return Device.objects.filter(user=self.request.user)\n return super().get_queryset()\n\n def perform_create(self, serializer):\n language = get_language_from_request(self.request)\n\n try:\n serializer.instance = Device.objects.get(\n user=self.request.user,\n registration_id=serializer.validated_data[\"registration_id\"],\n )\n except Device.DoesNotExist:\n pass\n\n data = serializer.validated_data\n categories = [c.pk for c in Category.objects.all()]\n if \"receive_category\" in data and len(data[\"receive_category\"]) > 0:\n categories = data[\"receive_category\"] + [\"general\"]\n\n serializer.save(\n user=self.request.user, language=language, receive_category=categories\n )\n\n\nclass DeviceDetailView(RetrieveAPIView, UpdateAPIView):\n \"\"\"Returns details of a device.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n IsAuthenticatedOwnerOrReadOnly,\n ]\n serializer_class = DeviceSerializer\n required_scopes = [\"pushnotifications:read\", \"pushnotifications:write\"]\n queryset = Device.objects.all()\n\n def perform_update(self, serializer):\n serializer.save(user=self.request.user)\n\n\nclass CategoryListView(ListAPIView):\n \"\"\"Returns an overview of all available categories 
for push notifications.\"\"\"\n\n serializer_class = CategorySerializer\n queryset = Category.objects.all()\n required_scopes = [\"pushnotifications:read\"]\n\n\nclass MessageListView(ListAPIView):\n \"\"\"Returns a list of message sent to the user.\"\"\"\n\n serializer_class = MessageSerializer\n required_scopes = [\"pushnotifications:read\"]\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n filter_backends = (OrderingFilter, CategoryFilter)\n ordering_fields = (\"sent\",)\n\n def get_queryset(self):\n if self.request.user:\n return Message.all_objects.filter(users=self.request.user)\n return Message.all_objects.all()\n\n\nclass MessageDetailView(RetrieveAPIView):\n \"\"\"Returns a message.\"\"\"\n\n serializer_class = MessageSerializer\n required_scopes = [\"pushnotifications:read\"]\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n\n def get_queryset(self):\n if self.request.user:\n return Message.all_objects.filter(users=self.request.user)\n return Message.all_objects.all()\n", "path": "website/pushnotifications/api/v2/views.py"}]} | 1,325 | 155 |
gh_patches_debug_20327 | rasdani/github-patches | git_diff | pypi__warehouse-10438 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use natural sort order for file listings
**What's the problem this feature will solve?**
Currently on https://pypi.org/project/lxml/4.6.3/#files, the files are listed as:
- lxml-4.6.3-cp27-cp27mu-manylinux1_x86_64.whl
- lxml-4.6.3-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl
- lxml-4.6.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl
- lxml-4.6.3-cp35-cp35m-manylinux1_i686.whl
This is because the filenames are compared as strings, and lexicographically 27 < 310 < 35.
**Describe the solution you'd like**
Use natural sorting order for filenames, similar to what we did for https://github.com/pypa/trove-classifiers/issues/56.
This _may_ also make sense for the simple pages, where it would be a nice-to-have when a human looks at the page.
--- END ISSUE ---
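The difference between lexicographic and natural ordering is easy to reproduce with the `natsort` package (the same dependency the fix below imports); the list here is an assumed, shortened subset of the filenames above:

```python
from natsort import natsorted

files = [
    "lxml-4.6.3-cp27-cp27mu-manylinux1_x86_64.whl",
    "lxml-4.6.3-cp310-cp310-manylinux_2_12_i686.whl",
    "lxml-4.6.3-cp35-cp35m-manylinux1_i686.whl",
]

# Plain string comparison: '27' < '310' < '35', the ordering the issue reports.
print(sorted(files))

# Natural sort treats digit runs as numbers, so cp27 < cp35 < cp310.
print(natsorted(files))
```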
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/packaging/views.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
14 from pyramid.view import view_config
15 from sqlalchemy.orm.exc import NoResultFound
16
17 from warehouse.accounts.models import User
18 from warehouse.cache.origin import origin_cache
19 from warehouse.packaging.models import Project, Release, Role
20 from warehouse.utils import readme
21
22
23 @view_config(
24 route_name="packaging.project",
25 context=Project,
26 renderer="packaging/detail.html",
27 decorator=[
28 origin_cache(
29 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale
30 )
31 ],
32 has_translations=True,
33 )
34 def project_detail(project, request):
35 if project.name != request.matchdict.get("name", project.name):
36 return HTTPMovedPermanently(request.current_route_path(name=project.name))
37
38 try:
39 release = (
40 request.db.query(Release)
41 .filter(Release.project == project)
42 .order_by(
43 Release.yanked,
44 Release.is_prerelease.nullslast(),
45 Release._pypi_ordering.desc(),
46 )
47 .limit(1)
48 .one()
49 )
50 except NoResultFound:
51 raise HTTPNotFound
52
53 return release_detail(release, request)
54
55
56 @view_config(
57 route_name="packaging.release",
58 context=Release,
59 renderer="packaging/detail.html",
60 decorator=[
61 origin_cache(
62 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale
63 )
64 ],
65 has_translations=True,
66 )
67 def release_detail(release, request):
68 project = release.project
69
70 # Check if the requested version is equivalent but not exactly the same as
71 # the release's version. Use `.get` because this view is used by
72 # `project_detail` and there may not be a version.
73 #
74 # This also handles the case where both the version and the project name
75 # need adjusted, and handles it in a single redirect.
76 if release.version != request.matchdict.get("version", release.version):
77 return HTTPMovedPermanently(
78 request.current_route_path(name=project.name, version=release.version)
79 )
80
81 # It's possible that the requested version was correct (or not provided),
82 # but we still need to adjust the project name.
83 if project.name != request.matchdict.get("name", project.name):
84 return HTTPMovedPermanently(request.current_route_path(name=project.name))
85
86 # Grab the rendered description if it exists, and if it doesn't, then we will render
87 # it inline.
88 # TODO: Remove the fallback to rendering inline and only support displaying the
89 # already rendered content.
90 if release.description.html:
91 description = release.description.html
92 else:
93 description = readme.render(
94 release.description.raw, release.description.content_type
95 )
96
97 # Get all of the maintainers for this project.
98 maintainers = [
99 r.user
100 for r in (
101 request.db.query(Role)
102 .join(User)
103 .filter(Role.project == project)
104 .distinct(User.username)
105 .order_by(User.username)
106 .all()
107 )
108 ]
109
110 # Get the license from both the `Classifier` and `License` metadata fields
111 license_classifiers = ", ".join(
112 c.split(" :: ")[-1] for c in release.classifiers if c.startswith("License")
113 )
114
115 # Make a best effort when the entire license text is given by using the
116 # first line only.
117 short_license = release.license.split("\n")[0] if release.license else None
118
119 if license_classifiers and short_license:
120 license = f"{license_classifiers} ({short_license})"
121 else:
122 license = license_classifiers or short_license or None
123
124 return {
125 "project": project,
126 "release": release,
127 "description": description,
128 "files": release.files.all(),
129 "latest_version": project.latest_version,
130 "all_versions": project.all_versions,
131 "maintainers": maintainers,
132 "license": license,
133 }
134
135
136 @view_config(
137 route_name="includes.edit-project-button",
138 context=Project,
139 renderer="includes/manage-project-button.html",
140 uses_session=True,
141 permission="manage:project",
142 has_translations=True,
143 )
144 def edit_project_button(project, request):
145 return {"project": project}
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py
--- a/warehouse/packaging/views.py
+++ b/warehouse/packaging/views.py
@@ -10,6 +10,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from natsort import natsorted
from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
from pyramid.view import view_config
from sqlalchemy.orm.exc import NoResultFound
@@ -125,7 +126,8 @@
"project": project,
"release": release,
"description": description,
- "files": release.files.all(),
+ # We cannot easily sort naturally in SQL, sort here and pass to template
+ "files": natsorted(release.files.all(), reverse=True, key=lambda f: f.filename),
"latest_version": project.latest_version,
"all_versions": project.all_versions,
"maintainers": maintainers,
| {"golden_diff": "diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py\n--- a/warehouse/packaging/views.py\n+++ b/warehouse/packaging/views.py\n@@ -10,6 +10,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from natsort import natsorted\n from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\n from pyramid.view import view_config\n from sqlalchemy.orm.exc import NoResultFound\n@@ -125,7 +126,8 @@\n \"project\": project,\n \"release\": release,\n \"description\": description,\n- \"files\": release.files.all(),\n+ # We cannot easily sort naturally in SQL, sort here and pass to template\n+ \"files\": natsorted(release.files.all(), reverse=True, key=lambda f: f.filename),\n \"latest_version\": project.latest_version,\n \"all_versions\": project.all_versions,\n \"maintainers\": maintainers,\n", "issue": "Use natural sort order for file listings\n**What's the problem this feature will solve?**\r\n\r\nCurrently on https://pypi.org/project/lxml/4.6.3/#files, the files are listed as:\r\n\r\n- lxml-4.6.3-cp27-cp27mu-manylinux1_x86_64.whl\r\n- lxml-4.6.3-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl\r\n- lxml-4.6.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl\r\n- lxml-4.6.3-cp35-cp35m-manylinux1_i686.whl\r\n\r\nThis is because the strings are sorted as 27 < 310 < 35, for strings.\r\n\r\n**Describe the solution you'd like**\r\n\r\nUse natural sorting order for filenames, similar to what we did for https://github.com/pypa/trove-classifiers/issues/56.\r\n\r\nThis _may_ also make sense for the simple pages, where it would be a nice-to-have when a human looks at the page.\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import Project, Release, Role\nfrom warehouse.utils import readme\n\n\n@view_config(\n route_name=\"packaging.project\",\n context=Project,\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale\n )\n ],\n has_translations=True,\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(request.current_route_path(name=project.name))\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(\n Release.yanked,\n Release.is_prerelease.nullslast(),\n Release._pypi_ordering.desc(),\n )\n .limit(1)\n .one()\n )\n except NoResultFound:\n raise HTTPNotFound\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n context=Release,\n renderer=\"packaging/detail.html\",\n 
decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale\n )\n ],\n has_translations=True,\n)\ndef release_detail(release, request):\n project = release.project\n\n # Check if the requested version is equivalent but not exactly the same as\n # the release's version. Use `.get` because this view is used by\n # `project_detail` and there may not be a version.\n #\n # This also handles the case where both the version and the project name\n # need adjusted, and handles it in a single redirect.\n if release.version != request.matchdict.get(\"version\", release.version):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name, version=release.version)\n )\n\n # It's possible that the requested version was correct (or not provided),\n # but we still need to adjust the project name.\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(request.current_route_path(name=project.name))\n\n # Grab the rendered description if it exists, and if it doesn't, then we will render\n # it inline.\n # TODO: Remove the fallback to rendering inline and only support displaying the\n # already rendered content.\n if release.description.html:\n description = release.description.html\n else:\n description = readme.render(\n release.description.raw, release.description.content_type\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n # Get the license from both the `Classifier` and `License` metadata fields\n license_classifiers = \", \".join(\n c.split(\" :: \")[-1] for c in release.classifiers if c.startswith(\"License\")\n )\n\n # Make a best effort when the entire license text is given by using the\n # first line only.\n short_license = release.license.split(\"\\n\")[0] if release.license else None\n\n if license_classifiers and short_license:\n license = f\"{license_classifiers} ({short_license})\"\n else:\n license = license_classifiers or short_license or None\n\n return {\n \"project\": project,\n \"release\": release,\n \"description\": description,\n \"files\": release.files.all(),\n \"latest_version\": project.latest_version,\n \"all_versions\": project.all_versions,\n \"maintainers\": maintainers,\n \"license\": license,\n }\n\n\n@view_config(\n route_name=\"includes.edit-project-button\",\n context=Project,\n renderer=\"includes/manage-project-button.html\",\n uses_session=True,\n permission=\"manage:project\",\n has_translations=True,\n)\ndef edit_project_button(project, request):\n return {\"project\": project}\n", "path": "warehouse/packaging/views.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom natsort import natsorted\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm.exc import 
NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import Project, Release, Role\nfrom warehouse.utils import readme\n\n\n@view_config(\n route_name=\"packaging.project\",\n context=Project,\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale\n )\n ],\n has_translations=True,\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(request.current_route_path(name=project.name))\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project)\n .order_by(\n Release.yanked,\n Release.is_prerelease.nullslast(),\n Release._pypi_ordering.desc(),\n )\n .limit(1)\n .one()\n )\n except NoResultFound:\n raise HTTPNotFound\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n context=Release,\n renderer=\"packaging/detail.html\",\n decorator=[\n origin_cache(\n 1 * 24 * 60 * 60, stale_if_error=5 * 24 * 60 * 60 # 1 day, 5 days stale\n )\n ],\n has_translations=True,\n)\ndef release_detail(release, request):\n project = release.project\n\n # Check if the requested version is equivalent but not exactly the same as\n # the release's version. Use `.get` because this view is used by\n # `project_detail` and there may not be a version.\n #\n # This also handles the case where both the version and the project name\n # need adjusted, and handles it in a single redirect.\n if release.version != request.matchdict.get(\"version\", release.version):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name, version=release.version)\n )\n\n # It's possible that the requested version was correct (or not provided),\n # but we still need to adjust the project name.\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(request.current_route_path(name=project.name))\n\n # Grab the rendered description if it exists, and if it doesn't, then we will render\n # it inline.\n # TODO: Remove the fallback to rendering inline and only support displaying the\n # already rendered content.\n if release.description.html:\n description = release.description.html\n else:\n description = readme.render(\n release.description.raw, release.description.content_type\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n # Get the license from both the `Classifier` and `License` metadata fields\n license_classifiers = \", \".join(\n c.split(\" :: \")[-1] for c in release.classifiers if c.startswith(\"License\")\n )\n\n # Make a best effort when the entire license text is given by using the\n # first line only.\n short_license = release.license.split(\"\\n\")[0] if release.license else None\n\n if license_classifiers and short_license:\n license = f\"{license_classifiers} ({short_license})\"\n else:\n license = license_classifiers or short_license or None\n\n return {\n \"project\": project,\n \"release\": release,\n \"description\": description,\n # We cannot easily sort naturally in SQL, sort here and pass to template\n \"files\": natsorted(release.files.all(), reverse=True, key=lambda f: f.filename),\n \"latest_version\": project.latest_version,\n 
\"all_versions\": project.all_versions,\n \"maintainers\": maintainers,\n \"license\": license,\n }\n\n\n@view_config(\n route_name=\"includes.edit-project-button\",\n context=Project,\n renderer=\"includes/manage-project-button.html\",\n uses_session=True,\n permission=\"manage:project\",\n has_translations=True,\n)\ndef edit_project_button(project, request):\n return {\"project\": project}\n", "path": "warehouse/packaging/views.py"}]} | 1,991 | 219 |
gh_patches_debug_29877 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-4211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
orgaccounts.py (from c7n-org) regions flag puts *id001 instead of correct value
When I try to add regions as an argument with the **orgaccounts.py** script, it incorrectly puts the region with ***id001** as a value.
`python3 ./tools/c7n_org/scripts/orgaccounts.py --active true --regions ca-central-1 -f accounts.yml`
Results:
```
- account_id: 'XXXXXXXXXXXXXXXX'
email: [email protected]
name: accountname
regions: *id001
role: arn:aws:iam::XXXXXXXXXXXXXXXX:role/OrganizationAccountAccessRole
tags:
- path:/OU
```
Expected:
```
- account_id: 'XXXXXXXXXXXXXXXX'
email: [email protected]
name: accountname
region:
- ca-central-1
role: arn:aws:iam::XXXXXXXXXXXXXXXX:role/OrganizationAccountAccessRole
tags:
- path:/OU
```
--- END ISSUE ---
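The `*id001` is a YAML alias: PyYAML emits an anchor/alias pair whenever the same Python object appears more than once in the dumped data. A plausible minimal reproduction (assuming several accounts end up sharing the one `regions` tuple produced by click's `multiple=True` option) is:

```python
import yaml

regions = ("ca-central-1",)          # click's multiple=True yields one shared tuple

accounts = [
    {"name": "account-a", "regions": regions},
    {"name": "account-b", "regions": regions},   # same object referenced again
]

# The first occurrence is written with an &id001 anchor, later ones as *id001.
print(yaml.safe_dump({"accounts": accounts}, default_flow_style=False))

# Giving every account its own copy (the patch uses list(regions)) removes the
# shared reference, so plain lists are emitted instead of aliases.
for account in accounts:
    account["regions"] = list(regions)
print(yaml.safe_dump({"accounts": accounts}, default_flow_style=False))
```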
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_org/scripts/orgaccounts.py`
Content:
```
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import print_function
16
17 import click
18 import yaml
19 import os
20 from c7n.credentials import assumed_session, SessionFactory
21
22 ROLE_TEMPLATE = "arn:aws:iam::{Id}:role/OrganizationAccountAccessRole"
23
24
25 @click.command()
26 @click.option(
27 '--role',
28 default=ROLE_TEMPLATE,
29 help="Role template for accounts in the config, defaults to %s" % ROLE_TEMPLATE)
30 @click.option('--ou', multiple=True, default=["/"],
31 help="Only export the given subtrees of an organization")
32 @click.option('-r', '--regions', multiple=True,
33 help="If specified, set regions per account in config")
34 @click.option('--assume', help="Role to assume for Credentials")
35 @click.option('--profile', help="AWS CLI Profile to use for Credentials")
36 @click.option(
37 '-f', '--output', type=click.File('w'),
38 help="File to store the generated config (default stdout)")
39 @click.option('-a', '--active', default=False, help="Get only active accounts", type=click.BOOL)
40 def main(role, ou, assume, profile, output, regions, active):
41 """Generate a c7n-org accounts config file using AWS Organizations
42
43 With c7n-org you can then run policies or arbitrary scripts across
44 accounts.
45 """
46
47 session = get_session(assume, 'c7n-org', profile)
48 client = session.client('organizations')
49 accounts = []
50 for path in ou:
51 ou = get_ou_from_path(client, path)
52 accounts.extend(get_accounts_for_ou(client, ou, active))
53
54 results = []
55 for a in accounts:
56 tags = []
57 path_parts = a['Path'].strip('/').split('/')
58 for idx, _ in enumerate(path_parts):
59 tags.append("path:/%s" % "/".join(path_parts[:idx + 1]))
60
61 ainfo = {
62 'account_id': a['Id'],
63 'email': a['Email'],
64 'name': a['Name'],
65 'tags': tags,
66 'role': role.format(**a)}
67 if regions:
68 ainfo['regions'] = regions
69 results.append(ainfo)
70
71 print(
72 yaml.safe_dump(
73 {'accounts': results},
74 default_flow_style=False),
75 file=output)
76
77
78 def get_session(role, session_name, profile):
79 region = os.environ.get('AWS_DEFAULT_REGION', 'eu-west-1')
80 if role:
81 return assumed_session(role, session_name, region=region)
82 else:
83 return SessionFactory(region, profile)()
84
85
86 def get_ou_from_path(client, path):
87 ou = client.list_roots()['Roots'][0]
88
89 if path == "/":
90 ou['Path'] = path
91 return ou
92
93 ou_pager = client.get_paginator('list_organizational_units_for_parent')
94 for part in path.strip('/').split('/'):
95 found = False
96 for page in ou_pager.paginate(ParentId=ou['Id']):
97 for child in page.get('OrganizationalUnits'):
98 if child['Name'] == part:
99 found = True
100 ou = child
101 break
102 if found:
103 break
104 if found is False:
105 raise ValueError(
106 "No OU named:%r found in path: %s" % (
107 path, path))
108 ou['Path'] = path
109 return ou
110
111
112 def get_sub_ous(client, ou):
113 results = [ou]
114 ou_pager = client.get_paginator('list_organizational_units_for_parent')
115 for sub_ou in ou_pager.paginate(
116 ParentId=ou['Id']).build_full_result().get(
117 'OrganizationalUnits'):
118 sub_ou['Path'] = "/%s/%s" % (ou['Path'].strip('/'), sub_ou['Name'])
119 results.extend(get_sub_ous(client, sub_ou))
120 return results
121
122
123 def get_accounts_for_ou(client, ou, active, recursive=True):
124 results = []
125 ous = [ou]
126 if recursive:
127 ous = get_sub_ous(client, ou)
128
129 account_pager = client.get_paginator('list_accounts_for_parent')
130 for ou in ous:
131 for a in account_pager.paginate(
132 ParentId=ou['Id']).build_full_result().get(
133 'Accounts', []):
134 a['Path'] = ou['Path']
135 if active:
136 if a['Status'] == 'ACTIVE':
137 results.append(a)
138 else:
139 results.append(a)
140 return results
141
142
143 if __name__ == '__main__':
144 main()
145
```
Path: `tools/c7n_org/scripts/azuresubs.py`
Content:
```
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import print_function
16
17 import yaml
18 import click
19 from c7n_azure.session import Session
20 from azure.mgmt.resource.subscriptions import SubscriptionClient
21
22
23 @click.command()
24 @click.option(
25 '-f', '--output', type=click.File('w'),
26 help="File to store the generated config (default stdout)")
27 def main(output):
28 """
29 Generate a c7n-org subscriptions config file
30 """
31
32 client = SubscriptionClient(Session().get_credentials())
33 subs = [sub.serialize(True) for sub in client.subscriptions.list()]
34 results = []
35 for sub in subs:
36 sub_info = {
37 'subscription_id': sub['subscriptionId'],
38 'name': sub['displayName']
39 }
40 results.append(sub_info)
41
42 print(
43 yaml.safe_dump(
44 {'subscriptions': results},
45 default_flow_style=False),
46 file=output)
47
48
49 if __name__ == '__main__':
50 main()
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_org/scripts/azuresubs.py b/tools/c7n_org/scripts/azuresubs.py
--- a/tools/c7n_org/scripts/azuresubs.py
+++ b/tools/c7n_org/scripts/azuresubs.py
@@ -14,9 +14,9 @@
from __future__ import print_function
-import yaml
import click
from c7n_azure.session import Session
+from c7n.utils import yaml_dump
from azure.mgmt.resource.subscriptions import SubscriptionClient
@@ -39,11 +39,7 @@
}
results.append(sub_info)
- print(
- yaml.safe_dump(
- {'subscriptions': results},
- default_flow_style=False),
- file=output)
+ print(yaml_dump({'subscriptions': results}), file=output)
if __name__ == '__main__':
diff --git a/tools/c7n_org/scripts/orgaccounts.py b/tools/c7n_org/scripts/orgaccounts.py
--- a/tools/c7n_org/scripts/orgaccounts.py
+++ b/tools/c7n_org/scripts/orgaccounts.py
@@ -15,9 +15,9 @@
from __future__ import print_function
import click
-import yaml
import os
from c7n.credentials import assumed_session, SessionFactory
+from c7n.utils import yaml_dump
ROLE_TEMPLATE = "arn:aws:iam::{Id}:role/OrganizationAccountAccessRole"
@@ -65,14 +65,10 @@
'tags': tags,
'role': role.format(**a)}
if regions:
- ainfo['regions'] = regions
+ ainfo['regions'] = list(regions)
results.append(ainfo)
- print(
- yaml.safe_dump(
- {'accounts': results},
- default_flow_style=False),
- file=output)
+ print(yaml_dump({'accounts': results}), file=output)
def get_session(role, session_name, profile):
| {"golden_diff": "diff --git a/tools/c7n_org/scripts/azuresubs.py b/tools/c7n_org/scripts/azuresubs.py\n--- a/tools/c7n_org/scripts/azuresubs.py\n+++ b/tools/c7n_org/scripts/azuresubs.py\n@@ -14,9 +14,9 @@\n \n from __future__ import print_function\n \n-import yaml\n import click\n from c7n_azure.session import Session\n+from c7n.utils import yaml_dump\n from azure.mgmt.resource.subscriptions import SubscriptionClient\n \n \n@@ -39,11 +39,7 @@\n }\n results.append(sub_info)\n \n- print(\n- yaml.safe_dump(\n- {'subscriptions': results},\n- default_flow_style=False),\n- file=output)\n+ print(yaml_dump({'subscriptions': results}), file=output)\n \n \n if __name__ == '__main__':\ndiff --git a/tools/c7n_org/scripts/orgaccounts.py b/tools/c7n_org/scripts/orgaccounts.py\n--- a/tools/c7n_org/scripts/orgaccounts.py\n+++ b/tools/c7n_org/scripts/orgaccounts.py\n@@ -15,9 +15,9 @@\n from __future__ import print_function\n \n import click\n-import yaml\n import os\n from c7n.credentials import assumed_session, SessionFactory\n+from c7n.utils import yaml_dump\n \n ROLE_TEMPLATE = \"arn:aws:iam::{Id}:role/OrganizationAccountAccessRole\"\n \n@@ -65,14 +65,10 @@\n 'tags': tags,\n 'role': role.format(**a)}\n if regions:\n- ainfo['regions'] = regions\n+ ainfo['regions'] = list(regions)\n results.append(ainfo)\n \n- print(\n- yaml.safe_dump(\n- {'accounts': results},\n- default_flow_style=False),\n- file=output)\n+ print(yaml_dump({'accounts': results}), file=output)\n \n \n def get_session(role, session_name, profile):\n", "issue": "orgaccounts.py (from c7n-org) regions flag puts *id001 instead of correct value\nWhen I try to add regions as an argument with the **orgaccounts.py** script, it incorrectly puts the region with ***id001** as a value.\r\n\r\n`python3 ./tools/c7n_org/scripts/orgaccounts.py --active true --regions ca-central-1 -f accounts.yml`\r\n\r\nResults: \r\n```\r\n- account_id: 'XXXXXXXXXXXXXXXX'\r\n email: [email protected]\r\n name: accountname\r\n regions: *id001\r\n role: arn:aws:iam::XXXXXXXXXXXXXXXX:role/OrganizationAccountAccessRole\r\n tags:\r\n - path:/OU\r\n```\r\nExpected: \r\n```\r\n- account_id: 'XXXXXXXXXXXXXXXX'\r\n email: [email protected]\r\n name: accountname\r\n region:\r\n - ca-central-1\r\n role: arn:aws:iam::XXXXXXXXXXXXXXXX:role/OrganizationAccountAccessRole\r\n tags:\r\n - path:/OU\r\n```\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import print_function\n\nimport click\nimport yaml\nimport os\nfrom c7n.credentials import assumed_session, SessionFactory\n\nROLE_TEMPLATE = \"arn:aws:iam::{Id}:role/OrganizationAccountAccessRole\"\n\n\[email protected]()\[email protected](\n '--role',\n default=ROLE_TEMPLATE,\n help=\"Role template for accounts in the config, defaults to %s\" % ROLE_TEMPLATE)\[email protected]('--ou', multiple=True, default=[\"/\"],\n help=\"Only export the given subtrees of an organization\")\[email protected]('-r', '--regions', multiple=True,\n help=\"If 
specified, set regions per account in config\")\[email protected]('--assume', help=\"Role to assume for Credentials\")\[email protected]('--profile', help=\"AWS CLI Profile to use for Credentials\")\[email protected](\n '-f', '--output', type=click.File('w'),\n help=\"File to store the generated config (default stdout)\")\[email protected]('-a', '--active', default=False, help=\"Get only active accounts\", type=click.BOOL)\ndef main(role, ou, assume, profile, output, regions, active):\n \"\"\"Generate a c7n-org accounts config file using AWS Organizations\n\n With c7n-org you can then run policies or arbitrary scripts across\n accounts.\n \"\"\"\n\n session = get_session(assume, 'c7n-org', profile)\n client = session.client('organizations')\n accounts = []\n for path in ou:\n ou = get_ou_from_path(client, path)\n accounts.extend(get_accounts_for_ou(client, ou, active))\n\n results = []\n for a in accounts:\n tags = []\n path_parts = a['Path'].strip('/').split('/')\n for idx, _ in enumerate(path_parts):\n tags.append(\"path:/%s\" % \"/\".join(path_parts[:idx + 1]))\n\n ainfo = {\n 'account_id': a['Id'],\n 'email': a['Email'],\n 'name': a['Name'],\n 'tags': tags,\n 'role': role.format(**a)}\n if regions:\n ainfo['regions'] = regions\n results.append(ainfo)\n\n print(\n yaml.safe_dump(\n {'accounts': results},\n default_flow_style=False),\n file=output)\n\n\ndef get_session(role, session_name, profile):\n region = os.environ.get('AWS_DEFAULT_REGION', 'eu-west-1')\n if role:\n return assumed_session(role, session_name, region=region)\n else:\n return SessionFactory(region, profile)()\n\n\ndef get_ou_from_path(client, path):\n ou = client.list_roots()['Roots'][0]\n\n if path == \"/\":\n ou['Path'] = path\n return ou\n\n ou_pager = client.get_paginator('list_organizational_units_for_parent')\n for part in path.strip('/').split('/'):\n found = False\n for page in ou_pager.paginate(ParentId=ou['Id']):\n for child in page.get('OrganizationalUnits'):\n if child['Name'] == part:\n found = True\n ou = child\n break\n if found:\n break\n if found is False:\n raise ValueError(\n \"No OU named:%r found in path: %s\" % (\n path, path))\n ou['Path'] = path\n return ou\n\n\ndef get_sub_ous(client, ou):\n results = [ou]\n ou_pager = client.get_paginator('list_organizational_units_for_parent')\n for sub_ou in ou_pager.paginate(\n ParentId=ou['Id']).build_full_result().get(\n 'OrganizationalUnits'):\n sub_ou['Path'] = \"/%s/%s\" % (ou['Path'].strip('/'), sub_ou['Name'])\n results.extend(get_sub_ous(client, sub_ou))\n return results\n\n\ndef get_accounts_for_ou(client, ou, active, recursive=True):\n results = []\n ous = [ou]\n if recursive:\n ous = get_sub_ous(client, ou)\n\n account_pager = client.get_paginator('list_accounts_for_parent')\n for ou in ous:\n for a in account_pager.paginate(\n ParentId=ou['Id']).build_full_result().get(\n 'Accounts', []):\n a['Path'] = ou['Path']\n if active:\n if a['Status'] == 'ACTIVE':\n results.append(a)\n else:\n results.append(a)\n return results\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/c7n_org/scripts/orgaccounts.py"}, {"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# 
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import print_function\n\nimport yaml\nimport click\nfrom c7n_azure.session import Session\nfrom azure.mgmt.resource.subscriptions import SubscriptionClient\n\n\[email protected]()\[email protected](\n '-f', '--output', type=click.File('w'),\n help=\"File to store the generated config (default stdout)\")\ndef main(output):\n \"\"\"\n Generate a c7n-org subscriptions config file\n \"\"\"\n\n client = SubscriptionClient(Session().get_credentials())\n subs = [sub.serialize(True) for sub in client.subscriptions.list()]\n results = []\n for sub in subs:\n sub_info = {\n 'subscription_id': sub['subscriptionId'],\n 'name': sub['displayName']\n }\n results.append(sub_info)\n\n print(\n yaml.safe_dump(\n {'subscriptions': results},\n default_flow_style=False),\n file=output)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/c7n_org/scripts/azuresubs.py"}], "after_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import print_function\n\nimport click\nimport os\nfrom c7n.credentials import assumed_session, SessionFactory\nfrom c7n.utils import yaml_dump\n\nROLE_TEMPLATE = \"arn:aws:iam::{Id}:role/OrganizationAccountAccessRole\"\n\n\[email protected]()\[email protected](\n '--role',\n default=ROLE_TEMPLATE,\n help=\"Role template for accounts in the config, defaults to %s\" % ROLE_TEMPLATE)\[email protected]('--ou', multiple=True, default=[\"/\"],\n help=\"Only export the given subtrees of an organization\")\[email protected]('-r', '--regions', multiple=True,\n help=\"If specified, set regions per account in config\")\[email protected]('--assume', help=\"Role to assume for Credentials\")\[email protected]('--profile', help=\"AWS CLI Profile to use for Credentials\")\[email protected](\n '-f', '--output', type=click.File('w'),\n help=\"File to store the generated config (default stdout)\")\[email protected]('-a', '--active', default=False, help=\"Get only active accounts\", type=click.BOOL)\ndef main(role, ou, assume, profile, output, regions, active):\n \"\"\"Generate a c7n-org accounts config file using AWS Organizations\n\n With c7n-org you can then run policies or arbitrary scripts across\n accounts.\n \"\"\"\n\n session = get_session(assume, 'c7n-org', profile)\n client = session.client('organizations')\n accounts = []\n for path in ou:\n ou = get_ou_from_path(client, path)\n accounts.extend(get_accounts_for_ou(client, ou, active))\n\n results = []\n for a in accounts:\n tags = []\n path_parts = a['Path'].strip('/').split('/')\n for idx, _ in enumerate(path_parts):\n tags.append(\"path:/%s\" % \"/\".join(path_parts[:idx + 1]))\n\n ainfo = {\n 'account_id': a['Id'],\n 'email': a['Email'],\n 'name': a['Name'],\n 'tags': tags,\n 'role': role.format(**a)}\n if regions:\n ainfo['regions'] = list(regions)\n 
results.append(ainfo)\n\n print(yaml_dump({'accounts': results}), file=output)\n\n\ndef get_session(role, session_name, profile):\n region = os.environ.get('AWS_DEFAULT_REGION', 'eu-west-1')\n if role:\n return assumed_session(role, session_name, region=region)\n else:\n return SessionFactory(region, profile)()\n\n\ndef get_ou_from_path(client, path):\n ou = client.list_roots()['Roots'][0]\n\n if path == \"/\":\n ou['Path'] = path\n return ou\n\n ou_pager = client.get_paginator('list_organizational_units_for_parent')\n for part in path.strip('/').split('/'):\n found = False\n for page in ou_pager.paginate(ParentId=ou['Id']):\n for child in page.get('OrganizationalUnits'):\n if child['Name'] == part:\n found = True\n ou = child\n break\n if found:\n break\n if found is False:\n raise ValueError(\n \"No OU named:%r found in path: %s\" % (\n path, path))\n ou['Path'] = path\n return ou\n\n\ndef get_sub_ous(client, ou):\n results = [ou]\n ou_pager = client.get_paginator('list_organizational_units_for_parent')\n for sub_ou in ou_pager.paginate(\n ParentId=ou['Id']).build_full_result().get(\n 'OrganizationalUnits'):\n sub_ou['Path'] = \"/%s/%s\" % (ou['Path'].strip('/'), sub_ou['Name'])\n results.extend(get_sub_ous(client, sub_ou))\n return results\n\n\ndef get_accounts_for_ou(client, ou, active, recursive=True):\n results = []\n ous = [ou]\n if recursive:\n ous = get_sub_ous(client, ou)\n\n account_pager = client.get_paginator('list_accounts_for_parent')\n for ou in ous:\n for a in account_pager.paginate(\n ParentId=ou['Id']).build_full_result().get(\n 'Accounts', []):\n a['Path'] = ou['Path']\n if active:\n if a['Status'] == 'ACTIVE':\n results.append(a)\n else:\n results.append(a)\n return results\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/c7n_org/scripts/orgaccounts.py"}, {"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import print_function\n\nimport click\nfrom c7n_azure.session import Session\nfrom c7n.utils import yaml_dump\nfrom azure.mgmt.resource.subscriptions import SubscriptionClient\n\n\[email protected]()\[email protected](\n '-f', '--output', type=click.File('w'),\n help=\"File to store the generated config (default stdout)\")\ndef main(output):\n \"\"\"\n Generate a c7n-org subscriptions config file\n \"\"\"\n\n client = SubscriptionClient(Session().get_credentials())\n subs = [sub.serialize(True) for sub in client.subscriptions.list()]\n results = []\n for sub in subs:\n sub_info = {\n 'subscription_id': sub['subscriptionId'],\n 'name': sub['displayName']\n }\n results.append(sub_info)\n\n print(yaml_dump({'subscriptions': results}), file=output)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/c7n_org/scripts/azuresubs.py"}]} | 2,379 | 424 |
gh_patches_debug_14952 | rasdani/github-patches | git_diff | rasterio__rasterio-503 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`rio merge --co TILED=YES` not producing tiled output
Specifically in the case where its inputs are stripped, not tiled, TIFFs. The signature of this bug is a GDAL error like
```
ERROR:GDAL:CPLE_AppDefined in _TIFFVSetField:/private/tmp/combined2.tif: Bad value 600 for "TileWidth" tag
```
--- END ISSUE ---
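A likely mechanism (not confirmed by the traceback alone) is that `rio merge` copies `sources[0].profile` verbatim, so a striped input contributes `blockxsize`/`blockysize` values shaped like strips — e.g. the full 600-pixel width — which are then rejected as tile dimensions once `--co TILED=YES` is added, since GTiff tiles must be multiples of 16. The sketch below shows one hypothetical way such a profile could be sanitized; the key names mirror rasterio profiles, but the cleanup logic is illustrative, not the project's actual patch:

```python
creation_options = {"tiled": True}       # stand-in for --co TILED=YES

profile = {                              # stand-in for sources[0].profile of a striped TIFF
    "driver": "GTiff", "dtype": "uint8", "count": 3,
    "width": 600, "height": 400,
    "tiled": False, "blockxsize": 600, "blockysize": 3,
}

profile.update(**creation_options)

if profile.get("tiled"):
    for key in ("blockxsize", "blockysize"):
        # GTiff tile dimensions must be positive multiples of 16; strip-shaped
        # blocks inherited from the source are not, so drop them and let the
        # driver choose defaults.
        if profile.get(key, 16) % 16 != 0:
            profile.pop(key, None)

print(profile)   # tiled output profile without the invalid 600/3 block sizes
```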
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/merge.py`
Content:
```
1 # Merge command.
2
3 import logging
4 import math
5 import os.path
6 import warnings
7
8 import click
9 from cligj import files_inout_arg, format_opt
10
11 from .helpers import resolve_inout
12 from . import options
13 import rasterio
14 from rasterio.transform import Affine
15
16
17 @click.command(short_help="Merge a stack of raster datasets.")
18 @files_inout_arg
19 @options.output_opt
20 @format_opt
21 @options.bounds_opt
22 @options.resolution_opt
23 @click.option('--nodata', type=float, default=None,
24 help="Override nodata values defined in input datasets")
25 @click.option('--force-overwrite', '-f', 'force_overwrite', is_flag=True,
26 type=bool, default=False,
27 help="Do not prompt for confirmation before overwriting output "
28 "file")
29 @click.option('--precision', type=int, default=7,
30 help="Number of decimal places of precision in alignment of "
31 "pixels")
32 @options.creation_options
33 @click.pass_context
34 def merge(ctx, files, output, driver, bounds, res, nodata, force_overwrite,
35 precision, creation_options):
36 """Copy valid pixels from input files to an output file.
37
38 All files must have the same number of bands, data type, and
39 coordinate reference system.
40
41 Input files are merged in their listed order using the reverse
42 painter's algorithm. If the output file exists, its values will be
43 overwritten by input values.
44
45 Geospatial bounds and resolution of a new output file in the
46 units of the input file coordinate reference system may be provided
47 and are otherwise taken from the first input file.
48
49 Note: --res changed from 2 parameters in 0.25.
50 --res 0.1 0.1 => --res 0.1 (square)
51 --res 0.1 0.2 => --res 0.1 --res 0.2 (rectangular)
52 """
53
54 from rasterio.tools.merge import merge as merge_tool
55
56 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
57 logger = logging.getLogger('rio')
58
59 output, files = resolve_inout(files=files, output=output)
60
61 if os.path.exists(output) and not force_overwrite:
62 raise click.ClickException(
63 "Output exists and won't be overwritten without the "
64 "`-f` option")
65
66 sources = [rasterio.open(f) for f in files]
67 dest, output_transform = merge_tool(sources, bounds=bounds, res=res,
68 nodata=nodata, precision=precision)
69
70 profile = sources[0].profile
71 profile.pop('affine')
72 profile['transform'] = output_transform
73 profile['height'] = dest.shape[1]
74 profile['width'] = dest.shape[2]
75 profile['driver'] = driver
76 profile.update(**creation_options)
77
78 with rasterio.open(output, 'w', **profile) as dst:
79 dst.write(dest)
80
```
Path: `rasterio/rio/options.py`
Content:
```
1 """
2 Registry of common rio CLI options. See cligj for more options.
3
4 -a, --all: Use all pixels touched by features. In rio-mask, rio-rasterize
5 --as-mask/--not-as-mask: interpret band as mask or not. In rio-shapes
6 --band/--mask: use band or mask. In rio-shapes
7 --bbox:
8 -b, --bidx: band index(es) (singular or multiple value versions).
9 In rio-info, rio-sample, rio-shapes, rio-stack (different usages)
10 --bounds: bounds in world coordinates.
11 In rio-info, rio-rasterize (different usages)
12 --count: count of bands. In rio-info
13 --crop: Crop raster to extent of features. In rio-mask
14 --crs: CRS of input raster. In rio-info
15 --default-value: default for rasterized pixels. In rio-rasterize
16 --dimensions: Output width, height. In rio-rasterize
17 --dst-crs: destination CRS. In rio-transform
18 --fill: fill value for pixels not covered by features. In rio-rasterize
19 --formats: list available formats. In rio-info
20 --height: height of raster. In rio-info
21 -i, --invert: Invert mask created from features: In rio-mask
22 -j, --geojson-mask: GeoJSON for masking raster. In rio-mask
23 --lnglat: geograhpic coordinates of center of raster. In rio-info
24 --masked/--not-masked: read masked data from source file.
25 In rio-calc, rio-info
26 -m, --mode: output file mode (r, r+). In rio-insp
27 --name: input file name alias. In rio-calc
28 --nodata: nodata value. In rio-info, rio-merge (different usages)
29 --photometric: photometric interpretation. In rio-stack
30 --property: GeoJSON property to use as values for rasterize. In rio-rasterize
31 -r, --res: output resolution.
32 In rio-info, rio-rasterize (different usages. TODO: try to combine
33 usages, prefer rio-rasterize version)
34 --sampling: Inverse of sampling fraction. In rio-shapes
35 --shape: shape (width, height) of band. In rio-info
36 --src-crs: source CRS.
37 In rio-insp, rio-rasterize (different usages. TODO: consolidate usages)
38 --stats: print raster stats. In rio-inf
39 -t, --dtype: data type. In rio-calc, rio-info (different usages)
40 --width: width of raster. In rio-info
41 --with-nodata/--without-nodata: include nodata regions or not. In rio-shapes.
42 -v, --tell-me-more, --verbose
43 """
44
45
46 # TODO: move file_in_arg and file_out_arg to cligj
47
48
49 import click
50
51
52 def _cb_key_val(ctx, param, value):
53
54 """
55 click callback to validate `--opt KEY1=VAL1 --opt KEY2=VAL2` and collect
56 in a dictionary like the one below, which is what the CLI function receives.
57 If no value or `None` is received then an empty dictionary is returned.
58
59 {
60 'KEY1': 'VAL1',
61 'KEY2': 'VAL2'
62 }
63
64 Note: `==VAL` breaks this as `str.split('=', 1)` is used.
65 """
66
67 if not value:
68 return {}
69 else:
70 out = {}
71 for pair in value:
72 if '=' not in pair:
73 raise click.BadParameter("Invalid syntax for KEY=VAL arg: {}".format(pair))
74 else:
75 k, v = pair.split('=', 1)
76 out[k] = v
77
78 return out
79
80
81 # Singular input file
82 file_in_arg = click.argument(
83 'INPUT',
84 type=click.Path(exists=True, resolve_path=True))
85
86 # Singular output file
87 file_out_arg = click.argument(
88 'OUTPUT',
89 type=click.Path(resolve_path=True))
90
91 bidx_opt = click.option(
92 '-b', '--bidx',
93 type=int,
94 default=1,
95 help="Input file band index (default: 1)")
96
97 bidx_mult_opt = click.option(
98 '-b', '--bidx',
99 multiple=True,
100 help="Indexes of input file bands.")
101
102 # TODO: may be better suited to cligj
103 bounds_opt = click.option(
104 '--bounds',
105 nargs=4, type=float, default=None,
106 help='Output bounds: left bottom right top.')
107
108 dimensions_opt = click.option(
109 '--dimensions',
110 nargs=2, type=int, default=None,
111 help='Output dataset width, height in number of pixels.')
112
113 dtype_opt = click.option(
114 '-t', '--dtype',
115 type=click.Choice([
116 'ubyte', 'uint8', 'uint16', 'int16', 'uint32', 'int32',
117 'float32', 'float64']),
118 default=None,
119 help="Output data type.")
120
121 like_file_opt = click.option(
122 '--like',
123 type=click.Path(exists=True),
124 help='Raster dataset to use as a template for obtaining affine '
125 'transform (bounds and resolution), crs, data type, and driver '
126 'used to create the output.')
127
128 masked_opt = click.option(
129 '--masked/--not-masked',
130 default=True,
131 help="Evaluate expressions using masked arrays (the default) or ordinary "
132 "numpy arrays.")
133
134 output_opt = click.option(
135 '-o', '--output',
136 default=None,
137 type=click.Path(resolve_path=True),
138 help="Path to output file (optional alternative to a positional arg "
139 "for some commands).")
140
141 resolution_opt = click.option(
142 '-r', '--res',
143 multiple=True, type=float, default=None,
144 help='Output dataset resolution in units of coordinate '
145 'reference system. Pixels assumed to be square if this option '
146 'is used once, otherwise use: '
147 '--res pixel_width --res pixel_height')
148
149 creation_options = click.option(
150 '--co', 'creation_options',
151 metavar='NAME=VALUE',
152 multiple=True,
153 callback=_cb_key_val,
154 help="Driver specific creation options."
155 "See the documentation for the selected output driver for "
156 "more information.")
157
158 rgb_opt = click.option(
159 '--rgb', 'photometric',
160 flag_value='rgb',
161 default=False,
162 help="Set RGB photometric interpretation")
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py
--- a/rasterio/rio/merge.py
+++ b/rasterio/rio/merge.py
@@ -73,6 +73,7 @@
profile['height'] = dest.shape[1]
profile['width'] = dest.shape[2]
profile['driver'] = driver
+
profile.update(**creation_options)
with rasterio.open(output, 'w', **profile) as dst:
diff --git a/rasterio/rio/options.py b/rasterio/rio/options.py
--- a/rasterio/rio/options.py
+++ b/rasterio/rio/options.py
@@ -73,6 +73,8 @@
raise click.BadParameter("Invalid syntax for KEY=VAL arg: {}".format(pair))
else:
k, v = pair.split('=', 1)
+ k = k.lower()
+ v = v.lower()
out[k] = v
return out
| {"golden_diff": "diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py\n--- a/rasterio/rio/merge.py\n+++ b/rasterio/rio/merge.py\n@@ -73,6 +73,7 @@\n profile['height'] = dest.shape[1]\n profile['width'] = dest.shape[2]\n profile['driver'] = driver\n+\n profile.update(**creation_options)\n \n with rasterio.open(output, 'w', **profile) as dst:\ndiff --git a/rasterio/rio/options.py b/rasterio/rio/options.py\n--- a/rasterio/rio/options.py\n+++ b/rasterio/rio/options.py\n@@ -73,6 +73,8 @@\n raise click.BadParameter(\"Invalid syntax for KEY=VAL arg: {}\".format(pair))\n else:\n k, v = pair.split('=', 1)\n+ k = k.lower()\n+ v = v.lower()\n out[k] = v\n \n return out\n", "issue": "`rio merge --co TILED=YES` not producing tiled output\nSpecifically in the case where its inputs are stripped, not tiled, TIFFs. The signature of this bug is a GDAL error like\n\n```\nERROR:GDAL:CPLE_AppDefined in _TIFFVSetField:/private/tmp/combined2.tif: Bad value 600 for \"TileWidth\" tag\n```\n\n", "before_files": [{"content": "# Merge command.\n\nimport logging\nimport math\nimport os.path\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.transform import Affine\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_opt\[email protected]_opt\[email protected]('--nodata', type=float, default=None,\n help=\"Override nodata values defined in input datasets\")\[email protected]('--force-overwrite', '-f', 'force_overwrite', is_flag=True,\n type=bool, default=False,\n help=\"Do not prompt for confirmation before overwriting output \"\n \"file\")\[email protected]('--precision', type=int, default=7,\n help=\"Number of decimal places of precision in alignment of \"\n \"pixels\")\[email protected]_options\[email protected]_context\ndef merge(ctx, files, output, driver, bounds, res, nodata, force_overwrite,\n precision, creation_options):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. 
If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n\n Note: --res changed from 2 parameters in 0.25.\n --res 0.1 0.1 => --res 0.1 (square)\n --res 0.1 0.2 => --res 0.1 --res 0.2 (rectangular)\n \"\"\"\n\n from rasterio.tools.merge import merge as merge_tool\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n\n output, files = resolve_inout(files=files, output=output)\n\n if os.path.exists(output) and not force_overwrite:\n raise click.ClickException(\n \"Output exists and won't be overwritten without the \"\n \"`-f` option\")\n\n sources = [rasterio.open(f) for f in files]\n dest, output_transform = merge_tool(sources, bounds=bounds, res=res,\n nodata=nodata, precision=precision)\n\n profile = sources[0].profile\n profile.pop('affine')\n profile['transform'] = output_transform\n profile['height'] = dest.shape[1]\n profile['width'] = dest.shape[2]\n profile['driver'] = driver\n profile.update(**creation_options)\n\n with rasterio.open(output, 'w', **profile) as dst:\n dst.write(dest)\n", "path": "rasterio/rio/merge.py"}, {"content": "\"\"\"\nRegistry of common rio CLI options. See cligj for more options.\n\n-a, --all: Use all pixels touched by features. In rio-mask, rio-rasterize\n--as-mask/--not-as-mask: interpret band as mask or not. In rio-shapes\n--band/--mask: use band or mask. In rio-shapes\n--bbox:\n-b, --bidx: band index(es) (singular or multiple value versions).\n In rio-info, rio-sample, rio-shapes, rio-stack (different usages)\n--bounds: bounds in world coordinates.\n In rio-info, rio-rasterize (different usages)\n--count: count of bands. In rio-info\n--crop: Crop raster to extent of features. In rio-mask\n--crs: CRS of input raster. In rio-info\n--default-value: default for rasterized pixels. In rio-rasterize\n--dimensions: Output width, height. In rio-rasterize\n--dst-crs: destination CRS. In rio-transform\n--fill: fill value for pixels not covered by features. In rio-rasterize\n--formats: list available formats. In rio-info\n--height: height of raster. In rio-info\n-i, --invert: Invert mask created from features: In rio-mask\n-j, --geojson-mask: GeoJSON for masking raster. In rio-mask\n--lnglat: geograhpic coordinates of center of raster. In rio-info\n--masked/--not-masked: read masked data from source file.\n In rio-calc, rio-info\n-m, --mode: output file mode (r, r+). In rio-insp\n--name: input file name alias. In rio-calc\n--nodata: nodata value. In rio-info, rio-merge (different usages)\n--photometric: photometric interpretation. In rio-stack\n--property: GeoJSON property to use as values for rasterize. In rio-rasterize\n-r, --res: output resolution.\n In rio-info, rio-rasterize (different usages. TODO: try to combine\n usages, prefer rio-rasterize version)\n--sampling: Inverse of sampling fraction. In rio-shapes\n--shape: shape (width, height) of band. In rio-info\n--src-crs: source CRS.\n In rio-insp, rio-rasterize (different usages. TODO: consolidate usages)\n--stats: print raster stats. In rio-inf\n-t, --dtype: data type. In rio-calc, rio-info (different usages)\n--width: width of raster. In rio-info\n--with-nodata/--without-nodata: include nodata regions or not. 
In rio-shapes.\n-v, --tell-me-more, --verbose\n\"\"\"\n\n\n# TODO: move file_in_arg and file_out_arg to cligj\n\n\nimport click\n\n\ndef _cb_key_val(ctx, param, value):\n\n \"\"\"\n click callback to validate `--opt KEY1=VAL1 --opt KEY2=VAL2` and collect\n in a dictionary like the one below, which is what the CLI function receives.\n If no value or `None` is received then an empty dictionary is returned.\n\n {\n 'KEY1': 'VAL1',\n 'KEY2': 'VAL2'\n }\n\n Note: `==VAL` breaks this as `str.split('=', 1)` is used.\n \"\"\"\n\n if not value:\n return {}\n else:\n out = {}\n for pair in value:\n if '=' not in pair:\n raise click.BadParameter(\"Invalid syntax for KEY=VAL arg: {}\".format(pair))\n else:\n k, v = pair.split('=', 1)\n out[k] = v\n\n return out\n\n\n# Singular input file\nfile_in_arg = click.argument(\n 'INPUT',\n type=click.Path(exists=True, resolve_path=True))\n\n# Singular output file\nfile_out_arg = click.argument(\n 'OUTPUT',\n type=click.Path(resolve_path=True))\n\nbidx_opt = click.option(\n '-b', '--bidx',\n type=int,\n default=1,\n help=\"Input file band index (default: 1)\")\n\nbidx_mult_opt = click.option(\n '-b', '--bidx',\n multiple=True,\n help=\"Indexes of input file bands.\")\n\n# TODO: may be better suited to cligj\nbounds_opt = click.option(\n '--bounds',\n nargs=4, type=float, default=None,\n help='Output bounds: left bottom right top.')\n\ndimensions_opt = click.option(\n '--dimensions',\n nargs=2, type=int, default=None,\n help='Output dataset width, height in number of pixels.')\n\ndtype_opt = click.option(\n '-t', '--dtype',\n type=click.Choice([\n 'ubyte', 'uint8', 'uint16', 'int16', 'uint32', 'int32',\n 'float32', 'float64']),\n default=None,\n help=\"Output data type.\")\n\nlike_file_opt = click.option(\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), crs, data type, and driver '\n 'used to create the output.')\n\nmasked_opt = click.option(\n '--masked/--not-masked',\n default=True,\n help=\"Evaluate expressions using masked arrays (the default) or ordinary \"\n \"numpy arrays.\")\n\noutput_opt = click.option(\n '-o', '--output',\n default=None,\n type=click.Path(resolve_path=True),\n help=\"Path to output file (optional alternative to a positional arg \"\n \"for some commands).\")\n\nresolution_opt = click.option(\n '-r', '--res',\n multiple=True, type=float, default=None,\n help='Output dataset resolution in units of coordinate '\n 'reference system. Pixels assumed to be square if this option '\n 'is used once, otherwise use: '\n '--res pixel_width --res pixel_height')\n\ncreation_options = click.option(\n '--co', 'creation_options',\n metavar='NAME=VALUE',\n multiple=True,\n callback=_cb_key_val,\n help=\"Driver specific creation options.\"\n \"See the documentation for the selected output driver for \"\n \"more information.\")\n\nrgb_opt = click.option(\n '--rgb', 'photometric', \n flag_value='rgb',\n default=False,\n help=\"Set RGB photometric interpretation\")\n", "path": "rasterio/rio/options.py"}], "after_files": [{"content": "# Merge command.\n\nimport logging\nimport math\nimport os.path\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . 
import options\nimport rasterio\nfrom rasterio.transform import Affine\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_opt\[email protected]_opt\[email protected]('--nodata', type=float, default=None,\n help=\"Override nodata values defined in input datasets\")\[email protected]('--force-overwrite', '-f', 'force_overwrite', is_flag=True,\n type=bool, default=False,\n help=\"Do not prompt for confirmation before overwriting output \"\n \"file\")\[email protected]('--precision', type=int, default=7,\n help=\"Number of decimal places of precision in alignment of \"\n \"pixels\")\[email protected]_options\[email protected]_context\ndef merge(ctx, files, output, driver, bounds, res, nodata, force_overwrite,\n precision, creation_options):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n\n Note: --res changed from 2 parameters in 0.25.\n --res 0.1 0.1 => --res 0.1 (square)\n --res 0.1 0.2 => --res 0.1 --res 0.2 (rectangular)\n \"\"\"\n\n from rasterio.tools.merge import merge as merge_tool\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n\n output, files = resolve_inout(files=files, output=output)\n\n if os.path.exists(output) and not force_overwrite:\n raise click.ClickException(\n \"Output exists and won't be overwritten without the \"\n \"`-f` option\")\n\n sources = [rasterio.open(f) for f in files]\n dest, output_transform = merge_tool(sources, bounds=bounds, res=res,\n nodata=nodata, precision=precision)\n\n profile = sources[0].profile\n profile.pop('affine')\n profile['transform'] = output_transform\n profile['height'] = dest.shape[1]\n profile['width'] = dest.shape[2]\n profile['driver'] = driver\n\n profile.update(**creation_options)\n\n with rasterio.open(output, 'w', **profile) as dst:\n dst.write(dest)\n", "path": "rasterio/rio/merge.py"}, {"content": "\"\"\"\nRegistry of common rio CLI options. See cligj for more options.\n\n-a, --all: Use all pixels touched by features. In rio-mask, rio-rasterize\n--as-mask/--not-as-mask: interpret band as mask or not. In rio-shapes\n--band/--mask: use band or mask. In rio-shapes\n--bbox:\n-b, --bidx: band index(es) (singular or multiple value versions).\n In rio-info, rio-sample, rio-shapes, rio-stack (different usages)\n--bounds: bounds in world coordinates.\n In rio-info, rio-rasterize (different usages)\n--count: count of bands. In rio-info\n--crop: Crop raster to extent of features. In rio-mask\n--crs: CRS of input raster. In rio-info\n--default-value: default for rasterized pixels. In rio-rasterize\n--dimensions: Output width, height. In rio-rasterize\n--dst-crs: destination CRS. In rio-transform\n--fill: fill value for pixels not covered by features. In rio-rasterize\n--formats: list available formats. In rio-info\n--height: height of raster. In rio-info\n-i, --invert: Invert mask created from features: In rio-mask\n-j, --geojson-mask: GeoJSON for masking raster. In rio-mask\n--lnglat: geograhpic coordinates of center of raster. 
In rio-info\n--masked/--not-masked: read masked data from source file.\n In rio-calc, rio-info\n-m, --mode: output file mode (r, r+). In rio-insp\n--name: input file name alias. In rio-calc\n--nodata: nodata value. In rio-info, rio-merge (different usages)\n--photometric: photometric interpretation. In rio-stack\n--property: GeoJSON property to use as values for rasterize. In rio-rasterize\n-r, --res: output resolution.\n In rio-info, rio-rasterize (different usages. TODO: try to combine\n usages, prefer rio-rasterize version)\n--sampling: Inverse of sampling fraction. In rio-shapes\n--shape: shape (width, height) of band. In rio-info\n--src-crs: source CRS.\n In rio-insp, rio-rasterize (different usages. TODO: consolidate usages)\n--stats: print raster stats. In rio-inf\n-t, --dtype: data type. In rio-calc, rio-info (different usages)\n--width: width of raster. In rio-info\n--with-nodata/--without-nodata: include nodata regions or not. In rio-shapes.\n-v, --tell-me-more, --verbose\n\"\"\"\n\n\n# TODO: move file_in_arg and file_out_arg to cligj\n\n\nimport click\n\n\ndef _cb_key_val(ctx, param, value):\n\n \"\"\"\n click callback to validate `--opt KEY1=VAL1 --opt KEY2=VAL2` and collect\n in a dictionary like the one below, which is what the CLI function receives.\n If no value or `None` is received then an empty dictionary is returned.\n\n {\n 'KEY1': 'VAL1',\n 'KEY2': 'VAL2'\n }\n\n Note: `==VAL` breaks this as `str.split('=', 1)` is used.\n \"\"\"\n\n if not value:\n return {}\n else:\n out = {}\n for pair in value:\n if '=' not in pair:\n raise click.BadParameter(\"Invalid syntax for KEY=VAL arg: {}\".format(pair))\n else:\n k, v = pair.split('=', 1)\n k = k.lower()\n v = v.lower()\n out[k] = v\n\n return out\n\n\n# Singular input file\nfile_in_arg = click.argument(\n 'INPUT',\n type=click.Path(exists=True, resolve_path=True))\n\n# Singular output file\nfile_out_arg = click.argument(\n 'OUTPUT',\n type=click.Path(resolve_path=True))\n\nbidx_opt = click.option(\n '-b', '--bidx',\n type=int,\n default=1,\n help=\"Input file band index (default: 1)\")\n\nbidx_mult_opt = click.option(\n '-b', '--bidx',\n multiple=True,\n help=\"Indexes of input file bands.\")\n\n# TODO: may be better suited to cligj\nbounds_opt = click.option(\n '--bounds',\n nargs=4, type=float, default=None,\n help='Output bounds: left bottom right top.')\n\ndimensions_opt = click.option(\n '--dimensions',\n nargs=2, type=int, default=None,\n help='Output dataset width, height in number of pixels.')\n\ndtype_opt = click.option(\n '-t', '--dtype',\n type=click.Choice([\n 'ubyte', 'uint8', 'uint16', 'int16', 'uint32', 'int32',\n 'float32', 'float64']),\n default=None,\n help=\"Output data type.\")\n\nlike_file_opt = click.option(\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), crs, data type, and driver '\n 'used to create the output.')\n\nmasked_opt = click.option(\n '--masked/--not-masked',\n default=True,\n help=\"Evaluate expressions using masked arrays (the default) or ordinary \"\n \"numpy arrays.\")\n\noutput_opt = click.option(\n '-o', '--output',\n default=None,\n type=click.Path(resolve_path=True),\n help=\"Path to output file (optional alternative to a positional arg \"\n \"for some commands).\")\n\nresolution_opt = click.option(\n '-r', '--res',\n multiple=True, type=float, default=None,\n help='Output dataset resolution in units of coordinate '\n 'reference system. 
Pixels assumed to be square if this option '\n 'is used once, otherwise use: '\n '--res pixel_width --res pixel_height')\n\ncreation_options = click.option(\n '--co', 'creation_options',\n metavar='NAME=VALUE',\n multiple=True,\n callback=_cb_key_val,\n help=\"Driver specific creation options.\"\n \"See the documentation for the selected output driver for \"\n \"more information.\")\n\nrgb_opt = click.option(\n '--rgb', 'photometric', \n flag_value='rgb',\n default=False,\n help=\"Set RGB photometric interpretation\")\n", "path": "rasterio/rio/options.py"}]} | 2,963 | 221 |
gh_patches_debug_16052 | rasdani/github-patches | git_diff | google__flax-985 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Port ensembling HOWTO from old diff based system
And instead, use a standalone doc with tests like in #771
Here is the old (pre-Linen) HOWTO diff, for reference:
https://github.com/google/flax/blob/master/howtos/diffs/ensembling.diff
--- END ISSUE ---
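For context, the code-diff tables such a standalone HOWTO relies on are built by the `CodeDiffParser` shown in the files below. A rough usage sketch — the `sys.path` manipulation and the snippet contents are assumptions for illustration, and running it requires Sphinx/docutils installed since the module imports them:
```
import sys

sys.path.insert(0, "docs/_ext")  # assumes the repo root as the working directory
from codediff import CodeDiffParser

lines = [
    "params = model.init(rng, x)  #!",  # a trailing "#!" marks a highlighted line
    "logits = model.apply(params, x)",
    "---",                              # separator between the two columns
    "params = jax.vmap(model.init)(rngs, xs)  #!",
    "logits = jax.vmap(model.apply)(params, xs)",
]
table = CodeDiffParser().parse(lines, title_left="Single model", title_right="Ensemble")
print("\n".join(table))
```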
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/_ext/codediff.py`
Content:
```
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import dataclasses
15 from typing import Optional, Sequence
16 import itertools
17
18 from docutils import nodes
19 from docutils.parsers.rst import directives
20 from docutils.statemachine import ViewList
21
22 import sphinx
23 from sphinx.util.docutils import SphinxDirective
24 """Sphinx directive for creating code diff tables.
25
26 Use directive as follows:
27
28 .. codediff::
29 :title-left: <LEFT_CODE_BLOCK_TITLE>
30 :title-right: <RIGHT_CODE_BLOCK_TITLE>
31 :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>
32 :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>
33
34 <CODE_BLOCK_LEFT>
35 ---
36 <CODE_BLOCK_RIGHT>
37 """
38
39 class CodeDiffParser:
40 def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):
41 if code_sep not in lines:
42 raise ValueError('Code separator not found! Code snippets should be '
43 f'separated by {code_sep}.')
44 idx = lines.index(code_sep)
45 code_left = self._code_block(lines[0: idx])
46 code_right = self._code_block(lines[idx+1:])
47
48 self.max_left = max(len(x) for x in code_left + [title_left])
49 self.max_right = max(len(x) for x in code_right + [title_right])
50
51 output = [
52 self._hline(),
53 self._table_row(title_left, title_right),
54 self._hline(),
55 ]
56
57 for l, r in itertools.zip_longest(code_left, code_right, fillvalue=''):
58 output += [self._table_row(l, r)]
59
60 return output + [self._hline()]
61
62 def _code_block(self, lines):
63 # Remove right trailing whitespace so we can detect the comments.
64 lines = [x.rstrip() for x in lines]
65 highlight = lambda x : x.endswith('#!')
66 code = map(lambda x : x[:-2].rstrip() if highlight(x) else x, lines)
67 highlights = [i+1 for i in range(len(lines)) if highlight(lines[i])]
68 highlights = ','.join(str(i) for i in highlights)
69
70 directive = ['.. code-block:: python']
71 if highlights:
72 directive += [f' :emphasize-lines: {highlights}']
73
74 # Indent code and add empty line so the code is picked up by the directive.
75 return directive + [''] + list(map(lambda x: ' ' + x, code))
76
77 def _hline(self):
78 return '+' + '-'*(self.max_left+2) + '+' + '-'*(self.max_right+2) + '+'
79
80 def _rfill(self, text, max_len):
81 return text + ' ' * (max_len-len(text))
82
83 def _table_row(self, left, right):
84 text_left = self._rfill(left, self.max_left)
85 text_right = self._rfill(right, self.max_right)
86 return '| ' + text_left + ' | ' + text_right + ' |'
87
88
89 class CodeDiffDirective(SphinxDirective):
90 has_content = True
91 option_spec = {
92 'title_left': directives.unchanged,
93 'title_right': directives.unchanged,
94 'code_sep': directives.unchanged,
95 }
96
97 def run(self):
98 new_content = CodeDiffParser().parse(list(self.content), **self.options)
99
100 node = nodes.paragraph()
101 self.content = ViewList(new_content, self.content.parent)
102 self.state.nested_parse(self.content, self.content_offset, node)
103 return [node]
104
105 def setup(app):
106 app.add_directive('codediff', CodeDiffDirective)
107
108 return {
109 'version': sphinx.__display_version__,
110 'parallel_read_safe': True,
111 'parallel_write_safe': True,
112 }
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/_ext/codediff.py b/docs/_ext/codediff.py
--- a/docs/_ext/codediff.py
+++ b/docs/_ext/codediff.py
@@ -26,14 +26,14 @@
Use directive as follows:
.. codediff::
- :title-left: <LEFT_CODE_BLOCK_TITLE>
- :title-right: <RIGHT_CODE_BLOCK_TITLE>
- :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>
- :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>
+ :title_left: <LEFT_CODE_BLOCK_TITLE>
+ :title_right: <RIGHT_CODE_BLOCK_TITLE>
<CODE_BLOCK_LEFT>
---
<CODE_BLOCK_RIGHT>
+
+In order to highlight a line of code, prepend it with "#!".
"""
class CodeDiffParser:
@@ -94,7 +94,7 @@
'code_sep': directives.unchanged,
}
- def run(self):
+ def run(self):
new_content = CodeDiffParser().parse(list(self.content), **self.options)
node = nodes.paragraph()
| {"golden_diff": "diff --git a/docs/_ext/codediff.py b/docs/_ext/codediff.py\n--- a/docs/_ext/codediff.py\n+++ b/docs/_ext/codediff.py\n@@ -26,14 +26,14 @@\n Use directive as follows:\n \n .. codediff::\n- :title-left: <LEFT_CODE_BLOCK_TITLE>\n- :title-right: <RIGHT_CODE_BLOCK_TITLE>\n- :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>\n- :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>\n+ :title_left: <LEFT_CODE_BLOCK_TITLE>\n+ :title_right: <RIGHT_CODE_BLOCK_TITLE>\n \n <CODE_BLOCK_LEFT>\n ---\n <CODE_BLOCK_RIGHT>\n+\n+In order to highlight a line of code, prepend it with \"#!\".\n \"\"\"\n \n class CodeDiffParser:\n@@ -94,7 +94,7 @@\n 'code_sep': directives.unchanged,\n }\n \n- def run(self): \n+ def run(self):\n new_content = CodeDiffParser().parse(list(self.content), **self.options)\n \n node = nodes.paragraph()\n", "issue": "Port ensembling HOWTO from old diff based system\nAnd instead, use a standalone doc with tests like in #771\r\n\r\nHere is the old (pre-Linen) HOWTO diff, for reference:\r\nhttps://github.com/google/flax/blob/master/howtos/diffs/ensembling.diff\n", "before_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport dataclasses\nfrom typing import Optional, Sequence\nimport itertools\n\nfrom docutils import nodes\nfrom docutils.parsers.rst import directives\nfrom docutils.statemachine import ViewList\n\nimport sphinx\nfrom sphinx.util.docutils import SphinxDirective\n\"\"\"Sphinx directive for creating code diff tables.\n\nUse directive as follows:\n\n.. codediff::\n :title-left: <LEFT_CODE_BLOCK_TITLE>\n :title-right: <RIGHT_CODE_BLOCK_TITLE>\n :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>\n :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>\n \n <CODE_BLOCK_LEFT>\n ---\n <CODE_BLOCK_RIGHT>\n\"\"\"\n\nclass CodeDiffParser:\n def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):\n if code_sep not in lines:\n raise ValueError('Code separator not found! Code snippets should be '\n f'separated by {code_sep}.')\n idx = lines.index(code_sep)\n code_left = self._code_block(lines[0: idx])\n code_right = self._code_block(lines[idx+1:])\n \n self.max_left = max(len(x) for x in code_left + [title_left])\n self.max_right = max(len(x) for x in code_right + [title_right])\n\n output = [\n self._hline(),\n self._table_row(title_left, title_right),\n self._hline(),\n ]\n\n for l, r in itertools.zip_longest(code_left, code_right, fillvalue=''):\n output += [self._table_row(l, r)]\n\n return output + [self._hline()]\n\n def _code_block(self, lines):\n # Remove right trailing whitespace so we can detect the comments.\n lines = [x.rstrip() for x in lines]\n highlight = lambda x : x.endswith('#!')\n code = map(lambda x : x[:-2].rstrip() if highlight(x) else x, lines)\n highlights = [i+1 for i in range(len(lines)) if highlight(lines[i])]\n highlights = ','.join(str(i) for i in highlights)\n\n directive = ['.. 
code-block:: python']\n if highlights:\n directive += [f' :emphasize-lines: {highlights}']\n\n # Indent code and add empty line so the code is picked up by the directive.\n return directive + [''] + list(map(lambda x: ' ' + x, code))\n\n def _hline(self):\n return '+' + '-'*(self.max_left+2) + '+' + '-'*(self.max_right+2) + '+'\n\n def _rfill(self, text, max_len):\n return text + ' ' * (max_len-len(text))\n\n def _table_row(self, left, right):\n text_left = self._rfill(left, self.max_left)\n text_right = self._rfill(right, self.max_right)\n return '| ' + text_left + ' | ' + text_right + ' |'\n\n\nclass CodeDiffDirective(SphinxDirective):\n has_content = True\n option_spec = {\n 'title_left': directives.unchanged,\n 'title_right': directives.unchanged,\n 'code_sep': directives.unchanged,\n }\n\n def run(self): \n new_content = CodeDiffParser().parse(list(self.content), **self.options)\n\n node = nodes.paragraph()\n self.content = ViewList(new_content, self.content.parent)\n self.state.nested_parse(self.content, self.content_offset, node)\n return [node]\n\ndef setup(app):\n app.add_directive('codediff', CodeDiffDirective)\n\n return {\n 'version': sphinx.__display_version__,\n 'parallel_read_safe': True,\n 'parallel_write_safe': True,\n }\n", "path": "docs/_ext/codediff.py"}], "after_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport dataclasses\nfrom typing import Optional, Sequence\nimport itertools\n\nfrom docutils import nodes\nfrom docutils.parsers.rst import directives\nfrom docutils.statemachine import ViewList\n\nimport sphinx\nfrom sphinx.util.docutils import SphinxDirective\n\"\"\"Sphinx directive for creating code diff tables.\n\nUse directive as follows:\n\n.. codediff::\n :title_left: <LEFT_CODE_BLOCK_TITLE>\n :title_right: <RIGHT_CODE_BLOCK_TITLE>\n \n <CODE_BLOCK_LEFT>\n ---\n <CODE_BLOCK_RIGHT>\n\nIn order to highlight a line of code, prepend it with \"#!\".\n\"\"\"\n\nclass CodeDiffParser:\n def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):\n if code_sep not in lines:\n raise ValueError('Code separator not found! 
Code snippets should be '\n f'separated by {code_sep}.')\n idx = lines.index(code_sep)\n code_left = self._code_block(lines[0: idx])\n code_right = self._code_block(lines[idx+1:])\n \n self.max_left = max(len(x) for x in code_left + [title_left])\n self.max_right = max(len(x) for x in code_right + [title_right])\n\n output = [\n self._hline(),\n self._table_row(title_left, title_right),\n self._hline(),\n ]\n\n for l, r in itertools.zip_longest(code_left, code_right, fillvalue=''):\n output += [self._table_row(l, r)]\n\n return output + [self._hline()]\n\n def _code_block(self, lines):\n # Remove right trailing whitespace so we can detect the comments.\n lines = [x.rstrip() for x in lines]\n highlight = lambda x : x.endswith('#!')\n code = map(lambda x : x[:-2].rstrip() if highlight(x) else x, lines)\n highlights = [i+1 for i in range(len(lines)) if highlight(lines[i])]\n highlights = ','.join(str(i) for i in highlights)\n\n directive = ['.. code-block:: python']\n if highlights:\n directive += [f' :emphasize-lines: {highlights}']\n\n # Indent code and add empty line so the code is picked up by the directive.\n return directive + [''] + list(map(lambda x: ' ' + x, code))\n\n def _hline(self):\n return '+' + '-'*(self.max_left+2) + '+' + '-'*(self.max_right+2) + '+'\n\n def _rfill(self, text, max_len):\n return text + ' ' * (max_len-len(text))\n\n def _table_row(self, left, right):\n text_left = self._rfill(left, self.max_left)\n text_right = self._rfill(right, self.max_right)\n return '| ' + text_left + ' | ' + text_right + ' |'\n\n\nclass CodeDiffDirective(SphinxDirective):\n has_content = True\n option_spec = {\n 'title_left': directives.unchanged,\n 'title_right': directives.unchanged,\n 'code_sep': directives.unchanged,\n }\n\n def run(self):\n new_content = CodeDiffParser().parse(list(self.content), **self.options)\n\n node = nodes.paragraph()\n self.content = ViewList(new_content, self.content.parent)\n self.state.nested_parse(self.content, self.content_offset, node)\n return [node]\n\ndef setup(app):\n app.add_directive('codediff', CodeDiffDirective)\n\n return {\n 'version': sphinx.__display_version__,\n 'parallel_read_safe': True,\n 'parallel_write_safe': True,\n }\n", "path": "docs/_ext/codediff.py"}]} | 1,495 | 242 |
gh_patches_debug_34734 | rasdani/github-patches | git_diff | astronomer__astro-sdk-455 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change `export_file` to return `File` object
**Context**
In order to allow users to perform subsequent actions on an exported file (while maintaining a functional structure), we should allow the `export_file` function to return a file object.
* Astro-SDK version: 0.9..1
* Request by: @jlaneve
* Analysed by @dimberman
**Problem**
At the moment a user who wants to use the `output_file` object would need to explicitly set dependencies like this:
```
output_file = File(path="/tmp/saved_df.csv")
with sample_dag:
table = aql.load_file(input_file=File(path=data_path), output_table=test_table)
export = aql.export_file(
input_data=table,
output_file=output_file,
if_exists="replace",
)
res_df = aql.load_file(input_file=output_file)
export >> res_df
```
**Desired behaviour**
```
with sample_dag:
table = aql.load_file(input_file=File(path=data_path), output_table=test_table)
exported_file = aql.export_file(
input_data=table,
output_file=File(path="/tmp/saved_df.csv"),
if_exists="replace",
)
res_df = aql.load_file(input_file=exported_file)
```
**Acceptance criteria**
* Change `export_file` so it returns the `File` instance, as opposed to `None`
Since there is no documentation about this task, we don't need to update the documentation for it. To create documentation for this feature should be part of another issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/astro/sql/operators/export_file.py`
Content:
```
1 from typing import Optional, Union
2
3 import pandas as pd
4 from airflow.models import BaseOperator
5 from airflow.models.xcom_arg import XComArg
6
7 from astro.constants import ExportExistsStrategy
8 from astro.databases import create_database
9 from astro.files import File
10 from astro.sql.table import Table
11 from astro.utils.task_id_helper import get_task_id
12
13
14 class ExportFile(BaseOperator):
15 """Write SQL table to csv/parquet on local/S3/GCS.
16
17 :param input_data: Table to convert to file
18 :param output_file: File object containing the path to the file and connection id.
19 :param if_exists: Overwrite file if exists. Default False.
20 """
21
22 template_fields = ("input_data", "output_file")
23
24 def __init__(
25 self,
26 input_data: Union[Table, pd.DataFrame],
27 output_file: File,
28 if_exists: ExportExistsStrategy = "exception",
29 **kwargs,
30 ) -> None:
31 super().__init__(**kwargs)
32 self.output_file = output_file
33 self.input_data = input_data
34 self.if_exists = if_exists
35 self.kwargs = kwargs
36
37 def execute(self, context: dict) -> None:
38 """Write SQL table to csv/parquet on local/S3/GCS.
39
40 Infers SQL database type based on connection.
41 """
42 # Infer db type from `input_conn_id`.
43 if isinstance(self.input_data, Table):
44 database = create_database(self.input_data.conn_id)
45 self.input_data = database.populate_table_metadata(self.input_data)
46 df = database.export_table_to_pandas_dataframe(self.input_data)
47 elif isinstance(self.input_data, pd.DataFrame):
48 df = self.input_data
49 else:
50 raise ValueError(
51 f"Expected input_table to be Table or dataframe. Got {type(self.input_data)}"
52 )
53 # Write file if overwrite == True or if file doesn't exist.
54 if self.if_exists == "replace" or not self.output_file.exists():
55 self.output_file.create_from_dataframe(df)
56 else:
57 raise FileExistsError(f"{self.output_file.path} file already exists.")
58
59
60 def export_file(
61 input_data: Union[Table, pd.DataFrame],
62 output_file: File,
63 if_exists: ExportExistsStrategy = "exception",
64 task_id: Optional[str] = None,
65 **kwargs,
66 ) -> XComArg:
67 """Convert SaveFile into a function. Returns XComArg.
68
69 Returns an XComArg object.
70
71 :param output_file: Path and conn_id
72 :param input_data: Input table / dataframe
73 :param if_exists: Overwrite file if exists. Default "exception"
74 :param task_id: task id, optional
75 """
76
77 task_id = (
78 task_id if task_id is not None else get_task_id("export_file", output_file.path)
79 )
80
81 return ExportFile(
82 task_id=task_id,
83 output_file=output_file,
84 input_data=input_data,
85 if_exists=if_exists,
86 ).output
87
```
Path: `src/astro/__init__.py`
Content:
```
1 """A decorator that allows users to run SQL queries natively in Airflow."""
2
3 __version__ = "0.9.1"
4
5 # The following line is an import work-around to avoid raising a circular dependency issue related to `create_database`
6 # Without this, if we run the following imports, in this specific order:
7 # from astro.databases import create_database
8 # from astro.sql.table import Metadata, Table, create_unique_table_name
9 # We face ImportError, as it happened in:
10 # https://github.com/astronomer/astro-sdk/pull/396/commits/fbe73bdbe46d65777258a5f79f461ef69f08a673
11 # https://github.com/astronomer/astro-sdk/actions/runs/2378526135
12 # Although astro.database does not depend on astro.sql, it depends on astro.sql.table - and, unless astro.sql was
13 # imported beforehand, it will also load astro.sql. In astro.sql we import lots of operators which depend on
14 # astro.database, and this is what leads to the circular dependency.
15 import astro.sql # noqa: F401
16
17
18 # This is needed to allow Airflow to pick up specific metadata fields it needs
19 # for certain features. We recognize it's a bit unclean to define these in
20 # multiple places, but at this point it's the only workaround if you'd like
21 # your custom conn type to show up in the Airflow UI.
22 def get_provider_info() -> dict:
23 return {
24 # Required.
25 "package-name": "astro-sdk-python",
26 "name": "Astro SQL Provider",
27 "description": __doc__,
28 "versions": [__version__],
29 # Optional.
30 "hook-class-names": [],
31 "extra-links": [],
32 }
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/astro/__init__.py b/src/astro/__init__.py
--- a/src/astro/__init__.py
+++ b/src/astro/__init__.py
@@ -1,6 +1,6 @@
"""A decorator that allows users to run SQL queries natively in Airflow."""
-__version__ = "0.9.1"
+__version__ = "0.9.2"
# The following line is an import work-around to avoid raising a circular dependency issue related to `create_database`
# Without this, if we run the following imports, in this specific order:
diff --git a/src/astro/sql/operators/export_file.py b/src/astro/sql/operators/export_file.py
--- a/src/astro/sql/operators/export_file.py
+++ b/src/astro/sql/operators/export_file.py
@@ -34,7 +34,7 @@
self.if_exists = if_exists
self.kwargs = kwargs
- def execute(self, context: dict) -> None:
+ def execute(self, context: dict) -> File:
"""Write SQL table to csv/parquet on local/S3/GCS.
Infers SQL database type based on connection.
@@ -53,6 +53,7 @@
# Write file if overwrite == True or if file doesn't exist.
if self.if_exists == "replace" or not self.output_file.exists():
self.output_file.create_from_dataframe(df)
+ return self.output_file
else:
raise FileExistsError(f"{self.output_file.path} file already exists.")
@@ -66,7 +67,20 @@
) -> XComArg:
"""Convert SaveFile into a function. Returns XComArg.
- Returns an XComArg object.
+ Returns an XComArg object of type File which matches the output_file parameter.
+
+ This will allow users to perform further actions with the exported file.
+
+ e.g.
+
+ with sample_dag:
+ table = aql.load_file(input_file=File(path=data_path), output_table=test_table)
+ exported_file = aql.export_file(
+ input_data=table,
+ output_file=File(path="/tmp/saved_df.csv"),
+ if_exists="replace",
+ )
+ res_df = aql.load_file(input_file=exported_file)
:param output_file: Path and conn_id
:param input_data: Input table / dataframe
| {"golden_diff": "diff --git a/src/astro/__init__.py b/src/astro/__init__.py\n--- a/src/astro/__init__.py\n+++ b/src/astro/__init__.py\n@@ -1,6 +1,6 @@\n \"\"\"A decorator that allows users to run SQL queries natively in Airflow.\"\"\"\n \n-__version__ = \"0.9.1\"\n+__version__ = \"0.9.2\"\n \n # The following line is an import work-around to avoid raising a circular dependency issue related to `create_database`\n # Without this, if we run the following imports, in this specific order:\ndiff --git a/src/astro/sql/operators/export_file.py b/src/astro/sql/operators/export_file.py\n--- a/src/astro/sql/operators/export_file.py\n+++ b/src/astro/sql/operators/export_file.py\n@@ -34,7 +34,7 @@\n self.if_exists = if_exists\n self.kwargs = kwargs\n \n- def execute(self, context: dict) -> None:\n+ def execute(self, context: dict) -> File:\n \"\"\"Write SQL table to csv/parquet on local/S3/GCS.\n \n Infers SQL database type based on connection.\n@@ -53,6 +53,7 @@\n # Write file if overwrite == True or if file doesn't exist.\n if self.if_exists == \"replace\" or not self.output_file.exists():\n self.output_file.create_from_dataframe(df)\n+ return self.output_file\n else:\n raise FileExistsError(f\"{self.output_file.path} file already exists.\")\n \n@@ -66,7 +67,20 @@\n ) -> XComArg:\n \"\"\"Convert SaveFile into a function. Returns XComArg.\n \n- Returns an XComArg object.\n+ Returns an XComArg object of type File which matches the output_file parameter.\n+\n+ This will allow users to perform further actions with the exported file.\n+\n+ e.g.\n+\n+ with sample_dag:\n+ table = aql.load_file(input_file=File(path=data_path), output_table=test_table)\n+ exported_file = aql.export_file(\n+ input_data=table,\n+ output_file=File(path=\"/tmp/saved_df.csv\"),\n+ if_exists=\"replace\",\n+ )\n+ res_df = aql.load_file(input_file=exported_file)\n \n :param output_file: Path and conn_id\n :param input_data: Input table / dataframe\n", "issue": "Change `export_file` to return `File` object\n**Context**\r\n\r\nIn order to allow users to perform subsequent actions on an exported file (while maintaining a functional structure), we should allow the `export_file` function to return a file object.\r\n\r\n* Astro-SDK version: 0.9..1\r\n* Request by: @jlaneve\r\n* Analysed by @dimberman \r\n\r\n**Problem**\r\n\r\nAt the moment a user who wants to use the `output_file` object would need to explicitly set dependencies like this:\r\n\r\n```\r\n output_file = File(path=\"/tmp/saved_df.csv\")\r\n with sample_dag:\r\n table = aql.load_file(input_file=File(path=data_path), output_table=test_table)\r\n export = aql.export_file(\r\n input_data=table,\r\n output_file=output_file,\r\n if_exists=\"replace\",\r\n )\r\n res_df = aql.load_file(input_file=output_file)\r\n export >> res_df\r\n```\r\n\r\n**Desired behaviour**\r\n\r\n```\r\n with sample_dag:\r\n table = aql.load_file(input_file=File(path=data_path), output_table=test_table)\r\n exported_file = aql.export_file(\r\n input_data=table,\r\n output_file=File(path=\"/tmp/saved_df.csv\"),\r\n if_exists=\"replace\",\r\n )\r\n res_df = aql.load_file(input_file=exported_file)\r\n```\r\n\r\n**Acceptance criteria**\r\n* Change `export_file` so it returns the `File` instance, as opposed to `None`\r\n\r\nSince there is no documentation about this task, we don't need to update the documentation for it. 
To create documentation for this feature should be part of another issue.\n", "before_files": [{"content": "from typing import Optional, Union\n\nimport pandas as pd\nfrom airflow.models import BaseOperator\nfrom airflow.models.xcom_arg import XComArg\n\nfrom astro.constants import ExportExistsStrategy\nfrom astro.databases import create_database\nfrom astro.files import File\nfrom astro.sql.table import Table\nfrom astro.utils.task_id_helper import get_task_id\n\n\nclass ExportFile(BaseOperator):\n \"\"\"Write SQL table to csv/parquet on local/S3/GCS.\n\n :param input_data: Table to convert to file\n :param output_file: File object containing the path to the file and connection id.\n :param if_exists: Overwrite file if exists. Default False.\n \"\"\"\n\n template_fields = (\"input_data\", \"output_file\")\n\n def __init__(\n self,\n input_data: Union[Table, pd.DataFrame],\n output_file: File,\n if_exists: ExportExistsStrategy = \"exception\",\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.output_file = output_file\n self.input_data = input_data\n self.if_exists = if_exists\n self.kwargs = kwargs\n\n def execute(self, context: dict) -> None:\n \"\"\"Write SQL table to csv/parquet on local/S3/GCS.\n\n Infers SQL database type based on connection.\n \"\"\"\n # Infer db type from `input_conn_id`.\n if isinstance(self.input_data, Table):\n database = create_database(self.input_data.conn_id)\n self.input_data = database.populate_table_metadata(self.input_data)\n df = database.export_table_to_pandas_dataframe(self.input_data)\n elif isinstance(self.input_data, pd.DataFrame):\n df = self.input_data\n else:\n raise ValueError(\n f\"Expected input_table to be Table or dataframe. Got {type(self.input_data)}\"\n )\n # Write file if overwrite == True or if file doesn't exist.\n if self.if_exists == \"replace\" or not self.output_file.exists():\n self.output_file.create_from_dataframe(df)\n else:\n raise FileExistsError(f\"{self.output_file.path} file already exists.\")\n\n\ndef export_file(\n input_data: Union[Table, pd.DataFrame],\n output_file: File,\n if_exists: ExportExistsStrategy = \"exception\",\n task_id: Optional[str] = None,\n **kwargs,\n) -> XComArg:\n \"\"\"Convert SaveFile into a function. Returns XComArg.\n\n Returns an XComArg object.\n\n :param output_file: Path and conn_id\n :param input_data: Input table / dataframe\n :param if_exists: Overwrite file if exists. 
Default \"exception\"\n :param task_id: task id, optional\n \"\"\"\n\n task_id = (\n task_id if task_id is not None else get_task_id(\"export_file\", output_file.path)\n )\n\n return ExportFile(\n task_id=task_id,\n output_file=output_file,\n input_data=input_data,\n if_exists=if_exists,\n ).output\n", "path": "src/astro/sql/operators/export_file.py"}, {"content": "\"\"\"A decorator that allows users to run SQL queries natively in Airflow.\"\"\"\n\n__version__ = \"0.9.1\"\n\n# The following line is an import work-around to avoid raising a circular dependency issue related to `create_database`\n# Without this, if we run the following imports, in this specific order:\n# from astro.databases import create_database\n# from astro.sql.table import Metadata, Table, create_unique_table_name\n# We face ImportError, as it happened in:\n# https://github.com/astronomer/astro-sdk/pull/396/commits/fbe73bdbe46d65777258a5f79f461ef69f08a673\n# https://github.com/astronomer/astro-sdk/actions/runs/2378526135\n# Although astro.database does not depend on astro.sql, it depends on astro.sql.table - and, unless astro.sql was\n# imported beforehand, it will also load astro.sql. In astro.sql we import lots of operators which depend on\n# astro.database, and this is what leads to the circular dependency.\nimport astro.sql # noqa: F401\n\n\n# This is needed to allow Airflow to pick up specific metadata fields it needs\n# for certain features. We recognize it's a bit unclean to define these in\n# multiple places, but at this point it's the only workaround if you'd like\n# your custom conn type to show up in the Airflow UI.\ndef get_provider_info() -> dict:\n return {\n # Required.\n \"package-name\": \"astro-sdk-python\",\n \"name\": \"Astro SQL Provider\",\n \"description\": __doc__,\n \"versions\": [__version__],\n # Optional.\n \"hook-class-names\": [],\n \"extra-links\": [],\n }\n", "path": "src/astro/__init__.py"}], "after_files": [{"content": "from typing import Optional, Union\n\nimport pandas as pd\nfrom airflow.models import BaseOperator\nfrom airflow.models.xcom_arg import XComArg\n\nfrom astro.constants import ExportExistsStrategy\nfrom astro.databases import create_database\nfrom astro.files import File\nfrom astro.sql.table import Table\nfrom astro.utils.task_id_helper import get_task_id\n\n\nclass ExportFile(BaseOperator):\n \"\"\"Write SQL table to csv/parquet on local/S3/GCS.\n\n :param input_data: Table to convert to file\n :param output_file: File object containing the path to the file and connection id.\n :param if_exists: Overwrite file if exists. 
Default False.\n \"\"\"\n\n template_fields = (\"input_data\", \"output_file\")\n\n def __init__(\n self,\n input_data: Union[Table, pd.DataFrame],\n output_file: File,\n if_exists: ExportExistsStrategy = \"exception\",\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.output_file = output_file\n self.input_data = input_data\n self.if_exists = if_exists\n self.kwargs = kwargs\n\n def execute(self, context: dict) -> File:\n \"\"\"Write SQL table to csv/parquet on local/S3/GCS.\n\n Infers SQL database type based on connection.\n \"\"\"\n # Infer db type from `input_conn_id`.\n if isinstance(self.input_data, Table):\n database = create_database(self.input_data.conn_id)\n self.input_data = database.populate_table_metadata(self.input_data)\n df = database.export_table_to_pandas_dataframe(self.input_data)\n elif isinstance(self.input_data, pd.DataFrame):\n df = self.input_data\n else:\n raise ValueError(\n f\"Expected input_table to be Table or dataframe. Got {type(self.input_data)}\"\n )\n # Write file if overwrite == True or if file doesn't exist.\n if self.if_exists == \"replace\" or not self.output_file.exists():\n self.output_file.create_from_dataframe(df)\n return self.output_file\n else:\n raise FileExistsError(f\"{self.output_file.path} file already exists.\")\n\n\ndef export_file(\n input_data: Union[Table, pd.DataFrame],\n output_file: File,\n if_exists: ExportExistsStrategy = \"exception\",\n task_id: Optional[str] = None,\n **kwargs,\n) -> XComArg:\n \"\"\"Convert SaveFile into a function. Returns XComArg.\n\n Returns an XComArg object of type File which matches the output_file parameter.\n\n This will allow users to perform further actions with the exported file.\n\n e.g.\n\n with sample_dag:\n table = aql.load_file(input_file=File(path=data_path), output_table=test_table)\n exported_file = aql.export_file(\n input_data=table,\n output_file=File(path=\"/tmp/saved_df.csv\"),\n if_exists=\"replace\",\n )\n res_df = aql.load_file(input_file=exported_file)\n\n :param output_file: Path and conn_id\n :param input_data: Input table / dataframe\n :param if_exists: Overwrite file if exists. Default \"exception\"\n :param task_id: task id, optional\n \"\"\"\n\n task_id = (\n task_id if task_id is not None else get_task_id(\"export_file\", output_file.path)\n )\n\n return ExportFile(\n task_id=task_id,\n output_file=output_file,\n input_data=input_data,\n if_exists=if_exists,\n ).output\n", "path": "src/astro/sql/operators/export_file.py"}, {"content": "\"\"\"A decorator that allows users to run SQL queries natively in Airflow.\"\"\"\n\n__version__ = \"0.9.2\"\n\n# The following line is an import work-around to avoid raising a circular dependency issue related to `create_database`\n# Without this, if we run the following imports, in this specific order:\n# from astro.databases import create_database\n# from astro.sql.table import Metadata, Table, create_unique_table_name\n# We face ImportError, as it happened in:\n# https://github.com/astronomer/astro-sdk/pull/396/commits/fbe73bdbe46d65777258a5f79f461ef69f08a673\n# https://github.com/astronomer/astro-sdk/actions/runs/2378526135\n# Although astro.database does not depend on astro.sql, it depends on astro.sql.table - and, unless astro.sql was\n# imported beforehand, it will also load astro.sql. 
In astro.sql we import lots of operators which depend on\n# astro.database, and this is what leads to the circular dependency.\nimport astro.sql # noqa: F401\n\n\n# This is needed to allow Airflow to pick up specific metadata fields it needs\n# for certain features. We recognize it's a bit unclean to define these in\n# multiple places, but at this point it's the only workaround if you'd like\n# your custom conn type to show up in the Airflow UI.\ndef get_provider_info() -> dict:\n return {\n # Required.\n \"package-name\": \"astro-sdk-python\",\n \"name\": \"Astro SQL Provider\",\n \"description\": __doc__,\n \"versions\": [__version__],\n # Optional.\n \"hook-class-names\": [],\n \"extra-links\": [],\n }\n", "path": "src/astro/__init__.py"}]} | 1,891 | 525 |
gh_patches_debug_3737 | rasdani/github-patches | git_diff | intel__dffml-529 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs: Enable hiding of Python prompts
This will be very helpful for copy-pasting examples.
References:
- https://github.com/readthedocs/sphinx_rtd_theme/issues/167
--- END ISSUE ---
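For context, the change the issue asks for usually amounts to shipping a small JavaScript helper (such as the `copybutton.js` used by the CPython docs, which toggles the `>>>` prompts and output) and registering it from `conf.py`. Below is a minimal sketch of the registration side only; the file name and its location under `_static/` are assumptions, and on Sphinx >= 1.8 the call is `add_js_file` rather than the older `add_javascript`.

```python
# Minimal sketch: register a static JS helper that hides ">>>" prompts.
# Assumes docs/_static/copybutton.js exists and "_static" is in html_static_path.
def setup(app):
    # Sphinx >= 1.8; older versions use app.add_javascript("copybutton.js")
    app.add_js_file("copybutton.js")
```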
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # http://www.sphinx-doc.org/en/master/config
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import sys
15 import pathlib
16
17 sys.path.insert(0, os.path.abspath("."))
18 from dffml.version import VERSION
19
20 # -- Project information -----------------------------------------------------
21
22 project = "DFFML"
23 copyright = "2019, Intel"
24 author = "John Andersen"
25
26 # The short X.Y version
27 version = VERSION
28
29 # The full version, including alpha/beta/rc tags
30 release = version
31
32
33 # -- General configuration ---------------------------------------------------
34
35 # Add any Sphinx extension module names here, as strings. They can be
36 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
37 # ones.
38 extensions = [
39 "sphinx.ext.intersphinx",
40 "sphinx.ext.autodoc",
41 "sphinx.ext.viewcode",
42 "sphinx.ext.napoleon",
43 "sphinx.ext.doctest",
44 "recommonmark",
45 ]
46
47 intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ["_templates"]
51
52 # List of patterns, relative to source directory, that match files and
53 # directories to ignore when looking for source files.
54 # This pattern also affects html_static_path and html_extra_path.
55 exclude_patterns = []
56
57 # Enable markdown
58 source_suffix = {
59 ".rst": "restructuredtext",
60 ".txt": "markdown",
61 ".md": "markdown",
62 }
63
64
65 # -- Options for HTML output -------------------------------------------------
66
67 # The theme to use for HTML and HTML Help pages. See the documentation for
68 # a list of builtin themes.
69 #
70 html_theme = "sphinx_rtd_theme"
71
72 html_context = {
73 "github_user": "intel",
74 "github_repo": "dffml",
75 "github_version": "master",
76 "conf_py_path": "/docs/",
77 "display_github": True,
78 }
79
80 html_theme_options = {
81 "description": "The fastest path to machine learning integration",
82 "github_url": "https://github.com/intel/dffml/",
83 }
84
85 # Add any paths that contain custom static files (such as style sheets) here,
86 # relative to this directory. They are copied after the builtin static files,
87 # so a file named "default.css" will overwrite the builtin "default.css".
88 html_static_path = ["_static"]
89
90 # -- Extension configuration -------------------------------------------------
91
92 napoleon_numpy_docstring = True
93
94 doctest_global_setup = (
95 pathlib.Path(__file__).parent / "doctest_header.py"
96 ).read_text()
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -87,6 +87,11 @@
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
+
+def setup(app):
+ app.add_javascript("copybutton.js")
+
+
# -- Extension configuration -------------------------------------------------
napoleon_numpy_docstring = True
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -87,6 +87,11 @@\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n html_static_path = [\"_static\"]\n \n+\n+def setup(app):\n+ app.add_javascript(\"copybutton.js\")\n+\n+\n # -- Extension configuration -------------------------------------------------\n \n napoleon_numpy_docstring = True\n", "issue": "docs: Enable hiding of Python prompts\nThis will be very helpful for copy pasting examples.\r\n\r\nReferences:\r\n- https://github.com/readthedocs/sphinx_rtd_theme/issues/167\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport pathlib\n\nsys.path.insert(0, os.path.abspath(\".\"))\nfrom dffml.version import VERSION\n\n# -- Project information -----------------------------------------------------\n\nproject = \"DFFML\"\ncopyright = \"2019, Intel\"\nauthor = \"John Andersen\"\n\n# The short X.Y version\nversion = VERSION\n\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.doctest\",\n \"recommonmark\",\n]\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n# Enable markdown\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"markdown\",\n \".md\": \"markdown\",\n}\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\nhtml_context = {\n \"github_user\": \"intel\",\n \"github_repo\": \"dffml\",\n \"github_version\": \"master\",\n \"conf_py_path\": \"/docs/\",\n \"display_github\": True,\n}\n\nhtml_theme_options = {\n \"description\": \"The fastest path to machine learning integration\",\n \"github_url\": \"https://github.com/intel/dffml/\",\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# -- Extension configuration -------------------------------------------------\n\nnapoleon_numpy_docstring = True\n\ndoctest_global_setup = (\n pathlib.Path(__file__).parent / \"doctest_header.py\"\n).read_text()\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport pathlib\n\nsys.path.insert(0, os.path.abspath(\".\"))\nfrom dffml.version import VERSION\n\n# -- Project information -----------------------------------------------------\n\nproject = \"DFFML\"\ncopyright = \"2019, Intel\"\nauthor = \"John Andersen\"\n\n# The short X.Y version\nversion = VERSION\n\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.doctest\",\n \"recommonmark\",\n]\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n# Enable markdown\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"markdown\",\n \".md\": \"markdown\",\n}\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\nhtml_context = {\n \"github_user\": \"intel\",\n \"github_repo\": \"dffml\",\n \"github_version\": \"master\",\n \"conf_py_path\": \"/docs/\",\n \"display_github\": True,\n}\n\nhtml_theme_options = {\n \"description\": \"The fastest path to machine learning integration\",\n \"github_url\": \"https://github.com/intel/dffml/\",\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef setup(app):\n app.add_javascript(\"copybutton.js\")\n\n\n# -- Extension configuration -------------------------------------------------\n\nnapoleon_numpy_docstring = True\n\ndoctest_global_setup = (\n pathlib.Path(__file__).parent / \"doctest_header.py\"\n).read_text()\n", "path": "docs/conf.py"}]} | 1,122 | 97 |
gh_patches_debug_13871 | rasdani/github-patches | git_diff | scrapy__scrapy-4207 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scrapy does not use a non-zero exit code when pipeline's open_spider throws the exception
<!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs
-->
### Description
In our case, we execute the `scrapy crawl` command in an Airflow task, and the exit code is used to judge whether the task succeeded or failed. I agree that `scrapy crawl` ignores spider exceptions, because they are unpredictable during the crawling process. 

Back to our case: we export data to a file or database in the pipeline, and we create the directory or database connection in `open_spider(self, spider)`. I think that if an exception happens during this function, it is reasonable to propagate a non-zero exit code, because we normally do some initialization in this function.
### Steps to Reproduce
- scrapy startproject test_spider
- cd test_spider
- scrapy genspider example example.com
- modify spiders/example.py to
```
# -*- coding: utf-8 -*-
import scrapy
class ExampleSpider(scrapy.Spider):
name = 'example'
allowed_domains = ['example.com']
start_urls = ['http://example.com/']
custom_settings = {
'ITEM_PIPELINES': {
'test_spider.pipelines.TestSpiderPipeline': 300
}
}
def parse(self, response):
pass
```
- modify pipelines.py to
```
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
class TestSpiderPipeline(object):
def open_spider(self, spider):
raise Exception('error')
def process_item(self, item, spider):
return item
```
- scrapy crawl example
- echo $?
**Expected behavior:** [What you expect to happen]
return non-zero exit code
**Actual behavior:** [What actually happens]
return zero exit code
**Reproduces how often:** [What percentage of the time does it reproduce?]
100%
### Versions
Scrapy : 1.8.0
lxml : 4.3.3.0
libxml2 : 2.9.9
cssselect : 1.0.3
parsel : 1.5.1
w3lib : 1.20.0
Twisted : 19.2.0
Python : 3.7.3 (default, Mar 27 2019, 09:23:39) - [Clang 10.0.0 (clang-1000.11.45.5)]
pyOpenSSL : 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019)
cryptography : 2.6.1
Platform : Darwin-18.5.0-x86_64-i386-64bit
### Additional context
I could get the expected behavior if I change `def run(self, args, opts)` in scrapy/commands/crawl.py to
```
def run(self, args, opts):
if len(args) < 1:
raise UsageError()
elif len(args) > 1:
raise UsageError("running 'scrapy crawl' with more than one spider is no longer supported")
spname = args[0]
res = self.crawler_process.crawl(spname, **opts.spargs)
if hasattr(res, 'result') and res.result is not None and issubclass(res.result.type, Exception):
self.exitcode = 1
else:
self.crawler_process.start()
if self.crawler_process.bootstrap_failed:
self.exitcode = 1
```
original `def run(self, args, opts)`
```
def run(self, args, opts):
if len(args) < 1:
raise UsageError()
elif len(args) > 1:
raise UsageError("running 'scrapy crawl' with more than one spider is no longer supported")
spname = args[0]
self.crawler_process.crawl(spname, **opts.spargs)
self.crawler_process.start()
if self.crawler_process.bootstrap_failed:
self.exitcode = 1
```
Is this the proper way to modify the code to achieve this purpose? If it is, could I open a PR for this issue?
--- END ISSUE ---
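The mechanism the proposed change relies on is worth spelling out: `CrawlerProcess.crawl()` returns a Twisted `Deferred`, and an exception raised in a pipeline's `open_spider` while the engine is starting fires that `Deferred`'s errback with a `Failure` instead of propagating to the caller, which is why inspecting its `result` attribute works. A rough, hedged sketch of that check follows; the settings dict and spider name are placeholders, not part of the original report.

```python
# Hedged sketch: translate an engine-startup failure (e.g. open_spider
# raising) into a non-zero exit code, mirroring the check discussed above.
from twisted.python.failure import Failure
from scrapy.crawler import CrawlerProcess


def crawl_with_exit_code(settings: dict, spider_name: str) -> int:
    process = CrawlerProcess(settings)
    crawl_defer = process.crawl(spider_name)

    if isinstance(getattr(crawl_defer, "result", None), Failure):
        # The Deferred already carries the Failure; engine start-up failed
        # before the reactor was ever started.
        return 1

    process.start()
    return 1 if process.bootstrap_failed else 0
```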
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/crawl.py`
Content:
```
1 import os
2 from scrapy.commands import ScrapyCommand
3 from scrapy.utils.conf import arglist_to_dict
4 from scrapy.utils.python import without_none_values
5 from scrapy.exceptions import UsageError
6
7
8 class Command(ScrapyCommand):
9
10 requires_project = True
11
12 def syntax(self):
13 return "[options] <spider>"
14
15 def short_desc(self):
16 return "Run a spider"
17
18 def add_options(self, parser):
19 ScrapyCommand.add_options(self, parser)
20 parser.add_option("-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE",
21 help="set spider argument (may be repeated)")
22 parser.add_option("-o", "--output", metavar="FILE",
23 help="dump scraped items into FILE (use - for stdout)")
24 parser.add_option("-t", "--output-format", metavar="FORMAT",
25 help="format to use for dumping items with -o")
26
27 def process_options(self, args, opts):
28 ScrapyCommand.process_options(self, args, opts)
29 try:
30 opts.spargs = arglist_to_dict(opts.spargs)
31 except ValueError:
32 raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False)
33 if opts.output:
34 if opts.output == '-':
35 self.settings.set('FEED_URI', 'stdout:', priority='cmdline')
36 else:
37 self.settings.set('FEED_URI', opts.output, priority='cmdline')
38 feed_exporters = without_none_values(
39 self.settings.getwithbase('FEED_EXPORTERS'))
40 valid_output_formats = feed_exporters.keys()
41 if not opts.output_format:
42 opts.output_format = os.path.splitext(opts.output)[1].replace(".", "")
43 if opts.output_format not in valid_output_formats:
44 raise UsageError("Unrecognized output format '%s', set one"
45 " using the '-t' switch or as a file extension"
46 " from the supported list %s" % (opts.output_format,
47 tuple(valid_output_formats)))
48 self.settings.set('FEED_FORMAT', opts.output_format, priority='cmdline')
49
50 def run(self, args, opts):
51 if len(args) < 1:
52 raise UsageError()
53 elif len(args) > 1:
54 raise UsageError("running 'scrapy crawl' with more than one spider is no longer supported")
55 spname = args[0]
56
57 self.crawler_process.crawl(spname, **opts.spargs)
58 self.crawler_process.start()
59
60 if self.crawler_process.bootstrap_failed:
61 self.exitcode = 1
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/commands/crawl.py b/scrapy/commands/crawl.py
--- a/scrapy/commands/crawl.py
+++ b/scrapy/commands/crawl.py
@@ -54,8 +54,13 @@
raise UsageError("running 'scrapy crawl' with more than one spider is no longer supported")
spname = args[0]
- self.crawler_process.crawl(spname, **opts.spargs)
- self.crawler_process.start()
+ crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
- if self.crawler_process.bootstrap_failed:
+ if getattr(crawl_defer, 'result', None) is not None and issubclass(crawl_defer.result.type, Exception):
self.exitcode = 1
+ else:
+ self.crawler_process.start()
+
+ if self.crawler_process.bootstrap_failed or \
+ (hasattr(self.crawler_process, 'has_exception') and self.crawler_process.has_exception):
+ self.exitcode = 1
| {"golden_diff": "diff --git a/scrapy/commands/crawl.py b/scrapy/commands/crawl.py\n--- a/scrapy/commands/crawl.py\n+++ b/scrapy/commands/crawl.py\n@@ -54,8 +54,13 @@\n raise UsageError(\"running 'scrapy crawl' with more than one spider is no longer supported\")\n spname = args[0]\n \n- self.crawler_process.crawl(spname, **opts.spargs)\n- self.crawler_process.start()\n+ crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)\n \n- if self.crawler_process.bootstrap_failed:\n+ if getattr(crawl_defer, 'result', None) is not None and issubclass(crawl_defer.result.type, Exception):\n self.exitcode = 1\n+ else:\n+ self.crawler_process.start()\n+\n+ if self.crawler_process.bootstrap_failed or \\\n+ (hasattr(self.crawler_process, 'has_exception') and self.crawler_process.has_exception):\n+ self.exitcode = 1\n", "issue": "Scrapy does not use a non-zero exit code when pipeline's open_spider throws the exception\n<!--\r\n\r\nThanks for taking an interest in Scrapy!\r\n\r\nIf you have a question that starts with \"How to...\", please see the Scrapy Community page: https://scrapy.org/community/.\r\nThe GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.\r\n\r\nKeep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md\r\n\r\nThe following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs\r\n\r\n-->\r\n\r\n### Description\r\nIn our case, we execute command `scrapy crawl` in airflow task and the exit code would be used to judge this task success or failure. I agree that `scrapy crawl` ignores spider exceptions because it's unpredictable in the crawling process. \r\n\r\nBack to our case, we export data to file or database in the pipeline and we create the directory or database connection in `open_spider(self, spider)`. I think if there is an exception happens during this function, it's reasonable to propagate a non-zero exit code. it because we normally do some initialization in this function.\r\n\r\n### Steps to Reproduce\r\n\r\n- scrapy startproject test_spider\r\n- cd test_spider\r\n- scrapy genspider example example.com\r\n- modify spiders/example.py to \r\n```\r\n# -*- coding: utf-8 -*-\r\nimport scrapy\r\n\r\n\r\nclass ExampleSpider(scrapy.Spider):\r\n name = 'example'\r\n allowed_domains = ['example.com']\r\n start_urls = ['http://example.com/']\r\n\r\n custom_settings = {\r\n 'ITEM_PIPELINES': {\r\n 'test_spider.pipelines.TestSpiderPipeline': 300\r\n }\r\n }\r\n\r\n def parse(self, response):\r\n pass\r\n```\r\n- modify pipelines.py to \r\n```\r\n# -*- coding: utf-8 -*-\r\n\r\n# Define your item pipelines here\r\n#\r\n# Don't forget to add your pipeline to the ITEM_PIPELINES setting\r\n# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html\r\n\r\n\r\nclass TestSpiderPipeline(object):\r\n\r\n def open_spider(self, spider):\r\n raise Exception('error')\r\n\r\n def process_item(self, item, spider):\r\n return item\r\n```\r\n- scrapy crawl example\r\n- echo $? 
\r\n\r\n**Expected behavior:** [What you expect to happen]\r\nreturn non-zero exit code\r\n\r\n**Actual behavior:** [What actually happens]\r\nreturn zero exit code\r\n\r\n**Reproduces how often:** [What percentage of the time does it reproduce?]\r\n100%\r\n\r\n### Versions\r\nScrapy : 1.8.0\r\nlxml : 4.3.3.0\r\nlibxml2 : 2.9.9\r\ncssselect : 1.0.3\r\nparsel : 1.5.1\r\nw3lib : 1.20.0\r\nTwisted : 19.2.0\r\nPython : 3.7.3 (default, Mar 27 2019, 09:23:39) - [Clang 10.0.0 (clang-1000.11.45.5)]\r\npyOpenSSL : 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019)\r\ncryptography : 2.6.1\r\nPlatform : Darwin-18.5.0-x86_64-i386-64bit\r\n\r\n### Additional context\r\n\r\nI could get the expected behavior if I change `def run(self, args, opts)` in scrapy/commands/crawl.py to \r\n```\r\n def run(self, args, opts):\r\n if len(args) < 1:\r\n raise UsageError()\r\n elif len(args) > 1:\r\n raise UsageError(\"running 'scrapy crawl' with more than one spider is no longer supported\")\r\n spname = args[0]\r\n\r\n res = self.crawler_process.crawl(spname, **opts.spargs)\r\n\r\n if hasattr(res, 'result') and res.result is not None and issubclass(res.result.type, Exception):\r\n self.exitcode = 1\r\n else:\r\n self.crawler_process.start()\r\n\r\n if self.crawler_process.bootstrap_failed:\r\n self.exitcode = 1\r\n```\r\noriginal `def run(self, args, opts)`\r\n```\r\n def run(self, args, opts):\r\n if len(args) < 1:\r\n raise UsageError()\r\n elif len(args) > 1:\r\n raise UsageError(\"running 'scrapy crawl' with more than one spider is no longer supported\")\r\n spname = args[0]\r\n\r\n self.crawler_process.crawl(spname, **opts.spargs)\r\n self.crawler_process.start()\r\n\r\n if self.crawler_process.bootstrap_failed:\r\n self.exitcode = 1\r\n```\r\n\r\nIs it the proper way to modify the code for achieving this purpose? 
if it is, could I create a PR request for this issue?\r\n\n", "before_files": [{"content": "import os\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.conf import arglist_to_dict\nfrom scrapy.utils.python import without_none_values\nfrom scrapy.exceptions import UsageError\n\n\nclass Command(ScrapyCommand):\n\n requires_project = True\n\n def syntax(self):\n return \"[options] <spider>\"\n\n def short_desc(self):\n return \"Run a spider\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"-a\", dest=\"spargs\", action=\"append\", default=[], metavar=\"NAME=VALUE\",\n help=\"set spider argument (may be repeated)\")\n parser.add_option(\"-o\", \"--output\", metavar=\"FILE\",\n help=\"dump scraped items into FILE (use - for stdout)\")\n parser.add_option(\"-t\", \"--output-format\", metavar=\"FORMAT\",\n help=\"format to use for dumping items with -o\")\n\n def process_options(self, args, opts):\n ScrapyCommand.process_options(self, args, opts)\n try:\n opts.spargs = arglist_to_dict(opts.spargs)\n except ValueError:\n raise UsageError(\"Invalid -a value, use -a NAME=VALUE\", print_help=False)\n if opts.output:\n if opts.output == '-':\n self.settings.set('FEED_URI', 'stdout:', priority='cmdline')\n else:\n self.settings.set('FEED_URI', opts.output, priority='cmdline')\n feed_exporters = without_none_values(\n self.settings.getwithbase('FEED_EXPORTERS'))\n valid_output_formats = feed_exporters.keys()\n if not opts.output_format:\n opts.output_format = os.path.splitext(opts.output)[1].replace(\".\", \"\")\n if opts.output_format not in valid_output_formats:\n raise UsageError(\"Unrecognized output format '%s', set one\"\n \" using the '-t' switch or as a file extension\"\n \" from the supported list %s\" % (opts.output_format,\n tuple(valid_output_formats)))\n self.settings.set('FEED_FORMAT', opts.output_format, priority='cmdline')\n\n def run(self, args, opts):\n if len(args) < 1:\n raise UsageError()\n elif len(args) > 1:\n raise UsageError(\"running 'scrapy crawl' with more than one spider is no longer supported\")\n spname = args[0]\n\n self.crawler_process.crawl(spname, **opts.spargs)\n self.crawler_process.start()\n\n if self.crawler_process.bootstrap_failed:\n self.exitcode = 1\n", "path": "scrapy/commands/crawl.py"}], "after_files": [{"content": "import os\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.conf import arglist_to_dict\nfrom scrapy.utils.python import without_none_values\nfrom scrapy.exceptions import UsageError\n\n\nclass Command(ScrapyCommand):\n\n requires_project = True\n\n def syntax(self):\n return \"[options] <spider>\"\n\n def short_desc(self):\n return \"Run a spider\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"-a\", dest=\"spargs\", action=\"append\", default=[], metavar=\"NAME=VALUE\",\n help=\"set spider argument (may be repeated)\")\n parser.add_option(\"-o\", \"--output\", metavar=\"FILE\",\n help=\"dump scraped items into FILE (use - for stdout)\")\n parser.add_option(\"-t\", \"--output-format\", metavar=\"FORMAT\",\n help=\"format to use for dumping items with -o\")\n\n def process_options(self, args, opts):\n ScrapyCommand.process_options(self, args, opts)\n try:\n opts.spargs = arglist_to_dict(opts.spargs)\n except ValueError:\n raise UsageError(\"Invalid -a value, use -a NAME=VALUE\", print_help=False)\n if opts.output:\n if opts.output == '-':\n self.settings.set('FEED_URI', 'stdout:', priority='cmdline')\n else:\n 
self.settings.set('FEED_URI', opts.output, priority='cmdline')\n feed_exporters = without_none_values(\n self.settings.getwithbase('FEED_EXPORTERS'))\n valid_output_formats = feed_exporters.keys()\n if not opts.output_format:\n opts.output_format = os.path.splitext(opts.output)[1].replace(\".\", \"\")\n if opts.output_format not in valid_output_formats:\n raise UsageError(\"Unrecognized output format '%s', set one\"\n \" using the '-t' switch or as a file extension\"\n \" from the supported list %s\" % (opts.output_format,\n tuple(valid_output_formats)))\n self.settings.set('FEED_FORMAT', opts.output_format, priority='cmdline')\n\n def run(self, args, opts):\n if len(args) < 1:\n raise UsageError()\n elif len(args) > 1:\n raise UsageError(\"running 'scrapy crawl' with more than one spider is no longer supported\")\n spname = args[0]\n\n crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)\n\n if getattr(crawl_defer, 'result', None) is not None and issubclass(crawl_defer.result.type, Exception):\n self.exitcode = 1\n else:\n self.crawler_process.start()\n\n if self.crawler_process.bootstrap_failed or \\\n (hasattr(self.crawler_process, 'has_exception') and self.crawler_process.has_exception):\n self.exitcode = 1\n", "path": "scrapy/commands/crawl.py"}]} | 2,016 | 230 |
gh_patches_debug_30396 | rasdani/github-patches | git_diff | meltano__meltano-6351 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove run 'preview' references
As of v2.0.0, `meltano run` is no longer a preview feature. We can therefore remove the remaining references to `meltano run` being in 'preview' from the CLI.
<img width="762" alt="Screenshot 2022-07-05 at 10 49 34" src="https://user-images.githubusercontent.com/5585874/177345173-62a09d70-b72b-49ef-b644-a6d16275394f.png">
--- END ISSUE ---
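For reference, both strings live on the click command definition: `short_help` is what shows up in the top-level command listing (e.g. `meltano --help`), while the docstring becomes the full `meltano run --help` text, so the fix is purely textual. A simplified illustration of what the decorator looks like after the change (options, arguments and the real callback signature omitted, so this is only a sketch):

```python
import click


@click.command(short_help="Run a set of plugins in series.")
def run():
    """Run a set of command blocks in series."""
```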
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/cli/run.py`
Content:
```
1 """meltano run command and supporting functions."""
2 from __future__ import annotations
3
4 import click
5 import structlog
6
7 from meltano.core.block.blockset import BlockSet
8 from meltano.core.block.parser import BlockParser, validate_block_sets
9 from meltano.core.block.plugin_command import PluginCommandBlock
10 from meltano.core.legacy_tracking import LegacyTracker
11 from meltano.core.logging.utils import change_console_log_level
12 from meltano.core.project import Project
13 from meltano.core.project_settings_service import ProjectSettingsService
14 from meltano.core.runner import RunnerError
15 from meltano.core.tracking import BlockEvents, CliEvent, Tracker
16 from meltano.core.tracking.contexts.plugins import PluginsTrackingContext
17 from meltano.core.utils import click_run_async
18
19 from . import CliError, cli
20 from .params import pass_project
21 from .utils import InstrumentedCmd
22
23 logger = structlog.getLogger(__name__)
24
25
26 @cli.command(
27 cls=InstrumentedCmd, short_help="[preview] Run a set of plugins in series."
28 )
29 @click.option(
30 "--dry-run",
31 help="Do not run, just parse the invocation, validate it, and explain what would be executed.",
32 is_flag=True,
33 )
34 @click.option(
35 "--full-refresh",
36 help="Perform a full refresh (ignore state left behind by any previous runs). Applies to all pipelines.",
37 is_flag=True,
38 )
39 @click.option(
40 "--no-state-update",
41 help="Run without state saving. Applies to all pipelines.",
42 is_flag=True,
43 )
44 @click.option(
45 "--force",
46 "-f",
47 help="Force a new run even if a pipeline with the same State ID is already present. Applies to all pipelines.",
48 is_flag=True,
49 )
50 @click.argument(
51 "blocks",
52 nargs=-1,
53 )
54 @pass_project(migrate=True)
55 @click.pass_context
56 @click_run_async
57 async def run(
58 ctx: click.Context,
59 project: Project,
60 dry_run: bool,
61 full_refresh: bool,
62 no_state_update: bool,
63 force: bool,
64 blocks: list[str],
65 ):
66 """
67 Run a set of command blocks in series.
68
69 Blocks are specified as a list of plugin names, e.g.
70 `meltano run some_extractor some_loader some_plugin:some_command` and are run in the order they are specified
71 from left to right. A failure in any block will cause the entire run to abort.
72
73 Multiple commmand blocks can be chained together or repeated, and tap/target pairs will automatically be linked:
74
75 `meltano run tap-gitlab target-postgres dbt:test dbt:run`\n
76 `meltano run tap-gitlab target-postgres tap-salesforce target-mysql ...`\n
77 `meltano run tap-gitlab target-postgres dbt:run tap-postgres target-bigquery ...`\n
78
79 When running within an active environment, meltano run activates incremental job support. State ID's are autogenerated
80 using the format `{active_environment.name}:{extractor_name}-to-{loader_name}` for each extract/load pair found:
81
82 `meltano --environment=prod run tap-gitlab target-postgres tap-salesforce target-mysql`\n
83
84 The above command will create two jobs with state IDs `prod:tap-gitlab-to-target-postgres` and `prod:tap-salesforce-to-target-mysql`.
85
86 This a preview feature - its functionality and cli signature is still evolving.
87
88 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#run
89 """
90 if dry_run:
91 if not ProjectSettingsService.config_override.get("cli.log_level"):
92 logger.info("Setting 'console' handler log level to 'debug' for dry run")
93 change_console_log_level()
94
95 tracker: Tracker = ctx.obj["tracker"]
96 legacy_tracker: LegacyTracker = ctx.obj["legacy_tracker"]
97
98 parser_blocks = [] # noqa: F841
99 try:
100 parser = BlockParser(
101 logger, project, blocks, full_refresh, no_state_update, force
102 )
103 parsed_blocks = list(parser.find_blocks(0))
104 if not parsed_blocks:
105 tracker.track_command_event(CliEvent.aborted)
106 logger.info("No valid blocks found.")
107 return
108 except Exception as parser_err:
109 tracker.track_command_event(CliEvent.aborted)
110 raise parser_err
111
112 if validate_block_sets(logger, parsed_blocks):
113 logger.debug("All ExtractLoadBlocks validated, starting execution.")
114 else:
115 tracker.track_command_event(CliEvent.aborted)
116 raise CliError("Some ExtractLoadBlocks set failed validation.")
117 try:
118 await _run_blocks(tracker, parsed_blocks, dry_run=dry_run)
119 except Exception as err:
120 tracker.track_command_event(CliEvent.failed)
121 raise err
122 tracker.track_command_event(CliEvent.completed)
123 legacy_tracker.track_meltano_run(blocks)
124
125
126 async def _run_blocks(
127 tracker: Tracker,
128 parsed_blocks: list[BlockSet | PluginCommandBlock],
129 dry_run: bool,
130 ) -> None:
131 for idx, blk in enumerate(parsed_blocks):
132 blk_name = blk.__class__.__name__
133 tracking_ctx = PluginsTrackingContext.from_block(blk)
134 with tracker.with_contexts(tracking_ctx):
135 tracker.track_block_event(blk_name, BlockEvents.initialized)
136 if dry_run:
137 if isinstance(blk, BlockSet):
138 logger.info(
139 f"Dry run, but would have run block {idx + 1}/{len(parsed_blocks)}.",
140 block_type=blk_name,
141 comprised_of=[plugin.string_id for plugin in blk.blocks],
142 )
143 elif isinstance(blk, PluginCommandBlock):
144 logger.info(
145 f"Dry run, but would have run block {idx + 1}/{len(parsed_blocks)}.",
146 block_type=blk_name,
147 comprised_of=f"{blk.string_id}:{blk.command}",
148 )
149 continue
150
151 try:
152 await blk.run()
153 except RunnerError as err:
154 logger.error(
155 "Block run completed.",
156 set_number=idx,
157 block_type=blk_name,
158 success=False,
159 err=err,
160 exit_codes=err.exitcodes,
161 )
162 with tracker.with_contexts(tracking_ctx):
163 tracker.track_block_event(blk_name, BlockEvents.failed)
164 raise CliError(
165 f"Run invocation could not be completed as block failed: {err}"
166 ) from err
167 except Exception as bare_err: # make sure we also fire block failed events for all other exceptions
168 with tracker.with_contexts(tracking_ctx):
169 tracker.track_block_event(blk_name, BlockEvents.failed)
170 raise bare_err
171
172 logger.info(
173 "Block run completed.",
174 set_number=idx,
175 block_type=blk.__class__.__name__,
176 success=True,
177 err=None,
178 )
179 with tracker.with_contexts(tracking_ctx):
180 tracker.track_block_event(blk_name, BlockEvents.completed)
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/meltano/cli/run.py b/src/meltano/cli/run.py
--- a/src/meltano/cli/run.py
+++ b/src/meltano/cli/run.py
@@ -23,9 +23,7 @@
logger = structlog.getLogger(__name__)
-@cli.command(
- cls=InstrumentedCmd, short_help="[preview] Run a set of plugins in series."
-)
+@cli.command(cls=InstrumentedCmd, short_help="Run a set of plugins in series.")
@click.option(
"--dry-run",
help="Do not run, just parse the invocation, validate it, and explain what would be executed.",
@@ -70,7 +68,7 @@
`meltano run some_extractor some_loader some_plugin:some_command` and are run in the order they are specified
from left to right. A failure in any block will cause the entire run to abort.
- Multiple commmand blocks can be chained together or repeated, and tap/target pairs will automatically be linked:
+ Multiple command blocks can be chained together or repeated, and tap/target pairs will automatically be linked:
`meltano run tap-gitlab target-postgres dbt:test dbt:run`\n
`meltano run tap-gitlab target-postgres tap-salesforce target-mysql ...`\n
@@ -83,8 +81,6 @@
The above command will create two jobs with state IDs `prod:tap-gitlab-to-target-postgres` and `prod:tap-salesforce-to-target-mysql`.
- This a preview feature - its functionality and cli signature is still evolving.
-
\b\nRead more at https://docs.meltano.com/reference/command-line-interface#run
"""
if dry_run:
| {"golden_diff": "diff --git a/src/meltano/cli/run.py b/src/meltano/cli/run.py\n--- a/src/meltano/cli/run.py\n+++ b/src/meltano/cli/run.py\n@@ -23,9 +23,7 @@\n logger = structlog.getLogger(__name__)\n \n \[email protected](\n- cls=InstrumentedCmd, short_help=\"[preview] Run a set of plugins in series.\"\n-)\[email protected](cls=InstrumentedCmd, short_help=\"Run a set of plugins in series.\")\n @click.option(\n \"--dry-run\",\n help=\"Do not run, just parse the invocation, validate it, and explain what would be executed.\",\n@@ -70,7 +68,7 @@\n `meltano run some_extractor some_loader some_plugin:some_command` and are run in the order they are specified\n from left to right. A failure in any block will cause the entire run to abort.\n \n- Multiple commmand blocks can be chained together or repeated, and tap/target pairs will automatically be linked:\n+ Multiple command blocks can be chained together or repeated, and tap/target pairs will automatically be linked:\n \n `meltano run tap-gitlab target-postgres dbt:test dbt:run`\\n\n `meltano run tap-gitlab target-postgres tap-salesforce target-mysql ...`\\n\n@@ -83,8 +81,6 @@\n \n The above command will create two jobs with state IDs `prod:tap-gitlab-to-target-postgres` and `prod:tap-salesforce-to-target-mysql`.\n \n- This a preview feature - its functionality and cli signature is still evolving.\n-\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#run\n \"\"\"\n if dry_run:\n", "issue": "Remove run 'preview' references\nAs of v2.0.0, `meltano run` is no longer a preview feature. We can therefore remove the remaining references to `meltano run` being in 'preview' from the CLI.\r\n\r\n<img width=\"762\" alt=\"Screenshot 2022-07-05 at 10 49 34\" src=\"https://user-images.githubusercontent.com/5585874/177345173-62a09d70-b72b-49ef-b644-a6d16275394f.png\">\n", "before_files": [{"content": "\"\"\"meltano run command and supporting functions.\"\"\"\nfrom __future__ import annotations\n\nimport click\nimport structlog\n\nfrom meltano.core.block.blockset import BlockSet\nfrom meltano.core.block.parser import BlockParser, validate_block_sets\nfrom meltano.core.block.plugin_command import PluginCommandBlock\nfrom meltano.core.legacy_tracking import LegacyTracker\nfrom meltano.core.logging.utils import change_console_log_level\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.runner import RunnerError\nfrom meltano.core.tracking import BlockEvents, CliEvent, Tracker\nfrom meltano.core.tracking.contexts.plugins import PluginsTrackingContext\nfrom meltano.core.utils import click_run_async\n\nfrom . import CliError, cli\nfrom .params import pass_project\nfrom .utils import InstrumentedCmd\n\nlogger = structlog.getLogger(__name__)\n\n\[email protected](\n cls=InstrumentedCmd, short_help=\"[preview] Run a set of plugins in series.\"\n)\[email protected](\n \"--dry-run\",\n help=\"Do not run, just parse the invocation, validate it, and explain what would be executed.\",\n is_flag=True,\n)\[email protected](\n \"--full-refresh\",\n help=\"Perform a full refresh (ignore state left behind by any previous runs). Applies to all pipelines.\",\n is_flag=True,\n)\[email protected](\n \"--no-state-update\",\n help=\"Run without state saving. Applies to all pipelines.\",\n is_flag=True,\n)\[email protected](\n \"--force\",\n \"-f\",\n help=\"Force a new run even if a pipeline with the same State ID is already present. 
Applies to all pipelines.\",\n is_flag=True,\n)\[email protected](\n \"blocks\",\n nargs=-1,\n)\n@pass_project(migrate=True)\[email protected]_context\n@click_run_async\nasync def run(\n ctx: click.Context,\n project: Project,\n dry_run: bool,\n full_refresh: bool,\n no_state_update: bool,\n force: bool,\n blocks: list[str],\n):\n \"\"\"\n Run a set of command blocks in series.\n\n Blocks are specified as a list of plugin names, e.g.\n `meltano run some_extractor some_loader some_plugin:some_command` and are run in the order they are specified\n from left to right. A failure in any block will cause the entire run to abort.\n\n Multiple commmand blocks can be chained together or repeated, and tap/target pairs will automatically be linked:\n\n `meltano run tap-gitlab target-postgres dbt:test dbt:run`\\n\n `meltano run tap-gitlab target-postgres tap-salesforce target-mysql ...`\\n\n `meltano run tap-gitlab target-postgres dbt:run tap-postgres target-bigquery ...`\\n\n\n When running within an active environment, meltano run activates incremental job support. State ID's are autogenerated\n using the format `{active_environment.name}:{extractor_name}-to-{loader_name}` for each extract/load pair found:\n\n `meltano --environment=prod run tap-gitlab target-postgres tap-salesforce target-mysql`\\n\n\n The above command will create two jobs with state IDs `prod:tap-gitlab-to-target-postgres` and `prod:tap-salesforce-to-target-mysql`.\n\n This a preview feature - its functionality and cli signature is still evolving.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#run\n \"\"\"\n if dry_run:\n if not ProjectSettingsService.config_override.get(\"cli.log_level\"):\n logger.info(\"Setting 'console' handler log level to 'debug' for dry run\")\n change_console_log_level()\n\n tracker: Tracker = ctx.obj[\"tracker\"]\n legacy_tracker: LegacyTracker = ctx.obj[\"legacy_tracker\"]\n\n parser_blocks = [] # noqa: F841\n try:\n parser = BlockParser(\n logger, project, blocks, full_refresh, no_state_update, force\n )\n parsed_blocks = list(parser.find_blocks(0))\n if not parsed_blocks:\n tracker.track_command_event(CliEvent.aborted)\n logger.info(\"No valid blocks found.\")\n return\n except Exception as parser_err:\n tracker.track_command_event(CliEvent.aborted)\n raise parser_err\n\n if validate_block_sets(logger, parsed_blocks):\n logger.debug(\"All ExtractLoadBlocks validated, starting execution.\")\n else:\n tracker.track_command_event(CliEvent.aborted)\n raise CliError(\"Some ExtractLoadBlocks set failed validation.\")\n try:\n await _run_blocks(tracker, parsed_blocks, dry_run=dry_run)\n except Exception as err:\n tracker.track_command_event(CliEvent.failed)\n raise err\n tracker.track_command_event(CliEvent.completed)\n legacy_tracker.track_meltano_run(blocks)\n\n\nasync def _run_blocks(\n tracker: Tracker,\n parsed_blocks: list[BlockSet | PluginCommandBlock],\n dry_run: bool,\n) -> None:\n for idx, blk in enumerate(parsed_blocks):\n blk_name = blk.__class__.__name__\n tracking_ctx = PluginsTrackingContext.from_block(blk)\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.initialized)\n if dry_run:\n if isinstance(blk, BlockSet):\n logger.info(\n f\"Dry run, but would have run block {idx + 1}/{len(parsed_blocks)}.\",\n block_type=blk_name,\n comprised_of=[plugin.string_id for plugin in blk.blocks],\n )\n elif isinstance(blk, PluginCommandBlock):\n logger.info(\n f\"Dry run, but would have run block {idx + 
1}/{len(parsed_blocks)}.\",\n block_type=blk_name,\n comprised_of=f\"{blk.string_id}:{blk.command}\",\n )\n continue\n\n try:\n await blk.run()\n except RunnerError as err:\n logger.error(\n \"Block run completed.\",\n set_number=idx,\n block_type=blk_name,\n success=False,\n err=err,\n exit_codes=err.exitcodes,\n )\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.failed)\n raise CliError(\n f\"Run invocation could not be completed as block failed: {err}\"\n ) from err\n except Exception as bare_err: # make sure we also fire block failed events for all other exceptions\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.failed)\n raise bare_err\n\n logger.info(\n \"Block run completed.\",\n set_number=idx,\n block_type=blk.__class__.__name__,\n success=True,\n err=None,\n )\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.completed)\n", "path": "src/meltano/cli/run.py"}], "after_files": [{"content": "\"\"\"meltano run command and supporting functions.\"\"\"\nfrom __future__ import annotations\n\nimport click\nimport structlog\n\nfrom meltano.core.block.blockset import BlockSet\nfrom meltano.core.block.parser import BlockParser, validate_block_sets\nfrom meltano.core.block.plugin_command import PluginCommandBlock\nfrom meltano.core.legacy_tracking import LegacyTracker\nfrom meltano.core.logging.utils import change_console_log_level\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.runner import RunnerError\nfrom meltano.core.tracking import BlockEvents, CliEvent, Tracker\nfrom meltano.core.tracking.contexts.plugins import PluginsTrackingContext\nfrom meltano.core.utils import click_run_async\n\nfrom . import CliError, cli\nfrom .params import pass_project\nfrom .utils import InstrumentedCmd\n\nlogger = structlog.getLogger(__name__)\n\n\[email protected](cls=InstrumentedCmd, short_help=\"Run a set of plugins in series.\")\[email protected](\n \"--dry-run\",\n help=\"Do not run, just parse the invocation, validate it, and explain what would be executed.\",\n is_flag=True,\n)\[email protected](\n \"--full-refresh\",\n help=\"Perform a full refresh (ignore state left behind by any previous runs). Applies to all pipelines.\",\n is_flag=True,\n)\[email protected](\n \"--no-state-update\",\n help=\"Run without state saving. Applies to all pipelines.\",\n is_flag=True,\n)\[email protected](\n \"--force\",\n \"-f\",\n help=\"Force a new run even if a pipeline with the same State ID is already present. Applies to all pipelines.\",\n is_flag=True,\n)\[email protected](\n \"blocks\",\n nargs=-1,\n)\n@pass_project(migrate=True)\[email protected]_context\n@click_run_async\nasync def run(\n ctx: click.Context,\n project: Project,\n dry_run: bool,\n full_refresh: bool,\n no_state_update: bool,\n force: bool,\n blocks: list[str],\n):\n \"\"\"\n Run a set of command blocks in series.\n\n Blocks are specified as a list of plugin names, e.g.\n `meltano run some_extractor some_loader some_plugin:some_command` and are run in the order they are specified\n from left to right. 
A failure in any block will cause the entire run to abort.\n\n Multiple command blocks can be chained together or repeated, and tap/target pairs will automatically be linked:\n\n `meltano run tap-gitlab target-postgres dbt:test dbt:run`\\n\n `meltano run tap-gitlab target-postgres tap-salesforce target-mysql ...`\\n\n `meltano run tap-gitlab target-postgres dbt:run tap-postgres target-bigquery ...`\\n\n\n When running within an active environment, meltano run activates incremental job support. State ID's are autogenerated\n using the format `{active_environment.name}:{extractor_name}-to-{loader_name}` for each extract/load pair found:\n\n `meltano --environment=prod run tap-gitlab target-postgres tap-salesforce target-mysql`\\n\n\n The above command will create two jobs with state IDs `prod:tap-gitlab-to-target-postgres` and `prod:tap-salesforce-to-target-mysql`.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#run\n \"\"\"\n if dry_run:\n if not ProjectSettingsService.config_override.get(\"cli.log_level\"):\n logger.info(\"Setting 'console' handler log level to 'debug' for dry run\")\n change_console_log_level()\n\n tracker: Tracker = ctx.obj[\"tracker\"]\n legacy_tracker: LegacyTracker = ctx.obj[\"legacy_tracker\"]\n\n parser_blocks = [] # noqa: F841\n try:\n parser = BlockParser(\n logger, project, blocks, full_refresh, no_state_update, force\n )\n parsed_blocks = list(parser.find_blocks(0))\n if not parsed_blocks:\n tracker.track_command_event(CliEvent.aborted)\n logger.info(\"No valid blocks found.\")\n return\n except Exception as parser_err:\n tracker.track_command_event(CliEvent.aborted)\n raise parser_err\n\n if validate_block_sets(logger, parsed_blocks):\n logger.debug(\"All ExtractLoadBlocks validated, starting execution.\")\n else:\n tracker.track_command_event(CliEvent.aborted)\n raise CliError(\"Some ExtractLoadBlocks set failed validation.\")\n try:\n await _run_blocks(tracker, parsed_blocks, dry_run=dry_run)\n except Exception as err:\n tracker.track_command_event(CliEvent.failed)\n raise err\n tracker.track_command_event(CliEvent.completed)\n legacy_tracker.track_meltano_run(blocks)\n\n\nasync def _run_blocks(\n tracker: Tracker,\n parsed_blocks: list[BlockSet | PluginCommandBlock],\n dry_run: bool,\n) -> None:\n for idx, blk in enumerate(parsed_blocks):\n blk_name = blk.__class__.__name__\n tracking_ctx = PluginsTrackingContext.from_block(blk)\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.initialized)\n if dry_run:\n if isinstance(blk, BlockSet):\n logger.info(\n f\"Dry run, but would have run block {idx + 1}/{len(parsed_blocks)}.\",\n block_type=blk_name,\n comprised_of=[plugin.string_id for plugin in blk.blocks],\n )\n elif isinstance(blk, PluginCommandBlock):\n logger.info(\n f\"Dry run, but would have run block {idx + 1}/{len(parsed_blocks)}.\",\n block_type=blk_name,\n comprised_of=f\"{blk.string_id}:{blk.command}\",\n )\n continue\n\n try:\n await blk.run()\n except RunnerError as err:\n logger.error(\n \"Block run completed.\",\n set_number=idx,\n block_type=blk_name,\n success=False,\n err=err,\n exit_codes=err.exitcodes,\n )\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.failed)\n raise CliError(\n f\"Run invocation could not be completed as block failed: {err}\"\n ) from err\n except Exception as bare_err: # make sure we also fire block failed events for all other exceptions\n with tracker.with_contexts(tracking_ctx):\n 
tracker.track_block_event(blk_name, BlockEvents.failed)\n raise bare_err\n\n logger.info(\n \"Block run completed.\",\n set_number=idx,\n block_type=blk.__class__.__name__,\n success=True,\n err=None,\n )\n with tracker.with_contexts(tracking_ctx):\n tracker.track_block_event(blk_name, BlockEvents.completed)\n", "path": "src/meltano/cli/run.py"}]} | 2,308 | 380 |
gh_patches_debug_9519 | rasdani/github-patches | git_diff | huggingface__transformers-5082 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Perform evaluation on HANS with Trainer (like GLUE example)
The current [HANS](https://github.com/huggingface/transformers/tree/master/examples/adversarial) evaluation implementation is carried out in the old way. It would be good to implement it in the same manner as the other examples, which now use the Trainer class.
--- END ISSUE ---
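In Trainer terms, the GLUE-style flow boils down to building the HANS dataset, handing it to `Trainer`, and using `Trainer.predict` to dump per-example predictions for the official HANS evaluation script. A rough sketch of the evaluation half only is shown below; `trainer`, `eval_dataset`, `label_list` and `training_args` are assumed to be set up as in the script that follows, and the output format and file name are assumptions rather than the exact merged code.

```python
# Sketch of the evaluation step: write per-example predictions so the
# official HANS heuristic-analysis script can consume them.
import os

import numpy as np

output = trainer.predict(eval_dataset)           # PredictionOutput
preds = np.argmax(output.predictions, axis=1)    # predicted label ids
pair_ids = [feature.pairID for feature in eval_dataset]  # assumes pairID is kept

pred_file = os.path.join(training_args.output_dir, "hans_predictions.txt")
with open(pred_file, "w") as writer:
    writer.write("pairID,gold_label\n")
    for pid, pred in zip(pair_ids, preds):
        writer.write(f"ex{pid},{label_list[int(pred)]}\n")
```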
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/adversarial/run_hans.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
3 # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 """ Finetuning the library models for sequence classification on HANS."""
17
18 import logging
19 import os
20 from dataclasses import dataclass, field
21 from typing import Dict, List, Optional
22
23 import numpy as np
24 import torch
25
26 from transformers import (
27 AutoConfig,
28 AutoModelForSequenceClassification,
29 AutoTokenizer,
30 HfArgumentParser,
31 Trainer,
32 TrainingArguments,
33 default_data_collator,
34 set_seed,
35 )
36 from utils_hans import HansDataset, InputFeatures, hans_processors
37
38
39 logger = logging.getLogger(__name__)
40
41
42 @dataclass
43 class ModelArguments:
44 """
45 Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
46 """
47
48 model_name_or_path: str = field(
49 metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
50 )
51 config_name: Optional[str] = field(
52 default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
53 )
54 tokenizer_name: Optional[str] = field(
55 default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
56 )
57 cache_dir: Optional[str] = field(
58 default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
59 )
60
61
62 @dataclass
63 class DataTrainingArguments:
64 """
65 Arguments pertaining to what data we are going to input our model for training and eval.
66 """
67
68 task_name: str = field(
69 metadata={"help": "The name of the task to train selected in the list: " + ", ".join(hans_processors.keys())}
70 )
71 data_dir: str = field(
72 metadata={"help": "The input data dir. Should contain the .tsv files (or other data files) for the task."}
73 )
74 max_seq_length: int = field(
75 default=128,
76 metadata={
77 "help": "The maximum total input sequence length after tokenization. Sequences longer "
78 "than this will be truncated, sequences shorter will be padded."
79 },
80 )
81 overwrite_cache: bool = field(
82 default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
83 )
84
85
86 def hans_data_collator(features: List[InputFeatures]) -> Dict[str, torch.Tensor]:
87 """
88 Data collator that removes the "pairID" key if present.
89 """
90 batch = default_data_collator(features)
91 _ = batch.pop("pairID", None)
92 return batch
93
94
95 def main():
96 # See all possible arguments in src/transformers/training_args.py
97 # or by passing the --help flag to this script.
98 # We now keep distinct sets of args, for a cleaner separation of concerns.
99
100 parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
101 model_args, data_args, training_args = parser.parse_args_into_dataclasses()
102
103 if (
104 os.path.exists(training_args.output_dir)
105 and os.listdir(training_args.output_dir)
106 and training_args.do_train
107 and not training_args.overwrite_output_dir
108 ):
109 raise ValueError(
110 f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
111 )
112
113 # Setup logging
114 logging.basicConfig(
115 format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
116 datefmt="%m/%d/%Y %H:%M:%S",
117 level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
118 )
119 logger.warning(
120 "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
121 training_args.local_rank,
122 training_args.device,
123 training_args.n_gpu,
124 bool(training_args.local_rank != -1),
125 training_args.fp16,
126 )
127 logger.info("Training/evaluation parameters %s", training_args)
128
129 # Set seed
130 set_seed(training_args.seed)
131
132 try:
133 processor = hans_processors[data_args.task_name]()
134 label_list = processor.get_labels()
135 num_labels = len(label_list)
136 except KeyError:
137 raise ValueError("Task not found: %s" % (data_args.task_name))
138
139 # Load pretrained model and tokenizer
140 #
141 # Distributed training:
142 # The .from_pretrained methods guarantee that only one local process can concurrently
143 # download model & vocab.
144
145 config = AutoConfig.from_pretrained(
146 model_args.config_name if model_args.config_name else model_args.model_name_or_path,
147 num_labels=num_labels,
148 finetuning_task=data_args.task_name,
149 cache_dir=model_args.cache_dir,
150 )
151 tokenizer = AutoTokenizer.from_pretrained(
152 model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
153 cache_dir=model_args.cache_dir,
154 )
155 model = AutoModelForSequenceClassification.from_pretrained(
156 model_args.model_name_or_path,
157 from_tf=bool(".ckpt" in model_args.model_name_or_path),
158 config=config,
159 cache_dir=model_args.cache_dir,
160 )
161
162 # Get datasets
163 train_dataset = (
164 HansDataset(
165 data_dir=data_args.data_dir,
166 tokenizer=tokenizer,
167 task=data_args.task_name,
168 max_seq_length=data_args.max_seq_length,
169 overwrite_cache=data_args.overwrite_cache,
170 )
171 if training_args.do_train
172 else None
173 )
174 eval_dataset = (
175 HansDataset(
176 data_dir=data_args.data_dir,
177 tokenizer=tokenizer,
178 task=data_args.task_name,
179 max_seq_length=data_args.max_seq_length,
180 overwrite_cache=data_args.overwrite_cache,
181 evaluate=True,
182 )
183 if training_args.do_eval
184 else None
185 )
186
187 # Initialize our Trainer
188 trainer = Trainer(
189 model=model,
190 args=training_args,
191 train_dataset=train_dataset,
192 eval_dataset=eval_dataset,
193 data_collator=hans_data_collator,
194 )
195
196 # Training
197 if training_args.do_train:
198 trainer.train(
199 model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
200 )
201 trainer.save_model()
202 # For convenience, we also re-save the tokenizer to the same directory,
203 # so that you can share your model easily on huggingface.co/models =)
204 if trainer.is_world_master():
205 tokenizer.save_pretrained(training_args.output_dir)
206
207 # Evaluation
208 if training_args.do_eval:
209 logger.info("*** Evaluate ***")
210
211 output = trainer.predict(eval_dataset)
212 preds = output.predictions
213 preds = np.argmax(preds, axis=1)
214
215 pair_ids = [ex.pairID for ex in eval_dataset]
216 output_eval_file = os.path.join(training_args.output_dir, "hans_predictions.txt")
217 if trainer.is_world_master():
218 with open(output_eval_file, "w") as writer:
219 for pid, pred in zip(pair_ids, preds):
220 writer.write("ex" + str(pid) + "," + label_list[int(pred)] + "\n")
221
222 trainer._log(output.metrics)
223
224
225 def _mp_fn(index):
226 # For xla_spawn (TPUs)
227 main()
228
229
230 if __name__ == "__main__":
231 main()
232
```
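For reference, the script above is driven by `HfArgumentParser` over the three dataclasses, so an evaluation-only run looks roughly like the following (model name and paths are placeholders):
```
python examples/adversarial/run_hans.py \
  --task_name hans \
  --model_name_or_path bert-base-uncased \
  --data_dir /path/to/hans \
  --max_seq_length 128 \
  --do_eval \
  --output_dir hans_out
```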
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/adversarial/run_hans.py b/examples/adversarial/run_hans.py
--- a/examples/adversarial/run_hans.py
+++ b/examples/adversarial/run_hans.py
@@ -216,6 +216,7 @@
output_eval_file = os.path.join(training_args.output_dir, "hans_predictions.txt")
if trainer.is_world_master():
with open(output_eval_file, "w") as writer:
+ writer.write("pairID,gold_label\n")
for pid, pred in zip(pair_ids, preds):
writer.write("ex" + str(pid) + "," + label_list[int(pred)] + "\n")
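The patch above only prepends a header row, so `hans_predictions.txt` now starts with `pairID,gold_label` followed by one `ex<pairID>,<label>` line per example — presumably the format the official HANS evaluation script expects. A quick sanity check of the patched output (path is a placeholder for the run's `--output_dir`):
```
with open("hans_out/hans_predictions.txt") as f:
    header = f.readline().strip()
    first = f.readline().strip()

assert header == "pairID,gold_label"
assert first.startswith("ex")   # e.g. "ex0,entailment"
```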
| {"golden_diff": "diff --git a/examples/adversarial/run_hans.py b/examples/adversarial/run_hans.py\n--- a/examples/adversarial/run_hans.py\n+++ b/examples/adversarial/run_hans.py\n@@ -216,6 +216,7 @@\n output_eval_file = os.path.join(training_args.output_dir, \"hans_predictions.txt\")\n if trainer.is_world_master():\n with open(output_eval_file, \"w\") as writer:\n+ writer.write(\"pairID,gold_label\\n\")\n for pid, pred in zip(pair_ids, preds):\n writer.write(\"ex\" + str(pid) + \",\" + label_list[int(pred)] + \"\\n\")\n", "issue": "Perform evaluation on HANS with Trainer (like GLUE example)\nCurrent [HANS](https://github.com/huggingface/transformers/tree/master/examples/adversarial) evaluation implementation is carried out in the old way. It'd be good to do it in the same manner as other examples are implemented now with the Trainer class. \r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" Finetuning the library models for sequence classification on HANS.\"\"\"\n\nimport logging\nimport os\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List, Optional\n\nimport numpy as np\nimport torch\n\nfrom transformers import (\n AutoConfig,\n AutoModelForSequenceClassification,\n AutoTokenizer,\n HfArgumentParser,\n Trainer,\n TrainingArguments,\n default_data_collator,\n set_seed,\n)\nfrom utils_hans import HansDataset, InputFeatures, hans_processors\n\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass ModelArguments:\n \"\"\"\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n \"\"\"\n\n model_name_or_path: str = field(\n metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n )\n config_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n )\n tokenizer_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n )\n cache_dir: Optional[str] = field(\n default=None, metadata={\"help\": \"Where do you want to store the pretrained models downloaded from s3\"}\n )\n\n\n@dataclass\nclass DataTrainingArguments:\n \"\"\"\n Arguments pertaining to what data we are going to input our model for training and eval.\n \"\"\"\n\n task_name: str = field(\n metadata={\"help\": \"The name of the task to train selected in the list: \" + \", \".join(hans_processors.keys())}\n )\n data_dir: str = field(\n metadata={\"help\": \"The input data dir. Should contain the .tsv files (or other data files) for the task.\"}\n )\n max_seq_length: int = field(\n default=128,\n metadata={\n \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n \"than this will be truncated, sequences shorter will be padded.\"\n },\n )\n overwrite_cache: bool = field(\n default=False, metadata={\"help\": \"Overwrite the cached training and evaluation sets\"}\n )\n\n\ndef hans_data_collator(features: List[InputFeatures]) -> Dict[str, torch.Tensor]:\n \"\"\"\n Data collator that removes the \"pairID\" key if present.\n \"\"\"\n batch = default_data_collator(features)\n _ = batch.pop(\"pairID\", None)\n return batch\n\n\ndef main():\n # See all possible arguments in src/transformers/training_args.py\n # or by passing the --help flag to this script.\n # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n if (\n os.path.exists(training_args.output_dir)\n and os.listdir(training_args.output_dir)\n and training_args.do_train\n and not training_args.overwrite_output_dir\n ):\n raise ValueError(\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome.\"\n )\n\n # Setup logging\n logging.basicConfig(\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n datefmt=\"%m/%d/%Y %H:%M:%S\",\n level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,\n )\n logger.warning(\n \"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\n training_args.local_rank,\n training_args.device,\n training_args.n_gpu,\n bool(training_args.local_rank != -1),\n training_args.fp16,\n )\n logger.info(\"Training/evaluation parameters %s\", training_args)\n\n # Set seed\n set_seed(training_args.seed)\n\n try:\n processor = hans_processors[data_args.task_name]()\n label_list = processor.get_labels()\n num_labels = len(label_list)\n except KeyError:\n raise ValueError(\"Task not found: %s\" % (data_args.task_name))\n\n # Load pretrained model and tokenizer\n #\n # Distributed training:\n # The .from_pretrained methods guarantee that only one local process can concurrently\n # download model & vocab.\n\n config = AutoConfig.from_pretrained(\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n num_labels=num_labels,\n finetuning_task=data_args.task_name,\n cache_dir=model_args.cache_dir,\n )\n tokenizer = AutoTokenizer.from_pretrained(\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n cache_dir=model_args.cache_dir,\n )\n model = AutoModelForSequenceClassification.from_pretrained(\n model_args.model_name_or_path,\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n config=config,\n cache_dir=model_args.cache_dir,\n )\n\n # Get datasets\n train_dataset = (\n HansDataset(\n data_dir=data_args.data_dir,\n tokenizer=tokenizer,\n task=data_args.task_name,\n max_seq_length=data_args.max_seq_length,\n overwrite_cache=data_args.overwrite_cache,\n )\n if training_args.do_train\n else None\n )\n eval_dataset = (\n HansDataset(\n data_dir=data_args.data_dir,\n tokenizer=tokenizer,\n task=data_args.task_name,\n max_seq_length=data_args.max_seq_length,\n overwrite_cache=data_args.overwrite_cache,\n evaluate=True,\n )\n if training_args.do_eval\n else None\n )\n\n # Initialize our Trainer\n trainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n data_collator=hans_data_collator,\n )\n\n # Training\n if 
training_args.do_train:\n trainer.train(\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\n )\n trainer.save_model()\n # For convenience, we also re-save the tokenizer to the same directory,\n # so that you can share your model easily on huggingface.co/models =)\n if trainer.is_world_master():\n tokenizer.save_pretrained(training_args.output_dir)\n\n # Evaluation\n if training_args.do_eval:\n logger.info(\"*** Evaluate ***\")\n\n output = trainer.predict(eval_dataset)\n preds = output.predictions\n preds = np.argmax(preds, axis=1)\n\n pair_ids = [ex.pairID for ex in eval_dataset]\n output_eval_file = os.path.join(training_args.output_dir, \"hans_predictions.txt\")\n if trainer.is_world_master():\n with open(output_eval_file, \"w\") as writer:\n for pid, pred in zip(pair_ids, preds):\n writer.write(\"ex\" + str(pid) + \",\" + label_list[int(pred)] + \"\\n\")\n\n trainer._log(output.metrics)\n\n\ndef _mp_fn(index):\n # For xla_spawn (TPUs)\n main()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/adversarial/run_hans.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" Finetuning the library models for sequence classification on HANS.\"\"\"\n\nimport logging\nimport os\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List, Optional\n\nimport numpy as np\nimport torch\n\nfrom transformers import (\n AutoConfig,\n AutoModelForSequenceClassification,\n AutoTokenizer,\n HfArgumentParser,\n Trainer,\n TrainingArguments,\n default_data_collator,\n set_seed,\n)\nfrom utils_hans import HansDataset, InputFeatures, hans_processors\n\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass ModelArguments:\n \"\"\"\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n \"\"\"\n\n model_name_or_path: str = field(\n metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n )\n config_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n )\n tokenizer_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n )\n cache_dir: Optional[str] = field(\n default=None, metadata={\"help\": \"Where do you want to store the pretrained models downloaded from s3\"}\n )\n\n\n@dataclass\nclass DataTrainingArguments:\n \"\"\"\n Arguments pertaining to what data we are going to input our model for training and eval.\n \"\"\"\n\n task_name: str = field(\n metadata={\"help\": \"The name of the task to train selected in the list: \" + \", \".join(hans_processors.keys())}\n )\n data_dir: str = field(\n metadata={\"help\": \"The input data dir. 
Should contain the .tsv files (or other data files) for the task.\"}\n )\n max_seq_length: int = field(\n default=128,\n metadata={\n \"help\": \"The maximum total input sequence length after tokenization. Sequences longer \"\n \"than this will be truncated, sequences shorter will be padded.\"\n },\n )\n overwrite_cache: bool = field(\n default=False, metadata={\"help\": \"Overwrite the cached training and evaluation sets\"}\n )\n\n\ndef hans_data_collator(features: List[InputFeatures]) -> Dict[str, torch.Tensor]:\n \"\"\"\n Data collator that removes the \"pairID\" key if present.\n \"\"\"\n batch = default_data_collator(features)\n _ = batch.pop(\"pairID\", None)\n return batch\n\n\ndef main():\n # See all possible arguments in src/transformers/training_args.py\n # or by passing the --help flag to this script.\n # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n if (\n os.path.exists(training_args.output_dir)\n and os.listdir(training_args.output_dir)\n and training_args.do_train\n and not training_args.overwrite_output_dir\n ):\n raise ValueError(\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome.\"\n )\n\n # Setup logging\n logging.basicConfig(\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n datefmt=\"%m/%d/%Y %H:%M:%S\",\n level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,\n )\n logger.warning(\n \"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\n training_args.local_rank,\n training_args.device,\n training_args.n_gpu,\n bool(training_args.local_rank != -1),\n training_args.fp16,\n )\n logger.info(\"Training/evaluation parameters %s\", training_args)\n\n # Set seed\n set_seed(training_args.seed)\n\n try:\n processor = hans_processors[data_args.task_name]()\n label_list = processor.get_labels()\n num_labels = len(label_list)\n except KeyError:\n raise ValueError(\"Task not found: %s\" % (data_args.task_name))\n\n # Load pretrained model and tokenizer\n #\n # Distributed training:\n # The .from_pretrained methods guarantee that only one local process can concurrently\n # download model & vocab.\n\n config = AutoConfig.from_pretrained(\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n num_labels=num_labels,\n finetuning_task=data_args.task_name,\n cache_dir=model_args.cache_dir,\n )\n tokenizer = AutoTokenizer.from_pretrained(\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n cache_dir=model_args.cache_dir,\n )\n model = AutoModelForSequenceClassification.from_pretrained(\n model_args.model_name_or_path,\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n config=config,\n cache_dir=model_args.cache_dir,\n )\n\n # Get datasets\n train_dataset = (\n HansDataset(\n data_dir=data_args.data_dir,\n tokenizer=tokenizer,\n task=data_args.task_name,\n max_seq_length=data_args.max_seq_length,\n overwrite_cache=data_args.overwrite_cache,\n )\n if training_args.do_train\n else None\n )\n eval_dataset = (\n HansDataset(\n data_dir=data_args.data_dir,\n tokenizer=tokenizer,\n task=data_args.task_name,\n max_seq_length=data_args.max_seq_length,\n overwrite_cache=data_args.overwrite_cache,\n evaluate=True,\n )\n if training_args.do_eval\n else None\n )\n\n # 
Initialize our Trainer\n trainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n data_collator=hans_data_collator,\n )\n\n # Training\n if training_args.do_train:\n trainer.train(\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\n )\n trainer.save_model()\n # For convenience, we also re-save the tokenizer to the same directory,\n # so that you can share your model easily on huggingface.co/models =)\n if trainer.is_world_master():\n tokenizer.save_pretrained(training_args.output_dir)\n\n # Evaluation\n if training_args.do_eval:\n logger.info(\"*** Evaluate ***\")\n\n output = trainer.predict(eval_dataset)\n preds = output.predictions\n preds = np.argmax(preds, axis=1)\n\n pair_ids = [ex.pairID for ex in eval_dataset]\n output_eval_file = os.path.join(training_args.output_dir, \"hans_predictions.txt\")\n if trainer.is_world_master():\n with open(output_eval_file, \"w\") as writer:\n writer.write(\"pairID,gold_label\\n\")\n for pid, pred in zip(pair_ids, preds):\n writer.write(\"ex\" + str(pid) + \",\" + label_list[int(pred)] + \"\\n\")\n\n trainer._log(output.metrics)\n\n\ndef _mp_fn(index):\n # For xla_spawn (TPUs)\n main()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/adversarial/run_hans.py"}]} | 2,654 | 143 |
gh_patches_debug_8395 | rasdani/github-patches | git_diff | qtile__qtile-2303 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
widget.LaunchBar padding background color
# Issue description
When the widget.LaunchBar is used with a padding greater than 0 (zero), the rightmost padding does not respect the bar background color.
# Qtile version
0.17.0
# Configuration
```
screens = [
Screen(
top=bar.Bar(
[
widget.CurrentLayoutIcon(),
widget.GroupBox(),
widget.Prompt(),
widget.LaunchBar(
[
("google-chrome", "google-chrome-stable", "Launch Google Chrome"),
("firefox", "firefox", "Launch Google Firefox"),
],
padding=15
),
widget.Spacer(),
widget.Clock(format='%a %d %b %Y %I:%M %p'),
widget.Spacer(),
widget.Systray(),
widget.QuickExit(),
],
29, background="#008880", opacity=.5
),
),
]
```

--- END ISSUE ---
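The symptom follows from `LaunchBar.draw()` in the file below: each icon is flushed with `drawer.draw(width=icon_width + self.padding)`, while `calculate_length()` reserves `padding * (len(progs) + 1)` pixels, so the final padding-wide strip is never repainted and keeps whatever was drawn there before. The golden diff further below closes the gap by drawing one extra padding-wide strip. A small arithmetic sketch with hypothetical icon widths:
```
padding = 15                                # from the issue's config
icon_widths = [25, 25]                      # hypothetical pixel widths of the two icons

reserved = sum(icon_widths) + padding * (len(icon_widths) + 1)   # calculate_length()
painted_before = sum(w + padding for w in icon_widths)           # draw() loop as-is
painted_after = painted_before + padding                         # with the extra draw

print(reserved - painted_before)   # 15 -> trailing strip left with stale content
assert painted_after == reserved   # the fix closes the gap
```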
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/launchbar.py`
Content:
```
1 # Copyright (c) 2014 Tycho Andersen
2 # Copyright (c) 2014 dequis
3 # Copyright (c) 2014-2015 Joseph Razik
4 # Copyright (c) 2014 Sean Vig
5 # Copyright (c) 2015 reus
6 #
7 # Permission is hereby granted, free of charge, to any person obtaining a copy
8 # of this software and associated documentation files (the "Software"), to deal
9 # in the Software without restriction, including without limitation the rights
10 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
11 # copies of the Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice shall be included in
15 # all copies or substantial portions of the Software.
16 #
17 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
18 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
19 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
21 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
22 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
23 # SOFTWARE.
24
25 """
26 This module define a widget that displays icons to launch softwares or commands
27 when clicked -- a launchbar.
28 Only png icon files are displayed, not xpm because cairo doesn't support
29 loading of xpm file.
30 The order of displaying (from left to right) is in the order of the list.
31
32 If no icon was found for the name provided and if default_icon is set to None
33 then the name is printed instead. If default_icon is defined then this icon is
34 displayed instead.
35
36 To execute a software:
37 - ('thunderbird', 'thunderbird -safe-mode', 'launch thunderbird in safe mode')
38 To execute a python command in qtile, begin with by 'qshell:'
39 - ('logout', 'qshell:self.qtile.cmd_shutdown()', 'logout from qtile')
40
41
42 """
43 import os.path
44
45 import cairocffi
46 from xdg.IconTheme import getIconPath
47
48 from libqtile import bar
49 from libqtile.log_utils import logger
50 from libqtile.widget import base
51
52
53 class LaunchBar(base._Widget):
54 """A widget that display icons to launch the associated command
55
56 Widget requirements: pyxdg_.
57
58 .. _pyxdg: https://freedesktop.org/wiki/Software/pyxdg/
59
60 Parameters
61 ==========
62 progs :
63 a list of tuples ``(software_name, command_to_execute, comment)``, for
64 example::
65
66 ('thunderbird', 'thunderbird -safe-mode', 'launch thunderbird in safe mode')
67 ('logout', 'qshell:self.qtile.cmd_shutdown()', 'logout from qtile')
68 """
69 orientations = base.ORIENTATION_HORIZONTAL
70 defaults = [
71 ('padding', 2, 'Padding between icons'),
72 ('default_icon', '/usr/share/icons/oxygen/256x256/mimetypes/'
73 'application-x-executable.png', 'Default icon not found'),
74 ]
75
76 def __init__(self, progs=None, width=bar.CALCULATED, **config):
77 base._Widget.__init__(self, width, **config)
78 if progs is None:
79 progs = []
80 self.add_defaults(LaunchBar.defaults)
81 self.surfaces = {}
82 self.icons_files = {}
83 self.icons_widths = {}
84 self.icons_offsets = {}
85 # For now, ignore the comments but may be one day it will be useful
86 self.progs = dict(enumerate([{'name': prog[0], 'cmd': prog[1],
87 'comment': prog[2] if len(prog) > 2 else
88 None} for prog in progs]))
89 self.progs_name = set([prog['name'] for prog in self.progs.values()])
90 self.length_type = bar.STATIC
91 self.length = 0
92
93 def _configure(self, qtile, pbar):
94 base._Widget._configure(self, qtile, pbar)
95 self.lookup_icons()
96 self.setup_images()
97 self.length = self.calculate_length()
98
99 def setup_images(self):
100 """ Create image structures for each icon files. """
101 for img_name, iconfile in self.icons_files.items():
102 if iconfile is None:
103 logger.warning(
104 'No icon found for application "%s" (%s) switch to text mode',
105 img_name, iconfile)
106 # if no icon is found and no default icon was set, we just
107 # print the name, based on a textbox.
108 textbox = base._TextBox()
109 textbox._configure(self.qtile, self.bar)
110 textbox.layout = self.drawer.textlayout(
111 textbox.text,
112 textbox.foreground,
113 textbox.font,
114 textbox.fontsize,
115 textbox.fontshadow,
116 markup=textbox.markup,
117 )
118 # the name will be displayed
119 textbox.text = img_name
120 textbox.calculate_length()
121 self.icons_widths[img_name] = textbox.width
122 self.surfaces[img_name] = textbox
123 continue
124 else:
125 try:
126 img = cairocffi.ImageSurface.create_from_png(iconfile)
127 except cairocffi.Error:
128 logger.exception('Error loading icon for application "%s" (%s)', img_name, iconfile)
129 return
130
131 input_width = img.get_width()
132 input_height = img.get_height()
133
134 sp = input_height / (self.bar.height - 4)
135 width = int(input_width / sp)
136
137 imgpat = cairocffi.SurfacePattern(img)
138 scaler = cairocffi.Matrix()
139 scaler.scale(sp, sp)
140 scaler.translate(self.padding * -1, -2)
141 imgpat.set_matrix(scaler)
142
143 imgpat.set_filter(cairocffi.FILTER_BEST)
144 self.surfaces[img_name] = imgpat
145 self.icons_widths[img_name] = width
146
147 def _lookup_icon(self, name):
148 """ Search for the icon corresponding to one command. """
149 self.icons_files[name] = None
150 # if the software_name is directly an absolute path icon file
151 if os.path.isabs(name):
152 # name start with '/' thus it's an absolute path
153 root, ext = os.path.splitext(name)
154 if ext == '.png':
155 self.icons_files[name] = name if os.path.isfile(name) else None
156 else:
157 # try to add the extension
158 self.icons_files[name] = name + '.png' if os.path.isfile(name + '.png') else None
159 else:
160 self.icons_files[name] = getIconPath(name)
161 # no search method found an icon, so default icon
162 if self.icons_files[name] is None:
163 self.icons_files[name] = self.default_icon
164
165 def lookup_icons(self):
166 """ Search for the icons corresponding to the commands to execute. """
167 if self.default_icon is not None:
168 if not os.path.isfile(self.default_icon):
169 # if the default icon provided is not found, switch to
170 # text mode
171 self.default_icon = None
172 for name in self.progs_name:
173 self._lookup_icon(name)
174
175 def get_icon_in_position(self, x, y):
176 """ Determine which icon is clicked according to its position. """
177 for i in self.progs:
178 if x < (self.icons_offsets[i] +
179 self.icons_widths[self.progs[i]['name']] +
180 self.padding / 2):
181 return i
182
183 def button_press(self, x, y, button):
184 """ Launch the associated command to the clicked icon. """
185 base._Widget.button_press(self, x, y, button)
186 if button == 1:
187 icon = self.get_icon_in_position(x, y)
188 if icon is not None:
189 cmd = self.progs[icon]['cmd']
190 if cmd.startswith('qshell:'):
191 exec(cmd[7:].lstrip())
192 else:
193 self.qtile.cmd_spawn(cmd)
194 self.draw()
195
196 def draw(self):
197 """ Draw the icons in the widget. """
198 self.drawer.clear(self.background or self.bar.background)
199 xoffset = 0
200 for i in sorted(self.progs.keys()):
201 self.icons_offsets[i] = xoffset + self.padding
202 name = self.progs[i]['name']
203 icon_width = self.icons_widths[name]
204 self.drawer.ctx.move_to(self.offset + xoffset, icon_width)
205 self.drawer.clear(self.background or self.bar.background)
206 if isinstance(self.surfaces[name], base._TextBox):
207 # display the name if no icon was found and no default icon
208 textbox = self.surfaces[name]
209 textbox.layout.draw(
210 self.padding + textbox.actual_padding,
211 int((self.bar.height - textbox.layout.height) / 2.0) + 1
212 )
213 else:
214 # display an icon
215 self.drawer.ctx.set_source(self.surfaces[name])
216 self.drawer.ctx.paint()
217 self.drawer.draw(offsetx=self.offset + xoffset,
218 width=icon_width + self.padding)
219 xoffset += icon_width + self.padding
220
221 def calculate_length(self):
222 """ Compute the width of the widget according to each icon width. """
223 return sum(self.icons_widths[prg['name']] for prg in self.progs.values()) \
224 + self.padding * (len(self.progs) + 1)
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/widget/launchbar.py b/libqtile/widget/launchbar.py
--- a/libqtile/widget/launchbar.py
+++ b/libqtile/widget/launchbar.py
@@ -217,6 +217,8 @@
self.drawer.draw(offsetx=self.offset + xoffset,
width=icon_width + self.padding)
xoffset += icon_width + self.padding
+ if self.padding:
+ self.drawer.draw(offsetx=self.offset + xoffset, width=self.padding)
def calculate_length(self):
""" Compute the width of the widget according to each icon width. """
| {"golden_diff": "diff --git a/libqtile/widget/launchbar.py b/libqtile/widget/launchbar.py\n--- a/libqtile/widget/launchbar.py\n+++ b/libqtile/widget/launchbar.py\n@@ -217,6 +217,8 @@\n self.drawer.draw(offsetx=self.offset + xoffset,\n width=icon_width + self.padding)\n xoffset += icon_width + self.padding\n+ if self.padding:\n+ self.drawer.draw(offsetx=self.offset + xoffset, width=self.padding)\n \n def calculate_length(self):\n \"\"\" Compute the width of the widget according to each icon width. \"\"\"\n", "issue": "widget.LaunchBar padding background color\n# Issue description\r\n\r\nWhen the widget.LaunchBar is used with a padding greater than 0 (zero), the rightmost padding does not respect the bar background color. \r\n\r\n# Qtile version\r\n\r\n0.17.0\r\n\r\n# Configuration\r\n\r\n```\r\nscreens = [\r\n Screen(\r\n top=bar.Bar(\r\n [\r\n widget.CurrentLayoutIcon(),\r\n widget.GroupBox(),\r\n widget.Prompt(),\r\n\t\t widget.LaunchBar(\r\n\t\t [ \r\n\t\t (\"google-chrome\", \"google-chrome-stable\", \"Launch Google Chrome\"),\r\n (\"firefox\", \"firefox\", \"Launch Google Firefox\"),\r\n\t\t ],\r\n padding=15\r\n ),\r\n\t\t widget.Spacer(),\r\n widget.Clock(format='%a %d %b %Y %I:%M %p'),\r\n\t\t widget.Spacer(),\r\n widget.Systray(),\r\n widget.QuickExit(),\r\n ],\r\n\t 29, background=\"#008880\", opacity=.5\r\n ),\r\n ),\r\n]\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2014 Tycho Andersen\n# Copyright (c) 2014 dequis\n# Copyright (c) 2014-2015 Joseph Razik\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2015 reus\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\n\"\"\"\nThis module define a widget that displays icons to launch softwares or commands\nwhen clicked -- a launchbar.\nOnly png icon files are displayed, not xpm because cairo doesn't support\nloading of xpm file.\nThe order of displaying (from left to right) is in the order of the list.\n\nIf no icon was found for the name provided and if default_icon is set to None\nthen the name is printed instead. 
If default_icon is defined then this icon is\ndisplayed instead.\n\nTo execute a software:\n - ('thunderbird', 'thunderbird -safe-mode', 'launch thunderbird in safe mode')\nTo execute a python command in qtile, begin with by 'qshell:'\n - ('logout', 'qshell:self.qtile.cmd_shutdown()', 'logout from qtile')\n\n\n\"\"\"\nimport os.path\n\nimport cairocffi\nfrom xdg.IconTheme import getIconPath\n\nfrom libqtile import bar\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass LaunchBar(base._Widget):\n \"\"\"A widget that display icons to launch the associated command\n\n Widget requirements: pyxdg_.\n\n .. _pyxdg: https://freedesktop.org/wiki/Software/pyxdg/\n\n Parameters\n ==========\n progs :\n a list of tuples ``(software_name, command_to_execute, comment)``, for\n example::\n\n ('thunderbird', 'thunderbird -safe-mode', 'launch thunderbird in safe mode')\n ('logout', 'qshell:self.qtile.cmd_shutdown()', 'logout from qtile')\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('padding', 2, 'Padding between icons'),\n ('default_icon', '/usr/share/icons/oxygen/256x256/mimetypes/'\n 'application-x-executable.png', 'Default icon not found'),\n ]\n\n def __init__(self, progs=None, width=bar.CALCULATED, **config):\n base._Widget.__init__(self, width, **config)\n if progs is None:\n progs = []\n self.add_defaults(LaunchBar.defaults)\n self.surfaces = {}\n self.icons_files = {}\n self.icons_widths = {}\n self.icons_offsets = {}\n # For now, ignore the comments but may be one day it will be useful\n self.progs = dict(enumerate([{'name': prog[0], 'cmd': prog[1],\n 'comment': prog[2] if len(prog) > 2 else\n None} for prog in progs]))\n self.progs_name = set([prog['name'] for prog in self.progs.values()])\n self.length_type = bar.STATIC\n self.length = 0\n\n def _configure(self, qtile, pbar):\n base._Widget._configure(self, qtile, pbar)\n self.lookup_icons()\n self.setup_images()\n self.length = self.calculate_length()\n\n def setup_images(self):\n \"\"\" Create image structures for each icon files. \"\"\"\n for img_name, iconfile in self.icons_files.items():\n if iconfile is None:\n logger.warning(\n 'No icon found for application \"%s\" (%s) switch to text mode',\n img_name, iconfile)\n # if no icon is found and no default icon was set, we just\n # print the name, based on a textbox.\n textbox = base._TextBox()\n textbox._configure(self.qtile, self.bar)\n textbox.layout = self.drawer.textlayout(\n textbox.text,\n textbox.foreground,\n textbox.font,\n textbox.fontsize,\n textbox.fontshadow,\n markup=textbox.markup,\n )\n # the name will be displayed\n textbox.text = img_name\n textbox.calculate_length()\n self.icons_widths[img_name] = textbox.width\n self.surfaces[img_name] = textbox\n continue\n else:\n try:\n img = cairocffi.ImageSurface.create_from_png(iconfile)\n except cairocffi.Error:\n logger.exception('Error loading icon for application \"%s\" (%s)', img_name, iconfile)\n return\n\n input_width = img.get_width()\n input_height = img.get_height()\n\n sp = input_height / (self.bar.height - 4)\n width = int(input_width / sp)\n\n imgpat = cairocffi.SurfacePattern(img)\n scaler = cairocffi.Matrix()\n scaler.scale(sp, sp)\n scaler.translate(self.padding * -1, -2)\n imgpat.set_matrix(scaler)\n\n imgpat.set_filter(cairocffi.FILTER_BEST)\n self.surfaces[img_name] = imgpat\n self.icons_widths[img_name] = width\n\n def _lookup_icon(self, name):\n \"\"\" Search for the icon corresponding to one command. 
\"\"\"\n self.icons_files[name] = None\n # if the software_name is directly an absolute path icon file\n if os.path.isabs(name):\n # name start with '/' thus it's an absolute path\n root, ext = os.path.splitext(name)\n if ext == '.png':\n self.icons_files[name] = name if os.path.isfile(name) else None\n else:\n # try to add the extension\n self.icons_files[name] = name + '.png' if os.path.isfile(name + '.png') else None\n else:\n self.icons_files[name] = getIconPath(name)\n # no search method found an icon, so default icon\n if self.icons_files[name] is None:\n self.icons_files[name] = self.default_icon\n\n def lookup_icons(self):\n \"\"\" Search for the icons corresponding to the commands to execute. \"\"\"\n if self.default_icon is not None:\n if not os.path.isfile(self.default_icon):\n # if the default icon provided is not found, switch to\n # text mode\n self.default_icon = None\n for name in self.progs_name:\n self._lookup_icon(name)\n\n def get_icon_in_position(self, x, y):\n \"\"\" Determine which icon is clicked according to its position. \"\"\"\n for i in self.progs:\n if x < (self.icons_offsets[i] +\n self.icons_widths[self.progs[i]['name']] +\n self.padding / 2):\n return i\n\n def button_press(self, x, y, button):\n \"\"\" Launch the associated command to the clicked icon. \"\"\"\n base._Widget.button_press(self, x, y, button)\n if button == 1:\n icon = self.get_icon_in_position(x, y)\n if icon is not None:\n cmd = self.progs[icon]['cmd']\n if cmd.startswith('qshell:'):\n exec(cmd[7:].lstrip())\n else:\n self.qtile.cmd_spawn(cmd)\n self.draw()\n\n def draw(self):\n \"\"\" Draw the icons in the widget. \"\"\"\n self.drawer.clear(self.background or self.bar.background)\n xoffset = 0\n for i in sorted(self.progs.keys()):\n self.icons_offsets[i] = xoffset + self.padding\n name = self.progs[i]['name']\n icon_width = self.icons_widths[name]\n self.drawer.ctx.move_to(self.offset + xoffset, icon_width)\n self.drawer.clear(self.background or self.bar.background)\n if isinstance(self.surfaces[name], base._TextBox):\n # display the name if no icon was found and no default icon\n textbox = self.surfaces[name]\n textbox.layout.draw(\n self.padding + textbox.actual_padding,\n int((self.bar.height - textbox.layout.height) / 2.0) + 1\n )\n else:\n # display an icon\n self.drawer.ctx.set_source(self.surfaces[name])\n self.drawer.ctx.paint()\n self.drawer.draw(offsetx=self.offset + xoffset,\n width=icon_width + self.padding)\n xoffset += icon_width + self.padding\n\n def calculate_length(self):\n \"\"\" Compute the width of the widget according to each icon width. 
\"\"\"\n return sum(self.icons_widths[prg['name']] for prg in self.progs.values()) \\\n + self.padding * (len(self.progs) + 1)\n", "path": "libqtile/widget/launchbar.py"}], "after_files": [{"content": "# Copyright (c) 2014 Tycho Andersen\n# Copyright (c) 2014 dequis\n# Copyright (c) 2014-2015 Joseph Razik\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2015 reus\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\n\"\"\"\nThis module define a widget that displays icons to launch softwares or commands\nwhen clicked -- a launchbar.\nOnly png icon files are displayed, not xpm because cairo doesn't support\nloading of xpm file.\nThe order of displaying (from left to right) is in the order of the list.\n\nIf no icon was found for the name provided and if default_icon is set to None\nthen the name is printed instead. If default_icon is defined then this icon is\ndisplayed instead.\n\nTo execute a software:\n - ('thunderbird', 'thunderbird -safe-mode', 'launch thunderbird in safe mode')\nTo execute a python command in qtile, begin with by 'qshell:'\n - ('logout', 'qshell:self.qtile.cmd_shutdown()', 'logout from qtile')\n\n\n\"\"\"\nimport os.path\n\nimport cairocffi\nfrom xdg.IconTheme import getIconPath\n\nfrom libqtile import bar\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass LaunchBar(base._Widget):\n \"\"\"A widget that display icons to launch the associated command\n\n Widget requirements: pyxdg_.\n\n .. 
_pyxdg: https://freedesktop.org/wiki/Software/pyxdg/\n\n Parameters\n ==========\n progs :\n a list of tuples ``(software_name, command_to_execute, comment)``, for\n example::\n\n ('thunderbird', 'thunderbird -safe-mode', 'launch thunderbird in safe mode')\n ('logout', 'qshell:self.qtile.cmd_shutdown()', 'logout from qtile')\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n ('padding', 2, 'Padding between icons'),\n ('default_icon', '/usr/share/icons/oxygen/256x256/mimetypes/'\n 'application-x-executable.png', 'Default icon not found'),\n ]\n\n def __init__(self, progs=None, width=bar.CALCULATED, **config):\n base._Widget.__init__(self, width, **config)\n if progs is None:\n progs = []\n self.add_defaults(LaunchBar.defaults)\n self.surfaces = {}\n self.icons_files = {}\n self.icons_widths = {}\n self.icons_offsets = {}\n # For now, ignore the comments but may be one day it will be useful\n self.progs = dict(enumerate([{'name': prog[0], 'cmd': prog[1],\n 'comment': prog[2] if len(prog) > 2 else\n None} for prog in progs]))\n self.progs_name = set([prog['name'] for prog in self.progs.values()])\n self.length_type = bar.STATIC\n self.length = 0\n\n def _configure(self, qtile, pbar):\n base._Widget._configure(self, qtile, pbar)\n self.lookup_icons()\n self.setup_images()\n self.length = self.calculate_length()\n\n def setup_images(self):\n \"\"\" Create image structures for each icon files. \"\"\"\n for img_name, iconfile in self.icons_files.items():\n if iconfile is None:\n logger.warning(\n 'No icon found for application \"%s\" (%s) switch to text mode',\n img_name, iconfile)\n # if no icon is found and no default icon was set, we just\n # print the name, based on a textbox.\n textbox = base._TextBox()\n textbox._configure(self.qtile, self.bar)\n textbox.layout = self.drawer.textlayout(\n textbox.text,\n textbox.foreground,\n textbox.font,\n textbox.fontsize,\n textbox.fontshadow,\n markup=textbox.markup,\n )\n # the name will be displayed\n textbox.text = img_name\n textbox.calculate_length()\n self.icons_widths[img_name] = textbox.width\n self.surfaces[img_name] = textbox\n continue\n else:\n try:\n img = cairocffi.ImageSurface.create_from_png(iconfile)\n except cairocffi.Error:\n logger.exception('Error loading icon for application \"%s\" (%s)', img_name, iconfile)\n return\n\n input_width = img.get_width()\n input_height = img.get_height()\n\n sp = input_height / (self.bar.height - 4)\n width = int(input_width / sp)\n\n imgpat = cairocffi.SurfacePattern(img)\n scaler = cairocffi.Matrix()\n scaler.scale(sp, sp)\n scaler.translate(self.padding * -1, -2)\n imgpat.set_matrix(scaler)\n\n imgpat.set_filter(cairocffi.FILTER_BEST)\n self.surfaces[img_name] = imgpat\n self.icons_widths[img_name] = width\n\n def _lookup_icon(self, name):\n \"\"\" Search for the icon corresponding to one command. 
\"\"\"\n self.icons_files[name] = None\n # if the software_name is directly an absolute path icon file\n if os.path.isabs(name):\n # name start with '/' thus it's an absolute path\n root, ext = os.path.splitext(name)\n if ext == '.png':\n self.icons_files[name] = name if os.path.isfile(name) else None\n else:\n # try to add the extension\n self.icons_files[name] = name + '.png' if os.path.isfile(name + '.png') else None\n else:\n self.icons_files[name] = getIconPath(name)\n # no search method found an icon, so default icon\n if self.icons_files[name] is None:\n self.icons_files[name] = self.default_icon\n\n def lookup_icons(self):\n \"\"\" Search for the icons corresponding to the commands to execute. \"\"\"\n if self.default_icon is not None:\n if not os.path.isfile(self.default_icon):\n # if the default icon provided is not found, switch to\n # text mode\n self.default_icon = None\n for name in self.progs_name:\n self._lookup_icon(name)\n\n def get_icon_in_position(self, x, y):\n \"\"\" Determine which icon is clicked according to its position. \"\"\"\n for i in self.progs:\n if x < (self.icons_offsets[i] +\n self.icons_widths[self.progs[i]['name']] +\n self.padding / 2):\n return i\n\n def button_press(self, x, y, button):\n \"\"\" Launch the associated command to the clicked icon. \"\"\"\n base._Widget.button_press(self, x, y, button)\n if button == 1:\n icon = self.get_icon_in_position(x, y)\n if icon is not None:\n cmd = self.progs[icon]['cmd']\n if cmd.startswith('qshell:'):\n exec(cmd[7:].lstrip())\n else:\n self.qtile.cmd_spawn(cmd)\n self.draw()\n\n def draw(self):\n \"\"\" Draw the icons in the widget. \"\"\"\n self.drawer.clear(self.background or self.bar.background)\n xoffset = 0\n for i in sorted(self.progs.keys()):\n self.icons_offsets[i] = xoffset + self.padding\n name = self.progs[i]['name']\n icon_width = self.icons_widths[name]\n self.drawer.ctx.move_to(self.offset + xoffset, icon_width)\n self.drawer.clear(self.background or self.bar.background)\n if isinstance(self.surfaces[name], base._TextBox):\n # display the name if no icon was found and no default icon\n textbox = self.surfaces[name]\n textbox.layout.draw(\n self.padding + textbox.actual_padding,\n int((self.bar.height - textbox.layout.height) / 2.0) + 1\n )\n else:\n # display an icon\n self.drawer.ctx.set_source(self.surfaces[name])\n self.drawer.ctx.paint()\n self.drawer.draw(offsetx=self.offset + xoffset,\n width=icon_width + self.padding)\n xoffset += icon_width + self.padding\n if self.padding:\n self.drawer.draw(offsetx=self.offset + xoffset, width=self.padding)\n\n def calculate_length(self):\n \"\"\" Compute the width of the widget according to each icon width. \"\"\"\n return sum(self.icons_widths[prg['name']] for prg in self.progs.values()) \\\n + self.padding * (len(self.progs) + 1)\n", "path": "libqtile/widget/launchbar.py"}]} | 3,180 | 134 |
gh_patches_debug_41819 | rasdani/github-patches | git_diff | kartoza__prj.app-1154 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot upload a CSV with attendees list
The CSV is formatted as requested:

--- END ISSUE ---
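For reference, `CsvUploadView.post()` in the file below parses the upload with `csv.DictReader` and indexes each row by the literal headers `First Name`, `Surname` and `Email`, so a header row that differs in spelling, case or delimiter (or a file that is not valid UTF-8) breaks the import. A minimal sketch of the header and keys the view expects (sample data is hypothetical):
```
import csv
import io

sample = "First Name,Surname,Email\nJane,Doe,[email protected]\n"   # hypothetical row

reader = csv.DictReader(io.StringIO(sample))
for row in reader:
    print(row["First Name"], row["Surname"], row["Email"])   # keys used in attendee.py
```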
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/certification/views/attendee.py`
Content:
```
1 # coding=utf-8
2 import io
3 import csv
4 from django.db import transaction
5 from django.urls import reverse
6 from django.views.generic import (
7 CreateView, FormView)
8 from braces.views import LoginRequiredMixin, FormMessagesMixin
9 from certification.models import (
10 Attendee, CertifyingOrganisation, CourseAttendee, Course
11 )
12 from certification.forms import AttendeeForm, CsvAttendeeForm
13
14
15 class AttendeeMixin(object):
16 """Mixin class to provide standard settings for Attendee."""
17
18 model = Attendee
19 form_class = AttendeeForm
20
21
22 class AttendeeCreateView(LoginRequiredMixin, AttendeeMixin, CreateView):
23 """Create view for Attendee."""
24
25 context_object_name = 'attendee'
26 template_name = 'attendee/create.html'
27
28 def get_success_url(self):
29 """Define the redirect URL.
30
31 After successful creation of the object, the User will be redirected
32 to the create course attendee page.
33
34 :returns: URL
35 :rtype: HttpResponse
36 """
37 add_to_course = self.request.POST.get('add_to_course')
38 if add_to_course is None:
39 success_url = reverse('courseattendee-create', kwargs={
40 'project_slug': self.project_slug,
41 'organisation_slug': self.organisation_slug,
42 'slug': self.course_slug,
43 })
44 else:
45 success_url = reverse('course-detail', kwargs={
46 'project_slug': self.project_slug,
47 'organisation_slug': self.organisation_slug,
48 'slug': self.course_slug,
49 })
50 return success_url
51
52 def get_context_data(self, **kwargs):
53 """Get the context data which is passed to a template.
54
55 :param kwargs: Any arguments to pass to the superclass.
56 :type kwargs: dict
57
58 :returns: Context data which will be passed to the template.
59 :rtype: dict
60 """
61
62 context = super(
63 AttendeeCreateView, self).get_context_data(**kwargs)
64 return context
65
66 def get_form_kwargs(self):
67 """Get keyword arguments from form.
68
69 :returns keyword argument from the form
70 :rtype: dict
71 """
72
73 kwargs = super(AttendeeCreateView, self).get_form_kwargs()
74 self.project_slug = self.kwargs.get('project_slug', None)
75 self.organisation_slug = self.kwargs.get('organisation_slug', None)
76 self.course_slug = self.kwargs.get('slug', None)
77 self.certifying_organisation = \
78 CertifyingOrganisation.objects.get(slug=self.organisation_slug)
79 kwargs.update({
80 'user': self.request.user,
81 'certifying_organisation': self.certifying_organisation
82 })
83 return kwargs
84
85 def form_valid(self, form):
86 add_to_course = self.request.POST.get('add_to_course')
87 if add_to_course is None:
88 if form.is_valid():
89 form.save()
90 else:
91 if form.is_valid():
92 object = form.save()
93 course_slug = self.kwargs.get('slug', None)
94 course = Course.objects.get(slug=course_slug)
95 course_attendee = CourseAttendee(
96 attendee=object,
97 course=course,
98 author=self.request.user
99 )
100 course_attendee.save()
101 return super(AttendeeCreateView, self).form_valid(form)
102
103
104 class CsvUploadView(FormMessagesMixin, LoginRequiredMixin, FormView):
105 """
106 Allow upload of attendees through CSV file.
107 """
108
109 context_object_name = 'csvupload'
110 form_class = CsvAttendeeForm
111 template_name = 'attendee/upload_attendee_csv.html'
112
113 def get_success_url(self):
114 """Define the redirect URL.
115
116 After successful creation of the object, the User will be redirected
117 to the Course detail page.
118
119 :returns: URL
120 :rtype: HttpResponse
121 """
122
123 return reverse('course-detail', kwargs={
124 'project_slug': self.project_slug,
125 'organisation_slug': self.organisation_slug,
126 'slug': self.slug,
127 })
128
129 def get_context_data(self, **kwargs):
130 """Get the context data which is passed to a template.
131
132 :param kwargs: Any arguments to pass to the superclass.
133 :type kwargs: dict
134
135 :returns: Context data which will be passed to the template.
136 :rtype: dict
137 """
138
139 context = super(
140 CsvUploadView, self).get_context_data(**kwargs)
141 context['certifyingorganisation'] = \
142 CertifyingOrganisation.objects.get(slug=self.organisation_slug)
143 context['course'] = Course.objects.get(slug=self.slug)
144 return context
145
146 def get_form_kwargs(self):
147 """Get keyword arguments from form.
148
149 :returns keyword argument from the form
150 :rtype: dict
151 """
152
153 kwargs = super(CsvUploadView, self).get_form_kwargs()
154 self.project_slug = self.kwargs.get('project_slug', None)
155 self.organisation_slug = self.kwargs.get('organisation_slug', None)
156 self.slug = self.kwargs.get('slug', None)
157 self.course = Course.objects.get(slug=self.slug)
158 self.certifying_organisation = \
159 CertifyingOrganisation.objects.get(slug=self.organisation_slug)
160 return kwargs
161
162 @transaction.atomic()
163 def post(self, request, *args, **kwargs):
164 """Get form instance from upload.
165
166 After successful creation of the object,the User
167 will be redirected to the create course attendee page.
168
169 :returns: URL
170 :rtype: HttpResponse
171 """
172 form_class = self.get_form_class()
173 form = self.get_form(form_class)
174 attendees_file = request.FILES.get('file')
175 attendees_file.seek(0)
176 course = Course.objects.get(slug=self.slug)
177 if form.is_valid():
178 if attendees_file:
179 reader = csv.DictReader(
180 io.StringIO(attendees_file.read().decode('utf-8'))
181 )
182 attendee_count = 0
183 course_attendee_count = 0
184 for row in reader:
185 # We should have logic here to first see if the attendee
186 # already exists and if they do, just add them to the
187 # course
188 attendee = Attendee(
189 firstname=row['First Name'],
190 surname=row['Surname'],
191 email=row['Email'],
192 certifying_organisation=self.certifying_organisation,
193 author=self.request.user,
194 )
195 try:
196 attendee.save()
197 attendee_count += 1
198 except: # noqa
199 # Could not save - probably they exist already
200 attendee = None
201
202 if not attendee:
203 # put more checks in case attendee
204 # does not already exist
205 continue
206
207 course_attendee = CourseAttendee(
208 attendee=attendee,
209 course=course,
210 author=self.request.user,
211 )
212 try:
213 course_attendee.save()
214 course_attendee_count += 1
215 except: # noqa
216 # They are probably already associated with a course
217 pass
218
219 self.form_valid_message = (
220 '%i new attendees were created, and %i attendees were '
221 'added to the course: % s' % (
222 attendee_count, course_attendee_count, self.course)
223 )
224
225 self.form_invalid_message = (
226 'Something wrong happened while running the upload. '
227 'Please contact site support to help resolving the issue.')
228 return self.form_valid(form)
229
230 else:
231 return self.form_invalid(form)
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/certification/views/attendee.py b/django_project/certification/views/attendee.py
--- a/django_project/certification/views/attendee.py
+++ b/django_project/certification/views/attendee.py
@@ -179,47 +179,58 @@
reader = csv.DictReader(
io.StringIO(attendees_file.read().decode('utf-8'))
)
+ fieldnames = reader.fieldnames
attendee_count = 0
course_attendee_count = 0
+ existing_attendee_count = 0
for row in reader:
# We should have logic here to first see if the attendee
# already exists and if they do, just add them to the
# course
- attendee = Attendee(
- firstname=row['First Name'],
- surname=row['Surname'],
- email=row['Email'],
- certifying_organisation=self.certifying_organisation,
- author=self.request.user,
- )
try:
+ attendee = Attendee.objects.get(
+ firstname=row[fieldnames[0]],
+ surname=row[fieldnames[1]],
+ email=row[fieldnames[2]],
+ certifying_organisation=
+ self.certifying_organisation,
+ )
+ except Attendee.DoesNotExist:
+ attendee = Attendee(
+ firstname=row[fieldnames[0]],
+ surname=row[fieldnames[1]],
+ email=row[fieldnames[2]],
+ certifying_organisation=
+ self.certifying_organisation,
+ author=self.request.user
+ )
attendee.save()
attendee_count += 1
- except: # noqa
- # Could not save - probably they exist already
- attendee = None
-
- if not attendee:
- # put more checks in case attendee
- # does not already exist
- continue
-
- course_attendee = CourseAttendee(
- attendee=attendee,
- course=course,
- author=self.request.user,
- )
+
try:
+ course_attendee = CourseAttendee.objects.get(
+ attendee=attendee,
+ course=course,
+ )
+ except CourseAttendee.DoesNotExist:
+ course_attendee = CourseAttendee(
+ attendee=attendee,
+ course=course,
+ author=self.request.user
+ )
course_attendee.save()
course_attendee_count += 1
- except: # noqa
- # They are probably already associated with a course
- pass
+ else:
+ existing_attendee_count += 1
self.form_valid_message = (
- '%i new attendees were created, and %i attendees were '
- 'added to the course: % s' % (
- attendee_count, course_attendee_count, self.course)
+ 'From the csv: {} attendee already exist in this course, '
+ '{} new attendees were created, and {} attendees were '
+ 'added to the course: {}'.format(
+ existing_attendee_count,
+ attendee_count,
+ course_attendee_count,
+ self.course)
)
self.form_invalid_message = (
| {"golden_diff": "diff --git a/django_project/certification/views/attendee.py b/django_project/certification/views/attendee.py\n--- a/django_project/certification/views/attendee.py\n+++ b/django_project/certification/views/attendee.py\n@@ -179,47 +179,58 @@\n reader = csv.DictReader(\n io.StringIO(attendees_file.read().decode('utf-8'))\n )\n+ fieldnames = reader.fieldnames\n attendee_count = 0\n course_attendee_count = 0\n+ existing_attendee_count = 0\n for row in reader:\n # We should have logic here to first see if the attendee\n # already exists and if they do, just add them to the\n # course\n- attendee = Attendee(\n- firstname=row['First Name'],\n- surname=row['Surname'],\n- email=row['Email'],\n- certifying_organisation=self.certifying_organisation,\n- author=self.request.user,\n- )\n try:\n+ attendee = Attendee.objects.get(\n+ firstname=row[fieldnames[0]],\n+ surname=row[fieldnames[1]],\n+ email=row[fieldnames[2]],\n+ certifying_organisation=\n+ self.certifying_organisation,\n+ )\n+ except Attendee.DoesNotExist:\n+ attendee = Attendee(\n+ firstname=row[fieldnames[0]],\n+ surname=row[fieldnames[1]],\n+ email=row[fieldnames[2]],\n+ certifying_organisation=\n+ self.certifying_organisation,\n+ author=self.request.user\n+ )\n attendee.save()\n attendee_count += 1\n- except: # noqa\n- # Could not save - probably they exist already\n- attendee = None\n-\n- if not attendee:\n- # put more checks in case attendee\n- # does not already exist\n- continue\n-\n- course_attendee = CourseAttendee(\n- attendee=attendee,\n- course=course,\n- author=self.request.user,\n- )\n+\n try:\n+ course_attendee = CourseAttendee.objects.get(\n+ attendee=attendee,\n+ course=course,\n+ )\n+ except CourseAttendee.DoesNotExist:\n+ course_attendee = CourseAttendee(\n+ attendee=attendee,\n+ course=course,\n+ author=self.request.user\n+ )\n course_attendee.save()\n course_attendee_count += 1\n- except: # noqa\n- # They are probably already associated with a course\n- pass\n+ else:\n+ existing_attendee_count += 1\n \n self.form_valid_message = (\n- '%i new attendees were created, and %i attendees were '\n- 'added to the course: % s' % (\n- attendee_count, course_attendee_count, self.course)\n+ 'From the csv: {} attendee already exist in this course, '\n+ '{} new attendees were created, and {} attendees were '\n+ 'added to the course: {}'.format(\n+ existing_attendee_count,\n+ attendee_count,\n+ course_attendee_count,\n+ self.course)\n )\n \n self.form_invalid_message = (\n", "issue": "Cannot upload a CSV with attendees list\nThe CSV is formatted as requested:\r\n\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\nimport io\nimport csv\nfrom django.db import transaction\nfrom django.urls import reverse\nfrom django.views.generic import (\n CreateView, FormView)\nfrom braces.views import LoginRequiredMixin, FormMessagesMixin\nfrom certification.models import (\n Attendee, CertifyingOrganisation, CourseAttendee, Course\n)\nfrom certification.forms import AttendeeForm, CsvAttendeeForm\n\n\nclass AttendeeMixin(object):\n \"\"\"Mixin class to provide standard settings for Attendee.\"\"\"\n\n model = Attendee\n form_class = AttendeeForm\n\n\nclass AttendeeCreateView(LoginRequiredMixin, AttendeeMixin, CreateView):\n \"\"\"Create view for Attendee.\"\"\"\n\n context_object_name = 'attendee'\n template_name = 'attendee/create.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful creation of the object, the User will be redirected\n to the create course attendee page.\n\n :returns: 
URL\n :rtype: HttpResponse\n \"\"\"\n add_to_course = self.request.POST.get('add_to_course')\n if add_to_course is None:\n success_url = reverse('courseattendee-create', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.course_slug,\n })\n else:\n success_url = reverse('course-detail', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.course_slug,\n })\n return success_url\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n context = super(\n AttendeeCreateView, self).get_context_data(**kwargs)\n return context\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n\n kwargs = super(AttendeeCreateView, self).get_form_kwargs()\n self.project_slug = self.kwargs.get('project_slug', None)\n self.organisation_slug = self.kwargs.get('organisation_slug', None)\n self.course_slug = self.kwargs.get('slug', None)\n self.certifying_organisation = \\\n CertifyingOrganisation.objects.get(slug=self.organisation_slug)\n kwargs.update({\n 'user': self.request.user,\n 'certifying_organisation': self.certifying_organisation\n })\n return kwargs\n\n def form_valid(self, form):\n add_to_course = self.request.POST.get('add_to_course')\n if add_to_course is None:\n if form.is_valid():\n form.save()\n else:\n if form.is_valid():\n object = form.save()\n course_slug = self.kwargs.get('slug', None)\n course = Course.objects.get(slug=course_slug)\n course_attendee = CourseAttendee(\n attendee=object,\n course=course,\n author=self.request.user\n )\n course_attendee.save()\n return super(AttendeeCreateView, self).form_valid(form)\n\n\nclass CsvUploadView(FormMessagesMixin, LoginRequiredMixin, FormView):\n \"\"\"\n Allow upload of attendees through CSV file.\n \"\"\"\n\n context_object_name = 'csvupload'\n form_class = CsvAttendeeForm\n template_name = 'attendee/upload_attendee_csv.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful creation of the object, the User will be redirected\n to the Course detail page.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n\n return reverse('course-detail', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.slug,\n })\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n context = super(\n CsvUploadView, self).get_context_data(**kwargs)\n context['certifyingorganisation'] = \\\n CertifyingOrganisation.objects.get(slug=self.organisation_slug)\n context['course'] = Course.objects.get(slug=self.slug)\n return context\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n\n kwargs = super(CsvUploadView, self).get_form_kwargs()\n self.project_slug = self.kwargs.get('project_slug', None)\n self.organisation_slug = self.kwargs.get('organisation_slug', None)\n self.slug = self.kwargs.get('slug', None)\n self.course = Course.objects.get(slug=self.slug)\n 
self.certifying_organisation = \\\n CertifyingOrganisation.objects.get(slug=self.organisation_slug)\n return kwargs\n\n @transaction.atomic()\n def post(self, request, *args, **kwargs):\n \"\"\"Get form instance from upload.\n\n After successful creation of the object,the User\n will be redirected to the create course attendee page.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n form_class = self.get_form_class()\n form = self.get_form(form_class)\n attendees_file = request.FILES.get('file')\n attendees_file.seek(0)\n course = Course.objects.get(slug=self.slug)\n if form.is_valid():\n if attendees_file:\n reader = csv.DictReader(\n io.StringIO(attendees_file.read().decode('utf-8'))\n )\n attendee_count = 0\n course_attendee_count = 0\n for row in reader:\n # We should have logic here to first see if the attendee\n # already exists and if they do, just add them to the\n # course\n attendee = Attendee(\n firstname=row['First Name'],\n surname=row['Surname'],\n email=row['Email'],\n certifying_organisation=self.certifying_organisation,\n author=self.request.user,\n )\n try:\n attendee.save()\n attendee_count += 1\n except: # noqa\n # Could not save - probably they exist already\n attendee = None\n\n if not attendee:\n # put more checks in case attendee\n # does not already exist\n continue\n\n course_attendee = CourseAttendee(\n attendee=attendee,\n course=course,\n author=self.request.user,\n )\n try:\n course_attendee.save()\n course_attendee_count += 1\n except: # noqa\n # They are probably already associated with a course\n pass\n\n self.form_valid_message = (\n '%i new attendees were created, and %i attendees were '\n 'added to the course: % s' % (\n attendee_count, course_attendee_count, self.course)\n )\n\n self.form_invalid_message = (\n 'Something wrong happened while running the upload. 
'\n 'Please contact site support to help resolving the issue.')\n return self.form_valid(form)\n\n else:\n return self.form_invalid(form)\n", "path": "django_project/certification/views/attendee.py"}], "after_files": [{"content": "# coding=utf-8\nimport io\nimport csv\nfrom django.db import transaction\nfrom django.urls import reverse\nfrom django.views.generic import (\n CreateView, FormView)\nfrom braces.views import LoginRequiredMixin, FormMessagesMixin\nfrom certification.models import (\n Attendee, CertifyingOrganisation, CourseAttendee, Course\n)\nfrom certification.forms import AttendeeForm, CsvAttendeeForm\n\n\nclass AttendeeMixin(object):\n \"\"\"Mixin class to provide standard settings for Attendee.\"\"\"\n\n model = Attendee\n form_class = AttendeeForm\n\n\nclass AttendeeCreateView(LoginRequiredMixin, AttendeeMixin, CreateView):\n \"\"\"Create view for Attendee.\"\"\"\n\n context_object_name = 'attendee'\n template_name = 'attendee/create.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful creation of the object, the User will be redirected\n to the create course attendee page.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n add_to_course = self.request.POST.get('add_to_course')\n if add_to_course is None:\n success_url = reverse('courseattendee-create', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.course_slug,\n })\n else:\n success_url = reverse('course-detail', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.course_slug,\n })\n return success_url\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n context = super(\n AttendeeCreateView, self).get_context_data(**kwargs)\n return context\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n\n kwargs = super(AttendeeCreateView, self).get_form_kwargs()\n self.project_slug = self.kwargs.get('project_slug', None)\n self.organisation_slug = self.kwargs.get('organisation_slug', None)\n self.course_slug = self.kwargs.get('slug', None)\n self.certifying_organisation = \\\n CertifyingOrganisation.objects.get(slug=self.organisation_slug)\n kwargs.update({\n 'user': self.request.user,\n 'certifying_organisation': self.certifying_organisation\n })\n return kwargs\n\n def form_valid(self, form):\n add_to_course = self.request.POST.get('add_to_course')\n if add_to_course is None:\n if form.is_valid():\n form.save()\n else:\n if form.is_valid():\n object = form.save()\n course_slug = self.kwargs.get('slug', None)\n course = Course.objects.get(slug=course_slug)\n course_attendee = CourseAttendee(\n attendee=object,\n course=course,\n author=self.request.user\n )\n course_attendee.save()\n return super(AttendeeCreateView, self).form_valid(form)\n\n\nclass CsvUploadView(FormMessagesMixin, LoginRequiredMixin, FormView):\n \"\"\"\n Allow upload of attendees through CSV file.\n \"\"\"\n\n context_object_name = 'csvupload'\n form_class = CsvAttendeeForm\n template_name = 'attendee/upload_attendee_csv.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful creation of the object, the User will be redirected\n to the Course detail page.\n\n 
:returns: URL\n :rtype: HttpResponse\n \"\"\"\n\n return reverse('course-detail', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.slug,\n })\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n context = super(\n CsvUploadView, self).get_context_data(**kwargs)\n context['certifyingorganisation'] = \\\n CertifyingOrganisation.objects.get(slug=self.organisation_slug)\n context['course'] = Course.objects.get(slug=self.slug)\n return context\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n\n kwargs = super(CsvUploadView, self).get_form_kwargs()\n self.project_slug = self.kwargs.get('project_slug', None)\n self.organisation_slug = self.kwargs.get('organisation_slug', None)\n self.slug = self.kwargs.get('slug', None)\n self.course = Course.objects.get(slug=self.slug)\n self.certifying_organisation = \\\n CertifyingOrganisation.objects.get(slug=self.organisation_slug)\n return kwargs\n\n @transaction.atomic()\n def post(self, request, *args, **kwargs):\n \"\"\"Get form instance from upload.\n\n After successful creation of the object,the User\n will be redirected to the create course attendee page.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n form_class = self.get_form_class()\n form = self.get_form(form_class)\n attendees_file = request.FILES.get('file')\n attendees_file.seek(0)\n course = Course.objects.get(slug=self.slug)\n if form.is_valid():\n if attendees_file:\n reader = csv.DictReader(\n io.StringIO(attendees_file.read().decode('utf-8'))\n )\n fieldnames = reader.fieldnames\n attendee_count = 0\n course_attendee_count = 0\n existing_attendee_count = 0\n for row in reader:\n # We should have logic here to first see if the attendee\n # already exists and if they do, just add them to the\n # course\n try:\n attendee = Attendee.objects.get(\n firstname=row[fieldnames[0]],\n surname=row[fieldnames[1]],\n email=row[fieldnames[2]],\n certifying_organisation=\n self.certifying_organisation,\n )\n except Attendee.DoesNotExist:\n attendee = Attendee(\n firstname=row[fieldnames[0]],\n surname=row[fieldnames[1]],\n email=row[fieldnames[2]],\n certifying_organisation=\n self.certifying_organisation,\n author=self.request.user\n )\n attendee.save()\n attendee_count += 1\n\n try:\n course_attendee = CourseAttendee.objects.get(\n attendee=attendee,\n course=course,\n )\n except CourseAttendee.DoesNotExist:\n course_attendee = CourseAttendee(\n attendee=attendee,\n course=course,\n author=self.request.user\n )\n course_attendee.save()\n course_attendee_count += 1\n else:\n existing_attendee_count += 1\n\n self.form_valid_message = (\n 'From the csv: {} attendee already exist in this course, '\n '{} new attendees were created, and {} attendees were '\n 'added to the course: {}'.format(\n existing_attendee_count,\n attendee_count,\n course_attendee_count,\n self.course)\n )\n\n self.form_invalid_message = (\n 'Something wrong happened while running the upload. '\n 'Please contact site support to help resolving the issue.')\n return self.form_valid(form)\n\n else:\n return self.form_invalid(form)\n", "path": "django_project/certification/views/attendee.py"}]} | 2,522 | 706 |
gh_patches_debug_20738 | rasdani/github-patches | git_diff | ESMCI__cime-547 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
xmlquery resolve option not functioning
The xmlquery routine has an option --resolve, but xmlquery should resolve variables by default and allow the user to override this to get the unresolved value. As far as I can tell the --resolve option does nothing at all. This option should be changed to --no-resolve and, when set, should pass resolved=False to get_value.
--- END ISSUE ---
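For readers skimming the report above: the request is to flip the command-line default so that values are resolved unless the user opts out. A minimal sketch of that wiring — `FakeCase`, the flag name, and the toy resolution are invented stand-ins, not the real xmlquery/Case code:

```python
# Hedged sketch only: FakeCase and the flag wiring are illustrative stand-ins,
# not the actual CIME Case or xmlquery implementation.
import argparse


class FakeCase(object):
    """Tiny stand-in that can return raw or resolved values."""

    def __init__(self, values):
        self._values = values

    def get_value(self, name, resolved=True):
        value = self._values.get(name)
        if resolved and isinstance(value, str):
            value = value.replace("$SRCROOT", "/path/to/src")  # toy resolution
        return value


def main(argv=None):
    parser = argparse.ArgumentParser(description="toy xmlquery front end")
    parser.add_argument("variables", nargs="+")
    parser.add_argument("--no-resolve", action="store_true",
                        help="print raw values without resolving $VAR references")
    args = parser.parse_args(argv)

    case = FakeCase({"EXEROOT": "$SRCROOT/bld"})
    for name in args.variables:
        # Resolution stays the default; the new flag turns it off explicitly.
        print("%s: %s" % (name, case.get_value(name, resolved=not args.no_resolve)))


if __name__ == "__main__":
    main(["EXEROOT", "--no-resolve"])
```

The point of the sketch is only the `resolved=not args.no_resolve` line; everything else exists to make it runnable.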
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/XML/generic_xml.py`
Content:
```
1 """
2 Common interface to XML files, this is an abstract class and is expected to
3 be used by other XML interface modules and not directly.
4 """
5 from CIME.XML.standard_module_setup import *
6 from distutils.spawn import find_executable
7 from xml.dom import minidom
8 from CIME.utils import expect, get_cime_root
9
10 logger = logging.getLogger(__name__)
11
12 class GenericXML(object):
13
14 def __init__(self, infile=None):
15 """
16 Initialize an object
17 """
18
19 logger.debug("Initializing %s" , infile)
20 self.tree = None
21 self.version = None
22
23 if infile == None:
24 # if file is not defined just return
25 self.filename = None
26 return
27
28 if os.path.isfile(infile) and os.access(infile, os.R_OK):
29 # If file is defined and exists, read it
30 self.filename = infile
31 self.read(infile)
32 else:
33 # if file does not exist create a root xml element
34 # and set it's id to file
35
36 logger.debug("File %s does not exists." , infile)
37 expect("$" not in infile,"File path not fully resolved %s"%infile)
38
39 self.filename = infile
40 root = ET.Element("xml")
41 root.set("version", "1.0")
42 self.root = ET.SubElement(root, "file")
43 self.root.set("id", os.path.basename(infile))
44 self.tree = ET.ElementTree(root)
45
46 def read(self, infile):
47 """
48 Read and parse an xml file into the object
49 """
50 logger.debug("read: " + infile)
51 if self.tree:
52 self.root.append(ET.parse(infile).getroot())
53 else:
54 self.tree = ET.parse(infile)
55 self.root = self.tree.getroot()
56 self.version = self.root.get("version")
57 self.version = "1.0" if self.version is None else self.version
58 logger.debug("File version is "+self.version)
59
60 def write(self, outfile=None):
61 """
62 Write an xml file from data in self
63 """
64 if outfile is None:
65 outfile = self.filename
66
67 logger.debug("write: " + outfile)
68 try:
69 xmlstr = ET.tostring(self.root)
70 except ET.ParseError as e:
71 ET.dump(self.root)
72 expect(False, "Could not write file %s, xml formatting error '%s'" % (self.filename, e))
73
74 # xmllint provides a better format option for the output file
75 xmllint = find_executable("xmllint")
76 if xmllint is not None:
77 run_cmd_no_fail("%s --format --output %s -"%(xmllint,outfile), input_str=xmlstr)
78 else:
79 doc = minidom.parseString(xmlstr)
80 with open(outfile,'w') as xmlout:
81 doc.writexml(xmlout,addindent=' ')
82
83 def get_node(self, nodename, attributes=None, root=None, xpath=None):
84 """
85 Get an xml element matching nodename with optional attributes.
86
87 Error unless exactly one match.
88 """
89
90 nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)
91
92 expect(len(nodes) == 1, "Incorrect number of matches, %d, for nodename '%s' and attrs '%s' in file '%s'" %
93 (len(nodes), nodename, attributes, self.filename))
94 return nodes[0]
95
96 def get_optional_node(self, nodename, attributes=None, root=None, xpath=None):
97 """
98 Get an xml element matching nodename with optional attributes.
99
100 Return None if no match.
101 """
102 nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)
103
104 expect(len(nodes) <= 1, "Multiple matches for nodename '%s' and attrs '%s' in file '%s'" %
105 (nodename, attributes, self.filename))
106 return nodes[0] if nodes else None
107
108 def get_nodes(self, nodename, attributes=None, root=None, xpath=None):
109
110 logger.debug("(get_nodes) Input values: %s , %s , %s , %s , %s" , self.__class__.__name__ , nodename , attributes , root , xpath)
111
112 if root is None:
113 root = self.root
114 nodes = []
115
116 expect(attributes is None or xpath is None,
117 " Arguments attributes and xpath are exclusive")
118 if xpath is None:
119 xpath = ".//"+nodename
120
121 if attributes:
122 # xml.etree has limited support for xpath and does not allow more than
123 # one attribute in an xpath query so we query seperately for each attribute
124 # and create a result with the intersection of those lists
125
126 for key, value in attributes.iteritems():
127 if value is not None:
128 expect(isinstance(value, str), " Bad value passed for key %s"%key)
129 xpath = ".//%s[@%s=\'%s\']" % (nodename, key, value)
130 logger.debug("xpath is %s"%xpath)
131
132 try:
133 newnodes = root.findall(xpath)
134 except Exception as e:
135 expect(False, "Bad xpath search term '%s', error: %s" % (xpath, e))
136
137 if not nodes:
138 nodes = newnodes
139 else:
140 for node in nodes[:]:
141 if node not in newnodes:
142 nodes.remove(node)
143 if not nodes:
144 return []
145
146 else:
147 logger.debug("xpath: %s" , xpath)
148 nodes = root.findall(xpath)
149
150 logger.debug("Returning %s nodes (%s)" , len(nodes), nodes)
151
152 return nodes
153
154 def add_child(self, node, root=None):
155 """
156 Add element node to self at root
157 """
158 if root is None:
159 root = self.root
160 self.root.append(node)
161
162 def get_value(self, item, attribute=None, resolved=True, subgroup=None): # pylint: disable=unused-argument
163 """
164 get_value is expected to be defined by the derived classes, if you get here
165 the value was not found in the class.
166 """
167 logger.debug("Get Value for " + item)
168 return None
169
170 def set_value(self, vid, value, subgroup=None, ignore_type=True): # pylint: disable=unused-argument
171 """
172 ignore_type is not used in this flavor
173 """
174 valnodes = self.get_nodes(vid)
175 if valnodes:
176 for node in valnodes:
177 node.text = value
178
179 def get_resolved_value(self, raw_value):
180 """
181 A value in the xml file may contain references to other xml
182 variables or to environment variables. These are refered to in
183 the perl style with $name and $ENV{name}.
184
185 >>> obj = GenericXML()
186 >>> os.environ["FOO"] = "BAR"
187 >>> os.environ["BAZ"] = "BARF"
188 >>> obj.get_resolved_value("one $ENV{FOO} two $ENV{BAZ} three")
189 'one BAR two BARF three'
190 """
191 logger.debug("raw_value %s" % raw_value)
192 reference_re = re.compile(r'\${?(\w+)}?')
193 env_ref_re = re.compile(r'\$ENV\{(\w+)\}')
194 item_data = raw_value
195
196 if item_data is None:
197 return None
198
199 if type(item_data) is not str:
200 return item_data
201
202 for m in env_ref_re.finditer(item_data):
203 logger.debug("look for %s in env" % item_data)
204 env_var = m.groups()[0]
205 expect(env_var in os.environ, "Undefined env var '%s'" % env_var)
206 item_data = item_data.replace(m.group(), os.environ[env_var])
207
208 for m in reference_re.finditer(item_data):
209 var = m.groups()[0]
210 logger.debug("find: %s" % var)
211 ref = self.get_value(var)
212 if ref is not None:
213 logger.debug("resolve: " + str(ref))
214 item_data = item_data.replace(m.group(), self.get_resolved_value(str(ref)))
215 elif var == "CIMEROOT":
216 cimeroot = get_cime_root()
217 item_data = item_data.replace(m.group(), cimeroot)
218 elif var == "SRCROOT":
219 srcroot = os.path.join(get_cime_root(),"..")
220 item_data = item_data.replace(m.group(), srcroot)
221
222 return item_data
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/XML/generic_xml.py b/utils/python/CIME/XML/generic_xml.py
--- a/utils/python/CIME/XML/generic_xml.py
+++ b/utils/python/CIME/XML/generic_xml.py
@@ -191,6 +191,7 @@
logger.debug("raw_value %s" % raw_value)
reference_re = re.compile(r'\${?(\w+)}?')
env_ref_re = re.compile(r'\$ENV\{(\w+)\}')
+ math_re = re.compile(r'[+-/*]')
item_data = raw_value
if item_data is None:
@@ -218,5 +219,13 @@
elif var == "SRCROOT":
srcroot = os.path.join(get_cime_root(),"..")
item_data = item_data.replace(m.group(), srcroot)
-
+ elif var in os.environ:
+ logging.debug("resolve from env: " + var)
+ item_data = item_data.replace(m.group(), os.environ[var])
+ if math_re.search(item_data):
+ try:
+ tmp = eval(item_data)
+ except:
+ tmp = item_data
+ item_data = str(tmp)
return item_data
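The reference patch above also folds simple arithmetic into resolution by scanning for operators and attempting an eval() with a string fallback. A stripped-down sketch of that pattern, pulled out of the class purely for illustration (the helper name is invented, not part of CIME):

```python
# Illustration of the evaluate-or-fall-back idiom used in the patch above.
import re

MATH_RE = re.compile(r'[+-/*]')


def finish_resolution(item_data):
    if MATH_RE.search(item_data):
        try:
            return str(eval(item_data))  # assumes trusted XML content
        except Exception:
            return item_data
    return item_data


print(finish_resolution("4*256"))      # -> "1024"
print(finish_resolution("case-name"))  # -> "case-name" (eval fails, raw string kept)
```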
| {"golden_diff": "diff --git a/utils/python/CIME/XML/generic_xml.py b/utils/python/CIME/XML/generic_xml.py\n--- a/utils/python/CIME/XML/generic_xml.py\n+++ b/utils/python/CIME/XML/generic_xml.py\n@@ -191,6 +191,7 @@\n logger.debug(\"raw_value %s\" % raw_value)\n reference_re = re.compile(r'\\${?(\\w+)}?')\n env_ref_re = re.compile(r'\\$ENV\\{(\\w+)\\}')\n+ math_re = re.compile(r'[+-/*]')\n item_data = raw_value\n \n if item_data is None:\n@@ -218,5 +219,13 @@\n elif var == \"SRCROOT\":\n srcroot = os.path.join(get_cime_root(),\"..\")\n item_data = item_data.replace(m.group(), srcroot)\n-\n+ elif var in os.environ:\n+ logging.debug(\"resolve from env: \" + var)\n+ item_data = item_data.replace(m.group(), os.environ[var])\n+ if math_re.search(item_data):\n+ try:\n+ tmp = eval(item_data)\n+ except:\n+ tmp = item_data\n+ item_data = str(tmp)\n return item_data\n", "issue": "xmlquery resolve option not functioning \nThe xmlquery routine has an option --resolve. But xmlquery should resolve variables by default and allow the user to override this and get the unresolved value. As far as I can tell the --resolve option does nothing at all. This option should be changed to --no-resolve and when set should change the resolved option to get_value to False. \n\n", "before_files": [{"content": "\"\"\"\nCommon interface to XML files, this is an abstract class and is expected to\nbe used by other XML interface modules and not directly.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom distutils.spawn import find_executable\nfrom xml.dom import minidom\nfrom CIME.utils import expect, get_cime_root\n\nlogger = logging.getLogger(__name__)\n\nclass GenericXML(object):\n\n def __init__(self, infile=None):\n \"\"\"\n Initialize an object\n \"\"\"\n\n logger.debug(\"Initializing %s\" , infile)\n self.tree = None\n self.version = None\n\n if infile == None:\n # if file is not defined just return\n self.filename = None\n return\n\n if os.path.isfile(infile) and os.access(infile, os.R_OK):\n # If file is defined and exists, read it\n self.filename = infile\n self.read(infile)\n else:\n # if file does not exist create a root xml element\n # and set it's id to file\n\n logger.debug(\"File %s does not exists.\" , infile)\n expect(\"$\" not in infile,\"File path not fully resolved %s\"%infile)\n\n self.filename = infile\n root = ET.Element(\"xml\")\n root.set(\"version\", \"1.0\")\n self.root = ET.SubElement(root, \"file\")\n self.root.set(\"id\", os.path.basename(infile))\n self.tree = ET.ElementTree(root)\n\n def read(self, infile):\n \"\"\"\n Read and parse an xml file into the object\n \"\"\"\n logger.debug(\"read: \" + infile)\n if self.tree:\n self.root.append(ET.parse(infile).getroot())\n else:\n self.tree = ET.parse(infile)\n self.root = self.tree.getroot()\n self.version = self.root.get(\"version\")\n self.version = \"1.0\" if self.version is None else self.version\n logger.debug(\"File version is \"+self.version)\n\n def write(self, outfile=None):\n \"\"\"\n Write an xml file from data in self\n \"\"\"\n if outfile is None:\n outfile = self.filename\n\n logger.debug(\"write: \" + outfile)\n try:\n xmlstr = ET.tostring(self.root)\n except ET.ParseError as e:\n ET.dump(self.root)\n expect(False, \"Could not write file %s, xml formatting error '%s'\" % (self.filename, e))\n\n # xmllint provides a better format option for the output file\n xmllint = find_executable(\"xmllint\")\n if xmllint is not None:\n run_cmd_no_fail(\"%s --format --output %s -\"%(xmllint,outfile), input_str=xmlstr)\n else:\n doc = 
minidom.parseString(xmlstr)\n with open(outfile,'w') as xmlout:\n doc.writexml(xmlout,addindent=' ')\n\n def get_node(self, nodename, attributes=None, root=None, xpath=None):\n \"\"\"\n Get an xml element matching nodename with optional attributes.\n\n Error unless exactly one match.\n \"\"\"\n\n nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)\n\n expect(len(nodes) == 1, \"Incorrect number of matches, %d, for nodename '%s' and attrs '%s' in file '%s'\" %\n (len(nodes), nodename, attributes, self.filename))\n return nodes[0]\n\n def get_optional_node(self, nodename, attributes=None, root=None, xpath=None):\n \"\"\"\n Get an xml element matching nodename with optional attributes.\n\n Return None if no match.\n \"\"\"\n nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)\n\n expect(len(nodes) <= 1, \"Multiple matches for nodename '%s' and attrs '%s' in file '%s'\" %\n (nodename, attributes, self.filename))\n return nodes[0] if nodes else None\n\n def get_nodes(self, nodename, attributes=None, root=None, xpath=None):\n\n logger.debug(\"(get_nodes) Input values: %s , %s , %s , %s , %s\" , self.__class__.__name__ , nodename , attributes , root , xpath)\n\n if root is None:\n root = self.root\n nodes = []\n\n expect(attributes is None or xpath is None,\n \" Arguments attributes and xpath are exclusive\")\n if xpath is None:\n xpath = \".//\"+nodename\n\n if attributes:\n # xml.etree has limited support for xpath and does not allow more than\n # one attribute in an xpath query so we query seperately for each attribute\n # and create a result with the intersection of those lists\n\n for key, value in attributes.iteritems():\n if value is not None:\n expect(isinstance(value, str), \" Bad value passed for key %s\"%key)\n xpath = \".//%s[@%s=\\'%s\\']\" % (nodename, key, value)\n logger.debug(\"xpath is %s\"%xpath)\n\n try:\n newnodes = root.findall(xpath)\n except Exception as e:\n expect(False, \"Bad xpath search term '%s', error: %s\" % (xpath, e))\n\n if not nodes:\n nodes = newnodes\n else:\n for node in nodes[:]:\n if node not in newnodes:\n nodes.remove(node)\n if not nodes:\n return []\n\n else:\n logger.debug(\"xpath: %s\" , xpath)\n nodes = root.findall(xpath)\n\n logger.debug(\"Returning %s nodes (%s)\" , len(nodes), nodes)\n\n return nodes\n\n def add_child(self, node, root=None):\n \"\"\"\n Add element node to self at root\n \"\"\"\n if root is None:\n root = self.root\n self.root.append(node)\n\n def get_value(self, item, attribute=None, resolved=True, subgroup=None): # pylint: disable=unused-argument\n \"\"\"\n get_value is expected to be defined by the derived classes, if you get here\n the value was not found in the class.\n \"\"\"\n logger.debug(\"Get Value for \" + item)\n return None\n\n def set_value(self, vid, value, subgroup=None, ignore_type=True): # pylint: disable=unused-argument\n \"\"\"\n ignore_type is not used in this flavor\n \"\"\"\n valnodes = self.get_nodes(vid)\n if valnodes:\n for node in valnodes:\n node.text = value\n\n def get_resolved_value(self, raw_value):\n \"\"\"\n A value in the xml file may contain references to other xml\n variables or to environment variables. 
These are refered to in\n the perl style with $name and $ENV{name}.\n\n >>> obj = GenericXML()\n >>> os.environ[\"FOO\"] = \"BAR\"\n >>> os.environ[\"BAZ\"] = \"BARF\"\n >>> obj.get_resolved_value(\"one $ENV{FOO} two $ENV{BAZ} three\")\n 'one BAR two BARF three'\n \"\"\"\n logger.debug(\"raw_value %s\" % raw_value)\n reference_re = re.compile(r'\\${?(\\w+)}?')\n env_ref_re = re.compile(r'\\$ENV\\{(\\w+)\\}')\n item_data = raw_value\n\n if item_data is None:\n return None\n\n if type(item_data) is not str:\n return item_data\n\n for m in env_ref_re.finditer(item_data):\n logger.debug(\"look for %s in env\" % item_data)\n env_var = m.groups()[0]\n expect(env_var in os.environ, \"Undefined env var '%s'\" % env_var)\n item_data = item_data.replace(m.group(), os.environ[env_var])\n\n for m in reference_re.finditer(item_data):\n var = m.groups()[0]\n logger.debug(\"find: %s\" % var)\n ref = self.get_value(var)\n if ref is not None:\n logger.debug(\"resolve: \" + str(ref))\n item_data = item_data.replace(m.group(), self.get_resolved_value(str(ref)))\n elif var == \"CIMEROOT\":\n cimeroot = get_cime_root()\n item_data = item_data.replace(m.group(), cimeroot)\n elif var == \"SRCROOT\":\n srcroot = os.path.join(get_cime_root(),\"..\")\n item_data = item_data.replace(m.group(), srcroot)\n\n return item_data\n", "path": "utils/python/CIME/XML/generic_xml.py"}], "after_files": [{"content": "\"\"\"\nCommon interface to XML files, this is an abstract class and is expected to\nbe used by other XML interface modules and not directly.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom distutils.spawn import find_executable\nfrom xml.dom import minidom\nfrom CIME.utils import expect, get_cime_root\n\nlogger = logging.getLogger(__name__)\n\nclass GenericXML(object):\n\n def __init__(self, infile=None):\n \"\"\"\n Initialize an object\n \"\"\"\n\n logger.debug(\"Initializing %s\" , infile)\n self.tree = None\n self.version = None\n\n if infile == None:\n # if file is not defined just return\n self.filename = None\n return\n\n if os.path.isfile(infile) and os.access(infile, os.R_OK):\n # If file is defined and exists, read it\n self.filename = infile\n self.read(infile)\n else:\n # if file does not exist create a root xml element\n # and set it's id to file\n\n logger.debug(\"File %s does not exists.\" , infile)\n expect(\"$\" not in infile,\"File path not fully resolved %s\"%infile)\n\n self.filename = infile\n root = ET.Element(\"xml\")\n root.set(\"version\", \"1.0\")\n self.root = ET.SubElement(root, \"file\")\n self.root.set(\"id\", os.path.basename(infile))\n self.tree = ET.ElementTree(root)\n\n def read(self, infile):\n \"\"\"\n Read and parse an xml file into the object\n \"\"\"\n logger.debug(\"read: \" + infile)\n if self.tree:\n self.root.append(ET.parse(infile).getroot())\n else:\n self.tree = ET.parse(infile)\n self.root = self.tree.getroot()\n self.version = self.root.get(\"version\")\n self.version = \"1.0\" if self.version is None else self.version\n logger.debug(\"File version is \"+self.version)\n\n def write(self, outfile=None):\n \"\"\"\n Write an xml file from data in self\n \"\"\"\n if outfile is None:\n outfile = self.filename\n\n logger.debug(\"write: \" + outfile)\n try:\n xmlstr = ET.tostring(self.root)\n except ET.ParseError as e:\n ET.dump(self.root)\n expect(False, \"Could not write file %s, xml formatting error '%s'\" % (self.filename, e))\n\n # xmllint provides a better format option for the output file\n xmllint = find_executable(\"xmllint\")\n if xmllint is not None:\n 
run_cmd_no_fail(\"%s --format --output %s -\"%(xmllint,outfile), input_str=xmlstr)\n else:\n doc = minidom.parseString(xmlstr)\n with open(outfile,'w') as xmlout:\n doc.writexml(xmlout,addindent=' ')\n\n def get_node(self, nodename, attributes=None, root=None, xpath=None):\n \"\"\"\n Get an xml element matching nodename with optional attributes.\n\n Error unless exactly one match.\n \"\"\"\n\n nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)\n\n expect(len(nodes) == 1, \"Incorrect number of matches, %d, for nodename '%s' and attrs '%s' in file '%s'\" %\n (len(nodes), nodename, attributes, self.filename))\n return nodes[0]\n\n def get_optional_node(self, nodename, attributes=None, root=None, xpath=None):\n \"\"\"\n Get an xml element matching nodename with optional attributes.\n\n Return None if no match.\n \"\"\"\n nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)\n\n expect(len(nodes) <= 1, \"Multiple matches for nodename '%s' and attrs '%s' in file '%s'\" %\n (nodename, attributes, self.filename))\n return nodes[0] if nodes else None\n\n def get_nodes(self, nodename, attributes=None, root=None, xpath=None):\n\n logger.debug(\"(get_nodes) Input values: %s , %s , %s , %s , %s\" , self.__class__.__name__ , nodename , attributes , root , xpath)\n\n if root is None:\n root = self.root\n nodes = []\n\n expect(attributes is None or xpath is None,\n \" Arguments attributes and xpath are exclusive\")\n if xpath is None:\n xpath = \".//\"+nodename\n\n if attributes:\n # xml.etree has limited support for xpath and does not allow more than\n # one attribute in an xpath query so we query seperately for each attribute\n # and create a result with the intersection of those lists\n\n for key, value in attributes.iteritems():\n if value is not None:\n expect(isinstance(value, str), \" Bad value passed for key %s\"%key)\n xpath = \".//%s[@%s=\\'%s\\']\" % (nodename, key, value)\n logger.debug(\"xpath is %s\"%xpath)\n\n try:\n newnodes = root.findall(xpath)\n except Exception as e:\n expect(False, \"Bad xpath search term '%s', error: %s\" % (xpath, e))\n\n if not nodes:\n nodes = newnodes\n else:\n for node in nodes[:]:\n if node not in newnodes:\n nodes.remove(node)\n if not nodes:\n return []\n\n else:\n logger.debug(\"xpath: %s\" , xpath)\n nodes = root.findall(xpath)\n\n logger.debug(\"Returning %s nodes (%s)\" , len(nodes), nodes)\n\n return nodes\n\n def add_child(self, node, root=None):\n \"\"\"\n Add element node to self at root\n \"\"\"\n if root is None:\n root = self.root\n self.root.append(node)\n\n def get_value(self, item, attribute=None, resolved=True, subgroup=None): # pylint: disable=unused-argument\n \"\"\"\n get_value is expected to be defined by the derived classes, if you get here\n the value was not found in the class.\n \"\"\"\n logger.debug(\"Get Value for \" + item)\n return None\n\n def set_value(self, vid, value, subgroup=None, ignore_type=True): # pylint: disable=unused-argument\n \"\"\"\n ignore_type is not used in this flavor\n \"\"\"\n valnodes = self.get_nodes(vid)\n if valnodes:\n for node in valnodes:\n node.text = value\n\n def get_resolved_value(self, raw_value):\n \"\"\"\n A value in the xml file may contain references to other xml\n variables or to environment variables. 
These are refered to in\n the perl style with $name and $ENV{name}.\n\n >>> obj = GenericXML()\n >>> os.environ[\"FOO\"] = \"BAR\"\n >>> os.environ[\"BAZ\"] = \"BARF\"\n >>> obj.get_resolved_value(\"one $ENV{FOO} two $ENV{BAZ} three\")\n 'one BAR two BARF three'\n \"\"\"\n logger.debug(\"raw_value %s\" % raw_value)\n reference_re = re.compile(r'\\${?(\\w+)}?')\n env_ref_re = re.compile(r'\\$ENV\\{(\\w+)\\}')\n math_re = re.compile(r'[+-/*]')\n item_data = raw_value\n\n if item_data is None:\n return None\n\n if type(item_data) is not str:\n return item_data\n\n for m in env_ref_re.finditer(item_data):\n logger.debug(\"look for %s in env\" % item_data)\n env_var = m.groups()[0]\n expect(env_var in os.environ, \"Undefined env var '%s'\" % env_var)\n item_data = item_data.replace(m.group(), os.environ[env_var])\n\n for m in reference_re.finditer(item_data):\n var = m.groups()[0]\n logger.debug(\"find: %s\" % var)\n ref = self.get_value(var)\n if ref is not None:\n logger.debug(\"resolve: \" + str(ref))\n item_data = item_data.replace(m.group(), self.get_resolved_value(str(ref)))\n elif var == \"CIMEROOT\":\n cimeroot = get_cime_root()\n item_data = item_data.replace(m.group(), cimeroot)\n elif var == \"SRCROOT\":\n srcroot = os.path.join(get_cime_root(),\"..\")\n item_data = item_data.replace(m.group(), srcroot)\n elif var in os.environ:\n logging.debug(\"resolve from env: \" + var)\n item_data = item_data.replace(m.group(), os.environ[var])\n if math_re.search(item_data):\n try:\n tmp = eval(item_data)\n except:\n tmp = item_data\n item_data = str(tmp)\n return item_data\n", "path": "utils/python/CIME/XML/generic_xml.py"}]} | 2,740 | 267 |
gh_patches_debug_28893 | rasdani/github-patches | git_diff | mirumee__ariadne-387 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Argument 'code' has invalid value "ABC"
I think there is a bug when using both literal and variable values with a custom scalar.
```python
from ariadne import ScalarType
testscalar = ScalarType('TestScalar')
@testscalar.serializer
def serializer(value):
return value.upper()
@testscalar.value_parser
def value_parser(value):
if value:
return serializer(value)
@testscalar.literal_parser
def literal_parser(ast):
value = str(ast.value)
return value_parser(value)
```
If you then make the following query:
```graphql
query($code: TestScalar) {
test1: testType(code: $code) {
id
}
test2: testType(code: "ABC") {
id
}
}
```
This error is returned: Argument 'code' has invalid value "ABC"
If you don't pass variables and only use "literal" values, it works. Likewise, if you only pass variables it works fine.
If you don't set up a resolver for "testType" then no error is returned.
Not sure what is happening but I think this is a bug. If not, does anyone know why this is happening?
--- END ISSUE ---
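Two separate code paths are involved in the report above: values supplied via variables reach the scalar through its value parser, while literals written inline in the query reach it through its literal parser, and the error only appears when one request exercises both. A self-contained reproduction sketch along those lines — the schema, field, and resolver names are invented here, and it assumes ariadne running on graphql-core 3:

```python
# Reproduction sketch only; not taken from the ariadne test suite.
from ariadne import QueryType, ScalarType, make_executable_schema
from graphql import graphql_sync

type_defs = """
    scalar TestScalar

    type Query {
        echo(code: TestScalar): String
    }
"""

testscalar = ScalarType("TestScalar")


@testscalar.value_parser
def parse_test_value(value):  # called for values supplied via variables
    return value.upper()


@testscalar.literal_parser
def parse_test_literal(ast):  # called for literals written inline in the query
    return parse_test_value(str(ast.value))


query = QueryType()


@query.field("echo")
def resolve_echo(*_, code=None):
    return code


schema = make_executable_schema(type_defs, [query, testscalar])

# One request that uses a variable for one field and an inline literal for the
# other, mirroring the failing query in the issue.
result = graphql_sync(
    schema,
    'query($code: TestScalar) { a: echo(code: $code) b: echo(code: "abc") }',
    variable_values={"code": "xyz"},
)
print(result.errors or result.data)
```

According to the report, it is the inline "abc" argument that gets rejected once a variable appears in the same operation.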
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/scalars.py`
Content:
```
1 from typing import Optional, cast
2
3 from graphql.language.ast import (
4 BooleanValueNode,
5 FloatValueNode,
6 IntValueNode,
7 StringValueNode,
8 )
9 from graphql.type import (
10 GraphQLNamedType,
11 GraphQLScalarLiteralParser,
12 GraphQLScalarSerializer,
13 GraphQLScalarType,
14 GraphQLScalarValueParser,
15 GraphQLSchema,
16 )
17 from graphql.utilities import value_from_ast_untyped
18
19 from .types import SchemaBindable
20
21
22 class ScalarType(SchemaBindable):
23 _serialize: Optional[GraphQLScalarSerializer]
24 _parse_value: Optional[GraphQLScalarValueParser]
25 _parse_literal: Optional[GraphQLScalarLiteralParser]
26
27 def __init__(
28 self,
29 name: str,
30 *,
31 serializer: GraphQLScalarSerializer = None,
32 value_parser: GraphQLScalarValueParser = None,
33 literal_parser: GraphQLScalarLiteralParser = None,
34 ) -> None:
35 self.name = name
36 self._serialize = serializer
37 self._parse_value = value_parser
38 self._parse_literal = literal_parser
39
40 def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:
41 self._serialize = f
42 return f
43
44 def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:
45 self._parse_value = f
46 if not self._parse_literal:
47 self._parse_literal = create_default_literal_parser(f)
48 return f
49
50 def set_literal_parser(
51 self, f: GraphQLScalarLiteralParser
52 ) -> GraphQLScalarLiteralParser:
53 self._parse_literal = f
54 return f
55
56 # Alias above setters for consistent decorator API
57 serializer = set_serializer
58 value_parser = set_value_parser
59 literal_parser = set_literal_parser
60
61 def bind_to_schema(self, schema: GraphQLSchema) -> None:
62 graphql_type = schema.type_map.get(self.name)
63 self.validate_graphql_type(graphql_type)
64 graphql_type = cast(GraphQLScalarType, graphql_type)
65
66 if self._serialize:
67 # See mypy bug https://github.com/python/mypy/issues/2427
68 graphql_type.serialize = self._serialize # type: ignore
69 if self._parse_value:
70 graphql_type.parse_value = self._parse_value # type: ignore
71 if self._parse_literal:
72 graphql_type.parse_literal = self._parse_literal # type: ignore
73
74 def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:
75 if not graphql_type:
76 raise ValueError("Scalar %s is not defined in the schema" % self.name)
77 if not isinstance(graphql_type, GraphQLScalarType):
78 raise ValueError(
79 "%s is defined in the schema, but it is instance of %s (expected %s)"
80 % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)
81 )
82
83
84 SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)
85
86
87 def create_default_literal_parser(
88 value_parser: GraphQLScalarValueParser,
89 ) -> GraphQLScalarLiteralParser:
90 def default_literal_parser(ast):
91 return value_parser(value_from_ast_untyped(ast))
92
93 return default_literal_parser
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ariadne/scalars.py b/ariadne/scalars.py
--- a/ariadne/scalars.py
+++ b/ariadne/scalars.py
@@ -1,11 +1,5 @@
from typing import Optional, cast
-from graphql.language.ast import (
- BooleanValueNode,
- FloatValueNode,
- IntValueNode,
- StringValueNode,
-)
from graphql.type import (
GraphQLNamedType,
GraphQLScalarLiteralParser,
@@ -14,7 +8,6 @@
GraphQLScalarValueParser,
GraphQLSchema,
)
-from graphql.utilities import value_from_ast_untyped
from .types import SchemaBindable
@@ -43,8 +36,6 @@
def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:
self._parse_value = f
- if not self._parse_literal:
- self._parse_literal = create_default_literal_parser(f)
return f
def set_literal_parser(
@@ -79,15 +70,3 @@
"%s is defined in the schema, but it is instance of %s (expected %s)"
% (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)
)
-
-
-SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)
-
-
-def create_default_literal_parser(
- value_parser: GraphQLScalarValueParser,
-) -> GraphQLScalarLiteralParser:
- def default_literal_parser(ast):
- return value_parser(value_from_ast_untyped(ast))
-
- return default_literal_parser
| {"golden_diff": "diff --git a/ariadne/scalars.py b/ariadne/scalars.py\n--- a/ariadne/scalars.py\n+++ b/ariadne/scalars.py\n@@ -1,11 +1,5 @@\n from typing import Optional, cast\n \n-from graphql.language.ast import (\n- BooleanValueNode,\n- FloatValueNode,\n- IntValueNode,\n- StringValueNode,\n-)\n from graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n@@ -14,7 +8,6 @@\n GraphQLScalarValueParser,\n GraphQLSchema,\n )\n-from graphql.utilities import value_from_ast_untyped\n \n from .types import SchemaBindable\n \n@@ -43,8 +36,6 @@\n \n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n- if not self._parse_literal:\n- self._parse_literal = create_default_literal_parser(f)\n return f\n \n def set_literal_parser(\n@@ -79,15 +70,3 @@\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n-\n-\n-SCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)\n-\n-\n-def create_default_literal_parser(\n- value_parser: GraphQLScalarValueParser,\n-) -> GraphQLScalarLiteralParser:\n- def default_literal_parser(ast):\n- return value_parser(value_from_ast_untyped(ast))\n-\n- return default_literal_parser\n", "issue": "Argument 'code' has invalid value \"ABC\"\nI think there is a bug when using both literal and variable values with a custom scalar.\r\n\r\n```python\r\nfrom ariadne import ScalarType\r\n\r\ntestscalar = ScalarType('TestScalar')\r\n\r\[email protected]\r\ndef serializer(value):\r\n return value.upper()\r\n\r\n\r\[email protected]_parser\r\ndef value_parser(value):\r\n if value:\r\n return serializer(value)\r\n\r\n\r\[email protected]_parser\r\ndef literal_parser(ast):\r\n value = str(ast.value)\r\n return value_parser(value)\r\n```\r\n\r\nIf you then make the following query:\r\n```graphql\r\nquery($code: TestScalar) {\r\n test1: testType(code: $code) {\r\n id\r\n }\r\n test2: testType(code: \"ABC\") {\r\n id\r\n }\r\n}\r\n```\r\n This error is returned: Argument 'code' has invalid value \"ABC\"\r\n\r\nIf you don't pass variables and only use \"literal\" values it works. Same for if you only pass variables it works fine.\r\n\r\nIf you don't set up a resolver for \"testType\" then no error is returned.\r\n\r\nNot sure what is happening but I think this is a bug. 
If not, does anyone know why this is happening?\n", "before_files": [{"content": "from typing import Optional, cast\n\nfrom graphql.language.ast import (\n BooleanValueNode,\n FloatValueNode,\n IntValueNode,\n StringValueNode,\n)\nfrom graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n GraphQLScalarSerializer,\n GraphQLScalarType,\n GraphQLScalarValueParser,\n GraphQLSchema,\n)\nfrom graphql.utilities import value_from_ast_untyped\n\nfrom .types import SchemaBindable\n\n\nclass ScalarType(SchemaBindable):\n _serialize: Optional[GraphQLScalarSerializer]\n _parse_value: Optional[GraphQLScalarValueParser]\n _parse_literal: Optional[GraphQLScalarLiteralParser]\n\n def __init__(\n self,\n name: str,\n *,\n serializer: GraphQLScalarSerializer = None,\n value_parser: GraphQLScalarValueParser = None,\n literal_parser: GraphQLScalarLiteralParser = None,\n ) -> None:\n self.name = name\n self._serialize = serializer\n self._parse_value = value_parser\n self._parse_literal = literal_parser\n\n def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:\n self._serialize = f\n return f\n\n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n if not self._parse_literal:\n self._parse_literal = create_default_literal_parser(f)\n return f\n\n def set_literal_parser(\n self, f: GraphQLScalarLiteralParser\n ) -> GraphQLScalarLiteralParser:\n self._parse_literal = f\n return f\n\n # Alias above setters for consistent decorator API\n serializer = set_serializer\n value_parser = set_value_parser\n literal_parser = set_literal_parser\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLScalarType, graphql_type)\n\n if self._serialize:\n # See mypy bug https://github.com/python/mypy/issues/2427\n graphql_type.serialize = self._serialize # type: ignore\n if self._parse_value:\n graphql_type.parse_value = self._parse_value # type: ignore\n if self._parse_literal:\n graphql_type.parse_literal = self._parse_literal # type: ignore\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n if not graphql_type:\n raise ValueError(\"Scalar %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLScalarType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n\n\nSCALAR_AST_NODES = (BooleanValueNode, FloatValueNode, IntValueNode, StringValueNode)\n\n\ndef create_default_literal_parser(\n value_parser: GraphQLScalarValueParser,\n) -> GraphQLScalarLiteralParser:\n def default_literal_parser(ast):\n return value_parser(value_from_ast_untyped(ast))\n\n return default_literal_parser\n", "path": "ariadne/scalars.py"}], "after_files": [{"content": "from typing import Optional, cast\n\nfrom graphql.type import (\n GraphQLNamedType,\n GraphQLScalarLiteralParser,\n GraphQLScalarSerializer,\n GraphQLScalarType,\n GraphQLScalarValueParser,\n GraphQLSchema,\n)\n\nfrom .types import SchemaBindable\n\n\nclass ScalarType(SchemaBindable):\n _serialize: Optional[GraphQLScalarSerializer]\n _parse_value: Optional[GraphQLScalarValueParser]\n _parse_literal: Optional[GraphQLScalarLiteralParser]\n\n def __init__(\n self,\n name: str,\n *,\n serializer: GraphQLScalarSerializer = None,\n value_parser: GraphQLScalarValueParser = None,\n 
literal_parser: GraphQLScalarLiteralParser = None,\n ) -> None:\n self.name = name\n self._serialize = serializer\n self._parse_value = value_parser\n self._parse_literal = literal_parser\n\n def set_serializer(self, f: GraphQLScalarSerializer) -> GraphQLScalarSerializer:\n self._serialize = f\n return f\n\n def set_value_parser(self, f: GraphQLScalarValueParser) -> GraphQLScalarValueParser:\n self._parse_value = f\n return f\n\n def set_literal_parser(\n self, f: GraphQLScalarLiteralParser\n ) -> GraphQLScalarLiteralParser:\n self._parse_literal = f\n return f\n\n # Alias above setters for consistent decorator API\n serializer = set_serializer\n value_parser = set_value_parser\n literal_parser = set_literal_parser\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLScalarType, graphql_type)\n\n if self._serialize:\n # See mypy bug https://github.com/python/mypy/issues/2427\n graphql_type.serialize = self._serialize # type: ignore\n if self._parse_value:\n graphql_type.parse_value = self._parse_value # type: ignore\n if self._parse_literal:\n graphql_type.parse_literal = self._parse_literal # type: ignore\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n if not graphql_type:\n raise ValueError(\"Scalar %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLScalarType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLScalarType.__name__)\n )\n", "path": "ariadne/scalars.py"}]} | 1,366 | 351 |
gh_patches_debug_66140 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-1452 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Run Flake8 lint on RHEL6
Currently, flake8 is run only on RHEL7 and RHEL8, not on RHEL6. According to [the documentation](http://flake8.pycqa.org/en/latest/#installation), flake8 must be run with the exact Python version that is used. Thus, to be sure that the syntax is valid even for the older Python version, we have to run it on RHEL6 too.
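For illustration only (a hypothetical snippet, not code from this repo): flake8 parses code with the interpreter it runs under, so constructs that are legal on Python 2.7+ but not on Python 2.6 pass a RHEL7/8 lint run and still break at import time on RHEL6.

```python
# Hypothetical module illustrating the gap: both lines parse cleanly on
# Python 2.7+, so flake8 run on RHEL7/8 reports nothing, but Python 2.6
# raises SyntaxError on each of them.
deps = {name: True for name in ("pyyaml", "six")}  # dict comprehensions: new in 2.7
flags = {"quiet", "verbose"}                        # set literals: new in 2.7
```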
Tackled in #1251.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 from setuptools import setup, find_packages
3
4 __here__ = os.path.dirname(os.path.abspath(__file__))
5
6 package_info = dict.fromkeys(["RELEASE", "COMMIT", "VERSION", "NAME"])
7
8 for name in package_info:
9 with open(os.path.join(__here__, "insights", name)) as f:
10 package_info[name] = f.read().strip()
11
12 entry_points = {
13 'console_scripts': [
14 'insights-run = insights:main',
15 'insights-info = insights.tools.query:main',
16 'gen_api = insights.tools.generate_api_config:main',
17 'insights-perf = insights.tools.perf:main',
18 'client = insights.client:run',
19 'mangle = insights.util.mangle:main'
20 ]
21 }
22
23 runtime = set([
24 'pyyaml>=3.10,<=3.13',
25 'six',
26 ])
27
28
29 def maybe_require(pkg):
30 try:
31 __import__(pkg)
32 except ImportError:
33 runtime.add(pkg)
34
35
36 maybe_require("importlib")
37 maybe_require("argparse")
38
39
40 client = set([
41 'requests',
42 'pyOpenSSL',
43 ])
44
45 develop = set([
46 'futures==3.0.5',
47 'requests==2.13.0',
48 'wheel',
49 ])
50
51 docs = set([
52 'Sphinx==1.7.9',
53 'nbsphinx==0.3.1',
54 'sphinx_rtd_theme',
55 'ipython<6',
56 'colorama',
57 ])
58
59 testing = set([
60 'coverage==4.3.4',
61 'pytest==3.0.6',
62 'pytest-cov==2.4.0',
63 'mock==2.0.0',
64 ])
65
66 linting = set([
67 'flake8==3.3.0',
68 ])
69
70 optional = set([
71 'jinja2',
72 'python-cjson',
73 'python-logstash',
74 'python-statsd',
75 'watchdog',
76 ])
77
78 if __name__ == "__main__":
79 # allows for runtime modification of rpm name
80 name = os.environ.get("INSIGHTS_CORE_NAME", package_info["NAME"])
81
82 setup(
83 name=name,
84 version=package_info["VERSION"],
85 description="Insights Core is a data collection and analysis framework",
86 long_description=open("README.rst").read(),
87 url="https://github.com/redhatinsights/insights-core",
88 author="Red Hat, Inc.",
89 author_email="[email protected]",
90 packages=find_packages(),
91 install_requires=list(runtime),
92 package_data={'': ['LICENSE']},
93 license='Apache 2.0',
94 extras_require={
95 'develop': list(runtime | develop | client | docs | linting | testing),
96 'client': list(runtime | client),
97 'optional': list(optional),
98 'docs': list(docs),
99 'linting': list(linting | client),
100 'testing': list(testing | client)
101 },
102 classifiers=[
103 'Development Status :: 5 - Production/Stable',
104 'Intended Audience :: Developers',
105 'Natural Language :: English',
106 'License :: OSI Approved :: Apache Software License',
107 'Programming Language :: Python',
108 'Programming Language :: Python :: 2.6',
109 'Programming Language :: Python :: 2.7',
110 'Programming Language :: Python :: 3.3',
111 'Programming Language :: Python :: 3.4',
112 'Programming Language :: Python :: 3.5',
113 'Programming Language :: Python :: 3.6'
114 ],
115 entry_points=entry_points,
116 include_package_data=True
117 )
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,7 @@
])
linting = set([
- 'flake8==3.3.0',
+ 'flake8==2.6.2',
])
optional = set([
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,7 @@\n ])\n \n linting = set([\n- 'flake8==3.3.0',\n+ 'flake8==2.6.2',\n ])\n \n optional = set([\n", "issue": "Run Flake8 lint on RHEL6\nCurrently, flake8 is run only on RHEL7 and 8 and not on RHEL6. According to [the documentation](http://flake8.pycqa.org/en/latest/#installation) it is necessary to run flake8 with the exact Python version that is used. Thus to be sure that the syntax is ok even for the older Python version, we have to run in to RHEL6 too.\r\n\r\nTackled in #1251.\n", "before_files": [{"content": "import os\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-run = insights:main',\n 'insights-info = insights.tools.query:main',\n 'gen_api = insights.tools.generate_api_config:main',\n 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'pyyaml>=3.10,<=3.13',\n 'six',\n])\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests',\n 'pyOpenSSL',\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'requests==2.13.0',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx==1.7.9',\n 'nbsphinx==0.3.1',\n 'sphinx_rtd_theme',\n 'ipython<6',\n 'colorama',\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\nlinting = set([\n 'flake8==3.3.0',\n])\n\noptional = set([\n 'jinja2',\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing),\n 'client': list(runtime | client),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}], "after_files": [{"content": "import os\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = 
dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-run = insights:main',\n 'insights-info = insights.tools.query:main',\n 'gen_api = insights.tools.generate_api_config:main',\n 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'pyyaml>=3.10,<=3.13',\n 'six',\n])\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests',\n 'pyOpenSSL',\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'requests==2.13.0',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx==1.7.9',\n 'nbsphinx==0.3.1',\n 'sphinx_rtd_theme',\n 'ipython<6',\n 'colorama',\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\nlinting = set([\n 'flake8==2.6.2',\n])\n\noptional = set([\n 'jinja2',\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing),\n 'client': list(runtime | client),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}]} | 1,382 | 69 |
gh_patches_debug_28732 | rasdani/github-patches | git_diff | mindee__doctr-619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
How to run this GitHub repo on Google Colab
I want to run this repo on Google Colab.
How can I run this code there?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 """
7 Package installation setup
8 """
9
10 import os
11 import re
12 import subprocess
13 from pathlib import Path
14
15 from setuptools import find_packages, setup
16
17 version = "0.4.1a0"
18 sha = 'Unknown'
19 src_folder = 'doctr'
20 package_index = 'python-doctr'
21
22 cwd = Path(__file__).parent.absolute()
23
24 if os.getenv('BUILD_VERSION'):
25 version = os.getenv('BUILD_VERSION')
26 elif sha != 'Unknown':
27 try:
28 sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()
29 except Exception:
30 pass
31 version += '+' + sha[:7]
32 print(f"Building wheel {package_index}-{version}")
33
34 with open(cwd.joinpath(src_folder, 'version.py'), 'w') as f:
35 f.write(f"__version__ = '{version}'\n")
36
37 with open('README.md', 'r') as f:
38 readme = f.read()
39
40 # Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py
41 _deps = [
42 "importlib_metadata",
43 "numpy>=1.16.0",
44 "scipy>=1.4.0",
45 "opencv-python>=3.4.5.20",
46 "tensorflow>=2.4.0",
47 "PyMuPDF>=1.16.0,<1.18.11",
48 "pyclipper>=1.2.0",
49 "shapely>=1.6.0",
50 "matplotlib>=3.1.0,<3.4.3",
51 "mplcursors>=0.3",
52 "weasyprint>=52.2,<53.0",
53 "unidecode>=1.0.0",
54 "tensorflow-cpu>=2.4.0",
55 "torch>=1.8.0",
56 "torchvision>=0.9.0",
57 "Pillow>=8.3.2", # cf. https://github.com/advisories/GHSA-98vv-pw6r-q6q4
58 "tqdm>=4.30.0",
59 "tensorflow-addons>=0.13.0",
60 "rapidfuzz>=1.6.0",
61 "keras<2.7.0",
62 # Testing
63 "pytest>=5.3.2",
64 "coverage>=4.5.4",
65 "requests>=2.20.0",
66 "requirements-parser==0.2.0",
67 # Quality
68 "flake8>=3.9.0",
69 "isort>=5.7.0",
70 "mypy>=0.812",
71 # Docs
72 "sphinx<3.5.0",
73 "sphinx-rtd-theme==0.4.3",
74 "sphinxemoji>=0.1.8",
75 "sphinx-copybutton>=0.3.1",
76 "docutils<0.18",
77 ]
78
79 deps = {b: a for a, b in (re.findall(r"^(([^!=<>]+)(?:[!=<>].*)?$)", x)[0] for x in _deps)}
80
81
82 def deps_list(*pkgs):
83 return [deps[pkg] for pkg in pkgs]
84
85
86 install_requires = [
87 deps["importlib_metadata"] + ";python_version<'3.8'", # importlib_metadata for Python versions that don't have it
88 deps["numpy"],
89 deps["scipy"],
90 deps["opencv-python"],
91 deps["PyMuPDF"],
92 deps["pyclipper"],
93 deps["shapely"],
94 deps["matplotlib"],
95 deps["mplcursors"],
96 deps["weasyprint"],
97 deps["unidecode"],
98 deps["Pillow"],
99 deps["tqdm"],
100 deps["rapidfuzz"],
101 ]
102
103 extras = {}
104 extras["tf"] = deps_list(
105 "tensorflow",
106 "tensorflow-addons",
107 "keras",
108 )
109
110 extras["tf-cpu"] = deps_list(
111 "tensorflow-cpu",
112 "tensorflow-addons",
113 "keras",
114 )
115
116 extras["torch"] = deps_list(
117 "torch",
118 "torchvision",
119 )
120
121 extras["all"] = (
122 extras["tf"]
123 + extras["torch"]
124 )
125
126 extras["testing"] = deps_list(
127 "pytest",
128 "coverage",
129 "requests",
130 "requirements-parser",
131 )
132
133 extras["quality"] = deps_list(
134 "flake8",
135 "isort",
136 "mypy"
137 )
138
139 extras["docs_specific"] = deps_list(
140 "sphinx",
141 "sphinx-rtd-theme",
142 "sphinxemoji",
143 "sphinx-copybutton",
144 "docutils",
145 )
146
147 extras["docs"] = extras["all"] + extras["docs_specific"]
148
149 extras["dev"] = (
150 extras["all"]
151 + extras["testing"]
152 + extras["quality"]
153 + extras["docs_specific"]
154 )
155
156 setup(
157 # Metadata
158 name=package_index,
159 version=version,
160 author='Mindee',
161 author_email='[email protected]',
162 maintainer='François-Guillaume Fernandez, Charles Gaillard',
163 description='Document Text Recognition (docTR): deep Learning for high-performance OCR on documents.',
164 long_description=readme,
165 long_description_content_type="text/markdown",
166 url='https://github.com/mindee/doctr',
167 download_url='https://github.com/mindee/doctr/tags',
168 license='Apache',
169 classifiers=[
170 'Development Status :: 4 - Beta',
171 'Intended Audience :: Developers',
172 "Intended Audience :: Education",
173 'Intended Audience :: Science/Research',
174 'License :: OSI Approved :: Apache Software License',
175 'Natural Language :: English',
176 'Operating System :: OS Independent',
177 'Programming Language :: Python :: 3',
178 'Programming Language :: Python :: 3.6',
179 'Programming Language :: Python :: 3.7',
180 'Topic :: Scientific/Engineering :: Artificial Intelligence',
181 ],
182 keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],
183
184 # Package info
185 packages=find_packages(exclude=('tests',)),
186 zip_safe=True,
187 python_requires='>=3.6.0',
188 include_package_data=True,
189 install_requires=install_requires,
190 extras_require=extras,
191 package_data={'': ['LICENSE']}
192 )
193
```
Path: `docs/source/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import sys
15 from datetime import datetime
16
17 import sphinx_rtd_theme
18
19 sys.path.insert(0, os.path.abspath('../..'))
20 import doctr
21
22 # -- Project information -----------------------------------------------------
23
24 master_doc = 'index'
25 project = 'docTR'
26 _copyright_str = f"-{datetime.now().year}" if datetime.now().year > 2021 else ""
27 copyright = f"2021{_copyright_str}, Mindee"
28 author = 'François-Guillaume Fernandez, Charles Gaillard'
29
30 # The full version, including alpha/beta/rc tags
31 version = doctr.__version__
32 release = doctr.__version__ + '-git'
33
34 # -- General configuration ---------------------------------------------------
35
36 # Add any Sphinx extension module names here, as strings. They can be
37 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
38 # ones.
39 extensions = [
40 'sphinx.ext.autodoc',
41 'sphinx.ext.napoleon',
42 'sphinx.ext.viewcode',
43 'sphinx.ext.coverage',
44 'sphinx.ext.mathjax',
45 'sphinx.ext.autosectionlabel',
46 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/
47 'sphinx_copybutton',
48 ]
49
50 napoleon_use_ivar = True
51
52 # Add any paths that contain templates here, relative to this directory.
53 templates_path = ['_templates']
54
55 # List of patterns, relative to source directory, that match files and
56 # directories to ignore when looking for source files.
57 # This pattern also affects html_static_path and html_extra_path.
58 exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
59
60
61 # The name of the Pygments (syntax highlighting) style to use.
62 pygments_style = 'sphinx'
63 highlight_language = 'python3'
64
65 # -- Options for HTML output -------------------------------------------------
66
67 # The theme to use for HTML and HTML Help pages. See the documentation for
68 # a list of builtin themes.
69 #
70 html_theme = 'sphinx_rtd_theme'
71 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
72
73 # Theme options are theme-specific and customize the look and feel of a theme
74 # further. For a list of options available for each theme, see the
75 # documentation.
76 #
77 html_theme_options = {
78 'collapse_navigation': False,
79 'display_version': False,
80 'logo_only': False,
81 'analytics_id': 'G-40DVRMX8T4',
82 }
83
84 html_logo = '_static/images/Logo-docTR-white.png'
85 html_favicon = '_static/images/favicon.ico'
86
87
88
89 # Add any paths that contain custom static files (such as style sheets) here,
90 # relative to this directory. They are copied after the builtin static files,
91 # so a file named "default.css" will overwrite the builtin "default.css".
92 html_static_path = ['_static']
93
94 # A list of files that should not be packed into the epub file.
95 epub_exclude_files = ['search.html']
96
97 def setup(app):
98 app.add_css_file('css/mindee.css')
99 app.add_js_file('js/custom.js')
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -45,6 +45,8 @@
'sphinx.ext.autosectionlabel',
'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/
'sphinx_copybutton',
+ 'recommonmark',
+ 'sphinx_markdown_tables',
]
napoleon_use_ivar = True
@@ -55,7 +57,7 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
+exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store', 'notebooks/*.rst']
# The name of the Pygments (syntax highlighting) style to use.
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -74,6 +74,8 @@
"sphinxemoji>=0.1.8",
"sphinx-copybutton>=0.3.1",
"docutils<0.18",
+ "recommonmark>=0.7.1",
+ "sphinx-markdown-tables>=0.0.15",
]
deps = {b: a for a, b in (re.findall(r"^(([^!=<>]+)(?:[!=<>].*)?$)", x)[0] for x in _deps)}
@@ -142,6 +144,8 @@
"sphinxemoji",
"sphinx-copybutton",
"docutils",
+ "recommonmark",
+ "sphinx-markdown-tables",
)
extras["docs"] = extras["all"] + extras["docs_specific"]
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -45,6 +45,8 @@\n 'sphinx.ext.autosectionlabel',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n+ 'recommonmark',\n+ 'sphinx_markdown_tables',\n ]\n \n napoleon_use_ivar = True\n@@ -55,7 +57,7 @@\n # List of patterns, relative to source directory, that match files and\n # directories to ignore when looking for source files.\n # This pattern also affects html_static_path and html_extra_path.\n-exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n+exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store', 'notebooks/*.rst']\n \n \n # The name of the Pygments (syntax highlighting) style to use.\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -74,6 +74,8 @@\n \"sphinxemoji>=0.1.8\",\n \"sphinx-copybutton>=0.3.1\",\n \"docutils<0.18\",\n+ \"recommonmark>=0.7.1\",\n+ \"sphinx-markdown-tables>=0.0.15\",\n ]\n \n deps = {b: a for a, b in (re.findall(r\"^(([^!=<>]+)(?:[!=<>].*)?$)\", x)[0] for x in _deps)}\n@@ -142,6 +144,8 @@\n \"sphinxemoji\",\n \"sphinx-copybutton\",\n \"docutils\",\n+ \"recommonmark\",\n+ \"sphinx-markdown-tables\",\n )\n \n extras[\"docs\"] = extras[\"all\"] + extras[\"docs_specific\"]\n", "issue": "How to run this github repo on google colab\nI want to run this repo on google colab\r\nHow can I run this code?\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\"\"\"\nPackage installation setup\n\"\"\"\n\nimport os\nimport re\nimport subprocess\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nversion = \"0.4.1a0\"\nsha = 'Unknown'\nsrc_folder = 'doctr'\npackage_index = 'python-doctr'\n\ncwd = Path(__file__).parent.absolute()\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n try:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\n except Exception:\n pass\n version += '+' + sha[:7]\nprint(f\"Building wheel {package_index}-{version}\")\n\nwith open(cwd.joinpath(src_folder, 'version.py'), 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\nwith open('README.md', 'r') as f:\n readme = f.read()\n\n# Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py\n_deps = [\n \"importlib_metadata\",\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=3.4.5.20\",\n \"tensorflow>=2.4.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0,<3.4.3\",\n \"mplcursors>=0.3\",\n \"weasyprint>=52.2,<53.0\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n \"torchvision>=0.9.0\",\n \"Pillow>=8.3.2\", # cf. 
https://github.com/advisories/GHSA-98vv-pw6r-q6q4\n \"tqdm>=4.30.0\",\n \"tensorflow-addons>=0.13.0\",\n \"rapidfuzz>=1.6.0\",\n \"keras<2.7.0\",\n # Testing\n \"pytest>=5.3.2\",\n \"coverage>=4.5.4\",\n \"requests>=2.20.0\",\n \"requirements-parser==0.2.0\",\n # Quality\n \"flake8>=3.9.0\",\n \"isort>=5.7.0\",\n \"mypy>=0.812\",\n # Docs\n \"sphinx<3.5.0\",\n \"sphinx-rtd-theme==0.4.3\",\n \"sphinxemoji>=0.1.8\",\n \"sphinx-copybutton>=0.3.1\",\n \"docutils<0.18\",\n]\n\ndeps = {b: a for a, b in (re.findall(r\"^(([^!=<>]+)(?:[!=<>].*)?$)\", x)[0] for x in _deps)}\n\n\ndef deps_list(*pkgs):\n return [deps[pkg] for pkg in pkgs]\n\n\ninstall_requires = [\n deps[\"importlib_metadata\"] + \";python_version<'3.8'\", # importlib_metadata for Python versions that don't have it\n deps[\"numpy\"],\n deps[\"scipy\"],\n deps[\"opencv-python\"],\n deps[\"PyMuPDF\"],\n deps[\"pyclipper\"],\n deps[\"shapely\"],\n deps[\"matplotlib\"],\n deps[\"mplcursors\"],\n deps[\"weasyprint\"],\n deps[\"unidecode\"],\n deps[\"Pillow\"],\n deps[\"tqdm\"],\n deps[\"rapidfuzz\"],\n]\n\nextras = {}\nextras[\"tf\"] = deps_list(\n \"tensorflow\",\n \"tensorflow-addons\",\n \"keras\",\n)\n\nextras[\"tf-cpu\"] = deps_list(\n \"tensorflow-cpu\",\n \"tensorflow-addons\",\n \"keras\",\n)\n\nextras[\"torch\"] = deps_list(\n \"torch\",\n \"torchvision\",\n)\n\nextras[\"all\"] = (\n extras[\"tf\"]\n + extras[\"torch\"]\n)\n\nextras[\"testing\"] = deps_list(\n \"pytest\",\n \"coverage\",\n \"requests\",\n \"requirements-parser\",\n)\n\nextras[\"quality\"] = deps_list(\n \"flake8\",\n \"isort\",\n \"mypy\"\n)\n\nextras[\"docs_specific\"] = deps_list(\n \"sphinx\",\n \"sphinx-rtd-theme\",\n \"sphinxemoji\",\n \"sphinx-copybutton\",\n \"docutils\",\n)\n\nextras[\"docs\"] = extras[\"all\"] + extras[\"docs_specific\"]\n\nextras[\"dev\"] = (\n extras[\"all\"]\n + extras[\"testing\"]\n + extras[\"quality\"]\n + extras[\"docs_specific\"]\n)\n\nsetup(\n # Metadata\n name=package_index,\n version=version,\n author='Mindee',\n author_email='[email protected]',\n maintainer='Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard',\n description='Document Text Recognition (docTR): deep Learning for high-performance OCR on documents.',\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url='https://github.com/mindee/doctr',\n download_url='https://github.com/mindee/doctr/tags',\n license='Apache',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n \"Intended Audience :: Education\",\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],\n\n # Package info\n packages=find_packages(exclude=('tests',)),\n zip_safe=True,\n python_requires='>=3.6.0',\n include_package_data=True,\n install_requires=install_requires,\n extras_require=extras,\n package_data={'': ['LICENSE']}\n)\n", "path": "setup.py"}, {"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. 
For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nfrom datetime import datetime\n\nimport sphinx_rtd_theme\n\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'docTR'\n_copyright_str = f\"-{datetime.now().year}\" if datetime.now().year > 2021 else \"\"\ncopyright = f\"2021{_copyright_str}, Mindee\"\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.autosectionlabel',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': False,\n 'logo_only': False,\n 'analytics_id': 'G-40DVRMX8T4',\n}\n\nhtml_logo = '_static/images/Logo-docTR-white.png'\nhtml_favicon = '_static/images/favicon.ico'\n\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\"\"\"\nPackage installation setup\n\"\"\"\n\nimport os\nimport re\nimport subprocess\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nversion = \"0.4.1a0\"\nsha = 'Unknown'\nsrc_folder = 'doctr'\npackage_index = 'python-doctr'\n\ncwd = Path(__file__).parent.absolute()\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n try:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\n except Exception:\n pass\n version += '+' + sha[:7]\nprint(f\"Building wheel {package_index}-{version}\")\n\nwith open(cwd.joinpath(src_folder, 'version.py'), 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\nwith open('README.md', 'r') as f:\n readme = f.read()\n\n# Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py\n_deps = [\n \"importlib_metadata\",\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=3.4.5.20\",\n \"tensorflow>=2.4.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0,<3.4.3\",\n \"mplcursors>=0.3\",\n \"weasyprint>=52.2,<53.0\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n \"torchvision>=0.9.0\",\n \"Pillow>=8.3.2\", # cf. 
https://github.com/advisories/GHSA-98vv-pw6r-q6q4\n \"tqdm>=4.30.0\",\n \"tensorflow-addons>=0.13.0\",\n \"rapidfuzz>=1.6.0\",\n \"keras<2.7.0\",\n # Testing\n \"pytest>=5.3.2\",\n \"coverage>=4.5.4\",\n \"requests>=2.20.0\",\n \"requirements-parser==0.2.0\",\n # Quality\n \"flake8>=3.9.0\",\n \"isort>=5.7.0\",\n \"mypy>=0.812\",\n # Docs\n \"sphinx<3.5.0\",\n \"sphinx-rtd-theme==0.4.3\",\n \"sphinxemoji>=0.1.8\",\n \"sphinx-copybutton>=0.3.1\",\n \"docutils<0.18\",\n \"recommonmark>=0.7.1\",\n \"sphinx-markdown-tables>=0.0.15\",\n]\n\ndeps = {b: a for a, b in (re.findall(r\"^(([^!=<>]+)(?:[!=<>].*)?$)\", x)[0] for x in _deps)}\n\n\ndef deps_list(*pkgs):\n return [deps[pkg] for pkg in pkgs]\n\n\ninstall_requires = [\n deps[\"importlib_metadata\"] + \";python_version<'3.8'\", # importlib_metadata for Python versions that don't have it\n deps[\"numpy\"],\n deps[\"scipy\"],\n deps[\"opencv-python\"],\n deps[\"PyMuPDF\"],\n deps[\"pyclipper\"],\n deps[\"shapely\"],\n deps[\"matplotlib\"],\n deps[\"mplcursors\"],\n deps[\"weasyprint\"],\n deps[\"unidecode\"],\n deps[\"Pillow\"],\n deps[\"tqdm\"],\n deps[\"rapidfuzz\"],\n]\n\nextras = {}\nextras[\"tf\"] = deps_list(\n \"tensorflow\",\n \"tensorflow-addons\",\n \"keras\",\n)\n\nextras[\"tf-cpu\"] = deps_list(\n \"tensorflow-cpu\",\n \"tensorflow-addons\",\n \"keras\",\n)\n\nextras[\"torch\"] = deps_list(\n \"torch\",\n \"torchvision\",\n)\n\nextras[\"all\"] = (\n extras[\"tf\"]\n + extras[\"torch\"]\n)\n\nextras[\"testing\"] = deps_list(\n \"pytest\",\n \"coverage\",\n \"requests\",\n \"requirements-parser\",\n)\n\nextras[\"quality\"] = deps_list(\n \"flake8\",\n \"isort\",\n \"mypy\"\n)\n\nextras[\"docs_specific\"] = deps_list(\n \"sphinx\",\n \"sphinx-rtd-theme\",\n \"sphinxemoji\",\n \"sphinx-copybutton\",\n \"docutils\",\n \"recommonmark\",\n \"sphinx-markdown-tables\",\n)\n\nextras[\"docs\"] = extras[\"all\"] + extras[\"docs_specific\"]\n\nextras[\"dev\"] = (\n extras[\"all\"]\n + extras[\"testing\"]\n + extras[\"quality\"]\n + extras[\"docs_specific\"]\n)\n\nsetup(\n # Metadata\n name=package_index,\n version=version,\n author='Mindee',\n author_email='[email protected]',\n maintainer='Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard',\n description='Document Text Recognition (docTR): deep Learning for high-performance OCR on documents.',\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url='https://github.com/mindee/doctr',\n download_url='https://github.com/mindee/doctr/tags',\n license='Apache',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n \"Intended Audience :: Education\",\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],\n\n # Package info\n packages=find_packages(exclude=('tests',)),\n zip_safe=True,\n python_requires='>=3.6.0',\n include_package_data=True,\n install_requires=install_requires,\n extras_require=extras,\n package_data={'': ['LICENSE']}\n)\n", "path": "setup.py"}, {"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. 
For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nfrom datetime import datetime\n\nimport sphinx_rtd_theme\n\nsys.path.insert(0, os.path.abspath('../..'))\nimport doctr\n\n# -- Project information -----------------------------------------------------\n\nmaster_doc = 'index'\nproject = 'docTR'\n_copyright_str = f\"-{datetime.now().year}\" if datetime.now().year > 2021 else \"\"\ncopyright = f\"2021{_copyright_str}, Mindee\"\nauthor = 'Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard'\n\n# The full version, including alpha/beta/rc tags\nversion = doctr.__version__\nrelease = doctr.__version__ + '-git'\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n\t'sphinx.ext.napoleon',\n\t'sphinx.ext.viewcode',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.autosectionlabel',\n 'sphinxemoji.sphinxemoji', # cf. https://sphinxemojicodes.readthedocs.io/en/stable/\n 'sphinx_copybutton',\n 'recommonmark',\n 'sphinx_markdown_tables',\n]\n\nnapoleon_use_ivar = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store', 'notebooks/*.rst']\n\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\nhighlight_language = 'python3'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': False,\n 'logo_only': False,\n 'analytics_id': 'G-40DVRMX8T4',\n}\n\nhtml_logo = '_static/images/Logo-docTR-white.png'\nhtml_favicon = '_static/images/favicon.ico'\n\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\ndef setup(app):\n app.add_css_file('css/mindee.css')\n app.add_js_file('js/custom.js')\n", "path": "docs/source/conf.py"}]} | 3,175 | 425 |
gh_patches_debug_18952 | rasdani/github-patches | git_diff | bids-standard__pybids-614 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scale transformation shouldn't return NAs in event of constant input
If constant input is passed to the `Scale` transformation, a column of N/A values is returned. This should probably just fail with an exception (or at minimum, return a column of `0`).
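For reference, a minimal sketch of where the NAs come from, using plain pandas rather than the pybids API (the numbers are made up): rescaling divides by the standard deviation, which is zero for a constant column, so every value becomes 0/0 = NaN.

```python
import pandas as pd

# A constant amplitude column: mean equals every value, std is 0.
data = pd.Series([3.0, 3.0, 3.0, 3.0])

# This mirrors what Scale(demean=True, rescale=True) does internally.
scaled = (data - data.mean()) / data.std()
print(scaled)  # 0/0 at every position -> a column full of NaN
```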
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bids/analysis/transformations/compute.py`
Content:
```
1 """
2 Transformations that primarily involve numerical computation on variables.
3 """
4 import math
5 import numpy as np
6 import pandas as pd
7 from bids.utils import listify
8 from .base import Transformation
9 from bids.analysis import hrf
10 from bids.variables import SparseRunVariable, DenseRunVariable
11
12
13 class Convolve(Transformation):
14 """Convolve the input variable with an HRF.
15
16 Parameters
17 ----------
18 var : Variable
19 The variable to convolve.
20 model : str
21 The name of the HRF model to apply. Must be one of 'spm',
22 'glover', or 'fir'.
23 derivative : bool
24 Whether or not to include the temporal derivative.
25 dispersion : bool
26 Whether or not to include the dispersion derivative.
27 fir_delays : iterable
28 A list or iterable of delays to use if model is
29 'fir' (ignored otherwise). Spacing between delays must be fixed.
30
31 Notes
32 -----
33 Uses the HRF convolution functions implemented in nistats.
34 """
35
36 _input_type = 'variable'
37 _return_type = 'variable'
38
39 def _transform(self, var, model='spm', derivative=False, dispersion=False,
40 fir_delays=None):
41
42 model = model.lower()
43
44 df = var.to_df(entities=False)
45
46 if isinstance(var, SparseRunVariable):
47 sampling_rate = self.collection.sampling_rate
48 dur = var.get_duration()
49 resample_frames = np.linspace(
50 0, dur, int(math.ceil(dur * sampling_rate)), endpoint=False)
51
52 else:
53 resample_frames = df['onset'].values
54 sampling_rate = var.sampling_rate
55
56 vals = df[['onset', 'duration', 'amplitude']].values.T
57
58 if model in ['spm', 'glover']:
59 if derivative:
60 model += ' + derivative'
61 if dispersion:
62 model += ' + dispersion'
63 elif model != 'fir':
64 raise ValueError("Model must be one of 'spm', 'glover', or 'fir'.")
65
66 # Minimum interval between event onsets/duration
67 # Used to compute oversampling factor to prevent information loss
68 unique_onsets = np.unique(np.sort(df.onset))
69 if len(unique_onsets) > 1:
70 min_interval = min(np.ediff1d(unique_onsets).min(),
71 df.duration.min())
72 oversampling = np.ceil(2*(1 / (min_interval * sampling_rate)))
73 else:
74 oversampling = 2
75 convolved = hrf.compute_regressor(
76 vals, model, resample_frames, fir_delays=fir_delays, min_onset=0,
77 oversampling=oversampling
78 )
79
80 return DenseRunVariable(
81 name=var.name, values=convolved[0], run_info=var.run_info,
82 source=var.source, sampling_rate=sampling_rate)
83
84
85 class Demean(Transformation):
86
87 def _transform(self, data):
88 return data - data.mean()
89
90
91 class Orthogonalize(Transformation):
92
93 _variables_used = ('variables', 'other')
94 _densify = ('variables', 'other')
95 _align = ('other')
96
97 def _transform(self, var, other):
98
99 other = listify(other)
100
101 # Set up X matrix and slice into it based on target variable indices
102 X = np.array([self._variables[c].values.values.squeeze()
103 for c in other]).T
104 X = X[var.index, :]
105 assert len(X) == len(var)
106 y = var.values
107 _aX = np.c_[np.ones(len(y)), X]
108 coefs, resids, rank, s = np.linalg.lstsq(_aX, y, rcond=None)
109 result = pd.DataFrame(y - X.dot(coefs[1:]), index=var.index)
110 return result
111
112
113 class Product(Transformation):
114
115 _loopable = False
116 _groupable = False
117 _align = True
118 _output_required = True
119
120 def _transform(self, data):
121 data = pd.concat(data, axis=1, sort=True)
122 return data.product(1)
123
124
125 class Scale(Transformation):
126 """Scale a variable.
127
128 Parameters
129 ----------
130 data : :obj:`pandas.Series` or :obj:`pandas.DataFrame`
131 The variables to scale.
132 demean : bool
133 If True, demean each column.
134 rescale : bool
135 If True, divide variables by their standard deviation.
136 replace_na : str
137 Whether/when to replace missing values with 0. If
138 None, no replacement is performed. If 'before', missing values are
139 replaced with 0's before scaling. If 'after', missing values are
140 replaced with 0 after scaling.
141 """
142
143 def _transform(self, data, demean=True, rescale=True, replace_na=None):
144 if replace_na == 'before':
145 data = data.fillna(0.)
146 if demean:
147 data -= data.mean()
148 if rescale:
149 data /= data.std()
150 if replace_na == 'after':
151 data = data.fillna(0.)
152 return data
153
154
155 class Sum(Transformation):
156
157 _loopable = False
158 _groupable = False
159 _align = True
160 _output_required = True
161
162 def _transform(self, data, weights=None):
163 data = pd.concat(data, axis=1, sort=True)
164 if weights is None:
165 weights = np.ones(data.shape[1])
166 else:
167 weights = np.array(weights)
168 if len(weights.ravel()) != data.shape[1]:
169 raise ValueError("If weights are passed to sum(), the number "
170 "of elements must equal number of variables"
171 " being summed.")
172 return (data * weights).sum(axis=1)
173
174
175
176 class Threshold(Transformation):
177 """Threshold and/or binarize a variable.
178
179 Parameters
180 ----------
181 data :obj:`pandas.Series` or :obj:`pandas.DataFrame`
182 The pandas structure to threshold.
183 threshold : float
184 The value to binarize around (values above will
185 be assigned 1, values below will be assigned 0).
186 binarize : bool
187 If True, binarizes all non-zero values (i.e., every
188 non-zero value will be set to 1).
189 above : bool
190 Specifies which values to retain with respect to the
191 cut-off. If True, all value above the threshold will be kept; if
192 False, all values below the threshold will be kept. Defaults to
193 True.
194 signed : bool
195 Specifies whether to treat the threshold as signed
196 (default) or unsigned. For example, when passing above=True and
197 threshold=3, if signed=True, all and only values above +3 would be
198 retained. If signed=False, all absolute values > 3 would be retained
199 (i.e.,values in the range -3 < X < 3 would be set to 0).
200 """
201
202 _groupable = False
203
204 def _transform(self, data, threshold=0., binarize=False, above=True,
205 signed=True):
206 if not signed:
207 threshold = np.abs(threshold)
208 data = data.abs()
209 keep = data >= threshold if above else data <= threshold
210 data[~keep] = 0
211 if binarize:
212 data[keep] = 1
213 return data
214
215
216 class And(Transformation):
217 """Logical AND on two or more variables.
218
219 Parameters
220 ----------
221 dfs : list of :obj:`pandas.DataFrame`
222 variables to enter into the conjunction.
223 """
224
225 _loopable = False
226 _groupable = False
227 _output_required = True
228
229 def _transform(self, dfs):
230 df = pd.concat(dfs, axis=1, sort=True)
231 return df.all(axis=1).astype(int)
232
233
234 class Not(Transformation):
235 """Logical negation of a variable.
236
237 Parameters
238 ----------
239 var : :obj:`pandas.Series`
240 Variable to negate. Must be convertible to bool.
241 """
242
243 _loopable = True
244 _groupable = False
245
246 def _transform(self, var):
247 return ~var.astype(bool)
248
249
250 class Or(Transformation):
251 """Logical OR (inclusive) on two or more variables.
252
253 Parameters
254 ----------
255 dfs : list of :obj:`pandas.DataFrame`
256 variables to enter into the disjunction.
257 """
258
259 _loopable = False
260 _groupable = False
261 _output_required = True
262
263 def _transform(self, dfs):
264 df = pd.concat(dfs, axis=1, sort=True)
265 return df.any(axis=1).astype(int)
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bids/analysis/transformations/compute.py b/bids/analysis/transformations/compute.py
--- a/bids/analysis/transformations/compute.py
+++ b/bids/analysis/transformations/compute.py
@@ -138,9 +138,19 @@
None, no replacement is performed. If 'before', missing values are
replaced with 0's before scaling. If 'after', missing values are
replaced with 0 after scaling.
+
+ Notes
+ -----
+ If a constant column is passed in, and replace_na is None or 'before', an
+ exception will be raised.
"""
def _transform(self, data, demean=True, rescale=True, replace_na=None):
+ if data.nunique() == 1 and replace_na in {None, 'before'}:
+ val = data.unique()[0]
+ raise ValueError("Cannot scale a column with constant value ({})! "
+ "If you want a constant column of 0's returned, "
+ "set replace_na to 'after'.".format(val))
if replace_na == 'before':
data = data.fillna(0.)
if demean:
| {"golden_diff": "diff --git a/bids/analysis/transformations/compute.py b/bids/analysis/transformations/compute.py\n--- a/bids/analysis/transformations/compute.py\n+++ b/bids/analysis/transformations/compute.py\n@@ -138,9 +138,19 @@\n None, no replacement is performed. If 'before', missing values are\n replaced with 0's before scaling. If 'after', missing values are\n replaced with 0 after scaling.\n+\n+ Notes\n+ -----\n+ If a constant column is passed in, and replace_na is None or 'before', an\n+ exception will be raised.\n \"\"\"\n \n def _transform(self, data, demean=True, rescale=True, replace_na=None):\n+ if data.nunique() == 1 and replace_na in {None, 'before'}:\n+ val = data.unique()[0]\n+ raise ValueError(\"Cannot scale a column with constant value ({})! \"\n+ \"If you want a constant column of 0's returned, \"\n+ \"set replace_na to 'after'.\".format(val))\n if replace_na == 'before':\n data = data.fillna(0.)\n if demean:\n", "issue": "Scale transformation shouldn't return NAs in event of constant input\nIf constant input is passed to the `Scale` transformation, a column of N/A values is returned. This should probably just fail with an exception (or at minimum, return a column of `0`).\nScale transformation shouldn't return NAs in event of constant input\nIf constant input is passed to the `Scale` transformation, a column of N/A values is returned. This should probably just fail with an exception (or at minimum, return a column of `0`).\n", "before_files": [{"content": "\"\"\"\nTransformations that primarily involve numerical computation on variables.\n\"\"\"\nimport math\nimport numpy as np\nimport pandas as pd\nfrom bids.utils import listify\nfrom .base import Transformation\nfrom bids.analysis import hrf\nfrom bids.variables import SparseRunVariable, DenseRunVariable\n\n\nclass Convolve(Transformation):\n \"\"\"Convolve the input variable with an HRF.\n\n Parameters\n ----------\n var : Variable\n The variable to convolve.\n model : str\n The name of the HRF model to apply. Must be one of 'spm',\n 'glover', or 'fir'.\n derivative : bool\n Whether or not to include the temporal derivative.\n dispersion : bool\n Whether or not to include the dispersion derivative.\n fir_delays : iterable\n A list or iterable of delays to use if model is\n 'fir' (ignored otherwise). 
Spacing between delays must be fixed.\n\n Notes\n -----\n Uses the HRF convolution functions implemented in nistats.\n \"\"\"\n\n _input_type = 'variable'\n _return_type = 'variable'\n\n def _transform(self, var, model='spm', derivative=False, dispersion=False,\n fir_delays=None):\n\n model = model.lower()\n\n df = var.to_df(entities=False)\n\n if isinstance(var, SparseRunVariable):\n sampling_rate = self.collection.sampling_rate\n dur = var.get_duration()\n resample_frames = np.linspace(\n 0, dur, int(math.ceil(dur * sampling_rate)), endpoint=False)\n\n else:\n resample_frames = df['onset'].values\n sampling_rate = var.sampling_rate\n\n vals = df[['onset', 'duration', 'amplitude']].values.T\n\n if model in ['spm', 'glover']:\n if derivative:\n model += ' + derivative'\n if dispersion:\n model += ' + dispersion'\n elif model != 'fir':\n raise ValueError(\"Model must be one of 'spm', 'glover', or 'fir'.\")\n\n # Minimum interval between event onsets/duration\n # Used to compute oversampling factor to prevent information loss\n unique_onsets = np.unique(np.sort(df.onset))\n if len(unique_onsets) > 1:\n min_interval = min(np.ediff1d(unique_onsets).min(),\n df.duration.min())\n oversampling = np.ceil(2*(1 / (min_interval * sampling_rate)))\n else:\n oversampling = 2\n convolved = hrf.compute_regressor(\n vals, model, resample_frames, fir_delays=fir_delays, min_onset=0,\n oversampling=oversampling\n )\n\n return DenseRunVariable(\n name=var.name, values=convolved[0], run_info=var.run_info,\n source=var.source, sampling_rate=sampling_rate)\n\n\nclass Demean(Transformation):\n\n def _transform(self, data):\n return data - data.mean()\n\n\nclass Orthogonalize(Transformation):\n\n _variables_used = ('variables', 'other')\n _densify = ('variables', 'other')\n _align = ('other')\n\n def _transform(self, var, other):\n\n other = listify(other)\n\n # Set up X matrix and slice into it based on target variable indices\n X = np.array([self._variables[c].values.values.squeeze()\n for c in other]).T\n X = X[var.index, :]\n assert len(X) == len(var)\n y = var.values\n _aX = np.c_[np.ones(len(y)), X]\n coefs, resids, rank, s = np.linalg.lstsq(_aX, y, rcond=None)\n result = pd.DataFrame(y - X.dot(coefs[1:]), index=var.index)\n return result\n\n\nclass Product(Transformation):\n\n _loopable = False\n _groupable = False\n _align = True\n _output_required = True\n\n def _transform(self, data):\n data = pd.concat(data, axis=1, sort=True)\n return data.product(1)\n\n\nclass Scale(Transformation):\n \"\"\"Scale a variable.\n\n Parameters\n ----------\n data : :obj:`pandas.Series` or :obj:`pandas.DataFrame`\n The variables to scale.\n demean : bool\n If True, demean each column.\n rescale : bool\n If True, divide variables by their standard deviation.\n replace_na : str\n Whether/when to replace missing values with 0. If\n None, no replacement is performed. If 'before', missing values are\n replaced with 0's before scaling. 
If 'after', missing values are\n replaced with 0 after scaling.\n \"\"\"\n\n def _transform(self, data, demean=True, rescale=True, replace_na=None):\n if replace_na == 'before':\n data = data.fillna(0.)\n if demean:\n data -= data.mean()\n if rescale:\n data /= data.std()\n if replace_na == 'after':\n data = data.fillna(0.)\n return data\n\n\nclass Sum(Transformation):\n\n _loopable = False\n _groupable = False\n _align = True\n _output_required = True\n\n def _transform(self, data, weights=None):\n data = pd.concat(data, axis=1, sort=True)\n if weights is None:\n weights = np.ones(data.shape[1])\n else:\n weights = np.array(weights)\n if len(weights.ravel()) != data.shape[1]:\n raise ValueError(\"If weights are passed to sum(), the number \"\n \"of elements must equal number of variables\"\n \" being summed.\")\n return (data * weights).sum(axis=1)\n\n\n\nclass Threshold(Transformation):\n \"\"\"Threshold and/or binarize a variable.\n\n Parameters\n ----------\n data :obj:`pandas.Series` or :obj:`pandas.DataFrame`\n The pandas structure to threshold.\n threshold : float\n The value to binarize around (values above will\n be assigned 1, values below will be assigned 0).\n binarize : bool\n If True, binarizes all non-zero values (i.e., every\n non-zero value will be set to 1).\n above : bool\n Specifies which values to retain with respect to the\n cut-off. If True, all value above the threshold will be kept; if\n False, all values below the threshold will be kept. Defaults to\n True.\n signed : bool\n Specifies whether to treat the threshold as signed\n (default) or unsigned. For example, when passing above=True and\n threshold=3, if signed=True, all and only values above +3 would be\n retained. If signed=False, all absolute values > 3 would be retained\n (i.e.,values in the range -3 < X < 3 would be set to 0).\n \"\"\"\n\n _groupable = False\n\n def _transform(self, data, threshold=0., binarize=False, above=True,\n signed=True):\n if not signed:\n threshold = np.abs(threshold)\n data = data.abs()\n keep = data >= threshold if above else data <= threshold\n data[~keep] = 0\n if binarize:\n data[keep] = 1\n return data\n\n\nclass And(Transformation):\n \"\"\"Logical AND on two or more variables.\n\n Parameters\n ----------\n dfs : list of :obj:`pandas.DataFrame`\n variables to enter into the conjunction.\n \"\"\"\n\n _loopable = False\n _groupable = False\n _output_required = True\n\n def _transform(self, dfs):\n df = pd.concat(dfs, axis=1, sort=True)\n return df.all(axis=1).astype(int)\n\n\nclass Not(Transformation):\n \"\"\"Logical negation of a variable.\n\n Parameters\n ----------\n var : :obj:`pandas.Series`\n Variable to negate. 
Must be convertible to bool.\n \"\"\"\n\n _loopable = True\n _groupable = False\n\n def _transform(self, var):\n return ~var.astype(bool)\n\n\nclass Or(Transformation):\n \"\"\"Logical OR (inclusive) on two or more variables.\n\n Parameters\n ----------\n dfs : list of :obj:`pandas.DataFrame`\n variables to enter into the disjunction.\n \"\"\"\n\n _loopable = False\n _groupable = False\n _output_required = True\n\n def _transform(self, dfs):\n df = pd.concat(dfs, axis=1, sort=True)\n return df.any(axis=1).astype(int)\n", "path": "bids/analysis/transformations/compute.py"}], "after_files": [{"content": "\"\"\"\nTransformations that primarily involve numerical computation on variables.\n\"\"\"\nimport math\nimport numpy as np\nimport pandas as pd\nfrom bids.utils import listify\nfrom .base import Transformation\nfrom bids.analysis import hrf\nfrom bids.variables import SparseRunVariable, DenseRunVariable\n\n\nclass Convolve(Transformation):\n \"\"\"Convolve the input variable with an HRF.\n\n Parameters\n ----------\n var : Variable\n The variable to convolve.\n model : str\n The name of the HRF model to apply. Must be one of 'spm',\n 'glover', or 'fir'.\n derivative : bool\n Whether or not to include the temporal derivative.\n dispersion : bool\n Whether or not to include the dispersion derivative.\n fir_delays : iterable\n A list or iterable of delays to use if model is\n 'fir' (ignored otherwise). Spacing between delays must be fixed.\n\n Notes\n -----\n Uses the HRF convolution functions implemented in nistats.\n \"\"\"\n\n _input_type = 'variable'\n _return_type = 'variable'\n\n def _transform(self, var, model='spm', derivative=False, dispersion=False,\n fir_delays=None):\n\n model = model.lower()\n\n df = var.to_df(entities=False)\n\n if isinstance(var, SparseRunVariable):\n sampling_rate = self.collection.sampling_rate\n dur = var.get_duration()\n resample_frames = np.linspace(\n 0, dur, int(math.ceil(dur * sampling_rate)), endpoint=False)\n\n else:\n resample_frames = df['onset'].values\n sampling_rate = var.sampling_rate\n\n vals = df[['onset', 'duration', 'amplitude']].values.T\n\n if model in ['spm', 'glover']:\n if derivative:\n model += ' + derivative'\n if dispersion:\n model += ' + dispersion'\n elif model != 'fir':\n raise ValueError(\"Model must be one of 'spm', 'glover', or 'fir'.\")\n\n # Minimum interval between event onsets/duration\n # Used to compute oversampling factor to prevent information loss\n unique_onsets = np.unique(np.sort(df.onset))\n if len(unique_onsets) > 1:\n min_interval = min(np.ediff1d(unique_onsets).min(),\n df.duration.min())\n oversampling = np.ceil(2*(1 / (min_interval * sampling_rate)))\n else:\n oversampling = 2\n convolved = hrf.compute_regressor(\n vals, model, resample_frames, fir_delays=fir_delays, min_onset=0,\n oversampling=oversampling\n )\n\n return DenseRunVariable(\n name=var.name, values=convolved[0], run_info=var.run_info,\n source=var.source, sampling_rate=sampling_rate)\n\n\nclass Demean(Transformation):\n\n def _transform(self, data):\n return data - data.mean()\n\n\nclass Orthogonalize(Transformation):\n\n _variables_used = ('variables', 'other')\n _densify = ('variables', 'other')\n _align = ('other')\n\n def _transform(self, var, other):\n\n other = listify(other)\n\n # Set up X matrix and slice into it based on target variable indices\n X = np.array([self._variables[c].values.values.squeeze()\n for c in other]).T\n X = X[var.index, :]\n assert len(X) == len(var)\n y = var.values\n _aX = np.c_[np.ones(len(y)), X]\n coefs, 
resids, rank, s = np.linalg.lstsq(_aX, y, rcond=None)\n result = pd.DataFrame(y - X.dot(coefs[1:]), index=var.index)\n return result\n\n\nclass Product(Transformation):\n\n _loopable = False\n _groupable = False\n _align = True\n _output_required = True\n\n def _transform(self, data):\n data = pd.concat(data, axis=1, sort=True)\n return data.product(1)\n\n\nclass Scale(Transformation):\n \"\"\"Scale a variable.\n\n Parameters\n ----------\n data : :obj:`pandas.Series` or :obj:`pandas.DataFrame`\n The variables to scale.\n demean : bool\n If True, demean each column.\n rescale : bool\n If True, divide variables by their standard deviation.\n replace_na : str\n Whether/when to replace missing values with 0. If\n None, no replacement is performed. If 'before', missing values are\n replaced with 0's before scaling. If 'after', missing values are\n replaced with 0 after scaling.\n\n Notes\n -----\n If a constant column is passed in, and replace_na is None or 'before', an\n exception will be raised.\n \"\"\"\n\n def _transform(self, data, demean=True, rescale=True, replace_na=None):\n if data.nunique() == 1 and replace_na in {None, 'before'}:\n val = data.unique()[0]\n raise ValueError(\"Cannot scale a column with constant value ({})! \"\n \"If you want a constant column of 0's returned, \"\n \"set replace_na to 'after'.\".format(val))\n if replace_na == 'before':\n data = data.fillna(0.)\n if demean:\n data -= data.mean()\n if rescale:\n data /= data.std()\n if replace_na == 'after':\n data = data.fillna(0.)\n return data\n\n\nclass Sum(Transformation):\n\n _loopable = False\n _groupable = False\n _align = True\n _output_required = True\n\n def _transform(self, data, weights=None):\n data = pd.concat(data, axis=1, sort=True)\n if weights is None:\n weights = np.ones(data.shape[1])\n else:\n weights = np.array(weights)\n if len(weights.ravel()) != data.shape[1]:\n raise ValueError(\"If weights are passed to sum(), the number \"\n \"of elements must equal number of variables\"\n \" being summed.\")\n return (data * weights).sum(axis=1)\n\n\n\nclass Threshold(Transformation):\n \"\"\"Threshold and/or binarize a variable.\n\n Parameters\n ----------\n data :obj:`pandas.Series` or :obj:`pandas.DataFrame`\n The pandas structure to threshold.\n threshold : float\n The value to binarize around (values above will\n be assigned 1, values below will be assigned 0).\n binarize : bool\n If True, binarizes all non-zero values (i.e., every\n non-zero value will be set to 1).\n above : bool\n Specifies which values to retain with respect to the\n cut-off. If True, all value above the threshold will be kept; if\n False, all values below the threshold will be kept. Defaults to\n True.\n signed : bool\n Specifies whether to treat the threshold as signed\n (default) or unsigned. For example, when passing above=True and\n threshold=3, if signed=True, all and only values above +3 would be\n retained. 
If signed=False, all absolute values > 3 would be retained\n (i.e.,values in the range -3 < X < 3 would be set to 0).\n \"\"\"\n\n _groupable = False\n\n def _transform(self, data, threshold=0., binarize=False, above=True,\n signed=True):\n if not signed:\n threshold = np.abs(threshold)\n data = data.abs()\n keep = data >= threshold if above else data <= threshold\n data[~keep] = 0\n if binarize:\n data[keep] = 1\n return data\n\n\nclass And(Transformation):\n \"\"\"Logical AND on two or more variables.\n\n Parameters\n ----------\n dfs : list of :obj:`pandas.DataFrame`\n variables to enter into the conjunction.\n \"\"\"\n\n _loopable = False\n _groupable = False\n _output_required = True\n\n def _transform(self, dfs):\n df = pd.concat(dfs, axis=1, sort=True)\n return df.all(axis=1).astype(int)\n\n\nclass Not(Transformation):\n \"\"\"Logical negation of a variable.\n\n Parameters\n ----------\n var : :obj:`pandas.Series`\n Variable to negate. Must be convertible to bool.\n \"\"\"\n\n _loopable = True\n _groupable = False\n\n def _transform(self, var):\n return ~var.astype(bool)\n\n\nclass Or(Transformation):\n \"\"\"Logical OR (inclusive) on two or more variables.\n\n Parameters\n ----------\n dfs : list of :obj:`pandas.DataFrame`\n variables to enter into the disjunction.\n \"\"\"\n\n _loopable = False\n _groupable = False\n _output_required = True\n\n def _transform(self, dfs):\n df = pd.concat(dfs, axis=1, sort=True)\n return df.any(axis=1).astype(int)\n", "path": "bids/analysis/transformations/compute.py"}]} | 2,944 | 263 |
gh_patches_debug_4765 | rasdani/github-patches | git_diff | Qiskit__qiskit-8638 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug using SNOBFIT in combination with UCCSD (VQE) / misleading function signature of optimizer SNOBFIT
When running a VQE algorithm with ansatz UCCSD the following error message shows up:
```
[...]
File "C:\Users\poc\Anaconda3\envs\qiskit\lib\site-packages\qiskit_nature\algorithms\pes_samplers\bopes_sampler.py", line 175, in sample
self._raw_results = self._run_points(points)
File "C:\Users\poc\Anaconda3\envs\qiskit\lib\site-packages\qiskit_nature\algorithms\pes_samplers\bopes_sampler.py", line 201, in _run_points
raw_result = self._run_single_point(point) # dict of results
File "C:\Users\poc\Anaconda3\envs\qiskit\lib\site-packages\qiskit_nature\algorithms\pes_samplers\bopes_sampler.py", line 259, in _run_single_point
result = self._state_solver.solve(self._problem, aux_ops_current_step)
File "C:\Users\poc\Anaconda3\envs\qiskit\lib\site-packages\qiskit_nature\algorithms\ground_state_solvers\ground_state_eigensolver.py", line 91, in solve
raw_mes_result = self._solver.compute_minimum_eigenvalue(main_operator, aux_ops) # type: ignore
File "C:\Users\poc\Anaconda3\envs\qiskit\lib\site-packages\qiskit\algorithms\minimum_eigen_solvers\vqe.py", line 530, in compute_minimum_eigenvalue
opt_result = self.optimizer.minimize(
File "C:\Users\poc\Anaconda3\envs\qiskit\lib\site-packages\qiskit\algorithms\optimizers\snobfit.py", line 96, in minimize
if abs(theta) > bounds[idx][0]:
TypeError: '>' not supported between instances of 'float' and 'NoneType'
```
In case of ansatz `EfficientSU2` the code runs fine, only with `UCCSD` I get the error.
I tracked down the error to be caused by missing parameter bounds. The function [signature](https://github.com/Qiskit/qiskit-terra/blob/cbb64ed266b55525738de20a4e43419d5fef6e32/qiskit/algorithms/optimizers/snobfit.py#L82) of the optimizer's `minimize` function (``) suggests that `None` is a valid value. But the code never handles None-type input (neither `None` nor the nested list/tuple with `None` values).
So, I propose one of the following changes:
- Change the signature of the `minimize` method (still not easy to read for a user)
- Catch the `None`s in the input bounds and add a more verbose error message that optimizer SNOBFIT requires parameter bounds
- Artificially add bounds for the optimizer. Basically the parameter can only be varied from 0 to 2pi so one could catch the `None` input and change it to these bounds. Though I'm not aware of the side effects this approach might have... Here it might be useful to ask also someone from qiskit nature whether it could be resolved within the UCCSD ansatz (though it would not solve the initial problem with the misleading signature/missing error message).
Any fixes (or suggestions therefore) are highly appreciated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/algorithms/optimizers/snobfit.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2019, 2020.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Stable Noisy Optimization by Branch and FIT algorithm (SNOBFIT) optimizer."""
14
15 from typing import Any, Dict, Optional, Callable, Tuple, List
16
17 import numpy as np
18 from qiskit.utils import optionals as _optionals
19 from .optimizer import Optimizer, OptimizerSupportLevel, OptimizerResult, POINT
20
21
22 @_optionals.HAS_SKQUANT.require_in_instance
23 @_optionals.HAS_SQSNOBFIT.require_in_instance
24 class SNOBFIT(Optimizer):
25 """Stable Noisy Optimization by Branch and FIT algorithm.
26
27 SnobFit is used for the optimization of derivative-free, noisy objective functions providing
28 robust and fast solutions of problems with continuous variables varying within bound.
29
30 Uses skquant.opt installed with pip install scikit-quant.
31 For further detail, please refer to
32 https://github.com/scikit-quant/scikit-quant and https://qat4chem.lbl.gov/software.
33 """
34
35 def __init__(
36 self,
37 maxiter: int = 1000,
38 maxfail: int = 10,
39 maxmp: int = None,
40 verbose: bool = False,
41 ) -> None:
42 """
43 Args:
44 maxiter: Maximum number of function evaluations.
45 maxmp: Maximum number of model points requested for the local fit.
46 Default = 2 * number of parameters + 6 set to this value when None.
47 maxfail: Maximum number of failures to improve the solution. Stops the algorithm
48 after maxfail is reached.
49 verbose: Provide verbose (debugging) output.
50
51 Raises:
52 MissingOptionalLibraryError: scikit-quant or SQSnobFit not installed
53 """
54 super().__init__()
55 self._maxiter = maxiter
56 self._maxfail = maxfail
57 self._maxmp = maxmp
58 self._verbose = verbose
59
60 def get_support_level(self):
61 """Returns support level dictionary."""
62 return {
63 "gradient": OptimizerSupportLevel.ignored,
64 "bounds": OptimizerSupportLevel.required,
65 "initial_point": OptimizerSupportLevel.required,
66 }
67
68 @property
69 def settings(self) -> Dict[str, Any]:
70 return {
71 "maxiter": self._maxiter,
72 "maxfail": self._maxfail,
73 "maxmp": self._maxmp,
74 "verbose": self._verbose,
75 }
76
77 def minimize(
78 self,
79 fun: Callable[[POINT], float],
80 x0: POINT,
81 jac: Optional[Callable[[POINT], POINT]] = None,
82 bounds: Optional[List[Tuple[float, float]]] = None,
83 ) -> OptimizerResult:
84 import skquant.opt as skq
85 from SQSnobFit import optset
86
87 snobfit_settings = {
88 "maxmp": self._maxmp,
89 "maxfail": self._maxfail,
90 "verbose": self._verbose,
91 }
92 options = optset(optin=snobfit_settings)
93 # counters the error when initial point is outside the acceptable bounds
94 x0 = np.asarray(x0)
95 for idx, theta in enumerate(x0):
96 if abs(theta) > bounds[idx][0]:
97 x0[idx] = x0[idx] % bounds[idx][0]
98 elif abs(theta) > bounds[idx][1]:
99 x0[idx] = x0[idx] % bounds[idx][1]
100
101 res, history = skq.minimize(
102 fun,
103 x0,
104 bounds=bounds,
105 budget=self._maxiter,
106 method="snobfit",
107 options=options,
108 )
109
110 optimizer_result = OptimizerResult()
111 optimizer_result.x = res.optpar
112 optimizer_result.fun = res.optval
113 optimizer_result.nfev = len(history)
114 return optimizer_result
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/algorithms/optimizers/snobfit.py b/qiskit/algorithms/optimizers/snobfit.py
--- a/qiskit/algorithms/optimizers/snobfit.py
+++ b/qiskit/algorithms/optimizers/snobfit.py
@@ -84,6 +84,9 @@
import skquant.opt as skq
from SQSnobFit import optset
+ if bounds is None or any(None in bound_tuple for bound_tuple in bounds):
+ raise ValueError("Optimizer SNOBFIT requires bounds for all parameters.")
+
snobfit_settings = {
"maxmp": self._maxmp,
"maxfail": self._maxfail,
| {"golden_diff": "diff --git a/qiskit/algorithms/optimizers/snobfit.py b/qiskit/algorithms/optimizers/snobfit.py\n--- a/qiskit/algorithms/optimizers/snobfit.py\n+++ b/qiskit/algorithms/optimizers/snobfit.py\n@@ -84,6 +84,9 @@\n import skquant.opt as skq\n from SQSnobFit import optset\n \n+ if bounds is None or any(None in bound_tuple for bound_tuple in bounds):\n+ raise ValueError(\"Optimizer SNOBFIT requires bounds for all parameters.\")\n+\n snobfit_settings = {\n \"maxmp\": self._maxmp,\n \"maxfail\": self._maxfail,\n", "issue": "Bug using SNOBFIT in combination with UCCSD (VQE) / misleading function signature of optimizer SNOBFIT\nWhen running a VQE algorithm with ansatz UCCSD the following error message shows up:\r\n```\r\n[...]\r\n File \"C:\\Users\\poc\\Anaconda3\\envs\\qiskit\\lib\\site-packages\\qiskit_nature\\algorithms\\pes_samplers\\bopes_sampler.py\", line 175, in sample\r\n self._raw_results = self._run_points(points)\r\n File \"C:\\Users\\poc\\Anaconda3\\envs\\qiskit\\lib\\site-packages\\qiskit_nature\\algorithms\\pes_samplers\\bopes_sampler.py\", line 201, in _run_points\r\n raw_result = self._run_single_point(point) # dict of results\r\n File \"C:\\Users\\poc\\Anaconda3\\envs\\qiskit\\lib\\site-packages\\qiskit_nature\\algorithms\\pes_samplers\\bopes_sampler.py\", line 259, in _run_single_point\r\n result = self._state_solver.solve(self._problem, aux_ops_current_step)\r\n File \"C:\\Users\\poc\\Anaconda3\\envs\\qiskit\\lib\\site-packages\\qiskit_nature\\algorithms\\ground_state_solvers\\ground_state_eigensolver.py\", line 91, in solve\r\n raw_mes_result = self._solver.compute_minimum_eigenvalue(main_operator, aux_ops) # type: ignore\r\n File \"C:\\Users\\poc\\Anaconda3\\envs\\qiskit\\lib\\site-packages\\qiskit\\algorithms\\minimum_eigen_solvers\\vqe.py\", line 530, in compute_minimum_eigenvalue\r\n opt_result = self.optimizer.minimize(\r\n File \"C:\\Users\\poc\\Anaconda3\\envs\\qiskit\\lib\\site-packages\\qiskit\\algorithms\\optimizers\\snobfit.py\", line 96, in minimize\r\n if abs(theta) > bounds[idx][0]:\r\nTypeError: '>' not supported between instances of 'float' and 'NoneType'\r\n```\r\n\r\nIn case of ansatz `EfficientSU2` the code runs fine, only with `UCCSD` I get the error.\r\n\r\nI tracked down the error to be caused by missing parameter bounds. The function [signature](https://github.com/Qiskit/qiskit-terra/blob/cbb64ed266b55525738de20a4e43419d5fef6e32/qiskit/algorithms/optimizers/snobfit.py#L82) of the optimizer's `minimize` function (``) suggests that `None` is a valid value. But the code never handles None-type input (neither `None` nor the nested list/tuple with `None` values).\r\n\r\nSo, I propose one of the following changes:\r\n- Change the signature of the `minimize` method (still not easy to read for a user)\r\n- Catch the `None`s in the input bounds and add a more verbose error message that optimizer SNOBFIT requires parameter bounds\r\n- Artificially add bounds for the optimizer. Basically the parameter can only be varied from 0 to 2pi so one could catch the `None` input and change it to these bounds. Though I'm not aware of the side effects this approach might have... 
Here it might be useful to ask also someone from qiskit nature whether it could be resolved within the UCCSD ansatz (though it would not solve the initial problem with the misleading signature/missing error message).\r\n\r\nAny fixes (or suggestions therefore) are highly appreciated.\r\n\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019, 2020.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Stable Noisy Optimization by Branch and FIT algorithm (SNOBFIT) optimizer.\"\"\"\n\nfrom typing import Any, Dict, Optional, Callable, Tuple, List\n\nimport numpy as np\nfrom qiskit.utils import optionals as _optionals\nfrom .optimizer import Optimizer, OptimizerSupportLevel, OptimizerResult, POINT\n\n\n@_optionals.HAS_SKQUANT.require_in_instance\n@_optionals.HAS_SQSNOBFIT.require_in_instance\nclass SNOBFIT(Optimizer):\n \"\"\"Stable Noisy Optimization by Branch and FIT algorithm.\n\n SnobFit is used for the optimization of derivative-free, noisy objective functions providing\n robust and fast solutions of problems with continuous variables varying within bound.\n\n Uses skquant.opt installed with pip install scikit-quant.\n For further detail, please refer to\n https://github.com/scikit-quant/scikit-quant and https://qat4chem.lbl.gov/software.\n \"\"\"\n\n def __init__(\n self,\n maxiter: int = 1000,\n maxfail: int = 10,\n maxmp: int = None,\n verbose: bool = False,\n ) -> None:\n \"\"\"\n Args:\n maxiter: Maximum number of function evaluations.\n maxmp: Maximum number of model points requested for the local fit.\n Default = 2 * number of parameters + 6 set to this value when None.\n maxfail: Maximum number of failures to improve the solution. 
Stops the algorithm\n after maxfail is reached.\n verbose: Provide verbose (debugging) output.\n\n Raises:\n MissingOptionalLibraryError: scikit-quant or SQSnobFit not installed\n \"\"\"\n super().__init__()\n self._maxiter = maxiter\n self._maxfail = maxfail\n self._maxmp = maxmp\n self._verbose = verbose\n\n def get_support_level(self):\n \"\"\"Returns support level dictionary.\"\"\"\n return {\n \"gradient\": OptimizerSupportLevel.ignored,\n \"bounds\": OptimizerSupportLevel.required,\n \"initial_point\": OptimizerSupportLevel.required,\n }\n\n @property\n def settings(self) -> Dict[str, Any]:\n return {\n \"maxiter\": self._maxiter,\n \"maxfail\": self._maxfail,\n \"maxmp\": self._maxmp,\n \"verbose\": self._verbose,\n }\n\n def minimize(\n self,\n fun: Callable[[POINT], float],\n x0: POINT,\n jac: Optional[Callable[[POINT], POINT]] = None,\n bounds: Optional[List[Tuple[float, float]]] = None,\n ) -> OptimizerResult:\n import skquant.opt as skq\n from SQSnobFit import optset\n\n snobfit_settings = {\n \"maxmp\": self._maxmp,\n \"maxfail\": self._maxfail,\n \"verbose\": self._verbose,\n }\n options = optset(optin=snobfit_settings)\n # counters the error when initial point is outside the acceptable bounds\n x0 = np.asarray(x0)\n for idx, theta in enumerate(x0):\n if abs(theta) > bounds[idx][0]:\n x0[idx] = x0[idx] % bounds[idx][0]\n elif abs(theta) > bounds[idx][1]:\n x0[idx] = x0[idx] % bounds[idx][1]\n\n res, history = skq.minimize(\n fun,\n x0,\n bounds=bounds,\n budget=self._maxiter,\n method=\"snobfit\",\n options=options,\n )\n\n optimizer_result = OptimizerResult()\n optimizer_result.x = res.optpar\n optimizer_result.fun = res.optval\n optimizer_result.nfev = len(history)\n return optimizer_result\n", "path": "qiskit/algorithms/optimizers/snobfit.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019, 2020.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Stable Noisy Optimization by Branch and FIT algorithm (SNOBFIT) optimizer.\"\"\"\n\nfrom typing import Any, Dict, Optional, Callable, Tuple, List\n\nimport numpy as np\nfrom qiskit.utils import optionals as _optionals\nfrom .optimizer import Optimizer, OptimizerSupportLevel, OptimizerResult, POINT\n\n\n@_optionals.HAS_SKQUANT.require_in_instance\n@_optionals.HAS_SQSNOBFIT.require_in_instance\nclass SNOBFIT(Optimizer):\n \"\"\"Stable Noisy Optimization by Branch and FIT algorithm.\n\n SnobFit is used for the optimization of derivative-free, noisy objective functions providing\n robust and fast solutions of problems with continuous variables varying within bound.\n\n Uses skquant.opt installed with pip install scikit-quant.\n For further detail, please refer to\n https://github.com/scikit-quant/scikit-quant and https://qat4chem.lbl.gov/software.\n \"\"\"\n\n def __init__(\n self,\n maxiter: int = 1000,\n maxfail: int = 10,\n maxmp: int = None,\n verbose: bool = False,\n ) -> None:\n \"\"\"\n Args:\n maxiter: Maximum number of function evaluations.\n maxmp: Maximum number of model points requested for the local fit.\n Default = 2 * number of parameters + 6 set to this value when None.\n maxfail: Maximum number of failures to improve the solution. Stops the algorithm\n after maxfail is reached.\n verbose: Provide verbose (debugging) output.\n\n Raises:\n MissingOptionalLibraryError: scikit-quant or SQSnobFit not installed\n \"\"\"\n super().__init__()\n self._maxiter = maxiter\n self._maxfail = maxfail\n self._maxmp = maxmp\n self._verbose = verbose\n\n def get_support_level(self):\n \"\"\"Returns support level dictionary.\"\"\"\n return {\n \"gradient\": OptimizerSupportLevel.ignored,\n \"bounds\": OptimizerSupportLevel.required,\n \"initial_point\": OptimizerSupportLevel.required,\n }\n\n @property\n def settings(self) -> Dict[str, Any]:\n return {\n \"maxiter\": self._maxiter,\n \"maxfail\": self._maxfail,\n \"maxmp\": self._maxmp,\n \"verbose\": self._verbose,\n }\n\n def minimize(\n self,\n fun: Callable[[POINT], float],\n x0: POINT,\n jac: Optional[Callable[[POINT], POINT]] = None,\n bounds: Optional[List[Tuple[float, float]]] = None,\n ) -> OptimizerResult:\n import skquant.opt as skq\n from SQSnobFit import optset\n\n if bounds is None or any(None in bound_tuple for bound_tuple in bounds):\n raise ValueError(\"Optimizer SNOBFIT requires bounds for all parameters.\")\n\n snobfit_settings = {\n \"maxmp\": self._maxmp,\n \"maxfail\": self._maxfail,\n \"verbose\": self._verbose,\n }\n options = optset(optin=snobfit_settings)\n # counters the error when initial point is outside the acceptable bounds\n x0 = np.asarray(x0)\n for idx, theta in enumerate(x0):\n if abs(theta) > bounds[idx][0]:\n x0[idx] = x0[idx] % bounds[idx][0]\n elif abs(theta) > bounds[idx][1]:\n x0[idx] = x0[idx] % bounds[idx][1]\n\n res, history = skq.minimize(\n fun,\n x0,\n bounds=bounds,\n budget=self._maxiter,\n method=\"snobfit\",\n options=options,\n )\n\n optimizer_result = OptimizerResult()\n optimizer_result.x = res.optpar\n optimizer_result.fun = res.optval\n optimizer_result.nfev = len(history)\n return optimizer_result\n", "path": 
"qiskit/algorithms/optimizers/snobfit.py"}]} | 2,255 | 153 |
gh_patches_debug_32323 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-923 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Print out name of the created master pod after submitting a job via client
Currently, when submitting a job via `elasticdl train --job_name=xxx`, no master pod information will be printed out and users have to guess the name to master pod from the job name they provided.
We should print out the name to master pod when job has been submitted successfully.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/elasticdl/api.py`
Content:
```
1 import os
2
3 from elasticdl.python.common import k8s_client as k8s
4 from elasticdl.python.elasticdl.image_builder import (
5 build_and_push_docker_image,
6 )
7
8 MODEL_ROOT_PATH = "/model_zoo"
9 CLUSTER_SPEC_ROOT_PATH = "/cluster_spec"
10
11
12 def train(args):
13 image_name = build_and_push_docker_image(
14 model_zoo=args.model_def,
15 base_image=args.image_base,
16 docker_image_prefix=args.docker_image_prefix,
17 extra_pypi=args.extra_pypi_index,
18 cluster_spec=args.cluster_spec,
19 )
20 container_args = [
21 "-m",
22 "elasticdl.python.master.main",
23 "--job_name",
24 args.job_name,
25 "--worker_image",
26 image_name,
27 "--model_def",
28 _model_def_in_docker(args.model_def),
29 "--cluster_spec",
30 _cluster_spec_def_in_docker(args.cluster_spec),
31 "--num_workers",
32 str(args.num_workers),
33 "--worker_resource_request",
34 args.worker_resource_request,
35 "--worker_resource_limit",
36 args.worker_resource_limit,
37 "--namespace",
38 args.namespace,
39 "--tensorboard_log_dir",
40 args.tensorboard_log_dir,
41 "--records_per_task",
42 str(args.records_per_task),
43 "--num_epochs",
44 str(args.num_epochs),
45 "--grads_to_wait",
46 str(args.grads_to_wait),
47 "--minibatch_size",
48 str(args.minibatch_size),
49 "--training_data_dir",
50 args.training_data_dir,
51 "--evaluation_data_dir",
52 args.evaluation_data_dir,
53 "--checkpoint_steps",
54 str(args.checkpoint_steps),
55 "--checkpoint_dir",
56 args.checkpoint_dir,
57 "--keep_checkpoint_max",
58 str(args.keep_checkpoint_max),
59 "--evaluation_steps",
60 str(args.evaluation_steps),
61 "--evaluation_start_delay_secs",
62 str(args.evaluation_start_delay_secs),
63 "--evaluation_throttle_secs",
64 str(args.evaluation_throttle_secs),
65 "--input_fn",
66 args.input_fn,
67 "--loss",
68 args.loss,
69 "--optimizer",
70 args.optimizer,
71 "--eval_metrics_fn",
72 args.eval_metrics_fn,
73 "--model_class",
74 args.model_class,
75 "--model_params",
76 args.model_params,
77 ]
78 container_args.extend(["--image_pull_policy", args.image_pull_policy])
79 container_args.extend(["--restart_policy", args.restart_policy])
80 container_args.extend(["--volume", args.volume])
81
82 args.master_resource_limit = (
83 args.master_resource_limit
84 if args.master_resource_limit
85 else args.master_resource_request
86 )
87
88 k8s.Client(
89 image_name=image_name,
90 namespace=args.namespace,
91 job_name=args.job_name,
92 event_callback=None,
93 cluster_spec=args.cluster_spec,
94 ).create_master(
95 resource_requests=args.master_resource_request,
96 resource_limits=args.master_resource_limit,
97 args=container_args,
98 pod_priority=args.master_pod_priority,
99 image_pull_policy=args.image_pull_policy,
100 restart_policy=args.restart_policy,
101 volume=args.volume,
102 )
103 # TODO: print dashboard url after launching the master pod
104
105
106 def evaluate(args):
107 image_name = build_and_push_docker_image(
108 model_zoo=args.model_def,
109 base_image=args.image_base,
110 docker_image_prefix=args.docker_image_prefix,
111 extra_pypi=args.extra_pypi_index,
112 cluster_spec=args.cluster_spec,
113 )
114 container_args = [
115 "-m",
116 "elasticdl.python.master.main",
117 "--job_name",
118 args.job_name,
119 "--worker_image",
120 image_name,
121 "--model_def",
122 _model_def_in_docker(args.model_def),
123 "--cluster_spec",
124 _cluster_spec_def_in_docker(args.cluster_spec),
125 "--num_workers",
126 str(args.num_workers),
127 "--worker_resource_request",
128 args.worker_resource_request,
129 "--worker_resource_limit",
130 args.worker_resource_limit,
131 "--namespace",
132 args.namespace,
133 "--records_per_task",
134 str(args.records_per_task),
135 "--minibatch_size",
136 str(args.minibatch_size),
137 "--evaluation_data_dir",
138 args.evaluation_data_dir,
139 "--checkpoint_filename_for_init",
140 args.checkpoint_filename_for_init,
141 "--input_fn",
142 args.input_fn,
143 "--eval_metrics_fn",
144 args.eval_metrics_fn,
145 "--model_class",
146 args.model_class,
147 "--model_params",
148 args.model_params,
149 ]
150 container_args.extend(["--image_pull_policy", args.image_pull_policy])
151 container_args.extend(["--restart_policy", args.restart_policy])
152 container_args.extend(["--volume", args.volume])
153
154 args.master_resource_limit = (
155 args.master_resource_limit
156 if args.master_resource_limit
157 else args.master_resource_request
158 )
159
160 k8s.Client(
161 image_name=image_name,
162 namespace=args.namespace,
163 job_name=args.job_name,
164 event_callback=None,
165 cluster_spec=args.cluster_spec,
166 ).create_master(
167 resource_requests=args.master_resource_request,
168 resource_limits=args.master_resource_limit,
169 args=container_args,
170 pod_priority=args.master_pod_priority,
171 image_pull_policy=args.image_pull_policy,
172 restart_policy=args.restart_policy,
173 volume=args.volume,
174 )
175
176
177 def _model_def_in_docker(model_def):
178 return os.path.join(MODEL_ROOT_PATH, os.path.basename(model_def))
179
180
181 def _cluster_spec_def_in_docker(cluster_spec):
182 return (
183 os.path.join(CLUSTER_SPEC_ROOT_PATH, os.path.basename(cluster_spec))
184 if cluster_spec
185 else ""
186 )
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/elasticdl/api.py b/elasticdl/python/elasticdl/api.py
--- a/elasticdl/python/elasticdl/api.py
+++ b/elasticdl/python/elasticdl/api.py
@@ -85,21 +85,7 @@
else args.master_resource_request
)
- k8s.Client(
- image_name=image_name,
- namespace=args.namespace,
- job_name=args.job_name,
- event_callback=None,
- cluster_spec=args.cluster_spec,
- ).create_master(
- resource_requests=args.master_resource_request,
- resource_limits=args.master_resource_limit,
- args=container_args,
- pod_priority=args.master_pod_priority,
- image_pull_policy=args.image_pull_policy,
- restart_policy=args.restart_policy,
- volume=args.volume,
- )
+ _submit_job(image_name, args, container_args)
# TODO: print dashboard url after launching the master pod
@@ -157,20 +143,30 @@
else args.master_resource_request
)
- k8s.Client(
+ _submit_job(image_name, args, container_args)
+
+
+def _submit_job(image_name, client_args, container_args):
+ client = k8s.Client(
image_name=image_name,
- namespace=args.namespace,
- job_name=args.job_name,
+ namespace=client_args.namespace,
+ job_name=client_args.job_name,
event_callback=None,
- cluster_spec=args.cluster_spec,
- ).create_master(
- resource_requests=args.master_resource_request,
- resource_limits=args.master_resource_limit,
+ cluster_spec=client_args.cluster_spec,
+ )
+
+ client.create_master(
+ resource_requests=client_args.master_resource_request,
+ resource_limits=client_args.master_resource_limit,
args=container_args,
- pod_priority=args.master_pod_priority,
- image_pull_policy=args.image_pull_policy,
- restart_policy=args.restart_policy,
- volume=args.volume,
+ pod_priority=client_args.master_pod_priority,
+ image_pull_policy=client_args.image_pull_policy,
+ restart_policy=client_args.restart_policy,
+ volume=client_args.volume,
+ )
+ print(
+ "ElasticDL job %s was successfully submitted. The master pod is: %s."
+ % (client_args.job_name, client.get_master_pod_name())
)
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/api.py b/elasticdl/python/elasticdl/api.py\n--- a/elasticdl/python/elasticdl/api.py\n+++ b/elasticdl/python/elasticdl/api.py\n@@ -85,21 +85,7 @@\n else args.master_resource_request\n )\n \n- k8s.Client(\n- image_name=image_name,\n- namespace=args.namespace,\n- job_name=args.job_name,\n- event_callback=None,\n- cluster_spec=args.cluster_spec,\n- ).create_master(\n- resource_requests=args.master_resource_request,\n- resource_limits=args.master_resource_limit,\n- args=container_args,\n- pod_priority=args.master_pod_priority,\n- image_pull_policy=args.image_pull_policy,\n- restart_policy=args.restart_policy,\n- volume=args.volume,\n- )\n+ _submit_job(image_name, args, container_args)\n # TODO: print dashboard url after launching the master pod\n \n \n@@ -157,20 +143,30 @@\n else args.master_resource_request\n )\n \n- k8s.Client(\n+ _submit_job(image_name, args, container_args)\n+\n+\n+def _submit_job(image_name, client_args, container_args):\n+ client = k8s.Client(\n image_name=image_name,\n- namespace=args.namespace,\n- job_name=args.job_name,\n+ namespace=client_args.namespace,\n+ job_name=client_args.job_name,\n event_callback=None,\n- cluster_spec=args.cluster_spec,\n- ).create_master(\n- resource_requests=args.master_resource_request,\n- resource_limits=args.master_resource_limit,\n+ cluster_spec=client_args.cluster_spec,\n+ )\n+\n+ client.create_master(\n+ resource_requests=client_args.master_resource_request,\n+ resource_limits=client_args.master_resource_limit,\n args=container_args,\n- pod_priority=args.master_pod_priority,\n- image_pull_policy=args.image_pull_policy,\n- restart_policy=args.restart_policy,\n- volume=args.volume,\n+ pod_priority=client_args.master_pod_priority,\n+ image_pull_policy=client_args.image_pull_policy,\n+ restart_policy=client_args.restart_policy,\n+ volume=client_args.volume,\n+ )\n+ print(\n+ \"ElasticDL job %s was successfully submitted. The master pod is: %s.\"\n+ % (client_args.job_name, client.get_master_pod_name())\n )\n", "issue": "Print out name of the created master pod after submitting a job via client\nCurrently, when submitting a job via `elasticdl train --job_name=xxx`, no master pod information will be printed out and users have to guess the name to master pod from the job name they provided. 
\r\n\r\nWe should print out the name to master pod when job has been submitted successfully.\n", "before_files": [{"content": "import os\n\nfrom elasticdl.python.common import k8s_client as k8s\nfrom elasticdl.python.elasticdl.image_builder import (\n build_and_push_docker_image,\n)\n\nMODEL_ROOT_PATH = \"/model_zoo\"\nCLUSTER_SPEC_ROOT_PATH = \"/cluster_spec\"\n\n\ndef train(args):\n image_name = build_and_push_docker_image(\n model_zoo=args.model_def,\n base_image=args.image_base,\n docker_image_prefix=args.docker_image_prefix,\n extra_pypi=args.extra_pypi_index,\n cluster_spec=args.cluster_spec,\n )\n container_args = [\n \"-m\",\n \"elasticdl.python.master.main\",\n \"--job_name\",\n args.job_name,\n \"--worker_image\",\n image_name,\n \"--model_def\",\n _model_def_in_docker(args.model_def),\n \"--cluster_spec\",\n _cluster_spec_def_in_docker(args.cluster_spec),\n \"--num_workers\",\n str(args.num_workers),\n \"--worker_resource_request\",\n args.worker_resource_request,\n \"--worker_resource_limit\",\n args.worker_resource_limit,\n \"--namespace\",\n args.namespace,\n \"--tensorboard_log_dir\",\n args.tensorboard_log_dir,\n \"--records_per_task\",\n str(args.records_per_task),\n \"--num_epochs\",\n str(args.num_epochs),\n \"--grads_to_wait\",\n str(args.grads_to_wait),\n \"--minibatch_size\",\n str(args.minibatch_size),\n \"--training_data_dir\",\n args.training_data_dir,\n \"--evaluation_data_dir\",\n args.evaluation_data_dir,\n \"--checkpoint_steps\",\n str(args.checkpoint_steps),\n \"--checkpoint_dir\",\n args.checkpoint_dir,\n \"--keep_checkpoint_max\",\n str(args.keep_checkpoint_max),\n \"--evaluation_steps\",\n str(args.evaluation_steps),\n \"--evaluation_start_delay_secs\",\n str(args.evaluation_start_delay_secs),\n \"--evaluation_throttle_secs\",\n str(args.evaluation_throttle_secs),\n \"--input_fn\",\n args.input_fn,\n \"--loss\",\n args.loss,\n \"--optimizer\",\n args.optimizer,\n \"--eval_metrics_fn\",\n args.eval_metrics_fn,\n \"--model_class\",\n args.model_class,\n \"--model_params\",\n args.model_params,\n ]\n container_args.extend([\"--image_pull_policy\", args.image_pull_policy])\n container_args.extend([\"--restart_policy\", args.restart_policy])\n container_args.extend([\"--volume\", args.volume])\n\n args.master_resource_limit = (\n args.master_resource_limit\n if args.master_resource_limit\n else args.master_resource_request\n )\n\n k8s.Client(\n image_name=image_name,\n namespace=args.namespace,\n job_name=args.job_name,\n event_callback=None,\n cluster_spec=args.cluster_spec,\n ).create_master(\n resource_requests=args.master_resource_request,\n resource_limits=args.master_resource_limit,\n args=container_args,\n pod_priority=args.master_pod_priority,\n image_pull_policy=args.image_pull_policy,\n restart_policy=args.restart_policy,\n volume=args.volume,\n )\n # TODO: print dashboard url after launching the master pod\n\n\ndef evaluate(args):\n image_name = build_and_push_docker_image(\n model_zoo=args.model_def,\n base_image=args.image_base,\n docker_image_prefix=args.docker_image_prefix,\n extra_pypi=args.extra_pypi_index,\n cluster_spec=args.cluster_spec,\n )\n container_args = [\n \"-m\",\n \"elasticdl.python.master.main\",\n \"--job_name\",\n args.job_name,\n \"--worker_image\",\n image_name,\n \"--model_def\",\n _model_def_in_docker(args.model_def),\n \"--cluster_spec\",\n _cluster_spec_def_in_docker(args.cluster_spec),\n \"--num_workers\",\n str(args.num_workers),\n \"--worker_resource_request\",\n args.worker_resource_request,\n 
\"--worker_resource_limit\",\n args.worker_resource_limit,\n \"--namespace\",\n args.namespace,\n \"--records_per_task\",\n str(args.records_per_task),\n \"--minibatch_size\",\n str(args.minibatch_size),\n \"--evaluation_data_dir\",\n args.evaluation_data_dir,\n \"--checkpoint_filename_for_init\",\n args.checkpoint_filename_for_init,\n \"--input_fn\",\n args.input_fn,\n \"--eval_metrics_fn\",\n args.eval_metrics_fn,\n \"--model_class\",\n args.model_class,\n \"--model_params\",\n args.model_params,\n ]\n container_args.extend([\"--image_pull_policy\", args.image_pull_policy])\n container_args.extend([\"--restart_policy\", args.restart_policy])\n container_args.extend([\"--volume\", args.volume])\n\n args.master_resource_limit = (\n args.master_resource_limit\n if args.master_resource_limit\n else args.master_resource_request\n )\n\n k8s.Client(\n image_name=image_name,\n namespace=args.namespace,\n job_name=args.job_name,\n event_callback=None,\n cluster_spec=args.cluster_spec,\n ).create_master(\n resource_requests=args.master_resource_request,\n resource_limits=args.master_resource_limit,\n args=container_args,\n pod_priority=args.master_pod_priority,\n image_pull_policy=args.image_pull_policy,\n restart_policy=args.restart_policy,\n volume=args.volume,\n )\n\n\ndef _model_def_in_docker(model_def):\n return os.path.join(MODEL_ROOT_PATH, os.path.basename(model_def))\n\n\ndef _cluster_spec_def_in_docker(cluster_spec):\n return (\n os.path.join(CLUSTER_SPEC_ROOT_PATH, os.path.basename(cluster_spec))\n if cluster_spec\n else \"\"\n )\n", "path": "elasticdl/python/elasticdl/api.py"}], "after_files": [{"content": "import os\n\nfrom elasticdl.python.common import k8s_client as k8s\nfrom elasticdl.python.elasticdl.image_builder import (\n build_and_push_docker_image,\n)\n\nMODEL_ROOT_PATH = \"/model_zoo\"\nCLUSTER_SPEC_ROOT_PATH = \"/cluster_spec\"\n\n\ndef train(args):\n image_name = build_and_push_docker_image(\n model_zoo=args.model_def,\n base_image=args.image_base,\n docker_image_prefix=args.docker_image_prefix,\n extra_pypi=args.extra_pypi_index,\n cluster_spec=args.cluster_spec,\n )\n container_args = [\n \"-m\",\n \"elasticdl.python.master.main\",\n \"--job_name\",\n args.job_name,\n \"--worker_image\",\n image_name,\n \"--model_def\",\n _model_def_in_docker(args.model_def),\n \"--cluster_spec\",\n _cluster_spec_def_in_docker(args.cluster_spec),\n \"--num_workers\",\n str(args.num_workers),\n \"--worker_resource_request\",\n args.worker_resource_request,\n \"--worker_resource_limit\",\n args.worker_resource_limit,\n \"--namespace\",\n args.namespace,\n \"--tensorboard_log_dir\",\n args.tensorboard_log_dir,\n \"--records_per_task\",\n str(args.records_per_task),\n \"--num_epochs\",\n str(args.num_epochs),\n \"--grads_to_wait\",\n str(args.grads_to_wait),\n \"--minibatch_size\",\n str(args.minibatch_size),\n \"--training_data_dir\",\n args.training_data_dir,\n \"--evaluation_data_dir\",\n args.evaluation_data_dir,\n \"--checkpoint_steps\",\n str(args.checkpoint_steps),\n \"--checkpoint_dir\",\n args.checkpoint_dir,\n \"--keep_checkpoint_max\",\n str(args.keep_checkpoint_max),\n \"--evaluation_steps\",\n str(args.evaluation_steps),\n \"--evaluation_start_delay_secs\",\n str(args.evaluation_start_delay_secs),\n \"--evaluation_throttle_secs\",\n str(args.evaluation_throttle_secs),\n \"--input_fn\",\n args.input_fn,\n \"--loss\",\n args.loss,\n \"--optimizer\",\n args.optimizer,\n \"--eval_metrics_fn\",\n args.eval_metrics_fn,\n \"--model_class\",\n args.model_class,\n \"--model_params\",\n 
args.model_params,\n ]\n container_args.extend([\"--image_pull_policy\", args.image_pull_policy])\n container_args.extend([\"--restart_policy\", args.restart_policy])\n container_args.extend([\"--volume\", args.volume])\n\n args.master_resource_limit = (\n args.master_resource_limit\n if args.master_resource_limit\n else args.master_resource_request\n )\n\n _submit_job(image_name, args, container_args)\n # TODO: print dashboard url after launching the master pod\n\n\ndef evaluate(args):\n image_name = build_and_push_docker_image(\n model_zoo=args.model_def,\n base_image=args.image_base,\n docker_image_prefix=args.docker_image_prefix,\n extra_pypi=args.extra_pypi_index,\n cluster_spec=args.cluster_spec,\n )\n container_args = [\n \"-m\",\n \"elasticdl.python.master.main\",\n \"--job_name\",\n args.job_name,\n \"--worker_image\",\n image_name,\n \"--model_def\",\n _model_def_in_docker(args.model_def),\n \"--cluster_spec\",\n _cluster_spec_def_in_docker(args.cluster_spec),\n \"--num_workers\",\n str(args.num_workers),\n \"--worker_resource_request\",\n args.worker_resource_request,\n \"--worker_resource_limit\",\n args.worker_resource_limit,\n \"--namespace\",\n args.namespace,\n \"--records_per_task\",\n str(args.records_per_task),\n \"--minibatch_size\",\n str(args.minibatch_size),\n \"--evaluation_data_dir\",\n args.evaluation_data_dir,\n \"--checkpoint_filename_for_init\",\n args.checkpoint_filename_for_init,\n \"--input_fn\",\n args.input_fn,\n \"--eval_metrics_fn\",\n args.eval_metrics_fn,\n \"--model_class\",\n args.model_class,\n \"--model_params\",\n args.model_params,\n ]\n container_args.extend([\"--image_pull_policy\", args.image_pull_policy])\n container_args.extend([\"--restart_policy\", args.restart_policy])\n container_args.extend([\"--volume\", args.volume])\n\n args.master_resource_limit = (\n args.master_resource_limit\n if args.master_resource_limit\n else args.master_resource_request\n )\n\n _submit_job(image_name, args, container_args)\n\n\ndef _submit_job(image_name, client_args, container_args):\n client = k8s.Client(\n image_name=image_name,\n namespace=client_args.namespace,\n job_name=client_args.job_name,\n event_callback=None,\n cluster_spec=client_args.cluster_spec,\n )\n\n client.create_master(\n resource_requests=client_args.master_resource_request,\n resource_limits=client_args.master_resource_limit,\n args=container_args,\n pod_priority=client_args.master_pod_priority,\n image_pull_policy=client_args.image_pull_policy,\n restart_policy=client_args.restart_policy,\n volume=client_args.volume,\n )\n print(\n \"ElasticDL job %s was successfully submitted. The master pod is: %s.\"\n % (client_args.job_name, client.get_master_pod_name())\n )\n\n\ndef _model_def_in_docker(model_def):\n return os.path.join(MODEL_ROOT_PATH, os.path.basename(model_def))\n\n\ndef _cluster_spec_def_in_docker(cluster_spec):\n return (\n os.path.join(CLUSTER_SPEC_ROOT_PATH, os.path.basename(cluster_spec))\n if cluster_spec\n else \"\"\n )\n", "path": "elasticdl/python/elasticdl/api.py"}]} | 1,937 | 520 |
gh_patches_debug_21365 | rasdani/github-patches | git_diff | GoogleCloudPlatform__PerfKitBenchmarker-586 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Help doesn't render with FlagValuesProxy.
Example:
```
[:~/git/PerfKitBenchmarker] [perfkit] release-0.23.0+* 1 ± python pkb.py --benchmarks redis_ycsb --machine_type n1-standard-4 --json_output redis_ycsb.json
ERROR:root:Unknown command line flag 'json_output'
Usage: pkb.py ARGS
<perfkitbenchmarker.context.FlagValuesProxy object at 0x7f51910bc050>
```
@ehankland - do you have a minute to look at this? If not assign back to me.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `perfkitbenchmarker/context.py`
Content:
```
1 # Copyright 2015 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Module for working with the current thread context."""
16
17 import threading
18
19 import gflags as flags
20
21
22 class FlagsModuleProxy(object):
23 """Class which acts as a proxy for the flags module.
24
25 When the FLAGS attribute is accessed, BENCHMARK_FLAGS will be returned
26 rather than the global FlagValues object. BENCHMARK_FLAGS is an instance
27 of FlagValuesProxy, which enables benchmarks to run with different and
28 even conflicting flags. Accessing the GLOBAL_FLAGS attribute will return
29 the global FlagValues object. Otherwise, this will behave just like the
30 flags module.
31 """
32
33 def __getattr__(self, name):
34 if name == 'FLAGS':
35 return BENCHMARK_FLAGS
36 elif name == 'GLOBAL_FLAGS':
37 return flags.FLAGS
38 return flags.__dict__[name]
39
40
41 class FlagValuesProxy(object):
42 """Class which provides the same interface as FlagValues.
43
44 By acting as a proxy for the FlagValues object (i.e. flags.FLAGS),
45 this enables benchmark specific flags. This proxy attempts to
46 use the current thread's BenchmarkSpec's FlagValues object, but
47 falls back to using flags.FLAGS if the thread has no BenchmarkSpec
48 object.
49 """
50
51 @property
52 def _thread_flag_values(self):
53 """Returns the correct FlagValues object for the current thread.
54
55 This first tries to get the BenchmarkSpec object corresponding to the
56 current thread. If there is one, it returns that spec's FlagValues
57 object. If there isn't one, it will return the global FlagValues
58 object.
59 """
60 benchmark_spec = GetThreadBenchmarkSpec()
61 if benchmark_spec:
62 return benchmark_spec.FLAGS
63 else:
64 return flags.FLAGS
65
66 def __setattr__(self, name, value):
67 self._thread_flag_values.__setattr__(name, value)
68
69 def __getattr__(self, name):
70 return self._thread_flag_values.__getattr__(name)
71
72 def __setitem__(self, key, value):
73 self._thread_flag_values.__setitem__(key, value)
74
75 def __getitem__(self, key):
76 return self._thread_flag_values.__getitem__(key)
77
78 def __call__(self, argv):
79 return self._thread_flag_values.__call__(argv)
80
81 def FlagDict(self):
82 return self._thread_flag_values.FlagDict()
83
84
85 BENCHMARK_FLAGS = FlagValuesProxy()
86
87
88 class _ThreadData(threading.local):
89 def __init__(self):
90 self.benchmark_spec = None
91
92
93 _thread_local = _ThreadData()
94
95
96 def SetThreadBenchmarkSpec(benchmark_spec):
97 """Sets the current thread's BenchmarkSpec object."""
98 _thread_local.benchmark_spec = benchmark_spec
99
100
101 def GetThreadBenchmarkSpec():
102 """Gets the current thread's BenchmarkSpec object.
103
104 If SetThreadBenchmarkSpec() has not been called in either the current thread
105 or in an ancestor, then this method will return None by default.
106 """
107 return _thread_local.benchmark_spec
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/perfkitbenchmarker/context.py b/perfkitbenchmarker/context.py
--- a/perfkitbenchmarker/context.py
+++ b/perfkitbenchmarker/context.py
@@ -63,23 +63,24 @@
else:
return flags.FLAGS
- def __setattr__(self, name, value):
- self._thread_flag_values.__setattr__(name, value)
- def __getattr__(self, name):
- return self._thread_flag_values.__getattr__(name)
-
- def __setitem__(self, key, value):
- self._thread_flag_values.__setitem__(key, value)
-
- def __getitem__(self, key):
- return self._thread_flag_values.__getitem__(key)
-
- def __call__(self, argv):
- return self._thread_flag_values.__call__(argv)
-
- def FlagDict(self):
- return self._thread_flag_values.FlagDict()
+def _AddProxyMethod(f_name):
+ """Adds a method to FlagValuesProxy that forwards to _thread_flag_values."""
+ def f(self, *args, **kwargs):
+ return getattr(self._thread_flag_values, f_name)(*args, **kwargs)
+ f.__name__ = f_name
+ f.__doc__ = 'Proxied ' + f_name
+ setattr(FlagValuesProxy, f_name, f)
+
+
+# TODO: introduce a more generic proxy.
+for _f_name in ['FlagDict', 'Reset', 'SetDefault', 'RegisteredFlags',
+ 'FlagValuesDict', '__contains__', '__iter__', '__call__',
+ '__setattr__', '__getattr__', '__setitem__', '__getitem__',
+ '__str__']:
+ _AddProxyMethod(_f_name)
+del _f_name
+del _AddProxyMethod
BENCHMARK_FLAGS = FlagValuesProxy()
| {"golden_diff": "diff --git a/perfkitbenchmarker/context.py b/perfkitbenchmarker/context.py\n--- a/perfkitbenchmarker/context.py\n+++ b/perfkitbenchmarker/context.py\n@@ -63,23 +63,24 @@\n else:\n return flags.FLAGS\n \n- def __setattr__(self, name, value):\n- self._thread_flag_values.__setattr__(name, value)\n \n- def __getattr__(self, name):\n- return self._thread_flag_values.__getattr__(name)\n-\n- def __setitem__(self, key, value):\n- self._thread_flag_values.__setitem__(key, value)\n-\n- def __getitem__(self, key):\n- return self._thread_flag_values.__getitem__(key)\n-\n- def __call__(self, argv):\n- return self._thread_flag_values.__call__(argv)\n-\n- def FlagDict(self):\n- return self._thread_flag_values.FlagDict()\n+def _AddProxyMethod(f_name):\n+ \"\"\"Adds a method to FlagValuesProxy that forwards to _thread_flag_values.\"\"\"\n+ def f(self, *args, **kwargs):\n+ return getattr(self._thread_flag_values, f_name)(*args, **kwargs)\n+ f.__name__ = f_name\n+ f.__doc__ = 'Proxied ' + f_name\n+ setattr(FlagValuesProxy, f_name, f)\n+\n+\n+# TODO: introduce a more generic proxy.\n+for _f_name in ['FlagDict', 'Reset', 'SetDefault', 'RegisteredFlags',\n+ 'FlagValuesDict', '__contains__', '__iter__', '__call__',\n+ '__setattr__', '__getattr__', '__setitem__', '__getitem__',\n+ '__str__']:\n+ _AddProxyMethod(_f_name)\n+del _f_name\n+del _AddProxyMethod\n \n \n BENCHMARK_FLAGS = FlagValuesProxy()\n", "issue": "Help doesn't render with FlagValuesProxy.\nExample:\n\n```\n[:~/git/PerfKitBenchmarker] [perfkit] release-0.23.0+* 1 \u00b1 python pkb.py --benchmarks redis_ycsb --machine_type n1-standard-4 --json_output redis_ycsb.json\nERROR:root:Unknown command line flag 'json_output'\nUsage: pkb.py ARGS\n<perfkitbenchmarker.context.FlagValuesProxy object at 0x7f51910bc050>\n```\n\n@ehankland - do you have a minute to look at this? If not assign back to me.\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Module for working with the current thread context.\"\"\"\n\nimport threading\n\nimport gflags as flags\n\n\nclass FlagsModuleProxy(object):\n \"\"\"Class which acts as a proxy for the flags module.\n\n When the FLAGS attribute is accessed, BENCHMARK_FLAGS will be returned\n rather than the global FlagValues object. BENCHMARK_FLAGS is an instance\n of FlagValuesProxy, which enables benchmarks to run with different and\n even conflicting flags. Accessing the GLOBAL_FLAGS attribute will return\n the global FlagValues object. Otherwise, this will behave just like the\n flags module.\n \"\"\"\n\n def __getattr__(self, name):\n if name == 'FLAGS':\n return BENCHMARK_FLAGS\n elif name == 'GLOBAL_FLAGS':\n return flags.FLAGS\n return flags.__dict__[name]\n\n\nclass FlagValuesProxy(object):\n \"\"\"Class which provides the same interface as FlagValues.\n\n By acting as a proxy for the FlagValues object (i.e. flags.FLAGS),\n this enables benchmark specific flags. 
This proxy attempts to\n use the current thread's BenchmarkSpec's FlagValues object, but\n falls back to using flags.FLAGS if the thread has no BenchmarkSpec\n object.\n \"\"\"\n\n @property\n def _thread_flag_values(self):\n \"\"\"Returns the correct FlagValues object for the current thread.\n\n This first tries to get the BenchmarkSpec object corresponding to the\n current thread. If there is one, it returns that spec's FlagValues\n object. If there isn't one, it will return the global FlagValues\n object.\n \"\"\"\n benchmark_spec = GetThreadBenchmarkSpec()\n if benchmark_spec:\n return benchmark_spec.FLAGS\n else:\n return flags.FLAGS\n\n def __setattr__(self, name, value):\n self._thread_flag_values.__setattr__(name, value)\n\n def __getattr__(self, name):\n return self._thread_flag_values.__getattr__(name)\n\n def __setitem__(self, key, value):\n self._thread_flag_values.__setitem__(key, value)\n\n def __getitem__(self, key):\n return self._thread_flag_values.__getitem__(key)\n\n def __call__(self, argv):\n return self._thread_flag_values.__call__(argv)\n\n def FlagDict(self):\n return self._thread_flag_values.FlagDict()\n\n\nBENCHMARK_FLAGS = FlagValuesProxy()\n\n\nclass _ThreadData(threading.local):\n def __init__(self):\n self.benchmark_spec = None\n\n\n_thread_local = _ThreadData()\n\n\ndef SetThreadBenchmarkSpec(benchmark_spec):\n \"\"\"Sets the current thread's BenchmarkSpec object.\"\"\"\n _thread_local.benchmark_spec = benchmark_spec\n\n\ndef GetThreadBenchmarkSpec():\n \"\"\"Gets the current thread's BenchmarkSpec object.\n\n If SetThreadBenchmarkSpec() has not been called in either the current thread\n or in an ancestor, then this method will return None by default.\n \"\"\"\n return _thread_local.benchmark_spec\n", "path": "perfkitbenchmarker/context.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Module for working with the current thread context.\"\"\"\n\nimport threading\n\nimport gflags as flags\n\n\nclass FlagsModuleProxy(object):\n \"\"\"Class which acts as a proxy for the flags module.\n\n When the FLAGS attribute is accessed, BENCHMARK_FLAGS will be returned\n rather than the global FlagValues object. BENCHMARK_FLAGS is an instance\n of FlagValuesProxy, which enables benchmarks to run with different and\n even conflicting flags. Accessing the GLOBAL_FLAGS attribute will return\n the global FlagValues object. Otherwise, this will behave just like the\n flags module.\n \"\"\"\n\n def __getattr__(self, name):\n if name == 'FLAGS':\n return BENCHMARK_FLAGS\n elif name == 'GLOBAL_FLAGS':\n return flags.FLAGS\n return flags.__dict__[name]\n\n\nclass FlagValuesProxy(object):\n \"\"\"Class which provides the same interface as FlagValues.\n\n By acting as a proxy for the FlagValues object (i.e. flags.FLAGS),\n this enables benchmark specific flags. 
This proxy attempts to\n use the current thread's BenchmarkSpec's FlagValues object, but\n falls back to using flags.FLAGS if the thread has no BenchmarkSpec\n object.\n \"\"\"\n\n @property\n def _thread_flag_values(self):\n \"\"\"Returns the correct FlagValues object for the current thread.\n\n This first tries to get the BenchmarkSpec object corresponding to the\n current thread. If there is one, it returns that spec's FlagValues\n object. If there isn't one, it will return the global FlagValues\n object.\n \"\"\"\n benchmark_spec = GetThreadBenchmarkSpec()\n if benchmark_spec:\n return benchmark_spec.FLAGS\n else:\n return flags.FLAGS\n\n\ndef _AddProxyMethod(f_name):\n \"\"\"Adds a method to FlagValuesProxy that forwards to _thread_flag_values.\"\"\"\n def f(self, *args, **kwargs):\n return getattr(self._thread_flag_values, f_name)(*args, **kwargs)\n f.__name__ = f_name\n f.__doc__ = 'Proxied ' + f_name\n setattr(FlagValuesProxy, f_name, f)\n\n\n# TODO: introduce a more generic proxy.\nfor _f_name in ['FlagDict', 'Reset', 'SetDefault', 'RegisteredFlags',\n 'FlagValuesDict', '__contains__', '__iter__', '__call__',\n '__setattr__', '__getattr__', '__setitem__', '__getitem__',\n '__str__']:\n _AddProxyMethod(_f_name)\ndel _f_name\ndel _AddProxyMethod\n\n\nBENCHMARK_FLAGS = FlagValuesProxy()\n\n\nclass _ThreadData(threading.local):\n def __init__(self):\n self.benchmark_spec = None\n\n\n_thread_local = _ThreadData()\n\n\ndef SetThreadBenchmarkSpec(benchmark_spec):\n \"\"\"Sets the current thread's BenchmarkSpec object.\"\"\"\n _thread_local.benchmark_spec = benchmark_spec\n\n\ndef GetThreadBenchmarkSpec():\n \"\"\"Gets the current thread's BenchmarkSpec object.\n\n If SetThreadBenchmarkSpec() has not been called in either the current thread\n or in an ancestor, then this method will return None by default.\n \"\"\"\n return _thread_local.benchmark_spec\n", "path": "perfkitbenchmarker/context.py"}]} | 1,394 | 408 |
gh_patches_debug_17410 | rasdani/github-patches | git_diff | mindee__doctr-548 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
set resolve_lines and resolve_blocks to True
## 🐛 Bug
As discussed in #512, `sort_boxes` needs an improvement so that `resolve_lines` and `resolve_blocks` in `builder.py` can default to True.
As it stands, the `.render()` and `.export_as_xml()` results are not correct.
## To Reproduce
Steps to reproduce the behavior:
1. Test with any document image that has multiple text blocks/lines, or where the document is slightly crooked in the picture (for example, photographed by a mobile phone).
## Expected behavior
Boxes sorted left-to-right / top-to-bottom so that lines and blocks can be resolved correctly.
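In the meantime, a minimal workaround sketch based only on the `DocumentBuilder` constructor shown in `builder.py` below (illustrative, not an official fix — wiring this builder into a predictor may require additional steps):

```python
from doctr.models.builder import DocumentBuilder

# Opt in explicitly until the library defaults change.
builder = DocumentBuilder(resolve_lines=True, resolve_blocks=True)
```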
## Environment
- DocTR version: 0.4.1a0
- TensorFlow version: N/A
- PyTorch version: 1.9.1 (torchvision 0.10.1)
- OpenCV version: 4.4.0
- OS: Ubuntu 20.04.3 LTS
- Python version: 3.8
- Is CUDA available (TensorFlow): N/A
- Is CUDA available (PyTorch): No
- CUDA runtime version: Could not collect
- GPU models and configuration: Could not collect
- Nvidia driver version: Could not collect
- cuDNN version: Could not collect
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doctr/models/builder.py`
Content:
```
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6
7 import numpy as np
8 from scipy.cluster.hierarchy import fclusterdata
9 from typing import List, Tuple, Dict
10
11 from doctr.io.elements import Word, Line, Block, Page, Document
12 from doctr.utils.repr import NestedObject
13 from doctr.utils.geometry import resolve_enclosing_bbox, resolve_enclosing_rbbox
14
15 __all__ = ['DocumentBuilder']
16
17
18 class DocumentBuilder(NestedObject):
19 """Implements a document builder
20
21 Args:
22 resolve_lines: whether words should be automatically grouped into lines
23 resolve_blocks: whether lines should be automatically grouped into blocks
24 paragraph_break: relative length of the minimum space separating paragraphs
25 """
26
27 def __init__(
28 self,
29 resolve_lines: bool = False,
30 resolve_blocks: bool = False,
31 paragraph_break: float = 0.035,
32 rotated_bbox: bool = False
33 ) -> None:
34
35 self.resolve_lines = resolve_lines
36 self.resolve_blocks = resolve_blocks
37 self.paragraph_break = paragraph_break
38 self.rotated_bbox = rotated_bbox
39
40 def _sort_boxes(self, boxes: np.ndarray) -> np.ndarray:
41 """Sort bounding boxes from top to bottom, left to right
42
43 Args:
44 boxes: bounding boxes of shape (N, 4) or (N, 5) (in case of rotated bbox)
45
46 Returns:
47 indices of ordered boxes of shape (N,)
48 """
49 if self.rotated_bbox:
50 return (boxes[:, 0] + 2 * boxes[:, 1] / np.median(boxes[:, 3])).argsort()
51 return (boxes[:, 0] + 2 * boxes[:, 3] / np.median(boxes[:, 3] - boxes[:, 1])).argsort()
52
53 def _resolve_sub_lines(self, boxes: np.ndarray, words: List[int]) -> List[List[int]]:
54 """Split a line in sub_lines
55
56 Args:
57 boxes: bounding boxes of shape (N, 4) or (N, 5) in case of rotated bbox
58 words: list of indexes for the words of the line
59
60 Returns:
61 A list of (sub-)lines computed from the original line (words)
62 """
63 lines = []
64 # Sort words horizontally
65 words = [words[j] for j in np.argsort([boxes[i, 0] for i in words]).tolist()]
66 # Eventually split line horizontally
67 if len(words) < 2:
68 lines.append(words)
69 else:
70 sub_line = [words[0]]
71 for i in words[1:]:
72 horiz_break = True
73
74 prev_box = boxes[sub_line[-1]]
75 # Compute distance between boxes
76 if self.rotated_bbox:
77 dist = boxes[i, 0] - prev_box[2] / 2 - (prev_box[0] + prev_box[2] / 2)
78 else:
79 dist = boxes[i, 0] - prev_box[2]
80 # If distance between boxes is lower than paragraph break, same sub-line
81 if dist < self.paragraph_break:
82 horiz_break = False
83
84 if horiz_break:
85 lines.append(sub_line)
86 sub_line = []
87
88 sub_line.append(i)
89 lines.append(sub_line)
90
91 return lines
92
93 def _resolve_lines(self, boxes: np.ndarray) -> List[List[int]]:
94 """Order boxes to group them in lines
95
96 Args:
97 boxes: bounding boxes of shape (N, 4) or (N, 5) in case of rotated bbox
98
99 Returns:
100 nested list of box indices
101 """
102 # Compute median for boxes heights
103 y_med = np.median(boxes[:, 3] if self.rotated_bbox else boxes[:, 3] - boxes[:, 1])
104
105 # Sort boxes
106 idxs = (boxes[:, 0] + 2 * boxes[:, 1 if self.rotated_bbox else 3] / y_med).argsort()
107
108 lines = []
109 words = [idxs[0]] # Assign the top-left word to the first line
110 # Define a mean y-center for the line
111 if self.rotated_bbox:
112 y_center_sum = boxes[idxs[0]][1]
113 else:
114 y_center_sum = boxes[idxs[0]][[1, 3]].mean()
115
116 for idx in idxs[1:]:
117 vert_break = True
118
119 # Compute y_dist
120 if self.rotated_bbox:
121 y_dist = abs(boxes[idx][1] - y_center_sum / len(words))
122 else:
123 y_dist = abs(boxes[idx][[1, 3]].mean() - y_center_sum / len(words))
124 # If y-center of the box is close enough to mean y-center of the line, same line
125 if y_dist < y_med / 2:
126 vert_break = False
127
128 if vert_break:
129 # Compute sub-lines (horizontal split)
130 lines.extend(self._resolve_sub_lines(boxes, words))
131 words = []
132 y_center_sum = 0
133
134 words.append(idx)
135 y_center_sum += boxes[idx][1 if self.rotated_bbox else [1, 3]].mean()
136
137 # Use the remaining words to form the last(s) line(s)
138 if len(words) > 0:
139 # Compute sub-lines (horizontal split)
140 lines.extend(self._resolve_sub_lines(boxes, words))
141
142 return lines
143
144 def _resolve_blocks(self, boxes: np.ndarray, lines: List[List[int]]) -> List[List[List[int]]]:
145 """Order lines to group them in blocks
146
147 Args:
148 boxes: bounding boxes of shape (N, 4) or (N, 5)
149 lines: list of lines, each line is a list of idx
150
151 Returns:
152 nested list of box indices
153 """
154 # Resolve enclosing boxes of lines
155 if self.rotated_bbox:
156 box_lines = np.asarray([
157 resolve_enclosing_rbbox([tuple(boxes[idx, :5]) for idx in line]) for line in lines # type: ignore[misc]
158 ])
159 else:
160 _box_lines = [
161 resolve_enclosing_bbox([
162 (tuple(boxes[idx, :2]), tuple(boxes[idx, 2:])) for idx in line # type: ignore[misc]
163 ])
164 for line in lines
165 ]
166 box_lines = np.asarray([(x1, y1, x2, y2) for ((x1, y1), (x2, y2)) in _box_lines])
167
168 # Compute geometrical features of lines to clusterize
169 # Clusterizing only with box centers yield to poor results for complex documents
170 box_features = np.stack(
171 (
172 (box_lines[:, 0] + box_lines[:, 3]) / 2,
173 (box_lines[:, 1] + box_lines[:, 2]) / 2,
174 (box_lines[:, 0] + box_lines[:, 2]) / 2,
175 (box_lines[:, 1] + box_lines[:, 3]) / 2,
176 box_lines[:, 0],
177 box_lines[:, 1],
178 ), axis=-1
179 )
180 # Compute clusters
181 clusters = fclusterdata(box_features, t=0.1, depth=4, criterion='distance', metric='euclidean')
182
183 _blocks: Dict[int, List[int]] = {}
184 # Form clusters
185 for line_idx, cluster_idx in enumerate(clusters):
186 if cluster_idx in _blocks.keys():
187 _blocks[cluster_idx].append(line_idx)
188 else:
189 _blocks[cluster_idx] = [line_idx]
190
191 # Retrieve word-box level to return a fully nested structure
192 blocks = [[lines[idx] for idx in block] for block in _blocks.values()]
193
194 return blocks
195
196 def _build_blocks(self, boxes: np.ndarray, word_preds: List[Tuple[str, float]]) -> List[Block]:
197 """Gather independent words in structured blocks
198
199 Args:
200 boxes: bounding boxes of all detected words of the page, of shape (N, 5) or (N, 6)
201 word_preds: list of all detected words of the page, of shape N
202
203 Returns:
204 list of block elements
205 """
206
207 if boxes.shape[0] != len(word_preds):
208 raise ValueError(f"Incompatible argument lengths: {boxes.shape[0]}, {len(word_preds)}")
209
210 if boxes.shape[0] == 0:
211 return []
212
213 # Decide whether we try to form lines
214 if self.resolve_lines:
215 lines = self._resolve_lines(boxes[:, :-1])
216 # Decide whether we try to form blocks
217 if self.resolve_blocks:
218 _blocks = self._resolve_blocks(boxes[:, :-1], lines)
219 else:
220 _blocks = [lines]
221 else:
222 # Sort bounding boxes, one line for all boxes, one block for the line
223 lines = [self._sort_boxes(boxes[:, :-1])]
224 _blocks = [lines]
225
226 blocks = [
227 Block(
228 [Line(
229 [
230 Word(
231 *word_preds[idx],
232 (boxes[idx, 0], boxes[idx, 1], boxes[idx, 2], boxes[idx, 3], boxes[idx, 4])
233 ) if self.rotated_bbox else
234 Word(
235 *word_preds[idx],
236 ((boxes[idx, 0], boxes[idx, 1]), (boxes[idx, 2], boxes[idx, 3]))
237 ) for idx in line
238 ]
239 ) for line in lines]
240 ) for lines in _blocks
241 ]
242
243 return blocks
244
245 def extra_repr(self) -> str:
246 return (f"resolve_lines={self.resolve_lines}, resolve_blocks={self.resolve_blocks}, "
247 f"paragraph_break={self.paragraph_break}")
248
249 def __call__(
250 self,
251 boxes: List[np.ndarray],
252 text_preds: List[List[Tuple[str, float]]],
253 page_shapes: List[Tuple[int, int]]
254 ) -> Document:
255 """Re-arrange detected words into structured blocks
256
257 Args:
258 boxes: list of N elements, where each element represents the localization predictions, of shape (*, 5)
259 or (*, 6) for all words for a given page
260 text_preds: list of N elements, where each element is the list of all word prediction (text + confidence)
261 page_shape: shape of each page, of size N
262
263 Returns:
264 document object
265 """
266
267 if len(boxes) != len(text_preds) or len(boxes) != len(page_shapes):
268 raise ValueError("All arguments are expected to be lists of the same size")
269
270 _pages = [
271 Page(
272 self._build_blocks(
273 page_boxes,
274 word_preds,
275 ),
276 _idx,
277 shape,
278 )
279 for _idx, shape, page_boxes, word_preds in zip(range(len(boxes)), page_shapes, boxes, text_preds)
280 ]
281
282 return Document(_pages)
283
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doctr/models/builder.py b/doctr/models/builder.py
--- a/doctr/models/builder.py
+++ b/doctr/models/builder.py
@@ -26,8 +26,8 @@
def __init__(
self,
- resolve_lines: bool = False,
- resolve_blocks: bool = False,
+ resolve_lines: bool = True,
+ resolve_blocks: bool = True,
paragraph_break: float = 0.035,
rotated_bbox: bool = False
) -> None:
@@ -214,7 +214,7 @@
if self.resolve_lines:
lines = self._resolve_lines(boxes[:, :-1])
# Decide whether we try to form blocks
- if self.resolve_blocks:
+ if self.resolve_blocks and len(lines) > 1:
_blocks = self._resolve_blocks(boxes[:, :-1], lines)
else:
_blocks = [lines]
| {"golden_diff": "diff --git a/doctr/models/builder.py b/doctr/models/builder.py\n--- a/doctr/models/builder.py\n+++ b/doctr/models/builder.py\n@@ -26,8 +26,8 @@\n \n def __init__(\n self,\n- resolve_lines: bool = False,\n- resolve_blocks: bool = False,\n+ resolve_lines: bool = True,\n+ resolve_blocks: bool = True,\n paragraph_break: float = 0.035,\n rotated_bbox: bool = False\n ) -> None:\n@@ -214,7 +214,7 @@\n if self.resolve_lines:\n lines = self._resolve_lines(boxes[:, :-1])\n # Decide whether we try to form blocks\n- if self.resolve_blocks:\n+ if self.resolve_blocks and len(lines) > 1:\n _blocks = self._resolve_blocks(boxes[:, :-1], lines)\n else:\n _blocks = [lines]\n", "issue": "set resolve_lines and resolve_blocks to True\n## \ud83d\udc1b Bug\r\n\r\nAs discussed in #512 it needs an improvement for the `sort_boxes ` to set the `resolve_lines` and `resolve_blocks` in `builder.py `to True as default\r\n\r\nIn fact of this the `.render()` and `.export_as_xml()` results are not correct \r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. test with any document image which has multible text blocks/lines or where the document is slightly crooked in the picture, for example photographed by a mobile phone\r\n\r\n## Expected behavior\r\n\r\nCorrect sorted boxes left-right / top-bottom that the lines and blocks can be resolved correctly\r\n\r\n## Environment\r\n\r\n-DocTR version: 0.4.1a0\r\n-TensorFlow version: N/A\r\n-PyTorch version: 1.9.1 (torchvision 0.10.1)\r\n-OpenCV version: 4.4.0\r\n-OS: Ubuntu 20.04.3 LTS\r\n-Python version: 3.8\r\n-Is CUDA available (TensorFlow): N/A\r\n-Is CUDA available (PyTorch): No\r\n-CUDA runtime version: Could not collect\r\n-GPU models and configuration: Could not collect\r\n-Nvidia driver version: Could not collect\r\n-cuDNN version: Could not collect\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\nimport numpy as np\nfrom scipy.cluster.hierarchy import fclusterdata\nfrom typing import List, Tuple, Dict\n\nfrom doctr.io.elements import Word, Line, Block, Page, Document\nfrom doctr.utils.repr import NestedObject\nfrom doctr.utils.geometry import resolve_enclosing_bbox, resolve_enclosing_rbbox\n\n__all__ = ['DocumentBuilder']\n\n\nclass DocumentBuilder(NestedObject):\n \"\"\"Implements a document builder\n\n Args:\n resolve_lines: whether words should be automatically grouped into lines\n resolve_blocks: whether lines should be automatically grouped into blocks\n paragraph_break: relative length of the minimum space separating paragraphs\n \"\"\"\n\n def __init__(\n self,\n resolve_lines: bool = False,\n resolve_blocks: bool = False,\n paragraph_break: float = 0.035,\n rotated_bbox: bool = False\n ) -> None:\n\n self.resolve_lines = resolve_lines\n self.resolve_blocks = resolve_blocks\n self.paragraph_break = paragraph_break\n self.rotated_bbox = rotated_bbox\n\n def _sort_boxes(self, boxes: np.ndarray) -> np.ndarray:\n \"\"\"Sort bounding boxes from top to bottom, left to right\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5) (in case of rotated bbox)\n\n Returns:\n indices of ordered boxes of shape (N,)\n \"\"\"\n if self.rotated_bbox:\n return (boxes[:, 0] + 2 * boxes[:, 1] / np.median(boxes[:, 3])).argsort()\n return (boxes[:, 0] + 2 * boxes[:, 3] / np.median(boxes[:, 3] - boxes[:, 1])).argsort()\n\n def _resolve_sub_lines(self, boxes: 
np.ndarray, words: List[int]) -> List[List[int]]:\n \"\"\"Split a line in sub_lines\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5) in case of rotated bbox\n words: list of indexes for the words of the line\n\n Returns:\n A list of (sub-)lines computed from the original line (words)\n \"\"\"\n lines = []\n # Sort words horizontally\n words = [words[j] for j in np.argsort([boxes[i, 0] for i in words]).tolist()]\n # Eventually split line horizontally\n if len(words) < 2:\n lines.append(words)\n else:\n sub_line = [words[0]]\n for i in words[1:]:\n horiz_break = True\n\n prev_box = boxes[sub_line[-1]]\n # Compute distance between boxes\n if self.rotated_bbox:\n dist = boxes[i, 0] - prev_box[2] / 2 - (prev_box[0] + prev_box[2] / 2)\n else:\n dist = boxes[i, 0] - prev_box[2]\n # If distance between boxes is lower than paragraph break, same sub-line\n if dist < self.paragraph_break:\n horiz_break = False\n\n if horiz_break:\n lines.append(sub_line)\n sub_line = []\n\n sub_line.append(i)\n lines.append(sub_line)\n\n return lines\n\n def _resolve_lines(self, boxes: np.ndarray) -> List[List[int]]:\n \"\"\"Order boxes to group them in lines\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5) in case of rotated bbox\n\n Returns:\n nested list of box indices\n \"\"\"\n # Compute median for boxes heights\n y_med = np.median(boxes[:, 3] if self.rotated_bbox else boxes[:, 3] - boxes[:, 1])\n\n # Sort boxes\n idxs = (boxes[:, 0] + 2 * boxes[:, 1 if self.rotated_bbox else 3] / y_med).argsort()\n\n lines = []\n words = [idxs[0]] # Assign the top-left word to the first line\n # Define a mean y-center for the line\n if self.rotated_bbox:\n y_center_sum = boxes[idxs[0]][1]\n else:\n y_center_sum = boxes[idxs[0]][[1, 3]].mean()\n\n for idx in idxs[1:]:\n vert_break = True\n\n # Compute y_dist\n if self.rotated_bbox:\n y_dist = abs(boxes[idx][1] - y_center_sum / len(words))\n else:\n y_dist = abs(boxes[idx][[1, 3]].mean() - y_center_sum / len(words))\n # If y-center of the box is close enough to mean y-center of the line, same line\n if y_dist < y_med / 2:\n vert_break = False\n\n if vert_break:\n # Compute sub-lines (horizontal split)\n lines.extend(self._resolve_sub_lines(boxes, words))\n words = []\n y_center_sum = 0\n\n words.append(idx)\n y_center_sum += boxes[idx][1 if self.rotated_bbox else [1, 3]].mean()\n\n # Use the remaining words to form the last(s) line(s)\n if len(words) > 0:\n # Compute sub-lines (horizontal split)\n lines.extend(self._resolve_sub_lines(boxes, words))\n\n return lines\n\n def _resolve_blocks(self, boxes: np.ndarray, lines: List[List[int]]) -> List[List[List[int]]]:\n \"\"\"Order lines to group them in blocks\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5)\n lines: list of lines, each line is a list of idx\n\n Returns:\n nested list of box indices\n \"\"\"\n # Resolve enclosing boxes of lines\n if self.rotated_bbox:\n box_lines = np.asarray([\n resolve_enclosing_rbbox([tuple(boxes[idx, :5]) for idx in line]) for line in lines # type: ignore[misc]\n ])\n else:\n _box_lines = [\n resolve_enclosing_bbox([\n (tuple(boxes[idx, :2]), tuple(boxes[idx, 2:])) for idx in line # type: ignore[misc]\n ])\n for line in lines\n ]\n box_lines = np.asarray([(x1, y1, x2, y2) for ((x1, y1), (x2, y2)) in _box_lines])\n\n # Compute geometrical features of lines to clusterize\n # Clusterizing only with box centers yield to poor results for complex documents\n box_features = np.stack(\n (\n (box_lines[:, 0] + box_lines[:, 3]) / 2,\n (box_lines[:, 1] + box_lines[:, 2]) / 
2,\n (box_lines[:, 0] + box_lines[:, 2]) / 2,\n (box_lines[:, 1] + box_lines[:, 3]) / 2,\n box_lines[:, 0],\n box_lines[:, 1],\n ), axis=-1\n )\n # Compute clusters\n clusters = fclusterdata(box_features, t=0.1, depth=4, criterion='distance', metric='euclidean')\n\n _blocks: Dict[int, List[int]] = {}\n # Form clusters\n for line_idx, cluster_idx in enumerate(clusters):\n if cluster_idx in _blocks.keys():\n _blocks[cluster_idx].append(line_idx)\n else:\n _blocks[cluster_idx] = [line_idx]\n\n # Retrieve word-box level to return a fully nested structure\n blocks = [[lines[idx] for idx in block] for block in _blocks.values()]\n\n return blocks\n\n def _build_blocks(self, boxes: np.ndarray, word_preds: List[Tuple[str, float]]) -> List[Block]:\n \"\"\"Gather independent words in structured blocks\n\n Args:\n boxes: bounding boxes of all detected words of the page, of shape (N, 5) or (N, 6)\n word_preds: list of all detected words of the page, of shape N\n\n Returns:\n list of block elements\n \"\"\"\n\n if boxes.shape[0] != len(word_preds):\n raise ValueError(f\"Incompatible argument lengths: {boxes.shape[0]}, {len(word_preds)}\")\n\n if boxes.shape[0] == 0:\n return []\n\n # Decide whether we try to form lines\n if self.resolve_lines:\n lines = self._resolve_lines(boxes[:, :-1])\n # Decide whether we try to form blocks\n if self.resolve_blocks:\n _blocks = self._resolve_blocks(boxes[:, :-1], lines)\n else:\n _blocks = [lines]\n else:\n # Sort bounding boxes, one line for all boxes, one block for the line\n lines = [self._sort_boxes(boxes[:, :-1])]\n _blocks = [lines]\n\n blocks = [\n Block(\n [Line(\n [\n Word(\n *word_preds[idx],\n (boxes[idx, 0], boxes[idx, 1], boxes[idx, 2], boxes[idx, 3], boxes[idx, 4])\n ) if self.rotated_bbox else\n Word(\n *word_preds[idx],\n ((boxes[idx, 0], boxes[idx, 1]), (boxes[idx, 2], boxes[idx, 3]))\n ) for idx in line\n ]\n ) for line in lines]\n ) for lines in _blocks\n ]\n\n return blocks\n\n def extra_repr(self) -> str:\n return (f\"resolve_lines={self.resolve_lines}, resolve_blocks={self.resolve_blocks}, \"\n f\"paragraph_break={self.paragraph_break}\")\n\n def __call__(\n self,\n boxes: List[np.ndarray],\n text_preds: List[List[Tuple[str, float]]],\n page_shapes: List[Tuple[int, int]]\n ) -> Document:\n \"\"\"Re-arrange detected words into structured blocks\n\n Args:\n boxes: list of N elements, where each element represents the localization predictions, of shape (*, 5)\n or (*, 6) for all words for a given page\n text_preds: list of N elements, where each element is the list of all word prediction (text + confidence)\n page_shape: shape of each page, of size N\n\n Returns:\n document object\n \"\"\"\n\n if len(boxes) != len(text_preds) or len(boxes) != len(page_shapes):\n raise ValueError(\"All arguments are expected to be lists of the same size\")\n\n _pages = [\n Page(\n self._build_blocks(\n page_boxes,\n word_preds,\n ),\n _idx,\n shape,\n )\n for _idx, shape, page_boxes, word_preds in zip(range(len(boxes)), page_shapes, boxes, text_preds)\n ]\n\n return Document(_pages)\n", "path": "doctr/models/builder.py"}], "after_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\nimport numpy as np\nfrom scipy.cluster.hierarchy import fclusterdata\nfrom typing import List, Tuple, Dict\n\nfrom doctr.io.elements import Word, Line, Block, Page, Document\nfrom doctr.utils.repr import 
NestedObject\nfrom doctr.utils.geometry import resolve_enclosing_bbox, resolve_enclosing_rbbox\n\n__all__ = ['DocumentBuilder']\n\n\nclass DocumentBuilder(NestedObject):\n \"\"\"Implements a document builder\n\n Args:\n resolve_lines: whether words should be automatically grouped into lines\n resolve_blocks: whether lines should be automatically grouped into blocks\n paragraph_break: relative length of the minimum space separating paragraphs\n \"\"\"\n\n def __init__(\n self,\n resolve_lines: bool = True,\n resolve_blocks: bool = True,\n paragraph_break: float = 0.035,\n rotated_bbox: bool = False\n ) -> None:\n\n self.resolve_lines = resolve_lines\n self.resolve_blocks = resolve_blocks\n self.paragraph_break = paragraph_break\n self.rotated_bbox = rotated_bbox\n\n def _sort_boxes(self, boxes: np.ndarray) -> np.ndarray:\n \"\"\"Sort bounding boxes from top to bottom, left to right\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5) (in case of rotated bbox)\n\n Returns:\n indices of ordered boxes of shape (N,)\n \"\"\"\n if self.rotated_bbox:\n return (boxes[:, 0] + 2 * boxes[:, 1] / np.median(boxes[:, 3])).argsort()\n return (boxes[:, 0] + 2 * boxes[:, 3] / np.median(boxes[:, 3] - boxes[:, 1])).argsort()\n\n def _resolve_sub_lines(self, boxes: np.ndarray, words: List[int]) -> List[List[int]]:\n \"\"\"Split a line in sub_lines\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5) in case of rotated bbox\n words: list of indexes for the words of the line\n\n Returns:\n A list of (sub-)lines computed from the original line (words)\n \"\"\"\n lines = []\n # Sort words horizontally\n words = [words[j] for j in np.argsort([boxes[i, 0] for i in words]).tolist()]\n # Eventually split line horizontally\n if len(words) < 2:\n lines.append(words)\n else:\n sub_line = [words[0]]\n for i in words[1:]:\n horiz_break = True\n\n prev_box = boxes[sub_line[-1]]\n # Compute distance between boxes\n if self.rotated_bbox:\n dist = boxes[i, 0] - prev_box[2] / 2 - (prev_box[0] + prev_box[2] / 2)\n else:\n dist = boxes[i, 0] - prev_box[2]\n # If distance between boxes is lower than paragraph break, same sub-line\n if dist < self.paragraph_break:\n horiz_break = False\n\n if horiz_break:\n lines.append(sub_line)\n sub_line = []\n\n sub_line.append(i)\n lines.append(sub_line)\n\n return lines\n\n def _resolve_lines(self, boxes: np.ndarray) -> List[List[int]]:\n \"\"\"Order boxes to group them in lines\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5) in case of rotated bbox\n\n Returns:\n nested list of box indices\n \"\"\"\n # Compute median for boxes heights\n y_med = np.median(boxes[:, 3] if self.rotated_bbox else boxes[:, 3] - boxes[:, 1])\n\n # Sort boxes\n idxs = (boxes[:, 0] + 2 * boxes[:, 1 if self.rotated_bbox else 3] / y_med).argsort()\n\n lines = []\n words = [idxs[0]] # Assign the top-left word to the first line\n # Define a mean y-center for the line\n if self.rotated_bbox:\n y_center_sum = boxes[idxs[0]][1]\n else:\n y_center_sum = boxes[idxs[0]][[1, 3]].mean()\n\n for idx in idxs[1:]:\n vert_break = True\n\n # Compute y_dist\n if self.rotated_bbox:\n y_dist = abs(boxes[idx][1] - y_center_sum / len(words))\n else:\n y_dist = abs(boxes[idx][[1, 3]].mean() - y_center_sum / len(words))\n # If y-center of the box is close enough to mean y-center of the line, same line\n if y_dist < y_med / 2:\n vert_break = False\n\n if vert_break:\n # Compute sub-lines (horizontal split)\n lines.extend(self._resolve_sub_lines(boxes, words))\n words = []\n y_center_sum = 0\n\n words.append(idx)\n 
y_center_sum += boxes[idx][1 if self.rotated_bbox else [1, 3]].mean()\n\n # Use the remaining words to form the last(s) line(s)\n if len(words) > 0:\n # Compute sub-lines (horizontal split)\n lines.extend(self._resolve_sub_lines(boxes, words))\n\n return lines\n\n def _resolve_blocks(self, boxes: np.ndarray, lines: List[List[int]]) -> List[List[List[int]]]:\n \"\"\"Order lines to group them in blocks\n\n Args:\n boxes: bounding boxes of shape (N, 4) or (N, 5)\n lines: list of lines, each line is a list of idx\n\n Returns:\n nested list of box indices\n \"\"\"\n # Resolve enclosing boxes of lines\n if self.rotated_bbox:\n box_lines = np.asarray([\n resolve_enclosing_rbbox([tuple(boxes[idx, :5]) for idx in line]) for line in lines # type: ignore[misc]\n ])\n else:\n _box_lines = [\n resolve_enclosing_bbox([\n (tuple(boxes[idx, :2]), tuple(boxes[idx, 2:])) for idx in line # type: ignore[misc]\n ])\n for line in lines\n ]\n box_lines = np.asarray([(x1, y1, x2, y2) for ((x1, y1), (x2, y2)) in _box_lines])\n\n # Compute geometrical features of lines to clusterize\n # Clusterizing only with box centers yield to poor results for complex documents\n box_features = np.stack(\n (\n (box_lines[:, 0] + box_lines[:, 3]) / 2,\n (box_lines[:, 1] + box_lines[:, 2]) / 2,\n (box_lines[:, 0] + box_lines[:, 2]) / 2,\n (box_lines[:, 1] + box_lines[:, 3]) / 2,\n box_lines[:, 0],\n box_lines[:, 1],\n ), axis=-1\n )\n # Compute clusters\n clusters = fclusterdata(box_features, t=0.1, depth=4, criterion='distance', metric='euclidean')\n\n _blocks: Dict[int, List[int]] = {}\n # Form clusters\n for line_idx, cluster_idx in enumerate(clusters):\n if cluster_idx in _blocks.keys():\n _blocks[cluster_idx].append(line_idx)\n else:\n _blocks[cluster_idx] = [line_idx]\n\n # Retrieve word-box level to return a fully nested structure\n blocks = [[lines[idx] for idx in block] for block in _blocks.values()]\n\n return blocks\n\n def _build_blocks(self, boxes: np.ndarray, word_preds: List[Tuple[str, float]]) -> List[Block]:\n \"\"\"Gather independent words in structured blocks\n\n Args:\n boxes: bounding boxes of all detected words of the page, of shape (N, 5) or (N, 6)\n word_preds: list of all detected words of the page, of shape N\n\n Returns:\n list of block elements\n \"\"\"\n\n if boxes.shape[0] != len(word_preds):\n raise ValueError(f\"Incompatible argument lengths: {boxes.shape[0]}, {len(word_preds)}\")\n\n if boxes.shape[0] == 0:\n return []\n\n # Decide whether we try to form lines\n if self.resolve_lines:\n lines = self._resolve_lines(boxes[:, :-1])\n # Decide whether we try to form blocks\n if self.resolve_blocks and len(lines) > 1:\n _blocks = self._resolve_blocks(boxes[:, :-1], lines)\n else:\n _blocks = [lines]\n else:\n # Sort bounding boxes, one line for all boxes, one block for the line\n lines = [self._sort_boxes(boxes[:, :-1])]\n _blocks = [lines]\n\n blocks = [\n Block(\n [Line(\n [\n Word(\n *word_preds[idx],\n (boxes[idx, 0], boxes[idx, 1], boxes[idx, 2], boxes[idx, 3], boxes[idx, 4])\n ) if self.rotated_bbox else\n Word(\n *word_preds[idx],\n ((boxes[idx, 0], boxes[idx, 1]), (boxes[idx, 2], boxes[idx, 3]))\n ) for idx in line\n ]\n ) for line in lines]\n ) for lines in _blocks\n ]\n\n return blocks\n\n def extra_repr(self) -> str:\n return (f\"resolve_lines={self.resolve_lines}, resolve_blocks={self.resolve_blocks}, \"\n f\"paragraph_break={self.paragraph_break}\")\n\n def __call__(\n self,\n boxes: List[np.ndarray],\n text_preds: List[List[Tuple[str, float]]],\n page_shapes: List[Tuple[int, int]]\n ) -> 
Document:\n \"\"\"Re-arrange detected words into structured blocks\n\n Args:\n boxes: list of N elements, where each element represents the localization predictions, of shape (*, 5)\n or (*, 6) for all words for a given page\n text_preds: list of N elements, where each element is the list of all word prediction (text + confidence)\n page_shape: shape of each page, of size N\n\n Returns:\n document object\n \"\"\"\n\n if len(boxes) != len(text_preds) or len(boxes) != len(page_shapes):\n raise ValueError(\"All arguments are expected to be lists of the same size\")\n\n _pages = [\n Page(\n self._build_blocks(\n page_boxes,\n word_preds,\n ),\n _idx,\n shape,\n )\n for _idx, shape, page_boxes, word_preds in zip(range(len(boxes)), page_shapes, boxes, text_preds)\n ]\n\n return Document(_pages)\n", "path": "doctr/models/builder.py"}]} | 3,701 | 209 |
gh_patches_debug_5764 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1353 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix pytest warning, recheck dependency versions.
* Cookiecutter version: master
```
py27 run-test: commands[1] | /home/insspb/git/cookiecutter/.tox/py27/bin/python /snap/pycharm-professional/196/plugins/python/helpers/pycharm/_jb_pytest_runner.py --offset 10001 -- --cov=cookiecutter tests
/home/insspb/git/cookiecutter/.tox/py27/lib/python2.7/site-packages/_pytest/config/__init__.py:316: PytestConfigWarning: pytest-catchlog plugin has been merged into the core, please remove it from your requirements.
name.replace("_", "-")
```
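As a side note on the "recheck dependency versions" part, a small sketch for listing the locally installed versions (my own illustration, not part of the original report; requires Python 3.8+ for `importlib.metadata`):

```python
import importlib.metadata as metadata  # Python 3.8+

# Package names taken from the requirements list in setup.py below.
for pkg in ("binaryornot", "Jinja2", "click", "poyo", "jinja2-time",
            "python-slugify", "requests", "six"):
    try:
        print(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg} is not installed")
```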
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """cookiecutter distutils configuration."""
5
6 import os
7 import io
8 import sys
9
10 from setuptools import setup
11
12 version = "1.7.0"
13
14 if sys.argv[-1] == 'publish':
15 os.system('python setup.py sdist upload')
16 os.system('python setup.py bdist_wheel upload')
17 sys.exit()
18
19 if sys.argv[-1] == 'tag':
20 os.system("git tag -a %s -m 'version %s'" % (version, version))
21 os.system("git push --tags")
22 sys.exit()
23
24 with io.open('README.md', 'r', encoding='utf-8') as readme_file:
25 readme = readme_file.read()
26
27 requirements = [
28 'binaryornot>=0.2.0',
29 'jinja2>=2.7',
30 'click>=7.0',
31 'poyo>=0.1.0',
32 'jinja2-time>=0.1.0',
33 'python-slugify>=4.0.0',
34 'requests>=2.18.0',
35 'six>=1.10',
36 ]
37
38 if sys.argv[-1] == 'readme':
39 print(readme)
40 sys.exit()
41
42
43 setup(
44 name='cookiecutter',
45 version=version,
46 description=('A command-line utility that creates projects from project '
47 'templates, e.g. creating a Python package project from a '
48 'Python package project template.'),
49 long_description=readme,
50 long_description_content_type='text/markdown',
51 author='Audrey Roy',
52 author_email='[email protected]',
53 url='https://github.com/cookiecutter/cookiecutter',
54 packages=[
55 'cookiecutter',
56 ],
57 package_dir={'cookiecutter': 'cookiecutter'},
58 entry_points={
59 'console_scripts': [
60 'cookiecutter = cookiecutter.__main__:main',
61 ]
62 },
63 include_package_data=True,
64 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
65 install_requires=requirements,
66 extras_require={
67 ':python_version<"3.3"': ['whichcraft>=0.4.0'],
68 },
69 license='BSD',
70 zip_safe=False,
71 classifiers=[
72 "Development Status :: 5 - Production/Stable",
73 "Environment :: Console",
74 "Intended Audience :: Developers",
75 "Natural Language :: English",
76 "License :: OSI Approved :: BSD License",
77 "Programming Language :: Python",
78 "Programming Language :: Python :: 2",
79 "Programming Language :: Python :: 2.7",
80 "Programming Language :: Python :: 3",
81 "Programming Language :: Python :: 3.5",
82 "Programming Language :: Python :: 3.6",
83 "Programming Language :: Python :: 3.7",
84 "Programming Language :: Python :: 3.8",
85 "Programming Language :: Python :: Implementation :: CPython",
86 "Programming Language :: Python :: Implementation :: PyPy",
87 "Topic :: Software Development",
88 ],
89 keywords=(
90 'cookiecutter, Python, projects, project templates, Jinja2, '
91 'skeleton, scaffolding, project directory, setup.py, package, '
92 'packaging'
93 ),
94 )
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,14 +25,15 @@
readme = readme_file.read()
requirements = [
- 'binaryornot>=0.2.0',
- 'jinja2>=2.7',
- 'click>=7.0',
- 'poyo>=0.1.0',
- 'jinja2-time>=0.1.0',
+ 'binaryornot>=0.4.4',
+ 'Jinja2<=2.11.0',
+ 'click>=7.1.1',
+ 'poyo>=0.5.0',
+ 'jinja2-time>=0.2.0',
'python-slugify>=4.0.0',
- 'requests>=2.18.0',
- 'six>=1.10',
+ 'requests>=2.23.0',
+ 'six>=1.14',
+ 'MarkupSafe<2.0.0'
]
if sys.argv[-1] == 'readme':
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,14 +25,15 @@\n readme = readme_file.read()\n \n requirements = [\n- 'binaryornot>=0.2.0',\n- 'jinja2>=2.7',\n- 'click>=7.0',\n- 'poyo>=0.1.0',\n- 'jinja2-time>=0.1.0',\n+ 'binaryornot>=0.4.4',\n+ 'Jinja2<=2.11.0',\n+ 'click>=7.1.1',\n+ 'poyo>=0.5.0',\n+ 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n- 'requests>=2.18.0',\n- 'six>=1.10',\n+ 'requests>=2.23.0',\n+ 'six>=1.14',\n+ 'MarkupSafe<2.0.0'\n ]\n \n if sys.argv[-1] == 'readme':\n", "issue": "Fix pitest warning, recheck dependencies versions.\n* Cookiecutter version:master\r\n\r\n```\r\npy27 run-test: commands[1] | /home/insspb/git/cookiecutter/.tox/py27/bin/python /snap/pycharm-professional/196/plugins/python/helpers/pycharm/_jb_pytest_runner.py --offset 10001 -- --cov=cookiecutter tests\r\n/home/insspb/git/cookiecutter/.tox/py27/lib/python2.7/site-packages/_pytest/config/__init__.py:316: PytestConfigWarning: pytest-catchlog plugin has been merged into the core, please remove it from your requirements.\r\n name.replace(\"_\", \"-\")\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"cookiecutter distutils configuration.\"\"\"\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.7.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.md', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.2.0',\n 'jinja2>=2.7',\n 'click>=7.0',\n 'poyo>=0.1.0',\n 'jinja2-time>=0.1.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.18.0',\n 'six>=1.10',\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n extras_require={\n ':python_version<\"3.3\"': ['whichcraft>=0.4.0'],\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"cookiecutter distutils configuration.\"\"\"\n\nimport os\nimport io\nimport sys\n\nfrom setuptools import setup\n\nversion = \"1.7.0\"\n\nif sys.argv[-1] == 'publish':\n os.system('python setup.py sdist upload')\n os.system('python setup.py bdist_wheel upload')\n sys.exit()\n\nif sys.argv[-1] == 'tag':\n os.system(\"git tag -a %s -m 'version %s'\" % (version, version))\n os.system(\"git push --tags\")\n sys.exit()\n\nwith io.open('README.md', 'r', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2<=2.11.0',\n 'click>=7.1.1',\n 'poyo>=0.5.0',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n 'six>=1.14',\n 'MarkupSafe<2.0.0'\n]\n\nif sys.argv[-1] == 'readme':\n print(readme)\n sys.exit()\n\n\nsetup(\n name='cookiecutter',\n version=version,\n description=('A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Roy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=[\n 'cookiecutter',\n ],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={\n 'console_scripts': [\n 'cookiecutter = cookiecutter.__main__:main',\n ]\n },\n include_package_data=True,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n install_requires=requirements,\n extras_require={\n ':python_version<\"3.3\"': ['whichcraft>=0.4.0'],\n },\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n 'cookiecutter, Python, projects, project templates, Jinja2, '\n 'skeleton, scaffolding, project directory, setup.py, package, '\n 'packaging'\n ),\n)\n", "path": "setup.py"}]} | 1,314 | 250 |
gh_patches_debug_21886 | rasdani/github-patches | git_diff | voxel51__fiftyone-1878 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Changing session.view resets field visibility choices
On `fiftyone==0.16.2`, updating `session.view` resets any field visibility toggles I may have set (e.g., unselecting all label fields), and forces the defaults (all label fields visible). I don't think this used to be the case though?
This came up when I was trying to work with an interactive plot. I just wanted to see images with no labels, but every time I made a selection in the linked plot, the labels kept re-appearing, which was annoying. I don't recall facing this issue before.
Of course persisting sidebar settings is a bit tricky because views can change the label schema.
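For anyone trying to reproduce this quickly, a minimal sketch (illustrative only — the zoo dataset and the `take(10)` view are arbitrary choices, not from the original report):

```python
import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")
session = fo.launch_app(dataset)

# In the App sidebar, deselect all label fields, then change the view:
session.view = dataset.take(10)
# The sidebar selections snap back to the defaults (all label fields visible).
```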
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fiftyone/server/query.py`
Content:
```
1 """
2 FiftyOne Server queries
3
4 | Copyright 2017-2022, Voxel51, Inc.
5 | `voxel51.com <https://voxel51.com/>`_
6 |
7 """
8 import typing as t
9 from dataclasses import asdict
10 from datetime import date, datetime
11 from enum import Enum
12 import os
13
14 import eta.core.serial as etas
15 import eta.core.utils as etau
16 import strawberry as gql
17 from bson import ObjectId
18 from dacite import Config, from_dict
19
20
21 import fiftyone as fo
22 import fiftyone.constants as foc
23 import fiftyone.core.context as focx
24 import fiftyone.core.dataset as fod
25 import fiftyone.core.uid as fou
26 import fiftyone.core.view as fov
27
28 from fiftyone.server.data import Info
29 from fiftyone.server.dataloader import get_dataloader_resolver
30 from fiftyone.server.mixins import HasCollection
31 from fiftyone.server.paginator import Connection, get_paginator_resolver
32 from fiftyone.server.scalars import JSONArray
33
34 ID = gql.scalar(
35 t.NewType("ID", str),
36 serialize=lambda v: str(v),
37 parse_value=lambda v: ObjectId(v),
38 )
39 DATASET_FILTER = [{"sample_collection_name": {"$regex": "^samples\\."}}]
40 DATASET_FILTER_STAGE = [{"$match": DATASET_FILTER[0]}]
41
42
43 @gql.enum
44 class MediaType(Enum):
45 image = "image"
46 video = "video"
47
48
49 @gql.type
50 class Target:
51 target: int
52 value: str
53
54
55 @gql.type
56 class NamedTargets:
57 name: str
58 targets: t.List[Target]
59
60
61 @gql.type
62 class SampleField:
63 ftype: str
64 path: str
65 subfield: t.Optional[str]
66 embedded_doc_type: t.Optional[str]
67 db_field: t.Optional[str]
68
69
70 @gql.interface
71 class RunConfig:
72 cls: str
73
74
75 @gql.interface
76 class Run:
77 key: str
78 version: str
79 timestamp: datetime
80 config: RunConfig
81 view_stages: t.List[str]
82
83
84 @gql.type
85 class BrainRunConfig(RunConfig):
86 embeddings_field: t.Optional[str]
87 method: str
88 patches_field: t.Optional[str]
89
90
91 @gql.type
92 class BrainRun(Run):
93 config: BrainRunConfig
94
95
96 @gql.type
97 class EvaluationRunConfig(RunConfig):
98 gt_field: str
99 pred_field: str
100 method: str
101
102
103 @gql.type
104 class EvaluationRun(Run):
105 config: EvaluationRunConfig
106
107
108 @gql.type
109 class SidebarGroup:
110 name: str
111 paths: t.List[str]
112
113
114 @gql.type
115 class KeypointSkeleton:
116 labels: t.Optional[t.List[str]]
117 edges: t.List[t.List[int]]
118
119
120 @gql.type
121 class NamedKeypointSkeleton(KeypointSkeleton):
122 name: str
123
124
125 @gql.type
126 class Dataset(HasCollection):
127 id: gql.ID
128 name: str
129 created_at: t.Optional[date]
130 last_loaded_at: t.Optional[datetime]
131 persistent: bool
132 media_type: t.Optional[MediaType]
133 mask_targets: t.List[NamedTargets]
134 default_mask_targets: t.Optional[t.List[Target]]
135 sample_fields: t.List[SampleField]
136 frame_fields: t.List[SampleField]
137 brain_methods: t.List[BrainRun]
138 evaluations: t.List[EvaluationRun]
139 app_sidebar_groups: t.Optional[t.List[SidebarGroup]]
140 version: t.Optional[str]
141 view_cls: t.Optional[str]
142 default_skeleton: t.Optional[KeypointSkeleton]
143 skeletons: t.List[NamedKeypointSkeleton]
144
145 @staticmethod
146 def get_collection_name() -> str:
147 return "datasets"
148
149 @staticmethod
150 def modifier(doc: dict) -> dict:
151
152 doc["id"] = doc.pop("_id")
153 doc["mask_targets"] = []
154 doc["default_mask_targets"] = []
155 doc["sample_fields"] = _flatten_fields([], doc["sample_fields"])
156 doc["frame_fields"] = _flatten_fields([], doc["frame_fields"])
157 doc["brain_methods"] = list(doc.get("brain_methods", {}).values())
158 doc["evaluations"] = list(doc.get("evaluations", {}).values())
159 doc["skeletons"] = list(
160 dict(name=name, **data)
161 for name, data in doc.get("skeletons", {}).items()
162 )
163 doc["default_skeletons"] = doc.get("default_skeletons", None)
164 return doc
165
166 @classmethod
167 async def resolver(
168 cls, name: str, view: t.Optional[JSONArray], info: Info
169 ) -> t.Optional["Dataset"]:
170 dataset = await dataset_dataloader(name, info)
171 if dataset is None:
172 return dataset
173
174 ds = fo.load_dataset(name)
175 view = fov.DatasetView._build(ds, view or [])
176 if view._dataset != ds:
177 d = view._dataset._serialize()
178 dataset.id = (
179 ObjectId()
180 ) # if it is not the root dataset, change the id (relay requires it)
181 dataset.media_type = d["media_type"]
182 dataset.sample_fields = [
183 from_dict(SampleField, s)
184 for s in _flatten_fields([], d["sample_fields"])
185 ]
186 dataset.frame_fields = [
187 from_dict(SampleField, s)
188 for s in _flatten_fields([], d["frame_fields"])
189 ]
190
191 dataset.view_cls = etau.get_class_name(view)
192
193 return dataset
194
195
196 dataset_dataloader = get_dataloader_resolver(Dataset, "name", DATASET_FILTER)
197
198
199 @gql.enum
200 class ColorBy(Enum):
201 field = "field"
202 instance = "instance"
203 label = "label"
204
205
206 @gql.type
207 class AppConfig:
208 color_by: ColorBy
209 color_pool: t.List[str]
210 colorscale: str
211 grid_zoom: int
212 loop_videos: bool
213 notebook_height: int
214 show_confidence: bool
215 show_index: bool
216 show_label: bool
217 show_skeletons: bool
218 show_tooltip: bool
219 timezone: t.Optional[str]
220 use_frame_number: bool
221
222
223 @gql.type
224 class Query:
225 @gql.field
226 def colorscale(self) -> t.Optional[t.List[t.List[int]]]:
227 if fo.app_config.colorscale:
228 return fo.app_config.get_colormap()
229
230 return None
231
232 @gql.field
233 def config(self) -> AppConfig:
234 d = fo.app_config.serialize()
235 d["timezone"] = fo.config.timezone
236 return from_dict(AppConfig, d, config=Config(check_types=False))
237
238 @gql.field
239 def context(self) -> str:
240 return focx._get_context()
241
242 @gql.field
243 def dev(self) -> bool:
244 return foc.DEV_INSTALL or foc.RC_INSTALL
245
246 @gql.field
247 def do_not_track(self) -> bool:
248 return fo.config.do_not_track
249
250 dataset = gql.field(resolver=Dataset.resolver)
251 datasets: Connection[Dataset] = gql.field(
252 resolver=get_paginator_resolver(
253 Dataset,
254 "created_at",
255 DATASET_FILTER_STAGE,
256 )
257 )
258
259 @gql.field
260 def teams_submission(self) -> bool:
261 isfile = os.path.isfile(foc.TEAMS_PATH)
262 if isfile:
263 submitted = etas.load_json(foc.TEAMS_PATH)["submitted"]
264 else:
265 submitted = False
266
267 return submitted
268
269 @gql.field
270 def uid(self) -> str:
271 uid, _ = fou.get_user_id()
272 return uid
273
274 @gql.field
275 def version(self) -> str:
276 return foc.VERSION
277
278
279 def serialize_dataset(dataset: fod.Dataset, view: fov.DatasetView) -> t.Dict:
280 doc = dataset._doc.to_dict()
281 Dataset.modifier(doc)
282 data = from_dict(Dataset, doc, config=Config(check_types=False))
283 data.view_cls = None
284
285 if view is not None and view._dataset != dataset:
286 d = view._dataset._serialize()
287 data.media_type = d["media_type"]
288 data.id = ObjectId()
289 data.sample_fields = [
290 from_dict(SampleField, s)
291 for s in _flatten_fields([], d["sample_fields"])
292 ]
293 data.frame_fields = [
294 from_dict(SampleField, s)
295 for s in _flatten_fields([], d["frame_fields"])
296 ]
297
298 data.view_cls = etau.get_class_name(view)
299
300 return asdict(data)
301
302
303 def _flatten_fields(
304 path: t.List[str], fields: t.List[t.Dict]
305 ) -> t.List[t.Dict]:
306 result = []
307 for field in fields:
308 key = field.pop("name")
309 field_path = path + [key]
310 field["path"] = ".".join(field_path)
311 result.append(field)
312
313 fields = field.pop("fields", None)
314 if fields:
315 result = result + _flatten_fields(field_path, fields)
316
317 return result
318
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fiftyone/server/query.py b/fiftyone/server/query.py
--- a/fiftyone/server/query.py
+++ b/fiftyone/server/query.py
@@ -175,9 +175,7 @@
view = fov.DatasetView._build(ds, view or [])
if view._dataset != ds:
d = view._dataset._serialize()
- dataset.id = (
- ObjectId()
- ) # if it is not the root dataset, change the id (relay requires it)
+ dataset.id = view._dataset._doc.id
dataset.media_type = d["media_type"]
dataset.sample_fields = [
from_dict(SampleField, s)
@@ -285,7 +283,7 @@
if view is not None and view._dataset != dataset:
d = view._dataset._serialize()
data.media_type = d["media_type"]
- data.id = ObjectId()
+ data.id = view._dataset._doc.id
data.sample_fields = [
from_dict(SampleField, s)
for s in _flatten_fields([], d["sample_fields"])
| {"golden_diff": "diff --git a/fiftyone/server/query.py b/fiftyone/server/query.py\n--- a/fiftyone/server/query.py\n+++ b/fiftyone/server/query.py\n@@ -175,9 +175,7 @@\n view = fov.DatasetView._build(ds, view or [])\n if view._dataset != ds:\n d = view._dataset._serialize()\n- dataset.id = (\n- ObjectId()\n- ) # if it is not the root dataset, change the id (relay requires it)\n+ dataset.id = view._dataset._doc.id\n dataset.media_type = d[\"media_type\"]\n dataset.sample_fields = [\n from_dict(SampleField, s)\n@@ -285,7 +283,7 @@\n if view is not None and view._dataset != dataset:\n d = view._dataset._serialize()\n data.media_type = d[\"media_type\"]\n- data.id = ObjectId()\n+ data.id = view._dataset._doc.id\n data.sample_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"sample_fields\"])\n", "issue": "[BUG] Changing session.view resets field visibility choices\nOn `fiftyone==0.16.2`, updating `session.view` resets any field visibility toggles I may have set (eg, unselected all label fields), and forces the defaults (all label fields visible). I don't think this used to be the case though?\r\n\r\nThis came up when I was trying to work with an interactive plot. I just wanted to see images with no labels, but every time I made a selection in the linked plot, the labels kept re-appearing, which was annoying. I don't recall facing this issue before.\r\n\r\nOf course persisting sidebar settings is a bit tricky because views can change the label schema.\n", "before_files": [{"content": "\"\"\"\nFiftyOne Server queries\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport typing as t\nfrom dataclasses import asdict\nfrom datetime import date, datetime\nfrom enum import Enum\nimport os\n\nimport eta.core.serial as etas\nimport eta.core.utils as etau\nimport strawberry as gql\nfrom bson import ObjectId\nfrom dacite import Config, from_dict\n\n\nimport fiftyone as fo\nimport fiftyone.constants as foc\nimport fiftyone.core.context as focx\nimport fiftyone.core.dataset as fod\nimport fiftyone.core.uid as fou\nimport fiftyone.core.view as fov\n\nfrom fiftyone.server.data import Info\nfrom fiftyone.server.dataloader import get_dataloader_resolver\nfrom fiftyone.server.mixins import HasCollection\nfrom fiftyone.server.paginator import Connection, get_paginator_resolver\nfrom fiftyone.server.scalars import JSONArray\n\nID = gql.scalar(\n t.NewType(\"ID\", str),\n serialize=lambda v: str(v),\n parse_value=lambda v: ObjectId(v),\n)\nDATASET_FILTER = [{\"sample_collection_name\": {\"$regex\": \"^samples\\\\.\"}}]\nDATASET_FILTER_STAGE = [{\"$match\": DATASET_FILTER[0]}]\n\n\[email protected]\nclass MediaType(Enum):\n image = \"image\"\n video = \"video\"\n\n\[email protected]\nclass Target:\n target: int\n value: str\n\n\[email protected]\nclass NamedTargets:\n name: str\n targets: t.List[Target]\n\n\[email protected]\nclass SampleField:\n ftype: str\n path: str\n subfield: t.Optional[str]\n embedded_doc_type: t.Optional[str]\n db_field: t.Optional[str]\n\n\[email protected]\nclass RunConfig:\n cls: str\n\n\[email protected]\nclass Run:\n key: str\n version: str\n timestamp: datetime\n config: RunConfig\n view_stages: t.List[str]\n\n\[email protected]\nclass BrainRunConfig(RunConfig):\n embeddings_field: t.Optional[str]\n method: str\n patches_field: t.Optional[str]\n\n\[email protected]\nclass BrainRun(Run):\n config: BrainRunConfig\n\n\[email protected]\nclass EvaluationRunConfig(RunConfig):\n gt_field: str\n pred_field: str\n method: 
str\n\n\[email protected]\nclass EvaluationRun(Run):\n config: EvaluationRunConfig\n\n\[email protected]\nclass SidebarGroup:\n name: str\n paths: t.List[str]\n\n\[email protected]\nclass KeypointSkeleton:\n labels: t.Optional[t.List[str]]\n edges: t.List[t.List[int]]\n\n\[email protected]\nclass NamedKeypointSkeleton(KeypointSkeleton):\n name: str\n\n\[email protected]\nclass Dataset(HasCollection):\n id: gql.ID\n name: str\n created_at: t.Optional[date]\n last_loaded_at: t.Optional[datetime]\n persistent: bool\n media_type: t.Optional[MediaType]\n mask_targets: t.List[NamedTargets]\n default_mask_targets: t.Optional[t.List[Target]]\n sample_fields: t.List[SampleField]\n frame_fields: t.List[SampleField]\n brain_methods: t.List[BrainRun]\n evaluations: t.List[EvaluationRun]\n app_sidebar_groups: t.Optional[t.List[SidebarGroup]]\n version: t.Optional[str]\n view_cls: t.Optional[str]\n default_skeleton: t.Optional[KeypointSkeleton]\n skeletons: t.List[NamedKeypointSkeleton]\n\n @staticmethod\n def get_collection_name() -> str:\n return \"datasets\"\n\n @staticmethod\n def modifier(doc: dict) -> dict:\n\n doc[\"id\"] = doc.pop(\"_id\")\n doc[\"mask_targets\"] = []\n doc[\"default_mask_targets\"] = []\n doc[\"sample_fields\"] = _flatten_fields([], doc[\"sample_fields\"])\n doc[\"frame_fields\"] = _flatten_fields([], doc[\"frame_fields\"])\n doc[\"brain_methods\"] = list(doc.get(\"brain_methods\", {}).values())\n doc[\"evaluations\"] = list(doc.get(\"evaluations\", {}).values())\n doc[\"skeletons\"] = list(\n dict(name=name, **data)\n for name, data in doc.get(\"skeletons\", {}).items()\n )\n doc[\"default_skeletons\"] = doc.get(\"default_skeletons\", None)\n return doc\n\n @classmethod\n async def resolver(\n cls, name: str, view: t.Optional[JSONArray], info: Info\n ) -> t.Optional[\"Dataset\"]:\n dataset = await dataset_dataloader(name, info)\n if dataset is None:\n return dataset\n\n ds = fo.load_dataset(name)\n view = fov.DatasetView._build(ds, view or [])\n if view._dataset != ds:\n d = view._dataset._serialize()\n dataset.id = (\n ObjectId()\n ) # if it is not the root dataset, change the id (relay requires it)\n dataset.media_type = d[\"media_type\"]\n dataset.sample_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"sample_fields\"])\n ]\n dataset.frame_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"frame_fields\"])\n ]\n\n dataset.view_cls = etau.get_class_name(view)\n\n return dataset\n\n\ndataset_dataloader = get_dataloader_resolver(Dataset, \"name\", DATASET_FILTER)\n\n\[email protected]\nclass ColorBy(Enum):\n field = \"field\"\n instance = \"instance\"\n label = \"label\"\n\n\[email protected]\nclass AppConfig:\n color_by: ColorBy\n color_pool: t.List[str]\n colorscale: str\n grid_zoom: int\n loop_videos: bool\n notebook_height: int\n show_confidence: bool\n show_index: bool\n show_label: bool\n show_skeletons: bool\n show_tooltip: bool\n timezone: t.Optional[str]\n use_frame_number: bool\n\n\[email protected]\nclass Query:\n @gql.field\n def colorscale(self) -> t.Optional[t.List[t.List[int]]]:\n if fo.app_config.colorscale:\n return fo.app_config.get_colormap()\n\n return None\n\n @gql.field\n def config(self) -> AppConfig:\n d = fo.app_config.serialize()\n d[\"timezone\"] = fo.config.timezone\n return from_dict(AppConfig, d, config=Config(check_types=False))\n\n @gql.field\n def context(self) -> str:\n return focx._get_context()\n\n @gql.field\n def dev(self) -> bool:\n return foc.DEV_INSTALL or foc.RC_INSTALL\n\n @gql.field\n 
def do_not_track(self) -> bool:\n return fo.config.do_not_track\n\n dataset = gql.field(resolver=Dataset.resolver)\n datasets: Connection[Dataset] = gql.field(\n resolver=get_paginator_resolver(\n Dataset,\n \"created_at\",\n DATASET_FILTER_STAGE,\n )\n )\n\n @gql.field\n def teams_submission(self) -> bool:\n isfile = os.path.isfile(foc.TEAMS_PATH)\n if isfile:\n submitted = etas.load_json(foc.TEAMS_PATH)[\"submitted\"]\n else:\n submitted = False\n\n return submitted\n\n @gql.field\n def uid(self) -> str:\n uid, _ = fou.get_user_id()\n return uid\n\n @gql.field\n def version(self) -> str:\n return foc.VERSION\n\n\ndef serialize_dataset(dataset: fod.Dataset, view: fov.DatasetView) -> t.Dict:\n doc = dataset._doc.to_dict()\n Dataset.modifier(doc)\n data = from_dict(Dataset, doc, config=Config(check_types=False))\n data.view_cls = None\n\n if view is not None and view._dataset != dataset:\n d = view._dataset._serialize()\n data.media_type = d[\"media_type\"]\n data.id = ObjectId()\n data.sample_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"sample_fields\"])\n ]\n data.frame_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"frame_fields\"])\n ]\n\n data.view_cls = etau.get_class_name(view)\n\n return asdict(data)\n\n\ndef _flatten_fields(\n path: t.List[str], fields: t.List[t.Dict]\n) -> t.List[t.Dict]:\n result = []\n for field in fields:\n key = field.pop(\"name\")\n field_path = path + [key]\n field[\"path\"] = \".\".join(field_path)\n result.append(field)\n\n fields = field.pop(\"fields\", None)\n if fields:\n result = result + _flatten_fields(field_path, fields)\n\n return result\n", "path": "fiftyone/server/query.py"}], "after_files": [{"content": "\"\"\"\nFiftyOne Server queries\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport typing as t\nfrom dataclasses import asdict\nfrom datetime import date, datetime\nfrom enum import Enum\nimport os\n\nimport eta.core.serial as etas\nimport eta.core.utils as etau\nimport strawberry as gql\nfrom bson import ObjectId\nfrom dacite import Config, from_dict\n\n\nimport fiftyone as fo\nimport fiftyone.constants as foc\nimport fiftyone.core.context as focx\nimport fiftyone.core.dataset as fod\nimport fiftyone.core.uid as fou\nimport fiftyone.core.view as fov\n\nfrom fiftyone.server.data import Info\nfrom fiftyone.server.dataloader import get_dataloader_resolver\nfrom fiftyone.server.mixins import HasCollection\nfrom fiftyone.server.paginator import Connection, get_paginator_resolver\nfrom fiftyone.server.scalars import JSONArray\n\nID = gql.scalar(\n t.NewType(\"ID\", str),\n serialize=lambda v: str(v),\n parse_value=lambda v: ObjectId(v),\n)\nDATASET_FILTER = [{\"sample_collection_name\": {\"$regex\": \"^samples\\\\.\"}}]\nDATASET_FILTER_STAGE = [{\"$match\": DATASET_FILTER[0]}]\n\n\[email protected]\nclass MediaType(Enum):\n image = \"image\"\n video = \"video\"\n\n\[email protected]\nclass Target:\n target: int\n value: str\n\n\[email protected]\nclass NamedTargets:\n name: str\n targets: t.List[Target]\n\n\[email protected]\nclass SampleField:\n ftype: str\n path: str\n subfield: t.Optional[str]\n embedded_doc_type: t.Optional[str]\n db_field: t.Optional[str]\n\n\[email protected]\nclass RunConfig:\n cls: str\n\n\[email protected]\nclass Run:\n key: str\n version: str\n timestamp: datetime\n config: RunConfig\n view_stages: t.List[str]\n\n\[email protected]\nclass BrainRunConfig(RunConfig):\n embeddings_field: t.Optional[str]\n method: str\n 
patches_field: t.Optional[str]\n\n\[email protected]\nclass BrainRun(Run):\n config: BrainRunConfig\n\n\[email protected]\nclass EvaluationRunConfig(RunConfig):\n gt_field: str\n pred_field: str\n method: str\n\n\[email protected]\nclass EvaluationRun(Run):\n config: EvaluationRunConfig\n\n\[email protected]\nclass SidebarGroup:\n name: str\n paths: t.List[str]\n\n\[email protected]\nclass KeypointSkeleton:\n labels: t.Optional[t.List[str]]\n edges: t.List[t.List[int]]\n\n\[email protected]\nclass NamedKeypointSkeleton(KeypointSkeleton):\n name: str\n\n\[email protected]\nclass Dataset(HasCollection):\n id: gql.ID\n name: str\n created_at: t.Optional[date]\n last_loaded_at: t.Optional[datetime]\n persistent: bool\n media_type: t.Optional[MediaType]\n mask_targets: t.List[NamedTargets]\n default_mask_targets: t.Optional[t.List[Target]]\n sample_fields: t.List[SampleField]\n frame_fields: t.List[SampleField]\n brain_methods: t.List[BrainRun]\n evaluations: t.List[EvaluationRun]\n app_sidebar_groups: t.Optional[t.List[SidebarGroup]]\n version: t.Optional[str]\n view_cls: t.Optional[str]\n default_skeleton: t.Optional[KeypointSkeleton]\n skeletons: t.List[NamedKeypointSkeleton]\n\n @staticmethod\n def get_collection_name() -> str:\n return \"datasets\"\n\n @staticmethod\n def modifier(doc: dict) -> dict:\n\n doc[\"id\"] = doc.pop(\"_id\")\n doc[\"mask_targets\"] = []\n doc[\"default_mask_targets\"] = []\n doc[\"sample_fields\"] = _flatten_fields([], doc[\"sample_fields\"])\n doc[\"frame_fields\"] = _flatten_fields([], doc[\"frame_fields\"])\n doc[\"brain_methods\"] = list(doc.get(\"brain_methods\", {}).values())\n doc[\"evaluations\"] = list(doc.get(\"evaluations\", {}).values())\n doc[\"skeletons\"] = list(\n dict(name=name, **data)\n for name, data in doc.get(\"skeletons\", {}).items()\n )\n doc[\"default_skeletons\"] = doc.get(\"default_skeletons\", None)\n return doc\n\n @classmethod\n async def resolver(\n cls, name: str, view: t.Optional[JSONArray], info: Info\n ) -> t.Optional[\"Dataset\"]:\n dataset = await dataset_dataloader(name, info)\n if dataset is None:\n return dataset\n\n ds = fo.load_dataset(name)\n view = fov.DatasetView._build(ds, view or [])\n if view._dataset != ds:\n d = view._dataset._serialize()\n dataset.id = view._dataset._doc.id\n dataset.media_type = d[\"media_type\"]\n dataset.sample_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"sample_fields\"])\n ]\n dataset.frame_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"frame_fields\"])\n ]\n\n dataset.view_cls = etau.get_class_name(view)\n\n return dataset\n\n\ndataset_dataloader = get_dataloader_resolver(Dataset, \"name\", DATASET_FILTER)\n\n\[email protected]\nclass ColorBy(Enum):\n field = \"field\"\n instance = \"instance\"\n label = \"label\"\n\n\[email protected]\nclass AppConfig:\n color_by: ColorBy\n color_pool: t.List[str]\n colorscale: str\n grid_zoom: int\n loop_videos: bool\n notebook_height: int\n show_confidence: bool\n show_index: bool\n show_label: bool\n show_skeletons: bool\n show_tooltip: bool\n timezone: t.Optional[str]\n use_frame_number: bool\n\n\[email protected]\nclass Query:\n @gql.field\n def colorscale(self) -> t.Optional[t.List[t.List[int]]]:\n if fo.app_config.colorscale:\n return fo.app_config.get_colormap()\n\n return None\n\n @gql.field\n def config(self) -> AppConfig:\n d = fo.app_config.serialize()\n d[\"timezone\"] = fo.config.timezone\n return from_dict(AppConfig, d, config=Config(check_types=False))\n\n @gql.field\n def 
context(self) -> str:\n return focx._get_context()\n\n @gql.field\n def dev(self) -> bool:\n return foc.DEV_INSTALL or foc.RC_INSTALL\n\n @gql.field\n def do_not_track(self) -> bool:\n return fo.config.do_not_track\n\n dataset = gql.field(resolver=Dataset.resolver)\n datasets: Connection[Dataset] = gql.field(\n resolver=get_paginator_resolver(\n Dataset,\n \"created_at\",\n DATASET_FILTER_STAGE,\n )\n )\n\n @gql.field\n def teams_submission(self) -> bool:\n isfile = os.path.isfile(foc.TEAMS_PATH)\n if isfile:\n submitted = etas.load_json(foc.TEAMS_PATH)[\"submitted\"]\n else:\n submitted = False\n\n return submitted\n\n @gql.field\n def uid(self) -> str:\n uid, _ = fou.get_user_id()\n return uid\n\n @gql.field\n def version(self) -> str:\n return foc.VERSION\n\n\ndef serialize_dataset(dataset: fod.Dataset, view: fov.DatasetView) -> t.Dict:\n doc = dataset._doc.to_dict()\n Dataset.modifier(doc)\n data = from_dict(Dataset, doc, config=Config(check_types=False))\n data.view_cls = None\n\n if view is not None and view._dataset != dataset:\n d = view._dataset._serialize()\n data.media_type = d[\"media_type\"]\n data.id = view._dataset._doc.id\n data.sample_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"sample_fields\"])\n ]\n data.frame_fields = [\n from_dict(SampleField, s)\n for s in _flatten_fields([], d[\"frame_fields\"])\n ]\n\n data.view_cls = etau.get_class_name(view)\n\n return asdict(data)\n\n\ndef _flatten_fields(\n path: t.List[str], fields: t.List[t.Dict]\n) -> t.List[t.Dict]:\n result = []\n for field in fields:\n key = field.pop(\"name\")\n field_path = path + [key]\n field[\"path\"] = \".\".join(field_path)\n result.append(field)\n\n fields = field.pop(\"fields\", None)\n if fields:\n result = result + _flatten_fields(field_path, fields)\n\n return result\n", "path": "fiftyone/server/query.py"}]} | 3,226 | 241 |
gh_patches_debug_24274 | rasdani/github-patches | git_diff | conda__conda-2875 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Would be nice if conda config --get channels listed the channels in priority order
As far as I can tell it currently lists them in reverse order.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/cli/main_config.py`
Content:
```
1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 from __future__ import print_function, division, absolute_import
7
8 import os
9 import sys
10
11 from .common import (Completer, add_parser_json, error_and_exit, exception_and_exit,
12 stdout_json_success)
13 from ..compat import string_types
14 from ..config import (rc_bool_keys, rc_string_keys, rc_list_keys, sys_rc_path,
15 user_rc_path, rc_other)
16 from ..utils import yaml_load, yaml_dump
17
18 descr = """
19 Modify configuration values in .condarc. This is modeled after the git
20 config command. Writes to the user .condarc file (%s) by default.
21
22 """ % user_rc_path
23
24 # Note, the extra whitespace in the list keys is on purpose. It's so the
25 # formatting from help2man is still valid YAML (otherwise it line wraps the
26 # keys like "- conda - defaults"). Technically the parser here still won't
27 # recognize it because it removes the indentation, but at least it will be
28 # valid.
29 additional_descr = """
30 See http://conda.pydata.org/docs/config.html for details on all the options
31 that can go in .condarc.
32
33 List keys, like
34
35 channels:
36 - conda
37 - defaults
38
39 are modified with the --add and --remove options. For example
40
41 conda config --add channels r
42
43 on the above configuration would prepend the key 'r', giving
44
45 channels:
46 - r
47 - conda
48 - defaults
49
50 Note that the key 'channels' implicitly contains the key 'defaults' if it has
51 not been configured yet.
52
53 Boolean keys, like
54
55 always_yes: true
56
57 are modified with --set and removed with --remove-key. For example
58
59 conda config --set always_yes false
60
61 gives
62
63 always_yes: false
64
65 Note that in YAML, "yes", "YES", "on", "true", "True", and "TRUE" are all
66 valid ways to spell "true", and "no", "NO", "off", "false", "False", and
67 "FALSE", are all valid ways to spell "false".
68
69 The .condarc file is YAML, and any valid YAML syntax is allowed.
70 """
71
72
73 # Note, the formatting of this is designed to work well with help2man
74 example = """
75 Examples:
76
77 Get the channels defined in the system .condarc:
78
79 conda config --get channels --system
80
81 Add the 'foo' Binstar channel:
82
83 conda config --add channels foo
84
85 Disable the 'show_channel_urls' option:
86
87 conda config --set show_channel_urls no
88 """
89
90 class CouldntParse(NotImplementedError):
91 def __init__(self, reason):
92 self.args = ["""Could not parse the yaml file. Use -f to use the
93 yaml parser (this will remove any structure or comments from the existing
94 .condarc file). Reason: %s""" % reason]
95
96 class SingleValueKey(Completer):
97 def _get_items(self):
98 return rc_bool_keys + \
99 rc_string_keys + \
100 ['yes', 'no', 'on', 'off', 'true', 'false']
101
102 class ListKey(Completer):
103 def _get_items(self):
104 return rc_list_keys
105
106 class BoolOrListKey(Completer):
107 def __contains__(self, other):
108 return other in self.get_items()
109
110 def _get_items(self):
111 return rc_list_keys + rc_bool_keys
112
113 def configure_parser(sub_parsers):
114 p = sub_parsers.add_parser(
115 'config',
116 description=descr,
117 help=descr,
118 epilog=additional_descr + example,
119 )
120 add_parser_json(p)
121
122 # TODO: use argparse.FileType
123 location = p.add_mutually_exclusive_group()
124 location.add_argument(
125 "--system",
126 action="store_true",
127 help="""Write to the system .condarc file ({system}). Otherwise writes to the user
128 config file ({user}).""".format(system=sys_rc_path,
129 user=user_rc_path),
130 )
131 location.add_argument(
132 "--file",
133 action="store",
134 help="""Write to the given file. Otherwise writes to the user config file ({user})
135 or the file path given by the 'CONDARC' environment variable, if it is set
136 (default: %(default)s).""".format(user=user_rc_path),
137 default=os.environ.get('CONDARC', user_rc_path)
138 )
139
140 # XXX: Does this really have to be mutually exclusive. I think the below
141 # code will work even if it is a regular group (although combination of
142 # --add and --remove with the same keys will not be well-defined).
143 action = p.add_mutually_exclusive_group(required=True)
144 action.add_argument(
145 "--get",
146 nargs='*',
147 action="store",
148 help="Get a configuration value.",
149 default=None,
150 metavar=('KEY'),
151 choices=BoolOrListKey()
152 )
153 action.add_argument(
154 "--add",
155 nargs=2,
156 action="append",
157 help="""Add one configuration value to the beginning of a list key.
158 To add to the end of the list, use --append.""",
159 default=[],
160 choices=ListKey(),
161 metavar=('KEY', 'VALUE'),
162 )
163 action.add_argument(
164 "--append",
165 nargs=2,
166 action="append",
167 help="""Add one configuration value to a list key. The default
168 behavior is to prepend.""",
169 default=[],
170 choices=ListKey(),
171 metavar=('KEY', 'VALUE'),
172 )
173 action.add_argument(
174 "--set",
175 nargs=2,
176 action="append",
177 help="""Set a boolean or string key""",
178 default=[],
179 choices=SingleValueKey(),
180 metavar=('KEY', 'VALUE'),
181 )
182 action.add_argument(
183 "--remove",
184 nargs=2,
185 action="append",
186 help="""Remove a configuration value from a list key. This removes
187 all instances of the value.""",
188 default=[],
189 metavar=('KEY', 'VALUE'),
190 )
191 action.add_argument(
192 "--remove-key",
193 nargs=1,
194 action="append",
195 help="""Remove a configuration key (and all its values).""",
196 default=[],
197 metavar="KEY",
198 )
199
200 p.add_argument(
201 "-f", "--force",
202 action="store_true",
203 help="""Write to the config file using the yaml parser. This will
204 remove any comments or structure from the file."""
205 )
206
207 p.set_defaults(func=execute)
208
209
210 def execute(args, parser):
211 try:
212 execute_config(args, parser)
213 except (CouldntParse, NotImplementedError) as e:
214 if args.json:
215 exception_and_exit(e, json=True)
216 else:
217 raise
218
219
220 def execute_config(args, parser):
221 json_warnings = []
222 json_get = {}
223
224 if args.system:
225 rc_path = sys_rc_path
226 elif args.file:
227 rc_path = args.file
228 else:
229 rc_path = user_rc_path
230
231 # read existing condarc
232 if os.path.exists(rc_path):
233 with open(rc_path, 'r') as fh:
234 rc_config = yaml_load(fh) or {}
235 else:
236 rc_config = {}
237
238 # Get
239 if args.get is not None:
240 if args.get == []:
241 args.get = sorted(rc_config.keys())
242 for key in args.get:
243 if key not in rc_list_keys + rc_bool_keys + rc_string_keys:
244 if key not in rc_other:
245 message = "unknown key %s" % key
246 if not args.json:
247 print(message, file=sys.stderr)
248 else:
249 json_warnings.append(message)
250 continue
251 if key not in rc_config:
252 continue
253
254 if args.json:
255 json_get[key] = rc_config[key]
256 continue
257
258 if isinstance(rc_config[key], (bool, string_types)):
259 print("--set", key, rc_config[key])
260 else:
261 # Note, since conda config --add prepends, these are printed in
262 # the reverse order so that entering them in this order will
263 # recreate the same file
264 for item in reversed(rc_config.get(key, [])):
265 # Use repr so that it can be pasted back in to conda config --add
266 print("--add", key, repr(item))
267
268 # Add, append
269 for arg, prepend in zip((args.add, args.append), (True, False)):
270 for key, item in arg:
271 if key == 'channels' and key not in rc_config:
272 rc_config[key] = ['defaults']
273 if key not in rc_list_keys:
274 error_and_exit("key must be one of %s, not %r" %
275 (', '.join(rc_list_keys), key), json=args.json,
276 error_type="ValueError")
277 if not isinstance(rc_config.get(key, []), list):
278 bad = rc_config[key].__class__.__name__
279 raise CouldntParse("key %r should be a list, not %s." % (key, bad))
280 if key == 'default_channels' and rc_path != sys_rc_path:
281 msg = "'default_channels' is only configurable for system installs"
282 raise NotImplementedError(msg)
283 arglist = rc_config.setdefault(key, [])
284 if item in arglist:
285 # Right now, all list keys should not contain duplicates
286 message = "Warning: '%s' already in '%s' list, moving to the %s" % (
287 item, key, "front" if prepend else "back")
288 arglist = rc_config[key] = [p for p in arglist if p != item]
289 if not args.json:
290 print(message, file=sys.stderr)
291 else:
292 json_warnings.append(message)
293 arglist.insert(0 if prepend else len(arglist), item)
294
295 # Set
296 set_bools, set_strings = set(rc_bool_keys), set(rc_string_keys)
297 for key, item in args.set:
298 # Check key and value
299 yamlitem = yaml_load(item)
300 if key in set_bools:
301 if not isinstance(yamlitem, bool):
302 error_and_exit("Key: %s; %s is not a YAML boolean." % (key, item),
303 json=args.json, error_type="TypeError")
304 rc_config[key] = yamlitem
305 elif key in set_strings:
306 rc_config[key] = yamlitem
307 else:
308 error_and_exit("Error key must be one of %s, not %s" %
309 (', '.join(set_bools | set_strings), key), json=args.json,
310 error_type="ValueError")
311
312 # Remove
313 for key, item in args.remove:
314 if key not in rc_config:
315 if key != 'channels':
316 error_and_exit("key %r is not in the config file" % key, json=args.json,
317 error_type="KeyError")
318 rc_config[key] = ['defaults']
319 if item not in rc_config[key]:
320 error_and_exit("%r is not in the %r key of the config file" %
321 (item, key), json=args.json, error_type="KeyError")
322 rc_config[key] = [i for i in rc_config[key] if i != item]
323
324 # Remove Key
325 for key, in args.remove_key:
326 if key not in rc_config:
327 error_and_exit("key %r is not in the config file" % key, json=args.json,
328 error_type="KeyError")
329 del rc_config[key]
330
331 # config.rc_keys
332 with open(rc_path, 'w') as rc:
333 rc.write(yaml_dump(rc_config))
334
335 if args.json:
336 stdout_json_success(
337 rc_path=rc_path,
338 warnings=json_warnings,
339 get=json_get
340 )
341 return
342
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda/cli/main_config.py b/conda/cli/main_config.py
--- a/conda/cli/main_config.py
+++ b/conda/cli/main_config.py
@@ -257,13 +257,19 @@
if isinstance(rc_config[key], (bool, string_types)):
print("--set", key, rc_config[key])
- else:
+ else: # assume the key is a list-type
# Note, since conda config --add prepends, these are printed in
# the reverse order so that entering them in this order will
# recreate the same file
- for item in reversed(rc_config.get(key, [])):
+ items = rc_config.get(key, [])
+ numitems = len(items)
+ for q, item in enumerate(reversed(items)):
# Use repr so that it can be pasted back in to conda config --add
- print("--add", key, repr(item))
+ if key == "channels" and q in (0, numitems-1):
+ print("--add", key, repr(item),
+ " # lowest priority" if q == 0 else " # highest priority")
+ else:
+ print("--add", key, repr(item))
# Add, append
for arg, prepend in zip((args.add, args.append), (True, False)):
| {"golden_diff": "diff --git a/conda/cli/main_config.py b/conda/cli/main_config.py\n--- a/conda/cli/main_config.py\n+++ b/conda/cli/main_config.py\n@@ -257,13 +257,19 @@\n \n if isinstance(rc_config[key], (bool, string_types)):\n print(\"--set\", key, rc_config[key])\n- else:\n+ else: # assume the key is a list-type\n # Note, since conda config --add prepends, these are printed in\n # the reverse order so that entering them in this order will\n # recreate the same file\n- for item in reversed(rc_config.get(key, [])):\n+ items = rc_config.get(key, [])\n+ numitems = len(items)\n+ for q, item in enumerate(reversed(items)):\n # Use repr so that it can be pasted back in to conda config --add\n- print(\"--add\", key, repr(item))\n+ if key == \"channels\" and q in (0, numitems-1):\n+ print(\"--add\", key, repr(item),\n+ \" # lowest priority\" if q == 0 else \" # highest priority\")\n+ else:\n+ print(\"--add\", key, repr(item))\n \n # Add, append\n for arg, prepend in zip((args.add, args.append), (True, False)):\n", "issue": "Would be nice if conda config --get channels listed the channels in priority order\nAs far as I can tell it currently lists them in reverse order. \n\n", "before_files": [{"content": "# (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\nfrom __future__ import print_function, division, absolute_import\n\nimport os\nimport sys\n\nfrom .common import (Completer, add_parser_json, error_and_exit, exception_and_exit,\n stdout_json_success)\nfrom ..compat import string_types\nfrom ..config import (rc_bool_keys, rc_string_keys, rc_list_keys, sys_rc_path,\n user_rc_path, rc_other)\nfrom ..utils import yaml_load, yaml_dump\n\ndescr = \"\"\"\nModify configuration values in .condarc. This is modeled after the git\nconfig command. Writes to the user .condarc file (%s) by default.\n\n\"\"\" % user_rc_path\n\n# Note, the extra whitespace in the list keys is on purpose. It's so the\n# formatting from help2man is still valid YAML (otherwise it line wraps the\n# keys like \"- conda - defaults\"). Technically the parser here still won't\n# recognize it because it removes the indentation, but at least it will be\n# valid.\nadditional_descr = \"\"\"\nSee http://conda.pydata.org/docs/config.html for details on all the options\nthat can go in .condarc.\n\nList keys, like\n\n channels:\n - conda\n - defaults\n\nare modified with the --add and --remove options. For example\n\n conda config --add channels r\n\non the above configuration would prepend the key 'r', giving\n\n channels:\n - r\n - conda\n - defaults\n\nNote that the key 'channels' implicitly contains the key 'defaults' if it has\nnot been configured yet.\n\nBoolean keys, like\n\n always_yes: true\n\nare modified with --set and removed with --remove-key. 
For example\n\n conda config --set always_yes false\n\ngives\n\n always_yes: false\n\nNote that in YAML, \"yes\", \"YES\", \"on\", \"true\", \"True\", and \"TRUE\" are all\nvalid ways to spell \"true\", and \"no\", \"NO\", \"off\", \"false\", \"False\", and\n\"FALSE\", are all valid ways to spell \"false\".\n\nThe .condarc file is YAML, and any valid YAML syntax is allowed.\n\"\"\"\n\n\n# Note, the formatting of this is designed to work well with help2man\nexample = \"\"\"\nExamples:\n\nGet the channels defined in the system .condarc:\n\n conda config --get channels --system\n\nAdd the 'foo' Binstar channel:\n\n conda config --add channels foo\n\nDisable the 'show_channel_urls' option:\n\n conda config --set show_channel_urls no\n\"\"\"\n\nclass CouldntParse(NotImplementedError):\n def __init__(self, reason):\n self.args = [\"\"\"Could not parse the yaml file. Use -f to use the\nyaml parser (this will remove any structure or comments from the existing\n.condarc file). Reason: %s\"\"\" % reason]\n\nclass SingleValueKey(Completer):\n def _get_items(self):\n return rc_bool_keys + \\\n rc_string_keys + \\\n ['yes', 'no', 'on', 'off', 'true', 'false']\n\nclass ListKey(Completer):\n def _get_items(self):\n return rc_list_keys\n\nclass BoolOrListKey(Completer):\n def __contains__(self, other):\n return other in self.get_items()\n\n def _get_items(self):\n return rc_list_keys + rc_bool_keys\n\ndef configure_parser(sub_parsers):\n p = sub_parsers.add_parser(\n 'config',\n description=descr,\n help=descr,\n epilog=additional_descr + example,\n )\n add_parser_json(p)\n\n # TODO: use argparse.FileType\n location = p.add_mutually_exclusive_group()\n location.add_argument(\n \"--system\",\n action=\"store_true\",\n help=\"\"\"Write to the system .condarc file ({system}). Otherwise writes to the user\n config file ({user}).\"\"\".format(system=sys_rc_path,\n user=user_rc_path),\n )\n location.add_argument(\n \"--file\",\n action=\"store\",\n help=\"\"\"Write to the given file. Otherwise writes to the user config file ({user})\nor the file path given by the 'CONDARC' environment variable, if it is set\n(default: %(default)s).\"\"\".format(user=user_rc_path),\n default=os.environ.get('CONDARC', user_rc_path)\n )\n\n # XXX: Does this really have to be mutually exclusive. I think the below\n # code will work even if it is a regular group (although combination of\n # --add and --remove with the same keys will not be well-defined).\n action = p.add_mutually_exclusive_group(required=True)\n action.add_argument(\n \"--get\",\n nargs='*',\n action=\"store\",\n help=\"Get a configuration value.\",\n default=None,\n metavar=('KEY'),\n choices=BoolOrListKey()\n )\n action.add_argument(\n \"--add\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Add one configuration value to the beginning of a list key.\n To add to the end of the list, use --append.\"\"\",\n default=[],\n choices=ListKey(),\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--append\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Add one configuration value to a list key. The default\n behavior is to prepend.\"\"\",\n default=[],\n choices=ListKey(),\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--set\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Set a boolean or string key\"\"\",\n default=[],\n choices=SingleValueKey(),\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--remove\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Remove a configuration value from a list key. 
This removes\n all instances of the value.\"\"\",\n default=[],\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--remove-key\",\n nargs=1,\n action=\"append\",\n help=\"\"\"Remove a configuration key (and all its values).\"\"\",\n default=[],\n metavar=\"KEY\",\n )\n\n p.add_argument(\n \"-f\", \"--force\",\n action=\"store_true\",\n help=\"\"\"Write to the config file using the yaml parser. This will\n remove any comments or structure from the file.\"\"\"\n )\n\n p.set_defaults(func=execute)\n\n\ndef execute(args, parser):\n try:\n execute_config(args, parser)\n except (CouldntParse, NotImplementedError) as e:\n if args.json:\n exception_and_exit(e, json=True)\n else:\n raise\n\n\ndef execute_config(args, parser):\n json_warnings = []\n json_get = {}\n\n if args.system:\n rc_path = sys_rc_path\n elif args.file:\n rc_path = args.file\n else:\n rc_path = user_rc_path\n\n # read existing condarc\n if os.path.exists(rc_path):\n with open(rc_path, 'r') as fh:\n rc_config = yaml_load(fh) or {}\n else:\n rc_config = {}\n\n # Get\n if args.get is not None:\n if args.get == []:\n args.get = sorted(rc_config.keys())\n for key in args.get:\n if key not in rc_list_keys + rc_bool_keys + rc_string_keys:\n if key not in rc_other:\n message = \"unknown key %s\" % key\n if not args.json:\n print(message, file=sys.stderr)\n else:\n json_warnings.append(message)\n continue\n if key not in rc_config:\n continue\n\n if args.json:\n json_get[key] = rc_config[key]\n continue\n\n if isinstance(rc_config[key], (bool, string_types)):\n print(\"--set\", key, rc_config[key])\n else:\n # Note, since conda config --add prepends, these are printed in\n # the reverse order so that entering them in this order will\n # recreate the same file\n for item in reversed(rc_config.get(key, [])):\n # Use repr so that it can be pasted back in to conda config --add\n print(\"--add\", key, repr(item))\n\n # Add, append\n for arg, prepend in zip((args.add, args.append), (True, False)):\n for key, item in arg:\n if key == 'channels' and key not in rc_config:\n rc_config[key] = ['defaults']\n if key not in rc_list_keys:\n error_and_exit(\"key must be one of %s, not %r\" %\n (', '.join(rc_list_keys), key), json=args.json,\n error_type=\"ValueError\")\n if not isinstance(rc_config.get(key, []), list):\n bad = rc_config[key].__class__.__name__\n raise CouldntParse(\"key %r should be a list, not %s.\" % (key, bad))\n if key == 'default_channels' and rc_path != sys_rc_path:\n msg = \"'default_channels' is only configurable for system installs\"\n raise NotImplementedError(msg)\n arglist = rc_config.setdefault(key, [])\n if item in arglist:\n # Right now, all list keys should not contain duplicates\n message = \"Warning: '%s' already in '%s' list, moving to the %s\" % (\n item, key, \"front\" if prepend else \"back\")\n arglist = rc_config[key] = [p for p in arglist if p != item]\n if not args.json:\n print(message, file=sys.stderr)\n else:\n json_warnings.append(message)\n arglist.insert(0 if prepend else len(arglist), item)\n\n # Set\n set_bools, set_strings = set(rc_bool_keys), set(rc_string_keys)\n for key, item in args.set:\n # Check key and value\n yamlitem = yaml_load(item)\n if key in set_bools:\n if not isinstance(yamlitem, bool):\n error_and_exit(\"Key: %s; %s is not a YAML boolean.\" % (key, item),\n json=args.json, error_type=\"TypeError\")\n rc_config[key] = yamlitem\n elif key in set_strings:\n rc_config[key] = yamlitem\n else:\n error_and_exit(\"Error key must be one of %s, not %s\" %\n (', '.join(set_bools | 
set_strings), key), json=args.json,\n error_type=\"ValueError\")\n\n # Remove\n for key, item in args.remove:\n if key not in rc_config:\n if key != 'channels':\n error_and_exit(\"key %r is not in the config file\" % key, json=args.json,\n error_type=\"KeyError\")\n rc_config[key] = ['defaults']\n if item not in rc_config[key]:\n error_and_exit(\"%r is not in the %r key of the config file\" %\n (item, key), json=args.json, error_type=\"KeyError\")\n rc_config[key] = [i for i in rc_config[key] if i != item]\n\n # Remove Key\n for key, in args.remove_key:\n if key not in rc_config:\n error_and_exit(\"key %r is not in the config file\" % key, json=args.json,\n error_type=\"KeyError\")\n del rc_config[key]\n\n # config.rc_keys\n with open(rc_path, 'w') as rc:\n rc.write(yaml_dump(rc_config))\n\n if args.json:\n stdout_json_success(\n rc_path=rc_path,\n warnings=json_warnings,\n get=json_get\n )\n return\n", "path": "conda/cli/main_config.py"}], "after_files": [{"content": "# (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\nfrom __future__ import print_function, division, absolute_import\n\nimport os\nimport sys\n\nfrom .common import (Completer, add_parser_json, error_and_exit, exception_and_exit,\n stdout_json_success)\nfrom ..compat import string_types\nfrom ..config import (rc_bool_keys, rc_string_keys, rc_list_keys, sys_rc_path,\n user_rc_path, rc_other)\nfrom ..utils import yaml_load, yaml_dump\n\ndescr = \"\"\"\nModify configuration values in .condarc. This is modeled after the git\nconfig command. Writes to the user .condarc file (%s) by default.\n\n\"\"\" % user_rc_path\n\n# Note, the extra whitespace in the list keys is on purpose. It's so the\n# formatting from help2man is still valid YAML (otherwise it line wraps the\n# keys like \"- conda - defaults\"). Technically the parser here still won't\n# recognize it because it removes the indentation, but at least it will be\n# valid.\nadditional_descr = \"\"\"\nSee http://conda.pydata.org/docs/config.html for details on all the options\nthat can go in .condarc.\n\nList keys, like\n\n channels:\n - conda\n - defaults\n\nare modified with the --add and --remove options. For example\n\n conda config --add channels r\n\non the above configuration would prepend the key 'r', giving\n\n channels:\n - r\n - conda\n - defaults\n\nNote that the key 'channels' implicitly contains the key 'defaults' if it has\nnot been configured yet.\n\nBoolean keys, like\n\n always_yes: true\n\nare modified with --set and removed with --remove-key. 
For example\n\n conda config --set always_yes false\n\ngives\n\n always_yes: false\n\nNote that in YAML, \"yes\", \"YES\", \"on\", \"true\", \"True\", and \"TRUE\" are all\nvalid ways to spell \"true\", and \"no\", \"NO\", \"off\", \"false\", \"False\", and\n\"FALSE\", are all valid ways to spell \"false\".\n\nThe .condarc file is YAML, and any valid YAML syntax is allowed.\n\"\"\"\n\n\n# Note, the formatting of this is designed to work well with help2man\nexample = \"\"\"\nExamples:\n\nGet the channels defined in the system .condarc:\n\n conda config --get channels --system\n\nAdd the 'foo' Binstar channel:\n\n conda config --add channels foo\n\nDisable the 'show_channel_urls' option:\n\n conda config --set show_channel_urls no\n\"\"\"\n\nclass CouldntParse(NotImplementedError):\n def __init__(self, reason):\n self.args = [\"\"\"Could not parse the yaml file. Use -f to use the\nyaml parser (this will remove any structure or comments from the existing\n.condarc file). Reason: %s\"\"\" % reason]\n\nclass SingleValueKey(Completer):\n def _get_items(self):\n return rc_bool_keys + \\\n rc_string_keys + \\\n ['yes', 'no', 'on', 'off', 'true', 'false']\n\nclass ListKey(Completer):\n def _get_items(self):\n return rc_list_keys\n\nclass BoolOrListKey(Completer):\n def __contains__(self, other):\n return other in self.get_items()\n\n def _get_items(self):\n return rc_list_keys + rc_bool_keys\n\ndef configure_parser(sub_parsers):\n p = sub_parsers.add_parser(\n 'config',\n description=descr,\n help=descr,\n epilog=additional_descr + example,\n )\n add_parser_json(p)\n\n # TODO: use argparse.FileType\n location = p.add_mutually_exclusive_group()\n location.add_argument(\n \"--system\",\n action=\"store_true\",\n help=\"\"\"Write to the system .condarc file ({system}). Otherwise writes to the user\n config file ({user}).\"\"\".format(system=sys_rc_path,\n user=user_rc_path),\n )\n location.add_argument(\n \"--file\",\n action=\"store\",\n help=\"\"\"Write to the given file. Otherwise writes to the user config file ({user})\nor the file path given by the 'CONDARC' environment variable, if it is set\n(default: %(default)s).\"\"\".format(user=user_rc_path),\n default=os.environ.get('CONDARC', user_rc_path)\n )\n\n # XXX: Does this really have to be mutually exclusive. I think the below\n # code will work even if it is a regular group (although combination of\n # --add and --remove with the same keys will not be well-defined).\n action = p.add_mutually_exclusive_group(required=True)\n action.add_argument(\n \"--get\",\n nargs='*',\n action=\"store\",\n help=\"Get a configuration value.\",\n default=None,\n metavar=('KEY'),\n choices=BoolOrListKey()\n )\n action.add_argument(\n \"--add\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Add one configuration value to the beginning of a list key.\n To add to the end of the list, use --append.\"\"\",\n default=[],\n choices=ListKey(),\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--append\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Add one configuration value to a list key. The default\n behavior is to prepend.\"\"\",\n default=[],\n choices=ListKey(),\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--set\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Set a boolean or string key\"\"\",\n default=[],\n choices=SingleValueKey(),\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--remove\",\n nargs=2,\n action=\"append\",\n help=\"\"\"Remove a configuration value from a list key. 
This removes\n all instances of the value.\"\"\",\n default=[],\n metavar=('KEY', 'VALUE'),\n )\n action.add_argument(\n \"--remove-key\",\n nargs=1,\n action=\"append\",\n help=\"\"\"Remove a configuration key (and all its values).\"\"\",\n default=[],\n metavar=\"KEY\",\n )\n\n p.add_argument(\n \"-f\", \"--force\",\n action=\"store_true\",\n help=\"\"\"Write to the config file using the yaml parser. This will\n remove any comments or structure from the file.\"\"\"\n )\n\n p.set_defaults(func=execute)\n\n\ndef execute(args, parser):\n try:\n execute_config(args, parser)\n except (CouldntParse, NotImplementedError) as e:\n if args.json:\n exception_and_exit(e, json=True)\n else:\n raise\n\n\ndef execute_config(args, parser):\n json_warnings = []\n json_get = {}\n\n if args.system:\n rc_path = sys_rc_path\n elif args.file:\n rc_path = args.file\n else:\n rc_path = user_rc_path\n\n # read existing condarc\n if os.path.exists(rc_path):\n with open(rc_path, 'r') as fh:\n rc_config = yaml_load(fh) or {}\n else:\n rc_config = {}\n\n # Get\n if args.get is not None:\n if args.get == []:\n args.get = sorted(rc_config.keys())\n for key in args.get:\n if key not in rc_list_keys + rc_bool_keys + rc_string_keys:\n if key not in rc_other:\n message = \"unknown key %s\" % key\n if not args.json:\n print(message, file=sys.stderr)\n else:\n json_warnings.append(message)\n continue\n if key not in rc_config:\n continue\n\n if args.json:\n json_get[key] = rc_config[key]\n continue\n\n if isinstance(rc_config[key], (bool, string_types)):\n print(\"--set\", key, rc_config[key])\n else: # assume the key is a list-type\n # Note, since conda config --add prepends, these are printed in\n # the reverse order so that entering them in this order will\n # recreate the same file\n items = rc_config.get(key, [])\n numitems = len(items)\n for q, item in enumerate(reversed(items)):\n # Use repr so that it can be pasted back in to conda config --add\n if key == \"channels\" and q in (0, numitems-1):\n print(\"--add\", key, repr(item),\n \" # lowest priority\" if q == 0 else \" # highest priority\")\n else:\n print(\"--add\", key, repr(item))\n\n # Add, append\n for arg, prepend in zip((args.add, args.append), (True, False)):\n for key, item in arg:\n if key == 'channels' and key not in rc_config:\n rc_config[key] = ['defaults']\n if key not in rc_list_keys:\n error_and_exit(\"key must be one of %s, not %r\" %\n (', '.join(rc_list_keys), key), json=args.json,\n error_type=\"ValueError\")\n if not isinstance(rc_config.get(key, []), list):\n bad = rc_config[key].__class__.__name__\n raise CouldntParse(\"key %r should be a list, not %s.\" % (key, bad))\n if key == 'default_channels' and rc_path != sys_rc_path:\n msg = \"'default_channels' is only configurable for system installs\"\n raise NotImplementedError(msg)\n arglist = rc_config.setdefault(key, [])\n if item in arglist:\n # Right now, all list keys should not contain duplicates\n message = \"Warning: '%s' already in '%s' list, moving to the %s\" % (\n item, key, \"front\" if prepend else \"back\")\n arglist = rc_config[key] = [p for p in arglist if p != item]\n if not args.json:\n print(message, file=sys.stderr)\n else:\n json_warnings.append(message)\n arglist.insert(0 if prepend else len(arglist), item)\n\n # Set\n set_bools, set_strings = set(rc_bool_keys), set(rc_string_keys)\n for key, item in args.set:\n # Check key and value\n yamlitem = yaml_load(item)\n if key in set_bools:\n if not isinstance(yamlitem, bool):\n error_and_exit(\"Key: %s; %s is not a YAML 
boolean.\" % (key, item),\n json=args.json, error_type=\"TypeError\")\n rc_config[key] = yamlitem\n elif key in set_strings:\n rc_config[key] = yamlitem\n else:\n error_and_exit(\"Error key must be one of %s, not %s\" %\n (', '.join(set_bools | set_strings), key), json=args.json,\n error_type=\"ValueError\")\n\n # Remove\n for key, item in args.remove:\n if key not in rc_config:\n if key != 'channels':\n error_and_exit(\"key %r is not in the config file\" % key, json=args.json,\n error_type=\"KeyError\")\n rc_config[key] = ['defaults']\n if item not in rc_config[key]:\n error_and_exit(\"%r is not in the %r key of the config file\" %\n (item, key), json=args.json, error_type=\"KeyError\")\n rc_config[key] = [i for i in rc_config[key] if i != item]\n\n # Remove Key\n for key, in args.remove_key:\n if key not in rc_config:\n error_and_exit(\"key %r is not in the config file\" % key, json=args.json,\n error_type=\"KeyError\")\n del rc_config[key]\n\n # config.rc_keys\n with open(rc_path, 'w') as rc:\n rc.write(yaml_dump(rc_config))\n\n if args.json:\n stdout_json_success(\n rc_path=rc_path,\n warnings=json_warnings,\n get=json_get\n )\n return\n", "path": "conda/cli/main_config.py"}]} | 3,801 | 299 |
gh_patches_debug_25991 | rasdani/github-patches | git_diff | nvaccess__nvda-11605 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Chrome: "list" is reported on every line of a list in rich text editors
### Steps to reproduce:
1. Open this URL in Chrome:
`data:text/html,<div contentEditable="true" role="textbox" aria-multiline="true">Before<ul><li>a</li><li>b</li></ul>After</div>`
2. Focus the text box and ensure you are in focus mode.
3. Press control+home.
4. Read through the content line by line using the down arrow key.
### Expected behavior:
```
Before
list bullet a
bullet b
out of list After
```
### Actual behavior:
```
Before
list bullet a
list bullet b
After
```
Note: Whether you hear "bullet" depends on your symbol level; I have mine set to "all".
### System configuration:
NVDA version: next-14373,6bbe5915
NVDA Installed or portable: installed
Windows version: Windows 10 Version 1703 (OS Build 16251.0)
Name and version of other software in use when reproducing the issue: Chrome Version 62.0.3201.2 (Official Build) canary (64-bit)
### Technical info:
This happens because a contentEditable list (the `ul` tag) does not get the read-only state. Lists and list boxes both get the same role (list), but they're normally differentiated by the read-only state; a `<ul>` has read-only, whereas a `<select size="2">` doesn't. However, in this case, I can kinda understand why Chrome doesn't set read-only; after all, it does have the editable state.
I think we should probably just tweak `TextInfo.getPresentationCategory` to treat editable liss as being containers; i.e. allow for the editable state as well as the read-only state in the rule for `PRESCAT_CONTAINER`. Alternatively, we could file a bug against Chrome requesting this get fixed on their side.
P2 because this is quite annoying when dealing with rich text editors in Chrome, including the Gmail composer.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/NVDAObjects/IAccessible/chromium.py`
Content:
```
1 #NVDAObjects/IAccessible/chromium.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #This file is covered by the GNU General Public License.
4 #See the file COPYING for more details.
5 # Copyright (C) 2010-2013 NV Access Limited
6
7 """NVDAObjects for the Chromium browser project
8 """
9
10 from comtypes import COMError
11 import oleacc
12 import controlTypes
13 import IAccessibleHandler
14 from NVDAObjects.IAccessible import IAccessible
15 from virtualBuffers.gecko_ia2 import Gecko_ia2 as GeckoVBuf, Gecko_ia2_TextInfo as GeckoVBufTextInfo
16 from . import ia2Web
17
18
19 class ChromeVBufTextInfo(GeckoVBufTextInfo):
20
21 def _normalizeControlField(self, attrs):
22 attrs = super()._normalizeControlField(attrs)
23 if attrs['role'] == controlTypes.ROLE_TOGGLEBUTTON and controlTypes.STATE_CHECKABLE in attrs['states']:
24 # In Chromium, the checkable state is exposed erroneously on toggle buttons.
25 attrs['states'].discard(controlTypes.STATE_CHECKABLE)
26 return attrs
27
28
29 class ChromeVBuf(GeckoVBuf):
30 TextInfo = ChromeVBufTextInfo
31
32 def __contains__(self, obj):
33 if obj.windowHandle != self.rootNVDAObject.windowHandle:
34 return False
35 if not isinstance(obj,ia2Web.Ia2Web):
36 # #4080: Input composition NVDAObjects are the same window but not IAccessible2!
37 return False
38 accId = obj.IA2UniqueID
39 if accId == self.rootID:
40 return True
41 try:
42 self.rootNVDAObject.IAccessibleObject.accChild(accId)
43 except COMError:
44 return False
45 return not self._isNVDAObjectInApplication(obj)
46
47
48 class Document(ia2Web.Document):
49
50 def _get_treeInterceptorClass(self):
51 states = self.states
52 if controlTypes.STATE_EDITABLE not in states and controlTypes.STATE_BUSY not in states:
53 return ChromeVBuf
54 return super(Document, self).treeInterceptorClass
55
56 class ComboboxListItem(IAccessible):
57 """
58 Represents a list item inside a combo box.
59 """
60
61 def _get_focusRedirect(self):
62 # Chrome 68 and below fires focus on the active list item of combo boxes even when the combo box is collapsed.
63 # We get around this by redirecting focus back up to the combo box itself if the list inside is invisible (I.e. the combo box is collapsed).
64 if self.parent and controlTypes.STATE_INVISIBLE in self.parent.states:
65 return self.parent.parent
66
67
68 class ToggleButton(ia2Web.Ia2Web):
69
70 def _get_states(self):
71 # In Chromium, the checkable state is exposed erroneously on toggle buttons.
72 states = super().states
73 states.discard(controlTypes.STATE_CHECKABLE)
74 return states
75
76
77 def findExtraOverlayClasses(obj, clsList):
78 """Determine the most appropriate class(es) for Chromium objects.
79 This works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.
80 """
81 if obj.role==controlTypes.ROLE_LISTITEM and obj.parent and obj.parent.parent and obj.parent.parent.role==controlTypes.ROLE_COMBOBOX:
82 clsList.append(ComboboxListItem)
83 elif obj.role == controlTypes.ROLE_TOGGLEBUTTON:
84 clsList.append(ToggleButton)
85 ia2Web.findExtraOverlayClasses(obj, clsList,
86 documentClass=Document)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/NVDAObjects/IAccessible/chromium.py b/source/NVDAObjects/IAccessible/chromium.py
--- a/source/NVDAObjects/IAccessible/chromium.py
+++ b/source/NVDAObjects/IAccessible/chromium.py
@@ -74,6 +74,22 @@
return states
+class PresentationalList(ia2Web.Ia2Web):
+ """
+ Ensures that lists like UL, DL and OL always have the readonly state.
+ A work-around for issue #7562
+ allowing us to differentiate presentational lists from interactive lists
+ (such as of size greater 1 and ARIA list boxes).
+ In firefox, this is possible by the presence of a read-only state,
+ even in a content editable.
+ """
+
+ def _get_states(self):
+ states = super().states
+ states.add(controlTypes.STATE_READONLY)
+ return states
+
+
def findExtraOverlayClasses(obj, clsList):
"""Determine the most appropriate class(es) for Chromium objects.
This works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.
@@ -82,5 +98,7 @@
clsList.append(ComboboxListItem)
elif obj.role == controlTypes.ROLE_TOGGLEBUTTON:
clsList.append(ToggleButton)
+ elif obj.role == controlTypes.ROLE_LIST and obj.IA2Attributes.get('tag') in ('ul', 'dl', 'ol'):
+ clsList.append(PresentationalList)
ia2Web.findExtraOverlayClasses(obj, clsList,
documentClass=Document)
| {"golden_diff": "diff --git a/source/NVDAObjects/IAccessible/chromium.py b/source/NVDAObjects/IAccessible/chromium.py\n--- a/source/NVDAObjects/IAccessible/chromium.py\n+++ b/source/NVDAObjects/IAccessible/chromium.py\n@@ -74,6 +74,22 @@\n \t\treturn states\r\n \r\n \r\n+class PresentationalList(ia2Web.Ia2Web):\r\n+\t\"\"\"\r\n+\tEnsures that lists like UL, DL and OL always have the readonly state.\r\n+\tA work-around for issue #7562\r\n+\tallowing us to differentiate presentational lists from interactive lists\r\n+\t(such as of size greater 1 and ARIA list boxes).\r\n+\tIn firefox, this is possible by the presence of a read-only state,\r\n+\teven in a content editable.\r\n+\t\"\"\"\r\n+\r\n+\tdef _get_states(self):\r\n+\t\tstates = super().states\r\n+\t\tstates.add(controlTypes.STATE_READONLY)\r\n+\t\treturn states\r\n+\r\n+\r\n def findExtraOverlayClasses(obj, clsList):\r\n \t\"\"\"Determine the most appropriate class(es) for Chromium objects.\r\n \tThis works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.\r\n@@ -82,5 +98,7 @@\n \t\tclsList.append(ComboboxListItem)\r\n \telif obj.role == controlTypes.ROLE_TOGGLEBUTTON:\r\n \t\tclsList.append(ToggleButton)\r\n+\telif obj.role == controlTypes.ROLE_LIST and obj.IA2Attributes.get('tag') in ('ul', 'dl', 'ol'):\r\n+\t\tclsList.append(PresentationalList)\r\n \tia2Web.findExtraOverlayClasses(obj, clsList,\r\n \t\tdocumentClass=Document)\n", "issue": "Chrome: \"list\" is reported on every line of a list in rich text editors\n### Steps to reproduce:\r\n1. Open this URL in Chrome:\r\n `data:text/html,<div contentEditable=\"true\" role=\"textbox\" aria-multiline=\"true\">Before<ul><li>a</li><li>b</li></ul>After</div>`\r\n2. Focus the text box and ensure you are in focus mode.\r\n3. Press control+home.\r\n4. Read through the content line by line using the down arrow key.\r\n\r\n### Expected behavior:\r\n```\r\nBefore\r\nlist bullet a\r\nbullet b\r\nout of list After\r\n```\r\n\r\n### Actual behavior:\r\n```\r\nBefore\r\nlist bullet a\r\nlist bullet b\r\nAfter\r\n```\r\n\r\nNote: Whether you hear \"bullet\" depends on your symbol level; I have mine set to \"all\".\r\n\r\n### System configuration:\r\nNVDA version: next-14373,6bbe5915\r\nNVDA Installed or portable: installed\r\nWindows version: Windows 10 Version 1703 (OS Build 16251.0)\r\nName and version of other software in use when reproducing the issue: Chrome Version 62.0.3201.2 (Official Build) canary (64-bit)\r\n\r\n### Technical info:\r\nThis happens because a contentEditable list (the `ul` tag) does not get the read-only state. Lists and list boxes both get the same role (list), but they're normally differentiated by the read-only state; a `<ul>` has read-only, whereas a `<select size=\"2\">` doesn't. However, in this case, I can kinda understand why Chrome doesn't set read-only; after all, it does have the editable state.\r\n\r\nI think we should probably just tweak `TextInfo.getPresentationCategory` to treat editable liss as being containers; i.e. allow for the editable state as well as the read-only state in the rule for `PRESCAT_CONTAINER`. 
Alternatively, we could file a bug against Chrome requesting this get fixed on their side.\r\n\r\nP2 because this is quite annoying when dealing with rich text editors in Chrome, including the Gmail composer.\n", "before_files": [{"content": "#NVDAObjects/IAccessible/chromium.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n# Copyright (C) 2010-2013 NV Access Limited\r\n\r\n\"\"\"NVDAObjects for the Chromium browser project\r\n\"\"\"\r\n\r\nfrom comtypes import COMError\r\nimport oleacc\r\nimport controlTypes\r\nimport IAccessibleHandler\r\nfrom NVDAObjects.IAccessible import IAccessible\r\nfrom virtualBuffers.gecko_ia2 import Gecko_ia2 as GeckoVBuf, Gecko_ia2_TextInfo as GeckoVBufTextInfo\r\nfrom . import ia2Web\r\n\r\n\r\nclass ChromeVBufTextInfo(GeckoVBufTextInfo):\r\n\r\n\tdef _normalizeControlField(self, attrs):\r\n\t\tattrs = super()._normalizeControlField(attrs)\r\n\t\tif attrs['role'] == controlTypes.ROLE_TOGGLEBUTTON and controlTypes.STATE_CHECKABLE in attrs['states']:\r\n\t\t\t# In Chromium, the checkable state is exposed erroneously on toggle buttons.\r\n\t\t\tattrs['states'].discard(controlTypes.STATE_CHECKABLE)\r\n\t\treturn attrs\r\n\r\n\r\nclass ChromeVBuf(GeckoVBuf):\r\n\tTextInfo = ChromeVBufTextInfo\r\n\r\n\tdef __contains__(self, obj):\r\n\t\tif obj.windowHandle != self.rootNVDAObject.windowHandle:\r\n\t\t\treturn False\r\n\t\tif not isinstance(obj,ia2Web.Ia2Web):\r\n\t\t\t# #4080: Input composition NVDAObjects are the same window but not IAccessible2!\r\n\t\t\treturn False\r\n\t\taccId = obj.IA2UniqueID\r\n\t\tif accId == self.rootID:\r\n\t\t\treturn True\r\n\t\ttry:\r\n\t\t\tself.rootNVDAObject.IAccessibleObject.accChild(accId)\r\n\t\texcept COMError:\r\n\t\t\treturn False\r\n\t\treturn not self._isNVDAObjectInApplication(obj)\r\n\r\n\r\nclass Document(ia2Web.Document):\r\n\r\n\tdef _get_treeInterceptorClass(self):\r\n\t\tstates = self.states\r\n\t\tif controlTypes.STATE_EDITABLE not in states and controlTypes.STATE_BUSY not in states:\r\n\t\t\treturn ChromeVBuf\r\n\t\treturn super(Document, self).treeInterceptorClass\r\n\r\nclass ComboboxListItem(IAccessible):\r\n\t\"\"\"\r\n\tRepresents a list item inside a combo box.\r\n\t\"\"\"\r\n\r\n\tdef _get_focusRedirect(self):\r\n\t\t# Chrome 68 and below fires focus on the active list item of combo boxes even when the combo box is collapsed.\r\n\t\t# We get around this by redirecting focus back up to the combo box itself if the list inside is invisible (I.e. 
the combo box is collapsed).\r\n\t\tif self.parent and controlTypes.STATE_INVISIBLE in self.parent.states:\r\n\t\t\treturn self.parent.parent\r\n\r\n\r\nclass ToggleButton(ia2Web.Ia2Web):\r\n\r\n\tdef _get_states(self):\r\n\t\t# In Chromium, the checkable state is exposed erroneously on toggle buttons.\r\n\t\tstates = super().states\r\n\t\tstates.discard(controlTypes.STATE_CHECKABLE)\r\n\t\treturn states\r\n\r\n\r\ndef findExtraOverlayClasses(obj, clsList):\r\n\t\"\"\"Determine the most appropriate class(es) for Chromium objects.\r\n\tThis works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.\r\n\t\"\"\"\r\n\tif obj.role==controlTypes.ROLE_LISTITEM and obj.parent and obj.parent.parent and obj.parent.parent.role==controlTypes.ROLE_COMBOBOX:\r\n\t\tclsList.append(ComboboxListItem)\r\n\telif obj.role == controlTypes.ROLE_TOGGLEBUTTON:\r\n\t\tclsList.append(ToggleButton)\r\n\tia2Web.findExtraOverlayClasses(obj, clsList,\r\n\t\tdocumentClass=Document)\r\n", "path": "source/NVDAObjects/IAccessible/chromium.py"}], "after_files": [{"content": "#NVDAObjects/IAccessible/chromium.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n# Copyright (C) 2010-2013 NV Access Limited\r\n\r\n\"\"\"NVDAObjects for the Chromium browser project\r\n\"\"\"\r\n\r\nfrom comtypes import COMError\r\nimport oleacc\r\nimport controlTypes\r\nimport IAccessibleHandler\r\nfrom NVDAObjects.IAccessible import IAccessible\r\nfrom virtualBuffers.gecko_ia2 import Gecko_ia2 as GeckoVBuf, Gecko_ia2_TextInfo as GeckoVBufTextInfo\r\nfrom . import ia2Web\r\n\r\n\r\nclass ChromeVBufTextInfo(GeckoVBufTextInfo):\r\n\r\n\tdef _normalizeControlField(self, attrs):\r\n\t\tattrs = super()._normalizeControlField(attrs)\r\n\t\tif attrs['role'] == controlTypes.ROLE_TOGGLEBUTTON and controlTypes.STATE_CHECKABLE in attrs['states']:\r\n\t\t\t# In Chromium, the checkable state is exposed erroneously on toggle buttons.\r\n\t\t\tattrs['states'].discard(controlTypes.STATE_CHECKABLE)\r\n\t\treturn attrs\r\n\r\n\r\nclass ChromeVBuf(GeckoVBuf):\r\n\tTextInfo = ChromeVBufTextInfo\r\n\r\n\tdef __contains__(self, obj):\r\n\t\tif obj.windowHandle != self.rootNVDAObject.windowHandle:\r\n\t\t\treturn False\r\n\t\tif not isinstance(obj,ia2Web.Ia2Web):\r\n\t\t\t# #4080: Input composition NVDAObjects are the same window but not IAccessible2!\r\n\t\t\treturn False\r\n\t\taccId = obj.IA2UniqueID\r\n\t\tif accId == self.rootID:\r\n\t\t\treturn True\r\n\t\ttry:\r\n\t\t\tself.rootNVDAObject.IAccessibleObject.accChild(accId)\r\n\t\texcept COMError:\r\n\t\t\treturn False\r\n\t\treturn not self._isNVDAObjectInApplication(obj)\r\n\r\n\r\nclass Document(ia2Web.Document):\r\n\r\n\tdef _get_treeInterceptorClass(self):\r\n\t\tstates = self.states\r\n\t\tif controlTypes.STATE_EDITABLE not in states and controlTypes.STATE_BUSY not in states:\r\n\t\t\treturn ChromeVBuf\r\n\t\treturn super(Document, self).treeInterceptorClass\r\n\r\nclass ComboboxListItem(IAccessible):\r\n\t\"\"\"\r\n\tRepresents a list item inside a combo box.\r\n\t\"\"\"\r\n\r\n\tdef _get_focusRedirect(self):\r\n\t\t# Chrome 68 and below fires focus on the active list item of combo boxes even when the combo box is collapsed.\r\n\t\t# We get around this by redirecting focus back up to the combo box itself if the list inside is invisible (I.e. 
the combo box is collapsed).\r\n\t\tif self.parent and controlTypes.STATE_INVISIBLE in self.parent.states:\r\n\t\t\treturn self.parent.parent\r\n\r\n\r\nclass ToggleButton(ia2Web.Ia2Web):\r\n\r\n\tdef _get_states(self):\r\n\t\t# In Chromium, the checkable state is exposed erroneously on toggle buttons.\r\n\t\tstates = super().states\r\n\t\tstates.discard(controlTypes.STATE_CHECKABLE)\r\n\t\treturn states\r\n\r\n\r\nclass PresentationalList(ia2Web.Ia2Web):\r\n\t\"\"\"\r\n\tEnsures that lists like UL, DL and OL always have the readonly state.\r\n\tA work-around for issue #7562\r\n\tallowing us to differentiate presentational lists from interactive lists\r\n\t(such as of size greater 1 and ARIA list boxes).\r\n\tIn firefox, this is possible by the presence of a read-only state,\r\n\teven in a content editable.\r\n\t\"\"\"\r\n\r\n\tdef _get_states(self):\r\n\t\tstates = super().states\r\n\t\tstates.add(controlTypes.STATE_READONLY)\r\n\t\treturn states\r\n\r\n\r\ndef findExtraOverlayClasses(obj, clsList):\r\n\t\"\"\"Determine the most appropriate class(es) for Chromium objects.\r\n\tThis works similarly to L{NVDAObjects.NVDAObject.findOverlayClasses} except that it never calls any other findOverlayClasses method.\r\n\t\"\"\"\r\n\tif obj.role==controlTypes.ROLE_LISTITEM and obj.parent and obj.parent.parent and obj.parent.parent.role==controlTypes.ROLE_COMBOBOX:\r\n\t\tclsList.append(ComboboxListItem)\r\n\telif obj.role == controlTypes.ROLE_TOGGLEBUTTON:\r\n\t\tclsList.append(ToggleButton)\r\n\telif obj.role == controlTypes.ROLE_LIST and obj.IA2Attributes.get('tag') in ('ul', 'dl', 'ol'):\r\n\t\tclsList.append(PresentationalList)\r\n\tia2Web.findExtraOverlayClasses(obj, clsList,\r\n\t\tdocumentClass=Document)\r\n", "path": "source/NVDAObjects/IAccessible/chromium.py"}]} | 1,677 | 378 |
gh_patches_debug_54061 | rasdani/github-patches | git_diff | docker__docker-py-2793 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Couldn't create secret object
I couldn't create secret object, the problem seemed to boil down to the way that a secret was being created from the docker daemon response.
https://github.com/docker/docker-py/blob/467cacb00d8dce68aa8ff2bdacc85acecd2d1207/docker/models/secrets.py#L31-L33
Docker version 18.03.1-ce and python version 3.5 had the following error:
````
File "docker/models/secrets.py", line 10 in __repr__
return "<%s: %s'>" % (self.__class__.__name__, self.name)
File "docker/models/secrets.py", line 14 in name
return self.attrs['Spec']['Name']
KeyError: 'Spec'
````
When calling:
````
import docker
client -docker.from_env()
mySecret = client.secrets.create(name='randomName', data='platform_node_requirements.md')
````
Changing the code to the following seemed to fix it.
````
obj = self.client.api.create_secret(**kwargs)
secret = self.client.secrets.get(obj.get('ID'))
return self.prepare_model(secret)
````
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/models/secrets.py`
Content:
```
1 from ..api import APIClient
2 from .resource import Model, Collection
3
4
5 class Secret(Model):
6 """A secret."""
7 id_attribute = 'ID'
8
9 def __repr__(self):
10 return "<%s: '%s'>" % (self.__class__.__name__, self.name)
11
12 @property
13 def name(self):
14 return self.attrs['Spec']['Name']
15
16 def remove(self):
17 """
18 Remove this secret.
19
20 Raises:
21 :py:class:`docker.errors.APIError`
22 If secret failed to remove.
23 """
24 return self.client.api.remove_secret(self.id)
25
26
27 class SecretCollection(Collection):
28 """Secrets on the Docker server."""
29 model = Secret
30
31 def create(self, **kwargs):
32 obj = self.client.api.create_secret(**kwargs)
33 return self.prepare_model(obj)
34 create.__doc__ = APIClient.create_secret.__doc__
35
36 def get(self, secret_id):
37 """
38 Get a secret.
39
40 Args:
41 secret_id (str): Secret ID.
42
43 Returns:
44 (:py:class:`Secret`): The secret.
45
46 Raises:
47 :py:class:`docker.errors.NotFound`
48 If the secret does not exist.
49 :py:class:`docker.errors.APIError`
50 If the server returns an error.
51 """
52 return self.prepare_model(self.client.api.inspect_secret(secret_id))
53
54 def list(self, **kwargs):
55 """
56 List secrets. Similar to the ``docker secret ls`` command.
57
58 Args:
59 filters (dict): Server-side list filtering options.
60
61 Returns:
62 (list of :py:class:`Secret`): The secrets.
63
64 Raises:
65 :py:class:`docker.errors.APIError`
66 If the server returns an error.
67 """
68 resp = self.client.api.secrets(**kwargs)
69 return [self.prepare_model(obj) for obj in resp]
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/models/secrets.py b/docker/models/secrets.py
--- a/docker/models/secrets.py
+++ b/docker/models/secrets.py
@@ -30,6 +30,7 @@
def create(self, **kwargs):
obj = self.client.api.create_secret(**kwargs)
+ obj.setdefault("Spec", {})["Name"] = kwargs.get("name")
return self.prepare_model(obj)
create.__doc__ = APIClient.create_secret.__doc__
| {"golden_diff": "diff --git a/docker/models/secrets.py b/docker/models/secrets.py\n--- a/docker/models/secrets.py\n+++ b/docker/models/secrets.py\n@@ -30,6 +30,7 @@\n \n def create(self, **kwargs):\n obj = self.client.api.create_secret(**kwargs)\n+ obj.setdefault(\"Spec\", {})[\"Name\"] = kwargs.get(\"name\")\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_secret.__doc__\n", "issue": "Couldn't create secret object\nI couldn't create secret object, the problem seemed to boil down to the way that a secret was being created from the docker daemon response. \r\n\r\nhttps://github.com/docker/docker-py/blob/467cacb00d8dce68aa8ff2bdacc85acecd2d1207/docker/models/secrets.py#L31-L33\r\n\r\nDocker version 18.03.1-ce and python version 3.5 had the following error:\r\n\r\n````\r\nFile \"docker/models/secrets.py\", line 10 in __repr__\r\nreturn \"<%s: %s'>\" % (self.__class__.__name__, self.name)\r\nFile \"docker/models/secrets.py\", line 14 in name\r\nreturn self.attrs['Spec']['Name']\r\nKeyError: 'Spec'\r\n\r\n````\r\n\r\nWhen calling: \r\n\r\n````\r\nimport docker\r\n\r\nclient -docker.from_env()\r\nmySecret = client.secrets.create(name='randomName', data='platform_node_requirements.md')\r\n\r\n````\r\n\r\nChanging the code to the following seemed to fix it. \r\n````\r\nobj = self.client.api.create_secret(**kwargs)\r\nsecret = self.client.secrets.get(obj.get('ID'))\r\nreturn self.prepare_model(secret)\r\n````\r\n\r\n\r\n\n", "before_files": [{"content": "from ..api import APIClient\nfrom .resource import Model, Collection\n\n\nclass Secret(Model):\n \"\"\"A secret.\"\"\"\n id_attribute = 'ID'\n\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, self.name)\n\n @property\n def name(self):\n return self.attrs['Spec']['Name']\n\n def remove(self):\n \"\"\"\n Remove this secret.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If secret failed to remove.\n \"\"\"\n return self.client.api.remove_secret(self.id)\n\n\nclass SecretCollection(Collection):\n \"\"\"Secrets on the Docker server.\"\"\"\n model = Secret\n\n def create(self, **kwargs):\n obj = self.client.api.create_secret(**kwargs)\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_secret.__doc__\n\n def get(self, secret_id):\n \"\"\"\n Get a secret.\n\n Args:\n secret_id (str): Secret ID.\n\n Returns:\n (:py:class:`Secret`): The secret.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the secret does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_secret(secret_id))\n\n def list(self, **kwargs):\n \"\"\"\n List secrets. 
Similar to the ``docker secret ls`` command.\n\n Args:\n filters (dict): Server-side list filtering options.\n\n Returns:\n (list of :py:class:`Secret`): The secrets.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.secrets(**kwargs)\n return [self.prepare_model(obj) for obj in resp]\n", "path": "docker/models/secrets.py"}], "after_files": [{"content": "from ..api import APIClient\nfrom .resource import Model, Collection\n\n\nclass Secret(Model):\n \"\"\"A secret.\"\"\"\n id_attribute = 'ID'\n\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, self.name)\n\n @property\n def name(self):\n return self.attrs['Spec']['Name']\n\n def remove(self):\n \"\"\"\n Remove this secret.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If secret failed to remove.\n \"\"\"\n return self.client.api.remove_secret(self.id)\n\n\nclass SecretCollection(Collection):\n \"\"\"Secrets on the Docker server.\"\"\"\n model = Secret\n\n def create(self, **kwargs):\n obj = self.client.api.create_secret(**kwargs)\n obj.setdefault(\"Spec\", {})[\"Name\"] = kwargs.get(\"name\")\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_secret.__doc__\n\n def get(self, secret_id):\n \"\"\"\n Get a secret.\n\n Args:\n secret_id (str): Secret ID.\n\n Returns:\n (:py:class:`Secret`): The secret.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the secret does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_secret(secret_id))\n\n def list(self, **kwargs):\n \"\"\"\n List secrets. Similar to the ``docker secret ls`` command.\n\n Args:\n filters (dict): Server-side list filtering options.\n\n Returns:\n (list of :py:class:`Secret`): The secrets.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.secrets(**kwargs)\n return [self.prepare_model(obj) for obj in resp]\n", "path": "docker/models/secrets.py"}]} | 1,048 | 101 |
gh_patches_debug_29970 | rasdani/github-patches | git_diff | inventree__InvenTree-1159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Quick view of what roles are assigned to each group

As above:
- Next to each group, show a column for each possible role
- For each cell, show which permissions are used (read / add / modify / delete)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/users/admin.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.utils.translation import ugettext_lazy as _
5
6 from django.contrib import admin, messages
7 from django import forms
8 from django.contrib.auth import get_user_model
9 from django.contrib.admin.widgets import FilteredSelectMultiple
10 from django.contrib.auth.models import Group
11 from django.contrib.auth.admin import UserAdmin
12 from django.utils.safestring import mark_safe
13
14 from users.models import RuleSet
15
16 User = get_user_model()
17
18
19 class RuleSetInline(admin.TabularInline):
20 """
21 Class for displaying inline RuleSet data in the Group admin page.
22 """
23
24 model = RuleSet
25 can_delete = False
26 verbose_name = 'Ruleset'
27 verbose_plural_name = 'Rulesets'
28 fields = ['name'] + [option for option in RuleSet.RULE_OPTIONS]
29 readonly_fields = ['name']
30 max_num = len(RuleSet.RULESET_CHOICES)
31 min_num = 1
32 extra = 0
33
34
35 class InvenTreeGroupAdminForm(forms.ModelForm):
36 """
37 Custom admin form for the Group model.
38
39 Adds the ability for editing user membership directly in the group admin page.
40 """
41
42 class Meta:
43 model = Group
44 exclude = []
45 fields = [
46 'name',
47 'users',
48 ]
49
50 def __init__(self, *args, **kwargs):
51 super().__init__(*args, **kwargs)
52
53 if self.instance.pk:
54 # Populate the users field with the current Group users.
55 self.fields['users'].initial = self.instance.user_set.all()
56
57 # Add the users field.
58 users = forms.ModelMultipleChoiceField(
59 queryset=User.objects.all(),
60 required=False,
61 widget=FilteredSelectMultiple('users', False),
62 label=_('Users'),
63 help_text=_('Select which users are assigned to this group')
64 )
65
66 def save_m2m(self):
67 # Add the users to the Group.
68
69 self.instance.user_set.set(self.cleaned_data['users'])
70
71 def save(self, *args, **kwargs):
72 # Default save
73 instance = super().save()
74 # Save many-to-many data
75 self.save_m2m()
76 return instance
77
78
79 class RoleGroupAdmin(admin.ModelAdmin):
80 """
81 Custom admin interface for the Group model
82 """
83
84 form = InvenTreeGroupAdminForm
85
86 inlines = [
87 RuleSetInline,
88 ]
89
90 def get_formsets_with_inlines(self, request, obj=None):
91 for inline in self.get_inline_instances(request, obj):
92 # Hide RuleSetInline in the 'Add role' view
93 if not isinstance(inline, RuleSetInline) or obj is not None:
94 yield inline.get_formset(request, obj), inline
95
96 filter_horizontal = ['permissions']
97
98 def save_model(self, request, obj, form, change):
99 """
100 This method serves two purposes:
101 - show warning message whenever the group users belong to multiple groups
102 - skip saving of the group instance model as inlines needs to be saved before.
103 """
104
105 # Get form cleaned data
106 users = form.cleaned_data['users']
107
108 # Check for users who are members of multiple groups
109 warning_message = ''
110 for user in users:
111 if user.groups.all().count() > 1:
112 warning_message += f'<br>- <b>{user.username}</b> is member of: '
113 for idx, group in enumerate(user.groups.all()):
114 warning_message += f'<b>{group.name}</b>'
115 if idx < len(user.groups.all()) - 1:
116 warning_message += ', '
117
118 # If any, display warning message when group is saved
119 if warning_message:
120 warning_message = mark_safe(_(f'The following users are members of multiple groups:'
121 f'{warning_message}'))
122 messages.add_message(request, messages.WARNING, warning_message)
123
124 def save_formset(self, request, form, formset, change):
125 # Save inline Rulesets
126 formset.save()
127 # Save Group instance and update permissions
128 form.instance.save(update_fields=['name'])
129
130
131 class InvenTreeUserAdmin(UserAdmin):
132 """
133 Custom admin page for the User model.
134
135 Hides the "permissions" view as this is now handled
136 entirely by groups and RuleSets.
137
138 (And it's confusing!)
139 """
140
141 fieldsets = (
142 (None, {'fields': ('username', 'password')}),
143 (_('Personal info'), {'fields': ('first_name', 'last_name', 'email')}),
144 (_('Permissions'), {
145 'fields': ('is_active', 'is_staff', 'is_superuser', 'groups'),
146 }),
147 (_('Important dates'), {'fields': ('last_login', 'date_joined')}),
148 )
149
150
151 admin.site.unregister(Group)
152 admin.site.register(Group, RoleGroupAdmin)
153
154 admin.site.unregister(User)
155 admin.site.register(User, InvenTreeUserAdmin)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/users/admin.py b/InvenTree/users/admin.py
--- a/InvenTree/users/admin.py
+++ b/InvenTree/users/admin.py
@@ -87,6 +87,64 @@
RuleSetInline,
]
+ list_display = ('name', 'admin', 'part', 'stock', 'build', 'purchase_order', 'sales_order')
+
+ def get_rule_set(self, obj, rule_set_type):
+ ''' Return list of permissions for the given ruleset '''
+
+ # Get all rulesets associated to object
+ rule_sets = RuleSet.objects.filter(group=obj.pk)
+
+ # Select ruleset based on type
+ for rule_set in rule_sets:
+ if rule_set.name == rule_set_type:
+ break
+
+ def append_permission_level(permission_level, next_level):
+ if not permission_level:
+ return next_level
+
+ if permission_level[:-1].endswith('|'):
+ permission_level += next_level
+ else:
+ permission_level += ' | ' + next_level
+
+ return permission_level
+
+ permission_level = ''
+
+ if rule_set.can_view:
+ permission_level = append_permission_level(permission_level, 'V')
+
+ if rule_set.can_add:
+ permission_level = append_permission_level(permission_level, 'A')
+
+ if rule_set.can_change:
+ permission_level = append_permission_level(permission_level, 'C')
+
+ if rule_set.can_delete:
+ permission_level = append_permission_level(permission_level, 'D')
+
+ return permission_level
+
+ def admin(self, obj):
+ return self.get_rule_set(obj, 'admin')
+
+ def part(self, obj):
+ return self.get_rule_set(obj, 'part')
+
+ def stock(self, obj):
+ return self.get_rule_set(obj, 'stock')
+
+ def build(self, obj):
+ return self.get_rule_set(obj, 'build')
+
+ def purchase_order(self, obj):
+ return self.get_rule_set(obj, 'purchase_order')
+
+ def sales_order(self, obj):
+ return self.get_rule_set(obj, 'sales_order')
+
def get_formsets_with_inlines(self, request, obj=None):
for inline in self.get_inline_instances(request, obj):
# Hide RuleSetInline in the 'Add role' view
| {"golden_diff": "diff --git a/InvenTree/users/admin.py b/InvenTree/users/admin.py\n--- a/InvenTree/users/admin.py\n+++ b/InvenTree/users/admin.py\n@@ -87,6 +87,64 @@\n RuleSetInline,\n ]\n \n+ list_display = ('name', 'admin', 'part', 'stock', 'build', 'purchase_order', 'sales_order')\n+\n+ def get_rule_set(self, obj, rule_set_type):\n+ ''' Return list of permissions for the given ruleset '''\n+\n+ # Get all rulesets associated to object\n+ rule_sets = RuleSet.objects.filter(group=obj.pk)\n+\n+ # Select ruleset based on type\n+ for rule_set in rule_sets:\n+ if rule_set.name == rule_set_type:\n+ break\n+\n+ def append_permission_level(permission_level, next_level):\n+ if not permission_level:\n+ return next_level\n+\n+ if permission_level[:-1].endswith('|'):\n+ permission_level += next_level\n+ else:\n+ permission_level += ' | ' + next_level\n+\n+ return permission_level\n+\n+ permission_level = ''\n+\n+ if rule_set.can_view:\n+ permission_level = append_permission_level(permission_level, 'V')\n+\n+ if rule_set.can_add:\n+ permission_level = append_permission_level(permission_level, 'A')\n+\n+ if rule_set.can_change:\n+ permission_level = append_permission_level(permission_level, 'C')\n+\n+ if rule_set.can_delete:\n+ permission_level = append_permission_level(permission_level, 'D')\n+ \n+ return permission_level\n+\n+ def admin(self, obj):\n+ return self.get_rule_set(obj, 'admin')\n+\n+ def part(self, obj):\n+ return self.get_rule_set(obj, 'part')\n+\n+ def stock(self, obj):\n+ return self.get_rule_set(obj, 'stock')\n+\n+ def build(self, obj):\n+ return self.get_rule_set(obj, 'build')\n+\n+ def purchase_order(self, obj):\n+ return self.get_rule_set(obj, 'purchase_order')\n+\n+ def sales_order(self, obj):\n+ return self.get_rule_set(obj, 'sales_order')\n+\n def get_formsets_with_inlines(self, request, obj=None):\n for inline in self.get_inline_instances(request, obj):\n # Hide RuleSetInline in the 'Add role' view\n", "issue": "Quick view of what roles are assigned to each group\n\r\n\r\nAs above:\r\n\r\n- Next to each group, show a column for each possible role\r\n- For each cell, show which permissions are used (read / add / modify / delete)\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.contrib import admin, messages\nfrom django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.admin.widgets import FilteredSelectMultiple\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.utils.safestring import mark_safe\n\nfrom users.models import RuleSet\n\nUser = get_user_model()\n\n\nclass RuleSetInline(admin.TabularInline):\n \"\"\"\n Class for displaying inline RuleSet data in the Group admin page.\n \"\"\"\n\n model = RuleSet\n can_delete = False\n verbose_name = 'Ruleset'\n verbose_plural_name = 'Rulesets'\n fields = ['name'] + [option for option in RuleSet.RULE_OPTIONS]\n readonly_fields = ['name']\n max_num = len(RuleSet.RULESET_CHOICES)\n min_num = 1\n extra = 0\n\n\nclass InvenTreeGroupAdminForm(forms.ModelForm):\n \"\"\"\n Custom admin form for the Group model.\n\n Adds the ability for editing user membership directly in the group admin page.\n \"\"\"\n\n class Meta:\n model = Group\n exclude = []\n fields = [\n 'name',\n 'users',\n ]\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n if self.instance.pk:\n # Populate the users field with the current 
Group users.\n self.fields['users'].initial = self.instance.user_set.all()\n\n # Add the users field.\n users = forms.ModelMultipleChoiceField(\n queryset=User.objects.all(),\n required=False,\n widget=FilteredSelectMultiple('users', False),\n label=_('Users'),\n help_text=_('Select which users are assigned to this group')\n )\n\n def save_m2m(self):\n # Add the users to the Group.\n\n self.instance.user_set.set(self.cleaned_data['users'])\n\n def save(self, *args, **kwargs):\n # Default save\n instance = super().save()\n # Save many-to-many data\n self.save_m2m()\n return instance\n\n\nclass RoleGroupAdmin(admin.ModelAdmin):\n \"\"\"\n Custom admin interface for the Group model\n \"\"\"\n\n form = InvenTreeGroupAdminForm\n\n inlines = [\n RuleSetInline,\n ]\n\n def get_formsets_with_inlines(self, request, obj=None):\n for inline in self.get_inline_instances(request, obj):\n # Hide RuleSetInline in the 'Add role' view\n if not isinstance(inline, RuleSetInline) or obj is not None:\n yield inline.get_formset(request, obj), inline\n\n filter_horizontal = ['permissions']\n\n def save_model(self, request, obj, form, change):\n \"\"\"\n This method serves two purposes:\n - show warning message whenever the group users belong to multiple groups\n - skip saving of the group instance model as inlines needs to be saved before.\n \"\"\"\n\n # Get form cleaned data\n users = form.cleaned_data['users']\n\n # Check for users who are members of multiple groups\n warning_message = ''\n for user in users:\n if user.groups.all().count() > 1:\n warning_message += f'<br>- <b>{user.username}</b> is member of: '\n for idx, group in enumerate(user.groups.all()):\n warning_message += f'<b>{group.name}</b>'\n if idx < len(user.groups.all()) - 1:\n warning_message += ', '\n\n # If any, display warning message when group is saved\n if warning_message:\n warning_message = mark_safe(_(f'The following users are members of multiple groups:'\n f'{warning_message}'))\n messages.add_message(request, messages.WARNING, warning_message)\n\n def save_formset(self, request, form, formset, change):\n # Save inline Rulesets\n formset.save()\n # Save Group instance and update permissions\n form.instance.save(update_fields=['name'])\n\n\nclass InvenTreeUserAdmin(UserAdmin):\n \"\"\"\n Custom admin page for the User model.\n\n Hides the \"permissions\" view as this is now handled\n entirely by groups and RuleSets.\n\n (And it's confusing!)\n \"\"\"\n\n fieldsets = (\n (None, {'fields': ('username', 'password')}),\n (_('Personal info'), {'fields': ('first_name', 'last_name', 'email')}),\n (_('Permissions'), {\n 'fields': ('is_active', 'is_staff', 'is_superuser', 'groups'),\n }),\n (_('Important dates'), {'fields': ('last_login', 'date_joined')}),\n )\n\n\nadmin.site.unregister(Group)\nadmin.site.register(Group, RoleGroupAdmin)\n\nadmin.site.unregister(User)\nadmin.site.register(User, InvenTreeUserAdmin)\n", "path": "InvenTree/users/admin.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.contrib import admin, messages\nfrom django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.admin.widgets import FilteredSelectMultiple\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.utils.safestring import mark_safe\n\nfrom users.models import RuleSet\n\nUser = get_user_model()\n\n\nclass RuleSetInline(admin.TabularInline):\n \"\"\"\n 
Class for displaying inline RuleSet data in the Group admin page.\n \"\"\"\n\n model = RuleSet\n can_delete = False\n verbose_name = 'Ruleset'\n verbose_plural_name = 'Rulesets'\n fields = ['name'] + [option for option in RuleSet.RULE_OPTIONS]\n readonly_fields = ['name']\n max_num = len(RuleSet.RULESET_CHOICES)\n min_num = 1\n extra = 0\n\n\nclass InvenTreeGroupAdminForm(forms.ModelForm):\n \"\"\"\n Custom admin form for the Group model.\n\n Adds the ability for editing user membership directly in the group admin page.\n \"\"\"\n\n class Meta:\n model = Group\n exclude = []\n fields = [\n 'name',\n 'users',\n ]\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n if self.instance.pk:\n # Populate the users field with the current Group users.\n self.fields['users'].initial = self.instance.user_set.all()\n\n # Add the users field.\n users = forms.ModelMultipleChoiceField(\n queryset=User.objects.all(),\n required=False,\n widget=FilteredSelectMultiple('users', False),\n label=_('Users'),\n help_text=_('Select which users are assigned to this group')\n )\n\n def save_m2m(self):\n # Add the users to the Group.\n\n self.instance.user_set.set(self.cleaned_data['users'])\n\n def save(self, *args, **kwargs):\n # Default save\n instance = super().save()\n # Save many-to-many data\n self.save_m2m()\n return instance\n\n\nclass RoleGroupAdmin(admin.ModelAdmin):\n \"\"\"\n Custom admin interface for the Group model\n \"\"\"\n\n form = InvenTreeGroupAdminForm\n\n inlines = [\n RuleSetInline,\n ]\n\n list_display = ('name', 'admin', 'part', 'stock', 'build', 'purchase_order', 'sales_order')\n\n def get_rule_set(self, obj, rule_set_type):\n ''' Return list of permissions for the given ruleset '''\n\n # Get all rulesets associated to object\n rule_sets = RuleSet.objects.filter(group=obj.pk)\n\n # Select ruleset based on type\n for rule_set in rule_sets:\n if rule_set.name == rule_set_type:\n break\n\n def append_permission_level(permission_level, next_level):\n if not permission_level:\n return next_level\n\n if permission_level[:-1].endswith('|'):\n permission_level += next_level\n else:\n permission_level += ' | ' + next_level\n\n return permission_level\n\n permission_level = ''\n\n if rule_set.can_view:\n permission_level = append_permission_level(permission_level, 'V')\n\n if rule_set.can_add:\n permission_level = append_permission_level(permission_level, 'A')\n\n if rule_set.can_change:\n permission_level = append_permission_level(permission_level, 'C')\n\n if rule_set.can_delete:\n permission_level = append_permission_level(permission_level, 'D')\n \n return permission_level\n\n def admin(self, obj):\n return self.get_rule_set(obj, 'admin')\n\n def part(self, obj):\n return self.get_rule_set(obj, 'part')\n\n def stock(self, obj):\n return self.get_rule_set(obj, 'stock')\n\n def build(self, obj):\n return self.get_rule_set(obj, 'build')\n\n def purchase_order(self, obj):\n return self.get_rule_set(obj, 'purchase_order')\n\n def sales_order(self, obj):\n return self.get_rule_set(obj, 'sales_order')\n\n def get_formsets_with_inlines(self, request, obj=None):\n for inline in self.get_inline_instances(request, obj):\n # Hide RuleSetInline in the 'Add role' view\n if not isinstance(inline, RuleSetInline) or obj is not None:\n yield inline.get_formset(request, obj), inline\n\n filter_horizontal = ['permissions']\n\n def save_model(self, request, obj, form, change):\n \"\"\"\n This method serves two purposes:\n - show warning message whenever the group users belong to multiple 
groups\n - skip saving of the group instance model as inlines needs to be saved before.\n \"\"\"\n\n # Get form cleaned data\n users = form.cleaned_data['users']\n\n # Check for users who are members of multiple groups\n warning_message = ''\n for user in users:\n if user.groups.all().count() > 1:\n warning_message += f'<br>- <b>{user.username}</b> is member of: '\n for idx, group in enumerate(user.groups.all()):\n warning_message += f'<b>{group.name}</b>'\n if idx < len(user.groups.all()) - 1:\n warning_message += ', '\n\n # If any, display warning message when group is saved\n if warning_message:\n warning_message = mark_safe(_(f'The following users are members of multiple groups:'\n f'{warning_message}'))\n messages.add_message(request, messages.WARNING, warning_message)\n\n def save_formset(self, request, form, formset, change):\n # Save inline Rulesets\n formset.save()\n # Save Group instance and update permissions\n form.instance.save(update_fields=['name'])\n\n\nclass InvenTreeUserAdmin(UserAdmin):\n \"\"\"\n Custom admin page for the User model.\n\n Hides the \"permissions\" view as this is now handled\n entirely by groups and RuleSets.\n\n (And it's confusing!)\n \"\"\"\n\n fieldsets = (\n (None, {'fields': ('username', 'password')}),\n (_('Personal info'), {'fields': ('first_name', 'last_name', 'email')}),\n (_('Permissions'), {\n 'fields': ('is_active', 'is_staff', 'is_superuser', 'groups'),\n }),\n (_('Important dates'), {'fields': ('last_login', 'date_joined')}),\n )\n\n\nadmin.site.unregister(Group)\nadmin.site.register(Group, RoleGroupAdmin)\n\nadmin.site.unregister(User)\nadmin.site.register(User, InvenTreeUserAdmin)\n", "path": "InvenTree/users/admin.py"}]} | 1,781 | 526 |
gh_patches_debug_4188 | rasdani/github-patches | git_diff | mozmeao__snippets-service-632 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Snippets not displaying as expected for all after the A/S Router implimentation
The five (5) hard-coded AS Router snippets in the Snippets NG base are not displaying as expected for me, but they are for @glogiotatidis. Clearing my browser cache didn't fix this issue.
**Fx GA 61.0.1**
Clicking "Preview URL" in the snippet admin or on the magnifying glass icon in Snippets NG base displays snippets that look like this:

**Fx Beta 62.0b12**
Clicking "Preview URL" in the snippet admin or on the magnifying glass icon in Snippets NG base displays snippets that look like this:

**Clicking the "Preview snippet" button in the snippet admin displays correctly.**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snippets/base/views.py`
Content:
```
1 import json
2 import logging
3
4 from distutils.util import strtobool
5
6 from django.conf import settings
7 from django.contrib.auth.decorators import permission_required
8 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
9 from django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseRedirect
10 from django.shortcuts import get_object_or_404, render
11 from django.utils.functional import lazy
12 from django.views.generic import TemplateView
13 from django.views.decorators.cache import cache_control
14 from django.views.decorators.csrf import csrf_exempt
15 from django.views.decorators.http import require_POST
16
17 import django_filters
18 from django_statsd.clients import statsd
19 from raven.contrib.django.models import client as sentry_client
20
21 from snippets.base import util
22 from snippets.base.decorators import access_control
23 from snippets.base.encoders import JSONSnippetEncoder
24 from snippets.base.models import Client, JSONSnippet, Snippet, SnippetBundle, SnippetTemplate
25 from snippets.base.util import get_object_or_none
26
27
28 def _bundle_timeout():
29 return getattr(settings, 'SNIPPET_BUNDLE_TIMEOUT')
30 SNIPPET_BUNDLE_TIMEOUT = lazy(_bundle_timeout, int)() # noqa
31
32
33 class SnippetFilter(django_filters.FilterSet):
34
35 class Meta:
36 model = Snippet
37 fields = ['on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr',
38 'template']
39
40
41 class JSONSnippetFilter(django_filters.FilterSet):
42
43 class Meta:
44 model = JSONSnippet
45 fields = ['on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr']
46
47
48 class IndexView(TemplateView):
49 def render(self, request, *args, **kwargs):
50 paginator = Paginator(self.snippetsfilter.qs, settings.SNIPPETS_PER_PAGE)
51
52 page = request.GET.get('page', 1)
53 try:
54 snippets = paginator.page(page)
55 except PageNotAnInteger:
56 snippets = paginator.page(1)
57 except EmptyPage:
58 snippets = paginator.page(paginator.num_pages)
59
60 # Display links to the page before and after the current page when
61 # applicable.
62 pagination_range = range(max(1, snippets.number - 2),
63 min(snippets.number + 3, paginator.num_pages + 1))
64 data = {'snippets': snippets,
65 'pagination_range': pagination_range,
66 'snippetsfilter': self.snippetsfilter}
67 return render(request, self.template_name, data)
68
69
70 class SnippetIndexView(IndexView):
71 template_name = 'base/index.jinja'
72
73 def get(self, request, *args, **kwargs):
74 self.snippets = (Snippet.objects
75 .filter(published=True)
76 .prefetch_related('locales', 'countries',
77 'exclude_from_search_providers'))
78 self.snippetsfilter = SnippetFilter(request.GET, self.snippets)
79 return self.render(request, *args, **kwargs)
80
81
82 class JSONSnippetIndexView(IndexView):
83 template_name = 'base/index-json.jinja'
84
85 def get(self, request, *args, **kwargs):
86 self.snippets = (JSONSnippet.objects
87 .filter(published=True)
88 .prefetch_related('locales', 'countries'))
89 self.snippetsfilter = JSONSnippetFilter(request.GET, self.snippets)
90 return self.render(request, *args, **kwargs)
91
92
93 @cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)
94 @access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)
95 def fetch_snippets(request, **kwargs):
96 """
97 Return one of the following responses:
98 - 204 with the bundle is empty
99 - 302 to a bundle URL after generating it if not cached.
100 """
101 statsd.incr('serve.snippets')
102
103 client = Client(**kwargs)
104 bundle = SnippetBundle(client)
105 if bundle.empty:
106 statsd.incr('bundle.empty')
107 # This is not a 204 because Activity Stream expects content, even if
108 # it's empty.
109 return HttpResponse(status=200, content='')
110 elif bundle.cached:
111 statsd.incr('bundle.cached')
112 else:
113 statsd.incr('bundle.generate')
114 bundle.generate()
115
116 return HttpResponseRedirect(bundle.url)
117
118
119 @cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)
120 @access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)
121 def fetch_json_snippets(request, **kwargs):
122 statsd.incr('serve.json_snippets')
123 client = Client(**kwargs)
124 matching_snippets = (JSONSnippet.objects
125 .filter(published=True)
126 .match_client(client)
127 .filter_by_available())
128 return HttpResponse(json.dumps(matching_snippets, cls=JSONSnippetEncoder),
129 content_type='application/json')
130
131
132 @csrf_exempt
133 @permission_required('base.change_snippet')
134 def preview_snippet(request):
135 """
136 Build a snippet using info from the POST parameters, and preview that
137 snippet on a mock about:home page.
138 """
139 try:
140 template_id = int(request.POST.get('template_id', None))
141 except (TypeError, ValueError):
142 return HttpResponseBadRequest()
143
144 template = get_object_or_none(SnippetTemplate, id=template_id)
145 data = request.POST.get('data', None)
146
147 # Validate that data is JSON.
148 try:
149 json.loads(data)
150 except (TypeError, ValueError):
151 data = None
152
153 # If your parameters are wrong, I have no sympathy for you.
154 if data is None or template is None:
155 return HttpResponseBadRequest()
156
157 # Build a snippet that isn't saved so we can render it.
158 snippet = Snippet(template=template, data=data)
159
160 if strtobool(request.POST.get('activity_stream', 'false')):
161 template_name = 'base/preview_as.jinja'
162 preview_client = Client('5', 'Firefox', '57.0', 'default', 'default', 'en-US',
163 'release', 'default', 'default', 'default')
164 else:
165 template_name = 'base/preview.jinja'
166 preview_client = Client('4', 'Firefox', '24.0', 'default', 'default', 'en-US',
167 'release', 'default', 'default', 'default')
168
169 skip_boilerplate = request.POST.get('skip_boilerplate', 'false')
170 skip_boilerplate = strtobool(skip_boilerplate)
171 if skip_boilerplate:
172 template_name = 'base/preview_without_shell.jinja'
173
174 return render(request, template_name, {
175 'snippets_json': json.dumps([snippet.to_dict()]),
176 'client': preview_client,
177 'preview': True,
178 'current_firefox_major_version': util.current_firefox_major_version(),
179 })
180
181
182 def show_snippet(request, snippet_id, uuid=False):
183 preview_client = Client('4', 'Firefox', '24.0', 'default', 'default', 'en-US',
184 'release', 'default', 'default', 'default')
185
186 if uuid:
187 snippet = get_object_or_404(Snippet, uuid=snippet_id)
188 else:
189 snippet = get_object_or_404(Snippet, pk=snippet_id)
190 if not snippet.published and not request.user.is_authenticated():
191 raise Http404()
192
193 template = 'base/preview.jinja'
194 if snippet.on_startpage_5:
195 template = 'base/preview_as.jinja'
196 return render(request, template, {
197 'snippets_json': json.dumps([snippet.to_dict()]),
198 'client': preview_client,
199 'preview': True,
200 'current_firefox_major_version': util.current_firefox_major_version(),
201 })
202
203
204 @csrf_exempt
205 @require_POST
206 def csp_violation_capture(request):
207 data = sentry_client.get_data_from_request(request)
208 data.update({
209 'level': logging.INFO,
210 'logger': 'CSP',
211 })
212 try:
213 csp_data = json.loads(request.body)
214 except ValueError:
215 # Cannot decode CSP violation data, ignore
216 return HttpResponseBadRequest('Invalid CSP Report')
217
218 try:
219 blocked_uri = csp_data['csp-report']['blocked-uri']
220 except KeyError:
221 # Incomplete CSP report
222 return HttpResponseBadRequest('Incomplete CSP Report')
223
224 sentry_client.captureMessage(
225 message='CSP Violation: {}'.format(blocked_uri),
226 data=data)
227
228 return HttpResponse('Captured CSP violation, thanks for reporting.')
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/snippets/base/views.py b/snippets/base/views.py
--- a/snippets/base/views.py
+++ b/snippets/base/views.py
@@ -191,7 +191,7 @@
raise Http404()
template = 'base/preview.jinja'
- if snippet.on_startpage_5:
+ if snippet.on_startpage_5 or snippet.on_startpage_6:
template = 'base/preview_as.jinja'
return render(request, template, {
'snippets_json': json.dumps([snippet.to_dict()]),
| {"golden_diff": "diff --git a/snippets/base/views.py b/snippets/base/views.py\n--- a/snippets/base/views.py\n+++ b/snippets/base/views.py\n@@ -191,7 +191,7 @@\n raise Http404()\n \n template = 'base/preview.jinja'\n- if snippet.on_startpage_5:\n+ if snippet.on_startpage_5 or snippet.on_startpage_6:\n template = 'base/preview_as.jinja'\n return render(request, template, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n", "issue": "Snippets not displaying as expected for all after the A/S Router implimentation\nThe five (5) hard-coded AS Router snippets in the Snippets NG base are not displaying as expected for me, but they are for @glogiotatidis. Clearing my browser cache didn't fix this issue.\r\n\r\n**Fx GA 61.0.1**\r\nClicking \"Preview URL\" in the snippet admin or on the magnifying glass icon in Snippets NG base displays snippets that look like this:\r\n\r\n\r\n**Fx Beta 62.0b12**\r\nClicking \"Preview URL\" in the snippet admin or on the magnifying glass icon in Snippets NG base displays snippets that look like this:\r\n\r\n\r\n**Clicking the \"Preview snippet\" button in the snippet admin displays correctly.**\n", "before_files": [{"content": "import json\nimport logging\n\nfrom distutils.util import strtobool\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\nfrom django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render\nfrom django.utils.functional import lazy\nfrom django.views.generic import TemplateView\nfrom django.views.decorators.cache import cache_control\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\n\nimport django_filters\nfrom django_statsd.clients import statsd\nfrom raven.contrib.django.models import client as sentry_client\n\nfrom snippets.base import util\nfrom snippets.base.decorators import access_control\nfrom snippets.base.encoders import JSONSnippetEncoder\nfrom snippets.base.models import Client, JSONSnippet, Snippet, SnippetBundle, SnippetTemplate\nfrom snippets.base.util import get_object_or_none\n\n\ndef _bundle_timeout():\n return getattr(settings, 'SNIPPET_BUNDLE_TIMEOUT')\nSNIPPET_BUNDLE_TIMEOUT = lazy(_bundle_timeout, int)() # noqa\n\n\nclass SnippetFilter(django_filters.FilterSet):\n\n class Meta:\n model = Snippet\n fields = ['on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr',\n 'template']\n\n\nclass JSONSnippetFilter(django_filters.FilterSet):\n\n class Meta:\n model = JSONSnippet\n fields = ['on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr']\n\n\nclass IndexView(TemplateView):\n def render(self, request, *args, **kwargs):\n paginator = Paginator(self.snippetsfilter.qs, settings.SNIPPETS_PER_PAGE)\n\n page = request.GET.get('page', 1)\n try:\n snippets = paginator.page(page)\n except PageNotAnInteger:\n snippets = paginator.page(1)\n except EmptyPage:\n snippets = paginator.page(paginator.num_pages)\n\n # Display links to the page before and after the current page when\n # applicable.\n pagination_range = range(max(1, snippets.number - 2),\n min(snippets.number + 3, paginator.num_pages + 1))\n data = {'snippets': snippets,\n 'pagination_range': pagination_range,\n 'snippetsfilter': self.snippetsfilter}\n return render(request, self.template_name, data)\n\n\nclass SnippetIndexView(IndexView):\n template_name = 'base/index.jinja'\n\n def get(self, request, 
*args, **kwargs):\n self.snippets = (Snippet.objects\n .filter(published=True)\n .prefetch_related('locales', 'countries',\n 'exclude_from_search_providers'))\n self.snippetsfilter = SnippetFilter(request.GET, self.snippets)\n return self.render(request, *args, **kwargs)\n\n\nclass JSONSnippetIndexView(IndexView):\n template_name = 'base/index-json.jinja'\n\n def get(self, request, *args, **kwargs):\n self.snippets = (JSONSnippet.objects\n .filter(published=True)\n .prefetch_related('locales', 'countries'))\n self.snippetsfilter = JSONSnippetFilter(request.GET, self.snippets)\n return self.render(request, *args, **kwargs)\n\n\n@cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)\n@access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)\ndef fetch_snippets(request, **kwargs):\n \"\"\"\n Return one of the following responses:\n - 204 with the bundle is empty\n - 302 to a bundle URL after generating it if not cached.\n \"\"\"\n statsd.incr('serve.snippets')\n\n client = Client(**kwargs)\n bundle = SnippetBundle(client)\n if bundle.empty:\n statsd.incr('bundle.empty')\n # This is not a 204 because Activity Stream expects content, even if\n # it's empty.\n return HttpResponse(status=200, content='')\n elif bundle.cached:\n statsd.incr('bundle.cached')\n else:\n statsd.incr('bundle.generate')\n bundle.generate()\n\n return HttpResponseRedirect(bundle.url)\n\n\n@cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)\n@access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)\ndef fetch_json_snippets(request, **kwargs):\n statsd.incr('serve.json_snippets')\n client = Client(**kwargs)\n matching_snippets = (JSONSnippet.objects\n .filter(published=True)\n .match_client(client)\n .filter_by_available())\n return HttpResponse(json.dumps(matching_snippets, cls=JSONSnippetEncoder),\n content_type='application/json')\n\n\n@csrf_exempt\n@permission_required('base.change_snippet')\ndef preview_snippet(request):\n \"\"\"\n Build a snippet using info from the POST parameters, and preview that\n snippet on a mock about:home page.\n \"\"\"\n try:\n template_id = int(request.POST.get('template_id', None))\n except (TypeError, ValueError):\n return HttpResponseBadRequest()\n\n template = get_object_or_none(SnippetTemplate, id=template_id)\n data = request.POST.get('data', None)\n\n # Validate that data is JSON.\n try:\n json.loads(data)\n except (TypeError, ValueError):\n data = None\n\n # If your parameters are wrong, I have no sympathy for you.\n if data is None or template is None:\n return HttpResponseBadRequest()\n\n # Build a snippet that isn't saved so we can render it.\n snippet = Snippet(template=template, data=data)\n\n if strtobool(request.POST.get('activity_stream', 'false')):\n template_name = 'base/preview_as.jinja'\n preview_client = Client('5', 'Firefox', '57.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n else:\n template_name = 'base/preview.jinja'\n preview_client = Client('4', 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n skip_boilerplate = request.POST.get('skip_boilerplate', 'false')\n skip_boilerplate = strtobool(skip_boilerplate)\n if skip_boilerplate:\n template_name = 'base/preview_without_shell.jinja'\n\n return render(request, template_name, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\ndef show_snippet(request, snippet_id, uuid=False):\n preview_client = Client('4', 
'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n if uuid:\n snippet = get_object_or_404(Snippet, uuid=snippet_id)\n else:\n snippet = get_object_or_404(Snippet, pk=snippet_id)\n if not snippet.published and not request.user.is_authenticated():\n raise Http404()\n\n template = 'base/preview.jinja'\n if snippet.on_startpage_5:\n template = 'base/preview_as.jinja'\n return render(request, template, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\n@csrf_exempt\n@require_POST\ndef csp_violation_capture(request):\n data = sentry_client.get_data_from_request(request)\n data.update({\n 'level': logging.INFO,\n 'logger': 'CSP',\n })\n try:\n csp_data = json.loads(request.body)\n except ValueError:\n # Cannot decode CSP violation data, ignore\n return HttpResponseBadRequest('Invalid CSP Report')\n\n try:\n blocked_uri = csp_data['csp-report']['blocked-uri']\n except KeyError:\n # Incomplete CSP report\n return HttpResponseBadRequest('Incomplete CSP Report')\n\n sentry_client.captureMessage(\n message='CSP Violation: {}'.format(blocked_uri),\n data=data)\n\n return HttpResponse('Captured CSP violation, thanks for reporting.')\n", "path": "snippets/base/views.py"}], "after_files": [{"content": "import json\nimport logging\n\nfrom distutils.util import strtobool\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\nfrom django.http import Http404, HttpResponse, HttpResponseBadRequest, HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render\nfrom django.utils.functional import lazy\nfrom django.views.generic import TemplateView\nfrom django.views.decorators.cache import cache_control\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\n\nimport django_filters\nfrom django_statsd.clients import statsd\nfrom raven.contrib.django.models import client as sentry_client\n\nfrom snippets.base import util\nfrom snippets.base.decorators import access_control\nfrom snippets.base.encoders import JSONSnippetEncoder\nfrom snippets.base.models import Client, JSONSnippet, Snippet, SnippetBundle, SnippetTemplate\nfrom snippets.base.util import get_object_or_none\n\n\ndef _bundle_timeout():\n return getattr(settings, 'SNIPPET_BUNDLE_TIMEOUT')\nSNIPPET_BUNDLE_TIMEOUT = lazy(_bundle_timeout, int)() # noqa\n\n\nclass SnippetFilter(django_filters.FilterSet):\n\n class Meta:\n model = Snippet\n fields = ['on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr',\n 'template']\n\n\nclass JSONSnippetFilter(django_filters.FilterSet):\n\n class Meta:\n model = JSONSnippet\n fields = ['on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr']\n\n\nclass IndexView(TemplateView):\n def render(self, request, *args, **kwargs):\n paginator = Paginator(self.snippetsfilter.qs, settings.SNIPPETS_PER_PAGE)\n\n page = request.GET.get('page', 1)\n try:\n snippets = paginator.page(page)\n except PageNotAnInteger:\n snippets = paginator.page(1)\n except EmptyPage:\n snippets = paginator.page(paginator.num_pages)\n\n # Display links to the page before and after the current page when\n # applicable.\n pagination_range = range(max(1, snippets.number - 2),\n min(snippets.number + 3, paginator.num_pages + 1))\n data = {'snippets': snippets,\n 'pagination_range': 
pagination_range,\n 'snippetsfilter': self.snippetsfilter}\n return render(request, self.template_name, data)\n\n\nclass SnippetIndexView(IndexView):\n template_name = 'base/index.jinja'\n\n def get(self, request, *args, **kwargs):\n self.snippets = (Snippet.objects\n .filter(published=True)\n .prefetch_related('locales', 'countries',\n 'exclude_from_search_providers'))\n self.snippetsfilter = SnippetFilter(request.GET, self.snippets)\n return self.render(request, *args, **kwargs)\n\n\nclass JSONSnippetIndexView(IndexView):\n template_name = 'base/index-json.jinja'\n\n def get(self, request, *args, **kwargs):\n self.snippets = (JSONSnippet.objects\n .filter(published=True)\n .prefetch_related('locales', 'countries'))\n self.snippetsfilter = JSONSnippetFilter(request.GET, self.snippets)\n return self.render(request, *args, **kwargs)\n\n\n@cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)\n@access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)\ndef fetch_snippets(request, **kwargs):\n \"\"\"\n Return one of the following responses:\n - 204 with the bundle is empty\n - 302 to a bundle URL after generating it if not cached.\n \"\"\"\n statsd.incr('serve.snippets')\n\n client = Client(**kwargs)\n bundle = SnippetBundle(client)\n if bundle.empty:\n statsd.incr('bundle.empty')\n # This is not a 204 because Activity Stream expects content, even if\n # it's empty.\n return HttpResponse(status=200, content='')\n elif bundle.cached:\n statsd.incr('bundle.cached')\n else:\n statsd.incr('bundle.generate')\n bundle.generate()\n\n return HttpResponseRedirect(bundle.url)\n\n\n@cache_control(public=True, max_age=SNIPPET_BUNDLE_TIMEOUT)\n@access_control(max_age=SNIPPET_BUNDLE_TIMEOUT)\ndef fetch_json_snippets(request, **kwargs):\n statsd.incr('serve.json_snippets')\n client = Client(**kwargs)\n matching_snippets = (JSONSnippet.objects\n .filter(published=True)\n .match_client(client)\n .filter_by_available())\n return HttpResponse(json.dumps(matching_snippets, cls=JSONSnippetEncoder),\n content_type='application/json')\n\n\n@csrf_exempt\n@permission_required('base.change_snippet')\ndef preview_snippet(request):\n \"\"\"\n Build a snippet using info from the POST parameters, and preview that\n snippet on a mock about:home page.\n \"\"\"\n try:\n template_id = int(request.POST.get('template_id', None))\n except (TypeError, ValueError):\n return HttpResponseBadRequest()\n\n template = get_object_or_none(SnippetTemplate, id=template_id)\n data = request.POST.get('data', None)\n\n # Validate that data is JSON.\n try:\n json.loads(data)\n except (TypeError, ValueError):\n data = None\n\n # If your parameters are wrong, I have no sympathy for you.\n if data is None or template is None:\n return HttpResponseBadRequest()\n\n # Build a snippet that isn't saved so we can render it.\n snippet = Snippet(template=template, data=data)\n\n if strtobool(request.POST.get('activity_stream', 'false')):\n template_name = 'base/preview_as.jinja'\n preview_client = Client('5', 'Firefox', '57.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n else:\n template_name = 'base/preview.jinja'\n preview_client = Client('4', 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n skip_boilerplate = request.POST.get('skip_boilerplate', 'false')\n skip_boilerplate = strtobool(skip_boilerplate)\n if skip_boilerplate:\n template_name = 'base/preview_without_shell.jinja'\n\n return render(request, template_name, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 
'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\ndef show_snippet(request, snippet_id, uuid=False):\n preview_client = Client('4', 'Firefox', '24.0', 'default', 'default', 'en-US',\n 'release', 'default', 'default', 'default')\n\n if uuid:\n snippet = get_object_or_404(Snippet, uuid=snippet_id)\n else:\n snippet = get_object_or_404(Snippet, pk=snippet_id)\n if not snippet.published and not request.user.is_authenticated():\n raise Http404()\n\n template = 'base/preview.jinja'\n if snippet.on_startpage_5 or snippet.on_startpage_6:\n template = 'base/preview_as.jinja'\n return render(request, template, {\n 'snippets_json': json.dumps([snippet.to_dict()]),\n 'client': preview_client,\n 'preview': True,\n 'current_firefox_major_version': util.current_firefox_major_version(),\n })\n\n\n@csrf_exempt\n@require_POST\ndef csp_violation_capture(request):\n data = sentry_client.get_data_from_request(request)\n data.update({\n 'level': logging.INFO,\n 'logger': 'CSP',\n })\n try:\n csp_data = json.loads(request.body)\n except ValueError:\n # Cannot decode CSP violation data, ignore\n return HttpResponseBadRequest('Invalid CSP Report')\n\n try:\n blocked_uri = csp_data['csp-report']['blocked-uri']\n except KeyError:\n # Incomplete CSP report\n return HttpResponseBadRequest('Incomplete CSP Report')\n\n sentry_client.captureMessage(\n message='CSP Violation: {}'.format(blocked_uri),\n data=data)\n\n return HttpResponse('Captured CSP violation, thanks for reporting.')\n", "path": "snippets/base/views.py"}]} | 2,959 | 124 |
gh_patches_debug_26418 | rasdani/github-patches | git_diff | chainer__chainer-7533 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PTB custom loop examples are broken
The custom loop version of the PTB example causes the following error:
```
AttributeError: 'ParallelSequentialIterator' object has no attribute 'reset'
```
`ParallelSequentialIterator` is a custom iterator that is defined in the example and does not implement the `reset` method.
The PTB example of the static graph optimization has the same problem.
- [examples/ptb/train_ptb_custom_loop.py](https://github.com/chainer/chainer/blob/master/examples/ptb/train_ptb_custom_loop.py)
- [examples/static_graph_optimizations/ptb/train_ptb_custom_loop.py](https://github.com/chainer/chainer/blob/master/examples/static_graph_optimizations/ptb/train_ptb_custom_loop.py)
It looks like the change in #5834 was not enough.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/ptb/train_ptb.py`
Content:
```
1 #!/usr/bin/env python
2 """Sample script of recurrent neural network language model.
3
4 This code is ported from the following implementation written in Torch.
5 https://github.com/tomsercu/lstm
6
7 Note for contributors:
8 This example code is referred to from the "RNN Language Models" tutorial.
9 If this file is to be modified, please also update the line numbers in
10 `docs/source/examples/ptb.rst` accordingly.
11
12 """
13 from __future__ import division
14 import argparse
15 import sys
16
17 import numpy as np
18
19 import chainer
20 import chainer.functions as F
21 import chainer.links as L
22 from chainer import training
23 from chainer.training import extensions
24 import chainerx
25
26
27 # Definition of a recurrent net for language modeling
28 class RNNForLM(chainer.Chain):
29
30 def __init__(self, n_vocab, n_units):
31 super(RNNForLM, self).__init__()
32 with self.init_scope():
33 self.embed = L.EmbedID(n_vocab, n_units)
34 self.l1 = L.LSTM(n_units, n_units)
35 self.l2 = L.LSTM(n_units, n_units)
36 self.l3 = L.Linear(n_units, n_vocab)
37
38 for param in self.params():
39 param.array[...] = np.random.uniform(-0.1, 0.1, param.shape)
40
41 def reset_state(self):
42 self.l1.reset_state()
43 self.l2.reset_state()
44
45 def forward(self, x):
46 h0 = self.embed(x)
47 h1 = self.l1(F.dropout(h0))
48 h2 = self.l2(F.dropout(h1))
49 y = self.l3(F.dropout(h2))
50 return y
51
52
53 # Dataset iterator to create a batch of sequences at different positions.
54 # This iterator returns a pair of current words and the next words. Each
55 # example is a part of sequences starting from the different offsets
56 # equally spaced within the whole sequence.
57 class ParallelSequentialIterator(chainer.dataset.Iterator):
58
59 def __init__(self, dataset, batch_size, repeat=True):
60 self.dataset = dataset
61 self.batch_size = batch_size # batch size
62 # Number of completed sweeps over the dataset. In this case, it is
63 # incremented if every word is visited at least once after the last
64 # increment.
65 self.epoch = 0
66 # True if the epoch is incremented at the last iteration.
67 self.is_new_epoch = False
68 self.repeat = repeat
69 length = len(dataset)
70 # Offsets maintain the position of each sequence in the mini-batch.
71 self.offsets = [i * length // batch_size for i in range(batch_size)]
72 # NOTE: this is not a count of parameter updates. It is just a count of
73 # calls of ``__next__``.
74 self.iteration = 0
75 # use -1 instead of None internally
76 self._previous_epoch_detail = -1.
77
78 def __next__(self):
79 # This iterator returns a list representing a mini-batch. Each item
80 # indicates a different position in the original sequence. Each item is
81 # represented by a pair of two word IDs. The first word is at the
82 # "current" position, while the second word at the next position.
83 # At each iteration, the iteration count is incremented, which pushes
84 # forward the "current" position.
85 length = len(self.dataset)
86 if not self.repeat and self.iteration * self.batch_size >= length:
87 # If not self.repeat, this iterator stops at the end of the first
88 # epoch (i.e., when all words are visited once).
89 raise StopIteration
90 cur_words = self.get_words()
91 self._previous_epoch_detail = self.epoch_detail
92 self.iteration += 1
93 next_words = self.get_words()
94
95 epoch = self.iteration * self.batch_size // length
96 self.is_new_epoch = self.epoch < epoch
97 if self.is_new_epoch:
98 self.epoch = epoch
99
100 return list(zip(cur_words, next_words))
101
102 @property
103 def epoch_detail(self):
104 # Floating point version of epoch.
105 return self.iteration * self.batch_size / len(self.dataset)
106
107 @property
108 def previous_epoch_detail(self):
109 if self._previous_epoch_detail < 0:
110 return None
111 return self._previous_epoch_detail
112
113 def get_words(self):
114 # It returns a list of current words.
115 return [self.dataset[(offset + self.iteration) % len(self.dataset)]
116 for offset in self.offsets]
117
118 def serialize(self, serializer):
119 # It is important to serialize the state to be recovered on resume.
120 self.iteration = serializer('iteration', self.iteration)
121 self.epoch = serializer('epoch', self.epoch)
122 try:
123 self._previous_epoch_detail = serializer(
124 'previous_epoch_detail', self._previous_epoch_detail)
125 except KeyError:
126 # guess previous_epoch_detail for older version
127 self._previous_epoch_detail = self.epoch + \
128 (self.current_position - self.batch_size) / len(self.dataset)
129 if self.epoch_detail > 0:
130 self._previous_epoch_detail = max(
131 self._previous_epoch_detail, 0.)
132 else:
133 self._previous_epoch_detail = -1.
134
135
136 # Custom updater for truncated BackProp Through Time (BPTT)
137 class BPTTUpdater(training.updaters.StandardUpdater):
138
139 def __init__(self, train_iter, optimizer, bprop_len, device):
140 super(BPTTUpdater, self).__init__(
141 train_iter, optimizer, device=device)
142 self.bprop_len = bprop_len
143
144 # The core part of the update routine can be customized by overriding.
145 def update_core(self):
146 loss = 0
147 # When we pass one iterator and optimizer to StandardUpdater.__init__,
148 # they are automatically named 'main'.
149 train_iter = self.get_iterator('main')
150 optimizer = self.get_optimizer('main')
151
152 # Progress the dataset iterator for bprop_len words at each iteration.
153 for i in range(self.bprop_len):
154 # Get the next batch (a list of tuples of two word IDs)
155 batch = train_iter.__next__()
156
157 # Concatenate the word IDs to matrices and send them to the device
158 # self.converter does this job
159 # (it is chainer.dataset.concat_examples by default)
160 x, t = self.converter(batch, self.device)
161
162 # Compute the loss at this time step and accumulate it
163 loss += optimizer.target(x, t)
164
165 optimizer.target.cleargrads() # Clear the parameter gradients
166 loss.backward() # Backprop
167 loss.unchain_backward() # Truncate the graph
168 optimizer.update() # Update the parameters
169
170
171 # Routine to rewrite the result dictionary of LogReport to add perplexity
172 # values
173 def compute_perplexity(result):
174 result['perplexity'] = np.exp(result['main/loss'])
175 if 'validation/main/loss' in result:
176 result['val_perplexity'] = np.exp(result['validation/main/loss'])
177
178
179 def main():
180 parser = argparse.ArgumentParser()
181 parser.add_argument('--batchsize', '-b', type=int, default=20,
182 help='Number of examples in each mini-batch')
183 parser.add_argument('--bproplen', '-l', type=int, default=35,
184 help='Number of words in each mini-batch '
185 '(= length of truncated BPTT)')
186 parser.add_argument('--epoch', '-e', type=int, default=39,
187 help='Number of sweeps over the dataset to train')
188 parser.add_argument('--device', '-d', type=str, default='-1',
189 help='Device specifier. Either ChainerX device '
190 'specifier or an integer. If non-negative integer, '
191 'CuPy arrays with specified device id are used. If '
192 'negative integer, NumPy arrays are used')
193 parser.add_argument('--gradclip', '-c', type=float, default=5,
194 help='Gradient norm threshold to clip')
195 parser.add_argument('--out', '-o', default='result',
196 help='Directory to output the result')
197 parser.add_argument('--resume', '-r', type=str,
198 help='Resume the training from snapshot')
199 parser.add_argument('--test', action='store_true',
200 help='Use tiny datasets for quick tests')
201 parser.set_defaults(test=False)
202 parser.add_argument('--unit', '-u', type=int, default=650,
203 help='Number of LSTM units in each layer')
204 parser.add_argument('--model', '-m', default='model.npz',
205 help='Model file name to serialize')
206 group = parser.add_argument_group('deprecated arguments')
207 group.add_argument('--gpu', '-g', dest='device',
208 type=int, nargs='?', const=0,
209 help='GPU ID (negative value indicates CPU)')
210 args = parser.parse_args()
211
212 device = chainer.get_device(args.device)
213 if device.xp is chainerx:
214 sys.stderr.write('This example does not support ChainerX devices.\n')
215 sys.exit(1)
216
217 device.use()
218
219 # Load the Penn Tree Bank long word sequence dataset
220 train, val, test = chainer.datasets.get_ptb_words()
221 n_vocab = max(train) + 1 # train is just an array of integers
222 print('#vocab = {}'.format(n_vocab))
223
224 if args.test:
225 train = train[:100]
226 val = val[:100]
227 test = test[:100]
228
229 train_iter = ParallelSequentialIterator(train, args.batchsize)
230 val_iter = ParallelSequentialIterator(val, 1, repeat=False)
231 test_iter = ParallelSequentialIterator(test, 1, repeat=False)
232
233 # Prepare an RNNLM model
234 rnn = RNNForLM(n_vocab, args.unit)
235 model = L.Classifier(rnn)
236 model.compute_accuracy = False # we only want the perplexity
237 model.to_device(device)
238
239 # Set up an optimizer
240 optimizer = chainer.optimizers.SGD(lr=1.0)
241 optimizer.setup(model)
242 optimizer.add_hook(chainer.optimizer_hooks.GradientClipping(args.gradclip))
243
244 # Set up a trainer
245 updater = BPTTUpdater(train_iter, optimizer, args.bproplen, device)
246 trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
247
248 eval_model = model.copy() # Model with shared params and distinct states
249 eval_rnn = eval_model.predictor
250 trainer.extend(extensions.Evaluator(
251 val_iter, eval_model, device=device,
252 # Reset the RNN state at the beginning of each evaluation
253 eval_hook=lambda _: eval_rnn.reset_state()))
254
255 interval = 10 if args.test else 500
256 trainer.extend(extensions.LogReport(postprocess=compute_perplexity,
257 trigger=(interval, 'iteration')))
258 trainer.extend(extensions.PrintReport(
259 ['epoch', 'iteration', 'perplexity', 'val_perplexity']
260 ), trigger=(interval, 'iteration'))
261 trainer.extend(extensions.ProgressBar(
262 update_interval=1 if args.test else 10))
263 trainer.extend(extensions.snapshot())
264 trainer.extend(extensions.snapshot_object(
265 model, 'model_iter_{.updater.iteration}'))
266 if args.resume is not None:
267 chainer.serializers.load_npz(args.resume, trainer)
268
269 trainer.run()
270
271 # Evaluate the final model
272 print('test')
273 eval_rnn.reset_state()
274 evaluator = extensions.Evaluator(test_iter, eval_model, device=device)
275 result = evaluator()
276 print('test perplexity: {}'.format(np.exp(float(result['main/loss']))))
277
278 # Serialize the final model
279 chainer.serializers.save_npz(args.model, model)
280
281
282 if __name__ == '__main__':
283 main()
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/ptb/train_ptb.py b/examples/ptb/train_ptb.py
--- a/examples/ptb/train_ptb.py
+++ b/examples/ptb/train_ptb.py
@@ -57,18 +57,22 @@
class ParallelSequentialIterator(chainer.dataset.Iterator):
def __init__(self, dataset, batch_size, repeat=True):
+ super(ParallelSequentialIterator, self).__init__()
self.dataset = dataset
self.batch_size = batch_size # batch size
+ self.repeat = repeat
+ length = len(dataset)
+ # Offsets maintain the position of each sequence in the mini-batch.
+ self.offsets = [i * length // batch_size for i in range(batch_size)]
+ self.reset()
+
+ def reset(self):
# Number of completed sweeps over the dataset. In this case, it is
# incremented if every word is visited at least once after the last
# increment.
self.epoch = 0
# True if the epoch is incremented at the last iteration.
self.is_new_epoch = False
- self.repeat = repeat
- length = len(dataset)
- # Offsets maintain the position of each sequence in the mini-batch.
- self.offsets = [i * length // batch_size for i in range(batch_size)]
# NOTE: this is not a count of parameter updates. It is just a count of
# calls of ``__next__``.
self.iteration = 0
| {"golden_diff": "diff --git a/examples/ptb/train_ptb.py b/examples/ptb/train_ptb.py\n--- a/examples/ptb/train_ptb.py\n+++ b/examples/ptb/train_ptb.py\n@@ -57,18 +57,22 @@\n class ParallelSequentialIterator(chainer.dataset.Iterator):\n \n def __init__(self, dataset, batch_size, repeat=True):\n+ super(ParallelSequentialIterator, self).__init__()\n self.dataset = dataset\n self.batch_size = batch_size # batch size\n+ self.repeat = repeat\n+ length = len(dataset)\n+ # Offsets maintain the position of each sequence in the mini-batch.\n+ self.offsets = [i * length // batch_size for i in range(batch_size)]\n+ self.reset()\n+\n+ def reset(self):\n # Number of completed sweeps over the dataset. In this case, it is\n # incremented if every word is visited at least once after the last\n # increment.\n self.epoch = 0\n # True if the epoch is incremented at the last iteration.\n self.is_new_epoch = False\n- self.repeat = repeat\n- length = len(dataset)\n- # Offsets maintain the position of each sequence in the mini-batch.\n- self.offsets = [i * length // batch_size for i in range(batch_size)]\n # NOTE: this is not a count of parameter updates. It is just a count of\n # calls of ``__next__``.\n self.iteration = 0\n", "issue": "PTB custom loop examples are broken\nThe custom loop version of PTB example causes the following error:\r\n```\r\nAttributeError: 'ParallelSequentialIterator' object has no attribute 'reset'\r\n```\r\n\r\n`ParallelSequentialIterator` is a custom iterator that is defined in the example and does not implement `reset` method.\r\n\r\nPTB example of the static graph optimization also has the same problem.\r\n\r\n- [examples/ptb/train_ptb_custom_loop.py](https://github.com/chainer/chainer/blob/master/examples/ptb/train_ptb_custom_loop.py)\r\n- [examples/static_graph_optimizations/ptb/train_ptb_custom_loop.py](https://github.com/chainer/chainer/blob/master/examples/static_graph_optimizations/ptb/train_ptb_custom_loop.py)\r\n\r\nIt looks that the change in #5834 was not enough.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Sample script of recurrent neural network language model.\n\nThis code is ported from the following implementation written in Torch.\nhttps://github.com/tomsercu/lstm\n\nNote for contributors:\nThis example code is referred to from the \"RNN Language Models\" tutorial.\nIf this file is to be modified, please also update the line numbers in\n`docs/source/examples/ptb.rst` accordingly.\n\n\"\"\"\nfrom __future__ import division\nimport argparse\nimport sys\n\nimport numpy as np\n\nimport chainer\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer import training\nfrom chainer.training import extensions\nimport chainerx\n\n\n# Definition of a recurrent net for language modeling\nclass RNNForLM(chainer.Chain):\n\n def __init__(self, n_vocab, n_units):\n super(RNNForLM, self).__init__()\n with self.init_scope():\n self.embed = L.EmbedID(n_vocab, n_units)\n self.l1 = L.LSTM(n_units, n_units)\n self.l2 = L.LSTM(n_units, n_units)\n self.l3 = L.Linear(n_units, n_vocab)\n\n for param in self.params():\n param.array[...] = np.random.uniform(-0.1, 0.1, param.shape)\n\n def reset_state(self):\n self.l1.reset_state()\n self.l2.reset_state()\n\n def forward(self, x):\n h0 = self.embed(x)\n h1 = self.l1(F.dropout(h0))\n h2 = self.l2(F.dropout(h1))\n y = self.l3(F.dropout(h2))\n return y\n\n\n# Dataset iterator to create a batch of sequences at different positions.\n# This iterator returns a pair of current words and the next words. 
Each\n# example is a part of sequences starting from the different offsets\n# equally spaced within the whole sequence.\nclass ParallelSequentialIterator(chainer.dataset.Iterator):\n\n def __init__(self, dataset, batch_size, repeat=True):\n self.dataset = dataset\n self.batch_size = batch_size # batch size\n # Number of completed sweeps over the dataset. In this case, it is\n # incremented if every word is visited at least once after the last\n # increment.\n self.epoch = 0\n # True if the epoch is incremented at the last iteration.\n self.is_new_epoch = False\n self.repeat = repeat\n length = len(dataset)\n # Offsets maintain the position of each sequence in the mini-batch.\n self.offsets = [i * length // batch_size for i in range(batch_size)]\n # NOTE: this is not a count of parameter updates. It is just a count of\n # calls of ``__next__``.\n self.iteration = 0\n # use -1 instead of None internally\n self._previous_epoch_detail = -1.\n\n def __next__(self):\n # This iterator returns a list representing a mini-batch. Each item\n # indicates a different position in the original sequence. Each item is\n # represented by a pair of two word IDs. The first word is at the\n # \"current\" position, while the second word at the next position.\n # At each iteration, the iteration count is incremented, which pushes\n # forward the \"current\" position.\n length = len(self.dataset)\n if not self.repeat and self.iteration * self.batch_size >= length:\n # If not self.repeat, this iterator stops at the end of the first\n # epoch (i.e., when all words are visited once).\n raise StopIteration\n cur_words = self.get_words()\n self._previous_epoch_detail = self.epoch_detail\n self.iteration += 1\n next_words = self.get_words()\n\n epoch = self.iteration * self.batch_size // length\n self.is_new_epoch = self.epoch < epoch\n if self.is_new_epoch:\n self.epoch = epoch\n\n return list(zip(cur_words, next_words))\n\n @property\n def epoch_detail(self):\n # Floating point version of epoch.\n return self.iteration * self.batch_size / len(self.dataset)\n\n @property\n def previous_epoch_detail(self):\n if self._previous_epoch_detail < 0:\n return None\n return self._previous_epoch_detail\n\n def get_words(self):\n # It returns a list of current words.\n return [self.dataset[(offset + self.iteration) % len(self.dataset)]\n for offset in self.offsets]\n\n def serialize(self, serializer):\n # It is important to serialize the state to be recovered on resume.\n self.iteration = serializer('iteration', self.iteration)\n self.epoch = serializer('epoch', self.epoch)\n try:\n self._previous_epoch_detail = serializer(\n 'previous_epoch_detail', self._previous_epoch_detail)\n except KeyError:\n # guess previous_epoch_detail for older version\n self._previous_epoch_detail = self.epoch + \\\n (self.current_position - self.batch_size) / len(self.dataset)\n if self.epoch_detail > 0:\n self._previous_epoch_detail = max(\n self._previous_epoch_detail, 0.)\n else:\n self._previous_epoch_detail = -1.\n\n\n# Custom updater for truncated BackProp Through Time (BPTT)\nclass BPTTUpdater(training.updaters.StandardUpdater):\n\n def __init__(self, train_iter, optimizer, bprop_len, device):\n super(BPTTUpdater, self).__init__(\n train_iter, optimizer, device=device)\n self.bprop_len = bprop_len\n\n # The core part of the update routine can be customized by overriding.\n def update_core(self):\n loss = 0\n # When we pass one iterator and optimizer to StandardUpdater.__init__,\n # they are automatically named 'main'.\n train_iter = 
self.get_iterator('main')\n optimizer = self.get_optimizer('main')\n\n # Progress the dataset iterator for bprop_len words at each iteration.\n for i in range(self.bprop_len):\n # Get the next batch (a list of tuples of two word IDs)\n batch = train_iter.__next__()\n\n # Concatenate the word IDs to matrices and send them to the device\n # self.converter does this job\n # (it is chainer.dataset.concat_examples by default)\n x, t = self.converter(batch, self.device)\n\n # Compute the loss at this time step and accumulate it\n loss += optimizer.target(x, t)\n\n optimizer.target.cleargrads() # Clear the parameter gradients\n loss.backward() # Backprop\n loss.unchain_backward() # Truncate the graph\n optimizer.update() # Update the parameters\n\n\n# Routine to rewrite the result dictionary of LogReport to add perplexity\n# values\ndef compute_perplexity(result):\n result['perplexity'] = np.exp(result['main/loss'])\n if 'validation/main/loss' in result:\n result['val_perplexity'] = np.exp(result['validation/main/loss'])\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument('--batchsize', '-b', type=int, default=20,\n help='Number of examples in each mini-batch')\n parser.add_argument('--bproplen', '-l', type=int, default=35,\n help='Number of words in each mini-batch '\n '(= length of truncated BPTT)')\n parser.add_argument('--epoch', '-e', type=int, default=39,\n help='Number of sweeps over the dataset to train')\n parser.add_argument('--device', '-d', type=str, default='-1',\n help='Device specifier. Either ChainerX device '\n 'specifier or an integer. If non-negative integer, '\n 'CuPy arrays with specified device id are used. If '\n 'negative integer, NumPy arrays are used')\n parser.add_argument('--gradclip', '-c', type=float, default=5,\n help='Gradient norm threshold to clip')\n parser.add_argument('--out', '-o', default='result',\n help='Directory to output the result')\n parser.add_argument('--resume', '-r', type=str,\n help='Resume the training from snapshot')\n parser.add_argument('--test', action='store_true',\n help='Use tiny datasets for quick tests')\n parser.set_defaults(test=False)\n parser.add_argument('--unit', '-u', type=int, default=650,\n help='Number of LSTM units in each layer')\n parser.add_argument('--model', '-m', default='model.npz',\n help='Model file name to serialize')\n group = parser.add_argument_group('deprecated arguments')\n group.add_argument('--gpu', '-g', dest='device',\n type=int, nargs='?', const=0,\n help='GPU ID (negative value indicates CPU)')\n args = parser.parse_args()\n\n device = chainer.get_device(args.device)\n if device.xp is chainerx:\n sys.stderr.write('This example does not support ChainerX devices.\\n')\n sys.exit(1)\n\n device.use()\n\n # Load the Penn Tree Bank long word sequence dataset\n train, val, test = chainer.datasets.get_ptb_words()\n n_vocab = max(train) + 1 # train is just an array of integers\n print('#vocab = {}'.format(n_vocab))\n\n if args.test:\n train = train[:100]\n val = val[:100]\n test = test[:100]\n\n train_iter = ParallelSequentialIterator(train, args.batchsize)\n val_iter = ParallelSequentialIterator(val, 1, repeat=False)\n test_iter = ParallelSequentialIterator(test, 1, repeat=False)\n\n # Prepare an RNNLM model\n rnn = RNNForLM(n_vocab, args.unit)\n model = L.Classifier(rnn)\n model.compute_accuracy = False # we only want the perplexity\n model.to_device(device)\n\n # Set up an optimizer\n optimizer = chainer.optimizers.SGD(lr=1.0)\n optimizer.setup(model)\n 
optimizer.add_hook(chainer.optimizer_hooks.GradientClipping(args.gradclip))\n\n # Set up a trainer\n updater = BPTTUpdater(train_iter, optimizer, args.bproplen, device)\n trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)\n\n eval_model = model.copy() # Model with shared params and distinct states\n eval_rnn = eval_model.predictor\n trainer.extend(extensions.Evaluator(\n val_iter, eval_model, device=device,\n # Reset the RNN state at the beginning of each evaluation\n eval_hook=lambda _: eval_rnn.reset_state()))\n\n interval = 10 if args.test else 500\n trainer.extend(extensions.LogReport(postprocess=compute_perplexity,\n trigger=(interval, 'iteration')))\n trainer.extend(extensions.PrintReport(\n ['epoch', 'iteration', 'perplexity', 'val_perplexity']\n ), trigger=(interval, 'iteration'))\n trainer.extend(extensions.ProgressBar(\n update_interval=1 if args.test else 10))\n trainer.extend(extensions.snapshot())\n trainer.extend(extensions.snapshot_object(\n model, 'model_iter_{.updater.iteration}'))\n if args.resume is not None:\n chainer.serializers.load_npz(args.resume, trainer)\n\n trainer.run()\n\n # Evaluate the final model\n print('test')\n eval_rnn.reset_state()\n evaluator = extensions.Evaluator(test_iter, eval_model, device=device)\n result = evaluator()\n print('test perplexity: {}'.format(np.exp(float(result['main/loss']))))\n\n # Serialize the final model\n chainer.serializers.save_npz(args.model, model)\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/ptb/train_ptb.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"Sample script of recurrent neural network language model.\n\nThis code is ported from the following implementation written in Torch.\nhttps://github.com/tomsercu/lstm\n\nNote for contributors:\nThis example code is referred to from the \"RNN Language Models\" tutorial.\nIf this file is to be modified, please also update the line numbers in\n`docs/source/examples/ptb.rst` accordingly.\n\n\"\"\"\nfrom __future__ import division\nimport argparse\nimport sys\n\nimport numpy as np\n\nimport chainer\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer import training\nfrom chainer.training import extensions\nimport chainerx\n\n\n# Definition of a recurrent net for language modeling\nclass RNNForLM(chainer.Chain):\n\n def __init__(self, n_vocab, n_units):\n super(RNNForLM, self).__init__()\n with self.init_scope():\n self.embed = L.EmbedID(n_vocab, n_units)\n self.l1 = L.LSTM(n_units, n_units)\n self.l2 = L.LSTM(n_units, n_units)\n self.l3 = L.Linear(n_units, n_vocab)\n\n for param in self.params():\n param.array[...] = np.random.uniform(-0.1, 0.1, param.shape)\n\n def reset_state(self):\n self.l1.reset_state()\n self.l2.reset_state()\n\n def forward(self, x):\n h0 = self.embed(x)\n h1 = self.l1(F.dropout(h0))\n h2 = self.l2(F.dropout(h1))\n y = self.l3(F.dropout(h2))\n return y\n\n\n# Dataset iterator to create a batch of sequences at different positions.\n# This iterator returns a pair of current words and the next words. 
Each\n# example is a part of sequences starting from the different offsets\n# equally spaced within the whole sequence.\nclass ParallelSequentialIterator(chainer.dataset.Iterator):\n\n def __init__(self, dataset, batch_size, repeat=True):\n super(ParallelSequentialIterator, self).__init__()\n self.dataset = dataset\n self.batch_size = batch_size # batch size\n self.repeat = repeat\n length = len(dataset)\n # Offsets maintain the position of each sequence in the mini-batch.\n self.offsets = [i * length // batch_size for i in range(batch_size)]\n self.reset()\n\n def reset(self):\n # Number of completed sweeps over the dataset. In this case, it is\n # incremented if every word is visited at least once after the last\n # increment.\n self.epoch = 0\n # True if the epoch is incremented at the last iteration.\n self.is_new_epoch = False\n # NOTE: this is not a count of parameter updates. It is just a count of\n # calls of ``__next__``.\n self.iteration = 0\n # use -1 instead of None internally\n self._previous_epoch_detail = -1.\n\n def __next__(self):\n # This iterator returns a list representing a mini-batch. Each item\n # indicates a different position in the original sequence. Each item is\n # represented by a pair of two word IDs. The first word is at the\n # \"current\" position, while the second word at the next position.\n # At each iteration, the iteration count is incremented, which pushes\n # forward the \"current\" position.\n length = len(self.dataset)\n if not self.repeat and self.iteration * self.batch_size >= length:\n # If not self.repeat, this iterator stops at the end of the first\n # epoch (i.e., when all words are visited once).\n raise StopIteration\n cur_words = self.get_words()\n self._previous_epoch_detail = self.epoch_detail\n self.iteration += 1\n next_words = self.get_words()\n\n epoch = self.iteration * self.batch_size // length\n self.is_new_epoch = self.epoch < epoch\n if self.is_new_epoch:\n self.epoch = epoch\n\n return list(zip(cur_words, next_words))\n\n @property\n def epoch_detail(self):\n # Floating point version of epoch.\n return self.iteration * self.batch_size / len(self.dataset)\n\n @property\n def previous_epoch_detail(self):\n if self._previous_epoch_detail < 0:\n return None\n return self._previous_epoch_detail\n\n def get_words(self):\n # It returns a list of current words.\n return [self.dataset[(offset + self.iteration) % len(self.dataset)]\n for offset in self.offsets]\n\n def serialize(self, serializer):\n # It is important to serialize the state to be recovered on resume.\n self.iteration = serializer('iteration', self.iteration)\n self.epoch = serializer('epoch', self.epoch)\n try:\n self._previous_epoch_detail = serializer(\n 'previous_epoch_detail', self._previous_epoch_detail)\n except KeyError:\n # guess previous_epoch_detail for older version\n self._previous_epoch_detail = self.epoch + \\\n (self.current_position - self.batch_size) / len(self.dataset)\n if self.epoch_detail > 0:\n self._previous_epoch_detail = max(\n self._previous_epoch_detail, 0.)\n else:\n self._previous_epoch_detail = -1.\n\n\n# Custom updater for truncated BackProp Through Time (BPTT)\nclass BPTTUpdater(training.updaters.StandardUpdater):\n\n def __init__(self, train_iter, optimizer, bprop_len, device):\n super(BPTTUpdater, self).__init__(\n train_iter, optimizer, device=device)\n self.bprop_len = bprop_len\n\n # The core part of the update routine can be customized by overriding.\n def update_core(self):\n loss = 0\n # When we pass one iterator and optimizer to 
StandardUpdater.__init__,\n # they are automatically named 'main'.\n train_iter = self.get_iterator('main')\n optimizer = self.get_optimizer('main')\n\n # Progress the dataset iterator for bprop_len words at each iteration.\n for i in range(self.bprop_len):\n # Get the next batch (a list of tuples of two word IDs)\n batch = train_iter.__next__()\n\n # Concatenate the word IDs to matrices and send them to the device\n # self.converter does this job\n # (it is chainer.dataset.concat_examples by default)\n x, t = self.converter(batch, self.device)\n\n # Compute the loss at this time step and accumulate it\n loss += optimizer.target(x, t)\n\n optimizer.target.cleargrads() # Clear the parameter gradients\n loss.backward() # Backprop\n loss.unchain_backward() # Truncate the graph\n optimizer.update() # Update the parameters\n\n\n# Routine to rewrite the result dictionary of LogReport to add perplexity\n# values\ndef compute_perplexity(result):\n result['perplexity'] = np.exp(result['main/loss'])\n if 'validation/main/loss' in result:\n result['val_perplexity'] = np.exp(result['validation/main/loss'])\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument('--batchsize', '-b', type=int, default=20,\n help='Number of examples in each mini-batch')\n parser.add_argument('--bproplen', '-l', type=int, default=35,\n help='Number of words in each mini-batch '\n '(= length of truncated BPTT)')\n parser.add_argument('--epoch', '-e', type=int, default=39,\n help='Number of sweeps over the dataset to train')\n parser.add_argument('--device', '-d', type=str, default='-1',\n help='Device specifier. Either ChainerX device '\n 'specifier or an integer. If non-negative integer, '\n 'CuPy arrays with specified device id are used. If '\n 'negative integer, NumPy arrays are used')\n parser.add_argument('--gradclip', '-c', type=float, default=5,\n help='Gradient norm threshold to clip')\n parser.add_argument('--out', '-o', default='result',\n help='Directory to output the result')\n parser.add_argument('--resume', '-r', type=str,\n help='Resume the training from snapshot')\n parser.add_argument('--test', action='store_true',\n help='Use tiny datasets for quick tests')\n parser.set_defaults(test=False)\n parser.add_argument('--unit', '-u', type=int, default=650,\n help='Number of LSTM units in each layer')\n parser.add_argument('--model', '-m', default='model.npz',\n help='Model file name to serialize')\n group = parser.add_argument_group('deprecated arguments')\n group.add_argument('--gpu', '-g', dest='device',\n type=int, nargs='?', const=0,\n help='GPU ID (negative value indicates CPU)')\n args = parser.parse_args()\n\n device = chainer.get_device(args.device)\n if device.xp is chainerx:\n sys.stderr.write('This example does not support ChainerX devices.\\n')\n sys.exit(1)\n\n device.use()\n\n # Load the Penn Tree Bank long word sequence dataset\n train, val, test = chainer.datasets.get_ptb_words()\n n_vocab = max(train) + 1 # train is just an array of integers\n print('#vocab = {}'.format(n_vocab))\n\n if args.test:\n train = train[:100]\n val = val[:100]\n test = test[:100]\n\n train_iter = ParallelSequentialIterator(train, args.batchsize)\n val_iter = ParallelSequentialIterator(val, 1, repeat=False)\n test_iter = ParallelSequentialIterator(test, 1, repeat=False)\n\n # Prepare an RNNLM model\n rnn = RNNForLM(n_vocab, args.unit)\n model = L.Classifier(rnn)\n model.compute_accuracy = False # we only want the perplexity\n model.to_device(device)\n\n # Set up an optimizer\n optimizer = 
chainer.optimizers.SGD(lr=1.0)\n optimizer.setup(model)\n optimizer.add_hook(chainer.optimizer_hooks.GradientClipping(args.gradclip))\n\n # Set up a trainer\n updater = BPTTUpdater(train_iter, optimizer, args.bproplen, device)\n trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)\n\n eval_model = model.copy() # Model with shared params and distinct states\n eval_rnn = eval_model.predictor\n trainer.extend(extensions.Evaluator(\n val_iter, eval_model, device=device,\n # Reset the RNN state at the beginning of each evaluation\n eval_hook=lambda _: eval_rnn.reset_state()))\n\n interval = 10 if args.test else 500\n trainer.extend(extensions.LogReport(postprocess=compute_perplexity,\n trigger=(interval, 'iteration')))\n trainer.extend(extensions.PrintReport(\n ['epoch', 'iteration', 'perplexity', 'val_perplexity']\n ), trigger=(interval, 'iteration'))\n trainer.extend(extensions.ProgressBar(\n update_interval=1 if args.test else 10))\n trainer.extend(extensions.snapshot())\n trainer.extend(extensions.snapshot_object(\n model, 'model_iter_{.updater.iteration}'))\n if args.resume is not None:\n chainer.serializers.load_npz(args.resume, trainer)\n\n trainer.run()\n\n # Evaluate the final model\n print('test')\n eval_rnn.reset_state()\n evaluator = extensions.Evaluator(test_iter, eval_model, device=device)\n result = evaluator()\n print('test perplexity: {}'.format(np.exp(float(result['main/loss']))))\n\n # Serialize the final model\n chainer.serializers.save_npz(args.model, model)\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/ptb/train_ptb.py"}]} | 3,727 | 329 |
gh_patches_debug_14102 | rasdani/github-patches | git_diff | aio-libs__aiohttp-3752 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tests_require: add trustme
It has been required since https://github.com/aio-libs/aiohttp/pull/3487.
<!-- Thank you for your contribution! -->
## What do these changes do?
<!-- Please give a short brief about these changes. -->
## Are there changes in behavior for the user?
<!-- Outline any notable behaviour for the end users. -->
## Related issue number
<!-- Are there any issues opened that will be resolved by merging this change? -->
## Checklist
- [ ] I think the code is well written
- [ ] Unit tests for the changes exist
- [ ] Documentation reflects the changes
- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
* The format is <Name> <Surname>.
  * Please keep alphabetical order; the file is sorted by names. 
- [ ] Add a new news fragment into the `CHANGES` folder
* name it `<issue_id>.<type>` for example (588.bugfix)
* if you don't have an `issue_id` change it to the pr id after creating the pr
* ensure type is one of the following:
* `.feature`: Signifying a new feature.
* `.bugfix`: Signifying a bug fix.
* `.doc`: Signifying a documentation improvement.
* `.removal`: Signifying a deprecation or removal of public API.
* `.misc`: A ticket has been closed, but it is not of interest to users.
* Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import codecs
2 import pathlib
3 import re
4 import sys
5 from distutils.command.build_ext import build_ext
6 from distutils.errors import (CCompilerError, DistutilsExecError,
7 DistutilsPlatformError)
8
9 from setuptools import Extension, setup
10
11
12 if sys.version_info < (3, 5, 3):
13 raise RuntimeError("aiohttp 3.x requires Python 3.5.3+")
14
15 here = pathlib.Path(__file__).parent
16
17
18 if (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):
19 print("Install submodules when building from git clone", file=sys.stderr)
20 print("Hint:", file=sys.stderr)
21 print(" git submodule update --init", file=sys.stderr)
22 sys.exit(2)
23
24
25 # NOTE: makefile cythonizes all Cython modules
26
27 extensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),
28 Extension('aiohttp._http_parser',
29 ['aiohttp/_http_parser.c',
30 'vendor/http-parser/http_parser.c',
31 'aiohttp/_find_header.c'],
32 define_macros=[('HTTP_PARSER_STRICT', 0)],
33 ),
34 Extension('aiohttp._frozenlist',
35 ['aiohttp/_frozenlist.c']),
36 Extension('aiohttp._helpers',
37 ['aiohttp/_helpers.c']),
38 Extension('aiohttp._http_writer',
39 ['aiohttp/_http_writer.c'])]
40
41
42 class BuildFailed(Exception):
43 pass
44
45
46 class ve_build_ext(build_ext):
47 # This class allows C extension building to fail.
48
49 def run(self):
50 try:
51 build_ext.run(self)
52 except (DistutilsPlatformError, FileNotFoundError):
53 raise BuildFailed()
54
55 def build_extension(self, ext):
56 try:
57 build_ext.build_extension(self, ext)
58 except (CCompilerError, DistutilsExecError,
59 DistutilsPlatformError, ValueError):
60 raise BuildFailed()
61
62
63
64 txt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')
65 try:
66 version = re.findall(r"^__version__ = '([^']+)'\r?$",
67 txt, re.M)[0]
68 except IndexError:
69 raise RuntimeError('Unable to determine version.')
70
71 install_requires = [
72 'attrs>=17.3.0',
73 'chardet>=2.0,<4.0',
74 'multidict>=4.0,<5.0',
75 'async_timeout>=3.0,<4.0',
76 'yarl>=1.0,<2.0',
77 'idna-ssl>=1.0; python_version<"3.7"',
78 'typing_extensions>=3.6.5; python_version<"3.7"',
79 ]
80
81
82 def read(f):
83 return (here / f).read_text('utf-8').strip()
84
85
86 NEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)
87 pytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []
88
89 tests_require = [
90 'pytest', 'gunicorn',
91 'pytest-timeout', 'async-generator',
92 'pytest-xdist',
93 ]
94
95
96 args = dict(
97 name='aiohttp',
98 version=version,
99 description='Async http client/server framework (asyncio)',
100 long_description='\n\n'.join((read('README.rst'), read('CHANGES.rst'))),
101 classifiers=[
102 'License :: OSI Approved :: Apache Software License',
103 'Intended Audience :: Developers',
104 'Programming Language :: Python',
105 'Programming Language :: Python :: 3',
106 'Programming Language :: Python :: 3.5',
107 'Programming Language :: Python :: 3.6',
108 'Programming Language :: Python :: 3.7',
109 'Development Status :: 5 - Production/Stable',
110 'Operating System :: POSIX',
111 'Operating System :: MacOS :: MacOS X',
112 'Operating System :: Microsoft :: Windows',
113 'Topic :: Internet :: WWW/HTTP',
114 'Framework :: AsyncIO',
115 ],
116 author='Nikolay Kim',
117 author_email='[email protected]',
118 maintainer=', '.join(('Nikolay Kim <[email protected]>',
119 'Andrew Svetlov <[email protected]>')),
120 maintainer_email='[email protected]',
121 url='https://github.com/aio-libs/aiohttp',
122 project_urls={
123 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',
124 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',
125 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',
126 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',
127 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',
128 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',
129 'Docs: RTD': 'https://docs.aiohttp.org',
130 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',
131 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',
132 },
133 license='Apache 2',
134 packages=['aiohttp'],
135 python_requires='>=3.5.3',
136 install_requires=install_requires,
137 extras_require={
138 'speedups': [
139 'aiodns',
140 'brotlipy',
141 'cchardet',
142 ],
143 },
144 tests_require=tests_require,
145 setup_requires=pytest_runner,
146 include_package_data=True,
147 ext_modules=extensions,
148 cmdclass=dict(build_ext=ve_build_ext),
149 )
150
151 try:
152 setup(**args)
153 except BuildFailed:
154 print("************************************************************")
155 print("Cannot compile C accelerator module, use pure python version")
156 print("************************************************************")
157 del args['ext_modules']
158 del args['cmdclass']
159 setup(**args)
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -83,16 +83,6 @@
return (here / f).read_text('utf-8').strip()
-NEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)
-pytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []
-
-tests_require = [
- 'pytest', 'gunicorn',
- 'pytest-timeout', 'async-generator',
- 'pytest-xdist',
-]
-
-
args = dict(
name='aiohttp',
version=version,
@@ -141,8 +131,6 @@
'cchardet',
],
},
- tests_require=tests_require,
- setup_requires=pytest_runner,
include_package_data=True,
ext_modules=extensions,
cmdclass=dict(build_ext=ve_build_ext),
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -83,16 +83,6 @@\n return (here / f).read_text('utf-8').strip()\n \n \n-NEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)\n-pytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []\n-\n-tests_require = [\n- 'pytest', 'gunicorn',\n- 'pytest-timeout', 'async-generator',\n- 'pytest-xdist',\n-]\n-\n-\n args = dict(\n name='aiohttp',\n version=version,\n@@ -141,8 +131,6 @@\n 'cchardet',\n ],\n },\n- tests_require=tests_require,\n- setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n", "issue": "tests_require: add trustme\nIt is required since https://github.com/aio-libs/aiohttp/pull/3487.\r\n\r\n<!-- Thank you for your contribution! -->\r\n\r\n## What do these changes do?\r\n\r\n<!-- Please give a short brief about these changes. -->\r\n\r\n## Are there changes in behavior for the user?\r\n\r\n<!-- Outline any notable behaviour for the end users. -->\r\n\r\n## Related issue number\r\n\r\n<!-- Are there any issues opened that will be resolved by merging this change? -->\r\n\r\n## Checklist\r\n\r\n- [ ] I think the code is well written\r\n- [ ] Unit tests for the changes exist\r\n- [ ] Documentation reflects the changes\r\n- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`\r\n * The format is <Name> <Surname>.\r\n * Please keep alphabetical order, the file is sorted by names. \r\n- [ ] Add a new news fragment into the `CHANGES` folder\r\n * name it `<issue_id>.<type>` for example (588.bugfix)\r\n * if you don't have an `issue_id` change it to the pr id after creating the pr\r\n * ensure type is one of the following:\r\n * `.feature`: Signifying a new feature.\r\n * `.bugfix`: Signifying a bug fix.\r\n * `.doc`: Signifying a documentation improvement.\r\n * `.removal`: Signifying a deprecation or removal of public API.\r\n * `.misc`: A ticket has been closed, but it is not of interest to users.\r\n * Make sure to use full sentences with correct case and punctuation, for example: \"Fix issue with non-ascii contents in doctest text files.\"\r\n\n", "before_files": [{"content": "import codecs\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\nhere = pathlib.Path(__file__).parent\n\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer.c'])]\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except 
(DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.0,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5; python_version<\"3.7\"',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nNEEDS_PYTEST = {'pytest', 'test'}.intersection(sys.argv)\npytest_runner = ['pytest-runner'] if NEEDS_PYTEST else []\n\ntests_require = [\n 'pytest', 'gunicorn',\n 'pytest-timeout', 'async-generator',\n 'pytest-xdist',\n]\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'brotlipy',\n 'cchardet',\n ],\n },\n tests_require=tests_require,\n setup_requires=pytest_runner,\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n)\n\ntry:\n setup(**args)\nexcept BuildFailed:\n print(\"************************************************************\")\n print(\"Cannot compile C accelerator module, use pure python version\")\n print(\"************************************************************\")\n del args['ext_modules']\n del args['cmdclass']\n setup(**args)\n", "path": "setup.py"}], "after_files": [{"content": "import codecs\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, 
setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\nhere = pathlib.Path(__file__).parent\n\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md'):\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer.c'])]\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.0,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5; python_version<\"3.7\"',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n 
install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'brotlipy',\n 'cchardet',\n ],\n },\n include_package_data=True,\n ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n)\n\ntry:\n setup(**args)\nexcept BuildFailed:\n print(\"************************************************************\")\n print(\"Cannot compile C accelerator module, use pure python version\")\n print(\"************************************************************\")\n del args['ext_modules']\n del args['cmdclass']\n setup(**args)\n", "path": "setup.py"}]} | 2,297 | 200 |
gh_patches_debug_43575 | rasdani/github-patches | git_diff | google__mobly-392 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add timing information for instrumentation tests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mobly/controllers/android_device_lib/adb.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from builtins import str
16 from past.builtins import basestring
17
18 import logging
19 import pipes
20 import psutil
21 import subprocess
22 import threading
23
24 # Command to use for running ADB commands.
25 ADB = 'adb'
26
27 # adb gets confused if we try to manage bound ports in parallel, so anything to
28 # do with port forwarding must happen under this lock.
29 ADB_PORT_LOCK = threading.Lock()
30
31 # Qualified class name of the default instrumentation test runner.
32 DEFAULT_INSTRUMENTATION_RUNNER = 'com.android.common.support.test.runner.AndroidJUnitRunner'
33
34
35 class Error(Exception):
36 """Base error type for adb proxy module."""
37
38
39 class AdbError(Error):
40 """Raised when an adb command encounters an error.
41
42 Args:
43 cmd: list of strings, the adb command executed.
44 stdout: byte string, the raw stdout of the command.
45 stderr: byte string, the raw stderr of the command.
46 ret_code: int, the return code of the command.
47 """
48
49 def __init__(self, cmd, stdout, stderr, ret_code):
50 self.cmd = cmd
51 self.stdout = stdout
52 self.stderr = stderr
53 self.ret_code = ret_code
54
55 def __str__(self):
56 return ('Error executing adb cmd "%s". ret: %d, stdout: %s, stderr: %s'
57 ) % (cli_cmd_to_string(self.cmd), self.ret_code, self.stdout,
58 self.stderr)
59
60
61 class AdbTimeoutError(Error):
62 """Raised when an command did not complete within expected time.
63
64 Args:
65 cmd: list of strings, the adb command that timed out
66 timeout: float, the number of seconds passed before timing out.
67 """
68
69 def __init__(self, cmd, timeout):
70 self.cmd = cmd
71 self.timeout = timeout
72
73 def __str__(self):
74 return 'Timed out executing command "%s" after %ss.' % (
75 cli_cmd_to_string(self.cmd), self.timeout)
76
77
78 def list_occupied_adb_ports():
79 """Lists all the host ports occupied by adb forward.
80
81 This is useful because adb will silently override the binding if an attempt
82 to bind to a port already used by adb was made, instead of throwing binding
83 error. So one should always check what ports adb is using before trying to
84 bind to a port with adb.
85
86 Returns:
87 A list of integers representing occupied host ports.
88 """
89 out = AdbProxy().forward('--list')
90 clean_lines = str(out, 'utf-8').strip().split('\n')
91 used_ports = []
92 for line in clean_lines:
93 tokens = line.split(' tcp:')
94 if len(tokens) != 3:
95 continue
96 used_ports.append(int(tokens[1]))
97 return used_ports
98
99
100 def cli_cmd_to_string(args):
101 """Converts a cmd arg list to string.
102
103 Args:
104 args: list of strings, the arguments of a command.
105
106 Returns:
107 String representation of the command.
108 """
109 if isinstance(args, basestring):
110 # Return directly if it's already a string.
111 return args
112 return ' '.join([pipes.quote(arg) for arg in args])
113
114
115 class AdbProxy(object):
116 """Proxy class for ADB.
117
118 For syntactic reasons, the '-' in adb commands need to be replaced with
119 '_'. Can directly execute adb commands on an object:
120 >> adb = AdbProxy(<serial>)
121 >> adb.start_server()
122 >> adb.devices() # will return the console output of "adb devices".
123
124 By default, command args are expected to be an iterable which is passed
125 directly to subprocess.Popen():
126 >> adb.shell(['echo', 'a', 'b'])
127
128 This way of launching commands is recommended by the subprocess
129 documentation to avoid shell injection vulnerabilities and avoid having to
130 deal with multiple layers of shell quoting and different shell environments
131 between different OSes.
132
133 If you really want to run the command through the system shell, this is
134 possible by supplying shell=True, but try to avoid this if possible:
135 >> adb.shell('cat /foo > /tmp/file', shell=True)
136 """
137
138 def __init__(self, serial=''):
139 self.serial = serial
140
141 def _exec_cmd(self, args, shell, timeout, stderr):
142 """Executes adb commands.
143
144 Args:
145 args: string or list of strings, program arguments.
146 See subprocess.Popen() documentation.
147 shell: bool, True to run this command through the system shell,
148 False to invoke it directly. See subprocess.Popen() docs.
149 timeout: float, the number of seconds to wait before timing out.
150 If not specified, no timeout takes effect.
151 stderr: a Byte stream, like io.BytesIO, stderr of the command will
152 be written to this object if provided.
153
154 Returns:
155 The output of the adb command run if exit code is 0.
156
157 Raises:
158 AdbError: The adb command exit code is not 0.
159 AdbTimeoutError: The adb command timed out.
160 """
161 proc = subprocess.Popen(
162 args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)
163 process = psutil.Process(proc.pid)
164 if timeout and timeout <= 0:
165 raise Error('Timeout is not a positive value: %s' % timeout)
166 if timeout and timeout > 0:
167 try:
168 process.wait(timeout=timeout)
169 except psutil.TimeoutExpired:
170 process.terminate()
171 raise AdbTimeoutError(cmd=args, timeout=timeout)
172
173 (out, err) = proc.communicate()
174 if stderr:
175 stderr.write(err)
176 ret = proc.returncode
177 logging.debug('cmd: %s, stdout: %s, stderr: %s, ret: %s',
178 cli_cmd_to_string(args), out, err, ret)
179 if ret == 0:
180 return out
181 else:
182 raise AdbError(cmd=args, stdout=out, stderr=err, ret_code=ret)
183
184 def _construct_adb_cmd(self, raw_name, args, shell):
185 """Constructs an adb command with arguments for a subprocess call.
186
187 Args:
188 raw_name: string, the raw unsanitized name of the adb command to
189 format.
190 args: string or list of strings, arguments to the adb command.
191 See subprocess.Proc() documentation.
192 shell: bool, True to run this command through the system shell,
193 False to invoke it directly. See subprocess.Proc() docs.
194
195 Returns:
196 The adb command in a format appropriate for subprocess. If shell is
197 True, then this is a string; otherwise, this is a list of
198 strings.
199 """
200 args = args or ''
201 name = raw_name.replace('_', '-')
202 if shell:
203 args = cli_cmd_to_string(args)
204 # Add quotes around "adb" in case the ADB path contains spaces. This
205 # is pretty common on Windows (e.g. Program Files).
206 if self.serial:
207 adb_cmd = '"%s" -s "%s" %s %s' % (ADB, self.serial, name, args)
208 else:
209 adb_cmd = '"%s" %s %s' % (ADB, name, args)
210 else:
211 adb_cmd = [ADB]
212 if self.serial:
213 adb_cmd.extend(['-s', self.serial])
214 adb_cmd.append(name)
215 if args:
216 if isinstance(args, basestring):
217 adb_cmd.append(args)
218 else:
219 adb_cmd.extend(args)
220 return adb_cmd
221
222 def _exec_adb_cmd(self, name, args, shell, timeout, stderr):
223 adb_cmd = self._construct_adb_cmd(name, args, shell=shell)
224 out = self._exec_cmd(
225 adb_cmd, shell=shell, timeout=timeout, stderr=stderr)
226 return out
227
228 def getprop(self, prop_name):
229 """Get a property of the device.
230
231 This is a convenience wrapper for "adb shell getprop xxx".
232
233 Args:
234 prop_name: A string that is the name of the property to get.
235
236 Returns:
237 A string that is the value of the property, or None if the property
238 doesn't exist.
239 """
240 return self.shell('getprop %s' % prop_name).decode('utf-8').strip()
241
242 def has_shell_command(self, command):
243 """Checks to see if a given check command exists on the device.
244
245 Args:
246 command: A string that is the name of the command to check.
247
248 Returns:
249 A boolean that is True if the command exists and False otherwise.
250 """
251 try:
252 output = self.shell(['command', '-v',
253 command]).decode('utf-8').strip()
254 return command in output
255 except AdbError:
256 # If the command doesn't exist, then 'command -v' can return
257 # an exit code > 1.
258 return False
259
260 def forward(self, args=None, shell=False):
261 with ADB_PORT_LOCK:
262 return self._exec_adb_cmd(
263 'forward', args, shell, timeout=None, stderr=None)
264
265 def instrument(self, package, options=None, runner=None):
266 """Runs an instrumentation command on the device.
267
268 This is a convenience wrapper to avoid parameter formatting.
269
270 Example:
271
272 .. code-block:: python
273
274 device.instrument(
275 'com.my.package.test',
276 options = {
277 'class': 'com.my.package.test.TestSuite',
278 },
279 )
280
281 Args:
282 package: string, the package of the instrumentation tests.
283 options: dict, the instrumentation options including the test
284 class.
285 runner: string, the test runner name, which defaults to
286 DEFAULT_INSTRUMENTATION_RUNNER.
287
288 Returns:
289 The output of instrumentation command.
290 """
291 if runner is None:
292 runner = DEFAULT_INSTRUMENTATION_RUNNER
293 if options is None:
294 options = {}
295
296 options_list = []
297 for option_key, option_value in options.items():
298 options_list.append('-e %s %s' % (option_key, option_value))
299 options_string = ' '.join(options_list)
300
301 instrumentation_command = 'am instrument -r -w %s %s/%s' % (
302 options_string, package, runner)
303 logging.info('AndroidDevice|%s: Executing adb shell %s', self.serial,
304 instrumentation_command)
305 return self.shell(instrumentation_command)
306
307 def __getattr__(self, name):
308 def adb_call(args=None, shell=False, timeout=None, stderr=None):
309 """Wrapper for an ADB command.
310
311 Args:
312 args: string or list of strings, arguments to the adb command.
313 See subprocess.Proc() documentation.
314 shell: bool, True to run this command through the system shell,
315 False to invoke it directly. See subprocess.Proc() docs.
316 timeout: float, the number of seconds to wait before timing out.
317 If not specified, no timeout takes effect.
318 stderr: a Byte stream, like io.BytesIO, stderr of the command
319 will be written to this object if provided.
320
321 Returns:
322 The output of the adb command run if exit code is 0.
323 """
324 return self._exec_adb_cmd(
325 name, args, shell=shell, timeout=timeout, stderr=stderr)
326
327 return adb_call
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mobly/controllers/android_device_lib/adb.py b/mobly/controllers/android_device_lib/adb.py
--- a/mobly/controllers/android_device_lib/adb.py
+++ b/mobly/controllers/android_device_lib/adb.py
@@ -181,6 +181,47 @@
else:
raise AdbError(cmd=args, stdout=out, stderr=err, ret_code=ret)
+ def _execute_and_process_stdout(self, args, shell, handler):
+ """Executes adb commands and processes the stdout with a handler.
+
+ Args:
+ args: string or list of strings, program arguments.
+ See subprocess.Popen() documentation.
+ shell: bool, True to run this command through the system shell,
+ False to invoke it directly. See subprocess.Popen() docs.
+ handler: func, a function to handle adb stdout line by line.
+
+ Returns:
+ The stderr of the adb command run if exit code is 0.
+
+ Raises:
+ AdbError: The adb command exit code is not 0.
+ """
+ proc = subprocess.Popen(
+ args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ shell=shell,
+ bufsize=1)
+ try:
+ while proc.poll() is None:
+ line = proc.stdout.readline()
+ if line:
+ handler(line)
+ else:
+ break
+ finally:
+ (_, err) = proc.communicate()
+ ret = proc.returncode
+ if ret == 0:
+ return err
+ else:
+ raise AdbError(
+ cmd=args,
+ stdout='[elided, processed via handler]',
+ stderr=err,
+ ret_code=ret)
+
def _construct_adb_cmd(self, raw_name, args, shell):
"""Constructs an adb command with arguments for a subprocess call.
@@ -225,6 +266,12 @@
adb_cmd, shell=shell, timeout=timeout, stderr=stderr)
return out
+ def _execute_adb_and_process_stdout(self, name, args, shell, handler):
+ adb_cmd = self._construct_adb_cmd(name, args, shell=shell)
+ out = self._execute_and_process_stdout(
+ adb_cmd, shell=shell, handler=handler)
+ return out
+
def getprop(self, prop_name):
"""Get a property of the device.
@@ -262,7 +309,7 @@
return self._exec_adb_cmd(
'forward', args, shell, timeout=None, stderr=None)
- def instrument(self, package, options=None, runner=None):
+ def instrument(self, package, options=None, runner=None, handler=None):
"""Runs an instrumentation command on the device.
This is a convenience wrapper to avoid parameter formatting.
@@ -284,9 +331,14 @@
class.
runner: string, the test runner name, which defaults to
DEFAULT_INSTRUMENTATION_RUNNER.
+ handler: optional func, when specified the function is used to parse
+ the instrumentation stdout line by line as the output is
+ generated; otherwise, the stdout is simply returned once the
+ instrumentation is finished.
Returns:
- The output of instrumentation command.
+ The stdout of instrumentation command or the stderr if the handler
+ is set.
"""
if runner is None:
runner = DEFAULT_INSTRUMENTATION_RUNNER
@@ -302,7 +354,17 @@
options_string, package, runner)
logging.info('AndroidDevice|%s: Executing adb shell %s', self.serial,
instrumentation_command)
- return self.shell(instrumentation_command)
+ if handler is None:
+ # Flow kept for backwards-compatibility reasons
+ self._exec_adb_cmd(
+ 'shell',
+ instrumentation_command,
+ shell=False,
+ timeout=None,
+ stderr=None)
+ else:
+ return self._execute_adb_and_process_stdout(
+ 'shell', instrumentation_command, shell=False, handler=handler)
def __getattr__(self, name):
def adb_call(args=None, shell=False, timeout=None, stderr=None):
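
As a hedged illustration of the new `handler` argument introduced by the diff above (assuming the patched `AdbProxy` is installed), a caller could attach per-line timing information to the streamed instrumentation output roughly like this; the device serial and the package/class names are placeholders taken from the docstring example, not values from the original issue:

```python
import time

from mobly.controllers.android_device_lib import adb

start = time.time()


def print_line_with_timing(line):
    # `line` is one raw stdout line from the instrumentation process; it may
    # arrive as bytes, so decode defensively before printing.
    if isinstance(line, bytes):
        line = line.decode('utf-8', 'replace')
    print('[%8.3fs] %s' % (time.time() - start, line.rstrip()))


# '<serial>' and the package/class names are illustrative placeholders.
device_adb = adb.AdbProxy('<serial>')
device_adb.instrument(
    'com.my.package.test',
    options={'class': 'com.my.package.test.TestSuite'},
    handler=print_line_with_timing,
)
```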
| {"golden_diff": "diff --git a/mobly/controllers/android_device_lib/adb.py b/mobly/controllers/android_device_lib/adb.py\n--- a/mobly/controllers/android_device_lib/adb.py\n+++ b/mobly/controllers/android_device_lib/adb.py\n@@ -181,6 +181,47 @@\n else:\n raise AdbError(cmd=args, stdout=out, stderr=err, ret_code=ret)\n \n+ def _execute_and_process_stdout(self, args, shell, handler):\n+ \"\"\"Executes adb commands and processes the stdout with a handler.\n+\n+ Args:\n+ args: string or list of strings, program arguments.\n+ See subprocess.Popen() documentation.\n+ shell: bool, True to run this command through the system shell,\n+ False to invoke it directly. See subprocess.Popen() docs.\n+ handler: func, a function to handle adb stdout line by line.\n+\n+ Returns:\n+ The stderr of the adb command run if exit code is 0.\n+\n+ Raises:\n+ AdbError: The adb command exit code is not 0.\n+ \"\"\"\n+ proc = subprocess.Popen(\n+ args,\n+ stdout=subprocess.PIPE,\n+ stderr=subprocess.PIPE,\n+ shell=shell,\n+ bufsize=1)\n+ try:\n+ while proc.poll() is None:\n+ line = proc.stdout.readline()\n+ if line:\n+ handler(line)\n+ else:\n+ break\n+ finally:\n+ (_, err) = proc.communicate()\n+ ret = proc.returncode\n+ if ret == 0:\n+ return err\n+ else:\n+ raise AdbError(\n+ cmd=args,\n+ stdout='[elided, processed via handler]',\n+ stderr=err,\n+ ret_code=ret)\n+\n def _construct_adb_cmd(self, raw_name, args, shell):\n \"\"\"Constructs an adb command with arguments for a subprocess call.\n \n@@ -225,6 +266,12 @@\n adb_cmd, shell=shell, timeout=timeout, stderr=stderr)\n return out\n \n+ def _execute_adb_and_process_stdout(self, name, args, shell, handler):\n+ adb_cmd = self._construct_adb_cmd(name, args, shell=shell)\n+ out = self._execute_and_process_stdout(\n+ adb_cmd, shell=shell, handler=handler)\n+ return out\n+\n def getprop(self, prop_name):\n \"\"\"Get a property of the device.\n \n@@ -262,7 +309,7 @@\n return self._exec_adb_cmd(\n 'forward', args, shell, timeout=None, stderr=None)\n \n- def instrument(self, package, options=None, runner=None):\n+ def instrument(self, package, options=None, runner=None, handler=None):\n \"\"\"Runs an instrumentation command on the device.\n \n This is a convenience wrapper to avoid parameter formatting.\n@@ -284,9 +331,14 @@\n class.\n runner: string, the test runner name, which defaults to\n DEFAULT_INSTRUMENTATION_RUNNER.\n+ handler: optional func, when specified the function is used to parse\n+ the instrumentation stdout line by line as the output is\n+ generated; otherwise, the stdout is simply returned once the\n+ instrumentation is finished.\n \n Returns:\n- The output of instrumentation command.\n+ The stdout of instrumentation command or the stderr if the handler\n+ is set.\n \"\"\"\n if runner is None:\n runner = DEFAULT_INSTRUMENTATION_RUNNER\n@@ -302,7 +354,17 @@\n options_string, package, runner)\n logging.info('AndroidDevice|%s: Executing adb shell %s', self.serial,\n instrumentation_command)\n- return self.shell(instrumentation_command)\n+ if handler is None:\n+ # Flow kept for backwards-compatibility reasons\n+ self._exec_adb_cmd(\n+ 'shell',\n+ instrumentation_command,\n+ shell=False,\n+ timeout=None,\n+ stderr=None)\n+ else:\n+ return self._execute_adb_and_process_stdout(\n+ 'shell', instrumentation_command, shell=False, handler=handler)\n \n def __getattr__(self, name):\n def adb_call(args=None, shell=False, timeout=None, stderr=None):\n", "issue": "Add timing information for instrumentation tests\n\n", "before_files": [{"content": "# Copyright 2016 
Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom builtins import str\nfrom past.builtins import basestring\n\nimport logging\nimport pipes\nimport psutil\nimport subprocess\nimport threading\n\n# Command to use for running ADB commands.\nADB = 'adb'\n\n# adb gets confused if we try to manage bound ports in parallel, so anything to\n# do with port forwarding must happen under this lock.\nADB_PORT_LOCK = threading.Lock()\n\n# Qualified class name of the default instrumentation test runner.\nDEFAULT_INSTRUMENTATION_RUNNER = 'com.android.common.support.test.runner.AndroidJUnitRunner'\n\n\nclass Error(Exception):\n \"\"\"Base error type for adb proxy module.\"\"\"\n\n\nclass AdbError(Error):\n \"\"\"Raised when an adb command encounters an error.\n\n Args:\n cmd: list of strings, the adb command executed.\n stdout: byte string, the raw stdout of the command.\n stderr: byte string, the raw stderr of the command.\n ret_code: int, the return code of the command.\n \"\"\"\n\n def __init__(self, cmd, stdout, stderr, ret_code):\n self.cmd = cmd\n self.stdout = stdout\n self.stderr = stderr\n self.ret_code = ret_code\n\n def __str__(self):\n return ('Error executing adb cmd \"%s\". ret: %d, stdout: %s, stderr: %s'\n ) % (cli_cmd_to_string(self.cmd), self.ret_code, self.stdout,\n self.stderr)\n\n\nclass AdbTimeoutError(Error):\n \"\"\"Raised when an command did not complete within expected time.\n\n Args:\n cmd: list of strings, the adb command that timed out\n timeout: float, the number of seconds passed before timing out.\n \"\"\"\n\n def __init__(self, cmd, timeout):\n self.cmd = cmd\n self.timeout = timeout\n\n def __str__(self):\n return 'Timed out executing command \"%s\" after %ss.' % (\n cli_cmd_to_string(self.cmd), self.timeout)\n\n\ndef list_occupied_adb_ports():\n \"\"\"Lists all the host ports occupied by adb forward.\n\n This is useful because adb will silently override the binding if an attempt\n to bind to a port already used by adb was made, instead of throwing binding\n error. So one should always check what ports adb is using before trying to\n bind to a port with adb.\n\n Returns:\n A list of integers representing occupied host ports.\n \"\"\"\n out = AdbProxy().forward('--list')\n clean_lines = str(out, 'utf-8').strip().split('\\n')\n used_ports = []\n for line in clean_lines:\n tokens = line.split(' tcp:')\n if len(tokens) != 3:\n continue\n used_ports.append(int(tokens[1]))\n return used_ports\n\n\ndef cli_cmd_to_string(args):\n \"\"\"Converts a cmd arg list to string.\n\n Args:\n args: list of strings, the arguments of a command.\n\n Returns:\n String representation of the command.\n \"\"\"\n if isinstance(args, basestring):\n # Return directly if it's already a string.\n return args\n return ' '.join([pipes.quote(arg) for arg in args])\n\n\nclass AdbProxy(object):\n \"\"\"Proxy class for ADB.\n\n For syntactic reasons, the '-' in adb commands need to be replaced with\n '_'. 
Can directly execute adb commands on an object:\n >> adb = AdbProxy(<serial>)\n >> adb.start_server()\n >> adb.devices() # will return the console output of \"adb devices\".\n\n By default, command args are expected to be an iterable which is passed\n directly to subprocess.Popen():\n >> adb.shell(['echo', 'a', 'b'])\n\n This way of launching commands is recommended by the subprocess\n documentation to avoid shell injection vulnerabilities and avoid having to\n deal with multiple layers of shell quoting and different shell environments\n between different OSes.\n\n If you really want to run the command through the system shell, this is\n possible by supplying shell=True, but try to avoid this if possible:\n >> adb.shell('cat /foo > /tmp/file', shell=True)\n \"\"\"\n\n def __init__(self, serial=''):\n self.serial = serial\n\n def _exec_cmd(self, args, shell, timeout, stderr):\n \"\"\"Executes adb commands.\n\n Args:\n args: string or list of strings, program arguments.\n See subprocess.Popen() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. See subprocess.Popen() docs.\n timeout: float, the number of seconds to wait before timing out.\n If not specified, no timeout takes effect.\n stderr: a Byte stream, like io.BytesIO, stderr of the command will\n be written to this object if provided.\n\n Returns:\n The output of the adb command run if exit code is 0.\n\n Raises:\n AdbError: The adb command exit code is not 0.\n AdbTimeoutError: The adb command timed out.\n \"\"\"\n proc = subprocess.Popen(\n args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)\n process = psutil.Process(proc.pid)\n if timeout and timeout <= 0:\n raise Error('Timeout is not a positive value: %s' % timeout)\n if timeout and timeout > 0:\n try:\n process.wait(timeout=timeout)\n except psutil.TimeoutExpired:\n process.terminate()\n raise AdbTimeoutError(cmd=args, timeout=timeout)\n\n (out, err) = proc.communicate()\n if stderr:\n stderr.write(err)\n ret = proc.returncode\n logging.debug('cmd: %s, stdout: %s, stderr: %s, ret: %s',\n cli_cmd_to_string(args), out, err, ret)\n if ret == 0:\n return out\n else:\n raise AdbError(cmd=args, stdout=out, stderr=err, ret_code=ret)\n\n def _construct_adb_cmd(self, raw_name, args, shell):\n \"\"\"Constructs an adb command with arguments for a subprocess call.\n\n Args:\n raw_name: string, the raw unsanitized name of the adb command to\n format.\n args: string or list of strings, arguments to the adb command.\n See subprocess.Proc() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. See subprocess.Proc() docs.\n\n Returns:\n The adb command in a format appropriate for subprocess. If shell is\n True, then this is a string; otherwise, this is a list of\n strings.\n \"\"\"\n args = args or ''\n name = raw_name.replace('_', '-')\n if shell:\n args = cli_cmd_to_string(args)\n # Add quotes around \"adb\" in case the ADB path contains spaces. This\n # is pretty common on Windows (e.g. 
Program Files).\n if self.serial:\n adb_cmd = '\"%s\" -s \"%s\" %s %s' % (ADB, self.serial, name, args)\n else:\n adb_cmd = '\"%s\" %s %s' % (ADB, name, args)\n else:\n adb_cmd = [ADB]\n if self.serial:\n adb_cmd.extend(['-s', self.serial])\n adb_cmd.append(name)\n if args:\n if isinstance(args, basestring):\n adb_cmd.append(args)\n else:\n adb_cmd.extend(args)\n return adb_cmd\n\n def _exec_adb_cmd(self, name, args, shell, timeout, stderr):\n adb_cmd = self._construct_adb_cmd(name, args, shell=shell)\n out = self._exec_cmd(\n adb_cmd, shell=shell, timeout=timeout, stderr=stderr)\n return out\n\n def getprop(self, prop_name):\n \"\"\"Get a property of the device.\n\n This is a convenience wrapper for \"adb shell getprop xxx\".\n\n Args:\n prop_name: A string that is the name of the property to get.\n\n Returns:\n A string that is the value of the property, or None if the property\n doesn't exist.\n \"\"\"\n return self.shell('getprop %s' % prop_name).decode('utf-8').strip()\n\n def has_shell_command(self, command):\n \"\"\"Checks to see if a given check command exists on the device.\n\n Args:\n command: A string that is the name of the command to check.\n\n Returns:\n A boolean that is True if the command exists and False otherwise.\n \"\"\"\n try:\n output = self.shell(['command', '-v',\n command]).decode('utf-8').strip()\n return command in output\n except AdbError:\n # If the command doesn't exist, then 'command -v' can return\n # an exit code > 1.\n return False\n\n def forward(self, args=None, shell=False):\n with ADB_PORT_LOCK:\n return self._exec_adb_cmd(\n 'forward', args, shell, timeout=None, stderr=None)\n\n def instrument(self, package, options=None, runner=None):\n \"\"\"Runs an instrumentation command on the device.\n\n This is a convenience wrapper to avoid parameter formatting.\n\n Example:\n\n .. code-block:: python\n\n device.instrument(\n 'com.my.package.test',\n options = {\n 'class': 'com.my.package.test.TestSuite',\n },\n )\n\n Args:\n package: string, the package of the instrumentation tests.\n options: dict, the instrumentation options including the test\n class.\n runner: string, the test runner name, which defaults to\n DEFAULT_INSTRUMENTATION_RUNNER.\n\n Returns:\n The output of instrumentation command.\n \"\"\"\n if runner is None:\n runner = DEFAULT_INSTRUMENTATION_RUNNER\n if options is None:\n options = {}\n\n options_list = []\n for option_key, option_value in options.items():\n options_list.append('-e %s %s' % (option_key, option_value))\n options_string = ' '.join(options_list)\n\n instrumentation_command = 'am instrument -r -w %s %s/%s' % (\n options_string, package, runner)\n logging.info('AndroidDevice|%s: Executing adb shell %s', self.serial,\n instrumentation_command)\n return self.shell(instrumentation_command)\n\n def __getattr__(self, name):\n def adb_call(args=None, shell=False, timeout=None, stderr=None):\n \"\"\"Wrapper for an ADB command.\n\n Args:\n args: string or list of strings, arguments to the adb command.\n See subprocess.Proc() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. 
See subprocess.Proc() docs.\n timeout: float, the number of seconds to wait before timing out.\n If not specified, no timeout takes effect.\n stderr: a Byte stream, like io.BytesIO, stderr of the command\n will be written to this object if provided.\n\n Returns:\n The output of the adb command run if exit code is 0.\n \"\"\"\n return self._exec_adb_cmd(\n name, args, shell=shell, timeout=timeout, stderr=stderr)\n\n return adb_call\n", "path": "mobly/controllers/android_device_lib/adb.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom builtins import str\nfrom past.builtins import basestring\n\nimport logging\nimport pipes\nimport psutil\nimport subprocess\nimport threading\n\n# Command to use for running ADB commands.\nADB = 'adb'\n\n# adb gets confused if we try to manage bound ports in parallel, so anything to\n# do with port forwarding must happen under this lock.\nADB_PORT_LOCK = threading.Lock()\n\n# Qualified class name of the default instrumentation test runner.\nDEFAULT_INSTRUMENTATION_RUNNER = 'com.android.common.support.test.runner.AndroidJUnitRunner'\n\n\nclass Error(Exception):\n \"\"\"Base error type for adb proxy module.\"\"\"\n\n\nclass AdbError(Error):\n \"\"\"Raised when an adb command encounters an error.\n\n Args:\n cmd: list of strings, the adb command executed.\n stdout: byte string, the raw stdout of the command.\n stderr: byte string, the raw stderr of the command.\n ret_code: int, the return code of the command.\n \"\"\"\n\n def __init__(self, cmd, stdout, stderr, ret_code):\n self.cmd = cmd\n self.stdout = stdout\n self.stderr = stderr\n self.ret_code = ret_code\n\n def __str__(self):\n return ('Error executing adb cmd \"%s\". ret: %d, stdout: %s, stderr: %s'\n ) % (cli_cmd_to_string(self.cmd), self.ret_code, self.stdout,\n self.stderr)\n\n\nclass AdbTimeoutError(Error):\n \"\"\"Raised when an command did not complete within expected time.\n\n Args:\n cmd: list of strings, the adb command that timed out\n timeout: float, the number of seconds passed before timing out.\n \"\"\"\n\n def __init__(self, cmd, timeout):\n self.cmd = cmd\n self.timeout = timeout\n\n def __str__(self):\n return 'Timed out executing command \"%s\" after %ss.' % (\n cli_cmd_to_string(self.cmd), self.timeout)\n\n\ndef list_occupied_adb_ports():\n \"\"\"Lists all the host ports occupied by adb forward.\n\n This is useful because adb will silently override the binding if an attempt\n to bind to a port already used by adb was made, instead of throwing binding\n error. 
So one should always check what ports adb is using before trying to\n bind to a port with adb.\n\n Returns:\n A list of integers representing occupied host ports.\n \"\"\"\n out = AdbProxy().forward('--list')\n clean_lines = str(out, 'utf-8').strip().split('\\n')\n used_ports = []\n for line in clean_lines:\n tokens = line.split(' tcp:')\n if len(tokens) != 3:\n continue\n used_ports.append(int(tokens[1]))\n return used_ports\n\n\ndef cli_cmd_to_string(args):\n \"\"\"Converts a cmd arg list to string.\n\n Args:\n args: list of strings, the arguments of a command.\n\n Returns:\n String representation of the command.\n \"\"\"\n if isinstance(args, basestring):\n # Return directly if it's already a string.\n return args\n return ' '.join([pipes.quote(arg) for arg in args])\n\n\nclass AdbProxy(object):\n \"\"\"Proxy class for ADB.\n\n For syntactic reasons, the '-' in adb commands need to be replaced with\n '_'. Can directly execute adb commands on an object:\n >> adb = AdbProxy(<serial>)\n >> adb.start_server()\n >> adb.devices() # will return the console output of \"adb devices\".\n\n By default, command args are expected to be an iterable which is passed\n directly to subprocess.Popen():\n >> adb.shell(['echo', 'a', 'b'])\n\n This way of launching commands is recommended by the subprocess\n documentation to avoid shell injection vulnerabilities and avoid having to\n deal with multiple layers of shell quoting and different shell environments\n between different OSes.\n\n If you really want to run the command through the system shell, this is\n possible by supplying shell=True, but try to avoid this if possible:\n >> adb.shell('cat /foo > /tmp/file', shell=True)\n \"\"\"\n\n def __init__(self, serial=''):\n self.serial = serial\n\n def _exec_cmd(self, args, shell, timeout, stderr):\n \"\"\"Executes adb commands.\n\n Args:\n args: string or list of strings, program arguments.\n See subprocess.Popen() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. See subprocess.Popen() docs.\n timeout: float, the number of seconds to wait before timing out.\n If not specified, no timeout takes effect.\n stderr: a Byte stream, like io.BytesIO, stderr of the command will\n be written to this object if provided.\n\n Returns:\n The output of the adb command run if exit code is 0.\n\n Raises:\n AdbError: The adb command exit code is not 0.\n AdbTimeoutError: The adb command timed out.\n \"\"\"\n proc = subprocess.Popen(\n args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)\n process = psutil.Process(proc.pid)\n if timeout and timeout <= 0:\n raise Error('Timeout is not a positive value: %s' % timeout)\n if timeout and timeout > 0:\n try:\n process.wait(timeout=timeout)\n except psutil.TimeoutExpired:\n process.terminate()\n raise AdbTimeoutError(cmd=args, timeout=timeout)\n\n (out, err) = proc.communicate()\n if stderr:\n stderr.write(err)\n ret = proc.returncode\n logging.debug('cmd: %s, stdout: %s, stderr: %s, ret: %s',\n cli_cmd_to_string(args), out, err, ret)\n if ret == 0:\n return out\n else:\n raise AdbError(cmd=args, stdout=out, stderr=err, ret_code=ret)\n\n def _execute_and_process_stdout(self, args, shell, handler):\n \"\"\"Executes adb commands and processes the stdout with a handler.\n\n Args:\n args: string or list of strings, program arguments.\n See subprocess.Popen() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. 
See subprocess.Popen() docs.\n handler: func, a function to handle adb stdout line by line.\n\n Returns:\n The stderr of the adb command run if exit code is 0.\n\n Raises:\n AdbError: The adb command exit code is not 0.\n \"\"\"\n proc = subprocess.Popen(\n args,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n shell=shell,\n bufsize=1)\n try:\n while proc.poll() is None:\n line = proc.stdout.readline()\n if line:\n handler(line)\n else:\n break\n finally:\n (_, err) = proc.communicate()\n ret = proc.returncode\n if ret == 0:\n return err\n else:\n raise AdbError(\n cmd=args,\n stdout='[elided, processed via handler]',\n stderr=err,\n ret_code=ret)\n\n def _construct_adb_cmd(self, raw_name, args, shell):\n \"\"\"Constructs an adb command with arguments for a subprocess call.\n\n Args:\n raw_name: string, the raw unsanitized name of the adb command to\n format.\n args: string or list of strings, arguments to the adb command.\n See subprocess.Proc() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. See subprocess.Proc() docs.\n\n Returns:\n The adb command in a format appropriate for subprocess. If shell is\n True, then this is a string; otherwise, this is a list of\n strings.\n \"\"\"\n args = args or ''\n name = raw_name.replace('_', '-')\n if shell:\n args = cli_cmd_to_string(args)\n # Add quotes around \"adb\" in case the ADB path contains spaces. This\n # is pretty common on Windows (e.g. Program Files).\n if self.serial:\n adb_cmd = '\"%s\" -s \"%s\" %s %s' % (ADB, self.serial, name, args)\n else:\n adb_cmd = '\"%s\" %s %s' % (ADB, name, args)\n else:\n adb_cmd = [ADB]\n if self.serial:\n adb_cmd.extend(['-s', self.serial])\n adb_cmd.append(name)\n if args:\n if isinstance(args, basestring):\n adb_cmd.append(args)\n else:\n adb_cmd.extend(args)\n return adb_cmd\n\n def _exec_adb_cmd(self, name, args, shell, timeout, stderr):\n adb_cmd = self._construct_adb_cmd(name, args, shell=shell)\n out = self._exec_cmd(\n adb_cmd, shell=shell, timeout=timeout, stderr=stderr)\n return out\n\n def _execute_adb_and_process_stdout(self, name, args, shell, handler):\n adb_cmd = self._construct_adb_cmd(name, args, shell=shell)\n out = self._execute_and_process_stdout(\n adb_cmd, shell=shell, handler=handler)\n return out\n\n def getprop(self, prop_name):\n \"\"\"Get a property of the device.\n\n This is a convenience wrapper for \"adb shell getprop xxx\".\n\n Args:\n prop_name: A string that is the name of the property to get.\n\n Returns:\n A string that is the value of the property, or None if the property\n doesn't exist.\n \"\"\"\n return self.shell('getprop %s' % prop_name).decode('utf-8').strip()\n\n def has_shell_command(self, command):\n \"\"\"Checks to see if a given check command exists on the device.\n\n Args:\n command: A string that is the name of the command to check.\n\n Returns:\n A boolean that is True if the command exists and False otherwise.\n \"\"\"\n try:\n output = self.shell(['command', '-v',\n command]).decode('utf-8').strip()\n return command in output\n except AdbError:\n # If the command doesn't exist, then 'command -v' can return\n # an exit code > 1.\n return False\n\n def forward(self, args=None, shell=False):\n with ADB_PORT_LOCK:\n return self._exec_adb_cmd(\n 'forward', args, shell, timeout=None, stderr=None)\n\n def instrument(self, package, options=None, runner=None, handler=None):\n \"\"\"Runs an instrumentation command on the device.\n\n This is a convenience wrapper to avoid parameter 
formatting.\n\n Example:\n\n .. code-block:: python\n\n device.instrument(\n 'com.my.package.test',\n options = {\n 'class': 'com.my.package.test.TestSuite',\n },\n )\n\n Args:\n package: string, the package of the instrumentation tests.\n options: dict, the instrumentation options including the test\n class.\n runner: string, the test runner name, which defaults to\n DEFAULT_INSTRUMENTATION_RUNNER.\n handler: optional func, when specified the function is used to parse\n the instrumentation stdout line by line as the output is\n generated; otherwise, the stdout is simply returned once the\n instrumentation is finished.\n\n Returns:\n The stdout of instrumentation command or the stderr if the handler\n is set.\n \"\"\"\n if runner is None:\n runner = DEFAULT_INSTRUMENTATION_RUNNER\n if options is None:\n options = {}\n\n options_list = []\n for option_key, option_value in options.items():\n options_list.append('-e %s %s' % (option_key, option_value))\n options_string = ' '.join(options_list)\n\n instrumentation_command = 'am instrument -r -w %s %s/%s' % (\n options_string, package, runner)\n logging.info('AndroidDevice|%s: Executing adb shell %s', self.serial,\n instrumentation_command)\n if handler is None:\n # Flow kept for backwards-compatibility reasons\n self._exec_adb_cmd(\n 'shell',\n instrumentation_command,\n shell=False,\n timeout=None,\n stderr=None)\n else:\n return self._execute_adb_and_process_stdout(\n 'shell', instrumentation_command, shell=False, handler=handler)\n\n def __getattr__(self, name):\n def adb_call(args=None, shell=False, timeout=None, stderr=None):\n \"\"\"Wrapper for an ADB command.\n\n Args:\n args: string or list of strings, arguments to the adb command.\n See subprocess.Proc() documentation.\n shell: bool, True to run this command through the system shell,\n False to invoke it directly. See subprocess.Proc() docs.\n timeout: float, the number of seconds to wait before timing out.\n If not specified, no timeout takes effect.\n stderr: a Byte stream, like io.BytesIO, stderr of the command\n will be written to this object if provided.\n\n Returns:\n The output of the adb command run if exit code is 0.\n \"\"\"\n return self._exec_adb_cmd(\n name, args, shell=shell, timeout=timeout, stderr=stderr)\n\n return adb_call\n", "path": "mobly/controllers/android_device_lib/adb.py"}]} | 3,714 | 940 |
gh_patches_debug_25148 | rasdani/github-patches | git_diff | GPflow__GPflow-1536 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check deps on CI
`pip install gpflow` currently installs dependencies (setuptools, scipy) with versions that are incompatible with the tensorflow version installed.
This ticket isn't to fix the dependencies, per se, but suggests adding a `pip check -vvv` stage to CI, so that such problems are caught at PR stage.
--- END ISSUE ---
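
As a hedged illustration (a sketch, not GPflow's actual CI configuration): the check the issue asks for is normally a single `pip check -vvv` step in the CI pipeline; an equivalent helper written in Python could look roughly like this, with every name below being illustrative:

```python
# Minimal sketch of a CI helper that fails the build when the installed
# packages declare conflicting requirements (equivalent to running
# `pip check -vvv` as a separate CI step).
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pip", "check", "-vvv"])
if result.returncode != 0:
    sys.exit("pip check found broken requirements; see output above.")
```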
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # pylint: skip-file
5
6 import os
7 import sys
8
9 from setuptools import find_packages, setup
10
11
12 # Dependencies of GPflow
13 requirements = [
14 "numpy>=1.10.0",
15 "scipy>=0.18.0",
16 "multipledispatch>=0.6",
17 "tabulate",
18 "typing_extensions",
19 "cloudpickle==1.3.0", # temporary workaround for tensorflow/probability#991
20 ]
21
22 if sys.version_info < (3, 7):
23 # became part of stdlib in python 3.7
24 requirements.append("dataclasses")
25
26 # We do not want to install tensorflow in the readthedocs environment, where we
27 # use autodoc_mock_imports instead. Hence we use this flag to decide whether or
28 # not to append tensorflow and tensorflow_probability to the requirements:
29 if os.environ.get("READTHEDOCS") != "True":
30 requirements.extend(["tensorflow>=2.1.0,<2.3", "tensorflow-probability>=0.9,<0.11"])
31
32
33 def read_file(filename):
34 with open(filename, encoding="utf-8") as f:
35 return f.read().strip()
36
37
38 version = read_file("VERSION")
39 readme_text = read_file("README.md")
40
41 packages = find_packages(".", exclude=["tests"])
42
43 setup(
44 name="gpflow",
45 version=version,
46 author="James Hensman, Alex Matthews",
47 author_email="[email protected]",
48 description="Gaussian process methods in TensorFlow",
49 long_description=readme_text,
50 long_description_content_type="text/markdown",
51 license="Apache License 2.0",
52 keywords="machine-learning gaussian-processes kernels tensorflow",
53 url="https://www.gpflow.org",
54 project_urls={
55 "Source on GitHub": "https://github.com/GPflow/GPflow",
56 "Documentation": "https://gpflow.readthedocs.io",
57 },
58 packages=packages,
59 include_package_data=True,
60 install_requires=requirements,
61 extras_require={"ImageToTensorBoard": ["matplotlib"]},
62 python_requires=">=3.6",
63 classifiers=[
64 "License :: OSI Approved :: Apache Software License",
65 "Natural Language :: English",
66 "Operating System :: MacOS :: MacOS X",
67 "Operating System :: Microsoft :: Windows",
68 "Operating System :: POSIX :: Linux",
69 "Programming Language :: Python :: 3.6",
70 "Topic :: Scientific/Engineering :: Artificial Intelligence",
71 ],
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,11 +12,10 @@
# Dependencies of GPflow
requirements = [
"numpy>=1.10.0",
- "scipy>=0.18.0",
+ "scipy>=0.18.0,==1.4.1", # pinned to ==1.4.1 to satisfy tensorflow requirements
"multipledispatch>=0.6",
"tabulate",
"typing_extensions",
- "cloudpickle==1.3.0", # temporary workaround for tensorflow/probability#991
]
if sys.version_info < (3, 7):
@@ -27,7 +26,18 @@
# use autodoc_mock_imports instead. Hence we use this flag to decide whether or
# not to append tensorflow and tensorflow_probability to the requirements:
if os.environ.get("READTHEDOCS") != "True":
- requirements.extend(["tensorflow>=2.1.0,<2.3", "tensorflow-probability>=0.9,<0.11"])
+ requirements.extend(
+ [
+ # tensorflow>=2.3 not compatible with tensorflow-probability<0.11
+ "tensorflow>=2.1.0,<2.3",
+ # tensorflow-probability==0.10.0 doesn't install correctly
+ # https://github.com/tensorflow/probability/issues/991
+ #
+ # gpflow uses private functionality not present in tensorflow-probability~=0.11
+ "tensorflow-probability>=0.9,<0.11,!=0.10.0",
+ "setuptools>=41.0.0", # to satisfy dependency constraints
+ ]
+ )
def read_file(filename):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,11 +12,10 @@\n # Dependencies of GPflow\n requirements = [\n \"numpy>=1.10.0\",\n- \"scipy>=0.18.0\",\n+ \"scipy>=0.18.0,==1.4.1\", # pinned to ==1.4.1 to satisfy tensorflow requirements\n \"multipledispatch>=0.6\",\n \"tabulate\",\n \"typing_extensions\",\n- \"cloudpickle==1.3.0\", # temporary workaround for tensorflow/probability#991\n ]\n \n if sys.version_info < (3, 7):\n@@ -27,7 +26,18 @@\n # use autodoc_mock_imports instead. Hence we use this flag to decide whether or\n # not to append tensorflow and tensorflow_probability to the requirements:\n if os.environ.get(\"READTHEDOCS\") != \"True\":\n- requirements.extend([\"tensorflow>=2.1.0,<2.3\", \"tensorflow-probability>=0.9,<0.11\"])\n+ requirements.extend(\n+ [\n+ # tensorflow>=2.3 not compatible with tensorflow-probability<0.11\n+ \"tensorflow>=2.1.0,<2.3\",\n+ # tensorflow-probability==0.10.0 doesn't install correctly\n+ # https://github.com/tensorflow/probability/issues/991\n+ #\n+ # gpflow uses private functionality not present in tensorflow-probability~=0.11\n+ \"tensorflow-probability>=0.9,<0.11,!=0.10.0\",\n+ \"setuptools>=41.0.0\", # to satisfy dependency constraints\n+ ]\n+ )\n \n \n def read_file(filename):\n", "issue": "Check deps on CI\n`pip install gpflow` currently installs dependencies (setuptools, scipy) with versions that are incompatible with the tensorflow version installed.\r\n\r\nThis ticket isn't to fix the dependencies, per se, but suggests adding a `pip check -vvv` stage to CI, so that such problems are caught at PR stage.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\n\n# Dependencies of GPflow\nrequirements = [\n \"numpy>=1.10.0\",\n \"scipy>=0.18.0\",\n \"multipledispatch>=0.6\",\n \"tabulate\",\n \"typing_extensions\",\n \"cloudpickle==1.3.0\", # temporary workaround for tensorflow/probability#991\n]\n\nif sys.version_info < (3, 7):\n # became part of stdlib in python 3.7\n requirements.append(\"dataclasses\")\n\n# We do not want to install tensorflow in the readthedocs environment, where we\n# use autodoc_mock_imports instead. 
Hence we use this flag to decide whether or\n# not to append tensorflow and tensorflow_probability to the requirements:\nif os.environ.get(\"READTHEDOCS\") != \"True\":\n requirements.extend([\"tensorflow>=2.1.0,<2.3\", \"tensorflow-probability>=0.9,<0.11\"])\n\n\ndef read_file(filename):\n with open(filename, encoding=\"utf-8\") as f:\n return f.read().strip()\n\n\nversion = read_file(\"VERSION\")\nreadme_text = read_file(\"README.md\")\n\npackages = find_packages(\".\", exclude=[\"tests\"])\n\nsetup(\n name=\"gpflow\",\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n long_description=readme_text,\n long_description_content_type=\"text/markdown\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"https://www.gpflow.org\",\n project_urls={\n \"Source on GitHub\": \"https://github.com/GPflow/GPflow\",\n \"Documentation\": \"https://gpflow.readthedocs.io\",\n },\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={\"ImageToTensorBoard\": [\"matplotlib\"]},\n python_requires=\">=3.6\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# pylint: skip-file\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\n\n# Dependencies of GPflow\nrequirements = [\n \"numpy>=1.10.0\",\n \"scipy>=0.18.0,==1.4.1\", # pinned to ==1.4.1 to satisfy tensorflow requirements\n \"multipledispatch>=0.6\",\n \"tabulate\",\n \"typing_extensions\",\n]\n\nif sys.version_info < (3, 7):\n # became part of stdlib in python 3.7\n requirements.append(\"dataclasses\")\n\n# We do not want to install tensorflow in the readthedocs environment, where we\n# use autodoc_mock_imports instead. 
Hence we use this flag to decide whether or\n# not to append tensorflow and tensorflow_probability to the requirements:\nif os.environ.get(\"READTHEDOCS\") != \"True\":\n requirements.extend(\n [\n # tensorflow>=2.3 not compatible with tensorflow-probability<0.11\n \"tensorflow>=2.1.0,<2.3\",\n # tensorflow-probability==0.10.0 doesn't install correctly\n # https://github.com/tensorflow/probability/issues/991\n #\n # gpflow uses private functionality not present in tensorflow-probability~=0.11\n \"tensorflow-probability>=0.9,<0.11,!=0.10.0\",\n \"setuptools>=41.0.0\", # to satisfy dependency constraints\n ]\n )\n\n\ndef read_file(filename):\n with open(filename, encoding=\"utf-8\") as f:\n return f.read().strip()\n\n\nversion = read_file(\"VERSION\")\nreadme_text = read_file(\"README.md\")\n\npackages = find_packages(\".\", exclude=[\"tests\"])\n\nsetup(\n name=\"gpflow\",\n version=version,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=\"Gaussian process methods in TensorFlow\",\n long_description=readme_text,\n long_description_content_type=\"text/markdown\",\n license=\"Apache License 2.0\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"https://www.gpflow.org\",\n project_urls={\n \"Source on GitHub\": \"https://github.com/GPflow/GPflow\",\n \"Documentation\": \"https://gpflow.readthedocs.io\",\n },\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n extras_require={\"ImageToTensorBoard\": [\"matplotlib\"]},\n python_requires=\">=3.6\",\n classifiers=[\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}]} | 1,014 | 417 |